1. 21 June 2018, 4 commits
  2. 10 August 2017, 1 commit
    • locking/atomic: Fix atomic_set_release() for 'funny' architectures · 9d664c0a
      Committed by Peter Zijlstra
      Architectures that have a special atomic_set() implementation also
      need a special atomic_set_release(): for the very same reason that
      WRITE_ONCE() is broken for them, smp_store_release() is too.
      
      The vast majority are architectures with a spinlock-hash based atomic
      implementation; the one exception is hexagon, which seems to have a
      hardware 'feature'.
      
      The spinlock-based atomics should be SC (sequentially consistent):
      none of them appear to place extra barriers in atomic_cmpxchg() or
      any of the other SC atomic primitives, and they therefore seem to
      rely on their spinlock implementation being SC (I did not fully
      validate all of that).
      
      Therefore, the normal atomic_set() is SC and can be used as
      atomic_set_release() (a minimal sketch follows this entry).
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Chris Metcalf <cmetcalf@mellanox.com> [for tile]
      Cc: Boqun Feng <boqun.feng@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: davem@davemloft.net
      Cc: james.hogan@imgtec.com
      Cc: jejb@parisc-linux.org
      Cc: rkuo@codeaurora.org
      Cc: vgupta@synopsys.com
      Link: http://lkml.kernel.org/r/20170609110506.yod47flaav3wgoj5@hirez.programming.kicks-ass.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      9d664c0a
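      A minimal sketch of the resulting split, assuming a hypothetical
      ARCH_HAS_SPINLOCK_HASH_ATOMICS guard (upstream spells this out per
      architecture rather than with a single config symbol):

          /* Sketch only: ARCH_HAS_SPINLOCK_HASH_ATOMICS is a hypothetical
           * stand-in for the per-architecture overrides. */
          #ifdef ARCH_HAS_SPINLOCK_HASH_ATOMICS
          /* atomic_set() takes the hash spinlock, so it is already SC and
           * is strong enough to double as the release variant. */
          #define atomic_set_release(v, i)	atomic_set((v), (i))
          #else
          /* Ordinary case: a release store to the counter word suffices. */
          #define atomic_set_release(v, i)	smp_store_release(&(v)->counter, (i))
          #endif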
  3. 15 April 2017, 1 commit
  4. 01 October 2016, 2 commits
  5. 20 June 2016, 1 commit
  6. 16 June 2016, 2 commits
  7. 02 June 2016, 2 commits
  8. 09 May 2016, 1 commit
  9. 23 September 2015, 1 commit
    • atomic, arch: Audit atomic_{read,set}() · 62e8a325
      Committed by Peter Zijlstra
      This patch makes sure that atomic_{read,set}() are at least
      {READ,WRITE}_ONCE().
      
      We already had the 'requirement' that atomic_read() should use
      ACCESS_ONCE(), and most archs had this, but a few were lacking.
      All are now converted to use READ_ONCE().
      
      And, by a symmetry and general paranoia argument, upgrade atomic_set()
      to use WRITE_ONCE() as well (a representative sketch follows this
      entry).
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: james.hogan@imgtec.com
      Cc: linux-kernel@vger.kernel.org
      Cc: oleg@redhat.com
      Cc: will.deacon@arm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      62e8a325
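      A representative sketch of the audited pattern; this mirrors the
      common generic implementation (READ_ONCE()/WRITE_ONCE() come from
      <linux/compiler.h>), though individual architectures differ in
      detail:

          typedef struct { int counter; } atomic_t;

          static inline int atomic_read(const atomic_t *v)
          {
          	return READ_ONCE(v->counter);	/* compiler may not tear or refetch */
          }

          static inline void atomic_set(atomic_t *v, int i)
          {
          	WRITE_ONCE(v->counter, i);	/* compiler may not tear or fuse */
          }

      Note that READ_ONCE()/WRITE_ONCE() only constrain the compiler; on
      these architectures an aligned word access is already atomic in
      hardware.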
  10. 07 August 2015, 1 commit
  11. 04 August 2015, 3 commits
  12. 27 July 2015, 3 commits
  13. 25 June 2015, 2 commits
    • ARCv2: STAR 9000837815 workaround hardware exclusive transactions livelock · a5c8b52a
      Committed by Vineet Gupta
      A quad-core SMP build could get into a hardware livelock with
      concurrent LLOCK/SCOND. Work around that by adding a PREFETCHW, which
      is serialized by the SCU (System Coherency Unit). It brings the cache
      line into the Exclusive state and makes the other cores invalidate
      their copies. This gives the winner enough time to complete its
      LLOCK/SCOND before the others can grab the line back.
      
      The prefetchw in the ll/sc loop is not nice, but this is the only
      software workaround for the current version of the RTL (an
      illustrative sketch follows this entry).
      
      Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      a5c8b52a
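      An illustrative sketch of the shape of the fix, with GCC builtins
      standing in for the actual ARC inline assembly (atomic_add_sketch and
      the compare-exchange in place of a real LLOCK/SCOND loop are
      illustrative only):

          typedef struct { int counter; } atomic_t;

          static inline void atomic_add_sketch(int i, atomic_t *v)
          {
          	int old;

          	do {
          		/* Prefetch-for-write pulls the line in Exclusive
          		 * state; serialized by the SCU, this buys the winner
          		 * time to finish its LLOCK/SCOND before the line
          		 * migrates away again. */
          		__builtin_prefetch(&v->counter, 1 /* prepare for write */);
          		old = __atomic_load_n(&v->counter, __ATOMIC_RELAXED);
          	} while (!__atomic_compare_exchange_n(&v->counter, &old, old + i,
          					      1 /* weak */, __ATOMIC_RELAXED,
          					      __ATOMIC_RELAXED));
          }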
    • ARC: add smp barriers around atomics per Documentation/atomic_ops.txt · 2576c28e
      Committed by Vineet Gupta
       - arch_spin_lock/unlock were lacking the ACQUIRE/RELEASE barriers.
         Since ARCv2 only provides load/load, store/store and all/all
         barriers, we need the full barrier.

       - LLOCK/SCOND based atomics, bitops and cmpxchg, which return
         modified values, were lacking the explicit smp barriers.

       - Non-LLOCK/SCOND variants don't need the explicit barriers, since
         those are implicitly provided by the spinlocks used to implement
         the critical section (the spinlock barriers are in turn also fixed
         in this commit, as explained above). A sketch of the barrier
         placement follows this entry.
      
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: stable@vger.kernel.org
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      2576c28e
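      A sketch of the barrier placement for a value-returning atomic, again
      with a relaxed compare-exchange standing in for the LLOCK/SCOND loop
      (smp_mb() is modeled as a full fence; all names are illustrative):

          #define smp_mb()	__atomic_thread_fence(__ATOMIC_SEQ_CST)

          typedef struct { int counter; } atomic_t;

          static inline int atomic_add_return_sketch(int i, atomic_t *v)
          {
          	int old;

          	smp_mb();	/* order the RMW after all earlier accesses */
          	do {
          		old = __atomic_load_n(&v->counter, __ATOMIC_RELAXED);
          	} while (!__atomic_compare_exchange_n(&v->counter, &old, old + i,
          					      1, __ATOMIC_RELAXED,
          					      __ATOMIC_RELAXED));
          	smp_mb();	/* order the RMW before all later accesses */

          	return old + i;
          }

      Per Documentation/atomic_ops.txt, only the value-returning operations
      need this full ordering; void operations such as atomic_add() may
      remain relaxed.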
  14. 10 May 2015, 1 commit
  15. 13 October 2014, 1 commit
  16. 14 August 2014, 1 commit
  17. 18 April 2014, 1 commit
  18. 12 January 2014, 1 commit
  19. 11 February 2013, 1 commit