1. 02 Jun, 2016 1 commit
  2. 09 May, 2016 1 commit
  3. 23 Sep, 2015 1 commit
    • atomic, arch: Audit atomic_{read,set}() · 62e8a325
      Authored by Peter Zijlstra
      This patch makes sure that atomic_{read,set}() are at least
      {READ,WRITE}_ONCE().
      
      We already had the 'requirement' that atomic_read() should use
      ACCESS_ONCE(), and most archs had this, but a few were lacking.
      All are now converted to use READ_ONCE().
      
      And, by a symmetry and general paranoia argument, upgrade atomic_set()
      to use WRITE_ONCE() (see the sketch after this entry).
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: james.hogan@imgtec.com
      Cc: linux-kernel@vger.kernel.org
      Cc: oleg@redhat.com
      Cc: will.deacon@arm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      62e8a325
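      A minimal sketch of what the audited accessors end up looking like,
      assuming the generic atomic_t layout (a single int counter); a
      simplification of the resulting code shape, not the literal patch:

          #include <linux/compiler.h>     /* READ_ONCE / WRITE_ONCE */

          static inline int atomic_read(const atomic_t *v)
          {
                  /* at least READ_ONCE(): the compiler may not tear,
                   * fuse, or invent this load */
                  return READ_ONCE(v->counter);
          }

          static inline void atomic_set(atomic_t *v, int i)
          {
                  /* upgraded by symmetry: at least WRITE_ONCE() */
                  WRITE_ONCE(v->counter, i);
          }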
  4. 07 Aug, 2015 1 commit
  5. 04 Aug, 2015 3 commits
  6. 27 Jul, 2015 3 commits
  7. 25 Jun, 2015 2 commits
    • ARCv2: STAR 9000837815 workaround hardware exclusive transactions livelock · a5c8b52a
      Authored by Vineet Gupta
      A quad-core SMP build could get into a hardware livelock with concurrent
      LLOCK/SCOND. Work around that by adding a PREFETCHW, which is serialized
      by the SCU (System Coherency Unit). It brings the cache line into the
      Exclusive state and makes the other cores invalidate their copies. This
      gives the winner enough time to complete the LLOCK/SCOND before the
      others can grab the line back (see the sketch after this entry).

      The prefetchw in the LL/SC loop is not nice, but it is the only software
      workaround for the current version of the RTL.
      
      Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      a5c8b52a
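      A hedged sketch of the loop shape this describes, with the SCU-serialized
      prefetchw pulling the line into the Exclusive state ahead of the
      LLOCK/SCOND pair (mnemonics per the ARCv2 ISA; simplified, not the
      literal patch):

          static inline void atomic_add(int i, atomic_t *v)
          {
                  unsigned int val;

                  __asm__ __volatile__(
                  "1:     prefetchw [%[ctr]]              \n" /* line -> Exclusive */
                  "       llock   %[val], [%[ctr]]        \n"
                  "       add     %[val], %[val], %[i]    \n"
                  "       scond   %[val], [%[ctr]]        \n"
                  "       bnz     1b                      \n" /* retry if SCOND failed */
                  : [val] "=&r" (val)
                  : [ctr] "r" (&v->counter), [i] "ir" (i)
                  : "cc");
          }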
    • ARC: add smp barriers around atomics per Documentation/atomic_ops.txt · 2576c28e
      Authored by Vineet Gupta
       - arch_spin_lock/unlock were lacking the ACQUIRE/RELEASE barriers.
         Since ARCv2 only provides load/load, store/store and all/all barriers,
         we need the full barrier.

       - LLOCK/SCOND based atomics, bitops and cmpxchg, which return modified
         values, were lacking the explicit smp barriers (see the sketch after
         this entry).

       - Non-LLOCK/SCOND variants don't need the explicit barriers, since they
         are implicitly provided by the spin locks used to implement the
         critical section (the spin lock barriers are in turn also fixed in
         this commit, as explained above).
      
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: stable@vger.kernel.org
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      2576c28e
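      A sketch of the barrier placement for a value-returning LLOCK/SCOND
      atomic, which per Documentation/atomic_ops.txt must act as a full
      barrier; simplified from the shape described above, not the literal
      patch:

          static inline int atomic_add_return(int i, atomic_t *v)
          {
                  unsigned int val;

                  /* explicit full barrier before: LLOCK/SCOND themselves
                   * provide no ordering semantics */
                  smp_mb();

                  __asm__ __volatile__(
                  "1:     llock   %[val], [%[ctr]]        \n"
                  "       add     %[val], %[val], %[i]    \n"
                  "       scond   %[val], [%[ctr]]        \n"
                  "       bnz     1b                      \n"
                  : [val] "=&r" (val)
                  : [ctr] "r" (&v->counter), [i] "ir" (i)
                  : "cc");

                  /* explicit full barrier after: full ordering on return */
                  smp_mb();

                  return val;
          }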
  8. 10 May, 2015 1 commit
  9. 13 Oct, 2014 1 commit
  10. 14 Aug, 2014 1 commit
  11. 18 Apr, 2014 1 commit
  12. 12 Jan, 2014 1 commit
  13. 11 Feb, 2013 1 commit