1. 04 Aug 2015, 1 commit
  2. 25 Jun 2015, 2 commits
    • ARCv2: STAR 9000837815 workaround hardware exclusive transactions livelock · a5c8b52a
      Authored by Vineet Gupta
      A quad core SMP build could get into a hardware livelock with concurrent
      LLOCK/SCOND. Work around that by adding a PREFETCHW, which is serialized by
      the SCU (System Coherency Unit). It brings the cache line into the Exclusive
      state and makes the other cores invalidate their copies. This gives the
      winner enough time to complete the LLOCK/SCOND before the others can get
      the line back.
      
      The prefetchw in the LL/SC loop is not pretty, but it is the only software
      workaround for the current version of the RTL.
      
      Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      a5c8b52a
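      A minimal sketch of the idea in this commit, assuming a plain LL/SC add
      loop: the function name, operand constraints and the exact placement of
      the PREFETCHW are illustrative only, not the upstream diff.

          static inline void atomic_add_sketch(int i, volatile int *counter)
          {
                  unsigned int tmp;

                  __asm__ __volatile__(
                  "1:     prefetchw [%[ctr]]            \n" /* serialized by the SCU; pulls the line in Exclusive state */
                  "       llock   %[tmp], [%[ctr]]      \n" /* load-locked */
                  "       add     %[tmp], %[tmp], %[i]  \n"
                  "       scond   %[tmp], [%[ctr]]      \n" /* store-conditional */
                  "       bnz     1b                    \n" /* reservation lost: retry */
                  : [tmp] "=&r" (tmp)
                  : [ctr] "r" (counter), [i] "ir" (i)
                  : "cc", "memory");
          }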
    • ARC: add smp barriers around atomics per Documentation/atomic_ops.txt · 2576c28e
      Authored by Vineet Gupta
       - arch_spin_lock/unlock were lacking the ACQUIRE/RELEASE barriers.
         Since ARCv2 only provides load/load, store/store and all/all barriers,
         we need the full barrier.
      
       - LLOCK/SCOND based atomics, bitops and cmpxchg, which return the
         modified values, were lacking the explicit smp barriers.
      
       - Non LLOCK/SCOND variants don't need the explicit barriers, since those
         are implicitly provided by the spin locks used to implement the
         critical section (the spin lock barriers are in turn also fixed in
         this commit, as explained above).
      
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: stable@vger.kernel.org
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      2576c28e
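      A minimal sketch of the barrier placement for a value-returning LL/SC
      atomic, assuming kernel context (smp_mb() from asm/barrier.h); the names
      and the loop body are illustrative, not the upstream diff.

          /*
           * Value-returning atomics need a full barrier on both sides, since
           * LLOCK/SCOND carry no ordering and ARCv2 has no acquire/release-only
           * barrier (only load/load, store/store and all/all).
           */
          static inline int atomic_add_return_sketch(int i, volatile int *counter)
          {
                  unsigned int tmp;

                  smp_mb();       /* order all prior accesses before the atomic op */

                  __asm__ __volatile__(
                  "1:     llock   %[tmp], [%[ctr]]      \n"
                  "       add     %[tmp], %[tmp], %[i]  \n"
                  "       scond   %[tmp], [%[ctr]]      \n"
                  "       bnz     1b                    \n"
                  : [tmp] "=&r" (tmp)
                  : [ctr] "r" (counter), [i] "ir" (i)
                  : "cc", "memory");

                  smp_mb();       /* order the atomic op before all later accesses */

                  return tmp;
          }

      Per the commit message, the non-LLOCK/SCOND variants keep relying on the
      barriers of the spin locks that implement their critical section.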
  3. 10 May 2015, 1 commit
  4. 13 Oct 2014, 1 commit
  5. 14 Aug 2014, 1 commit
  6. 18 Apr 2014, 1 commit
  7. 12 Jan 2014, 1 commit
  8. 11 Feb 2013, 1 commit