1. 17 Aug 2017, 1 commit
  2. 14 Jun 2016, 1 commit
    • locking/spinlock, arch: Update and fix spin_unlock_wait() implementations · 726328d9
      Committed by Peter Zijlstra
      This patch updates/fixes all spin_unlock_wait() implementations.
      
      The update is in semantics; where it previously was only a control
      dependency, we now upgrade to a full load-acquire to match the
      store-release from the spin_unlock() we waited on. This ensures that
      when spin_unlock_wait() returns, we're guaranteed to observe the full
      critical section we waited on.
      
      This fixes a number of spin_unlock_wait() users that (not
      unreasonably) rely on this.
      
      I also fixed a number of ticket lock versions to only wait on the
      current lock holder, instead of for a full unlock, as this is
      sufficient.
      
      Furthermore, again for ticket locks, I added an smp_rmb() between
      the initial ticket load and the spin loop testing the current value,
      because I could not convince myself the address dependency is
      sufficient, especially if the loads are of different sizes.
      
      I'm more than happy to remove this smp_rmb() again if people are
      certain the address dependency does indeed work as expected.
      
      Note: PPC32 will be fixed independently
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: chris@zankel.net
      Cc: cmetcalf@mellanox.com
      Cc: davem@davemloft.net
      Cc: dhowells@redhat.com
      Cc: james.hogan@imgtec.com
      Cc: jejb@parisc-linux.org
      Cc: linux@armlinux.org.uk
      Cc: mpe@ellerman.id.au
      Cc: ralf@linux-mips.org
      Cc: realmz6@gmail.com
      Cc: rkuo@codeaurora.org
      Cc: rth@twiddle.net
      Cc: schwidefsky@de.ibm.com
      Cc: tony.luck@intel.com
      Cc: vgupta@synopsys.com
      Cc: ysato@users.sourceforge.jp
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
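      A minimal userspace sketch (C11 atomics) of the ordering upgrade described above. The boolean lock word and the function names are illustrative assumptions, not the kernel's per-architecture code; the ticket-lock refinements the commit also mentions (waiting only on the current holder, the extra smp_rmb()) are left to the sketch under the next entry.

```c
#include <stdatomic.h>
#include <stdbool.h>

static _Atomic bool locked;

/* Before: the spin was only a control dependency on a relaxed load, so a
 * caller could return and still fail to observe stores made inside the
 * critical section it waited for. */
static void unlock_wait_before(void)
{
	while (atomic_load_explicit(&locked, memory_order_relaxed))
		;
}

/* After: the polling load is an acquire, pairing with the release store
 * in the unlock path, so the waited-on critical section is fully visible
 * to the caller once this returns. */
static void unlock_wait_after(void)
{
	while (atomic_load_explicit(&locked, memory_order_acquire))
		;
}
```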
  3. 29 Apr 2015, 1 commit
    • tile: modify arch_spin_unlock_wait() semantics · 14c3dec2
      Committed by Chris Metcalf
      Rather than trying to wait until all possible lockers have
      unlocked the lock, we now only wait until the current locker
      (if any) has released the lock.
      
      The old code was correct, but the new code works more like the x86
      code and thus hopefully is more appropriate under contention.
      See commit 78bff1c8 ("x86/ticketlock: Fix spin_unlock_wait()
      livelock") for x86.
      Signed-off-by: Chris Metcalf <cmetcalf@ezchip.com>
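      A hedged sketch, again in userspace C11, of the two waiting strategies this commit contrasts. The ticket field names and layout are assumptions for illustration only and do not mirror tile's arch_spinlock_t.

```c
#include <stdatomic.h>
#include <stdint.h>

/* Assumed ticket layout: "current" is the ticket being served, "next" is
 * the next ticket to hand out; equal values mean the lock is free. */
struct ticket_lock {
	_Atomic uint32_t current;
	_Atomic uint32_t next;
};

/* Old strategy: wait until the lock is completely free.  Under contention
 * new lockers keep arriving, so this can spin for a long time. */
static void unlock_wait_for_full_unlock(struct ticket_lock *l)
{
	while (atomic_load_explicit(&l->current, memory_order_acquire) !=
	       atomic_load_explicit(&l->next, memory_order_relaxed))
		;
}

/* New strategy: remember who held the lock when we arrived and return as
 * soon as that one holder releases it (or if it was already free). */
static void unlock_wait_for_current_holder(struct ticket_lock *l)
{
	uint32_t seen = atomic_load_explicit(&l->current, memory_order_relaxed);

	if (seen == atomic_load_explicit(&l->next, memory_order_relaxed))
		return;	/* not locked at the time we looked */

	while (atomic_load_explicit(&l->current, memory_order_acquire) == seen)
		;	/* still the same holder: keep waiting */
}
```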
  4. 10 May 2013, 1 commit
  5. 13 Mar 2012, 1 commit
  6. 11 Mar 2011, 1 commit
    • arch/tile: fix deadlock bugs in rwlock implementation · 3c5ead52
      Committed by Chris Metcalf
      The first issue fixed in this patch is that pending rwlock write locks
      could lock out new readers; this could cause a deadlock if a read lock was
      held on cpu 1, a write lock was then attempted on cpu 2 and was pending,
      and cpu 1 was interrupted and attempted to re-acquire a read lock.
      The write lock code was modified to not lock out new readers.
      
      The second issue fixed is that there was a narrow race window where a tns
      instruction had been issued (setting the lock value to "1") and the store
      instruction to reset the lock value correctly had not yet been issued.
      In this case, if an interrupt occurred and the same cpu then tried to
      manipulate the lock, it would find the lock value set to "1" and spin
      forever, assuming some other cpu was partway through updating it.  The fix
      is to enforce an interrupt critical section around the tns/store pair.
      
      In addition, this change now arranges to always validate that after
      a readlock we have not wrapped around the count of readers, which
      is only eight bits.
      
      Since these changes make the rwlock "fast path" code heavier weight,
      I decided to move all the rwlock code out of line, leaving only the
      conventional spinlock code with fastpath inlines.  Since the read_lock
      and read_trylock implementations ended up very similar, I just expressed
      read_lock in terms of read_trylock.
      
      As part of this change I also eliminate support for the now-obsolete
      tns_atomic mode.
      Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
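      A minimal sketch of the "read_lock in terms of read_trylock" structure with an 8-bit reader count. The lock-word layout, the placement of the wrap check, and all names here are assumptions; the tile-specific tns instruction and the interrupt critical section the commit describes are not modeled.

```c
#include <stdatomic.h>
#include <stdint.h>

#define RD_COUNT_MASK	0xffu		/* assumed: readers in the low 8 bits */
#define WR_HELD		(1u << 8)	/* assumed: a writer holds the lock   */

/* Try once: fail if a writer is active or the 8-bit reader count would
 * wrap, otherwise bump the count with acquire semantics. */
static int rw_read_trylock(_Atomic uint32_t *lock)
{
	uint32_t val = atomic_load_explicit(lock, memory_order_relaxed);

	if ((val & WR_HELD) || (val & RD_COUNT_MASK) == RD_COUNT_MASK)
		return 0;

	return atomic_compare_exchange_weak_explicit(lock, &val, val + 1,
						     memory_order_acquire,
						     memory_order_relaxed);
}

/* read_lock expressed in terms of read_trylock, as in the commit text. */
static void rw_read_lock(_Atomic uint32_t *lock)
{
	while (!rw_read_trylock(lock))
		;	/* spin; the real code also handles tns/interrupt races */
}
```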
  7. 15 Nov 2010, 1 commit
    • arch/tile: fix rwlock so would-be write lockers don't block new readers · 24f3f6b5
      Committed by Chris Metcalf
      This avoids a deadlock in the IGMP code where one core gets a read
      lock, another core starts trying to get a write lock (thus blocking
      new readers), and then the first core tries to recursively re-acquire
      the read lock.
      
      We still try to preserve some degree of balance by giving priority
      to additional write lockers that come along while the lock is held
      for write, so they can all complete quickly and return the lock to
      the readers.
      Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
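      A short sketch of the policy described above, reusing the assumed lock-word layout from the previous sketch: writers claim the lock only when it is entirely free and never publish a "pending" state, so new readers are never blocked while a writer waits. The write-side balancing the commit mentions is not modeled.

```c
#include <stdatomic.h>
#include <stdint.h>

#define WR_HELD		(1u << 8)	/* assumed writer bit, as above */

/* Would-be writers wait for the lock word to become completely free and
 * then claim it in one step.  No "writer pending" bit is ever published,
 * so readers (which only test WR_HELD, as in the previous sketch) are not
 * shut out while a writer waits -- which is what avoids the recursive
 * read-lock deadlock seen in the IGMP code. */
static void rw_write_lock(_Atomic uint32_t *lock)
{
	uint32_t expected = 0;	/* free: no readers, no writer */

	while (!atomic_compare_exchange_weak_explicit(lock, &expected, WR_HELD,
						      memory_order_acquire,
						      memory_order_relaxed))
		expected = 0;	/* busy; reset and retry from the free state */
}
```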
  8. 05 Jun 2010, 1 commit