1. 29 Apr, 2015 · 1 commit
    • tile: modify arch_spin_unlock_wait() semantics · 14c3dec2
      Committed by Chris Metcalf
      Rather than trying to wait until all possible lockers have
      unlocked the lock, we now only wait until the current locker
      (if any) has released the lock.
      
      The old code was correct, but the new code works more like the x86
      code and thus hopefully is more appropriate under contention.
      See commit 78bff1c8 ("x86/ticketlock: Fix spin_unlock_wait()
      livelock") for x86.
      Signed-off-by: Chris Metcalf <cmetcalf@ezchip.com>
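The new semantics can be sketched with a user-space ticket lock. This is a minimal sketch under stated assumptions: the field names, lock-word layout, and C11 atomics are illustrative, not the actual tile implementation.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <unistd.h>

/* Hypothetical ticket-lock layout: "current" is the ticket being served,
 * "next" is the next ticket to hand out; the lock is held iff
 * current != next. (Illustrative only -- the real tile lock word differs.) */
struct ticket_lock {
    atomic_uint current;
    atomic_uint next;
};

/* New semantics: return as soon as the holder observed at entry (if any)
 * has released the lock. We do NOT wait for the lock to become free, so
 * a steady stream of new lockers can no longer livelock us. */
static void spin_unlock_wait(struct ticket_lock *l)
{
    unsigned int cur = atomic_load(&l->current);

    if (cur == atomic_load(&l->next))
        return;                        /* not held: nothing to wait for */

    while (atomic_load(&l->current) == cur)
        ;                              /* spin until that holder releases */
}

/* Helper for the usage below: release the lock after a short delay. */
static void *releaser(void *arg)
{
    struct ticket_lock *l = arg;
    usleep(10000);
    atomic_fetch_add(&l->current, 1);  /* serve the next ticket */
    return NULL;
}
```

Under the old semantics the wait condition was effectively "lock is free" (current == next), which back-to-back lockers can keep false indefinitely; spinning only on the snapshotted `current` bounds the wait to at most one critical section.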
  2. 10 May, 2013 · 1 commit
  3. 13 Mar, 2012 · 1 commit
  4. 11 Mar, 2011 · 1 commit
    • arch/tile: fix deadlock bugs in rwlock implementation · 3c5ead52
      Committed by Chris Metcalf
      The first issue fixed in this patch is that pending rwlock write locks
      could lock out new readers; this could cause a deadlock if a read lock was
      held on cpu 1, a write lock was then attempted on cpu 2 and was pending,
      and cpu 1 was interrupted and attempted to re-acquire a read lock.
      The write lock code was modified to not lock out new readers.
      
      The second issue fixed is that there was a narrow race window where a tns
      instruction had been issued (setting the lock value to "1") and the store
      instruction to reset the lock value correctly had not yet been issued.
      In this case, if an interrupt occurred and the same cpu then tried to
      manipulate the lock, it would find the lock value set to "1" and spin
      forever, assuming some other cpu was partway through updating it.  The fix
      is to enforce an interrupt critical section around the tns/store pair.
      
      In addition, this change now arranges to always validate that after
      a read lock we have not wrapped around the count of readers, which
      is only eight bits.
      
      Since these changes make the rwlock "fast path" code heavier weight,
      I decided to move all the rwlock code out of line, leaving only the
      conventional spinlock code with fastpath inlines.  Since the read_lock
      and read_trylock implementations ended up very similar, I just expressed
      read_lock in terms of read_trylock.
      
      As part of this change I also eliminate support for the now-obsolete
      tns_atomic mode.
      Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
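The reshaped read path can be sketched as follows. This is a hedged sketch: the bit layout, names, and C11 atomics are assumptions for illustration; the real code uses tile-specific tns sequences and the interrupt critical section described above, which user space cannot reproduce.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical word layout: bit 31 = write-held, low 8 bits = reader count. */
#define WR_HELD (1u << 31)
#define RD_MASK 0xffu

struct rwlock_sketch {
    atomic_uint val;
};

static bool read_trylock_sketch(struct rwlock_sketch *l)
{
    unsigned int v = atomic_load(&l->val);

    if (v & WR_HELD)
        return false;             /* an actual writer holds the lock */

    /* Validate that we have not wrapped the 8-bit reader count. */
    assert((v & RD_MASK) != RD_MASK);

    return atomic_compare_exchange_strong(&l->val, &v, v + 1);
}

/* read_lock expressed in terms of read_trylock, as in the patch. Note
 * that nothing here checks for *pending* writers, so a cpu that already
 * holds a read lock can always re-acquire one from interrupt context. */
static void read_lock_sketch(struct rwlock_sketch *l)
{
    while (!read_trylock_sketch(l))
        ;                         /* spin */
}

static void read_unlock_sketch(struct rwlock_sketch *l)
{
    atomic_fetch_sub(&l->val, 1);
}
```

Admitting readers on the sole condition "no writer actually holds the lock" is what removes the first deadlock: a pending writer can no longer wedge a cpu that needs to recursively re-acquire a read lock.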
  5. 15 Nov, 2010 · 1 commit
    • arch/tile: fix rwlock so would-be write lockers don't block new readers · 24f3f6b5
      Committed by Chris Metcalf
      This avoids a deadlock in the IGMP code where one core gets a read
      lock, another core starts trying to get a write lock (thus blocking
      new readers), and then the first core tries to recursively re-acquire
      the read lock.
      
      We still try to preserve some degree of balance by giving priority
      to additional write lockers that come along while the lock is held
      for write, so they can all complete quickly and return the lock to
      the readers.
      Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
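The difference between the old and new reader-admission policies can be reduced to two predicates over the lock state. This is a model of the policy only, not the lock implementation; the struct and function names are purely illustrative.

```c
#include <stdbool.h>

/* Minimal model of the lock state a would-be reader observes. */
struct rwstate {
    int readers;          /* current read holders */
    int writers_pending;  /* writers spinning to acquire the lock */
    bool write_held;      /* a writer currently holds the lock */
};

/* Old policy: a merely pending writer already blocks new readers. */
static bool reader_may_enter_old(const struct rwstate *s)
{
    return !s->write_held && s->writers_pending == 0;
}

/* New policy: only an actual write holder blocks readers, so a cpu that
 * already holds a read lock can always re-acquire it recursively. */
static bool reader_may_enter_new(const struct rwstate *s)
{
    return !s->write_held;
}
```

In the IGMP scenario above — one core holding a read lock, another core with a write pending, the first core recursively reading — the old predicate denies the recursive reader forever while the writer in turn waits for the reader count to drain; the new predicate lets the recursive reader proceed.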
  6. 05 Jun, 2010 · 1 commit