    arch/tile: fix deadlock bugs in rwlock implementation · 3c5ead52
    Committed by Chris Metcalf
    The first issue fixed in this patch is that pending rwlock write locks
    could lock out new readers; this could cause a deadlock if a read lock was
    held on cpu 1, a write lock was then attempted on cpu 2 and was pending,
    and cpu 1 was interrupted and attempted to re-acquire a read lock.
    The write lock code was modified to not lock out new readers.
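    The effect of that fix can be sketched with a simplified model (this is not the actual tile code; the bit layout and names here are assumptions): readers are blocked only by a *held* write lock, so a merely pending writer on another cpu no longer prevents an interrupted reader from re-acquiring the read lock.

    ```c
    #include <stdatomic.h>
    #include <stdbool.h>

    /* Hypothetical layout: low 8 bits count readers, WR_HELD marks a
     * granted write lock, WR_PENDING a writer still spinning for one. */
    #define RD_MASK    0x00ffu
    #define WR_HELD    0x0100u
    #define WR_PENDING 0x0200u

    static _Atomic unsigned lockw;

    /* After the fix, only a held write lock blocks a new reader; a
     * pending writer does not, so cpu 1's interrupt handler can still
     * take the read lock while a writer spins on cpu 2. */
    static bool read_trylock_model(void)
    {
        unsigned v = atomic_load(&lockw);
        while (!(v & WR_HELD)) {
            if (atomic_compare_exchange_weak(&lockw, &v, v + 1))
                return true;
        }
        return false;
    }
    ```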
    
    The second issue fixed is that there was a narrow race window where a tns
    instruction had been issued (setting the lock value to "1") and the store
    instruction to reset the lock value correctly had not yet been issued.
    In this case, if an interrupt occurred and the same cpu then tried to
    manipulate the lock, it would find the lock value set to "1" and spin
    forever, assuming some other cpu was partway through updating it.  The fix
    is to enforce an interrupt critical section around the tns/store pair.
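    A sketch of that pattern, with hypothetical stand-ins for the tile tns instruction and the irq mask/unmask helpers: tns atomically reads the word and writes "1", so "1" transiently means "update in progress", and the critical section guarantees no interrupt on the same cpu can observe that transient value and spin forever.

    ```c
    #include <stdatomic.h>

    /* Stand-ins for masking/unmasking interrupts on this cpu. */
    static void arch_irq_disable(void) { /* would mask irqs here */ }
    static void arch_irq_enable(void)  { /* would unmask irqs here */ }

    /* Models the tile tns insn: atomically read the word, write 1. */
    static unsigned model_tns(_Atomic unsigned *w)
    {
        return atomic_exchange(w, 1u);
    }

    static void lock_word_add(_Atomic unsigned *w, unsigned delta)
    {
        arch_irq_disable();           /* interrupt critical section begins */
        unsigned old = model_tns(w);  /* word now transiently reads 1 */
        atomic_store(w, old + delta); /* store the real value back */
        arch_irq_enable();            /* critical section ends */
    }
    ```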
    
    In addition, this change now arranges to always validate that after
    a readlock we have not wrapped around the count of readers, which
    is only eight bits.
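    The wrap check can be sketched as follows (a simplified model, not the real code): since the reader count lives in the low eight bits, the 256th concurrent reader would wrap it back to zero, so the count is validated after the increment and backed out on overflow (the real code would treat this as a fatal bug rather than returning failure).

    ```c
    #include <stdatomic.h>
    #include <stdbool.h>

    /* Hypothetical helper: take a read-count and verify it did not
     * wrap the eight-bit reader field. */
    static bool read_lock_count_ok(_Atomic unsigned *w)
    {
        unsigned v = atomic_fetch_add(w, 1u) + 1u;
        if ((v & 0xffu) == 0u) {      /* count wrapped past 255 */
            atomic_fetch_sub(w, 1u);  /* undo the increment */
            return false;
        }
        return true;
    }
    ```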
    
    Since these changes make the rwlock "fast path" code heavier weight,
    I decided to move all the rwlock code out of line, leaving only the
    conventional spinlock code with fastpath inlines.  Since the read_lock
    and read_trylock implementations ended up very similar, I just expressed
    read_lock in terms of read_trylock.
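    That refactoring amounts to the usual spin-until-trylock-succeeds shape; here is a toy sketch (the trylock body below is a hypothetical stand-in that succeeds on the third attempt, just to make the loop observable):

    ```c
    #include <stdbool.h>

    static int attempts;

    /* Toy stand-in for the out-of-line read_trylock. */
    static bool model_read_trylock(void)
    {
        return ++attempts >= 3;   /* pretend contention clears on try 3 */
    }

    /* read_lock expressed in terms of read_trylock. */
    static void model_read_lock(void)
    {
        while (!model_read_trylock())
            ;  /* the real code would insert a cpu relax/backoff here */
    }
    ```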
    
    As part of this change I also eliminate support for the now-obsolete
    tns_atomic mode.
    Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
spinlock_32.h 3.4 KB