    arm64: spinlock: order spin_{is_locked,unlock_wait} against local locks · 38b850a7
    Committed by Will Deacon
    spin_is_locked has grown two very different use-cases:
    
    (1) [The sane case] API functions may require a certain lock to be held
        by the caller and can therefore use spin_is_locked as part of an
        assert statement in order to verify that the lock is indeed held.
        For example, usage of assert_spin_locked (see the sketch after
        this list).
    
    (2) [The insane case] There are two locks, where a CPU takes one of the
        locks and then checks whether or not the other one is held before
        accessing some shared state. For example, the "optimized locking" in
        ipc/sem.c.
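
    For case (1), a minimal sketch (struct counter and counter_inc are
    hypothetical names for illustration, not from the kernel source):

      #include <linux/spinlock.h>

      struct counter {
              spinlock_t lock;
              unsigned long value;
      };

      /* Caller must hold c->lock; assert_spin_locked() verifies it. */
      static void counter_inc(struct counter *c)
      {
              assert_spin_locked(&c->lock);
              c->value++;
      }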
    
    In the latter case, the sequence looks like:
    
      spin_lock(&sem->lock);
      if (!spin_is_locked(&sma->sem_perm.lock))
        /* Access shared state */
    
    and requires that the spin_is_locked check is ordered after taking the
    sem->lock. Unfortunately, since our spinlocks are implemented using a
    LDAXR/STXR sequence, the read of &sma->sem_perm.lock can be speculated
    before the STXR and consequently return a stale value.
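
    To make the hazard concrete, a simplified sketch of the LDAXR/STXR
    acquisition sequence followed by the spin_is_locked read (illustrative
    pseudo-asm, not the exact kernel code):

      1: ldaxr   w0, [&sem->lock]          // load-acquire the lock word
         add     w1, w0, #(1 << 16)        // bump the "next" ticket
         stxr    w2, w1, [&sem->lock]      // try to commit the new ticket
         cbnz    w2, 1b                    // retry if the exclusive failed
         ...
         ldr     w3, [&sma->sem_perm.lock] // the spin_is_locked read

    The acquire semantics of LDAXR only order later accesses after that
    load; nothing orders the final LDR after the STXR, so the CPU is free
    to satisfy it early and observe a stale, apparently-unlocked value.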
    
    Whilst this hasn't been seen to cause issues in practice, PowerPC fixed
    the same issue in 51d7d520 ("powerpc: Add smp_mb() to
    arch_spin_is_locked()") and, although we did something similar for
    spin_unlock_wait in d86b8da0 ("arm64: spinlock: serialise
    spin_unlock_wait against concurrent lockers"), that doesn't actually take
    care of ordering against local acquisition of a different lock.
    
    This patch adds an smp_mb() to the start of our arch_spin_is_locked and
    arch_spin_unlock_wait routines to ensure that the lock value is always
    loaded after any other locks have been taken by the current CPU.
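
    The resulting check looks roughly as follows (a sketch of the shape of
    the fix against the ticket-lock layout of the time, not the verbatim
    patch):

      static inline int arch_spin_is_locked(arch_spinlock_t *lock)
      {
              /*
               * Order this load after any lock acquisitions already
               * performed by this CPU, so a stale "unlocked" value
               * cannot be observed.
               */
              smp_mb();
              return !arch_spin_value_unlocked(READ_ONCE(*lock));
      }

    arch_spin_unlock_wait gains the same leading smp_mb().
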
    Reported-by: Peter Zijlstra <peterz@infradead.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>