    locking/lockdep: Handle statically initialized PER_CPU locks properly · 383776fa
    Committed by Thomas Gleixner
    If a PER_CPU struct which contains a spin_lock is statically initialized
    via:
    
    DEFINE_PER_CPU(struct foo, bla) = {
    	.lock = __SPIN_LOCK_UNLOCKED(bla.lock)
    };
    
    then lockdep assigns a separate key to each lock because the logic for
    assigning a key to statically initialized locks is to use the address as
    the key. With per CPU locks the address is obviously different on each CPU.
    
    That's wrong, because all locks should have the same key.
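
    The effect can be seen with a minimal userspace sketch (illustration only,
    not kernel code; it models the per CPU area simply as one copy of the
    struct per CPU): the lock address, which lockdep uses as the key for a
    statically initialized lock, differs from CPU to CPU.

    #include <stdio.h>

    #define NR_CPUS 4

    struct foo {
    	int lock;		/* stand-in for the spinlock */
    };

    /* One copy of "bla" per CPU, as DEFINE_PER_CPU conceptually provides. */
    static struct foo bla[NR_CPUS];

    int main(void)
    {
    	for (int cpu = 0; cpu < NR_CPUS; cpu++)
    		printf("cpu%d: address based key = %p\n",
    		       cpu, (void *)&bla[cpu].lock);
    	/* Prints a different key per CPU, although logically it is the
    	   same lock and should have a single lock class. */
    	return 0;
    }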
    
    To solve this, the following modifications are required:
    
     1) Extend the is_kernel/module_percpu_addr() functions to hand back the
        canonical address of the per CPU address, i.e. the per CPU address
        minus the per CPU offset.
    
     2) Check the lock address with these functions and if the per CPU check
        matches, use the returned canonical address as the lock key, so all per
        CPU locks have the same key (see the sketch after this list).
    
     3) Move the static_obj(key) check into look_up_lock_class() so this check
        can be avoided for statically initialized per CPU locks.  That's
        required because the canonical address fails the static_obj(key) check
        for obvious reasons.
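
    A hedged sketch of the canonical address idea, under the same simplified
    model as above (per_cpu_offset[] below is an illustrative stand-in, not the
    kernel's actual symbol): subtracting the CPU's per CPU offset from the lock
    address yields the same value on every CPU, which can then serve as the
    shared lock key.

    #include <stdio.h>
    #include <stdint.h>

    #define NR_CPUS 4

    struct foo { int lock; };

    static struct foo canonical;		/* template (canonical) copy */
    static struct foo percpu_area[NR_CPUS];	/* one copy per CPU */

    /* Illustrative stand-in for the per CPU offset of each CPU's area. */
    static uintptr_t per_cpu_offset[NR_CPUS];

    int main(void)
    {
    	for (int cpu = 0; cpu < NR_CPUS; cpu++)
    		per_cpu_offset[cpu] =
    			(uintptr_t)&percpu_area[cpu] - (uintptr_t)&canonical;

    	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
    		uintptr_t addr = (uintptr_t)&percpu_area[cpu].lock;
    		uintptr_t key  = addr - per_cpu_offset[cpu];

    		/* The canonical address is identical for every CPU, so all
    		   per CPU instances share one lockdep key. */
    		printf("cpu%d: addr=%p canonical key=%p\n",
    		       cpu, (void *)addr, (void *)key);
    	}
    	return 0;
    }
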
    Reported-by: Mike Galbraith <efault@gmx.de>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    [ Merged Dan's fixups for !MODULES and !SMP into this patch. ]
    Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: Dan Murphy <dmurphy@ti.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Link: http://lkml.kernel.org/r/20170227143736.pectaimkjkan5kow@linutronix.de
    Signed-off-by: Ingo Molnar <mingo@kernel.org>