1. 19 Jun 2015, 1 commit
  2. 17 Apr 2015, 1 commit
    • lockdep: Make print_lock() robust against concurrent release · d7bc3197
      Authored by Peter Zijlstra
      During sysrq's show-held-locks command it is possible that
      hlock_class() returns NULL for a given lock. The result is then (after
      the warning):
      
      	|BUG: unable to handle kernel NULL pointer dereference at 0000001c
      	|IP: [<c1088145>] get_usage_chars+0x5/0x100
      	|Call Trace:
      	| [<c1088263>] print_lock_name+0x23/0x60
      	| [<c1576b57>] print_lock+0x5d/0x7e
      	| [<c1088314>] lockdep_print_held_locks+0x74/0xe0
      	| [<c1088652>] debug_show_all_locks+0x132/0x1b0
      	| [<c1315c48>] sysrq_handle_showlocks+0x8/0x10
      
      This *might* happen because the thread on the other CPU drops the lock
      after we have read ->lockdep_depth, so ->held_locks no longer points
      to a lock that is held.
      
      The fix here is to simply ignore it and continue.
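
      A minimal sketch of that defensive shape, assuming the surrounding
      lockdep helpers (a hedged reconstruction, not the exact upstream
      diff):

      	static void print_lock(struct held_lock *hlock)
      	{
      		/*
      		 * We can race with lock release on another CPU, so the
      		 * class may already be gone; bail out instead of
      		 * dereferencing NULL.
      		 */
      		struct lock_class *class = hlock_class(hlock);

      		if (!class) {
      			printk("<RELEASED>\n");
      			return;
      		}

      		print_lock_name(class);
      		printk(", at: [<%p>] %pS\n",
      		       (void *)hlock->acquire_ip, (void *)hlock->acquire_ip);
      	}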
      Reported-by: Andreas Messerschmid <andreas@linutronix.de>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  3. 23 Mar 2015, 1 commit
    • lockdep: Fix the module unload key range freeing logic · 35a9393c
      Authored by Peter Zijlstra
      Module unload calls lockdep_free_key_range(), which removes entries
      from the data structures. Most of the lockdep code, OTOH, assumes the
      data structures are append-only; see in particular the comments in
      add_lock_to_list() and look_up_lock_class().
      
      Clearly this has only worked by accident; make it work properly. The
      actual scenario that makes it go boom would involve the memory freed
      by the module unload being re-allocated and re-used for a lock inside
      an rcu-sched grace period. This is a very unlikely scenario, but
      better to plug the hole anyway.
      
      Use RCU list iteration in all places and amend the comments.
      
      Change lockdep_free_key_range() to issue a synchronize_sched() between
      removal from the lists and returning, after which the memory is
      freed. Further ensure the callers are placed correctly and document
      the requirements.
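
      A hedged sketch of the resulting shape (simplified; the real function
      also scrubs the chain hash, and within()/zap_class()/classhash_table
      are the lockdep-internal helpers this assumes):

      	void lockdep_free_key_range(void *start, unsigned long size)
      	{
      		struct lock_class *class;
      		struct list_head *head;
      		unsigned long flags;
      		int i;

      		raw_local_irq_save(flags);
      		arch_spin_lock(&lockdep_lock);

      		/* Unlink every class whose key lies in [start, start + size). */
      		for (i = 0; i < CLASSHASH_SIZE; i++) {
      			head = classhash_table + i;
      			list_for_each_entry_rcu(class, head, hash_entry) {
      				if (within(class->key, start, size))
      					zap_class(class);
      			}
      		}

      		arch_spin_unlock(&lockdep_lock);
      		raw_local_irq_restore(flags);

      		/*
      		 * Wait for concurrent lockless lookups to finish before the
      		 * module's memory is handed back to the allocator.
      		 */
      		synchronize_sched();
      	}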
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andrey Tsyvarev <tsyvarev@ispras.ru>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  4. 03 Oct 2014, 1 commit
    • locking/lockdep: Revert qrwlock recursive stuff · 8acd91e8
      Authored by Peter Zijlstra
      Commit f0bab73c ("locking/lockdep: Restrict the use of recursive
      read_lock() with qrwlock") changed lockdep to try and conform to the
      qrwlock semantics which differ from the traditional rwlock semantics.
      
      In particular qrwlock is fair outside of interrupt context, but in
      interrupt context readers will ignore all fairness.
      
      The problem with modeling this is that the read and write sides have
      different lock-state (interrupt) semantics, but we only have a single
      representation of these. Therefore lockdep gets confused and thinks
      the lock can cause interrupt lock inversions.
      
      So revert it for now; the old rwlock semantics were already imperfectly
      modeled and the qrwlock extra won't fit either.
      
      If we want to properly fix this, I think we need to resurrect the
      work Gautham did a few years ago that split the read and write state
      of locks:
      
         http://lwn.net/Articles/332801/
      
      FWIW the locking selftest that would've failed (and was reported by
      Borislav earlier) is something like:
      
        RL(X1);	/* IRQ-ON */
        LOCK(A);
        UNLOCK(A);
        RU(X1);
      
        IRQ_ENTER();
        RL(X1);	/* IN-IRQ */
        RU(X1);
        IRQ_EXIT();
      
      At which point it would report that because A is an IRQ-unsafe lock we
      can suffer the following inversion:
      
      	CPU0		CPU1
      
      	lock(A)
      			lock(X1)
      			lock(A)
      	<IRQ>
      	 lock(X1)
      
      And this is 'wrong' because X1 can recurse (assuming the above locks
      are in fact read-locks) but lockdep doesn't know about this.
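
      In plain kernel code the selftest corresponds to something like this
      hedged illustration (x1 and a are arbitrary locks; under traditional
      rwlock semantics the in-IRQ reader always succeeds, so the pattern is
      deadlock-free):

      	DEFINE_RWLOCK(x1);
      	DEFINE_SPINLOCK(a);

      	void process_context(void)		/* IRQ-ON */
      	{
      		read_lock(&x1);
      		spin_lock(&a);			/* A is IRQ-unsafe */
      		spin_unlock(&a);
      		read_unlock(&x1);
      	}

      	void irq_handler(void)			/* IN-IRQ */
      	{
      		read_lock(&x1);			/* recursive reader is OK */
      		read_unlock(&x1);
      	}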
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Waiman Long <Waiman.Long@hp.com>
      Cc: ego@linux.vnet.ibm.com
      Cc: bp@alien8.de
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Link: http://lkml.kernel.org/r/20140930132600.GA7444@worktop.programming.kicks-ass.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  5. 13 Aug 2014, 1 commit
    • locking/lockdep: Restrict the use of recursive read_lock() with qrwlock · f0bab73c
      Authored by Waiman Long
      Unlike the original unfair rwlock implementation, a queued rwlock
      grants the lock according to the chronological order of the lock
      requests, except when the requester is in interrupt context.
      Consequently, recursive read_lock calls will now hang the process if
      there is a write_lock call somewhere in between the read_lock calls.
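
      The hazard, written out as an interleaving (a hedged sketch; l is an
      arbitrary queued rwlock):

      	CPU0			CPU1

      	read_lock(&l);		/* granted */
      				write_lock(&l);	/* queued behind CPU0's reader */
      	read_lock(&l);		/* queues behind the writer: deadlock */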
      
      This patch updates the lockdep implementation to look for recursive
      read_lock calls. A new read state (3) is used to mark those read_lock
      calls that cannot be made recursively except in interrupt context.
      The new read state exhausts the 2 bits available in the
      held_lock::read bit field; the addition of any new read state in the
      future may require a redesign of how all those bits are squeezed
      together in the held_lock structure.
      Signed-off-by: Waiman Long <Waiman.Long@hp.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Maarten Lankhorst <maarten.lankhorst@canonical.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Scott J Norton <scott.norton@hp.com>
      Cc: Fengguang Wu <fengguang.wu@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Link: http://lkml.kernel.org/r/1407345722-61615-2-git-send-email-Waiman.Long@hp.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  6. 17 Jul 2014, 1 commit
  7. 06 May 2014, 1 commit
  8. 14 Feb 2014, 2 commits
  9. 10 Feb 2014, 3 commits
    • lockdep: Change mark_held_locks() to check hlock->check instead of lockdep_no_validate · 34d0ed5e
      Authored by Oleg Nesterov
      The __lockdep_no_validate check in mark_held_locks() adds the subtle
      and (afaics) unnecessary difference between no-validate and check==0.
      And this looks even more inconsistent because __lock_acquire() skips
      mark_irqflags()->mark_lock() if !check.
      
      Change mark_held_locks() to check hlock->check instead.
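
      A hedged sketch of the loop after the change; usage_bit() here is a
      hypothetical stand-in for the usage-bit computation done inline in
      the real function:

      	static int mark_held_locks(struct task_struct *curr, enum mark_type mark)
      	{
      		struct held_lock *hlock;
      		int i;

      		for (i = 0; i < curr->lockdep_depth; i++) {
      			hlock = curr->held_locks + i;

      			/* was: hlock_class(hlock) == &__lockdep_no_validate__ */
      			if (!hlock->check)
      				continue;

      			if (!mark_lock(curr, hlock, usage_bit(mark, hlock)))
      				return 0;
      		}

      		return 1;
      	}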
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Cc: Dave Jones <davej@redhat.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Alan Stern <stern@rowland.harvard.edu>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20140120182013.GA26505@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • lockdep: Don't create the wrong dependency on hlock->check == 0 · 1b5ff816
      Authored by Oleg Nesterov
      Test-case:
      
      	DEFINE_MUTEX(m1);
      	DEFINE_MUTEX(m2);
      	DEFINE_MUTEX(mx);
      
      	void lockdep_should_complain(void)
      	{
      		lockdep_set_novalidate_class(&mx);
      
      		// m1 -> mx -> m2
      		mutex_lock(&m1);
      		mutex_lock(&mx);
      		mutex_lock(&m2);
      		mutex_unlock(&m2);
      		mutex_unlock(&mx);
      		mutex_unlock(&m1);
      
      		// m2 -> m1 ; should trigger the warning
      		mutex_lock(&m2);
      		mutex_lock(&m1);
      		mutex_unlock(&m1);
      		mutex_unlock(&m2);
      	}
      
      This doesn't trigger any warning; lockdep can't detect the trivial
      deadlock.
      
      This is because lock(&mx) correctly avoids the m1 -> mx dependency;
      it skips validate_chain() due to mx->check == 0. But lock(&m2)
      wrongly adds mx -> m2, and thus m1 -> m2 is never created.
      
      rcu_lock_acquire()->lock_acquire(check => 0) is fine due to read == 2,
      so currently only __lockdep_no_validate__ can trigger this problem.
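
      A hedged sketch of the fix in the dependency-recording path
      (abbreviated; only validated held locks contribute new edges):

      	/* In check_prevs_add(), for each previously-held lock: */
      	hlock = curr->held_locks + depth - 1;

      	/*
      	 * Only non-recursive-read, validated entries get new
      	 * dependencies added (was: hlock->read != 2 alone):
      	 */
      	if (hlock->read != 2 && hlock->check) {
      		if (!check_prev_add(curr, hlock, next, distance))
      			return 0;
      	}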
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Cc: Dave Jones <davej@redhat.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Alan Stern <stern@rowland.harvard.edu>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20140120182010.GA26498@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • lockdep: Make held_lock->check and "int check" argument bool · fb9edbe9
      Authored by Oleg Nesterov
      The "int check" argument of lock_acquire() and held_lock->check are
      misleading. This is actually a boolean: 2 means "true", everything
      else is "false".
      
      And there is no need to pass 1 or 0 to lock_acquire() depending on
      CONFIG_PROVE_LOCKING; __lock_acquire() checks prove_locking at the
      start and clears "check" if !CONFIG_PROVE_LOCKING.
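
      A hedged sketch of what the bitfield side of this looks like (other
      held_lock fields elided):

      	struct held_lock {
      		/* ... */
      		unsigned int read:2;
      		unsigned int check:1;	/* was :2; "2 == true" becomes plain 0/1 */
      		unsigned int hardirqs_off:1;
      		/* ... */
      	};

      	/* Callers now pass 0 or 1 for "check", e.g.: */
      	lock_acquire(&lock->dep_map, subclass, 0, 0, 1, NULL, _RET_IP_);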
      
      Note: probably we can simply kill this member/arg. The only explicit
      user of check => 0 is rcu_lock_acquire(), perhaps we can change it to
      use lock_acquire(trylock =>, read => 2). __lockdep_no_validate means
      check => 0 implicitly, but we can change validate_chain() to check
      hlock->instance->key instead. Not to mention it would be nice to get
      rid of lockdep_set_novalidate_class().
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Cc: Dave Jones <davej@redhat.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Alan Stern <stern@rowland.harvard.edu>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20140120182006.GA26495@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  10. 27 Nov 2013, 1 commit
  11. 13 Nov 2013, 1 commit
  12. 06 Nov 2013, 1 commit
  13. 25 Sep 2013, 1 commit
  14. 12 May 2013, 1 commit
  15. 26 Apr 2013, 2 commits
  16. 08 Apr 2013, 1 commit
  17. 01 Apr 2013, 1 commit
    • Revert "lockdep: check that no locks held at freeze time" · dbf520a9
      Authored by Paul Walmsley
      This reverts commit 6aa97070.
      
      Commit 6aa97070 ("lockdep: check that no locks held at freeze time")
      causes problems with NFS root filesystems.  The failures were noticed on
      OMAP2 and 3 boards during kernel init:
      
        [ BUG: swapper/0/1 still has locks held! ]
        3.9.0-rc3-00344-ga937536b #1 Not tainted
        -------------------------------------
        1 lock held by swapper/0/1:
         #0:  (&type->s_umount_key#13/1){+.+.+.}, at: [<c011e84c>] sget+0x248/0x574
      
        stack backtrace:
          rpc_wait_bit_killable
          __wait_on_bit
          out_of_line_wait_on_bit
          __rpc_execute
          rpc_run_task
          rpc_call_sync
          nfs_proc_get_root
          nfs_get_root
          nfs_fs_mount_common
          nfs_try_mount
          nfs_fs_mount
          mount_fs
          vfs_kern_mount
          do_mount
          sys_mount
          do_mount_root
          mount_root
          prepare_namespace
          kernel_init_freeable
          kernel_init
      
      Although the rootfs mounts, the system is unstable.  Here's a transcript
      from a PM test:
      
        http://www.pwsan.com/omap/testlogs/test_v3.9-rc3/20130317194234/pm/37xxevm/37xxevm_log.txt
      
      Here's what the test log should look like:
      
        http://www.pwsan.com/omap/testlogs/test_v3.8/20130218214403/pm/37xxevm/37xxevm_log.txt
      
      Mailing list discussion is here:
      
        http://lkml.org/lkml/2013/3/4/221
      
      Deal with this for v3.9 by reverting the problem commit, until folks can
      figure out the right long-term course of action.
      Signed-off-by: Paul Walmsley <paul@pwsan.com>
      Cc: Mandeep Singh Baines <msb@chromium.org>
      Cc: Jeff Layton <jlayton@redhat.com>
      Cc: Shawn Guo <shawn.guo@linaro.org>
      Cc: <maciej.rutecki@gmail.com>
      Cc: Fengguang Wu <fengguang.wu@intel.com>
      Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Ben Chan <benchan@chromium.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Rafael J. Wysocki <rjw@sisk.pl>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  18. 24 Mar 2013, 1 commit
    • Export __lockdep_no_validate__ · ea6749c7
      Authored by Kent Overstreet
      Hack, but bcache needs a way around lockdep for locking during
      garbage collection: we need to keep multiple btree nodes locked for
      coalescing, and rw_lock_nested() isn't really sufficient or
      appropriate here.
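
      The change itself is tiny; a hedged sketch of its likely shape (the
      b->lock usage line is a hypothetical illustration):

      	/* kernel/lockdep.c */
      	struct lock_class_key __lockdep_no_validate__;
      	EXPORT_SYMBOL_GPL(__lockdep_no_validate__);

      	/* which lets a module such as bcache do: */
      	lockdep_set_novalidate_class(&b->lock);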
      Signed-off-by: Kent Overstreet <koverstreet@google.com>
      CC: Peter Zijlstra <peterz@infradead.org>
      CC: Ingo Molnar <mingo@redhat.com>
  19. 28 Feb 2013, 1 commit
  20. 19 Feb 2013, 2 commits
  21. 13 Sep 2012, 1 commit
  22. 22 Feb 2012, 1 commit
  23. 12 Dec 2011, 1 commit
  24. 07 Dec 2011, 1 commit
  25. 06 Dec 2011, 3 commits
  26. 14 Nov 2011, 1 commit
  27. 08 Nov 2011, 1 commit
    • lockdep: Show subclass in pretty print of lockdep output · e5e78d08
      Authored by Steven Rostedt
      The pretty print of the lockdep debug splat uses just the lock name
      to show how the locking scenario happens. But when it comes to
      nested locks, the output becomes confusing, which defeats the point
      of pretty-printing the lock scenario.
      
      Without displaying the subclass info, we get the following output:
      
        Possible unsafe locking scenario:
      
              CPU0                    CPU1
              ----                    ----
         lock(slock-AF_INET);
                                      lock(slock-AF_INET);
                                      lock(slock-AF_INET);
         lock(slock-AF_INET);
      
        *** DEADLOCK ***
      
      The above looks more like an A->A locking bug than an A->B, B->A one.
      By adding the subclass to the output, we can see what really happened:
      
       other info that might help us debug this:
      
        Possible unsafe locking scenario:
      
              CPU0                    CPU1
              ----                    ----
         lock(slock-AF_INET);
                                      lock(slock-AF_INET/1);
                                      lock(slock-AF_INET);
         lock(slock-AF_INET/1);
      
        *** DEADLOCK ***
      
      This bug was discovered while tracking down a real bug caught by lockdep.
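
      For context, the /1 suffix denotes subclass 1, i.e. a lock acquired
      through the _nested() API; a hedged example with arbitrary locks a
      and b:

      	spin_lock(&a);
      	spin_lock_nested(&b, SINGLE_DEPTH_NESTING);	/* reported as "b/1" */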
      
      Link: http://lkml.kernel.org/r/20111025202049.GB25043@hostway.ca
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Reported-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Simon Kirby <sim@hostway.ca>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  28. 29 Sep 2011, 1 commit
    • rcu: Restore checks for blocking in RCU read-side critical sections · b3fbab05
      Authored by Paul E. McKenney
      Long ago, using TREE_RCU with PREEMPT would result in "scheduling
      while atomic" diagnostics if you blocked in an RCU read-side critical
      section.  However, PREEMPT now implies TREE_PREEMPT_RCU, which defeats
      this diagnostic.  This commit therefore adds a replacement diagnostic
      based on PROVE_RCU.
      
      Because rcu_lockdep_assert() and lockdep_rcu_dereference() are now being
      used for things that have nothing to do with rcu_dereference(), rename
      lockdep_rcu_dereference() to lockdep_rcu_suspicious() and add a third
      argument that is a string indicating what is suspicious.  This third
      argument is passed in from a new third argument to rcu_lockdep_assert().
      Update all calls to rcu_lockdep_assert() to add an informative third
      argument.
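
      A hedged example of the updated assertion form (a condition plus a
      string describing what is suspicious):

      	rcu_lockdep_assert(rcu_read_lock_held(),
      			   "suspicious rcu_dereference_check() usage");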
      
      Also, add a pair of rcu_lockdep_assert() calls from within
      rcu_note_context_switch(), one complaining if a context switch occurs
      in an RCU-bh read-side critical section and another complaining if a
      context switch occurs in an RCU-sched read-side critical section.
      These are present only if the PROVE_RCU kernel parameter is enabled.
      
      Finally, fix some checkpatch whitespace complaints in lockdep.c.
      
      Again, you must enable PROVE_RCU to see these new diagnostics.  But you
      are enabling PROVE_RCU to check out new RCU uses in any case, aren't you?
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  29. 18 Sep 2011, 1 commit
  30. 09 Aug 2011, 1 commit
  31. 04 Aug 2011, 3 commits