1. 23 Nov 2015 (1 commit)
    • treewide: Remove old email address · 90eec103
      Committed by Peter Zijlstra
      There were still a number of references to my old Red Hat email
      address in the kernel source. Remove these while keeping the
      Red Hat copyright notices intact.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      90eec103
  2. 07 Nov 2015 (1 commit)
    • mm, page_alloc: distinguish between being unable to sleep, unwilling to sleep... · d0164adc
      Committed by Mel Gorman
      mm, page_alloc: distinguish between being unable to sleep, unwilling to sleep and avoiding waking kswapd
      
      __GFP_WAIT has been used to identify atomic context in callers that hold
      spinlocks or are in interrupts.  They are expected to be high priority and
      have access to one of two watermarks lower than "min", which can be referred
      to as the "atomic reserve".  __GFP_HIGH users get access to the first
      lower watermark and can be called the "high priority reserve".
      
      Over time, callers had a requirement to not block when fallback options
      were available.  Some have abused __GFP_WAIT, leading to a situation where
      an optimistic allocation with a fallback option can access atomic
      reserves.
      
      This patch uses __GFP_ATOMIC to identify callers that are truly atomic,
      cannot sleep and have no alternative.  High priority users continue to use
      __GFP_HIGH.  __GFP_DIRECT_RECLAIM identifies callers that can sleep and
      are willing to enter direct reclaim.  __GFP_KSWAPD_RECLAIM identifies
      callers that want to wake kswapd for background reclaim.  __GFP_WAIT is
      redefined as a caller that is willing to enter direct reclaim and wake
      kswapd for background reclaim.
      
      This patch then converts a number of sites:
      
      o __GFP_ATOMIC is used by callers that are high priority and have memory
        pools for those requests. GFP_ATOMIC uses this flag.
      
      o Callers that have a limited mempool to guarantee forward progress clear
        __GFP_DIRECT_RECLAIM but keep __GFP_KSWAPD_RECLAIM. bio allocations fall
        into this category where kswapd will still be woken but atomic reserves
        are not used as there is a one-entry mempool to guarantee progress.
      
      o Callers that are checking if they are non-blocking should use the
        helper gfpflags_allow_blocking() where possible. This is because
        checking for __GFP_WAIT as was done historically now can trigger false
        positives. Some exceptions like dm-crypt.c exist where the code intent
        is clearer if __GFP_DIRECT_RECLAIM is used instead of the helper due to
        flag manipulations.
      
      o Callers that built their own GFP flags instead of starting with GFP_KERNEL
        and friends now also need to specify __GFP_KSWAPD_RECLAIM.
      
      The first key hazard to watch out for is callers that removed __GFP_WAIT
      and were depending on access to atomic reserves for inconspicuous reasons.
      In some cases it may be appropriate for them to use __GFP_HIGH.
      
      The second key hazard is callers that assembled their own combination of
      GFP flags instead of starting with something like GFP_KERNEL.  They may
      now wish to specify __GFP_KSWAPD_RECLAIM.  It's almost certainly harmless
      if it's missed in most cases as other activity will wake kswapd.
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Vitaly Wool <vitalywool@gmail.com>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d0164adc
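      A minimal sketch of how a caller might branch on the reworked flags
      described above; the wrapper my_alloc() is hypothetical, while
      gfpflags_allow_blocking() and the GFP names are the ones this patch
      introduces or redefines:

        #include <linux/gfp.h>
        #include <linux/slab.h>

        /* Hypothetical wrapper: sleep if the caller allows it, otherwise
         * stay non-blocking without dipping into the atomic reserves. */
        static void *my_alloc(size_t size, gfp_t gfp)
        {
        	if (gfpflags_allow_blocking(gfp))	/* tests __GFP_DIRECT_RECLAIM */
        		return kmalloc(size, gfp);	/* may sleep and reclaim */

        	/* GFP_NOWAIT keeps __GFP_KSWAPD_RECLAIM: kswapd is woken for
        	 * background reclaim, but the atomic reserves stay untouched
        	 * (neither __GFP_ATOMIC nor __GFP_HIGH is set). */
        	return kmalloc(size, GFP_NOWAIT);
        }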
  3. 23 Sep 2015 (1 commit)
  4. 19 Jun 2015 (2 commits)
    • lockdep: Implement lock pinning · a24fc60d
      Committed by Peter Zijlstra
      Add a lockdep annotation that WARNs if you 'accidentally' unlock a
      lock.
      
      This is especially helpful for code with callbacks, where the upper
      layer assumes a lock remains taken but a lower layer thinks it may be
      able to drop and reacquire the lock.
      
      By unwittingly breaking up the lock, races can be introduced.
      
      Lock pinning is a lockdep annotation that helps with this: when you
      lockdep_pin_lock() a held lock, any unlock without a matching
      lockdep_unpin_lock() will produce a WARN. Think of this as a relative
      of lockdep_assert_held(), except you don't only assert it's held now,
      but ensure it stays held until you release your assertion.
      
      RFC: a possible alternative API would be something like:
      
        int cookie = lockdep_pin_lock(&foo);
        ...
        lockdep_unpin_lock(&foo, cookie);
      
      Where we pick a random number for the pin_count; this makes it
      impossible to sneak a lock break in without also passing the right
      cookie along.
      
      I've not done this because it ends up generating code for !LOCKDEP,
      esp. if you need to pass the cookie around for some reason.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: ktkhai@parallels.com
      Cc: rostedt@goodmis.org
      Cc: juri.lelli@gmail.com
      Cc: pang.xunlei@linaro.org
      Cc: oleg@redhat.com
      Cc: wanpeng.li@linux.intel.com
      Cc: umgwanakikbuti@gmail.com
      Link: http://lkml.kernel.org/r/20150611124743.906731065@infradead.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      a24fc60d
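      A minimal sketch of the annotation described above, modelled on how the
      scheduler uses it; the callback and lock are hypothetical, and a
      lockdep-enabled build is assumed so the lock embeds a dep_map:

        /* Hypothetical upper layer that must keep 'lock' held across a
         * callback into lower-layer code. */
        static void run_callback(raw_spinlock_t *lock, void (*cb)(void *),
        			 void *arg)
        {
        	raw_spin_lock(lock);
        	lockdep_pin_lock(&lock->dep_map);	/* unlocking now WARNs */

        	cb(arg);	/* a callback that sneakily drops 'lock'
        			 * triggers the WARN instead of a silent race */

        	lockdep_unpin_lock(&lock->dep_map);	/* release the assertion */
        	raw_spin_unlock(lock);
        }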
    • lockdep: Simplify lock_release() · e0f56fd7
      Committed by Peter Zijlstra
      lock_release() takes a 'nested' argument that is mostly pointless
      these days; remove the implementation but leave the argument as a
      rudiment for now.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: ktkhai@parallels.com
      Cc: rostedt@goodmis.org
      Cc: juri.lelli@gmail.com
      Cc: pang.xunlei@linaro.org
      Cc: oleg@redhat.com
      Cc: wanpeng.li@linux.intel.com
      Cc: umgwanakikbuti@gmail.com
      Link: http://lkml.kernel.org/r/20150611124743.840411606@infradead.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      e0f56fd7
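      For reference, a sketch of the then-current interface; after this change
      the 'nested' value is still accepted but no longer acted upon:

        /* include/linux/lockdep.h (contemporary signature) */
        extern void lock_release(struct lockdep_map *lock, int nested,
        			 unsigned long ip);

        /* Callers keep passing the rudimentary argument, e.g.: */
        lock_release(&lock->dep_map, 0, _RET_IP_);	/* 'nested' is ignored */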
  5. 07 Jun 2015 (1 commit)
  6. 03 Jun 2015 (1 commit)
    • lockdep: Do not break user-visible string · 92ae1837
      Committed by Borislav Petkov
      Remove the line-break in the user-visible string and add the
      missing space in this error message:
      
        WARNING: lockdep init error! lock-(console_sem).lock was acquiredbefore lockdep_init
      
      Also:
      
        - don't yell, it's just a debug warning
      
        - denote references to function calls with '()'
      
        - standardize the lock name quoting
      
        - and finish the sentence.
      
      The result:
      
        WARNING: lockdep init error: lock '(console_sem).lock' was acquired before lockdep_init().
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20150602133827.GD19887@pd.tnic
      [ Added a few more stylistic tweaks to the error message. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      92ae1837
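      The underlying rule, in a short hedged sketch (the surrounding code and
      the 'name' variable are hypothetical): never split a user-visible format
      string across source lines, or the literals concatenate without the
      intended space and the message becomes ungreppable:

        /* Bad: adjacent string literals join as "...acquiredbefore..." */
        printk(KERN_WARNING "lockdep init error! lock-%s was acquired"
               "before lockdep_init\n", name);

        /* Good: keep the string whole, even past the usual line limit. */
        printk(KERN_WARNING
               "lockdep init error: lock '%s' was acquired before lockdep_init().\n",
               name);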
  7. 17 Apr 2015 (1 commit)
    • lockdep: Make print_lock() robust against concurrent release · d7bc3197
      Committed by Peter Zijlstra
      During sysrq's show-held-locks command it is possible that
      hlock_class() returns NULL for a given lock. The result is then (after
      the warning):
      
      	|BUG: unable to handle kernel NULL pointer dereference at 0000001c
      	|IP: [<c1088145>] get_usage_chars+0x5/0x100
      	|Call Trace:
      	| [<c1088263>] print_lock_name+0x23/0x60
      	| [<c1576b57>] print_lock+0x5d/0x7e
      	| [<c1088314>] lockdep_print_held_locks+0x74/0xe0
      	| [<c1088652>] debug_show_all_locks+0x132/0x1b0
      	| [<c1315c48>] sysrq_handle_showlocks+0x8/0x10
      
      This *might* happen because the thread on the other CPU drops the lock
      after we have looked at ->lockdep_depth, so ->held_locks no longer
      points to a lock that is held.
      
      The fix here is to simply ignore it and continue.
      Reported-by: Andreas Messerschmid <andreas@linutronix.de>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d7bc3197
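      A hedged sketch of the defensive pattern, simplified from the real
      print_lock(); the NULL return is exactly the concurrent-release race
      described above:

        static void print_lock(struct held_lock *hlock)
        {
        	struct lock_class *class = hlock_class(hlock);

        	/* The other CPU may have released the lock since we
        	 * sampled ->lockdep_depth: ignore the stale entry. */
        	if (!class) {
        		printk("<RELEASED>\n");
        		return;
        	}

        	print_lock_name(class);
        	printk(", at: %pS\n", (void *)hlock->acquire_ip);
        }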
  8. 23 Mar 2015 (1 commit)
    • lockdep: Fix the module unload key range freeing logic · 35a9393c
      Committed by Peter Zijlstra
      Module unload calls lockdep_free_key_range(), which removes entries
      from the data structures. Most of the lockdep code, OTOH, assumes the
      data structures are append-only; specifically, see the comments in
      add_lock_to_list() and look_up_lock_class().
      
      Clearly this has only worked by accident; make it work properly. The
      actual scenario to make it go boom would involve the memory freed by
      the module unload being re-allocated and re-used for a lock inside an
      rcu-sched grace period. This is a very unlikely scenario, but still
      better to plug the hole.
      
      Use RCU list iteration in all places and amend the comments.
      
      Change lockdep_free_key_range() to issue a sync_sched() between
      removal from the lists and returning -- which results in the memory
      being freed. Further ensure the callers are placed correctly and
      comment the requirements.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andrey Tsyvarev <tsyvarev@ispras.ru>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      35a9393c
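      A hedged sketch of the pattern with generic names (not the actual
      lockdep structures): readers walk the list with RCU primitives, and the
      unload path waits out a grace period between unlinking and freeing:

        struct entry {
        	struct list_head node;
        	/* payload */
        };

        static LIST_HEAD(entries);

        /* Lookup side: safe against concurrent removal. */
        static struct entry *find_entry(int key)
        {
        	struct entry *e;

        	list_for_each_entry_rcu(e, &entries, node)
        		if (matches(e, key))	/* hypothetical predicate */
        			return e;
        	return NULL;
        }

        /* Unload side: unlink, wait for the grace period, then free. */
        static void remove_entry(struct entry *e)
        {
        	list_del_rcu(&e->node);
        	synchronize_sched();	/* the sync_sched() the patch adds */
        	kfree(e);
        }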
  9. 03 Oct 2014 (1 commit)
    • locking/lockdep: Revert qrwlock recursive stuff · 8acd91e8
      Committed by Peter Zijlstra
      Commit f0bab73c ("locking/lockdep: Restrict the use of recursive
      read_lock() with qrwlock") changed lockdep to try and conform to the
      qrwlock semantics which differ from the traditional rwlock semantics.
      
      In particular qrwlock is fair outside of interrupt context, but in
      interrupt context readers will ignore all fairness.
      
      The problem with modeling this is that the read and write sides have
      different lock state (interrupt) semantics, but we only have a single
      representation of these. Therefore lockdep will get confused, thinking
      the lock can cause interrupt lock inversions.
      
      So revert it for now; the old rwlock semantics were already imperfectly
      modeled and the qrwlock extra won't fit either.
      
      If we want to properly fix this, I think we need to resurrect the work
      Gautham did a few years ago that split the read and write state of
      locks:
      
         http://lwn.net/Articles/332801/
      
      FWIW the locking selftest that would've failed (and was reported by
      Borislav earlier) is something like:
      
        RL(X1);	/* IRQ-ON */
        LOCK(A);
        UNLOCK(A);
        RU(X1);
      
        IRQ_ENTER();
        RL(X1);	/* IN-IRQ */
        RU(X1);
        IRQ_EXIT();
      
      At which point it would report that because A is an IRQ-unsafe lock we
      can suffer the following inversion:
      
      	CPU0		CPU1
      
      	lock(A)
      			lock(X1)
      			lock(A)
      	<IRQ>
      	 lock(X1)
      
      And this is 'wrong' because X1 can recurse (assuming the above locks are
      in fact read-locks) but lockdep doesn't know about this.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Waiman Long <Waiman.Long@hp.com>
      Cc: ego@linux.vnet.ibm.com
      Cc: bp@alien8.de
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Link: http://lkml.kernel.org/r/20140930132600.GA7444@worktop.programming.kicks-ass.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      8acd91e8
  10. 13 Aug 2014 (1 commit)
    • locking/lockdep: Restrict the use of recursive read_lock() with qrwlock · f0bab73c
      Committed by Waiman Long
      Unlike the original unfair rwlock implementation, queued rwlock
      will grant lock according to the chronological sequence of the lock
      requests except when the lock requester is in the interrupt context.
      Consequently, recursive read_lock calls will now hang the process if
      there is a write_lock call somewhere in between the read_lock calls.
      
      This patch updates the lockdep implementation to look for recursive
      read_lock calls. A new read state (3) is used to mark those read_lock
      calls that cannot be made recursively except in interrupt
      context. The new read state exhausts the 2 bits available in the
      held_lock:read bit field. The addition of any new read state in the
      future may require a redesign of how all those bits are squeezed
      together in the held_lock structure.
      Signed-off-by: Waiman Long <Waiman.Long@hp.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Maarten Lankhorst <maarten.lankhorst@canonical.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Scott J Norton <scott.norton@hp.com>
      Cc: Fengguang Wu <fengguang.wu@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Link: http://lkml.kernel.org/r/1407345722-61615-2-git-send-email-Waiman.Long@hp.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      f0bab73c
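      A hedged sketch of the hang described above (the lock is hypothetical);
      with a fair queued rwlock the second read_lock queues behind the waiting
      writer instead of recursing:

        static DEFINE_RWLOCK(l);

        /* CPU0 */
        read_lock(&l);		/* granted */

        /* CPU1, meanwhile */
        write_lock(&l);		/* queues behind CPU0's reader */

        /* CPU0 again */
        read_lock(&l);		/* queues behind the waiting writer:
        			 * deadlock, unless running in interrupt
        			 * context where readers ignore fairness */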
  11. 17 Jul 2014 (1 commit)
  12. 06 May 2014 (1 commit)
  13. 14 Feb 2014 (2 commits)
  14. 10 Feb 2014 (3 commits)
    • lockdep: Change mark_held_locks() to check hlock->check instead of lockdep_no_validate · 34d0ed5e
      Committed by Oleg Nesterov
      The __lockdep_no_validate check in mark_held_locks() adds the subtle
      and (afaics) unnecessary difference between no-validate and check==0.
      And this looks even more inconsistent because __lock_acquire() skips
      mark_irqflags()->mark_lock() if !check.
      
      Change mark_held_locks() to check hlock->check instead.
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Cc: Dave Jones <davej@redhat.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Alan Stern <stern@rowland.harvard.edu>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20140120182013.GA26505@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      34d0ed5e
    • lockdep: Don't create the wrong dependency on hlock->check == 0 · 1b5ff816
      Committed by Oleg Nesterov
      Test-case:
      
      	DEFINE_MUTEX(m1);
      	DEFINE_MUTEX(m2);
      	DEFINE_MUTEX(mx);
      
      	void lockdep_should_complain(void)
      	{
      		lockdep_set_novalidate_class(&mx);
      
      		// m1 -> mx -> m2
      		mutex_lock(&m1);
      		mutex_lock(&mx);
      		mutex_lock(&m2);
      		mutex_unlock(&m2);
      		mutex_unlock(&mx);
      		mutex_unlock(&m1);
      
      		// m2 -> m1 ; should trigger the warning
      		mutex_lock(&m2);
      		mutex_lock(&m1);
      		mutex_unlock(&m1);
      		mutex_unlock(&m2);
      	}
      
      This doesn't trigger any warning; lockdep can't detect the trivial
      deadlock.
      
      This is because lock(&mx) correctly avoids the m1 -> mx dependency; it
      skips validate_chain() due to mx->check == 0. But lock(&m2) wrongly
      adds mx -> m2, and thus m1 -> m2 is not created.
      
      rcu_lock_acquire()->lock_acquire(check => 0) is fine due to read == 2,
      so currently only __lockdep_no_validate__ can trigger this problem.
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Cc: Dave Jones <davej@redhat.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Alan Stern <stern@rowland.harvard.edu>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20140120182010.GA26498@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      1b5ff816
    • lockdep: Make held_lock->check and "int check" argument bool · fb9edbe9
      Committed by Oleg Nesterov
      The "int check" argument of lock_acquire() and held_lock->check are
      misleading. This is actually a boolean: 2 means "true", everything
      else is "false".
      
      And there is no need to pass 1 or 0 to lock_acquire() depending on
      CONFIG_PROVE_LOCKING; __lock_acquire() checks prove_locking at the
      start and clears "check" if !CONFIG_PROVE_LOCKING.
      
      Note: probably we can simply kill this member/arg. The only explicit
      user of check => 0 is rcu_lock_acquire(), perhaps we can change it to
      use lock_acquire(trylock =>, read => 2). __lockdep_no_validate means
      check => 0 implicitly, but we can change validate_chain() to check
      hlock->instance->key instead. Not to mention it would be nice to get
      rid of lockdep_set_novalidate_class().
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Cc: Dave Jones <davej@redhat.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Alan Stern <stern@rowland.harvard.edu>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20140120182006.GA26495@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      fb9edbe9
  15. 27 Nov 2013 (1 commit)
  16. 13 Nov 2013 (1 commit)
  17. 06 Nov 2013 (1 commit)
  18. 25 Sep 2013 (1 commit)
  19. 12 May 2013 (1 commit)
  20. 26 Apr 2013 (2 commits)
  21. 08 Apr 2013 (1 commit)
  22. 01 Apr 2013 (1 commit)
    • Revert "lockdep: check that no locks held at freeze time" · dbf520a9
      Committed by Paul Walmsley
      This reverts commit 6aa97070.
      
      Commit 6aa97070 ("lockdep: check that no locks held at freeze time")
      causes problems with NFS root filesystems.  The failures were noticed on
      OMAP2 and 3 boards during kernel init:
      
        [ BUG: swapper/0/1 still has locks held! ]
        3.9.0-rc3-00344-ga937536b #1 Not tainted
        -------------------------------------
        1 lock held by swapper/0/1:
         #0:  (&type->s_umount_key#13/1){+.+.+.}, at: [<c011e84c>] sget+0x248/0x574
      
        stack backtrace:
          rpc_wait_bit_killable
          __wait_on_bit
          out_of_line_wait_on_bit
          __rpc_execute
          rpc_run_task
          rpc_call_sync
          nfs_proc_get_root
          nfs_get_root
          nfs_fs_mount_common
          nfs_try_mount
          nfs_fs_mount
          mount_fs
          vfs_kern_mount
          do_mount
          sys_mount
          do_mount_root
          mount_root
          prepare_namespace
          kernel_init_freeable
          kernel_init
      
      Although the rootfs mounts, the system is unstable.  Here's a transcript
      from a PM test:
      
        http://www.pwsan.com/omap/testlogs/test_v3.9-rc3/20130317194234/pm/37xxevm/37xxevm_log.txt
      
      Here's what the test log should look like:
      
        http://www.pwsan.com/omap/testlogs/test_v3.8/20130218214403/pm/37xxevm/37xxevm_log.txt
      
      Mailing list discussion is here:
      
        http://lkml.org/lkml/2013/3/4/221
      
      Deal with this for v3.9 by reverting the problem commit, until folks can
      figure out the right long-term course of action.
      Signed-off-by: Paul Walmsley <paul@pwsan.com>
      Cc: Mandeep Singh Baines <msb@chromium.org>
      Cc: Jeff Layton <jlayton@redhat.com>
      Cc: Shawn Guo <shawn.guo@linaro.org>
      Cc: <maciej.rutecki@gmail.com>
      Cc: Fengguang Wu <fengguang.wu@intel.com>
      Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Ben Chan <benchan@chromium.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Rafael J. Wysocki <rjw@sisk.pl>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      dbf520a9
  23. 24 Mar 2013 (1 commit)
    • Export __lockdep_no_validate__ · ea6749c7
      Committed by Kent Overstreet
      Hack, but bcache needs a way around lockdep for locking during garbage
      collection - we need to keep multiple btree nodes locked for coalescing
      and rw_lock_nested() isn't really sufficient or appropriate here.
      Signed-off-by: Kent Overstreet <koverstreet@google.com>
      CC: Peter Zijlstra <peterz@infradead.org>
      CC: Ingo Molnar <mingo@redhat.com>
      ea6749c7
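      A hedged sketch of what the export enables (the btree structure is
      hypothetical): a module can opt a lock class out of lockdep validation
      when it orders the locks by hand, since lockdep_set_novalidate_class()
      expands to a use of __lockdep_no_validate__:

        struct btree_node {			/* hypothetical */
        	struct rw_semaphore lock;
        	/* ... */
        };

        static void init_node(struct btree_node *b)
        {
        	init_rwsem(&b->lock);
        	/* GC holds several node locks at once in an order lockdep
        	 * cannot express; skip validation for this class. */
        	lockdep_set_novalidate_class(&b->lock);
        }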
  24. 28 Feb 2013 (1 commit)
  25. 19 Feb 2013 (2 commits)
  26. 13 Sep 2012 (1 commit)
  27. 22 Feb 2012 (1 commit)
  28. 12 Dec 2011 (1 commit)
  29. 07 Dec 2011 (1 commit)
  30. 06 Dec 2011 (3 commits)
  31. 14 Nov 2011 (1 commit)
  32. 08 Nov 2011 (1 commit)
    • lockdep: Show subclass in pretty print of lockdep output · e5e78d08
      Committed by Steven Rostedt
      The pretty print of the lockdep debug splat uses just the lock name
      to show how the locking scenario happens. But when it comes to
      nested locks, the output becomes confusing, which defeats the point
      of the pretty printing of the lock scenario.
      
      Without displaying the subclass info, we get the following output:
      
        Possible unsafe locking scenario:
      
              CPU0                    CPU1
              ----                    ----
         lock(slock-AF_INET);
                                      lock(slock-AF_INET);
                                      lock(slock-AF_INET);
         lock(slock-AF_INET);
      
        *** DEADLOCK ***
      
      The above looks more like an A->A locking bug than an A->B, B->A one.
      By adding the subclass to the output, we can see what really happened:
      
       other info that might help us debug this:
      
        Possible unsafe locking scenario:
      
              CPU0                    CPU1
              ----                    ----
         lock(slock-AF_INET);
                                      lock(slock-AF_INET/1);
                                      lock(slock-AF_INET);
         lock(slock-AF_INET/1);
      
        *** DEADLOCK ***
      
      This bug was discovered while tracking down a real bug caught by lockdep.
      
      Link: http://lkml.kernel.org/r/20111025202049.GB25043@hostway.ca
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Reported-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Simon Kirby <sim@hostway.ca>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      e5e78d08
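      A hedged sketch of the kind of nesting that produces the "/1" names
      above (the two locks are hypothetical; spin_lock_nested() is the
      standard annotation for taking a second instance of the same class):

        static void lock_pair(spinlock_t *parent, spinlock_t *child)
        {
        	spin_lock(parent);
        	/* Subclass 1: a second, nested instance of the same lock
        	 * class; lockdep reports it as "<name>/1" in splats. */
        	spin_lock_nested(child, SINGLE_DEPTH_NESTING);

        	spin_unlock(child);
        	spin_unlock(parent);
        }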