1. 28 Feb 2013, 1 commit
  2. 19 Feb 2013, 2 commits
  3. 13 Sep 2012, 1 commit
  4. 22 Feb 2012, 1 commit
  5. 12 Dec 2011, 1 commit
  6. 07 Dec 2011, 1 commit
  7. 06 Dec 2011, 3 commits
  8. 14 Nov 2011, 1 commit
  9. 08 Nov 2011, 1 commit
    • lockdep: Show subclass in pretty print of lockdep output · e5e78d08
      Committed by Steven Rostedt
      The pretty print of the lockdep debug splat uses just the lock name
      to show how the locking scenario happens. But when it comes to
      nested locks, the output becomes confusing, which defeats the point
      of pretty printing the lock scenario.
      
      Without displaying the subclass info, we get the following output:
      
        Possible unsafe locking scenario:
      
              CPU0                    CPU1
              ----                    ----
         lock(slock-AF_INET);
                                      lock(slock-AF_INET);
                                      lock(slock-AF_INET);
         lock(slock-AF_INET);
      
        *** DEADLOCK ***
      
      The above looks more like an A->A locking bug than an A->B, B->A one.
      By adding the subclass to the output, we can see what really happened:
      
       other info that might help us debug this:
      
        Possible unsafe locking scenario:
      
              CPU0                    CPU1
              ----                    ----
         lock(slock-AF_INET);
                                      lock(slock-AF_INET/1);
                                      lock(slock-AF_INET);
         lock(slock-AF_INET/1);
      
        *** DEADLOCK ***
      
      This bug was discovered while tracking down a real bug caught by lockdep.
      
      Link: http://lkml.kernel.org/r/20111025202049.GB25043@hostway.ca
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Reported-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Simon Kirby <sim@hostway.ca>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      e5e78d08
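      
      For illustration, a minimal sketch (not part of the patch;
      double_lock_sketch() is a hypothetical caller) of how such a "/1"
      subclass arises: spin_lock_nested() files the inner acquisition
      under a separate lockdep subclass, which the pretty print above now
      shows as the "/1" suffix.
      
        #include <linux/spinlock.h>
        #include <linux/lockdep.h>
        
        /* Take two locks of the same class; annotate the inner one with
         * an explicit subclass so lockdep reports it as "<name>/1". */
        static void double_lock_sketch(spinlock_t *parent, spinlock_t *child)
        {
                spin_lock(parent);                             /* slock-AF_INET   */
                spin_lock_nested(child, SINGLE_DEPTH_NESTING); /* slock-AF_INET/1 */
                /* ... */
                spin_unlock(child);
                spin_unlock(parent);
        }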
  10. 29 Sep 2011, 1 commit
    • rcu: Restore checks for blocking in RCU read-side critical sections · b3fbab05
      Committed by Paul E. McKenney
      Long ago, using TREE_RCU with PREEMPT would result in "scheduling
      while atomic" diagnostics if you blocked in an RCU read-side critical
      section.  However, PREEMPT now implies TREE_PREEMPT_RCU, which defeats
      this diagnostic.  This commit therefore adds a replacement diagnostic
      based on PROVE_RCU.
      
      Because rcu_lockdep_assert() and lockdep_rcu_dereference() are now being
      used for things that have nothing to do with rcu_dereference(), rename
      lockdep_rcu_dereference() to lockdep_rcu_suspicious() and add a third
      argument that is a string indicating what is suspicious.  This third
      argument is passed in from a new third argument to rcu_lockdep_assert().
      Update all calls to rcu_lockdep_assert() to add an informative third
      argument.
      
      Also, add a pair of rcu_lockdep_assert() calls from within
      rcu_note_context_switch(), one complaining if a context switch occurs
      in an RCU-bh read-side critical section and another complaining if a
      context switch occurs in an RCU-sched read-side critical section.
      These are present only if the PROVE_RCU kernel parameter is enabled.
      
      Finally, fix some checkpatch whitespace complaints in lockdep.c.
      
      Again, you must enable PROVE_RCU to see these new diagnostics.  But you
      are enabling PROVE_RCU to check out new RCU uses in any case, aren't you?
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      b3fbab05
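      
      For illustration, a minimal sketch of the updated assertion
      (rcu_usage_sketch() is a hypothetical caller; the condition and
      message are examples only):
      
        #include <linux/rcupdate.h>
        
        static void rcu_usage_sketch(void)
        {
                /* With PROVE_RCU enabled, a false condition now reports
                 * through lockdep_rcu_suspicious() with this message. */
                rcu_lockdep_assert(rcu_read_lock_held(),
                                   "caller must hold rcu_read_lock()");
        }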
  11. 18 Sep 2011, 1 commit
  12. 09 Aug 2011, 1 commit
  13. 04 Aug 2011, 3 commits
  14. 22 Jul 2011, 1 commit
  15. 22 Jun 2011, 1 commit
  16. 07 Jun 2011, 1 commit
  17. 22 Apr 2011, 7 commits
  18. 31 Mar 2011, 1 commit
  19. 20 Jan 2011, 1 commit
    • lockdep: Move early boot local IRQ enable/disable status to init/main.c · 2ce802f6
      Committed by Tejun Heo
      During early boot, local IRQs are disabled until the IRQ subsystem
      is properly initialized. During this time, no one should enable
      local IRQs, and some operations which usually are not allowed with
      IRQs disabled, e.g. operations which might sleep or require
      communication with other processors, are allowed.
      
      lockdep tracked this with early_boot_irqs_off/on() callbacks.
      As other subsystems need this information too, move it to
      init/main.c and make it generally available. While at it, flip
      the boolean to early_boot_irqs_disabled instead of enabled, so
      that it can be initialized to %false, with %true indicating the
      exceptional condition.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Pekka Enberg <penberg@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      LKML-Reference: <20110120110635.GB6036@htj.dyndns.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      2ce802f6
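      
      For illustration, a condensed sketch of the flag's life cycle
      (simplified from start_kernel(); boot_irq_flag_sketch() is a
      hypothetical wrapper, not the literal diff):
      
        #include <linux/kernel.h>    /* extern bool early_boot_irqs_disabled; */
        #include <linux/irqflags.h>
        
        static void __init boot_irq_flag_sketch(void)
        {
                local_irq_disable();
                early_boot_irqs_disabled = true;  /* %true = exceptional state */
                /* ... early init that must not enable local IRQs ... */
                early_boot_irqs_disabled = false;
                local_irq_enable();
        }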
  20. 19 Oct 2010, 2 commits
    • lockdep: Check the depth of subclass · 4ba053c0
      Committed by Hitoshi Mitake
      The current look_up_lock_class() doesn't check its parameter
      "subclass". This rarely causes problems because the main caller of
      this function, register_lock_class(), checks it.
      
      But register_lock_class() is not the only function which calls
      look_up_lock_class(): lock_set_class() and its callees also call it,
      and lock_set_class() doesn't check this parameter.
      
      This causes problems when the value of subclass is larger than
      MAX_LOCKDEP_SUBCLASSES, because the address (used as the class key)
      calculated with too large a subclass may point at another key in a
      different lock_class_key.
      
      Of course this problem depends on the memory layout and occurs with
      really low probability. A sketch of the added check follows this
      entry.
      Signed-off-by: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
      Cc: Dmitry Torokhov <dtor@mail.ru>
      Cc: Vojtech Pavlik <vojtech@ucw.cz>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1286958626-986-1-git-send-email-mitake@dcl.info.waseda.ac.jp>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      4ba053c0
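      
      For illustration, a sketch of the kind of bounds check added
      (simplified; look_up_lock_class_sketch() stands in for the real
      look_up_lock_class()):
      
        static struct lock_class *
        look_up_lock_class_sketch(struct lockdep_map *lock, unsigned int subclass)
        {
                if (unlikely(subclass >= MAX_LOCKDEP_SUBCLASSES)) {
                        debug_locks_off();
                        printk(KERN_ERR "BUG: looking up invalid subclass: %u\n",
                               subclass);
                        return NULL;
                }
                /* ... existing hashed lookup of the class key ... */
                return NULL;   /* placeholder for the normal lookup path */
        }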
    • lockdep: Add improved subclass caching · 62016250
      Committed by Hitoshi Mitake
      The current lockdep_map caches only one class, for subclass == 0,
      and looks up the hash table of classes when subclass != 0.
      
      This seems harmless because the case of subclass != 0 is rare, but
      the locks of struct rq are acquired with subclass == 1 when task
      migration is executed, and task migration is a highly frequent
      event. So I modified lockdep to cache subclasses.
      
      I measured the score of perf bench sched messaging. This patch has
      a slight but consistent effect (on the order of milliseconds to
      tens of milliseconds) when lots of tasks are running. The results
      are shown at the tail of this description.
      
      NR_LOCKDEP_CACHING_CLASSES specifies how many classes can be cached
      in each instance of lockdep_map (a sketch of the resulting layout
      follows this entry). I discussed this approach with Peter Zijlstra
      at LinuxCon Japan, and he pointed out that caching every subclass
      (8 of them) is clearly a waste of memory, so the number of cached
      classes should be configurable.
      
      === Score comparison of benchmarks ===
      # "min" means best score, and "max" means worst score
      
      for i in `seq 1 10`; do ./perf bench -f simple sched messaging; done
      
      before: min: 0.565000, max: 0.583000, avg: 0.572500
      after:  min: 0.559000, max: 0.568000, avg: 0.563300
      
      # with more processes
      for i in `seq 1 10`; do ./perf bench -f simple sched messaging -g 40; done
      
      before: min: 2.274000, max: 2.298000, avg: 2.286300
      after:  min: 2.242000, max: 2.270000, avg: 2.259700
      Signed-off-by: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1286269311-28336-2-git-send-email-mitake@dcl.info.waseda.ac.jp>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      62016250
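      
      For illustration, the shape of the change to struct lockdep_map
      (a sketch with unrelated fields omitted; the merged patch settles
      on NR_LOCKDEP_CACHING_CLASSES == 2):
      
        #define NR_LOCKDEP_CACHING_CLASSES 2
        
        struct lockdep_map {
                struct lock_class_key  *key;
                /* was: struct lock_class *class_cache;  (subclass 0 only) */
                struct lock_class      *class_cache[NR_LOCKDEP_CACHING_CLASSES];
                const char             *name;
                /* ... */
        };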
  21. 17 Aug 2010, 1 commit
  22. 09 Jun 2010, 1 commit
  23. 22 May 2010, 1 commit
  24. 11 May 2010, 1 commit
  25. 09 May 2010, 2 commits
    • tracing: Drop the nested field from lock_release event · 93135439
      Committed by Frederic Weisbecker
      Drop the nested field, as we don't use it: every nested state can
      already be recomputed by a state machine in post-processing (see
      the sketch after this entry).
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      93135439
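      
      For illustration, a sketch of that post-processing idea
      (hypothetical tool code, not kernel code): a per-lock depth counter
      recovers the nesting state the dropped field used to carry.
      
        struct lock_state { int depth; };
        
        static void on_lock_acquire(struct lock_state *s)
        {
                s->depth++;          /* depth > 1 means a nested acquisition */
        }
        
        static void on_lock_release(struct lock_state *s)
        {
                s->depth--;
        }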
    • tracing: Drop lock_acquired waittime field · 883a2a31
      Committed by Frederic Weisbecker
      Drop the waittime field from the lock_acquired event; we can
      calculate it by subtracting the matching lock_acquire event
      timestamp from the lock_acquired one (see the sketch after this
      entry).
      
      It is not needed and takes up useless space in the traces.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      883a2a31
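      
      For illustration, the post-processing arithmetic (hypothetical tool
      code; the timestamps come from the two matching trace events):
      
        /* waittime no longer needs to be stored in the trace. */
        static unsigned long long
        lock_waittime(unsigned long long acquire_ts, unsigned long long acquired_ts)
        {
                return acquired_ts - acquire_ts;
        }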
  26. 07 May 2010, 1 commit
    • lockdep: Reduce stack_trace usage · 4726f2a6
      Committed by Yong Zhang
      When calling check_prevs_add(), if all validations pass,
      add_lock_to_list() adds the new lock to the dependency tree and
      allocates a stack_trace for each list_entry.
      
      But at this point we are always on the same stack, so the
      stack_trace for each list_entry has the same value. This is
      redundant and eats up lots of memory, which can lead to a warning
      when MAX_STACK_TRACE_ENTRIES is low.
      
      Use one copy of stack_trace instead (see the sketch after this
      entry).
      
      V2: As suggested by Peter Zijlstra, move save_trace() from
          check_prevs_add() to check_prev_add().
          Add tracking for trylock dependencies, which are also redundant.
      Signed-off-by: Yong Zhang <yong.zhang0@windriver.com>
      Cc: David S. Miller <davem@davemloft.net>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <20100504065711.GC10784@windriver.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      4726f2a6
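      
      For illustration, a sketch of the idea (simplified, not the literal
      diff; check_prev_add_sketch() and the saved flag are stand-ins):
      save the trace lazily, once, and let every list_entry added on the
      same stack share it.
      
        static int check_prev_add_sketch(struct stack_trace *trace, int *saved)
        {
                if (!*saved) {
                        if (!save_trace(trace))   /* one save_trace() per stack */
                                return 0;
                        *saved = 1;
                }
                /* ... every add_lock_to_list() entry references *trace ... */
                return 1;
        }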
  27. 04 May 2010, 1 commit