1. 07 Dec 2011, 1 commit
  2. 29 Sep 2011, 1 commit
    • rcu: Restore checks for blocking in RCU read-side critical sections · b3fbab05
      Paul E. McKenney committed
      Long ago, using TREE_RCU with PREEMPT would result in "scheduling
      while atomic" diagnostics if you blocked in an RCU read-side critical
      section.  However, PREEMPT now implies TREE_PREEMPT_RCU, which defeats
      this diagnostic.  This commit therefore adds a replacement diagnostic
      based on PROVE_RCU.
      
      Because rcu_lockdep_assert() and lockdep_rcu_dereference() are now being
      used for things that have nothing to do with rcu_dereference(), rename
      lockdep_rcu_dereference() to lockdep_rcu_suspicious() and add a third
      argument that is a string indicating what is suspicious.  This third
      argument is passed in from a new third argument to rcu_lockdep_assert().
      Update all calls to rcu_lockdep_assert() to add an informative third
      argument.
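      A minimal sketch of the resulting three-argument form (the __warned
      one-shot latch and the exact prototype of debug_lockdep_rcu_enabled()
      are assumptions here, shown only to illustrate the shape of the API):

        #define rcu_lockdep_assert(c, s)                                    \
                do {                                                        \
                        static bool __warned;                               \
                        if (debug_lockdep_rcu_enabled() && !__warned && !(c)) { \
                                __warned = true;                            \
                                lockdep_rcu_suspicious(__FILE__, __LINE__, s); \
                        }                                                   \
                } while (0)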
      
      Also, add a pair of rcu_lockdep_assert() calls from within
      rcu_note_context_switch(), one complaining if a context switch occurs
      in an RCU-bh read-side critical section and another complaining if a
      context switch occurs in an RCU-sched read-side critical section.
      These are present only if the PROVE_RCU kernel configuration option is enabled.
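      Illustratively (the exact placement inside rcu_note_context_switch()
      and the message strings are assumptions based on the description
      above), the two checks could look like:

        rcu_lockdep_assert(!lock_is_held(&rcu_bh_lock_map),
                           "Illegal context switch in RCU-bh read-side critical section");
        rcu_lockdep_assert(!lock_is_held(&rcu_sched_lock_map),
                           "Illegal context switch in RCU-sched read-side critical section");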
      
      Finally, fix some checkpatch whitespace complaints in lockdep.c.
      
      Again, you must enable PROVE_RCU to see these new diagnostics.  But you
      are enabling PROVE_RCU to check out new RCU uses in any case, aren't you?
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      b3fbab05
  3. 18 Sep 2011, 1 commit
  4. 09 Aug 2011, 1 commit
  5. 04 Aug 2011, 3 commits
  6. 22 Jul 2011, 1 commit
  7. 22 Jun 2011, 1 commit
  8. 07 Jun 2011, 1 commit
  9. 22 Apr 2011, 7 commits
  10. 31 Mar 2011, 1 commit
  11. 20 Jan 2011, 1 commit
    • lockdep: Move early boot local IRQ enable/disable status to init/main.c · 2ce802f6
      Tejun Heo committed
      During early boot, local IRQs are disabled until the IRQ subsystem is
      properly initialized.  During this time nobody should enable local
      IRQs, and some operations which usually are not allowed with IRQs
      disabled (e.g. operations which might sleep or require communication
      with other processors) are allowed.

      lockdep tracked this with the early_boot_irqs_off/on() callbacks.
      As other subsystems need this information too, move it to
      init/main.c and make it generally available.  While at it, invert
      the boolean to early_boot_irqs_disabled instead of _enabled so that
      it can be initialized to %false, with %true indicating the
      exceptional condition.
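      A sketch of the result (the start_kernel() placement is illustrative):

        /* init/main.c: generally available instead of lockdep-private */
        bool early_boot_irqs_disabled __read_mostly;

        asmlinkage void __init start_kernel(void)
        {
                /* ... */
                early_boot_irqs_disabled = true;   /* the exceptional state */
                /* ... early setup runs with local IRQs off ... */
                early_boot_irqs_disabled = false;
                local_irq_enable();
                /* ... */
        }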
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Pekka Enberg <penberg@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      LKML-Reference: <20110120110635.GB6036@htj.dyndns.org>
      Signed-off-by: NIngo Molnar <mingo@elte.hu>
      2ce802f6
  12. 19 Oct 2010, 2 commits
    • lockdep: Check the depth of subclass · 4ba053c0
      Hitoshi Mitake committed
      The current look_up_lock_class() doesn't check the "subclass"
      parameter.  This rarely causes problems because the main caller of
      this function, register_lock_class(), checks it.

      But register_lock_class() is not the only function which calls
      look_up_lock_class(): lock_set_class() and its callees also call it,
      and lock_set_class() doesn't check this parameter.

      This causes problems when the value of subclass is larger than
      MAX_LOCKDEP_SUBCLASSES, because the address (used as the key of the
      class) calculated with too large a subclass may point at a key in a
      different lock_class_key.

      Of course this problem depends on the memory layout and
      occurs only with really low probability.
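      A sketch of the added depth check (the exact messages and error path
      are illustrative):

        static inline struct lock_class *
        look_up_lock_class(struct lockdep_map *lock, unsigned int subclass)
        {
                if (unlikely(subclass >= MAX_LOCKDEP_SUBCLASSES)) {
                        debug_locks_off();
                        printk(KERN_ERR "BUG: looking up invalid subclass: %u\n",
                               subclass);
                        printk(KERN_ERR "turning off the locking correctness validator.\n");
                        dump_stack();
                        return NULL;
                }
                /* ... the existing hash-table lookup follows ... */
        }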
      Signed-off-by: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
      Cc: Dmitry Torokhov <dtor@mail.ru>
      Cc: Vojtech Pavlik <vojtech@ucw.cz>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1286958626-986-1-git-send-email-mitake@dcl.info.waseda.ac.jp>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      4ba053c0
    • lockdep: Add improved subclass caching · 62016250
      Hitoshi Mitake committed
      The current lockdep_map caches only one class, the one with
      subclass == 0, and looks up the hash table of classes when
      subclass != 0.

      That seems fine because the subclass != 0 case is rare.  But the
      locks of struct rq are acquired with subclass == 1 when task
      migration is executed, and task migration is a high-frequency event,
      so I modified lockdep to cache subclasses.

      I measured the score of perf bench sched messaging.
      This patch has a small but consistent effect (on the order of
      milliseconds to tens of milliseconds) when lots of tasks are running.
      The results are shown at the tail of this description.

      NR_LOCKDEP_CACHING_CLASSES specifies how many classes can be
      cached in each instance of lockdep_map.
      I discussed this approach with Peter Zijlstra at LinuxCon Japan,
      and he pointed out that caching every subclass (8) is clearly a
      waste of memory, so the number of cached classes should be
      configurable.
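      A sketch of the caching side of this (the field layout is trimmed, and
      the default value of NR_LOCKDEP_CACHING_CLASSES shown here is an
      assumption):

        #define NR_LOCKDEP_CACHING_CLASSES      2

        struct lockdep_map {
                struct lock_class_key   *key;
                /* was: struct lock_class *class_cache;  (subclass 0 only) */
                struct lock_class       *class_cache[NR_LOCKDEP_CACHING_CLASSES];
                const char              *name;
                /* other fields omitted */
        };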
      
      === Score comparison of benchmarks ===
      # "min" means best score, and "max" means worst score
      
      for i in `seq 1 10`; do ./perf bench -f simple sched messaging; done
      
      before: min: 0.565000, max: 0.583000, avg: 0.572500
      after:  min: 0.559000, max: 0.568000, avg: 0.563300
      
      # with more processes
      for i in `seq 1 10`; do ./perf bench -f simple sched messaging -g 40; done
      
      before: min: 2.274000, max: 2.298000, avg: 2.286300
      after:  min: 2.242000, max: 2.270000, avg: 2.259700
      Signed-off-by: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1286269311-28336-2-git-send-email-mitake@dcl.info.waseda.ac.jp>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      62016250
  13. 17 Aug 2010, 1 commit
  14. 09 Jun 2010, 1 commit
  15. 22 May 2010, 1 commit
  16. 11 May 2010, 1 commit
  17. 09 May 2010, 2 commits
    • tracing: Drop the nested field from lock_release event · 93135439
      Frederic Weisbecker committed
      Drop the nested field, as we don't use it.  Every nested state can
      already be computed with a state machine during post-processing.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      93135439
    • tracing: Drop lock_acquired waittime field · 883a2a31
      Frederic Weisbecker committed
      Drop the waittime field from the lock_acquired event; we can
      calculate it by subtracting the matching lock_acquire event's
      timestamp from the lock_acquired one.

      It is not needed and wastes space in the traces.
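      In post-processing the subtraction is trivial; a sketch (names are
      illustrative, not the actual perf code):

        /* waittime = lock_acquired timestamp - matching lock_acquire timestamp */
        static unsigned long long lock_wait_ns(unsigned long long acquire_ts,
                                               unsigned long long acquired_ts)
        {
                return acquired_ts - acquire_ts;
        }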
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      883a2a31
  18. 07 May 2010, 1 commit
    • lockdep: Reduce stack_trace usage · 4726f2a6
      Yong Zhang committed
      When check_prevs_add() is called and all validations pass,
      add_lock_to_list() adds the new lock to the dependency tree and
      allocates a stack_trace for each list_entry.

      But at this point we are always on the same stack, so the stack_trace
      for each list_entry has the same value.  This is redundant and eats
      up lots of memory, which can lead to a warning when
      MAX_STACK_TRACE_ENTRIES is low.

      Use one copy of stack_trace instead (see the sketch below).

      V2: As suggested by Peter Zijlstra, move save_trace() from
          check_prevs_add() to check_prev_add().
          Add tracking for trylock dependencies, which are also redundant.
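      A sketch of the single-copy idea (the add_lock_to_list() argument list
      shown here is an assumption; the point is that one saved trace is
      shared by every dependency list entry):

        /* in check_prev_add(): capture the current stack once ... */
        static struct stack_trace trace;

        if (!save_trace(&trace))
                return 0;

        /* ... and pass the same copy to each list_entry added for this
         * dependency, instead of saving an identical trace per entry. */
        ret = add_lock_to_list(hlock_class(prev), hlock_class(next),
                               &hlock_class(prev)->locks_after,
                               next->acquire_ip, distance, &trace);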
      Signed-off-by: Yong Zhang <yong.zhang0@windriver.com>
      Cc: David S. Miller <davem@davemloft.net>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <20100504065711.GC10784@windriver.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      4726f2a6
  19. 04 May 2010, 1 commit
  20. 01 May 2010, 1 commit
  21. 06 Apr 2010, 1 commit
    • lockstat: Make lockstat counting per cpu · bd6d29c2
      Frederic Weisbecker committed
      Locking statistics are implemented using global atomic
      variables.  This is usually fine, unless some path writes them very
      often.

      That is the case for the function and function graph tracers,
      which disable irqs for each entry saved (except when the function
      tracer is in preempt-disabled-only mode), and the calls to
      local_irq_save/restore() increment the hardirqs_on_events and
      hardirqs_off_events stats (or the corresponding stats for the
      redundant versions).

      Incrementing these global variables for each function ends up in too
      much cache bouncing when lockstat is enabled.

      To solve this, implement the debug_atomic_*() operations using
      per-cpu variables (a sketch follows the version notes below).

       -v2: Use per_cpu() instead of get_cpu_var() to fetch the desired
            cpu's vars in debug_atomic_read()

       -v3: Store the stats in a structure.  No need for local_t, as we
            are NMI/irq safe.

       -v4: Fix tons of build errors.  I thought I had tested it but I
            probably forgot to select the relevant config.
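      A sketch of the per-cpu scheme (the structure fields and helper names
      follow the description above but are illustrative):

        struct lockdep_stats {
                unsigned long   hardirqs_on_events;
                unsigned long   hardirqs_off_events;
                unsigned long   redundant_hardirqs_on;
                /* ... one counter per former global atomic ... */
        };

        DEFINE_PER_CPU(struct lockdep_stats, lockdep_stats);

        /* writers only touch their own CPU's cache line */
        #define debug_atomic_inc(field)   this_cpu_inc(lockdep_stats.field)

        /* readers (slow path, e.g. /proc output) sum over all CPUs */
        #define debug_atomic_read(field) ({                             \
                unsigned long __total = 0;                              \
                int __cpu;                                              \
                for_each_possible_cpu(__cpu)                            \
                        __total += per_cpu(lockdep_stats, __cpu).field; \
                __total;                                                \
        })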
      Suggested-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      LKML-Reference: <1270505417-8144-1-git-send-regression-fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      bd6d29c2
  22. 30 Mar 2010, 1 commit
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h · 5a0e3ad6
      Tejun Heo committed
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include
      those headers directly instead of assuming their availability.  As
      this conversion needs to touch a large number of source files, the
      following script was used as the basis of the conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following.
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there, i.e. gfp.h if only gfp is
        used, slab.h if slab is used.  (A short example follows this list.)
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to place the new include so that its order
        conforms to its surroundings.  It is put in the include block which
        contains core kernel includes, in the same order that the rest are
        ordered - alphabetical, Christmas tree, reverse Christmas tree - or
        at the end if there doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints
        out an error message indicating which .h file needs to be added to
        the file.
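      For illustration, the kind of edit the script makes:

        /* a .c file that calls kmalloc()/kfree() but relied on percpu.h: */
        #include <linux/slab.h>         /* added by the script */

        /* a .c file that only uses gfp flags such as GFP_KERNEL: */
        #include <linux/gfp.h>          /* added by the script */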
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition, and for others adding it to an
         implementation .h or embedding .c file was more appropriate.  This
         step added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them, as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored, as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on the arch to make
         things build (like ipr on powerpc/64, which failed due to missing
         writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given the fact that I had only a couple of failures from tests on step
      6, I'm fairly confident about the coverage of this conversion patch.
      If there is a breakage, it's likely to be something in one of the arch
      headers, which should be easily discoverable on most builds of the
      specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      5a0e3ad6
  23. 29 Mar 2010, 1 commit
  24. 10 Mar 2010, 1 commit
    • lockdep: Move lock events under lockdep recursion protection · db2c4c77
      Frederic Weisbecker committed
      There are RCU read-side critical sections in the path where we submit
      a trace event, and their rcu_read_(un)lock() calls trigger lock
      events, which create recursive events.
      
      One pair in do_perf_sw_event:
      
      __lock_acquire
            |
            |--96.11%-- lock_acquire
            |          |
            |          |--27.21%-- do_perf_sw_event
            |          |          perf_tp_event
            |          |          |
            |          |          |--49.62%-- ftrace_profile_lock_release
            |          |          |          lock_release
            |          |          |          |
            |          |          |          |--33.85%-- _raw_spin_unlock
      
      Another pair in perf_output_begin/end:
      
      __lock_acquire
            |--23.40%-- perf_output_begin
            |          |          __perf_event_overflow
            |          |          perf_swevent_overflow
            |          |          perf_swevent_add
            |          |          perf_swevent_ctx_event
            |          |          do_perf_sw_event
            |          |          perf_tp_event
            |          |          |
            |          |          |--55.37%-- ftrace_profile_lock_acquire
            |          |          |          lock_acquire
            |          |          |          |
            |          |          |          |--37.31%-- _raw_spin_lock
      
      The problem is not so much the trace recursion itself, as we already
      have recursion protection (though it's always wasteful to recurse).
      But the trace events are outside the lockdep recursion protection, so
      each lockdep event triggers a lock trace, which in turn triggers two
      other lockdep events.  Here the recursive lock trace event won't be
      emitted because of the trace recursion, so the recursion stops there,
      but lockdep will still analyse these new events:
      
      To sum up, for each lockdep events we have:
      
      	lock_*()
      	     |
                   trace lock_acquire
                        |
                        ----- rcu_read_lock()
                        |          |
                        |          lock_acquire()
                        |          |
                        |          trace_lock_acquire() (stopped)
                        |          |
      		  |          lockdep analyze
                        |
                        ----- rcu_read_unlock()
                                   |
                                   lock_release
                                   |
                                   trace_lock_release() (stopped)
                                   |
                                   lockdep analyze
      
      And the above repeats twice, as we have two RCU read-side sections
      when we submit an event.

      This is fixed in this patch by moving the lock trace events under the
      lockdep recursion protection.
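      A sketch of the fix in lock_acquire() (the function is abridged and
      the __lock_acquire() argument list is an assumption; the point is that
      the tracepoint now fires with lockdep_recursion already set):

        void lock_acquire(struct lockdep_map *lock, unsigned int subclass,
                          int trylock, int read, int check,
                          struct lockdep_map *nest_lock, unsigned long ip)
        {
                unsigned long flags;

                if (unlikely(current->lockdep_recursion))
                        return;

                raw_local_irq_save(flags);
                check_flags(flags);

                current->lockdep_recursion = 1;
                /* moved inside the recursion guard: */
                trace_lock_acquire(lock, subclass, trylock, read, check, nest_lock, ip);
                __lock_acquire(lock, subclass, trylock, read, check,
                               irqs_disabled_flags(flags), nest_lock, ip, 0);
                current->lockdep_recursion = 0;
                raw_local_irq_restore(flags);
        }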
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: Jens Axboe <jens.axboe@oracle.com>
      db2c4c77
  25. 04 Mar 2010, 1 commit
  26. 26 Feb 2010, 1 commit
  27. 25 Feb 2010, 1 commit
  28. 27 Jan 2010, 1 commit
    • lockdep: Fix check_usage_backwards() error message · 48d50674
      Oleg Nesterov committed
      Lockdep has found the real bug, but the output doesn't look right to me:
      
      > =========================================================
      > [ INFO: possible irq lock inversion dependency detected ]
      > 2.6.33-rc5 #77
      > ---------------------------------------------------------
      > emacs/1609 just changed the state of lock:
      >  (&(&tty->ctrl_lock)->rlock){+.....}, at: [<ffffffff8127c648>] tty_fasync+0xe8/0x190
      > but this lock took another, HARDIRQ-unsafe lock in the past:
      >  (&(&sighand->siglock)->rlock){-.....}
      
      "HARDIRQ-unsafe" and "this lock took another" looks wrong, afaics.
      
      >   ... key      at: [<ffffffff81c054a4>] __key.46539+0x0/0x8
      >   ... acquired at:
      >    [<ffffffff81089af6>] __lock_acquire+0x1056/0x15a0
      >    [<ffffffff8108a0df>] lock_acquire+0x9f/0x120
      >    [<ffffffff81423012>] _raw_spin_lock_irqsave+0x52/0x90
      >    [<ffffffff8127c1be>] __proc_set_tty+0x3e/0x150
      >    [<ffffffff8127e01d>] tty_open+0x51d/0x5e0
      
      The stack trace shows that this lock (ctrl_lock) was taken under
      ->siglock (which is hopefully irq-safe).

      This is a clear typo in check_usage_backwards(), where we tell the
      fancy printing routine that we are going forwards.
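      A sketch of the one-character nature of the fix (the
      print_irq_inversion_bug() prototype shown here is an assumption):

        /* in check_usage_backwards(): report the inversion as backwards */
        return print_irq_inversion_bug(curr, &root, target_entry,
                                       this, 0 /* !forwards; 1 was passed by mistake */,
                                       irqclass);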
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <20100126181641.GA10460@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      48d50674
  29. 15 Dec 2009, 2 commits