1. 22 Apr 2011, 5 commits
    • lockdep: Replace "Bad BFS generated tree" message with something less cryptic · 6be8c393
      Steven Rostedt authored
      The message of "Bad BFS generated tree" is a bit confusing.
      Replace it with a more sane error message.
      
      Thanks to Peter Zijlstra for helping me come up with a better
      message.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Link: http://lkml.kernel.org/r/20110421014300.135521252@goodmis.org
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      6be8c393
    • lockdep: Print a nicer description for irq inversion bugs · dad3d743
      Steven Rostedt authored
      Irq inversion and irq dependency bugs are only subtly
      different. The difference lies in where the interrupt occurred.
      
      For irq dependency:
      
      	irq_disable
      	lock(A)
      	lock(B)
      	unlock(B)
      	unlock(A)
      	irq_enable
      
      	lock(B)
      	unlock(B)
      
       	<interrupt>
      	  lock(A)
      
      The interrupt comes in after it has been established that lock A
      can be held when taking an irq-unsafe lock. Lockdep detects the
      problem when lock A is taken in interrupt context.
      
      With an irq inversion, the irq happens before that dependency is
      established, and lockdep detects the problem when lock B is taken:
      
       	<interrupt>
      	  lock(A)
      
      	irq_disable
      	lock(A)
      	lock(B)
      	unlock(B)
      	unlock(A)
      	irq_enable
      
      	lock(B)
      	unlock(B)
      
      Since the underlying problem in the locking logic is actually the
      same for both of these issues, they should report the same scenario.
      This patch implements that and prints this:
      
      other info that might help us debug this:
      
      Chain exists of:
        &rq->lock --> lockA --> lockC
      
       Possible interrupt unsafe locking scenario:
      
             CPU0                    CPU1
             ----                    ----
        lock(lockC);
                                     local_irq_disable();
                                     lock(&rq->lock);
                                     lock(lockA);
        <Interrupt>
          lock(&rq->lock);
      
       *** DEADLOCK ***
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Link: http://lkml.kernel.org/r/20110421014259.910720381@goodmis.org
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      dad3d743
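      As a concrete illustration, a minimal, hypothetical driver-style
      pattern (not from the patch) that would produce the inversion
      report - lock A is first seen in hardirq context, and the A -> B
      dependency only appears later:

        /* hypothetical example, not part of the patch */
        static DEFINE_SPINLOCK(lock_a);
        static DEFINE_SPINLOCK(lock_b);

        static irqreturn_t my_handler(int irq, void *dev)
        {
                spin_lock(&lock_a);        /* A becomes HARDIRQ-safe */
                spin_unlock(&lock_a);
                return IRQ_HANDLED;
        }

        static void process_path(void)
        {
                spin_lock_irq(&lock_a);
                spin_lock(&lock_b);        /* establishes A -> B */
                spin_unlock(&lock_b);
                spin_unlock_irq(&lock_a);

                spin_lock(&lock_b);        /* B taken with irqs on, so B is
                                              HARDIRQ-unsafe: lockdep reports
                                              the inversion here */
                spin_unlock(&lock_b);
        }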
    • lockdep: Print a nicer description for simple deadlocks · 48702ecf
      Steven Rostedt authored
      Lockdep output can be pretty cryptic; nicer output can save
      a lot of head scratching. When a simple deadlock scenario is
      detected by lockdep (lock A -> lock A) we now get the
      following new output:
      
      other info that might help us debug this:
       Possible unsafe locking scenario:
      
             CPU0
             ----
        lock(&(lock)->rlock);
        lock(&(lock)->rlock);
      
       *** DEADLOCK ***
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Link: http://lkml.kernel.org/r/20110421014259.643930104@goodmis.org
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      48702ecf
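      For reference, the simplest (hypothetical) code that triggers this
      report is taking the same spinlock twice in one context:

        #include <linux/spinlock.h>

        static DEFINE_SPINLOCK(lock);

        static void self_deadlock(void)    /* hypothetical example */
        {
                spin_lock(&lock);
                spin_lock(&lock);          /* same class, same context:
                                              lockdep prints the scenario
                                              above; without lockdep the
                                              CPU would spin here forever */
                spin_unlock(&lock);
                spin_unlock(&lock);
        }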
    • lockdep: Print a nicer description for normal deadlocks · f4185812
      Steven Rostedt authored
      The lockdep output can be pretty cryptic; nicer output can
      save a lot of head scratching. When a normal deadlock
      scenario is detected by lockdep (lock A -> lock B, and there
      exists a place where lock B -> lock A) we now get the following
      new output:
      
      other info that might help us debug this:
      
       Possible unsafe locking scenario:
      
             CPU0                    CPU1
             ----                    ----
        lock(lockB);
                                     lock(lockA);
                                     lock(lockB);
        lock(lockA);
      
       *** DEADLOCK ***
      
      In cases where there's a deeper chain, it shows the partial
      chain that can cause the issue:
      
      Chain exists of:
        lockC --> lockA --> lockB
      
       Possible unsafe locking scenario:
      
             CPU0                    CPU1
             ----                    ----
        lock(lockB);
                                     lock(lockA);
                                     lock(lockB);
        lock(lockC);
      
       *** DEADLOCK ***
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Link: http://lkml.kernel.org/r/20110421014259.380621789@goodmis.org
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f4185812
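      A minimal (hypothetical) AB-BA pattern matching the first report;
      each path is fine on its own, and lockdep flags the cycle even if
      the two paths never actually race:

        static DEFINE_SPINLOCK(lockA);
        static DEFINE_SPINLOCK(lockB);

        static void path_one(void)         /* e.g. runs on CPU0 */
        {
                spin_lock(&lockB);
                spin_lock(&lockA);         /* records B -> A */
                spin_unlock(&lockA);
                spin_unlock(&lockB);
        }

        static void path_two(void)         /* e.g. runs on CPU1 */
        {
                spin_lock(&lockA);
                spin_lock(&lockB);         /* records A -> B: cycle found */
                spin_unlock(&lockB);
                spin_unlock(&lockA);
        }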
    • lockdep: Print a nicer description for irq lock inversions · 3003eba3
      Steven Rostedt authored
      Locking order inversion due to interrupts is a subtle problem.
      
      When an irq lock inversion is discovered by lockdep, it currently
      reports something like:
      
      [ INFO: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected ]
      
      ... and then prints out the locks that are involved, as back traces.
      
      Judging by lkml feedback, developers were routinely confused by
      what a HARDIRQ-safe to HARDIRQ-unsafe issue is all about, and
      sometimes even blew it off as a bug in lockdep.
      
      The report is especially non-obvious when lockdep prints this
      message about a lock that is never itself taken in interrupt context.
      
      After explaining the problems that lockdep is reporting, I
      decided to add a description of the problem in visual form. Now
      the following is shown:
      
       ---
      other info that might help us debug this:
      
       Possible interrupt unsafe locking scenario:
      
             CPU0                    CPU1
             ----                    ----
        lock(lockA);
                                     local_irq_disable();
                                     lock(&rq->lock);
                                     lock(lockA);
        <Interrupt>
          lock(&rq->lock);
      
       *** DEADLOCK ***
      
       ---
      
      The above is the case when the unsafe lock is taken while
      holding a lock taken in irq context. But when a lock is taken
      that, further down the chain, also grabs an unsafe lock, the
      call chain is shown:
      
       ---
      other info that might help us debug this:
      
      Chain exists of:
        &rq->lock --> lockA --> lockC
      
       Possible interrupt unsafe locking scenario:
      
             CPU0                    CPU1
             ----                    ----
        lock(lockC);
                                     local_irq_disable();
                                     lock(&rq->lock);
                                     lock(lockA);
        <Interrupt>
          lock(&rq->lock);
      
       *** DEADLOCK ***
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Link: http://lkml.kernel.org/r/20110421014259.132728798@goodmis.org
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      3003eba3
  2. 31 Mar 2011, 1 commit
  3. 20 Jan 2011, 1 commit
    • lockdep: Move early boot local IRQ enable/disable status to init/main.c · 2ce802f6
      Tejun Heo authored
      During early boot, local IRQs are disabled until the IRQ
      subsystem is properly initialized.  During this time, no one
      should enable local IRQs, and some operations which usually are
      not allowed with IRQs disabled, e.g. operations which might sleep
      or require communication with other processors, are allowed.
      
      lockdep tracked this with early_boot_irqs_off/on() callbacks.
      As other subsystems need this information too, move it to
      init/main.c and make it generally available.  While at it,
      invert the boolean to early_boot_irqs_disabled instead of
      enabled, so that it can be initialized with %false and %true
      indicates the exceptional condition.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Pekka Enberg <penberg@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      LKML-Reference: <20110120110635.GB6036@htj.dyndns.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      2ce802f6
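      A sketch of the intended consumption (the flag name is from the
      patch; the consumer below is hypothetical):

        #include <linux/kernel.h>       /* early_boot_irqs_disabled */

        static void my_early_hook(void) /* hypothetical consumer */
        {
                /*
                 * Nobody may enable local IRQs this early in boot;
                 * skip irq-state bookkeeping until the IRQ subsystem
                 * is up.
                 */
                if (unlikely(early_boot_irqs_disabled))
                        return;
        }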
  4. 19 Oct 2010, 2 commits
    • lockdep: Check the depth of subclass · 4ba053c0
      Hitoshi Mitake authored
      The current look_up_lock_class() doesn't check the parameter "subclass".
      This rarely causes problems, because the main caller of this function,
      register_lock_class(), checks it.
      
      But register_lock_class() is not the only function which calls
      look_up_lock_class(). lock_set_class() and its callees also call it.
      And lock_set_class() doesn't check this parameter.
      
      This causes problems when the value of subclass is larger than
      MAX_LOCKDEP_SUBCLASSES, because the address (used as the key of
      the class) calculated with too large a subclass may point to
      another key in a different lock_class_key.
      
      Of course this problem depends on the memory layout and
      occurs with really low probability.
      Signed-off-by: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
      Cc: Dmitry Torokhov <dtor@mail.ru>
      Cc: Vojtech Pavlik <vojtech@ucw.cz>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1286958626-986-1-git-send-email-mitake@dcl.info.waseda.ac.jp>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      4ba053c0
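      The fix is a bounds check in look_up_lock_class() itself; a
      simplified sketch along the lines of the patch:

        /* Simplified sketch: reject out-of-range subclasses before the
         * hash key is computed, since key arithmetic with a too large
         * subclass can land inside another lock_class_key. */
        if (unlikely(subclass >= MAX_LOCKDEP_SUBCLASSES)) {
                debug_locks_off();
                printk(KERN_ERR "BUG: looking up invalid subclass: %u\n",
                       subclass);
                return NULL;
        }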
    • lockdep: Add improved subclass caching · 62016250
      Hitoshi Mitake authored
      Current lockdep_map only caches one class with subclass == 0,
      and looks up hash table of classes when subclass != 0.
      
      This seems fine, because the case of subclass != 0 is rare. But
      the locks of struct rq are acquired with subclass == 1 when task
      migration is executed. Task migration is a high-frequency event,
      so I modified lockdep to cache subclasses.
      
      I measured the score of perf bench sched messaging.
      This patch has a small but consistent effect (on the order of
      milliseconds to tens of milliseconds) when lots of tasks are
      running. The results are shown at the tail of this description.
      
      NR_LOCKDEP_CACHING_CLASSES specifies how many classes can be
      cached in the instances of lockdep_map.
      I discussed this approach with Peter Zijlstra at LinuxCon Japan,
      and he pointed out that caching every subclass (8) is clearly a
      waste of memory. So the number of cached classes should be
      configurable.
      
      === Score comparison of benchmarks ===
      # "min" means best score, and "max" means worst score
      
      for i in `seq 1 10`; do ./perf bench -f simple sched messaging; done
      
      before: min: 0.565000, max: 0.583000, avg: 0.572500
      after:  min: 0.559000, max: 0.568000, avg: 0.563300
      
      # with more processes
      for i in `seq 1 10`; do ./perf bench -f simple sched messaging -g 40; done
      
      before: min: 2.274000, max: 2.298000, avg: 2.286300
      after:  min: 2.242000, max: 2.270000, avg: 2.259700
      Signed-off-by: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1286269311-28336-2-git-send-email-mitake@dcl.info.waseda.ac.jp>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      62016250
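      The cache itself is a small per-map array indexed by subclass; a
      simplified sketch of the idea (names follow the patch):

        #define NR_LOCKDEP_CACHING_CLASSES      2

        struct lockdep_map {
                struct lock_class_key   *key;
                struct lock_class       *class_cache[NR_LOCKDEP_CACHING_CLASSES];
                const char              *name;
        };

        /* Fast path in the lookup: hit the per-map cache before
         * falling back to the global hash table. */
        if (subclass < NR_LOCKDEP_CACHING_CLASSES &&
            lock->class_cache[subclass])
                return lock->class_cache[subclass];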
  5. 17 Aug 2010, 1 commit
  6. 09 Jun 2010, 1 commit
  7. 22 May 2010, 1 commit
  8. 11 May 2010, 1 commit
  9. 09 May 2010, 2 commits
    • tracing: Drop the nested field from lock_release event · 93135439
      Frederic Weisbecker authored
      Drop the nested field, as we don't use it. Every nested state can
      already be computed by a state machine during post-processing.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      93135439
    • tracing: Drop lock_acquired waittime field · 883a2a31
      Frederic Weisbecker authored
      Drop the waittime field from the lock_acquired event; we can
      calculate it by subtracting the matching lock_acquire event
      timestamp from the lock_acquired one.
      
      It is not needed and wastes space in the traces.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      883a2a31
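      A sketch of the post-processing arithmetic (hypothetical helper;
      events are paired per lock instance and task):

        /* waittime = t(lock_acquired) - t(matching lock_acquire) */
        static u64 waittime(u64 acquire_ts, u64 acquired_ts)
        {
                return acquired_ts - acquire_ts;
        }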
  10. 07 May 2010, 1 commit
    • lockdep: Reduce stack_trace usage · 4726f2a6
      Yong Zhang authored
      When check_prevs_add() is called and all validations pass,
      add_lock_to_list() adds the new lock to the dependency tree and
      allocates a stack_trace for each list_entry.
      
      But at this time, we are always on the same stack, so the
      stack_trace for each list_entry has the same value. This is
      redundant and eats up lots of memory, which could trigger the
      warning for a low MAX_STACK_TRACE_ENTRIES.
      
      Use one copy of stack_trace instead.
      
      V2: As suggested by Peter Zijlstra, move save_trace() from
          check_prevs_add() to check_prev_add().
          Add tracking for trylock dependencies, which are also redundant.
      Signed-off-by: Yong Zhang <yong.zhang0@windriver.com>
      Cc: David S. Miller <davem@davemloft.net>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <20100504065711.GC10784@windriver.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      4726f2a6
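      A simplified sketch of the resulting shape (signatures trimmed;
      the real add_lock_to_list() takes more arguments):

        /* Save the current stack once per check_prev_add() call and
         * share it between the forward and backward dependency
         * entries, instead of one identical copy per list_entry. */
        struct stack_trace trace;

        if (!save_trace(&trace))
                return 0;

        if (!add_lock_to_list(hlock_class(prev), hlock_class(next), &trace))
                return 0;
        if (!add_lock_to_list(hlock_class(next), hlock_class(prev), &trace))
                return 0;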
  11. 04 May 2010, 1 commit
  12. 01 May 2010, 1 commit
  13. 06 Apr 2010, 1 commit
    • lockstat: Make lockstat counting per cpu · bd6d29c2
      Frederic Weisbecker authored
      Locking statistics are implemented using global atomic
      variables. This is usually fine, unless some path writes to them
      very often.
      
      This is the case for the function and function graph tracers,
      which disable irqs for each entry saved (except if the function
      tracer is in preempt-disabled-only mode).
      Calls to local_irq_save/restore() increment the
      hardirqs_on_events and hardirqs_off_events stats (or similar
      stats for the redundant versions).
      
      Incrementing these global vars for each function ends up in too
      much cache-line bouncing when lockstats are enabled.
      
      To solve this, implement the debug_atomic_*() operations using
      per cpu vars.
      
       -v2: Use per_cpu() instead of get_cpu_var() to fetch the desired
            cpu vars on debug_atomic_read()
      
       -v3: Store the stats in a structure. No need for local_t as we
            are NMI/irq safe.
      
       -v4: Fix tons of build errors. I thought I had tested it but I
            probably forgot to select the relevant config.
      Suggested-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      LKML-Reference: <1270505417-8144-1-git-send-regression-fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      bd6d29c2
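      A simplified sketch of the per-cpu scheme (names follow the patch;
      details trimmed):

        struct lockdep_stats {
                unsigned long   hardirqs_on_events;
                unsigned long   hardirqs_off_events;
                /* ... one field per former global atomic ... */
        };

        DEFINE_PER_CPU(struct lockdep_stats, lockdep_stats);

        /* Writers touch only their own CPU's copy: no bouncing. */
        #define debug_atomic_inc(field) \
                do { __get_cpu_var(lockdep_stats).field++; } while (0)

        /* Readers (rare, e.g. /proc output) sum over all CPUs. */
        #define debug_atomic_read(field) ({                             \
                unsigned long __total = 0;                              \
                int __cpu;                                              \
                for_each_possible_cpu(__cpu)                            \
                        __total += per_cpu(lockdep_stats, __cpu).field; \
                __total;                                                \
        })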
  14. 30 Mar 2010, 1 commit
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking... · 5a0e3ad6
      Tejun Heo authored
      include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include
      those headers directly instead of assuming availability.  As this
      conversion needs to touch a large number of source files, the
      following script was used as the basis of conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following:
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there, i.e. if only gfp is used,
        gfp.h; if slab is used, slab.h.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to place the new include so that its order
        conforms to its surroundings.  It's put in the include block which
        contains core kernel includes, in the same order that the rest are
        ordered - alphabetical, Christmas tree, rev-Xmas-tree - or at the
        end if there doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints
        out an error message indicating which .h file needs to be added to
        the file.
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition, and for others adding it to an
         implementation .h or embedding .c file was more appropriate.
         This step added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them, as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored, as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on archs to make things
         build (like ipr on powerpc/64, which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given the fact that I had only a couple of failures from tests on step
      6, I'm fairly confident about the coverage of this conversion patch.
      If there is a breakage, it's likely to be something in one of the arch
      headers, which should be easily discoverable on most builds of the
      specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      5a0e3ad6
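      The net effect on an individual file is simply an explicit include
      where an implicit one was relied upon; an illustrative example:

        /* Before: kmalloc() compiled only because some header dragged
         * in percpu.h, which dragged in slab.h. After the sweep, the
         * dependency is explicit: */
        #include <linux/slab.h>

        static void *buf_alloc(size_t n)
        {
                return kmalloc(n, GFP_KERNEL);
        }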
  15. 29 Mar 2010, 1 commit
  16. 10 Mar 2010, 1 commit
    • lockdep: Move lock events under lockdep recursion protection · db2c4c77
      Frederic Weisbecker authored
      There are rcu locked read-side areas in the path where we submit
      a trace event, and these rcu_read_(un)lock() calls trigger lock
      events, which create recursive events.
      
      One pair in do_perf_sw_event:
      
      __lock_acquire
            |
            |--96.11%-- lock_acquire
            |          |
            |          |--27.21%-- do_perf_sw_event
            |          |          perf_tp_event
            |          |          |
            |          |          |--49.62%-- ftrace_profile_lock_release
            |          |          |          lock_release
            |          |          |          |
            |          |          |          |--33.85%-- _raw_spin_unlock
      
      Another pair in perf_output_begin/end:
      
      __lock_acquire
            |--23.40%-- perf_output_begin
            |          |          __perf_event_overflow
            |          |          perf_swevent_overflow
            |          |          perf_swevent_add
            |          |          perf_swevent_ctx_event
            |          |          do_perf_sw_event
            |          |          perf_tp_event
            |          |          |
            |          |          |--55.37%-- ftrace_profile_lock_acquire
            |          |          |          lock_acquire
            |          |          |          |
            |          |          |          |--37.31%-- _raw_spin_lock
      
      The problem is not so much the trace recursion itself, as we already
      have recursion protection (though it's always wasteful to recurse).
      But the trace events are outside the lockdep recursion protection, so
      each lockdep event triggers a lock trace, which in turn triggers two
      other lockdep events. The recursive lock trace event won't
      be taken because of the trace recursion, so the recursion stops there,
      but lockdep will still analyze these new events:
      
      To sum up, for each lockdep event we have:
      
      	lock_*()
      	     |
                   trace lock_acquire
                        |
                        ----- rcu_read_lock()
                        |          |
                        |          lock_acquire()
                        |          |
                        |          trace_lock_acquire() (stopped)
                        |          |
      		  |          lockdep analyze
                        |
                        ----- rcu_read_unlock()
                                   |
                                   lock_release
                                   |
                                   trace_lock_release() (stopped)
                                   |
                                   lockdep analyze
      
      And the above repeats twice, as we have two rcu read-side
      sections when we submit an event.
      
      This is fixed in this patch by moving the lock trace event under
      the lockdep recursion protection.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: Jens Axboe <jens.axboe@oracle.com>
      db2c4c77
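      In essence, the patch raises the lockdep recursion flag before the
      trace event fires, so lock events triggered from inside the tracing
      path are ignored. A simplified sketch of lock_acquire() afterwards:

        void lock_acquire(struct lockdep_map *lock, unsigned int subclass,
                          int trylock, int read, int check,
                          struct lockdep_map *nest_lock, unsigned long ip)
        {
                unsigned long flags;

                if (unlikely(current->lockdep_recursion))
                        return;

                raw_local_irq_save(flags);
                check_flags(flags);

                current->lockdep_recursion = 1;
                /* now inside the recursion protection: */
                trace_lock_acquire(lock, subclass, trylock, read,
                                   check, nest_lock, ip);
                __lock_acquire(lock, subclass, trylock, read, check,
                               irqs_disabled_flags(flags), nest_lock, ip, 0);
                current->lockdep_recursion = 0;
                raw_local_irq_restore(flags);
        }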
  17. 04 Mar 2010, 1 commit
  18. 26 Feb 2010, 1 commit
  19. 25 Feb 2010, 1 commit
  20. 27 Jan 2010, 1 commit
    • lockdep: Fix check_usage_backwards() error message · 48d50674
      Oleg Nesterov authored
      Lockdep has found the real bug, but the output doesn't look right to me:
      
      > =========================================================
      > [ INFO: possible irq lock inversion dependency detected ]
      > 2.6.33-rc5 #77
      > ---------------------------------------------------------
      > emacs/1609 just changed the state of lock:
      >  (&(&tty->ctrl_lock)->rlock){+.....}, at: [<ffffffff8127c648>] tty_fasync+0xe8/0x190
      > but this lock took another, HARDIRQ-unsafe lock in the past:
      >  (&(&sighand->siglock)->rlock){-.....}
      
      "HARDIRQ-unsafe" and "this lock took another" looks wrong, afaics.
      
      >   ... key      at: [<ffffffff81c054a4>] __key.46539+0x0/0x8
      >   ... acquired at:
      >    [<ffffffff81089af6>] __lock_acquire+0x1056/0x15a0
      >    [<ffffffff8108a0df>] lock_acquire+0x9f/0x120
      >    [<ffffffff81423012>] _raw_spin_lock_irqsave+0x52/0x90
      >    [<ffffffff8127c1be>] __proc_set_tty+0x3e/0x150
      >    [<ffffffff8127e01d>] tty_open+0x51d/0x5e0
      
      The stack-trace shows that this lock (ctrl_lock) was taken under
      ->siglock (which is hopefully irq-safe).
      
      This is a clear typo in check_usage_backwards(), where we tell the
      fancy printing routine that we're going forwards.
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <20100126181641.GA10460@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      48d50674
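      The fix itself is one flag: check_usage_backwards() must tell the
      printing routine that it searched backwards. Sketched along the
      lines of the fix:

        /* in check_usage_backwards(): the "forwards" argument was
         * mistakenly 1 here; it must be 0: */
        return print_irq_inversion_bug(curr, &root, target_entry,
                                       this, 0, irqclass);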
  21. 15 Dec 2009, 3 commits
  22. 10 Dec 2009, 1 commit
  23. 06 Dec 2009, 1 commit
  24. 13 Nov 2009, 1 commit
  25. 29 Oct 2009, 1 commit
    • percpu: make percpu symbols under kernel/ and mm/ unique · 1871e52c
      Tejun Heo authored
      This patch updates percpu-related symbols under kernel/ and mm/ such
      that percpu symbols are unique and don't clash with local symbols.
      This serves two purposes: decreasing the possibility of global
      percpu symbol collisions, and allowing the per_cpu__ prefix to be
      dropped from percpu symbols.
      
      * kernel/lockdep.c: s/lock_stats/cpu_lock_stats/
      
      * kernel/sched.c: s/init_rq_rt/init_rt_rq_var/	(any better idea?)
        		  s/sched_group_cpus/sched_groups/
      
      * kernel/softirq.c: s/ksoftirqd/run_ksoftirqd/
      
      * kernel/softlockup.c: s/(*)_timestamp/softlockup_\1_ts/
        		       s/watchdog_task/softlockup_watchdog/
      		       s/timestamp/ts/ for local variables
      
      * kernel/time/timer_stats: s/lookup_lock/tstats_lookup_lock/
      
      * mm/slab.c: s/reap_work/slab_reap_work/
        	     s/reap_node/slab_reap_node/
      
      * mm/vmstat.c: local variable changed to avoid collision with vmstat_work
      
      Partly based on Rusty Russell's "alloc_percpu: rename percpu vars
      which cause name clashes" patch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: (slab/vmstat) Christoph Lameter <cl@linux-foundation.org>
      Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Nick Piggin <npiggin@suse.de>
      1871e52c
  26. 09 Oct 2009, 1 commit
  27. 23 Sep 2009, 1 commit
  28. 29 Aug 2009, 1 commit
  29. 02 Aug 2009, 4 commits
    • lockdep: Fix memory usage info of BFS · 90629209
      Ming Lei authored
      The unit is KB, so sizeof(struct circular_queue) should be
      divided by 1024.
      Signed-off-by: Ming Lei <tom.leiming@gmail.com>
      Cc: akpm@linux-foundation.org
      Cc: torvalds@linux-foundation.org
      Cc: davem@davemloft.net
      Cc: Ming Lei <tom.leiming@gmail.com>
      Cc: a.p.zijlstra@chello.nl
      LKML-Reference: <1249220616-7190-1-git-send-email-tom.leiming@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      90629209
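      I.e. in the "memory used" printout, the queue size joins the other
      byte counts before the single division; sketched (other byte-sized
      terms omitted for brevity):

        printk(" memory used by lock dependency info: %lu kB\n",
               (sizeof(struct lock_class) * MAX_LOCKDEP_KEYS +
                sizeof(struct circular_queue)) / 1024);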
    • lockdep: Reintroduce generation count to make BFS faster · e351b660
      Ming Lei authored
      We can still apply DaveM's generation count optimization to
      BFS, based on the following idea:
      
       - before doing each BFS, increase the global generation id
         by 1
      
       - if one node in the graph has been visited, mark it as
         visited by storing the current global generation id into
         the node's dep_gen_id field
      
       - so we can decide if one node has been visited already, by
         comparing the node's dep_gen_id with the global generation id.
      
      By applying DaveM's generation count optimization to the current
      implementation of BFS, we gain the following advantages:
      
       - we save MAX_LOCKDEP_ENTRIES/8 bytes of memory;
      
       - we remove the bitmap_zero(bfs_accessed, MAX_LOCKDEP_ENTRIES)
         in each BFS, which is very time-consuming since
         MAX_LOCKDEP_ENTRIES may be very large (16384UL).
      Signed-off-by: Ming Lei <tom.leiming@gmail.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: "David S. Miller" <davem@davemloft.net>
      LKML-Reference: <1248274089-6358-1-git-send-email-tom.leiming@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e351b660
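      A simplified sketch of the mechanism (names loosely follow the
      patch; the real dep_gen_id lives in the lock class):

        static unsigned int lockdep_dependency_gen_id;

        static inline void bfs_start(void)
        {
                lockdep_dependency_gen_id++;    /* new search: every
                                                   node looks unvisited */
        }

        static inline void mark_lock_accessed(struct lock_list *lock)
        {
                lock->dep_gen_id = lockdep_dependency_gen_id;
        }

        static inline int lock_accessed(struct lock_list *lock)
        {
                return lock->dep_gen_id == lockdep_dependency_gen_id;
        }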
    • lockdep: Deal with many similar locks · bb97a91e
      Peter Zijlstra authored
      spin_lock_nest_lock() allows taking many instances of the same
      lock class; this can easily lead to overflow of MAX_LOCK_DEPTH.
      
      To avoid this overflow, we'll stop accounting instances but
      start reference counting the class in the held_lock structure.
      
      [ We could maintain a list of instances, if we'd move the hlock
        stuff into __lock_acquired(), but that would require
        significant modifications to the current code. ]
      
      We restrict this mode to spin_lock_nest_lock() only, because it
      degrades lockdep quality due to the loss of instance information.
      
      For lockstat this means we don't track lock statistics for any
      but the first lock in the series.
      
      Currently nesting is limited to 11 bits because that was the
      spare space available in held_lock. This yields a maximum of
      2048 instances.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      bb97a91e
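      A simplified sketch of the fast path in __lock_acquire() (based on
      the patch):

        /* Same class as the most recent held lock, and we hold the
         * nest_lock: don't consume another held_lock slot, just
         * reference-count the existing one. */
        if (depth) {
                hlock = curr->held_locks + depth - 1;
                if (hlock->class_idx == class_idx && nest_lock) {
                        if (hlock->references)
                                hlock->references++;
                        else
                                hlock->references = 2;  /* original + this */
                        return 1;
                }
        }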
    • lockdep: Introduce lockdep_assert_held() · f607c668
      Peter Zijlstra authored
      Add a lockdep helper to validate that we indeed are the owner
      of a lock.
      Signed-off-by: NPeter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: NIngo Molnar <mingo@elte.hu>
      f607c668
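      The helper (as added by the patch, in the CONFIG_PROVE_LOCKING
      case) and a hypothetical use:

        #define lockdep_assert_held(l)  WARN_ON(debug_locks && !lockdep_is_held(l))

        /* Hypothetical usage: document and enforce the precondition
         * that the caller holds rq->lock. */
        static void update_stats(struct rq *rq)
        {
                lockdep_assert_held(&rq->lock);
        }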