1. 09 Jun, 2010 1 commit
  2. 22 May, 2010 1 commit
  3. 11 May, 2010 1 commit
  4. 09 May, 2010 2 commits
    • tracing: Drop the nested field from lock_release event · 93135439
      Frederic Weisbecker authored
      Drop the nested field, as we don't use it. Every nested state can
      be computed from a state machine in post-processing already (a
      small sketch follows this entry).
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      93135439
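      A minimal post-processing sketch of that idea (not part of the commit;
      the struct and helper names are hypothetical): nesting can be rebuilt
      from the raw acquire/release stream by tracking a per-lock depth
      counter, so the event itself does not need to carry the field.

          struct lock_state {
                  void *lockdep_addr;     /* key identifying the lock instance */
                  int depth;              /* how many times it is currently held */
          };

          static void on_lock_event(struct lock_state *st, int is_acquire)
          {
                  if (is_acquire)
                          st->depth++;    /* depth > 1 means a nested acquisition */
                  else if (st->depth > 0)
                          st->depth--;    /* still held afterwards => release was nested */
          }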
    • tracing: Drop lock_acquired waittime field · 883a2a31
      Frederic Weisbecker authored
      Drop the waittime field from the lock_acquired event; we can
      calculate it by subtracting the matching lock_acquire event
      timestamp from the lock_acquired one (a small sketch follows this
      entry).

      It is not needed and takes up needless space in the traces.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      883a2a31
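      A tiny illustration of the arithmetic (hypothetical names, not from the
      commit): pair each lock_acquired event with the matching lock_acquire
      on the same lock and subtract the timestamps.

          struct pending_acquire {
                  void *lockdep_addr;     /* lock the acquire event refers to */
                  u64   acquire_ts;       /* timestamp of the lock_acquire event */
          };

          static u64 waittime(const struct pending_acquire *pa, u64 acquired_ts)
          {
                  return acquired_ts - pa->acquire_ts;    /* time spent waiting for the lock */
          }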
  5. 07 May, 2010 1 commit
    • lockdep: Reduce stack_trace usage · 4726f2a6
      Yong Zhang authored
      When calling check_prevs_add(), if all validations pass,
      add_lock_to_list() adds the new lock to the dependency tree and
      allocates a stack_trace for each list_entry.

      But at this point we are always on the same stack, so the
      stack_trace for each list_entry has the same value. This is
      redundant and eats up a lot of memory, which can lead to a warning
      when MAX_STACK_TRACE_ENTRIES is low.

      Use one copy of the stack_trace instead (a sketch follows this
      entry).

      V2: As suggested by Peter Zijlstra, move save_trace() from
          check_prevs_add() to check_prev_add().
          Add tracking for trylock dependencies, which are also redundant.
      Signed-off-by: Yong Zhang <yong.zhang0@windriver.com>
      Cc: David S. Miller <davem@davemloft.net>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <20100504065711.GC10784@windriver.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      4726f2a6
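      A rough sketch of the idea with simplified helper signatures (the real
      kernel code differs in detail): the trace is saved once per validation
      pass and the same copy is handed to every list entry added from that
      stack.

          static struct stack_trace shared_trace;         /* one copy per validation pass */

          static int add_dependency_pair(struct lock_class *prev, struct lock_class *next)
          {
                  if (!save_trace(&shared_trace))         /* saved once ... */
                          return 0;
                  add_lock_to_list(prev, next, &shared_trace);    /* ... shared here */
                  add_lock_to_list(next, prev, &shared_trace);    /* ... and here */
                  return 1;
          }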
  6. 04 May, 2010 1 commit
  7. 01 May, 2010 1 commit
  8. 06 Apr, 2010 1 commit
    • lockstat: Make lockstat counting per cpu · bd6d29c2
      Frederic Weisbecker authored
      Locking statistics are implemented using global atomic
      variables. This is usually fine, unless some path writes them very
      often.

      This is the case for the function and function-graph tracers,
      which disable irqs for each entry saved (except if the function
      tracer is in preempt-disabled-only mode).
      And calls to local_irq_save/restore() increment
      hardirqs_on_events and hardirqs_off_events stats (or similar
      stats for the redundant versions).

      Incrementing these global vars for each function results in too
      much cache bouncing if lockstats are enabled.

      To solve this, implement the debug_atomic_*() operations using
      per-cpu variables (a sketch follows this entry).
      
       -v2: Use per_cpu() instead of get_cpu_var() to fetch the desired
            cpu vars on debug_atomic_read()
      
       -v3: Store the stats in a structure. No need for local_t as we
            are NMI/irq safe.
      
       -v4: Fix tons of build errors. I thought I had tested it but I
            probably forgot to select the relevant config.
      Suggested-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      LKML-Reference: <1270505417-8144-1-git-send-regression-fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      bd6d29c2
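      A sketch of the per-cpu scheme, assuming the usual percpu API
      (DEFINE_PER_CPU, this_cpu_inc, per_cpu, for_each_possible_cpu); the
      struct and field names are illustrative, not necessarily the ones in
      the patch.

          struct lockdep_stats {
                  unsigned long hardirqs_on_events;
                  unsigned long hardirqs_off_events;
                  /* ... one field per former global atomic counter ... */
          };

          DEFINE_PER_CPU(struct lockdep_stats, lockdep_stats);

          /* writers touch only their own cache line, no global bouncing */
          #define debug_atomic_inc(field)  this_cpu_inc(lockdep_stats.field)

          /* readers are rare (procfs output), so summing over all cpus is fine */
          #define debug_atomic_read(field)                                \
          ({                                                              \
                  unsigned long __total = 0;                              \
                  int __cpu;                                              \
                  for_each_possible_cpu(__cpu)                            \
                          __total += per_cpu(lockdep_stats, __cpu).field; \
                  __total;                                                \
          })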
  9. 30 Mar, 2010 1 commit
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking... · 5a0e3ad6
      Tejun Heo authored
      include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      The percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include those
      headers directly instead of assuming availability.  As this conversion
      needs to touch a large number of source files, the following script is
      used as the basis of the conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following.
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there, i.e. if only gfp is used,
        gfp.h; if slab is used, slab.h (an illustrative before/after of
        such an edit follows this entry).
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to put the new include so that its order conforms
        to its surroundings.  It's put in the include block which contains
        core kernel includes, in the same order that the rest are ordered -
        alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there
        doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints out
        an error message indicating which .h file needs to be added to the
        file.
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition, while for others adding it to an
         implementation .h or embedding .c file was more appropriate.  This
         step added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on archs to make things
         build (like ipr on powerpc/64 which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given the fact that I had only a couple of failures from tests on step
      6, I'm fairly confident about the coverage of this conversion patch.
      If there is a breakage, it's likely to be something in one of the arch
      headers, which should be easily discoverable on most builds of
      the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      5a0e3ad6
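      An illustrative example of the kind of edit the sweep performs (file
      and function are made up): a file that used slab and gfp facilities
      only via the implicit percpu.h -> slab.h chain now includes the headers
      it actually needs.

          #include <linux/module.h>
          #include <linux/gfp.h>          /* added: GFP_KERNEL */
          #include <linux/slab.h>         /* added: kmalloc(), kfree() */

          static void *example_buf_alloc(size_t n)
          {
                  return kmalloc(n, GFP_KERNEL);  /* previously compiled only by accident */
          }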
  10. 29 Mar, 2010 1 commit
  11. 10 Mar, 2010 1 commit
    • lockdep: Move lock events under lockdep recursion protection · db2c4c77
      Frederic Weisbecker authored
      There are RCU-locked read-side areas in the path where we submit
      a trace event, and these rcu_read_(un)lock() calls trigger lock
      events, which create recursive events.
      
      One pair in do_perf_sw_event:
      
      __lock_acquire
            |
            |--96.11%-- lock_acquire
            |          |
            |          |--27.21%-- do_perf_sw_event
            |          |          perf_tp_event
            |          |          |
            |          |          |--49.62%-- ftrace_profile_lock_release
            |          |          |          lock_release
            |          |          |          |
            |          |          |          |--33.85%-- _raw_spin_unlock
      
      Another pair in perf_output_begin/end:
      
      __lock_acquire
            |--23.40%-- perf_output_begin
            |          |          __perf_event_overflow
            |          |          perf_swevent_overflow
            |          |          perf_swevent_add
            |          |          perf_swevent_ctx_event
            |          |          do_perf_sw_event
            |          |          perf_tp_event
            |          |          |
            |          |          |--55.37%-- ftrace_profile_lock_acquire
            |          |          |          lock_acquire
            |          |          |          |
            |          |          |          |--37.31%-- _raw_spin_lock
      
      The problem is not so much the trace recursion itself, as we already
      have recursion protection (though it's always wasteful to recurse).
      But the trace events are outside the lockdep recursion protection, so
      each lockdep event triggers a lock trace, which in turn triggers two
      other lockdep events. The recursive lock trace event won't
      be taken because of the trace recursion, so the recursion stops there,
      but lockdep will still analyse these new events:

      To sum up, for each lockdep event we have:
      
      	lock_*()
      	     |
                   trace lock_acquire
                        |
                        ----- rcu_read_lock()
                        |          |
                        |          lock_acquire()
                        |          |
                        |          trace_lock_acquire() (stopped)
                        |          |
      		  |          lockdep analyze
                        |
                        ----- rcu_read_unlock()
                                   |
                                   lock_release
                                   |
                                   trace_lock_release() (stopped)
                                   |
                                   lockdep analyze
      
      And the above repeats twice, as we have two RCU read-side
      sections when we submit an event.

      This patch fixes it by moving the lock trace events under
      the lockdep recursion protection (a sketch follows this entry).
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: Jens Axboe <jens.axboe@oracle.com>
      db2c4c77
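      A simplified sketch of the fix as described above (argument handling
      abbreviated; see the actual commit for the exact code): the tracepoint
      fires while current->lockdep_recursion is set, so the
      rcu_read_lock()/unlock() inside the trace path no longer feeds new
      events back into lockdep.

          void lock_acquire(struct lockdep_map *lock, unsigned int subclass,
                            int trylock, int read, int check,
                            struct lockdep_map *nest_lock, unsigned long ip)
          {
                  unsigned long flags;

                  if (unlikely(current->lockdep_recursion))
                          return;

                  raw_local_irq_save(flags);
                  check_flags(flags);

                  current->lockdep_recursion = 1;
                  /* the trace event now fires inside the recursion protection */
                  trace_lock_acquire(lock, subclass, trylock, read, check, nest_lock, ip);
                  __lock_acquire(lock, subclass, trylock, read, check,
                                 irqs_disabled_flags(flags), nest_lock, ip, 0);
                  current->lockdep_recursion = 0;

                  raw_local_irq_restore(flags);
          }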
  12. 04 Mar, 2010 1 commit
  13. 26 Feb, 2010 1 commit
  14. 25 Feb, 2010 1 commit
  15. 27 Jan, 2010 1 commit
    • lockdep: Fix check_usage_backwards() error message · 48d50674
      Oleg Nesterov authored
      Lockdep has found the real bug, but the output doesn't look right to me:
      
      > =========================================================
      > [ INFO: possible irq lock inversion dependency detected ]
      > 2.6.33-rc5 #77
      > ---------------------------------------------------------
      > emacs/1609 just changed the state of lock:
      >  (&(&tty->ctrl_lock)->rlock){+.....}, at: [<ffffffff8127c648>] tty_fasync+0xe8/0x190
      > but this lock took another, HARDIRQ-unsafe lock in the past:
      >  (&(&sighand->siglock)->rlock){-.....}
      
      "HARDIRQ-unsafe" and "this lock took another" looks wrong, afaics.
      
      >   ... key      at: [<ffffffff81c054a4>] __key.46539+0x0/0x8
      >   ... acquired at:
      >    [<ffffffff81089af6>] __lock_acquire+0x1056/0x15a0
      >    [<ffffffff8108a0df>] lock_acquire+0x9f/0x120
      >    [<ffffffff81423012>] _raw_spin_lock_irqsave+0x52/0x90
      >    [<ffffffff8127c1be>] __proc_set_tty+0x3e/0x150
      >    [<ffffffff8127e01d>] tty_open+0x51d/0x5e0
      
      The stack-trace shows that this lock (ctrl_lock) was taken under
      ->siglock (which is hopefully irq-safe).
      
      This is a clear typo in check_usage_backwards(), where we tell the
      fancy print routine that we're going forwards (the corrected call is
      sketched after this entry).
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <20100126181641.GA10460@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      48d50674
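      The fix, roughly (sketched from the description above; argument names
      may differ from the in-tree code): the backwards walker now tells
      print_irq_inversion_bug() that it went backwards, so the report
      wording comes out right.

          /* check_usage_backwards() previously passed 1 ("forwards") here: */
          return print_irq_inversion_bug(curr, &root, target_entry,
                                         this, 0 /* backwards */, irqclass);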
  16. 15 Dec, 2009 3 commits
  17. 10 Dec, 2009 1 commit
  18. 06 Dec, 2009 1 commit
  19. 13 Nov, 2009 1 commit
  20. 29 Oct, 2009 1 commit
    • percpu: make percpu symbols under kernel/ and mm/ unique · 1871e52c
      Tejun Heo authored
      This patch updates percpu-related symbols under kernel/ and mm/ so
      that percpu symbols are unique and don't clash with local symbols.
      This serves two purposes: decreasing the possibility of global
      percpu symbol collisions, and allowing the per_cpu__ prefix to be
      dropped from percpu symbols (an illustration follows this entry).
      
      * kernel/lockdep.c: s/lock_stats/cpu_lock_stats/
      
      * kernel/sched.c: s/init_rq_rt/init_rt_rq_var/	(any better idea?)
        		  s/sched_group_cpus/sched_groups/
      
      * kernel/softirq.c: s/ksoftirqd/run_ksoftirqd/a
      
      * kernel/softlockup.c: s/(*)_timestamp/softlockup_\1_ts/
        		       s/watchdog_task/softlockup_watchdog/
      		       s/timestamp/ts/ for local variables
      
      * kernel/time/timer_stats: s/lookup_lock/tstats_lookup_lock/
      
      * mm/slab.c: s/reap_work/slab_reap_work/
        	     s/reap_node/slab_reap_node/
      
      * mm/vmstat.c: local variable changed to avoid collision with vmstat_work
      
      Partly based on Rusty Russell's "alloc_percpu: rename percpu vars
      which cause name clashes" patch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: (slab/vmstat) Christoph Lameter <cl@linux-foundation.org>
      Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Nick Piggin <npiggin@suse.de>
      1871e52c
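      For example (illustrative; the exact declaration may differ), the
      lockdep rename avoids a clash between the percpu array and the
      existing lock_stats() function once the per_cpu__ prefix is gone:

          /* kernel/lockdep.c: was named "lock_stats", which would collide
           * with the lock_stats() function without the per_cpu__ prefix */
          static DEFINE_PER_CPU(struct lock_class_stats[MAX_LOCKDEP_KEYS],
                                cpu_lock_stats);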
  21. 09 Oct, 2009 1 commit
  22. 23 Sep, 2009 1 commit
  23. 29 Aug, 2009 1 commit
  24. 02 Aug, 2009 6 commits
    • lockdep: Fix memory usage info of BFS · 90629209
      Ming Lei authored
      The unit is KB, so sizeof(struct circular_queue) should be
      divided by 1024 (an illustrative form follows this entry).
      Signed-off-by: Ming Lei <tom.leiming@gmail.com>
      Cc: akpm@linux-foundation.org
      Cc: torvalds@linux-foundation.org
      Cc: davem@davemloft.net
      Cc: Ming Lei <tom.leiming@gmail.com>
      Cc: a.p.zijlstra@chello.nl
      LKML-Reference: <1249220616-7190-1-git-send-email-tom.leiming@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      90629209
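      Illustratively (the surrounding terms in the real printk differ), the
      BFS queue size now sits inside the same /1024 expression as the other
      lockdep tables, so the reported figure is really in kilobytes:

          printk(" memory used for stack trace entries and BFS queue: %lu kB\n",
                 (sizeof(unsigned long) * MAX_STACK_TRACE_ENTRIES +
                  sizeof(struct circular_queue)) / 1024);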
    • lockdep: Reintroduce generation count to make BFS faster · e351b660
      Ming Lei authored
      We can still apply DaveM's generation count optimization to
      BFS, based on the following idea:
      
       - before doing each BFS, increase the global generation id
         by 1
      
       - if one node in the graph has been visited, mark it as
         visited by storing the current global generation id into
         the node's dep_gen_id field
      
       - so we can decide if one node has been visited already, by
         comparing the node's dep_gen_id with the global generation id.
      
      By applying DaveM's generation count optimization to the current
      implementation of BFS, we gain the following advantages (see the
      sketch after this entry):

       - we save MAX_LOCKDEP_ENTRIES/8 bytes of memory;

       - we remove the bitmap_zero(bfs_accessed, MAX_LOCKDEP_ENTRIES)
         in each BFS, which is very time-consuming since
         MAX_LOCKDEP_ENTRIES may be very large (16384UL).
      Signed-off-by: Ming Lei <tom.leiming@gmail.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: "David S. Miller" <davem@davemloft.net>
      LKML-Reference: <1248274089-6358-1-git-send-email-tom.leiming@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e351b660
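      A sketch of the mechanism with assumed names (the in-tree field and
      variable names may differ): bumping a global generation id before each
      BFS invalidates every previous "visited" mark at once, so no bitmap
      has to be cleared.

          static unsigned int lockdep_dependency_gen_id;

          struct bfs_node {
                  unsigned int dep_gen_id;        /* generation in which it was last visited */
                  /* ... graph edges, owning lock class, etc. ... */
          };

          static inline void bfs_begin(void)
          {
                  lockdep_dependency_gen_id++;    /* stale-marks all nodes at once */
          }

          static inline void mark_node_accessed(struct bfs_node *node)
          {
                  node->dep_gen_id = lockdep_dependency_gen_id;
          }

          static inline int node_accessed(struct bfs_node *node)
          {
                  return node->dep_gen_id == lockdep_dependency_gen_id;
          }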
    • lockdep: Deal with many similar locks · bb97a91e
      Peter Zijlstra authored
      spin_lock_nest_lock() allows taking many instances of the same
      class; this can easily lead to overflow of MAX_LOCK_DEPTH.
      
      To avoid this overflow, we'll stop accounting instances but
      start reference counting the class in the held_lock structure.
      
      [ We could maintain a list of instances, if we'd move the hlock
        stuff into __lock_acquired(), but that would require
        significant modifications to the current code. ]
      
      We restrict this mode to spin_lock_nest_lock() only, because it
      degrades lockdep quality due to the loss of per-instance information.
      
      For lockstat this means we don't track lock statistics for any
      but the first lock in the series.
      
      Currently nesting is limited to 11 bits because that was the
      spare space available in held_lock. This yields a maximum of 2048
      instances (a sketch follows this entry).
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      bb97a91e
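      A sketch of what this looks like in the acquire path (bit width taken
      from the description above, the surrounding code simplified): when the
      class of the new lock matches the previous held_lock and a nest_lock is
      given, only a reference count is bumped instead of consuming another
      MAX_LOCK_DEPTH slot.

          struct held_lock {
                  /* ... existing fields and bitfields ... */
                  unsigned int references:11;     /* up to 2048 instances of one class */
          };

          /* inside __lock_acquire(), before allocating a new held_lock slot: */
          hlock = curr->held_locks + depth - 1;
          if (hlock->class_idx == class_idx && nest_lock) {
                  hlock->references++;            /* same class under the nest lock */
                  return 1;                       /* no new stack slot consumed */
          }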
    • lockdep: Introduce lockdep_assert_held() · f607c668
      Peter Zijlstra authored
      Add a lockdep helper to validate that we are indeed the owner
      of a lock (a sketch follows this entry).
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f607c668
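      The helper essentially boils down to the following (sketch; struct
      my_dev and update_counter are made-up usage examples, not from the
      commit):

          #define lockdep_assert_held(l)  WARN_ON(debug_locks && !lockdep_is_held(l))

          /* typical use: document and verify a locking precondition */
          static void update_counter(struct my_dev *dev)
          {
                  lockdep_assert_held(&dev->lock);        /* caller must hold dev->lock */
                  dev->counter++;
          }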
    • lockdep: Fix style nits · 98c33edd
      Peter Zijlstra authored
      Fixes a few comments and whitespace issues that annoyed me.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      98c33edd
    • lockdep: Fix backtraces · 4f84f433
      Peter Zijlstra authored
      Truncate the stupid -1 entries in backtraces (a sketch follows this
      entry).
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1248096665.15751.8816.camel@twins>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      4f84f433
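      As a sketch of the described fix (naming approximate): save_stack_trace()
      leaves a ULONG_MAX (-1) sentinel at the end of the buffer when there is
      room, which shows up as a bogus 0xffffffffffffffff frame in reports;
      trimming it keeps the backtraces clean.

          /* after save_stack_trace(trace) in lockdep's save_trace(): */
          if (trace->nr_entries != 0 &&
              trace->entries[trace->nr_entries - 1] == ULONG_MAX)
                  trace->nr_entries--;    /* drop the trailing -1 sentinel */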
  25. 24 Jul, 2009 8 commits