- 26 Feb 2010, 1 commit
-
-
By Paul E. McKenney
Change from "unsafe" to "suspicious", given that there will be false alarms.

Suggested-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: laijs@cn.fujitsu.com
Cc: dipankar@in.ibm.com
Cc: mathieu.desnoyers@polymtl.ca
Cc: josh@joshtriplett.org
Cc: dvhltc@us.ibm.com
Cc: niv@us.ibm.com
Cc: peterz@infradead.org
Cc: rostedt@goodmis.org
Cc: Valdis.Kletnieks@vt.edu
Cc: dhowells@redhat.com
LKML-Reference: <1267135607-7056-1-git-send-email-paulmck@linux.vnet.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 25 Feb 2010, 1 commit
-
-
By Paul E. McKenney
Make rcu_dereference_check() print the list of held locks in addition to the stack dump, to ease debugging.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: laijs@cn.fujitsu.com
Cc: dipankar@in.ibm.com
Cc: mathieu.desnoyers@polymtl.ca
Cc: josh@joshtriplett.org
Cc: dvhltc@us.ibm.com
Cc: niv@us.ibm.com
Cc: peterz@infradead.org
Cc: rostedt@goodmis.org
Cc: Valdis.Kletnieks@vt.edu
Cc: dhowells@redhat.com
LKML-Reference: <1266887105-1528-3-git-send-email-paulmck@linux.vnet.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
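A minimal sketch of the reporting path this describes, assuming the lockdep_rcu_dereference() / lockdep_print_held_locks() helpers of that era; the body is an illustrative reconstruction, not the exact patch:

    void lockdep_rcu_dereference(const char *file, const int line)
    {
            struct task_struct *curr = current;

            if (!debug_locks_off())
                    return;
            printk("\n===================================================\n");
            printk("[ INFO: suspicious rcu_dereference_check() usage. ]\n");
            printk("---------------------------------------------------\n");
            printk("%s:%d invoked rcu_dereference_check() without protection!\n",
                   file, line);
            lockdep_print_held_locks(curr);   /* new: list the held locks */
            dump_stack();                     /* pre-existing stack dump */
    }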
-
- 27 Jan 2010, 1 commit
-
-
By Oleg Nesterov
Lockdep has found the real bug, but the output doesn't look right to me:

> =========================================================
> [ INFO: possible irq lock inversion dependency detected ]
> 2.6.33-rc5 #77
> ---------------------------------------------------------
> emacs/1609 just changed the state of lock:
>  (&(&tty->ctrl_lock)->rlock){+.....}, at: [<ffffffff8127c648>] tty_fasync+0xe8/0x190
> but this lock took another, HARDIRQ-unsafe lock in the past:
>  (&(&sighand->siglock)->rlock){-.....}

"HARDIRQ-unsafe" and "this lock took another" look wrong, afaics.

> ... key at: [<ffffffff81c054a4>] __key.46539+0x0/0x8
> ... acquired at:
>    [<ffffffff81089af6>] __lock_acquire+0x1056/0x15a0
>    [<ffffffff8108a0df>] lock_acquire+0x9f/0x120
>    [<ffffffff81423012>] _raw_spin_lock_irqsave+0x52/0x90
>    [<ffffffff8127c1be>] __proc_set_tty+0x3e/0x150
>    [<ffffffff8127e01d>] tty_open+0x51d/0x5e0

The stack trace shows that this lock (ctrl_lock) was taken under ->siglock (which is hopefully irq-safe). This is a clear typo in check_usage_backwards(), where we tell the fancy print routine that we are checking forwards.

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20100126181641.GA10460@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 15 Dec 2009, 3 commits
-
-
By Thomas Gleixner
Name space cleanup. No functional change.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Ingo Molnar <mingo@elte.hu>
Cc: linux-arch@vger.kernel.org
-
By Thomas Gleixner
Further name space cleanup. No functional change.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Ingo Molnar <mingo@elte.hu>
Cc: linux-arch@vger.kernel.org
-
By Thomas Gleixner
The raw_spin* namespace was taken by lockdep for the architecture-specific implementations. raw_spin_* would be the ideal name space for the spinlocks which are not converted to sleeping locks in preempt-rt. Linus suggested converting the raw_ locks to arch_ locks and cleaning up the name space, instead of using an artificial name like core_spin, atomic_spin or whatever.

No functional change.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Ingo Molnar <mingo@elte.hu>
Cc: linux-arch@vger.kernel.org
-
- 10 Dec 2009, 1 commit
-
-
By Tony Luck
ia64 found this the hard way (because we currently have a stub for save_stack_trace() that does nothing). But it would be a good idea to be cautious in case a real save_stack_trace() bailed out with an error before it set trace->nr_entries.

Signed-off-by: Tony Luck <tony.luck@intel.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: luming.yu@intel.com
LKML-Reference: <4b2024d085302c2a2@agluck-desktop.sc.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
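A hedged sketch of the guard this implies in lockdep's save_trace(): don't trust trace->nr_entries unless the architecture actually filled it in. The surrounding lines are a reconstruction, assuming the trailing-ULONG_MAX truncation (see the Peter Zijlstra entry of 02 Aug 2009 below) must not run when no entries were recorded:

    static int save_trace(struct stack_trace *trace)
    {
            trace->nr_entries = 0;
            trace->max_entries = MAX_STACK_TRACE_ENTRIES - nr_stack_trace_entries;
            trace->entries = stack_trace + nr_stack_trace_entries;

            save_stack_trace(trace);        /* may be a no-op stub (e.g. ia64) */

            /*
             * Only strip the trailing ULONG_MAX sentinel if the arch
             * actually recorded entries; otherwise we'd index at -1.
             */
            if (trace->nr_entries != 0 &&
                trace->entries[trace->nr_entries - 1] == ULONG_MAX)
                    trace->nr_entries--;

            return 1;       /* bookkeeping of the entry pool omitted */
    }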
-
- 06 Dec 2009, 1 commit
-
-
By Frank Rowand
Fix min and max times in /proc/lock_stats:

(1) When collecting lock hold and wait times, if the current minimum time is zero, it will be replaced by the next time.

(2) When aggregating minimum and maximum lock hold and wait times across CPUs, the values are added instead of selecting the minimum and maximum (as sketched below).

Signed-off-by: Frank Rowand <frank.rowand@am.sony.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <4B05BBAE.2050005@am.sony.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
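A minimal sketch of the cross-CPU aggregation fix described in (2), assuming lockstat's lock_time structure (min/max/total/nr fields); illustrative rather than the literal patch:

    static void lock_time_add(struct lock_time *src, struct lock_time *dst)
    {
            /* A zero minimum means "no sample yet", not a real 0ns time. */
            if (!dst->nr || (src->nr && src->min < dst->min))
                    dst->min = src->min;
            if (src->max > dst->max)
                    dst->max = src->max;
            /* Only totals and sample counts are genuinely additive. */
            dst->total += src->total;
            dst->nr += src->nr;
    }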
-
- 13 Nov 2009, 1 commit
-
-
By Frederic Weisbecker
The lockdep events subsystem gathers various locking-related events such as a request, release, contention or acquisition of a lock. The name of this event subsystem is a bit of a misnomer, since these events are not really related to lockdep but more generally to locking, i.e. they do not report lock dependencies or possible deadlock scenarios but pure locking events. Hence this rename.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Li Zefan <lizf@cn.fujitsu.com>
LKML-Reference: <1258103194-843-1-git-send-email-fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 29 Oct 2009, 1 commit
-
-
By Tejun Heo
This patch updates percpu-related symbols under kernel/ and mm/ so that percpu symbols are unique and don't clash with local symbols. This serves the two purposes of decreasing the possibility of global percpu symbol collisions and of allowing the per_cpu__ prefix to be dropped from percpu symbols.

* kernel/lockdep.c: s/lock_stats/cpu_lock_stats/
* kernel/sched.c: s/init_rq_rt/init_rt_rq_var/ (any better idea?)
  s/sched_group_cpus/sched_groups/
* kernel/softirq.c: s/ksoftirqd/run_ksoftirqd/
* kernel/softlockup.c: s/(*)_timestamp/softlockup_\1_ts/
  s/watchdog_task/softlockup_watchdog/
  s/timestamp/ts/ for local variables
* kernel/time/timer_stats.c: s/lookup_lock/tstats_lookup_lock/
* mm/slab.c: s/reap_work/slab_reap_work/
  s/reap_node/slab_reap_node/
* mm/vmstat.c: local variable changed to avoid collision with vmstat_work

Partly based on Rusty Russell's "alloc_percpu: rename percpu vars which cause name clashes" patch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: (slab/vmstat) Christoph Lameter <cl@linux-foundation.org>
Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Nick Piggin <npiggin@suse.de>
-
- 09 Oct 2009, 1 commit
-
-
By Peter Zijlstra
Some tracepoint magic (TRACE_EVENT(lock_acquired)) relies on the fact that lock hold times are positive and uses div64 on them. That triggered a build warning on MIPS, and probably causes bad output in certain circumstances as well. Make the value truly positive.

Reported-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1254818502.21044.112.camel@laptop>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
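A hedged sketch of what "truly positive" plausibly means in practice: compute the delta as a signed value and clamp it before any unsigned division. All names here are illustrative assumptions, not the actual patch:

    static inline u64 lock_hold_time(u64 now, u64 acquired_stamp)
    {
            s64 delta = now - acquired_stamp;

            /*
             * Cross-CPU clock skew can yield a negative delta; stored
             * in an unsigned type it becomes huge and breaks div64
             * users such as TRACE_EVENT(lock_acquired).
             */
            if (delta < 0)
                    delta = 0;
            return delta;
    }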
-
- 23 Sep 2009, 1 commit
-
-
By Mike Frysinger
This allows lockdep to locate symbols that are in arch-specific data sections (such as data in Blackfin on-chip SRAM regions).

Signed-off-by: Mike Frysinger <vapier@gentoo.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Robin Getz <rgetz@blackfin.uclinux.org>
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 29 Aug 2009, 1 commit
-
-
By Ming Lei
Since lockdep introduced BFS to avoid recursion, the recursion statistics no longer make any sense, so remove them.

Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Cc: a.p.zijlstra@chello.nl
LKML-Reference: <1251542879-5211-1-git-send-email-tom.leiming@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 02 Aug 2009, 6 commits
-
-
By Ming Lei
The unit is KB, so sizeof(struct circular_queue) should be divided by 1024.

Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Cc: akpm@linux-foundation.org
Cc: torvalds@linux-foundation.org
Cc: davem@davemloft.net
Cc: Ming Lei <tom.leiming@gmail.com>
Cc: a.p.zijlstra@chello.nl
LKML-Reference: <1249220616-7190-1-git-send-email-tom.leiming@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Ming Lei
We can still apply DaveM's generation-count optimization to BFS, based on the following idea:

- before doing each BFS, increase the global generation id by 1;
- when a node in the graph is visited, mark it as visited by storing the current global generation id in the node's dep_gen_id field;
- we can then decide whether a node has already been visited by comparing its dep_gen_id with the global generation id.

By applying DaveM's generation-count optimization to the current implementation of BFS, we gain the following advantages:

- we save MAX_LOCKDEP_ENTRIES/8 bytes of memory;
- we remove the bitmap_zero(bfs_accessed, MAX_LOCKDEP_ENTRIES) from each BFS, which is very time-consuming since MAX_LOCKDEP_ENTRIES may be very large (16384UL). (A sketch of the resulting visited test appears below.)

Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: "David S. Miller" <davem@davemloft.net>
LKML-Reference: <1248274089-6358-1-git-send-email-tom.leiming@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
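A hedged sketch of the visited test described above; the dep_gen_id field and the spirit of the global counter follow the commit text, the surrounding code is reconstructed:

    static unsigned long lockdep_dependency_gen_id;

    static inline void bfs_init(void)
    {
            /* One increment per BFS replaces clearing a 16K-bit bitmap. */
            lockdep_dependency_gen_id++;
    }

    static inline void mark_lock_accessed(struct lock_list *lock)
    {
            lock->class->dep_gen_id = lockdep_dependency_gen_id;
    }

    static inline unsigned long lock_accessed(struct lock_list *lock)
    {
            return lock->class->dep_gen_id == lockdep_dependency_gen_id;
    }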
-
By Peter Zijlstra
spin_lock_nest_lock() allows taking many instances of the same lock class, which can easily overflow MAX_LOCK_DEPTH. To avoid this overflow, stop accounting individual instances and instead reference-count the class in the held_lock structure (sketched below).

[ We could maintain a list of instances, if we'd move the hlock stuff into __lock_acquired(), but that would require significant modifications to the current code. ]

We restrict this mode to spin_lock_nest_lock() only, because it degrades the lockdep quality by losing per-instance information. For lockstat this means we don't track lock statistics for any but the first lock in the series.

Currently nesting is limited to 11 bits because that was the spare space available in held_lock. This yields a maximum of 2048 instances.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
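A rough sketch of that shape, assuming the 11-bit references field this entry mentions; the surrounding __lock_acquire() logic is reconstructed for illustration:

    struct held_lock {
            /* ... existing fields ... */
            unsigned int references : 11;   /* up to 2048 nested instances */
    };

    /*
     * In __lock_acquire(): fold repeat acquisitions of a nest lock
     * into a refcount instead of burning another held_locks slot.
     */
    if (depth && nest_lock) {
            struct held_lock *hlock = curr->held_locks + depth - 1;

            if (hlock->class_idx == class_idx) {
                    hlock->references++;
                    return 1;       /* no new stack entry consumed */
            }
    }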
-
By Peter Zijlstra
Add a lockdep helper to validate that we are indeed the owner of a lock.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
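In mainline this helper surfaces as the lockdep_assert_held() macro; a minimal sketch of its shape, with a hypothetical caller (the rq example is illustrative, not from this patch):

    #define lockdep_assert_held(l)  WARN_ON(debug_locks && !lockdep_is_held(l))

    /* Hypothetical caller: document and enforce a locking contract. */
    static void update_rq_stats(struct rq *rq)
    {
            lockdep_assert_held(&rq->lock); /* warn if the caller lied */
            /* ... fields protected by rq->lock can be touched here ... */
    }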
-
By Peter Zijlstra
Fix a few comments and some whitespace that annoyed me.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Peter Zijlstra
Truncate stupid -1 entries in backtraces.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1248096665.15751.8816.camel@twins>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 24 Jul 2009, 10 commits
-
-
By Peter Zijlstra
Some cleanups of the lockdep code after the BFS series:

- remove the last traces of the generation id
- fix up comment style
- move the BFS routines into lockdep.c
- clean up the BFS routines

[ tom.leiming@gmail.com: fix crash ]

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1246201486-7308-11-git-send-email-tom.leiming@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Ming Lei
Add BFS statistics to the existing lockdep stats.

Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1246201486-7308-10-git-send-email-tom.leiming@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Ming Lei
Also account for the BFS memory usage.

Signed-off-by: Ming Lei <tom.leiming@gmail.com>
[ fix build for !PROVE_LOCKING ]
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1246201486-7308-9-git-send-email-tom.leiming@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Ming Lei
Implement lockdep_count_{for,back}ward using BFS.

Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1246201486-7308-8-git-send-email-tom.leiming@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Ming Lei
Since the shortest lock-dependency path can now be obtained by BFS, print that shortest path via print_shortest_lock_dependencies().

Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1246201486-7308-7-git-send-email-tom.leiming@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Ming Lei
This patch uses BFS to implement find_usage_*wards(), which were originally written using DFS.

Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1246201486-7308-6-git-send-email-tom.leiming@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Ming Lei
This patch uses BFS to implement check_noncircular() and prints the shortest circle found, if one exists.

Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1246201486-7308-5-git-send-email-tom.leiming@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Ming Lei
1) Introduce match() into BFS so that it can be used to match different patterns.
2) Rename some functions to make them more suitable.

Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1246201486-7308-4-git-send-email-tom.leiming@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Ming Lei
1) Replace % MAX_CIRCULAR_QUE_SIZE with & (MAX_CIRCULAR_QUE_SIZE - 1), since MAX_CIRCULAR_QUE_SIZE is defined as a power of 2 (see the sketch below).
2) Use a bitmap to mark whether a lock has been accessed during BFS, so that it can be cleared quickly, because we may search a graph many times.

Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1246201486-7308-3-git-send-email-tom.leiming@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
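A minimal sketch of the masking trick in point 1, assuming the BFS ring buffer's circular_queue layout; reconstructed for illustration, not quoted from the patch:

    #define MAX_CIRCULAR_QUE_SIZE   4096UL  /* must stay a power of 2 */

    struct circular_queue {
            unsigned long element[MAX_CIRCULAR_QUE_SIZE];
            unsigned int front, rear;
    };

    static inline void __cq_enqueue(struct circular_queue *cq, unsigned long elem)
    {
            cq->element[cq->rear] = elem;
            /* & (size - 1) equals % size for powers of 2, minus the division */
            cq->rear = (cq->rear + 1) & (MAX_CIRCULAR_QUE_SIZE - 1);
    }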
-
By Ming Lei
Currently lockdep will print the first circle detected, if one exists, when acquiring a new (next) lock. This patch prints the shortest path from the next lock to be acquired back to the previously held lock, if a circle is found.

The patch still uses the current method to check for a circle; once the circle is found, a breadth-first search algorithm is used to compute the shortest path from the next lock to the previous lock in the forward lock-dependency graph.

Printing the shortest path will shorten the dependency chain and make troubleshooting of possible circular locking easier.

Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1246201486-7308-2-git-send-email-tom.leiming@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 18 Apr 2009, 1 commit
-
-
By Peter Zijlstra
Steven Rostedt reported:

> OK, I think I figured this bug out. This is a lockdep issue with respect
> to tracepoints.
>
> The trace points in lockdep are called all the time. Outside the lockdep
> logic. But if lockdep were to trigger an error / warning (which this run
> did) we might be in trouble. For new locks, like the dentry->d_lock, that
> are created, they will not get a name:
>
> void lockdep_init_map(struct lockdep_map *lock, const char *name,
>                       struct lock_class_key *key, int subclass)
> {
>         if (unlikely(!debug_locks))
>                 return;
>
> When a problem is found by lockdep, debug_locks becomes false. Thus we
> stop allocating names for locks. This dentry->d_lock I had, now has no
> name. Worse yet, I have CONFIG_DEBUG_VM set, that scrambles non
> initialized memory. Thus, when the trace point was hit, it had junk for
> the lock->name, and the machine crashed.

Ah, nice catch. I think we should put at least the name in regardless.

Ensure we at least initialize the trivial entries of the dep_map so that they can be relied upon, even when lockdep itself decided to pack up and go home.

[ Impact: fix lock tracing after lockdep warnings. ]

Reported-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <1239954049.23397.4156.camel@laptop>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
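A sketch of the fix's shape, under the assumption stated above: initialize the trivially safe fields before the debug_locks bail-out (a reconstruction; the real patch differs in detail):

    void lockdep_init_map(struct lockdep_map *lock, const char *name,
                          struct lock_class_key *key, int subclass)
    {
            /*
             * Always fill in what tracepoints may read, even after
             * lockdep has turned itself off following a warning.
             */
            lock->class_cache = NULL;
            lock->name = name;
            lock->key = key;

            if (unlikely(!debug_locks))
                    return;

            /* full validation and class registration only while live */
    }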
-
- 15 Apr 2009, 2 commits
-
-
By Steven Rostedt
Impact: clean up

Create a subdirectory in include/trace called events to keep the trace-point headers in their own separate directory. Only headers that declare trace points should be defined in this directory.

Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Neil Horman <nhorman@tuxdriver.com>
Cc: Zhao Lei <zhaolei@cn.fujitsu.com>
Cc: Eduard - Gabriel Munteanu <eduard.munteanu@linux360.ro>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
By Steven Rostedt
This patch lowers the number of places a developer must modify to add new tracepoints. The current method of adding a new tracepoint to an existing system is to write the trace-point macro in the trace header with one of the macros TRACE_EVENT, TRACE_FORMAT or DECLARE_TRACE, then add the same-named item into the C file with the macro DEFINE_TRACE(name), and then add the trace point itself. This change removes the need to add the DEFINE_TRACE(name).

Every file that uses the tracepoint must still include the trace/<type>.h file, but the one C file must also add a define before including that file:

#define CREATE_TRACE_POINTS
#include <trace/mytrace.h>

This will cause the trace/mytrace.h file to also produce the C code necessary to implement the trace point. Note: if more than one trace/<type>.h is used to create the C code, it is best to list them all together:

#define CREATE_TRACE_POINTS
#include <trace/foo.h>
#include <trace/bar.h>
#include <trace/fido.h>

Thanks to Mathieu Desnoyers and Christoph Hellwig for coming up with the cleaner solution of the define above the includes, over my first design of having the C code include a "special" header. This patch converts sched, irq, lockdep and skb to use this new method.

Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Neil Horman <nhorman@tuxdriver.com>
Cc: Zhao Lei <zhaolei@cn.fujitsu.com>
Cc: Eduard - Gabriel Munteanu <eduard.munteanu@linux360.ro>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
- 10 Apr 2009, 1 commit
-
-
By Frederic Weisbecker
While trying to optimize the new lock on reiserfs to replace the bkl, I found the lock tracing very useful, though it lacks something important for performance (and latency) instrumentation: the time a task waits for a lock. That's what this patch implements:

  bash-4816  [000]   202.652815: lock_contended: lock_contended: &sb->s_type->i_mutex_key
  bash-4816  [000]   202.652819: lock_acquired: &rq->lock (0.000 us)
  <...>-4787 [000]   202.652825: lock_acquired: &rq->lock (0.000 us)
  <...>-4787 [000]   202.652829: lock_acquired: &rq->lock (0.000 us)
  bash-4816  [000]   202.652833: lock_acquired: &sb->s_type->i_mutex_key (16.005 us)

As shown above, the "lock_acquired" event is followed by the time the task has been waiting for the lock. Usually, a lock_contended entry is followed by a nearby lock_acquired entry with a non-zero wait time.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
LKML-Reference: <1238975373-15739-1-git-send-email-fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 31 Mar 2009, 2 commits
-
-
By Peter Zijlstra
Have a better idea about exactly which code location causes a lockdep limit overflow. Often it's a bug or inefficiency in that subsystem.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1237376327.5069.253.camel@laptop>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Peter Zijlstra
Heiko reported that we grab the graph lock with irqs enabled. Fix this by providing the same irq-disabling wrapper that all other lockdep entry functions have.

Reported-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Nick Piggin <npiggin@suse.de>
LKML-Reference: <1237544000.24626.52.camel@twins>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
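A hedged sketch of the usual lockdep entry-function pattern referred to here; the function names are illustrative assumptions:

    void lockdep_entry_point(struct lockdep_map *lock)
    {
            unsigned long flags;

            /* graph_lock must never be taken with irqs enabled */
            raw_local_irq_save(flags);
            current->lockdep_recursion = 1;
            __lockdep_entry_point(lock);    /* the part that takes graph_lock */
            current->lockdep_recursion = 0;
            raw_local_irq_restore(flags);
    }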
-
- 05 Mar 2009, 4 commits
-
-
By David Rientjes
Impact: cleanup

The atomic debug modifiers are already defined in kernel/lockdep_internals.h.

Signed-off-by: David Rientjes <rientjes@google.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
LKML-Reference: <alpine.DEB.2.00.0903050222160.30401@chino.kir.corp.google.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Peter Zijlstra
Augment the traces with lock names when lockdep is available:

  1)               |  down_read_trylock() {
  1)               |    _spin_lock_irqsave() {
  1)               |      /* lock_acquire: &sem->wait_lock */
  1)   4.201 us    |    }
  1)               |    _spin_unlock_irqrestore() {
  1)               |      /* lock_release: &sem->wait_lock */
  1)   3.523 us    |    }
  1)               |    /* lock_acquire: try read &mm->mmap_sem */
  1) + 13.386 us   |  }
  1)   1.635 us    |  find_vma();
  1)               |  handle_mm_fault() {
  1)               |    __do_fault() {
  1)               |      filemap_fault() {
  1)               |        find_lock_page() {
  1)               |          find_get_page() {
  1)               |            /* lock_acquire: read rcu_read_lock */
  1)               |            /* lock_release: rcu_read_lock */
  1)   5.697 us    |          }
  1)   8.158 us    |        }
  1) + 11.079 us   |      }
  1)               |      _spin_lock() {
  1)               |        /* lock_acquire: __pte_lockptr(page) */
  1)   3.949 us    |      }
  1)   1.460 us    |      page_add_file_rmap();
  1)               |      _spin_unlock() {
  1)               |        /* lock_release: __pte_lockptr(page) */
  1)   3.115 us    |      }
  1)               |      unlock_page() {
  1)   1.421 us    |        page_waitqueue();
  1)   1.220 us    |        __wake_up_bit();
  1)   6.519 us    |      }
  1) + 34.328 us   |    }
  1) + 37.452 us   |  }
  1)               |  up_read() {
  1)               |  /* lock_release: &mm->mmap_sem */
  1)               |    _spin_lock_irqsave() {
  1)               |      /* lock_acquire: &sem->wait_lock */
  1)   3.865 us    |    }
  1)               |    _spin_unlock_irqrestore() {
  1)               |      /* lock_release: &sem->wait_lock */
  1)   8.562 us    |    }
  1) + 17.370 us   |  }

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Török Edwin <edwintorok@gmail.com>
Cc: Jason Baron <jbaron@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <1236166375.5330.7209.camel@laptop>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Peter Zijlstra
Impact: clarify lockdep printk text

print_irq_inversion_bug() gets handed state strings of the form "HARDIRQ", "SOFTIRQ", "RECLAIM_FS" and appends "-irq-{un,}safe" to them, which is either redundant for *IRQ or confusing in the RECLAIM_FS case.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1236175192.5330.7585.camel@laptop>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Peter Zijlstra
In the recent mark_lock_irq() rework, a bug snuck in that would report the state of write locks causing irq inversion under a read lock as a read lock. Fix this by masking out the read bit of the state when validating write dependencies.

Reported-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1236172646.5330.7450.camel@laptop>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 15 Feb 2009, 1 commit
-
-
By Peter Zijlstra
The __GFP_FS annotations fail to build with CONFIG_LOCKDEP=y, CONFIG_PROVE_LOCKING=n; amend that.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-