- 23 Mar 2015, 1 commit

Committed by Peter Zijlstra
Module unload calls lockdep_free_key_range(), which removes entries from the data structures. Most of the lockdep code OTOH assumes the data structures are append only; in particular see the comments in add_lock_to_list() and look_up_lock_class(). Clearly this has only worked by accident; make it work properly.

The actual scenario to make it go boom would involve the memory freed by the module unload being re-allocated and re-used for a lock inside of an rcu-sched grace period. This is a very unlikely scenario; still, better plug the hole.

Use RCU list iteration in all places and amend the comments. Change lockdep_free_key_range() to issue a sync_sched() between removal from the lists and returning -- which results in the memory being freed. Further ensure the callers are placed correctly and comment the requirements.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andrey Tsyvarev <tsyvarev@ispras.ru>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
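
A minimal sketch of the pattern the fix relies on -- RCU iteration on the lookup side, a grace period on the unload side before memory is freed. The names here are illustrative, not the actual lockdep structures:

    #include <linux/rculist.h>
    #include <linux/rcupdate.h>
    #include <linux/slab.h>

    struct key_entry {                      /* stand-in for a lock class */
            unsigned long key;
            struct list_head hash_entry;
    };

    static LIST_HEAD(hash_list);

    /* Lookup side: RCU iteration is safe against a concurrent unlink. */
    static struct key_entry *lookup(unsigned long key)
    {
            struct key_entry *e;

            list_for_each_entry_rcu(e, &hash_list, hash_entry) {
                    if (e->key == key)
                            return e;
            }
            return NULL;
    }

    /* Unload side: unlink, wait a grace period, only then free. */
    static void free_key_entry(struct key_entry *e)
    {
            list_del_rcu(&e->hash_entry);
            synchronize_sched();    /* the sync_sched() above; synchronize_rcu() on modern kernels */
            kfree(e);
    }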

- 03 Oct 2014, 1 commit

Committed by Peter Zijlstra
Commit f0bab73c ("locking/lockdep: Restrict the use of recursive read_lock() with qrwlock") changed lockdep to try and conform to the qrwlock semantics, which differ from the traditional rwlock semantics. In particular, qrwlock is fair outside of interrupt context, but in interrupt context readers will ignore all fairness.

The problem modeling this is that the read and write sides have different lock state (interrupts) semantics but we only have a single representation of these. Therefore lockdep will get confused, thinking the lock can cause interrupt lock inversions.

So revert it for now; the old rwlock semantics were already imperfectly modeled and the qrwlock extra won't fit either. If we want to properly fix this, I think we need to resurrect the work Gautham did a few years ago that split the read and write state of locks:

    http://lwn.net/Articles/332801/

FWIW the locking selftest that would've failed (and was reported by Borislav earlier) is something like:

    RL(X1);     /* IRQ-ON */
    LOCK(A);
    UNLOCK(A);
    RU(X1);

    IRQ_ENTER();
    RL(X1);     /* IN-IRQ */
    RU(X1);
    IRQ_EXIT();

At which point it would report that because A is an IRQ-unsafe lock we can suffer the following inversion:

    CPU0                    CPU1

    lock(A)
                            lock(X1)
                            lock(A)
    <IRQ>
     lock(X1)

And this is 'wrong' because X1 can recurse (assuming the above locks are in fact read-locks) but lockdep doesn't know about this.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Waiman Long <Waiman.Long@hp.com>
Cc: ego@linux.vnet.ibm.com
Cc: bp@alien8.de
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/r/20140930132600.GA7444@worktop.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>

- 13 Aug 2014, 1 commit

Committed by Waiman Long
Unlike the original unfair rwlock implementation, queued rwlock will grant the lock according to the chronological sequence of the lock requests, except when the lock requester is in interrupt context. Consequently, recursive read_lock calls will now hang the process if there is a write_lock call somewhere in between the read_lock calls.

This patch updates the lockdep implementation to look for recursive read_lock calls. A new read state (3) is used to mark those read_lock calls that cannot be recursively called except in interrupt context. The new read state does exhaust the 2 bits available in the held_lock:read bit field. The addition of any new read state in the future may require a redesign of how all those bits are squeezed together in the held_lock structure.

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Scott J Norton <scott.norton@hp.com>
Cc: Fengguang Wu <fengguang.wu@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/1407345722-61615-2-git-send-email-Waiman.Long@hp.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
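
A hedged sketch of the hang described above (illustrative code, not from the patch): with a fair queued rwlock, a second read_lock on the same CPU queues behind a waiting writer, which itself waits on the first read hold:

    #include <linux/spinlock.h>

    static DEFINE_RWLOCK(mylock);

    static void cpu0(void)
    {
            read_lock(&mylock);     /* step 1: granted */
            /*
             * step 3: queues behind CPU1's writer, which in turn waits
             * for our first read hold -- deadlock. The old unfair
             * rwlock let this recursive reader through.
             */
            read_lock(&mylock);
            read_unlock(&mylock);
            read_unlock(&mylock);
    }

    static void cpu1(void)
    {
            write_lock(&mylock);    /* step 2: waits for CPU0's reader */
            write_unlock(&mylock);
    }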

- 17 Jul 2014, 1 commit

Committed by Andreas Gruenbacher
When lockdep turns itself off, the following message is logged:

    Please attach the output of /proc/lock_stat to the bug report

Omit this message when CONFIG_LOCK_STAT is off and /proc/lock_stat doesn't exist.

Signed-off-by: Andreas Gruenbacher <andreas.gruenbacher@gmail.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/1405451452-3824-1-git-send-email-andreas.gruenbacher@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
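
The shape of the change is presumably just a config guard around the hint; a sketch, where print_lockdep_off is a hypothetical helper name:

    static void print_lockdep_off(void)
    {
            printk(KERN_DEBUG "turning off the locking correctness validator.\n");
    #ifdef CONFIG_LOCK_STAT
            /* only advertise /proc/lock_stat where it actually exists */
            printk(KERN_DEBUG "Please attach the output of /proc/lock_stat to the bug report\n");
    #endif
    }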

- 06 May 2014, 1 commit

Committed by Andi Kleen
As requested by Linus, add explicit __visible to the asmlinkage users. This marks functions visible to assembler. Tree sweep for the rest of the tree.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/1398984278-29319-4-git-send-email-andi@firstfloor.org
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
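
For lockdep the affected entry point is presumably lockdep_sys_exit() (made asmlinkage in the Feb 2014 commits below); the combined annotation would look like this sketch:

    #include <linux/linkage.h>

    /*
     * asmlinkage: called from assembly, so use the C calling convention
     * the asm code expects; __visible: keep the symbol visible to the
     * assembler (and LTO) even though no C code references it.
     */
    asmlinkage __visible void lockdep_sys_exit(void)
    {
            /* ... warn if we return to userspace holding locks ... */
    }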

- 14 Feb 2014, 2 commits

Committed by Andi Kleen
Can be called from assembler code.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/1391845930-28580-6-git-send-email-ak@linux.intel.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>

Committed by Andi Kleen
lockdep_sys_exit can be called from assembler code, so make it asmlinkage.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/1391845930-28580-5-git-send-email-ak@linux.intel.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>

- 10 Feb 2014, 3 commits

Committed by Oleg Nesterov
The __lockdep_no_validate check in mark_held_locks() adds a subtle and (afaics) unnecessary difference between no-validate and check == 0. And this looks even more inconsistent because __lock_acquire() skips mark_irqflags()->mark_lock() if !check.

Change mark_held_locks() to check hlock->check instead.

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Cc: Dave Jones <davej@redhat.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20140120182013.GA26505@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>

Committed by Oleg Nesterov
Test-case:

    DEFINE_MUTEX(m1);
    DEFINE_MUTEX(m2);
    DEFINE_MUTEX(mx);

    void lockdep_should_complain(void)
    {
            lockdep_set_novalidate_class(&mx);

            // m1 -> mx -> m2
            mutex_lock(&m1);
            mutex_lock(&mx);
            mutex_lock(&m2);
            mutex_unlock(&m2);
            mutex_unlock(&mx);
            mutex_unlock(&m1);

            // m2 -> m1 ; should trigger the warning
            mutex_lock(&m2);
            mutex_lock(&m1);
            mutex_unlock(&m1);
            mutex_unlock(&m2);
    }

This doesn't trigger any warning; lockdep can't detect the trivial deadlock. This is because lock(&mx) correctly avoids the m1 -> mx dependency, it skips validate_chain() due to mx->check == 0. But lock(&m2) wrongly adds mx -> m2 and thus m1 -> m2 is not created.

rcu_lock_acquire()->lock_acquire(check => 0) is fine due to read == 2, so currently only __lockdep_no_validate__ can trigger this problem.

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Cc: Dave Jones <davej@redhat.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20140120182010.GA26498@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>

Committed by Oleg Nesterov
The "int check" argument of lock_acquire() and held_lock->check are misleading. This is actually a boolean: 2 means "true", everything else is "false". And there is no need to pass 1 or 0 to lock_acquire() depending on CONFIG_PROVE_LOCKING; __lock_acquire() checks prove_locking at the start and clears "check" if !CONFIG_PROVE_LOCKING.

Note: probably we can simply kill this member/arg. The only explicit user of check => 0 is rcu_lock_acquire(); perhaps we can change it to use lock_acquire(trylock =>, read => 2). __lockdep_no_validate means check => 0 implicitly, but we can change validate_chain() to check hlock->instance->key instead. Not to mention it would be nice to get rid of lockdep_set_novalidate_class().

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Cc: Dave Jones <davej@redhat.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20140120182006.GA26495@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
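
For reference, the lock_acquire() prototype of that era (quoted from include/linux/lockdep.h from memory; worth double-checking against the tree):

    extern void lock_acquire(struct lockdep_map *lock, unsigned int subclass,
                             int trylock, int read, int check,
                             struct lockdep_map *nest_lock, unsigned long ip);

    /*
     * Before this cleanup, callers passed check == 2 for "full validation";
     * after it, check is an honest boolean: 0 or 1.
     */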

- 27 Nov 2013, 1 commit

Committed by Sasha Levin
Lockdep is an awesome piece of code which detects locking issues that are relevant both to userspace and kernelspace. We can easily make lockdep work in userspace since there is really no kernel specific magic going on in the code. All we need is to wrap two functions which are used by lockdep and are very kernel specific.

Doing that will allow tools located in tools/ to easily utilize lockdep's code for their own use.

Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: penberg@kernel.org
Cc: torvalds@linux-foundation.org
Link: http://lkml.kernel.org/r/1352753446-24109-1-git-send-email-sasha.levin@oracle.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>

- 13 Nov 2013, 1 commit

Committed by Fengguang Wu
There are new Sparse warnings:

    >> kernel/locking/lockdep.c:1235:15: sparse: symbol '__lockdep_count_forward_deps' was not declared. Should it be static?
    >> kernel/locking/lockdep.c:1261:15: sparse: symbol '__lockdep_count_backward_deps' was not declared. Should it be static?

Please consider folding the attached diff :-)

Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/527d1787.ThzXGoUspZWehFDl%fengguang.wu@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
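
The fix Sparse asks for is simply to mark the helpers static; a sketch (the signature follows the BFS-era code and is worth verifying against the tree):

    /* file-local: only reachable via the lockdep_count_*_deps() wrappers */
    static unsigned long __lockdep_count_forward_deps(struct lock_list *this)
    {
            unsigned long count = 0;

            /* ... BFS forwards over the dependency graph, counting entries ... */
            return count;
    }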

- 06 Nov 2013, 1 commit

Committed by Peter Zijlstra
Suggested-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/n/tip-wl7s3tta5isufzfguc23et06@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>

- 25 Sep 2013, 1 commit

Committed by Paul E. McKenney
The old rcu_is_cpu_idle() function is just __rcu_is_watching() with preemption disabled. This commit therefore renames rcu_is_cpu_idle() to rcu_is_watching().

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>

- 12 May 2013, 1 commit

Committed by Colin Cross
The only existing caller of debug_check_no_locks_held() calls it with 'current' as the task, and the freezer needs to call debug_check_no_locks_held() but doesn't already have a current task pointer, so remove the argument. It is already assuming that the current task is relevant by dumping the current stack trace as part of the warning.

This was originally part of 6aa97070 ("lockdep: check that no locks held at freeze time"), which was reverted in dbf520a9.

Original-author: Mandeep Singh Baines <msb@chromium.org>
Acked-by: Pavel Machek <pavel@ucw.cz>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Colin Cross <ccross@android.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
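
The interface change as a sketch (the old form is reconstructed from the description above; example_exit_path is a hypothetical caller):

    #include <linux/debug_locks.h>

    static void example_exit_path(void)
    {
            /*
             * Old form: debug_check_no_locks_held(current);
             * New form checks 'current' implicitly:
             */
            debug_check_no_locks_held();
    }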

- 26 Apr 2013, 2 commits

Committed by Dave Jones
Also add some missing printk levels.

Signed-off-by: Dave Jones <davej@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20130425174002.GA26769@redhat.com
[ Tweaked the messages a bit. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>

Committed by Dave Jones
We occasionally get reports of these BUGs being hit, and the stack trace doesn't necessarily always tell us what we need to know about why we are hitting those limits. If users start attaching /proc/lock_stat to reports, we may have more of a clue what's going on.

Signed-off-by: Dave Jones <davej@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20130423163403.GA12839@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>

- 08 Apr 2013, 1 commit

Committed by Hong Zhiguo
Signed-off-by: Hong Zhiguo <honkiko@gmail.com>
Cc: peterz@infradead.org
Link: http://lkml.kernel.org/r/1365058881-4044-1-git-send-email-honkiko@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>

- 01 Apr 2013, 1 commit

Committed by Paul Walmsley
This reverts commit 6aa97070.

Commit 6aa97070 ("lockdep: check that no locks held at freeze time") causes problems with NFS root filesystems. The failures were noticed on OMAP2 and 3 boards during kernel init:

    [ BUG: swapper/0/1 still has locks held! ]
    3.9.0-rc3-00344-ga937536b #1 Not tainted
    -------------------------------------
    1 lock held by swapper/0/1:
     #0:  (&type->s_umount_key#13/1){+.+.+.}, at: [<c011e84c>] sget+0x248/0x574

    stack backtrace:
      rpc_wait_bit_killable
      __wait_on_bit
      out_of_line_wait_on_bit
      __rpc_execute
      rpc_run_task
      rpc_call_sync
      nfs_proc_get_root
      nfs_get_root
      nfs_fs_mount_common
      nfs_try_mount
      nfs_fs_mount
      mount_fs
      vfs_kern_mount
      do_mount
      sys_mount
      do_mount_root
      mount_root
      prepare_namespace
      kernel_init_freeable
      kernel_init

Although the rootfs mounts, the system is unstable. Here's a transcript from a PM test:

    http://www.pwsan.com/omap/testlogs/test_v3.9-rc3/20130317194234/pm/37xxevm/37xxevm_log.txt

Here's what the test log should look like:

    http://www.pwsan.com/omap/testlogs/test_v3.8/20130218214403/pm/37xxevm/37xxevm_log.txt

Mailing list discussion is here:

    http://lkml.org/lkml/2013/3/4/221

Deal with this for v3.9 by reverting the problem commit, until folks can figure out the right long-term course of action.

Signed-off-by: Paul Walmsley <paul@pwsan.com>
Cc: Mandeep Singh Baines <msb@chromium.org>
Cc: Jeff Layton <jlayton@redhat.com>
Cc: Shawn Guo <shawn.guo@linaro.org>
Cc: <maciej.rutecki@gmail.com>
Cc: Fengguang Wu <fengguang.wu@intel.com>
Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Ben Chan <benchan@chromium.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Rafael J. Wysocki <rjw@sisk.pl>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

- 24 Mar 2013, 1 commit

Committed by Kent Overstreet
Hack, but bcache needs a way around lockdep for locking during garbage collection -- we need to keep multiple btree nodes locked for coalescing, and rw_lock_nested() isn't really sufficient or appropriate here.

Signed-off-by: Kent Overstreet <koverstreet@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>

- 28 Feb 2013, 1 commit

Committed by Mandeep Singh Baines
We shouldn't try_to_freeze() if locks are held. Holding a lock can cause a deadlock if the lock is later acquired in the suspend or hibernate path (e.g. by dpm). Holding a lock can also cause a deadlock in the case of cgroup_freezer if a lock is held inside a frozen cgroup that is later acquired by a process outside that group.

[akpm@linux-foundation.org: export debug_check_no_locks_held]

Signed-off-by: Mandeep Singh Baines <msb@chromium.org>
Cc: Ben Chan <benchan@chromium.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Rafael J. Wysocki <rjw@sisk.pl>
Cc: Ingo Molnar <mingo@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
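
A sketch of the idea (hypothetical wrapper name, and using the later argument-less form of the helper): assert that nothing is held right before the task parks itself in the freezer:

    #include <linux/debug_locks.h>
    #include <linux/freezer.h>

    static inline bool try_to_freeze_checked(void)
    {
            debug_check_no_locks_held();    /* WARNs and dumps any held locks */
            return try_to_freeze();
    }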

- 19 Feb 2013, 2 commits

Committed by Ben Greear
This helps debug cases where a lock is acquired over and over without being released.

Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Ben Greear <greearb@candelatech.com>
Cc: peterz@infradead.org
Link: http://lkml.kernel.org/r/1360176979-4421-1-git-send-email-greearb@candelatech.com
[ Changed the printout ordering. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>

Committed by Srivatsa S. Bhat
Fix the typo in the function name (s/inbalance/imbalance).

Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Cc: rostedt@goodmis.org
Cc: peterz@infradead.org
Link: http://lkml.kernel.org/r/20130108130547.32733.79507.stgit@srivatsabhat.in.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>

- 13 Sep 2012, 1 commit

Committed by Maarten Lankhorst
It is considered good form to lock the lock you claim to be nested in.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
[ removed the nest_lock arg to print_lock_nested_lock_not_held() in favour of hlock->nest_lock, also renamed the lock arg to hlock since it's a held_lock type ]
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/5051A9E7.5040501@canonical.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
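
The rule being enforced, sketched with the stock nesting API (the types are illustrative):

    #include <linux/mutex.h>

    struct parent { struct mutex mux; };
    struct child  { struct mutex mux; };

    static void lock_child(struct parent *p, struct child *c)
    {
            mutex_lock(&p->mux);            /* hold the nest lock first... */
            /*
             * ...then claim c->mux nests under it. With this patch lockdep
             * complains if p->mux is not actually held at this point.
             */
            mutex_lock_nest_lock(&c->mux, &p->mux);

            mutex_unlock(&c->mux);
            mutex_unlock(&p->mux);
    }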

- 22 Feb 2012, 1 commit

Committed by Paul E. McKenney
It is illegal to use RCU from a CPU that has reported idleness or offline status to RCU. However, it can be quite difficult to determine from a stack trace whether or not a given CPU is idle or offline. Therefore, this commit adds idle/offline diagnostics to the lockdep-RCU error message.

Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

- 12 Dec 2011, 1 commit

Committed by Frederic Weisbecker
Inform the user if an RCU usage error is detected by lockdep while in an extended quiescent state (in this case, the RCU-free window in idle). This is accomplished by adding a line to the RCU lockdep splat indicating whether or not the splat occurred in an extended quiescent state.

Uses of RCU from within extended quiescent state mode are totally ignored by RCU, hence the importance of this diagnostic.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>

- 07 Dec 2011, 1 commit

Committed by Yong Zhang
Since commit f59de899 ("lockdep: Clear whole lockdep_map on initialization"), lockdep_init_map() will clear the entire struct. But this breaks lock_set_class()/lock_set_subclass(). A typical race condition looks like:

    CPU A                                   CPU B
    lock_set_subclass(lockA);
      lock_set_class(lockA);
        lockdep_init_map(lockA);
          /* lockA->name is cleared */
          memset(lockA);
                                            __lock_acquire(lockA);
                                              /* lockA->class_cache[] is cleared */
                                              register_lock_class(lockA);
                                                look_up_lock_class(lockA);
                                                  WARN_ON_ONCE(class->name != lock->name);
          lock->name = name;

So restore what we did before commit f59de899, but annotate ->lock with kmemcheck_mark_initialized() to suppress the kmemcheck warning reported in commit f59de899.

Reported-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Reported-by: Borislav Petkov <bp@alien8.de>
Suggested-by: Vegard Nossum <vegard.nossum@gmail.com>
Signed-off-by: Yong Zhang <yong.zhang0@gmail.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: <stable@kernel.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20111109080451.GB8124@zhy
Signed-off-by: Ingo Molnar <mingo@elte.hu>

- 06 Dec 2011, 3 commits

Committed by Ming Lei
This patch prints the name of the lock which is acquired before lockdep_init() is called, so that users can easily find which lock triggered the lockdep init error warning.

This patch also removes the lockdep_init_error() message of "Arch code didn't call lockdep_init() early enough?" since lockdep_init() is called in arch-independent code now.

Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1321508072-23853-2-git-send-email-tom.leiming@gmail.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>

Committed by Yong Zhang
Since commit f59de899 ("lockdep: Clear whole lockdep_map on initialization"), lockdep_init_map() will clear the entire struct. But this breaks lock_set_class()/lock_set_subclass(). A typical race condition looks like:

    CPU A                                   CPU B
    lock_set_subclass(lockA);
      lock_set_class(lockA);
        lockdep_init_map(lockA);
          /* lockA->name is cleared */
          memset(lockA);
                                            __lock_acquire(lockA);
                                              /* lockA->class_cache[] is cleared */
                                              register_lock_class(lockA);
                                                look_up_lock_class(lockA);
                                                  WARN_ON_ONCE(class->name != lock->name);
          lock->name = name;

So restore what we did before commit f59de899, but annotate ->lock with kmemcheck_mark_initialized() to suppress the kmemcheck warning reported in commit f59de899.

Reported-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Reported-by: Borislav Petkov <bp@alien8.de>
Suggested-by: Vegard Nossum <vegard.nossum@gmail.com>
Signed-off-by: Yong Zhang <yong.zhang0@gmail.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20111109080451.GB8124@zhy
Signed-off-by: Ingo Molnar <mingo@elte.hu>

Committed by Ben Hutchings
Show the taint flags in all lockdep and rtmutex-debug error messages.

Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1319773015.6759.30.camel@deadeye
Signed-off-by: Ingo Molnar <mingo@elte.hu>

- 14 Nov 2011, 1 commit

Committed by Yong Zhang
Commit 62016250 ("lockdep: Add improved subclass caching") tries to improve performance (especially to reduce the cost of rq->lock) when using lockdep, but it fails due to lockdep_init_map(), in which ->class_cache is cleared. The typical caller is lock_set_subclass(), after which the class will not be cached anymore.

This patch tries to achieve the goal of commit 62016250 by always setting ->class_cache in register_lock_class().

=== Score comparison of benchmarks ===

    for i in `seq 1 10`; do ./perf bench -f simple sched messaging; done

    before: min: 0.604, max: 0.660, avg: 0.622
    after:  min: 0.414, max: 0.473, avg: 0.427

    for i in `seq 1 10`; do ./perf bench -f simple sched messaging -g 40; done

    before: min: 2.347, max: 2.421, avg: 2.391
    after:  min: 1.652, max: 1.699, avg: 1.671

Signed-off-by: Yong Zhang <yong.zhang0@gmail.com>
Cc: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20111109080714.GC8124@zhy
Signed-off-by: Ingo Molnar <mingo@elte.hu>

- 08 Nov 2011, 1 commit

Committed by Steven Rostedt
The pretty print of the lockdep debug splat uses just the lock name to show how the locking scenario happens. But when it comes to nested locks, the output becomes confusing, which takes away the point of the pretty printing of the lock scenario.

Without displaying the subclass info, we get the following output:

    Possible unsafe locking scenario:

          CPU0                    CPU1
          ----                    ----
     lock(slock-AF_INET);
                             lock(slock-AF_INET);
                             lock(slock-AF_INET);
     lock(slock-AF_INET);

    *** DEADLOCK ***

The above looks more like an A->A locking bug than an A->B, B->A one. By adding the subclass to the output, we can see what really happened:

    other info that might help us debug this:

    Possible unsafe locking scenario:

          CPU0                    CPU1
          ----                    ----
     lock(slock-AF_INET);
                             lock(slock-AF_INET/1);
                             lock(slock-AF_INET);
     lock(slock-AF_INET/1);

    *** DEADLOCK ***

This bug was discovered while tracking down a real bug caught by lockdep.

Link: http://lkml.kernel.org/r/20111025202049.GB25043@hostway.ca
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Reported-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Simon Kirby <sim@hostway.ca>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

- 29 Sep 2011, 1 commit

Committed by Paul E. McKenney
Long ago, using TREE_RCU with PREEMPT would result in "scheduling while atomic" diagnostics if you blocked in an RCU read-side critical section. However, PREEMPT now implies TREE_PREEMPT_RCU, which defeats this diagnostic. This commit therefore adds a replacement diagnostic based on PROVE_RCU.

Because rcu_lockdep_assert() and lockdep_rcu_dereference() are now being used for things that have nothing to do with rcu_dereference(), rename lockdep_rcu_dereference() to lockdep_rcu_suspicious() and add a third argument that is a string indicating what is suspicious. This third argument is passed in from a new third argument to rcu_lockdep_assert(). Update all calls to rcu_lockdep_assert() to add an informative third argument.

Also, add a pair of rcu_lockdep_assert() calls from within rcu_note_context_switch(), one complaining if a context switch occurs in an RCU-bh read-side critical section and another complaining if a context switch occurs in an RCU-sched read-side critical section. These are present only if the PROVE_RCU kernel parameter is enabled.

Finally, fix some checkpatch whitespace complaints in lockdep.c.

Again, you must enable PROVE_RCU to see these new diagnostics. But you are enabling PROVE_RCU to check out new RCU uses in any case, aren't you?

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
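
The annotated assertion style described above, as a hedged sketch (the condition and message string are illustrative):

    #include <linux/rcupdate.h>

    static void *fetch_entry(void **slot)
    {
            /* routes through lockdep_rcu_suspicious() on PROVE_RCU builds */
            rcu_lockdep_assert(rcu_read_lock_held(),
                               "fetch_entry() used outside of an RCU read-side critical section");
            return rcu_dereference(*slot);
    }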

- 18 Sep 2011, 1 commit

Committed by Peter Zijlstra
Andrew requested I comment all the lockdep WARN()s to help other people figure out wth is wrong..

Requested-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1315301493.3191.9.camel@twins
Signed-off-by: Ingo Molnar <mingo@elte.hu>

- 09 Aug 2011, 1 commit

Committed by Peter Zijlstra
match_held_lock() was assuming it was being called on a lock class that had already seen usage. This condition was true for bug-free code using lockdep_assert_held(), since you're in fact holding the lock when calling it. However, the assumption fails the moment you assume the assertion can fail, which is the whole point of having the assertion in the first place.

Anyway, now that there are more lockdep_is_held() users, notably __rcu_dereference_check(), it's much easier to trigger this, since we test for a number of locks and we only need to hold any one of them to be good.

Reported-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1312547787.28695.2.camel@twins
Signed-off-by: Ingo Molnar <mingo@elte.hu>
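
For context, the assertion in question; a minimal usage sketch (the type is illustrative):

    #include <linux/lockdep.h>
    #include <linux/spinlock.h>

    struct my_queue {
            spinlock_t lock;
            int depth;
    };

    /* helper that documents -- and enforces -- its locking contract */
    static void queue_update_locked(struct my_queue *q)
    {
            lockdep_assert_held(&q->lock);  /* WARN if the caller lied */
            q->depth++;
    }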

- 04 Aug 2011, 3 commits

Committed by Tejun Heo
lockdep_init_map() only initializes parts of lockdep_map and triggers a kmemcheck warning when it is copied as a whole. There isn't anything to be gained by clearing selectively. memset() the whole structure and remove the loop for ->class_cache[] clearing.

Addresses https://bugzilla.kernel.org/show_bug.cgi?id=35532

Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-and-tested-by: Christian Casteyde <casteyde.christian@free.fr>
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=35532
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20110714131909.GJ3455@htj.dyndns.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
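
A sketch of the simplification, not the verbatim diff (the function name here is hypothetical):

    #include <linux/lockdep.h>
    #include <linux/string.h>

    void lockdep_init_map_sketch(struct lockdep_map *lock)
    {
            /*
             * One memset replaces the selective clearing, including the
             * old ->class_cache[] loop.
             */
            memset(lock, 0, sizeof(*lock));
            /* ... then set ->name, ->key, etc. as before ... */
    }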

Committed by Peter Zijlstra
On Sun, 2011-07-24 at 21:06 -0400, Arnaud Lacombe wrote:

> /src/linux/linux/kernel/lockdep.c: In function 'mark_held_locks':
> /src/linux/linux/kernel/lockdep.c:2471:31: warning: comparison of
> distinct pointer types lacks a cast

The warning is harmless in this case, but the below makes it go away.

Reported-by: Arnaud Lacombe <lacombar@gmail.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1311588599.2617.56.camel@laptop
Signed-off-by: Ingo Molnar <mingo@elte.hu>

Committed by Peter Zijlstra
Commit dd4e5d3a ("lockdep: Fix trace_[soft,hard]irqs_[on,off]() recursion") made a bit of a mess of the various checks and error conditions. In particular it moved the check for !irqs_disabled() before the spurious enable test, resulting in some warnings.

Reported-by: Arnaud Lacombe <lacombar@gmail.com>
Reported-by: Dave Jones <davej@redhat.com>
Reported-and-tested-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1311679697.24752.28.camel@twins
Signed-off-by: Ingo Molnar <mingo@elte.hu>

- 22 Jul 2011, 1 commit

Committed by Peter Zijlstra
Thomas noticed that a lock marked with lockdep_set_novalidate_class() will still trigger warnings for IRQ inversions. Cure this by skipping those when marking irq state.

Reported-and-tested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/n/tip-2dp5vmpsxeraqm42kgww6ge2@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
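
For context, how a lock opts out of validation; a minimal usage sketch (the lock is illustrative):

    #include <linux/lockdep.h>
    #include <linux/mutex.h>

    static DEFINE_MUTEX(opaque_lock);

    static void opaque_init(void)
    {
            /*
             * Exempt this lock from dependency validation; with this fix
             * it is skipped when marking IRQ state as well.
             */
            lockdep_set_novalidate_class(&opaque_lock);
    }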

- 22 Jun 2011, 1 commit

Committed by Peter Zijlstra
Commit 1efc5da3 ("[PATCH] order of lockdep off/on in vprintk() should be changed") explains the reason for having raw_local_irq_*() and lockdep_off() in printk(). Instead of working around the broken recursion detection of interrupt state tracking, fix it.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: efault@gmx.de
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/r/20110621153806.185242734@chello.nl
Signed-off-by: Ingo Molnar <mingo@elte.hu>