1. June 11, 2013 (13 commits)
    • rcu: Shrink TINY_RCU by moving exit_rcu() · 2439b696
      Committed by Paul E. McKenney
      Now that TINY_PREEMPT_RCU is no more, exit_rcu() is always an empty
      function.  But if TINY_RCU is going to have an empty function, it should
      be in include/linux/rcutiny.h, where it does not bloat the kernel.
      This commit therefore moves exit_rcu() out of kernel/rcupdate.c to
      kernel/rcutree_plugin.h, and places a static inline empty function in
      include/linux/rcutiny.h in order to shrink TINY_RCU a bit.
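
      The resulting TINY_RCU stub is trivial; a sketch consistent with the
      description above:

      static inline void exit_rcu(void)
      {
      }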
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
    • rcu: Consolidate rcutiny_plugin.h ifdefs · 318bdcd9
      Committed by Paul E. McKenney
      This commit rearranges code in order to allow ifdefs to be consolidated
      in kernel/rcutiny_plugin.h, simplifying the code.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
    • rcu: Remove check_cpu_stall_preempt() · 4879c84d
      Committed by Paul E. McKenney
      With the removal of CONFIG_TINY_PREEMPT_RCU, check_cpu_stall_preempt()
      is now an empty function.  This commit therefore eliminates it by
      inlining it.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
    • rcu: Simplify RCU_TINY RCU callback invocation · 9dc5ad32
      Committed by Paul E. McKenney
      TINY_PREEMPT_RCU could use a kthread to handle RCU callback invocation,
      which required an API to abstract kthread vs. softirq invocation.
      Now that TINY_PREEMPT_RCU is no longer with us, this commit retires
      this API in favor of direct use of the relevant softirq primitives.
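
      A sketch of the resulting call site (the old wrapper name is from the
      API being retired; the replacement is the plain softirq primitive):

      /* Before: wrapper that chose kthread vs. softirq invocation. */
      invoke_rcu_callbacks(rcp);

      /* After: direct use of the softirq primitive. */
      raise_softirq(RCU_SOFTIRQ);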
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
    • rcu: Remove rcu_preempt_process_callbacks() · 58c4e69d
      Committed by Paul E. McKenney
      With the removal of CONFIG_TINY_PREEMPT_RCU, rcu_preempt_process_callbacks()
      is now an empty function.  This commit therefore eliminates it by
      inlining it.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
    • rcu: Remove rcu_preempt_remove_callbacks() · 47d65935
      Committed by Paul E. McKenney
      With the removal of CONFIG_TINY_PREEMPT_RCU, rcu_preempt_remove_callbacks()
      is now an empty function.  This commit therefore eliminates it by
      inlining it.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
    • rcu: Remove rcu_preempt_check_callbacks() · 9acaac8c
      Committed by Paul E. McKenney
      With the removal of CONFIG_TINY_PREEMPT_RCU, rcu_preempt_check_callbacks()
      is now an empty function.  This commit therefore eliminates it by
      inlining it.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
    • rcu: Remove show_tiny_preempt_stats() · 221304e9
      Committed by Paul E. McKenney
      With the removal of CONFIG_TINY_PREEMPT_RCU, show_tiny_preempt_stats()
      is now an empty function.  This commit therefore eliminates it by
      inlining it.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
    • rcu: Remove TINY_PREEMPT_RCU · 127781d1
      Committed by Paul E. McKenney
      TINY_PREEMPT_RCU adds significant code and complexity, but does not
      offer commensurate benefits.  People currently using TINY_PREEMPT_RCU
      can get much better memory footprint with TINY_RCU, or, if they really
      need preemptible RCU, they can use TREE_PREEMPT_RCU with a relatively
      minor degradation in memory footprint.  Please note that this move
      has been widely publicized on LKML (https://lkml.org/lkml/2012/11/12/545)
      and on LWN (http://lwn.net/Articles/541037/).
      
      This commit therefore removes TINY_PREEMPT_RCU.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      [ paulmck: Updated to eliminate #else in rcutiny.h as suggested by Josh ]
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
    • rcu: Convert rcutree_plugin.h printk calls · efc151c3
      Committed by Paul E. McKenney
      This commit converts printk() calls to the corresponding pr_*() calls.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
    • rcu: Convert rcutree.c printk calls · d7f3e207
      Committed by Paul E. McKenney
      This commit converts printk() calls to the corresponding pr_*() calls.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
    • rcu: Fix deadlock with CPU hotplug, RCU GP init, and timer migration · 971394f3
      Committed by Paul E. McKenney
      In Steven Rostedt's words:
      
      > I've been debugging the last couple of days why my tests have been
      > locking up. One of my tracing tests, runs all available tracers. The
      > lockup always happened with the mmiotrace, which is used to trace
      > interactions between priority drivers and the kernel. But to do this
      > easily, when the tracer gets registered, it disables all but the boot
      > CPUs. The lockup always happened after it got done disabling the CPUs.
      >
      > Then I decided to try this:
      >
      > while :; do
      > 	for i in 1 2 3; do
      > 		echo 0 > /sys/devices/system/cpu/cpu$i/online
      > 	done
      > 	for i in 1 2 3; do
      > 		echo 1 > /sys/devices/system/cpu/cpu$i/online
      > 	done
      > done
      >
      > Well, sure enough, that locked up too, with the same users. Doing a
      > sysrq-w (showing all blocked tasks):
      >
      > [ 2991.344562]   task                        PC stack   pid father
      > [ 2991.344562] rcu_preempt     D ffff88007986fdf8     0    10      2 0x00000000
      > [ 2991.344562]  ffff88007986fc98 0000000000000002 ffff88007986fc48 0000000000000908
      > [ 2991.344562]  ffff88007986c280 ffff88007986ffd8 ffff88007986ffd8 00000000001d3c80
      > [ 2991.344562]  ffff880079248a40 ffff88007986c280 0000000000000000 00000000fffd4295
      > [ 2991.344562] Call Trace:
      > [ 2991.344562]  [<ffffffff815437ba>] schedule+0x64/0x66
      > [ 2991.344562]  [<ffffffff81541750>] schedule_timeout+0xbc/0xf9
      > [ 2991.344562]  [<ffffffff8154bec0>] ? ftrace_call+0x5/0x2f
      > [ 2991.344562]  [<ffffffff81049513>] ? cascade+0xa8/0xa8
      > [ 2991.344562]  [<ffffffff815417ab>] schedule_timeout_uninterruptible+0x1e/0x20
      > [ 2991.344562]  [<ffffffff810c980c>] rcu_gp_kthread+0x502/0x94b
      > [ 2991.344562]  [<ffffffff81062791>] ? __init_waitqueue_head+0x50/0x50
      > [ 2991.344562]  [<ffffffff810c930a>] ? rcu_gp_fqs+0x64/0x64
      > [ 2991.344562]  [<ffffffff81061cdb>] kthread+0xb1/0xb9
      > [ 2991.344562]  [<ffffffff81091e31>] ? lock_release_holdtime.part.23+0x4e/0x55
      > [ 2991.344562]  [<ffffffff81061c2a>] ? __init_kthread_worker+0x58/0x58
      > [ 2991.344562]  [<ffffffff8154c1dc>] ret_from_fork+0x7c/0xb0
      > [ 2991.344562]  [<ffffffff81061c2a>] ? __init_kthread_worker+0x58/0x58
      > [ 2991.344562] kworker/0:1     D ffffffff81a30680     0    47      2 0x00000000
      > [ 2991.344562] Workqueue: events cpuset_hotplug_workfn
      > [ 2991.344562]  ffff880078dbbb58 0000000000000002 0000000000000006 00000000000000d8
      > [ 2991.344562]  ffff880078db8100 ffff880078dbbfd8 ffff880078dbbfd8 00000000001d3c80
      > [ 2991.344562]  ffff8800779ca5c0 ffff880078db8100 ffffffff81541fcf 0000000000000000
      > [ 2991.344562] Call Trace:
      > [ 2991.344562]  [<ffffffff81541fcf>] ? __mutex_lock_common+0x3d4/0x609
      > [ 2991.344562]  [<ffffffff815437ba>] schedule+0x64/0x66
      > [ 2991.344562]  [<ffffffff81543a39>] schedule_preempt_disabled+0x18/0x24
      > [ 2991.344562]  [<ffffffff81541fcf>] __mutex_lock_common+0x3d4/0x609
      > [ 2991.344562]  [<ffffffff8103d11b>] ? get_online_cpus+0x3c/0x50
      > [ 2991.344562]  [<ffffffff8103d11b>] ? get_online_cpus+0x3c/0x50
      > [ 2991.344562]  [<ffffffff815422ff>] mutex_lock_nested+0x3b/0x40
      > [ 2991.344562]  [<ffffffff8103d11b>] get_online_cpus+0x3c/0x50
      > [ 2991.344562]  [<ffffffff810af7e6>] rebuild_sched_domains_locked+0x6e/0x3a8
      > [ 2991.344562]  [<ffffffff810b0ec6>] rebuild_sched_domains+0x1c/0x2a
      > [ 2991.344562]  [<ffffffff810b109b>] cpuset_hotplug_workfn+0x1c7/0x1d3
      > [ 2991.344562]  [<ffffffff810b0ed9>] ? cpuset_hotplug_workfn+0x5/0x1d3
      > [ 2991.344562]  [<ffffffff81058e07>] process_one_work+0x2d4/0x4d1
      > [ 2991.344562]  [<ffffffff81058d3a>] ? process_one_work+0x207/0x4d1
      > [ 2991.344562]  [<ffffffff8105964c>] worker_thread+0x2e7/0x3b5
      > [ 2991.344562]  [<ffffffff81059365>] ? rescuer_thread+0x332/0x332
      > [ 2991.344562]  [<ffffffff81061cdb>] kthread+0xb1/0xb9
      > [ 2991.344562]  [<ffffffff81061c2a>] ? __init_kthread_worker+0x58/0x58
      > [ 2991.344562]  [<ffffffff8154c1dc>] ret_from_fork+0x7c/0xb0
      > [ 2991.344562]  [<ffffffff81061c2a>] ? __init_kthread_worker+0x58/0x58
      > [ 2991.344562] bash            D ffffffff81a4aa80     0  2618   2612 0x10000000
      > [ 2991.344562]  ffff8800379abb58 0000000000000002 0000000000000006 0000000000000c2c
      > [ 2991.344562]  ffff880077fea140 ffff8800379abfd8 ffff8800379abfd8 00000000001d3c80
      > [ 2991.344562]  ffff8800779ca5c0 ffff880077fea140 ffffffff81541fcf 0000000000000000
      > [ 2991.344562] Call Trace:
      > [ 2991.344562]  [<ffffffff81541fcf>] ? __mutex_lock_common+0x3d4/0x609
      > [ 2991.344562]  [<ffffffff815437ba>] schedule+0x64/0x66
      > [ 2991.344562]  [<ffffffff81543a39>] schedule_preempt_disabled+0x18/0x24
      > [ 2991.344562]  [<ffffffff81541fcf>] __mutex_lock_common+0x3d4/0x609
      > [ 2991.344562]  [<ffffffff81530078>] ? rcu_cpu_notify+0x2f5/0x86e
      > [ 2991.344562]  [<ffffffff81530078>] ? rcu_cpu_notify+0x2f5/0x86e
      > [ 2991.344562]  [<ffffffff815422ff>] mutex_lock_nested+0x3b/0x40
      > [ 2991.344562]  [<ffffffff81530078>] rcu_cpu_notify+0x2f5/0x86e
      > [ 2991.344562]  [<ffffffff81091c99>] ? __lock_is_held+0x32/0x53
      > [ 2991.344562]  [<ffffffff81548912>] notifier_call_chain+0x6b/0x98
      > [ 2991.344562]  [<ffffffff810671fd>] __raw_notifier_call_chain+0xe/0x10
      > [ 2991.344562]  [<ffffffff8103cf64>] __cpu_notify+0x20/0x32
      > [ 2991.344562]  [<ffffffff8103cf8d>] cpu_notify_nofail+0x17/0x36
      > [ 2991.344562]  [<ffffffff815225de>] _cpu_down+0x154/0x259
      > [ 2991.344562]  [<ffffffff81522710>] cpu_down+0x2d/0x3a
      > [ 2991.344562]  [<ffffffff81526351>] store_online+0x4e/0xe7
      > [ 2991.344562]  [<ffffffff8134d764>] dev_attr_store+0x20/0x22
      > [ 2991.344562]  [<ffffffff811b3c5f>] sysfs_write_file+0x108/0x144
      > [ 2991.344562]  [<ffffffff8114c5ef>] vfs_write+0xfd/0x158
      > [ 2991.344562]  [<ffffffff8114c928>] SyS_write+0x5c/0x83
      > [ 2991.344562]  [<ffffffff8154c494>] tracesys+0xdd/0xe2
      >
      > As well as held locks:
      >
      > [ 3034.728033] Showing all locks held in the system:
      > [ 3034.728033] 1 lock held by rcu_preempt/10:
      > [ 3034.728033]  #0:  (rcu_preempt_state.onoff_mutex){+.+...}, at: [<ffffffff810c9471>] rcu_gp_kthread+0x167/0x94b
      > [ 3034.728033] 4 locks held by kworker/0:1/47:
      > [ 3034.728033]  #0:  (events){.+.+.+}, at: [<ffffffff81058d3a>] process_one_work+0x207/0x4d1
      > [ 3034.728033]  #1:  (cpuset_hotplug_work){+.+.+.}, at: [<ffffffff81058d3a>] process_one_work+0x207/0x4d1
      > [ 3034.728033]  #2:  (cpuset_mutex){+.+.+.}, at: [<ffffffff810b0ec1>] rebuild_sched_domains+0x17/0x2a
      > [ 3034.728033]  #3:  (cpu_hotplug.lock){+.+.+.}, at: [<ffffffff8103d11b>] get_online_cpus+0x3c/0x50
      > [ 3034.728033] 1 lock held by mingetty/2563:
      > [ 3034.728033]  #0:  (&ldata->atomic_read_lock){+.+...}, at: [<ffffffff8131e28a>] n_tty_read+0x252/0x7e8
      > [ 3034.728033] 1 lock held by mingetty/2565:
      > [ 3034.728033]  #0:  (&ldata->atomic_read_lock){+.+...}, at: [<ffffffff8131e28a>] n_tty_read+0x252/0x7e8
      > [ 3034.728033] 1 lock held by mingetty/2569:
      > [ 3034.728033]  #0:  (&ldata->atomic_read_lock){+.+...}, at: [<ffffffff8131e28a>] n_tty_read+0x252/0x7e8
      > [ 3034.728033] 1 lock held by mingetty/2572:
      > [ 3034.728033]  #0:  (&ldata->atomic_read_lock){+.+...}, at: [<ffffffff8131e28a>] n_tty_read+0x252/0x7e8
      > [ 3034.728033] 1 lock held by mingetty/2575:
      > [ 3034.728033]  #0:  (&ldata->atomic_read_lock){+.+...}, at: [<ffffffff8131e28a>] n_tty_read+0x252/0x7e8
      > [ 3034.728033] 7 locks held by bash/2618:
      > [ 3034.728033]  #0:  (sb_writers#5){.+.+.+}, at: [<ffffffff8114bc3f>] file_start_write+0x2a/0x2c
      > [ 3034.728033]  #1:  (&buffer->mutex#2){+.+.+.}, at: [<ffffffff811b3b93>] sysfs_write_file+0x3c/0x144
      > [ 3034.728033]  #2:  (s_active#54){.+.+.+}, at: [<ffffffff811b3c3e>] sysfs_write_file+0xe7/0x144
      > [ 3034.728033]  #3:  (x86_cpu_hotplug_driver_mutex){+.+.+.}, at: [<ffffffff810217c2>] cpu_hotplug_driver_lock+0x17/0x19
      > [ 3034.728033]  #4:  (cpu_add_remove_lock){+.+.+.}, at: [<ffffffff8103d196>] cpu_maps_update_begin+0x17/0x19
      > [ 3034.728033]  #5:  (cpu_hotplug.lock){+.+.+.}, at: [<ffffffff8103cfd8>] cpu_hotplug_begin+0x2c/0x6d
      > [ 3034.728033]  #6:  (rcu_preempt_state.onoff_mutex){+.+...}, at: [<ffffffff81530078>] rcu_cpu_notify+0x2f5/0x86e
      > [ 3034.728033] 1 lock held by bash/2980:
      > [ 3034.728033]  #0:  (&ldata->atomic_read_lock){+.+...}, at: [<ffffffff8131e28a>] n_tty_read+0x252/0x7e8
      >
      > Things looked a little weird. Also, this is a deadlock that lockdep did
      > not catch. But what we have here does not look like a circular lock
      > issue:
      >
      > Bash is blocked in rcu_cpu_notify():
      >
      > 1961		/* Exclude any attempts to start a new grace period. */
      > 1962		mutex_lock(&rsp->onoff_mutex);
      >
      >
      > kworker is blocked in get_online_cpus(), which makes sense as we are
      > currently taking down a CPU.
      >
      > But rcu_preempt is not blocked on anything. It is simply sleeping in
      > rcu_gp_kthread (really rcu_gp_init) here:
      >
      > 1453	#ifdef CONFIG_PROVE_RCU_DELAY
      > 1454			if ((prandom_u32() % (rcu_num_nodes * 8)) == 0 &&
      > 1455			    system_state == SYSTEM_RUNNING)
      > 1456				schedule_timeout_uninterruptible(2);
      > 1457	#endif /* #ifdef CONFIG_PROVE_RCU_DELAY */
      >
      > And it does this while holding the onoff_mutex that bash is waiting for.
      >
      > Doing a function trace, it showed me where it happened:
      >
      > [  125.940066] rcu_pree-10      3.... 28384115273: schedule_timeout_uninterruptible <-rcu_gp_kthread
      > [...]
      > [  125.940066] rcu_pree-10      3d..3 28384202439: sched_switch: prev_comm=rcu_preempt prev_pid=10 prev_prio=120 prev_state=D ==> next_comm=watchdog/3 next_pid=38 next_prio=120
      >
      > The watchdog ran, and then:
      >
      > [  125.940066] watchdog-38      3d..3 28384692863: sched_switch: prev_comm=watchdog/3 prev_pid=38 prev_prio=120 prev_state=P ==> next_comm=modprobe next_pid=2848 next_prio=118
      >
      > Not sure what modprobe was doing, but shortly after that:
      >
      > [  125.940066] modprobe-2848    3d..3 28385041749: sched_switch: prev_comm=modprobe prev_pid=2848 prev_prio=118 prev_state=R+ ==> next_comm=migration/3 next_pid=40 next_prio=0
      >
      > Where the migration thread took down the CPU:
      >
      > [  125.940066] migratio-40      3d..3 28389148276: sched_switch: prev_comm=migration/3 prev_pid=40 prev_prio=0 prev_state=P ==> next_comm=swapper/3 next_pid=0 next_prio=120
      >
      > which finally did:
      >
      > [  125.940066]   <idle>-0       3...1 28389282142: arch_cpu_idle_dead <-cpu_startup_entry
      > [  125.940066]   <idle>-0       3...1 28389282548: native_play_dead <-arch_cpu_idle_dead
      > [  125.940066]   <idle>-0       3...1 28389282924: play_dead_common <-native_play_dead
      > [  125.940066]   <idle>-0       3...1 28389283468: idle_task_exit <-play_dead_common
      > [  125.940066]   <idle>-0       3...1 28389284644: amd_e400_remove_cpu <-play_dead_common
      >
      >
      > CPU 3 is now offline, the rcu_preempt thread that ran on CPU 3 is still
      > doing a schedule_timeout_uninterruptible() and it registered its
      > timeout to the timer base for CPU 3. You would think that it would get
      > migrated right? The issue here is that the timer migration happens at
      > the CPU notifier for CPU_DEAD. The problem is that the rcu notifier for
      > CPU_DOWN is blocked waiting for the onoff_mutex to be released, which is
      > held by the thread that just put itself into an uninterruptible sleep,
      > which won't wake up until the CPU_DEAD notifier of the timer
      > infrastructure is called, which won't happen until the rcu notifier
      > finishes. Here's our deadlock!
      
      This commit breaks this deadlock cycle by substituting a shorter udelay()
      for the previous schedule_timeout_uninterruptible(), while at the same
      time increasing the probability of the delay.  This maintains the intensity
      of the testing.
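
      A sketch of the change in rcu_gp_init(), consistent with the
      description above (the exact constants are assumptions, not the
      verbatim diff):

      #ifdef CONFIG_PROVE_RCU_DELAY
      	if ((prandom_u32() % (rcu_num_nodes * 4)) == 0 &&
      	    system_state == SYSTEM_RUNNING)
      		udelay(200);	/* Busy-wait: arms no timer on this CPU. */
      #endif /* #ifdef CONFIG_PROVE_RCU_DELAY */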
      Reported-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Tested-by: Steven Rostedt <rostedt@goodmis.org>
    • rcu: Don't call wakeup() with rcu_node structure ->lock held · 016a8d5b
      Committed by Steven Rostedt
      This commit fixes a lockdep-detected deadlock by moving a wake_up()
      call out from a rnp->lock critical section.  Please see below for
      the long version of this story.
      
      On Tue, 2013-05-28 at 16:13 -0400, Dave Jones wrote:
      
      > [12572.705832] ======================================================
      > [12572.750317] [ INFO: possible circular locking dependency detected ]
      > [12572.796978] 3.10.0-rc3+ #39 Not tainted
      > [12572.833381] -------------------------------------------------------
      > [12572.862233] trinity-child17/31341 is trying to acquire lock:
      > [12572.870390]  (rcu_node_0){..-.-.}, at: [<ffffffff811054ff>] rcu_read_unlock_special+0x9f/0x4c0
      > [12572.878859]
      > but task is already holding lock:
      > [12572.894894]  (&ctx->lock){-.-...}, at: [<ffffffff811390ed>] perf_lock_task_context+0x7d/0x2d0
      > [12572.903381]
      > which lock already depends on the new lock.
      >
      > [12572.927541]
      > the existing dependency chain (in reverse order) is:
      > [12572.943736]
      > -> #4 (&ctx->lock){-.-...}:
      > [12572.960032]        [<ffffffff810b9851>] lock_acquire+0x91/0x1f0
      > [12572.968337]        [<ffffffff816ebc90>] _raw_spin_lock+0x40/0x80
      > [12572.976633]        [<ffffffff8113c987>] __perf_event_task_sched_out+0x2e7/0x5e0
      > [12572.984969]        [<ffffffff81088953>] perf_event_task_sched_out+0x93/0xa0
      > [12572.993326]        [<ffffffff816ea0bf>] __schedule+0x2cf/0x9c0
      > [12573.001652]        [<ffffffff816eacfe>] schedule_user+0x2e/0x70
      > [12573.009998]        [<ffffffff816ecd64>] retint_careful+0x12/0x2e
      > [12573.018321]
      > -> #3 (&rq->lock){-.-.-.}:
      > [12573.034628]        [<ffffffff810b9851>] lock_acquire+0x91/0x1f0
      > [12573.042930]        [<ffffffff816ebc90>] _raw_spin_lock+0x40/0x80
      > [12573.051248]        [<ffffffff8108e6a7>] wake_up_new_task+0xb7/0x260
      > [12573.059579]        [<ffffffff810492f5>] do_fork+0x105/0x470
      > [12573.067880]        [<ffffffff81049686>] kernel_thread+0x26/0x30
      > [12573.076202]        [<ffffffff816cee63>] rest_init+0x23/0x140
      > [12573.084508]        [<ffffffff81ed8e1f>] start_kernel+0x3f1/0x3fe
      > [12573.092852]        [<ffffffff81ed856f>] x86_64_start_reservations+0x2a/0x2c
      > [12573.101233]        [<ffffffff81ed863d>] x86_64_start_kernel+0xcc/0xcf
      > [12573.109528]
      > -> #2 (&p->pi_lock){-.-.-.}:
      > [12573.125675]        [<ffffffff810b9851>] lock_acquire+0x91/0x1f0
      > [12573.133829]        [<ffffffff816ebe9b>] _raw_spin_lock_irqsave+0x4b/0x90
      > [12573.141964]        [<ffffffff8108e881>] try_to_wake_up+0x31/0x320
      > [12573.150065]        [<ffffffff8108ebe2>] default_wake_function+0x12/0x20
      > [12573.158151]        [<ffffffff8107bbf8>] autoremove_wake_function+0x18/0x40
      > [12573.166195]        [<ffffffff81085398>] __wake_up_common+0x58/0x90
      > [12573.174215]        [<ffffffff81086909>] __wake_up+0x39/0x50
      > [12573.182146]        [<ffffffff810fc3da>] rcu_start_gp_advanced.isra.11+0x4a/0x50
      > [12573.190119]        [<ffffffff810fdb09>] rcu_start_future_gp+0x1c9/0x1f0
      > [12573.198023]        [<ffffffff810fe2c4>] rcu_nocb_kthread+0x114/0x930
      > [12573.205860]        [<ffffffff8107a91d>] kthread+0xed/0x100
      > [12573.213656]        [<ffffffff816f4b1c>] ret_from_fork+0x7c/0xb0
      > [12573.221379]
      > -> #1 (&rsp->gp_wq){..-.-.}:
      > [12573.236329]        [<ffffffff810b9851>] lock_acquire+0x91/0x1f0
      > [12573.243783]        [<ffffffff816ebe9b>] _raw_spin_lock_irqsave+0x4b/0x90
      > [12573.251178]        [<ffffffff810868f3>] __wake_up+0x23/0x50
      > [12573.258505]        [<ffffffff810fc3da>] rcu_start_gp_advanced.isra.11+0x4a/0x50
      > [12573.265891]        [<ffffffff810fdb09>] rcu_start_future_gp+0x1c9/0x1f0
      > [12573.273248]        [<ffffffff810fe2c4>] rcu_nocb_kthread+0x114/0x930
      > [12573.280564]        [<ffffffff8107a91d>] kthread+0xed/0x100
      > [12573.287807]        [<ffffffff816f4b1c>] ret_from_fork+0x7c/0xb0
      
      Notice the above call chain.
      
      rcu_start_future_gp() is called with the rnp->lock held. Then it calls
      rcu_start_gp_advanced(), which does a wakeup.
      
      You can't do wakeups while holding the rnp->lock, as that would mean
      that you could not do a rcu_read_unlock() while holding the rq lock, or
      any lock that was taken while holding the rq lock. This is because...
      (See below).
      
      > [12573.295067]
      > -> #0 (rcu_node_0){..-.-.}:
      > [12573.309293]        [<ffffffff810b8d36>] __lock_acquire+0x1786/0x1af0
      > [12573.316568]        [<ffffffff810b9851>] lock_acquire+0x91/0x1f0
      > [12573.323825]        [<ffffffff816ebc90>] _raw_spin_lock+0x40/0x80
      > [12573.331081]        [<ffffffff811054ff>] rcu_read_unlock_special+0x9f/0x4c0
      > [12573.338377]        [<ffffffff810760a6>] __rcu_read_unlock+0x96/0xa0
      > [12573.345648]        [<ffffffff811391b3>] perf_lock_task_context+0x143/0x2d0
      > [12573.352942]        [<ffffffff8113938e>] find_get_context+0x4e/0x1f0
      > [12573.360211]        [<ffffffff811403f4>] SYSC_perf_event_open+0x514/0xbd0
      > [12573.367514]        [<ffffffff81140e49>] SyS_perf_event_open+0x9/0x10
      > [12573.374816]        [<ffffffff816f4dd4>] tracesys+0xdd/0xe2
      
      Notice the above trace.
      
      perf took its own ctx->lock, which can be taken while holding the rq
      lock. While holding this lock, it did a rcu_read_unlock(). The
      perf_lock_task_context() basically looks like:
      
      rcu_read_lock();
      raw_spin_lock(ctx->lock);
      rcu_read_unlock();
      
      Now, what looks to have happened is that we scheduled after taking that
      first rcu_read_lock() but before taking the spin lock. When we scheduled
      back in and took the ctx->lock, the following rcu_read_unlock()
      triggered the "special" code.
      
      The rcu_read_unlock_special() takes the rnp->lock, which gives us a
      possible deadlock scenario.
      
      	CPU0		CPU1		CPU2
      	----		----		----
      
      				     rcu_nocb_kthread()
          lock(rq->lock);
      		    lock(ctx->lock);
      				     lock(rnp->lock);
      
      				     wake_up();
      
      				     lock(rq->lock);
      
      		    rcu_read_unlock();
      
      		    rcu_read_unlock_special();
      
      		    lock(rnp->lock);
          lock(ctx->lock);
      
      **** DEADLOCK ****
      
      > [12573.382068]
      > other info that might help us debug this:
      >
      > [12573.403229] Chain exists of:
      >   rcu_node_0 --> &rq->lock --> &ctx->lock
      >
      > [12573.424471]  Possible unsafe locking scenario:
      >
      > [12573.438499]        CPU0                    CPU1
      > [12573.445599]        ----                    ----
      > [12573.452691]   lock(&ctx->lock);
      > [12573.459799]                                lock(&rq->lock);
      > [12573.467010]                                lock(&ctx->lock);
      > [12573.474192]   lock(rcu_node_0);
      > [12573.481262]
      >  *** DEADLOCK ***
      >
      > [12573.501931] 1 lock held by trinity-child17/31341:
      > [12573.508990]  #0:  (&ctx->lock){-.-...}, at: [<ffffffff811390ed>] perf_lock_task_context+0x7d/0x2d0
      > [12573.516475]
      > stack backtrace:
      > [12573.530395] CPU: 1 PID: 31341 Comm: trinity-child17 Not tainted 3.10.0-rc3+ #39
      > [12573.545357]  ffffffff825b4f90 ffff880219f1dbc0 ffffffff816e375b ffff880219f1dc00
      > [12573.552868]  ffffffff816dfa5d ffff880219f1dc50 ffff88023ce4d1f8 ffff88023ce4ca40
      > [12573.560353]  0000000000000001 0000000000000001 ffff88023ce4d1f8 ffff880219f1dcc0
      > [12573.567856] Call Trace:
      > [12573.575011]  [<ffffffff816e375b>] dump_stack+0x19/0x1b
      > [12573.582284]  [<ffffffff816dfa5d>] print_circular_bug+0x200/0x20f
      > [12573.589637]  [<ffffffff810b8d36>] __lock_acquire+0x1786/0x1af0
      > [12573.596982]  [<ffffffff810918f5>] ? sched_clock_cpu+0xb5/0x100
      > [12573.604344]  [<ffffffff810b9851>] lock_acquire+0x91/0x1f0
      > [12573.611652]  [<ffffffff811054ff>] ? rcu_read_unlock_special+0x9f/0x4c0
      > [12573.619030]  [<ffffffff816ebc90>] _raw_spin_lock+0x40/0x80
      > [12573.626331]  [<ffffffff811054ff>] ? rcu_read_unlock_special+0x9f/0x4c0
      > [12573.633671]  [<ffffffff811054ff>] rcu_read_unlock_special+0x9f/0x4c0
      > [12573.640992]  [<ffffffff811390ed>] ? perf_lock_task_context+0x7d/0x2d0
      > [12573.648330]  [<ffffffff810b429e>] ? put_lock_stats.isra.29+0xe/0x40
      > [12573.655662]  [<ffffffff813095a0>] ? delay_tsc+0x90/0xe0
      > [12573.662964]  [<ffffffff810760a6>] __rcu_read_unlock+0x96/0xa0
      > [12573.670276]  [<ffffffff811391b3>] perf_lock_task_context+0x143/0x2d0
      > [12573.677622]  [<ffffffff81139070>] ? __perf_event_enable+0x370/0x370
      > [12573.684981]  [<ffffffff8113938e>] find_get_context+0x4e/0x1f0
      > [12573.692358]  [<ffffffff811403f4>] SYSC_perf_event_open+0x514/0xbd0
      > [12573.699753]  [<ffffffff8108cd9d>] ? get_parent_ip+0xd/0x50
      > [12573.707135]  [<ffffffff810b71fd>] ? trace_hardirqs_on_caller+0xfd/0x1c0
      > [12573.714599]  [<ffffffff81140e49>] SyS_perf_event_open+0x9/0x10
      > [12573.721996]  [<ffffffff816f4dd4>] tracesys+0xdd/0xe2
      
      This commit delays the wakeup via irq_work, which is what
      perf and ftrace use to perform wakeups in critical sections.
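
      A minimal sketch of the pattern (field and function names are
      illustrative, not necessarily the exact upstream ones):

      #include <linux/irq_work.h>

      static void rsp_wakeup(struct irq_work *work)
      {
      	struct rcu_state *rsp = container_of(work, struct rcu_state,
      					     wakeup_work);

      	/* Runs outside any rnp->lock critical section, so this is safe. */
      	wake_up(&rsp->gp_wq);
      }

      /* In rcu_start_gp_advanced(), instead of wake_up() under rnp->lock: */
      irq_work_queue(&rsp->wakeup_work);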
      Reported-by: Dave Jones <davej@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  2. June 9, 2013 (3 commits)
  3. June 7, 2013 (1 commit)
    • tracing: Use current_uid() for critical time tracing · f17a5194
      Committed by Steven Rostedt (Red Hat)
      The irqsoff tracer records the max time that interrupts are disabled.
      There are hooks in the assembly code that calls back into the tracer when
      interrupts are disabled or enabled.
      
      When they are enabled, the tracer checks if the amount of time they
      were disabled is larger than the previous recorded max interrupts off
      time. If it is, it creates a snapshot of the currently running trace
      to store where the last largest interrupts off time was held and how
      it happened.
      
      During testing, this RCU lockdep dump appeared:
      
      [ 1257.829021] ===============================
      [ 1257.829021] [ INFO: suspicious RCU usage. ]
      [ 1257.829021] 3.10.0-rc1-test+ #171 Tainted: G        W
      [ 1257.829021] -------------------------------
      [ 1257.829021] /home/rostedt/work/git/linux-trace.git/include/linux/rcupdate.h:780 rcu_read_lock() used illegally while idle!
      [ 1257.829021]
      [ 1257.829021] other info that might help us debug this:
      [ 1257.829021]
      [ 1257.829021]
      [ 1257.829021] RCU used illegally from idle CPU!
      [ 1257.829021] rcu_scheduler_active = 1, debug_locks = 0
      [ 1257.829021] RCU used illegally from extended quiescent state!
      [ 1257.829021] 2 locks held by trace-cmd/4831:
      [ 1257.829021]  #0:  (max_trace_lock){......}, at: [<ffffffff810e2b77>] stop_critical_timing+0x1a3/0x209
      [ 1257.829021]  #1:  (rcu_read_lock){.+.+..}, at: [<ffffffff810dae5a>] __update_max_tr+0x88/0x1ee
      [ 1257.829021]
      [ 1257.829021] stack backtrace:
      [ 1257.829021] CPU: 3 PID: 4831 Comm: trace-cmd Tainted: G        W    3.10.0-rc1-test+ #171
      [ 1257.829021] Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./To be filled by O.E.M., BIOS SDBLI944.86P 05/08/2007
      [ 1257.829021]  0000000000000001 ffff880065f49da8 ffffffff8153dd2b ffff880065f49dd8
      [ 1257.829021]  ffffffff81092a00 ffff88006bd78680 ffff88007add7500 0000000000000003
      [ 1257.829021]  ffff88006bd78680 ffff880065f49e18 ffffffff810daebf ffffffff810dae5a
      [ 1257.829021] Call Trace:
      [ 1257.829021]  [<ffffffff8153dd2b>] dump_stack+0x19/0x1b
      [ 1257.829021]  [<ffffffff81092a00>] lockdep_rcu_suspicious+0x109/0x112
      [ 1257.829021]  [<ffffffff810daebf>] __update_max_tr+0xed/0x1ee
      [ 1257.829021]  [<ffffffff810dae5a>] ? __update_max_tr+0x88/0x1ee
      [ 1257.829021]  [<ffffffff811002b9>] ? user_enter+0xfd/0x107
      [ 1257.829021]  [<ffffffff810dbf85>] update_max_tr_single+0x11d/0x12d
      [ 1257.829021]  [<ffffffff811002b9>] ? user_enter+0xfd/0x107
      [ 1257.829021]  [<ffffffff810e2b15>] stop_critical_timing+0x141/0x209
      [ 1257.829021]  [<ffffffff8109569a>] ? trace_hardirqs_on+0xd/0xf
      [ 1257.829021]  [<ffffffff811002b9>] ? user_enter+0xfd/0x107
      [ 1257.829021]  [<ffffffff810e3057>] time_hardirqs_on+0x2a/0x2f
      [ 1257.829021]  [<ffffffff811002b9>] ? user_enter+0xfd/0x107
      [ 1257.829021]  [<ffffffff8109550c>] trace_hardirqs_on_caller+0x16/0x197
      [ 1257.829021]  [<ffffffff8109569a>] trace_hardirqs_on+0xd/0xf
      [ 1257.829021]  [<ffffffff811002b9>] user_enter+0xfd/0x107
      [ 1257.829021]  [<ffffffff810029b4>] do_notify_resume+0x92/0x97
      [ 1257.829021]  [<ffffffff8154bdca>] int_signal+0x12/0x17
      
      What happened was that on entering user code, interrupts were enabled
      and a new max interrupts-off time was recorded. The trace buffer was
      saved along with various information about the task: comm, pid, uid,
      priority, etc.
      
      The uid is recorded with task_uid(tsk). But this is a macro that uses
      rcu_read_lock() to retrieve the data, and here it happened to run where
      RCU is blind (user_enter).
      
      As only the preempt and irqs-off tracers can hit this, and both always
      have tsk == current, use current_uid() instead of task_uid() when
      tsk == current, as current_uid() does not use RCU because only current
      can change its own uid.
      
      This fixes the RCU suspicious splat.
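
      A sketch of the resulting logic in __update_max_tr() (exact placement
      assumed from the description):

      /* task_uid() takes rcu_read_lock(), which is illegal while RCU
       * regards this CPU as idle; current_uid() reads current's own
       * credentials and needs no RCU protection. */
      if (tsk == current)
      	max_data->uid = current_uid();
      else
      	max_data->uid = task_uid(tsk);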
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  4. May 30, 2013 (1 commit)
  5. May 29, 2013 (4 commits)
    • ftrace: Use the rcu _notrace variants for rcu_dereference_raw() and friends · 1bb539ca
      Committed by Steven Rostedt
      Since rcu_dereference_raw() can add quite a few checks under the RCU
      debug config options, and tracing uses rcu_dereference_raw(), these
      checks run inside the function tracer. The function tracer then also
      traces the debug checks themselves. This added overhead can livelock
      the system.
      
      Have the function tracer use the new RCU _notrace equivalents that do
      not do the debug checks for RCU.
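
      A sketch of the substitution (ftrace_ops_list is one such traced
      pointer; the exact call sites vary):

      /* Before: debug-checking accessor, itself traced and livelock-prone. */
      op = rcu_dereference_raw(ftrace_ops_list);

      /* After: the _notrace variant skips the RCU debug checks. */
      op = rcu_dereference_raw_notrace(ftrace_ops_list);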
      
      Link: http://lkml.kernel.org/r/20130528184209.467603904@goodmis.org
      Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • cgroup: warn about mismatching options of a new mount of an existing hierarchy · 2a0ff3fb
      Committed by Jeff Liu
      When the new __DEVEL__sane_behavior mount option was introduced, the
      design was that if the root cgroup is alive with no xattr function,
      mounting a new cgroup with xattr is rejected, which is just fine.
      However, if the root cgroup is not mounted with __DEVEL__sane_behavior,
      creating a new cgroup with the xattr option will succeed, although the
      EA function then does not work as expected: setting attributes under
      either cgroup fails with ENOTSUPP. e.g.
      
      setfattr: /cgroup2/test: Operation not supported
      
      Instead of keeping silent in this case, it is better to emit a
      warning-level log entry. That helps users understand the reason behind
      the scenes, and it is essentially an improvement that does not break
      backward compatibility.

      With this fix, the above mount attempt keeps working as usual, but the
      following line can be found in the system log:
      
      [ ...] cgroup: new mount options do not match the existing superblock
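
      A sketch of the added warning (the condition name is hypothetical; the
      message text is taken from the log line above):

      if (new_opts_do_not_match)	/* hypothetical condition */
      	pr_warning("cgroup: new mount options do not match the existing superblock, will be ignored\n");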
      
      tj: minor formatting / message updates.
      Signed-off-by: Jie Liu <jeff.liu@oracle.com>
      Reported-by: Alexey Kodanev <alexey.kodanev@oracle.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: stable@vger.kernel.org
    • timekeeping: Correct run-time detection of persistent_clock. · 0d6bd995
      Committed by Zoran Markovic
      Since commit 31ade306, timekeeping_init()
      checks for the presence of a persistent clock by attempting to read a
      non-zero time value. This is an issue on platforms where
      persistent_clock is implemented as a free-running counter (instead of
      an RTC) starting from zero on each boot and running during suspend.
      Examples are some ARM platforms (e.g. PandaBoard).
      
      An attempt to read such a clock during timekeeping_init() may return zero
      value and falsely declare persistent clock as missing. Additionally, in
      the above case suspend times may be accounted twice (once from
      timekeeping_resume() and once from rtc_resume()), resulting in a gradual
      drift of system time.
      
      This patch does a run-time correction of the issue by doing the same check
      during timekeeping_suspend().
      
      A better long-term solution would have to return error when trying to read
      non-existing clock and zero when trying to read an uninitialized clock, but
      that would require changing all persistent_clock implementations.
      
      This patch addresses the immediate breakage, for now.
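
      A sketch of the suspend-time check (the flag name is assumed; it
      mirrors the timekeeping_init() detection):

      /* In timekeeping_suspend(): a non-zero readout proves the
       * persistent clock exists even if it read zero at boot. */
      read_persistent_clock(&timekeeping_suspend_time);
      if (timekeeping_suspend_time.tv_sec || timekeeping_suspend_time.tv_nsec)
      	persistent_clock_exist = true;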
      
      Cc: John Stultz <john.stultz@linaro.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Feng Tang <feng.tang@intel.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Zoran Markovic <zoran.markovic@linaro.org>
      [jstultz: Tweaked commit message and subject]
      Signed-off-by: John Stultz <john.stultz@linaro.org>
    • ntp: Remove unused variable flags in __hardpps · aa848233
      Committed by Geert Uytterhoeven
      kernel/time/ntp.c: In function ‘__hardpps’:
      kernel/time/ntp.c:877: warning: unused variable ‘flags’
      
      commit a076b214 ("ntp: Remove ntp_lock,
      using the timekeeping locks to protect ntp state") removed its users,
      but not the actual variable.
      Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
  6. May 28, 2013 (2 commits)
    • ring-buffer: Do not poll non allocated cpu buffers · 6721cb60
      Committed by Steven Rostedt (Red Hat)
      The tracing infrastructure sets up buffers for all possible CPUs, but
      when ring buffer polling is used, it is possible to call the polling
      code with a CPU whose buffer hasn't been allocated. This causes a
      kernel oops when it accesses a ring buffer cpu buffer that is part of
      the possible cpus but hasn't been allocated yet, because that CPU has
      never been online.
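
      A sketch of the guard added to the poll path (the exact return value
      is an assumption):

      /* CPU was never online: its per-cpu buffer was never allocated. */
      if (!cpumask_test_cpu(cpu, buffer->cpumask))
      	return -EINVAL;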
      Reported-by: Mauro Carvalho Chehab <mchehab@redhat.com>
      Tested-by: Mauro Carvalho Chehab <mchehab@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tick: Cure broadcast false positive pending bit warning · 2938d275
      Committed by Thomas Gleixner
      commit 26517f3e (tick: Avoid programming the local cpu timer if
      broadcast pending) added a warning if the cpu enters broadcast mode
      again while the pending bit is still set. Meelis reported that the
      warning triggers. There are two corner cases which had not been
      considered:
      
      1) cpuidle calls clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_ENTER)
         twice. That can result in the following scenario
      
         CPU0                    CPU1
                                 cpuidle_idle_call()
                                   clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_ENTER)
                                     set cpu in tick_broadcast_oneshot_mask
      
         broadcast interrupt
           event expired for cpu1
           set pending bit
      
                                   acpi_idle_enter_simple()
                                     clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_ENTER)
                                       WARN_ON(pending bit)
      
        Move the WARN_ON into the section where we enter broadcast mode so
        it won't provide false positives on the second call.
      
      2) safe_halt() enables interrupts, so a broadcast interrupt can be
         delivered before broadcast mode is disabled. That sets the
         pending bit for the CPU which receives the broadcast
         interrupt, but the interrupt is handled right away by the
         broadcast handler, which leaves the pending bit stale.

         Clear the pending bit for the current cpu in the broadcast
         handler, as sketched below.
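
      A sketch of that second fix (the mask name comes from the pending-bit
      logic introduced by 26517f3e; exact placement is an assumption):

      /* In the broadcast handler: the interrupt was just delivered
       * directly to this CPU, so any pending bit for it is stale. */
      cpumask_clear_cpu(smp_processor_id(), tick_broadcast_pending_mask);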
      Reported-and-tested-by: Meelis Roos <mroos@linux.ee>
      Cc: Len Brown <lenb@kernel.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Rafael J. Wysocki <rjw@sisk.pl>
      Link: http://lkml.kernel.org/r/alpine.LFD.2.02.1305271841130.4220@ionos
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  7. May 25, 2013 (1 commit)
  8. May 24, 2013 (1 commit)
    • cgroup: fix a subtle bug in descendant pre-order walk · 7805d000
      Committed by Tejun Heo
      When cgroup_next_descendant_pre() initiates a walk, it checks whether
      the subtree root has any children and, if not, returns NULL.
      Later code assumes that the subtree isn't empty.  This is broken
      because the subtree may become empty in between, which can lead to the
      traversal escaping the subtree by walking to the sibling of the
      subtree root.
      
      There's no reason to have the early exit path.  Remove it along with
      the later assumption that the subtree isn't empty.  This simplifies
      the code a bit and fixes the subtle bug.
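
      For reference, here is what a pre-order "next" step that cannot escape
      the subtree looks like in general (illustrative code, not the cgroup
      implementation):

      struct node {
      	struct node *parent, *first_child, *next_sibling;
      };

      /* Return the next node in pre-order, never leaving @root's subtree;
       * returns NULL when the walk is complete.  An empty subtree simply
       * yields no nodes -- no early-exit special case is needed. */
      static struct node *preorder_next(struct node *pos, struct node *root)
      {
      	if (pos->first_child)
      		return pos->first_child;

      	while (pos != root) {
      		if (pos->next_sibling)
      			return pos->next_sibling;
      		pos = pos->parent;
      	}
      	return NULL;
      }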
      
      While at it, fix the comment of cgroup_for_each_descendant_pre() which
      was incorrectly referring to ->css_offline() instead of
      ->css_online().
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Michal Hocko <mhocko@suse.cz>
      Cc: stable@vger.kernel.org
  9. May 23, 2013 (1 commit)
    • tracing: Fix crash when ftrace=nop on the kernel command line · ca164318
      Committed by Steven Rostedt (Red Hat)
      If ftrace=<tracer> is on the kernel command line, then when that tracer
      is registered, tracing_set_tracer() is called to start executing that
      tracer.
      
      The nop tracer is just a stub tracer that is used to have no tracer
      enabled. It is assigned at early bootup as it is the default tracer.
      
      But if ftrace=nop is on the kernel command line, registering the
      nop tracer will call tracing_set_tracer(), which will try to execute
      the nop tracer. That code expects tr->current_trace to already be
      assigned something, as it is usually assigned the nop tracer. Since it
      hasn't been assigned anything yet, the system crashes.
      
      The simple fix is to move the tr->current_trace = nop assignment before
      registering the nop tracer, as sketched below. The functionality is
      still the same, as the nop tracer doesn't do anything anyway.
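
      A sketch of the reordering (a simplified view of the early-init code;
      tr is the global trace_array):

      /* Assign before register_tracer(&nop_trace) runs, so that
       * tracing_set_tracer() never sees a NULL current_trace. */
      tr->current_trace = &nop_trace;
      register_tracer(&nop_trace);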
      Reported-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  10. May 18, 2013 (1 commit)
    • x86, range: fix missing merge during add range · fbe06b7b
      Committed by Yinghai Lu
      Christian found that v3.9 does not work on an E350 when EFI is enabled.
      
      [    1.658832] Trying to unpack rootfs image as initramfs...
      [    1.679935] BUG: unable to handle kernel paging request at ffff88006e3fd000
      [    1.686940] IP: [<ffffffff813661df>] memset+0x1f/0xb0
      [    1.692010] PGD 1f77067 PUD 1f7a067 PMD 61420067 PTE 0
      
      but early memtest reported that all memory could be accessed without problems.
      
      The early page table is set up in the following sequence:
      [    0.000000] init_memory_mapping: [mem 0x00000000-0x000fffff]
      [    0.000000] init_memory_mapping: [mem 0x6e600000-0x6e7fffff]
      [    0.000000] init_memory_mapping: [mem 0x6c000000-0x6e5fffff]
      [    0.000000] init_memory_mapping: [mem 0x00100000-0x6bffffff]
      [    0.000000] init_memory_mapping: [mem 0x6e800000-0x6ea07fff]
      but later efi_enter_virtual_mode tries to set up the mapping again, wrongly.
      [    0.010644] pid_max: default: 32768 minimum: 301
      [    0.015302] init_memory_mapping: [mem 0x640c5000-0x6e3fcfff]
      which means the pfn_range_is_mapped check fails.
      
      It turns out that we have a bug in add_range_with_merge: it does not
      merge ranges properly when a newly added range fills the hole between
      two existing ranges. In this case, [mem 0x00100000-0x6bffffff] is the
      hole between [mem 0x00000000-0x000fffff] and [mem 0x6c000000-0x6e7fffff].
      
      Fix add_range_with_merge() by making it call itself recursively (see the sketch below).
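
      A sketch of the recursive merge (simplified from the description, not
      the verbatim upstream diff):

      static int add_range_with_merge(struct range *range, int az,
      				int nr_range, u64 start, u64 end)
      {
      	int i;

      	if (start >= end)
      		return nr_range;

      	/* Try to merge with an existing range. */
      	for (i = 0; i < nr_range; i++) {
      		u64 common_start, common_end;

      		if (!range[i].end)
      			continue;

      		common_start = max(range[i].start, start);
      		common_end = min(range[i].end, end);
      		if (common_start > common_end)
      			continue;	/* No overlap. */

      		/* Absorb the old range and recurse, so a range that
      		 * fills the hole between two existing ranges gets
      		 * merged with both of them. */
      		start = min(range[i].start, start);
      		end = max(range[i].end, end);

      		range[i] = range[nr_range - 1];
      		range[nr_range - 1].start = 0;
      		range[nr_range - 1].end = 0;
      		nr_range--;
      		return add_range_with_merge(range, az, nr_range,
      					    start, end);
      	}

      	/* No overlap: add it as a new range. */
      	return add_range(range, az, nr_range, start, end);
      }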
      Reported-by: N"Christian König" <christian.koenig@amd.com>
      Signed-off-by: NYinghai Lu <yinghai@kernel.org>
      Link: http://lkml.kernel.org/r/CAE9FiQVofGoSk7q5-0irjkBxemqK729cND4hov-1QCBJDhxpgQ@mail.gmail.com
      Cc: <stable@vger.kernel.org> v3.9
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
  11. May 17, 2013 (3 commits)
  12. May 16, 2013 (7 commits)
    • tracing: Return -EBUSY when event_enable_func() fails to get module · 6ed01066
      Committed by Masami Hiramatsu
      Since try_module_get() returns false (= 0) when it fails to
      pin down a module, event_enable_func() returns 0, which means
      "success". This can cause a kernel panic when the entry
      is removed, because the event has already been released.
      
      This fixes the bug by returning -EBUSY, because the reason
      why it fails is that the module is being removed at that time.
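
      A sketch of the fix (the field path to the module is an assumption):

      ret = -EBUSY;
      if (!try_module_get(file->event_call->mod))
      	goto out;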
      
      Link: http://lkml.kernel.org/r/20130516114848.13508.97899.stgit@mhiramat-M0-7522
      
      Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Tom Zanussi <tom.zanussi@intel.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • workqueue: don't perform NUMA-aware allocations on offline nodes in wq_numa_init() · 1be0c25d
      Committed by Tejun Heo
      wq_numa_init() builds per-node cpumasks which are later used to make
      unbound workqueues NUMA-aware.  The cpumasks are allocated using
      alloc_cpumask_var_node() for all possible nodes.  Unfortunately, on
      machines with off-line nodes, this leads to NUMA-aware allocations on
      those offline nodes, which in turn triggers a BUG in the memory
      allocation code.
      
      Fix it by using NUMA_NO_NODE for cpumask allocations for offline
      nodes.
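
      A sketch of the fix in wq_numa_init() (the table name is an
      assumption):

      for_each_node(node)
      	BUG_ON(!alloc_cpumask_var_node(&tbl[node], GFP_KERNEL,
      			node_online(node) ? node : NUMA_NO_NODE));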
      
        kernel BUG at include/linux/gfp.h:323!
        invalid opcode: 0000 [#1] SMP
        Modules linked in:
        CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.9.0+ #1
        Hardware name: ProLiant BL465c G7, BIOS A19 12/10/2011
        task: ffff880234608000 ti: ffff880234602000 task.ti: ffff880234602000
        RIP: 0010:[<ffffffff8117495d>]  [<ffffffff8117495d>] new_slab+0x2ad/0x340
        RSP: 0000:ffff880234603bf8  EFLAGS: 00010246
        RAX: 0000000000000000 RBX: ffff880237404b40 RCX: 00000000000000d0
        RDX: 0000000000000001 RSI: 0000000000000003 RDI: 00000000002052d0
        RBP: ffff880234603c28 R08: 0000000000000000 R09: 0000000000000001
        R10: 0000000000000001 R11: ffffffff812e3aa8 R12: 0000000000000001
        R13: ffff8802378161c0 R14: 0000000000030027 R15: 00000000000040d0
        FS:  0000000000000000(0000) GS:ffff880237800000(0000) knlGS:0000000000000000
        CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
        CR2: ffff88043fdff000 CR3: 00000000018d5000 CR4: 00000000000007f0
        DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
        DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
        Stack:
         ffff880234603c28 0000000000000001 00000000000000d0 ffff8802378161c0
         ffff880237404b40 ffff880237404b40 ffff880234603d28 ffffffff815edba1
         ffff880237816140 0000000000000000 ffff88023740e1c0
        Call Trace:
         [<ffffffff815edba1>] __slab_alloc+0x330/0x4f2
         [<ffffffff81174b25>] kmem_cache_alloc_node_trace+0xa5/0x200
         [<ffffffff812e3aa8>] alloc_cpumask_var_node+0x28/0x90
         [<ffffffff81a0bdb3>] wq_numa_init+0x10d/0x1be
         [<ffffffff81a0bec8>] init_workqueues+0x64/0x341
         [<ffffffff810002ea>] do_one_initcall+0xea/0x1a0
         [<ffffffff819f1f31>] kernel_init_freeable+0xb7/0x1ec
         [<ffffffff815d50de>] kernel_init+0xe/0xf0
         [<ffffffff815ff89c>] ret_from_fork+0x7c/0xb0
        Code: 45  84 ac 00 00 00 f0 41 80 4d 00 40 e9 f6 fe ff ff 66 0f 1f 84 00 00 00 00 00 e8 eb 4b ff ff 49 89 c5 e9 05 fe ff ff <0f> 0b 4c 8b 73 38 44 89 ff 81 cf 00 00 20 00 4c 89 f6 48 c1 ee
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-and-Tested-by: Lingzhu Xiang <lxiang@redhat.com>
    • tracing/kprobes: Make print_*probe_event static · b62fdd97
      Committed by Masami Hiramatsu
      According to a sparse warning, make print_*probe_event static, because
      those functions are not directly called from outside.
      
      Link: http://lkml.kernel.org/r/20130513115839.6545.83067.stgit@mhiramat-M0-7522
      
      Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Tom Zanussi <tom.zanussi@intel.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing/kprobes: Fix a sparse warning for incorrect type in assignment · 3d1fc7b0
      Committed by Masami Hiramatsu
      Fix a sparse warning about an RCU-operated pointer being defined
      without the __rcu address space annotation.
      
      Link: http://lkml.kernel.org/r/20130513115837.6545.23322.stgit@mhiramat-M0-7522
      
      Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Tom Zanussi <tom.zanussi@intel.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing/kprobes: Use rcu_dereference_raw for tp->files · c02c7e65
      Committed by Masami Hiramatsu
      Use rcu_dereference_raw() for accessing tp->files. Because the
      write side uses rcu_assign_pointer() for its memory barrier,
      the read side also has to use rcu_dereference_raw() with the
      corresponding read memory barrier.
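
      A sketch of the read side (the traversal details around tp->files are
      elided):

      struct ftrace_event_file **file;

      /* Paired with the writer's rcu_assign_pointer(tp->files, ...). */
      file = rcu_dereference_raw(tp->files);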
      
      Link: http://lkml.kernel.org/r/20130513115834.6545.17022.stgit@mhiramat-M0-7522
      
      Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Tom Zanussi <tom.zanussi@intel.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Fix leaks of filter preds · 60705c89
      Committed by Steven Rostedt (Red Hat)
      Special preds are created when folding a series of preds that
      can be evaluated in series. These are allocated in the ops field of
      the pred structure. But they were never freed, causing memory
      leaks.
      
      This was discovered using the kmemleak checker:
      
      unreferenced object 0xffff8800797fd5e0 (size 32):
        comm "swapper/0", pid 1, jiffies 4294690605 (age 104.608s)
        hex dump (first 32 bytes):
          00 00 01 00 03 00 05 00 07 00 09 00 0b 00 0d 00  ................
          00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
        backtrace:
          [<ffffffff814b52af>] kmemleak_alloc+0x73/0x98
          [<ffffffff8111ff84>] kmemleak_alloc_recursive.constprop.42+0x16/0x18
          [<ffffffff81120e68>] __kmalloc+0xd7/0x125
          [<ffffffff810d47eb>] kcalloc.constprop.24+0x2d/0x2f
          [<ffffffff810d4896>] fold_pred_tree_cb+0xa9/0xf4
          [<ffffffff810d3781>] walk_pred_tree+0x47/0xcc
          [<ffffffff810d5030>] replace_preds.isra.20+0x6f8/0x72f
          [<ffffffff810d50b5>] create_filter+0x4e/0x8b
          [<ffffffff81b1c30d>] ftrace_test_event_filter+0x5a/0x155
          [<ffffffff8100028d>] do_one_initcall+0xa0/0x137
          [<ffffffff81afbedf>] kernel_init_freeable+0x14d/0x1dc
          [<ffffffff814b24b7>] kernel_init+0xe/0xdb
          [<ffffffff814d539c>] ret_from_fork+0x7c/0xb0
          [<ffffffffffffffff>] 0xffffffffffffffff
      
      Cc: Tom Zanussi <tzanussi@gmail.com>
      Cc: stable@vger.kernel.org # 2.6.39+
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • rcu: Don't allocate bootmem from rcu_init() · 615ee544
      Committed by Sasha Levin
      When rcu_init() is called we already have slab working, allocating
      bootmem at that point results in warnings and an allocation from
      slab.  This commit therefore changes alloc_bootmem_cpumask_var() to
      alloc_cpumask_var() in rcu_bootup_announce_oddness(), which is called
      from rcu_init().
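
      A sketch of the change (the nocb mask is the cpumask allocated there;
      GFP_KERNEL per the zalloc conversion noted below):

      /* Before: bootmem API, wrong once slab is up. */
      alloc_bootmem_cpumask_var(&rcu_nocb_mask);

      /* After: regular zeroed cpumask allocation. */
      zalloc_cpumask_var(&rcu_nocb_mask, GFP_KERNEL);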
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
      Tested-by: Robin Holt <holt@sgi.com>
      
      [paulmck: convert to zalloc_cpumask_var(), as suggested by Yinghai Lu.]
  13. May 15, 2013 (2 commits)