1. 25 Feb 2016 (2 commits)
    • rcu: Use simple wait queues where possible in rcutree · abedf8e2
      Paul Gortmaker committed
      As of commit dae6e64d ("rcu: Introduce proper blocking to no-CBs kthreads
      GP waits") the RCU subsystem started making use of wait queues.
      
      Here we convert all additions of RCU wait queues to use simple wait queues,
      since they don't need the extra overhead of the full wait queue features.
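
      As a rough sketch of the conversion pattern (the exact call sites
      live in the RCU tree code; the swait API names below come from
      <linux/swait.h> as added earlier in this series, and rsp->gp_wq is
      one of the converted waiters):

        -       wait_queue_head_t gp_wq;             /* full wait queue   */
        +       struct swait_queue_head gp_wq;       /* simple wait queue */

        -       init_waitqueue_head(&rsp->gp_wq);
        +       init_swait_queue_head(&rsp->gp_wq);

        -       wake_up(&rsp->gp_wq);                /* sleeping spinlock on RT */
        +       swake_up(&rsp->gp_wq);               /* raw spinlock: safe in
                                                        atomic context on RT   */

        -       wait_event_interruptible(rsp->gp_wq, cond);
        +       swait_event_interruptible(rsp->gp_wq, cond);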
      
      Originally this was done for RT kernels[1], since we would get things like...
      
        BUG: sleeping function called from invalid context at kernel/rtmutex.c:659
        in_atomic(): 1, irqs_disabled(): 1, pid: 8, name: rcu_preempt
        Pid: 8, comm: rcu_preempt Not tainted
        Call Trace:
         [<ffffffff8106c8d0>] __might_sleep+0xd0/0xf0
         [<ffffffff817d77b4>] rt_spin_lock+0x24/0x50
         [<ffffffff8106fcf6>] __wake_up+0x36/0x70
         [<ffffffff810c4542>] rcu_gp_kthread+0x4d2/0x680
         [<ffffffff8105f910>] ? __init_waitqueue_head+0x50/0x50
         [<ffffffff810c4070>] ? rcu_gp_fqs+0x80/0x80
         [<ffffffff8105eabb>] kthread+0xdb/0xe0
         [<ffffffff8106b912>] ? finish_task_switch+0x52/0x100
         [<ffffffff817e0754>] kernel_thread_helper+0x4/0x10
         [<ffffffff8105e9e0>] ? __init_kthread_worker+0x60/0x60
         [<ffffffff817e0750>] ? gs_change+0xb/0xb
      
      ...and hence simple wait queues were deployed on RT out of necessity
      (as simple wait queues use a raw lock), but mainline might as well
      take advantage of the more streamlined support as well.
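
      For reference, the simple wait queue head is just a raw spinlock
      plus a list of waiters (a sketch matching <linux/swait.h> as
      introduced by this series):

        struct swait_queue_head {
                raw_spinlock_t          lock;       /* raw: never sleeps, even on RT */
                struct list_head        task_list;  /* waiters (struct swait_queue)  */
        };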
      
      [1] This is a carry forward of work from v3.10-rt; the original conversion
      was by Thomas on an earlier -rt version, and Sebastian extended it to
      additional post-3.10 added RCU waiters; here I've added a commit log and
      unified the RCU changes into one, and uprev'd it to match mainline RCU.
      Signed-off-by: Daniel Wagner <daniel.wagner@bmw-carit.de>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: linux-rt-users@vger.kernel.org
      Cc: Boqun Feng <boqun.feng@gmail.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Link: http://lkml.kernel.org/r/1455871601-27484-6-git-send-email-wagi@monom.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • rcu: Do not call rcu_nocb_gp_cleanup() while holding rnp->lock · 065bb78c
      Daniel Wagner committed
      rcu_nocb_gp_cleanup() is called while holding rnp->lock. Currently,
      this is okay because the wake_up_all() in rcu_nocb_gp_cleanup() does
      not enable IRQs, so lockdep is happy.
      
      By switching over to swait this is no longer true: swake_up_all()
      enables IRQs while processing the waiters. __do_softirq() can then
      run and will eventually call rcu_process_callbacks(), which wants to
      grab rnp->lock.
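
      The window comes from swake_up_all() itself; a condensed sketch of
      its wake loop (per kernel/sched/swait.c as added by this series):

        raw_spin_lock_irq(&q->lock);
        list_splice_init(&q->task_list, &tmp);
        while (!list_empty(&tmp)) {
                curr = list_first_entry(&tmp, typeof(*curr), task_list);
                wake_up_state(curr->task, TASK_NORMAL);
                list_del_init(&curr->task_list);
                if (list_empty(&tmp))
                        break;
                /* Drop the lock between waiters; unlock_irq re-enables
                 * IRQs, opening the window for __do_softirq() to run. */
                raw_spin_unlock_irq(&q->lock);
                raw_spin_lock_irq(&q->lock);
        }
        raw_spin_unlock_irq(&q->lock);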
      
      Let's move the rcu_nocb_gp_cleanup() call outside the lock before we
      switch over to swait.
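
      A condensed sketch of the resulting order in rcu_gp_cleanup() (this
      patch adds rcu_nocb_gp_get() to pick up the wait queue while the
      lock is still held, and defers the actual wakeup):

        raw_spin_lock_irq(&rnp->lock);
        ...
        sq = rcu_nocb_gp_get(rnp);      /* fetch wait queue under rnp->lock */
        raw_spin_unlock_irq(&rnp->lock);
        rcu_nocb_gp_cleanup(sq);        /* wake waiters with the lock dropped
                                           and IRQs enabled */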
      
      If we were to hold rnp->lock across the swait wakeup, lockdep
      reports the following:
      
       =================================
       [ INFO: inconsistent lock state ]
       4.2.0-rc5-00025-g9a73ba0 #136 Not tainted
       ---------------------------------
       inconsistent {IN-SOFTIRQ-W} -> {SOFTIRQ-ON-W} usage.
       rcu_preempt/8 [HC0[0]:SC0[0]:HE1:SE1] takes:
        (rcu_node_1){+.?...}, at: [<ffffffff811387c7>] rcu_gp_kthread+0xb97/0xeb0
       {IN-SOFTIRQ-W} state was registered at:
         [<ffffffff81109b9f>] __lock_acquire+0xd5f/0x21e0
         [<ffffffff8110be0f>] lock_acquire+0xdf/0x2b0
         [<ffffffff81841cc9>] _raw_spin_lock_irqsave+0x59/0xa0
         [<ffffffff81136991>] rcu_process_callbacks+0x141/0x3c0
         [<ffffffff810b1a9d>] __do_softirq+0x14d/0x670
         [<ffffffff810b2214>] irq_exit+0x104/0x110
         [<ffffffff81844e96>] smp_apic_timer_interrupt+0x46/0x60
         [<ffffffff81842e70>] apic_timer_interrupt+0x70/0x80
         [<ffffffff810dba66>] rq_attach_root+0xa6/0x100
         [<ffffffff810dbc2d>] cpu_attach_domain+0x16d/0x650
         [<ffffffff810e4b42>] build_sched_domains+0x942/0xb00
         [<ffffffff821777c2>] sched_init_smp+0x509/0x5c1
         [<ffffffff821551e3>] kernel_init_freeable+0x172/0x28f
         [<ffffffff8182cdce>] kernel_init+0xe/0xe0
         [<ffffffff8184231f>] ret_from_fork+0x3f/0x70
       irq event stamp: 76
       hardirqs last  enabled at (75): [<ffffffff81841330>] _raw_spin_unlock_irq+0x30/0x60
       hardirqs last disabled at (76): [<ffffffff8184116f>] _raw_spin_lock_irq+0x1f/0x90
       softirqs last  enabled at (0): [<ffffffff810a8df2>] copy_process.part.26+0x602/0x1cf0
       softirqs last disabled at (0): [<          (null)>]           (null)
       other info that might help us debug this:
        Possible unsafe locking scenario:
              CPU0
              ----
         lock(rcu_node_1);
         <Interrupt>
           lock(rcu_node_1);
        *** DEADLOCK ***
       1 lock held by rcu_preempt/8:
        #0:  (rcu_node_1){+.?...}, at: [<ffffffff811387c7>] rcu_gp_kthread+0xb97/0xeb0
       stack backtrace:
       CPU: 0 PID: 8 Comm: rcu_preempt Not tainted 4.2.0-rc5-00025-g9a73ba0 #136
       Hardware name: Dell Inc. PowerEdge R820/066N7P, BIOS 2.0.20 01/16/2014
        0000000000000000 000000006d7e67d8 ffff881fb081fbd8 ffffffff818379e0
        0000000000000000 ffff881fb0812a00 ffff881fb081fc38 ffffffff8110813b
        0000000000000000 0000000000000001 ffff881f00000001 ffffffff8102fa4f
       Call Trace:
        [<ffffffff818379e0>] dump_stack+0x4f/0x7b
        [<ffffffff8110813b>] print_usage_bug+0x1db/0x1e0
        [<ffffffff8102fa4f>] ? save_stack_trace+0x2f/0x50
        [<ffffffff811087ad>] mark_lock+0x66d/0x6e0
        [<ffffffff81107790>] ? check_usage_forwards+0x150/0x150
        [<ffffffff81108898>] mark_held_locks+0x78/0xa0
        [<ffffffff81841330>] ? _raw_spin_unlock_irq+0x30/0x60
        [<ffffffff81108a28>] trace_hardirqs_on_caller+0x168/0x220
        [<ffffffff81108aed>] trace_hardirqs_on+0xd/0x10
        [<ffffffff81841330>] _raw_spin_unlock_irq+0x30/0x60
        [<ffffffff810fd1c7>] swake_up_all+0xb7/0xe0
        [<ffffffff811386e1>] rcu_gp_kthread+0xab1/0xeb0
        [<ffffffff811089bf>] ? trace_hardirqs_on_caller+0xff/0x220
        [<ffffffff81841341>] ? _raw_spin_unlock_irq+0x41/0x60
        [<ffffffff81137c30>] ? rcu_barrier+0x20/0x20
        [<ffffffff810d2014>] kthread+0x104/0x120
        [<ffffffff81841330>] ? _raw_spin_unlock_irq+0x30/0x60
        [<ffffffff810d1f10>] ? kthread_create_on_node+0x260/0x260
        [<ffffffff8184231f>] ret_from_fork+0x3f/0x70
        [<ffffffff810d1f10>] ? kthread_create_on_node+0x260/0x260
      Signed-off-by: Daniel Wagner <daniel.wagner@bmw-carit.de>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: linux-rt-users@vger.kernel.org
      Cc: Boqun Feng <boqun.feng@gmail.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Link: http://lkml.kernel.org/r/1455871601-27484-5-git-send-email-wagi@monom.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  2. 08 Dec 2015 (6 commits)
  3. 06 Dec 2015 (4 commits)
  4. 05 Dec 2015 (16 commits)
  5. 24 Nov 2015 (2 commits)
  6. 08 Oct 2015 (9 commits)
  7. 07 Oct 2015 (1 commit)