1. 23 August 2016, 3 commits
    • rcu: Avoid redundant quiescent-state chasing · 3563a438
      Committed by Paul E. McKenney
      Currently, __note_gp_changes() checks to see if the CPU has slept through
      multiple grace periods.  If it has, it resynchronizes that CPU's view
      of the grace-period state, which includes whether or not the current
      grace period needs a quiescent state from this CPU.  This need (or the
      lack thereof) is recorded in two places: rdp->cpu_no_qs.b.norm and
      rdp->core_needs_qs.  The former tells RCU's context-switch code to go
      get a quiescent state, and the latter says that it needs to be reported.
      The current code unconditionally sets the former to true, but correctly
      sets the latter.
      
      This does not result in failures, but it does unnecessarily increase
      the amount of work done on average at context-switch time.  This commit
      therefore correctly sets both fields.
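      As a sketch, assuming a hypothetical need_qs value computed during the
      resynchronization (the field names are from the commit text; the
      surrounding code is paraphrased, not the exact kernel source):
      
        /*
         * Resync this CPU's view of the grace-period state.  need_qs
         * says whether the current grace period still needs a quiescent
         * state from this CPU; set BOTH fields from it instead of
         * unconditionally setting cpu_no_qs.b.norm to true.
         */
        bool need_qs = !!(rnp->qsmask & rdp->grpmask);  /* illustrative */
      
        rdp->cpu_no_qs.b.norm = need_qs;  /* was hard-coded to true */
        rdp->core_needs_qs = need_qs;     /* was already set correctly */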
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    • rcu: Don't use modular infrastructure in non-modular code · e77b7041
      Committed by Paul Gortmaker
      The Kconfig currently controlling compilation of tree.c is:
      
      init/Kconfig:config TREE_RCU
      init/Kconfig:   bool
      
      ...and update.c and sync.c are "obj-y", meaning that none of them is ever
      built as a module by anyone.
      
      Since MODULE_ALIAS is a no-op for non-modular code, we can remove
      its uses from these files.
      
      We leave moduleparam.h behind, since the files still instantiate some
      boot-time configuration parameters with module_param().
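      For illustration, the shape of the change (a hypothetical excerpt, not
      the literal diff; the alias string is made up):
      
        -#include <linux/module.h>
        -MODULE_ALIAS("rcutree");          /* no-op for obj-y code: dropped */
         #include <linux/moduleparam.h>    /* kept: module_param() remains */
      
         module_param(blimit, long, 0444); /* boot-time parameter, unchanged */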
      
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Josh Triplett <josh@joshtriplett.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: Lai Jiangshan <jiangshanlai@gmail.com>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    • rcu: Use rcu_gp_kthread_wake() to wake up grace period kthreads · 94d44776
      Committed by Jisheng Zhang
      Commit abedf8e2 ("rcu: Use simple wait queues where possible in
      rcutree") converts Tree RCU's wait queues to simple wait queues,
      but it inadvertently reverts commit 2aa792e6 ("rcu: Use
      rcu_gp_kthread_wake() to wake up grace period kthreads").  This can
      result in redundant self-wakeups.
      
      This commit therefore replaces the simple wait-queue wakeups with
      rcu_gp_kthread_wake(), thus avoiding the redundant wakeups.
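      In outline, the direct swake_up() calls become calls to the
      rcu_gp_kthread_wake() helper, which skips wakeups that would be
      redundant (a paraphrase of the helper, not necessarily its exact
      mainline form):
      
        static void rcu_gp_kthread_wake(struct rcu_state *rsp)
        {
                /* No kthread, nothing pending, or waking ourself: skip. */
                if (current == rsp->gp_kthread ||
                    !READ_ONCE(rsp->gp_flags) ||
                    !rsp->gp_kthread)
                        return;
                swake_up(&rsp->gp_wq);
        }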
      Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  2. 15 July 2016, 1 commit
  3. 16 June 2016, 2 commits
    • rcu: Correctly handle sparse possible cpus · bc75e999
      Committed by Mark Rutland
      In many cases in the RCU tree code, we iterate over the set of cpus for
      a leaf node described by rcu_node::grplo and rcu_node::grphi, checking
      per-cpu data for each cpu in this range. However, if the set of possible
      cpus is sparse, some cpus described in this range are not possible, and
      thus no per-cpu region will have been allocated (or initialised) for
      them by the generic percpu code.
      
      Erroneous accesses to a per-cpu area for these !possible cpus may fault
      or may hit other data, depending on the address generated when the
      erroneous per-cpu offset is applied.  In practice, both cases have been
      observed on arm64 hardware (the former being silent, but detectable with
      additional patches).
      
      To avoid issues resulting from this, we must iterate over the set of
      *possible* cpus for a given leaf node. This patch adds a new helper,
      for_each_leaf_node_possible_cpu, to enable this. As iteration is often
      intertwined with rcu_node local bitmask manipulation, a new
      leaf_node_cpu_bit helper is added to make this simpler and more
      consistent. The RCU tree code is made to use both of these where
      appropriate.
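      A sketch of the two helpers (close to the definitions this patch adds,
      but treat the exact form as illustrative):
      
        /*
         * Iterate only over the cpus in this leaf node's [grplo, grphi]
         * range that are also marked possible, so a per-cpu region is
         * guaranteed to exist for every cpu visited.
         */
        #define for_each_leaf_node_possible_cpu(rnp, cpu) \
                for ((cpu) = cpumask_next((rnp)->grplo - 1, cpu_possible_mask); \
                     (cpu) <= (rnp)->grphi; \
                     (cpu) = cpumask_next((cpu), cpu_possible_mask))
      
        /* Bit in a leaf rcu_node's bitmasks corresponding to this cpu. */
        #define leaf_node_cpu_bit(rnp, cpu)  (1UL << ((cpu) - (rnp)->grplo))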
      
      Without this patch, running reboot at a shell can result in an oops
      like:
      
      [ 3369.075979] Unable to handle kernel paging request at virtual address ffffff8008b21b4c
      [ 3369.083881] pgd = ffffffc3ecdda000
      [ 3369.087270] [ffffff8008b21b4c] *pgd=00000083eca48003, *pud=00000083eca48003, *pmd=0000000000000000
      [ 3369.096222] Internal error: Oops: 96000007 [#1] PREEMPT SMP
      [ 3369.101781] Modules linked in:
      [ 3369.104825] CPU: 2 PID: 1817 Comm: NetworkManager Tainted: G        W       4.6.0+ #3
      [ 3369.121239] task: ffffffc0fa13e000 ti: ffffffc3eb940000 task.ti: ffffffc3eb940000
      [ 3369.128708] PC is at sync_rcu_exp_select_cpus+0x188/0x510
      [ 3369.134094] LR is at sync_rcu_exp_select_cpus+0x104/0x510
      [ 3369.139479] pc : [<ffffff80081109a8>] lr : [<ffffff8008110924>] pstate: 200001c5
      [ 3369.146860] sp : ffffffc3eb9435a0
      [ 3369.150162] x29: ffffffc3eb9435a0 x28: ffffff8008be4f88
      [ 3369.155465] x27: ffffff8008b66c80 x26: ffffffc3eceb2600
      [ 3369.160767] x25: 0000000000000001 x24: ffffff8008be4f88
      [ 3369.166070] x23: ffffff8008b51c3c x22: ffffff8008b66c80
      [ 3369.171371] x21: 0000000000000001 x20: ffffff8008b21b40
      [ 3369.176673] x19: ffffff8008b66c80 x18: 0000000000000000
      [ 3369.181975] x17: 0000007fa951a010 x16: ffffff80086a30f0
      [ 3369.187278] x15: 0000007fa9505590 x14: 0000000000000000
      [ 3369.192580] x13: ffffff8008b51000 x12: ffffffc3eb940000
      [ 3369.197882] x11: 0000000000000006 x10: ffffff8008b51b78
      [ 3369.203184] x9 : 0000000000000001 x8 : ffffff8008be4000
      [ 3369.208486] x7 : ffffff8008b21b40 x6 : 0000000000001003
      [ 3369.213788] x5 : 0000000000000000 x4 : ffffff8008b27280
      [ 3369.219090] x3 : ffffff8008b21b4c x2 : 0000000000000001
      [ 3369.224406] x1 : 0000000000000001 x0 : 0000000000000140
      ...
      [ 3369.972257] [<ffffff80081109a8>] sync_rcu_exp_select_cpus+0x188/0x510
      [ 3369.978685] [<ffffff80081128b4>] synchronize_rcu_expedited+0x64/0xa8
      [ 3369.985026] [<ffffff80086b987c>] synchronize_net+0x24/0x30
      [ 3369.990499] [<ffffff80086ddb54>] dev_deactivate_many+0x28c/0x298
      [ 3369.996493] [<ffffff80086b6bb8>] __dev_close_many+0x60/0xd0
      [ 3370.002052] [<ffffff80086b6d48>] __dev_close+0x28/0x40
      [ 3370.007178] [<ffffff80086bf62c>] __dev_change_flags+0x8c/0x158
      [ 3370.012999] [<ffffff80086bf718>] dev_change_flags+0x20/0x60
      [ 3370.018558] [<ffffff80086cf7f0>] do_setlink+0x288/0x918
      [ 3370.023771] [<ffffff80086d0798>] rtnl_newlink+0x398/0x6a8
      [ 3370.029158] [<ffffff80086cee84>] rtnetlink_rcv_msg+0xe4/0x220
      [ 3370.034891] [<ffffff80086e274c>] netlink_rcv_skb+0xc4/0xf8
      [ 3370.040364] [<ffffff80086ced8c>] rtnetlink_rcv+0x2c/0x40
      [ 3370.045663] [<ffffff80086e1fe8>] netlink_unicast+0x160/0x238
      [ 3370.051309] [<ffffff80086e24b8>] netlink_sendmsg+0x2f0/0x358
      [ 3370.056956] [<ffffff80086a0070>] sock_sendmsg+0x18/0x30
      [ 3370.062168] [<ffffff80086a21cc>] ___sys_sendmsg+0x26c/0x280
      [ 3370.067728] [<ffffff80086a30ac>] __sys_sendmsg+0x44/0x88
      [ 3370.073027] [<ffffff80086a3100>] SyS_sendmsg+0x10/0x20
      [ 3370.078153] [<ffffff8008085e70>] el0_svc_naked+0x24/0x28
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reported-by: Dennis Chen <dennis.chen@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Josh Triplett <josh@joshtriplett.org>
      Cc: Lai Jiangshan <jiangshanlai@gmail.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: Steve Capper <steve.capper@arm.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    • rcu: sysctl: Panic on RCU Stall · 088e9d25
      Committed by Daniel Bristot de Oliveira
      It is not always easy to determine the cause of an RCU stall just by
      analysing the RCU stall messages, especially when the problem is caused
      by the indirect starvation of RCU threads, for example when preempt_rcu
      is not awakened because a timer softirq is itself being starved.
      
      We have been hard-coding panic() in the RCU stall functions for
      some time while testing kernel-rt, but this is not workable in some
      scenarios, such as when supporting customers.
      
      This patch implements the sysctl kernel.panic_on_rcu_stall. If
      set to 1, the system will panic() when an RCU stall takes place,
      enabling the capture of a vmcore. The vmcore provides a way to analyze
      all kernel and task states, helping to pinpoint the culprit and the
      solution for the stall.
      
      The kernel.panic_on_rcu_stall sysctl is disabled by default.
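      A minimal sketch of how the knob plugs in (the sysctl name is from the
      commit text; the helper body is a paraphrase):
      
        int sysctl_panic_on_rcu_stall __read_mostly;   /* 0 = disabled */
      
        static void panic_on_rcu_stall(void)
        {
                if (sysctl_panic_on_rcu_stall)
                        panic("RCU Stall\n");   /* capture a vmcore */
        }
      
      At run time it can be enabled with "sysctl -w kernel.panic_on_rcu_stall=1".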
      
      Changes from v1:
      - Fixed a typo in the git log
      - The if (sysctl_panic_on_rcu_stall) panic() check now lives in a static function
      - Fixed the CONFIG_TINY_RCU compilation issue
      - The var sysctl_panic_on_rcu_stall is now __read_mostly
      
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Josh Triplett <josh@joshtriplett.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: Lai Jiangshan <jiangshanlai@gmail.com>
      Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
      Reviewed-by: Arnaldo Carvalho de Melo <acme@kernel.org>
      Tested-by: "Luis Claudio R. Goncalves" <lgoncalv@redhat.com>
      Signed-off-by: Daniel Bristot de Oliveira <bristot@redhat.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  4. 15 June 2016, 4 commits
  5. 08 June 2016, 1 commit
  6. 01 April 2016, 20 commits
  7. 02 March 2016, 1 commit
    • rcu: Make CPU_DYING_IDLE an explicit call · 27d50c7e
      Committed by Thomas Gleixner
      Make the RCU CPU_DYING_IDLE callback an explicit function call, so it gets
      invoked at the proper place.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-arch@vger.kernel.org
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Rafael Wysocki <rafael.j.wysocki@intel.com>
      Cc: "Srivatsa S. Bhat" <srivatsa@mit.edu>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul Turner <pjt@google.com>
      Link: http://lkml.kernel.org/r/20160226182341.870167933@linutronix.de
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  8. 25 February 2016, 2 commits
    • rcu: Use simple wait queues where possible in rcutree · abedf8e2
      Committed by Paul Gortmaker
      As of commit dae6e64d ("rcu: Introduce proper blocking to no-CBs kthreads
      GP waits") the RCU subsystem started making use of wait queues.
      
      Here we convert all additions of RCU wait queues to use simple wait queues,
      since they don't need the extra overhead of the full wait queue features.
      
      Originally this was done for RT kernels[1], since we would get things like...
      
        BUG: sleeping function called from invalid context at kernel/rtmutex.c:659
        in_atomic(): 1, irqs_disabled(): 1, pid: 8, name: rcu_preempt
        Pid: 8, comm: rcu_preempt Not tainted
        Call Trace:
         [<ffffffff8106c8d0>] __might_sleep+0xd0/0xf0
         [<ffffffff817d77b4>] rt_spin_lock+0x24/0x50
         [<ffffffff8106fcf6>] __wake_up+0x36/0x70
         [<ffffffff810c4542>] rcu_gp_kthread+0x4d2/0x680
         [<ffffffff8105f910>] ? __init_waitqueue_head+0x50/0x50
         [<ffffffff810c4070>] ? rcu_gp_fqs+0x80/0x80
         [<ffffffff8105eabb>] kthread+0xdb/0xe0
         [<ffffffff8106b912>] ? finish_task_switch+0x52/0x100
         [<ffffffff817e0754>] kernel_thread_helper+0x4/0x10
         [<ffffffff8105e9e0>] ? __init_kthread_worker+0x60/0x60
         [<ffffffff817e0750>] ? gs_change+0xb/0xb
      
      ...and hence simple wait queues were deployed on RT out of necessity
      (as simple wait uses a raw lock), but mainline might as well take
      advantage of the more streamlined support as well.
      
      [1] This is a carry forward of work from v3.10-rt; the original conversion
      was by Thomas on an earlier -rt version, and Sebastian extended it to
      additional post-3.10 added RCU waiters; here I've added a commit log and
      unified the RCU changes into one, and uprev'd it to match mainline RCU.
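      The conversion pattern, sketched on one representative field (the swait
      API names are real; the diff itself is illustrative):
      
        -       wait_queue_head_t gp_wq;         /* full wait queue */
        +       struct swait_queue_head gp_wq;   /* simple wait queue */
      
        -       init_waitqueue_head(&rsp->gp_wq);
        +       init_swait_queue_head(&rsp->gp_wq);
      
        -       wake_up(&rsp->gp_wq);
        +       swake_up(&rsp->gp_wq);           /* raw lock underneath */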
      Signed-off-by: Daniel Wagner <daniel.wagner@bmw-carit.de>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: linux-rt-users@vger.kernel.org
      Cc: Boqun Feng <boqun.feng@gmail.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Link: http://lkml.kernel.org/r/1455871601-27484-6-git-send-email-wagi@monom.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • rcu: Do not call rcu_nocb_gp_cleanup() while holding rnp->lock · 065bb78c
      Committed by Daniel Wagner
      rcu_nocb_gp_cleanup() is called while holding rnp->lock. Currently,
      this is okay because the wake_up_all() in rcu_nocb_gp_cleanup() will
      not enable the IRQs. lockdep is happy.
      
      By switching over to swait this is no longer true.  swake_up_all()
      enables the IRQs while processing the waiters.  __do_softirq() can now
      run and will eventually call rcu_process_callbacks(), which wants to
      grab rnp->lock.
      
      Let's move the rcu_nocb_gp_cleanup() call outside the lock before we
      switch over to swait.
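      In outline, the fix reorders things like this (paraphrased, with the
      helper that snapshots the wait queue abridged):
      
        /* Before: the wakeup runs with rnp->lock held. */
        raw_spin_lock_irq(&rnp->lock);
        /* ... grace-period cleanup ... */
        rcu_nocb_gp_cleanup(rsp, rnp);   /* swake_up_all() enables IRQs */
        raw_spin_unlock_irq(&rnp->lock);
      
        /* After: grab the wait queue under the lock, wake it afterwards. */
        struct swait_queue_head *sq;
      
        raw_spin_lock_irq(&rnp->lock);
        /* ... grace-period cleanup ... */
        sq = rcu_nocb_gp_get(rnp);
        raw_spin_unlock_irq(&rnp->lock);
        rcu_nocb_gp_cleanup(sq);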
      
      If we were to hold rnp->lock and use swait, lockdep reports the
      following:
      
       =================================
       [ INFO: inconsistent lock state ]
       4.2.0-rc5-00025-g9a73ba0 #136 Not tainted
       ---------------------------------
       inconsistent {IN-SOFTIRQ-W} -> {SOFTIRQ-ON-W} usage.
       rcu_preempt/8 [HC0[0]:SC0[0]:HE1:SE1] takes:
        (rcu_node_1){+.?...}, at: [<ffffffff811387c7>] rcu_gp_kthread+0xb97/0xeb0
       {IN-SOFTIRQ-W} state was registered at:
         [<ffffffff81109b9f>] __lock_acquire+0xd5f/0x21e0
         [<ffffffff8110be0f>] lock_acquire+0xdf/0x2b0
         [<ffffffff81841cc9>] _raw_spin_lock_irqsave+0x59/0xa0
         [<ffffffff81136991>] rcu_process_callbacks+0x141/0x3c0
         [<ffffffff810b1a9d>] __do_softirq+0x14d/0x670
         [<ffffffff810b2214>] irq_exit+0x104/0x110
         [<ffffffff81844e96>] smp_apic_timer_interrupt+0x46/0x60
         [<ffffffff81842e70>] apic_timer_interrupt+0x70/0x80
         [<ffffffff810dba66>] rq_attach_root+0xa6/0x100
         [<ffffffff810dbc2d>] cpu_attach_domain+0x16d/0x650
         [<ffffffff810e4b42>] build_sched_domains+0x942/0xb00
         [<ffffffff821777c2>] sched_init_smp+0x509/0x5c1
         [<ffffffff821551e3>] kernel_init_freeable+0x172/0x28f
         [<ffffffff8182cdce>] kernel_init+0xe/0xe0
         [<ffffffff8184231f>] ret_from_fork+0x3f/0x70
       irq event stamp: 76
       hardirqs last  enabled at (75): [<ffffffff81841330>] _raw_spin_unlock_irq+0x30/0x60
       hardirqs last disabled at (76): [<ffffffff8184116f>] _raw_spin_lock_irq+0x1f/0x90
       softirqs last  enabled at (0): [<ffffffff810a8df2>] copy_process.part.26+0x602/0x1cf0
       softirqs last disabled at (0): [<          (null)>]           (null)
       other info that might help us debug this:
        Possible unsafe locking scenario:
              CPU0
              ----
         lock(rcu_node_1);
         <Interrupt>
           lock(rcu_node_1);
        *** DEADLOCK ***
       1 lock held by rcu_preempt/8:
        #0:  (rcu_node_1){+.?...}, at: [<ffffffff811387c7>] rcu_gp_kthread+0xb97/0xeb0
       stack backtrace:
       CPU: 0 PID: 8 Comm: rcu_preempt Not tainted 4.2.0-rc5-00025-g9a73ba0 #136
       Hardware name: Dell Inc. PowerEdge R820/066N7P, BIOS 2.0.20 01/16/2014
        0000000000000000 000000006d7e67d8 ffff881fb081fbd8 ffffffff818379e0
        0000000000000000 ffff881fb0812a00 ffff881fb081fc38 ffffffff8110813b
        0000000000000000 0000000000000001 ffff881f00000001 ffffffff8102fa4f
       Call Trace:
        [<ffffffff818379e0>] dump_stack+0x4f/0x7b
        [<ffffffff8110813b>] print_usage_bug+0x1db/0x1e0
        [<ffffffff8102fa4f>] ? save_stack_trace+0x2f/0x50
        [<ffffffff811087ad>] mark_lock+0x66d/0x6e0
        [<ffffffff81107790>] ? check_usage_forwards+0x150/0x150
        [<ffffffff81108898>] mark_held_locks+0x78/0xa0
        [<ffffffff81841330>] ? _raw_spin_unlock_irq+0x30/0x60
        [<ffffffff81108a28>] trace_hardirqs_on_caller+0x168/0x220
        [<ffffffff81108aed>] trace_hardirqs_on+0xd/0x10
        [<ffffffff81841330>] _raw_spin_unlock_irq+0x30/0x60
        [<ffffffff810fd1c7>] swake_up_all+0xb7/0xe0
        [<ffffffff811386e1>] rcu_gp_kthread+0xab1/0xeb0
        [<ffffffff811089bf>] ? trace_hardirqs_on_caller+0xff/0x220
        [<ffffffff81841341>] ? _raw_spin_unlock_irq+0x41/0x60
        [<ffffffff81137c30>] ? rcu_barrier+0x20/0x20
        [<ffffffff810d2014>] kthread+0x104/0x120
        [<ffffffff81841330>] ? _raw_spin_unlock_irq+0x30/0x60
        [<ffffffff810d1f10>] ? kthread_create_on_node+0x260/0x260
        [<ffffffff8184231f>] ret_from_fork+0x3f/0x70
        [<ffffffff810d1f10>] ? kthread_create_on_node+0x260/0x260
      Signed-off-by: Daniel Wagner <daniel.wagner@bmw-carit.de>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: linux-rt-users@vger.kernel.org
      Cc: Boqun Feng <boqun.feng@gmail.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Link: http://lkml.kernel.org/r/1455871601-27484-5-git-send-email-wagi@monom.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  9. 24 February 2016, 6 commits
    • RCU: Privatize rcu_node::lock · 67c583a7
      Committed by Boqun Feng
      In the patch:
      
      "rcu: Add transitivity to remaining rcu_node ->lock acquisitions"
      
      all locking operations on rcu_node::lock were replaced with wrappers
      because of the need for transitivity, which means we should never
      write code using LOCK primitives alone (i.e. without a proper barrier
      following) on rcu_node::lock outside those wrappers.  We can detect
      this kind of misuse of rcu_node::lock in the future by adding the
      __private modifier to rcu_node::lock.
      
      To privatize rcu_node::lock, unlock wrappers are also needed. Replacing
      spinlock unlocks with these wrappers not only privatizes rcu_node::lock
      but also makes it easier to figure out critical sections of rcu_node.
      
      This patch adds the __private modifier to rcu_node::lock and wraps
      every access to it with ACCESS_PRIVATE().  In addition, unlock wrappers
      are added, and raw_spin_unlock(&rnp->lock) and friends are replaced
      with those wrappers.
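      A sketch of the resulting pattern (__private and ACCESS_PRIVATE() are
      real sparse annotations; the wrappers are abridged):
      
        struct rcu_node {
                raw_spinlock_t __private lock;   /* direct access now flagged */
                /* ... */
        };
      
        #define raw_spin_lock_rcu_node(rnp)                             \
        do {                                                            \
                raw_spin_lock(&ACCESS_PRIVATE(rnp, lock));              \
                smp_mb__after_unlock_lock();    /* transitivity */      \
        } while (0)
      
        #define raw_spin_unlock_rcu_node(rnp) \
                raw_spin_unlock(&ACCESS_PRIVATE(rnp, lock))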
      Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    • rcu: Remove useless rcu_data_p when !PREEMPT_RCU · 1914aab5
      Committed by Chen Gang
      The related warning from gcc 6.0:
      
        In file included from kernel/rcu/tree.c:4630:0:
        kernel/rcu/tree_plugin.h:810:40: warning: ‘rcu_data_p’ defined but not used [-Wunused-const-variable]
         static struct rcu_data __percpu *const rcu_data_p = &rcu_sched_data;
                                                ^~~~~~~~~~
      
      Also remove the always-redundant rcu_data_p in tree.c.
      Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    • rcu: Set rdp->gpwrap when CPU is idle · 23a9bacd
      Committed by Paul E. McKenney
      Commit e3663b10 ("rcu: Handle gpnum/completed wrap while dyntick
      idle") sets rdp->gpwrap on the wrong side of the "if" statement in
      dyntick_save_progress_counter(), that is, it sets it when the CPU is
      not idle instead of when it is idle.  Of course, if the CPU is not idle,
      its rdp->gpnum won't be lagging behind the global rsp->gpnum, which means
      that rdp->gpwrap will never be set.
      
      This commit therefore moves this code to the proper leg of that "if"
      statement.  This change means that the "else" clause is just "return 0"
      and the "then" clause ends with "return 1", so also move the "return 0"
      to follow the "if", dropping the "else" clause.
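      Paraphrased control flow of the fix (not the exact function body; the
      wrap threshold is illustrative):
      
        if (/* CPU is in dyntick-idle */) {
                /* ... record the dyntick-idle quiescent state ... */
                if (ULONG_CMP_LT(READ_ONCE(rdp->gpnum) + ULONG_MAX / 4,
                                 rdp->mynode->gpnum))
                        WRITE_ONCE(rdp->gpwrap, true);  /* now reachable */
                return 1;
        }
        return 0;       /* the old "else" clause, now flattened */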
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    • rcu: Stop treating in-kernel CPU-bound workloads as errors · 4914950a
      Committed by Paul E. McKenney
      Commit 4a81e832 ("Reduce overhead of cond_resched() checks for RCU")
      handles the error case where a nohz_full CPU loops indefinitely in the kernel
      with the scheduling-clock interrupt disabled.  However, this handling
      includes IPIing the CPU running the offending loop, which is not what
      we want for real-time workloads.  And there are starting to be real-time
      CPU-bound in-kernel workloads, and these must be handled without IPIing
      the CPU, at least not in the common case.  Therefore, this situation can
      no longer be dismissed as an error case.
      
      This commit therefore splits the handling out, so that the setting of
      bits in the per-CPU rcu_sched_qs_mask variable is done relatively early,
      but if the problem persists, resched_cpu() is eventually used to IPI the
      CPU containing the offending loop.  Assuming that in-kernel CPU-bound
      loops used by real-time tasks contain frequent calls to cond_resched_rcu_qs()
      (as in more than once per few tens of milliseconds), the real-time tasks
      will never be IPIed.
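      A sketch of the split handling (rcu_sched_qs_mask is named in the
      commit text; the threshold logic and the other names are illustrative):
      
        /* Phase 1, early: ask politely.  Setting this CPU's bit makes
         * its next cond_resched_rcu_qs() report a quiescent state. */
        WRITE_ONCE(per_cpu(rcu_sched_qs_mask, cpu),
                   per_cpu(rcu_sched_qs_mask, cpu) | mask);
      
        /* Phase 2, only if the grace period keeps stalling: IPI the CPU. */
        if (time_after(jiffies, rsp->jiffies_resched)) {
                resched_cpu(cpu);            /* IPI of last resort */
                rsp->jiffies_resched += 5;   /* back off before re-IPI */
        }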
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
    • rcu: Update rcu_report_qs_rsp() comment · 8994515c
      Committed by Paul E. McKenney
      The header comment for rcu_report_qs_rsp() was obsolete, dating well
      before the advent of RCU grace-period kthreads.  This commit therefore
      brings this comment back into alignment with current reality.
      Reported-by: Lihao Liang <lihao.liang@cs.ox.ac.uk>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>