1. 16 July 2015, 5 commits
  2. 28 May 2015, 4 commits
  3. 13 March 2015, 2 commits
    • rcu: Eliminate ->onoff_mutex from rcu_node structure · c1990689
      Committed by Paul E. McKenney
      Because RCU grace-period initialization need no longer exclude
      CPU-hotplug operations, this commit eliminates the ->onoff_mutex and
      its uses.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      c1990689
    • rcu: Process offlining and onlining only at grace-period start · 0aa04b05
      Committed by Paul E. McKenney
      Races between CPU hotplug and grace periods can be difficult to resolve,
      so the ->onoff_mutex is used to exclude the two events.  Unfortunately,
      this means that it is impossible for an outgoing CPU to perform the
      last bits of its offlining from its last pass through the idle loop,
      because sleeplocks cannot be acquired in that context.
      
      This commit avoids these problems by buffering online and offline events
      in a new ->qsmaskinitnext field in the leaf rcu_node structures.  When a
      grace period starts, the events accumulated in this mask are applied to
      the ->qsmaskinit field, and, if needed, up the rcu_node tree.  The special
      case of all CPUs corresponding to a given leaf rcu_node structure being
      offline while there are still elements in that structure's ->blkd_tasks
      list is handled using a new ->wait_blkd_tasks field.  In this case,
      propagating the offline bits up the tree is deferred until the beginning
      of the grace period after all of the tasks have exited their RCU read-side
      critical sections and removed themselves from the list, at which point
      the ->wait_blkd_tasks flag is cleared.  If one of that leaf rcu_node
      structure's CPUs comes back online before the list empties, then the
      ->wait_blkd_tasks flag is simply cleared.
      
      This of course means that RCU's notion of which CPUs are offline can be
      out of date.  This is OK because RCU need only wait on CPUs that were
      online at the time that the grace period started.  In addition, RCU's
      force-quiescent-state actions will handle the case where a CPU goes
      offline after the grace period starts.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      0aa04b05
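      A minimal user-space sketch of the buffering scheme described above (not the kernel implementation; only the ->qsmaskinit and ->qsmaskinitnext names come from the commit text, the rest is illustrative):

      #include <stdio.h>

      /* Simplified stand-in for a leaf rcu_node structure. */
      struct leaf_node {
          unsigned long qsmaskinit;     /* CPUs the current grace period waits on. */
          unsigned long qsmaskinitnext; /* Hotplug events buffered since the last GP start. */
      };

      /* CPU-hotplug events touch only the buffer mask, never ->qsmaskinit. */
      static void cpu_online_event(struct leaf_node *lnp, int cpu)
      {
          lnp->qsmaskinitnext |= 1UL << cpu;
      }

      static void cpu_offline_event(struct leaf_node *lnp, int cpu)
      {
          lnp->qsmaskinitnext &= ~(1UL << cpu);
      }

      /* Grace-period start is the only point where the buffered events are applied. */
      static void grace_period_start(struct leaf_node *lnp)
      {
          lnp->qsmaskinit = lnp->qsmaskinitnext;
      }

      int main(void)
      {
          struct leaf_node lnp = { .qsmaskinit = 0x3, .qsmaskinitnext = 0x3 };

          cpu_offline_event(&lnp, 0);   /* Invisible to the grace period in progress... */
          printf("during GP: init=%#lx next=%#lx\n", lnp.qsmaskinit, lnp.qsmaskinitnext);
          grace_period_start(&lnp);     /* ...and applied only when the next one starts. */
          printf("next GP:   init=%#lx next=%#lx\n", lnp.qsmaskinit, lnp.qsmaskinitnext);
          return 0;
      }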
  4. 16 January 2015, 1 commit
    • rcu: Make cond_resched_rcu_qs() apply to normal RCU flavors · 5cd37193
      Committed by Paul E. McKenney
      Although cond_resched_rcu_qs() only applies to TASKS_RCU, it is used
      in places where it would be useful for it to apply to the normal RCU
      flavors, rcu_preempt, rcu_sched, and rcu_bh.  This is especially the
      case for workloads that aggressively overload the system, particularly
      those that generate large numbers of RCU updates on systems running
      NO_HZ_FULL CPUs.  This commit therefore communicates quiescent states
      from cond_resched_rcu_qs() to the normal RCU flavors.
      
      Note that it is unfortunately necessary to leave the old ->passed_quiesce
      mechanism in place to allow quiescent states that apply to only one
      flavor to be recorded.  (Yes, we could decrement ->rcu_qs_ctr_snap in
      that case, but that is not so good for debugging of RCU internals.)
      In addition, if one of the RCU flavors' grace periods has stalled, this
      will invoke rcu_momentary_dyntick_idle(), resulting in a heavy-weight
      quiescent state visible from other CPUs.
      Reported-by: Sasha Levin <sasha.levin@oracle.com>
      Reported-by: Dave Jones <davej@redhat.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      [ paulmck: Merge commit from Sasha Levin fixing a bug where __this_cpu()
        was used in preemptible code. ]
      5cd37193
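      A simplified sketch of the counter-and-snapshot idea described above; the rcu_qs_ctr and ->rcu_qs_ctr_snap names follow the commit text, everything else is an assumption made for illustration:

      #include <stdbool.h>
      #include <stdio.h>

      /* Per-CPU quiescent-state counter and the snapshot taken at grace-period start. */
      static unsigned long rcu_qs_ctr;
      static unsigned long rcu_qs_ctr_snap;

      /* What cond_resched_rcu_qs() boils down to for the normal flavors: bump a counter. */
      static void note_quiescent_state(void)
      {
          rcu_qs_ctr++;
      }

      /* Grace-period machinery: did this CPU pass a quiescent state since the snapshot? */
      static bool cpu_passed_quiescent_state(void)
      {
          return rcu_qs_ctr != rcu_qs_ctr_snap;
      }

      int main(void)
      {
          rcu_qs_ctr_snap = rcu_qs_ctr;   /* Snapshot taken when the grace period starts. */
          printf("before cond_resched_rcu_qs(): qs=%d\n", cpu_passed_quiescent_state());
          note_quiescent_state();
          printf("after  cond_resched_rcu_qs(): qs=%d\n", cpu_passed_quiescent_state());
          return 0;
      }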
  5. 11 January 2015, 2 commits
  6. 07 January 2015, 8 commits
    • rcu: Handle gpnum/completed wrap while dyntick idle · e3663b10
      Committed by Paul E. McKenney
      Subtle race conditions can result if a CPU stays in dyntick-idle mode
      long enough for the ->gpnum and ->completed fields to wrap.  For
      example, consider the following sequence of events:
      
      o	CPU 1 encounters a quiescent state while waiting for grace period
      	5 to complete, but then enters dyntick-idle mode.
      
      o	While CPU 1 is in dyntick-idle mode, the grace-period counters
      	wrap around so that the grace period number is now 4.
      
      o	Just as CPU 1 exits dyntick-idle mode, grace period 4 completes
      	and grace period 5 begins.
      
      o	The quiescent state that CPU 1 passed through during the old
      	grace period 5 looks like it applies to the new grace period
      	5.  Therefore, the new grace period 5 completes without CPU 1
      	having passed through a quiescent state.
      
      This could clearly be a fatal surprise to any long-running RCU read-side
      critical section that happened to be running on CPU 1 at the time.  At one
      time, this was not a problem, given that it takes significant time for
      the grace-period counters to overflow even on 32-bit systems.  However,
      with the advent of NO_HZ_FULL and SMP embedded systems, arbitrarily long
      idle periods are now becoming quite feasible.  It is therefore time to
      close this race.
      
      This commit therefore avoids this race condition by having the
      quiescent-state forcing code detect when a CPU is falling too far
      behind, and setting a new rcu_data field ->gpwrap when this happens.
      Whenever this new ->gpwrap field is set, the CPU's ->gpnum and ->completed
      fields are known to be untrustworthy, and can be ignored, along with
      any associated quiescent states.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      e3663b10
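      A sketch of wraparound-safe "falling too far behind" detection; the ULONG_CMP_LT idiom mirrors common kernel usage, but the quarter-range threshold and the rest of this example are illustrative assumptions:

      #include <stdbool.h>
      #include <stdio.h>

      /* Wraparound-safe "a is before b" comparison for unsigned grace-period counters. */
      #define ULONG_CMP_LT(a, b) ((long)((a) - (b)) < 0)

      /*
       * Force-quiescent-state-time check: flag the CPU's counters as untrustworthy
       * (->gpwrap in the commit above) once its recorded grace-period number has
       * fallen more than a quarter of the counter space behind the current one.
       */
      static bool falling_too_far_behind(unsigned long cpu_gpnum, unsigned long cur_gpnum)
      {
          return ULONG_CMP_LT(cpu_gpnum + (~0UL / 4), cur_gpnum);
      }

      int main(void)
      {
          unsigned long cur = 1000;

          printf("slightly behind: gpwrap=%d\n", falling_too_far_behind(cur - 3, cur));
          printf("very far behind: gpwrap=%d\n",
                 falling_too_far_behind(cur - (~0UL / 4) - 10, cur));
          return 0;
      }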
    • rcu: Improve diagnostics for spurious RCU CPU stall warnings · 6ccd2ecd
      Committed by Paul E. McKenney
      The current RCU CPU stall warning code will print "Stall ended before
      state dump start" any time that the stall-warning code is triggered on
      a CPU that has already reported a quiescent state for the current grace
      period and if all quiescent states have been reported for the current
      grace period.  However, a true stall can result in these symptoms, for
      example, by preventing RCU's grace-period kthreads from ever running.
      
      This commit therefore checks for this condition, reporting the end of
      the stall only if one of the grace-period counters has actually advanced.
      Otherwise, it reports the last time that the grace-period kthread made
      meaningful progress.  (In normal situations, the grace-period kthread
      should make meaningful progress at least every jiffies_till_next_fqs
      jiffies.)
      Reported-by: Miroslav Benes <mbenes@suse.cz>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Tested-by: Miroslav Benes <mbenes@suse.cz>
      6ccd2ecd
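      A sketch of the reporting decision described above, with made-up structures and values; only the gpnum/completed counters and the grace-period-kthread activity timestamp are taken from the commit text:

      #include <stdio.h>

      struct gp_state {
          unsigned long gpnum;        /* Most recently started grace period.   */
          unsigned long completed;    /* Most recently completed grace period. */
          unsigned long gp_activity;  /* Jiffies of last GP-kthread activity.  */
      };

      /* Print "stall ended" only if a grace-period counter actually advanced. */
      static void report(const struct gp_state *then, const struct gp_state *now,
                         unsigned long jiffies_now)
      {
          if (now->gpnum != then->gpnum || now->completed != then->completed)
              printf("Stall ended before state dump start\n");
          else
              printf("GP kthread last showed progress %lu jiffies ago\n",
                     jiffies_now - now->gp_activity);
      }

      int main(void)
      {
          struct gp_state then  = { 4, 3, 100 };
          struct gp_state stuck = { 4, 3, 100 };   /* Starved kthread: no progress. */
          struct gp_state moved = { 5, 4, 950 };   /* Stall genuinely ended.        */

          report(&then, &moved, 1000);
          report(&then, &stuck, 1000);
          return 0;
      }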
    • rcu: Make RCU_CPU_STALL_INFO include number of fqs attempts · fc908ed3
      Committed by Paul E. McKenney
      One way that an RCU CPU stall warning can happen is if the grace-period
      kthread is not allowed to execute.  One proxy for this kthread's
      forward progress is the number of force-quiescent-state (fqs) scans.
      This commit therefore adds the number of fqs scans to the RCU CPU stall
      warning printouts when CONFIG_RCU_CPU_STALL_INFO=y.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      fc908ed3
    • rcu: Revert "Allow post-unlock reference for rt_mutex" to avoid priority-inversion · abaf3f9d
      Committed by Lai Jiangshan
      The patch dfeb9765 ("Allow post-unlock reference for rt_mutex")
      ensured that RCU priority boosting remained safe even when the
      rt_mutex has a post-unlock reference.

      But an rt_mutex allowing a post-unlock reference is definitely a bug,
      and that bug was fixed by commit 27e35715 ("rtmutex: Plug slow unlock
      race"), which made the previous patch (dfeb9765) useless.

      Even worse, the priority inversion introduced by the previous patch
      still exists.
      
      rcu_read_unlock_special() {
      	rt_mutex_unlock(&rnp->boost_mtx);
      	/* Priority inversion:
      	 * The current task has just been deboosted, so it can be preempted
      	 * as a low-priority task immediately and may wait a long time to be
      	 * rescheduled, while the rcu-booster sleeps waiting on this
      	 * low-priority task.  This priority inversion keeps the rcu-booster
      	 * from working as expected.
      	 */
      	complete(&rnp->boost_completion);
      }
      
      Just revert the patch to avoid it.
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      abaf3f9d
    • rcu: Don't migrate blocked tasks even if all corresponding CPUs offline · d19fb8d1
      Committed by Paul E. McKenney
      When the last CPU associated with a given leaf rcu_node structure
      goes offline, something must be done about the tasks queued on that
      rcu_node structure.  Each of these tasks has been preempted on one of
      the leaf rcu_node structure's CPUs while in an RCU read-side critical
      section that it has not yet exited.  Handling these tasks is the job of
      rcu_preempt_offline_tasks(), which migrates them from the leaf rcu_node
      structure to the root rcu_node structure.
      
      Unfortunately, this migration has to be done one task at a time because
      each task's allegiance must be shifted from the original leaf rcu_node to
      the root, so that future attempts to deal with these tasks will acquire
      the root rcu_node structure's ->lock rather than that of the leaf.
      Worse yet, this migration must be done with interrupts disabled, which
      is not so good for realtime response, especially given that there is
      no bound on the number of tasks on a given rcu_node structure's list.
      (OK, OK, there is a bound, it is just that it is unreasonably large,
      especially on 64-bit systems.)  This was not considered a problem back
      when rcu_preempt_offline_tasks() was first written because realtime
      systems were assumed not to do CPU-hotplug operations while real-time
      applications were running.  This assumption has proved of dubious validity
      given that people are starting to run multiple realtime applications
      on a single SMP system and that it is common practice to offline then
      online a CPU before starting its real-time application in order to clear
      extraneous processing off of that CPU.  So we now need CPU hotplug
      operations to avoid undue latencies.
      
      This commit therefore avoids migrating these tasks, instead letting
      them be dequeued one by one from the original leaf rcu_node structure
      by rcu_read_unlock_special().  This means that the clearing of bits
      from the upper-level rcu_node structures must be deferred until the
      last such task has been dequeued, because otherwise subsequent grace
      periods won't wait on them.  This commit has the beneficial side effect
      of simplifying the CPU-hotplug code for TREE_PREEMPT_RCU, especially in
      CONFIG_RCU_BOOST builds.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      d19fb8d1
    • rcu: Abstract rcu_cleanup_dead_rnp() from rcu_cleanup_dead_cpu() · 8af3a5e7
      Committed by Paul E. McKenney
      This commit abstracts rcu_cleanup_dead_rnp() from rcu_cleanup_dead_cpu()
      in preparation for the rework of RCU priority boosting.  This new function
      will be invoked from rcu_read_unlock_special() in the reworked scheme,
      which is why rcu_cleanup_dead_rnp() assumes that the leaf rcu_node
      structure's ->qsmaskinit field has already been updated.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      8af3a5e7
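      A toy model of the upward propagation that rcu_cleanup_dead_rnp() performs; the two-level tree and field layout are simplified assumptions, and only the ->qsmaskinit name comes from the commit text:

      #include <stdio.h>

      struct node {
          unsigned long qsmaskinit;  /* Children (CPUs or child nodes) the GP must wait on. */
          struct node *parent;
          unsigned long grpmask;     /* This node's bit in its parent's ->qsmaskinit.       */
      };

      /*
       * Once a leaf has no remaining ->qsmaskinit bits, clear its bit in each
       * ancestor, stopping as soon as an ancestor still has other children to wait on.
       */
      static void cleanup_dead_rnp(struct node *leaf)
      {
          struct node *np;

          for (np = leaf; np->qsmaskinit == 0 && np->parent; np = np->parent)
              np->parent->qsmaskinit &= ~np->grpmask;
      }

      int main(void)
      {
          struct node root  = { .qsmaskinit = 0x3 };
          struct node leaf0 = { .qsmaskinit = 0x0, .parent = &root, .grpmask = 0x1 };

          cleanup_dead_rnp(&leaf0);                           /* Leaf 0's last CPU went offline. */
          printf("root qsmaskinit = %#lx\n", root.qsmaskinit); /* Prints 0x2. */
          return 0;
      }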
    • rcu: Remove "select IRQ_WORK" from config TREE_RCU · 5a43b88e
      Committed by Lai Jiangshan
      Commit 48a7639c ("rcu: Make callers awaken grace-period kthread")
      removed the irq_work_queue() call, so TREE_RCU no longer needs irq
      work.  This commit therefore removes the now-unneeded "select IRQ_WORK"
      from RCU's Kconfig.
      Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      5a43b88e
    • rcu: Fix rcu_barrier() race that could result in too-short wait · 41050a00
      Committed by Paul E. McKenney
      The rcu_barrier() no-callbacks check for no-CBs CPUs has race conditions.
      It checks a given CPU's lists of callbacks, and if all three no-CBs lists
      are empty, ignores that CPU.  However, these three lists could potentially
      be empty even when callbacks are present if the check executed just as
      the callbacks were being moved from one list to another.  It turns out
      that recent versions of rcutorture can spot this race.
      
      This commit plugs this hole by consolidating the per-list counts of
      no-CBs callbacks into a single count, which is incremented before
      the corresponding callback is posted and decremented after it is invoked.  Then
      rcu_barrier() checks this single count to reliably determine whether
      the corresponding CPU has no-CBs callbacks.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      41050a00
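      A sketch of the single-count scheme described above; the counter name and the exact increment/decrement points are illustrative assumptions (the kernel additionally needs atomics and memory ordering):

      #include <stdbool.h>
      #include <stdio.h>

      /* One per-CPU count of no-CBs callbacks, replacing the three per-list counts. */
      static unsigned long nocb_count;

      static void nocb_post_callback(void)
      {
          nocb_count++;   /* Incremented before the callback is enqueued...              */
          /* ...then enqueue it; it may later migrate between the internal lists.        */
      }

      static void nocb_invoke_callback(void)
      {
          /* ...dequeue and invoke the callback...                                       */
          nocb_count--;   /* ...and decremented only after it has been invoked.          */
      }

      /* rcu_barrier()'s check is now immune to callbacks moving between lists. */
      static bool cpu_has_nocb_callbacks(void)
      {
          return nocb_count != 0;
      }

      int main(void)
      {
          nocb_post_callback();
          printf("pending: %d\n", cpu_has_nocb_callbacks());   /* 1: barrier must wait. */
          nocb_invoke_callback();
          printf("pending: %d\n", cpu_has_nocb_callbacks());   /* 0: barrier may skip.  */
          return 0;
      }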
  7. 04 November 2014, 5 commits
  8. 30 October 2014, 1 commit
  9. 29 October 2014, 1 commit
    • rcu: Make rcu_barrier() understand about missing rcuo kthreads · d7e29933
      Committed by Paul E. McKenney
      Commit 35ce7f29 (rcu: Create rcuo kthreads only for onlined CPUs)
      avoids creating rcuo kthreads for CPUs that never come online.  This
      fixes a bug in many instances of firmware: Instead of lying about their
      age, these systems instead lie about the number of CPUs that they have.
      Before commit 35ce7f29, this could result in huge numbers of useless
      rcuo kthreads being created.
      
      It appears that experience indicates that I should have told the
      people suffering from this problem to fix their broken firmware, but
      I instead produced what turned out to be a partial fix.   The missing
      piece supplied by this commit makes sure that rcu_barrier() knows not to
      post callbacks for no-CBs CPUs that have not yet come online, because
      otherwise rcu_barrier() will hang on systems having firmware that lies
      about the number of CPUs.
      
      It is tempting to simply have rcu_barrier() refuse to post a callback on
      any no-CBs CPU that does not have an rcuo kthread.  This unfortunately
      does not work because rcu_barrier() is required to wait for all pending
      callbacks.  It is therefore required to wait even for those callbacks
      that cannot possibly be invoked.  Even if doing so hangs the system.
      
      Given that posting a callback to a no-CBs CPU that does not yet have an
      rcuo kthread can hang rcu_barrier(), it is tempting to report an error
      in this case.  Unfortunately, this will result in false positives at
      boot time, when it is perfectly legal to post callbacks to the boot CPU
      before the scheduler has started, in other words, before it is legal
      to invoke rcu_barrier().
      
      So this commit instead has rcu_barrier() avoid posting callbacks to
      CPUs having neither rcuo kthread nor pending callbacks, and has it
      complain bitterly if it finds CPUs having no rcuo kthread but some
      pending callbacks.  And when rcu_barrier() does find CPUs having no rcuo
      kthread but pending callbacks, as noted earlier, it has no choice but
      to hang indefinitely.
      Reported-by: Yanko Kaneti <yaneti@declera.com>
      Reported-by: Jay Vosburgh <jay.vosburgh@canonical.com>
      Reported-by: Meelis Roos <mroos@linux.ee>
      Reported-by: Eric B Munson <emunson@akamai.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Tested-by: Eric B Munson <emunson@akamai.com>
      Tested-by: Jay Vosburgh <jay.vosburgh@canonical.com>
      Tested-by: Yanko Kaneti <yaneti@declera.com>
      Tested-by: Kevin Fenzi <kevin@scrye.com>
      Tested-by: Meelis Roos <mroos@linux.ee>
      d7e29933
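      A sketch of the per-CPU decision described above; the structure and flags are invented for illustration, and only the overall policy follows the commit text:

      #include <stdbool.h>
      #include <stdio.h>

      struct cpu_state {
          bool has_rcuo_kthread;   /* Was an rcuo kthread ever spawned for this CPU? */
          bool has_callbacks;      /* Are any no-CBs callbacks queued for it?        */
      };

      static void barrier_check_cpu(int cpu, const struct cpu_state *cs)
      {
          if (!cs->has_rcuo_kthread && !cs->has_callbacks) {
              printf("cpu%d: no kthread, no callbacks: skip\n", cpu);
              return;
          }
          if (!cs->has_rcuo_kthread && cs->has_callbacks)
              printf("cpu%d: callbacks but no rcuo kthread: complain bitterly\n", cpu);
          /* Otherwise it is safe (and necessary when callbacks exist) to post and wait. */
          printf("cpu%d: post barrier callback and wait\n", cpu);
      }

      int main(void)
      {
          struct cpu_state online = { true,  true  };
          struct cpu_state ghost  = { false, false };  /* CPU the firmware lied about.     */
          struct cpu_state early  = { false, true  };  /* Callbacks posted before kthread. */

          barrier_check_cpu(0, &online);
          barrier_check_cpu(63, &ghost);
          barrier_check_cpu(1, &early);
          return 0;
      }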
  10. 17 September 2014, 2 commits
  11. 08 September 2014, 2 commits
    • rcu: Make TASKS_RCU handle nohz_full= CPUs · 176f8f7a
      Committed by Paul E. McKenney
      Currently TASKS_RCU would ignore a CPU running a task in nohz_full=
      usermode execution.  There would be neither a context switch nor a
      scheduling-clock interrupt to tell TASKS_RCU that the task in question
      had passed through a quiescent state.  The grace period would therefore
      extend indefinitely.  This commit therefore makes RCU's dyntick-idle
      subsystem record the task_struct structure of the task that is running
      in dyntick-idle mode on each CPU.  The TASKS_RCU grace period can
      then access this information and record a quiescent state on
      behalf of any CPU running in dyntick-idle usermode.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      176f8f7a
    • rcu: Break more call_rcu() deadlock involving scheduler and perf · 9fdd3bc9
      Committed by Paul E. McKenney
      Commit 96d3fd0d (rcu: Break call_rcu() deadlock involving scheduler
      and perf) covered the case where __call_rcu_nocb_enqueue() needs to wake
      the rcuo kthread due to the queue being initially empty, but did not
      do anything for the case where the queue was overflowing.  This commit
      therefore also defers wakeup for the overflow case.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      9fdd3bc9
  12. 28 August 2014, 1 commit
  13. 10 July 2014, 2 commits
    • rcu: Simplify priority boosting by putting rt_mutex in rcu_node · abaa93d9
      Committed by Paul E. McKenney
      RCU priority boosting currently checks for boosting via a pointer in
      task_struct.  However, this is not needed: As Oleg noted, if the
      rt_mutex is placed in the rcu_node instead of on the booster's stack,
      the boostee can simply check it to see if it owns the lock.  This commit
      makes this change, shrinking task_struct by one pointer and the kernel
      by thirteen lines.
      Suggested-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      abaa93d9
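      A toy model of the ownership check described above; the types are stand-ins for task_struct and rt_mutex, and only the placement of the mutex in the rcu_node structure follows the commit text:

      #include <stdbool.h>
      #include <stdio.h>

      struct task { int id; };
      struct mutex { struct task *owner; };
      struct rcu_node { struct mutex boost_mtx; };  /* The rt_mutex now lives here. */

      /*
       * Instead of consulting a per-task "I was boosted" pointer, the task leaving
       * its outermost RCU read-side critical section asks whether it owns the
       * node's boost mutex; if so, it must release (and thereby deboost) itself.
       */
      static bool need_deboost(struct rcu_node *rnp, struct task *current_task)
      {
          return rnp->boost_mtx.owner == current_task;
      }

      int main(void)
      {
          struct task boostee = { 1 }, other = { 2 };
          struct rcu_node rnp = { .boost_mtx = { .owner = &boostee } };

          printf("boostee must unlock: %d\n", need_deboost(&rnp, &boostee));
          printf("other need not:      %d\n", need_deboost(&rnp, &other));
          return 0;
      }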
    • rcu: Allow post-unlock reference for rt_mutex · dfeb9765
      Committed by Paul E. McKenney
      The current approach to RCU priority boosting uses an rt_mutex strictly
      for its priority-boosting side effects.  The rt_mutex_init_proxy_locked()
      function is used by the booster to initialize the lock as held by the
      boostee.  The booster then uses rt_mutex_lock() to acquire this rt_mutex,
      which priority-boosts the boostee.  When the boostee reaches the end
      of its outermost RCU read-side critical section, it checks a field in
      its task structure to see whether it has been boosted, and, if so, uses
      rt_mutex_unlock() to release the rt_mutex.  The booster can then go on
      to boost the next task that is blocking the current RCU grace period.
      
      But reasonable implementations of rt_mutex_unlock() might result in the
      boostee referencing the rt_mutex's data after releasing it.  But the
      booster might have re-initialized the rt_mutex between the time that the
      boostee released it and the time that it later referenced it.  This is
      clearly asking for trouble, so this commit introduces a completion that
      forces the booster to wait until the boostee has completely finished with
      the rt_mutex, thus avoiding the case where the booster is re-initializing
      the rt_mutex before the last boostee's last reference to that rt_mutex.
      
      This of course does introduce some overhead, but the priority-boosting
      code paths are miles from any possible fastpath, and the overhead of
      executing the completion will normally be quite small compared to the
      overhead of priority boosting and deboosting, so this should be OK.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      dfeb9765
  14. 08 July 2014, 1 commit
    • rcu: Parallelize and economize NOCB kthread wakeups · fbce7497
      Committed by Paul E. McKenney
      An 80-CPU system with a context-switch-heavy workload can require so
      many NOCB kthread wakeups that the RCU grace-period kthreads spend several
      tens of percent of a CPU just awakening things.  This clearly will not
      scale well: If you add enough CPUs, the RCU grace-period kthreads would
      get behind, increasing grace-period latency.
      
      To avoid this problem, this commit divides the NOCB kthreads into leaders
      and followers, where the grace-period kthreads awaken the leaders each of
      whom in turn awakens its followers.  By default, the number of groups of
      kthreads is the square root of the number of CPUs, but this default may
      be overridden using the rcutree.rcu_nocb_leader_stride boot parameter.
      This reduces the number of wakeups done per grace period by the RCU
      grace-period kthread by the square root of the number of CPUs, but of
      course by shifting those wakeups to the leaders.  In addition, because
      the leaders do grace periods on behalf of their respective followers,
      the number of wakeups of the followers decreases by up to a factor of two.
      Instead of being awakened once when new callbacks arrive and again
      at the end of the grace period, the followers are awakened only at
      the end of the grace period.
      
      For a numerical example, in a 4096-CPU system, the grace-period kthread
      would awaken 64 leaders, each of which would awaken its 63 followers
      at the end of the grace period.  This compares favorably with the 79
      wakeups for the grace-period kthread on an 80-CPU system.
      Reported-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      fbce7497
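      A small worked example of the default grouping arithmetic; the square-root default and the rcutree.rcu_nocb_leader_stride override come from the commit text, while the rounding details here are an assumption (compile with -lm):

      #include <math.h>
      #include <stdio.h>

      int main(void)
      {
          int sizes[] = { 80, 4096 };

          for (unsigned int i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
              int n = sizes[i];
              int stride  = (int)ceil(sqrt((double)n));  /* CPUs per group (leader + followers). */
              int leaders = (n + stride - 1) / stride;   /* Wakeups done by the GP kthread.      */

              printf("%4d CPUs: %3d leaders, at most %3d followers each\n",
                     n, leaders, stride - 1);
          }
          return 0;
      }

      For 4096 CPUs this prints 64 leaders with up to 63 followers each, matching the numerical example in the commit text.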
  15. 24 June 2014, 1 commit
    • rcu: Reduce overhead of cond_resched() checks for RCU · 4a81e832
      Committed by Paul E. McKenney
      Commit ac1bea85 (Make cond_resched() report RCU quiescent states)
      fixed a problem where a CPU looping in the kernel with but one runnable
      task would give RCU CPU stall warnings, even if the in-kernel loop
      contained cond_resched() calls.  Unfortunately, in so doing, it introduced
      performance regressions in Anton Blanchard's will-it-scale "open1" test.
      The problem appears to be not so much the increased cond_resched() path
      length as an increase in the rate at which grace periods complete, which
      increased per-update grace-period overhead.
      
      This commit takes a different approach to fixing this bug, mainly by
      moving the RCU-visible quiescent state from cond_resched() to
      rcu_note_context_switch(), and by further reducing the check to a
      simple non-zero test of a single per-CPU variable.  However, this
      approach requires that the force-quiescent-state processing send
      resched IPIs to the offending CPUs.  These will be sent only once
      the grace period has reached an age specified by the boot/sysfs
      parameter rcutree.jiffies_till_sched_qs, or once the grace period
      reaches an age halfway to the point at which RCU CPU stall warnings
      will be emitted, whichever comes first.
      Reported-by: Dave Hansen <dave.hansen@intel.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Christoph Lameter <cl@gentwo.org>
      Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
      [ paulmck: Made rcu_momentary_dyntick_idle() as suggested by the
        ktest build robot.  Also fixed smp_mb() comment as noted by
        Oleg Nesterov. ]
      
      Merge with e552592e (Reduce overhead of cond_resched() checks for RCU)
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      4a81e832
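      A short worked example of the "whichever comes first" threshold described above; the parameter values here are made up purely for illustration:

      #include <stdio.h>

      int main(void)
      {
          unsigned long jiffies_till_sched_qs = 100;    /* rcutree.jiffies_till_sched_qs (example value). */
          unsigned long stall_timeout_jiffies = 5250;   /* e.g. a 21 s stall timeout at HZ=250 (example). */
          unsigned long halfway_to_stall = stall_timeout_jiffies / 2;
          unsigned long ipi_after = jiffies_till_sched_qs < halfway_to_stall ?
                                    jiffies_till_sched_qs : halfway_to_stall;

          /* Resched IPIs start once the grace period reaches the earlier threshold. */
          printf("resched IPIs begin %lu jiffies into the grace period\n", ipi_after);
          return 0;
      }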
  16. 15 May 2014, 1 commit
  17. 29 April 2014, 1 commit
    • rcu: Make callers awaken grace-period kthread · 48a7639c
      Committed by Paul E. McKenney
      The rcu_start_gp_advanced() function currently uses irq_work_queue()
      to defer wakeups of the RCU grace-period kthread.  This deferring
      is necessary to avoid RCU-scheduler deadlocks involving the rcu_node
      structure's lock, meaning that RCU cannot call any of the scheduler's
      wake-up functions while holding one of these locks.
      
      Unfortunately, the second and subsequent calls to irq_work_queue() are
      ignored, and the first call will be ignored (aside from queuing the work
      item) if the scheduler-clock tick is turned off.  This is OK for many
      uses, especially those where irq_work_queue() is called from an interrupt
      or softirq handler, because in those cases the scheduler-clock-tick state
      will be re-evaluated, which will turn the scheduler-clock tick back on.
      On the next tick, any deferred work will then be processed.
      
      However, this strategy does not always work for RCU, which can be invoked
      at process level from idle CPUs.  In this case, the tick might never
      be turned back on, indefinitely deferring a grace-period start request.
      Note that the RCU CPU stall detector cannot see this condition, because
      there is no RCU grace period in progress.  Therefore, we can (and do!)
      see long tens-of-seconds stalls in grace-period handling.  In theory,
      we could see a full grace-period hang, but rcutorture testing to date
      has seen only the tens-of-seconds stalls.  Event tracing demonstrates
      that irq_work_queue() is being called repeatedly to no effect during
      these stalls: The "newreq" event appears repeatedly from a task that is
      not one of the grace-period kthreads.
      
      In theory, irq_work_queue() might be fixed to avoid this sort of issue,
      but RCU's requirements are unusual and it is quite straightforward to pass
      wake-up responsibility up through RCU's call chain, so that the wakeup
      happens when the offending locks are released.
      
      This commit therefore makes this change.  The rcu_start_gp_advanced(),
      rcu_start_future_gp(), rcu_accelerate_cbs(), rcu_advance_cbs(),
      __note_gp_changes(), and rcu_start_gp() functions now return a boolean
      which indicates when a wake-up is needed.  A new rcu_gp_kthread_wake()
      does the wakeup when it is necessary and safe to do so: No self-wakes,
      no wake-ups if the ->gp_flags field indicates there is no need (as in
      someone else did the wake-up before we got around to it), and no wake-ups
      before the grace-period kthread has been created.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
      48a7639c
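      A minimal model of the calling convention described above: the request paths return a flag instead of waking the kthread themselves, and the caller wakes it only after dropping the rcu_node lock. The function bodies here are placeholders, not the kernel code:

      #include <stdbool.h>
      #include <stdio.h>

      /* Stands in for rcu_start_gp() and friends: runs with the rcu_node lock held. */
      static bool record_gp_request_locked(void)
      {
          /* ...note that a new grace period is needed...                        */
          return true;                    /* Tell the caller a wakeup is needed. */
      }

      static void rcu_gp_kthread_wake(void)
      {
          printf("waking grace-period kthread\n");
      }

      int main(void)
      {
          bool needwake;

          /* raw_spin_lock_irqsave(&rnp->lock, flags);    (conceptually)         */
          needwake = record_gp_request_locked();
          /* raw_spin_unlock_irqrestore(&rnp->lock, flags);                      */
          if (needwake)
              rcu_gp_kthread_wake();      /* Safe: no rcu_node lock held here.   */
          return 0;
      }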