1. 28 May 2015, 7 commits
  2. 15 April 2015, 1 commit
    • rcu: Control grace-period delays directly from value · 8d7dc928
      Committed by Paul E. McKenney
      In a misguided attempt to avoid an #ifdef, the use of the
      gp_init_delay module parameter was conditioned on the corresponding
      RCU_TORTURE_TEST_SLOW_INIT Kconfig variable, using IS_ENABLED() at
      the point of use in the code.  This meant that the compiler always saw
      the delay, which meant that RCU_TORTURE_TEST_SLOW_INIT_DELAY had to be
      unconditionally defined.  This in turn caused "make oldconfig" to ask
      pointless questions about the value of RCU_TORTURE_TEST_SLOW_INIT_DELAY
      in cases where it was not even used.
      
      This commit avoids these pointless questions by defining gp_init_delay
      under #ifdef.  In one branch, gp_init_delay is initialized to
      RCU_TORTURE_TEST_SLOW_INIT_DELAY and is also a module parameter (thus
      allowing boot-time modification), and in the other branch gp_init_delay
      is a const variable initialized by default to zero.
      
      This approach also simplifies the code at the delay point by eliminating
      the IS_ENABLED().  Because gp_init_delay is constant zero in the no-delay
      case intended for production use, the "gp_init_delay > 0" check causes
      the delay to become dead code, as desired in this case.  In addition,
      this commit replaces the magic constant "10" with the preprocessor macro
      PER_RCU_NODE_PERIOD, which controls the number of grace periods that
      are allowed to elapse at full speed before a delay is inserted.
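
      The pattern described above can be sketched roughly as follows (the
      Kconfig symbols are those named in this log; rsp->gpnum, rcu_num_nodes,
      and the placement of the delay check are assumptions about the
      surrounding code, not the literal diff):

      	#ifdef CONFIG_RCU_TORTURE_TEST_SLOW_INIT
      	static int gp_init_delay = CONFIG_RCU_TORTURE_TEST_SLOW_INIT_DELAY;
      	module_param(gp_init_delay, int, 0644);
      	#else /* #ifdef CONFIG_RCU_TORTURE_TEST_SLOW_INIT */
      	static const int gp_init_delay;
      	#endif /* #else #ifdef CONFIG_RCU_TORTURE_TEST_SLOW_INIT */

      	/* Grace periods allowed to elapse at full speed between delays. */
      	#define PER_RCU_NODE_PERIOD 10

      	/* At the delay point; dead code when gp_init_delay is constant zero. */
      	if (gp_init_delay > 0 &&
      	    !(rsp->gpnum % (rcu_num_nodes * PER_RCU_NODE_PERIOD)))
      		schedule_timeout_uninterruptible(gp_init_delay);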
      
      Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  3. 20 March 2015, 2 commits
    • rcu: Associate quiescent-state reports with grace period · 654e9533
      Committed by Paul E. McKenney
      As noted in earlier commit logs, CPU hotplug operations running
      concurrently with grace-period initialization can result in a given
      leaf rcu_node structure having all CPUs offline and no blocked readers,
      but with this rcu_node structure nevertheless blocking the current
      grace period.  Therefore, the quiescent-state forcing code now checks
      for this situation and repairs it.
      
      Unfortunately, this checking can result in false positives, for example,
      when the last task has just removed itself from this leaf rcu_node
      structure, but has not yet started clearing the ->qsmask bits further
      up the structure.  This means that the grace-period kthread (which
      forces quiescent states) and some other task might be attempting to
      concurrently clear these ->qsmask bits.  This is usually not a problem:
      One of these tasks will be the first to acquire the upper-level rcu_node
      structure's lock and will therefore clear the bit, and the other task,
      seeing the bit already cleared, will stop trying to clear bits.
      
      Sadly, this means that the following unusual sequence of events -can-
      result in a problem:
      
      1.	The grace-period kthread wins, and clears the ->qsmask bits.
      
      2.	This is the last thing blocking the current grace period, so
      	that the grace-period kthread clears ->qsmask bits all the way
      	to the root and finds that the root ->qsmask field is now zero.
      
      3.	Another grace period is required, so the grace-period kthread
      	initializes it, including setting all the needed qsmask bits.
      
      4.	The leaf rcu_node structure (the one that started this whole
      	mess) is blocking this new grace period, either because it
      	has at least one online CPU or because there is at least one
      	task that had blocked within an RCU read-side critical section
      	while running on one of this leaf rcu_node structure's CPUs.
      	(And yes, that CPU might well have gone offline before the
      	grace period in step (3) above started, which can mean that
      	there is a task on the leaf rcu_node structure's ->blkd_tasks
      	list, but ->qsmask equal to zero.)
      
      5.	The other kthread didn't get around to trying to clear the upper
      	level ->qsmask bits until all the above had happened.  This means
      	that it now sees bits set in the upper-level ->qsmask field, so it
      	proceeds to clear them.  Too bad that it is doing so on behalf of
      	a quiescent state that does not apply to the current grace period!
      
      This sequence of events can result in the new grace period being too
      short.  It can also result in the new grace period ending before the
      leaf rcu_node structure's ->qsmask bits have been cleared, which will
      result in splats during initialization of the next grace period.  In
      addition, it can result in tasks blocking the new grace period still
      being queued at the start of the next grace period, which will result
      in other splats.  Sasha's testing turned up another of these splats,
      as did rcutorture testing.  (And yes, rcutorture is being adjusted to
      make these splats show up more quickly.  Which probably is having the
      undesirable side effect of making other problems show up less quickly.
      Can't have everything!)
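
      As a minimal sketch of the fix, the quiescent-state-reporting path can
      carry the grace-period number that the report applies to and drop stale
      reports (the extra "gps" parameter and the field names follow the
      description above; the exact signature and locking are assumptions):

      	static void rcu_report_qs_rnp(unsigned long mask, struct rcu_state *rsp,
      				      struct rcu_node *rnp, unsigned long gps,
      				      unsigned long flags)
      	{
      		if (!(rnp->qsmask & mask) || rnp->gpnum != gps) {
      			/*
      			 * Bit already cleared, or this report belongs to an
      			 * earlier grace period: ignore it, avoiding the bug
      			 * described in the sequence above.
      			 */
      			raw_spin_unlock_irqrestore(&rnp->lock, flags);
      			return;
      		}
      		rnp->qsmask &= ~mask;
      		/* ... otherwise propagate the report up the rcu_node tree ... */
      		raw_spin_unlock_irqrestore(&rnp->lock, flags);
      	}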
      Reported-by: Sasha Levin <sasha.levin@oracle.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: <stable@vger.kernel.org> # 4.0.x
      Tested-by: Sasha Levin <sasha.levin@oracle.com>
    • rcu: Yet another fix for preemption and CPU hotplug · a77da14c
      Committed by Paul E. McKenney
      As noted earlier, the following sequence of events can occur when
      running PREEMPT_RCU and HOTPLUG_CPU on a system with a multi-level
      rcu_node combining tree:
      
      1.	A group of tasks block on CPUs corresponding to a given leaf
      	rcu_node structure while within RCU read-side critical sections.
      2.	All CPUs corresponding to that rcu_node structure go offline.
      3.	The next grace period starts, but because there are still tasks
      	blocked, the upper-level bits corresponding to this leaf rcu_node
      	structure remain set.
      4.	All the tasks exit their RCU read-side critical sections and
      	remove themselves from the leaf rcu_node structure's list,
      	leaving it empty.
      5.	But because there now is code to check for this condition at
      	force-quiescent-state time, the upper bits are cleared and the
      	grace period completes.
      
      However, there is another complication that can occur following step 4 above:
      
      4a.	The grace period starts, and the leaf rcu_node structure's
      	gp_tasks pointer is set to NULL because there are no tasks
      	blocked on this structure.
      4b.	One of the CPUs corresponding to the leaf rcu_node structure
      	comes back online.
      4c.	An endless stream of tasks is preempted within RCU read-side
      	critical sections on this CPU, such that the ->blkd_tasks
      	list is always non-empty.
      
      The grace period will never end.
      
      This commit therefore makes the force-quiescent-state processing check only
      for absence of tasks blocking the current grace period rather than absence
      of tasks altogether.  This will cause a quiescent state to be reported if
      the current leaf rcu_node structure is not blocking the current grace period
      and its parent thinks that it is, regardless of how RCU managed to get
      itself into this state.
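
      A simplified sketch of the changed condition, using the fields named
      above (->blkd_tasks and gp_tasks); treat this as illustrative rather
      than the literal diff:

      	/* Before: report a quiescent state only when no tasks remain queued. */
      	if (list_empty(&rnp->blkd_tasks) && rnp->qsmask == 0)
      		/* ... report quiescent state up the rcu_node tree ... */;

      	/* After: report whenever no queued task blocks the CURRENT grace period. */
      	if (!rnp->gp_tasks && rnp->qsmask == 0)
      		/* ... report quiescent state up the rcu_node tree ... */;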
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: <stable@vger.kernel.org> # 4.0.x
      Tested-by: Sasha Levin <sasha.levin@oracle.com>
  4. 13 March 2015, 5 commits
    • rcu: Add diagnostics to grace-period cleanup · 5c60d25f
      Committed by Paul E. McKenney
      At grace-period initialization time, RCU checks that all quiescent
      states were really reported for the previous grace period.  Now that
      grace-period cleanup has been split out of grace-period initialization,
      this commit also performs those checks at grace-period cleanup time.
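
      A sketch of what such cleanup-time checks might look like, assuming the
      usual breadth-first walk of the rcu_node tree (illustrative only):

      	/* At grace-period cleanup, every quiescent state should be in. */
      	rcu_for_each_node_breadth_first(rsp, rnp) {
      		raw_spin_lock_irq(&rnp->lock);
      		WARN_ON_ONCE(rcu_preempt_blocked_readers_cgp(rnp));
      		WARN_ON_ONCE(rnp->qsmask);
      		/* ... existing cleanup processing ... */
      		raw_spin_unlock_irq(&rnp->lock);
      	}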
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    • rcu: Handle outgoing CPUs on exit from idle loop · 88428cc5
      Committed by Paul E. McKenney
      This commit informs RCU of an outgoing CPU just before that CPU invokes
      arch_cpu_idle_dead() during its last pass through the idle loop (via a
      new CPU_DYING_IDLE notifier value).  This change means that RCU need not
      deal with outgoing CPUs passing through the scheduler after informing
      RCU that they are no longer online.  Note that removing the CPU from
      the rcu_node ->qsmaskinit bit masks is done at CPU_DYING_IDLE time,
      and orphaning callbacks is still done at CPU_DEAD time, the reason being
      that at CPU_DEAD time we have another CPU that can adopt them.
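
      In outline, and with the caveat that the notifier call below is an
      assumption rather than a quote of the patch, the idle loop's final pass
      on an outgoing CPU might look like this:

      	/* Last pass through the idle loop on an outgoing CPU (sketch). */
      	if (cpu_is_offline(smp_processor_id())) {
      		rcu_cpu_notify(NULL, CPU_DYING_IDLE,
      			       (void *)(long)smp_processor_id());
      		arch_cpu_idle_dead();
      	}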
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    • rcu: Eliminate ->onoff_mutex from rcu_node structure · c1990689
      Committed by Paul E. McKenney
      Because RCU grace-period initialization need no longer exclude
      CPU-hotplug operations, this commit eliminates the ->onoff_mutex and
      its uses.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    • rcu: Process offlining and onlining only at grace-period start · 0aa04b05
      Committed by Paul E. McKenney
      Races between CPU hotplug and grace periods can be difficult to resolve,
      so the ->onoff_mutex is used to exclude the two events.  Unfortunately,
      this means that it is impossible for an outgoing CPU to perform the
      last bits of its offlining from its last pass through the idle loop,
      because sleeplocks cannot be acquired in that context.
      
      This commit avoids these problems by buffering online and offline events
      in a new ->qsmaskinitnext field in the leaf rcu_node structures.  When a
      grace period starts, the events accumulated in this mask are applied to
      the ->qsmaskinit field, and, if needed, up the rcu_node tree.  The special
      case of all CPUs corresponding to a given leaf rcu_node structure being
      offline while there are still elements in that structure's ->blkd_tasks
      list is handled using a new ->wait_blkd_tasks field.  In this case,
      propagating the offline bits up the tree is deferred until the beginning
      of the grace period after all of the tasks have exited their RCU read-side
      critical sections and removed themselves from the list, at which point
      the ->wait_blkd_tasks flag is cleared.  If one of that leaf rcu_node
      structure's CPUs comes back online before the list empties, then the
      ->wait_blkd_tasks flag is simply cleared.
      
      This of course means that RCU's notion of which CPUs are offline can be
      out of date.  This is OK because RCU need only wait on CPUs that were
      online at the time that the grace period started.  In addition, RCU's
      force-quiescent-state actions will handle the case where a CPU goes
      offline after the grace period starts.
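
      A rough sketch of how grace-period start might fold the buffered events
      into ->qsmaskinit, using the field names from this log (the loop and
      locking details are assumptions):

      	/* Apply buffered CPU-hotplug events at grace-period start. */
      	rcu_for_each_leaf_node(rsp, rnp) {
      		raw_spin_lock_irq(&rnp->lock);
      		if (rnp->qsmaskinit != rnp->qsmaskinitnext) {
      			rnp->qsmaskinit = rnp->qsmaskinitnext;
      			/*
      			 * Propagate the change up the tree, except that an
      			 * all-offline leaf with a non-empty ->blkd_tasks list
      			 * defers the propagation via ->wait_blkd_tasks.
      			 */
      		}
      		raw_spin_unlock_irq(&rnp->lock);
      	}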
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    • rcu: Move rcu_report_unblock_qs_rnp() to common code · cc99a310
      Committed by Paul E. McKenney
      The rcu_report_unblock_qs_rnp() function is invoked when the
      last task blocking the current grace period exits its outermost
      RCU read-side critical section.  Previously, this was called only
      from rcu_read_unlock_special(), and was therefore defined only when
      CONFIG_RCU_PREEMPT=y.  However, this function will be invoked even when
      CONFIG_RCU_PREEMPT=n once CPU-hotplug operations are processed only at
      the beginnings of RCU grace periods.  The reason for this change is that
      the last task on a given leaf rcu_node structure's ->blkd_tasks list
      might well exit its RCU read-side critical section between the time that
      recent CPU-hotplug operations were applied and when the new grace period
      was initialized.  This situation could result in RCU waiting forever on
      that leaf rcu_node structure, because if all that structure's CPUs were
      already offline, there would be no quiescent-state events to drive that
      structure's part of the grace period.
      
      This commit therefore moves rcu_report_unblock_qs_rnp() to common code
      that is built unconditionally so that the quiescent-state-forcing code
      can clean up after this situation, avoiding the grace-period stall.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  5. 12 March 2015, 5 commits
  6. 04 March 2015, 5 commits
  7. 27 February 2015, 8 commits
  8. 26 February 2015, 1 commit
  9. 16 January 2015, 3 commits
    • rcu: Add GP-kthread-starvation checks to CPU stall warnings · fb81a44b
      Committed by Paul E. McKenney
      This commit adds a message that is printed if the relevant grace-period
      kthread has not been able to run for the two seconds preceding the
      stall warning.  (The two seconds is double the maximum interval between
      successive bouts of quiescent-state forcing.)
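
      The check itself might look something like the sketch below, where
      rsp->gp_activity is assumed to be a timestamp updated whenever the
      grace-period kthread makes progress:

      	static void rcu_check_gp_kthread_starvation(struct rcu_state *rsp)
      	{
      		unsigned long j = jiffies - ACCESS_ONCE(rsp->gp_activity);

      		/* Two seconds: twice the maximum force-quiescent-state interval. */
      		if (j > 2 * HZ)
      			pr_err("%s kthread starved for %ld jiffies!\n",
      			       rsp->name, j);
      	}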
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    • rcu: Make cond_resched_rcu_qs() apply to normal RCU flavors · 5cd37193
      Committed by Paul E. McKenney
      Although cond_resched_rcu_qs() only applies to TASKS_RCU, it is used
      in places where it would be useful for it to apply to the normal RCU
      flavors, rcu_preempt, rcu_sched, and rcu_bh.  This is especially the
      case for workloads that aggressively overload the system, particularly
      those that generate large numbers of RCU updates on systems running
      NO_HZ_FULL CPUs.  This commit therefore communicates quiescent states
      from cond_resched_rcu_qs() to the normal RCU flavors.
      
      Note that it is unfortunately necessary to leave the old ->passed_quiesce
      mechanism in place to allow quiescent states that apply to only one
      flavor to be recorded.  (Yes, we could decrement ->rcu_qs_ctr_snap in
      that case, but that is not so good for debugging of RCU internals.)
      In addition, if one of the RCU flavor's grace period has stalled, this
      will invoke rcu_momentary_dyntick_idle(), resulting in a heavy-weight
      quiescent state visible from other CPUs.
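
      Conceptually, the mechanism amounts to a per-CPU counter that
      cond_resched_rcu_qs() bumps and that the grace-period machinery compares
      against a per-CPU snapshot; the names below (other than ->rcu_qs_ctr_snap,
      mentioned above) are illustrative:

      	DEFINE_PER_CPU(unsigned long, rcu_qs_ctr);

      	/* Invoked from cond_resched_rcu_qs(): a quiescent state for all flavors. */
      	static inline void rcu_all_qs(void)
      	{
      		this_cpu_inc(rcu_qs_ctr);
      	}

      	/* Grace-period side: a changed counter since the snapshot implies a QS. */
      	static bool rcu_qs_since_snap(struct rcu_data *rdp)
      	{
      		return per_cpu(rcu_qs_ctr, rdp->cpu) != rdp->rcu_qs_ctr_snap;
      	}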
      Reported-by: Sasha Levin <sasha.levin@oracle.com>
      Reported-by: Dave Jones <davej@redhat.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      [ paulmck: Merge commit from Sasha Levin fixing a bug where __this_cpu()
        was used in preemptible code. ]
    • rcu: Optionally run grace-period kthreads at real-time priority · a94844b2
      Committed by Paul E. McKenney
      Recent testing has shown that under heavy load, running RCU's grace-period
      kthreads at real-time priority can improve performance (according to 0day
      test robot) and reduce the incidence of RCU CPU stall warnings.  However,
      most systems do just fine with the default non-realtime priorities for
      these kthreads, and it does not make sense to expose the entire user
      base to any risk stemming from this change, given that this change is
      of use only to a few users running extremely heavy workloads.
      
      Therefore, this commit allows users to specify realtime priorities
      for the grace-period kthreads, but leaves them running SCHED_OTHER
      by default.  The realtime priority may be specified at build time
      via the RCU_KTHREAD_PRIO Kconfig parameter, or at boot time via the
      rcutree.kthread_prio parameter.  Either way, 0 says to continue the
      default SCHED_OTHER behavior, and values from 1 to 99 specify SCHED_FIFO
      behavior at that priority.  Note that a value of 0 is not permitted when
      the RCU_BOOST Kconfig parameter is specified.
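
      Applying the priority when the grace-period kthread is spawned could look
      roughly like this (kthread_prio standing in for the Kconfig/boot value,
      and rsp->gp_kthread for the kthread's task_struct):

      	struct sched_param sp;
      	struct task_struct *t = rsp->gp_kthread;

      	if (kthread_prio) {	/* 0 keeps the default SCHED_OTHER behavior. */
      		sp.sched_priority = kthread_prio;
      		sched_setscheduler_nocheck(t, SCHED_FIFO, &sp);
      	}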
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  10. 11 January 2015, 2 commits
    • rcutorture: Check from beginning to end of grace period · 917963d0
      Committed by Paul E. McKenney
      Currently, rcutorture's Reader Batch checks measure from the end of
      the previous grace period to the end of the current one.  This commit
      tightens up these checks by measuring from the start and end of the same
      grace period.  This involves adding rcu_batches_started() and friends
      corresponding to the existing rcu_batches_completed() and friends.
      
      We leave SRCU alone for the moment, as it does not yet have a way of
      tracking both ends of its grace periods.
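
      The new accessors are thin wrappers over the existing counters; for
      example, a started-side counterpart to rcu_batches_completed_sched()
      might look like this sketch:

      	/* Number of RCU-sched grace periods started (sketch). */
      	unsigned long rcu_batches_started_sched(void)
      	{
      		return rcu_sched_state.gpnum;
      	}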
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    • rcu: Make _batches_completed() functions return unsigned long · 9733e4f0
      Committed by Paul E. McKenney
      Long ago, the various ->completed fields were of type long, but now are
      unsigned long due to signed-integer-overflow concerns.  However, the
      various _batches_completed() functions remained of type long, even though
      their only purpose in life is to return the corresponding ->completed
      field.  This patch cleans this up by changing these functions' return
      types to unsigned long.
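
      In other words, the prototypes change along these lines (sketch):

      	/* Before: signed return type despite an unsigned ->completed field. */
      	long rcu_batches_completed_sched(void);

      	/* After: return type matches the underlying counter. */
      	unsigned long rcu_batches_completed_sched(void);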
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  11. 07 January 2015, 1 commit
    • rcu: Handle gpnum/completed wrap while dyntick idle · e3663b10
      Committed by Paul E. McKenney
      Subtle race conditions can result if a CPU stays in dyntick-idle mode
      long enough for the ->gpnum and ->completed fields to wrap.  For
      example, consider the following sequence of events:
      
      o	CPU 1 encounters a quiescent state while waiting for grace period
      	5 to complete, but then enters dyntick-idle mode.
      
      o	While CPU 1 is in dyntick-idle mode, the grace-period counters
      	wrap around so that the grace period number is now 4.
      
      o	Just as CPU 1 exits dyntick-idle mode, grace period 4 completes
      	and grace period 5 begins.
      
      o	The quiescent state that CPU 1 passed through during the old
      	grace period 5 looks like it applies to the new grace period
      	5.  Therefore, the new grace period 5 completes without CPU 1
      	having passed through a quiescent state.
      
      This could clearly be a fatal surprise to any long-running RCU read-side
      critical section that happened to be running on CPU 1 at the time.  At one
      time, this was not a problem, given that it takes significant time for
      the grace-period counters to overflow even on 32-bit systems.  However,
      with the advent of NO_HZ_FULL and SMP embedded systems, arbitrarily long
      idle periods are now becoming quite feasible.  It is therefore time to
      close this race.
      
      This commit therefore avoids this race condition by having the
      quiescent-state forcing code detect when a CPU is falling too far
      behind, and setting a new rcu_data field ->gpwrap when this happens.
      Whenever this new ->gpwrap field is set, the CPU's ->gpnum and ->completed
      fields are known to be untrustworthy, and can be ignored, along with
      any associated quiescent states.
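
      The detection amounts to a wrap-tolerant comparison that flags a CPU
      whose counters have fallen too far behind; ULONG_CMP_LT() is RCU's
      modular comparison helper, and the exact placement of this check is an
      assumption:

      	/*
      	 * If this CPU's ->gpnum is more than a quarter of the counter space
      	 * behind the rcu_node structure's, its ->gpnum and ->completed values
      	 * (and any recorded quiescent states) can no longer be trusted.
      	 */
      	if (ULONG_CMP_LT(READ_ONCE(rdp->gpnum) + ULONG_MAX / 4, rnp->gpnum))
      		WRITE_ONCE(rdp->gpwrap, true);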
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>