1. 06 May 2011 (8 commits)
• rcu: add grace-period age and more kthread state to tracing · 15ba0ba8
  Committed by Paul E. McKenney
      This commit adds the age in jiffies of the current grace period along
      with the duration in jiffies of the longest grace period since boot
      to the rcu/rcugp debugfs file.  It also adds an additional "O" state
      to kthread tracing to differentiate between the kthread waiting due to
      having nothing to do on the one hand and waiting due to being on the
      wrong CPU on the other hand.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
• rcu: add tracing for RCU's kthread run states. · d71df90e
  Committed by Paul E. McKenney
      Add tracing to help debugging situations when RCU's kthreads are not
      running but are supposed to be.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
• rcu: Add boosting to TREE_PREEMPT_RCU tracing · 0ea1f2eb
  Committed by Paul E. McKenney
      Includes total number of tasks boosted, number boosted on behalf of each
      of normal and expedited grace periods, and statistics on attempts to
      initiate boosting that failed for various reasons.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
• rcu: priority boosting for TREE_PREEMPT_RCU · 27f4d280
  Committed by Paul E. McKenney
      Add priority boosting for TREE_PREEMPT_RCU, similar to that for
      TINY_PREEMPT_RCU.  This is enabled by the default-off RCU_BOOST
      kernel parameter.  The priority to which to boost preempted
      RCU readers is controlled by the RCU_BOOST_PRIO kernel parameter
      (defaulting to real-time priority 1) and the time to wait before
      boosting the readers who are blocking a given grace period is
      controlled by the RCU_BOOST_DELAY kernel parameter (defaulting to
      500 milliseconds).
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
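As a rough illustration of the two boosting knobs described in this commit, here is a user-space C sketch; the predicate name and the HZ=1000 jiffies assumption are invented for illustration, and only the two parameter names and their defaults come from the commit text.

```c
#include <assert.h>

/* Defaults quoted in the commit: boost preempted readers to real-time
 * priority 1 once they have blocked a grace period for 500 ms (here
 * assuming HZ=1000, so one jiffy is one millisecond). */
#define RCU_BOOST_PRIO  1
#define RCU_BOOST_DELAY 500

/* Hypothetical predicate: have preempted readers blocked the current
 * grace period long enough that they should be priority-boosted? */
static int rcu_should_boost(unsigned long now, unsigned long gp_start,
                            int readers_blocking_gp)
{
    return readers_blocking_gp && (now - gp_start) >= RCU_BOOST_DELAY;
}
```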
• rcu: move TREE_RCU from softirq to kthread · a26ac245
  Committed by Paul E. McKenney
      If RCU priority boosting is to be meaningful, callback invocation must
be boosted in addition to preempted RCU readers.  Otherwise, in the
presence of CPU-bound real-time threads, the grace period ends, but the callbacks don't
      get invoked.  If the callbacks don't get invoked, the associated memory
      doesn't get freed, so the system is still subject to OOM.
      
      But it is not reasonable to priority-boost RCU_SOFTIRQ, so this commit
      moves the callback invocations to a kthread, which can be boosted easily.
      
Also add comments and properly synchronize all accesses to
rcu_cpu_kthread_task, as suggested by Lai Jiangshan.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
• rcu: merge TREE_PREEMPT_RCU blocked_tasks[] lists · 12f5f524
  Committed by Paul E. McKenney
      Combine the current TREE_PREEMPT_RCU ->blocked_tasks[] lists in the
      rcu_node structure into a single ->blkd_tasks list with ->gp_tasks
      and ->exp_tasks tail pointers.  This is in preparation for RCU priority
      boosting, which will add a third dimension to the combinatorial explosion
      in the ->blocked_tasks[] case, but simply a third pointer in the new
      ->blkd_tasks case.
      
Also update the documentation to reflect the blocked_tasks[] merge.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
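A minimal user-space sketch of the merged layout described above; the field names follow the commit text, but the simplified singly linked list and helper are assumptions for illustration, not the kernel's actual list_head-based code.

```c
#include <assert.h>
#include <stddef.h>

struct blkd_task {
    struct blkd_task *next;
};

/* One combined ->blkd_tasks list replaces the old ->blocked_tasks[]
 * array; tail pointers mark the portions of the list that still block
 * the current normal and expedited grace periods. */
struct rcu_node_sketch {
    struct blkd_task *blkd_tasks; /* all readers blocked in this node */
    struct blkd_task *gp_tasks;   /* first task blocking the normal GP */
    struct blkd_task *exp_tasks;  /* first task blocking the expedited GP */
};

/* The normal grace period is blocked by this node exactly when some
 * queued task still blocks it, i.e. when gp_tasks is non-NULL. */
static int gp_blocked(const struct rcu_node_sketch *rnp)
{
    return rnp->gp_tasks != NULL;
}
```

Adding a third kind of grace period (for boosting) then costs only one more tail pointer, rather than doubling a blocked_tasks[] array again.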
• rcu: Decrease memory-barrier usage based on semi-formal proof · e59fb312
  Committed by Paul E. McKenney
Commit d09b62df fixed grace-period synchronization, but left some smp_mb()
invocations in rcu_process_callbacks() that are no longer needed; sheer
paranoia had prevented them from being removed.  This commit removes
      them and provides a proof of correctness in their absence.  It also adds
      a memory barrier to rcu_report_qs_rsp() immediately before the update to
      rsp->completed in order to handle the theoretical possibility that the
      compiler or CPU might move massive quantities of code into a lock-based
      critical section.  This also proves that the sheer paranoia was not
      entirely unjustified, at least from a theoretical point of view.
      
      In addition, the old dyntick-idle synchronization depended on the fact
      that grace periods were many milliseconds in duration, so that it could
      be assumed that no dyntick-idle CPU could reorder a memory reference
      across an entire grace period.  Unfortunately for this design, the
      addition of expedited grace periods breaks this assumption, which has
      the unfortunate side-effect of requiring atomic operations in the
      functions that track dyntick-idle state for RCU.  (There is some hope
      that the algorithms used in user-level RCU might be applied here, but
      some work is required to handle the NMIs that user-space applications
      can happily ignore.  For the short term, better safe than sorry.)
      
      This proof assumes that neither compiler nor CPU will allow a lock
      acquisition and release to be reordered, as doing so can result in
      deadlock.  The proof is as follows:
      
      1.	A given CPU declares a quiescent state under the protection of
      	its leaf rcu_node's lock.
      
      2.	If there is more than one level of rcu_node hierarchy, the
      	last CPU to declare a quiescent state will also acquire the
      	->lock of the next rcu_node up in the hierarchy,  but only
      	after releasing the lower level's lock.  The acquisition of this
      	lock clearly cannot occur prior to the acquisition of the leaf
      	node's lock.
      
      3.	Step 2 repeats until we reach the root rcu_node structure.
      	Please note again that only one lock is held at a time through
      	this process.  The acquisition of the root rcu_node's ->lock
      	must occur after the release of that of the leaf rcu_node.
      
      4.	At this point, we set the ->completed field in the rcu_state
      	structure in rcu_report_qs_rsp().  However, if the rcu_node
      	hierarchy contains only one rcu_node, then in theory the code
      	preceding the quiescent state could leak into the critical
      	section.  We therefore precede the update of ->completed with a
      	memory barrier.  All CPUs will therefore agree that any updates
      	preceding any report of a quiescent state will have happened
      	before the update of ->completed.
      
      5.	Regardless of whether a new grace period is needed, rcu_start_gp()
      	will propagate the new value of ->completed to all of the leaf
      	rcu_node structures, under the protection of each rcu_node's ->lock.
      	If a new grace period is needed immediately, this propagation
      	will occur in the same critical section that ->completed was
      	set in, but courtesy of the memory barrier in #4 above, is still
      	seen to follow any pre-quiescent-state activity.
      
      6.	When a given CPU invokes __rcu_process_gp_end(), it becomes
      	aware of the end of the old grace period and therefore makes
      	any RCU callbacks that were waiting on that grace period eligible
      	for invocation.
      
      	If this CPU is the same one that detected the end of the grace
      	period, and if there is but a single rcu_node in the hierarchy,
      	we will still be in the single critical section.  In this case,
      	the memory barrier in step #4 guarantees that all callbacks will
      	be seen to execute after each CPU's quiescent state.
      
      	On the other hand, if this is a different CPU, it will acquire
      	the leaf rcu_node's ->lock, and will again be serialized after
      	each CPU's quiescent state for the old grace period.
      
      On the strength of this proof, this commit therefore removes the memory
      barriers from rcu_process_callbacks() and adds one to rcu_report_qs_rsp().
      The effect is to reduce the number of memory barriers by one and to
      reduce the frequency of execution from about once per scheduling tick
      per CPU to once per grace period.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
• rcu: Remove conditional compilation for RCU CPU stall warnings · a00e0d71
  Committed by Paul E. McKenney
      The RCU CPU stall warnings can now be controlled using the
      rcu_cpu_stall_suppress boot-time parameter or via the same parameter
      from sysfs.  There is therefore no longer any reason to have
      kernel config parameters for this feature.  This commit therefore
      removes the RCU_CPU_STALL_DETECTOR and RCU_CPU_STALL_DETECTOR_RUNNABLE
      kernel config parameters.  The RCU_CPU_STALL_TIMEOUT parameter remains
      to allow the timeout to be tuned and the RCU_CPU_STALL_VERBOSE parameter
      remains to allow task-stall information to be suppressed if desired.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
2. 18 December 2010 (1 commit)
• rcu: limit rcu_node leaf-level fanout · 0209f649
  Committed by Paul E. McKenney
      Some recent benchmarks have indicated possible lock contention on the
      leaf-level rcu_node locks.  This commit therefore limits the number of
      CPUs per leaf-level rcu_node structure to 16, in other words, there
      can be at most 16 rcu_data structures fanning into a given rcu_node
      structure.  Prior to this, the limit was 32 on 32-bit systems and 64 on
      64-bit systems.
      
      Note that the fanout of non-leaf rcu_node structures is unchanged.  The
      organization of accesses to the rcu_node tree is such that references
      to non-leaf rcu_node structures are much less frequent than to the
      leaf structures.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
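The contention effect of the new leaf limit can be seen by counting leaf-level rcu_node structures before and after the change; a sketch under the limits stated above, with the helper name invented here.

```c
#include <assert.h>

/* Leaf rcu_node structures needed for ncpus CPUs when each leaf can
 * fan in at most 'fanout' rcu_data structures (rounding up). */
static int num_leaf_nodes(int ncpus, int fanout)
{
    return (ncpus + fanout - 1) / fanout;
}
```

With the old 64-bit limit, 64 CPUs contended for one leaf lock; with the new limit of 16, the same CPUs spread across four leaf locks.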
3. 30 November 2010 (1 commit)
4. 24 September 2010 (1 commit)
• rcu: Add tracing data to support queueing models · 269dcc1c
  Committed by Paul E. McKenney
      The current tracing data is not sufficient to deduce the average time
      that a callback spends waiting for a grace period to end.  Add three
      per-CPU counters recording the number of callbacks invoked (ci), the
      number of callbacks orphaned (co), and the number of callbacks adopted
      (ca).  Given the existing callback queue length (ql), the average wait
      time in absence of CPU hotplug operations is ql/ci.  The units of wait
      time will be in terms of the duration over which ci was measured.
      
      In the presence of CPU hotplug operations, there is room for argument,
      but ql/(ci-co+ca) won't steer you too far wrong.
      
      Also fixes a typo called out by Lucas De Marchi <lucas.de.marchi@gmail.com>.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
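The two estimates above can be written out directly; the struct and function names are illustrative, and only the counter names (ql, ci, co, ca) come from the tracing output the commit describes.

```c
#include <assert.h>

/* Per-CPU tracing counters from the commit: queue length plus callbacks
 * invoked, orphaned, and adopted over the measurement interval. */
struct rcu_cb_stats {
    double ql, ci, co, ca;
};

/* Average callback wait (in units of the measurement interval),
 * ignoring CPU hotplug: ql/ci. */
static double avg_wait(const struct rcu_cb_stats *s)
{
    return s->ql / s->ci;
}

/* Hotplug-aware estimate: correct ci for callbacks that left via
 * offlined CPUs (orphaned) or arrived from them (adopted). */
static double avg_wait_hotplug(const struct rcu_cb_stats *s)
{
    return s->ql / (s->ci - s->co + s->ca);
}
```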
5. 21 August 2010 (2 commits)
6. 20 August 2010 (3 commits)
7. 11 May 2010 (2 commits)
• rcu: reduce the number of spurious RCU_SOFTIRQ invocations · d21670ac
  Committed by Paul E. McKenney
      Lai Jiangshan noted that up to 10% of the RCU_SOFTIRQ are spurious, and
      traced this down to the fact that the current grace-period machinery
      will uselessly raise RCU_SOFTIRQ when a given CPU needs to go through
      a quiescent state, but has not yet done so.  In this situation, there
      might well be nothing that RCU_SOFTIRQ can do, and the overhead can be
      worth worrying about in the ksoftirqd case.  This patch therefore avoids
      raising RCU_SOFTIRQ in this situation.
      
      Changes since v1 (http://lkml.org/lkml/2010/3/30/122 from Lai Jiangshan):
      
      o	Omit the rcu_qs_pending() prechecks, as they aren't that
      	much less expensive than the quiescent-state checks.
      
      o	Merge with the set_need_resched() patch that reduces IPIs.
      
      o	Add the new n_rp_report_qs field to the rcu_pending tracing output.
      
      o	Update the tracing documentation accordingly.
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
• rcu: improve RCU CPU stall-warning messages · 4300aa64
  Committed by Paul E. McKenney
      The existing RCU CPU stall-warning messages can be confusing, especially
      in the case where one CPU detects a single other stalled CPU.  In addition,
      the console messages did not say which flavor of RCU detected the stall,
      which can make it difficult to work out exactly what is causing the stall.
      This commit improves these messages.
Requested-by: Dhaval Giani <dhaval.giani@gmail.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
8. 11 March 2010 (1 commit)
• rcu: Increase RCU CPU stall timeouts if PROVE_RCU · 007b0924
  Committed by Paul E. McKenney
      CONFIG_PROVE_RCU imposes additional overhead on the kernel, so
      increase the RCU CPU stall timeouts in an attempt to allow for
      this effect.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <1267830207-9474-2-git-send-email-paulmck@linux.vnet.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
9. 27 February 2010 (1 commit)
10. 25 February 2010 (3 commits)
11. 16 January 2010 (1 commit)
• rcu: Fix sparse warnings · 017c4261
  Committed by Paul E. McKenney
      Rename local variable "i" in rcu_init() to avoid conflict with
      RCU_INIT_FLAVOR(), restrict the scope of RCU_TREE_NONCORE, and
      make __synchronize_srcu() static.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <12635142581560-git-send-email->
Signed-off-by: Ingo Molnar <mingo@elte.hu>
12. 13 January 2010 (4 commits)
• rcu: Make force_quiescent_state() start grace period if needed · 46a1e34e
  Committed by Paul E. McKenney
      Grace periods cannot be started while force_quiescent_state() is
      active.  This is OK in that the affected CPUs will try again
      later, but it does induce needless grace-period delays.  This
      patch causes rcu_start_gp() to record a failed attempt to start
      a grace period. When force_quiescent_state() prepares to return,
      it then starts the grace period if there was such a failed
      attempt.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <12626465501854-git-send-email->
Signed-off-by: Ingo Molnar <mingo@elte.hu>
• rcu: Remove leg of force_quiescent_state() switch statement · ee47eb9f
  Committed by Paul E. McKenney
The comparisons of rsp->gpnum and rsp->completed in
      rcu_process_dyntick() and force_quiescent_state() can be
      replaced by the much more clear rcu_gp_in_progress() predicate
      function.  After doing this, it becomes clear that the
      RCU_SAVE_COMPLETED leg of the force_quiescent_state() function's
      switch statement is almost completely a no-op.  A small change
      to the RCU_SAVE_DYNTICK leg renders it a complete no-op, after
      which it can be removed.  Doing so also eliminates the forcenow
      local variable from force_quiescent_state().
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <12626465501781-git-send-email->
Signed-off-by: Ingo Molnar <mingo@elte.hu>
• rcu: Eliminate local variable lastcomp from force_quiescent_state() · 39c0bbfc
  Committed by Paul E. McKenney
      Because rsp->fqs_active is set to 1 across
      force_quiescent_state()'s switch statement, rcu_start_gp() will
      refrain from starting a new grace period during this time.
      Therefore, rsp->gpnum is constant, and can be propagated to all
      uses of lastcomp, eliminating this local variable.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <12626465502985-git-send-email->
Signed-off-by: Ingo Molnar <mingo@elte.hu>
• rcu: Prohibit starting new grace periods while forcing quiescent states · 07079d53
  Committed by Paul E. McKenney
      Reduce the number and variety of race conditions by prohibiting
      the start of a new grace period while force_quiescent_state() is
      active. A new fqs_active flag in the rcu_state structure is used
      to trace whether or not force_quiescent_state() is active, and
      this new flag is tested by rcu_start_gp().  If the CPU that
      closed out the last grace period needs another grace period,
      this new grace period may be delayed up to one scheduling-clock
      tick, but it will eventually get started.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <126264655052-git-send-email->
Signed-off-by: Ingo Molnar <mingo@elte.hu>
13. 03 December 2009 (3 commits)
• rcu: Add expedited grace-period support for preemptible RCU · d9a3da06
  Committed by Paul E. McKenney
Implement a synchronize_rcu_expedited() for preemptible RCU
that actually is expedited.  This uses
      synchronize_sched_expedited() to force all threads currently
      running in a preemptible-RCU read-side critical section onto the
      appropriate ->blocked_tasks[] list, then takes a snapshot of all
      of these lists and waits for them to drain.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <1259784616158-git-send-email->
Signed-off-by: Ingo Molnar <mingo@elte.hu>
• rcu: Enable fourth level of TREE_RCU hierarchy · cf244dc0
  Committed by Paul E. McKenney
      Enable a fourth level of rcu_node hierarchy for TREE_RCU and
TREE_PREEMPT_RCU.  This is for stress-testing and experimental
purposes only, although in theory it would enable 16,777,216 CPUs
on 64-bit systems and 1,048,576 CPUs on 32-bit systems.  Experimental
use of this fourth level will normally set CONFIG_RCU_FANOUT=2,
requiring a 16-CPU system, though the more adventurous (and more
fortunate) experimenters may wish to choose CONFIG_RCU_FANOUT=3 for
81-CPU systems or even CONFIG_RCU_FANOUT=4 for 256-CPU systems.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Josh Triplett <josh@joshtriplett.org>
Acked-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <12597846161257-git-send-email->
Signed-off-by: Ingo Molnar <mingo@elte.hu>
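The CPU counts quoted above all follow from raising the fanout to the number of hierarchy levels; a quick check, with the helper name invented here.

```c
#include <assert.h>

/* Maximum CPUs supported by an rcu_node tree: the per-level fanout
 * raised to the number of levels in the hierarchy. */
static long max_rcu_cpus(long fanout, int levels)
{
    long n = 1;
    while (levels-- > 0)
        n *= fanout;
    return n;
}
```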
• rcu: Rename "quiet" functions · d3f6bad3
  Committed by Paul E. McKenney
      The number of "quiet" functions has grown recently, and the
      names are no longer very descriptive.  The point of all of these
      functions is to do some portion of the task of reporting a
      quiescent state, so rename them accordingly:
      
      o	cpu_quiet() becomes rcu_report_qs_rdp(), which reports a
      	quiescent state to the per-CPU rcu_data structure.  If this
      	turns out to be a new quiescent state for this grace period,
      	then rcu_report_qs_rnp() will be invoked to propagate the
      	quiescent state up the rcu_node hierarchy.
      
      o	cpu_quiet_msk() becomes rcu_report_qs_rnp(), which reports
      	a quiescent state for a given CPU (or possibly a set of CPUs)
      	up the rcu_node hierarchy.
      
      o	cpu_quiet_msk_finish() becomes rcu_report_qs_rsp(), which
      	reports a full set of quiescent states to the global rcu_state
      	structure.
      
      o	task_quiet() becomes rcu_report_unblock_qs_rnp(), which reports
      	a quiescent state due to a task exiting an RCU read-side critical
      	section that had previously blocked in that same critical section.
      	As indicated by the new name, this type of quiescent state is
      	reported up the rcu_node hierarchy (using rcu_report_qs_rnp()
      	to do so).
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Josh Triplett <josh@joshtriplett.org>
Acked-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <12597846163698-git-send-email->
Signed-off-by: Ingo Molnar <mingo@elte.hu>
14. 23 November 2009 (1 commit)
• rcu: Fix grace-period-stall bug on large systems with CPU hotplug · b668c9cf
  Committed by Paul E. McKenney
      When the last CPU of a given leaf rcu_node structure goes
      offline, all of the tasks queued on that leaf rcu_node structure
      (due to having blocked in their current RCU read-side critical
      sections) are requeued onto the root rcu_node structure.  This
      requeuing is carried out by rcu_preempt_offline_tasks().
      However, it is possible that these queued tasks are the only
      thing preventing the leaf rcu_node structure from reporting a
      quiescent state up the rcu_node hierarchy.  Unfortunately, the
      old code would fail to do this reporting, resulting in a
      grace-period stall given the following sequence of events:
      
      1.	Kernel built for more than 32 CPUs on 32-bit systems or for more
      	than 64 CPUs on 64-bit systems, so that there is more than one
      	rcu_node structure.  (Or CONFIG_RCU_FANOUT is artificially set
      	to a number smaller than CONFIG_NR_CPUS.)
      
      2.	The kernel is built with CONFIG_TREE_PREEMPT_RCU.
      
      3.	A task running on a CPU associated with a given leaf rcu_node
      	structure blocks while in an RCU read-side critical section
      	-and- that CPU has not yet passed through a quiescent state
      	for the current RCU grace period.  This will cause the task
      	to be queued on the leaf rcu_node's blocked_tasks[] array, in
      	particular, on the element of this array corresponding to the
      	current grace period.
      
      4.	Each of the remaining CPUs corresponding to this same leaf rcu_node
      	structure pass through a quiescent state.  However, the task is
      	still in its RCU read-side critical section, so these quiescent
      	states cannot be reported further up the rcu_node hierarchy.
      	Nevertheless, all bits in the leaf rcu_node structure's ->qsmask
      	field are now zero.
      
      5.	Each of the remaining CPUs go offline.  (The events in step
      	#4 and #5 can happen in any order as long as each CPU passes
      	through a quiescent state before going offline.)
      
      6.	When the last CPU goes offline, __rcu_offline_cpu() will invoke
      	rcu_preempt_offline_tasks(), which will move the task to the
      	root rcu_node structure, but without reporting a quiescent state
      	up the rcu_node hierarchy (and this failure to report a quiescent
      	state is the bug).
      
      	But because this leaf rcu_node structure's ->qsmask field is
	already zero and its ->blocked_tasks[] entries are all empty,
      	force_quiescent_state() will skip this rcu_node structure.
      
      	Therefore, grace periods are now hung.
      
      This patch abstracts some code out of rcu_read_unlock_special(),
      calling the result task_quiet() by analogy with cpu_quiet(), and
invokes task_quiet() from both rcu_read_unlock_special() and
      __rcu_offline_cpu().  Invoking task_quiet() from
      __rcu_offline_cpu() reports the quiescent state up the rcu_node
      hierarchy, fixing the bug.  This ends up requiring a separate
      lock_class_key per level of the rcu_node hierarchy, which this
      patch also provides.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <12589088301770-git-send-email->
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15. 11 November 2009 (2 commits)
• rcu: Rename dynticks_completed to completed_fqs · 4bcfe055
  Committed by Paul E. McKenney
      This field is used whether or not CONFIG_NO_HZ is set, so the
      old name of ->dynticks_completed is quite misleading.
      
Change to ->completed_fqs, given that it is the value that
force_quiescent_state() is trying to drive the ->completed field
away from.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <12578890423298-git-send-email->
Signed-off-by: Ingo Molnar <mingo@elte.hu>
• rcu: Remove inline from forward-referenced functions · dbe01350
  Committed by Paul E. McKenney
      Some variants of gcc are reputed to dislike forward references
      to functions declared "inline".  Remove the "inline" keyword
      from such functions.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <12578890422402-git-send-email->
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16. 10 November 2009 (2 commits)
• rcu: Fix synchronization for rcu_process_gp_end() uses of ->completed counter · d09b62df
  Committed by Paul E. McKenney
      Impose a clear locking design on the rcu_process_gp_end()
      function's use of the ->completed counter.  This is done by
      creating a ->completed field in the rcu_node structure, which
      can safely be accessed under the protection of that structure's
      lock.  Performance and scalability are maintained by using a
      form of double-checked locking, so that rcu_process_gp_end()
      only acquires the leaf rcu_node structure's ->lock if a grace
      period has recently ended.
      
      This fix reduces rcutorture failure rate by at least two orders
      of magnitude under heavy stress with force_quiescent_state()
      being invoked artificially often.  Without this fix,
      unsynchronized access to the ->completed field can cause
      rcu_process_gp_end() to advance callbacks whose grace period has
      not yet expired.  (Bad idea!)
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      Cc: <stable@kernel.org> # .32.x
      LKML-Reference: <12571987494069-git-send-email->
Signed-off-by: Ingo Molnar <mingo@elte.hu>
• rcu: Prepare for synchronization fixes: clean up for non-NO_HZ handling of ->completed counter · 281d150c
  Committed by Paul E. McKenney
      Impose a clear locking design on non-NO_HZ handling of the
      ->completed counter.  This increases the distance between the
      RCU and the CPU-hotplug mechanisms.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      Cc: <stable@kernel.org> # .32.x
      LKML-Reference: <12571987491353-git-send-email->
Signed-off-by: Ingo Molnar <mingo@elte.hu>
17. 02 November 2009 (1 commit)
• rcu: Fix long-grace-period race between forcing and initialization · 83f5b01f
  Committed by Paul E. McKenney
      Very long RCU read-side critical sections (50 milliseconds or
      so) can cause a race between force_quiescent_state() and
      rcu_start_gp() as follows on kernel builds with multi-level
      rcu_node hierarchies:
      
1.	CPU 0 calls force_quiescent_state(), sees that there is a
	grace period in progress, and acquires ->fqslock.
      
      2.	CPU 1 detects the end of the grace period, and so
      	cpu_quiet_msk_finish() sets rsp->completed to rsp->gpnum.
      	This operation is carried out under the root rnp->lock,
      	but CPU 0 has not yet acquired that lock.  Note that
      	rsp->signaled is still RCU_SAVE_DYNTICK from the last
      	grace period.
      
      3.	CPU 1 calls rcu_start_gp(), but no one wants a new grace
      	period, so it drops the root rnp->lock and returns.
      
      4.	CPU 0 acquires the root rnp->lock and picks up rsp->completed
      	and rsp->signaled, then drops rnp->lock.  It then enters the
      	RCU_SAVE_DYNTICK leg of the switch statement.
      
      5.	CPU 2 invokes call_rcu(), and now needs a new grace period.
      	It calls rcu_start_gp(), which acquires the root rnp->lock, sets
      	rsp->signaled to RCU_GP_INIT (too bad that CPU 0 is already in
      	the RCU_SAVE_DYNTICK leg of the switch statement!)  and starts
      	initializing the rcu_node hierarchy.  If there are multiple
      	levels to the hierarchy, it will drop the root rnp->lock and
      	initialize the lower levels of the hierarchy.
      
      6.	CPU 0 notes that rsp->completed has not changed, which permits
      	both CPU 2 and CPU 0 to try updating it concurrently.  If CPU 0's
      	update prevails, later calls to force_quiescent_state() can
      	count old quiescent states against the new grace period, which
      	can in turn result in premature ending of grace periods.
      
      	Not good.
      
      This patch adds an RCU_GP_IDLE state for rsp->signaled that is
      set initially at boot time and any time a grace period ends.
      This prevents CPU 0 from getting into the workings of
      force_quiescent_state() in step 4.  Additional locking and
      checks prevent the concurrent update of rsp->signaled in step 6.
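      The essence of the fix can be sketched as a toy C state machine (not the kernel code; the `toy_*` names and simplified switch are invented for illustration): once a grace period ends, ->signaled drops back to an idle state, so a belated force_quiescent_state() bails out instead of replaying the previous grace period's RCU_SAVE_DYNTICK state.

      ```c
      #include <stdio.h>

      /* Toy model (not kernel code) of the fix: ->signaled gains an
       * idle state, set at boot and whenever a grace period ends, so
       * force_quiescent_state() bails out rather than acting on stale
       * RCU_SAVE_DYNTICK state left from the previous grace period. */
      enum toy_signaled { TOY_GP_IDLE, TOY_GP_INIT, TOY_SAVE_DYNTICK };

      struct toy_rsp {
          enum toy_signaled signaled;
          unsigned long gpnum;      /* most recently started GP  */
          unsigned long completed;  /* most recently finished GP */
      };

      /* End of a grace period: record completion AND drop to idle. */
      static void toy_gp_end(struct toy_rsp *rsp)
      {
          rsp->completed = rsp->gpnum;
          rsp->signaled = TOY_GP_IDLE;   /* the crux of the fix */
      }

      /* Returns 1 if forcing proceeds, 0 if it (correctly) bails. */
      static int toy_force_quiescent_state(struct toy_rsp *rsp)
      {
          switch (rsp->signaled) {
          case TOY_GP_IDLE:
          case TOY_GP_INIT:
              return 0;   /* no GP in progress / still initializing */
          default:
              return 1;   /* safe to scan for quiescent states */
          }
      }

      int main(void)
      {
          struct toy_rsp rsp = { TOY_SAVE_DYNTICK, 1, 0 };

          toy_gp_end(&rsp);   /* CPU 1 ends the grace period (step 2) */
          /* CPU 0's belated forcing now sees GP_IDLE and bails
           * instead of entering the RCU_SAVE_DYNTICK leg (step 4). */
          printf("forced=%d\n", toy_force_quiescent_state(&rsp));
          return 0;
      }
      ```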
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <1256742889199-git-send-email->
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      83f5b01f
  18. 16 Oct 2009, 1 commit
    • P
      rcu: Fix TREE_PREEMPT_RCU CPU_HOTPLUG bad-luck hang · 237c80c5
      Committed by Paul E. McKenney
      If the following sequence of events occurs, then
      TREE_PREEMPT_RCU will hang waiting for a grace period to
      complete, eventually OOMing the system:
      
      o	A TREE_PREEMPT_RCU build of the kernel is booted on a system
      	with more than 64 physical CPUs present (32 on a 32-bit system).
      	Alternatively, a TREE_PREEMPT_RCU build of the kernel is booted
      	with RCU_FANOUT set to a sufficiently small value that the
      	physical CPUs populate two or more leaf rcu_node structures.
      
      o	A task is preempted in an RCU read-side critical section
      	while running on a CPU corresponding to a given leaf rcu_node
      	structure.
      
      o	All CPUs corresponding to this same leaf rcu_node structure
      	record quiescent states for the current grace period.
      
      o	All of these same CPUs go offline (hence the need for enough
      	physical CPUs to populate more than one leaf rcu_node structure).
      	This causes the preempted task to be moved to the root rcu_node
      	structure.
      
      At this point, there is nothing left to cause the quiescent
      state to be propagated up the rcu_node tree, so the current
      grace period never completes.
      
      The simplest fix, especially after considering the deadlock
      possibilities, is to detect this situation when the last CPU is
      offlined, and to set that CPU's ->qsmask bit in its leaf
      rcu_node structure.  This will cause the next invocation of
      force_quiescent_state() to end the grace period.
      
      Without this fix, this hang can be triggered in an hour or so on
      some machines with rcutorture and random CPU onlining/offlining.
      With this fix, these same machines pass a full 10 hours of this
      sort of abuse.
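      The fix can be sketched as a toy C fragment (not the kernel code; the `toy_*` names are invented for illustration): when the last CPU of a leaf goes offline while a preempted reader is still queued there, that CPU's ->qsmask bit is re-armed so a later force_quiescent_state() pass revisits the leaf.

      ```c
      #include <stdio.h>

      /* Toy model (not kernel code): a leaf rcu_node whose CPUs have
       * all reported quiescent states and then gone offline leaves
       * nothing to propagate the preempted reader's state upward.
       * The fix re-arms the departing CPU's ->qsmask bit so the leaf
       * is scanned again and the grace period can end. */
      struct toy_rnp {
          unsigned long qsmask;   /* CPUs still owing a QS       */
          int blocked_tasks;      /* readers preempted on leaf   */
      };

      static void toy_cpu_offline(struct toy_rnp *rnp,
                                  unsigned long cpu_bit, int last_cpu)
      {
          rnp->qsmask &= ~cpu_bit;
          /* The fix: keep the leaf visible to forcing passes. */
          if (last_cpu && rnp->blocked_tasks)
              rnp->qsmask |= cpu_bit;
      }

      int main(void)
      {
          struct toy_rnp leaf = { 0x1, 1 }; /* 1 CPU left, 1 blocked */

          toy_cpu_offline(&leaf, 0x1, 1);
          /* Nonzero qsmask: force_quiescent_state() still scans this
           * leaf instead of the grace period hanging forever. */
          printf("qsmask=%lu\n", leaf.qsmask);
          return 0;
      }
      ```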
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <20091015162614.GA19131@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      237c80c5
  19. 15 Oct 2009, 1 commit
    • P
      rcu: Prevent RCU IPI storms in presence of high call_rcu() load · 37c72e56
      Committed by Paul E. McKenney
      As the number of callbacks on a given CPU rises, invoke
      force_quiescent_state() only every blimit number of callbacks
      (defaults to 10,000), and even then only if no other CPU has
      invoked force_quiescent_state() in the meantime.
      
      This should fix the performance regression reported by Nick.
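      The throttling logic can be sketched in toy C form (not the kernel code; `TOY_BLIMIT`, `fqs_seq`, and the `toy_*` names are invented for illustration): only attempt forcing once per blimit newly queued callbacks, and skip even that attempt when a shared sequence number shows another CPU forced in the meantime.

      ```c
      #include <stdio.h>

      /* Toy model (not kernel code) of the throttling described above. */
      #define TOY_BLIMIT 10000

      static unsigned long fqs_seq;  /* bumped on every forcing */

      struct toy_cpu {
          unsigned long qlen;          /* callbacks queued here      */
          unsigned long qlen_last_fqs; /* qlen at last attempt       */
          unsigned long snap_seq;      /* fqs_seq seen at that point */
      };

      static int toy_should_force(struct toy_cpu *cp)
      {
          if (cp->qlen - cp->qlen_last_fqs < TOY_BLIMIT)
              return 0;                 /* not enough new callbacks */
          cp->qlen_last_fqs = cp->qlen;
          if (cp->snap_seq != fqs_seq) {
              cp->snap_seq = fqs_seq;   /* someone else forced already */
              return 0;
          }
          fqs_seq++;                    /* we force this time */
          cp->snap_seq = fqs_seq;
          return 1;
      }

      int main(void)
      {
          struct toy_cpu cpu = { 0, 0, 0 };
          int forced = 0;

          /* 25000 callbacks arrive; forcing fires only at 10000 and
           * 20000, not on every single enqueue. */
          for (unsigned long i = 0; i < 25000; i++) {
              cpu.qlen++;
              forced += toy_should_force(&cpu);
          }
          printf("forced=%d\n", forced);
          return 0;
      }
      ```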
      Reported-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      Cc: jens.axboe@oracle.com
      LKML-Reference: <12555405592133-git-send-email->
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      37c72e56
  20. 07 Oct 2009, 1 commit
    • P
      rcu: Make hot-unplugged CPU relinquish its own RCU callbacks · e74f4c45
      Committed by Paul E. McKenney
      The current interaction between RCU and CPU hotplug requires that
      RCU block in CPU notifiers waiting for callbacks to drain.
      
      This can be greatly simplified by having each CPU relinquish its
      own callbacks, and for both _rcu_barrier() and CPU_DEAD notifiers
      to adopt all callbacks that were previously relinquished.
      
      This change also eliminates the possibility of certain types of
      hangs due to the previous practice of waiting for callbacks to be
      	invoked from within CPU notifiers.  If you don't ever wait, you
      cannot hang.
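      The relinquish/adopt scheme can be sketched as a toy C fragment (not the kernel code; the `toy_*` names and the singly linked lists are invented for illustration): the dying CPU splices its callbacks onto a shared orphan list without waiting, and a surviving CPU later adopts the whole list.

      ```c
      #include <stdio.h>

      /* Toy model (not kernel code): a dying CPU moves its pending
       * callbacks to a shared "orphan" list instead of waiting for
       * them to drain; the CPU_DEAD path (or _rcu_barrier()) later
       * adopts the orphans onto a surviving CPU. */
      struct toy_cb {
          struct toy_cb *next;
          int id;
      };

      static struct toy_cb *orphans;   /* callbacks awaiting adoption */

      static void toy_relinquish(struct toy_cb **cpu_list)
      {
          while (*cpu_list) {          /* splice everything to orphans */
              struct toy_cb *cb = *cpu_list;
              *cpu_list = cb->next;
              cb->next = orphans;
              orphans = cb;
          }
      }

      static int toy_adopt(struct toy_cb **cpu_list)
      {
          int n = 0;
          while (orphans) {            /* take over the whole list */
              struct toy_cb *cb = orphans;
              orphans = cb->next;
              cb->next = *cpu_list;
              *cpu_list = cb;
              n++;
          }
          return n;
      }

      int main(void)
      {
          struct toy_cb a = { 0, 1 }, b = { &a, 2 }; /* b -> a queued */
          struct toy_cb *dying = &b, *survivor = 0;

          toy_relinquish(&dying);      /* dying CPU gives up its work */
          printf("adopted=%d dying_empty=%d\n",
                 toy_adopt(&survivor), dying == 0);
          return 0;
      }
      ```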
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: akpm@linux-foundation.org
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <1254890898456-git-send-email->
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e74f4c45