1. 06 May 2011 (15 commits)
• rcu: further lower priority in rcu_yield() · baa1ae0c
Committed by Paul E. McKenney
      Although rcu_yield() dropped from real-time to normal priority, there
      is always the possibility that the competing tasks have been niced.
      So nice to 19 in rcu_yield() to help ensure that other tasks have a
      better chance of running.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
• rcu: introduce kfree_rcu() · 9ab1544e
Committed by Lai Jiangshan
Many RCU callback functions just call kfree() on the base structure.
      These functions are trivial, but their size adds up, and furthermore
      when they are used in a kernel module, that module must invoke the
      high-latency rcu_barrier() function at module-unload time.
      
      The kfree_rcu() function introduced by this commit addresses this issue.
      Rather than encoding a function address in the embedded rcu_head
      structure, kfree_rcu() instead encodes the offset of the rcu_head
structure within the base structure.  Because kernel functions are
never located in the low-order 4096 bytes of kernel virtual memory,
offsets up to 4095 bytes can be accommodated.  If the offset is larger than
      4095 bytes, a compile-time error will be generated in __kfree_rcu().
      If this error is triggered, you can either fall back to use of call_rcu()
      or rearrange the structure to position the rcu_head structure into the
      first 4096 bytes.
      
      Note that the allowable offset might decrease in the future, for example,
      to allow something like kmem_cache_free_rcu().
      
      The new kfree_rcu() function can replace code as follows:
      
      	call_rcu(&p->rcu, simple_kfree_callback);
      
      where "simple_kfree_callback()" might be defined as follows:
      
      	void simple_kfree_callback(struct rcu_head *p)
      	{
      		struct foo *q = container_of(p, struct foo, rcu);
      
      		kfree(q);
      	}
      
      with the following:
      
	kfree_rcu(p, rcu);
      
      Note that the "rcu" is the name of a field in the structure being
      freed.  The reason for using this rather than passing in a pointer
      to the base structure is that the above approach allows better type
      checking.
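
A minimal sketch of the offset-encoding trick (macro names follow the
description above; the in-kernel implementation may differ in detail):

	/* Offsets below 4096 cannot collide with function addresses. */
	#define __is_kfree_rcu_offset(offset)	((offset) < 4096)

	#define kfree_rcu(ptr, rcu_head_field) \
		__kfree_rcu(&((ptr)->rcu_head_field), \
			    offsetof(typeof(*(ptr)), rcu_head_field))

	/* Smuggle the offset through the callback-function pointer. */
	#define __kfree_rcu(head, offset) \
		do { \
			BUILD_BUG_ON(!__is_kfree_rcu_offset(offset)); \
			call_rcu((head), (void (*)(struct rcu_head *))(unsigned long)(offset)); \
		} while (0)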
      
      This commit is based on earlier work by Lai Jiangshan and Manfred Spraul:
      
      Lai's V1 patch: http://lkml.org/lkml/2008/9/18/1
Manfred's patch: http://lkml.org/lkml/2009/1/2/115
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: David Howells <dhowells@redhat.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
• rcu: fix spelling · 6cc68793
Committed by Paul E. McKenney
      The "preemptible" spelling is preferable.  May as well fix it.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
• rcu: Switch to this_cpu() primitives · f0a07aea
Committed by Paul E. McKenney
      This removes a couple of lines from invoke_rcu_cpu_kthread(), improving
      readability.
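
A hedged before/after sketch of the sort of simplification the
this_cpu() primitives allow (the per-CPU variable name here is an
assumption):

	/* Before: look up this CPU's variable by hand. */
	per_cpu(rcu_cpu_has_work, smp_processor_id()) = 1;

	/* After: a single preemption-safe per-CPU write. */
	this_cpu_write(rcu_cpu_has_work, 1);
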
Reported-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
• rcu: Add forward-progress diagnostic for per-CPU kthreads · 5ece5bab
Committed by Paul E. McKenney
      Increment a per-CPU counter on each pass through rcu_cpu_kthread()'s
      service loop, and add it to the rcudata trace output.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
• rcu: add grace-period age and more kthread state to tracing · 15ba0ba8
Committed by Paul E. McKenney
      This commit adds the age in jiffies of the current grace period along
      with the duration in jiffies of the longest grace period since boot
      to the rcu/rcugp debugfs file.  It also adds an additional "O" state
      to kthread tracing to differentiate between the kthread waiting due to
      having nothing to do on the one hand and waiting due to being on the
      wrong CPU on the other hand.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
• rcu: make rcutorture version numbers available through debugfs · 4a298656
Committed by Paul E. McKenney
      It is not possible to accurately correlate rcutorture output with that
      of debugfs.  This patch therefore adds a debugfs file that prints out
      the rcutorture version number, permitting easy correlation.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
• rcu: add tracing for RCU's kthread run states. · d71df90e
Committed by Paul E. McKenney
      Add tracing to help debugging situations when RCU's kthreads are not
      running but are supposed to be.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
• rcu: put per-CPU kthread at non-RT priority during CPU hotplug operations · e3995a25
Committed by Paul E. McKenney
If you are doing CPU hotplug operations, it is best not to have
CPU-bound real-time tasks running on the outgoing CPU.  So this
commit makes the per-CPU kthreads run at non-realtime priority
during that time.
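
A hedged sketch of dropping a kthread to normal priority for the
duration of the hotplug operation (the task pointer t is
illustrative):

	struct sched_param sp = { .sched_priority = 0 };

	/* SCHED_NORMAL ignores sched_priority, taking the kthread
	 * out of the real-time scheduling classes. */
	sched_setscheduler_nocheck(t, SCHED_NORMAL, &sp);
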
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
• rcu: Force per-rcu_node kthreads off of the outgoing CPU · 0f962a5e
Committed by Paul E. McKenney
The scheduler has had some heartburn in the past when too many real-time
kthreads were affined to the outgoing CPU.  So, this commit lightens
the load by forcing the per-rcu_node and the boost kthreads off of the
outgoing CPU.  Note that RCU's per-CPU kthread remains on the outgoing
CPU until the bitter end, as it must in order to preserve correctness.
      
      Also avoid disabling hardirqs across calls to set_cpus_allowed_ptr(),
      given that this function can block.
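
A hedged sketch of rebinding a kthread away from the outgoing CPU
(the task pointer t and outgoing_cpu are illustrative); note that
set_cpus_allowed_ptr() can sleep, so hardirqs must be enabled:

	cpumask_var_t cm;

	if (!zalloc_cpumask_var(&cm, GFP_KERNEL))
		return;
	cpumask_copy(cm, cpu_online_mask);
	cpumask_clear_cpu(outgoing_cpu, cm);	/* avoid the dying CPU */
	if (!cpumask_empty(cm))
		set_cpus_allowed_ptr(t, cm);	/* may block */
	free_cpumask_var(cm);
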
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
• rcu: priority boosting for TREE_PREEMPT_RCU · 27f4d280
Committed by Paul E. McKenney
Add priority boosting for TREE_PREEMPT_RCU, similar to that for
TINY_PREEMPT_RCU.  This is enabled by the default-off RCU_BOOST
kernel configuration option.  The priority to which preempted
RCU readers are boosted is controlled by the RCU_BOOST_PRIO option
(defaulting to real-time priority 1), and the time to wait before
boosting the readers blocking a given grace period is controlled by
the RCU_BOOST_DELAY option (defaulting to 500 milliseconds).
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
• rcu: move TREE_RCU from softirq to kthread · a26ac245
Committed by Paul E. McKenney
If RCU priority boosting is to be meaningful, callback invocation must
be boosted in addition to preempted RCU readers.  Otherwise, in the
presence of CPU-bound real-time threads, the grace period ends, but
the callbacks don't get invoked.  If the callbacks don't get invoked,
the associated memory doesn't get freed, so the system is still
subject to OOM.
      
      But it is not reasonable to priority-boost RCU_SOFTIRQ, so this commit
      moves the callback invocations to a kthread, which can be boosted easily.
      
Also add comments and properly synchronize all accesses to
rcu_cpu_kthread_task, as suggested by Lai Jiangshan.
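
A hedged sketch of the kthread's service loop (the wait queue,
per-CPU flag, and rcu_kthread_do_work() helper are assumptions; the
real loop also handles hotplug and yielding):

	static DECLARE_WAIT_QUEUE_HEAD(rcu_cpu_wq);
	static DEFINE_PER_CPU(unsigned int, rcu_cpu_has_work);

	static int rcu_cpu_kthread(void *arg)
	{
		for (;;) {
			wait_event_interruptible(rcu_cpu_wq,
				__this_cpu_read(rcu_cpu_has_work));
			__this_cpu_write(rcu_cpu_has_work, 0);
			rcu_kthread_do_work();	/* invoke ready callbacks */
		}
		return 0;
	}
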
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
• rcu: merge TREE_PREEMPT_RCU blocked_tasks[] lists · 12f5f524
Committed by Paul E. McKenney
      Combine the current TREE_PREEMPT_RCU ->blocked_tasks[] lists in the
      rcu_node structure into a single ->blkd_tasks list with ->gp_tasks
      and ->exp_tasks tail pointers.  This is in preparation for RCU priority
      boosting, which will add a third dimension to the combinatorial explosion
      in the ->blocked_tasks[] case, but simply a third pointer in the new
      ->blkd_tasks case.
      
Also update the documentation to reflect the blocked_tasks[] merge.
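
A hedged sketch of the resulting rcu_node fields (field names as
above; everything else elided):

	struct rcu_node {
		/* ... */
		struct list_head blkd_tasks;	/* all blocked readers */
		struct list_head *gp_tasks;	/* first task blocking the
						   current grace period */
		struct list_head *exp_tasks;	/* first task blocking the
						   current expedited GP */
		/* ... */
	};
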
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
• rcu: Decrease memory-barrier usage based on semi-formal proof · e59fb312
Committed by Paul E. McKenney
Commit d09b62df fixed grace-period synchronization, but left in place
some smp_mb() invocations in rcu_process_callbacks() that are no longer
needed; sheer paranoia had prevented their removal.  This commit removes
them and provides a proof of correctness in their absence.  It also adds
      a memory barrier to rcu_report_qs_rsp() immediately before the update to
      rsp->completed in order to handle the theoretical possibility that the
      compiler or CPU might move massive quantities of code into a lock-based
      critical section.  This also proves that the sheer paranoia was not
      entirely unjustified, at least from a theoretical point of view.
      
      In addition, the old dyntick-idle synchronization depended on the fact
      that grace periods were many milliseconds in duration, so that it could
      be assumed that no dyntick-idle CPU could reorder a memory reference
      across an entire grace period.  Unfortunately for this design, the
      addition of expedited grace periods breaks this assumption, which has
      the unfortunate side-effect of requiring atomic operations in the
      functions that track dyntick-idle state for RCU.  (There is some hope
      that the algorithms used in user-level RCU might be applied here, but
      some work is required to handle the NMIs that user-space applications
      can happily ignore.  For the short term, better safe than sorry.)
      
      This proof assumes that neither compiler nor CPU will allow a lock
      acquisition and release to be reordered, as doing so can result in
      deadlock.  The proof is as follows:
      
      1.	A given CPU declares a quiescent state under the protection of
      	its leaf rcu_node's lock.
      
      2.	If there is more than one level of rcu_node hierarchy, the
      	last CPU to declare a quiescent state will also acquire the
->lock of the next rcu_node up in the hierarchy, but only
      	after releasing the lower level's lock.  The acquisition of this
      	lock clearly cannot occur prior to the acquisition of the leaf
      	node's lock.
      
      3.	Step 2 repeats until we reach the root rcu_node structure.
      	Please note again that only one lock is held at a time through
      	this process.  The acquisition of the root rcu_node's ->lock
      	must occur after the release of that of the leaf rcu_node.
      
      4.	At this point, we set the ->completed field in the rcu_state
      	structure in rcu_report_qs_rsp().  However, if the rcu_node
      	hierarchy contains only one rcu_node, then in theory the code
      	preceding the quiescent state could leak into the critical
      	section.  We therefore precede the update of ->completed with a
      	memory barrier.  All CPUs will therefore agree that any updates
      	preceding any report of a quiescent state will have happened
      	before the update of ->completed.
      
      5.	Regardless of whether a new grace period is needed, rcu_start_gp()
      	will propagate the new value of ->completed to all of the leaf
      	rcu_node structures, under the protection of each rcu_node's ->lock.
      	If a new grace period is needed immediately, this propagation
      	will occur in the same critical section that ->completed was
      	set in, but courtesy of the memory barrier in #4 above, is still
      	seen to follow any pre-quiescent-state activity.
      
      6.	When a given CPU invokes __rcu_process_gp_end(), it becomes
      	aware of the end of the old grace period and therefore makes
      	any RCU callbacks that were waiting on that grace period eligible
      	for invocation.
      
      	If this CPU is the same one that detected the end of the grace
      	period, and if there is but a single rcu_node in the hierarchy,
      	we will still be in the single critical section.  In this case,
      	the memory barrier in step #4 guarantees that all callbacks will
      	be seen to execute after each CPU's quiescent state.
      
      	On the other hand, if this is a different CPU, it will acquire
      	the leaf rcu_node's ->lock, and will again be serialized after
      	each CPU's quiescent state for the old grace period.
      
      On the strength of this proof, this commit therefore removes the memory
      barriers from rcu_process_callbacks() and adds one to rcu_report_qs_rsp().
      The effect is to reduce the number of memory barriers by one and to
      reduce the frequency of execution from about once per scheduling tick
      per CPU to once per grace period.
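
A hedged sketch of the barrier placement that steps 4 and 5 rely on
(surrounding code elided):

	/* In rcu_report_qs_rsp(), with the root rcu_node ->lock held: */
	smp_mb(); /* Order pre-quiescent-state accesses before ->completed. */
	rsp->completed = rsp->gpnum;
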
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
• rcu: Remove conditional compilation for RCU CPU stall warnings · a00e0d71
Committed by Paul E. McKenney
      The RCU CPU stall warnings can now be controlled using the
      rcu_cpu_stall_suppress boot-time parameter or via the same parameter
      from sysfs.  There is therefore no longer any reason to have
      kernel config parameters for this feature.  This commit therefore
      removes the RCU_CPU_STALL_DETECTOR and RCU_CPU_STALL_DETECTOR_RUNNABLE
      kernel config parameters.  The RCU_CPU_STALL_TIMEOUT parameter remains
      to allow the timeout to be tuned and the RCU_CPU_STALL_VERBOSE parameter
      remains to allow task-stall information to be suppressed if desired.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
2. 18 December 2010 (5 commits)
• rcu: reduce __call_rcu()-induced contention on rcu_node structures · b52573d2
Committed by Paul E. McKenney
      When the current __call_rcu() function was written, the expedited
      APIs did not exist.  The __call_rcu() implementation therefore went
      to great lengths to detect the end of old grace periods and to start
      new ones, all in the name of reducing grace-period latency.  Now the
      expedited APIs do exist, and the usage of __call_rcu() has increased
      considerably.  This commit therefore causes __call_rcu() to avoid
      worrying about grace periods unless there are a large number of
      RCU callbacks stacked up on the current CPU.
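
A hedged sketch of the resulting fast path (the exact threshold
expression is an assumption):

	/* Worry about grace periods only when many callbacks are queued. */
	if (unlikely(rdp->qlen > qhimark)) {
		/* Check for a completed grace period; maybe start one. */
	}
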
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
• rcu: limit rcu_node leaf-level fanout · 0209f649
Committed by Paul E. McKenney
      Some recent benchmarks have indicated possible lock contention on the
      leaf-level rcu_node locks.  This commit therefore limits the number of
      CPUs per leaf-level rcu_node structure to 16, in other words, there
      can be at most 16 rcu_data structures fanning into a given rcu_node
      structure.  Prior to this, the limit was 32 on 32-bit systems and 64 on
      64-bit systems.
      
      Note that the fanout of non-leaf rcu_node structures is unchanged.  The
      organization of accesses to the rcu_node tree is such that references
      to non-leaf rcu_node structures are much less frequent than to the
      leaf structures.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
• rcu: fine-tune grace-period begin/end checks · 121dfc4b
Committed by Paul E. McKenney
      Use the CPU's bit in rnp->qsmask to determine whether or not the CPU
      should try to report a quiescent state.  Handle overflow in the check
      for rdp->gpnum having fallen behind.
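
A hedged sketch of a wrap-safe comparison for grace-period numbers
(RCU provides a helper of this shape; the resync helper is
hypothetical):

	/* True if a is "less than" b, modulo ULONG_MAX wraparound. */
	#define ULONG_CMP_LT(a, b)	(ULONG_MAX / 2 < (a) - (b))

	if (ULONG_CMP_LT(rdp->gpnum, rnp->gpnum))
		resync_gp_numbers(rdp);	/* hypothetical helper */
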
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
• rcu: Keep gpnum and completed fields synchronized · 5ff8e6f0
Committed by Frederic Weisbecker
When a CPU that was in an extended quiescent state wakes
up and catches up with grace periods that remote CPUs
completed on its behalf, we update the completed field
but not gpnum, which keeps the stale value of an earlier
grace-period ID.

Later, note_new_gpnum() will interpret the difference between
the local CPU's and the node's grace-period IDs as a new grace
period to handle, and will then start hunting for quiescent states.

But if every grace period has already been completed, this
interpretation is broken, and we get stuck in clusters of
spurious softirqs because rcu_report_qs_rdp() drives this
broken state into an infinite loop.
      
The solution, as suggested by Lai Jiangshan, is to ensure that
the gpnum and completed fields are kept synchronized when we catch
up with grace periods that other CPUs have completed on our behalf.
This way we won't note spurious new grace periods.
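
A hedged sketch of the fix (its placement and exact guard are
assumptions):

	rdp->completed = rnp->completed;

	/* Keep gpnum in step so a stale value cannot masquerade as
	 * a new grace period. */
	if (rdp->gpnum != rdp->completed)
		rdp->gpnum = rdp->completed;
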
Suggested-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
• rcu: Stop chasing QS if another CPU did it for us · 20377f32
Committed by Frederic Weisbecker
When a CPU is idle and other CPUs handle its extended
quiescent state to complete grace periods on its behalf,
it catches up with the completed grace-period numbers
when it wakes up.

At that point there might be no grace period left to
complete, yet the woken CPU keeps its stale qs_pending
value and continues to chase quiescent states even though
that is no longer needed.

This results in clusters of spurious softirqs until a new
real grace period is started, because if we keep chasing
quiescent states after every grace period has completed,
rcu_report_qs_rdp() is puzzled and drives that state into
an infinite loop.
      
As suggested by Lai Jiangshan, just reset qs_pending if
someone has completed every grace period on our behalf.
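
A hedged sketch of the reset (the guard condition is an assumption):

	/* Others completed all outstanding grace periods for us while
	 * we were idle, so there is no quiescent state left to chase. */
	if (rdp->completed == rdp->gpnum)
		rdp->qs_pending = 0;
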
Suggested-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
3. 17 December 2010 (1 commit)
4. 30 November 2010 (2 commits)
5. 08 October 2010 (1 commit)
6. 24 September 2010 (1 commit)
• rcu: Add tracing data to support queueing models · 269dcc1c
Committed by Paul E. McKenney
      The current tracing data is not sufficient to deduce the average time
      that a callback spends waiting for a grace period to end.  Add three
      per-CPU counters recording the number of callbacks invoked (ci), the
      number of callbacks orphaned (co), and the number of callbacks adopted
      (ca).  Given the existing callback queue length (ql), the average wait
time in the absence of CPU hotplug operations is ql/ci.  The units of wait
      time will be in terms of the duration over which ci was measured.
      
      In the presence of CPU hotplug operations, there is room for argument,
      but ql/(ci-co+ca) won't steer you too far wrong.
      
      Also fixes a typo called out by Lucas De Marchi <lucas.de.marchi@gmail.com>.
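
A hedged worked example with made-up numbers: if a 10-second trace
interval shows ql = 1000 callbacks queued and ci = 5000 callbacks
invoked, the average wait is ql/ci = 0.2 measurement intervals, or
about 2 seconds.
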
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
7. 21 August 2010 (2 commits)
8. 20 August 2010 (4 commits)
9. 15 June 2010 (1 commit)
• tree/tiny rcu: Add debug RCU head objects · 551d55a9
Committed by Mathieu Desnoyers
Helps find racy users of call_rcu(); such races result in hangs
because list entries are overwritten and/or skipped.
      
      Changelog since v4:
- Bisectability is now OK
      - Now generate a WARN_ON_ONCE() for non-initialized rcu_head passed to
        call_rcu(). Statically initialized objects are detected with
        object_is_static().
      - Rename rcu_head_init_on_stack to init_rcu_head_on_stack.
      - Remove init_rcu_head() completely.
      
      Changelog since v3:
      - Include comments from Lai Jiangshan
      
      This new patch version is based on the debugobjects with the newly introduced
      "active state" tracker.
      
      Non-initialized entries are all considered as "statically initialized". An
      activation fixup (triggered by call_rcu()) takes care of performing the debug
      object initialization without issuing any warning. Since we cannot increase the
      size of struct rcu_head, I don't see much room to put an identifier for
      statically initialized rcu_head structures. So for now, we have to live without
      "activation without explicit init" detection. But the main purpose of this debug
option is to detect double-activations (double call_rcu() use of an rcu_head
      before the callback is executed), which is correctly addressed here.
      
      This also detects potential internal RCU callback corruption, which would cause
      the callbacks to be executed twice.
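
A hedged usage sketch of the on-stack annotations (the waiter
structure and callback are illustrative):

	struct gp_waiter {
		struct rcu_head head;
		struct completion done;
	};

	static void gp_waiter_cb(struct rcu_head *head)
	{
		struct gp_waiter *w = container_of(head, struct gp_waiter, head);

		complete(&w->done);
	}

	static void wait_one_grace_period(void)
	{
		struct gp_waiter w;

		init_completion(&w.done);
		init_rcu_head_on_stack(&w.head); /* debugobjects: on-stack */
		call_rcu(&w.head, gp_waiter_cb);
		wait_for_completion(&w.done);	/* keep the frame alive */
		destroy_rcu_head_on_stack(&w.head);
	}
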
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      CC: David S. Miller <davem@davemloft.net>
      CC: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      CC: akpm@linux-foundation.org
      CC: mingo@elte.hu
      CC: laijs@cn.fujitsu.com
      CC: dipankar@in.ibm.com
      CC: josh@joshtriplett.org
      CC: dvhltc@us.ibm.com
      CC: niv@us.ibm.com
      CC: tglx@linutronix.de
      CC: peterz@infradead.org
      CC: rostedt@goodmis.org
      CC: Valdis.Kletnieks@vt.edu
      CC: dhowells@redhat.com
      CC: eric.dumazet@gmail.com
      CC: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Lai Jiangshan <laijs@cn.fujitsu.com>
10. 12 May 2010 (1 commit)
11. 11 May 2010 (7 commits)
• rcu: reduce the number of spurious RCU_SOFTIRQ invocations · d21670ac
Committed by Paul E. McKenney
Lai Jiangshan noted that up to 10% of RCU_SOFTIRQ invocations are
spurious, and traced this down to the fact that the current
grace-period machinery will uselessly raise RCU_SOFTIRQ when a given
CPU needs to go through a quiescent state, but has not yet done so.
In this situation, there might well be nothing that RCU_SOFTIRQ can
do, and the overhead can be worth worrying about in the ksoftirqd
case.  This patch therefore avoids raising RCU_SOFTIRQ in this
situation.
      
      Changes since v1 (http://lkml.org/lkml/2010/3/30/122 from Lai Jiangshan):
      
      o	Omit the rcu_qs_pending() prechecks, as they aren't that
      	much less expensive than the quiescent-state checks.
      
      o	Merge with the set_need_resched() patch that reduces IPIs.
      
      o	Add the new n_rp_report_qs field to the rcu_pending tracing output.
      
      o	Update the tracing documentation accordingly.
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
• rcu: permit discontiguous cpu_possible_mask CPU numbering · 4a90a068
Committed by Paul E. McKenney
      TREE_RCU assumes that CPU numbering is contiguous, but some users need
      large holes in the numbering to better map to hardware layout.  This patch
      makes TREE_RCU (and TREE_PREEMPT_RCU) tolerate large holes in the CPU
      numbering.  However, NR_CPUS must still be greater than the largest
      CPU number.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
• rcu: improve RCU CPU stall-warning messages · 4300aa64
Committed by Paul E. McKenney
      The existing RCU CPU stall-warning messages can be confusing, especially
      in the case where one CPU detects a single other stalled CPU.  In addition,
      the console messages did not say which flavor of RCU detected the stall,
      which can make it difficult to work out exactly what is causing the stall.
      This commit improves these messages.
Requested-by: Dhaval Giani <dhaval.giani@gmail.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
• rcu: print boot-time console messages if RCU configs out of ordinary · 26845c28
Committed by Paul E. McKenney
      Print boot-time messages if tracing is enabled, if fanout is set
      to non-default values, if exact fanout is specified, if accelerated
      dyntick-idle grace periods have been enabled, if RCU-lockdep is enabled,
      if rcutorture has been boot-time enabled, if the CPU stall detector has
      been disabled, or if four-level hierarchy has been enabled.
      
      This is all for TREE_RCU and TREE_PREEMPT_RCU.  TINY_RCU will be handled
      separately, if at all.
Suggested-by: Josh Triplett <josh@joshtriplett.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
• rcu: disable CPU stall warnings upon panic · c68de209
Committed by Paul E. McKenney
      The current RCU CPU stall warnings remain enabled even after a panic
      occurs, which some people have found to be a bit counterproductive.
      This patch therefore uses a notifier to disable stall warnings once a
      panic occurs.
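
A hedged sketch of the notifier-based approach (the suppress flag's
name is an assumption):

	static int rcu_panic(struct notifier_block *this,
			     unsigned long ev, void *ptr)
	{
		rcu_cpu_stall_suppress = 1;	/* flag name assumed */
		return NOTIFY_DONE;
	}

	static struct notifier_block rcu_panic_block = {
		.notifier_call = rcu_panic,
	};

	static int __init check_cpu_stall_init(void)
	{
		atomic_notifier_chain_register(&panic_notifier_list,
					       &rcu_panic_block);
		return 0;
	}
	early_initcall(check_cpu_stall_init);
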
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
• rcu: slim down rcutiny by removing rcu_scheduler_active and friends · bbad9379
Committed by Paul E. McKenney
      TINY_RCU does not need rcu_scheduler_active unless CONFIG_DEBUG_LOCK_ALLOC.
      So conditionally compile rcu_scheduler_active in order to slim down
      rcutiny a bit more.  Also gets rid of an EXPORT_SYMBOL_GPL, which is
      responsible for most of the slimming.
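
A hedged sketch of the conditional compilation (variable and export
as described above):

	#ifdef CONFIG_DEBUG_LOCK_ALLOC
	int rcu_scheduler_active __read_mostly;
	EXPORT_SYMBOL_GPL(rcu_scheduler_active);
	#endif /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */
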
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
• rcu: refactor RCU's context-switch handling · 25502a6c
Committed by Paul E. McKenney
      The addition of preemptible RCU to treercu resulted in a bit of
      confusion and inefficiency surrounding the handling of context switches
      for RCU-sched and for RCU-preempt.  For RCU-sched, a context switch
      is a quiescent state, pure and simple, just like it always has been.
      For RCU-preempt, a context switch is in no way a quiescent state, but
      special handling is required when a task blocks in an RCU read-side
      critical section.
      
However, the callout from the scheduler and the outer loop in ksoftirqd
still call something named rcu_sched_qs(), whose name is no longer
accurate.  Furthermore, when rcu_check_callbacks() notes an RCU-sched
      quiescent state, it ends up unnecessarily (though harmlessly, aside
      from the performance hit) enqueuing the current task if it happens to
      be running in an RCU-preempt read-side critical section.  This not only
      increases the maximum latency of scheduler_tick(), it also needlessly
      increases the overhead of the next outermost rcu_read_unlock() invocation.
      
      This patch addresses this situation by separating the notion of RCU's
      context-switch handling from that of RCU-sched's quiescent states.
      The context-switch handling is covered by rcu_note_context_switch() in
      general and by rcu_preempt_note_context_switch() for preemptible RCU.
      This permits rcu_sched_qs() to handle quiescent states and only quiescent
      states.  It also reduces the maximum latency of scheduler_tick(), though
      probably by much less than a microsecond.  Finally, it means that tasks
      within preemptible-RCU read-side critical sections avoid incurring the
      overhead of queuing unless there really is a context switch.
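
A hedged sketch of the resulting split, consistent with the function
names above (bodies elided):

	void rcu_note_context_switch(int cpu)
	{
		rcu_sched_qs(cpu);	/* context switch is an RCU-sched QS */
		rcu_preempt_note_context_switch(cpu);
					/* handle preempted RCU readers */
	}
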
Suggested-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Acked-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <peterz@infradead.org>