1. 18 Dec 2010, 5 commits
    • rcu: reduce __call_rcu()-induced contention on rcu_node structures · b52573d2
      Committed by Paul E. McKenney
      When the current __call_rcu() function was written, the expedited
      APIs did not exist.  The __call_rcu() implementation therefore went
      to great lengths to detect the end of old grace periods and to start
      new ones, all in the name of reducing grace-period latency.  Now the
      expedited APIs do exist, and the usage of __call_rcu() has increased
      considerably.  This commit therefore causes __call_rcu() to avoid
      worrying about grace periods unless there are a large number of
      RCU callbacks stacked up on the current CPU.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
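      In sketch form (field and threshold names here are illustrative stand-ins,
      not the exact ones from the commit), the idea is a high-water mark on the
      per-CPU callback count:

          /* Sketch only, with invented names: grace-period bookkeeping in
           * __call_rcu() is skipped until this CPU's queue grows large. */
          struct rcu_data_sk {
                  unsigned long qlen;            /* callbacks queued on this CPU */
                  unsigned long qlen_last_check; /* qlen at last expensive check */
          };

          #define QHIMARK 10000                  /* hypothetical high-water mark */

          static void call_rcu_enqueue(struct rcu_data_sk *rdp)
          {
                  rdp->qlen++;
                  if (rdp->qlen > rdp->qlen_last_check + QHIMARK) {
                          /* Rare path: worry about ending/starting grace periods. */
                          rdp->qlen_last_check = rdp->qlen;
                  }
                  /* Common path: just queue the callback and return. */
          }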
    • rcu: limit rcu_node leaf-level fanout · 0209f649
      Committed by Paul E. McKenney
      Some recent benchmarks have indicated possible lock contention on the
      leaf-level rcu_node locks.  This commit therefore limits the number of
      CPUs per leaf-level rcu_node structure to 16, in other words, there
      can be at most 16 rcu_data structures fanning into a given rcu_node
      structure.  Prior to this, the limit was 32 on 32-bit systems and 64 on
      64-bit systems.
      
      Note that the fanout of non-leaf rcu_node structures is unchanged.  The
      organization of accesses to the rcu_node tree is such that references
      to non-leaf rcu_node structures are much less frequent than to the
      leaf structures.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
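      Such a cap is naturally a build-time clamp; a sketch assuming a
      CONFIG_RCU_FANOUT-style knob (macro names illustrative):

          /* Clamp only the leaf level; interior nodes keep the full fanout. */
          #if CONFIG_RCU_FANOUT > 16
          #define RCU_FANOUT_LEAF 16
          #else
          #define RCU_FANOUT_LEAF CONFIG_RCU_FANOUT
          #endif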
    • rcu: fine-tune grace-period begin/end checks · 121dfc4b
      Committed by Paul E. McKenney
      Use the CPU's bit in rnp->qsmask to determine whether or not the CPU
      should try to report a quiescent state.  Handle overflow in the check
      for rdp->gpnum having fallen behind.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
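      A sketch of the two checks, assuming the usual rcu_node/rcu_data field
      names and the wraparound-safe comparison introduced by the "Stop
      overflowing signed integers" commit further down this log:

          #include <limits.h>

          /* Modular comparison: a < b even if the counters have wrapped. */
          #define ULONG_CMP_LT(a, b) (ULONG_MAX / 2 < (a) - (b))

          struct rcu_node_sk { unsigned long qsmask; };
          struct rcu_data_sk { unsigned long grpmask, gpnum, completed; };

          static int should_try_report_qs(struct rcu_node_sk *rnp,
                                          struct rcu_data_sk *rdp)
          {
                  /* Use the CPU's bit in ->qsmask: if clear, nothing to report. */
                  return (rnp->qsmask & rdp->grpmask) != 0;
          }

          static void catch_up_gpnum(struct rcu_data_sk *rdp)
          {
                  /* Wraparound-safe check that ->gpnum fell behind ->completed. */
                  if (ULONG_CMP_LT(rdp->gpnum, rdp->completed))
                          rdp->gpnum = rdp->completed;
          }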
    • rcu: Keep gpnum and completed fields synchronized · 5ff8e6f0
      Committed by Frederic Weisbecker
      When a CPU that was in an extended quiescent state wakes
      up and catches up with grace periods that remote CPUs
      completed on its behalf, we update the completed field
      but not gpnum, which keeps the stale value of an earlier
      grace-period ID.
      
      Later, note_new_gpnum() will interpret the difference between
      the local CPU's and the node's grace-period IDs as a new grace
      period to handle, and will start hunting for a quiescent state.
      
      But if every grace period has already completed, this
      interpretation is wrong, and we get stuck in clusters of
      spurious softirqs because rcu_report_qs_rdp() drives this
      broken state into an infinite loop.
      
      The solution, suggested by Lai Jiangshan, is to keep the
      gpnum and completed fields synchronized when we catch up with
      grace periods completed on our behalf by other CPUs. This
      way we won't note spurious new grace periods.
      Suggested-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
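      The fix itself is essentially a one-line resynchronization in the
      grace-period-end catch-up path; a sketch using the field names from
      the log (struct layouts invented for self-containment):

          struct rcu_node_sk { unsigned long completed; };
          struct rcu_data_sk { unsigned long gpnum, completed; };

          static void catch_up_with_completed(struct rcu_node_sk *rnp,
                                              struct rcu_data_sk *rdp)
          {
                  rdp->completed = rnp->completed;
                  /* Drag gpnum forward too, so note_new_gpnum() sees no
                   * phantom new grace period.  (The wraparound-safe form of
                   * this comparison is the subject of the "fine-tune
                   * grace-period begin/end checks" commit above.) */
                  if (rdp->completed > rdp->gpnum)
                          rdp->gpnum = rdp->completed;
          }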
    • rcu: Stop chasing QS if another CPU did it for us · 20377f32
      Committed by Frederic Weisbecker
      When a CPU is idle and other CPUs handled its extended
      quiescent state to complete grace periods on its behalf,
      it catches up with the completed grace-period numbers
      when it wakes up.
      
      But at that point there may be no more grace periods to
      complete; the woken CPU nevertheless keeps its stale
      qs_pending value and continues to chase quiescent states
      even though that is no longer needed.
      
      This results in clusters of spurious softirqs until a new
      real grace period starts, because if we keep chasing
      quiescent states after every grace period has completed,
      rcu_report_qs_rdp() is puzzled and drives that state into
      an infinite loop.
      
      As suggested by Lai Jiangshan, just reset qs_pending if
      someone completed every grace period on our behalf.
      Suggested-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
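      A sketch of the reset, in the same catch-up path as the previous fix;
      the exact guard condition is paraphrased from the log, so treat it as
      illustrative:

          struct rcu_node_sk { unsigned long gpnum, completed; };
          struct rcu_data_sk { unsigned long completed; int qs_pending; };

          static void stop_chasing_if_done(struct rcu_node_sk *rnp,
                                           struct rcu_data_sk *rdp)
          {
                  rdp->completed = rnp->completed;
                  if (rnp->completed == rnp->gpnum) /* no GP still in flight */
                          rdp->qs_pending = 0;      /* nothing left to chase */
          }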
  2. 17 Dec 2010, 1 commit
  3. 30 Nov 2010, 2 commits
  4. 08 Oct 2010, 1 commit
  5. 24 Sep 2010, 1 commit
    • rcu: Add tracing data to support queueing models · 269dcc1c
      Committed by Paul E. McKenney
      The current tracing data is not sufficient to deduce the average time
      that a callback spends waiting for a grace period to end.  Add three
      per-CPU counters recording the number of callbacks invoked (ci), the
      number of callbacks orphaned (co), and the number of callbacks adopted
      (ca).  Given the existing callback queue length (ql), the average wait
      time in the absence of CPU hotplug operations is ql/ci.  The units of
      wait time are the duration over which ci was measured.
      
      In the presence of CPU hotplug operations, there is room for argument,
      but ql/(ci-co+ca) won't steer you too far wrong.
      
      Also fixes a typo called out by Lucas De Marchi <lucas.de.marchi@gmail.com>.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
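      As a worked example of these formulas (all numbers invented for
      illustration), a small standalone program:

          #include <stdio.h>

          /* Hypothetical counter snapshots over a 10-second tracing window. */
          int main(void)
          {
                  double ql = 1000.0;   /* callbacks currently queued */
                  double ci = 5000.0;   /* callbacks invoked in the window */
                  double co = 40.0;     /* callbacks orphaned (CPU offlined) */
                  double ca = 60.0;     /* callbacks adopted from offlined CPUs */
                  double window = 10.0; /* seconds over which ci was measured */

                  /* No-hotplug estimate: ql/ci measurement windows of waiting. */
                  printf("avg wait (no hotplug): %.2f s\n", ql / ci * window);
                  /* Hotplug-tolerant estimate from the log's ql/(ci-co+ca). */
                  printf("avg wait (hotplug):    %.2f s\n",
                         ql / (ci - co + ca) * window);
                  return 0;
          }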
  6. 21 Aug 2010, 2 commits
  7. 20 Aug 2010, 4 commits
  8. 15 Jun 2010, 1 commit
    • tree/tiny rcu: Add debug RCU head objects · 551d55a9
      Committed by Mathieu Desnoyers
      Helps find racy users of call_rcu(), which result in hangs because list
      entries are overwritten and/or skipped.
      
      Changelog since v4:
      - Bisectability is now OK
      - Now generate a WARN_ON_ONCE() for non-initialized rcu_head passed to
        call_rcu(). Statically initialized objects are detected with
        object_is_static().
      - Rename rcu_head_init_on_stack to init_rcu_head_on_stack.
      - Remove init_rcu_head() completely.
      
      Changelog since v3:
      - Include comments from Lai Jiangshan
      
      This new patch version is based on the debugobjects with the newly introduced
      "active state" tracker.
      
      Non-initialized entries are all considered as "statically initialized". An
      activation fixup (triggered by call_rcu()) takes care of performing the debug
      object initialization without issuing any warning. Since we cannot increase the
      size of struct rcu_head, I don't see much room to put an identifier for
      statically initialized rcu_head structures. So for now, we have to live without
      "activation without explicit init" detection. But the main purpose of this debug
      option is to detect double-activations (double call_rcu() use of a rcu_head
      before the callback is executed), which is correctly addressed here.
      
      This also detects potential internal RCU callback corruption, which would cause
      the callbacks to be executed twice.
      Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      CC: David S. Miller <davem@davemloft.net>
      CC: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      CC: akpm@linux-foundation.org
      CC: mingo@elte.hu
      CC: laijs@cn.fujitsu.com
      CC: dipankar@in.ibm.com
      CC: josh@joshtriplett.org
      CC: dvhltc@us.ibm.com
      CC: niv@us.ibm.com
      CC: tglx@linutronix.de
      CC: peterz@infradead.org
      CC: rostedt@goodmis.org
      CC: Valdis.Kletnieks@vt.edu
      CC: dhowells@redhat.com
      CC: eric.dumazet@gmail.com
      CC: Alexey Dobriyan <adobriyan@gmail.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Lai Jiangshan <laijs@cn.fujitsu.com>
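      A usage sketch for the on-stack case; init_rcu_head_on_stack() is the
      name this changelog settles on, while its destroy counterpart and the
      callback are assumed for illustration:

          /* Sketch: an on-stack rcu_head must be registered with debugobjects
           * so it is not mistaken for a statically initialized one. */
          static void my_rcu_callback(struct rcu_head *head) { /* ... */ }

          void wait_for_grace_period_sketch(void)
          {
                  struct rcu_head head;

                  init_rcu_head_on_stack(&head);
                  call_rcu(&head, my_rcu_callback);
                  /* A second call_rcu(&head, ...) before my_rcu_callback runs
                   * is exactly the double-activation this option catches. */
                  /* ... block until the callback has run ... */
                  destroy_rcu_head_on_stack(&head);
          }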
  9. 12 May 2010, 1 commit
  10. 11 May 2010, 10 commits
    • rcu: reduce the number of spurious RCU_SOFTIRQ invocations · d21670ac
      Committed by Paul E. McKenney
      Lai Jiangshan noted that up to 10% of the RCU_SOFTIRQ are spurious, and
      traced this down to the fact that the current grace-period machinery
      will uselessly raise RCU_SOFTIRQ when a given CPU needs to go through
      a quiescent state, but has not yet done so.  In this situation, there
      might well be nothing that RCU_SOFTIRQ can do, and the overhead can be
      worth worrying about in the ksoftirqd case.  This patch therefore avoids
      raising RCU_SOFTIRQ in this situation.
      
      Changes since v1 (http://lkml.org/lkml/2010/3/30/122 from Lai Jiangshan):
      
      o	Omit the rcu_qs_pending() prechecks, as they aren't that
      	much less expensive than the quiescent-state checks.
      
      o	Merge with the set_need_resched() patch that reduces IPIs.
      
      o	Add the new n_rp_report_qs field to the rcu_pending tracing output.
      
      o	Update the tracing documentation accordingly.
      Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
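      The shape of the fix is a more honest "pending" test; a sketch with
      illustrative field names (cbs_ready in particular is invented):

          struct rcu_data_sk { int qs_pending, passed_quiesc, cbs_ready; };

          /* Sketch: raise RCU_SOFTIRQ only when the handler has real work. */
          static int rcu_pending_sk(struct rcu_data_sk *rdp)
          {
                  /* Owing a quiescent state is not enough: until this CPU has
                   * actually passed through one, the softirq can do nothing. */
                  if (rdp->qs_pending && rdp->passed_quiesc)
                          return 1;     /* a QS is ready to be reported */
                  if (rdp->cbs_ready)
                          return 1;     /* callbacks are ready to invoke */
                  return 0;             /* otherwise: no spurious softirq */
          }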
    • rcu: permit discontiguous cpu_possible_mask CPU numbering · 4a90a068
      Committed by Paul E. McKenney
      TREE_RCU assumes that CPU numbering is contiguous, but some users need
      large holes in the numbering to better map to hardware layout.  This patch
      makes TREE_RCU (and TREE_PREEMPT_RCU) tolerate large holes in the CPU
      numbering.  However, NR_CPUS must still be greater than the largest
      CPU number.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
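      A sketch of the tolerant pattern; leaf_for_cpu() is a hypothetical
      helper, while for_each_possible_cpu() and the ->rda[] array are the
      standard facilities of this era:

          /* Sketch: tolerate holes in CPU numbering by walking the possible
           * mask and deriving each CPU's leaf from its number. */
          static void assign_leaves(struct rcu_state *rsp)
          {
                  int cpu;

                  for_each_possible_cpu(cpu) {
                          struct rcu_data *rdp = rsp->rda[cpu];

                          rdp->mynode = leaf_for_cpu(rsp, cpu); /* hypothetical */
                          rdp->grpmask = 1UL << (cpu - rdp->mynode->grplo);
                  }
          }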
    • rcu: improve RCU CPU stall-warning messages · 4300aa64
      Committed by Paul E. McKenney
      The existing RCU CPU stall-warning messages can be confusing, especially
      in the case where one CPU detects a single other stalled CPU.  In addition,
      the console messages did not say which flavor of RCU detected the stall,
      which can make it difficult to work out exactly what is causing the stall.
      This commit improves these messages.
      Requested-by: Dhaval Giani <dhaval.giani@gmail.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    • rcu: print boot-time console messages if RCU configs out of ordinary · 26845c28
      Committed by Paul E. McKenney
      Print boot-time messages if tracing is enabled, if fanout is set
      to non-default values, if exact fanout is specified, if accelerated
      dyntick-idle grace periods have been enabled, if RCU-lockdep is enabled,
      if rcutorture has been boot-time enabled, if the CPU stall detector has
      been disabled, or if four-level hierarchy has been enabled.
      
      This is all for TREE_RCU and TREE_PREEMPT_RCU.  TINY_RCU will be handled
      separately, if at all.
      Suggested-by: Josh Triplett <josh@joshtriplett.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    • rcu: disable CPU stall warnings upon panic · c68de209
      Committed by Paul E. McKenney
      The current RCU CPU stall warnings remain enabled even after a panic
      occurs, which some people have found to be a bit counterproductive.
      This patch therefore uses a notifier to disable stall warnings once a
      panic occurs.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
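      The mechanism is the standard panic notifier chain; a sketch, with the
      suppression flag's name being illustrative:

          #include <linux/notifier.h>
          #include <linux/kernel.h>

          static int rcu_stall_suppress;  /* illustrative flag checked by the
                                             stall-warning code */

          static int rcu_panic(struct notifier_block *nb, unsigned long ev,
                               void *ptr)
          {
                  rcu_stall_suppress = 1; /* no stall warnings after a panic */
                  return NOTIFY_DONE;
          }

          static struct notifier_block rcu_panic_block = {
                  .notifier_call = rcu_panic,
          };

          static void check_cpu_stall_init(void)
          {
                  atomic_notifier_chain_register(&panic_notifier_list,
                                                 &rcu_panic_block);
          }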
    • rcu: slim down rcutiny by removing rcu_scheduler_active and friends · bbad9379
      Committed by Paul E. McKenney
      TINY_RCU does not need rcu_scheduler_active unless CONFIG_DEBUG_LOCK_ALLOC.
      So conditionally compile rcu_scheduler_active in order to slim down
      rcutiny a bit more.  Also gets rid of an EXPORT_SYMBOL_GPL, which is
      responsible for most of the slimming.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    • rcu: refactor RCU's context-switch handling · 25502a6c
      Committed by Paul E. McKenney
      The addition of preemptible RCU to treercu resulted in a bit of
      confusion and inefficiency surrounding the handling of context switches
      for RCU-sched and for RCU-preempt.  For RCU-sched, a context switch
      is a quiescent state, pure and simple, just like it always has been.
      For RCU-preempt, a context switch is in no way a quiescent state, but
      special handling is required when a task blocks in an RCU read-side
      critical section.
      
      However, the callout from the scheduler and the outer loop in ksoftirqd
      still calls something named rcu_sched_qs(), whose name is no longer
      accurate.  Furthermore, when rcu_check_callbacks() notes an RCU-sched
      quiescent state, it ends up unnecessarily (though harmlessly, aside
      from the performance hit) enqueuing the current task if it happens to
      be running in an RCU-preempt read-side critical section.  This not only
      increases the maximum latency of scheduler_tick(), it also needlessly
      increases the overhead of the next outermost rcu_read_unlock() invocation.
      
      This patch addresses this situation by separating the notion of RCU's
      context-switch handling from that of RCU-sched's quiescent states.
      The context-switch handling is covered by rcu_note_context_switch() in
      general and by rcu_preempt_note_context_switch() for preemptible RCU.
      This permits rcu_sched_qs() to handle quiescent states and only quiescent
      states.  It also reduces the maximum latency of scheduler_tick(), though
      probably by much less than a microsecond.  Finally, it means that tasks
      within preemptible-RCU read-side critical sections avoid incurring the
      overhead of queuing unless there really is a context switch.
      Suggested-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Acked-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <peterz@infradead.org>
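      The resulting split is compact; this mirrors the shape the log
      describes, with the flavor-specific bodies living elsewhere:

          /* One context-switch entry point serving both RCU flavors. */
          void rcu_note_context_switch(int cpu)
          {
                  rcu_sched_qs(cpu);                    /* RCU-sched: this IS
                                                           a quiescent state */
                  rcu_preempt_note_context_switch(cpu); /* RCU-preempt: not a
                                                           QS, but queue a task
                                                           blocked in a read-side
                                                           critical section */
          }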
    • rcu: move some code from macro to function · 0c34029a
      Committed by Lai Jiangshan
      Shrink the RCU_INIT_FLAVOR() macro by moving all but the initialization
      of the ->rda[] array to rcu_init_one().  The call to rcu_init_one()
      can then be moved to the end of the RCU_INIT_FLAVOR() macro, which is
      required because rcu_boot_init_percpu_data(), which is now called from
      rcu_init_one(), depends on the initialization of the ->rda[] array.
      Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    • rcu: make dead code really dead · f261414f
      Committed by Lai Jiangshan
      cleanup: make dead code really dead
      Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    • rcu: substitute set_need_resched for sending resched IPIs · d25eb944
      Committed by Paul E. McKenney
      This patch adds a check to __rcu_pending() that does a local
      set_need_resched() if the current CPU is holding up the current grace
      period and if force_quiescent_state() will be called soon.  The goal is
      to reduce the probability that force_quiescent_state() will need to do
      smp_send_reschedule(), which sends an IPI and is therefore more expensive
      on most architectures.
      Signed-off-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
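      A sketch of the added check; the "holding up the grace period" and
      timing inputs are paraphrased as parameters:

          #include <linux/jiffies.h>
          #include <linux/sched.h>

          /* Sketch: in __rcu_pending(), prod the local CPU cheaply instead of
           * letting force_quiescent_state() send it a resched IPI later. */
          static void maybe_nudge_self(int gp_in_progress, int cpu_needs_qs,
                                       unsigned long jiffies_force_qs)
          {
                  if (gp_in_progress && cpu_needs_qs &&
                      time_after(jiffies, jiffies_force_qs))
                          set_need_resched(); /* local; avoids an IPI via
                                                 smp_send_reschedule() */
          }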
  11. 27 Feb 2010, 1 commit
  12. 26 Feb 2010, 1 commit
    • rcu: Make rcu_read_lock_sched_held() take boot time into account · d9f1bb6a
      Committed by Paul E. McKenney
      Before the scheduler starts, all tasks are non-preemptible by
      definition. So, during that time, rcu_read_lock_sched_held()
      needs to always return "true".  This patch makes that be so.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <1267135607-7056-2-git-send-email-paulmck@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
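      A sketch of the shape; the real function also consults lockdep/debug
      state, omitted here:

          /* Before the scheduler starts, every task is non-preemptible, so
           * rcu_read_lock_sched() is effectively held everywhere. */
          int rcu_read_lock_sched_held(void)
          {
                  if (!rcu_scheduler_active)  /* still booting */
                          return 1;
                  return preempt_count() != 0 || irqs_disabled();
          }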
  13. 25 Feb 2010, 5 commits
    • rcu: Add RCU_CPU_STALL_VERBOSE to dump detailed per-task information · 1ed509a2
      Committed by Paul E. McKenney
      When RCU detects a grace-period stall, it currently just prints
      out the PID of any tasks doing the stalling.  This patch adds
      RCU_CPU_STALL_VERBOSE, which enables the more-verbose reporting
      from sched_show_task().
      Suggested-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <1266887105-1528-21-git-send-email-paulmck@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • rcu: Fix deadlock in TREE_PREEMPT_RCU CPU stall detection · 3acd9eb3
      Committed by Paul E. McKenney
      Under TREE_PREEMPT_RCU, print_other_cpu_stall() invokes
      rcu_print_task_stall() with the root rcu_node structure's ->lock
      held, and rcu_print_task_stall() then acquires that same lock,
      resulting in self-deadlock. Fix this by removing the lock
      acquisition from rcu_print_task_stall() and making all callers
      acquire the lock instead.
      Tested-by: John Kacur <jkacur@redhat.com>
      Tested-by: Thomas Gleixner <tglx@linutronix.de>
      Located-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <1266887105-1528-19-git-send-email-paulmck@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • rcu: Convert to raw_spinlocks · 1304afb2
      Committed by Paul E. McKenney
      The spinlocks in rcutree need to be real spinlocks in
      preempt-rt. Convert them to raw_spinlocks.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <1266887105-1528-18-git-send-email-paulmck@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • rcu: Stop overflowing signed integers · 20133cfc
      Committed by Paul E. McKenney
      The C standard does not specify the result of an operation that
      overflows a signed integer, so such operations need to be
      avoided.  This patch changes the type of several fields from
      "long" to "unsigned long" and adjusts operations as needed.
      ULONG_CMP_GE() and ULONG_CMP_LT() macros are introduced to do
      the modular comparisons that are appropriate given that overflow
      is an expected event.
      Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <1266887105-1528-17-git-send-email-paulmck@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
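      These are the two macros in question; the trick is that unsigned
      subtraction is well defined, so comparing the difference against half
      the counter space gives the right answer across wraparound:

          /* Modular comparisons for unsigned long counters: correct whenever
           * the two values are within ULONG_MAX/2 of each other, wrap
           * included, e.g. ULONG_CMP_LT(rdp->gpnum, rnp->gpnum). */
          #define ULONG_CMP_GE(a, b)  (ULONG_MAX / 2 >= (a) - (b))  /* a >= b */
          #define ULONG_CMP_LT(a, b)  (ULONG_MAX / 2 <  (a) - (b))  /* a <  b */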
    • rcu: Accelerate grace period if last non-dynticked CPU · 8bd93a2c
      Committed by Paul E. McKenney
      Currently, rcu_needs_cpu() simply checks whether the current CPU
      has an outstanding RCU callback, which means that the last CPU
      to go into dyntick-idle mode might wait a few ticks for the
      relevant grace periods to complete.  However, if all the other
      CPUs are in dyntick-idle mode, and if this CPU is in a quiescent
      state (which it is for RCU-bh and RCU-sched any time that we are
      considering going into dyntick-idle mode), then the grace period
      is instantly complete.
      
      This patch therefore repeatedly invokes the RCU grace-period
      machinery in order to force any needed grace periods to complete
      quickly.  It does so a limited number of times in order to
      prevent starvation by an RCU callback function that might pass
      itself to call_rcu().
      
      However, if any CPU other than the current one is not in
      dyntick-idle mode, this patch falls back to simply checking (with a
      fix for a bug noted by Lai Jiangshan).  It also takes advantage of
      the last grace-period forcing, an opportunity noted by Steve
      Rostedt, and applies a simplified #ifdef condition suggested by
      Frederic Weisbecker.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <1266887105-1528-15-git-send-email-paulmck@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
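      In sketch form; the flush bound and helpers are illustrative, since the
      log only promises "a limited number of times":

          #define RCU_NEEDS_CPU_FLUSHES 5   /* illustrative starvation bound */

          /* Hypothetical helpers standing in for the real machinery. */
          extern int all_other_cpus_dynticked(int cpu);
          extern int cpu_has_callbacks(int cpu);
          extern void force_grace_period_processing(int cpu);

          /* Sketch: if we are the last non-dyntick-idle CPU, push the
           * grace-period machinery a bounded number of times before idling. */
          int rcu_needs_cpu_sk(int cpu)
          {
                  int c;

                  if (!all_other_cpus_dynticked(cpu))
                          return cpu_has_callbacks(cpu); /* old simple check */
                  for (c = 0; c < RCU_NEEDS_CPU_FLUSHES &&
                              cpu_has_callbacks(cpu); c++)
                          force_grace_period_processing(cpu);
                  return cpu_has_callbacks(cpu);
          }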
  14. 16 Jan 2010, 1 commit
    • rcu: Fix sparse warnings · 017c4261
      Committed by Paul E. McKenney
      Rename local variable "i" in rcu_init() to avoid conflict with
      RCU_INIT_FLAVOR(), restrict the scope of RCU_TREE_NONCORE, and
      make __synchronize_srcu() static.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <12635142581560-git-send-email->
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  15. 13 Jan 2010, 4 commits
    • rcu: Give different levels of the rcu_node hierarchy distinct lockdep names · b6407e86
      Committed by Paul E. McKenney
      Previously, each level of the rcu_node hierarchy had the same
      rather unimaginative name: "&rcu_node_class[i]".  This makes
      lockdep diagnostics involving these lockdep classes less helpful
      than would be nice. This patch fixes this by giving each level
      of the rcu_node hierarchy a distinct name: "rcu_node_level_0",
      "rcu_node_level_1", and so on. This version of the patch
      includes improved diagnostics suggested by Josh Triplett and
      Peter Zijlstra.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <12626498421830-git-send-email->
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
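      The mechanism is lockdep's per-class naming hook; a sketch of the
      per-level initialization, with array sizes and loop structure
      illustrative:

          /* Sketch: give each rcu_node tree level its own lockdep class
           * and name. */
          static struct lock_class_key rcu_node_class[4];
          static const char *const level_names[] = {
                  "rcu_node_level_0", "rcu_node_level_1",
                  "rcu_node_level_2", "rcu_node_level_3",
          };

          /* for each level i, for each node rnp on that level: */
          spin_lock_init(&rnp->lock);
          lockdep_set_class_and_name(&rnp->lock, &rcu_node_class[i],
                                     level_names[i]);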
    • rcu: Add force_quiescent_state() testing to rcutorture · bf66f18e
      Committed by Paul E. McKenney
      Add force_quiescent_state() testing to rcutorture, with a
      separate thread that repeatedly invokes force_quiescent_state()
      in bursts. This can greatly increase the probability of
      encountering certain types of race conditions.
      Suggested-by: Josh Triplett <josh@joshtriplett.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <1262646551116-git-send-email->
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • rcu: Make force_quiescent_state() start grace period if needed · 46a1e34e
      Committed by Paul E. McKenney
      Grace periods cannot be started while force_quiescent_state() is
      active.  This is OK in that the affected CPUs will try again
      later, but it does induce needless grace-period delays.  This
      patch causes rcu_start_gp() to record a failed attempt to start
      a grace period. When force_quiescent_state() prepares to return,
      it then starts the grace period if there was such a failed
      attempt.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <12626465501854-git-send-email->
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
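      A sketch of the handoff; the flag name follows the log's wording and
      is illustrative:

          /* In rcu_start_gp(), under the root rcu_node lock: */
          if (fqs_active) {             /* force_quiescent_state() is running */
                  rsp->fqs_need_gp = 1; /* record the failed start attempt */
                  return;
          }

          /* In force_quiescent_state(), as it prepares to return: */
          if (rsp->fqs_need_gp) {
                  rsp->fqs_need_gp = 0;
                  rcu_start_gp(rsp);    /* start the deferred grace period */
          }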
    • rcu: Remove redundant grace-period check · 45f014c5
      Committed by Paul E. McKenney
      The rcu_process_dyntick() function checks twice for the end of
      the current grace period.  However, it holds the current
      rcu_node structure's ->lock field throughout, and doesn't get to
      the second call to rcu_gp_in_progress() unless there is at least
      one CPU corresponding to this rcu_node structure that has not
      yet checked in for the current grace period, which would prevent
      the current grace period from ending. So the current grace
      period cannot have ended, and the second check is redundant, so
      remove it.
      
      Also, given that this function is used even with !CONFIG_NO_HZ,
      its name is quite misleading.  Change from rcu_process_dyntick()
      to force_qs_rnp().
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <1262646550562-git-send-email->
      Signed-off-by: Ingo Molnar <mingo@elte.hu>