1. 19 May 2008, 2 commits
    • rcu: add rcu_barrier_sched() and rcu_barrier_bh() · 70f12f84
      Committed by Paul E. McKenney
      Add rcu_barrier_sched() and rcu_barrier_bh().  With these in place,
      rcutorture no longer gives the occasional oops when torturing of
      rcu_bh is repeatedly started and stopped.  This also adds the API
      needed to flush out pre-existing call_rcu_sched() callbacks.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
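      Illustrative sketch (not part of the patch): a module that queued
      callbacks with call_rcu_sched() or call_rcu_bh() could use the new
      barriers to flush them before unload.  The module itself is
      hypothetical; only the two barrier functions come from this commit.

          #include <linux/module.h>
          #include <linux/rcupdate.h>

          static int __init example_init(void)
          {
                  /* callbacks get queued elsewhere via call_rcu_sched()/call_rcu_bh() */
                  return 0;
          }

          static void __exit example_exit(void)
          {
                  /* Stop queueing new callbacks first (not shown), then flush:
                   * rcu_barrier_sched() blocks until every pending call_rcu_sched()
                   * callback has run; rcu_barrier_bh() does the same for call_rcu_bh(). */
                  rcu_barrier_sched();
                  rcu_barrier_bh();
          }

          module_init(example_init);
          module_exit(example_exit);
          MODULE_LICENSE("GPL");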
    • rcu: add call_rcu_sched() · 4446a36f
      Committed by Paul E. McKenney
      Fourth cut of the patch to provide call_rcu_sched().  It is to
      synchronize_sched() as call_rcu() is to synchronize_rcu().
      
      Should be fine for experimental and -rt use, but not ready for inclusion.
      With some luck, I will be able to tell Andrew to come out of hiding on
      the next round.
      
      Passes multi-day rcutorture sessions with concurrent CPU hotplugging.
      
      Fixes since the first version include a bug that could result in
      indefinite blocking (spotted by Gautham Shenoy), better resiliency
      against CPU-hotplug operations, and other minor fixes.
      
      Fixes since the second version include reworking grace-period detection
      to avoid deadlocks that could happen when running concurrently with
      CPU hotplug, adding Mathieu's fix to avoid the softlockup messages,
      as well as Mathieu's fix to allow use earlier in boot.
      
      Fixes since the third version include a wrong-CPU bug spotted by
      Andrew, getting rid of the obsolete synchronize_kernel API that somehow
      snuck back in, merging spin_unlock() and local_irq_restore() in a
      few places, commenting the code that checks for quiescent states based
      on interrupting from user-mode execution or the idle loop, removing
      some inline attributes, and some code-style changes.
      
      Known/suspected shortcomings:
      
      o	I still do not entirely trust the sleep/wakeup logic.  Next step
      	will be to use a private snapshot of the CPU online mask in
      	rcu_sched_grace_period() -- if the CPU wasn't there at the start
      	of the grace period, we don't need to hear from it.  And the
      	bit about accounting for changes in online CPUs inside of
      	rcu_sched_grace_period() is ugly anyway.
      
      o	It might be good for rcu_sched_grace_period() to invoke
      	resched_cpu() when a given CPU wasn't responding quickly,
      	but resched_cpu() is declared static...
      
      This patch also fixes a long-standing bug in the earlier preemptable-RCU
      implementation of synchronize_rcu() that could result in loss of
      concurrent external changes to a task's CPU affinity mask.  I still cannot
      remember who reported this...
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
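      Illustrative sketch of the analogy above, using a hypothetical
      struct foo: call_rcu_sched() queues a callback that runs after every
      CPU has passed through a schedule-based quiescent state, i.e. it is
      the asynchronous counterpart of synchronize_sched().

          #include <linux/rcupdate.h>
          #include <linux/slab.h>

          struct foo {
                  int data;
                  struct rcu_head rcu;
          };

          static void foo_reclaim(struct rcu_head *head)
          {
                  /* Runs once a full sched grace period has elapsed. */
                  kfree(container_of(head, struct foo, rcu));
          }

          static void foo_retire(struct foo *fp)
          {
                  /* Does not block, unlike synchronize_sched() followed by kfree(). */
                  call_rcu_sched(&fp->rcu, foo_reclaim);
          }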
  2. 14 Feb 2008, 1 commit
  3. 26 Jan 2008, 3 commits
  4. 23 Jan 2008, 1 commit
  5. 17 Oct 2007, 1 commit
  6. 12 Oct 2007, 1 commit
  7. 10 May 2007, 1 commit
    • Add suspend-related notifications for CPU hotplug · 8bb78442
      Committed by Rafael J. Wysocki
      Since nonboot CPUs are now disabled after tasks and devices have been
      frozen and the CPU hotplug infrastructure is used for this purpose, we need
      special CPU hotplug notifications that will help the CPU-hotplug-aware
      subsystems distinguish normal CPU hotplug events from CPU hotplug events
      related to a system-wide suspend or resume operation in progress.  This
      patch introduces such notifications and causes them to be used during
      suspend and resume transitions.  It also changes all of the
      CPU-hotplug-aware subsystems to take these notifications into consideration
      (for now they are handled in the same way as the corresponding "normal"
      ones).
      
      [oleg@tv-sign.ru: cleanups]
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
      Cc: Gautham R Shenoy <ego@in.ibm.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
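      Illustrative sketch (not from the patch): a CPU-hotplug-aware
      subsystem can handle the new _FROZEN suspend/resume variants the
      same way as the normal events, which is how this patch updates the
      existing subsystems.  The callback body is hypothetical.

          #include <linux/cpu.h>
          #include <linux/notifier.h>

          static int example_cpu_notify(struct notifier_block *nb,
                                        unsigned long action, void *hcpu)
          {
                  switch (action) {
                  case CPU_UP_PREPARE:
                  case CPU_UP_PREPARE_FROZEN:     /* same handling during resume */
                          /* allocate per-CPU state for (long)hcpu */
                          break;
                  case CPU_DEAD:
                  case CPU_DEAD_FROZEN:           /* same handling during suspend */
                          /* release per-CPU state */
                          break;
                  }
                  return NOTIFY_OK;
          }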
  8. 08 Dec 2006, 1 commit
    • [PATCH] rcu: add a prefetch() in rcu_do_batch() · 1c69d921
      Committed by Eric Dumazet
      On some workloads (for example, when many close() syscalls are
      issued), the RCU qlen can be quite large, and the RCU heads are no
      longer in the CPU cache by the time rcu_do_batch() is called.

      This patch adds a prefetch() in rcu_do_batch() to give the CPU a hint
      to bring back the cache lines containing 'struct rcu_head's.

      Most list-manipulation macros include a prefetch(), but open-coded
      loops do not (at least with current C compilers :) )
      
      I got a nice speedup on a trivial benchmark (3.48 us per iteration instead
      of 3.95 us on a 1.6 GHz Pentium-M)
      
      int p[2]; while (1) { pipe(p); close(p[0]); close(p[1]); }
      Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
      Cc: "Paul E. McKenney" <paulmck@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
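      The idea in simplified form (the real rcu_do_batch() loop carries
      more bookkeeping): save and prefetch the next rcu_head before
      invoking the current callback, so its cache line is on its way while
      the callback runs.

          #include <linux/prefetch.h>
          #include <linux/rcupdate.h>

          static void process_callbacks_sketch(struct rcu_head *list)
          {
                  struct rcu_head *next;

                  while (list) {
                          next = list->next;
                          prefetch(next);         /* start pulling the next head into cache */
                          list->func(list);       /* may free 'list'; 'next' already saved */
                          list = next;
                  }
          }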
  9. 04 Oct 2006, 1 commit
    • [PATCH] rcu: simplify/improve batch tuning · 20e9751b
      Committed by Oleg Nesterov
      This kills the hard-to-calculate 'rsinterval' boot parameter and the
      per-cpu rcu_data.last_rs_qlen.  Instead, it adds a flag,
      rcu_ctrlblk.signaled, which records the fact that one of the CPUs has
      sent a resched IPI since the last rcu_start_batch().
      
      Roughly speaking, we need two rcu_start_batch()s in order to move
      callbacks from ->nxtlist to ->donelist.  This means that when ->qlen
      exceeds qhimark and continues to grow, we should send a resched IPI,
      and then do it again after we have gone through a quiescent state.

      On the other hand, if one was already sent, we don't need to do it
      again when another CPU detects overflow of the queue.
      Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
      Acked-by: Paul E. McKenney <paulmck@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
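      In rough outline (a simplified sketch, not the verbatim kernel code;
      the holdout mask is passed in here only for self-containment), the
      flag gates the IPI path like this:

          #include <linux/rcupdate.h>
          #include <linux/smp.h>

          static void force_quiescent_state_sketch(struct rcu_ctrlblk *rcp,
                                                   cpumask_t holdouts)
          {
                  int cpu;

                  if (rcp->signaled)
                          return;         /* IPIs already sent since the last rcu_start_batch() */
                  rcp->signaled = 1;
                  for_each_cpu_mask(cpu, holdouts)        /* CPUs not yet quiescent */
                          smp_send_reschedule(cpu);
          }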
  10. 13 Sep 2006, 1 commit
  11. 01 Aug 2006, 1 commit
  12. 04 Jul 2006, 1 commit
  13. 28 Jun 2006, 3 commits
  14. 23 Jun 2006, 1 commit
  15. 16 May 2006, 1 commit
  16. 26 Apr 2006, 2 commits
  17. 24 Mar 2006, 1 commit
  18. 23 Mar 2006, 2 commits
  19. 21 Mar 2006, 1 commit
  20. 09 Mar 2006, 1 commit
    • [PATCH] rcu batch tuning · 21a1ea9e
      Committed by Dipankar Sarma
      This patch adds new tunables for the RCU queue and finished batches.
      There are two types of control: the number of completed RCU updates
      invoked in a batch (blimit), and monitoring for a high rate of
      incoming RCU callbacks on a CPU (qhimark, qlowmark).

      By default, the per-cpu batch limit is set to a small value.  If the
      incoming RCU rate exceeds the high watermark, we do two things: force
      a quiescent state on all CPUs, and set the CPU's batch limit to
      INT_MAX.  Setting the batch limit to INT_MAX forces all finished RCUs
      to be processed in one shot.  If we have more than INT_MAX RCUs queued
      up, then we have bigger problems anyway.  Once the number of queued
      RCUs falls below the low watermark, the batch limit is reset to the
      default.
      Signed-off-by: Dipankar Sarma <dipankar@in.ibm.com>
      Cc: "Paul E. McKenney" <paulmck@us.ibm.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
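      The watermark scheme reduces to roughly the following (a simplified
      sketch, not the verbatim kernel code; blimit, qhimark and qlowmark
      are the tunables this patch adds, with illustrative values):

          #include <linux/kernel.h>
          #include <linux/rcupdate.h>

          static int blimit = 10;         /* illustrative default batch limit */
          static int qhimark = 10000;     /* illustrative high watermark */
          static int qlowmark = 100;      /* illustrative low watermark */

          /* On enqueue: detect overflow and switch to drain-everything mode. */
          static void on_enqueue_sketch(struct rcu_data *rdp)
          {
                  if (++rdp->qlen > qhimark) {
                          rdp->blimit = INT_MAX;  /* process all finished RCUs in one shot */
                          /* ...and force a quiescent state on all CPUs... */
                  }
          }

          /* After a batch: drop back to the default once the queue drains. */
          static void after_batch_sketch(struct rcu_data *rdp)
          {
                  if (rdp->blimit == INT_MAX && rdp->qlen <= qlowmark)
                          rdp->blimit = blimit;
          }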
  21. 11 Jan 2006, 2 commits
  22. 10 Jan 2006, 2 commits
  23. 09 Jan 2006, 3 commits
  24. 13 Dec 2005, 2 commits
    • [PATCH] Fix RCU race in access of nohz_cpu_mask · c3f59023
      Committed by Srivatsa Vaddagiri
      Accessing nohz_cpu_mask before incrementing rcp->cur is racy.  It can
      cause tickless idle CPUs to be included in rsp->cpumask, which will
      extend grace periods unnecessarily.

      Fix this race.  It has been tested using extensions to the RCU torture
      module that force various CPUs to become idle.
      Signed-off-by: Srivatsa Vaddagiri <vatsa@in.ibm.com>
      Cc: Dipankar Sarma <dipankar@in.ibm.com>
      Cc: "Paul E. McKenney" <paulmck@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
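      The fix boils down to ordering (a simplified sketch based on
      rcu_start_batch(); the exact code differs): publish the new grace
      period before sampling nohz_cpu_mask, so a CPU that goes tickless
      idle after the sample cannot end up being waited on.

          #include <linux/cpumask.h>
          #include <linux/rcupdate.h>

          /* Sketch: advance the grace-period counter first, then decide
           * which CPUs the new grace period must wait for. */
          static void start_batch_sketch(struct rcu_ctrlblk *rcp,
                                         struct rcu_state *rsp)
          {
                  rcp->cur++;     /* start the new grace period */
                  smp_mb();       /* make it visible before reading the mask */
                  cpus_andnot(rsp->cpumask, cpu_online_map, nohz_cpu_mask);
          }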
    • [PATCH] add rcu_barrier() synchronization point · ab4720ec
      Committed by Dipankar Sarma
      This introduces a new interface, rcu_barrier(), which waits until all
      RCU callbacks queued before this call have completed.

      Reiser4 needs this because we do more than just free a memory object
      in our RCU callback: we also remove it from a list hanging off the
      super-block.  This means that before freeing the reiser4-specific
      portion of the super-block (during umount), we have to wait until all
      pending RCU callbacks have executed.

      The only change reiser4 made to the original patch is the export of
      rcu_barrier().
      
      Cc: Hans Reiser <reiser@namesys.com>
      Cc: Vladimir V. Saveliev <vs@namesys.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
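      Usage sketch (the reiser4 code itself is not shown; function names
      here are hypothetical): an unmount path whose RCU callbacks touch
      super-block state must flush them before freeing that state.

          #include <linux/fs.h>
          #include <linux/rcupdate.h>

          static void example_kill_sb(struct super_block *sb)
          {
                  /* Objects were detached earlier; their RCU callbacks (which
                   * also unlink them from a list hanging off the super-block)
                   * may still be pending. */
                  rcu_barrier();  /* wait for every previously queued RCU callback */
                  /* Now the fs-private part of the super-block can be freed. */
                  kill_anon_super(sb);
          }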
  25. 31 Oct 2005, 1 commit
  26. 18 Oct 2005, 2 commits
    • [PATCH] rcu: keep rcu callback event counter · 5ee832db
      Committed by Eric Dumazet
      This makes call_rcu() keep track of how many events there are on the
      RCU list and causes a reschedule event when the list gets too long.

      This helps keep the RCU callback lists short.
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
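      In outline (a simplified sketch; the field name and threshold are
      illustrative, not the verbatim change):

          #include <linux/rcupdate.h>
          #include <linux/sched.h>

          /* Count callbacks queued by call_rcu() and ask for a reschedule
           * when the list gets long, so a quiescent state (and hence a
           * grace period) comes sooner and the list can drain. */
          static void enqueue_accounting_sketch(struct rcu_data *rdp)
          {
                  if (unlikely(++rdp->qlen > 10000))
                          set_need_resched();
          }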
    • Increase default RCU batching sharply · 2cc78eb5
      Committed by Linus Torvalds
      Dipankar made RCU limit the batch size to improve latency, but that
      approach is unworkable: it can cause the RCU queues to grow without
      bound, since the batch limit also throttles how quickly queued
      callbacks are invoked.

      So make the limit much higher, and start planning on instead limiting
      the batch size by doing RCU callbacks more often if the queue looks
      like it might be growing too long.
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  27. 10 Sep 2005, 1 commit