1. 07 Oct 2009, 1 commit
    • rcu: Move rcu_barrier() to rcutree · d0ec774c
      Committed by Paul E. McKenney
      Move the existing rcu_barrier() implementation to rcutree.c,
      consistent with the fact that the rcu_barrier() implementation is
      tied quite tightly to the RCU implementation.
      
      This opens the way to simplify and fix rcutree.c's rcu_barrier()
      implementation in a later patch.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: akpm@linux-foundation.org
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <12548908982563-git-send-email->
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      d0ec774c
  2. 06 Oct 2009, 2 commits
  3. 24 Sep 2009, 1 commit
  4. 19 Sep 2009, 1 commit
    • rcu: Fix whitespace inconsistencies · a71fca58
      Committed by Paul E. McKenney
      Fix a number of whitespace errors in the include/linux/rcu*
      and the kernel/rcu* files.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: akpm@linux-foundation.org
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      LKML-Reference: <20090918172819.GA24405@linux.vnet.ibm.com>
      [ did more checkpatch fixlets ]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      a71fca58
  5. 18 Sep 2009, 1 commit
    • rcu: Fix synchronize_rcu() for TREE_PREEMPT_RCU · 16e30811
      Committed by Paul E. McKenney
      The redirection of synchronize_sched() to synchronize_rcu() was
      appropriate for TREE_RCU, but not for TREE_PREEMPT_RCU.
      
      Fix this by creating an underlying synchronize_sched().  TREE_RCU
      then redirects synchronize_rcu() to synchronize_sched(), while
      TREE_PREEMPT_RCU has its own version of synchronize_rcu().
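      
      As a rough sketch of the resulting mapping (illustrative only; the
      placement of the #ifdef is not the literal patch):
      
      	#ifdef CONFIG_TREE_PREEMPT_RCU
      	extern void synchronize_rcu(void);	/* preemptible RCU keeps its own */
      	#else
      	static inline void synchronize_rcu(void)
      	{
      		synchronize_sched();	/* TREE_RCU: a sched GP is an RCU GP */
      	}
      	#endif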
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: akpm@linux-foundation.org
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      LKML-Reference: <12528585111916-git-send-email->
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      16e30811
  6. 19 Aug 2009, 1 commit
    • rcu: Delay rcu_barrier() wait until beginning of next CPU-hotunplug operation. · 1423cc03
      Committed by Paul E. McKenney
      Ingo Molnar reported this lockup:
      
       [  200.380003] Hangcheck: hangcheck value past margin!
       [  248.192003] INFO: task S99local:2974 blocked for more than 120 seconds.
       [  248.194532] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
       [  248.202330] S99local      D 0000000c  6256  2974   2687 0x00000000
       [  248.208929]  9c7ebe90 00000086 6b67ef8b 0000000c 9f25a610 81a69869 00000001 820b6990
       [  248.216123]  820b6990 820b6990 9c6e4c20 9c6e4eb4 82c78990 00000000 6b993559 0000000c
       [  248.220616]  9c7ebe90 8105f22a 9c6e4eb4 9c6e4c20 00000001 9c7ebe98 9c7ebeb4 81a65cb3
       [  248.229990] Call Trace:
       [  248.234049]  [<81a69869>] ? _spin_unlock_irqrestore+0x22/0x37
       [  248.239769]  [<8105f22a>] ? prepare_to_wait+0x48/0x4e
       [  248.244796]  [<81a65cb3>] rcu_barrier_cpu_hotplug+0xaa/0xc9
       [  248.250343]  [<8105f029>] ? autoremove_wake_function+0x0/0x38
       [  248.256063]  [<81062cf2>] notifier_call_chain+0x49/0x71
       [  248.261263]  [<81062da0>] raw_notifier_call_chain+0x11/0x13
       [  248.266809]  [<81a0b475>] _cpu_down+0x272/0x288
       [  248.271316]  [<81a0b4d5>] cpu_down+0x4a/0xa2
       [  248.275563]  [<81a0c48a>] store_online+0x2a/0x5e
       [  248.280156]  [<81a0c460>] ? store_online+0x0/0x5e
       [  248.284836]  [<814ddc35>] sysdev_store+0x20/0x28
       [  248.289429]  [<8112e403>] sysfs_write_file+0xb8/0xe3
       [  248.294369]  [<8112e34b>] ? sysfs_write_file+0x0/0xe3
       [  248.299396]  [<810e4c8f>] vfs_write+0x91/0x120
       [  248.303817]  [<810e4dc1>] sys_write+0x40/0x65
       [  248.308150]  [<81002d73>] sysenter_do_call+0x12/0x28
      
      This change moves an RCU grace-period delay off the
      critical path for CPU-hotunplug operations.
      
      Since RCU callback migration is only performed on
      CPU-hotunplug operations, and since the rcu_barrier() race is
      provoked only by consecutive CPU-hotunplug operations, it is
      not necessary to delay the end of a given CPU-hotunplug
      operation.
      
      We can instead choose to delay the beginning of the next
      CPU-hotunplug operation.
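      
      A sketch of the idea (rcu_migrate_wq and rcu_migrate_pending() are
      illustrative names, not the literal patch):
      
      	static int rcu_barrier_cpu_hotplug(struct notifier_block *self,
      					   unsigned long action, void *hcpu)
      	{
      		/* Wait at the *start* of the next unplug operation... */
      		if (action == CPU_DOWN_PREPARE)
      			wait_event(rcu_migrate_wq, !rcu_migrate_pending());
      		/* ...so CPU_POST_DEAD no longer blocks on a grace period. */
      		return NOTIFY_OK;
      	}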
      Reported-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Josh Triplett <josht@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: akpm@linux-foundation.org
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: hugh.dickins@tiscali.co.uk
      Cc: benh@kernel.crashing.org
      LKML-Reference: <20090819060614.GA14383@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      1423cc03
  7. 16 Aug 2009, 1 commit
    • rcu: Simplify RCU CPU-hotplug notification · 2e597558
      Committed by Paul E. McKenney
      Use the new cpu_notifier() API to simplify RCU's CPU-hotplug
      notifiers, collapsing down to a single such notifier.
      
      This makes it trivial to provide the notifier-ordering
      guarantee that rcu_barrier() depends on.
      
      Also remove redundant open_softirq() calls from Hierarchical
      RCU notifier.
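      
      With the new API, registration collapses to roughly this sketch
      (handler body elided):
      
      	static int __cpuinit rcu_cpu_notify(struct notifier_block *self,
      					    unsigned long action, void *hcpu)
      	{
      		/* single handler for all RCU CPU-hotplug events */
      		return NOTIFY_OK;
      	}
      
      	void __init rcu_init(void)
      	{
      		cpu_notifier(rcu_cpu_notify, 0);	/* one notifier, one priority */
      	}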
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: josht@linux.vnet.ibm.com
      Cc: akpm@linux-foundation.org
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: hugh.dickins@tiscali.co.uk
      Cc: benh@kernel.crashing.org
      LKML-Reference: <12503552312510-git-send-email->
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      2e597558
  8. 03 Jul 2009, 1 commit
    • rcu: Add synchronize_sched_expedited() primitive · 03b042bf
      Committed by Paul E. McKenney
      This adds the synchronize_sched_expedited() primitive that
      implements the "big hammer" expedited RCU grace periods.
      
      This primitive is placed in kernel/sched.c rather than
      kernel/rcupdate.c due to its need to interact closely with the
      migration_thread() kthread.
      
      The idea is to wake up this kthread with req->task set to NULL,
      in response to which the kthread reports the quiescent state
      resulting from the kthread having been scheduled.
      
      Because this patch needs to fall back to the slow versions of
      the primitives in response to some races with CPU onlining and
      offlining, a new synchronize_rcu_bh() primitive is added as
      well.
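      
      In outline, the expedited path looks like this sketch (the hotplug
      check and the kthread helpers are illustrative names, not the real
      code):
      
      	void synchronize_sched_expedited(void)
      	{
      		if (cpu_hotplug_race()) {	/* illustrative check */
      			synchronize_sched();	/* fall back to the slow path */
      			return;
      		}
      		wake_migration_threads();	/* wake with req->task == NULL */
      		wait_for_quiescent_reports();	/* illustrative */
      	}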
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: akpm@linux-foundation.org
      Cc: torvalds@linux-foundation.org
      Cc: davem@davemloft.net
      Cc: dada1@cosmosbay.com
      Cc: zbr@ioremap.net
      Cc: jeff.chua.linux@gmail.com
      Cc: paulus@samba.org
      Cc: laijs@cn.fujitsu.com
      Cc: jengelh@medozas.de
      Cc: r000n@r000n.net
      Cc: benh@kernel.crashing.org
      Cc: mathieu.desnoyers@polymtl.ca
      LKML-Reference: <12459460982947-git-send-email->
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      03b042bf
  9. 16 Apr 2009, 1 commit
    • RCU: Don't try and predeclare inline funcs as it upsets some versions of gcc · 5b1d07ed
      Committed by David Howells
      Don't try and predeclare inline funcs like this:
      
      	static inline void wait_migrated_callbacks(void)
      	...
      	static void _rcu_barrier(enum rcu_barrier type)
      	{
      		...
      		wait_migrated_callbacks();
      	}
      	...
      	static inline void wait_migrated_callbacks(void)
      	{
      		wait_event(rcu_migrate_wq, !atomic_read(&rcu_migrate_type_count));
      	}
      
      as it upsets some versions of gcc under some circumstances:
      
      	kernel/rcupdate.c: In function `_rcu_barrier':
      	kernel/rcupdate.c:125: sorry, unimplemented: inlining failed in call to 'wait_migrated_callbacks': function body not available
      	kernel/rcupdate.c:152: sorry, unimplemented: called from here
      
      This can be dealt with by simply putting the static variables (rcu_migrate_*)
      at the top, and moving the implementation of the function up so that it
      replaces its forward declaration.
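      
      After the fix, the layout is simply (sketch):
      
      	static atomic_t rcu_migrate_type_count = ATOMIC_INIT(0);
      	...
      	static inline void wait_migrated_callbacks(void)
      	{
      		/* definition now appears before its first use */
      		wait_event(rcu_migrate_wq, !atomic_read(&rcu_migrate_type_count));
      	}
      	...
      	static void _rcu_barrier(enum rcu_barrier type)
      	{
      		...
      		wait_migrated_callbacks();
      	}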
      Signed-off-by: David Howells <dhowells@redhat.com>
      Cc: Dipankar Sarma <dipankar@in.ibm.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5b1d07ed
  10. 31 Mar 2009, 1 commit
    • rcu: rcu_barrier VS cpu_hotplug: Ensure callbacks in dead cpu are migrated to online cpu · f69b17d7
      Committed by Lai Jiangshan
      CPU hotplug may happen asynchronously, so some RCU callbacks may
      still be queued on a dead CPU. rcu_barrier() needs to wait for those
      callbacks to complete as well, so we must ensure that callbacks
      queued on a dead CPU are migrated to an online CPU (see the sketch
      after the review below).
      
      Paul E. McKenney's review:
      
        Good stuff, Lai!!!  Simpler than any of the approaches that I was
        considering, and, better yet, independent of the underlying RCU
        implementation!!!
      
        I was initially worried that wake_up() might wake only one of two
        possible wait_event()s, namely rcu_barrier() and the CPU_POST_DEAD code,
        but the fact that wait_event() clears WQ_FLAG_EXCLUSIVE avoids that issue.
        I was also worried about the fact that different RCU implementations have
        different mappings of call_rcu(), call_rcu_bh(), and call_rcu_sched(), but
        this is OK as well because we just get an extra (harmless) callback in the
        case that they map together (for example, Classic RCU has call_rcu_sched()
        mapping to call_rcu()).
      
        Overlap of CPU-hotplug operations is prevented by cpu_add_remove_lock,
        and any stray callbacks that arrive (for example, from irq handlers
        running on the dying CPU) either are ahead of the CPU_DYING callbacks on
        the one hand (and thus accounted for), or happened after the rcu_barrier()
        started on the other (and thus don't need to be accounted for).
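      
      In sketch form, the marker callback at the heart of the mechanism
      (names follow the patch; the surrounding details are elided):
      
      	static void rcu_migrate_callback(struct rcu_head *notused)
      	{
      		/* Runs only after every callback queued ahead of it on
      		 * the dying CPU has been migrated and invoked. */
      		if (atomic_dec_and_test(&rcu_migrate_type_count))
      			wake_up(&rcu_migrate_wq);	/* release rcu_barrier() et al. */
      	}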
      Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      LKML-Reference: <49C36476.1010400@cn.fujitsu.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f69b17d7
  11. 26 Feb 2009, 1 commit
    • rcu: Teach RCU that idle task is not a quiescent state at boot · a6826048
      Committed by Paul E. McKenney
      This patch fixes a bug located by Vegard Nossum with the aid of
      kmemcheck, updated based on review comments from Nick Piggin,
      Ingo Molnar, and Andrew Morton.  And cleans up the variable-name
      and function-name language.  ;-)
      
      The boot CPU runs in the context of its idle thread during boot-up.
      During this time, idle_cpu(0) will always return nonzero, which will
      fool Classic and Hierarchical RCU into deciding that a large chunk of
      the boot-up sequence is a big long quiescent state.  This in turn causes
      RCU to prematurely end grace periods during this time.
      
      This patch changes the rcutree.c and rcuclassic.c rcu_check_callbacks()
      function to ignore the idle task as a quiescent state until the
      system has started up the scheduler in rest_init(), introducing a
      new non-API function rcu_idle_now_means_idle() to inform RCU of this
      transition.  RCU maintains an internal rcu_idle_cpu_truthful variable
      to track this state, which is then used by rcu_check_callback() to
      determine if it should believe idle_cpu().
      
      Because this patch has the effect of disallowing RCU grace periods
      during long stretches of the boot-up sequence, this patch also introduces
      Josh Triplett's UP-only optimization that makes synchronize_rcu() be a
      no-op if num_online_cpus() returns 1.  This allows boot-time code that
      calls synchronize_rcu() to proceed normally.  Note, however, that RCU
      callbacks registered by call_rcu() will likely queue up until later in
      the boot sequence.  Although rcuclassic and rcutree can also use this
      same optimization after boot completes, rcupreempt must restrict its
      use of this optimization to the portion of the boot sequence before the
      scheduler starts up, given that an rcupreempt RCU read-side critical
      section may be preempted.
      
      In addition, this patch takes Nick Piggin's suggestion to make the
      system_state global variable be __read_mostly.
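      
      The UP optimization described above amounts to this sketch
      (rcu_blocking_is_gp() is introduced in the change list below):
      
      	static inline int rcu_blocking_is_gp(void)
      	{
      		/* With one online CPU, blocking is itself a grace period. */
      		return num_online_cpus() == 1;
      	}
      
      	void synchronize_rcu(void)
      	{
      		if (rcu_blocking_is_gp())
      			return;
      		/* ... otherwise wait for a full grace period ... */
      	}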
      
      Changes since v4:
      
      o	Changes the name of the introduced function and variable to
      	be less emotional.  ;-)
      
      Changes since v3:
      
      o	WARN_ON(nr_context_switches() > 0) to verify that RCU
      	switches out of boot-time mode before the first context
      	switch, as suggested by Nick Piggin.
      
      Changes since v2:
      
      o	Created rcu_blocking_is_gp() internal-to-RCU API that
      	determines whether a call to synchronize_rcu() is itself
      	a grace period.
      
      o	The definition of rcu_blocking_is_gp() for rcuclassic and
      	rcutree checks to see if but a single CPU is online.
      
      o	The definition of rcu_blocking_is_gp() for rcupreempt
      	checks to see both if but a single CPU is online and if
      	the system is still in early boot.
      
      	This allows rcupreempt to again work correctly if running
      	on a single CPU after booting is complete.
      
      o	Added check to rcupreempt's synchronize_sched() for there
      	being but one online CPU.
      
      Tested all three variants both SMP and !SMP, booted fine, passed a short
      rcutorture test on both x86 and Power.
      Located-by: Vegard Nossum <vegard.nossum@gmail.com>
      Tested-by: Vegard Nossum <vegard.nossum@gmail.com>
      Tested-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      a6826048
  12. 05 Jan 2009, 1 commit
  13. 21 Oct 2008, 1 commit
    • rcupdate: fix bug of rcu_barrier*() · 5f865151
      Committed by Lai Jiangshan
      The current rcu_barrier_bh() looks like this:
      
      void rcu_barrier_bh(void)
      {
      	BUG_ON(in_interrupt());
      	/* Take cpucontrol mutex to protect against CPU hotplug */
      	mutex_lock(&rcu_barrier_mutex);
      	init_completion(&rcu_barrier_completion);
      	atomic_set(&rcu_barrier_cpu_count, 0);
      	/*
      	 * The queueing of callbacks in all CPUs must be atomic with
      	 * respect to RCU, otherwise one CPU may queue a callback,
      	 * wait for a grace period, decrement barrier count and call
      	 * complete(), while other CPUs have not yet queued anything.
      	 * So, we need to make sure that grace periods cannot complete
      	 * until all the callbacks are queued.
      	 */
      	rcu_read_lock();
      	on_each_cpu(rcu_barrier_func, (void *)RCU_BARRIER_BH, 1);
      	rcu_read_unlock();
      	wait_for_completion(&rcu_barrier_completion);
      	mutex_unlock(&rcu_barrier_mutex);
      }
      
      The inconsistency between the code and the comment reveals a bug
      here. rcu_read_lock() cannot ensure that "grace periods for RCU_BH
      cannot complete until all the callbacks are queued";
      it only ensures that grace periods for RCU cannot complete
      until all the callbacks are queued.
      So we must use rcu_read_lock_bh() in rcu_barrier_bh(),
      like this:
      like this:
      
      void rcu_barrier_bh(void)
      {
      	......
      	rcu_read_lock_bh();
      	on_each_cpu(rcu_barrier_func, (void *)RCU_BARRIER_BH, 1);
      	rcu_read_unlock_bh();
      	......
      }
      
      rcu_barrier() and rcu_barrier_sched() would have to be fixed in the
      same way, which would bring a lot of duplicate code. This patch uses
      another way to fix the bug instead; please see the comment in the
      patch. Thanks to Paul E. McKenney, who rewrote the comment.
      Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      5f865151
  14. 21 Aug 2008, 1 commit
  15. 26 Jun 2008, 1 commit
  16. 19 May 2008, 2 commits
    • rcu: add rcu_barrier_sched() and rcu_barrier_bh() · 70f12f84
      Committed by Paul E. McKenney
      Add rcu_barrier_sched() and rcu_barrier_bh().  With these in place,
      rcutorture no longer gives the occasional oops when repeatedly starting
      and stopping torturing rcu_bh.  It also adds the API needed to flush out
      pre-existing call_rcu_sched() callbacks.
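      
      A minimal usage sketch (my_flush_callbacks() is an illustrative
      name):
      
      	static void my_flush_callbacks(void)
      	{
      		rcu_barrier_sched();	/* all queued call_rcu_sched() callbacks done */
      		rcu_barrier_bh();	/* likewise for call_rcu_bh() callbacks */
      	}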
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      70f12f84
    • rcu: add call_rcu_sched() · 4446a36f
      Committed by Paul E. McKenney
      Fourth cut of the patch providing call_rcu_sched().  It is to
      synchronize_sched() as call_rcu() is to synchronize_rcu().
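      
      That is, as a sketch (my_obj and my_free() are illustrative):
      
      	call_rcu_sched(&my_obj->head, my_free);	/* async: runs after a sched GP */
      	synchronize_sched();			/* sync: blocks for a sched GP */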
      
      Should be fine for experimental and -rt use, but not ready for inclusion.
      With some luck, I will be able to tell Andrew to come out of hiding on
      the next round.
      
      Passes multi-day rcutorture sessions with concurrent CPU hotplugging.
      
      Fixes since the first version include a bug that could result in
      indefinite blocking (spotted by Gautham Shenoy), better resiliency
      against CPU-hotplug operations, and other minor fixes.
      
      Fixes since the second version include reworking grace-period detection
      to avoid deadlocks that could happen when running concurrently with
      CPU hotplug, adding Mathieu's fix to avoid the softlockup messages,
      as well as Mathieu's fix to allow use earlier in boot.
      
      Fixes since the third version include a wrong-CPU bug spotted by
      Andrew, getting rid of the obsolete synchronize_kernel API that somehow
      snuck back in, merging spin_unlock() and local_irq_restore() in a
      few places, commenting the code that checks for quiescent states based
      on interrupting from user-mode execution or the idle loop, removing
      some inline attributes, and some code-style changes.
      
      Known/suspected shortcomings:
      
      o	I still do not entirely trust the sleep/wakeup logic.  Next step
      	will be to use a private snapshot of the CPU online mask in
      	rcu_sched_grace_period() -- if the CPU wasn't there at the start
      	of the grace period, we don't need to hear from it.  And the
      	bit about accounting for changes in online CPUs inside of
      	rcu_sched_grace_period() is ugly anyway.
      
      o	It might be good for rcu_sched_grace_period() to invoke
      	resched_cpu() when a given CPU wasn't responding quickly,
      	but resched_cpu() is declared static...
      
      This patch also fixes a long-standing bug in the earlier preemptable-RCU
      implementation of synchronize_rcu() that could result in loss of
      concurrent external changes to a task's CPU affinity mask.  I still cannot
      remember who reported this...
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      4446a36f
  17. 14 Feb 2008, 1 commit
  18. 26 Jan 2008, 3 commits
  19. 23 Jan 2008, 1 commit
  20. 17 Oct 2007, 1 commit
  21. 12 Oct 2007, 1 commit
  22. 10 May 2007, 1 commit
    • Add suspend-related notifications for CPU hotplug · 8bb78442
      Committed by Rafael J. Wysocki
      Since nonboot CPUs are now disabled after tasks and devices have been
      frozen and the CPU hotplug infrastructure is used for this purpose, we need
      special CPU hotplug notifications that will help the CPU-hotplug-aware
      subsystems distinguish normal CPU hotplug events from CPU hotplug events
      related to a system-wide suspend or resume operation in progress.  This
      patch introduces such notifications and causes them to be used during
      suspend and resume transitions.  It also changes all of the
      CPU-hotplug-aware subsystems to take these notifications into consideration
      (for now they are handled in the same way as the corresponding "normal"
      ones).
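      
      A sketch of how a notifier handles (or, for now, ignores the
      distinction of) the new events (my_cpu_notify() is illustrative):
      
      	static int my_cpu_notify(struct notifier_block *nb,
      				 unsigned long action, void *hcpu)
      	{
      		switch (action) {
      		case CPU_UP_PREPARE:
      		case CPU_UP_PREPARE_FROZEN:	/* suspend/resume variant */
      			/* prepare per-CPU state */
      			break;
      		case CPU_DEAD:
      		case CPU_DEAD_FROZEN:		/* suspend/resume variant */
      			/* tear down per-CPU state */
      			break;
      		}
      		return NOTIFY_OK;
      	}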
      
      [oleg@tv-sign.ru: cleanups]
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
      Cc: Gautham R Shenoy <ego@in.ibm.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8bb78442
  23. 08 Dec 2006, 1 commit
    • [PATCH] rcu: add a prefetch() in rcu_do_batch() · 1c69d921
      Committed by Eric Dumazet
      On some workloads (for example, when a lot of close() syscalls are
      done), the RCU qlen can be quite large, and RCU heads are no longer
      in the CPU cache when rcu_do_batch() is called.
      
      This patch adds a prefetch() in rcu_do_batch() to give CPU a hint to bring
      back cache lines containing 'struct rcu_head's.
      
      Most list-manipulation macros include prefetch(), but open-coded
      ones do not (at least with current C compilers :)).
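      
      The change is essentially this sketch of rcu_do_batch()'s loop:
      
      	while (list) {
      		next = list->next;
      		prefetch(next);		/* warm the cache for the next head */
      		list->func(list);	/* invoke the current callback */
      		list = next;
      	}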
      
      I got a nice speedup on a trivial benchmark (3.48 us per iteration instead
      of 3.95 us on a 1.6 GHz Pentium-M)
      
      while (1) { pipe(fd); close(fd[0]); close(fd[1]); }
      Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
      Cc: "Paul E. McKenney" <paulmck@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      1c69d921
  24. 04 Oct 2006, 1 commit
    • [PATCH] rcu: simplify/improve batch tuning · 20e9751b
      Committed by Oleg Nesterov
      Kill the hard-to-calculate 'rsinterval' boot parameter and the
      per-cpu rcu_data.last_rs_qlen.  Instead, add a flag
      rcu_ctrlblk.signaled, which records the fact that one of the CPUs
      has sent a resched IPI since the last rcu_start_batch().
      
      Roughly speaking, we need two rcu_start_batch()s in order to move
      callbacks from ->nxtlist to ->donelist.  This means that when ->qlen
      exceeds qhimark and continues to grow, we should send a resched IPI,
      and then do it again after we have gone through a quiescent state.
      
      On the other hand, if it was already sent, we don't need to do it again
      when another CPU detects overflow of the queue.
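      
      In sketch form (surrounding code elided):
      
      	static void force_quiescent_state(struct rcu_data *rdp,
      					  struct rcu_ctrlblk *rcp)
      	{
      		if (rcp->signaled)
      			return;		/* IPI already sent this batch */
      		rcp->signaled = 1;	/* cleared again in rcu_start_batch() */
      		/* ... send resched IPIs to the CPUs holding things up ... */
      	}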
      Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
      Acked-by: Paul E. McKenney <paulmck@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      20e9751b
  25. 13 Sep 2006, 1 commit
  26. 01 Aug 2006, 1 commit
  27. 04 Jul 2006, 1 commit
  28. 28 Jun 2006, 3 commits
  29. 23 Jun 2006, 1 commit
  30. 16 May 2006, 1 commit
  31. 26 Apr 2006, 2 commits
  32. 24 Mar 2006, 1 commit
  33. 23 Mar 2006, 1 commit