1. 26 Feb 2015, 1 commit
  2. 16 Jan 2015, 1 commit
    • rcu: Make cond_resched_rcu_qs() apply to normal RCU flavors · 5cd37193
      Committed by Paul E. McKenney
      Although cond_resched_rcu_qs() only applies to TASKS_RCU, it is used
      in places where it would be useful for it to apply to the normal RCU
      flavors, rcu_preempt, rcu_sched, and rcu_bh.  This is especially the
      case for workloads that aggressively overload the system, particularly
      those that generate large numbers of RCU updates on systems running
      NO_HZ_FULL CPUs.  This commit therefore communicates quiescent states
      from cond_resched_rcu_qs() to the normal RCU flavors.
      
      Note that it is unfortunately necessary to leave the old ->passed_quiesce
      mechanism in place to allow quiescent states that apply to only one
      flavor to be recorded.  (Yes, we could decrement ->rcu_qs_ctr_snap in
      that case, but that is not so good for debugging of RCU internals.)
      In addition, if one of the RCU flavor's grace period has stalled, this
      will invoke rcu_momentary_dyntick_idle(), resulting in a heavy-weight
      quiescent state visible from other CPUs.
      Reported-by: Sasha Levin <sasha.levin@oracle.com>
      Reported-by: Dave Jones <davej@redhat.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      [ paulmck: Merge commit from Sasha Levin fixing a bug where __this_cpu()
        was used in preemptible code. ]
      5cd37193
  3. 07 Jan 2015, 1 commit
  4. 14 Nov 2014, 3 commits
  5. 04 Nov 2014, 2 commits
  6. 30 Oct 2014, 1 commit
  7. 29 Oct 2014, 1 commit
  8. 17 Sep 2014, 2 commits
  9. 08 Sep 2014, 9 commits
    • rcu: Per-CPU operation cleanups to rcu_*_qs() functions · 284a8c93
      Committed by Paul E. McKenney
      The rcu_bh_qs(), rcu_preempt_qs(), and rcu_sched_qs() functions use
      old-style per-CPU variable access and write to ->passed_quiesce even
      if it is already set.  This commit therefore updates to use the new-style
      per-CPU variable access functions and avoids the spurious writes.
      This commit also eliminates the "cpu" argument to these functions because
      they are always invoked on the indicated CPU.
      Reported-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      284a8c93
    • rcu: Remove redundant preempt_disable() from rcu_note_voluntary_context_switch() · 01a81330
      Committed by Paul E. McKenney
      In theory, synchronize_sched() requires a read-side critical section
      to order against.  In practice, preemption can be thought of as
      being disabled across every machine instruction, at least for those
      machine instructions that are not in the idle loop and not on offline
      CPUs.  So this commit removes the redundant preempt_disable() from
      rcu_note_voluntary_context_switch().
      
      Please note that the single instruction in question is the store of
      zero to ->rcu_tasks_holdout.  The "if" is simply a performance optimization
      that avoids unnecessary stores.  To see this, keep in mind that both
      the "if" condition and the store are in a quiescent state.  Therefore,
      even if the task is preempted for a full grace period (presumably due
      to its having done a context switch beforehand), the store will be
      recording a legitimate quiescent state.
      Reported-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      
      Conflicts:
      	include/linux/rcupdate.h
      01a81330
    • rcutorture: Add torture tests for RCU-tasks · 69c60455
      Committed by Paul E. McKenney
      This commit adds torture tests for RCU-tasks.  It also fixes a bug that
      would segfault for an RCU flavor lacking a callback-barrier function.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
      69c60455
    • rcu: Make TASKS_RCU handle tasks that are almost done exiting · 3f95aa81
      Committed by Paul E. McKenney
      Once a task has passed exit_notify() in the do_exit() code path, it
      is no longer on the task lists, and is therefore no longer visible
      to rcu_tasks_kthread().  This means that an almost-exited task might
      be preempted while within a trampoline, and this task won't be waited
      on by rcu_tasks_kthread().  This commit fixes this bug by adding an
      srcu_struct.  An exiting task does srcu_read_lock() just before calling
      exit_notify(), and does the corresponding srcu_read_unlock() after
      doing the final preempt_disable().  This means that rcu_tasks_kthread()
      can do synchronize_srcu() to wait for all mostly-exited tasks to reach
      their final preempt_disable() region, and then use synchronize_sched()
      to wait for those tasks to finish exiting.
      Reported-by: Oleg Nesterov <oleg@redhat.com>
      Suggested-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      3f95aa81
    • rcu: Add synchronous grace-period waiting for RCU-tasks · 53c6d4ed
      Committed by Paul E. McKenney
      It turns out to be easier to add the synchronous grace-period waiting
      functions to RCU-tasks than to work around their absence in rcutorture,
      so this commit adds them.  The key point is that the existence of
      call_rcu_tasks() means that rcutorture needs an rcu_barrier_tasks().
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      53c6d4ed
    • rcu: Provide cond_resched_rcu_qs() to force quiescent states in long loops · bde6c3aa
      Committed by Paul E. McKenney
      RCU-tasks requires the occasional voluntary context switch
      from CPU-bound in-kernel tasks.  In some cases, this requires
      instrumenting cond_resched().  However, there is some reluctance
      to countenance unconditionally instrumenting cond_resched() (see
      http://lwn.net/Articles/603252/), so this commit creates a separate
      cond_resched_rcu_qs() that may be used in place of cond_resched() in
      locations prone to long-duration in-kernel looping.
      
      This commit currently instruments only RCU-tasks.  Future possibilities
      include also instrumenting RCU, RCU-bh, and RCU-sched in order to reduce
      IPI usage.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      bde6c3aa
    • rcu: Add call_rcu_tasks() · 8315f422
      Committed by Paul E. McKenney
      This commit adds a new RCU-tasks flavor of RCU, which provides
      call_rcu_tasks().  This RCU flavor's quiescent states are voluntary
      context switch (not preemption!) and userspace execution (not the idle
      loop -- use some sort of schedule_on_each_cpu() if you need to handle the
      idle tasks).  Note that unlike other RCU flavors, these quiescent states
      occur in tasks, not necessarily CPUs.  Includes fixes from Steven Rostedt.
      
      This RCU flavor is assumed to have very infrequent latency-tolerant
      updaters.  This assumption permits significant simplifications, including
      a single global callback list protected by a single global lock, along
      with a single task-private linked list containing all tasks that have not
      yet passed through a quiescent state.  If experience shows this assumption
      to be incorrect, the required additional complexity will be added.
      Suggested-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      8315f422
    • rcu: Uninline rcu_read_lock_held() · 85b39d30
      Committed by Oleg Nesterov
      This commit uninlines rcu_read_lock_held(). According to "size vmlinux"
      this saves 28549 in .text:
      
      	- 5541731 3014560 14757888 23314179
      	+ 5513182 3026848 14757888 23297918
      
      Note: it looks as if the data grows by 12288 bytes but this is not true,
      it does not actually grow. But .data starts with ALIGN(THREAD_SIZE) and
      since .text shrinks the padding grows, and thus .data grows too as seen
      by /bin/size. diff System.map:
      
      	- ffffffff81510000 D _sdata
      	- ffffffff81510000 D init_thread_union
      	+ ffffffff81509000 D _sdata
      	+ ffffffff8150c000 D init_thread_union
      
      Perhaps we can change vmlinux.lds.S to align .data itself, so that
      /bin/size can't "wrongly" report that .data grows if .text shrinks.
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      85b39d30
    • rcu: Return bool type in rcu_lockdep_current_cpu_online() · 521d24ee
      Committed by Pranith Kumar
      Return true instead of 1 in rcu_lockdep_current_cpu_online(), as this
      function has bool as its return type.
      Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      521d24ee
  10. 10 Jul 2014, 2 commits
  11. 24 Jun 2014, 2 commits
    • rcu: Reduce overhead of cond_resched() checks for RCU · 4a81e832
      Committed by Paul E. McKenney
      Commit ac1bea85 (Make cond_resched() report RCU quiescent states)
      fixed a problem where a CPU looping in the kernel with but one runnable
      task would give RCU CPU stall warnings, even if the in-kernel loop
      contained cond_resched() calls.  Unfortunately, in so doing, it introduced
      performance regressions in Anton Blanchard's will-it-scale "open1" test.
      The problem appears to be not so much the increased cond_resched() path
      length as an increase in the rate at which grace periods complete, which
      increased per-update grace-period overhead.
      
      This commit takes a different approach to fixing this bug, mainly by
      moving the RCU-visible quiescent state from cond_resched() to
      rcu_note_context_switch(), and by further reducing the check to a
      simple non-zero test of a single per-CPU variable.  However, this
      approach requires that the force-quiescent-state processing send
      resched IPIs to the offending CPUs.  These will be sent only once
      the grace period has reached an age specified by the boot/sysfs
      parameter rcutree.jiffies_till_sched_qs, or once the grace period
      reaches an age halfway to the point at which RCU CPU stall warnings
      will be emitted, whichever comes first.
      Reported-by: Dave Hansen <dave.hansen@intel.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Christoph Lameter <cl@gentwo.org>
      Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
      [ paulmck: Made rcu_momentary_dyntick_idle() as suggested by the
        ktest build robot.  Also fixed smp_mb() comment as noted by
        Oleg Nesterov. ]
      
      Merge with e552592e (Reduce overhead of cond_resched() checks for RCU)
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      4a81e832
    • rcu: Export debug_init_rcu_head() and debug_rcu_head_free() · 546a9d85
      Committed by Paul E. McKenney
      Currently, call_rcu() relies on implicit allocation and initialization
      for the debug-objects handling of RCU callbacks.  If you hammer the
      kernel hard enough with Sasha's modified version of trinity, you can end
      up with the sl*b allocators recursing into themselves via this implicit
      call_rcu() allocation.
      
      This commit therefore exports the debug_init_rcu_head() and
      debug_rcu_head_free() functions, which permits the allocators to allocate
      and pre-initialize the debug-objects information, so that there is no
      longer any need for call_rcu() to do that initialization, which in turn
      prevents the recursion into the memory allocators.
      Reported-by: Sasha Levin <sasha.levin@oracle.com>
      Suggested-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Looks-good-to: Christoph Lameter <cl@linux.com>
      546a9d85
  12. 20 May 2014, 1 commit
  13. 15 May 2014, 1 commit
    • sched,rcu: Make cond_resched() report RCU quiescent states · ac1bea85
      Committed by Paul E. McKenney
      Given a CPU running a loop containing cond_resched(), with no
      other tasks runnable on that CPU, RCU will eventually report RCU
      CPU stall warnings due to lack of quiescent states.  Fortunately,
      every call to cond_resched() is a perfectly good quiescent state.
      Unfortunately, invoking rcu_note_context_switch() is a bit heavyweight
      for cond_resched(), especially given the need to disable preemption,
      and, for RCU-preempt, interrupts as well.
      
      This commit therefore maintains a per-CPU counter that causes
      cond_resched(), cond_resched_lock(), and cond_resched_softirq() to call
      rcu_note_context_switch(), but only about once per 256 invocations.
      This ratio was chosen in keeping with the relative time constants of
      RCU grace periods.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
      ac1bea85
  14. 14 May 2014, 1 commit
  15. 29 Apr 2014, 2 commits
  16. 26 Feb 2014, 1 commit
  17. 18 Feb 2014, 6 commits
  18. 10 Feb 2014, 1 commit
    • lockdep: Make held_lock->check and "int check" argument bool · fb9edbe9
      Committed by Oleg Nesterov
      The "int check" argument of lock_acquire() and held_lock->check are
      misleading. This is actually a boolean: 2 means "true", everything
      else is "false".
      
      And there is no need to pass 1 or 0 to lock_acquire() depending on
      CONFIG_PROVE_LOCKING, __lock_acquire() checks prove_locking at the
      start and clears "check" if !CONFIG_PROVE_LOCKING.
      
      Note: probably we can simply kill this member/arg. The only explicit
      user of check => 0 is rcu_lock_acquire(), perhaps we can change it to
      use lock_acquire(trylock =>, read => 2). __lockdep_no_validate means
      check => 0 implicitly, but we can change validate_chain() to check
      hlock->instance->key instead. Not to mention it would be nice to get
      rid of lockdep_set_novalidate_class().
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Cc: Dave Jones <davej@redhat.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Alan Stern <stern@rowland.harvard.edu>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20140120182006.GA26495@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      fb9edbe9
  19. 25 Jan 2014, 1 commit
    • introduce __fcheck_files() to fix rcu_dereference_check_fdtable(), kill rcu_my_thread_group_empty() · a8d4b834
      Committed by Oleg Nesterov
      rcu_dereference_check_fdtable() looks very wrong,
      
      1. rcu_my_thread_group_empty() was added by 844b9a87 "vfs: fix
         RCU-lockdep false positive due to /proc" but it doesn't really
         fix the problem. A CLONE_THREAD (without CLONE_FILES) task can
         hit the same race with get_files_struct().
      
         And otoh rcu_my_thread_group_empty() can suppress the correct
         warning if the caller is the CLONE_FILES (without CLONE_THREAD)
         task.
      
      2. files->count == 1 check is not really right too. Even if this
         files_struct is not shared it is not safe to access it lockless
         unless the caller is the owner.
      
         Otoh, this check is sub-optimal. files->count == 0 always means
         it is safe to use it lockless even if files != current->files,
         but put_files_struct() has to take rcu_read_lock(). See the next
         patch.
      
      This patch removes the buggy checks and turns fcheck_files() into
      __fcheck_files() which uses rcu_dereference_raw(), the "unshared"
      callers, fget_light() and fget_raw_light(), can use it to avoid
      the warning from RCU-lockdep.
      
      fcheck_files() is trivially reimplemented as rcu_lockdep_assert()
      plus __fcheck_files().
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      a8d4b834
  20. 13 Dec 2013, 1 commit