1. 07 Oct 2015, 4 commits
    • rcu: Finish folding ->fqs_state into ->gp_state · 77f81fe0
      Authored by Petr Mladek
      Commit 4cdfc175 ("rcu: Move quiescent-state forcing
      into kthread") started the process of folding the old ->fqs_state into
      ->gp_state, but did not complete it.  This situation does not cause
      any malfunction, but can result in extremely confusing trace output.
      This commit completes the task of eliminating ->fqs_state in favor
      of ->gp_state.
      
      The old ->fqs_state was also used to decide when to collect dyntick-idle
      snapshots.  For this purpose, a boolean variable is added to the
      grace-period kthread; it is true for the first call to rcu_gp_fqs() of a
      given grace period and false for all subsequent calls (see the sketch
      below).
      Signed-off-by: Petr Mladek <pmladek@suse.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
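      For illustration, here is a minimal sketch of the resulting control
      flow, simplified from kernel/rcu/tree.c (function signatures and the
      surrounding logic are abbreviated, so treat the details as assumptions
      rather than the exact patch):

          /* Simplified: the boolean selects the type of FQS pass. */
          static void rcu_gp_fqs(struct rcu_state *rsp, bool first_time)
          {
                  if (first_time) {
                          /* First pass of this GP: snapshot dyntick-idle state. */
                          force_qs_rnp(rsp, dyntick_save_progress_counter);
                  } else {
                          /* Later passes: check snapshots for quiescent states. */
                          force_qs_rnp(rsp, rcu_implicit_dynticks_qs);
                  }
          }

          /* In the grace-period kthread's per-GP loop: */
          first_gp_fqs = true;            /* reset at each grace-period start */
          for (;;) {
                  /* ... wait until it is time to force quiescent states ... */
                  rcu_gp_fqs(rsp, first_gp_fqs);
                  first_gp_fqs = false;   /* later calls are not the first */
          }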
    • rcu: Eliminate panic when silly boot-time fanout specified · ee968ac6
      Authored by Paul E. McKenney
      This commit loosens rcutree.rcu_fanout_leaf range checks
      and replaces a panic() with a fallback to compile-time values.
      This fallback is accompanied by a WARN_ON(), and both occur when the
      rcutree.rcu_fanout_leaf value is too small to accommodate the number of
      CPUs.  For example, given the current four-level limit for the rcu_node
      tree, a system with more than 16 CPUs built with CONFIG_RCU_FANOUT=2
      must have rcutree.rcu_fanout_leaf larger than 2, because a four-level
      tree with a leaf fanout of 2 covers at most 2 * 2^3 = 16 CPUs (see the
      sketch below).
      Reported-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
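      As a rough illustration, the fallback pattern looks like the following
      sketch (simplified from rcu_init_geometry(); rcu_capacity[], holding the
      CPU capacity of each possible tree depth, and the exact bounds are
      abbreviations of the real checks):

          /* Reject nonsense values, but warn and fall back instead of panicking. */
          if (rcu_fanout_leaf < 2 ||
              nr_cpu_ids > rcu_capacity[RCU_NUM_LVLS - 1]) {
                  rcu_fanout_leaf = RCU_FANOUT_LEAF;  /* compile-time default */
                  WARN_ON(1);                         /* complain, keep booting */
                  return;
          }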
    • rcu: Don't disable preemption for Tiny and Tree RCU readers · bb73c52b
      Authored by Boqun Feng
      Because preempt_disable() maps to barrier() for non-debug builds,
      it forces the compiler to spill and reload registers.  Because Tree
      RCU and Tiny RCU now only appear in CONFIG_PREEMPT=n builds, these
      barrier() instances generate needless extra code for each instance of
      rcu_read_lock() and rcu_read_unlock().  This extra code slows down Tree
      RCU and bloats Tiny RCU.
      
      This commit therefore removes the preempt_disable() and preempt_enable()
      from the non-preemptible implementations of __rcu_read_lock() and
      __rcu_read_unlock(), respectively.  However, for debug purposes,
      preempt_disable() and preempt_enable() are still invoked if
      CONFIG_PREEMPT_COUNT=y, because this allows detection of sleeping inside
      atomic sections in non-preemptible kernels (see the sketch below).
      
      Tiny and Tree RCU operate by coalescing all RCU read-side
      critical sections on a given CPU that lie between successive quiescent
      states.  It is therefore necessary to compensate for removing barriers
      from __rcu_read_lock() and __rcu_read_unlock() by adding them to a
      couple of the RCU functions invoked during quiescent states, namely to
      rcu_all_qs() and rcu_note_context_switch().  Note that the latter
      is more paranoia than necessity, at least until link-time optimizations
      become more aggressive.
      
      This is based on an earlier patch by Paul E. McKenney, fixing
      a bug encountered in kernels built with CONFIG_PREEMPT=n and
      CONFIG_PREEMPT_COUNT=y.
      Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
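      A minimal sketch of the resulting reader primitives and the compensating
      barriers (simplified; rcu_all_qs() is reduced to the barrier placement,
      with its real quiescent-state bookkeeping elided):

          static inline void __rcu_read_lock(void)
          {
                  /* Debug-only: lets CONFIG_PREEMPT_COUNT=y kernels catch
                     sleeping in atomic sections; compiles away otherwise. */
                  if (IS_ENABLED(CONFIG_PREEMPT_COUNT))
                          preempt_disable();
          }

          static inline void __rcu_read_unlock(void)
          {
                  if (IS_ENABLED(CONFIG_PREEMPT_COUNT))
                          preempt_enable();
          }

          void rcu_all_qs(void)
          {
                  barrier();  /* Keep read-side sections from leaking down. */
                  /* ... record the quiescent state ... */
                  barrier();  /* Keep read-side sections from leaking up. */
          }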
    • rcu: Use rcu_callback_t in call_rcu*() and friends · b6a4ae76
      Authored by Boqun Feng
      Now that we have the rcu_callback_t typedef as the type of RCU
      callbacks, we should use it as the parameter type in call_rcu*() and
      friends.  This saves a few lines of code and makes it clear which
      functions require an RCU callback rather than some other kind of
      callback as their argument (see the sketch below).

      It also helps cscope generate a better database for code reading.
      Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
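      For reference, a sketch of the typedef and the resulting signature
      (header locations elided; the "before" form is shown for contrast):

          typedef void (*rcu_callback_t)(struct rcu_head *head);

          /* Before: void call_rcu(struct rcu_head *head,
           *                       void (*func)(struct rcu_head *head));
           * After:
           */
          void call_rcu(struct rcu_head *head, rcu_callback_t func);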
  2. 21 Sep 2015, 1 commit
    • rcu: Suppress lockdep false positive for rcp->exp_funnel_mutex · 19a5ecde
      Authored by Paul E. McKenney
      In kernels built with CONFIG_PREEMPT=y, synchronize_rcu_expedited()
      invokes synchronize_sched_expedited() while holding RCU-preempt's
      root rcu_node structure's ->exp_funnel_mutex, which is acquired after
      the rcu_data structure's ->exp_funnel_mutex.  The first thing that
      synchronize_sched_expedited() will do is acquire RCU-sched's rcu_data
      structure's ->exp_funnel_mutex.  There is no danger of an actual deadlock
      because the locking order is always from RCU-preempt's expedited mutexes
      to those of RCU-sched.  Unfortunately, lockdep considers both rcu_data
      structures' ->exp_funnel_mutex to be in the same lock class and therefore
      reports a deadlock cycle.
      
      This commit silences this false positive by placing RCU-sched's rcu_data
      structures' ->exp_funnel_mutex locks into their own lock class (see the
      sketch below).
      Reported-by: Sasha Levin <sasha.levin@oracle.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
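      A sketch of the fix, simplified from per-CPU rcu_data initialization in
      kernel/rcu/tree.c (the key name and placement approximate the
      description above rather than quoting the patch):

          static struct lock_class_key rcu_exp_sched_rdp_class;

          mutex_init(&rdp->exp_funnel_mutex);
          if (rsp == &rcu_sched_state)
                  /* RCU-sched's rdp mutexes get their own lockdep class, so
                     acquiring one while holding RCU-preempt's is no longer
                     reported as recursion within a single class. */
                  lockdep_set_class(&rdp->exp_funnel_mutex,
                                    &rcu_exp_sched_rdp_class);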
  3. 04 Aug 2015, 1 commit
    • rcu: Silence lockdep false positive for expedited grace periods · af859bea
      Authored by Paul E. McKenney
      In a CONFIG_PREEMPT=y kernel, synchronize_rcu_expedited()
      acquires the ->exp_funnel_mutex in rcu_preempt_state, then invokes
      synchronize_sched_expedited(), which acquires the ->exp_funnel_mutex in
      rcu_sched_state.  There can be no deadlock because rcu_preempt_state
      ->exp_funnel_mutex acquisition always precedes that of rcu_sched_state.
      But lockdep does not know that, so it gives false-positive splats.
      
      This commit therefore associates a separate lock_class_key structure
      with the rcu_sched_state structure's ->exp_funnel_mutex, allowing
      lockdep to see the lock ordering and thereby avoiding the false
      positives (see the sketch below).
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
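      A sketch of the pattern (simplified; the placement inside RCU state
      initialization and the key name are assumptions for illustration):

          static struct lock_class_key rcu_exp_class;

          /* Put rcu_sched_state's funnel mutex in its own lockdep class, so
             nesting it under rcu_preempt_state's mutex is seen as an ordering
             between two distinct classes, not a self-deadlock. */
          mutex_init(&rcu_sched_state.exp_funnel_mutex);
          lockdep_set_class(&rcu_sched_state.exp_funnel_mutex, &rcu_exp_class);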
  4. 23 Jul 2015, 3 commits
  5. 18 Jul 2015, 16 commits
  6. 16 Jul 2015, 8 commits
  7. 07 Jul 2015, 1 commit
  8. 28 May 2015, 6 commits