1. 05 Dec 2015 (6 commits)
  2. 24 Nov 2015 (2 commits)
  3. 08 Oct 2015 (8 commits)
  4. 07 Oct 2015 (4 commits)
    • rcu: Finish folding ->fqs_state into ->gp_state · 77f81fe0
      Committed by Petr Mladek
      Commit 4cdfc175 ("rcu: Move quiescent-state forcing
      into kthread") started the process of folding the old ->fqs_state into
      ->gp_state, but did not complete it.  This situation does not cause
      any malfunction, but can result in extremely confusing trace output.
      This commit completes this task of eliminating ->fqs_state in favor
      of ->gp_state.
      
      The old ->fqs_state was also used to decide when to collect dyntick-idle
      snapshots.  For this purpose, this commit adds a boolean variable to the
      grace-period kthread, which is set on the first call to rcu_gp_fqs() for a
      given grace period and cleared otherwise (see the sketch after this entry).
      Signed-off-by: Petr Mladek <pmladek@suse.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
      77f81fe0
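      A minimal sketch of the boolean-flag pattern described above, in kernel C;
      the variable and parameter names are illustrative rather than the exact
      identifiers added by commit 77f81fe0:

          /* Inside the grace-period kthread's forcing loop (illustrative). */
          bool first_gp_fqs = true;   /* first FQS pass of this grace period? */

          for (;;) {
                  /* ... wait until quiescent-state forcing is needed ... */
                  rcu_gp_fqs(rsp, first_gp_fqs);  /* snapshot dyntick-idle state
                                                   * only on the first pass */
                  first_gp_fqs = false;           /* cleared for later passes */
                  /* ... break out once the grace period has ended ... */
          }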
    • rcu: Eliminate panic when silly boot-time fanout specified · ee968ac6
      Committed by Paul E. McKenney
      This commit loosens rcutree.rcu_fanout_leaf range checks
      and replaces a panic() with a fallback to compile-time values.
      This fallback is accompanied by a WARN_ON(), and both occur when the
      rcutree.rcu_fanout_leaf value is too small to accommodate the number of
      CPUs.  For example, given the current four-level limit for the rcu_node
      tree, a system with more than 16 CPUs built with CONFIG_RCU_FANOUT=2 must
      have rcutree.rcu_fanout_leaf larger than 2.
      Reported-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
      ee968ac6
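      A hedged sketch of the check-and-fall-back pattern described above; the
      helper rcu_fanout_leaf_covers_cpus() is hypothetical, while rcu_fanout_leaf,
      RCU_FANOUT_LEAF, and nr_cpu_ids are existing kernel identifiers:

          /* Boot-time sanity check: instead of panicking on a silly
           * rcutree.rcu_fanout_leaf value, warn and fall back to the
           * compile-time default (illustrative helper name). */
          if (!rcu_fanout_leaf_covers_cpus(rcu_fanout_leaf, nr_cpu_ids)) {
                  WARN_ON(1);
                  rcu_fanout_leaf = RCU_FANOUT_LEAF;
          }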
    • rcu: Don't disable preemption for Tiny and Tree RCU readers · bb73c52b
      Committed by Boqun Feng
      Because preempt_disable() maps to barrier() for non-debug builds,
      it forces the compiler to spill and reload registers.  Because Tree
      RCU and Tiny RCU now only appear in CONFIG_PREEMPT=n builds, these
      barrier() instances generate needless extra code for each instance of
      rcu_read_lock() and rcu_read_unlock().  This extra code slows down Tree
      RCU and bloats Tiny RCU.
      
      This commit therefore removes the preempt_disable() and preempt_enable()
      from the non-preemptible implementations of __rcu_read_lock() and
      __rcu_read_unlock(), respectively.  However, for debug purposes,
      preempt_disable() and preempt_enable() are still invoked if
      CONFIG_PREEMPT_COUNT=y, because this allows detection of sleeping inside
      atomic sections in non-preemptible kernels.
      
      However, Tiny and Tree RCU operate by coalescing all RCU read-side
      critical sections on a given CPU that lie between successive quiescent
      states.  It is therefore necessary to compensate for removing barriers
      from __rcu_read_lock() and __rcu_read_unlock() by adding them to a
      couple of the RCU functions invoked during quiescent states, namely to
      rcu_all_qs() and rcu_note_context_switch().  However, note that the latter
      is more paranoia than necessity, at least until link-time optimizations
      become more aggressive.
      
      This is based on an earlier patch by Paul E. McKenney, fixing
      a bug encountered in kernels built with CONFIG_PREEMPT=n and
      CONFIG_PREEMPT_COUNT=y.
      Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      bb73c52b
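      Roughly how the non-preemptible read-side primitives look after this
      change; a sketch rather than the verbatim kernel source:

          static inline void __rcu_read_lock(void)
          {
                  if (IS_ENABLED(CONFIG_PREEMPT_COUNT))
                          preempt_disable();  /* debug builds only: lets the
                                               * kernel catch sleeping inside
                                               * an atomic section */
          }

          static inline void __rcu_read_unlock(void)
          {
                  if (IS_ENABLED(CONFIG_PREEMPT_COUNT))
                          preempt_enable();
          }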
    • rcu: Use rcu_callback_t in call_rcu*() and friends · b6a4ae76
      Committed by Boqun Feng
      Now that we have the rcu_callback_t typedef as the type of RCU callbacks,
      use it in call_rcu*() and friends as the type of their callback parameters.
      This saves a few lines of code and makes it clear which functions require
      an RCU callback, rather than some other kind of callback, as their argument.
      
      It also helps cscope generate a better database for code reading.
      Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
      b6a4ae76
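      For reference, the typedef in question and a prototype using it look along
      these lines (a sketch of the kernel declarations, not copied verbatim):

          /* An RCU callback takes the rcu_head embedded in the protected object. */
          typedef void (*rcu_callback_t)(struct rcu_head *head);

          void call_rcu(struct rcu_head *head, rcu_callback_t func);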
  5. 21 Sep 2015 (9 commits)
    • rcu: Make ->cpu_no_qs be a union for aggregate OR · 5b74c458
      Committed by Paul E. McKenney
      This commit converts the rcu_data structure's ->cpu_no_qs field
      to a union.  The bytewise side of this union allows individual access
      to indications as to whether this CPU needs to find a quiescent state
      for a normal (.norm) and/or expedited (.exp) grace period.  The setwise
      side of the union allows testing whether or not a quiescent state is
      needed at all, for either type of grace period.
      
      For now, only .norm is used.  A later commit will introduce the expedited
      usage.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      5b74c458
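      A sketch of the union's shape as described above; the type name and field
      widths are best-effort illustrations rather than the exact kernel layout:

          union rcu_noqs {
                  struct {
                          u8 norm;  /* QS still needed for a normal GP? */
                          u8 exp;   /* QS still needed for an expedited GP? */
                  } b;              /* bytewise access to individual flags */
                  u16 s;            /* setwise access: aggregate OR of both */
          };

          /* Usage: rdp->cpu_no_qs.b.norm tests one flavor;
           * rdp->cpu_no_qs.s tests whether any quiescent state is needed. */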
    • rcu: Invert passed_quiesce and rename to cpu_no_qs · 0d43eb34
      Committed by Paul E. McKenney
      This commit inverts the sense of the rcu_data structure's ->passed_quiesce
      field and renames it to ->cpu_no_qs.  This will allow a later commit to
      use an "aggregate OR" operation to test expedited as well as normal grace
      periods without added overhead.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      0d43eb34
    • rcu: Rename qs_pending to core_needs_qs · 97c668b8
      Committed by Paul E. McKenney
      An upcoming commit needs to invert the sense of the ->passed_quiesce
      rcu_data structure field, so this commit is taking this opportunity
      to clarify things a bit by renaming ->qs_pending to ->core_needs_qs.
      
      So if !rdp->core_needs_qs, then this CPU need not concern itself with
      quiescent states; in particular, it need not acquire its leaf rcu_node
      structure's ->lock to check.  Otherwise, it needs to report the next
      quiescent state.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      97c668b8
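      An illustrative fast path (not the exact kernel function) showing why the
      renamed field reads more naturally:

          static void note_qs_if_core_needs_it(struct rcu_data *rdp)
          {
                  if (!rdp->core_needs_qs)
                          return;  /* RCU core needs nothing from this CPU, so
                                    * skip the leaf rcu_node ->lock entirely. */

                  /* Otherwise, report the next quiescent state up the tree
                   * (details omitted from this sketch). */
          }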
    • rcu: Move synchronize_sched_expedited() to combining tree · bce5fa12
      Committed by Paul E. McKenney
      Currently, synchronize_sched_expedited() uses a single global counter
      to track the number of remaining context switches that the current
      expedited grace period must wait on.  This is problematic on large
      systems, where the resulting memory contention can be pathological.
      This commit therefore makes synchronize_sched_expedited() instead use
      the combining tree in the same manner as synchronize_rcu_expedited(),
      keeping memory contention down to a dull roar.
      
      This commit creates a temporary function sync_sched_exp_select_cpus()
      that is very similar to sync_rcu_exp_select_cpus().  A later commit
      will consolidate these two functions, which becomes possible when
      synchronize_sched_expedited() switches from stop_one_cpu_nowait() to
      smp_call_function_single().
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      bce5fa12
    • rcu: Use single-stage IPI algorithm for RCU expedited grace period · 8203d6d0
      Committed by Paul E. McKenney
      The current preemptible-RCU expedited grace-period algorithm invokes
      synchronize_sched_expedited() to enqueue all tasks currently running
      in a preemptible-RCU read-side critical section, then waits for all the
      ->blkd_tasks lists to drain.  This works, but results in both an IPI and
      a double context switch even on CPUs that do not happen to be running
      in a preemptible RCU read-side critical section.
      
      This commit implements a new algorithm that causes less OS jitter.
      This new algorithm IPIs all online CPUs that are not idle (from an
      RCU perspective), but refrains from self-IPIs.  If a CPU receiving
      this IPI is not in a preemptible RCU read-side critical section (or
      is just now exiting one), it pushes quiescence up the rcu_node tree;
      otherwise, it sets a flag that will be handled by the upcoming outermost
      rcu_read_unlock(), which will then push quiescence up the tree.
      
      The expedited grace period must of course wait on any pre-existing blocked
      readers, and newly blocked readers must be queued carefully based on
      the state of both the normal and the expedited grace periods.  This
      new queueing approach also avoids the need to update boost state,
      courtesy of the fact that blocked tasks are no longer ever migrated to
      the root rcu_node structure.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      8203d6d0
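      A hedged sketch of the IPI handler's decision described above; the helper
      report_exp_qs_this_cpu() and the deferral flag are illustrative stand-ins
      rather than guaranteed kernel symbols:

          static void sync_exp_ipi_handler(void *unused)
          {
                  struct task_struct *t = current;

                  if (t->rcu_read_lock_nesting > 0) {
                          /* In a preemptible RCU read-side critical section:
                           * leave a flag so the outermost rcu_read_unlock()
                           * reports the expedited quiescent state. */
                          t->rcu_read_unlock_special.b.exp_need_qs = true;
                  } else {
                          /* Not in (or just now exiting) a critical section:
                           * push this CPU's quiescent state up the rcu_node
                           * tree immediately. */
                          report_exp_qs_this_cpu();
                  }
          }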
    • rcu: Consolidate tree setup for synchronize_rcu_expedited() · b9585e94
      Committed by Paul E. McKenney
      This commit replaces sync_rcu_preempt_exp_init1() and
      sync_rcu_preempt_exp_init2() with sync_exp_reset_tree_hotplug()
      and sync_exp_reset_tree(), which will also be used by
      synchronize_sched_expedited(), and sync_rcu_exp_select_nodes(), which
      contains code specific to synchronize_rcu_expedited().
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      b9585e94
    • rcu: Move rcu_report_exp_rnp() to allow consolidation · 7922cd0e
      Committed by Paul E. McKenney
      This is a nearly pure code-movement commit, moving rcu_report_exp_rnp(),
      sync_rcu_preempt_exp_done(), and rcu_preempted_readers_exp() so
      that later commits can make synchronize_sched_expedited() use them.
      The non-code-movement portion of this commit tags rcu_report_exp_rnp()
      as __maybe_unused to avoid build errors when CONFIG_PREEMPT=n.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      7922cd0e
    • rcu: Use rsp->expedited_wq instead of sync_rcu_preempt_exp_wq · f4ecea30
      Committed by Paul E. McKenney
      Now that there is an ->expedited_wq waitqueue in each rcu_state structure,
      there is no need for the sync_rcu_preempt_exp_wq global variable.  This
      commit therefore substitutes ->expedited_wq for sync_rcu_preempt_exp_wq.
      It also initializes ->expedited_wq only once at boot instead of at the
      start of each expedited grace period.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      f4ecea30
    • rcu: Suppress lockdep false positive for rcp->exp_funnel_mutex · 19a5ecde
      Committed by Paul E. McKenney
      In kernels built with CONFIG_PREEMPT=y, synchronize_rcu_expedited()
      invokes synchronize_sched_expedited() while holding RCU-preempt's
      root rcu_node structure's ->exp_funnel_mutex, which is acquired after
      the rcu_data structure's ->exp_funnel_mutex.  The first thing that
      synchronize_sched_expedited() will do is acquire RCU-sched's rcu_data
      structure's ->exp_funnel_mutex.   There is no danger of an actual deadlock
      because the locking order is always from RCU-preempt's expedited mutexes
      to those of RCU-sched.  Unfortunately, lockdep considers both rcu_data
      structures' ->exp_funnel_mutex to be in the same lock class and therefore
      reports a deadlock cycle.
      
      This commit silences this false positive by placing RCU-sched's rcu_data
      structures' ->exp_funnel_mutex locks into their own lock class.
      Reported-by: Sasha Levin <sasha.levin@oracle.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      19a5ecde
  6. 04 Aug 2015 (1 commit)
    • rcu: Silence lockdep false positive for expedited grace periods · af859bea
      Committed by Paul E. McKenney
      In a CONFIG_PREEMPT=y kernel, synchronize_rcu_expedited()
      acquires the ->exp_funnel_mutex in rcu_preempt_state, then invokes
      synchronize_sched_expedited(), which acquires the ->exp_funnel_mutex in
      rcu_sched_state.  There can be no deadlock because rcu_preempt_state
      ->exp_funnel_mutex acquisition always precedes that of rcu_sched_state.
      But lockdep does not know that, so it gives false-positive splats.
      
      This commit therefore associates a separate lock_class_key structure
      with the rcu_sched_state structure's ->exp_funnel_mutex, allowing
      lockdep to see the lock ordering, avoiding the false positives.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      af859bea
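      A generic illustration of the lockdep fix described in the two entries
      above: give RCU-sched's funnel mutexes their own lock_class_key so lockdep
      no longer folds both flavors' ->exp_funnel_mutex into a single lock class.
      The function and key names here are illustrative:

          #include <linux/mutex.h>
          #include <linux/lockdep.h>

          static struct lock_class_key rcu_sched_exp_class;

          static void init_funnel_mutex(struct mutex *m, bool is_rcu_sched)
          {
                  mutex_init(m);
                  if (is_rcu_sched)
                          /* A distinct lock class breaks the false deadlock
                           * cycle reported across RCU-preempt and RCU-sched. */
                          lockdep_set_class(m, &rcu_sched_exp_class);
          }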
  7. 23 Jul 2015 (3 commits)
  8. 18 Jul 2015 (7 commits)