  1. 18 Jul 2015 (9 commits)
    • rcu: Use funnel locking for synchronize_rcu_expedited()'s polling loop · 29fd9309
      Authored by Paul E. McKenney
      This commit gets rid of synchronize_rcu_expedited()'s mutex_trylock()
      polling loop in favor of the funnel-locking scheme that was abstracted
      from synchronize_sched_expedited().
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    • rcu: Fix synchronize_sched_expedited() type error for "s" · 7fd0ddc5
      Authored by Paul E. McKenney
      The type of "s" has been "long" rather than the correct "unsigned long"
      for quite some time.  This commit fixes this type error.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    • rcu: Abstract funnel locking from synchronize_sched_expedited() · b09e5f86
      Authored by Paul E. McKenney
      This commit abstracts funnel locking from synchronize_sched_expedited()
      so that it may be used by synchronize_rcu_expedited().
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    • rcu: Abstract sequence counting from synchronize_sched_expedited() · 28f00767
      Authored by Paul E. McKenney
      This commit creates rcu_exp_gp_seq_start() and rcu_exp_gp_seq_end() to
      bracket an expedited grace period, rcu_exp_gp_seq_snap() to snapshot the
      sequence counter, and rcu_exp_gp_seq_done() to check to see if a full
      expedited grace period has elapsed since the snapshot.  These will be
      applied to synchronize_rcu_expedited().  These are defined in terms of
      underlying rcu_seq_start(), rcu_seq_end(), rcu_seq_snap(), rcu_seq_done(),
      which will be applied to _rcu_barrier().
      
      One reason that this commit doesn't use the seqcount primitives themselves
      is that the smp_wmb() in those primitives is insufficient because
      expedited grace periods do reads as well as writes.  In addition,
      the read-side seqcount primitives detect a potentially partial change,
      where the expedited primitives instead need a guaranteed full change.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
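      A minimal user-space sketch of the counter scheme these helpers
      implement may clarify (function names follow the commit; the bodies
      are illustrative and omit the memory barriers the kernel needs):

          static void rcu_seq_start(unsigned long *sp)
          {
                  (*sp)++;                /* counter becomes odd: GP in progress */
          }

          static void rcu_seq_end(unsigned long *sp)
          {
                  (*sp)++;                /* counter becomes even: GP complete */
          }

          /* Snapshot: the counter value that guarantees a full GP has
           * elapsed, i.e. the second even number >= the current value. */
          static unsigned long rcu_seq_snap(unsigned long *sp)
          {
                  return (*sp + 3) & ~0x1UL;
          }

          /* Done once the counter reaches the snapshot (wraparound-safe). */
          static int rcu_seq_done(unsigned long *sp, unsigned long s)
          {
                  return (long)(*sp - s) >= 0;
          }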
    • rcu: Make expedited GP CPU stoppage asynchronous · 3a6d7c64
      Authored by Peter Zijlstra
      Sequentially stopping the CPUs slows down expedited grace periods by
      at least a factor of two, based on rcutorture's grace-period-per-second
      rate.  This is a conservative measure because rcutorture uses unusually
      long RCU read-side critical sections and because rcutorture periodically
      quiesces the system in order to test RCU's ability to ramp down to and
      up from the idle state.  This commit therefore replaces the stop_one_cpu()
      with stop_one_cpu_nowait(), using an atomic-counter scheme to determine
      when all CPUs have passed through the stopped state.
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
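      The atomic-counter scheme can be sketched with C11 atomics (a
      user-space analogue: queue_stop_work(), signal_done(), and
      wait_until_done() are hypothetical stand-ins for the kernel's
      stop_one_cpu_nowait() and wait-queue machinery):

          #include <stdatomic.h>

          /* Hypothetical primitives standing in for kernel machinery. */
          void signal_done(void);
          void wait_until_done(void);
          void queue_stop_work(int cpu, int (*fn)(void *));

          static atomic_int cpus_remaining;  /* CPUs not yet through the stop state */

          /* Runs on each CPU's stopper; the last CPU through signals completion. */
          static int exp_stop_callback(void *unused)
          {
                  if (atomic_fetch_sub(&cpus_remaining, 1) == 1)
                          signal_done();
                  return 0;
          }

          static void stop_cpus_async(int ncpus)
          {
                  atomic_store(&cpus_remaining, ncpus);
                  for (int cpu = 0; cpu < ncpus; cpu++)
                          queue_stop_work(cpu, exp_stop_callback); /* no waiting here */
                  wait_until_done();  /* sleep until the counter reaches zero */
          }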
    • rcu: Get rid of synchronize_sched_expedited()'s polling loop · 385b73c0
      Authored by Paul E. McKenney
      This commit gets rid of synchronize_sched_expedited()'s mutex_trylock()
      polling loop in favor of a funnel-locking scheme based on the rcu_node
      tree.  The work-done check is done at each level of the tree, allowing
      high-contention situations to be resolved quickly with reasonable levels
      of mutex contention.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
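      A sketch of the funnel idea, assuming a simplified node type and a
      hypothetical gp_done() work-done check (the kernel's rcu_node fields
      and helpers differ in detail):

          #include <pthread.h>
          #include <stdbool.h>
          #include <stddef.h>

          struct node {
                  pthread_mutex_t exp_mutex;
                  struct node *parent;        /* NULL at the root */
          };

          bool gp_done(unsigned long s);      /* hypothetical work-done check */

          /* Walk leaf -> root, holding at most one level's mutex at a time.
           * Returns true with the root mutex held (caller must run the GP),
           * or false if another task's GP already covered snapshot s. */
          static bool exp_funnel_lock(struct node *leaf, unsigned long s)
          {
                  struct node *held = NULL;

                  for (struct node *np = leaf; np; np = np->parent) {
                          if (gp_done(s)) {   /* bail out at any level */
                                  if (held)
                                          pthread_mutex_unlock(&held->exp_mutex);
                                  return false;
                          }
                          pthread_mutex_lock(&np->exp_mutex);
                          if (held)
                                  pthread_mutex_unlock(&held->exp_mutex);
                          held = np;
                  }
                  return true;                /* root mutex held */
          }

      Because contending tasks serialize on their leaf first, at most one
      task per subtree ever competes for the root, and the work-done check
      at each level lets late arrivals return without touching the root.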
    • rcu: Rework synchronize_sched_expedited() counter handling · d6ada2cf
      Authored by Paul E. McKenney
      Now that synchronize_sched_expedited() has a mutex, it can use a
      simpler work-already-done detection scheme.  This commit therefore
      adopts something similar to the sequence-locking counter scheme.
      A counter is incremented before and after each grace period, so that
      the counter is odd in the midst of the grace period and even otherwise.
      So if the counter has advanced to the second even number that is
      greater than or equal to the snapshot, the required grace period has
      already happened.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
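      A worked example of the "second even number" rule: suppose a task
      snapshots the counter at 5.  The odd value means a grace period is
      already in flight, but it may have begun before the task's updates,
      so its completion (counter 6) cannot be trusted; the next full grace
      period runs the counter from 7 to 8, and 8 is indeed the second even
      number greater than or equal to 5.  Had the snapshot been 4, the next
      full grace period (5, then 6) would suffice, and 6 is the second even
      number greater than or equal to 4.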
    • rcu: Switch synchronize_sched_expedited() to stop_one_cpu() · c190c3b1
      Authored by Peter Zijlstra
      synchronize_sched_expedited() currently invokes try_stop_cpus(),
      which schedules the stopper kthreads on each online non-idle CPU,
      and waits until all those kthreads are running before letting any
      of them stop.  This is disastrous for real-time workloads, which
      get hit with a preemption that is as long as the longest scheduling
      latency on any CPU, including any non-realtime housekeeping CPUs.
      This commit therefore switches to using stop_one_cpu() on each CPU
      in turn.  This avoids inflicting the worst-case scheduling latency
      on the worst-case CPU onto all other CPUs, and also simplifies the
      code a little bit.
      
      Follow-up commits will simplify the counter-snapshotting algorithm
      and convert a number of the counters that are now protected by the
      new ->expedited_mutex to non-atomic.
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      [ paulmck: Kept stop_one_cpu(), dropped disabling of "guardrails". ]
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
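      Schematically, the switch replaces one all-CPUs operation with a
      per-CPU loop (kernel-style pseudocode; the idle-CPU check and the
      callback name are placeholders, not the patch's exact code):

          mutex_lock(&expedited_mutex);
          for_each_online_cpu(cpu) {
                  if (cpu_is_idle(cpu))       /* idle CPUs are already quiescent */
                          continue;
                  stop_one_cpu(cpu, exp_cpu_stop_callback, NULL); /* waits per CPU */
          }
          mutex_unlock(&expedited_mutex);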
    • rcu: Reset rcu_fanout_leaf if out of bounds · 13bd6494
      Authored by Paul E. McKenney
      Currently if the rcu_fanout_leaf boot parameter is out of bounds (that
      is, less than RCU_FANOUT_LEAF or greater than the number of bits in an
      unsigned long), a warning is issued and execution continues with the
      out-of-bounds value.  This can result in all manner of failures, so this
      patch resets rcu_fanout_leaf to RCU_FANOUT_LEAF when an out-of-bounds
      condition is detected.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
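      The check itself is small; a sketch approximating the patch, with
      WARN_ON() keeping the existing diagnostic:

          if (rcu_fanout_leaf < RCU_FANOUT_LEAF ||
              rcu_fanout_leaf > sizeof(unsigned long) * 8) {
                  rcu_fanout_leaf = RCU_FANOUT_LEAF;  /* reset, don't limp on */
                  WARN_ON(1);                         /* still warn the admin */
                  return;
          }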
  2. 16 Jul 2015 (8 commits)
  3. 28 May 2015 (21 commits)
  4. 14 May 2015 (1 commit)
  5. 15 Apr 2015 (1 commit)
    • rcu: Control grace-period delays directly from value · 8d7dc928
      Authored by Paul E. McKenney
      In a misguided attempt to avoid an #ifdef, the use of the
      gp_init_delay module parameter was conditioned on the corresponding
      RCU_TORTURE_TEST_SLOW_INIT Kconfig variable, using IS_ENABLED() at
      the point of use in the code.  This meant that the compiler always saw
      the delay, which meant that RCU_TORTURE_TEST_SLOW_INIT_DELAY had to be
      unconditionally defined.  This in turn caused "make oldconfig" to ask
      pointless questions about the value of RCU_TORTURE_TEST_SLOW_INIT_DELAY
      in cases where it was not even used.
      
      This commit avoids these pointless questions by defining gp_init_delay
      under #ifdef.  In one branch, gp_init_delay is initialized to
      RCU_TORTURE_TEST_SLOW_INIT_DELAY and is also a module parameter (thus
      allowing boot-time modification), and in the other branch gp_init_delay
      is a const variable initialized by default to zero.
      
      This approach also simplifies the code at the delay point by eliminating
      the IS_ENABLED() check.  Because gp_init_delay is constant zero in the no-delay
      case intended for production use, the "gp_init_delay > 0" check causes
      the delay to become dead code, as desired in this case.  In addition,
      this commit replaces magic constant "10" with the preprocessor variable
      PER_RCU_NODE_PERIOD, which controls the number of grace periods that
      are allowed to elapse at full speed before a delay is inserted.
      
      Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
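      A sketch of the two-branch definition and the delay point
      (approximating the commit; gpnum, rcu_num_nodes, and the exact
      modulo expression are taken from context and may differ in detail):

          #ifdef CONFIG_RCU_TORTURE_TEST_SLOW_INIT
          static int gp_init_delay = CONFIG_RCU_TORTURE_TEST_SLOW_INIT_DELAY;
          module_param(gp_init_delay, int, 0644); /* adjustable at boot time */
          #else
          static const int gp_init_delay;         /* const zero in production */
          #endif

          #define PER_RCU_NODE_PERIOD 10  /* full-speed GPs between delays */

          /* At the delay point: with gp_init_delay a constant zero, the
           * whole test is dead code and the compiler drops it. */
          if (gp_init_delay > 0 &&
              !(gpnum % (rcu_num_nodes * PER_RCU_NODE_PERIOD)))
                  schedule_timeout_uninterruptible(gp_init_delay);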