1. 09 Nov, 2012 (3 commits)
    • rcu: Move synchronize_sched_expedited() state to rcu_state · 40694d66
      Paul E. McKenney authored
      Tracing (debugfs) of expedited RCU primitives is required, which in turn
      requires that the relevant data live where the tracing code can find it,
      rather than in static global variables local to kernel/rcutree.c.
      This commit therefore moves sync_sched_expedited_started and
      sync_sched_expedited_done into the rcu_state structure, as the fields
      ->expedited_start and ->expedited_done, respectively.  (A declaration
      sketch of these fields appears after this commit list.)
      Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    • rcu: Avoid counter wrap in synchronize_sched_expedited() · 1924bcb0
      Paul E. McKenney authored
      synchronize_sched_expedited() uses a counter scheme similar to ticket
      locking to service multiple concurrent callers with a single expedited
      grace period.  Upon entry, a sync_sched_expedited_started variable is
      atomically incremented, and upon completion of an expedited grace
      period a separate sync_sched_expedited_done variable is atomically
      incremented.
      
      However, if a synchronize_sched_expedited() is delayed while
      in try_stop_cpus(), concurrent invocations will increment the
      sync_sched_expedited_started counter, which will eventually overflow.
      If the original synchronize_sched_expedited() resumes execution just
      as the counter overflows, a concurrent invocation could incorrectly
      conclude that an expedited grace period elapsed in zero time, which
      would be bad.  One could rely on counter size to prevent this from
      happening in practice, but the goal is to formally validate this
      code, so it needs to be fixed anyway.
      
      This commit therefore checks the gap between the two counters before
      incrementing sync_sched_expedited_started, and if the gap is too large,
      it falls back to a normal grace period instead.  Overflow is thus
      possible only if there are more than about 3.5 billion threads on
      32-bit systems, which can be excluded until such time as task_struct
      fits into a single byte and 4G/4G patches are accepted into mainline.
      It is also easy to encode this limitation into mechanical theorem
      provers.  (A simplified user-space model of this counter scheme appears
      after this commit list.)
      Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    • rcu: Rename ->onofflock to ->orphan_lock · 7b2e6011
      Paul E. McKenney authored
      The ->onofflock field in the rcu_state structure at one time synchronized
      CPU-hotplug operations for RCU.  However, its scope has decreased over time
      so that it now only protects the lists of orphaned RCU callbacks.  This
      commit therefore renames it to ->orphan_lock to reflect its current use.
      Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
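For the "Move synchronize_sched_expedited() state to rcu_state" commit (40694d66)
above, here is a declaration-only sketch of where the two expedited counters end
up.  The field types and the surrounding fields are assumptions for illustration,
not the literal kernel/rcutree.h declaration:

    /*
     * Sketch: the expedited counters become rcu_state members so that the
     * debugfs tracing code can reach them.  Types and neighboring fields
     * are illustrative only.
     */
    struct rcu_state {
            /* ... many other fields elided ... */
            atomic_long_t expedited_start;  /* was static sync_sched_expedited_started */
            atomic_long_t expedited_done;   /* was static sync_sched_expedited_done */
    };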
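For the "Avoid counter wrap in synchronize_sched_expedited()" commit (1924bcb0)
above, here is a simplified, self-contained user-space model of the ticket-style
counter scheme and the overflow guard it describes.  The names mirror the commit
message, but the threshold, the fallback, and the "expedited work" stand-ins are
assumptions; the real kernel code differs in structure and detail:

    #include <limits.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_long expedited_start;   /* tickets handed out on entry */
    static atomic_long expedited_done;    /* expedited grace periods completed */

    /* Stand-in for falling back to a normal (non-expedited) grace period. */
    static void normal_grace_period(void)
    {
            printf("gap too large: falling back to a normal grace period\n");
    }

    /* Stand-in for the expedited machinery (try_stop_cpus() in the kernel). */
    static int expedited_work_succeeded(void)
    {
            return 1;               /* always succeeds in this toy model */
    }

    static void model_synchronize_sched_expedited(void)
    {
            long firstsnap, snap, s;

            /* Ticket that must be seen as completed before returning. */
            firstsnap = atomic_load(&expedited_done) + 1;

            /*
             * Overflow guard: if "started" is already far ahead of "done",
             * a wrap could let a delayed caller believe a grace period
             * completed instantly, so use the normal path instead.
             * LONG_MAX / 2 is an arbitrary illustrative threshold.
             */
            if (atomic_load(&expedited_start) - firstsnap > LONG_MAX / 2) {
                    normal_grace_period();
                    return;
            }

            /* Take a ticket, ticket-lock style. */
            snap = atomic_fetch_add(&expedited_start, 1) + 1;

            if (!expedited_work_succeeded())
                    return;         /* the real code retries or falls back */

            /*
             * Advance "done" so that concurrent callers holding older
             * tickets can observe that their grace period has elapsed.
             */
            s = atomic_load(&expedited_done);
            if (s - snap < 0)
                    atomic_compare_exchange_strong(&expedited_done, &s, snap);
    }

    int main(void)
    {
            model_synchronize_sched_expedited();
            printf("start=%ld done=%ld\n",
                   atomic_load(&expedited_start), atomic_load(&expedited_done));
            return 0;
    }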
  2. 24 Oct, 2012 (2 commits)
  3. 21 Oct, 2012 (2 commits)
  4. 20 Oct, 2012 (33 commits)