    rcu: Avoid counter wrap in synchronize_sched_expedited() · 1924bcb0
    Committed by Paul E. McKenney
    There is a counter scheme similar to ticket locking that
    synchronize_sched_expedited() uses to service multiple concurrent
    callers with the same expedited grace period.  Upon entry, a
    sync_sched_expedited_started variable is atomically incremented,
    and upon completion of an expedited grace period a separate
    sync_sched_expedited_done variable is atomically incremented.
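
    As a rough sketch (simplified standalone counters, not the actual
    rcutree.c code; try_one_expedited_pass() is a hypothetical stand-in
    for the try_stop_cpus() machinery), the scheme looks like this:

        #include <linux/atomic.h>

        static atomic_t sync_sched_expedited_started = ATOMIC_INIT(0);
        static atomic_t sync_sched_expedited_done = ATOMIC_INIT(0);

        void synchronize_sched_expedited(void)
        {
                /* Take a "ticket" before attempting the grace period. */
                int snap = atomic_inc_return(&sync_sched_expedited_started);

                for (;;) {
                        /*
                         * Wrap-tolerant comparison: if the done counter
                         * has reached our ticket, some other CPU completed
                         * an expedited grace period that began after we
                         * entered, so it covers us as well.
                         */
                        if (atomic_read(&sync_sched_expedited_done) - snap >= 0)
                                return;
                        if (try_one_expedited_pass()) {  /* hypothetical */
                                /*
                                 * Publish our ticket; the real code uses a
                                 * cmpxchg loop so "done" never moves
                                 * backwards.
                                 */
                                atomic_set(&sync_sched_expedited_done, snap);
                                return;
                        }
                }
        }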
    
    However, if a synchronize_sched_expedited() is delayed while
    in try_stop_cpus(), concurrent invocations will increment the
    sync_sched_expedited_started counter, which will eventually overflow.
    If the original synchronize_sched_expedited() resumes execution just
    as the counter overflows, a concurrent invocation could incorrectly
    conclude that an expedited grace period elapsed in zero time, which
    would be bad.  One could rely on counter size to prevent this from
    happening in practice, but the goal is to formally validate this
    code, so it needs to be fixed anyway.
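
    To see the failure concretely, shrink the counters to 8 bits (purely
    for illustration; the real counters are machine words, so the same
    sequence merely needs about four billion intervening invocations):

        #include <stdio.h>

        int main(void)
        {
                unsigned char started = 0, done = 0;

                /* Original caller takes ticket 1, then stalls in
                 * try_stop_cpus(). */
                unsigned char old_snap = ++started;

                /* Meanwhile 255 concurrent invocations take tickets,
                 * wrapping "started" back to 0. */
                started += 255;

                /* A fresh caller now also gets ticket 1. */
                unsigned char new_snap = ++started;

                /* The original finally resumes and records completion. */
                done = old_snap;

                /* The fresh caller's wrap-tolerant "covered?" test now
                 * passes immediately: it sees an expedited grace period
                 * that appears to have elapsed in zero time. */
                printf("covered = %d\n", (signed char)(done - new_snap) >= 0);
                return 0;
        }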
    
    This commit therefore checks the gap between the two counters before
    incrementing sync_sched_expedited_started, and if the gap is too
    large, does a normal grace period instead.  Overflow is thus only
    possible if there are more than about 3.5 billion threads on 32-bit
    systems, which can be excluded until such time as task_struct fits
    into a single byte and 4G/4G patches are accepted into mainline.
    It is also easy to encode this limitation into mechanical theorem
    provers.
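
    A sketch of the resulting entry path (same simplified counters as
    above, widened to atomic_long_t; the exact threshold and bookkeeping
    in the real patch may differ):

        #include <linux/atomic.h>
        #include <linux/kernel.h>
        #include <linux/rcupdate.h>

        static atomic_long_t sync_sched_expedited_started;
        static atomic_long_t sync_sched_expedited_done;

        void synchronize_sched_expedited(void)
        {
                unsigned long started, done;
                long snap;

                started = (unsigned long)atomic_long_read(&sync_sched_expedited_started);
                done = (unsigned long)atomic_long_read(&sync_sched_expedited_done);

                /*
                 * If "started" has raced too far ahead of "done", a wrap
                 * is conceivable; don't take a ticket at all, and instead
                 * fall back to a normal grace period, which is always
                 * safe.  (ULONG_MAX / 8 is an illustrative threshold.)
                 */
                if (ULONG_CMP_GE(started, done + ULONG_MAX / 8)) {
                        synchronize_sched();
                        return;
                }

                /* Otherwise take a ticket and proceed as before. */
                snap = atomic_long_inc_return(&sync_sched_expedited_started);
                /* ... expedited grace-period machinery as before, using snap ... */
        }
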
    Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
rcutree.c