    markers: use synchronize_sched() · 6496968e
    Committed by Mathieu Desnoyers
    Markers do not mix well with CONFIG_PREEMPT_RCU because, for minimal
    intrusiveness, the marker code uses preempt_disable/enable() rather
    than rcu_read_lock/unlock().  We would need call_rcu_sched and
    rcu_barrier_sched primitives.
    
    Currently, connecting and disconnecting probes from markers modifies
    the probe data structure RCU-style: a new data structure is created,
    the pointer is swapped atomically, a quiescent state is awaited, and
    then the old data structure is freed, as sketched below.
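
    A minimal sketch of this update pattern follows; the names used here
    (probe_array, marker_entry_sketch, free_old_probes, swap_probes) are
    illustrative only and are not the exact symbols in marker.c:

        #include <linux/kernel.h>
        #include <linux/rcupdate.h>
        #include <linux/slab.h>

        typedef void (*probe_func_t)(void *data);

        struct probe_array {
                struct rcu_head rcu;            /* for deferred kfree() */
                int nr_probes;
                probe_func_t funcs[];           /* probe callbacks */
        };

        struct marker_entry_sketch {
                struct probe_array *probes;     /* RCU-protected pointer */
        };

        static void free_old_probes(struct rcu_head *head)
        {
                kfree(container_of(head, struct probe_array, rcu));
        }

        static void swap_probes(struct marker_entry_sketch *entry,
                                struct probe_array *new_probes)
        {
                struct probe_array *old = entry->probes;

                /*
                 * Publish the new array: readers see either the old or the
                 * new version in its entirety, never a mix.
                 */
                rcu_assign_pointer(entry->probes, new_probes);
                /* Free the old array once a quiescent state is reached. */
                call_rcu(&old->rcu, free_old_probes);
        }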
    
    The quiescent state is reached once all currently running
    preempt_disable regions have completed.  We use the call_rcu mechanism
    to execute kfree() after such a quiescent state has been reached.
    However, the new CONFIG_PREEMPT_RCU version of call_rcu and rcu_barrier
    does not guarantee that all preempt_disable code regions have finished,
    hence the race.
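
    To show where the race comes from, here is a read-side sketch of a
    marker call site, building on the illustrative types above; the probe
    array is dereferenced inside a preempt_disable() region, which a
    preemptible-RCU grace period does not wait for:

        #include <linux/preempt.h>

        static void call_probes_sketch(struct marker_entry_sketch *entry,
                                       void *data)
        {
                struct probe_array *probes;
                int i;

                preempt_disable();
                /*
                 * Preemptible RCU does not treat this preempt-disabled
                 * region as a read-side critical section, so a call_rcu()
                 * grace period can end, and the old array be freed, while
                 * it is still in use here.
                 */
                probes = rcu_dereference(entry->probes);
                for (i = 0; i < probes->nr_probes; i++)
                        probes->funcs[i](data);
                preempt_enable();
        }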
    
    The "proper" way to do this is to use rcu_read_lock/unlock, but we don't
    want to use it to minimize intrusiveness on the traced system.  (we do
    not want the marker code to call into much of the OS code, because it
    would quickly restrict what can and cannot be instrumented, such as the
    scheduler).
    
    The temporary fix, until we get call_rcu_sched and rcu_barrier_sched in
    mainline, is to use synchronize_sched() before each call_rcu call, so
    we wait for the quiescent state in the system call code path.  This
    will slow down batch marker enable/disable, but makes sure the race is
    gone.
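
    A sketch of the fix, again with the illustrative names from above; a
    hypothetical swap_probes_fixed() waits for every preempt-disabled
    region to finish before deferring the kfree():

        static void swap_probes_fixed(struct marker_entry_sketch *entry,
                                      struct probe_array *new_probes)
        {
                struct probe_array *old = entry->probes;

                rcu_assign_pointer(entry->probes, new_probes);
                /* Wait for all currently running preempt_disable() regions. */
                synchronize_sched();
                /* The deferred kfree() can no longer race with a probe caller. */
                call_rcu(&old->rcu, free_old_probes);
        }
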
    Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
    Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>