    x86, perf: P4 PMU -- protect sensible procedures from preemption · 137351e0
    Committed by Cyrill Gorcunov
    Steven reported:
    
    |
    | I'm getting:
    |
    | Pid: 3477, comm: perf Not tainted 2.6.34-rc6 #2727
    | Call Trace:
    |  [<ffffffff811c7565>] debug_smp_processor_id+0xd5/0xf0
    |  [<ffffffff81019874>] p4_hw_config+0x2b/0x15c
    |  [<ffffffff8107acbc>] ? trace_hardirqs_on_caller+0x12b/0x14f
    |  [<ffffffff81019143>] hw_perf_event_init+0x468/0x7be
    |  [<ffffffff810782fd>] ? debug_mutex_init+0x31/0x3c
    |  [<ffffffff810c68b2>] T.850+0x273/0x42e
    |  [<ffffffff810c6cab>] sys_perf_event_open+0x23e/0x3f1
    |  [<ffffffff81009e6a>] ? sysret_check+0x2e/0x69
    |  [<ffffffff81009e32>] system_call_fastpath+0x16/0x1b
    |
    | When running perf record in latest tip/perf/core
    |
    
    Because P4 counters are shared between HT threads, we synthetically
    divide the whole set of counters into two non-intersecting subsets.
    While we're "borrowing" counters from these subsets we must not be
    preempted (strictly speaking, in p4_hw_config we just pre-set a
    reference to the subset, which allows us to save some cycles in the
    schedule routine if it happens on the same cpu). So use a
    get_cpu/put_cpu pair.
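    The get_cpu/put_cpu pairing described above can be modeled in a small
    userspace sketch. The get_cpu(), put_cpu(), and p4_hw_config_sketch()
    definitions below are stand-ins invented for illustration (they only
    track the disable/enable pairing), not the real kernel implementations:

    ```c
    #include <stdio.h>

    /* Userspace stand-ins for the kernel helpers (illustrative only):
     * get_cpu() disables preemption and returns the current CPU id,
     * put_cpu() re-enables preemption.  The counter models the kernel's
     * preempt count so we can check the calls stay balanced. */
    static int preempt_count = 0;

    static int get_cpu(void)
    {
    	preempt_count++;	/* preemption now off: no CPU migration */
    	return 0;		/* pretend we run on CPU 0 */
    }

    static void put_cpu(void)
    {
    	preempt_count--;	/* preemption back on */
    }

    /* Sketch of the fixed flow: the HT-sibling counter subset is chosen
     * while preemption is disabled, so the CPU id cannot go stale
     * mid-way.  Names here are invented for the example. */
    static int p4_hw_config_sketch(void)
    {
    	int cpu = get_cpu();
    	int subset = cpu & 1;	/* even sibling -> subset 0, odd -> 1 */
    	/* ... pre-set the reference to that counter subset ... */
    	put_cpu();
    	return subset;
    }

    int main(void)
    {
    	int subset = p4_hw_config_sketch();
    	printf("subset=%d balanced=%s\n", subset,
    	       preempt_count == 0 ? "yes" : "no");
    	return 0;
    }
    ```

    The point of the pairing is that everything touching per-CPU state sits
    between get_cpu() and put_cpu(), so the task cannot migrate to the
    other sibling in the middle.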
    
    Also, p4_pmu_schedule_events should use smp_processor_id rather
    than the raw_ version. This allows us to catch preemption issues
    (if there ever are any).
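    The reason smp_processor_id is preferable can be modeled in a short
    userspace sketch: with CONFIG_DEBUG_PREEMPT, smp_processor_id() flags
    callers that run in preemptible context (which is what produced the
    debug_smp_processor_id backtrace above), while raw_smp_processor_id()
    stays silent. The two functions below are illustrative stubs, not the
    kernel implementations:

    ```c
    #include <stdio.h>

    static int preempt_disabled = 0;
    static int flagged = 0;

    /* Stub: the raw_ variant performs no checking at all. */
    static int raw_smp_processor_id(void)
    {
    	return 0;		/* pretend we run on CPU 0 */
    }

    /* Stub modeling CONFIG_DEBUG_PREEMPT behaviour: complain when
     * called from preemptible context, where the returned CPU id could
     * go stale the moment the task migrates. */
    static int smp_processor_id(void)
    {
    	if (!preempt_disabled)
    		flagged = 1;	/* kernel would dump a stack trace here */
    	return raw_smp_processor_id();
    }

    int main(void)
    {
    	smp_processor_id();	/* preemptible context: gets flagged */
    	printf("preemptible call flagged=%s\n", flagged ? "yes" : "no");

    	flagged = 0;
    	preempt_disabled = 1;	/* as between get_cpu() and put_cpu() */
    	smp_processor_id();	/* pinned context: silent */
    	preempt_disabled = 0;
    	printf("pinned call flagged=%s\n", flagged ? "yes" : "no");
    	return 0;
    }
    ```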
    Reported-by: Steven Rostedt <rostedt@goodmis.org>
    Tested-by: Steven Rostedt <rostedt@goodmis.org>
    Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
    Cc: Steven Rostedt <rostedt@goodmis.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Frederic Weisbecker <fweisbec@gmail.com>
    Cc: Lin Ming <ming.m.lin@intel.com>
    LKML-Reference: <20100508112716.963478928@openvz.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>