Commit 128f048f authored by Ingo Molnar

perf_counter: Fix throttling lock-up

Throttling logic is broken and we can lock up with too small
hw sampling intervals.

Make the throttling code more robust: disable counters even
if we already disabled them.

( Also clean up whitespace damage I noticed while reading
  various pieces of code related to throttling. )

Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Parent 233f0b95
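To make the failure mode concrete before the diff: an overflow interrupt throttles (disables) a counter once it fires too often per tick, but a racing sched-in can re-enable it; before this fix, the already-throttled path never asked for the counter to be disabled again, so it kept firing. Below is a minimal user-space model of that race. It is an illustrative sketch only: model_counter, model_overflow and COUNTER_LIMIT are invented stand-ins, not kernel identifiers, and the constants are arbitrary.

	#include <stdbool.h>
	#include <stdio.h>

	#define MAX_INTERRUPTS	(~0U)
	#define COUNTER_LIMIT	100000	/* stand-in for sysctl_perf_counter_limit */
	#define HZ		1000

	struct model_counter {
		unsigned int	interrupts;
		bool		enabled;
	};

	/* Returns true when the caller must (re-)disable the counter. */
	static bool model_overflow(struct model_counter *c, bool fixed)
	{
		if (c->interrupts != MAX_INTERRUPTS) {
			c->interrupts++;
			if (HZ * c->interrupts > COUNTER_LIMIT) {
				c->interrupts = MAX_INTERRUPTS;
				return true;	/* throttle: disable the counter */
			}
			return false;
		}
		/*
		 * Already throttled: before the fix this path reported
		 * "nothing to do", so a counter re-enabled by a racing
		 * sched-in was never disabled again and kept storming
		 * interrupts.
		 */
		return fixed;
	}

	int main(void)
	{
		struct model_counter c = { .interrupts = MAX_INTERRUPTS, .enabled = true };

		/* a sched-in re-enabled the counter after it was throttled ... */
		if (model_overflow(&c, false))
			c.enabled = false;
		printf("old code: enabled=%d (lock-up: keeps firing)\n", c.enabled);

		c.enabled = true;	/* the same race, with the fix applied */
		if (model_overflow(&c, true))
			c.enabled = false;
		printf("fixed:    enabled=%d (re-disabled)\n", c.enabled);
		return 0;
	}

Run through both branches, the old path leaves the counter enabled while the fixed path re-disables it, which is the whole of this commit's behavioral change.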
......@@ -91,7 +91,7 @@ static u64 intel_pmu_raw_event(u64 event)
 #define CORE_EVNTSEL_INV_MASK		0x00800000ULL
 #define CORE_EVNTSEL_COUNTER_MASK	0xFF000000ULL
 
-#define CORE_EVNTSEL_MASK 		\
+#define CORE_EVNTSEL_MASK		\
 	(CORE_EVNTSEL_EVENT_MASK |	\
 	 CORE_EVNTSEL_UNIT_MASK  |	\
 	 CORE_EVNTSEL_EDGE_MASK  |	\
......
......@@ -2822,11 +2822,20 @@ int perf_counter_overflow(struct perf_counter *counter,
 
 	if (!throttle) {
 		counter->hw.interrupts++;
-	} else if (counter->hw.interrupts != MAX_INTERRUPTS) {
-		counter->hw.interrupts++;
-		if (HZ*counter->hw.interrupts > (u64)sysctl_perf_counter_limit) {
-			counter->hw.interrupts = MAX_INTERRUPTS;
-			perf_log_throttle(counter, 0);
+	} else {
+		if (counter->hw.interrupts != MAX_INTERRUPTS) {
+			counter->hw.interrupts++;
+			if (HZ*counter->hw.interrupts > (u64)sysctl_perf_counter_limit) {
+				counter->hw.interrupts = MAX_INTERRUPTS;
+				perf_log_throttle(counter, 0);
+				ret = 1;
+			}
+		} else {
+			/*
+			 * Keep re-disabling counters even though on the previous
+			 * pass we disabled it - just in case we raced with a
+			 * sched-in and the counter got enabled again:
+			 */
 			ret = 1;
 		}
 	}
......
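The return value is what the fix leans on: the architecture's PMU overflow interrupt handler disables the hardware counter whenever perf_counter_overflow() returns non-zero. Schematically (paraphrased from memory of the x86 handlers of this era, not quoted verbatim from the tree):

	/* inside the PMU overflow interrupt handler, per overflowed counter: */
	if (perf_counter_overflow(counter, nmi, regs, 0))
		amd_pmu_disable_counter(hwc, idx);	/* throttle: stop the counter */

Because the handler only disables on a non-zero return, setting ret = 1 even when hw.interrupts is already MAX_INTERRUPTS closes the window where a sched-in re-enabled the counter between two overflows.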