Commit 79202ba9 authored by Ingo Molnar

perf_counter, x86: Fix APIC NMI programming

My Nehalem box locks up in certain situations (with an
always-asserted NMI causing a lockup) if the PMU LVT
entry is toggled between NMI and IRQ mode at high
frequency.

Standardize exclusively on NMIs instead.

[ Impact: fix lockup ]
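For context, a minimal sketch of the kind of LVT reprogramming the message refers to, assuming the local-APIC helpers and constants of that era (apic_write(), APIC_LVTPC, APIC_DM_NMI, LOCAL_PERF_VECTOR); the wrapper name is hypothetical and not the exact function in the perf_counter code:

    #include <asm/apic.h>

    /*
     * Hypothetical illustration only: switch the performance-counter
     * LVT entry between NMI delivery and a regular IRQ vector.
     * Toggling this at high frequency is what triggers the lockup
     * described above; the fix below programs NMI mode unconditionally.
     */
    static void pmu_lvt_program(int nmi)
    {
            if (nmi)
                    apic_write(APIC_LVTPC, APIC_DM_NMI);
            else
                    apic_write(APIC_LVTPC, LOCAL_PERF_VECTOR);
    }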

Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: John Kacur <jkacur@redhat.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Parent 8a7b8cb9
@@ -285,14 +285,10 @@ static int __hw_perf_counter_init(struct perf_counter *counter)
 		hwc->config |= ARCH_PERFMON_EVENTSEL_OS;
 
 	/*
-	 * If privileged enough, allow NMI events:
+	 * Use NMI events all the time:
 	 */
-	hwc->nmi = 0;
-	if (hw_event->nmi) {
-		if (sysctl_perf_counter_priv && !capable(CAP_SYS_ADMIN))
-			return -EACCES;
-		hwc->nmi = 1;
-	}
+	hwc->nmi = 1;
+	hw_event->nmi = 1;
 
 	if (!hwc->irq_period)
 		hwc->irq_period = x86_pmu.max_period;
@@ -553,9 +549,6 @@ fixed_mode_idx(struct perf_counter *counter, struct hw_perf_counter *hwc)
 	if (!x86_pmu.num_counters_fixed)
 		return -1;
 
-	if (unlikely(hwc->nmi))
-		return -1;
-
 	event = hwc->config & ARCH_PERFMON_EVENT_MASK;
 
 	if (unlikely(event == x86_pmu.event_map(PERF_COUNT_INSTRUCTIONS)))
@@ -806,9 +799,6 @@ static int amd_pmu_handle_irq(struct pt_regs *regs, int nmi)
 		counter = cpuc->counters[idx];
 		hwc = &counter->hw;
 
-		if (counter->hw_event.nmi != nmi)
-			continue;
-
 		val = x86_perf_counter_update(counter, hwc, idx);
 		if (val & (1ULL << (x86_pmu.counter_bits - 1)))
 			continue;