Commit 4d7b4ac2 authored by Linus Torvalds

Merge branch 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip

* 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (311 commits)
  perf tools: Add mode to build without newt support
  perf symbols: symbol inconsistency message should be done only at verbose=1
  perf tui: Add explicit -lslang option
  perf options: Type check all the remaining OPT_ variants
  perf options: Type check OPT_BOOLEAN and fix the offenders
  perf options: Check v type in OPT_U?INTEGER
  perf options: Introduce OPT_UINTEGER
  perf tui: Add workaround for slang < 2.1.4
  perf record: Fix bug mismatch with -c option definition
  perf options: Introduce OPT_U64
  perf tui: Add help window to show key associations
  perf tui: Make <- exit menus too
  perf newt: Add single key shortcuts for zoom into DSO and threads
  perf newt: Exit browser unconditionally when CTRL+C, q or Q is pressed
  perf newt: Fix the 'A'/'a' shortcut for annotate
  perf newt: Make <- exit the ui_browser
  x86, perf: P4 PMU - fix counters management logic
  perf newt: Make <- zoom out filters
  perf report: Report number of events, not samples
  perf hist: Clarify events_stats fields usage
  ...

Fix up trivial conflicts in kernel/fork.c and tools/perf/builtin-record.c
@@ -165,8 +165,8 @@ the user entry_handler invocation is also skipped.
1.4 How Does Jump Optimization Work?
-If you configured your kernel with CONFIG_OPTPROBES=y (currently
-this option is supported on x86/x86-64, non-preemptive kernel) and
+If your kernel is built with CONFIG_OPTPROBES=y (currently this flag
+is automatically set 'y' on x86/x86-64, non-preemptive kernel) and
the "debug.kprobes_optimization" kernel parameter is set to 1 (see
sysctl(8)), Kprobes tries to reduce probe-hit overhead by using a jump
instruction instead of a breakpoint instruction at each probepoint.
@@ -271,8 +271,6 @@ tweak the kernel's execution path, you need to suppress optimization,
using one of the following techniques:
- Specify an empty function for the kprobe's post_handler or break_handler.
 or
-- Config CONFIG_OPTPROBES=n.
- or
- Execute 'sysctl -w debug.kprobes_optimization=n'
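To make the contrast concrete, here is a minimal registration sketch (an illustration, not part of this commit): a probe that sets only a pre_handler and leaves post_handler/break_handler unset stays eligible for jump optimization, while filling in either handler, as listed above, suppresses it. The probed symbol is just an example.

#include <linux/module.h>
#include <linux/kprobes.h>

/* Runs before the probed instruction; returning 0 resumes normal execution. */
static int handler_pre(struct kprobe *p, struct pt_regs *regs)
{
	pr_info("kprobe hit at %p (%s)\n", p->addr, p->symbol_name);
	return 0;
}

static struct kprobe kp = {
	.symbol_name = "do_fork",	/* example probe point */
	.pre_handler = handler_pre,
	/* no post_handler/break_handler, so the probe can be jump-optimized */
};

static int __init optprobe_demo_init(void)
{
	return register_kprobe(&kp);
}

static void __exit optprobe_demo_exit(void)
{
	unregister_kprobe(&kp);
}

module_init(optprobe_demo_init);
module_exit(optprobe_demo_exit);
MODULE_LICENSE("GPL");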
2. Architectures Supported
@@ -307,10 +305,6 @@ it useful to "Compile the kernel with debug info" (CONFIG_DEBUG_INFO),
so you can use "objdump -d -l vmlinux" to see the source-to-object
code mapping.
If you want to reduce probing overhead, set "Kprobes jump optimization
support" (CONFIG_OPTPROBES) to "y". You can find this option under the
"Kprobes" line.
4. API Reference
The Kprobes API includes a "register" function and an "unregister"
......
@@ -40,7 +40,9 @@ Synopsis of kprobe_events
$stack : Fetch stack address.
$retval : Fetch return value.(*)
+|-offs(FETCHARG) : Fetch memory at FETCHARG +|- offs address.(**)
-NAME=FETCHARG: Set NAME as the argument name of FETCHARG.
+NAME=FETCHARG : Set NAME as the argument name of FETCHARG.
FETCHARG:TYPE : Set TYPE as the type of FETCHARG. Currently, basic types
(u8/u16/u32/u64/s8/s16/s32/s64) are supported.
(*) only for return probe.
(**) this is useful for fetching a field of data structures.
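As an illustration of the new NAME=FETCHARG and FETCHARG:TYPE syntax (not part of this diff), a typed probe argument can be defined by writing a line such as p:myprobe do_sys_open flags=%cx:s32 to the kprobe_events file. A minimal userspace sketch, assuming debugfs is mounted at the usual /sys/kernel/debug:

#include <stdio.h>

int main(void)
{
	/* Append a probe on do_sys_open whose 'flags' argument is fetched
	 * from %cx and decoded as a signed 32-bit value (s32). */
	FILE *f = fopen("/sys/kernel/debug/tracing/kprobe_events", "a");
	if (!f)
		return 1;
	fprintf(f, "p:myprobe do_sys_open flags=%%cx:s32\n");
	return fclose(f) ? 1 : 0;
}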
......
@@ -4354,13 +4354,13 @@ M: Paul Mackerras <paulus@samba.org>
M: Ingo Molnar <mingo@elte.hu>
M: Arnaldo Carvalho de Melo <acme@redhat.com>
S: Supported
-F: kernel/perf_event.c
+F: kernel/perf_event*.c
F: include/linux/perf_event.h
-F: arch/*/kernel/perf_event.c
+F: arch/*/kernel/perf_event*.c
-F: arch/*/kernel/*/perf_event.c
+F: arch/*/kernel/*/perf_event*.c
-F: arch/*/kernel/*/*/perf_event.c
+F: arch/*/kernel/*/*/perf_event*.c
F: arch/*/include/asm/perf_event.h
-F: arch/*/lib/perf_event.c
+F: arch/*/lib/perf_event*.c
F: arch/*/kernel/perf_callchain.c
F: tools/perf/
......
@@ -42,15 +42,10 @@ config KPROBES
If in doubt, say "N".
config OPTPROBES
-bool "Kprobes jump optimization support (EXPERIMENTAL)"
-default y
-depends on KPROBES
+def_bool y
+depends on KPROBES && HAVE_OPTPROBES
depends on !PREEMPT
-depends on HAVE_OPTPROBES
select KALLSYMS_ALL
-help
-This option will allow kprobes to optimize breakpoint to
-a jump for reducing its overhead.
config HAVE_EFFICIENT_UNALIGNED_ACCESS
bool
@@ -142,6 +137,17 @@ config HAVE_HW_BREAKPOINT
bool
depends on PERF_EVENTS
config HAVE_MIXED_BREAKPOINTS_REGS
bool
depends on HAVE_HW_BREAKPOINT
help
Depending on the arch implementation of hardware breakpoints,
some of them have separate registers for data and instruction
breakpoints addresses, others have mixed registers to store
them but define the access type in a control register.
Select this option if your arch implements breakpoints under the
latter fashion.
config HAVE_USER_RETURN_NOTIFIER
bool
......
@@ -35,6 +35,9 @@ struct cpu_hw_events {
u64 alternatives[MAX_HWEVENTS][MAX_EVENT_ALTERNATIVES];
unsigned long amasks[MAX_HWEVENTS][MAX_EVENT_ALTERNATIVES];
unsigned long avalues[MAX_HWEVENTS][MAX_EVENT_ALTERNATIVES];
unsigned int group_flag;
int n_txn_start;
};
DEFINE_PER_CPU(struct cpu_hw_events, cpu_hw_events);
@@ -718,66 +721,6 @@ static int collect_events(struct perf_event *group, int max_count,
return n;
}
static void event_sched_in(struct perf_event *event)
{
event->state = PERF_EVENT_STATE_ACTIVE;
event->oncpu = smp_processor_id();
event->tstamp_running += event->ctx->time - event->tstamp_stopped;
if (is_software_event(event))
event->pmu->enable(event);
}
/*
* Called to enable a whole group of events.
* Returns 1 if the group was enabled, or -EAGAIN if it could not be.
* Assumes the caller has disabled interrupts and has
* frozen the PMU with hw_perf_save_disable.
*/
int hw_perf_group_sched_in(struct perf_event *group_leader,
struct perf_cpu_context *cpuctx,
struct perf_event_context *ctx)
{
struct cpu_hw_events *cpuhw;
long i, n, n0;
struct perf_event *sub;
if (!ppmu)
return 0;
cpuhw = &__get_cpu_var(cpu_hw_events);
n0 = cpuhw->n_events;
n = collect_events(group_leader, ppmu->n_counter - n0,
&cpuhw->event[n0], &cpuhw->events[n0],
&cpuhw->flags[n0]);
if (n < 0)
return -EAGAIN;
if (check_excludes(cpuhw->event, cpuhw->flags, n0, n))
return -EAGAIN;
i = power_check_constraints(cpuhw, cpuhw->events, cpuhw->flags, n + n0);
if (i < 0)
return -EAGAIN;
cpuhw->n_events = n0 + n;
cpuhw->n_added += n;
/*
* OK, this group can go on; update event states etc.,
* and enable any software events
*/
for (i = n0; i < n0 + n; ++i)
cpuhw->event[i]->hw.config = cpuhw->events[i];
cpuctx->active_oncpu += n;
n = 1;
event_sched_in(group_leader);
list_for_each_entry(sub, &group_leader->sibling_list, group_entry) {
if (sub->state != PERF_EVENT_STATE_OFF) {
event_sched_in(sub);
++n;
}
}
ctx->nr_active += n;
return 1;
}
/*
 * Add a event to the PMU.
 * If all events are not already frozen, then we disable and
@@ -805,12 +748,22 @@ static int power_pmu_enable(struct perf_event *event)
cpuhw->event[n0] = event;
cpuhw->events[n0] = event->hw.config;
cpuhw->flags[n0] = event->hw.event_base;
/*
* If group events scheduling transaction was started,
* skip the schedulability test here, it will be peformed
* at commit time(->commit_txn) as a whole
*/
if (cpuhw->group_flag & PERF_EVENT_TXN_STARTED)
goto nocheck;
if (check_excludes(cpuhw->event, cpuhw->flags, n0, 1))
goto out;
if (power_check_constraints(cpuhw, cpuhw->events, cpuhw->flags, n0 + 1))
goto out;
event->hw.config = cpuhw->events[n0];
nocheck:
++cpuhw->n_events;
++cpuhw->n_added;
@@ -896,11 +849,65 @@ static void power_pmu_unthrottle(struct perf_event *event)
local_irq_restore(flags);
}
/*
* Start group events scheduling transaction
* Set the flag to make pmu::enable() not perform the
* schedulability test, it will be performed at commit time
*/
void power_pmu_start_txn(const struct pmu *pmu)
{
struct cpu_hw_events *cpuhw = &__get_cpu_var(cpu_hw_events);
cpuhw->group_flag |= PERF_EVENT_TXN_STARTED;
cpuhw->n_txn_start = cpuhw->n_events;
}
/*
* Stop group events scheduling transaction
* Clear the flag and pmu::enable() will perform the
* schedulability test.
*/
void power_pmu_cancel_txn(const struct pmu *pmu)
{
struct cpu_hw_events *cpuhw = &__get_cpu_var(cpu_hw_events);
cpuhw->group_flag &= ~PERF_EVENT_TXN_STARTED;
}
/*
* Commit group events scheduling transaction
* Perform the group schedulability test as a whole
* Return 0 if success
*/
int power_pmu_commit_txn(const struct pmu *pmu)
{
struct cpu_hw_events *cpuhw;
long i, n;
if (!ppmu)
return -EAGAIN;
cpuhw = &__get_cpu_var(cpu_hw_events);
n = cpuhw->n_events;
if (check_excludes(cpuhw->event, cpuhw->flags, 0, n))
return -EAGAIN;
i = power_check_constraints(cpuhw, cpuhw->events, cpuhw->flags, n);
if (i < 0)
return -EAGAIN;
for (i = cpuhw->n_txn_start; i < n; ++i)
cpuhw->event[i]->hw.config = cpuhw->events[i];
return 0;
}
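For context, a rough caller-side sketch (not part of this diff) of the pattern these three hooks enable, following the comments above: the core starts a transaction so that pmu->enable() skips the per-event schedulability test, adds the leader and its siblings, and then validates the whole group at commit time, cancelling on failure. Function and variable names here are illustrative only, and unwinding of already-enabled events is omitted for brevity.

static int group_sched_in_sketch(const struct pmu *pmu,
				 struct perf_event *leader)
{
	struct perf_event *sub;

	pmu->start_txn(pmu);		/* defer the schedulability test */

	if (pmu->enable(leader))
		goto fail;
	list_for_each_entry(sub, &leader->sibling_list, group_entry) {
		if (pmu->enable(sub))
			goto fail;
	}

	if (!pmu->commit_txn(pmu))	/* test the group as a whole */
		return 0;
fail:
	pmu->cancel_txn(pmu);		/* clear the transaction flag */
	return -EAGAIN;
}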
struct pmu power_pmu = {
.enable = power_pmu_enable,
.disable = power_pmu_disable,
.read = power_pmu_read,
.unthrottle = power_pmu_unthrottle,
.start_txn = power_pmu_start_txn,
.cancel_txn = power_pmu_cancel_txn,
.commit_txn = power_pmu_commit_txn,
};
/*
......
@@ -44,6 +44,7 @@ config SUPERH32
select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_ARCH_KGDB
select HAVE_HW_BREAKPOINT
select HAVE_MIXED_BREAKPOINTS_REGS
select PERF_EVENTS if HAVE_HW_BREAKPOINT
select ARCH_HIBERNATION_POSSIBLE if MMU
......
@@ -46,10 +46,14 @@ struct pmu;
/* Maximum number of UBC channels */
#define HBP_NUM 2
static inline int hw_breakpoint_slots(int type)
{
return HBP_NUM;
}
/* arch/sh/kernel/hw_breakpoint.c */
-extern int arch_check_va_in_userspace(unsigned long va, u16 hbp_len);
-extern int arch_validate_hwbkpt_settings(struct perf_event *bp,
-struct task_struct *tsk);
+extern int arch_check_bp_in_kernelspace(struct perf_event *bp);
+extern int arch_validate_hwbkpt_settings(struct perf_event *bp);
extern int hw_breakpoint_exceptions_notify(struct notifier_block *unused,
unsigned long val, void *data);
@@ -119,26 +119,17 @@ static int get_hbp_len(u16 hbp_len)
return len_in_bytes;
}
/*
* Check for virtual address in user space.
*/
int arch_check_va_in_userspace(unsigned long va, u16 hbp_len)
{
unsigned int len;
len = get_hbp_len(hbp_len);
return (va <= TASK_SIZE - len);
}
/*
 * Check for virtual address in kernel space.
 */
-static int arch_check_va_in_kernelspace(unsigned long va, u8 hbp_len)
+int arch_check_bp_in_kernelspace(struct perf_event *bp)
{
unsigned int len;
+unsigned long va;
+struct arch_hw_breakpoint *info = counter_arch_bp(bp);
-len = get_hbp_len(hbp_len);
+va = info->address;
+len = get_hbp_len(info->len);
return (va >= TASK_SIZE) && ((va + len - 1) >= TASK_SIZE);
}
@@ -226,8 +217,7 @@ static int arch_build_bp_info(struct perf_event *bp)
/*
 * Validate the arch-specific HW Breakpoint register settings
 */
-int arch_validate_hwbkpt_settings(struct perf_event *bp,
-struct task_struct *tsk)
+int arch_validate_hwbkpt_settings(struct perf_event *bp)
{
struct arch_hw_breakpoint *info = counter_arch_bp(bp);
unsigned int align;
@@ -270,15 +260,6 @@ int arch_validate_hwbkpt_settings(struct perf_event *bp,
if (info->address & align)
return -EINVAL;
/* Check that the virtual address is in the proper range */
if (tsk) {
if (!arch_check_va_in_userspace(info->address, info->len))
return -EFAULT;
} else {
if (!arch_check_va_in_kernelspace(info->address, info->len))
return -EFAULT;
}
return 0;
}
@@ -363,8 +344,7 @@ static int __kprobes hw_breakpoint_handler(struct die_args *args)
perf_bp_event(bp, args->regs);
/* Deliver the signal to userspace */
-if (arch_check_va_in_userspace(bp->attr.bp_addr,
-bp->attr.bp_len)) {
+if (!arch_check_bp_in_kernelspace(bp)) {
siginfo_t info;
info.si_signo = args->signr;
......
@@ -85,7 +85,7 @@ static int set_single_step(struct task_struct *tsk, unsigned long addr)
bp = thread->ptrace_bps[0];
if (!bp) {
-hw_breakpoint_init(&attr);
+ptrace_breakpoint_init(&attr);
attr.bp_addr = addr;
attr.bp_len = HW_BREAKPOINT_LEN_2;
......
@@ -53,11 +53,15 @@ config X86
select HAVE_KERNEL_LZMA
select HAVE_KERNEL_LZO
select HAVE_HW_BREAKPOINT
select HAVE_MIXED_BREAKPOINTS_REGS
select PERF_EVENTS
select ANON_INODES
select HAVE_ARCH_KMEMCHECK
select HAVE_USER_RETURN_NOTIFIER
config INSTRUCTION_DECODER
def_bool (KPROBES || PERF_EVENTS)
config OUTPUT_FORMAT
string
default "elf32-i386" if X86_32
......
@@ -502,23 +502,3 @@ config CPU_SUP_UMC_32
CPU might render the kernel unbootable.
If unsure, say N.
config X86_DS
def_bool X86_PTRACE_BTS
depends on X86_DEBUGCTLMSR
select HAVE_HW_BRANCH_TRACER
config X86_PTRACE_BTS
bool "Branch Trace Store"
default y
depends on X86_DEBUGCTLMSR
depends on BROKEN
---help---
This adds a ptrace interface to the hardware's branch trace store.
Debuggers may use it to collect an execution trace of the debugged
application in order to answer the question 'how did I get here?'.
Debuggers may trace user mode as well as kernel mode.
Say Y unless there is no application development on this machine
and you want to save a small amount of code size.
@@ -174,15 +174,6 @@ config IOMMU_LEAK
Add a simple leak tracer to the IOMMU code. This is useful when you
are debugging a buggy device driver that leaks IOMMU mappings.
config X86_DS_SELFTEST
bool "DS selftest"
default y
depends on DEBUG_KERNEL
depends on X86_DS
---help---
Perform Debug Store selftests at boot time.
If in doubt, say "N".
config HAVE_MMIOTRACE_SUPPORT
def_bool y
......
@@ -373,6 +373,7 @@ extern atomic_t init_deasserted;
extern int wakeup_secondary_cpu_via_nmi(int apicid, unsigned long start_eip);
#endif
#ifdef CONFIG_X86_LOCAL_APIC
static inline u32 apic_read(u32 reg)
{
return apic->read(reg);
@@ -403,10 +404,19 @@ static inline u32 safe_apic_wait_icr_idle(void)
return apic->safe_wait_icr_idle();
}
#else /* CONFIG_X86_LOCAL_APIC */
static inline u32 apic_read(u32 reg) { return 0; }
static inline void apic_write(u32 reg, u32 val) { }
static inline u64 apic_icr_read(void) { return 0; }
static inline void apic_icr_write(u32 low, u32 high) { }
static inline void apic_wait_icr_idle(void) { }
static inline u32 safe_apic_wait_icr_idle(void) { return 0; }
#endif /* CONFIG_X86_LOCAL_APIC */
static inline void ack_APIC_irq(void)
{
-#ifdef CONFIG_X86_LOCAL_APIC
/*
 * ack_APIC_irq() actually gets compiled as a single instruction
 * ... yummie.
@@ -414,7 +424,6 @@ static inline void ack_APIC_irq(void)
/* Docs say use 0 for future compatibility */
apic_write(APIC_EOI, 0);
-#endif
}
static inline unsigned default_get_apic_id(unsigned long x)
......
/*
* Debug Store (DS) support
*
* This provides a low-level interface to the hardware's Debug Store
* feature that is used for branch trace store (BTS) and
* precise-event based sampling (PEBS).
*
* It manages:
* - DS and BTS hardware configuration
* - buffer overflow handling (to be done)
* - buffer access
*
* It does not do:
* - security checking (is the caller allowed to trace the task)
* - buffer allocation (memory accounting)
*
*
* Copyright (C) 2007-2009 Intel Corporation.
* Markus Metzger <markus.t.metzger@intel.com>, 2007-2009
*/
#ifndef _ASM_X86_DS_H
#define _ASM_X86_DS_H
#include <linux/types.h>
#include <linux/init.h>
#include <linux/err.h>
#ifdef CONFIG_X86_DS
struct task_struct;
struct ds_context;
struct ds_tracer;
struct bts_tracer;
struct pebs_tracer;
typedef void (*bts_ovfl_callback_t)(struct bts_tracer *);
typedef void (*pebs_ovfl_callback_t)(struct pebs_tracer *);
/*
* A list of features plus corresponding macros to talk about them in
* the ds_request function's flags parameter.
*
* We use the enum to index an array of corresponding control bits;
* we use the macro to index a flags bit-vector.
*/
enum ds_feature {
dsf_bts = 0,
dsf_bts_kernel,
#define BTS_KERNEL (1 << dsf_bts_kernel)
/* trace kernel-mode branches */
dsf_bts_user,
#define BTS_USER (1 << dsf_bts_user)
/* trace user-mode branches */
dsf_bts_overflow,
dsf_bts_max,
dsf_pebs = dsf_bts_max,
dsf_pebs_max,
dsf_ctl_max = dsf_pebs_max,
dsf_bts_timestamps = dsf_ctl_max,
#define BTS_TIMESTAMPS (1 << dsf_bts_timestamps)
/* add timestamps into BTS trace */
#define BTS_USER_FLAGS (BTS_KERNEL | BTS_USER | BTS_TIMESTAMPS)
};
/*
* Request BTS or PEBS
*
* Due to alignement constraints, the actual buffer may be slightly
* smaller than the requested or provided buffer.
*
* Returns a pointer to a tracer structure on success, or
* ERR_PTR(errcode) on failure.
*
* The interrupt threshold is independent from the overflow callback
* to allow users to use their own overflow interrupt handling mechanism.
*
* The function might sleep.
*
* task: the task to request recording for
* cpu: the cpu to request recording for
* base: the base pointer for the (non-pageable) buffer;
* size: the size of the provided buffer in bytes
* ovfl: pointer to a function to be called on buffer overflow;
* NULL if cyclic buffer requested
* th: the interrupt threshold in records from the end of the buffer;
* -1 if no interrupt threshold is requested.
* flags: a bit-mask of the above flags
*/
extern struct bts_tracer *ds_request_bts_task(struct task_struct *task,
void *base, size_t size,
bts_ovfl_callback_t ovfl,
size_t th, unsigned int flags);
extern struct bts_tracer *ds_request_bts_cpu(int cpu, void *base, size_t size,
bts_ovfl_callback_t ovfl,
size_t th, unsigned int flags);
extern struct pebs_tracer *ds_request_pebs_task(struct task_struct *task,
void *base, size_t size,
pebs_ovfl_callback_t ovfl,
size_t th, unsigned int flags);
extern struct pebs_tracer *ds_request_pebs_cpu(int cpu,
void *base, size_t size,
pebs_ovfl_callback_t ovfl,
size_t th, unsigned int flags);
/*
* Release BTS or PEBS resources
* Suspend and resume BTS or PEBS tracing
*
* Must be called with irq's enabled.
*
* tracer: the tracer handle returned from ds_request_~()
*/
extern void ds_release_bts(struct bts_tracer *tracer);
extern void ds_suspend_bts(struct bts_tracer *tracer);
extern void ds_resume_bts(struct bts_tracer *tracer);
extern void ds_release_pebs(struct pebs_tracer *tracer);
extern void ds_suspend_pebs(struct pebs_tracer *tracer);
extern void ds_resume_pebs(struct pebs_tracer *tracer);
/*
* Release BTS or PEBS resources
* Suspend and resume BTS or PEBS tracing
*
* Cpu tracers must call this on the traced cpu.
* Task tracers must call ds_release_~_noirq() for themselves.
*
* May be called with irq's disabled.
*
* Returns 0 if successful;
* -EPERM if the cpu tracer does not trace the current cpu.
* -EPERM if the task tracer does not trace itself.
*
* tracer: the tracer handle returned from ds_request_~()
*/
extern int ds_release_bts_noirq(struct bts_tracer *tracer);
extern int ds_suspend_bts_noirq(struct bts_tracer *tracer);
extern int ds_resume_bts_noirq(struct bts_tracer *tracer);
extern int ds_release_pebs_noirq(struct pebs_tracer *tracer);
extern int ds_suspend_pebs_noirq(struct pebs_tracer *tracer);
extern int ds_resume_pebs_noirq(struct pebs_tracer *tracer);
/*
* The raw DS buffer state as it is used for BTS and PEBS recording.
*
* This is the low-level, arch-dependent interface for working
* directly on the raw trace data.
*/
struct ds_trace {
/* the number of bts/pebs records */
size_t n;
/* the size of a bts/pebs record in bytes */
size_t size;
/* pointers into the raw buffer:
- to the first entry */
void *begin;
/* - one beyond the last entry */
void *end;
/* - one beyond the newest entry */
void *top;
/* - the interrupt threshold */
void *ith;
/* flags given on ds_request() */
unsigned int flags;
};
/*
* An arch-independent view on branch trace data.
*/
enum bts_qualifier {
bts_invalid,
#define BTS_INVALID bts_invalid
bts_branch,
#define BTS_BRANCH bts_branch
bts_task_arrives,
#define BTS_TASK_ARRIVES bts_task_arrives
bts_task_departs,
#define BTS_TASK_DEPARTS bts_task_departs
bts_qual_bit_size = 4,
bts_qual_max = (1 << bts_qual_bit_size),
};
struct bts_struct {
__u64 qualifier;
union {
/* BTS_BRANCH */
struct {
__u64 from;
__u64 to;
} lbr;
/* BTS_TASK_ARRIVES or BTS_TASK_DEPARTS */
struct {
__u64 clock;
pid_t pid;
} event;
} variant;
};
/*
* The BTS state.
*
* This gives access to the raw DS state and adds functions to provide
* an arch-independent view of the BTS data.
*/
struct bts_trace {
struct ds_trace ds;
int (*read)(struct bts_tracer *tracer, const void *at,
struct bts_struct *out);
int (*write)(struct bts_tracer *tracer, const struct bts_struct *in);
};
/*
* The PEBS state.
*
* This gives access to the raw DS state and the PEBS-specific counter
* reset value.
*/
struct pebs_trace {
struct ds_trace ds;
/* the number of valid counters in the below array */
unsigned int counters;
#define MAX_PEBS_COUNTERS 4
/* the counter reset value */
unsigned long long counter_reset[MAX_PEBS_COUNTERS];
};
/*
* Read the BTS or PEBS trace.
*
* Returns a view on the trace collected for the parameter tracer.
*
* The view remains valid as long as the traced task is not running or
* the tracer is suspended.
* Writes into the trace buffer are not reflected.
*
* tracer: the tracer handle returned from ds_request_~()
*/
extern const struct bts_trace *ds_read_bts(struct bts_tracer *tracer);
extern const struct pebs_trace *ds_read_pebs(struct pebs_tracer *tracer);
/*
* Reset the write pointer of the BTS/PEBS buffer.
*
* Returns 0 on success; -Eerrno on error
*
* tracer: the tracer handle returned from ds_request_~()
*/
extern int ds_reset_bts(struct bts_tracer *tracer);
extern int ds_reset_pebs(struct pebs_tracer *tracer);
/*
* Set the PEBS counter reset value.
*
* Returns 0 on success; -Eerrno on error
*
* tracer: the tracer handle returned from ds_request_pebs()
* counter: the index of the counter
* value: the new counter reset value
*/
extern int ds_set_pebs_reset(struct pebs_tracer *tracer,
unsigned int counter, u64 value);
/*
* Initialization
*/
struct cpuinfo_x86;
extern void __cpuinit ds_init_intel(struct cpuinfo_x86 *);
/*
* Context switch work
*/
extern void ds_switch_to(struct task_struct *prev, struct task_struct *next);
#else /* CONFIG_X86_DS */
struct cpuinfo_x86;
static inline void __cpuinit ds_init_intel(struct cpuinfo_x86 *ignored) {}
static inline void ds_switch_to(struct task_struct *prev,
struct task_struct *next) {}
#endif /* CONFIG_X86_DS */
#endif /* _ASM_X86_DS_H */
@@ -41,12 +41,16 @@ struct arch_hw_breakpoint {
/* Total number of available HW breakpoint registers */
#define HBP_NUM 4
static inline int hw_breakpoint_slots(int type)
{
return HBP_NUM;
}
struct perf_event;
struct pmu;
-extern int arch_check_va_in_userspace(unsigned long va, u8 hbp_len);
-extern int arch_validate_hwbkpt_settings(struct perf_event *bp,
-struct task_struct *tsk);
+extern int arch_check_bp_in_kernelspace(struct perf_event *bp);
+extern int arch_validate_hwbkpt_settings(struct perf_event *bp);
extern int hw_breakpoint_exceptions_notify(struct notifier_block *unused,
unsigned long val, void *data);
......
@@ -68,6 +68,8 @@ struct insn {
const insn_byte_t *next_byte;
};
#define MAX_INSN_SIZE 16
#define X86_MODRM_MOD(modrm) (((modrm) & 0xc0) >> 6)
#define X86_MODRM_REG(modrm) (((modrm) & 0x38) >> 3)
#define X86_MODRM_RM(modrm) ((modrm) & 0x07)
......
@@ -24,6 +24,7 @@
#include <linux/types.h>
#include <linux/ptrace.h>
#include <linux/percpu.h>
#include <asm/insn.h>
#define __ARCH_WANT_KPROBES_INSN_SLOT
@@ -36,7 +37,6 @@ typedef u8 kprobe_opcode_t;
#define RELATIVEJUMP_SIZE 5
#define RELATIVECALL_OPCODE 0xe8
#define RELATIVE_ADDR_SIZE 4
-#define MAX_INSN_SIZE 16
#define MAX_STACK_SIZE 64
#define MIN_STACK_SIZE(ADDR) \
(((MAX_STACK_SIZE) < (((unsigned long)current_thread_info()) + \
......
@@ -71,11 +71,14 @@
#define MSR_IA32_LASTINTTOIP 0x000001de
/* DEBUGCTLMSR bits (others vary by model): */
-#define _DEBUGCTLMSR_LBR 0 /* last branch recording */
-#define _DEBUGCTLMSR_BTF 1 /* single-step on branches */
-#define DEBUGCTLMSR_LBR (1UL << _DEBUGCTLMSR_LBR)
-#define DEBUGCTLMSR_BTF (1UL << _DEBUGCTLMSR_BTF)
+#define DEBUGCTLMSR_LBR (1UL << 0) /* last branch recording */
+#define DEBUGCTLMSR_BTF (1UL << 1) /* single-step on branches */
+#define DEBUGCTLMSR_TR (1UL << 6)
+#define DEBUGCTLMSR_BTS (1UL << 7)
+#define DEBUGCTLMSR_BTINT (1UL << 8)
+#define DEBUGCTLMSR_BTS_OFF_OS (1UL << 9)
+#define DEBUGCTLMSR_BTS_OFF_USR (1UL << 10)
+#define DEBUGCTLMSR_FREEZE_LBRS_ON_PMI (1UL << 11)
#define MSR_IA32_MC0_CTL 0x00000400
#define MSR_IA32_MC0_STATUS 0x00000401
@@ -359,6 +362,8 @@
#define MSR_P4_U2L_ESCR0 0x000003b0
#define MSR_P4_U2L_ESCR1 0x000003b1
+#define MSR_P4_PEBS_MATRIX_VERT 0x000003f2
/* Intel Core-based CPU performance counters */
#define MSR_CORE_PERF_FIXED_CTR0 0x00000309
#define MSR_CORE_PERF_FIXED_CTR1 0x0000030a
......
@@ -5,7 +5,7 @@
 * Performance event hw details:
 */
-#define X86_PMC_MAX_GENERIC 8
+#define X86_PMC_MAX_GENERIC 32
#define X86_PMC_MAX_FIXED 3
#define X86_PMC_IDX_GENERIC 0
@@ -18,39 +18,31 @@
#define MSR_ARCH_PERFMON_EVENTSEL0 0x186
#define MSR_ARCH_PERFMON_EVENTSEL1 0x187
-#define ARCH_PERFMON_EVENTSEL_ENABLE (1 << 22)
-#define ARCH_PERFMON_EVENTSEL_ANY (1 << 21)
-#define ARCH_PERFMON_EVENTSEL_INT (1 << 20)
-#define ARCH_PERFMON_EVENTSEL_OS (1 << 17)
-#define ARCH_PERFMON_EVENTSEL_USR (1 << 16)
-/*
- * Includes eventsel and unit mask as well:
- */
-#define INTEL_ARCH_EVTSEL_MASK 0x000000FFULL
-#define INTEL_ARCH_UNIT_MASK 0x0000FF00ULL
-#define INTEL_ARCH_EDGE_MASK 0x00040000ULL
-#define INTEL_ARCH_INV_MASK 0x00800000ULL
-#define INTEL_ARCH_CNT_MASK 0xFF000000ULL
-#define INTEL_ARCH_EVENT_MASK (INTEL_ARCH_UNIT_MASK|INTEL_ARCH_EVTSEL_MASK)
-/*
- * filter mask to validate fixed counter events.
- * the following filters disqualify for fixed counters:
- * - inv
- * - edge
- * - cnt-mask
- * The other filters are supported by fixed counters.
- * The any-thread option is supported starting with v3.
- */
-#define INTEL_ARCH_FIXED_MASK \
-(INTEL_ARCH_CNT_MASK| \
- INTEL_ARCH_INV_MASK| \
- INTEL_ARCH_EDGE_MASK|\
- INTEL_ARCH_UNIT_MASK|\
- INTEL_ARCH_EVENT_MASK)
+#define ARCH_PERFMON_EVENTSEL_EVENT 0x000000FFULL
+#define ARCH_PERFMON_EVENTSEL_UMASK 0x0000FF00ULL
+#define ARCH_PERFMON_EVENTSEL_USR (1ULL << 16)
+#define ARCH_PERFMON_EVENTSEL_OS (1ULL << 17)
+#define ARCH_PERFMON_EVENTSEL_EDGE (1ULL << 18)
+#define ARCH_PERFMON_EVENTSEL_INT (1ULL << 20)
+#define ARCH_PERFMON_EVENTSEL_ANY (1ULL << 21)
+#define ARCH_PERFMON_EVENTSEL_ENABLE (1ULL << 22)
+#define ARCH_PERFMON_EVENTSEL_INV (1ULL << 23)
+#define ARCH_PERFMON_EVENTSEL_CMASK 0xFF000000ULL
+#define AMD64_EVENTSEL_EVENT \
+	(ARCH_PERFMON_EVENTSEL_EVENT | (0x0FULL << 32))
+#define INTEL_ARCH_EVENT_MASK \
+	(ARCH_PERFMON_EVENTSEL_UMASK | ARCH_PERFMON_EVENTSEL_EVENT)
+#define X86_RAW_EVENT_MASK \
+	(ARCH_PERFMON_EVENTSEL_EVENT | \
+	 ARCH_PERFMON_EVENTSEL_UMASK | \
+	 ARCH_PERFMON_EVENTSEL_EDGE | \
+	 ARCH_PERFMON_EVENTSEL_INV | \
+	 ARCH_PERFMON_EVENTSEL_CMASK)
+#define AMD64_RAW_EVENT_MASK \
+	(X86_RAW_EVENT_MASK | \
+	 AMD64_EVENTSEL_EVENT)
#define ARCH_PERFMON_UNHALTED_CORE_CYCLES_SEL 0x3c
#define ARCH_PERFMON_UNHALTED_CORE_CYCLES_UMASK (0x00 << 8)
@@ -67,7 +59,7 @@
union cpuid10_eax {
struct {
unsigned int version_id:8;
-unsigned int num_events:8;
+unsigned int num_counters:8;
unsigned int bit_width:8;
unsigned int mask_length:8;
} split;
@@ -76,7 +68,7 @@ union cpuid10_eax {
union cpuid10_edx {
struct {
-unsigned int num_events_fixed:4;
+unsigned int num_counters_fixed:4;
unsigned int reserved:28;
} split;
unsigned int full;
@@ -136,6 +128,18 @@ extern void perf_events_lapic_init(void);
#define PERF_EVENT_INDEX_OFFSET 0
/*
* Abuse bit 3 of the cpu eflags register to indicate proper PEBS IP fixups.
* This flag is otherwise unused and ABI specified to be 0, so nobody should
* care what we do with it.
*/
#define PERF_EFLAGS_EXACT (1UL << 3)
struct pt_regs;
extern unsigned long perf_instruction_pointer(struct pt_regs *regs);
extern unsigned long perf_misc_flags(struct pt_regs *regs);
#define perf_misc_flags(regs) perf_misc_flags(regs)
#else
static inline void init_hw_perf_events(void) { }
static inline void perf_events_lapic_init(void) { }
......
This diff is collapsed.
@@ -21,7 +21,6 @@ struct mm_struct;
#include <asm/msr.h>
#include <asm/desc_defs.h>
#include <asm/nops.h>
-#include <asm/ds.h>
#include <linux/personality.h>
#include <linux/cpumask.h>
@@ -29,6 +28,7 @@ struct mm_struct;
#include <linux/threads.h>
#include <linux/math64.h>
#include <linux/init.h>
#include <linux/err.h>
#define HBP_NUM 4
/*
@@ -473,10 +473,6 @@ struct thread_struct {
unsigned long iopl;
/* Max allowed port in the bitmap, in bytes: */
unsigned io_bitmap_max;
/* MSR_IA32_DEBUGCTLMSR value to switch in if TIF_DEBUGCTLMSR is set. */
unsigned long debugctlmsr;
/* Debug Store context; see asm/ds.h */
struct ds_context *ds_ctx;
};
static inline unsigned long native_get_debugreg(int regno)
@@ -803,7 +799,7 @@ extern void cpu_init(void);
static inline unsigned long get_debugctlmsr(void)
{
unsigned long debugctlmsr = 0;
#ifndef CONFIG_X86_DEBUGCTLMSR
if (boot_cpu_data.x86 < 6)
@@ -811,21 +807,6 @@ static inline unsigned long get_debugctlmsr(void)
#endif
rdmsrl(MSR_IA32_DEBUGCTLMSR, debugctlmsr);
return debugctlmsr;
}
static inline unsigned long get_debugctlmsr_on_cpu(int cpu)
{
u64 debugctlmsr = 0;
u32 val1, val2;
#ifndef CONFIG_X86_DEBUGCTLMSR
if (boot_cpu_data.x86 < 6)
return 0;
#endif
rdmsr_on_cpu(cpu, MSR_IA32_DEBUGCTLMSR, &val1, &val2);
debugctlmsr = val1 | ((u64)val2 << 32);
return debugctlmsr; return debugctlmsr;
} }
...@@ -838,18 +819,6 @@ static inline void update_debugctlmsr(unsigned long debugctlmsr) ...@@ -838,18 +819,6 @@ static inline void update_debugctlmsr(unsigned long debugctlmsr)
wrmsrl(MSR_IA32_DEBUGCTLMSR, debugctlmsr); wrmsrl(MSR_IA32_DEBUGCTLMSR, debugctlmsr);
} }
static inline void update_debugctlmsr_on_cpu(int cpu,
unsigned long debugctlmsr)
{
#ifndef CONFIG_X86_DEBUGCTLMSR
if (boot_cpu_data.x86 < 6)
return;
#endif
wrmsr_on_cpu(cpu, MSR_IA32_DEBUGCTLMSR,
(u32)((u64)debugctlmsr),
(u32)((u64)debugctlmsr >> 32));
}
/*
 * from system description table in BIOS. Mostly for MCA use, but
 * others may find it useful:
......
@@ -82,61 +82,6 @@
#ifndef __ASSEMBLY__
#include <linux/types.h>
#endif
/* configuration/status structure used in PTRACE_BTS_CONFIG and
PTRACE_BTS_STATUS commands.
*/
struct ptrace_bts_config {
/* requested or actual size of BTS buffer in bytes */
__u32 size;
/* bitmask of below flags */
__u32 flags;
/* buffer overflow signal */
__u32 signal;
/* actual size of bts_struct in bytes */
__u32 bts_size;
};
#endif /* __ASSEMBLY__ */
#define PTRACE_BTS_O_TRACE 0x1 /* branch trace */
#define PTRACE_BTS_O_SCHED 0x2 /* scheduling events w/ jiffies */
#define PTRACE_BTS_O_SIGNAL 0x4 /* send SIG<signal> on buffer overflow
instead of wrapping around */
#define PTRACE_BTS_O_ALLOC 0x8 /* (re)allocate buffer */
#define PTRACE_BTS_CONFIG 40
/* Configure branch trace recording.
ADDR points to a struct ptrace_bts_config.
DATA gives the size of that buffer.
A new buffer is allocated, if requested in the flags.
An overflow signal may only be requested for new buffers.
Returns the number of bytes read.
*/
#define PTRACE_BTS_STATUS 41
/* Return the current configuration in a struct ptrace_bts_config
pointed to by ADDR; DATA gives the size of that buffer.
Returns the number of bytes written.
*/
#define PTRACE_BTS_SIZE 42
/* Return the number of available BTS records for draining.
DATA and ADDR are ignored.
*/
#define PTRACE_BTS_GET 43
/* Get a single BTS record.
DATA defines the index into the BTS array, where 0 is the newest
entry, and higher indices refer to older entries.
ADDR is pointing to struct bts_struct (see asm/ds.h).
*/
#define PTRACE_BTS_CLEAR 44
/* Clear the BTS buffer.
DATA and ADDR are ignored.
*/
#define PTRACE_BTS_DRAIN 45
/* Read all available BTS records and clear the buffer.
ADDR points to an array of struct bts_struct.
DATA gives the size of that buffer.
BTS records are read from oldest to newest.
Returns number of BTS records drained.
*/
#endif /* _ASM_X86_PTRACE_ABI_H */
@@ -289,12 +289,6 @@ extern int do_get_thread_area(struct task_struct *p, int idx,
extern int do_set_thread_area(struct task_struct *p, int idx,
struct user_desc __user *info, int can_allocate);
#ifdef CONFIG_X86_PTRACE_BTS
extern void ptrace_bts_untrace(struct task_struct *tsk);
#define arch_ptrace_untrace(tsk) ptrace_bts_untrace(tsk)
#endif /* CONFIG_X86_PTRACE_BTS */
#endif /* __KERNEL__ */
#endif /* !__ASSEMBLY__ */
......
@@ -92,8 +92,7 @@ struct thread_info {
#define TIF_IO_BITMAP 22 /* uses I/O bitmap */
#define TIF_FREEZE 23 /* is freezing for suspend */
#define TIF_FORCED_TF 24 /* true if TF in eflags artificially */
-#define TIF_DEBUGCTLMSR 25 /* uses thread_struct.debugctlmsr */
-#define TIF_DS_AREA_MSR 26 /* uses thread_struct.ds_area_msr */
+#define TIF_BLOCKSTEP 25 /* set when we want DEBUGCTLMSR_BTF */
#define TIF_LAZY_MMU_UPDATES 27 /* task is updating the mmu lazily */
#define TIF_SYSCALL_TRACEPOINT 28 /* syscall tracepoint instrumentation */
@@ -115,8 +114,7 @@ struct thread_info {
#define _TIF_IO_BITMAP (1 << TIF_IO_BITMAP)
#define _TIF_FREEZE (1 << TIF_FREEZE)
#define _TIF_FORCED_TF (1 << TIF_FORCED_TF)
-#define _TIF_DEBUGCTLMSR (1 << TIF_DEBUGCTLMSR)
-#define _TIF_DS_AREA_MSR (1 << TIF_DS_AREA_MSR)
+#define _TIF_BLOCKSTEP (1 << TIF_BLOCKSTEP)
#define _TIF_LAZY_MMU_UPDATES (1 << TIF_LAZY_MMU_UPDATES)
#define _TIF_SYSCALL_TRACEPOINT (1 << TIF_SYSCALL_TRACEPOINT)
@@ -147,7 +145,7 @@ struct thread_info {
/* flags to check in __switch_to() */
#define _TIF_WORK_CTXSW \
-(_TIF_IO_BITMAP|_TIF_DEBUGCTLMSR|_TIF_DS_AREA_MSR|_TIF_NOTSC)
+(_TIF_IO_BITMAP|_TIF_NOTSC|_TIF_BLOCKSTEP)
#define _TIF_WORK_CTXSW_PREV (_TIF_WORK_CTXSW|_TIF_USER_RETURN_NOTIFY)
#define _TIF_WORK_CTXSW_NEXT (_TIF_WORK_CTXSW|_TIF_DEBUG)
......
@@ -47,8 +47,6 @@ obj-$(CONFIG_X86_TRAMPOLINE) += trampoline.o
obj-y += process.o
obj-y += i387.o xsave.o
obj-y += ptrace.o
obj-$(CONFIG_X86_DS) += ds.o
obj-$(CONFIG_X86_DS_SELFTEST) += ds_selftest.o
obj-$(CONFIG_X86_32) += tls.o
obj-$(CONFIG_IA32_EMULATION) += tls.o
obj-y += step.o
......
@@ -12,7 +12,6 @@
#include <asm/processor.h>
#include <asm/pgtable.h>
#include <asm/msr.h>
-#include <asm/ds.h>
#include <asm/bugs.h>
#include <asm/cpu.h>
@@ -388,7 +387,6 @@ static void __cpuinit init_intel(struct cpuinfo_x86 *c)
set_cpu_cap(c, X86_FEATURE_BTS);
if (!(l1 & (1<<12)))
set_cpu_cap(c, X86_FEATURE_PEBS);
-ds_init_intel(c);
}
if (c->x86 == 6 && c->x86_model == 29 && cpu_has_clflush)
......
This diff is collapsed.
@@ -2,7 +2,7 @@
static DEFINE_RAW_SPINLOCK(amd_nb_lock);
-static __initconst u64 amd_hw_cache_event_ids
+static __initconst const u64 amd_hw_cache_event_ids
[PERF_COUNT_HW_CACHE_MAX]
[PERF_COUNT_HW_CACHE_OP_MAX]
[PERF_COUNT_HW_CACHE_RESULT_MAX] =
@@ -111,22 +111,19 @@ static u64 amd_pmu_event_map(int hw_event)
return amd_perfmon_event_map[hw_event];
}
-static u64 amd_pmu_raw_event(u64 hw_event)
+static int amd_pmu_hw_config(struct perf_event *event)
{
-#define K7_EVNTSEL_EVENT_MASK 0xF000000FFULL
-#define K7_EVNTSEL_UNIT_MASK 0x00000FF00ULL
-#define K7_EVNTSEL_EDGE_MASK 0x000040000ULL
-#define K7_EVNTSEL_INV_MASK 0x000800000ULL
-#define K7_EVNTSEL_REG_MASK 0x0FF000000ULL
-#define K7_EVNTSEL_MASK \
-(K7_EVNTSEL_EVENT_MASK | \
- K7_EVNTSEL_UNIT_MASK | \
- K7_EVNTSEL_EDGE_MASK | \
- K7_EVNTSEL_INV_MASK | \
- K7_EVNTSEL_REG_MASK)
-return hw_event & K7_EVNTSEL_MASK;
+int ret = x86_pmu_hw_config(event);
+if (ret)
+return ret;
+if (event->attr.type != PERF_TYPE_RAW)
+return 0;
+event->hw.config |= event->attr.config & AMD64_RAW_EVENT_MASK;
+return 0;
}
/*
@@ -165,7 +162,7 @@ static void amd_put_event_constraints(struct cpu_hw_events *cpuc,
 * be removed on one CPU at a time AND PMU is disabled
 * when we come here
 */
-for (i = 0; i < x86_pmu.num_events; i++) {
+for (i = 0; i < x86_pmu.num_counters; i++) {
if (nb->owners[i] == event) {
cmpxchg(nb->owners+i, event, NULL);
break;
@@ -215,7 +212,7 @@ amd_get_event_constraints(struct cpu_hw_events *cpuc, struct perf_event *event)
struct hw_perf_event *hwc = &event->hw;
struct amd_nb *nb = cpuc->amd_nb;
struct perf_event *old = NULL;
-int max = x86_pmu.num_events;
+int max = x86_pmu.num_counters;
int i, j, k = -1;
/*
@@ -293,7 +290,7 @@ static struct amd_nb *amd_alloc_nb(int cpu, int nb_id)
/*
 * initialize all possible NB constraints
 */
-for (i = 0; i < x86_pmu.num_events; i++) {
+for (i = 0; i < x86_pmu.num_counters; i++) {
__set_bit(i, nb->event_constraints[i].idxmsk);
nb->event_constraints[i].weight = 1;
}
@@ -371,21 +368,22 @@ static void amd_pmu_cpu_dead(int cpu)
raw_spin_unlock(&amd_nb_lock);
}
-static __initconst struct x86_pmu amd_pmu = {
+static __initconst const struct x86_pmu amd_pmu = {
.name = "AMD",
.handle_irq = x86_pmu_handle_irq,
.disable_all = x86_pmu_disable_all,
.enable_all = x86_pmu_enable_all,
.enable = x86_pmu_enable_event,
.disable = x86_pmu_disable_event,
+.hw_config = amd_pmu_hw_config,
+.schedule_events = x86_schedule_events,
.eventsel = MSR_K7_EVNTSEL0,
.perfctr = MSR_K7_PERFCTR0,
.event_map = amd_pmu_event_map,
-.raw_event = amd_pmu_raw_event,
.max_events = ARRAY_SIZE(amd_perfmon_event_map),
-.num_events = 4,
-.event_bits = 48,
-.event_mask = (1ULL << 48) - 1,
+.num_counters = 4,
+.cntval_bits = 48,
+.cntval_mask = (1ULL << 48) - 1,
.apic = 1,
/* use highest bit to detect overflow */
.max_period = (1ULL << 47) - 1,
......
@@ -88,7 +88,7 @@ static u64 intel_pmu_event_map(int hw_event)
return intel_perfmon_event_map[hw_event];
}
-static __initconst u64 westmere_hw_cache_event_ids
+static __initconst const u64 westmere_hw_cache_event_ids
[PERF_COUNT_HW_CACHE_MAX]
[PERF_COUNT_HW_CACHE_OP_MAX]
[PERF_COUNT_HW_CACHE_RESULT_MAX] =
@@ -179,7 +179,7 @@ static __initconst u64 westmere_hw_cache_event_ids
},
};
-static __initconst u64 nehalem_hw_cache_event_ids
+static __initconst const u64 nehalem_hw_cache_event_ids
[PERF_COUNT_HW_CACHE_MAX]
[PERF_COUNT_HW_CACHE_OP_MAX]
[PERF_COUNT_HW_CACHE_RESULT_MAX] =
@@ -270,7 +270,7 @@ static __initconst u64 nehalem_hw_cache_event_ids
},
};
-static __initconst u64 core2_hw_cache_event_ids
+static __initconst const u64 core2_hw_cache_event_ids
[PERF_COUNT_HW_CACHE_MAX]
[PERF_COUNT_HW_CACHE_OP_MAX]
[PERF_COUNT_HW_CACHE_RESULT_MAX] =
@@ -361,7 +361,7 @@ static __initconst u64 core2_hw_cache_event_ids
},
};
-static __initconst u64 atom_hw_cache_event_ids
+static __initconst const u64 atom_hw_cache_event_ids
[PERF_COUNT_HW_CACHE_MAX]
[PERF_COUNT_HW_CACHE_OP_MAX]
[PERF_COUNT_HW_CACHE_RESULT_MAX] =
@@ -452,60 +452,6 @@ static __initconst u64 atom_hw_cache_event_ids
},
};
static u64 intel_pmu_raw_event(u64 hw_event)
{
#define CORE_EVNTSEL_EVENT_MASK 0x000000FFULL
#define CORE_EVNTSEL_UNIT_MASK 0x0000FF00ULL
#define CORE_EVNTSEL_EDGE_MASK 0x00040000ULL
#define CORE_EVNTSEL_INV_MASK 0x00800000ULL
#define CORE_EVNTSEL_REG_MASK 0xFF000000ULL
#define CORE_EVNTSEL_MASK \
(INTEL_ARCH_EVTSEL_MASK | \
INTEL_ARCH_UNIT_MASK | \
INTEL_ARCH_EDGE_MASK | \
INTEL_ARCH_INV_MASK | \
INTEL_ARCH_CNT_MASK)
return hw_event & CORE_EVNTSEL_MASK;
}
static void intel_pmu_enable_bts(u64 config)
{
unsigned long debugctlmsr;
debugctlmsr = get_debugctlmsr();
debugctlmsr |= X86_DEBUGCTL_TR;
debugctlmsr |= X86_DEBUGCTL_BTS;
debugctlmsr |= X86_DEBUGCTL_BTINT;
if (!(config & ARCH_PERFMON_EVENTSEL_OS))
debugctlmsr |= X86_DEBUGCTL_BTS_OFF_OS;
if (!(config & ARCH_PERFMON_EVENTSEL_USR))
debugctlmsr |= X86_DEBUGCTL_BTS_OFF_USR;
update_debugctlmsr(debugctlmsr);
}
static void intel_pmu_disable_bts(void)
{
struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
unsigned long debugctlmsr;
if (!cpuc->ds)
return;
debugctlmsr = get_debugctlmsr();
debugctlmsr &=
~(X86_DEBUGCTL_TR | X86_DEBUGCTL_BTS | X86_DEBUGCTL_BTINT |
X86_DEBUGCTL_BTS_OFF_OS | X86_DEBUGCTL_BTS_OFF_USR);
update_debugctlmsr(debugctlmsr);
}
static void intel_pmu_disable_all(void)
{
struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
@@ -514,12 +460,17 @@ static void intel_pmu_disable_all(void)
if (test_bit(X86_PMC_IDX_FIXED_BTS, cpuc->active_mask))
intel_pmu_disable_bts();
+intel_pmu_pebs_disable_all();
+intel_pmu_lbr_disable_all();
}
-static void intel_pmu_enable_all(void)
+static void intel_pmu_enable_all(int added)
{
struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
+intel_pmu_pebs_enable_all();
+intel_pmu_lbr_enable_all();
wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, x86_pmu.intel_ctrl);
if (test_bit(X86_PMC_IDX_FIXED_BTS, cpuc->active_mask)) {
@@ -533,6 +484,42 @@ static void intel_pmu_enable_all(void)
}
}
/*
* Workaround for:
* Intel Errata AAK100 (model 26)
* Intel Errata AAP53 (model 30)
* Intel Errata BD53 (model 44)
*
* These chips need to be 'reset' when adding counters by programming
* the magic three (non counting) events 0x4300D2, 0x4300B1 and 0x4300B5
* either in sequence on the same PMC or on different PMCs.
*/
static void intel_pmu_nhm_enable_all(int added)
{
if (added) {
struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
int i;
wrmsrl(MSR_ARCH_PERFMON_EVENTSEL0 + 0, 0x4300D2);
wrmsrl(MSR_ARCH_PERFMON_EVENTSEL0 + 1, 0x4300B1);
wrmsrl(MSR_ARCH_PERFMON_EVENTSEL0 + 2, 0x4300B5);
wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0x3);
wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0x0);
for (i = 0; i < 3; i++) {
struct perf_event *event = cpuc->events[i];
if (!event)
continue;
__x86_pmu_enable_event(&event->hw,
ARCH_PERFMON_EVENTSEL_ENABLE);
}
}
intel_pmu_enable_all(added);
}
static inline u64 intel_pmu_get_status(void)
{
u64 status;
@@ -547,8 +534,7 @@ static inline void intel_pmu_ack_status(u64 ack)
wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, ack);
}
-static inline void
-intel_pmu_disable_fixed(struct hw_perf_event *hwc)
+static void intel_pmu_disable_fixed(struct hw_perf_event *hwc)
{
int idx = hwc->idx - X86_PMC_IDX_FIXED;
u64 ctrl_val, mask;
@@ -557,71 +543,10 @@ intel_pmu_disable_fixed(struct hw_perf_event *hwc)
rdmsrl(hwc->config_base, ctrl_val);
ctrl_val &= ~mask;
-(void)checking_wrmsrl(hwc->config_base, ctrl_val);
+wrmsrl(hwc->config_base, ctrl_val);
}
static void intel_pmu_drain_bts_buffer(void)
{
struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
struct debug_store *ds = cpuc->ds;
struct bts_record {
u64 from;
u64 to;
u64 flags;
};
struct perf_event *event = cpuc->events[X86_PMC_IDX_FIXED_BTS];
struct bts_record *at, *top;
struct perf_output_handle handle;
struct perf_event_header header;
struct perf_sample_data data;
struct pt_regs regs;
if (!event)
return;
if (!ds)
return;
at = (struct bts_record *)(unsigned long)ds->bts_buffer_base;
top = (struct bts_record *)(unsigned long)ds->bts_index;
if (top <= at)
return;
ds->bts_index = ds->bts_buffer_base;
perf_sample_data_init(&data, 0);
data.period = event->hw.last_period;
regs.ip = 0;
/*
* Prepare a generic sample, i.e. fill in the invariant fields.
* We will overwrite the from and to address before we output
* the sample.
*/
perf_prepare_sample(&header, &data, event, &regs);
if (perf_output_begin(&handle, event,
header.size * (top - at), 1, 1))
return;
for (; at < top; at++) {
data.ip = at->from;
data.addr = at->to;
perf_output_sample(&handle, &header, &data, event);
}
perf_output_end(&handle);
/* There's new data available. */
event->hw.interrupts++;
event->pending_kill = POLL_IN;
} }
-static inline void
-intel_pmu_disable_event(struct perf_event *event)
+static void intel_pmu_disable_event(struct perf_event *event)
{
struct hw_perf_event *hwc = &event->hw;
@@ -637,14 +562,15 @@ intel_pmu_disable_event(struct perf_event *event)
}
x86_pmu_disable_event(event);
+if (unlikely(event->attr.precise_ip))
+intel_pmu_pebs_disable(event);
}
-static inline void
-intel_pmu_enable_fixed(struct hw_perf_event *hwc)
+static void intel_pmu_enable_fixed(struct hw_perf_event *hwc)
{
int idx = hwc->idx - X86_PMC_IDX_FIXED;
u64 ctrl_val, bits, mask;
-int err;
/*
 * Enable IRQ generation (0x8),
@@ -669,7 +595,7 @@ intel_pmu_enable_fixed(struct hw_perf_event *hwc)
rdmsrl(hwc->config_base, ctrl_val);
ctrl_val &= ~mask;
ctrl_val |= bits;
-err = checking_wrmsrl(hwc->config_base, ctrl_val);
+wrmsrl(hwc->config_base, ctrl_val);
}
static void intel_pmu_enable_event(struct perf_event *event)
@@ -689,7 +615,10 @@ static void intel_pmu_enable_event(struct perf_event *event)
return;
}
-__x86_pmu_enable_event(hwc);
+if (unlikely(event->attr.precise_ip))
+intel_pmu_pebs_enable(event);
+__x86_pmu_enable_event(hwc, ARCH_PERFMON_EVENTSEL_ENABLE);
}
/*
...@@ -708,20 +637,20 @@ static void intel_pmu_reset(void) ...@@ -708,20 +637,20 @@ static void intel_pmu_reset(void)
unsigned long flags; unsigned long flags;
int idx; int idx;
if (!x86_pmu.num_events) if (!x86_pmu.num_counters)
return; return;
local_irq_save(flags); local_irq_save(flags);
printk("clearing PMU state on CPU#%d\n", smp_processor_id()); printk("clearing PMU state on CPU#%d\n", smp_processor_id());
for (idx = 0; idx < x86_pmu.num_events; idx++) { for (idx = 0; idx < x86_pmu.num_counters; idx++) {
checking_wrmsrl(x86_pmu.eventsel + idx, 0ull); checking_wrmsrl(x86_pmu.eventsel + idx, 0ull);
checking_wrmsrl(x86_pmu.perfctr + idx, 0ull); checking_wrmsrl(x86_pmu.perfctr + idx, 0ull);
} }
for (idx = 0; idx < x86_pmu.num_events_fixed; idx++) { for (idx = 0; idx < x86_pmu.num_counters_fixed; idx++)
checking_wrmsrl(MSR_ARCH_PERFMON_FIXED_CTR0 + idx, 0ull); checking_wrmsrl(MSR_ARCH_PERFMON_FIXED_CTR0 + idx, 0ull);
}
if (ds) if (ds)
ds->bts_index = ds->bts_buffer_base; ds->bts_index = ds->bts_buffer_base;
...@@ -747,7 +676,7 @@ static int intel_pmu_handle_irq(struct pt_regs *regs) ...@@ -747,7 +676,7 @@ static int intel_pmu_handle_irq(struct pt_regs *regs)
intel_pmu_drain_bts_buffer(); intel_pmu_drain_bts_buffer();
status = intel_pmu_get_status(); status = intel_pmu_get_status();
if (!status) { if (!status) {
intel_pmu_enable_all(); intel_pmu_enable_all(0);
return 0; return 0;
} }
...@@ -762,6 +691,15 @@ static int intel_pmu_handle_irq(struct pt_regs *regs) ...@@ -762,6 +691,15 @@ static int intel_pmu_handle_irq(struct pt_regs *regs)
inc_irq_stat(apic_perf_irqs); inc_irq_stat(apic_perf_irqs);
ack = status; ack = status;
intel_pmu_lbr_read();
/*
* PEBS overflow sets bit 62 in the global status register
*/
if (__test_and_clear_bit(62, (unsigned long *)&status))
x86_pmu.drain_pebs(regs);
for_each_set_bit(bit, (unsigned long *)&status, X86_PMC_IDX_MAX) { for_each_set_bit(bit, (unsigned long *)&status, X86_PMC_IDX_MAX) {
struct perf_event *event = cpuc->events[bit]; struct perf_event *event = cpuc->events[bit];
...@@ -787,26 +725,22 @@ static int intel_pmu_handle_irq(struct pt_regs *regs) ...@@ -787,26 +725,22 @@ static int intel_pmu_handle_irq(struct pt_regs *regs)
goto again; goto again;
done: done:
intel_pmu_enable_all(); intel_pmu_enable_all(0);
return 1; return 1;
} }
static struct event_constraint bts_constraint =
EVENT_CONSTRAINT(0, 1ULL << X86_PMC_IDX_FIXED_BTS, 0);
static struct event_constraint *
-intel_special_constraints(struct perf_event *event)
+intel_bts_constraints(struct perf_event *event)
{
-       unsigned int hw_event;
-
-       hw_event = event->hw.config & INTEL_ARCH_EVENT_MASK;
-
-       if (unlikely((hw_event ==
-                     x86_pmu.event_map(PERF_COUNT_HW_BRANCH_INSTRUCTIONS)) &&
-                    (event->hw.sample_period == 1))) {
-               return &bts_constraint;
-       }
+       struct hw_perf_event *hwc = &event->hw;
+       unsigned int hw_event, bts_event;
+
+       hw_event = hwc->config & INTEL_ARCH_EVENT_MASK;
+       bts_event = x86_pmu.event_map(PERF_COUNT_HW_BRANCH_INSTRUCTIONS);
+
+       if (unlikely(hw_event == bts_event && hwc->sample_period == 1))
+               return &bts_constraint;

        return NULL;
}
...@@ -815,24 +749,53 @@ intel_get_event_constraints(struct cpu_hw_events *cpuc, struct perf_event *event
{
        struct event_constraint *c;

-       c = intel_special_constraints(event);
+       c = intel_bts_constraints(event);
+       if (c)
+               return c;
+
+       c = intel_pebs_constraints(event);
        if (c)
                return c;

        return x86_get_event_constraints(cpuc, event);
}
-static __initconst struct x86_pmu core_pmu = {
+static int intel_pmu_hw_config(struct perf_event *event)
+{
+       int ret = x86_pmu_hw_config(event);
+
+       if (ret)
+               return ret;
+
+       if (event->attr.type != PERF_TYPE_RAW)
+               return 0;
+
+       if (!(event->attr.config & ARCH_PERFMON_EVENTSEL_ANY))
+               return 0;
+
+       if (x86_pmu.version < 3)
+               return -EINVAL;
+
+       if (perf_paranoid_cpu() && !capable(CAP_SYS_ADMIN))
+               return -EACCES;
+
+       event->hw.config |= ARCH_PERFMON_EVENTSEL_ANY;
+
+       return 0;
+}
+
+static __initconst const struct x86_pmu core_pmu = {
        .name = "core",
        .handle_irq = x86_pmu_handle_irq,
        .disable_all = x86_pmu_disable_all,
        .enable_all = x86_pmu_enable_all,
        .enable = x86_pmu_enable_event,
        .disable = x86_pmu_disable_event,
+       .hw_config = x86_pmu_hw_config,
+       .schedule_events = x86_schedule_events,
        .eventsel = MSR_ARCH_PERFMON_EVENTSEL0,
        .perfctr = MSR_ARCH_PERFMON_PERFCTR0,
        .event_map = intel_pmu_event_map,
-       .raw_event = intel_pmu_raw_event,
        .max_events = ARRAY_SIZE(intel_perfmon_event_map),
        .apic = 1,
        /*
...@@ -845,17 +808,32 @@ static __initconst struct x86_pmu core_pmu = {
        .event_constraints = intel_core_event_constraints,
};
-static __initconst struct x86_pmu intel_pmu = {
+static void intel_pmu_cpu_starting(int cpu)
+{
+       init_debug_store_on_cpu(cpu);
+       /*
+        * Deal with CPUs that don't clear their LBRs on power-up.
+        */
+       intel_pmu_lbr_reset();
+}
+
+static void intel_pmu_cpu_dying(int cpu)
+{
+       fini_debug_store_on_cpu(cpu);
+}
+
+static __initconst const struct x86_pmu intel_pmu = {
        .name = "Intel",
        .handle_irq = intel_pmu_handle_irq,
        .disable_all = intel_pmu_disable_all,
        .enable_all = intel_pmu_enable_all,
        .enable = intel_pmu_enable_event,
        .disable = intel_pmu_disable_event,
+       .hw_config = intel_pmu_hw_config,
+       .schedule_events = x86_schedule_events,
        .eventsel = MSR_ARCH_PERFMON_EVENTSEL0,
        .perfctr = MSR_ARCH_PERFMON_PERFCTR0,
        .event_map = intel_pmu_event_map,
-       .raw_event = intel_pmu_raw_event,
        .max_events = ARRAY_SIZE(intel_perfmon_event_map),
        .apic = 1,
        /*
...@@ -864,14 +842,38 @@ static __initconst struct x86_pmu intel_pmu = {
         * the generic event period:
         */
        .max_period = (1ULL << 31) - 1,
-       .enable_bts = intel_pmu_enable_bts,
-       .disable_bts = intel_pmu_disable_bts,
        .get_event_constraints = intel_get_event_constraints,

-       .cpu_starting = init_debug_store_on_cpu,
-       .cpu_dying = fini_debug_store_on_cpu,
+       .cpu_starting = intel_pmu_cpu_starting,
+       .cpu_dying = intel_pmu_cpu_dying,
};
static void intel_clovertown_quirks(void)
{
/*
* PEBS is unreliable due to:
*
* AJ67 - PEBS may experience CPL leaks
* AJ68 - PEBS PMI may be delayed by one event
* AJ69 - GLOBAL_STATUS[62] will only be set when DEBUGCTL[12]
* AJ106 - FREEZE_LBRS_ON_PMI doesn't work in combination with PEBS
*
* AJ67 could be worked around by restricting the OS/USR flags.
* AJ69 could be worked around by setting PMU_FREEZE_ON_PMI.
*
* AJ106 could possibly be worked around by not allowing LBR
* usage from PEBS, including the fixup.
* AJ68 could possibly be worked around by always programming
* a pebs_event_reset[0] value and coping with the lost events.
*
* But taken together it might just make sense to not enable PEBS on
* these chips.
*/
printk(KERN_WARNING "PEBS disabled due to CPU errata.\n");
x86_pmu.pebs = 0;
x86_pmu.pebs_constraints = NULL;
}
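The quirk above is invoked through a callback slot (x86_pmu.quirks, filled in by the model switch of intel_pmu_init() further down) so the fixup runs exactly once at boot. As a rough, standalone illustration of that callback pattern, with made-up names rather than the kernel's structures:

/* Illustrative sketch of a boot-time quirk hook; all names are invented. */
#include <stdio.h>

struct pmu_desc {
        int pebs;                               /* feature bit a quirk may clear */
        void (*quirks)(struct pmu_desc *pmu);   /* optional model-specific fixup */
};

static void clovertown_like_quirk(struct pmu_desc *pmu)
{
        printf("PEBS disabled due to CPU errata.\n");
        pmu->pebs = 0;
}

static void pmu_init(struct pmu_desc *pmu)
{
        if (pmu->quirks)
                pmu->quirks(pmu);       /* run the fixup exactly once */
}

int main(void)
{
        struct pmu_desc pmu = { .pebs = 1, .quirks = clovertown_like_quirk };

        pmu_init(&pmu);
        return pmu.pebs;        /* 0 after the quirk has run */
}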
static __init int intel_pmu_init(void)
{
        union cpuid10_edx edx;
...@@ -881,12 +883,13 @@ static __init int intel_pmu_init(void)
        int version;

        if (!cpu_has(&boot_cpu_data, X86_FEATURE_ARCH_PERFMON)) {
-               /* check for P6 processor family */
-               if (boot_cpu_data.x86 == 6) {
-                       return p6_pmu_init();
-               } else {
-                       return -ENODEV;
-               }
+               switch (boot_cpu_data.x86) {
+               case 0x6:
+                       return p6_pmu_init();
+               case 0xf:
+                       return p4_pmu_init();
+               }
+               return -ENODEV;
        }
        /*
...@@ -904,16 +907,28 @@ static __init int intel_pmu_init(void)
        x86_pmu = intel_pmu;

        x86_pmu.version = version;
-       x86_pmu.num_events = eax.split.num_events;
-       x86_pmu.event_bits = eax.split.bit_width;
-       x86_pmu.event_mask = (1ULL << eax.split.bit_width) - 1;
+       x86_pmu.num_counters = eax.split.num_counters;
+       x86_pmu.cntval_bits = eax.split.bit_width;
+       x86_pmu.cntval_mask = (1ULL << eax.split.bit_width) - 1;

        /*
         * Quirk: v2 perfmon does not report fixed-purpose events, so
         * assume at least 3 events:
         */
        if (version > 1)
-               x86_pmu.num_events_fixed = max((int)edx.split.num_events_fixed, 3);
+               x86_pmu.num_counters_fixed = max((int)edx.split.num_counters_fixed, 3);
/*
* v2 and above have a perf capabilities MSR
*/
if (version > 1) {
u64 capabilities;
rdmsrl(MSR_IA32_PERF_CAPABILITIES, capabilities);
x86_pmu.intel_cap.capabilities = capabilities;
}
intel_ds_init();
        /*
         * Install the hw-cache-events table:
...@@ -924,12 +939,15 @@ static __init int intel_pmu_init(void)
                break;

        case 15: /* original 65 nm celeron/pentium/core2/xeon, "Merom"/"Conroe" */
+               x86_pmu.quirks = intel_clovertown_quirks;
        case 22: /* single-core 65 nm celeron/core2solo "Merom-L"/"Conroe-L" */
        case 23: /* current 45 nm celeron/core2/xeon "Penryn"/"Wolfdale" */
        case 29: /* six-core 45 nm xeon "Dunnington" */
                memcpy(hw_cache_event_ids, core2_hw_cache_event_ids,
                       sizeof(hw_cache_event_ids));

+               intel_pmu_lbr_init_core();
+
                x86_pmu.event_constraints = intel_core2_event_constraints;
                pr_cont("Core2 events, ");
                break;
...@@ -940,13 +958,19 @@ static __init int intel_pmu_init(void)
                memcpy(hw_cache_event_ids, nehalem_hw_cache_event_ids,
                       sizeof(hw_cache_event_ids));

+               intel_pmu_lbr_init_nhm();
+
                x86_pmu.event_constraints = intel_nehalem_event_constraints;
-               pr_cont("Nehalem/Corei7 events, ");
+               x86_pmu.enable_all = intel_pmu_nhm_enable_all;
+               pr_cont("Nehalem events, ");
                break;

        case 28: /* Atom */
                memcpy(hw_cache_event_ids, atom_hw_cache_event_ids,
                       sizeof(hw_cache_event_ids));

+               intel_pmu_lbr_init_atom();
+
                x86_pmu.event_constraints = intel_gen_event_constraints;
                pr_cont("Atom events, ");
                break;
...@@ -956,7 +980,10 @@ static __init int intel_pmu_init(void)
                memcpy(hw_cache_event_ids, westmere_hw_cache_event_ids,
                       sizeof(hw_cache_event_ids));

+               intel_pmu_lbr_init_nhm();
+
                x86_pmu.event_constraints = intel_westmere_event_constraints;
+               x86_pmu.enable_all = intel_pmu_nhm_enable_all;
                pr_cont("Westmere events, ");
                break;
......
#ifdef CONFIG_CPU_SUP_INTEL
/* The maximal number of PEBS events: */
#define MAX_PEBS_EVENTS 4
/* The size of a BTS record in bytes: */
#define BTS_RECORD_SIZE 24
#define BTS_BUFFER_SIZE (PAGE_SIZE << 4)
#define PEBS_BUFFER_SIZE PAGE_SIZE
/*
* pebs_record_32 for p4 and core not supported
struct pebs_record_32 {
u32 flags, ip;
u32 ax, bc, cx, dx;
u32 si, di, bp, sp;
};
*/
struct pebs_record_core {
u64 flags, ip;
u64 ax, bx, cx, dx;
u64 si, di, bp, sp;
u64 r8, r9, r10, r11;
u64 r12, r13, r14, r15;
};
struct pebs_record_nhm {
u64 flags, ip;
u64 ax, bx, cx, dx;
u64 si, di, bp, sp;
u64 r8, r9, r10, r11;
u64 r12, r13, r14, r15;
u64 status, dla, dse, lat;
};
/*
* A debug store configuration.
*
* We only support architectures that use 64bit fields.
*/
struct debug_store {
u64 bts_buffer_base;
u64 bts_index;
u64 bts_absolute_maximum;
u64 bts_interrupt_threshold;
u64 pebs_buffer_base;
u64 pebs_index;
u64 pebs_absolute_maximum;
u64 pebs_interrupt_threshold;
u64 pebs_event_reset[MAX_PEBS_EVENTS];
};
static void init_debug_store_on_cpu(int cpu)
{
struct debug_store *ds = per_cpu(cpu_hw_events, cpu).ds;
if (!ds)
return;
wrmsr_on_cpu(cpu, MSR_IA32_DS_AREA,
(u32)((u64)(unsigned long)ds),
(u32)((u64)(unsigned long)ds >> 32));
}
static void fini_debug_store_on_cpu(int cpu)
{
if (!per_cpu(cpu_hw_events, cpu).ds)
return;
wrmsr_on_cpu(cpu, MSR_IA32_DS_AREA, 0, 0);
}
static void release_ds_buffers(void)
{
int cpu;
if (!x86_pmu.bts && !x86_pmu.pebs)
return;
get_online_cpus();
for_each_online_cpu(cpu)
fini_debug_store_on_cpu(cpu);
for_each_possible_cpu(cpu) {
struct debug_store *ds = per_cpu(cpu_hw_events, cpu).ds;
if (!ds)
continue;
per_cpu(cpu_hw_events, cpu).ds = NULL;
kfree((void *)(unsigned long)ds->pebs_buffer_base);
kfree((void *)(unsigned long)ds->bts_buffer_base);
kfree(ds);
}
put_online_cpus();
}
static int reserve_ds_buffers(void)
{
int cpu, err = 0;
if (!x86_pmu.bts && !x86_pmu.pebs)
return 0;
get_online_cpus();
for_each_possible_cpu(cpu) {
struct debug_store *ds;
void *buffer;
int max, thresh;
err = -ENOMEM;
ds = kzalloc(sizeof(*ds), GFP_KERNEL);
if (unlikely(!ds))
break;
per_cpu(cpu_hw_events, cpu).ds = ds;
if (x86_pmu.bts) {
buffer = kzalloc(BTS_BUFFER_SIZE, GFP_KERNEL);
if (unlikely(!buffer))
break;
max = BTS_BUFFER_SIZE / BTS_RECORD_SIZE;
thresh = max / 16;
ds->bts_buffer_base = (u64)(unsigned long)buffer;
ds->bts_index = ds->bts_buffer_base;
ds->bts_absolute_maximum = ds->bts_buffer_base +
max * BTS_RECORD_SIZE;
ds->bts_interrupt_threshold = ds->bts_absolute_maximum -
thresh * BTS_RECORD_SIZE;
}
if (x86_pmu.pebs) {
buffer = kzalloc(PEBS_BUFFER_SIZE, GFP_KERNEL);
if (unlikely(!buffer))
break;
max = PEBS_BUFFER_SIZE / x86_pmu.pebs_record_size;
ds->pebs_buffer_base = (u64)(unsigned long)buffer;
ds->pebs_index = ds->pebs_buffer_base;
ds->pebs_absolute_maximum = ds->pebs_buffer_base +
max * x86_pmu.pebs_record_size;
/*
* Always use single record PEBS
*/
ds->pebs_interrupt_threshold = ds->pebs_buffer_base +
x86_pmu.pebs_record_size;
}
err = 0;
}
if (err)
release_ds_buffers();
else {
for_each_online_cpu(cpu)
init_debug_store_on_cpu(cpu);
}
put_online_cpus();
return err;
}
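As a worked example of the threshold arithmetic in reserve_ds_buffers() above, assuming 4 KiB pages (so BTS_BUFFER_SIZE is 64 KiB) and the 24-byte BTS record defined earlier; PEBS, by contrast, keeps its threshold one record past the base so every record raises an interrupt:

/* Standalone rerun of the BTS buffer math above (assumes 4 KiB pages). */
#include <stdint.h>
#include <stdio.h>

#define PG_SIZE         4096u
#define BTS_BUF_SIZE    (PG_SIZE << 4)          /* 65536 bytes */
#define BTS_REC_SIZE    24u

int main(void)
{
        uint64_t base   = 0x1000;                       /* stand-in for kzalloc() */
        unsigned max    = BTS_BUF_SIZE / BTS_REC_SIZE;  /* 2730 records */
        unsigned thresh = max / 16;                     /* 170 records of head room */

        uint64_t abs_max   = base + (uint64_t)max * BTS_REC_SIZE;
        uint64_t irq_thres = abs_max - (uint64_t)thresh * BTS_REC_SIZE;

        /* the PMI fires once bts_index crosses irq_thres, leaving ~1/16th
         * of the buffer free for records still in flight */
        printf("max=%u thresh=%u headroom=%llu bytes\n",
               max, thresh, (unsigned long long)(abs_max - irq_thres));
        return 0;
}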
/*
* BTS
*/
static struct event_constraint bts_constraint =
EVENT_CONSTRAINT(0, 1ULL << X86_PMC_IDX_FIXED_BTS, 0);
static void intel_pmu_enable_bts(u64 config)
{
unsigned long debugctlmsr;
debugctlmsr = get_debugctlmsr();
debugctlmsr |= DEBUGCTLMSR_TR;
debugctlmsr |= DEBUGCTLMSR_BTS;
debugctlmsr |= DEBUGCTLMSR_BTINT;
if (!(config & ARCH_PERFMON_EVENTSEL_OS))
debugctlmsr |= DEBUGCTLMSR_BTS_OFF_OS;
if (!(config & ARCH_PERFMON_EVENTSEL_USR))
debugctlmsr |= DEBUGCTLMSR_BTS_OFF_USR;
update_debugctlmsr(debugctlmsr);
}
static void intel_pmu_disable_bts(void)
{
struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
unsigned long debugctlmsr;
if (!cpuc->ds)
return;
debugctlmsr = get_debugctlmsr();
debugctlmsr &=
~(DEBUGCTLMSR_TR | DEBUGCTLMSR_BTS | DEBUGCTLMSR_BTINT |
DEBUGCTLMSR_BTS_OFF_OS | DEBUGCTLMSR_BTS_OFF_USR);
update_debugctlmsr(debugctlmsr);
}
static void intel_pmu_drain_bts_buffer(void)
{
struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
struct debug_store *ds = cpuc->ds;
struct bts_record {
u64 from;
u64 to;
u64 flags;
};
struct perf_event *event = cpuc->events[X86_PMC_IDX_FIXED_BTS];
struct bts_record *at, *top;
struct perf_output_handle handle;
struct perf_event_header header;
struct perf_sample_data data;
struct pt_regs regs;
if (!event)
return;
if (!ds)
return;
at = (struct bts_record *)(unsigned long)ds->bts_buffer_base;
top = (struct bts_record *)(unsigned long)ds->bts_index;
if (top <= at)
return;
ds->bts_index = ds->bts_buffer_base;
perf_sample_data_init(&data, 0);
data.period = event->hw.last_period;
regs.ip = 0;
/*
* Prepare a generic sample, i.e. fill in the invariant fields.
* We will overwrite the from and to address before we output
* the sample.
*/
perf_prepare_sample(&header, &data, event, &regs);
if (perf_output_begin(&handle, event, header.size * (top - at), 1, 1))
return;
for (; at < top; at++) {
data.ip = at->from;
data.addr = at->to;
perf_output_sample(&handle, &header, &data, event);
}
perf_output_end(&handle);
/* There's new data available. */
event->hw.interrupts++;
event->pending_kill = POLL_IN;
}
/*
* PEBS
*/
static struct event_constraint intel_core_pebs_events[] = {
PEBS_EVENT_CONSTRAINT(0x00c0, 0x1), /* INSTR_RETIRED.ANY */
PEBS_EVENT_CONSTRAINT(0xfec1, 0x1), /* X87_OPS_RETIRED.ANY */
PEBS_EVENT_CONSTRAINT(0x00c5, 0x1), /* BR_INST_RETIRED.MISPRED */
	PEBS_EVENT_CONSTRAINT(0x1fc7, 0x1), /* SIMD_INST_RETIRED.ANY */
PEBS_EVENT_CONSTRAINT(0x01cb, 0x1), /* MEM_LOAD_RETIRED.L1D_MISS */
PEBS_EVENT_CONSTRAINT(0x02cb, 0x1), /* MEM_LOAD_RETIRED.L1D_LINE_MISS */
PEBS_EVENT_CONSTRAINT(0x04cb, 0x1), /* MEM_LOAD_RETIRED.L2_MISS */
PEBS_EVENT_CONSTRAINT(0x08cb, 0x1), /* MEM_LOAD_RETIRED.L2_LINE_MISS */
PEBS_EVENT_CONSTRAINT(0x10cb, 0x1), /* MEM_LOAD_RETIRED.DTLB_MISS */
EVENT_CONSTRAINT_END
};
static struct event_constraint intel_nehalem_pebs_events[] = {
PEBS_EVENT_CONSTRAINT(0x00c0, 0xf), /* INSTR_RETIRED.ANY */
PEBS_EVENT_CONSTRAINT(0xfec1, 0xf), /* X87_OPS_RETIRED.ANY */
PEBS_EVENT_CONSTRAINT(0x00c5, 0xf), /* BR_INST_RETIRED.MISPRED */
	PEBS_EVENT_CONSTRAINT(0x1fc7, 0xf), /* SIMD_INST_RETIRED.ANY */
PEBS_EVENT_CONSTRAINT(0x01cb, 0xf), /* MEM_LOAD_RETIRED.L1D_MISS */
PEBS_EVENT_CONSTRAINT(0x02cb, 0xf), /* MEM_LOAD_RETIRED.L1D_LINE_MISS */
PEBS_EVENT_CONSTRAINT(0x04cb, 0xf), /* MEM_LOAD_RETIRED.L2_MISS */
PEBS_EVENT_CONSTRAINT(0x08cb, 0xf), /* MEM_LOAD_RETIRED.L2_LINE_MISS */
PEBS_EVENT_CONSTRAINT(0x10cb, 0xf), /* MEM_LOAD_RETIRED.DTLB_MISS */
EVENT_CONSTRAINT_END
};
static struct event_constraint *
intel_pebs_constraints(struct perf_event *event)
{
struct event_constraint *c;
if (!event->attr.precise_ip)
return NULL;
if (x86_pmu.pebs_constraints) {
for_each_event_constraint(c, x86_pmu.pebs_constraints) {
if ((event->hw.config & c->cmask) == c->code)
return c;
}
}
return &emptyconstraint;
}
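The match in intel_pebs_constraints() is a plain masked compare, (config & cmask) == code, where the constraint code packs the event-select byte and the unit-mask byte. A small sketch of that lookup, assuming the mask covers just those low 16 bits (as INTEL_ARCH_EVENT_MASK does for the tables above):

/* Sketch of the masked-compare constraint lookup; the 0xffff mask is an
 * assumption standing in for INTEL_ARCH_EVENT_MASK. */
#include <stdint.h>
#include <stdio.h>

struct pebs_constraint { uint64_t code, cmask; const char *name; };

static const struct pebs_constraint table[] = {
        { 0x00c0, 0xffff, "INSTR_RETIRED.ANY" },
        { 0x01cb, 0xffff, "MEM_LOAD_RETIRED.L1D_MISS" },
        { 0, 0, NULL }
};

int main(void)
{
        /* event-select 0xcb, umask 0x01, plus USR/OS/ENABLE bits up high */
        uint64_t config = 0x4301cbULL;

        for (const struct pebs_constraint *c = table; c->cmask; c++)
                if ((config & c->cmask) == c->code)
                        printf("config %#llx matches %s\n",
                               (unsigned long long)config, c->name);
        return 0;
}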
static void intel_pmu_pebs_enable(struct perf_event *event)
{
struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
struct hw_perf_event *hwc = &event->hw;
hwc->config &= ~ARCH_PERFMON_EVENTSEL_INT;
cpuc->pebs_enabled |= 1ULL << hwc->idx;
WARN_ON_ONCE(cpuc->enabled);
if (x86_pmu.intel_cap.pebs_trap && event->attr.precise_ip > 1)
intel_pmu_lbr_enable(event);
}
static void intel_pmu_pebs_disable(struct perf_event *event)
{
struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
struct hw_perf_event *hwc = &event->hw;
cpuc->pebs_enabled &= ~(1ULL << hwc->idx);
if (cpuc->enabled)
wrmsrl(MSR_IA32_PEBS_ENABLE, cpuc->pebs_enabled);
hwc->config |= ARCH_PERFMON_EVENTSEL_INT;
if (x86_pmu.intel_cap.pebs_trap && event->attr.precise_ip > 1)
intel_pmu_lbr_disable(event);
}
static void intel_pmu_pebs_enable_all(void)
{
struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
if (cpuc->pebs_enabled)
wrmsrl(MSR_IA32_PEBS_ENABLE, cpuc->pebs_enabled);
}
static void intel_pmu_pebs_disable_all(void)
{
struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
if (cpuc->pebs_enabled)
wrmsrl(MSR_IA32_PEBS_ENABLE, 0);
}
#include <asm/insn.h>
static inline bool kernel_ip(unsigned long ip)
{
#ifdef CONFIG_X86_32
return ip > PAGE_OFFSET;
#else
return (long)ip < 0;
#endif
}
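kernel_ip() works because, on x86-64, kernel text lives in the upper canonical half of the address space, so the top address bit is set and the value is negative when viewed as a signed long; on 32-bit the test is simply a compare against PAGE_OFFSET. A quick check of the 64-bit form (assumes unsigned long is 64 bits wide):

/* Tiny demo of the sign-bit test used by kernel_ip() on x86-64. */
#include <stdio.h>

static int is_kernel_ip(unsigned long ip)
{
        return (long)ip < 0;    /* top bit set => upper (kernel) half */
}

int main(void)
{
        printf("%d\n", is_kernel_ip(0xffffffff81000000UL));    /* 1: kernel text */
        printf("%d\n", is_kernel_ip(0x00007f0012345678UL));    /* 0: user space  */
        return 0;
}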
static int intel_pmu_pebs_fixup_ip(struct pt_regs *regs)
{
struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
unsigned long from = cpuc->lbr_entries[0].from;
unsigned long old_to, to = cpuc->lbr_entries[0].to;
unsigned long ip = regs->ip;
/*
* We don't need to fixup if the PEBS assist is fault like
*/
if (!x86_pmu.intel_cap.pebs_trap)
return 1;
/*
* No LBR entry, no basic block, no rewinding
*/
if (!cpuc->lbr_stack.nr || !from || !to)
return 0;
/*
* Basic blocks should never cross user/kernel boundaries
*/
if (kernel_ip(ip) != kernel_ip(to))
return 0;
/*
* unsigned math, either ip is before the start (impossible) or
* the basic block is larger than 1 page (sanity)
*/
if ((ip - to) > PAGE_SIZE)
return 0;
/*
* We sampled a branch insn, rewind using the LBR stack
*/
if (ip == to) {
regs->ip = from;
return 1;
}
do {
struct insn insn;
u8 buf[MAX_INSN_SIZE];
void *kaddr;
old_to = to;
if (!kernel_ip(ip)) {
int bytes, size = MAX_INSN_SIZE;
bytes = copy_from_user_nmi(buf, (void __user *)to, size);
if (bytes != size)
return 0;
kaddr = buf;
} else
kaddr = (void *)to;
kernel_insn_init(&insn, kaddr);
insn_get_length(&insn);
to += insn.length;
} while (to < ip);
if (to == ip) {
regs->ip = old_to;
return 1;
}
/*
* Even though we decoded the basic block, the instruction stream
* never matched the given IP, either the TO or the IP got corrupted.
*/
return 0;
}
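The loop above corrects the PEBS skid: the hardware reports the address of the instruction after the one that caused the event, so the code starts at the last LBR branch target, decodes forward while remembering where the previous instruction began, and stops once it reaches the reported IP; that previous start is the exact IP. A stripped-down sketch of the same walk, with a toy one-byte "decoder" standing in for the kernel's instruction decoder:

/* Simplified rewind walk; insn_len() is only a stand-in for the real decoder. */
#include <stddef.h>
#include <stdio.h>

static size_t insn_len(const unsigned char *p)
{
        (void)p;
        return 1;       /* toy ISA: pretend every instruction is one byte */
}

static const unsigned char *rewind_ip(const unsigned char *to,
                                      const unsigned char *ip)
{
        const unsigned char *old_to;

        if (to == ip)           /* we sampled the branch itself */
                return to;      /* the real code switches to the LBR 'from' */

        do {
                old_to = to;
                to += insn_len(to);     /* step one instruction forward */
        } while (to < ip);

        return (to == ip) ? old_to : ip;        /* else fall back to reported IP */
}

int main(void)
{
        unsigned char text[16] = { 0 };

        /* basic block starts at text+2, reported (skidded) IP is text+7 */
        printf("exact IP offset: %td\n", rewind_ip(text + 2, text + 7) - text);
        return 0;
}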
static int intel_pmu_save_and_restart(struct perf_event *event);
static void __intel_pmu_pebs_event(struct perf_event *event,
struct pt_regs *iregs, void *__pebs)
{
/*
* We cast to pebs_record_core since that is a subset of
* both formats and we don't use the other fields in this
* routine.
*/
struct pebs_record_core *pebs = __pebs;
struct perf_sample_data data;
struct pt_regs regs;
if (!intel_pmu_save_and_restart(event))
return;
perf_sample_data_init(&data, 0);
data.period = event->hw.last_period;
/*
* We use the interrupt regs as a base because the PEBS record
* does not contain a full regs set, specifically it seems to
* lack segment descriptors, which get used by things like
* user_mode().
*
* In the simple case fix up only the IP and BP,SP regs, for
* PERF_SAMPLE_IP and PERF_SAMPLE_CALLCHAIN to function properly.
* A possible PERF_SAMPLE_REGS will have to transfer all regs.
*/
regs = *iregs;
regs.ip = pebs->ip;
regs.bp = pebs->bp;
regs.sp = pebs->sp;
if (event->attr.precise_ip > 1 && intel_pmu_pebs_fixup_ip(&regs))
regs.flags |= PERF_EFLAGS_EXACT;
else
regs.flags &= ~PERF_EFLAGS_EXACT;
if (perf_event_overflow(event, 1, &data, &regs))
x86_pmu_stop(event);
}
static void intel_pmu_drain_pebs_core(struct pt_regs *iregs)
{
struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
struct debug_store *ds = cpuc->ds;
struct perf_event *event = cpuc->events[0]; /* PMC0 only */
struct pebs_record_core *at, *top;
int n;
if (!ds || !x86_pmu.pebs)
return;
at = (struct pebs_record_core *)(unsigned long)ds->pebs_buffer_base;
top = (struct pebs_record_core *)(unsigned long)ds->pebs_index;
/*
* Whatever else happens, drain the thing
*/
ds->pebs_index = ds->pebs_buffer_base;
if (!test_bit(0, cpuc->active_mask))
return;
WARN_ON_ONCE(!event);
if (!event->attr.precise_ip)
return;
n = top - at;
if (n <= 0)
return;
/*
* Should not happen, we program the threshold at 1 and do not
* set a reset value.
*/
WARN_ON_ONCE(n > 1);
at += n - 1;
__intel_pmu_pebs_event(event, iregs, at);
}
static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs)
{
struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
struct debug_store *ds = cpuc->ds;
struct pebs_record_nhm *at, *top;
struct perf_event *event = NULL;
u64 status = 0;
int bit, n;
if (!ds || !x86_pmu.pebs)
return;
at = (struct pebs_record_nhm *)(unsigned long)ds->pebs_buffer_base;
top = (struct pebs_record_nhm *)(unsigned long)ds->pebs_index;
ds->pebs_index = ds->pebs_buffer_base;
n = top - at;
if (n <= 0)
return;
/*
* Should not happen, we program the threshold at 1 and do not
* set a reset value.
*/
WARN_ON_ONCE(n > MAX_PEBS_EVENTS);
for ( ; at < top; at++) {
for_each_set_bit(bit, (unsigned long *)&at->status, MAX_PEBS_EVENTS) {
event = cpuc->events[bit];
if (!test_bit(bit, cpuc->active_mask))
continue;
WARN_ON_ONCE(!event);
if (!event->attr.precise_ip)
continue;
if (__test_and_set_bit(bit, (unsigned long *)&status))
continue;
break;
}
if (!event || bit >= MAX_PEBS_EVENTS)
continue;
__intel_pmu_pebs_event(event, iregs, at);
}
}
/*
* BTS, PEBS probe and setup
*/
static void intel_ds_init(void)
{
/*
* No support for 32bit formats
*/
if (!boot_cpu_has(X86_FEATURE_DTES64))
return;
x86_pmu.bts = boot_cpu_has(X86_FEATURE_BTS);
x86_pmu.pebs = boot_cpu_has(X86_FEATURE_PEBS);
if (x86_pmu.pebs) {
char pebs_type = x86_pmu.intel_cap.pebs_trap ? '+' : '-';
int format = x86_pmu.intel_cap.pebs_format;
switch (format) {
case 0:
printk(KERN_CONT "PEBS fmt0%c, ", pebs_type);
x86_pmu.pebs_record_size = sizeof(struct pebs_record_core);
x86_pmu.drain_pebs = intel_pmu_drain_pebs_core;
x86_pmu.pebs_constraints = intel_core_pebs_events;
break;
case 1:
printk(KERN_CONT "PEBS fmt1%c, ", pebs_type);
x86_pmu.pebs_record_size = sizeof(struct pebs_record_nhm);
x86_pmu.drain_pebs = intel_pmu_drain_pebs_nhm;
x86_pmu.pebs_constraints = intel_nehalem_pebs_events;
break;
default:
printk(KERN_CONT "no PEBS fmt%d%c, ", format, pebs_type);
x86_pmu.pebs = 0;
break;
}
}
}
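intel_ds_init() keys off three fields of IA32_PERF_CAPABILITIES that were cached into x86_pmu.intel_cap earlier: pebs_trap, pebs_format and (for the LBR code below) lbr_format. A sketch of unpacking them from the raw MSR value; the exact bit positions used here are an assumption for illustration only:

/* Unpacking a raw capabilities value; bit layout assumed, not authoritative. */
#include <stdint.h>
#include <stdio.h>

union perf_caps {
        struct {
                uint64_t lbr_format     : 6;    /* assumed bits 0-5  */
                uint64_t pebs_trap      : 1;    /* assumed bit  6    */
                uint64_t pebs_arch_reg  : 1;    /* assumed bit  7    */
                uint64_t pebs_format    : 4;    /* assumed bits 8-11 */
        };
        uint64_t capabilities;
};

int main(void)
{
        union perf_caps cap = { .capabilities = 0x1c3 };        /* example raw value */

        printf("lbr_format=%u pebs_trap=%u pebs_format=%u\n",
               (unsigned)cap.lbr_format, (unsigned)cap.pebs_trap,
               (unsigned)cap.pebs_format);
        return 0;
}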
#else /* CONFIG_CPU_SUP_INTEL */
static int reserve_ds_buffers(void)
{
return 0;
}
static void release_ds_buffers(void)
{
}
#endif /* CONFIG_CPU_SUP_INTEL */
#ifdef CONFIG_CPU_SUP_INTEL
enum {
LBR_FORMAT_32 = 0x00,
LBR_FORMAT_LIP = 0x01,
LBR_FORMAT_EIP = 0x02,
LBR_FORMAT_EIP_FLAGS = 0x03,
};
/*
* We only support LBR implementations that have FREEZE_LBRS_ON_PMI
* otherwise it becomes near impossible to get a reliable stack.
*/
static void __intel_pmu_lbr_enable(void)
{
u64 debugctl;
rdmsrl(MSR_IA32_DEBUGCTLMSR, debugctl);
debugctl |= (DEBUGCTLMSR_LBR | DEBUGCTLMSR_FREEZE_LBRS_ON_PMI);
wrmsrl(MSR_IA32_DEBUGCTLMSR, debugctl);
}
static void __intel_pmu_lbr_disable(void)
{
u64 debugctl;
rdmsrl(MSR_IA32_DEBUGCTLMSR, debugctl);
debugctl &= ~(DEBUGCTLMSR_LBR | DEBUGCTLMSR_FREEZE_LBRS_ON_PMI);
wrmsrl(MSR_IA32_DEBUGCTLMSR, debugctl);
}
static void intel_pmu_lbr_reset_32(void)
{
int i;
for (i = 0; i < x86_pmu.lbr_nr; i++)
wrmsrl(x86_pmu.lbr_from + i, 0);
}
static void intel_pmu_lbr_reset_64(void)
{
int i;
for (i = 0; i < x86_pmu.lbr_nr; i++) {
wrmsrl(x86_pmu.lbr_from + i, 0);
wrmsrl(x86_pmu.lbr_to + i, 0);
}
}
static void intel_pmu_lbr_reset(void)
{
if (!x86_pmu.lbr_nr)
return;
if (x86_pmu.intel_cap.lbr_format == LBR_FORMAT_32)
intel_pmu_lbr_reset_32();
else
intel_pmu_lbr_reset_64();
}
static void intel_pmu_lbr_enable(struct perf_event *event)
{
struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
if (!x86_pmu.lbr_nr)
return;
WARN_ON_ONCE(cpuc->enabled);
/*
* Reset the LBR stack if we changed task context to
* avoid data leaks.
*/
if (event->ctx->task && cpuc->lbr_context != event->ctx) {
intel_pmu_lbr_reset();
cpuc->lbr_context = event->ctx;
}
cpuc->lbr_users++;
}
static void intel_pmu_lbr_disable(struct perf_event *event)
{
struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
if (!x86_pmu.lbr_nr)
return;
cpuc->lbr_users--;
WARN_ON_ONCE(cpuc->lbr_users < 0);
if (cpuc->enabled && !cpuc->lbr_users)
__intel_pmu_lbr_disable();
}
static void intel_pmu_lbr_enable_all(void)
{
struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
if (cpuc->lbr_users)
__intel_pmu_lbr_enable();
}
static void intel_pmu_lbr_disable_all(void)
{
struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
if (cpuc->lbr_users)
__intel_pmu_lbr_disable();
}
static inline u64 intel_pmu_lbr_tos(void)
{
u64 tos;
rdmsrl(x86_pmu.lbr_tos, tos);
return tos;
}
static void intel_pmu_lbr_read_32(struct cpu_hw_events *cpuc)
{
unsigned long mask = x86_pmu.lbr_nr - 1;
u64 tos = intel_pmu_lbr_tos();
int i;
for (i = 0; i < x86_pmu.lbr_nr; i++) {
unsigned long lbr_idx = (tos - i) & mask;
union {
struct {
u32 from;
u32 to;
};
u64 lbr;
} msr_lastbranch;
rdmsrl(x86_pmu.lbr_from + lbr_idx, msr_lastbranch.lbr);
cpuc->lbr_entries[i].from = msr_lastbranch.from;
cpuc->lbr_entries[i].to = msr_lastbranch.to;
cpuc->lbr_entries[i].flags = 0;
}
cpuc->lbr_stack.nr = i;
}
#define LBR_FROM_FLAG_MISPRED (1ULL << 63)
/*
* Due to lack of segmentation in Linux the effective address (offset)
* is the same as the linear address, allowing us to merge the LIP and EIP
* LBR formats.
*/
static void intel_pmu_lbr_read_64(struct cpu_hw_events *cpuc)
{
unsigned long mask = x86_pmu.lbr_nr - 1;
int lbr_format = x86_pmu.intel_cap.lbr_format;
u64 tos = intel_pmu_lbr_tos();
int i;
for (i = 0; i < x86_pmu.lbr_nr; i++) {
unsigned long lbr_idx = (tos - i) & mask;
u64 from, to, flags = 0;
rdmsrl(x86_pmu.lbr_from + lbr_idx, from);
rdmsrl(x86_pmu.lbr_to + lbr_idx, to);
if (lbr_format == LBR_FORMAT_EIP_FLAGS) {
flags = !!(from & LBR_FROM_FLAG_MISPRED);
from = (u64)((((s64)from) << 1) >> 1);
}
cpuc->lbr_entries[i].from = from;
cpuc->lbr_entries[i].to = to;
cpuc->lbr_entries[i].flags = flags;
}
cpuc->lbr_stack.nr = i;
}
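Both readers walk the LBR ring with (tos - i) & mask, which only works because lbr_nr is a power of two: the mask wraps the index at zero, so the loop visits entries from the most recent (TOS) backwards. A quick worked example with 16 entries and TOS currently at 2:

/* Demonstrates the backwards ring walk used by the LBR readers above. */
#include <stdio.h>

int main(void)
{
        unsigned lbr_nr = 16, mask = lbr_nr - 1, tos = 2;

        for (unsigned i = 0; i < lbr_nr; i++)
                printf("%u ", (tos - i) & mask);        /* 2 1 0 15 14 ... 3 */
        printf("\n");
        return 0;
}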
static void intel_pmu_lbr_read(void)
{
struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
if (!cpuc->lbr_users)
return;
if (x86_pmu.intel_cap.lbr_format == LBR_FORMAT_32)
intel_pmu_lbr_read_32(cpuc);
else
intel_pmu_lbr_read_64(cpuc);
}
static void intel_pmu_lbr_init_core(void)
{
x86_pmu.lbr_nr = 4;
x86_pmu.lbr_tos = 0x01c9;
x86_pmu.lbr_from = 0x40;
x86_pmu.lbr_to = 0x60;
}
static void intel_pmu_lbr_init_nhm(void)
{
x86_pmu.lbr_nr = 16;
x86_pmu.lbr_tos = 0x01c9;
x86_pmu.lbr_from = 0x680;
x86_pmu.lbr_to = 0x6c0;
}
static void intel_pmu_lbr_init_atom(void)
{
x86_pmu.lbr_nr = 8;
x86_pmu.lbr_tos = 0x01c9;
x86_pmu.lbr_from = 0x40;
x86_pmu.lbr_to = 0x60;
}
#endif /* CONFIG_CPU_SUP_INTEL */
...@@ -27,24 +27,6 @@ static u64 p6_pmu_event_map(int hw_event)
 */
#define P6_NOP_EVENT 0x0000002EULL
static u64 p6_pmu_raw_event(u64 hw_event)
{
#define P6_EVNTSEL_EVENT_MASK 0x000000FFULL
#define P6_EVNTSEL_UNIT_MASK 0x0000FF00ULL
#define P6_EVNTSEL_EDGE_MASK 0x00040000ULL
#define P6_EVNTSEL_INV_MASK 0x00800000ULL
#define P6_EVNTSEL_REG_MASK 0xFF000000ULL
#define P6_EVNTSEL_MASK \
(P6_EVNTSEL_EVENT_MASK | \
P6_EVNTSEL_UNIT_MASK | \
P6_EVNTSEL_EDGE_MASK | \
P6_EVNTSEL_INV_MASK | \
P6_EVNTSEL_REG_MASK)
return hw_event & P6_EVNTSEL_MASK;
}
static struct event_constraint p6_event_constraints[] =
{
        INTEL_EVENT_CONSTRAINT(0xc1, 0x1),      /* FLOPS */
...@@ -66,7 +48,7 @@ static void p6_pmu_disable_all(void)
        wrmsrl(MSR_P6_EVNTSEL0, val);
}

-static void p6_pmu_enable_all(void)
+static void p6_pmu_enable_all(int added)
{
        unsigned long val;
...@@ -102,22 +84,23 @@ static void p6_pmu_enable_event(struct perf_event *event)
        (void)checking_wrmsrl(hwc->config_base + hwc->idx, val);
}

-static __initconst struct x86_pmu p6_pmu = {
+static __initconst const struct x86_pmu p6_pmu = {
        .name = "p6",
        .handle_irq = x86_pmu_handle_irq,
        .disable_all = p6_pmu_disable_all,
        .enable_all = p6_pmu_enable_all,
        .enable = p6_pmu_enable_event,
        .disable = p6_pmu_disable_event,
+       .hw_config = x86_pmu_hw_config,
+       .schedule_events = x86_schedule_events,
        .eventsel = MSR_P6_EVNTSEL0,
        .perfctr = MSR_P6_PERFCTR0,
        .event_map = p6_pmu_event_map,
-       .raw_event = p6_pmu_raw_event,
        .max_events = ARRAY_SIZE(p6_perfmon_event_map),
        .apic = 1,
        .max_period = (1ULL << 31) - 1,
        .version = 0,
-       .num_events = 2,
+       .num_counters = 2,
        /*
         * Events have 40 bits implemented. However they are designed such
         * that bits [32-39] are sign extensions of bit 31. As such the
...@@ -125,8 +108,8 @@ static __initconst struct x86_pmu p6_pmu = {
         *
         * See IA-32 Intel Architecture Software developer manual Vol 3B
         */
-       .event_bits = 32,
-       .event_mask = (1ULL << 32) - 1,
+       .cntval_bits = 32,
+       .cntval_mask = (1ULL << 32) - 1,
        .get_event_constraints = x86_get_event_constraints,
        .event_constraints = p6_event_constraints,
};
......
/*
* Debug Store support - selftest
*
*
* Copyright (C) 2009 Intel Corporation.
* Markus Metzger <markus.t.metzger@intel.com>, 2009
*/
#ifdef CONFIG_X86_DS_SELFTEST
extern int ds_selftest_bts(void);
extern int ds_selftest_pebs(void);
#else
static inline int ds_selftest_bts(void) { return 0; }
static inline int ds_selftest_pebs(void) { return 0; }
#endif
...@@ -224,11 +224,6 @@ unsigned __kprobes long oops_begin(void)
        int cpu;
        unsigned long flags;

-       /* notify the hw-branch tracer so it may disable tracing and
-          add the last trace to the trace buffer -
-          the earlier this happens, the more useful the trace. */
-       trace_hw_branch_oops();
-
        oops_enter();

        /* racy, but better than risking deadlock. */
......
...@@ -188,26 +188,17 @@ static int get_hbp_len(u8 hbp_len)
        return len_in_bytes;
}
/*
* Check for virtual address in user space.
*/
int arch_check_va_in_userspace(unsigned long va, u8 hbp_len)
{
unsigned int len;
len = get_hbp_len(hbp_len);
return (va <= TASK_SIZE - len);
}
/*
 * Check for virtual address in kernel space.
 */
-static int arch_check_va_in_kernelspace(unsigned long va, u8 hbp_len)
+int arch_check_bp_in_kernelspace(struct perf_event *bp)
{
        unsigned int len;
+       unsigned long va;
+       struct arch_hw_breakpoint *info = counter_arch_bp(bp);

-       len = get_hbp_len(hbp_len);
+       va = info->address;
+       len = get_hbp_len(info->len);

        return (va >= TASK_SIZE) && ((va + len - 1) >= TASK_SIZE);
}
...@@ -300,8 +291,7 @@ static int arch_build_bp_info(struct perf_event *bp)
/*
 * Validate the arch-specific HW Breakpoint register settings
 */
-int arch_validate_hwbkpt_settings(struct perf_event *bp,
-                                 struct task_struct *tsk)
+int arch_validate_hwbkpt_settings(struct perf_event *bp)
{
        struct arch_hw_breakpoint *info = counter_arch_bp(bp);
        unsigned int align;
...@@ -314,16 +304,6 @@ int arch_validate_hwbkpt_settings(struct perf_event *bp,
        ret = -EINVAL;
if (info->type == X86_BREAKPOINT_EXECUTE)
/*
* Ptrace-refactoring code
* For now, we'll allow instruction breakpoint only for user-space
* addresses
*/
if ((!arch_check_va_in_userspace(info->address, info->len)) &&
info->len != X86_BREAKPOINT_EXECUTE)
return ret;
        switch (info->len) {
        case X86_BREAKPOINT_LEN_1:
                align = 0;
...@@ -350,15 +330,6 @@ int arch_validate_hwbkpt_settings(struct perf_event *bp,
        if (info->address & align)
                return -EINVAL;
/* Check that the virtual address is in the proper range */
if (tsk) {
if (!arch_check_va_in_userspace(info->address, info->len))
return -EFAULT;
} else {
if (!arch_check_va_in_kernelspace(info->address, info->len))
return -EFAULT;
}
        return 0;
}
......
...@@ -422,14 +422,22 @@ static void __kprobes set_current_kprobe(struct kprobe *p, struct pt_regs *regs,
static void __kprobes clear_btf(void)
{
-       if (test_thread_flag(TIF_DEBUGCTLMSR))
-               update_debugctlmsr(0);
+       if (test_thread_flag(TIF_BLOCKSTEP)) {
+               unsigned long debugctl = get_debugctlmsr();
+
+               debugctl &= ~DEBUGCTLMSR_BTF;
+               update_debugctlmsr(debugctl);
+       }
}

static void __kprobes restore_btf(void)
{
-       if (test_thread_flag(TIF_DEBUGCTLMSR))
-               update_debugctlmsr(current->thread.debugctlmsr);
+       if (test_thread_flag(TIF_BLOCKSTEP)) {
+               unsigned long debugctl = get_debugctlmsr();
+
+               debugctl |= DEBUGCTLMSR_BTF;
+               update_debugctlmsr(debugctl);
+       }
}

void __kprobes arch_prepare_kretprobe(struct kretprobe_instance *ri,
......
...@@ -20,7 +20,6 @@
#include <asm/idle.h>
#include <asm/uaccess.h>
#include <asm/i387.h>
-#include <asm/ds.h>
#include <asm/debugreg.h>

unsigned long idle_halt;
...@@ -50,8 +49,6 @@ void free_thread_xstate(struct task_struct *tsk)
                kmem_cache_free(task_xstate_cachep, tsk->thread.xstate);
                tsk->thread.xstate = NULL;
        }
-
-       WARN(tsk->thread.ds_ctx, "leaking DS context\n");
}

void free_thread_info(struct thread_info *ti)
...@@ -198,11 +195,16 @@ void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
        prev = &prev_p->thread;
        next = &next_p->thread;

-       if (test_tsk_thread_flag(next_p, TIF_DS_AREA_MSR) ||
-           test_tsk_thread_flag(prev_p, TIF_DS_AREA_MSR))
-               ds_switch_to(prev_p, next_p);
-       else if (next->debugctlmsr != prev->debugctlmsr)
-               update_debugctlmsr(next->debugctlmsr);
+       if (test_tsk_thread_flag(prev_p, TIF_BLOCKSTEP) ^
+           test_tsk_thread_flag(next_p, TIF_BLOCKSTEP)) {
+               unsigned long debugctl = get_debugctlmsr();
+
+               debugctl &= ~DEBUGCTLMSR_BTF;
+               if (test_tsk_thread_flag(next_p, TIF_BLOCKSTEP))
+                       debugctl |= DEBUGCTLMSR_BTF;
+
+               update_debugctlmsr(debugctl);
+       }

        if (test_tsk_thread_flag(prev_p, TIF_NOTSC) ^
            test_tsk_thread_flag(next_p, TIF_NOTSC)) {
......
...@@ -55,7 +55,6 @@
#include <asm/cpu.h>
#include <asm/idle.h>
#include <asm/syscalls.h>
-#include <asm/ds.h>
#include <asm/debugreg.h>

asmlinkage void ret_from_fork(void) __asm__("ret_from_fork");
...@@ -238,13 +237,6 @@ int copy_thread(unsigned long clone_flags, unsigned long sp,
                kfree(p->thread.io_bitmap_ptr);
                p->thread.io_bitmap_max = 0;
        }
-
-       clear_tsk_thread_flag(p, TIF_DS_AREA_MSR);
-       p->thread.ds_ctx = NULL;
-
-       clear_tsk_thread_flag(p, TIF_DEBUGCTLMSR);
-       p->thread.debugctlmsr = 0;
-
        return err;
}
......
...@@ -49,7 +49,6 @@
#include <asm/ia32.h>
#include <asm/idle.h>
#include <asm/syscalls.h>
-#include <asm/ds.h>
#include <asm/debugreg.h>

asmlinkage extern void ret_from_fork(void);
...@@ -313,13 +312,6 @@ int copy_thread(unsigned long clone_flags, unsigned long sp,
                if (err)
                        goto out;
        }
-
-       clear_tsk_thread_flag(p, TIF_DS_AREA_MSR);
-       p->thread.ds_ctx = NULL;
-
-       clear_tsk_thread_flag(p, TIF_DEBUGCTLMSR);
-       p->thread.debugctlmsr = 0;
-
        err = 0;
out:
        if (err && p->thread.io_bitmap_ptr) {
......
...@@ -157,22 +157,6 @@ static int enable_single_step(struct task_struct *child)
        return 1;
}
/*
* Install this value in MSR_IA32_DEBUGCTLMSR whenever child is running.
*/
static void write_debugctlmsr(struct task_struct *child, unsigned long val)
{
if (child->thread.debugctlmsr == val)
return;
child->thread.debugctlmsr = val;
if (child != current)
return;
update_debugctlmsr(val);
}
/*
 * Enable single or block step.
 */
...@@ -186,15 +170,17 @@ static void enable_step(struct task_struct *child, bool block)
         * that uses user-mode single stepping itself.
         */
        if (enable_single_step(child) && block) {
-               set_tsk_thread_flag(child, TIF_DEBUGCTLMSR);
-               write_debugctlmsr(child,
-                                 child->thread.debugctlmsr | DEBUGCTLMSR_BTF);
-       } else {
-               write_debugctlmsr(child,
-                                 child->thread.debugctlmsr & ~DEBUGCTLMSR_BTF);
-
-               if (!child->thread.debugctlmsr)
-                       clear_tsk_thread_flag(child, TIF_DEBUGCTLMSR);
+               unsigned long debugctl = get_debugctlmsr();
+
+               debugctl |= DEBUGCTLMSR_BTF;
+               update_debugctlmsr(debugctl);
+               set_tsk_thread_flag(child, TIF_BLOCKSTEP);
+       } else if (test_tsk_thread_flag(child, TIF_BLOCKSTEP)) {
+               unsigned long debugctl = get_debugctlmsr();
+
+               debugctl &= ~DEBUGCTLMSR_BTF;
+               update_debugctlmsr(debugctl);
+               clear_tsk_thread_flag(child, TIF_BLOCKSTEP);
        }
}
...@@ -213,11 +199,13 @@ void user_disable_single_step(struct task_struct *child)
        /*
         * Make sure block stepping (BTF) is disabled.
         */
-       write_debugctlmsr(child,
-                         child->thread.debugctlmsr & ~DEBUGCTLMSR_BTF);
-       if (!child->thread.debugctlmsr)
-               clear_tsk_thread_flag(child, TIF_DEBUGCTLMSR);
+       if (test_tsk_thread_flag(child, TIF_BLOCKSTEP)) {
+               unsigned long debugctl = get_debugctlmsr();
+
+               debugctl &= ~DEBUGCTLMSR_BTF;
+               update_debugctlmsr(debugctl);
+               clear_tsk_thread_flag(child, TIF_BLOCKSTEP);
+       }

        /* Always clear TIF_SINGLESTEP... */
        clear_tsk_thread_flag(child, TIF_SINGLESTEP);
......
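The step.c and kprobes changes above all juggle the same two bits: TF in EFLAGS requests a debug trap, and BTF in IA32_DEBUGCTL turns that trap from "after every instruction" into "after every taken branch", which is what block stepping means. A minimal sketch of that flag arithmetic:

/* Block step = EFLAGS.TF plus IA32_DEBUGCTL.BTF; clearing BTF falls back
 * to ordinary single stepping. */
#include <stdint.h>
#include <stdio.h>

#define X86_EFLAGS_TF   (1u << 8)       /* trap flag */
#define DEBUGCTLMSR_BTF (1u << 1)       /* branch trap flag */

int main(void)
{
        uint32_t eflags = 0, debugctl = 0;

        eflags   |= X86_EFLAGS_TF;      /* request a #DB trap */
        debugctl |= DEBUGCTLMSR_BTF;    /* ...but only on taken branches */
        printf("block step: %d\n",
               !!(eflags & X86_EFLAGS_TF) && !!(debugctl & DEBUGCTLMSR_BTF));

        debugctl &= ~DEBUGCTLMSR_BTF;   /* what user_disable_single_step() undoes */
        printf("plain single step: %d\n",
               !!(eflags & X86_EFLAGS_TF) && !(debugctl & DEBUGCTLMSR_BTF));
        return 0;
}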
...@@ -543,11 +543,11 @@ dotraplinkage void __kprobes do_debug(struct pt_regs *regs, long error_code)
        /* DR6 may or may not be cleared by the CPU */
        set_debugreg(0, 6);
        /*
         * The processor cleared BTF, so don't mark that we need it set.
         */
-       clear_tsk_thread_flag(tsk, TIF_DEBUGCTLMSR);
-       tsk->thread.debugctlmsr = 0;
+       clear_tsk_thread_flag(tsk, TIF_BLOCKSTEP);

        /* Store the virtualized DR6 value */
        tsk->thread.debugreg6 = dr6;
......
...@@ -3659,8 +3659,11 @@ static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
        /* We need to handle NMIs before interrupts are enabled */
        if ((exit_intr_info & INTR_INFO_INTR_TYPE_MASK) == INTR_TYPE_NMI_INTR &&
-           (exit_intr_info & INTR_INFO_VALID_MASK))
+           (exit_intr_info & INTR_INFO_VALID_MASK)) {
+               kvm_before_handle_nmi(&vmx->vcpu);
                asm("int $2");
+               kvm_after_handle_nmi(&vmx->vcpu);
+       }

        idtv_info_valid = idt_vectoring_info & VECTORING_INFO_VALID_MASK;
......
...@@ -65,4 +65,7 @@ static inline int is_paging(struct kvm_vcpu *vcpu)
        return kvm_read_cr0_bits(vcpu, X86_CR0_PG);
}

+void kvm_before_handle_nmi(struct kvm_vcpu *vcpu);
+void kvm_after_handle_nmi(struct kvm_vcpu *vcpu);
+
#endif