Commit 8603596a authored by Linus Torvalds

Merge branch 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull perf update from Thomas Gleixner:
 "The perf crowd presents:

  Kernel updates:

   - Removal of jprobes

   - Cleanup and consolidation of the handling of kprobes

   - Cleanup and consolidation of hardware breakpoints

   - The usual pile of fixes and updates to PMUs and event descriptors

  Tooling updates:

   - Updates and improvements all over the place. Nothing outstanding,
     just the (good) boring incremental grump work"

* 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (103 commits)
  perf trace: Do not require --no-syscalls to suppress strace like output
  perf bpf: Include uapi/linux/bpf.h from the 'perf trace' script's bpf.h
  perf tools: Allow overriding MAX_NR_CPUS at compile time
  perf bpf: Show better message when failing to load an object
  perf list: Unify metric group description format with PMU event description
  perf vendor events arm64: Update ThunderX2 implementation defined pmu core events
  perf cs-etm: Generate branch sample for CS_ETM_TRACE_ON packet
  perf cs-etm: Generate branch sample when receiving a CS_ETM_TRACE_ON packet
  perf cs-etm: Support dummy address value for CS_ETM_TRACE_ON packet
  perf cs-etm: Fix start tracing packet handling
  perf build: Fix installation directory for eBPF
  perf c2c report: Fix crash for empty browser
  perf tests: Fix indexing when invoking subtests
  perf trace: Beautify the AF_INET & AF_INET6 'socket' syscall 'protocol' args
  perf trace beauty: Add beautifiers for 'socket''s 'protocol' arg
  perf trace beauty: Do not print NULL strarray entries
  perf beauty: Add a generator for IPPROTO_ socket's protocol constants
  tools include uapi: Grab a copy of linux/in.h
  perf tests: Fix complex event name parsing
  perf evlist: Fix error out while applying initial delay and LBR
  ...
@@ -80,6 +80,26 @@ After the instruction is single-stepped, Kprobes executes the
"post_handler," if any, that is associated with the kprobe.
Execution then continues with the instruction following the probepoint.
Changing Execution Path
-----------------------

Since kprobes can probe into running kernel code, it can change the
register set, including the instruction pointer. This operation requires
maximum care, such as keeping the stack frame intact and recovering the
execution path. Since it operates on a running kernel and needs deep
knowledge of computer architecture and concurrent computing, you can
easily shoot yourself in the foot.

If you change the instruction pointer (and set up other related
registers) in your pre_handler, you must return !0 so that kprobes stops
single stepping and just returns to the given address.
This also means post_handler will not be called anymore.

Note that this operation may be harder on some architectures which use a
TOC (Table of Contents) for function calls, since you have to set up a
new TOC for your function in your module, and recover the old one after
returning from it.
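As a minimal, hedged sketch of the pattern described above (not part of this
patch): a pre_handler that diverts an assumed x86-64 function "my_orig_func"
to a replacement "my_detour_func" with the same prototype. Returning 1 tells
kprobes to skip single-stepping, so the post_handler is never invoked.

/*
 * Hypothetical sketch (x86-64); all symbol names are made up.
 */
#include <linux/kernel.h>
#include <linux/kprobes.h>

static long my_detour_func(long arg)	/* must match the probed prototype */
{
	pr_info("diverted call, arg=%ld\n", arg);
	return -EINVAL;
}

static int redirect_pre_handler(struct kprobe *p, struct pt_regs *regs)
{
	regs->ip = (unsigned long)my_detour_func; /* new execution path */
	return 1;	/* !0: no single-step, no post_handler for this hit */
}

static struct kprobe redirect_kp = {
	.symbol_name = "my_orig_func",		/* assumed probe target */
	.pre_handler = redirect_pre_handler,
};
/* register_kprobe(&redirect_kp) in module init; unregister_kprobe() on exit. */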
Return Probes
-------------

@@ -262,7 +282,7 @@ is optimized, that modification is ignored. Thus, if you want to
tweak the kernel's execution path, you need to suppress optimization,
using one of the following techniques:

- Specify an empty function for the kprobe's post_handler (as in the sketch below).

or
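Continuing the hypothetical sketch from above: pairing a path-changing
pre_handler with an intentionally empty post_handler is one way to keep the
kprobe unoptimized so that the redirection actually takes effect. The symbol
name is again invented for illustration.

static void noop_post_handler(struct kprobe *p, struct pt_regs *regs,
			      unsigned long flags)
{
	/* Empty on purpose: its mere presence suppresses optimization. */
}

static struct kprobe unoptimized_kp = {
	.symbol_name  = "my_orig_func",		/* assumed probe target */
	.pre_handler  = redirect_pre_handler,	/* from the sketch above */
	.post_handler = noop_post_handler,
};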
@@ -474,7 +494,7 @@ error occurs during registration, all probes in the array, up to
the bad probe, are safely unregistered before the register_*probes
function returns.

- kps/rps: an array of pointers to ``*probe`` data structures
- num: the number of the array entries.

.. note::
@@ -566,12 +586,11 @@ the same handler) may run concurrently on different CPUs.

Kprobes does not use mutexes or allocate memory except during
registration and unregistration.

Probe handlers are run with preemption disabled or with interrupts
disabled, depending on the architecture and optimization state (e.g.,
kretprobe handlers and optimized kprobe handlers run without interrupts
disabled on x86/x86-64). In any case, your handler should not yield the
CPU (e.g., by attempting to acquire a semaphore, or by waiting for I/O).
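As a hedged illustration of that rule (not from the patch), the sketch below
counts probe hits using only a raw spinlock and a counter, never anything
that can sleep; the names are invented.

#include <linux/kprobes.h>
#include <linux/spinlock.h>

static DEFINE_RAW_SPINLOCK(hit_lock);
static unsigned long hit_count;

static int counting_pre_handler(struct kprobe *p, struct pt_regs *regs)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&hit_lock, flags);	/* never sleeps */
	hit_count++;
	raw_spin_unlock_irqrestore(&hit_lock, flags);
	return 0;	/* continue with normal processing */
}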
Since a return probe is implemented by replacing the return
address with the trampoline's address, stack backtraces and calls
...
@@ -45,8 +45,6 @@ struct prev_kprobe {
struct kprobe_ctlblk {
	unsigned int kprobe_status;
	struct pt_regs jprobe_saved_regs;
	char jprobes_stack[MAX_STACK_SIZE];
	struct prev_kprobe prev_kprobe;
};
...
@@ -225,24 +225,18 @@ int __kprobes arc_kprobe_handler(unsigned long addr, struct pt_regs *regs)
		/* If we have no pre-handler or it returned 0, we continue with
		 * normal processing. If we have a pre-handler and it returned
		 * non-zero - which is expected from setjmp_pre_handler for
		 * jprobe, we return without single stepping and leave that to
		 * the break-handler which is invoked by a kprobe from
		 * jprobe_return
		 * non-zero - which means user handler setup registers to exit
		 * to another instruction, we must skip the single stepping.
		 */
		if (!p->pre_handler || !p->pre_handler(p, regs)) {
			setup_singlestep(p, regs);
			kcb->kprobe_status = KPROBE_HIT_SS;
		} else {
			reset_current_kprobe();
			preempt_enable_no_resched();
		}

		return 1;
	} else if (kprobe_running()) {
		p = __this_cpu_read(current_kprobe);
		if (p->break_handler && p->break_handler(p, regs)) {
			setup_singlestep(p, regs);
			kcb->kprobe_status = KPROBE_HIT_SS;
			return 1;
		}
	}

	/* no_kprobe: */

@@ -386,38 +380,6 @@ int __kprobes kprobe_exceptions_notify(struct notifier_block *self,
	return ret;
}
int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
{
struct jprobe *jp = container_of(p, struct jprobe, kp);
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
unsigned long sp_addr = regs->sp;
kcb->jprobe_saved_regs = *regs;
memcpy(kcb->jprobes_stack, (void *)sp_addr, MIN_STACK_SIZE(sp_addr));
regs->ret = (unsigned long)(jp->entry);
return 1;
}
void __kprobes jprobe_return(void)
{
__asm__ __volatile__("unimp_s");
return;
}
int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
{
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
unsigned long sp_addr;
*regs = kcb->jprobe_saved_regs;
sp_addr = regs->sp;
memcpy((void *)sp_addr, kcb->jprobes_stack, MIN_STACK_SIZE(sp_addr));
preempt_enable_no_resched();
return 1;
}
static void __used kretprobe_trampoline_holder(void)
{
	__asm__ __volatile__(".global kretprobe_trampoline\n"

@@ -483,9 +445,7 @@ static int __kprobes trampoline_probe_handler(struct kprobe *p,
	kretprobe_assert(ri, orig_ret_address, trampoline_address);
	regs->ret = orig_ret_address;

	reset_current_kprobe();
	kretprobe_hash_unlock(current, &flags);
	preempt_enable_no_resched();

	hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) {
		hlist_del(&ri->hlist);
...
@@ -111,14 +111,17 @@ static inline void decode_ctrl_reg(u32 reg,
	asm volatile("mcr p14, 0, %0, " #N "," #M ", " #OP2 : : "r" (VAL));\
} while (0)

struct perf_event_attr;
struct notifier_block;
struct perf_event;
struct pmu;

extern int arch_bp_generic_fields(struct arch_hw_breakpoint_ctrl ctrl,
				  int *gen_len, int *gen_type);
extern int arch_check_bp_in_kernelspace(struct perf_event *bp);
extern int arch_check_bp_in_kernelspace(struct arch_hw_breakpoint *hw);
extern int arch_validate_hwbkpt_settings(struct perf_event *bp);
extern int hw_breakpoint_arch_parse(struct perf_event *bp,
				    const struct perf_event_attr *attr,
				    struct arch_hw_breakpoint *hw);
extern int hw_breakpoint_exceptions_notify(struct notifier_block *unused,
					   unsigned long val, void *data);
...
@@ -44,8 +44,6 @@ struct prev_kprobe {
struct kprobe_ctlblk {
	unsigned int kprobe_status;
	struct prev_kprobe prev_kprobe;
	struct pt_regs jprobe_saved_regs;
	char jprobes_stack[MAX_STACK_SIZE];
};

void arch_remove_kprobe(struct kprobe *);
...
@@ -51,7 +51,6 @@ struct arch_probes_insn {
 * We assume one instruction can consume at most 64 bytes stack, which is
 * 'push {r0-r15}'. Instructions consume more or unknown stack space like
 * 'str r0, [sp, #-80]' and 'str r0, [sp, r1]' should be prohibit to probe.
 * Both kprobe and jprobe use this macro.
 */
#define MAX_STACK_SIZE 64
...
...@@ -456,14 +456,13 @@ static int get_hbp_len(u8 hbp_len) ...@@ -456,14 +456,13 @@ static int get_hbp_len(u8 hbp_len)
/* /*
* Check whether bp virtual address is in kernel space. * Check whether bp virtual address is in kernel space.
*/ */
int arch_check_bp_in_kernelspace(struct perf_event *bp) int arch_check_bp_in_kernelspace(struct arch_hw_breakpoint *hw)
{ {
unsigned int len; unsigned int len;
unsigned long va; unsigned long va;
struct arch_hw_breakpoint *info = counter_arch_bp(bp);
va = info->address; va = hw->address;
len = get_hbp_len(info->ctrl.len); len = get_hbp_len(hw->ctrl.len);
return (va >= TASK_SIZE) && ((va + len - 1) >= TASK_SIZE); return (va >= TASK_SIZE) && ((va + len - 1) >= TASK_SIZE);
} }
...@@ -518,42 +517,42 @@ int arch_bp_generic_fields(struct arch_hw_breakpoint_ctrl ctrl, ...@@ -518,42 +517,42 @@ int arch_bp_generic_fields(struct arch_hw_breakpoint_ctrl ctrl,
/* /*
* Construct an arch_hw_breakpoint from a perf_event. * Construct an arch_hw_breakpoint from a perf_event.
*/ */
static int arch_build_bp_info(struct perf_event *bp) static int arch_build_bp_info(struct perf_event *bp,
const struct perf_event_attr *attr,
struct arch_hw_breakpoint *hw)
{ {
struct arch_hw_breakpoint *info = counter_arch_bp(bp);
/* Type */ /* Type */
switch (bp->attr.bp_type) { switch (attr->bp_type) {
case HW_BREAKPOINT_X: case HW_BREAKPOINT_X:
info->ctrl.type = ARM_BREAKPOINT_EXECUTE; hw->ctrl.type = ARM_BREAKPOINT_EXECUTE;
break; break;
case HW_BREAKPOINT_R: case HW_BREAKPOINT_R:
info->ctrl.type = ARM_BREAKPOINT_LOAD; hw->ctrl.type = ARM_BREAKPOINT_LOAD;
break; break;
case HW_BREAKPOINT_W: case HW_BREAKPOINT_W:
info->ctrl.type = ARM_BREAKPOINT_STORE; hw->ctrl.type = ARM_BREAKPOINT_STORE;
break; break;
case HW_BREAKPOINT_RW: case HW_BREAKPOINT_RW:
info->ctrl.type = ARM_BREAKPOINT_LOAD | ARM_BREAKPOINT_STORE; hw->ctrl.type = ARM_BREAKPOINT_LOAD | ARM_BREAKPOINT_STORE;
break; break;
default: default:
return -EINVAL; return -EINVAL;
} }
/* Len */ /* Len */
switch (bp->attr.bp_len) { switch (attr->bp_len) {
case HW_BREAKPOINT_LEN_1: case HW_BREAKPOINT_LEN_1:
info->ctrl.len = ARM_BREAKPOINT_LEN_1; hw->ctrl.len = ARM_BREAKPOINT_LEN_1;
break; break;
case HW_BREAKPOINT_LEN_2: case HW_BREAKPOINT_LEN_2:
info->ctrl.len = ARM_BREAKPOINT_LEN_2; hw->ctrl.len = ARM_BREAKPOINT_LEN_2;
break; break;
case HW_BREAKPOINT_LEN_4: case HW_BREAKPOINT_LEN_4:
info->ctrl.len = ARM_BREAKPOINT_LEN_4; hw->ctrl.len = ARM_BREAKPOINT_LEN_4;
break; break;
case HW_BREAKPOINT_LEN_8: case HW_BREAKPOINT_LEN_8:
info->ctrl.len = ARM_BREAKPOINT_LEN_8; hw->ctrl.len = ARM_BREAKPOINT_LEN_8;
if ((info->ctrl.type != ARM_BREAKPOINT_EXECUTE) if ((hw->ctrl.type != ARM_BREAKPOINT_EXECUTE)
&& max_watchpoint_len >= 8) && max_watchpoint_len >= 8)
break; break;
default: default:
...@@ -566,24 +565,24 @@ static int arch_build_bp_info(struct perf_event *bp) ...@@ -566,24 +565,24 @@ static int arch_build_bp_info(struct perf_event *bp)
* by the hardware and must be aligned to the appropriate number of * by the hardware and must be aligned to the appropriate number of
* bytes. * bytes.
*/ */
if (info->ctrl.type == ARM_BREAKPOINT_EXECUTE && if (hw->ctrl.type == ARM_BREAKPOINT_EXECUTE &&
info->ctrl.len != ARM_BREAKPOINT_LEN_2 && hw->ctrl.len != ARM_BREAKPOINT_LEN_2 &&
info->ctrl.len != ARM_BREAKPOINT_LEN_4) hw->ctrl.len != ARM_BREAKPOINT_LEN_4)
return -EINVAL; return -EINVAL;
/* Address */ /* Address */
info->address = bp->attr.bp_addr; hw->address = attr->bp_addr;
/* Privilege */ /* Privilege */
info->ctrl.privilege = ARM_BREAKPOINT_USER; hw->ctrl.privilege = ARM_BREAKPOINT_USER;
if (arch_check_bp_in_kernelspace(bp)) if (arch_check_bp_in_kernelspace(hw))
info->ctrl.privilege |= ARM_BREAKPOINT_PRIV; hw->ctrl.privilege |= ARM_BREAKPOINT_PRIV;
/* Enabled? */ /* Enabled? */
info->ctrl.enabled = !bp->attr.disabled; hw->ctrl.enabled = !attr->disabled;
/* Mismatch */ /* Mismatch */
info->ctrl.mismatch = 0; hw->ctrl.mismatch = 0;
return 0; return 0;
} }
...@@ -591,9 +590,10 @@ static int arch_build_bp_info(struct perf_event *bp) ...@@ -591,9 +590,10 @@ static int arch_build_bp_info(struct perf_event *bp)
/* /*
* Validate the arch-specific HW Breakpoint register settings. * Validate the arch-specific HW Breakpoint register settings.
*/ */
int arch_validate_hwbkpt_settings(struct perf_event *bp) int hw_breakpoint_arch_parse(struct perf_event *bp,
const struct perf_event_attr *attr,
struct arch_hw_breakpoint *hw)
{ {
struct arch_hw_breakpoint *info = counter_arch_bp(bp);
int ret = 0; int ret = 0;
u32 offset, alignment_mask = 0x3; u32 offset, alignment_mask = 0x3;
...@@ -602,14 +602,14 @@ int arch_validate_hwbkpt_settings(struct perf_event *bp) ...@@ -602,14 +602,14 @@ int arch_validate_hwbkpt_settings(struct perf_event *bp)
return -ENODEV; return -ENODEV;
/* Build the arch_hw_breakpoint. */ /* Build the arch_hw_breakpoint. */
ret = arch_build_bp_info(bp); ret = arch_build_bp_info(bp, attr, hw);
if (ret) if (ret)
goto out; goto out;
/* Check address alignment. */ /* Check address alignment. */
if (info->ctrl.len == ARM_BREAKPOINT_LEN_8) if (hw->ctrl.len == ARM_BREAKPOINT_LEN_8)
alignment_mask = 0x7; alignment_mask = 0x7;
offset = info->address & alignment_mask; offset = hw->address & alignment_mask;
switch (offset) { switch (offset) {
case 0: case 0:
/* Aligned */ /* Aligned */
...@@ -617,19 +617,19 @@ int arch_validate_hwbkpt_settings(struct perf_event *bp) ...@@ -617,19 +617,19 @@ int arch_validate_hwbkpt_settings(struct perf_event *bp)
case 1: case 1:
case 2: case 2:
/* Allow halfword watchpoints and breakpoints. */ /* Allow halfword watchpoints and breakpoints. */
if (info->ctrl.len == ARM_BREAKPOINT_LEN_2) if (hw->ctrl.len == ARM_BREAKPOINT_LEN_2)
break; break;
case 3: case 3:
/* Allow single byte watchpoint. */ /* Allow single byte watchpoint. */
if (info->ctrl.len == ARM_BREAKPOINT_LEN_1) if (hw->ctrl.len == ARM_BREAKPOINT_LEN_1)
break; break;
default: default:
ret = -EINVAL; ret = -EINVAL;
goto out; goto out;
} }
info->address &= ~alignment_mask; hw->address &= ~alignment_mask;
info->ctrl.len <<= offset; hw->ctrl.len <<= offset;
if (is_default_overflow_handler(bp)) { if (is_default_overflow_handler(bp)) {
/* /*
...@@ -640,7 +640,7 @@ int arch_validate_hwbkpt_settings(struct perf_event *bp) ...@@ -640,7 +640,7 @@ int arch_validate_hwbkpt_settings(struct perf_event *bp)
return -EINVAL; return -EINVAL;
/* We don't allow mismatch breakpoints in kernel space. */ /* We don't allow mismatch breakpoints in kernel space. */
if (arch_check_bp_in_kernelspace(bp)) if (arch_check_bp_in_kernelspace(hw))
return -EPERM; return -EPERM;
/* /*
...@@ -655,8 +655,8 @@ int arch_validate_hwbkpt_settings(struct perf_event *bp) ...@@ -655,8 +655,8 @@ int arch_validate_hwbkpt_settings(struct perf_event *bp)
* reports them. * reports them.
*/ */
if (!debug_exception_updates_fsr() && if (!debug_exception_updates_fsr() &&
(info->ctrl.type == ARM_BREAKPOINT_LOAD || (hw->ctrl.type == ARM_BREAKPOINT_LOAD ||
info->ctrl.type == ARM_BREAKPOINT_STORE)) hw->ctrl.type == ARM_BREAKPOINT_STORE))
return -EINVAL; return -EINVAL;
} }
......
@@ -47,9 +47,6 @@
	(unsigned long)(addr) + \
	(size))

/* Used as a marker in ARM_pc to note when we're in a jprobe. */
#define JPROBE_MAGIC_ADDR		0xffffffff

DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
@@ -289,8 +286,8 @@ void __kprobes kprobe_handler(struct pt_regs *regs)
			break;
		case KPROBE_REENTER:
			/* A nested probe was hit in FIQ, it is a BUG */
			pr_warn("Unrecoverable kprobe detected at %p.\n",
				p->addr);
			pr_warn("Unrecoverable kprobe detected.\n");
			dump_kprobe(p);
			/* fall through */
		default:
			/* impossible cases */

@@ -303,10 +300,10 @@ void __kprobes kprobe_handler(struct pt_regs *regs)
			/*
			 * If we have no pre-handler or it returned 0, we
			 * continue with normal processing. If we have a
			 * pre-handler and it returned non-zero, it prepped
			 * for calling the break_handler below on re-entry,
			 * so get out doing nothing more here.
			 * pre-handler and it returned non-zero, it will
			 * modify the execution path and no need to single
			 * stepping. Let's just reset current kprobe and exit.
			 */
			if (!p->pre_handler || !p->pre_handler(p, regs)) {
				kcb->kprobe_status = KPROBE_HIT_SS;

@@ -315,20 +312,9 @@ void __kprobes kprobe_handler(struct pt_regs *regs)
					kcb->kprobe_status = KPROBE_HIT_SSDONE;
					p->post_handler(p, regs, 0);
				}
reset_current_kprobe();
}
}
} else if (cur) {
/* We probably hit a jprobe. Call its break handler. */
if (cur->break_handler && cur->break_handler(cur, regs)) {
kcb->kprobe_status = KPROBE_HIT_SS;
singlestep(cur, regs, kcb);
if (cur->post_handler) {
kcb->kprobe_status = KPROBE_HIT_SSDONE;
cur->post_handler(cur, regs, 0);
} }
reset_current_kprobe();
} }
reset_current_kprobe();
} else { } else {
/* /*
* The probe was removed and a race is in progress. * The probe was removed and a race is in progress.
...@@ -521,117 +507,6 @@ void __kprobes arch_prepare_kretprobe(struct kretprobe_instance *ri, ...@@ -521,117 +507,6 @@ void __kprobes arch_prepare_kretprobe(struct kretprobe_instance *ri,
regs->ARM_lr = (unsigned long)&kretprobe_trampoline; regs->ARM_lr = (unsigned long)&kretprobe_trampoline;
} }
int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
{
struct jprobe *jp = container_of(p, struct jprobe, kp);
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
long sp_addr = regs->ARM_sp;
long cpsr;
kcb->jprobe_saved_regs = *regs;
memcpy(kcb->jprobes_stack, (void *)sp_addr, MIN_STACK_SIZE(sp_addr));
regs->ARM_pc = (long)jp->entry;
cpsr = regs->ARM_cpsr | PSR_I_BIT;
#ifdef CONFIG_THUMB2_KERNEL
/* Set correct Thumb state in cpsr */
if (regs->ARM_pc & 1)
cpsr |= PSR_T_BIT;
else
cpsr &= ~PSR_T_BIT;
#endif
regs->ARM_cpsr = cpsr;
preempt_disable();
return 1;
}
void __kprobes jprobe_return(void)
{
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
__asm__ __volatile__ (
/*
* Setup an empty pt_regs. Fill SP and PC fields as
* they're needed by longjmp_break_handler.
*
* We allocate some slack between the original SP and start of
* our fabricated regs. To be precise we want to have worst case
* covered which is STMFD with all 16 regs so we allocate 2 *
* sizeof(struct_pt_regs)).
*
* This is to prevent any simulated instruction from writing
* over the regs when they are accessing the stack.
*/
#ifdef CONFIG_THUMB2_KERNEL
"sub r0, %0, %1 \n\t"
"mov sp, r0 \n\t"
#else
"sub sp, %0, %1 \n\t"
#endif
"ldr r0, ="__stringify(JPROBE_MAGIC_ADDR)"\n\t"
"str %0, [sp, %2] \n\t"
"str r0, [sp, %3] \n\t"
"mov r0, sp \n\t"
"bl kprobe_handler \n\t"
/*
* Return to the context saved by setjmp_pre_handler
* and restored by longjmp_break_handler.
*/
#ifdef CONFIG_THUMB2_KERNEL
"ldr lr, [sp, %2] \n\t" /* lr = saved sp */
"ldrd r0, r1, [sp, %5] \n\t" /* r0,r1 = saved lr,pc */
"ldr r2, [sp, %4] \n\t" /* r2 = saved psr */
"stmdb lr!, {r0, r1, r2} \n\t" /* push saved lr and */
/* rfe context */
"ldmia sp, {r0 - r12} \n\t"
"mov sp, lr \n\t"
"ldr lr, [sp], #4 \n\t"
"rfeia sp! \n\t"
#else
"ldr r0, [sp, %4] \n\t"
"msr cpsr_cxsf, r0 \n\t"
"ldmia sp, {r0 - pc} \n\t"
#endif
:
: "r" (kcb->jprobe_saved_regs.ARM_sp),
"I" (sizeof(struct pt_regs) * 2),
"J" (offsetof(struct pt_regs, ARM_sp)),
"J" (offsetof(struct pt_regs, ARM_pc)),
"J" (offsetof(struct pt_regs, ARM_cpsr)),
"J" (offsetof(struct pt_regs, ARM_lr))
: "memory", "cc");
}
int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
{
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
long stack_addr = kcb->jprobe_saved_regs.ARM_sp;
long orig_sp = regs->ARM_sp;
struct jprobe *jp = container_of(p, struct jprobe, kp);
if (regs->ARM_pc == JPROBE_MAGIC_ADDR) {
if (orig_sp != stack_addr) {
struct pt_regs *saved_regs =
(struct pt_regs *)kcb->jprobe_saved_regs.ARM_sp;
printk("current sp %lx does not match saved sp %lx\n",
orig_sp, stack_addr);
printk("Saved registers for jprobe %p\n", jp);
show_regs(saved_regs);
printk("Current registers\n");
show_regs(regs);
BUG();
}
*regs = kcb->jprobe_saved_regs;
memcpy((void *)stack_addr, kcb->jprobes_stack,
MIN_STACK_SIZE(stack_addr));
preempt_enable_no_resched();
return 1;
}
return 0;
}
int __kprobes arch_trampoline_kprobe(struct kprobe *p) int __kprobes arch_trampoline_kprobe(struct kprobe *p)
{ {
return 0; return 0;
......
@@ -1461,7 +1461,6 @@ static bool check_test_results(void)
	print_registers(&result_regs);

	if (mem) {
		pr_err("current_stack=%p\n", current_stack);
		pr_err("expected_memory:\n");
		print_memory(expected_memory, mem_size);
		pr_err("result_memory:\n");
...
@@ -119,13 +119,16 @@ static inline void decode_ctrl_reg(u32 reg,
struct task_struct;
struct notifier_block;
struct perf_event_attr;
struct perf_event;
struct pmu;

extern int arch_bp_generic_fields(struct arch_hw_breakpoint_ctrl ctrl,
				  int *gen_len, int *gen_type, int *offset);
extern int arch_check_bp_in_kernelspace(struct perf_event *bp);
extern int arch_check_bp_in_kernelspace(struct arch_hw_breakpoint *hw);
extern int arch_validate_hwbkpt_settings(struct perf_event *bp);
extern int hw_breakpoint_arch_parse(struct perf_event *bp,
				    const struct perf_event_attr *attr,
				    struct arch_hw_breakpoint *hw);
extern int hw_breakpoint_exceptions_notify(struct notifier_block *unused,
					   unsigned long val, void *data);
...
@@ -48,7 +48,6 @@ struct kprobe_ctlblk {
	unsigned long saved_irqflag;
	struct prev_kprobe prev_kprobe;
	struct kprobe_step_ctx ss_ctx;
	struct pt_regs jprobe_saved_regs;
};

void arch_remove_kprobe(struct kprobe *);
...
...@@ -343,14 +343,13 @@ static int get_hbp_len(u8 hbp_len) ...@@ -343,14 +343,13 @@ static int get_hbp_len(u8 hbp_len)
/* /*
* Check whether bp virtual address is in kernel space. * Check whether bp virtual address is in kernel space.
*/ */
int arch_check_bp_in_kernelspace(struct perf_event *bp) int arch_check_bp_in_kernelspace(struct arch_hw_breakpoint *hw)
{ {
unsigned int len; unsigned int len;
unsigned long va; unsigned long va;
struct arch_hw_breakpoint *info = counter_arch_bp(bp);
va = info->address; va = hw->address;
len = get_hbp_len(info->ctrl.len); len = get_hbp_len(hw->ctrl.len);
return (va >= TASK_SIZE) && ((va + len - 1) >= TASK_SIZE); return (va >= TASK_SIZE) && ((va + len - 1) >= TASK_SIZE);
} }
...@@ -421,53 +420,53 @@ int arch_bp_generic_fields(struct arch_hw_breakpoint_ctrl ctrl, ...@@ -421,53 +420,53 @@ int arch_bp_generic_fields(struct arch_hw_breakpoint_ctrl ctrl,
/* /*
* Construct an arch_hw_breakpoint from a perf_event. * Construct an arch_hw_breakpoint from a perf_event.
*/ */
static int arch_build_bp_info(struct perf_event *bp) static int arch_build_bp_info(struct perf_event *bp,
const struct perf_event_attr *attr,
struct arch_hw_breakpoint *hw)
{ {
struct arch_hw_breakpoint *info = counter_arch_bp(bp);
/* Type */ /* Type */
switch (bp->attr.bp_type) { switch (attr->bp_type) {
case HW_BREAKPOINT_X: case HW_BREAKPOINT_X:
info->ctrl.type = ARM_BREAKPOINT_EXECUTE; hw->ctrl.type = ARM_BREAKPOINT_EXECUTE;
break; break;
case HW_BREAKPOINT_R: case HW_BREAKPOINT_R:
info->ctrl.type = ARM_BREAKPOINT_LOAD; hw->ctrl.type = ARM_BREAKPOINT_LOAD;
break; break;
case HW_BREAKPOINT_W: case HW_BREAKPOINT_W:
info->ctrl.type = ARM_BREAKPOINT_STORE; hw->ctrl.type = ARM_BREAKPOINT_STORE;
break; break;
case HW_BREAKPOINT_RW: case HW_BREAKPOINT_RW:
info->ctrl.type = ARM_BREAKPOINT_LOAD | ARM_BREAKPOINT_STORE; hw->ctrl.type = ARM_BREAKPOINT_LOAD | ARM_BREAKPOINT_STORE;
break; break;
default: default:
return -EINVAL; return -EINVAL;
} }
/* Len */ /* Len */
switch (bp->attr.bp_len) { switch (attr->bp_len) {
case HW_BREAKPOINT_LEN_1: case HW_BREAKPOINT_LEN_1:
info->ctrl.len = ARM_BREAKPOINT_LEN_1; hw->ctrl.len = ARM_BREAKPOINT_LEN_1;
break; break;
case HW_BREAKPOINT_LEN_2: case HW_BREAKPOINT_LEN_2:
info->ctrl.len = ARM_BREAKPOINT_LEN_2; hw->ctrl.len = ARM_BREAKPOINT_LEN_2;
break; break;
case HW_BREAKPOINT_LEN_3: case HW_BREAKPOINT_LEN_3:
info->ctrl.len = ARM_BREAKPOINT_LEN_3; hw->ctrl.len = ARM_BREAKPOINT_LEN_3;
break; break;
case HW_BREAKPOINT_LEN_4: case HW_BREAKPOINT_LEN_4:
info->ctrl.len = ARM_BREAKPOINT_LEN_4; hw->ctrl.len = ARM_BREAKPOINT_LEN_4;
break; break;
case HW_BREAKPOINT_LEN_5: case HW_BREAKPOINT_LEN_5:
info->ctrl.len = ARM_BREAKPOINT_LEN_5; hw->ctrl.len = ARM_BREAKPOINT_LEN_5;
break; break;
case HW_BREAKPOINT_LEN_6: case HW_BREAKPOINT_LEN_6:
info->ctrl.len = ARM_BREAKPOINT_LEN_6; hw->ctrl.len = ARM_BREAKPOINT_LEN_6;
break; break;
case HW_BREAKPOINT_LEN_7: case HW_BREAKPOINT_LEN_7:
info->ctrl.len = ARM_BREAKPOINT_LEN_7; hw->ctrl.len = ARM_BREAKPOINT_LEN_7;
break; break;
case HW_BREAKPOINT_LEN_8: case HW_BREAKPOINT_LEN_8:
info->ctrl.len = ARM_BREAKPOINT_LEN_8; hw->ctrl.len = ARM_BREAKPOINT_LEN_8;
break; break;
default: default:
return -EINVAL; return -EINVAL;
...@@ -478,37 +477,37 @@ static int arch_build_bp_info(struct perf_event *bp) ...@@ -478,37 +477,37 @@ static int arch_build_bp_info(struct perf_event *bp)
* AArch32 also requires breakpoints of length 2 for Thumb. * AArch32 also requires breakpoints of length 2 for Thumb.
* Watchpoints can be of length 1, 2, 4 or 8 bytes. * Watchpoints can be of length 1, 2, 4 or 8 bytes.
*/ */
if (info->ctrl.type == ARM_BREAKPOINT_EXECUTE) { if (hw->ctrl.type == ARM_BREAKPOINT_EXECUTE) {
if (is_compat_bp(bp)) { if (is_compat_bp(bp)) {
if (info->ctrl.len != ARM_BREAKPOINT_LEN_2 && if (hw->ctrl.len != ARM_BREAKPOINT_LEN_2 &&
info->ctrl.len != ARM_BREAKPOINT_LEN_4) hw->ctrl.len != ARM_BREAKPOINT_LEN_4)
return -EINVAL; return -EINVAL;
} else if (info->ctrl.len != ARM_BREAKPOINT_LEN_4) { } else if (hw->ctrl.len != ARM_BREAKPOINT_LEN_4) {
/* /*
* FIXME: Some tools (I'm looking at you perf) assume * FIXME: Some tools (I'm looking at you perf) assume
* that breakpoints should be sizeof(long). This * that breakpoints should be sizeof(long). This
* is nonsense. For now, we fix up the parameter * is nonsense. For now, we fix up the parameter
* but we should probably return -EINVAL instead. * but we should probably return -EINVAL instead.
*/ */
info->ctrl.len = ARM_BREAKPOINT_LEN_4; hw->ctrl.len = ARM_BREAKPOINT_LEN_4;
} }
} }
/* Address */ /* Address */
info->address = bp->attr.bp_addr; hw->address = attr->bp_addr;
/* /*
* Privilege * Privilege
* Note that we disallow combined EL0/EL1 breakpoints because * Note that we disallow combined EL0/EL1 breakpoints because
* that would complicate the stepping code. * that would complicate the stepping code.
*/ */
if (arch_check_bp_in_kernelspace(bp)) if (arch_check_bp_in_kernelspace(hw))
info->ctrl.privilege = AARCH64_BREAKPOINT_EL1; hw->ctrl.privilege = AARCH64_BREAKPOINT_EL1;
else else
info->ctrl.privilege = AARCH64_BREAKPOINT_EL0; hw->ctrl.privilege = AARCH64_BREAKPOINT_EL0;
/* Enabled? */ /* Enabled? */
info->ctrl.enabled = !bp->attr.disabled; hw->ctrl.enabled = !attr->disabled;
return 0; return 0;
} }
...@@ -516,14 +515,15 @@ static int arch_build_bp_info(struct perf_event *bp) ...@@ -516,14 +515,15 @@ static int arch_build_bp_info(struct perf_event *bp)
/* /*
* Validate the arch-specific HW Breakpoint register settings. * Validate the arch-specific HW Breakpoint register settings.
*/ */
int arch_validate_hwbkpt_settings(struct perf_event *bp) int hw_breakpoint_arch_parse(struct perf_event *bp,
const struct perf_event_attr *attr,
struct arch_hw_breakpoint *hw)
{ {
struct arch_hw_breakpoint *info = counter_arch_bp(bp);
int ret; int ret;
u64 alignment_mask, offset; u64 alignment_mask, offset;
/* Build the arch_hw_breakpoint. */ /* Build the arch_hw_breakpoint. */
ret = arch_build_bp_info(bp); ret = arch_build_bp_info(bp, attr, hw);
if (ret) if (ret)
return ret; return ret;
...@@ -537,42 +537,42 @@ int arch_validate_hwbkpt_settings(struct perf_event *bp) ...@@ -537,42 +537,42 @@ int arch_validate_hwbkpt_settings(struct perf_event *bp)
* that here. * that here.
*/ */
if (is_compat_bp(bp)) { if (is_compat_bp(bp)) {
if (info->ctrl.len == ARM_BREAKPOINT_LEN_8) if (hw->ctrl.len == ARM_BREAKPOINT_LEN_8)
alignment_mask = 0x7; alignment_mask = 0x7;
else else
alignment_mask = 0x3; alignment_mask = 0x3;
offset = info->address & alignment_mask; offset = hw->address & alignment_mask;
switch (offset) { switch (offset) {
case 0: case 0:
/* Aligned */ /* Aligned */
break; break;
case 1: case 1:
/* Allow single byte watchpoint. */ /* Allow single byte watchpoint. */
if (info->ctrl.len == ARM_BREAKPOINT_LEN_1) if (hw->ctrl.len == ARM_BREAKPOINT_LEN_1)
break; break;
case 2: case 2:
/* Allow halfword watchpoints and breakpoints. */ /* Allow halfword watchpoints and breakpoints. */
if (info->ctrl.len == ARM_BREAKPOINT_LEN_2) if (hw->ctrl.len == ARM_BREAKPOINT_LEN_2)
break; break;
default: default:
return -EINVAL; return -EINVAL;
} }
} else { } else {
if (info->ctrl.type == ARM_BREAKPOINT_EXECUTE) if (hw->ctrl.type == ARM_BREAKPOINT_EXECUTE)
alignment_mask = 0x3; alignment_mask = 0x3;
else else
alignment_mask = 0x7; alignment_mask = 0x7;
offset = info->address & alignment_mask; offset = hw->address & alignment_mask;
} }
info->address &= ~alignment_mask; hw->address &= ~alignment_mask;
info->ctrl.len <<= offset; hw->ctrl.len <<= offset;
/* /*
* Disallow per-task kernel breakpoints since these would * Disallow per-task kernel breakpoints since these would
* complicate the stepping code. * complicate the stepping code.
*/ */
if (info->ctrl.privilege == AARCH64_BREAKPOINT_EL1 && bp->hw.target) if (hw->ctrl.privilege == AARCH64_BREAKPOINT_EL1 && bp->hw.target)
return -EINVAL; return -EINVAL;
return 0; return 0;
......
@@ -275,7 +275,7 @@ static int __kprobes reenter_kprobe(struct kprobe *p,
		break;
	case KPROBE_HIT_SS:
	case KPROBE_REENTER:
		pr_warn("Unrecoverable kprobe detected at %p.\n", p->addr);
		pr_warn("Unrecoverable kprobe detected.\n");
		dump_kprobe(p);
		BUG();
		break;
@@ -395,9 +395,9 @@ static void __kprobes kprobe_handler(struct pt_regs *regs)
		/*
		 * If we have no pre-handler or it returned 0, we
		 * continue with normal processing. If we have a
		 * pre-handler and it returned non-zero, it prepped
		 * for calling the break_handler below on re-entry,
		 * so get out doing nothing more here.
		 * pre-handler and it returned non-zero, it will
		 * modify the execution path and no need to single
		 * stepping. Let's just reset current kprobe and exit.
		 *
		 * pre_handler can hit a breakpoint and can step thru
		 * before return, keep PSTATE D-flag enabled until

@@ -405,16 +405,8 @@ static void __kprobes kprobe_handler(struct pt_regs *regs)
		 */
		if (!p->pre_handler || !p->pre_handler(p, regs)) {
			setup_singlestep(p, regs, kcb, 0);
return; } else
} reset_current_kprobe();
}
} else if ((le32_to_cpu(*(kprobe_opcode_t *) addr) ==
BRK64_OPCODE_KPROBES) && cur_kprobe) {
/* We probably hit a jprobe. Call its break handler. */
if (cur_kprobe->break_handler &&
cur_kprobe->break_handler(cur_kprobe, regs)) {
setup_singlestep(cur_kprobe, regs, kcb, 0);
return;
} }
} }
/* /*
...@@ -465,74 +457,6 @@ kprobe_breakpoint_handler(struct pt_regs *regs, unsigned int esr) ...@@ -465,74 +457,6 @@ kprobe_breakpoint_handler(struct pt_regs *regs, unsigned int esr)
return DBG_HOOK_HANDLED; return DBG_HOOK_HANDLED;
} }
int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
{
struct jprobe *jp = container_of(p, struct jprobe, kp);
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
kcb->jprobe_saved_regs = *regs;
/*
* Since we can't be sure where in the stack frame "stacked"
* pass-by-value arguments are stored we just don't try to
* duplicate any of the stack. Do not use jprobes on functions that
* use more than 64 bytes (after padding each to an 8 byte boundary)
* of arguments, or pass individual arguments larger than 16 bytes.
*/
instruction_pointer_set(regs, (unsigned long) jp->entry);
preempt_disable();
pause_graph_tracing();
return 1;
}
void __kprobes jprobe_return(void)
{
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
/*
* Jprobe handler return by entering break exception,
* encoded same as kprobe, but with following conditions
* -a special PC to identify it from the other kprobes.
* -restore stack addr to original saved pt_regs
*/
asm volatile(" mov sp, %0 \n"
"jprobe_return_break: brk %1 \n"
:
: "r" (kcb->jprobe_saved_regs.sp),
"I" (BRK64_ESR_KPROBES)
: "memory");
unreachable();
}
int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
{
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
long stack_addr = kcb->jprobe_saved_regs.sp;
long orig_sp = kernel_stack_pointer(regs);
struct jprobe *jp = container_of(p, struct jprobe, kp);
extern const char jprobe_return_break[];
if (instruction_pointer(regs) != (u64) jprobe_return_break)
return 0;
if (orig_sp != stack_addr) {
struct pt_regs *saved_regs =
(struct pt_regs *)kcb->jprobe_saved_regs.sp;
pr_err("current sp %lx does not match saved sp %lx\n",
orig_sp, stack_addr);
pr_err("Saved registers for jprobe %p\n", jp);
__show_regs(saved_regs);
pr_err("Current registers\n");
__show_regs(regs);
BUG();
}
unpause_graph_tracing();
*regs = kcb->jprobe_saved_regs;
preempt_enable_no_resched();
return 1;
}
bool arch_within_kprobe_blacklist(unsigned long addr) bool arch_within_kprobe_blacklist(unsigned long addr)
{ {
if ((addr >= (unsigned long)__kprobes_text_start && if ((addr >= (unsigned long)__kprobes_text_start &&
......
@@ -82,8 +82,6 @@ struct prev_kprobe {
#define ARCH_PREV_KPROBE_SZ 2
struct kprobe_ctlblk {
	unsigned long kprobe_status;
	struct pt_regs jprobe_saved_regs;
	unsigned long jprobes_saved_stacked_regs[MAX_PARAM_RSE_SIZE];
	unsigned long *bsp;
	unsigned long cfm;
	atomic_t prev_kprobe_index;
...
@@ -14,7 +14,6 @@
 */
#define __IA64_BREAK_KDB 0x80100
#define __IA64_BREAK_KPROBE 0x81000 /* .. 0x81fff */
#define __IA64_BREAK_JPROBE 0x82000

/*
 * OS-specific break numbers:
...
@@ -25,7 +25,7 @@ obj-$(CONFIG_NUMA) += numa.o
obj-$(CONFIG_PERFMON) += perfmon_default_smpl.o
obj-$(CONFIG_IA64_CYCLONE) += cyclone.o
obj-$(CONFIG_IA64_MCA_RECOVERY) += mca_recovery.o
obj-$(CONFIG_KPROBES) += kprobes.o jprobes.o
obj-$(CONFIG_KPROBES) += kprobes.o
obj-$(CONFIG_DYNAMIC_FTRACE) += ftrace.o
obj-$(CONFIG_KEXEC) += machine_kexec.o relocate_kernel.o crash.o
obj-$(CONFIG_CRASH_DUMP) += crash_dump.o
...
/*
* Jprobe specific operations
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*
* Copyright (C) Intel Corporation, 2005
*
* 2005-May Rusty Lynch <rusty.lynch@intel.com> and Anil S Keshavamurthy
* <anil.s.keshavamurthy@intel.com> initial implementation
*
* Jprobes (a.k.a. "jump probes" which is built on-top of kprobes) allow a
* probe to be inserted into the beginning of a function call. The fundamental
* difference between a jprobe and a kprobe is the jprobe handler is executed
* in the same context as the target function, while the kprobe handlers
* are executed in interrupt context.
*
* For jprobes we initially gain control by placing a break point in the
* first instruction of the targeted function. When we catch that specific
* break, we:
* * set the return address to our jprobe_inst_return() function
* * jump to the jprobe handler function
*
* Since we fixed up the return address, the jprobe handler will return to our
* jprobe_inst_return() function, giving us control again. At this point we
* are back in the parents frame marker, so we do yet another call to our
* jprobe_break() function to fix up the frame marker as it would normally
* exist in the target function.
*
* Our jprobe_return function then transfers control back to kprobes.c by
* executing a break instruction using one of our reserved numbers. When we
* catch that break in kprobes.c, we continue like we do for a normal kprobe
* by single stepping the emulated instruction, and then returning execution
* to the correct location.
*/
#include <asm/asmmacro.h>
#include <asm/break.h>
/*
* void jprobe_break(void)
*/
.section .kprobes.text, "ax"
ENTRY(jprobe_break)
break.m __IA64_BREAK_JPROBE
END(jprobe_break)
/*
* void jprobe_inst_return(void)
*/
GLOBAL_ENTRY(jprobe_inst_return)
br.call.sptk.many b0=jprobe_break
END(jprobe_inst_return)
GLOBAL_ENTRY(invalidate_stacked_regs)
movl r16=invalidate_restore_cfm
;;
mov b6=r16
;;
br.ret.sptk.many b6
;;
invalidate_restore_cfm:
mov r16=ar.rsc
;;
mov ar.rsc=r0
;;
loadrs
;;
mov ar.rsc=r16
;;
br.cond.sptk.many rp
END(invalidate_stacked_regs)
GLOBAL_ENTRY(flush_register_stack)
// flush dirty regs to backing store (must be first in insn group)
flushrs
;;
br.ret.sptk.many rp
END(flush_register_stack)
@@ -35,8 +35,6 @@
#include <asm/sections.h>
#include <asm/exception.h>

extern void jprobe_inst_return(void);

DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);

@@ -480,12 +478,9 @@ int __kprobes trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs)
		 */
		break;
	}

	kretprobe_assert(ri, orig_ret_address, trampoline_address);

	reset_current_kprobe();

	kretprobe_hash_unlock(current, &flags);
	preempt_enable_no_resched();

	hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) {
		hlist_del(&ri->hlist);

@@ -819,14 +814,6 @@ static int __kprobes pre_kprobes_handler(struct die_args *args)
prepare_ss(p, regs); prepare_ss(p, regs);
kcb->kprobe_status = KPROBE_REENTER; kcb->kprobe_status = KPROBE_REENTER;
return 1; return 1;
} else if (args->err == __IA64_BREAK_JPROBE) {
/*
* jprobe instrumented function just completed
*/
p = __this_cpu_read(current_kprobe);
if (p->break_handler && p->break_handler(p, regs)) {
goto ss_probe;
}
} else if (!is_ia64_break_inst(regs)) { } else if (!is_ia64_break_inst(regs)) {
/* The breakpoint instruction was removed by /* The breakpoint instruction was removed by
* another cpu right after we hit, no further * another cpu right after we hit, no further
...@@ -861,15 +848,12 @@ static int __kprobes pre_kprobes_handler(struct die_args *args) ...@@ -861,15 +848,12 @@ static int __kprobes pre_kprobes_handler(struct die_args *args)
set_current_kprobe(p, kcb); set_current_kprobe(p, kcb);
kcb->kprobe_status = KPROBE_HIT_ACTIVE; kcb->kprobe_status = KPROBE_HIT_ACTIVE;
if (p->pre_handler && p->pre_handler(p, regs)) if (p->pre_handler && p->pre_handler(p, regs)) {
/* reset_current_kprobe();
* Our pre-handler is specifically requesting that we just preempt_enable_no_resched();
* do a return. This is used for both the jprobe pre-handler
* and the kretprobe trampoline
*/
return 1; return 1;
}
ss_probe:
#if !defined(CONFIG_PREEMPT) #if !defined(CONFIG_PREEMPT)
if (p->ainsn.inst_flag == INST_FLAG_BOOSTABLE && !p->post_handler) { if (p->ainsn.inst_flag == INST_FLAG_BOOSTABLE && !p->post_handler) {
/* Boost up -- we can execute copied instructions directly */ /* Boost up -- we can execute copied instructions directly */
...@@ -992,7 +976,6 @@ int __kprobes kprobe_exceptions_notify(struct notifier_block *self, ...@@ -992,7 +976,6 @@ int __kprobes kprobe_exceptions_notify(struct notifier_block *self,
case DIE_BREAK: case DIE_BREAK:
/* err is break number from ia64_bad_break() */ /* err is break number from ia64_bad_break() */
if ((args->err >> 12) == (__IA64_BREAK_KPROBE >> 12) if ((args->err >> 12) == (__IA64_BREAK_KPROBE >> 12)
|| args->err == __IA64_BREAK_JPROBE
|| args->err == 0) || args->err == 0)
if (pre_kprobes_handler(args)) if (pre_kprobes_handler(args))
ret = NOTIFY_STOP; ret = NOTIFY_STOP;
...@@ -1040,74 +1023,6 @@ unsigned long arch_deref_entry_point(void *entry) ...@@ -1040,74 +1023,6 @@ unsigned long arch_deref_entry_point(void *entry)
return ((struct fnptr *)entry)->ip; return ((struct fnptr *)entry)->ip;
} }
int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
{
struct jprobe *jp = container_of(p, struct jprobe, kp);
unsigned long addr = arch_deref_entry_point(jp->entry);
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
struct param_bsp_cfm pa;
int bytes;
/*
* Callee owns the argument space and could overwrite it, eg
* tail call optimization. So to be absolutely safe
* we save the argument space before transferring the control
* to instrumented jprobe function which runs in
* the process context
*/
pa.ip = regs->cr_iip;
unw_init_running(ia64_get_bsp_cfm, &pa);
bytes = (char *)ia64_rse_skip_regs(pa.bsp, pa.cfm & 0x3f)
- (char *)pa.bsp;
memcpy( kcb->jprobes_saved_stacked_regs,
pa.bsp,
bytes );
kcb->bsp = pa.bsp;
kcb->cfm = pa.cfm;
/* save architectural state */
kcb->jprobe_saved_regs = *regs;
/* after rfi, execute the jprobe instrumented function */
regs->cr_iip = addr & ~0xFULL;
ia64_psr(regs)->ri = addr & 0xf;
regs->r1 = ((struct fnptr *)(jp->entry))->gp;
/*
* fix the return address to our jprobe_inst_return() function
* in the jprobes.S file
*/
regs->b0 = ((struct fnptr *)(jprobe_inst_return))->ip;
return 1;
}
/* ia64 does not need this */
void __kprobes jprobe_return(void)
{
}
int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
{
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
int bytes;
/* restoring architectural state */
*regs = kcb->jprobe_saved_regs;
/* restoring the original argument space */
flush_register_stack();
bytes = (char *)ia64_rse_skip_regs(kcb->bsp, kcb->cfm & 0x3f)
- (char *)kcb->bsp;
memcpy( kcb->bsp,
kcb->jprobes_saved_stacked_regs,
bytes );
invalidate_stacked_regs();
preempt_enable_no_resched();
return 1;
}
static struct kprobe trampoline_p = { static struct kprobe trampoline_p = {
.pre_handler = trampoline_probe_handler .pre_handler = trampoline_probe_handler
}; };
......
@@ -68,16 +68,6 @@ struct prev_kprobe {
	unsigned long saved_epc;
};

#define MAX_JPROBES_STACK_SIZE 128
#define MAX_JPROBES_STACK_ADDR \
	(((unsigned long)current_thread_info()) + THREAD_SIZE - 32 - sizeof(struct pt_regs))

#define MIN_JPROBES_STACK_SIZE(ADDR) \
	((((ADDR) + MAX_JPROBES_STACK_SIZE) > MAX_JPROBES_STACK_ADDR) \
		? MAX_JPROBES_STACK_ADDR - (ADDR) \
		: MAX_JPROBES_STACK_SIZE)

#define SKIP_DELAYSLOT 0x0001

/* per-cpu kprobe control block */
@@ -86,12 +76,9 @@ struct kprobe_ctlblk {
	unsigned long kprobe_old_SR;
	unsigned long kprobe_saved_SR;
	unsigned long kprobe_saved_epc;
	unsigned long jprobe_saved_sp;
	struct pt_regs jprobe_saved_regs;
	/* Per-thread fields, used while emulating branches */
	unsigned long flags;
	unsigned long target_epc;
	u8 jprobes_stack[MAX_JPROBES_STACK_SIZE];
	struct prev_kprobe prev_kprobe;
};
...
...@@ -326,19 +326,13 @@ static int __kprobes kprobe_handler(struct pt_regs *regs) ...@@ -326,19 +326,13 @@ static int __kprobes kprobe_handler(struct pt_regs *regs)
preempt_enable_no_resched(); preempt_enable_no_resched();
} }
return 1; return 1;
} else { } else if (addr->word != breakpoint_insn.word) {
if (addr->word != breakpoint_insn.word) { /*
/* * The breakpoint instruction was removed by
* The breakpoint instruction was removed by * another cpu right after we hit, no further
* another cpu right after we hit, no further * handling of this interrupt is appropriate
* handling of this interrupt is appropriate */
*/ ret = 1;
ret = 1;
goto no_kprobe;
}
p = __this_cpu_read(current_kprobe);
if (p->break_handler && p->break_handler(p, regs))
goto ss_probe;
} }
goto no_kprobe; goto no_kprobe;
} }
...@@ -364,10 +358,11 @@ static int __kprobes kprobe_handler(struct pt_regs *regs) ...@@ -364,10 +358,11 @@ static int __kprobes kprobe_handler(struct pt_regs *regs)
if (p->pre_handler && p->pre_handler(p, regs)) { if (p->pre_handler && p->pre_handler(p, regs)) {
/* handler has already set things up, so skip ss setup */ /* handler has already set things up, so skip ss setup */
reset_current_kprobe();
preempt_enable_no_resched();
return 1; return 1;
} }
ss_probe:
prepare_singlestep(p, regs, kcb); prepare_singlestep(p, regs, kcb);
if (kcb->flags & SKIP_DELAYSLOT) { if (kcb->flags & SKIP_DELAYSLOT) {
kcb->kprobe_status = KPROBE_HIT_SSDONE; kcb->kprobe_status = KPROBE_HIT_SSDONE;
...@@ -468,51 +463,6 @@ int __kprobes kprobe_exceptions_notify(struct notifier_block *self, ...@@ -468,51 +463,6 @@ int __kprobes kprobe_exceptions_notify(struct notifier_block *self,
return ret; return ret;
} }
int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
{
struct jprobe *jp = container_of(p, struct jprobe, kp);
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
kcb->jprobe_saved_regs = *regs;
kcb->jprobe_saved_sp = regs->regs[29];
memcpy(kcb->jprobes_stack, (void *)kcb->jprobe_saved_sp,
MIN_JPROBES_STACK_SIZE(kcb->jprobe_saved_sp));
regs->cp0_epc = (unsigned long)(jp->entry);
return 1;
}
/* Defined in the inline asm below. */
void jprobe_return_end(void);
void __kprobes jprobe_return(void)
{
/* Assembler quirk necessitates this '0,code' business. */
asm volatile(
"break 0,%0\n\t"
".globl jprobe_return_end\n"
"jprobe_return_end:\n"
: : "n" (BRK_KPROBE_BP) : "memory");
}
int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
{
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
if (regs->cp0_epc >= (unsigned long)jprobe_return &&
regs->cp0_epc <= (unsigned long)jprobe_return_end) {
*regs = kcb->jprobe_saved_regs;
memcpy((void *)kcb->jprobe_saved_sp, kcb->jprobes_stack,
MIN_JPROBES_STACK_SIZE(kcb->jprobe_saved_sp));
preempt_enable_no_resched();
return 1;
}
return 0;
}
/* /*
* Function return probe trampoline: * Function return probe trampoline:
* - init_kprobes() establishes a probepoint here * - init_kprobes() establishes a probepoint here
@@ -595,9 +545,7 @@ static int __kprobes trampoline_probe_handler(struct kprobe *p,
	kretprobe_assert(ri, orig_ret_address, trampoline_address);
	instruction_pointer(regs) = orig_ret_address;

	reset_current_kprobe();
	kretprobe_hash_unlock(current, &flags);
	preempt_enable_no_resched();

	hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) {
		hlist_del(&ri->hlist);
...
@@ -52,6 +52,7 @@ struct arch_hw_breakpoint {
#include <asm/reg.h>
#include <asm/debug.h>

struct perf_event_attr;
struct perf_event;
struct pmu;
struct perf_sample_data;

@@ -60,8 +61,10 @@ struct perf_sample_data;
extern int hw_breakpoint_slots(int type);
extern int arch_bp_generic_fields(int type, int *gen_bp_type);
extern int arch_check_bp_in_kernelspace(struct perf_event *bp);
extern int arch_check_bp_in_kernelspace(struct arch_hw_breakpoint *hw);
extern int arch_validate_hwbkpt_settings(struct perf_event *bp);
extern int hw_breakpoint_arch_parse(struct perf_event *bp,
				    const struct perf_event_attr *attr,
				    struct arch_hw_breakpoint *hw);
extern int hw_breakpoint_exceptions_notify(struct notifier_block *unused,
					   unsigned long val, void *data);
int arch_install_hw_breakpoint(struct perf_event *bp);
...
@@ -88,7 +88,6 @@ struct prev_kprobe {
struct kprobe_ctlblk {
	unsigned long kprobe_status;
	unsigned long kprobe_saved_msr;
	struct pt_regs jprobe_saved_regs;
	struct prev_kprobe prev_kprobe;
};

@@ -103,17 +102,6 @@ extern int kprobe_exceptions_notify(struct notifier_block *self,
extern int kprobe_fault_handler(struct pt_regs *regs, int trapnr);
extern int kprobe_handler(struct pt_regs *regs);
extern int kprobe_post_handler(struct pt_regs *regs);
#ifdef CONFIG_KPROBES_ON_FTRACE
extern int __is_active_jprobe(unsigned long addr);
extern int skip_singlestep(struct kprobe *p, struct pt_regs *regs,
struct kprobe_ctlblk *kcb);
#else
static inline int skip_singlestep(struct kprobe *p, struct pt_regs *regs,
struct kprobe_ctlblk *kcb)
{
return 0;
}
#endif
#else
static inline int kprobe_handler(struct pt_regs *regs) { return 0; }
static inline int kprobe_post_handler(struct pt_regs *regs) { return 0; }
...
...@@ -119,11 +119,9 @@ void arch_unregister_hw_breakpoint(struct perf_event *bp) ...@@ -119,11 +119,9 @@ void arch_unregister_hw_breakpoint(struct perf_event *bp)
/* /*
* Check for virtual address in kernel space. * Check for virtual address in kernel space.
*/ */
int arch_check_bp_in_kernelspace(struct perf_event *bp) int arch_check_bp_in_kernelspace(struct arch_hw_breakpoint *hw)
{ {
struct arch_hw_breakpoint *info = counter_arch_bp(bp); return is_kernel_addr(hw->address);
return is_kernel_addr(info->address);
} }
int arch_bp_generic_fields(int type, int *gen_bp_type) int arch_bp_generic_fields(int type, int *gen_bp_type)
...@@ -141,30 +139,31 @@ int arch_bp_generic_fields(int type, int *gen_bp_type) ...@@ -141,30 +139,31 @@ int arch_bp_generic_fields(int type, int *gen_bp_type)
/* /*
* Validate the arch-specific HW Breakpoint register settings * Validate the arch-specific HW Breakpoint register settings
*/ */
int arch_validate_hwbkpt_settings(struct perf_event *bp) int hw_breakpoint_arch_parse(struct perf_event *bp,
const struct perf_event_attr *attr,
struct arch_hw_breakpoint *hw)
{ {
int ret = -EINVAL, length_max; int ret = -EINVAL, length_max;
struct arch_hw_breakpoint *info = counter_arch_bp(bp);
if (!bp) if (!bp)
return ret; return ret;
info->type = HW_BRK_TYPE_TRANSLATE; hw->type = HW_BRK_TYPE_TRANSLATE;
if (bp->attr.bp_type & HW_BREAKPOINT_R) if (attr->bp_type & HW_BREAKPOINT_R)
info->type |= HW_BRK_TYPE_READ; hw->type |= HW_BRK_TYPE_READ;
if (bp->attr.bp_type & HW_BREAKPOINT_W) if (attr->bp_type & HW_BREAKPOINT_W)
info->type |= HW_BRK_TYPE_WRITE; hw->type |= HW_BRK_TYPE_WRITE;
if (info->type == HW_BRK_TYPE_TRANSLATE) if (hw->type == HW_BRK_TYPE_TRANSLATE)
/* must set at least read or write */ /* must set at least read or write */
return ret; return ret;
if (!(bp->attr.exclude_user)) if (!attr->exclude_user)
info->type |= HW_BRK_TYPE_USER; hw->type |= HW_BRK_TYPE_USER;
if (!(bp->attr.exclude_kernel)) if (!attr->exclude_kernel)
info->type |= HW_BRK_TYPE_KERNEL; hw->type |= HW_BRK_TYPE_KERNEL;
if (!(bp->attr.exclude_hv)) if (!attr->exclude_hv)
info->type |= HW_BRK_TYPE_HYP; hw->type |= HW_BRK_TYPE_HYP;
info->address = bp->attr.bp_addr; hw->address = attr->bp_addr;
info->len = bp->attr.bp_len; hw->len = attr->bp_len;
/* /*
* Since breakpoint length can be a maximum of HW_BREAKPOINT_LEN(8) * Since breakpoint length can be a maximum of HW_BREAKPOINT_LEN(8)
...@@ -178,12 +177,12 @@ int arch_validate_hwbkpt_settings(struct perf_event *bp) ...@@ -178,12 +177,12 @@ int arch_validate_hwbkpt_settings(struct perf_event *bp)
if (cpu_has_feature(CPU_FTR_DAWR)) { if (cpu_has_feature(CPU_FTR_DAWR)) {
length_max = 512 ; /* 64 doublewords */ length_max = 512 ; /* 64 doublewords */
/* DAWR region can't cross 512 boundary */ /* DAWR region can't cross 512 boundary */
if ((bp->attr.bp_addr >> 9) != if ((attr->bp_addr >> 9) !=
((bp->attr.bp_addr + bp->attr.bp_len - 1) >> 9)) ((attr->bp_addr + attr->bp_len - 1) >> 9))
return -EINVAL; return -EINVAL;
} }
if (info->len > if (hw->len >
(length_max - (info->address & HW_BREAKPOINT_ALIGN))) (length_max - (hw->address & HW_BREAKPOINT_ALIGN)))
return -EINVAL; return -EINVAL;
return 0; return 0;
} }
......
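The powerpc hunk above (and the matching sh and x86 hunks further down) split the old arch_validate_hwbkpt_settings() into hw_breakpoint_arch_parse(), which decodes the perf_event_attr into a scratch arch_hw_breakpoint, and arch_check_bp_in_kernelspace(), which now inspects that struct directly. A minimal caller sketch, assuming the generic layer only commits the scratch copy into bp->hw.info once every check has passed; the helper name parse_bp and the exact privilege handling are illustrative, not taken from this series:

    #include <linux/perf_event.h>
    #include <linux/hw_breakpoint.h>

    /* hypothetical generic-layer helper, not part of this patch set */
    static int parse_bp(struct perf_event *bp,
                        const struct perf_event_attr *attr)
    {
            struct arch_hw_breakpoint hw = { };
            int err;

            /* arch decodes attr into the scratch copy */
            err = hw_breakpoint_arch_parse(bp, attr, &hw);
            if (err)
                    return err;

            /* kernel-space breakpoints must not set exclude_kernel;
             * the capability check is elided in this sketch */
            if (arch_check_bp_in_kernelspace(&hw) && attr->exclude_kernel)
                    return -EINVAL;

            /* only now touch the live event */
            bp->hw.info = hw;
            return 0;
    }

A failed modification therefore leaves the existing breakpoint untouched, which is the point of parsing into a separate arch_hw_breakpoint instead of writing into bp->hw.info directly.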
...@@ -25,50 +25,6 @@ ...@@ -25,50 +25,6 @@
#include <linux/preempt.h> #include <linux/preempt.h>
#include <linux/ftrace.h> #include <linux/ftrace.h>
/*
* This is called from ftrace code after invoking registered handlers to
* disambiguate regs->nip changes done by jprobes and livepatch. We check if
* there is an active jprobe at the provided address (mcount location).
*/
int __is_active_jprobe(unsigned long addr)
{
if (!preemptible()) {
struct kprobe *p = raw_cpu_read(current_kprobe);
return (p && (unsigned long)p->addr == addr) ? 1 : 0;
}
return 0;
}
static nokprobe_inline
int __skip_singlestep(struct kprobe *p, struct pt_regs *regs,
struct kprobe_ctlblk *kcb, unsigned long orig_nip)
{
/*
* Emulate singlestep (and also recover regs->nip)
* as if there is a nop
*/
regs->nip = (unsigned long)p->addr + MCOUNT_INSN_SIZE;
if (unlikely(p->post_handler)) {
kcb->kprobe_status = KPROBE_HIT_SSDONE;
p->post_handler(p, regs, 0);
}
__this_cpu_write(current_kprobe, NULL);
if (orig_nip)
regs->nip = orig_nip;
return 1;
}
int skip_singlestep(struct kprobe *p, struct pt_regs *regs,
struct kprobe_ctlblk *kcb)
{
if (kprobe_ftrace(p))
return __skip_singlestep(p, regs, kcb, 0);
else
return 0;
}
NOKPROBE_SYMBOL(skip_singlestep);
/* Ftrace callback handler for kprobes */ /* Ftrace callback handler for kprobes */
void kprobe_ftrace_handler(unsigned long nip, unsigned long parent_nip, void kprobe_ftrace_handler(unsigned long nip, unsigned long parent_nip,
struct ftrace_ops *ops, struct pt_regs *regs) struct ftrace_ops *ops, struct pt_regs *regs)
...@@ -76,18 +32,14 @@ void kprobe_ftrace_handler(unsigned long nip, unsigned long parent_nip, ...@@ -76,18 +32,14 @@ void kprobe_ftrace_handler(unsigned long nip, unsigned long parent_nip,
struct kprobe *p; struct kprobe *p;
struct kprobe_ctlblk *kcb; struct kprobe_ctlblk *kcb;
preempt_disable();
p = get_kprobe((kprobe_opcode_t *)nip); p = get_kprobe((kprobe_opcode_t *)nip);
if (unlikely(!p) || kprobe_disabled(p)) if (unlikely(!p) || kprobe_disabled(p))
goto end; return;
kcb = get_kprobe_ctlblk(); kcb = get_kprobe_ctlblk();
if (kprobe_running()) { if (kprobe_running()) {
kprobes_inc_nmissed_count(p); kprobes_inc_nmissed_count(p);
} else { } else {
unsigned long orig_nip = regs->nip;
/* /*
* On powerpc, NIP is *before* this instruction for the * On powerpc, NIP is *before* this instruction for the
* pre handler * pre handler
...@@ -96,19 +48,23 @@ void kprobe_ftrace_handler(unsigned long nip, unsigned long parent_nip, ...@@ -96,19 +48,23 @@ void kprobe_ftrace_handler(unsigned long nip, unsigned long parent_nip,
__this_cpu_write(current_kprobe, p); __this_cpu_write(current_kprobe, p);
kcb->kprobe_status = KPROBE_HIT_ACTIVE; kcb->kprobe_status = KPROBE_HIT_ACTIVE;
if (!p->pre_handler || !p->pre_handler(p, regs)) if (!p->pre_handler || !p->pre_handler(p, regs)) {
__skip_singlestep(p, regs, kcb, orig_nip);
else {
/* /*
* If pre_handler returns !0, it sets regs->nip and * Emulate singlestep (and also recover regs->nip)
* resets current kprobe. In this case, we should not * as if there is a nop
* re-enable preemption.
*/ */
return; regs->nip += MCOUNT_INSN_SIZE;
if (unlikely(p->post_handler)) {
kcb->kprobe_status = KPROBE_HIT_SSDONE;
p->post_handler(p, regs, 0);
}
} }
/*
* If pre_handler returns !0, it changes regs->nip. We have to
* skip emulating post_handler.
*/
__this_cpu_write(current_kprobe, NULL);
} }
end:
preempt_enable_no_resched();
} }
NOKPROBE_SYMBOL(kprobe_ftrace_handler); NOKPROBE_SYMBOL(kprobe_ftrace_handler);
......
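With the jprobe glue gone, the ftrace-based handler above relies purely on the pre_handler return value: a nonzero return means the handler has already changed regs->nip, so kprobes neither emulates the probed instruction nor runs the post_handler. A hedged sketch of such a redirecting pre_handler; the target function is hypothetical, and on ELFv2 a real redirect would also have to load r12 with the new entry point, as the removed jprobe code did:

    #include <linux/kprobes.h>

    /* hypothetical routine the probe redirects to */
    void my_replacement_func(void);

    static int redirect_pre(struct kprobe *p, struct pt_regs *regs)
    {
            /* powerpc: NIP is the instruction pointer */
            regs->nip = (unsigned long)my_replacement_func;
            return 1;   /* !0: skip single-stepping and the post_handler */
    }
    NOKPROBE_SYMBOL(redirect_pre);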
...@@ -317,25 +317,17 @@ int kprobe_handler(struct pt_regs *regs) ...@@ -317,25 +317,17 @@ int kprobe_handler(struct pt_regs *regs)
} }
prepare_singlestep(p, regs); prepare_singlestep(p, regs);
return 1; return 1;
} else { } else if (*addr != BREAKPOINT_INSTRUCTION) {
if (*addr != BREAKPOINT_INSTRUCTION) { /* If trap variant, then it belongs not to us */
/* If trap variant, then it belongs not to us */ kprobe_opcode_t cur_insn = *addr;
kprobe_opcode_t cur_insn = *addr;
if (is_trap(cur_insn)) if (is_trap(cur_insn))
goto no_kprobe;
/* The breakpoint instruction was removed by
* another cpu right after we hit, no further
* handling of this interrupt is appropriate
*/
ret = 1;
goto no_kprobe; goto no_kprobe;
} /* The breakpoint instruction was removed by
p = __this_cpu_read(current_kprobe); * another cpu right after we hit, no further
if (p->break_handler && p->break_handler(p, regs)) { * handling of this interrupt is appropriate
if (!skip_singlestep(p, regs, kcb)) */
goto ss_probe; ret = 1;
ret = 1;
}
} }
goto no_kprobe; goto no_kprobe;
} }
...@@ -350,7 +342,7 @@ int kprobe_handler(struct pt_regs *regs) ...@@ -350,7 +342,7 @@ int kprobe_handler(struct pt_regs *regs)
*/ */
kprobe_opcode_t cur_insn = *addr; kprobe_opcode_t cur_insn = *addr;
if (is_trap(cur_insn)) if (is_trap(cur_insn))
goto no_kprobe; goto no_kprobe;
/* /*
* The breakpoint instruction was removed right * The breakpoint instruction was removed right
* after we hit it. Another cpu has removed * after we hit it. Another cpu has removed
...@@ -366,11 +358,13 @@ int kprobe_handler(struct pt_regs *regs) ...@@ -366,11 +358,13 @@ int kprobe_handler(struct pt_regs *regs)
kcb->kprobe_status = KPROBE_HIT_ACTIVE; kcb->kprobe_status = KPROBE_HIT_ACTIVE;
set_current_kprobe(p, regs, kcb); set_current_kprobe(p, regs, kcb);
if (p->pre_handler && p->pre_handler(p, regs)) if (p->pre_handler && p->pre_handler(p, regs)) {
/* handler has already set things up, so skip ss setup */ /* handler changed execution path, so skip ss setup */
reset_current_kprobe();
preempt_enable_no_resched();
return 1; return 1;
}
ss_probe:
if (p->ainsn.boostable >= 0) { if (p->ainsn.boostable >= 0) {
ret = try_to_emulate(p, regs); ret = try_to_emulate(p, regs);
...@@ -611,60 +605,6 @@ unsigned long arch_deref_entry_point(void *entry) ...@@ -611,60 +605,6 @@ unsigned long arch_deref_entry_point(void *entry)
} }
NOKPROBE_SYMBOL(arch_deref_entry_point); NOKPROBE_SYMBOL(arch_deref_entry_point);
int setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
{
struct jprobe *jp = container_of(p, struct jprobe, kp);
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
memcpy(&kcb->jprobe_saved_regs, regs, sizeof(struct pt_regs));
/* setup return addr to the jprobe handler routine */
regs->nip = arch_deref_entry_point(jp->entry);
#ifdef PPC64_ELF_ABI_v2
regs->gpr[12] = (unsigned long)jp->entry;
#elif defined(PPC64_ELF_ABI_v1)
regs->gpr[2] = (unsigned long)(((func_descr_t *)jp->entry)->toc);
#endif
/*
* jprobes use jprobe_return() which skips the normal return
* path of the function, and this messes up the accounting of the
* function graph tracer.
*
* Pause function graph tracing while performing the jprobe function.
*/
pause_graph_tracing();
return 1;
}
NOKPROBE_SYMBOL(setjmp_pre_handler);
void __used jprobe_return(void)
{
asm volatile("jprobe_return_trap:\n"
"trap\n"
::: "memory");
}
NOKPROBE_SYMBOL(jprobe_return);
int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
{
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
if (regs->nip != ppc_kallsyms_lookup_name("jprobe_return_trap")) {
pr_debug("longjmp_break_handler NIP (0x%lx) does not match jprobe_return_trap (0x%lx)\n",
regs->nip, ppc_kallsyms_lookup_name("jprobe_return_trap"));
return 0;
}
memcpy(regs, &kcb->jprobe_saved_regs, sizeof(struct pt_regs));
/* It's OK to start function graph tracing again */
unpause_graph_tracing();
preempt_enable_no_resched();
return 1;
}
NOKPROBE_SYMBOL(longjmp_break_handler);
static struct kprobe trampoline_p = { static struct kprobe trampoline_p = {
.addr = (kprobe_opcode_t *) &kretprobe_trampoline, .addr = (kprobe_opcode_t *) &kretprobe_trampoline,
.pre_handler = trampoline_probe_handler .pre_handler = trampoline_probe_handler
......
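With setjmp_pre_handler()/jprobe_return()/longjmp_break_handler() deleted, argument inspection is done from an ordinary kprobe whose pre_handler reads pt_regs and returns 0. A minimal module sketch under assumed details (probing do_sys_open and treating r3 as the first argument are illustrative choices, not part of this series):

    #include <linux/kprobes.h>
    #include <linux/module.h>

    static int my_pre(struct kprobe *p, struct pt_regs *regs)
    {
            /* first argument is in r3 on powerpc; symbol is an assumption */
            pr_info("%s entered, arg0=0x%lx\n", p->symbol_name, regs->gpr[3]);
            return 0;   /* 0: let kprobes single-step/emulate as usual */
    }

    static struct kprobe kp = {
            .symbol_name = "do_sys_open",
            .pre_handler = my_pre,
    };

    static int __init demo_init(void)
    {
            return register_kprobe(&kp);
    }

    static void __exit demo_exit(void)
    {
            unregister_kprobe(&kp);
    }

    module_init(demo_init);
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");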
...@@ -104,39 +104,13 @@ ftrace_regs_call: ...@@ -104,39 +104,13 @@ ftrace_regs_call:
bl ftrace_stub bl ftrace_stub
nop nop
/* Load the possibly modified NIP */ /* Load ctr with the possibly modified NIP */
ld r15, _NIP(r1) ld r3, _NIP(r1)
mtctr r3
#ifdef CONFIG_LIVEPATCH #ifdef CONFIG_LIVEPATCH
cmpd r14, r15 /* has NIP been altered? */ cmpd r14, r3 /* has NIP been altered? */
#endif #endif
#if defined(CONFIG_LIVEPATCH) && defined(CONFIG_KPROBES_ON_FTRACE)
/* NIP has not been altered, skip over further checks */
beq 1f
/* Check if there is an active jprobe on us */
subi r3, r14, 4
bl __is_active_jprobe
nop
/*
* If r3 == 1, then this is a kprobe/jprobe.
* else, this is livepatched function.
*
* The conditional branch for livepatch_handler below will use the
* result of this comparison. For kprobe/jprobe, we just need to branch to
* the new NIP, not call livepatch_handler. The branch below is bne, so we
* want CR0[EQ] to be true if this is a kprobe/jprobe. Which means we want
* CR0[EQ] = (r3 == 1).
*/
cmpdi r3, 1
1:
#endif
/* Load CTR with the possibly modified NIP */
mtctr r15
/* Restore gprs */ /* Restore gprs */
REST_GPR(0,r1) REST_GPR(0,r1)
REST_10GPRS(2,r1) REST_10GPRS(2,r1)
...@@ -154,10 +128,7 @@ ftrace_regs_call: ...@@ -154,10 +128,7 @@ ftrace_regs_call:
addi r1, r1, SWITCH_FRAME_SIZE addi r1, r1, SWITCH_FRAME_SIZE
#ifdef CONFIG_LIVEPATCH #ifdef CONFIG_LIVEPATCH
/* /* Based on the cmpd above, if the NIP was altered handle livepatch */
* Based on the cmpd or cmpdi above, if the NIP was altered and we're
* not on a kprobe/jprobe, then handle livepatch.
*/
bne- livepatch_handler bne- livepatch_handler
#endif #endif
......
...@@ -1469,7 +1469,7 @@ static int collect_events(struct perf_event *group, int max_count, ...@@ -1469,7 +1469,7 @@ static int collect_events(struct perf_event *group, int max_count,
} }
/* /*
* Add a event to the PMU. * Add an event to the PMU.
* If all events are not already frozen, then we disable and * If all events are not already frozen, then we disable and
* re-enable the PMU in order to get hw_perf_enable to do the * re-enable the PMU in order to get hw_perf_enable to do the
* actual work of reconfiguring the PMU. * actual work of reconfiguring the PMU.
...@@ -1548,7 +1548,7 @@ static int power_pmu_add(struct perf_event *event, int ef_flags) ...@@ -1548,7 +1548,7 @@ static int power_pmu_add(struct perf_event *event, int ef_flags)
} }
/* /*
* Remove a event from the PMU. * Remove an event from the PMU.
*/ */
static void power_pmu_del(struct perf_event *event, int ef_flags) static void power_pmu_del(struct perf_event *event, int ef_flags)
{ {
...@@ -1742,7 +1742,7 @@ static int power_pmu_commit_txn(struct pmu *pmu) ...@@ -1742,7 +1742,7 @@ static int power_pmu_commit_txn(struct pmu *pmu)
/* /*
* Return 1 if we might be able to put event on a limited PMC, * Return 1 if we might be able to put event on a limited PMC,
* or 0 if not. * or 0 if not.
* A event can only go on a limited PMC if it counts something * An event can only go on a limited PMC if it counts something
* that a limited PMC can count, doesn't require interrupts, and * that a limited PMC can count, doesn't require interrupts, and
* doesn't exclude any processor mode. * doesn't exclude any processor mode.
*/ */
......
...@@ -68,8 +68,6 @@ struct kprobe_ctlblk { ...@@ -68,8 +68,6 @@ struct kprobe_ctlblk {
unsigned long kprobe_saved_imask; unsigned long kprobe_saved_imask;
unsigned long kprobe_saved_ctl[3]; unsigned long kprobe_saved_ctl[3];
struct prev_kprobe prev_kprobe; struct prev_kprobe prev_kprobe;
struct pt_regs jprobe_saved_regs;
kprobe_opcode_t jprobes_stack[MAX_STACK_SIZE];
}; };
void arch_remove_kprobe(struct kprobe *p); void arch_remove_kprobe(struct kprobe *p);
......
...@@ -321,38 +321,20 @@ static int kprobe_handler(struct pt_regs *regs) ...@@ -321,38 +321,20 @@ static int kprobe_handler(struct pt_regs *regs)
* If we have no pre-handler or it returned 0, we * If we have no pre-handler or it returned 0, we
* continue with single stepping. If we have a * continue with single stepping. If we have a
* pre-handler and it returned non-zero, it prepped * pre-handler and it returned non-zero, it prepped
* for calling the break_handler below on re-entry * for changing execution path, so get out doing
* for jprobe processing, so get out doing nothing * nothing more here.
* more here.
*/ */
push_kprobe(kcb, p); push_kprobe(kcb, p);
kcb->kprobe_status = KPROBE_HIT_ACTIVE; kcb->kprobe_status = KPROBE_HIT_ACTIVE;
if (p->pre_handler && p->pre_handler(p, regs)) if (p->pre_handler && p->pre_handler(p, regs)) {
pop_kprobe(kcb);
preempt_enable_no_resched();
return 1; return 1;
}
kcb->kprobe_status = KPROBE_HIT_SS; kcb->kprobe_status = KPROBE_HIT_SS;
} }
enable_singlestep(kcb, regs, (unsigned long) p->ainsn.insn); enable_singlestep(kcb, regs, (unsigned long) p->ainsn.insn);
return 1; return 1;
} else if (kprobe_running()) {
p = __this_cpu_read(current_kprobe);
if (p->break_handler && p->break_handler(p, regs)) {
/*
* Continuation after the jprobe completed and
* caused the jprobe_return trap. The jprobe
* break_handler "returns" to the original
* function that still has the kprobe breakpoint
* installed. We continue with single stepping.
*/
kcb->kprobe_status = KPROBE_HIT_SS;
enable_singlestep(kcb, regs,
(unsigned long) p->ainsn.insn);
return 1;
} /* else:
* No kprobe at this address and the current kprobe
* has no break handler (no jprobe!). The kernel just
* exploded, let the standard trap handler pick up the
* pieces.
*/
} /* else: } /* else:
* No kprobe at this address and no active kprobe. The trap has * No kprobe at this address and no active kprobe. The trap has
* not been caused by a kprobe breakpoint. The race of breakpoint * not been caused by a kprobe breakpoint. The race of breakpoint
...@@ -452,9 +434,7 @@ static int trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs) ...@@ -452,9 +434,7 @@ static int trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs)
regs->psw.addr = orig_ret_address; regs->psw.addr = orig_ret_address;
pop_kprobe(get_kprobe_ctlblk());
kretprobe_hash_unlock(current, &flags); kretprobe_hash_unlock(current, &flags);
preempt_enable_no_resched();
hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) { hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) {
hlist_del(&ri->hlist); hlist_del(&ri->hlist);
...@@ -661,60 +641,6 @@ int kprobe_exceptions_notify(struct notifier_block *self, ...@@ -661,60 +641,6 @@ int kprobe_exceptions_notify(struct notifier_block *self,
} }
NOKPROBE_SYMBOL(kprobe_exceptions_notify); NOKPROBE_SYMBOL(kprobe_exceptions_notify);
int setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
{
struct jprobe *jp = container_of(p, struct jprobe, kp);
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
unsigned long stack;
memcpy(&kcb->jprobe_saved_regs, regs, sizeof(struct pt_regs));
/* setup return addr to the jprobe handler routine */
regs->psw.addr = (unsigned long) jp->entry;
regs->psw.mask &= ~(PSW_MASK_IO | PSW_MASK_EXT);
/* r15 is the stack pointer */
stack = (unsigned long) regs->gprs[15];
memcpy(kcb->jprobes_stack, (void *) stack, MIN_STACK_SIZE(stack));
/*
* jprobes use jprobe_return() which skips the normal return
* path of the function, and this messes up the accounting of the
* function graph tracer.
*
* Pause function graph tracing while performing the jprobe function.
*/
pause_graph_tracing();
return 1;
}
NOKPROBE_SYMBOL(setjmp_pre_handler);
void jprobe_return(void)
{
asm volatile(".word 0x0002");
}
NOKPROBE_SYMBOL(jprobe_return);
int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
{
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
unsigned long stack;
/* It's OK to start function graph tracing again */
unpause_graph_tracing();
stack = (unsigned long) kcb->jprobe_saved_regs.gprs[15];
/* Put the regs back */
memcpy(regs, &kcb->jprobe_saved_regs, sizeof(struct pt_regs));
/* put the stack back */
memcpy((void *) stack, kcb->jprobes_stack, MIN_STACK_SIZE(stack));
preempt_enable_no_resched();
return 1;
}
NOKPROBE_SYMBOL(longjmp_break_handler);
static struct kprobe trampoline = { static struct kprobe trampoline = {
.addr = (kprobe_opcode_t *) &kretprobe_trampoline, .addr = (kprobe_opcode_t *) &kretprobe_trampoline,
.pre_handler = trampoline_probe_handler .pre_handler = trampoline_probe_handler
......
...@@ -10,7 +10,6 @@ ...@@ -10,7 +10,6 @@
#include <linux/types.h> #include <linux/types.h>
struct arch_hw_breakpoint { struct arch_hw_breakpoint {
char *name; /* Contains name of the symbol to set bkpt */
unsigned long address; unsigned long address;
u16 len; u16 len;
u16 type; u16 type;
...@@ -41,6 +40,7 @@ struct sh_ubc { ...@@ -41,6 +40,7 @@ struct sh_ubc {
struct clk *clk; /* optional interface clock / MSTP bit */ struct clk *clk; /* optional interface clock / MSTP bit */
}; };
struct perf_event_attr;
struct perf_event; struct perf_event;
struct task_struct; struct task_struct;
struct pmu; struct pmu;
...@@ -54,8 +54,10 @@ static inline int hw_breakpoint_slots(int type) ...@@ -54,8 +54,10 @@ static inline int hw_breakpoint_slots(int type)
} }
/* arch/sh/kernel/hw_breakpoint.c */ /* arch/sh/kernel/hw_breakpoint.c */
extern int arch_check_bp_in_kernelspace(struct perf_event *bp); extern int arch_check_bp_in_kernelspace(struct arch_hw_breakpoint *hw);
extern int arch_validate_hwbkpt_settings(struct perf_event *bp); extern int hw_breakpoint_arch_parse(struct perf_event *bp,
const struct perf_event_attr *attr,
struct arch_hw_breakpoint *hw);
extern int hw_breakpoint_exceptions_notify(struct notifier_block *unused, extern int hw_breakpoint_exceptions_notify(struct notifier_block *unused,
unsigned long val, void *data); unsigned long val, void *data);
......
...@@ -27,7 +27,6 @@ struct kprobe; ...@@ -27,7 +27,6 @@ struct kprobe;
void arch_remove_kprobe(struct kprobe *); void arch_remove_kprobe(struct kprobe *);
void kretprobe_trampoline(void); void kretprobe_trampoline(void);
void jprobe_return_end(void);
/* Architecture specific copy of original instruction*/ /* Architecture specific copy of original instruction*/
struct arch_specific_insn { struct arch_specific_insn {
...@@ -43,9 +42,6 @@ struct prev_kprobe { ...@@ -43,9 +42,6 @@ struct prev_kprobe {
/* per-cpu kprobe control block */ /* per-cpu kprobe control block */
struct kprobe_ctlblk { struct kprobe_ctlblk {
unsigned long kprobe_status; unsigned long kprobe_status;
unsigned long jprobe_saved_r15;
struct pt_regs jprobe_saved_regs;
kprobe_opcode_t jprobes_stack[MAX_STACK_SIZE];
struct prev_kprobe prev_kprobe; struct prev_kprobe prev_kprobe;
}; };
......
...@@ -124,14 +124,13 @@ static int get_hbp_len(u16 hbp_len) ...@@ -124,14 +124,13 @@ static int get_hbp_len(u16 hbp_len)
/* /*
* Check for virtual address in kernel space. * Check for virtual address in kernel space.
*/ */
int arch_check_bp_in_kernelspace(struct perf_event *bp) int arch_check_bp_in_kernelspace(struct arch_hw_breakpoint *hw)
{ {
unsigned int len; unsigned int len;
unsigned long va; unsigned long va;
struct arch_hw_breakpoint *info = counter_arch_bp(bp);
va = info->address; va = hw->address;
len = get_hbp_len(info->len); len = get_hbp_len(hw->len);
return (va >= TASK_SIZE) && ((va + len - 1) >= TASK_SIZE); return (va >= TASK_SIZE) && ((va + len - 1) >= TASK_SIZE);
} }
...@@ -174,40 +173,40 @@ int arch_bp_generic_fields(int sh_len, int sh_type, ...@@ -174,40 +173,40 @@ int arch_bp_generic_fields(int sh_len, int sh_type,
return 0; return 0;
} }
static int arch_build_bp_info(struct perf_event *bp) static int arch_build_bp_info(struct perf_event *bp,
const struct perf_event_attr *attr,
struct arch_hw_breakpoint *hw)
{ {
struct arch_hw_breakpoint *info = counter_arch_bp(bp); hw->address = attr->bp_addr;
info->address = bp->attr.bp_addr;
/* Len */ /* Len */
switch (bp->attr.bp_len) { switch (attr->bp_len) {
case HW_BREAKPOINT_LEN_1: case HW_BREAKPOINT_LEN_1:
info->len = SH_BREAKPOINT_LEN_1; hw->len = SH_BREAKPOINT_LEN_1;
break; break;
case HW_BREAKPOINT_LEN_2: case HW_BREAKPOINT_LEN_2:
info->len = SH_BREAKPOINT_LEN_2; hw->len = SH_BREAKPOINT_LEN_2;
break; break;
case HW_BREAKPOINT_LEN_4: case HW_BREAKPOINT_LEN_4:
info->len = SH_BREAKPOINT_LEN_4; hw->len = SH_BREAKPOINT_LEN_4;
break; break;
case HW_BREAKPOINT_LEN_8: case HW_BREAKPOINT_LEN_8:
info->len = SH_BREAKPOINT_LEN_8; hw->len = SH_BREAKPOINT_LEN_8;
break; break;
default: default:
return -EINVAL; return -EINVAL;
} }
/* Type */ /* Type */
switch (bp->attr.bp_type) { switch (attr->bp_type) {
case HW_BREAKPOINT_R: case HW_BREAKPOINT_R:
info->type = SH_BREAKPOINT_READ; hw->type = SH_BREAKPOINT_READ;
break; break;
case HW_BREAKPOINT_W: case HW_BREAKPOINT_W:
info->type = SH_BREAKPOINT_WRITE; hw->type = SH_BREAKPOINT_WRITE;
break; break;
case HW_BREAKPOINT_W | HW_BREAKPOINT_R: case HW_BREAKPOINT_W | HW_BREAKPOINT_R:
info->type = SH_BREAKPOINT_RW; hw->type = SH_BREAKPOINT_RW;
break; break;
default: default:
return -EINVAL; return -EINVAL;
...@@ -219,19 +218,20 @@ static int arch_build_bp_info(struct perf_event *bp) ...@@ -219,19 +218,20 @@ static int arch_build_bp_info(struct perf_event *bp)
/* /*
* Validate the arch-specific HW Breakpoint register settings * Validate the arch-specific HW Breakpoint register settings
*/ */
int arch_validate_hwbkpt_settings(struct perf_event *bp) int hw_breakpoint_arch_parse(struct perf_event *bp,
const struct perf_event_attr *attr,
struct arch_hw_breakpoint *hw)
{ {
struct arch_hw_breakpoint *info = counter_arch_bp(bp);
unsigned int align; unsigned int align;
int ret; int ret;
ret = arch_build_bp_info(bp); ret = arch_build_bp_info(bp, attr, hw);
if (ret) if (ret)
return ret; return ret;
ret = -EINVAL; ret = -EINVAL;
switch (info->len) { switch (hw->len) {
case SH_BREAKPOINT_LEN_1: case SH_BREAKPOINT_LEN_1:
align = 0; align = 0;
break; break;
...@@ -248,18 +248,11 @@ int arch_validate_hwbkpt_settings(struct perf_event *bp) ...@@ -248,18 +248,11 @@ int arch_validate_hwbkpt_settings(struct perf_event *bp)
return ret; return ret;
} }
/*
* For kernel-addresses, either the address or symbol name can be
* specified.
*/
if (info->name)
info->address = (unsigned long)kallsyms_lookup_name(info->name);
/* /*
* Check that the low-order bits of the address are appropriate * Check that the low-order bits of the address are appropriate
* for the alignment implied by len. * for the alignment implied by len.
*/ */
if (info->address & align) if (hw->address & align)
return -EINVAL; return -EINVAL;
return 0; return 0;
...@@ -346,7 +339,7 @@ static int __kprobes hw_breakpoint_handler(struct die_args *args) ...@@ -346,7 +339,7 @@ static int __kprobes hw_breakpoint_handler(struct die_args *args)
perf_bp_event(bp, args->regs); perf_bp_event(bp, args->regs);
/* Deliver the signal to userspace */ /* Deliver the signal to userspace */
if (!arch_check_bp_in_kernelspace(bp)) { if (!arch_check_bp_in_kernelspace(&bp->hw.info)) {
force_sig_fault(SIGTRAP, TRAP_HWBKPT, force_sig_fault(SIGTRAP, TRAP_HWBKPT,
(void __user *)NULL, current); (void __user *)NULL, current);
} }
......
...@@ -248,11 +248,6 @@ static int __kprobes kprobe_handler(struct pt_regs *regs) ...@@ -248,11 +248,6 @@ static int __kprobes kprobe_handler(struct pt_regs *regs)
prepare_singlestep(p, regs); prepare_singlestep(p, regs);
kcb->kprobe_status = KPROBE_REENTER; kcb->kprobe_status = KPROBE_REENTER;
return 1; return 1;
} else {
p = __this_cpu_read(current_kprobe);
if (p->break_handler && p->break_handler(p, regs)) {
goto ss_probe;
}
} }
goto no_kprobe; goto no_kprobe;
} }
...@@ -277,11 +272,13 @@ static int __kprobes kprobe_handler(struct pt_regs *regs) ...@@ -277,11 +272,13 @@ static int __kprobes kprobe_handler(struct pt_regs *regs)
set_current_kprobe(p, regs, kcb); set_current_kprobe(p, regs, kcb);
kcb->kprobe_status = KPROBE_HIT_ACTIVE; kcb->kprobe_status = KPROBE_HIT_ACTIVE;
if (p->pre_handler && p->pre_handler(p, regs)) if (p->pre_handler && p->pre_handler(p, regs)) {
/* handler has already set things up, so skip ss setup */ /* handler has already set things up, so skip ss setup */
reset_current_kprobe();
preempt_enable_no_resched();
return 1; return 1;
}
ss_probe:
prepare_singlestep(p, regs); prepare_singlestep(p, regs);
kcb->kprobe_status = KPROBE_HIT_SS; kcb->kprobe_status = KPROBE_HIT_SS;
return 1; return 1;
...@@ -358,8 +355,6 @@ int __kprobes trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs) ...@@ -358,8 +355,6 @@ int __kprobes trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs)
regs->pc = orig_ret_address; regs->pc = orig_ret_address;
kretprobe_hash_unlock(current, &flags); kretprobe_hash_unlock(current, &flags);
preempt_enable_no_resched();
hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) { hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) {
hlist_del(&ri->hlist); hlist_del(&ri->hlist);
kfree(ri); kfree(ri);
...@@ -508,14 +503,8 @@ int __kprobes kprobe_exceptions_notify(struct notifier_block *self, ...@@ -508,14 +503,8 @@ int __kprobes kprobe_exceptions_notify(struct notifier_block *self,
if (post_kprobe_handler(args->regs)) if (post_kprobe_handler(args->regs))
ret = NOTIFY_STOP; ret = NOTIFY_STOP;
} else { } else {
if (kprobe_handler(args->regs)) { if (kprobe_handler(args->regs))
ret = NOTIFY_STOP; ret = NOTIFY_STOP;
} else {
p = __this_cpu_read(current_kprobe);
if (p->break_handler &&
p->break_handler(p, args->regs))
ret = NOTIFY_STOP;
}
} }
} }
} }
...@@ -523,57 +512,6 @@ int __kprobes kprobe_exceptions_notify(struct notifier_block *self, ...@@ -523,57 +512,6 @@ int __kprobes kprobe_exceptions_notify(struct notifier_block *self,
return ret; return ret;
} }
int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
{
struct jprobe *jp = container_of(p, struct jprobe, kp);
unsigned long addr;
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
kcb->jprobe_saved_regs = *regs;
kcb->jprobe_saved_r15 = regs->regs[15];
addr = kcb->jprobe_saved_r15;
/*
* TBD: As Linus pointed out, gcc assumes that the callee
* owns the argument space and could overwrite it, e.g.
* tailcall optimization. So, to be absolutely safe
* we also save and restore enough stack bytes to cover
* the argument area.
*/
memcpy(kcb->jprobes_stack, (kprobe_opcode_t *) addr,
MIN_STACK_SIZE(addr));
regs->pc = (unsigned long)(jp->entry);
return 1;
}
void __kprobes jprobe_return(void)
{
asm volatile ("trapa #0x3a\n\t" "jprobe_return_end:\n\t" "nop\n\t");
}
int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
{
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
unsigned long stack_addr = kcb->jprobe_saved_r15;
u8 *addr = (u8 *)regs->pc;
if ((addr >= (u8 *)jprobe_return) &&
(addr <= (u8 *)jprobe_return_end)) {
*regs = kcb->jprobe_saved_regs;
memcpy((kprobe_opcode_t *)stack_addr, kcb->jprobes_stack,
MIN_STACK_SIZE(stack_addr));
kcb->kprobe_status = KPROBE_HIT_SS;
preempt_enable_no_resched();
return 1;
}
return 0;
}
static struct kprobe trampoline_p = { static struct kprobe trampoline_p = {
.addr = (kprobe_opcode_t *)&kretprobe_trampoline, .addr = (kprobe_opcode_t *)&kretprobe_trampoline,
.pre_handler = trampoline_probe_handler .pre_handler = trampoline_probe_handler
......
...@@ -44,7 +44,6 @@ struct kprobe_ctlblk { ...@@ -44,7 +44,6 @@ struct kprobe_ctlblk {
unsigned long kprobe_status; unsigned long kprobe_status;
unsigned long kprobe_orig_tnpc; unsigned long kprobe_orig_tnpc;
unsigned long kprobe_orig_tstate_pil; unsigned long kprobe_orig_tstate_pil;
struct pt_regs jprobe_saved_regs;
struct prev_kprobe prev_kprobe; struct prev_kprobe prev_kprobe;
}; };
......
...@@ -147,18 +147,12 @@ static int __kprobes kprobe_handler(struct pt_regs *regs) ...@@ -147,18 +147,12 @@ static int __kprobes kprobe_handler(struct pt_regs *regs)
kcb->kprobe_status = KPROBE_REENTER; kcb->kprobe_status = KPROBE_REENTER;
prepare_singlestep(p, regs, kcb); prepare_singlestep(p, regs, kcb);
return 1; return 1;
} else { } else if (*(u32 *)addr != BREAKPOINT_INSTRUCTION) {
if (*(u32 *)addr != BREAKPOINT_INSTRUCTION) {
/* The breakpoint instruction was removed by /* The breakpoint instruction was removed by
* another cpu right after we hit, no further * another cpu right after we hit, no further
* handling of this interrupt is appropriate * handling of this interrupt is appropriate
*/ */
ret = 1; ret = 1;
goto no_kprobe;
}
p = __this_cpu_read(current_kprobe);
if (p->break_handler && p->break_handler(p, regs))
goto ss_probe;
} }
goto no_kprobe; goto no_kprobe;
} }
...@@ -181,10 +175,12 @@ static int __kprobes kprobe_handler(struct pt_regs *regs) ...@@ -181,10 +175,12 @@ static int __kprobes kprobe_handler(struct pt_regs *regs)
set_current_kprobe(p, regs, kcb); set_current_kprobe(p, regs, kcb);
kcb->kprobe_status = KPROBE_HIT_ACTIVE; kcb->kprobe_status = KPROBE_HIT_ACTIVE;
if (p->pre_handler && p->pre_handler(p, regs)) if (p->pre_handler && p->pre_handler(p, regs)) {
reset_current_kprobe();
preempt_enable_no_resched();
return 1; return 1;
}
ss_probe:
prepare_singlestep(p, regs, kcb); prepare_singlestep(p, regs, kcb);
kcb->kprobe_status = KPROBE_HIT_SS; kcb->kprobe_status = KPROBE_HIT_SS;
return 1; return 1;
...@@ -441,53 +437,6 @@ asmlinkage void __kprobes kprobe_trap(unsigned long trap_level, ...@@ -441,53 +437,6 @@ asmlinkage void __kprobes kprobe_trap(unsigned long trap_level,
exception_exit(prev_state); exception_exit(prev_state);
} }
/* Jprobes support. */
int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
{
struct jprobe *jp = container_of(p, struct jprobe, kp);
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
memcpy(&(kcb->jprobe_saved_regs), regs, sizeof(*regs));
regs->tpc = (unsigned long) jp->entry;
regs->tnpc = ((unsigned long) jp->entry) + 0x4UL;
regs->tstate |= TSTATE_PIL;
return 1;
}
void __kprobes jprobe_return(void)
{
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
register unsigned long orig_fp asm("g1");
orig_fp = kcb->jprobe_saved_regs.u_regs[UREG_FP];
__asm__ __volatile__("\n"
"1: cmp %%sp, %0\n\t"
"blu,a,pt %%xcc, 1b\n\t"
" restore\n\t"
".globl jprobe_return_trap_instruction\n"
"jprobe_return_trap_instruction:\n\t"
"ta 0x70"
: /* no outputs */
: "r" (orig_fp));
}
extern void jprobe_return_trap_instruction(void);
int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
{
u32 *addr = (u32 *) regs->tpc;
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
if (addr == (u32 *) jprobe_return_trap_instruction) {
memcpy(regs, &(kcb->jprobe_saved_regs), sizeof(*regs));
preempt_enable_no_resched();
return 1;
}
return 0;
}
/* The value stored in the return address register is actually 2 /* The value stored in the return address register is actually 2
* instructions before where the callee will return to. * instructions before where the callee will return to.
* Sequences usually look something like this * Sequences usually look something like this
...@@ -562,9 +511,7 @@ static int __kprobes trampoline_probe_handler(struct kprobe *p, ...@@ -562,9 +511,7 @@ static int __kprobes trampoline_probe_handler(struct kprobe *p,
regs->tpc = orig_ret_address; regs->tpc = orig_ret_address;
regs->tnpc = orig_ret_address + 4; regs->tnpc = orig_ret_address + 4;
reset_current_kprobe();
kretprobe_hash_unlock(current, &flags); kretprobe_hash_unlock(current, &flags);
preempt_enable_no_resched();
hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) { hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) {
hlist_del(&ri->hlist); hlist_del(&ri->hlist);
......
...@@ -2041,15 +2041,15 @@ static void intel_pmu_disable_event(struct perf_event *event) ...@@ -2041,15 +2041,15 @@ static void intel_pmu_disable_event(struct perf_event *event)
cpuc->intel_ctrl_host_mask &= ~(1ull << hwc->idx); cpuc->intel_ctrl_host_mask &= ~(1ull << hwc->idx);
cpuc->intel_cp_status &= ~(1ull << hwc->idx); cpuc->intel_cp_status &= ~(1ull << hwc->idx);
if (unlikely(event->attr.precise_ip))
intel_pmu_pebs_disable(event);
if (unlikely(hwc->config_base == MSR_ARCH_PERFMON_FIXED_CTR_CTRL)) { if (unlikely(hwc->config_base == MSR_ARCH_PERFMON_FIXED_CTR_CTRL)) {
intel_pmu_disable_fixed(hwc); intel_pmu_disable_fixed(hwc);
return; return;
} }
x86_pmu_disable_event(event); x86_pmu_disable_event(event);
if (unlikely(event->attr.precise_ip))
intel_pmu_pebs_disable(event);
} }
static void intel_pmu_del_event(struct perf_event *event) static void intel_pmu_del_event(struct perf_event *event)
...@@ -2068,17 +2068,19 @@ static void intel_pmu_read_event(struct perf_event *event) ...@@ -2068,17 +2068,19 @@ static void intel_pmu_read_event(struct perf_event *event)
x86_perf_event_update(event); x86_perf_event_update(event);
} }
static void intel_pmu_enable_fixed(struct hw_perf_event *hwc) static void intel_pmu_enable_fixed(struct perf_event *event)
{ {
struct hw_perf_event *hwc = &event->hw;
int idx = hwc->idx - INTEL_PMC_IDX_FIXED; int idx = hwc->idx - INTEL_PMC_IDX_FIXED;
u64 ctrl_val, bits, mask; u64 ctrl_val, mask, bits = 0;
/* /*
* Enable IRQ generation (0x8), * Enable IRQ generation (0x8), if not PEBS,
* and enable ring-3 counting (0x2) and ring-0 counting (0x1) * and enable ring-3 counting (0x2) and ring-0 counting (0x1)
* if requested: * if requested:
*/ */
bits = 0x8ULL; if (!event->attr.precise_ip)
bits |= 0x8;
if (hwc->config & ARCH_PERFMON_EVENTSEL_USR) if (hwc->config & ARCH_PERFMON_EVENTSEL_USR)
bits |= 0x2; bits |= 0x2;
if (hwc->config & ARCH_PERFMON_EVENTSEL_OS) if (hwc->config & ARCH_PERFMON_EVENTSEL_OS)
...@@ -2120,14 +2122,14 @@ static void intel_pmu_enable_event(struct perf_event *event) ...@@ -2120,14 +2122,14 @@ static void intel_pmu_enable_event(struct perf_event *event)
if (unlikely(event_is_checkpointed(event))) if (unlikely(event_is_checkpointed(event)))
cpuc->intel_cp_status |= (1ull << hwc->idx); cpuc->intel_cp_status |= (1ull << hwc->idx);
if (unlikely(event->attr.precise_ip))
intel_pmu_pebs_enable(event);
if (unlikely(hwc->config_base == MSR_ARCH_PERFMON_FIXED_CTR_CTRL)) { if (unlikely(hwc->config_base == MSR_ARCH_PERFMON_FIXED_CTR_CTRL)) {
intel_pmu_enable_fixed(hwc); intel_pmu_enable_fixed(event);
return; return;
} }
if (unlikely(event->attr.precise_ip))
intel_pmu_pebs_enable(event);
__x86_pmu_enable_event(hwc, ARCH_PERFMON_EVENTSEL_ENABLE); __x86_pmu_enable_event(hwc, ARCH_PERFMON_EVENTSEL_ENABLE);
} }
...@@ -2280,7 +2282,10 @@ static int intel_pmu_handle_irq(struct pt_regs *regs) ...@@ -2280,7 +2282,10 @@ static int intel_pmu_handle_irq(struct pt_regs *regs)
* counters from the GLOBAL_STATUS mask and we always process PEBS * counters from the GLOBAL_STATUS mask and we always process PEBS
* events via drain_pebs(). * events via drain_pebs().
*/ */
status &= ~(cpuc->pebs_enabled & PEBS_COUNTER_MASK); if (x86_pmu.flags & PMU_FL_PEBS_ALL)
status &= ~cpuc->pebs_enabled;
else
status &= ~(cpuc->pebs_enabled & PEBS_COUNTER_MASK);
/* /*
* PEBS overflow sets bit 62 in the global status register * PEBS overflow sets bit 62 in the global status register
...@@ -4072,7 +4077,6 @@ __init int intel_pmu_init(void) ...@@ -4072,7 +4077,6 @@ __init int intel_pmu_init(void)
intel_pmu_lbr_init_skl(); intel_pmu_lbr_init_skl();
x86_pmu.event_constraints = intel_slm_event_constraints; x86_pmu.event_constraints = intel_slm_event_constraints;
x86_pmu.pebs_constraints = intel_glp_pebs_event_constraints;
x86_pmu.extra_regs = intel_glm_extra_regs; x86_pmu.extra_regs = intel_glm_extra_regs;
/* /*
* It's recommended to use CPU_CLK_UNHALTED.CORE_P + NPEBS * It's recommended to use CPU_CLK_UNHALTED.CORE_P + NPEBS
...@@ -4082,6 +4086,7 @@ __init int intel_pmu_init(void) ...@@ -4082,6 +4086,7 @@ __init int intel_pmu_init(void)
x86_pmu.pebs_prec_dist = true; x86_pmu.pebs_prec_dist = true;
x86_pmu.lbr_pt_coexist = true; x86_pmu.lbr_pt_coexist = true;
x86_pmu.flags |= PMU_FL_HAS_RSP_1; x86_pmu.flags |= PMU_FL_HAS_RSP_1;
x86_pmu.flags |= PMU_FL_PEBS_ALL;
x86_pmu.get_event_constraints = glp_get_event_constraints; x86_pmu.get_event_constraints = glp_get_event_constraints;
x86_pmu.cpu_events = glm_events_attrs; x86_pmu.cpu_events = glm_events_attrs;
/* Goldmont Plus has 4-wide pipeline */ /* Goldmont Plus has 4-wide pipeline */
......
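The rewritten intel_pmu_enable_fixed() now receives the perf_event so it can leave the PMI bit clear for PEBS events. A small sketch of just the nibble computation shown above; the shift of this nibble into the per-counter field of MSR_ARCH_PERFMON_FIXED_CTR_CTRL happens outside the quoted context and is assumed here:

    /* control nibble for one fixed counter, mirroring the hunk above */
    static unsigned int fixed_ctrl_bits(bool precise_ip, bool usr, bool os)
    {
            unsigned int bits = 0;

            if (!precise_ip)
                    bits |= 0x8;    /* PMI on overflow; PEBS events skip it */
            if (usr)
                    bits |= 0x2;    /* count ring 3 */
            if (os)
                    bits |= 0x1;    /* count ring 0 */

            return bits;            /* e.g. !PEBS + usr + os -> 0xb */
    }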
...@@ -713,12 +713,6 @@ struct event_constraint intel_glm_pebs_event_constraints[] = { ...@@ -713,12 +713,6 @@ struct event_constraint intel_glm_pebs_event_constraints[] = {
EVENT_CONSTRAINT_END EVENT_CONSTRAINT_END
}; };
struct event_constraint intel_glp_pebs_event_constraints[] = {
/* Allow all events as PEBS with no flags */
INTEL_ALL_EVENT_CONSTRAINT(0, 0xf),
EVENT_CONSTRAINT_END
};
struct event_constraint intel_nehalem_pebs_event_constraints[] = { struct event_constraint intel_nehalem_pebs_event_constraints[] = {
INTEL_PLD_CONSTRAINT(0x100b, 0xf), /* MEM_INST_RETIRED.* */ INTEL_PLD_CONSTRAINT(0x100b, 0xf), /* MEM_INST_RETIRED.* */
INTEL_FLAGS_EVENT_CONSTRAINT(0x0f, 0xf), /* MEM_UNCORE_RETIRED.* */ INTEL_FLAGS_EVENT_CONSTRAINT(0x0f, 0xf), /* MEM_UNCORE_RETIRED.* */
...@@ -871,6 +865,13 @@ struct event_constraint *intel_pebs_constraints(struct perf_event *event) ...@@ -871,6 +865,13 @@ struct event_constraint *intel_pebs_constraints(struct perf_event *event)
} }
} }
/*
* Extended PEBS support
* Makes the PEBS code search the normal constraints.
*/
if (x86_pmu.flags & PMU_FL_PEBS_ALL)
return NULL;
return &emptyconstraint; return &emptyconstraint;
} }
...@@ -896,10 +897,16 @@ static inline void pebs_update_threshold(struct cpu_hw_events *cpuc) ...@@ -896,10 +897,16 @@ static inline void pebs_update_threshold(struct cpu_hw_events *cpuc)
{ {
struct debug_store *ds = cpuc->ds; struct debug_store *ds = cpuc->ds;
u64 threshold; u64 threshold;
int reserved;
if (x86_pmu.flags & PMU_FL_PEBS_ALL)
reserved = x86_pmu.max_pebs_events + x86_pmu.num_counters_fixed;
else
reserved = x86_pmu.max_pebs_events;
if (cpuc->n_pebs == cpuc->n_large_pebs) { if (cpuc->n_pebs == cpuc->n_large_pebs) {
threshold = ds->pebs_absolute_maximum - threshold = ds->pebs_absolute_maximum -
x86_pmu.max_pebs_events * x86_pmu.pebs_record_size; reserved * x86_pmu.pebs_record_size;
} else { } else {
threshold = ds->pebs_buffer_base + x86_pmu.pebs_record_size; threshold = ds->pebs_buffer_base + x86_pmu.pebs_record_size;
} }
...@@ -963,7 +970,11 @@ void intel_pmu_pebs_enable(struct perf_event *event) ...@@ -963,7 +970,11 @@ void intel_pmu_pebs_enable(struct perf_event *event)
* This must be done in pmu::start(), because PERF_EVENT_IOC_PERIOD. * This must be done in pmu::start(), because PERF_EVENT_IOC_PERIOD.
*/ */
if (hwc->flags & PERF_X86_EVENT_AUTO_RELOAD) { if (hwc->flags & PERF_X86_EVENT_AUTO_RELOAD) {
ds->pebs_event_reset[hwc->idx] = unsigned int idx = hwc->idx;
if (idx >= INTEL_PMC_IDX_FIXED)
idx = MAX_PEBS_EVENTS + (idx - INTEL_PMC_IDX_FIXED);
ds->pebs_event_reset[idx] =
(u64)(-hwc->sample_period) & x86_pmu.cntval_mask; (u64)(-hwc->sample_period) & x86_pmu.cntval_mask;
} else { } else {
ds->pebs_event_reset[hwc->idx] = 0; ds->pebs_event_reset[hwc->idx] = 0;
...@@ -1481,9 +1492,10 @@ static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs) ...@@ -1481,9 +1492,10 @@ static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs)
struct debug_store *ds = cpuc->ds; struct debug_store *ds = cpuc->ds;
struct perf_event *event; struct perf_event *event;
void *base, *at, *top; void *base, *at, *top;
short counts[MAX_PEBS_EVENTS] = {}; short counts[INTEL_PMC_IDX_FIXED + MAX_FIXED_PEBS_EVENTS] = {};
short error[MAX_PEBS_EVENTS] = {}; short error[INTEL_PMC_IDX_FIXED + MAX_FIXED_PEBS_EVENTS] = {};
int bit, i; int bit, i, size;
u64 mask;
if (!x86_pmu.pebs_active) if (!x86_pmu.pebs_active)
return; return;
...@@ -1493,6 +1505,13 @@ static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs) ...@@ -1493,6 +1505,13 @@ static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs)
ds->pebs_index = ds->pebs_buffer_base; ds->pebs_index = ds->pebs_buffer_base;
mask = (1ULL << x86_pmu.max_pebs_events) - 1;
size = x86_pmu.max_pebs_events;
if (x86_pmu.flags & PMU_FL_PEBS_ALL) {
mask |= ((1ULL << x86_pmu.num_counters_fixed) - 1) << INTEL_PMC_IDX_FIXED;
size = INTEL_PMC_IDX_FIXED + x86_pmu.num_counters_fixed;
}
if (unlikely(base >= top)) { if (unlikely(base >= top)) {
/* /*
* The drain_pebs() could be called twice in a short period * The drain_pebs() could be called twice in a short period
...@@ -1502,7 +1521,7 @@ static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs) ...@@ -1502,7 +1521,7 @@ static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs)
* update the event->count for this case. * update the event->count for this case.
*/ */
for_each_set_bit(bit, (unsigned long *)&cpuc->pebs_enabled, for_each_set_bit(bit, (unsigned long *)&cpuc->pebs_enabled,
x86_pmu.max_pebs_events) { size) {
event = cpuc->events[bit]; event = cpuc->events[bit];
if (event->hw.flags & PERF_X86_EVENT_AUTO_RELOAD) if (event->hw.flags & PERF_X86_EVENT_AUTO_RELOAD)
intel_pmu_save_and_restart_reload(event, 0); intel_pmu_save_and_restart_reload(event, 0);
...@@ -1515,12 +1534,12 @@ static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs) ...@@ -1515,12 +1534,12 @@ static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs)
u64 pebs_status; u64 pebs_status;
pebs_status = p->status & cpuc->pebs_enabled; pebs_status = p->status & cpuc->pebs_enabled;
pebs_status &= (1ULL << x86_pmu.max_pebs_events) - 1; pebs_status &= mask;
/* PEBS v3 has more accurate status bits */ /* PEBS v3 has more accurate status bits */
if (x86_pmu.intel_cap.pebs_format >= 3) { if (x86_pmu.intel_cap.pebs_format >= 3) {
for_each_set_bit(bit, (unsigned long *)&pebs_status, for_each_set_bit(bit, (unsigned long *)&pebs_status,
x86_pmu.max_pebs_events) size)
counts[bit]++; counts[bit]++;
continue; continue;
...@@ -1568,7 +1587,7 @@ static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs) ...@@ -1568,7 +1587,7 @@ static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs)
counts[bit]++; counts[bit]++;
} }
for (bit = 0; bit < x86_pmu.max_pebs_events; bit++) { for (bit = 0; bit < size; bit++) {
if ((counts[bit] == 0) && (error[bit] == 0)) if ((counts[bit] == 0) && (error[bit] == 0))
continue; continue;
......
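With PMU_FL_PEBS_ALL, the fixed counters take part in PEBS: their enable/status bits start at INTEL_PMC_IDX_FIXED and their reset values live after the eight general-purpose slots of pebs_event_reset[]. A worked example with assumed counts (4 GP PEBS-capable counters, 3 fixed counters, INTEL_PMC_IDX_FIXED taken as 32):

    #include <linux/types.h>

    /* illustrative only: mirrors the drain-time mask/size computed above */
    static void pebs_all_mask_example(void)
    {
            u64 mask = (1ULL << 4) - 1;          /* GP counters: bits 0-3 */
            int size = 4;

            /* PMU_FL_PEBS_ALL: fixed counters join the scan */
            mask |= ((1ULL << 3) - 1) << 32;     /* 0x000000070000000f */
            size  = 32 + 3;                      /* scan status bits 0..34 */

            /*
             * Fixed counter i auto-reloads through
             * ds->pebs_event_reset[8 + i], which is why the array grows to
             * MAX_PEBS_EVENTS + MAX_FIXED_PEBS_EVENTS entries.
             */
            (void)mask;
            (void)size;
    }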
...@@ -216,6 +216,8 @@ static void intel_pmu_lbr_reset_64(void) ...@@ -216,6 +216,8 @@ static void intel_pmu_lbr_reset_64(void)
void intel_pmu_lbr_reset(void) void intel_pmu_lbr_reset(void)
{ {
struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
if (!x86_pmu.lbr_nr) if (!x86_pmu.lbr_nr)
return; return;
...@@ -223,6 +225,9 @@ void intel_pmu_lbr_reset(void) ...@@ -223,6 +225,9 @@ void intel_pmu_lbr_reset(void)
intel_pmu_lbr_reset_32(); intel_pmu_lbr_reset_32();
else else
intel_pmu_lbr_reset_64(); intel_pmu_lbr_reset_64();
cpuc->last_task_ctx = NULL;
cpuc->last_log_id = 0;
} }
/* /*
...@@ -334,6 +339,7 @@ static inline u64 rdlbr_to(unsigned int idx) ...@@ -334,6 +339,7 @@ static inline u64 rdlbr_to(unsigned int idx)
static void __intel_pmu_lbr_restore(struct x86_perf_task_context *task_ctx) static void __intel_pmu_lbr_restore(struct x86_perf_task_context *task_ctx)
{ {
struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
int i; int i;
unsigned lbr_idx, mask; unsigned lbr_idx, mask;
u64 tos; u64 tos;
...@@ -344,9 +350,21 @@ static void __intel_pmu_lbr_restore(struct x86_perf_task_context *task_ctx) ...@@ -344,9 +350,21 @@ static void __intel_pmu_lbr_restore(struct x86_perf_task_context *task_ctx)
return; return;
} }
mask = x86_pmu.lbr_nr - 1;
tos = task_ctx->tos; tos = task_ctx->tos;
for (i = 0; i < tos; i++) { /*
* Does not restore the LBR registers, if
* - No one else touched them, and
* - Did not enter C6
*/
if ((task_ctx == cpuc->last_task_ctx) &&
(task_ctx->log_id == cpuc->last_log_id) &&
rdlbr_from(tos)) {
task_ctx->lbr_stack_state = LBR_NONE;
return;
}
mask = x86_pmu.lbr_nr - 1;
for (i = 0; i < task_ctx->valid_lbrs; i++) {
lbr_idx = (tos - i) & mask; lbr_idx = (tos - i) & mask;
wrlbr_from(lbr_idx, task_ctx->lbr_from[i]); wrlbr_from(lbr_idx, task_ctx->lbr_from[i]);
wrlbr_to (lbr_idx, task_ctx->lbr_to[i]); wrlbr_to (lbr_idx, task_ctx->lbr_to[i]);
...@@ -354,14 +372,24 @@ static void __intel_pmu_lbr_restore(struct x86_perf_task_context *task_ctx) ...@@ -354,14 +372,24 @@ static void __intel_pmu_lbr_restore(struct x86_perf_task_context *task_ctx)
if (x86_pmu.intel_cap.lbr_format == LBR_FORMAT_INFO) if (x86_pmu.intel_cap.lbr_format == LBR_FORMAT_INFO)
wrmsrl(MSR_LBR_INFO_0 + lbr_idx, task_ctx->lbr_info[i]); wrmsrl(MSR_LBR_INFO_0 + lbr_idx, task_ctx->lbr_info[i]);
} }
for (; i < x86_pmu.lbr_nr; i++) {
lbr_idx = (tos - i) & mask;
wrlbr_from(lbr_idx, 0);
wrlbr_to(lbr_idx, 0);
if (x86_pmu.intel_cap.lbr_format == LBR_FORMAT_INFO)
wrmsrl(MSR_LBR_INFO_0 + lbr_idx, 0);
}
wrmsrl(x86_pmu.lbr_tos, tos); wrmsrl(x86_pmu.lbr_tos, tos);
task_ctx->lbr_stack_state = LBR_NONE; task_ctx->lbr_stack_state = LBR_NONE;
} }
static void __intel_pmu_lbr_save(struct x86_perf_task_context *task_ctx) static void __intel_pmu_lbr_save(struct x86_perf_task_context *task_ctx)
{ {
struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
unsigned lbr_idx, mask; unsigned lbr_idx, mask;
u64 tos; u64 tos, from;
int i; int i;
if (task_ctx->lbr_callstack_users == 0) { if (task_ctx->lbr_callstack_users == 0) {
...@@ -371,15 +399,22 @@ static void __intel_pmu_lbr_save(struct x86_perf_task_context *task_ctx) ...@@ -371,15 +399,22 @@ static void __intel_pmu_lbr_save(struct x86_perf_task_context *task_ctx)
mask = x86_pmu.lbr_nr - 1; mask = x86_pmu.lbr_nr - 1;
tos = intel_pmu_lbr_tos(); tos = intel_pmu_lbr_tos();
for (i = 0; i < tos; i++) { for (i = 0; i < x86_pmu.lbr_nr; i++) {
lbr_idx = (tos - i) & mask; lbr_idx = (tos - i) & mask;
task_ctx->lbr_from[i] = rdlbr_from(lbr_idx); from = rdlbr_from(lbr_idx);
if (!from)
break;
task_ctx->lbr_from[i] = from;
task_ctx->lbr_to[i] = rdlbr_to(lbr_idx); task_ctx->lbr_to[i] = rdlbr_to(lbr_idx);
if (x86_pmu.intel_cap.lbr_format == LBR_FORMAT_INFO) if (x86_pmu.intel_cap.lbr_format == LBR_FORMAT_INFO)
rdmsrl(MSR_LBR_INFO_0 + lbr_idx, task_ctx->lbr_info[i]); rdmsrl(MSR_LBR_INFO_0 + lbr_idx, task_ctx->lbr_info[i]);
} }
task_ctx->valid_lbrs = i;
task_ctx->tos = tos; task_ctx->tos = tos;
task_ctx->lbr_stack_state = LBR_VALID; task_ctx->lbr_stack_state = LBR_VALID;
cpuc->last_task_ctx = task_ctx;
cpuc->last_log_id = ++task_ctx->log_id;
} }
void intel_pmu_lbr_sched_task(struct perf_event_context *ctx, bool sched_in) void intel_pmu_lbr_sched_task(struct perf_event_context *ctx, bool sched_in)
...@@ -531,7 +566,7 @@ static void intel_pmu_lbr_read_32(struct cpu_hw_events *cpuc) ...@@ -531,7 +566,7 @@ static void intel_pmu_lbr_read_32(struct cpu_hw_events *cpuc)
*/ */
static void intel_pmu_lbr_read_64(struct cpu_hw_events *cpuc) static void intel_pmu_lbr_read_64(struct cpu_hw_events *cpuc)
{ {
bool need_info = false; bool need_info = false, call_stack = false;
unsigned long mask = x86_pmu.lbr_nr - 1; unsigned long mask = x86_pmu.lbr_nr - 1;
int lbr_format = x86_pmu.intel_cap.lbr_format; int lbr_format = x86_pmu.intel_cap.lbr_format;
u64 tos = intel_pmu_lbr_tos(); u64 tos = intel_pmu_lbr_tos();
...@@ -542,7 +577,7 @@ static void intel_pmu_lbr_read_64(struct cpu_hw_events *cpuc) ...@@ -542,7 +577,7 @@ static void intel_pmu_lbr_read_64(struct cpu_hw_events *cpuc)
if (cpuc->lbr_sel) { if (cpuc->lbr_sel) {
need_info = !(cpuc->lbr_sel->config & LBR_NO_INFO); need_info = !(cpuc->lbr_sel->config & LBR_NO_INFO);
if (cpuc->lbr_sel->config & LBR_CALL_STACK) if (cpuc->lbr_sel->config & LBR_CALL_STACK)
num = tos; call_stack = true;
} }
for (i = 0; i < num; i++) { for (i = 0; i < num; i++) {
...@@ -555,6 +590,13 @@ static void intel_pmu_lbr_read_64(struct cpu_hw_events *cpuc) ...@@ -555,6 +590,13 @@ static void intel_pmu_lbr_read_64(struct cpu_hw_events *cpuc)
from = rdlbr_from(lbr_idx); from = rdlbr_from(lbr_idx);
to = rdlbr_to(lbr_idx); to = rdlbr_to(lbr_idx);
/*
* Read LBR call stack entries
* until invalid entry (0s) is detected.
*/
if (call_stack && !from)
break;
if (lbr_format == LBR_FORMAT_INFO && need_info) { if (lbr_format == LBR_FORMAT_INFO && need_info) {
u64 info; u64 info;
......
...@@ -163,6 +163,7 @@ struct intel_excl_cntrs { ...@@ -163,6 +163,7 @@ struct intel_excl_cntrs {
unsigned core_id; /* per-core: core id */ unsigned core_id; /* per-core: core id */
}; };
struct x86_perf_task_context;
#define MAX_LBR_ENTRIES 32 #define MAX_LBR_ENTRIES 32
enum { enum {
...@@ -214,6 +215,8 @@ struct cpu_hw_events { ...@@ -214,6 +215,8 @@ struct cpu_hw_events {
struct perf_branch_entry lbr_entries[MAX_LBR_ENTRIES]; struct perf_branch_entry lbr_entries[MAX_LBR_ENTRIES];
struct er_account *lbr_sel; struct er_account *lbr_sel;
u64 br_sel; u64 br_sel;
struct x86_perf_task_context *last_task_ctx;
int last_log_id;
/* /*
* Intel host/guest exclude bits * Intel host/guest exclude bits
...@@ -648,8 +651,10 @@ struct x86_perf_task_context { ...@@ -648,8 +651,10 @@ struct x86_perf_task_context {
u64 lbr_to[MAX_LBR_ENTRIES]; u64 lbr_to[MAX_LBR_ENTRIES];
u64 lbr_info[MAX_LBR_ENTRIES]; u64 lbr_info[MAX_LBR_ENTRIES];
int tos; int tos;
int valid_lbrs;
int lbr_callstack_users; int lbr_callstack_users;
int lbr_stack_state; int lbr_stack_state;
int log_id;
}; };
#define x86_add_quirk(func_) \ #define x86_add_quirk(func_) \
...@@ -668,6 +673,7 @@ do { \ ...@@ -668,6 +673,7 @@ do { \
#define PMU_FL_HAS_RSP_1 0x2 /* has 2 equivalent offcore_rsp regs */ #define PMU_FL_HAS_RSP_1 0x2 /* has 2 equivalent offcore_rsp regs */
#define PMU_FL_EXCL_CNTRS 0x4 /* has exclusive counter requirements */ #define PMU_FL_EXCL_CNTRS 0x4 /* has exclusive counter requirements */
#define PMU_FL_EXCL_ENABLED 0x8 /* exclusive counter active */ #define PMU_FL_EXCL_ENABLED 0x8 /* exclusive counter active */
#define PMU_FL_PEBS_ALL 0x10 /* all events are valid PEBS events */
#define EVENT_VAR(_id) event_attr_##_id #define EVENT_VAR(_id) event_attr_##_id
#define EVENT_PTR(_id) &event_attr_##_id.attr.attr #define EVENT_PTR(_id) &event_attr_##_id.attr.attr
......
...@@ -49,11 +49,14 @@ static inline int hw_breakpoint_slots(int type) ...@@ -49,11 +49,14 @@ static inline int hw_breakpoint_slots(int type)
return HBP_NUM; return HBP_NUM;
} }
struct perf_event_attr;
struct perf_event; struct perf_event;
struct pmu; struct pmu;
extern int arch_check_bp_in_kernelspace(struct perf_event *bp); extern int arch_check_bp_in_kernelspace(struct arch_hw_breakpoint *hw);
extern int arch_validate_hwbkpt_settings(struct perf_event *bp); extern int hw_breakpoint_arch_parse(struct perf_event *bp,
const struct perf_event_attr *attr,
struct arch_hw_breakpoint *hw);
extern int hw_breakpoint_exceptions_notify(struct notifier_block *unused, extern int hw_breakpoint_exceptions_notify(struct notifier_block *unused,
unsigned long val, void *data); unsigned long val, void *data);
......
...@@ -8,6 +8,7 @@ ...@@ -8,6 +8,7 @@
/* The maximal number of PEBS events: */ /* The maximal number of PEBS events: */
#define MAX_PEBS_EVENTS 8 #define MAX_PEBS_EVENTS 8
#define MAX_FIXED_PEBS_EVENTS 3
/* /*
* A debug store configuration. * A debug store configuration.
...@@ -23,7 +24,7 @@ struct debug_store { ...@@ -23,7 +24,7 @@ struct debug_store {
u64 pebs_index; u64 pebs_index;
u64 pebs_absolute_maximum; u64 pebs_absolute_maximum;
u64 pebs_interrupt_threshold; u64 pebs_interrupt_threshold;
u64 pebs_event_reset[MAX_PEBS_EVENTS]; u64 pebs_event_reset[MAX_PEBS_EVENTS + MAX_FIXED_PEBS_EVENTS];
} __aligned(PAGE_SIZE); } __aligned(PAGE_SIZE);
DECLARE_PER_CPU_PAGE_ALIGNED(struct debug_store, cpu_debug_store); DECLARE_PER_CPU_PAGE_ALIGNED(struct debug_store, cpu_debug_store);
......
...@@ -78,7 +78,7 @@ struct arch_specific_insn { ...@@ -78,7 +78,7 @@ struct arch_specific_insn {
* boostable = true: This instruction has been boosted: we have * boostable = true: This instruction has been boosted: we have
* added a relative jump after the instruction copy in insn, * added a relative jump after the instruction copy in insn,
* so no single-step and fixup are needed (unless there's * so no single-step and fixup are needed (unless there's
* a post_handler or break_handler). * a post_handler).
*/ */
bool boostable; bool boostable;
bool if_modifier; bool if_modifier;
...@@ -111,9 +111,6 @@ struct kprobe_ctlblk { ...@@ -111,9 +111,6 @@ struct kprobe_ctlblk {
unsigned long kprobe_status; unsigned long kprobe_status;
unsigned long kprobe_old_flags; unsigned long kprobe_old_flags;
unsigned long kprobe_saved_flags; unsigned long kprobe_saved_flags;
unsigned long *jprobe_saved_sp;
struct pt_regs jprobe_saved_regs;
kprobe_opcode_t jprobes_stack[MAX_STACK_SIZE];
struct prev_kprobe prev_kprobe; struct prev_kprobe prev_kprobe;
}; };
......
...@@ -169,28 +169,29 @@ void arch_uninstall_hw_breakpoint(struct perf_event *bp) ...@@ -169,28 +169,29 @@ void arch_uninstall_hw_breakpoint(struct perf_event *bp)
set_dr_addr_mask(0, i); set_dr_addr_mask(0, i);
} }
/* static int arch_bp_generic_len(int x86_len)
* Check for virtual address in kernel space.
*/
int arch_check_bp_in_kernelspace(struct perf_event *bp)
{ {
unsigned int len; switch (x86_len) {
unsigned long va; case X86_BREAKPOINT_LEN_1:
struct arch_hw_breakpoint *info = counter_arch_bp(bp); return HW_BREAKPOINT_LEN_1;
case X86_BREAKPOINT_LEN_2:
va = info->address; return HW_BREAKPOINT_LEN_2;
len = bp->attr.bp_len; case X86_BREAKPOINT_LEN_4:
return HW_BREAKPOINT_LEN_4;
/* #ifdef CONFIG_X86_64
* We don't need to worry about va + len - 1 overflowing: case X86_BREAKPOINT_LEN_8:
* we already require that va is aligned to a multiple of len. return HW_BREAKPOINT_LEN_8;
*/ #endif
return (va >= TASK_SIZE_MAX) || ((va + len - 1) >= TASK_SIZE_MAX); default:
return -EINVAL;
}
} }
int arch_bp_generic_fields(int x86_len, int x86_type, int arch_bp_generic_fields(int x86_len, int x86_type,
int *gen_len, int *gen_type) int *gen_len, int *gen_type)
{ {
int len;
/* Type */ /* Type */
switch (x86_type) { switch (x86_type) {
case X86_BREAKPOINT_EXECUTE: case X86_BREAKPOINT_EXECUTE:
...@@ -211,42 +212,47 @@ int arch_bp_generic_fields(int x86_len, int x86_type, ...@@ -211,42 +212,47 @@ int arch_bp_generic_fields(int x86_len, int x86_type,
} }
/* Len */ /* Len */
switch (x86_len) { len = arch_bp_generic_len(x86_len);
case X86_BREAKPOINT_LEN_1: if (len < 0)
*gen_len = HW_BREAKPOINT_LEN_1;
break;
case X86_BREAKPOINT_LEN_2:
*gen_len = HW_BREAKPOINT_LEN_2;
break;
case X86_BREAKPOINT_LEN_4:
*gen_len = HW_BREAKPOINT_LEN_4;
break;
#ifdef CONFIG_X86_64
case X86_BREAKPOINT_LEN_8:
*gen_len = HW_BREAKPOINT_LEN_8;
break;
#endif
default:
return -EINVAL; return -EINVAL;
} *gen_len = len;
return 0; return 0;
} }
/*
static int arch_build_bp_info(struct perf_event *bp) * Check for virtual address in kernel space.
*/
int arch_check_bp_in_kernelspace(struct arch_hw_breakpoint *hw)
{ {
struct arch_hw_breakpoint *info = counter_arch_bp(bp); unsigned long va;
int len;
info->address = bp->attr.bp_addr; va = hw->address;
len = arch_bp_generic_len(hw->len);
WARN_ON_ONCE(len < 0);
/*
* We don't need to worry about va + len - 1 overflowing:
* we already require that va is aligned to a multiple of len.
*/
return (va >= TASK_SIZE_MAX) || ((va + len - 1) >= TASK_SIZE_MAX);
}
static int arch_build_bp_info(struct perf_event *bp,
const struct perf_event_attr *attr,
struct arch_hw_breakpoint *hw)
{
hw->address = attr->bp_addr;
hw->mask = 0;
/* Type */ /* Type */
switch (bp->attr.bp_type) { switch (attr->bp_type) {
case HW_BREAKPOINT_W: case HW_BREAKPOINT_W:
info->type = X86_BREAKPOINT_WRITE; hw->type = X86_BREAKPOINT_WRITE;
break; break;
case HW_BREAKPOINT_W | HW_BREAKPOINT_R: case HW_BREAKPOINT_W | HW_BREAKPOINT_R:
info->type = X86_BREAKPOINT_RW; hw->type = X86_BREAKPOINT_RW;
break; break;
case HW_BREAKPOINT_X: case HW_BREAKPOINT_X:
/* /*
...@@ -254,23 +260,23 @@ static int arch_build_bp_info(struct perf_event *bp) ...@@ -254,23 +260,23 @@ static int arch_build_bp_info(struct perf_event *bp)
* acceptable for kprobes. On non-kprobes kernels, we don't * acceptable for kprobes. On non-kprobes kernels, we don't
* allow kernel breakpoints at all. * allow kernel breakpoints at all.
*/ */
if (bp->attr.bp_addr >= TASK_SIZE_MAX) { if (attr->bp_addr >= TASK_SIZE_MAX) {
#ifdef CONFIG_KPROBES #ifdef CONFIG_KPROBES
if (within_kprobe_blacklist(bp->attr.bp_addr)) if (within_kprobe_blacklist(attr->bp_addr))
return -EINVAL; return -EINVAL;
#else #else
return -EINVAL; return -EINVAL;
#endif #endif
} }
info->type = X86_BREAKPOINT_EXECUTE; hw->type = X86_BREAKPOINT_EXECUTE;
/* /*
* x86 inst breakpoints need to have a specific undefined len. * x86 inst breakpoints need to have a specific undefined len.
* But we still need to check userspace is not trying to setup * But we still need to check userspace is not trying to setup
* an unsupported length, to get a range breakpoint for example. * an unsupported length, to get a range breakpoint for example.
*/ */
if (bp->attr.bp_len == sizeof(long)) { if (attr->bp_len == sizeof(long)) {
info->len = X86_BREAKPOINT_LEN_X; hw->len = X86_BREAKPOINT_LEN_X;
return 0; return 0;
} }
default: default:
...@@ -278,28 +284,26 @@ static int arch_build_bp_info(struct perf_event *bp) ...@@ -278,28 +284,26 @@ static int arch_build_bp_info(struct perf_event *bp)
} }
/* Len */ /* Len */
info->mask = 0; switch (attr->bp_len) {
switch (bp->attr.bp_len) {
case HW_BREAKPOINT_LEN_1: case HW_BREAKPOINT_LEN_1:
info->len = X86_BREAKPOINT_LEN_1; hw->len = X86_BREAKPOINT_LEN_1;
break; break;
case HW_BREAKPOINT_LEN_2: case HW_BREAKPOINT_LEN_2:
info->len = X86_BREAKPOINT_LEN_2; hw->len = X86_BREAKPOINT_LEN_2;
break; break;
case HW_BREAKPOINT_LEN_4: case HW_BREAKPOINT_LEN_4:
info->len = X86_BREAKPOINT_LEN_4; hw->len = X86_BREAKPOINT_LEN_4;
break; break;
#ifdef CONFIG_X86_64 #ifdef CONFIG_X86_64
case HW_BREAKPOINT_LEN_8: case HW_BREAKPOINT_LEN_8:
info->len = X86_BREAKPOINT_LEN_8; hw->len = X86_BREAKPOINT_LEN_8;
break; break;
#endif #endif
default: default:
/* AMD range breakpoint */ /* AMD range breakpoint */
if (!is_power_of_2(bp->attr.bp_len)) if (!is_power_of_2(attr->bp_len))
return -EINVAL; return -EINVAL;
if (bp->attr.bp_addr & (bp->attr.bp_len - 1)) if (attr->bp_addr & (attr->bp_len - 1))
return -EINVAL; return -EINVAL;
if (!boot_cpu_has(X86_FEATURE_BPEXT)) if (!boot_cpu_has(X86_FEATURE_BPEXT))
...@@ -312,8 +316,8 @@ static int arch_build_bp_info(struct perf_event *bp) ...@@ -312,8 +316,8 @@ static int arch_build_bp_info(struct perf_event *bp)
* breakpoints, then we'll have to check for kprobe-blacklisted * breakpoints, then we'll have to check for kprobe-blacklisted
* addresses anywhere in the range. * addresses anywhere in the range.
*/ */
info->mask = bp->attr.bp_len - 1; hw->mask = attr->bp_len - 1;
info->len = X86_BREAKPOINT_LEN_1; hw->len = X86_BREAKPOINT_LEN_1;
} }
return 0; return 0;
...@@ -322,22 +326,23 @@ static int arch_build_bp_info(struct perf_event *bp) ...@@ -322,22 +326,23 @@ static int arch_build_bp_info(struct perf_event *bp)
/* /*
* Validate the arch-specific HW Breakpoint register settings * Validate the arch-specific HW Breakpoint register settings
*/ */
int arch_validate_hwbkpt_settings(struct perf_event *bp) int hw_breakpoint_arch_parse(struct perf_event *bp,
const struct perf_event_attr *attr,
struct arch_hw_breakpoint *hw)
{ {
struct arch_hw_breakpoint *info = counter_arch_bp(bp);
unsigned int align; unsigned int align;
int ret; int ret;
ret = arch_build_bp_info(bp); ret = arch_build_bp_info(bp, attr, hw);
if (ret) if (ret)
return ret; return ret;
switch (info->len) { switch (hw->len) {
case X86_BREAKPOINT_LEN_1: case X86_BREAKPOINT_LEN_1:
align = 0; align = 0;
if (info->mask) if (hw->mask)
align = info->mask; align = hw->mask;
break; break;
case X86_BREAKPOINT_LEN_2: case X86_BREAKPOINT_LEN_2:
align = 1; align = 1;
...@@ -358,7 +363,7 @@ int arch_validate_hwbkpt_settings(struct perf_event *bp) ...@@ -358,7 +363,7 @@ int arch_validate_hwbkpt_settings(struct perf_event *bp)
* Check that the low-order bits of the address are appropriate * Check that the low-order bits of the address are appropriate
* for the alignment implied by len. * for the alignment implied by len.
*/ */
if (info->address & align) if (hw->address & align)
return -EINVAL; return -EINVAL;
return 0; return 0;
......
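For orientation, the perf_event_attr fields consumed by hw_breakpoint_arch_parse() above are the same ones an in-kernel user fills in before registering a breakpoint. Below is a minimal, illustrative module sketch (the variable names and the watched object are made up, not part of this patch):

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/perf_event.h>
#include <linux/hw_breakpoint.h>
#include <linux/err.h>

static int watched_value;			/* data we want to watch (illustrative) */
static struct perf_event * __percpu *wp;

static void wp_handler(struct perf_event *bp, struct perf_sample_data *data,
		       struct pt_regs *regs)
{
	pr_info("watched_value was written to\n");
}

static int __init wp_init(void)
{
	struct perf_event_attr attr;

	hw_breakpoint_init(&attr);
	/* These are the fields hw_breakpoint_arch_parse() consumes. */
	attr.bp_addr = (unsigned long)&watched_value;
	attr.bp_len  = HW_BREAKPOINT_LEN_4;
	attr.bp_type = HW_BREAKPOINT_W;

	wp = register_wide_hw_breakpoint(&attr, wp_handler, NULL);
	if (IS_ERR((void __force *)wp))
		return PTR_ERR((void __force *)wp);
	return 0;
}

static void __exit wp_exit(void)
{
	unregister_wide_hw_breakpoint(wp);
}

module_init(wp_init);
module_exit(wp_exit);
MODULE_LICENSE("GPL");

With this series the attr is first parsed into a local struct arch_hw_breakpoint and only copied into bp->hw.info once parsing succeeds, instead of being written into the event in place.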
...@@ -105,14 +105,4 @@ static inline unsigned long __recover_optprobed_insn(kprobe_opcode_t *buf, unsig ...@@ -105,14 +105,4 @@ static inline unsigned long __recover_optprobed_insn(kprobe_opcode_t *buf, unsig
} }
#endif #endif
#ifdef CONFIG_KPROBES_ON_FTRACE
extern int skip_singlestep(struct kprobe *p, struct pt_regs *regs,
struct kprobe_ctlblk *kcb);
#else
static inline int skip_singlestep(struct kprobe *p, struct pt_regs *regs,
struct kprobe_ctlblk *kcb)
{
return 0;
}
#endif
#endif #endif
...@@ -66,8 +66,6 @@ ...@@ -66,8 +66,6 @@
#include "common.h" #include "common.h"
void jprobe_return_end(void);
DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL; DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk); DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
...@@ -395,8 +393,6 @@ int __copy_instruction(u8 *dest, u8 *src, u8 *real, struct insn *insn) ...@@ -395,8 +393,6 @@ int __copy_instruction(u8 *dest, u8 *src, u8 *real, struct insn *insn)
- (u8 *) real; - (u8 *) real;
if ((s64) (s32) newdisp != newdisp) { if ((s64) (s32) newdisp != newdisp) {
pr_err("Kprobes error: new displacement does not fit into s32 (%llx)\n", newdisp); pr_err("Kprobes error: new displacement does not fit into s32 (%llx)\n", newdisp);
pr_err("\tSrc: %p, Dest: %p, old disp: %x\n",
src, real, insn->displacement.value);
return 0; return 0;
} }
disp = (u8 *) dest + insn_offset_displacement(insn); disp = (u8 *) dest + insn_offset_displacement(insn);
...@@ -596,7 +592,6 @@ static void setup_singlestep(struct kprobe *p, struct pt_regs *regs, ...@@ -596,7 +592,6 @@ static void setup_singlestep(struct kprobe *p, struct pt_regs *regs,
* stepping. * stepping.
*/ */
regs->ip = (unsigned long)p->ainsn.insn; regs->ip = (unsigned long)p->ainsn.insn;
preempt_enable_no_resched();
return; return;
} }
#endif #endif
...@@ -640,8 +635,7 @@ static int reenter_kprobe(struct kprobe *p, struct pt_regs *regs, ...@@ -640,8 +635,7 @@ static int reenter_kprobe(struct kprobe *p, struct pt_regs *regs,
* Raise a BUG or we'll continue in an endless reentering loop * Raise a BUG or we'll continue in an endless reentering loop
* and eventually a stack overflow. * and eventually a stack overflow.
*/ */
printk(KERN_WARNING "Unrecoverable kprobe detected at %p.\n", pr_err("Unrecoverable kprobe detected.\n");
p->addr);
dump_kprobe(p); dump_kprobe(p);
BUG(); BUG();
default: default:
...@@ -669,12 +663,10 @@ int kprobe_int3_handler(struct pt_regs *regs) ...@@ -669,12 +663,10 @@ int kprobe_int3_handler(struct pt_regs *regs)
addr = (kprobe_opcode_t *)(regs->ip - sizeof(kprobe_opcode_t)); addr = (kprobe_opcode_t *)(regs->ip - sizeof(kprobe_opcode_t));
/* /*
* We don't want to be preempted for the entire * We don't want to be preempted for the entire duration of kprobe
 * duration of kprobe processing. We conditionally	 * processing. Since int3 and debug trap disable irqs and we clear
 * re-enable preemption at the end of this function,	 * IF while singlestepping, it must not be preemptible.
* and also in reenter_kprobe() and setup_singlestep().
*/ */
preempt_disable();
kcb = get_kprobe_ctlblk(); kcb = get_kprobe_ctlblk();
p = get_kprobe(addr); p = get_kprobe(addr);
...@@ -690,13 +682,14 @@ int kprobe_int3_handler(struct pt_regs *regs) ...@@ -690,13 +682,14 @@ int kprobe_int3_handler(struct pt_regs *regs)
/* /*
* If we have no pre-handler or it returned 0, we * If we have no pre-handler or it returned 0, we
* continue with normal processing. If we have a * continue with normal processing. If we have a
* pre-handler and it returned non-zero, it prepped * pre-handler and it returned non-zero, that means
 * for calling the break_handler below on re-entry	 * the user handler set up registers to exit to another
 * for jprobe processing, so get out doing nothing	 * instruction, so we must skip the single stepping.
* more here.
*/ */
if (!p->pre_handler || !p->pre_handler(p, regs)) if (!p->pre_handler || !p->pre_handler(p, regs))
setup_singlestep(p, regs, kcb, 0); setup_singlestep(p, regs, kcb, 0);
else
reset_current_kprobe();
return 1; return 1;
} }
} else if (*addr != BREAKPOINT_INSTRUCTION) { } else if (*addr != BREAKPOINT_INSTRUCTION) {
...@@ -710,18 +703,9 @@ int kprobe_int3_handler(struct pt_regs *regs) ...@@ -710,18 +703,9 @@ int kprobe_int3_handler(struct pt_regs *regs)
* the original instruction. * the original instruction.
*/ */
regs->ip = (unsigned long)addr; regs->ip = (unsigned long)addr;
preempt_enable_no_resched();
return 1; return 1;
} else if (kprobe_running()) {
p = __this_cpu_read(current_kprobe);
if (p->break_handler && p->break_handler(p, regs)) {
if (!skip_singlestep(p, regs, kcb))
setup_singlestep(p, regs, kcb, 0);
return 1;
}
} /* else: not a kprobe fault; let the kernel handle it */ } /* else: not a kprobe fault; let the kernel handle it */
preempt_enable_no_resched();
return 0; return 0;
} }
NOKPROBE_SYMBOL(kprobe_int3_handler); NOKPROBE_SYMBOL(kprobe_int3_handler);
...@@ -972,8 +956,6 @@ int kprobe_debug_handler(struct pt_regs *regs) ...@@ -972,8 +956,6 @@ int kprobe_debug_handler(struct pt_regs *regs)
} }
reset_current_kprobe(); reset_current_kprobe();
out: out:
preempt_enable_no_resched();
/* /*
* if somebody else is singlestepping across a probe point, flags * if somebody else is singlestepping across a probe point, flags
* will have TF set, in which case, continue the remaining processing * will have TF set, in which case, continue the remaining processing
...@@ -1020,7 +1002,6 @@ int kprobe_fault_handler(struct pt_regs *regs, int trapnr) ...@@ -1020,7 +1002,6 @@ int kprobe_fault_handler(struct pt_regs *regs, int trapnr)
restore_previous_kprobe(kcb); restore_previous_kprobe(kcb);
else else
reset_current_kprobe(); reset_current_kprobe();
preempt_enable_no_resched();
} else if (kcb->kprobe_status == KPROBE_HIT_ACTIVE || } else if (kcb->kprobe_status == KPROBE_HIT_ACTIVE ||
kcb->kprobe_status == KPROBE_HIT_SSDONE) { kcb->kprobe_status == KPROBE_HIT_SSDONE) {
/* /*
...@@ -1083,93 +1064,6 @@ int kprobe_exceptions_notify(struct notifier_block *self, unsigned long val, ...@@ -1083,93 +1064,6 @@ int kprobe_exceptions_notify(struct notifier_block *self, unsigned long val,
} }
NOKPROBE_SYMBOL(kprobe_exceptions_notify); NOKPROBE_SYMBOL(kprobe_exceptions_notify);
int setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
{
struct jprobe *jp = container_of(p, struct jprobe, kp);
unsigned long addr;
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
kcb->jprobe_saved_regs = *regs;
kcb->jprobe_saved_sp = stack_addr(regs);
addr = (unsigned long)(kcb->jprobe_saved_sp);
/*
* As Linus pointed out, gcc assumes that the callee
* owns the argument space and could overwrite it, e.g.
* tailcall optimization. So, to be absolutely safe
* we also save and restore enough stack bytes to cover
* the argument area.
* Use __memcpy() to avoid KASAN stack out-of-bounds reports as we copy
* raw stack chunk with redzones:
*/
__memcpy(kcb->jprobes_stack, (kprobe_opcode_t *)addr, MIN_STACK_SIZE(addr));
regs->ip = (unsigned long)(jp->entry);
/*
* jprobes use jprobe_return() which skips the normal return
* path of the function, and this messes up the accounting of the
 * function graph tracer.
*
* Pause function graph tracing while performing the jprobe function.
*/
pause_graph_tracing();
return 1;
}
NOKPROBE_SYMBOL(setjmp_pre_handler);
void jprobe_return(void)
{
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
/* Unpoison stack redzones in the frames we are going to jump over. */
kasan_unpoison_stack_above_sp_to(kcb->jprobe_saved_sp);
asm volatile (
#ifdef CONFIG_X86_64
" xchg %%rbx,%%rsp \n"
#else
" xchgl %%ebx,%%esp \n"
#endif
" int3 \n"
" .globl jprobe_return_end\n"
" jprobe_return_end: \n"
" nop \n"::"b"
(kcb->jprobe_saved_sp):"memory");
}
NOKPROBE_SYMBOL(jprobe_return);
NOKPROBE_SYMBOL(jprobe_return_end);
int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
{
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
u8 *addr = (u8 *) (regs->ip - 1);
struct jprobe *jp = container_of(p, struct jprobe, kp);
void *saved_sp = kcb->jprobe_saved_sp;
if ((addr > (u8 *) jprobe_return) &&
(addr < (u8 *) jprobe_return_end)) {
if (stack_addr(regs) != saved_sp) {
struct pt_regs *saved_regs = &kcb->jprobe_saved_regs;
printk(KERN_ERR
"current sp %p does not match saved sp %p\n",
stack_addr(regs), saved_sp);
printk(KERN_ERR "Saved registers for jprobe %p\n", jp);
show_regs(saved_regs);
printk(KERN_ERR "Current registers\n");
show_regs(regs);
BUG();
}
/* It's OK to start function graph tracing again */
unpause_graph_tracing();
*regs = kcb->jprobe_saved_regs;
__memcpy(saved_sp, kcb->jprobes_stack, MIN_STACK_SIZE(saved_sp));
preempt_enable_no_resched();
return 1;
}
return 0;
}
NOKPROBE_SYMBOL(longjmp_break_handler);
bool arch_within_kprobe_blacklist(unsigned long addr) bool arch_within_kprobe_blacklist(unsigned long addr)
{ {
bool is_in_entry_trampoline_section = false; bool is_in_entry_trampoline_section = false;
......
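To make the new convention in kprobe_int3_handler() concrete — a pre_handler that returns non-zero is taken to have redirected regs->ip, so the single-step and the post_handler are skipped — here is a deliberately simplified x86 module sketch. The probe point and the detour function are hypothetical, and a real detour must itself take care of the stack frame, arguments and return path:

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/kprobes.h>
#include <linux/ptrace.h>

static void my_detour(void)
{
	/* Hypothetical code executed instead of the probed function. */
	pr_info("execution diverted\n");
}

static int divert_pre_handler(struct kprobe *p, struct pt_regs *regs)
{
	/* Point the saved instruction pointer at our detour (x86 shown)... */
	regs->ip = (unsigned long)my_detour;
	/*
	 * ...and return non-zero: kprobes then resumes at the new address
	 * directly, skipping the single-step and the post_handler.
	 */
	return 1;
}

static struct kprobe divert_kp = {
	.symbol_name = "some_kernel_function",	/* illustrative probe point */
	.pre_handler = divert_pre_handler,
};

static int __init divert_init(void)
{
	return register_kprobe(&divert_kp);
}

static void __exit divert_exit(void)
{
	unregister_kprobe(&divert_kp);
}

module_init(divert_init);
module_exit(divert_exit);
MODULE_LICENSE("GPL");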
...@@ -25,36 +25,6 @@ ...@@ -25,36 +25,6 @@
#include "common.h" #include "common.h"
static nokprobe_inline
void __skip_singlestep(struct kprobe *p, struct pt_regs *regs,
struct kprobe_ctlblk *kcb, unsigned long orig_ip)
{
/*
* Emulate singlestep (and also recover regs->ip)
* as if there is a 5byte nop
*/
regs->ip = (unsigned long)p->addr + MCOUNT_INSN_SIZE;
if (unlikely(p->post_handler)) {
kcb->kprobe_status = KPROBE_HIT_SSDONE;
p->post_handler(p, regs, 0);
}
__this_cpu_write(current_kprobe, NULL);
if (orig_ip)
regs->ip = orig_ip;
}
int skip_singlestep(struct kprobe *p, struct pt_regs *regs,
struct kprobe_ctlblk *kcb)
{
if (kprobe_ftrace(p)) {
__skip_singlestep(p, regs, kcb, 0);
preempt_enable_no_resched();
return 1;
}
return 0;
}
NOKPROBE_SYMBOL(skip_singlestep);
/* Ftrace callback handler for kprobes -- called under preempt disabled */	/* Ftrace callback handler for kprobes -- called under preempt disabled */
void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip, void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *ops, struct pt_regs *regs) struct ftrace_ops *ops, struct pt_regs *regs)
...@@ -75,18 +45,25 @@ void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip, ...@@ -75,18 +45,25 @@ void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
/* Kprobe handler expects regs->ip = ip + 1 as breakpoint hit */ /* Kprobe handler expects regs->ip = ip + 1 as breakpoint hit */
regs->ip = ip + sizeof(kprobe_opcode_t); regs->ip = ip + sizeof(kprobe_opcode_t);
/* To emulate trap based kprobes, preempt_disable here */
preempt_disable();
__this_cpu_write(current_kprobe, p); __this_cpu_write(current_kprobe, p);
kcb->kprobe_status = KPROBE_HIT_ACTIVE; kcb->kprobe_status = KPROBE_HIT_ACTIVE;
if (!p->pre_handler || !p->pre_handler(p, regs)) { if (!p->pre_handler || !p->pre_handler(p, regs)) {
__skip_singlestep(p, regs, kcb, orig_ip); /*
preempt_enable_no_resched(); * Emulate singlestep (and also recover regs->ip)
* as if there is a 5byte nop
*/
regs->ip = (unsigned long)p->addr + MCOUNT_INSN_SIZE;
if (unlikely(p->post_handler)) {
kcb->kprobe_status = KPROBE_HIT_SSDONE;
p->post_handler(p, regs, 0);
}
regs->ip = orig_ip;
} }
/* /*
* If pre_handler returns !0, it sets regs->ip and * If pre_handler returns !0, it changes regs->ip. We have to
* resets current kprobe, and keep preempt count +1. * skip emulating post_handler.
*/ */
__this_cpu_write(current_kprobe, NULL);
} }
} }
NOKPROBE_SYMBOL(kprobe_ftrace_handler); NOKPROBE_SYMBOL(kprobe_ftrace_handler);
......
...@@ -491,7 +491,6 @@ int setup_detour_execution(struct kprobe *p, struct pt_regs *regs, int reenter) ...@@ -491,7 +491,6 @@ int setup_detour_execution(struct kprobe *p, struct pt_regs *regs, int reenter)
regs->ip = (unsigned long)op->optinsn.insn + TMPL_END_IDX; regs->ip = (unsigned long)op->optinsn.insn + TMPL_END_IDX;
if (!reenter) if (!reenter)
reset_current_kprobe(); reset_current_kprobe();
preempt_enable_no_resched();
return 1; return 1;
} }
return 0; return 0;
......
...@@ -30,13 +30,16 @@ struct arch_hw_breakpoint { ...@@ -30,13 +30,16 @@ struct arch_hw_breakpoint {
u16 type; u16 type;
}; };
struct perf_event_attr;
struct perf_event; struct perf_event;
struct pt_regs; struct pt_regs;
struct task_struct; struct task_struct;
int hw_breakpoint_slots(int type); int hw_breakpoint_slots(int type);
int arch_check_bp_in_kernelspace(struct perf_event *bp); int arch_check_bp_in_kernelspace(struct arch_hw_breakpoint *hw);
int arch_validate_hwbkpt_settings(struct perf_event *bp); int hw_breakpoint_arch_parse(struct perf_event *bp,
const struct perf_event_attr *attr,
struct arch_hw_breakpoint *hw);
int hw_breakpoint_exceptions_notify(struct notifier_block *unused, int hw_breakpoint_exceptions_notify(struct notifier_block *unused,
unsigned long val, void *data); unsigned long val, void *data);
......
...@@ -33,14 +33,13 @@ int hw_breakpoint_slots(int type) ...@@ -33,14 +33,13 @@ int hw_breakpoint_slots(int type)
} }
} }
int arch_check_bp_in_kernelspace(struct perf_event *bp) int arch_check_bp_in_kernelspace(struct arch_hw_breakpoint *hw)
{ {
unsigned int len; unsigned int len;
unsigned long va; unsigned long va;
struct arch_hw_breakpoint *info = counter_arch_bp(bp);
va = info->address; va = hw->address;
len = bp->attr.bp_len; len = hw->len;
return (va >= TASK_SIZE) && ((va + len - 1) >= TASK_SIZE); return (va >= TASK_SIZE) && ((va + len - 1) >= TASK_SIZE);
} }
...@@ -48,50 +47,41 @@ int arch_check_bp_in_kernelspace(struct perf_event *bp) ...@@ -48,50 +47,41 @@ int arch_check_bp_in_kernelspace(struct perf_event *bp)
/* /*
* Construct an arch_hw_breakpoint from a perf_event. * Construct an arch_hw_breakpoint from a perf_event.
*/ */
static int arch_build_bp_info(struct perf_event *bp) int hw_breakpoint_arch_parse(struct perf_event *bp,
const struct perf_event_attr *attr,
struct arch_hw_breakpoint *hw)
{ {
struct arch_hw_breakpoint *info = counter_arch_bp(bp);
/* Type */ /* Type */
switch (bp->attr.bp_type) { switch (attr->bp_type) {
case HW_BREAKPOINT_X: case HW_BREAKPOINT_X:
info->type = XTENSA_BREAKPOINT_EXECUTE; hw->type = XTENSA_BREAKPOINT_EXECUTE;
break; break;
case HW_BREAKPOINT_R: case HW_BREAKPOINT_R:
info->type = XTENSA_BREAKPOINT_LOAD; hw->type = XTENSA_BREAKPOINT_LOAD;
break; break;
case HW_BREAKPOINT_W: case HW_BREAKPOINT_W:
info->type = XTENSA_BREAKPOINT_STORE; hw->type = XTENSA_BREAKPOINT_STORE;
break; break;
case HW_BREAKPOINT_RW: case HW_BREAKPOINT_RW:
info->type = XTENSA_BREAKPOINT_LOAD | XTENSA_BREAKPOINT_STORE; hw->type = XTENSA_BREAKPOINT_LOAD | XTENSA_BREAKPOINT_STORE;
break; break;
default: default:
return -EINVAL; return -EINVAL;
} }
/* Len */ /* Len */
info->len = bp->attr.bp_len; hw->len = attr->bp_len;
if (info->len < 1 || info->len > 64 || !is_power_of_2(info->len)) if (hw->len < 1 || hw->len > 64 || !is_power_of_2(hw->len))
return -EINVAL; return -EINVAL;
/* Address */ /* Address */
info->address = bp->attr.bp_addr; hw->address = attr->bp_addr;
if (info->address & (info->len - 1)) if (hw->address & (hw->len - 1))
return -EINVAL; return -EINVAL;
return 0; return 0;
} }
int arch_validate_hwbkpt_settings(struct perf_event *bp)
{
int ret;
/* Build the arch_hw_breakpoint. */
ret = arch_build_bp_info(bp);
return ret;
}
int hw_breakpoint_exceptions_notify(struct notifier_block *unused, int hw_breakpoint_exceptions_notify(struct notifier_block *unused,
unsigned long val, void *data) unsigned long val, void *data)
{ {
......
...@@ -63,7 +63,6 @@ struct pt_regs; ...@@ -63,7 +63,6 @@ struct pt_regs;
struct kretprobe; struct kretprobe;
struct kretprobe_instance; struct kretprobe_instance;
typedef int (*kprobe_pre_handler_t) (struct kprobe *, struct pt_regs *); typedef int (*kprobe_pre_handler_t) (struct kprobe *, struct pt_regs *);
typedef int (*kprobe_break_handler_t) (struct kprobe *, struct pt_regs *);
typedef void (*kprobe_post_handler_t) (struct kprobe *, struct pt_regs *, typedef void (*kprobe_post_handler_t) (struct kprobe *, struct pt_regs *,
unsigned long flags); unsigned long flags);
typedef int (*kprobe_fault_handler_t) (struct kprobe *, struct pt_regs *, typedef int (*kprobe_fault_handler_t) (struct kprobe *, struct pt_regs *,
...@@ -101,12 +100,6 @@ struct kprobe { ...@@ -101,12 +100,6 @@ struct kprobe {
*/ */
kprobe_fault_handler_t fault_handler; kprobe_fault_handler_t fault_handler;
/*
* ... called if breakpoint trap occurs in probe handler.
* Return 1 if it handled break, otherwise kernel will see it.
*/
kprobe_break_handler_t break_handler;
/* Saved opcode (which has been replaced with breakpoint) */ /* Saved opcode (which has been replaced with breakpoint) */
kprobe_opcode_t opcode; kprobe_opcode_t opcode;
...@@ -154,24 +147,6 @@ static inline int kprobe_ftrace(struct kprobe *p) ...@@ -154,24 +147,6 @@ static inline int kprobe_ftrace(struct kprobe *p)
return p->flags & KPROBE_FLAG_FTRACE; return p->flags & KPROBE_FLAG_FTRACE;
} }
/*
* Special probe type that uses setjmp-longjmp type tricks to resume
* execution at a specified entry with a matching prototype corresponding
* to the probed function - a trick to enable arguments to become
* accessible seamlessly by probe handling logic.
* Note:
* Because of the way compilers allocate stack space for local variables
* etc upfront, regardless of sub-scopes within a function, this mirroring
* principle currently works only for probes placed on function entry points.
*/
struct jprobe {
struct kprobe kp;
void *entry; /* probe handling code to jump to */
};
/* For backward compatibility with old code using JPROBE_ENTRY() */
#define JPROBE_ENTRY(handler) (handler)
/* /*
* Function-return probe - * Function-return probe -
* Note: * Note:
...@@ -389,9 +364,6 @@ int register_kprobe(struct kprobe *p); ...@@ -389,9 +364,6 @@ int register_kprobe(struct kprobe *p);
void unregister_kprobe(struct kprobe *p); void unregister_kprobe(struct kprobe *p);
int register_kprobes(struct kprobe **kps, int num); int register_kprobes(struct kprobe **kps, int num);
void unregister_kprobes(struct kprobe **kps, int num); void unregister_kprobes(struct kprobe **kps, int num);
int setjmp_pre_handler(struct kprobe *, struct pt_regs *);
int longjmp_break_handler(struct kprobe *, struct pt_regs *);
void jprobe_return(void);
unsigned long arch_deref_entry_point(void *); unsigned long arch_deref_entry_point(void *);
int register_kretprobe(struct kretprobe *rp); int register_kretprobe(struct kretprobe *rp);
...@@ -439,9 +411,6 @@ static inline void unregister_kprobe(struct kprobe *p) ...@@ -439,9 +411,6 @@ static inline void unregister_kprobe(struct kprobe *p)
static inline void unregister_kprobes(struct kprobe **kps, int num) static inline void unregister_kprobes(struct kprobe **kps, int num)
{ {
} }
static inline void jprobe_return(void)
{
}
static inline int register_kretprobe(struct kretprobe *rp) static inline int register_kretprobe(struct kretprobe *rp)
{ {
return -ENOSYS; return -ENOSYS;
...@@ -468,20 +437,6 @@ static inline int enable_kprobe(struct kprobe *kp) ...@@ -468,20 +437,6 @@ static inline int enable_kprobe(struct kprobe *kp)
return -ENOSYS; return -ENOSYS;
} }
#endif /* CONFIG_KPROBES */ #endif /* CONFIG_KPROBES */
static inline int register_jprobe(struct jprobe *p)
{
return -ENOSYS;
}
static inline int register_jprobes(struct jprobe **jps, int num)
{
return -ENOSYS;
}
static inline void unregister_jprobe(struct jprobe *p)
{
}
static inline void unregister_jprobes(struct jprobe **jps, int num)
{
}
static inline int disable_kretprobe(struct kretprobe *rp) static inline int disable_kretprobe(struct kretprobe *rp)
{ {
return disable_kprobe(&rp->kp); return disable_kprobe(&rp->kp);
...@@ -490,14 +445,6 @@ static inline int enable_kretprobe(struct kretprobe *rp) ...@@ -490,14 +445,6 @@ static inline int enable_kretprobe(struct kretprobe *rp)
{ {
return enable_kprobe(&rp->kp); return enable_kprobe(&rp->kp);
} }
static inline int disable_jprobe(struct jprobe *jp)
{
return -ENOSYS;
}
static inline int enable_jprobe(struct jprobe *jp)
{
return -ENOSYS;
}
#ifndef CONFIG_KPROBES #ifndef CONFIG_KPROBES
static inline bool is_kprobe_insn_slot(unsigned long addr) static inline bool is_kprobe_insn_slot(unsigned long addr)
......
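With struct jprobe and its helpers removed from this header, the usual replacement is a plain kprobe on the function entry whose pre_handler reads the arguments out of pt_regs. A hedged sketch (x86-64 register usage assumed; the probed symbol is illustrative only):

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/kprobes.h>

/*
 * Read the first two arguments of the probed function directly from
 * pt_regs; register names assume the x86-64 calling convention
 * (arg1 in %rdi, arg2 in %rsi).
 */
static int entry_pre_handler(struct kprobe *p, struct pt_regs *regs)
{
	pr_info("%s(0x%lx, 0x%lx)\n", p->symbol_name, regs->di, regs->si);
	return 0;	/* continue normally; no break_handler involved */
}

static struct kprobe entry_kp = {
	.symbol_name = "do_sys_open",	/* illustrative probe point */
	.pre_handler = entry_pre_handler,
};

static int __init entry_init(void)
{
	return register_kprobe(&entry_kp);
}

static void __exit entry_exit(void)
{
	unregister_kprobe(&entry_kp);
}

module_init(entry_init);
module_exit(entry_exit);
MODULE_LICENSE("GPL");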
...@@ -490,7 +490,7 @@ struct perf_addr_filters_head { ...@@ -490,7 +490,7 @@ struct perf_addr_filters_head {
}; };
/** /**
* enum perf_event_state - the states of a event * enum perf_event_state - the states of an event:
*/ */
enum perf_event_state { enum perf_event_state {
PERF_EVENT_STATE_DEAD = -4, PERF_EVENT_STATE_DEAD = -4,
......
...@@ -1656,7 +1656,7 @@ perf_event_groups_next(struct perf_event *event) ...@@ -1656,7 +1656,7 @@ perf_event_groups_next(struct perf_event *event)
typeof(*event), group_node)) typeof(*event), group_node))
/* /*
* Add a event from the lists for its context. * Add an event from the lists for its context.
* Must be called with ctx->mutex and ctx->lock held. * Must be called with ctx->mutex and ctx->lock held.
*/ */
static void static void
...@@ -1844,7 +1844,7 @@ static void perf_group_attach(struct perf_event *event) ...@@ -1844,7 +1844,7 @@ static void perf_group_attach(struct perf_event *event)
} }
/* /*
* Remove a event from the lists for its context. * Remove an event from the lists for its context.
* Must be called with ctx->mutex and ctx->lock held. * Must be called with ctx->mutex and ctx->lock held.
*/ */
static void static void
...@@ -2148,7 +2148,7 @@ static void __perf_event_disable(struct perf_event *event, ...@@ -2148,7 +2148,7 @@ static void __perf_event_disable(struct perf_event *event,
} }
/* /*
* Disable a event. * Disable an event.
* *
* If event->ctx is a cloned context, callers must make sure that * If event->ctx is a cloned context, callers must make sure that
* every task struct that event->ctx->task could possibly point to * every task struct that event->ctx->task could possibly point to
...@@ -2677,7 +2677,7 @@ static void __perf_event_enable(struct perf_event *event, ...@@ -2677,7 +2677,7 @@ static void __perf_event_enable(struct perf_event *event,
} }
/* /*
* Enable a event. * Enable an event.
* *
* If event->ctx is a cloned context, callers must make sure that * If event->ctx is a cloned context, callers must make sure that
* every task struct that event->ctx->task could possibly point to * every task struct that event->ctx->task could possibly point to
...@@ -2755,7 +2755,7 @@ static int __perf_event_stop(void *info) ...@@ -2755,7 +2755,7 @@ static int __perf_event_stop(void *info)
* events will refuse to restart because of rb::aux_mmap_count==0, * events will refuse to restart because of rb::aux_mmap_count==0,
* see comments in perf_aux_output_begin(). * see comments in perf_aux_output_begin().
* *
* Since this is happening on a event-local CPU, no trace is lost * Since this is happening on an event-local CPU, no trace is lost
* while restarting. * while restarting.
*/ */
if (sd->restart) if (sd->restart)
...@@ -4827,7 +4827,7 @@ __perf_read(struct perf_event *event, char __user *buf, size_t count) ...@@ -4827,7 +4827,7 @@ __perf_read(struct perf_event *event, char __user *buf, size_t count)
int ret; int ret;
/* /*
* Return end-of-file for a read on a event that is in * Return end-of-file for a read on an event that is in
* error state (i.e. because it was pinned but it couldn't be * error state (i.e. because it was pinned but it couldn't be
* scheduled on to the CPU at some point). * scheduled on to the CPU at some point).
*/ */
...@@ -5273,11 +5273,11 @@ void perf_event_update_userpage(struct perf_event *event) ...@@ -5273,11 +5273,11 @@ void perf_event_update_userpage(struct perf_event *event)
} }
EXPORT_SYMBOL_GPL(perf_event_update_userpage); EXPORT_SYMBOL_GPL(perf_event_update_userpage);
static int perf_mmap_fault(struct vm_fault *vmf) static vm_fault_t perf_mmap_fault(struct vm_fault *vmf)
{ {
struct perf_event *event = vmf->vma->vm_file->private_data; struct perf_event *event = vmf->vma->vm_file->private_data;
struct ring_buffer *rb; struct ring_buffer *rb;
int ret = VM_FAULT_SIGBUS; vm_fault_t ret = VM_FAULT_SIGBUS;
if (vmf->flags & FAULT_FLAG_MKWRITE) { if (vmf->flags & FAULT_FLAG_MKWRITE) {
if (vmf->pgoff == 0) if (vmf->pgoff == 0)
...@@ -9904,7 +9904,7 @@ static void account_event(struct perf_event *event) ...@@ -9904,7 +9904,7 @@ static void account_event(struct perf_event *event)
} }
/* /*
* Allocate and initialize a event structure * Allocate and initialize an event structure
*/ */
static struct perf_event * static struct perf_event *
perf_event_alloc(struct perf_event_attr *attr, int cpu, perf_event_alloc(struct perf_event_attr *attr, int cpu,
...@@ -11235,7 +11235,7 @@ const struct perf_event_attr *perf_event_attrs(struct perf_event *event) ...@@ -11235,7 +11235,7 @@ const struct perf_event_attr *perf_event_attrs(struct perf_event *event)
} }
/* /*
* Inherit a event from parent task to child task. * Inherit an event from parent task to child task.
* *
* Returns: * Returns:
* - valid pointer on success * - valid pointer on success
......
...@@ -345,13 +345,13 @@ void release_bp_slot(struct perf_event *bp) ...@@ -345,13 +345,13 @@ void release_bp_slot(struct perf_event *bp)
mutex_unlock(&nr_bp_mutex); mutex_unlock(&nr_bp_mutex);
} }
static int __modify_bp_slot(struct perf_event *bp, u64 old_type) static int __modify_bp_slot(struct perf_event *bp, u64 old_type, u64 new_type)
{ {
int err; int err;
__release_bp_slot(bp, old_type); __release_bp_slot(bp, old_type);
err = __reserve_bp_slot(bp, bp->attr.bp_type); err = __reserve_bp_slot(bp, new_type);
if (err) { if (err) {
/* /*
* Reserve the old_type slot back in case * Reserve the old_type slot back in case
...@@ -367,12 +367,12 @@ static int __modify_bp_slot(struct perf_event *bp, u64 old_type) ...@@ -367,12 +367,12 @@ static int __modify_bp_slot(struct perf_event *bp, u64 old_type)
return err; return err;
} }
static int modify_bp_slot(struct perf_event *bp, u64 old_type) static int modify_bp_slot(struct perf_event *bp, u64 old_type, u64 new_type)
{ {
int ret; int ret;
mutex_lock(&nr_bp_mutex); mutex_lock(&nr_bp_mutex);
ret = __modify_bp_slot(bp, old_type); ret = __modify_bp_slot(bp, old_type, new_type);
mutex_unlock(&nr_bp_mutex); mutex_unlock(&nr_bp_mutex);
return ret; return ret;
} }
...@@ -400,16 +400,18 @@ int dbg_release_bp_slot(struct perf_event *bp) ...@@ -400,16 +400,18 @@ int dbg_release_bp_slot(struct perf_event *bp)
return 0; return 0;
} }
static int validate_hw_breakpoint(struct perf_event *bp) static int hw_breakpoint_parse(struct perf_event *bp,
const struct perf_event_attr *attr,
struct arch_hw_breakpoint *hw)
{ {
int ret; int err;
ret = arch_validate_hwbkpt_settings(bp); err = hw_breakpoint_arch_parse(bp, attr, hw);
if (ret) if (err)
return ret; return err;
if (arch_check_bp_in_kernelspace(bp)) { if (arch_check_bp_in_kernelspace(hw)) {
if (bp->attr.exclude_kernel) if (attr->exclude_kernel)
return -EINVAL; return -EINVAL;
/* /*
* Don't let unprivileged users set a breakpoint in the trap * Don't let unprivileged users set a breakpoint in the trap
...@@ -424,19 +426,22 @@ static int validate_hw_breakpoint(struct perf_event *bp) ...@@ -424,19 +426,22 @@ static int validate_hw_breakpoint(struct perf_event *bp)
int register_perf_hw_breakpoint(struct perf_event *bp) int register_perf_hw_breakpoint(struct perf_event *bp)
{ {
int ret; struct arch_hw_breakpoint hw;
int err;
ret = reserve_bp_slot(bp);
if (ret)
return ret;
ret = validate_hw_breakpoint(bp); err = reserve_bp_slot(bp);
if (err)
return err;
/* if arch_validate_hwbkpt_settings() fails then release bp slot */ err = hw_breakpoint_parse(bp, &bp->attr, &hw);
if (ret) if (err) {
release_bp_slot(bp); release_bp_slot(bp);
return err;
}
return ret; bp->hw.info = hw;
return 0;
} }
/** /**
...@@ -456,35 +461,44 @@ register_user_hw_breakpoint(struct perf_event_attr *attr, ...@@ -456,35 +461,44 @@ register_user_hw_breakpoint(struct perf_event_attr *attr,
} }
EXPORT_SYMBOL_GPL(register_user_hw_breakpoint); EXPORT_SYMBOL_GPL(register_user_hw_breakpoint);
static void hw_breakpoint_copy_attr(struct perf_event_attr *to,
struct perf_event_attr *from)
{
to->bp_addr = from->bp_addr;
to->bp_type = from->bp_type;
to->bp_len = from->bp_len;
to->disabled = from->disabled;
}
int int
modify_user_hw_breakpoint_check(struct perf_event *bp, struct perf_event_attr *attr, modify_user_hw_breakpoint_check(struct perf_event *bp, struct perf_event_attr *attr,
bool check) bool check)
{ {
u64 old_addr = bp->attr.bp_addr; struct arch_hw_breakpoint hw;
u64 old_len = bp->attr.bp_len; int err;
int old_type = bp->attr.bp_type;
bool modify = attr->bp_type != old_type;
int err = 0;
bp->attr.bp_addr = attr->bp_addr; err = hw_breakpoint_parse(bp, attr, &hw);
bp->attr.bp_type = attr->bp_type; if (err)
bp->attr.bp_len = attr->bp_len; return err;
if (check && memcmp(&bp->attr, attr, sizeof(*attr))) if (check) {
return -EINVAL; struct perf_event_attr old_attr;
err = validate_hw_breakpoint(bp); old_attr = bp->attr;
if (!err && modify) hw_breakpoint_copy_attr(&old_attr, attr);
err = modify_bp_slot(bp, old_type); if (memcmp(&old_attr, attr, sizeof(*attr)))
return -EINVAL;
}
if (err) { if (bp->attr.bp_type != attr->bp_type) {
bp->attr.bp_addr = old_addr; err = modify_bp_slot(bp, bp->attr.bp_type, attr->bp_type);
bp->attr.bp_type = old_type; if (err)
bp->attr.bp_len = old_len; return err;
return err;
} }
bp->attr.disabled = attr->disabled; hw_breakpoint_copy_attr(&bp->attr, attr);
bp->hw.info = hw;
return 0; return 0;
} }
......
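As a usage sketch of the reworked modify path: when only bp_type changes, modify_user_hw_breakpoint_check() re-parses the attr and moves the slot via modify_bp_slot(bp, old_type, new_type). The helper below is illustrative only (its name and parameters are made up, not part of this patch):

#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/perf_event.h>
#include <linux/hw_breakpoint.h>
#include <linux/err.h>

static void task_bp_handler(struct perf_event *bp, struct perf_sample_data *data,
			    struct pt_regs *regs)
{
	pr_info("watched user address accessed\n");
}

/* Attach a 4-byte write watchpoint to @tsk at @addr, then widen it to R/W. */
static int watch_then_retype(struct task_struct *tsk, unsigned long addr)
{
	struct perf_event_attr attr;
	struct perf_event *bp;
	int err;

	hw_breakpoint_init(&attr);
	attr.bp_addr = addr;
	attr.bp_len  = HW_BREAKPOINT_LEN_4;
	attr.bp_type = HW_BREAKPOINT_W;

	bp = register_user_hw_breakpoint(&attr, task_bp_handler, NULL, tsk);
	if (IS_ERR(bp))
		return PTR_ERR(bp);

	/* bp_type changes, so the slot is rebalanced via modify_bp_slot(). */
	attr.bp_type = HW_BREAKPOINT_RW;
	err = modify_user_hw_breakpoint(bp, &attr);
	if (err)
		unregister_hw_breakpoint(bp);
	return err;
}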
...@@ -918,7 +918,7 @@ int uprobe_register(struct inode *inode, loff_t offset, struct uprobe_consumer * ...@@ -918,7 +918,7 @@ int uprobe_register(struct inode *inode, loff_t offset, struct uprobe_consumer *
EXPORT_SYMBOL_GPL(uprobe_register); EXPORT_SYMBOL_GPL(uprobe_register);
/* /*
* uprobe_apply - unregister a already registered probe. * uprobe_apply - unregister an already registered probe.
* @inode: the file in which the probe has to be removed. * @inode: the file in which the probe has to be removed.
* @offset: offset from the start of the file. * @offset: offset from the start of the file.
* @uc: consumer which wants to add more or remove some breakpoints * @uc: consumer which wants to add more or remove some breakpoints
...@@ -947,7 +947,7 @@ int uprobe_apply(struct inode *inode, loff_t offset, ...@@ -947,7 +947,7 @@ int uprobe_apply(struct inode *inode, loff_t offset,
} }
/* /*
* uprobe_unregister - unregister a already registered probe. * uprobe_unregister - unregister an already registered probe.
* @inode: the file in which the probe has to be removed. * @inode: the file in which the probe has to be removed.
* @offset: offset from the start of the file. * @offset: offset from the start of the file.
* @uc: identify which probe if multiple probes are colocated. * @uc: identify which probe if multiple probes are colocated.
...@@ -1403,7 +1403,7 @@ static struct return_instance *free_ret_instance(struct return_instance *ri) ...@@ -1403,7 +1403,7 @@ static struct return_instance *free_ret_instance(struct return_instance *ri)
/* /*
* Called with no locks held. * Called with no locks held.
* Called in context of a exiting or a exec-ing thread. * Called in context of an exiting or an exec-ing thread.
*/ */
void uprobe_free_utask(struct task_struct *t) void uprobe_free_utask(struct task_struct *t)
{ {
......
...@@ -184,9 +184,6 @@ static int fei_kprobe_handler(struct kprobe *kp, struct pt_regs *regs) ...@@ -184,9 +184,6 @@ static int fei_kprobe_handler(struct kprobe *kp, struct pt_regs *regs)
if (should_fail(&fei_fault_attr, 1)) { if (should_fail(&fei_fault_attr, 1)) {
regs_set_return_value(regs, attr->retval); regs_set_return_value(regs, attr->retval);
override_function_with_return(regs); override_function_with_return(regs);
/* Kprobe specific fixup */
reset_current_kprobe();
preempt_enable_no_resched();
return 1; return 1;
} }
......
...@@ -627,8 +627,8 @@ static void optimize_kprobe(struct kprobe *p) ...@@ -627,8 +627,8 @@ static void optimize_kprobe(struct kprobe *p)
(kprobe_disabled(p) || kprobes_all_disarmed)) (kprobe_disabled(p) || kprobes_all_disarmed))
return; return;
/* Both of break_handler and post_handler are not supported. */ /* kprobes with post_handler can not be optimized */
if (p->break_handler || p->post_handler) if (p->post_handler)
return; return;
op = container_of(p, struct optimized_kprobe, kp); op = container_of(p, struct optimized_kprobe, kp);
...@@ -710,9 +710,7 @@ static void reuse_unused_kprobe(struct kprobe *ap) ...@@ -710,9 +710,7 @@ static void reuse_unused_kprobe(struct kprobe *ap)
* there is still a relative jump) and disabled. * there is still a relative jump) and disabled.
*/ */
op = container_of(ap, struct optimized_kprobe, kp); op = container_of(ap, struct optimized_kprobe, kp);
if (unlikely(list_empty(&op->list))) WARN_ON_ONCE(list_empty(&op->list));
printk(KERN_WARNING "Warning: found a stray unused "
"aggrprobe@%p\n", ap->addr);
/* Enable the probe again */ /* Enable the probe again */
ap->flags &= ~KPROBE_FLAG_DISABLED; ap->flags &= ~KPROBE_FLAG_DISABLED;
/* Optimize it again (remove from op->list) */ /* Optimize it again (remove from op->list) */
...@@ -985,7 +983,8 @@ static int arm_kprobe_ftrace(struct kprobe *p) ...@@ -985,7 +983,8 @@ static int arm_kprobe_ftrace(struct kprobe *p)
ret = ftrace_set_filter_ip(&kprobe_ftrace_ops, ret = ftrace_set_filter_ip(&kprobe_ftrace_ops,
(unsigned long)p->addr, 0, 0); (unsigned long)p->addr, 0, 0);
if (ret) { if (ret) {
pr_debug("Failed to arm kprobe-ftrace at %p (%d)\n", p->addr, ret); pr_debug("Failed to arm kprobe-ftrace at %pS (%d)\n",
p->addr, ret);
return ret; return ret;
} }
...@@ -1025,7 +1024,8 @@ static int disarm_kprobe_ftrace(struct kprobe *p) ...@@ -1025,7 +1024,8 @@ static int disarm_kprobe_ftrace(struct kprobe *p)
ret = ftrace_set_filter_ip(&kprobe_ftrace_ops, ret = ftrace_set_filter_ip(&kprobe_ftrace_ops,
(unsigned long)p->addr, 1, 0); (unsigned long)p->addr, 1, 0);
WARN(ret < 0, "Failed to disarm kprobe-ftrace at %p (%d)\n", p->addr, ret); WARN_ONCE(ret < 0, "Failed to disarm kprobe-ftrace at %pS (%d)\n",
p->addr, ret);
return ret; return ret;
} }
#else /* !CONFIG_KPROBES_ON_FTRACE */ #else /* !CONFIG_KPROBES_ON_FTRACE */
...@@ -1116,20 +1116,6 @@ static int aggr_fault_handler(struct kprobe *p, struct pt_regs *regs, ...@@ -1116,20 +1116,6 @@ static int aggr_fault_handler(struct kprobe *p, struct pt_regs *regs,
} }
NOKPROBE_SYMBOL(aggr_fault_handler); NOKPROBE_SYMBOL(aggr_fault_handler);
static int aggr_break_handler(struct kprobe *p, struct pt_regs *regs)
{
struct kprobe *cur = __this_cpu_read(kprobe_instance);
int ret = 0;
if (cur && cur->break_handler) {
if (cur->break_handler(cur, regs))
ret = 1;
}
reset_kprobe_instance();
return ret;
}
NOKPROBE_SYMBOL(aggr_break_handler);
/* Walks the list and increments nmissed count for multiprobe case */ /* Walks the list and increments nmissed count for multiprobe case */
void kprobes_inc_nmissed_count(struct kprobe *p) void kprobes_inc_nmissed_count(struct kprobe *p)
{ {
...@@ -1270,24 +1256,15 @@ static void cleanup_rp_inst(struct kretprobe *rp) ...@@ -1270,24 +1256,15 @@ static void cleanup_rp_inst(struct kretprobe *rp)
} }
NOKPROBE_SYMBOL(cleanup_rp_inst); NOKPROBE_SYMBOL(cleanup_rp_inst);
/* /* Add the new probe to ap->list */
* Add the new probe to ap->list. Fail if this is the
* second jprobe at the address - two jprobes can't coexist
*/
static int add_new_kprobe(struct kprobe *ap, struct kprobe *p) static int add_new_kprobe(struct kprobe *ap, struct kprobe *p)
{ {
BUG_ON(kprobe_gone(ap) || kprobe_gone(p)); BUG_ON(kprobe_gone(ap) || kprobe_gone(p));
if (p->break_handler || p->post_handler) if (p->post_handler)
unoptimize_kprobe(ap, true); /* Fall back to normal kprobe */ unoptimize_kprobe(ap, true); /* Fall back to normal kprobe */
if (p->break_handler) { list_add_rcu(&p->list, &ap->list);
if (ap->break_handler)
return -EEXIST;
list_add_tail_rcu(&p->list, &ap->list);
ap->break_handler = aggr_break_handler;
} else
list_add_rcu(&p->list, &ap->list);
if (p->post_handler && !ap->post_handler) if (p->post_handler && !ap->post_handler)
ap->post_handler = aggr_post_handler; ap->post_handler = aggr_post_handler;
...@@ -1310,8 +1287,6 @@ static void init_aggr_kprobe(struct kprobe *ap, struct kprobe *p) ...@@ -1310,8 +1287,6 @@ static void init_aggr_kprobe(struct kprobe *ap, struct kprobe *p)
/* We don't care about the kprobe which has gone. */	/* We don't care about the kprobe which has gone. */
if (p->post_handler && !kprobe_gone(p)) if (p->post_handler && !kprobe_gone(p))
ap->post_handler = aggr_post_handler; ap->post_handler = aggr_post_handler;
if (p->break_handler && !kprobe_gone(p))
ap->break_handler = aggr_break_handler;
INIT_LIST_HEAD(&ap->list); INIT_LIST_HEAD(&ap->list);
INIT_HLIST_NODE(&ap->hlist); INIT_HLIST_NODE(&ap->hlist);
...@@ -1706,8 +1681,6 @@ static int __unregister_kprobe_top(struct kprobe *p) ...@@ -1706,8 +1681,6 @@ static int __unregister_kprobe_top(struct kprobe *p)
goto disarmed; goto disarmed;
else { else {
/* If disabling probe has special handlers, update aggrprobe */ /* If disabling probe has special handlers, update aggrprobe */
if (p->break_handler && !kprobe_gone(p))
ap->break_handler = NULL;
if (p->post_handler && !kprobe_gone(p)) { if (p->post_handler && !kprobe_gone(p)) {
list_for_each_entry_rcu(list_p, &ap->list, list) { list_for_each_entry_rcu(list_p, &ap->list, list) {
if ((list_p != p) && (list_p->post_handler)) if ((list_p != p) && (list_p->post_handler))
...@@ -1812,77 +1785,6 @@ unsigned long __weak arch_deref_entry_point(void *entry) ...@@ -1812,77 +1785,6 @@ unsigned long __weak arch_deref_entry_point(void *entry)
return (unsigned long)entry; return (unsigned long)entry;
} }
#if 0
int register_jprobes(struct jprobe **jps, int num)
{
int ret = 0, i;
if (num <= 0)
return -EINVAL;
for (i = 0; i < num; i++) {
ret = register_jprobe(jps[i]);
if (ret < 0) {
if (i > 0)
unregister_jprobes(jps, i);
break;
}
}
return ret;
}
EXPORT_SYMBOL_GPL(register_jprobes);
int register_jprobe(struct jprobe *jp)
{
unsigned long addr, offset;
struct kprobe *kp = &jp->kp;
/*
* Verify probepoint as well as the jprobe handler are
* valid function entry points.
*/
addr = arch_deref_entry_point(jp->entry);
if (kallsyms_lookup_size_offset(addr, NULL, &offset) && offset == 0 &&
kprobe_on_func_entry(kp->addr, kp->symbol_name, kp->offset)) {
kp->pre_handler = setjmp_pre_handler;
kp->break_handler = longjmp_break_handler;
return register_kprobe(kp);
}
return -EINVAL;
}
EXPORT_SYMBOL_GPL(register_jprobe);
void unregister_jprobe(struct jprobe *jp)
{
unregister_jprobes(&jp, 1);
}
EXPORT_SYMBOL_GPL(unregister_jprobe);
void unregister_jprobes(struct jprobe **jps, int num)
{
int i;
if (num <= 0)
return;
mutex_lock(&kprobe_mutex);
for (i = 0; i < num; i++)
if (__unregister_kprobe_top(&jps[i]->kp) < 0)
jps[i]->kp.addr = NULL;
mutex_unlock(&kprobe_mutex);
synchronize_sched();
for (i = 0; i < num; i++) {
if (jps[i]->kp.addr)
__unregister_kprobe_bottom(&jps[i]->kp);
}
}
EXPORT_SYMBOL_GPL(unregister_jprobes);
#endif
#ifdef CONFIG_KRETPROBES #ifdef CONFIG_KRETPROBES
/* /*
* This kprobe pre_handler is registered with every kretprobe. When probe * This kprobe pre_handler is registered with every kretprobe. When probe
...@@ -1982,7 +1884,6 @@ int register_kretprobe(struct kretprobe *rp) ...@@ -1982,7 +1884,6 @@ int register_kretprobe(struct kretprobe *rp)
rp->kp.pre_handler = pre_handler_kretprobe; rp->kp.pre_handler = pre_handler_kretprobe;
rp->kp.post_handler = NULL; rp->kp.post_handler = NULL;
rp->kp.fault_handler = NULL; rp->kp.fault_handler = NULL;
rp->kp.break_handler = NULL;
/* Pre-allocate memory for max kretprobe instances */ /* Pre-allocate memory for max kretprobe instances */
if (rp->maxactive <= 0) { if (rp->maxactive <= 0) {
...@@ -2105,7 +2006,6 @@ static void kill_kprobe(struct kprobe *p) ...@@ -2105,7 +2006,6 @@ static void kill_kprobe(struct kprobe *p)
list_for_each_entry_rcu(kp, &p->list, list) list_for_each_entry_rcu(kp, &p->list, list)
kp->flags |= KPROBE_FLAG_GONE; kp->flags |= KPROBE_FLAG_GONE;
p->post_handler = NULL; p->post_handler = NULL;
p->break_handler = NULL;
kill_optimized_kprobe(p); kill_optimized_kprobe(p);
} }
/* /*
...@@ -2169,11 +2069,12 @@ int enable_kprobe(struct kprobe *kp) ...@@ -2169,11 +2069,12 @@ int enable_kprobe(struct kprobe *kp)
} }
EXPORT_SYMBOL_GPL(enable_kprobe); EXPORT_SYMBOL_GPL(enable_kprobe);
/* Caller must NOT call this in the usual path. This is only for a critical case */
void dump_kprobe(struct kprobe *kp) void dump_kprobe(struct kprobe *kp)
{ {
printk(KERN_WARNING "Dumping kprobe:\n"); pr_err("Dumping kprobe:\n");
printk(KERN_WARNING "Name: %s\nAddress: %p\nOffset: %x\n", pr_err("Name: %s\nOffset: %x\nAddress: %pS\n",
kp->symbol_name, kp->addr, kp->offset); kp->symbol_name, kp->offset, kp->addr);
} }
NOKPROBE_SYMBOL(dump_kprobe); NOKPROBE_SYMBOL(dump_kprobe);
...@@ -2196,11 +2097,8 @@ static int __init populate_kprobe_blacklist(unsigned long *start, ...@@ -2196,11 +2097,8 @@ static int __init populate_kprobe_blacklist(unsigned long *start,
entry = arch_deref_entry_point((void *)*iter); entry = arch_deref_entry_point((void *)*iter);
if (!kernel_text_address(entry) || if (!kernel_text_address(entry) ||
!kallsyms_lookup_size_offset(entry, &size, &offset)) { !kallsyms_lookup_size_offset(entry, &size, &offset))
pr_err("Failed to find blacklist at %p\n",
(void *)entry);
continue; continue;
}
ent = kmalloc(sizeof(*ent), GFP_KERNEL); ent = kmalloc(sizeof(*ent), GFP_KERNEL);
if (!ent) if (!ent)
...@@ -2326,21 +2224,23 @@ static void report_probe(struct seq_file *pi, struct kprobe *p, ...@@ -2326,21 +2224,23 @@ static void report_probe(struct seq_file *pi, struct kprobe *p,
const char *sym, int offset, char *modname, struct kprobe *pp) const char *sym, int offset, char *modname, struct kprobe *pp)
{ {
char *kprobe_type; char *kprobe_type;
void *addr = p->addr;
if (p->pre_handler == pre_handler_kretprobe) if (p->pre_handler == pre_handler_kretprobe)
kprobe_type = "r"; kprobe_type = "r";
else if (p->pre_handler == setjmp_pre_handler)
kprobe_type = "j";
else else
kprobe_type = "k"; kprobe_type = "k";
if (!kallsyms_show_value())
addr = NULL;
if (sym) if (sym)
seq_printf(pi, "%p %s %s+0x%x %s ", seq_printf(pi, "%px %s %s+0x%x %s ",
p->addr, kprobe_type, sym, offset, addr, kprobe_type, sym, offset,
(modname ? modname : " ")); (modname ? modname : " "));
else else /* try to use %pS */
seq_printf(pi, "%p %s %p ", seq_printf(pi, "%px %s %pS ",
p->addr, kprobe_type, p->addr); addr, kprobe_type, p->addr);
if (!pp) if (!pp)
pp = p; pp = p;
...@@ -2428,8 +2328,16 @@ static int kprobe_blacklist_seq_show(struct seq_file *m, void *v) ...@@ -2428,8 +2328,16 @@ static int kprobe_blacklist_seq_show(struct seq_file *m, void *v)
struct kprobe_blacklist_entry *ent = struct kprobe_blacklist_entry *ent =
list_entry(v, struct kprobe_blacklist_entry, list); list_entry(v, struct kprobe_blacklist_entry, list);
seq_printf(m, "0x%px-0x%px\t%ps\n", (void *)ent->start_addr, /*
(void *)ent->end_addr, (void *)ent->start_addr); * If /proc/kallsyms is not showing kernel address, we won't
* show them here either.
*/
if (!kallsyms_show_value())
seq_printf(m, "0x%px-0x%px\t%ps\n", NULL, NULL,
(void *)ent->start_addr);
else
seq_printf(m, "0x%px-0x%px\t%ps\n", (void *)ent->start_addr,
(void *)ent->end_addr, (void *)ent->start_addr);
return 0; return 0;
} }
...@@ -2611,7 +2519,7 @@ static int __init debugfs_kprobe_init(void) ...@@ -2611,7 +2519,7 @@ static int __init debugfs_kprobe_init(void)
if (!dir) if (!dir)
return -ENOMEM; return -ENOMEM;
file = debugfs_create_file("list", 0444, dir, NULL, file = debugfs_create_file("list", 0400, dir, NULL,
&debugfs_kprobes_operations); &debugfs_kprobes_operations);
if (!file) if (!file)
goto error; goto error;
...@@ -2621,7 +2529,7 @@ static int __init debugfs_kprobe_init(void) ...@@ -2621,7 +2529,7 @@ static int __init debugfs_kprobe_init(void)
if (!file) if (!file)
goto error; goto error;
file = debugfs_create_file("blacklist", 0444, dir, NULL, file = debugfs_create_file("blacklist", 0400, dir, NULL,
&debugfs_kprobe_blacklist_ops); &debugfs_kprobe_blacklist_ops);
if (!file) if (!file)
goto error; goto error;
...@@ -2637,6 +2545,3 @@ late_initcall(debugfs_kprobe_init); ...@@ -2637,6 +2545,3 @@ late_initcall(debugfs_kprobe_init);
#endif /* CONFIG_DEBUG_FS */ #endif /* CONFIG_DEBUG_FS */
module_init(init_kprobes); module_init(init_kprobes);
/* defined in arch/.../kernel/kprobes.c */
EXPORT_SYMBOL_GPL(jprobe_return);
...@@ -162,90 +162,6 @@ static int test_kprobes(void) ...@@ -162,90 +162,6 @@ static int test_kprobes(void)
} }
#if 0
static u32 jph_val;
static u32 j_kprobe_target(u32 value)
{
if (preemptible()) {
handler_errors++;
pr_err("jprobe-handler is preemptible\n");
}
if (value != rand1) {
handler_errors++;
pr_err("incorrect value in jprobe handler\n");
}
jph_val = rand1;
jprobe_return();
return 0;
}
static struct jprobe jp = {
.entry = j_kprobe_target,
.kp.symbol_name = "kprobe_target"
};
static int test_jprobe(void)
{
int ret;
ret = register_jprobe(&jp);
if (ret < 0) {
pr_err("register_jprobe returned %d\n", ret);
return ret;
}
ret = target(rand1);
unregister_jprobe(&jp);
if (jph_val == 0) {
pr_err("jprobe handler not called\n");
handler_errors++;
}
return 0;
}
static struct jprobe jp2 = {
.entry = j_kprobe_target,
.kp.symbol_name = "kprobe_target2"
};
static int test_jprobes(void)
{
int ret;
struct jprobe *jps[2] = {&jp, &jp2};
/* addr and flags should be cleared for reusing kprobe. */
jp.kp.addr = NULL;
jp.kp.flags = 0;
ret = register_jprobes(jps, 2);
if (ret < 0) {
pr_err("register_jprobes returned %d\n", ret);
return ret;
}
jph_val = 0;
ret = target(rand1);
if (jph_val == 0) {
pr_err("jprobe handler not called\n");
handler_errors++;
}
jph_val = 0;
ret = target2(rand1);
if (jph_val == 0) {
pr_err("jprobe handler2 not called\n");
handler_errors++;
}
unregister_jprobes(jps, 2);
return 0;
}
#else
#define test_jprobe() (0)
#define test_jprobes() (0)
#endif
#ifdef CONFIG_KRETPROBES #ifdef CONFIG_KRETPROBES
static u32 krph_val; static u32 krph_val;
...@@ -383,16 +299,6 @@ int init_test_probes(void) ...@@ -383,16 +299,6 @@ int init_test_probes(void)
if (ret < 0) if (ret < 0)
errors++; errors++;
num_tests++;
ret = test_jprobe();
if (ret < 0)
errors++;
num_tests++;
ret = test_jprobes();
if (ret < 0)
errors++;
#ifdef CONFIG_KRETPROBES #ifdef CONFIG_KRETPROBES
num_tests++; num_tests++;
ret = test_kretprobe(); ret = test_kretprobe();
......
...@@ -1228,16 +1228,11 @@ kprobe_perf_func(struct trace_kprobe *tk, struct pt_regs *regs) ...@@ -1228,16 +1228,11 @@ kprobe_perf_func(struct trace_kprobe *tk, struct pt_regs *regs)
/* /*
* We need to check and see if we modified the pc of the * We need to check and see if we modified the pc of the
* pt_regs, and if so clear the kprobe and return 1 so that we * pt_regs, and if so return 1 so that we don't do the
* don't do the single stepping. * single stepping.
* The ftrace kprobe handler leaves it up to us to re-enable
* preemption here before returning if we've modified the ip.
*/ */
if (orig_ip != instruction_pointer(regs)) { if (orig_ip != instruction_pointer(regs))
reset_current_kprobe();
preempt_enable_no_resched();
return 1; return 1;
}
if (!ret) if (!ret)
return 0; return 0;
} }
......
...@@ -1718,7 +1718,7 @@ config KPROBES_SANITY_TEST ...@@ -1718,7 +1718,7 @@ config KPROBES_SANITY_TEST
default n default n
help help
This option provides for testing basic kprobes functionality on This option provides for testing basic kprobes functionality on
boot. A sample kprobe, jprobe and kretprobe are inserted and boot. Samples of kprobe and kretprobe are inserted and
verified for functionality. verified for functionality.
Say N if you are unsure. Say N if you are unsure.
......
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
/*
* Copyright (C) 2012 ARM Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#define __ARCH_WANT_RENAMEAT
#include <asm-generic/unistd.h>
This diff has been collapsed.
This diff has been collapsed.
...@@ -18,6 +18,10 @@ various perf commands with the -e option. ...@@ -18,6 +18,10 @@ various perf commands with the -e option.
OPTIONS OPTIONS
------- -------
-d::
--desc::
Print extra event descriptions. (default)
--no-desc:: --no-desc::
Don't print descriptions. Don't print descriptions.
...@@ -25,11 +29,13 @@ Don't print descriptions. ...@@ -25,11 +29,13 @@ Don't print descriptions.
--long-desc:: --long-desc::
Print longer event descriptions. Print longer event descriptions.
--debug::
Enable debugging output.
--details:: --details::
Print how named events are resolved internally into perf events, and also Print how named events are resolved internally into perf events, and also
any extra expressions computed by perf stat. any extra expressions computed by perf stat.
[[EVENT_MODIFIERS]] [[EVENT_MODIFIERS]]
EVENT MODIFIERS EVENT MODIFIERS
--------------- ---------------
...@@ -234,7 +240,7 @@ perf also supports group leader sampling using the :S specifier. ...@@ -234,7 +240,7 @@ perf also supports group leader sampling using the :S specifier.
perf record -e '{cycles,instructions}:S' ... perf record -e '{cycles,instructions}:S' ...
perf report --group perf report --group
Normally all events in a event group sample, but with :S only Normally all events in an event group sample, but with :S only
the first event (the leader) samples, and it only reads the values of the the first event (the leader) samples, and it only reads the values of the
other events in the group. other events in the group.
......
...@@ -94,7 +94,7 @@ OPTIONS ...@@ -94,7 +94,7 @@ OPTIONS
"perf report" to view group events together. "perf report" to view group events together.
--filter=<filter>:: --filter=<filter>::
Event filter. This option should follow a event selector (-e) which Event filter. This option should follow an event selector (-e) which
selects either tracepoint event(s) or a hardware trace PMU selects either tracepoint event(s) or a hardware trace PMU
(e.g. Intel PT or CoreSight). (e.g. Intel PT or CoreSight).
...@@ -153,7 +153,7 @@ OPTIONS ...@@ -153,7 +153,7 @@ OPTIONS
--exclude-perf:: --exclude-perf::
Don't record events issued by perf itself. This option should follow Don't record events issued by perf itself. This option should follow
a event selector (-e) which selects tracepoint event(s). It adds a an event selector (-e) which selects tracepoint event(s). It adds a
filter expression 'common_pid != $PERFPID' to filters. If other filter expression 'common_pid != $PERFPID' to filters. If other
'--filter' exists, the new filter expression will be combined with '--filter' exists, the new filter expression will be combined with
them by '&&'. them by '&&'.
......
@@ -54,6 +54,8 @@ endif
 ifeq ($(SRCARCH),arm64)
   NO_PERF_REGS := 0
+  NO_SYSCALL_TABLE := 0
+  CFLAGS += -I$(OUTPUT)arch/arm64/include/generated
   LIBUNWIND_LIBS = -lunwind -lunwind-aarch64
 endif
@@ -905,8 +907,8 @@ bindir = $(abspath $(prefix)/$(bindir_relative))
 mandir = share/man
 infodir = share/info
 perfexecdir = libexec/perf-core
-perf_include_dir = lib/include/perf
-perf_examples_dir = lib/examples/perf
+perf_include_dir = lib/perf/include
+perf_examples_dir = lib/perf/examples
 sharedir = $(prefix)/share
 template_dir = share/perf-core/templates
 STRACE_GROUPS_DIR = share/perf-core/strace/groups
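A minimal sketch for sanity-checking the relocated directories after this change, assuming a throwaway prefix (the path is illustrative):

  make -C tools/perf prefix=/tmp/perf-test install
  ls /tmp/perf-test/lib/perf/include /tmp/perf-test/lib/perf/examples

Both trees now live under lib/perf/ rather than being split across lib/include/perf and lib/examples/perf.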
......
@@ -384,6 +384,8 @@ export INSTALL SHELL_PATH
 SHELL = $(SHELL_PATH)
+linux_uapi_dir := $(srctree)/tools/include/uapi/linux
 beauty_outdir := $(OUTPUT)trace/beauty/generated
 beauty_ioctl_outdir := $(beauty_outdir)/ioctl
 drm_ioctl_array := $(beauty_ioctl_outdir)/drm_ioctl_array.c
@@ -431,6 +433,12 @@ kvm_ioctl_tbl := $(srctree)/tools/perf/trace/beauty/kvm_ioctl.sh
 $(kvm_ioctl_array): $(kvm_hdr_dir)/kvm.h $(kvm_ioctl_tbl)
	$(Q)$(SHELL) '$(kvm_ioctl_tbl)' $(kvm_hdr_dir) > $@
+socket_ipproto_array := $(beauty_outdir)/socket_ipproto_array.c
+socket_ipproto_tbl := $(srctree)/tools/perf/trace/beauty/socket_ipproto.sh
+$(socket_ipproto_array): $(linux_uapi_dir)/in.h $(socket_ipproto_tbl)
+	$(Q)$(SHELL) '$(socket_ipproto_tbl)' $(linux_uapi_dir) > $@
 vhost_virtio_ioctl_array := $(beauty_ioctl_outdir)/vhost_virtio_ioctl_array.c
 vhost_virtio_hdr_dir := $(srctree)/tools/include/uapi/linux
 vhost_virtio_ioctl_tbl := $(srctree)/tools/perf/trace/beauty/vhost_virtio_ioctl.sh
@@ -566,6 +574,7 @@ prepare: $(OUTPUT)PERF-VERSION-FILE $(OUTPUT)common-cmds.h archheaders $(drm_ioc
	$(sndrv_ctl_ioctl_array) \
	$(kcmp_type_array) \
	$(kvm_ioctl_array) \
+	$(socket_ipproto_array) \
	$(vhost_virtio_ioctl_array) \
	$(madvise_behavior_array) \
	$(perf_ioctl_array) \
@@ -860,6 +869,7 @@ clean:: $(LIBTRACEEVENT)-clean $(LIBAPI)-clean $(LIBBPF)-clean $(LIBSUBCMD)-clea
	$(OUTPUT)$(sndrv_pcm_ioctl_array) \
	$(OUTPUT)$(kvm_ioctl_array) \
	$(OUTPUT)$(kcmp_type_array) \
+	$(OUTPUT)$(socket_ipproto_array) \
	$(OUTPUT)$(vhost_virtio_ioctl_array) \
	$(OUTPUT)$(perf_ioctl_array) \
	$(OUTPUT)$(prctl_option_array) \
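The socket_ipproto generator added above can also be run by hand, mirroring the make rule (run from the top of the source tree; the output path is illustrative):

  sh tools/perf/trace/beauty/socket_ipproto.sh tools/include/uapi/linux > /tmp/socket_ipproto_array.c

The resulting array is what lets 'perf trace' print symbolic IPPROTO_* names for the 'socket' syscall's protocol argument.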
......
@@ -4,3 +4,24 @@ PERF_HAVE_DWARF_REGS := 1
 endif
 PERF_HAVE_JITDUMP := 1
 PERF_HAVE_ARCH_REGS_QUERY_REGISTER_OFFSET := 1
+
+#
+# Syscall table generation for perf
+#
+out    := $(OUTPUT)arch/arm64/include/generated/asm
+header := $(out)/syscalls.c
+sysdef := $(srctree)/tools/include/uapi/asm-generic/unistd.h
+sysprf := $(srctree)/tools/perf/arch/arm64/entry/syscalls/
+systbl := $(sysprf)/mksyscalltbl
+
+# Create output directory if not already present
+_dummy := $(shell [ -d '$(out)' ] || mkdir -p '$(out)')
+
+$(header): $(sysdef) $(systbl)
+	$(Q)$(SHELL) '$(systbl)' '$(CC)' '$(HOSTCC)' $(sysdef) > $@
+
+clean::
+	$(call QUIET_CLEAN, arm64) $(RM) $(header)
+
+archheaders: $(header)
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
#
# Generate system call table for perf. Derived from
# powerpc script.
#
# Copyright IBM Corp. 2017
# Author(s):  Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
# Changed by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
# Changed by: Kim Phillips <kim.phillips@arm.com>

# $1: target compiler, only used to preprocess the unistd.h being scanned
# $2: host compiler, builds the temporary table-printing helper
# $3: path to the asm-generic unistd.h to generate the table from
gcc=$1
hostcc=$2
input=$3

if ! test -r $input; then
	echo "Could not read input file" >&2
	exit 1
fi

# Build and run a small host program that emits one '[nr] = "name"' row per
# syscall name read from stdin, plus the SYSCALLTBL_ARM64_MAX_ID define.
create_table_from_c()
{
	local sc nr last_sc

	create_table_exe=`mktemp /tmp/create-table-XXXXXX`

	{
	cat <<-_EoHEADER
#include <stdio.h>
#define __ARCH_WANT_RENAMEAT
#include "$input"
int main(int argc, char *argv[])
{
_EoHEADER
	while read sc nr; do
		printf "%s\n" " printf(\"\\t[%d] = \\\"$sc\\\",\\n\", __NR_$sc);"
		last_sc=$sc
	done
	printf "%s\n" " printf(\"#define SYSCALLTBL_ARM64_MAX_ID %d\\n\", __NR_$last_sc);"
	printf "}\n"
	} | $hostcc -o $create_table_exe -x c -

	$create_table_exe
	rm -f $create_table_exe
}

create_table()
{
	echo "static const char *syscalltbl_arm64[] = {"
	create_table_from_c
	echo "};"
}

# List the __NR_* macros seen by the target compiler, sort them numerically
# and turn them into the C table.
$gcc -E -dM -x c $input \
	|sed -ne 's/^#define __NR_//p' \
	|sort -t' ' -k2 -nu \
	|create_table
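For reference, the arm64 Makefile above invokes this script as '$(systbl) $(CC) $(HOSTCC) $(sysdef)'; a hedged manual run might look like this (the cross-compiler name and the relative header path are assumptions):

  cd tools/perf/arch/arm64/entry/syscalls
  sh mksyscalltbl aarch64-linux-gnu-gcc gcc \
      ../../../../../include/uapi/asm-generic/unistd.h > syscalls.c

  # syscalls.c then starts roughly like:
  #   static const char *syscalltbl_arm64[] = {
  #       [0] = "io_setup",
  #       [1] = "io_destroy",
  #       ...

The first compiler only preprocesses the target unistd.h to enumerate the __NR_* values; the second builds the small temporary helper that prints the table rows.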
@@ -58,9 +58,13 @@ static int check_return_reg(int ra_regno, Dwarf_Frame *frame)
 	}
 	/*
-	 * Check if return address is on the stack.
+	 * Check if return address is on the stack. If return address
+	 * is in a register (typically R0), it is yet to be saved on
+	 * the stack.
 	 */
-	if (nops != 0 || ops != NULL)
+	if ((nops != 0 || ops != NULL) &&
+	    !(nops == 1 && ops[0].atom == DW_OP_regx &&
+	      ops[0].number2 == 0 && ops[0].offset == 0))
 		return 0;
 	/*
@@ -246,7 +250,7 @@ int arch_skip_callchain_idx(struct thread *thread, struct ip_callchain *chain)
 	if (!chain || chain->nr < 3)
 		return skip_slot;
-	ip = chain->ips[2];
+	ip = chain->ips[1];
 	thread__find_symbol(thread, PERF_RECORD_MISC_USER, ip, &al);
......
@@ -102,7 +102,7 @@ const char * const kvm_skip_events[] = {
 int cpu_isa_init(struct perf_kvm_stat *kvm, const char *cpuid)
 {
-	if (strstr(cpuid, "IBM/S390")) {
+	if (strstr(cpuid, "IBM")) {
 		kvm->exit_reasons = sie_exit_reasons;
 		kvm->exit_reasons_isa = "SIE";
 	} else
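The relaxed cpuid match is exercised through the usual s390 'perf kvm stat' flow; a hedged example (duration and options are arbitrary):

  perf kvm stat record -a -- sleep 10
  perf kvm stat report --event=vmexit

Matching on just "IBM" presumably lets cpuid strings that no longer read exactly "IBM/S390" still select the SIE exit-reason table.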
......
@@ -2193,7 +2193,7 @@ static void print_cacheline(struct c2c_hists *c2c_hists,
 	fprintf(out, "%s\n", bf);
 	fprintf(out, " -------------------------------------------------------------\n");
-	hists__fprintf(&c2c_hists->hists, false, 0, 0, 0, out, true);
+	hists__fprintf(&c2c_hists->hists, false, 0, 0, 0, out, false);
 }
 static void print_pareto(FILE *out)
@@ -2268,7 +2268,7 @@ static void perf_c2c__hists_fprintf(FILE *out, struct perf_session *session)
 	fprintf(out, "=================================================\n");
 	fprintf(out, "#\n");
-	hists__fprintf(&c2c.hists.hists, true, 0, 0, 0, stdout, false);
+	hists__fprintf(&c2c.hists.hists, true, 0, 0, 0, stdout, true);
 	fprintf(out, "\n");
 	fprintf(out, "=================================================\n");
@@ -2349,6 +2349,9 @@ static int perf_c2c__browse_cacheline(struct hist_entry *he)
 	" s Toggle full length of symbol and source line columns \n"
 	" q Return back to cacheline list \n";
+	if (!he)
+		return 0;
+
 	/* Display compact version first. */
 	c2c.symbol_full = false;
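A hedged way to reach the guarded path: open the TUI report for a data file that contains no shareable cachelines and try to enter a cacheline (the workload is illustrative):

  perf c2c record -- sleep 1
  perf c2c report

Without the NULL check above, browsing an empty cacheline list could hand a NULL hist_entry to perf_c2c__browse_cacheline().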
......
@@ -696,7 +696,7 @@ static void hists__process(struct hists *hists)
 	hists__output_resort(hists, NULL);
 	hists__fprintf(hists, !quiet, 0, 0, 0, stdout,
-		       symbol_conf.use_callchain);
+		       !symbol_conf.use_callchain);
 }
 static void data__fprintf(void)
......
@@ -307,7 +307,7 @@ static void perf_top__print_sym_table(struct perf_top *top)
 	hists__output_recalc_col_len(hists, top->print_entries - printed);
 	putchar('\n');
 	hists__fprintf(hists, false, top->print_entries - printed, win_width,
-		       top->min_percent, stdout, symbol_conf.use_callchain);
+		       top->min_percent, stdout, !symbol_conf.use_callchain);
 }
 static void prompt_integer(int *target, const char *msg)
......