- 28 Jul 2014, 1 commit
-
-
By Michael Ellerman

Because we reuse cpuhw->mmcr on each call to compute_mmcr(), there's a risk that we could forget to set one of the values and use whatever value was in there previously. Currently all the implementations are careful to set all the values, but it's safer to clear them all before we call compute_mmcr().
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
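A condensed sketch of the idea; cpuhw->mmcr and the compute_mmcr() hook are real names from the powerpc perf core, but treat the exact call site and signature as illustrative:

    /* Clear the cached MMCR values before asking the backend to
     * recompute them, so a backend that skips a register cannot
     * silently inherit a stale value from the previous run. */
    memset(cpuhw->mmcr, 0, sizeof(cpuhw->mmcr));

    err = ppmu->compute_mmcr(cpuhw->events, cpuhw->n_events,
                             cpuhw->hwc_index, cpuhw->mmcr);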
-
- 23 Jul 2014, 1 commit
-
-
By Michael Ellerman

In the recent commit b50a6c58 "Clear MMCR2 when enabling PMU", I screwed up the handling of MMCR2 for tasks using EBB. We must make sure we set MMCR2 *before* ebb_switch_in(), otherwise we overwrite the value of MMCR2 that userspace may have written. That potentially breaks a task that uses EBB and manually uses MMCR2 for event freezing.
Fixes: b50a6c58 ("powerpc/perf: Clear MMCR2 when enabling PMU")
Cc: stable@vger.kernel.org
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
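The ordering constraint, as a condensed sketch of power_pmu_enable(); PPMU_ARCH_207S, SPRN_MMCR2 and ebb_switch_in() are real names from this series, the surrounding code is compressed:

    /* Write a sane default MMCR2 first ... */
    if (ppmu->flags & PPMU_ARCH_207S)
        mtspr(SPRN_MMCR2, 0);

    /* ... then let an EBB task restore its own MMCR2 ... */
    mmcr0 = ebb_switch_in(ebb, cpuhw->mmcr[0]);

    /* ... and only then un-freeze the counters. */
    mb();
    mtspr(SPRN_MMCR0, mmcr0);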
-
- 11 Jul 2014, 3 commits
-
-
By Anton Blanchard

We are seeing a lot of PMU warnings on POWER8:

    Can't find PMC that caused IRQ

Looking closer, the active PMC is 0 at this point and we took a PMU exception on the transition from negative to 0. Some versions of POWER8 have an issue where they edge-detect rather than level-detect PMC overflows. A number of places program the PMC with (0x80000000 - period_left), where period_left can be negative. We can either fix all of these or just ensure that period_left is always >= 1. This patch takes the second option.
Cc: <stable@vger.kernel.org>
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
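A sketch of the clamp; the 0x80000000 bias is the standard trick that makes a PMC interrupt after period_left events, and write_pmc() is the real accessor, but the exact placement is illustrative:

    s64 left = local64_read(&event->hw.period_left);
    unsigned long val = 0;

    /* Never program the PMC at or above 0x80000000: clamping left
     * to at least 1 keeps the start value <= 0x7fffffff, so the
     * edge-detect erratum cannot see a phantom overflow. */
    if (left < 1)
        left = 1;
    if (left < 0x80000000L)
        val = 0x80000000L - left;
    write_pmc(event->hw.idx, val);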
-
By Joel Stanley

On POWER8 when switching to a KVM guest we set bits in MMCR2 to freeze the PMU counters. Aside from on boot, they are then never reset, resulting in stuck perf counters for any user in the guest or host. We now set MMCR2 to 0 whenever enabling the PMU, which provides a sane state for perf to use the PMU counters under either the guest or the host.

This was manifesting as a bug with ppc64_cpu --frequency:

    $ sudo ppc64_cpu --frequency
    WARNING: couldn't run on cpu 0
    WARNING: couldn't run on cpu 8
    ...
    WARNING: couldn't run on cpu 144
    WARNING: couldn't run on cpu 152
    min: 18446744073.710 GHz (cpu -1)
    max: 0.000 GHz (cpu -1)
    avg: 0.000 GHz

The command uses a perf counter to measure CPU cycles over a fixed amount of time, in order to approximate the frequency of the machine. The counters were returning zero once a guest was started, regardless of whether it was still running or had been shut down. By dumping the value of MMCR2, it was observed that once a guest is running MMCR2 is set to 1s, which stops counters from running:

    $ sudo sh -c 'echo p > /proc/sysrq-trigger'
    CPU: 0 PMU registers, ppmu = POWER8 n_counters = 6
    PMC1:  5b635e38 PMC2: 00000000 PMC3: 00000000 PMC4: 00000000
    PMC5:  1bf5a646 PMC6: 5793d378 PMC7: deadbeef PMC8: deadbeef
    MMCR0: 0000000080000000 MMCR1: 000000001e000000 MMCRA: 0000040000000000
    MMCR2: fffffffffffffc00 EBBHR: 0000000000000000 EBBRR: 0000000000000000
    BESCR: 0000000000000000 SIAR:  00000000000a51cc SDAR:  c00000000fc40000
    SIER:  0000000001000000

This is done unconditionally in book3s_hv_interrupts.S upon entering the guest, and the original value is only saved/restored if the host has indicated it was using the PMU. This is okay; however, the user of the PMU needs to ensure that it is in a defined state when it starts using it.
Fixes: e05b9b9e ("powerpc/perf: Power8 PMU support")
Cc: stable@vger.kernel.org
Signed-off-by: Joel Stanley <joel@jms.id.au>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Joel Stanley

Instead of separate bits for every POWER8 PMU feature, have a single one for v2.07 of the architecture. This saves us adding an MMCR2 define for a future patch.
Cc: stable@vger.kernel.org
Signed-off-by: Joel Stanley <joel@jms.id.au>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 24 Mar 2014, 4 commits
-
-
By Michael Ellerman

The previous commit added constraint and register handling to allow processes using EBB (Event Based Branches) to request access to the BHRB (Branch History Rolling Buffer). With that in place we can allow processes using EBB to access the BHRB. This is achieved by setting BHRBA in MMCR0 when we enable EBB access. We must also clear BHRBA when we are disabling it.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Michael Ellerman

Although we already block EBB events which request sampling using sample_period, technically it's possible for an event to set sample_type but not sample_period. Nothing terrible will happen if an EBB event does specify sample_type, but it signals a major confusion on the part of userspace, and so we do them the favor of rejecting it.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
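A hedged sketch of the extra check in power_pmu_event_init(), assuming a helper like the is_ebb_event() used elsewhere in core-book3s.c; the exact grouping of conditions in the merged patch may differ:

    if (is_ebb_event(event)) {
        /* EBB overflows branch to userspace, not the kernel, so
         * sampling makes no sense: reject sample_type just like
         * sample_period. */
        if (event->attr.sample_period || event->attr.sample_type)
            return -EINVAL;
    }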
-
By Michael Ellerman

Some POWER8 revisions have a hardware bug where we can lose a PMU exception. This commit adds a workaround to detect the bad condition and rectify the situation. See the comment in the commit for a full description.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Anshuman Khandual

Currently the sysrq ShowRegs command does not print any PMU registers, as we have an empty definition for perf_event_print_debug(). This patch defines perf_event_print_debug() to print various PMU registers. Example output:

    CPU: 0 PMU registers, ppmu = POWER7 n_counters = 6
    PMC1:  00000000 PMC2: 00000000 PMC3: 00000000 PMC4: 00000000
    PMC5:  00000000 PMC6: 00000000 PMC7: deadbeef PMC8: deadbeef
    MMCR0: 0000000080000000 MMCR1: 0000000000000000 MMCRA: 0f00000001000000
    SIAR:  0000000000000000 SDAR:  0000000000000000 SIER:  0000000000000000

Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
[mpe: Fix 32-bit build and rework formatting for compactness]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 11 Feb 2014, 1 commit
-
-
By Anshuman Khandual

Right now the config_bhrb() PMU-specific call happens after write_mmcr0(), which actually enables the PMU for event counting and interrupts. So there is a small window of time where the PMU and BHRB run without the required HW branch filter (if any) enabled in BHRB. This can cause some of the branch samples to be collected through BHRB without any filter applied, which affects the correctness of the results. This patch moves the BHRB config function call before enabling interrupts.

Here are some data points captured via trace prints which show how we could get PMU interrupts with the BHRB filter NOT enabled, using a standard perf record command line (asking for branch record information as well):

    $ perf record -j any_call ls

Before the patch:

    ls-1962 [003] d... 2065.299590: .perf_event_interrupt: MMCRA: 40000000000
    ls-1962 [003] d... 2065.299603: .perf_event_interrupt: MMCRA: 40000000000
    ...

All the PMU interrupts before this point did not have the requested HW branch filter enabled in the MMCRA.

    ls-1962 [003] d... 2065.299647: .perf_event_interrupt: MMCRA: 40040000000
    ls-1962 [003] d... 2065.299662: .perf_event_interrupt: MMCRA: 40040000000

After the patch:

    ls-1850 [008] d... 190.311828: .perf_event_interrupt: MMCRA: 40040000000
    ls-1850 [008] d... 190.311848: .perf_event_interrupt: MMCRA: 40040000000

All the PMU interrupts have the requested HW BHRB branch filter enabled in MMCRA.
Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
[mpe: Fixed up whitespace and cleaned up changelog]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
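The fix itself is a reordering in power_pmu_enable(); a condensed sketch (cpuhw->bhrb_users, config_bhrb() and write_mmcr0() are the real names from this file):

    /* Program the BHRB hardware filter while the counters are
     * still frozen ... */
    if (cpuhw->bhrb_users)
        ppmu->config_bhrb(cpuhw->bhrb_filter);

    /* ... and only then write MMCR0, which enables counting and
     * PMU interrupts. Previously config_bhrb() ran after this. */
    write_mmcr0(cpuhw, mmcr0);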
-
- 14 Aug 2013, 1 commit
-
-
By Anton Blanchard

Address some of the trivial sparse warnings in arch/powerpc.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 01 Aug 2013, 1 commit
-
-
By Michael Ellerman

We use bit 63 of the event code for userspace to request that the event be counted using EBB (Event Based Branches). Export this value, making it part of the API, though only on processors that support EBB.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 24 Jul 2013, 1 commit
-
-
By Anshuman Khandual

When the task moves around the system, the corresponding cpuhw per-cpu structure should be populated with the BHRB filter request value, so that the PMU can be configured appropriately during the next call into power_pmu_enable().
Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Acked-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
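A hedged sketch of the fix in power_pmu_add(), which runs whenever an event is scheduled onto a CPU; has_branch_stack() and the bhrb_filter_map() hook are real names, the placement is condensed:

    /* Republish this event's branch-filter request into the
     * current CPU's cpu_hw_events, so the next power_pmu_enable()
     * on this CPU programs the BHRB filter the task asked for. */
    if (has_branch_stack(event))
        cpuhw->bhrb_filter = ppmu->bhrb_filter_map(
                    event->attr.branch_sample_type);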
-
- 01 Jul 2013, 6 commits
-
-
By Michael Ellerman

Add support for EBB (Event Based Branches) on 64-bit book3s. See the included documentation for more details. EBBs are a feature which allows the hardware to branch directly to a specified user space address when a PMU event overflows. This can be used by programs for self-monitoring with no kernel involvement in the inner loop. Most of the logic is in the generic book3s code, primarily to avoid a proliferation of PMU callbacks.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Michael Ellerman

In power_pmu_enable() we still enable the PMU even if we have zero events. This should have no effect but doesn't make much sense. Instead, just return after telling the hypervisor that we are not using the PMCs.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
CC: <stable@vger.kernel.org> [v3.10]
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
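A sketch of the early return in power_pmu_enable(); ppc_set_pmu_inuse() is the existing hypervisor hint:

    if (!cpuhw->n_events) {
        /* Nothing to count: tell the hypervisor we're not using
         * the PMCs and skip the MMCR programming entirely. */
        ppc_set_pmu_inuse(0);
        goto out;
    }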
-
By Michael Ellerman

In power_pmu_enable() we can use the existing out label to reduce the number of return paths.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
CC: <stable@vger.kernel.org> [v3.10]
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Michael Ellerman

On Power8 we can freeze PMC5 and PMC6 if we're not using them. Normally they run all the time. As noticed by Anshuman, we should unfreeze them when we disable the PMU, as there are legacy tools which expect them to run all the time.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
CC: <stable@vger.kernel.org> [v3.10]
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Michael Ellerman

In pmu_disable() we disable the PMU by setting the FC (Freeze Counters) bit in MMCR0. In order to do this we have to read/modify/write MMCR0. It's possible that we read a value from MMCR0 which has PMAO (PMU Alert Occurred) set. When we write that value back it will cause an interrupt to occur. We will then end up in the PMU interrupt handler even though we are supposed to have just disabled the PMU.

We can avoid this by making sure we never write PMAO back. We should not lose interrupts, because when the PMU is re-enabled the overflowed values will cause another interrupt.

We also reorder the clearing of SAMPLE_ENABLE so that it is done after the PMU is frozen. Otherwise there is a small window between the clearing of SAMPLE_ENABLE and the setting of FC where we could take an interrupt and incorrectly see SAMPLE_ENABLE not set. This would, for example, change the logic in perf_read_regs().
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
CC: <stable@vger.kernel.org> [v3.10]
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
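A condensed sketch of the resulting read-modify-write in power_pmu_disable(); the mask in the merged patch clears a few more enable bits, the key point is that PMAO is never written back:

    unsigned long val;

    val  = mfspr(SPRN_MMCR0);
    val |= MMCR0_FC;        /* freeze all counters */

    /* Never write a set PMAO back: doing so immediately raises the
     * PMU interrupt we are trying to quiesce. A pending overflow
     * re-fires once the PMU is re-enabled, so nothing is lost.
     * Clearing FC56 also lets PMC5/6 free-run for legacy tools. */
    val &= ~(MMCR0_PMAO | MMCR0_FC56);

    mtspr(SPRN_MMCR0, val);
    mb();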
-
By Paul Gortmaker

The __cpuinit type of throwaway sections might have made sense some time ago when RAM was more constrained, but now the savings do not offset the cost and complications. For example, the fix in commit 5e427ec2 ("x86: Fix bit corruption at CPU resume time") is a good example of the nasty type of bugs that can be created with improper use of the various __init prefixes.

After a discussion on LKML[1] it was decided that cpuinit should go the way of devinit and be phased out. Once all the users are gone, we can then finally remove the macros themselves from linux/init.h.

This removes all the powerpc uses of the __cpuinit macros. There are no __CPUINIT users in assembly files in powerpc.

[1] https://lkml.org/lkml/2013/5/20/589

Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Josh Boyer <jwboyer@gmail.com>
Cc: Matt Porter <mporter@kernel.crashing.org>
Cc: Kumar Gala <galak@kernel.crashing.org>
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 10 Jun 2013, 1 commit
-
-
By Michael Ellerman

In commit bc09c219 "Fix finding overflowed PMC in interrupt" we added a printk() to the PMU exception handler. Unfortunately that is not safe. The problem is that the PMU exception may run even when interrupts are soft-disabled, aka NMI context. We do this so that we can profile parts of the kernel that have interrupts soft-disabled. But by calling printk() from the exception handler, we can potentially deadlock in the printk code on logbuf_lock, e.g.:

    [c00000038ba575c0] c000000000081928 .vprintk_emit+0xa8/0x540
    [c00000038ba576a0] c0000000007bcde8 .printk+0x48/0x58
    [c00000038ba57710] c000000000076504 .perf_event_interrupt+0x2d4/0x490
    [c00000038ba57810] c00000000001f6f8 .performance_monitor_exception+0x48/0x60
    [c00000038ba57880] c0000000000032cc performance_monitor_common+0x14c/0x180
    --- Exception: f01 (Performance Monitor) at c0000000007b25d4 ._raw_spin_lock_irq+0x64/0xc0
    [c00000038ba57bf0] c00000000007ed90 .devkmsg_read+0xd0/0x5a0
    [c00000038ba57d00] c0000000001c2934 .vfs_read+0xc4/0x1e0
    [c00000038ba57d90] c0000000001c2cd8 .SyS_read+0x58/0xd0
    [c00000038ba57e30] c000000000009d54 syscall_exit+0x0/0x98
    --- Exception: c01 (System Call) at 00001fffffbf6f7c
    SP (3ffff6d4de10) is in userspace

Fix it by making sure we only call printk() when we are not in NMI context.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Cc: <stable@vger.kernel.org> # 3.9
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
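A sketch of the guard; perf_intr_is_nmi() already existed to choose between nmi_enter() and irq_enter(), the handler body is elided:

    int nmi = perf_intr_is_nmi(regs);

    if (nmi)
        nmi_enter();
    else
        irq_enter();

    /* ... find and handle the overflowed PMC, setting "found" ... */

    /* printk() takes logbuf_lock, which may already be held by the
     * code we interrupted, so only complain outside NMI context. */
    if (!found && !nmi)
        printk(KERN_WARNING "Can't find PMC that caused IRQ\n");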
-
- 01 Jun 2013, 2 commits
-
-
By Michael Ellerman

Commit 8f61aa32 "Add support for SIER" missed updates to siar_valid() and perf_get_data_addr(). In both cases we need to check the SIER instead of mmcra.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Michael Ellerman

This is a revert, and then some, of commit 860aad71 "Add regs_no_sipr()". This workaround was only needed on early chip versions. As before, NO_SIPR becomes a static flag of the PMU struct.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 14 May 2013, 3 commits
-
-
By Michael Neuling

Currently we only set the "to" address in the branch stack when the CPU explicitly gives us a value. Unfortunately it only does this for XL-form branches (e.g. blr, bctr, bctar) and not I- and B-form branches (e.g. b, bc). Fortunately, if we read the instruction from memory we can extract the offset of a branch and calculate the target address.

This adds a function power_pmu_bhrb_to() to calculate the target/to address of the corresponding I- and B-form branches. It handles branches in both user and kernel spaces. It also plumbs this into the perf BHRB reading code.
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
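A hedged sketch of the target computation; branch_target() lives in arch/powerpc/lib/code-patching.c and BRANCH_ABSOLUTE is its companion define, but treat the details as illustrative. The userspace path must copy the instruction and then rebase the relative offset, since the copy no longer sits at the original address:

    static __u64 power_pmu_bhrb_to(u64 addr)
    {
        unsigned int instr;
        __u64 target;
        int ret;

        if (is_kernel_addr(addr))
            return branch_target((unsigned int *)addr);

        /* Userspace: copy the instruction, then decode it */
        pagefault_disable();
        ret = __get_user_inatomic(instr, (unsigned int __user *)addr);
        pagefault_enable();
        if (ret)
            return 0;

        target = branch_target(&instr);
        if (!target || (instr & BRANCH_ABSOLUTE))
            return target;

        /* Rebase a relative target from the copy to the original */
        return target - (unsigned long)&instr + addr;
    }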
-
By Michael Neuling

The current Branch History Rolling Buffer (BHRB) code misinterprets the order of entries in the hardware buffer. It assumes that a branch target address will be read _after_ its corresponding branch. In reality the branch target comes before (lower mfbhrb entry) its corresponding branch. This is a rewrite of the code to take this into account.
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Michael Neuling

The new Branch History Rolling Buffer (BHRB) code is only useful on 64-bit processors, so move it into the #ifdef CONFIG_PPC64 region. This avoids code bloat on 32-bit systems.
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 26 Apr 2013, 6 commits
-
-
By Anshuman Khandual

Provides basic enablement for the perf branch stack sampling framework on POWER8 processor based platforms. Adds new BHRB-related elements into the cpu_hw_event structure to represent the current BHRB config and BHRB filter configuration, to manage context, and to hold the output BHRB buffer during a PMU interrupt before passing it to user space. This also enables processing of BHRB data and converts it into the generic perf branch stack data format.
Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Michael Ellerman

On power8 we have a new SIER (Sampled Instruction Event Register), which captures information about instructions when we have random sampling enabled. Add support for loading the SIER into pt_regs, overloading regs->dar. Also set the new NO_SIPR flag in regs->result if we don't have SIPR. Update regs_sihv/sipr() to look for SIPR/SIHV in the SIER.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Michael Ellerman

On power8 the presence or absence of SIPR depends on settings at runtime, so convert to using a dynamic flag for NO_SIPR. Existing backends that set NO_SIPR now set the dynamic flag unconditionally.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Michael Ellerman

Add an accessor for regs->result so we can use it to store more flags in future.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Michael Ellerman

On power8 the SIPR and SIHV bits are not in MMCRA, so convert the routines to take regs and change the names accordingly.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Michael Ellerman

In perf_ip_adjust() we potentially use the MMCRA[SLOT] field to adjust the reported IP of a sampled instruction. Currently the logic is written so that if the backend does NOT have the PPMU_ALT_SIPR flag set then we assume MMCRA[SLOT] exists. However on power8 we do not want to set ALT_SIPR (it's in a third location), and we also do not have MMCRA[SLOT]. So add a new flag which only indicates whether MMCRA[SLOT] exists.

Naively we'd set it on everything except power6/7, because they set ALT_SIPR, and we've reversed the polarity of the flag. But it's more complicated than that:

mpc7450 is 32-bit and uses its own version of perf_ip_adjust() which doesn't use MMCRA[SLOT], so it doesn't need the new flag set; behaviour unchanged.

PPC970 (and I assume power4) don't have MMCRA[SLOT], so they shouldn't have the new flag set. This is a behaviour change on those cpus, though we were probably getting lucky and the bits in question were 0.

power5 and power5+ set the new flag; behaviour unchanged. power6 and power7 do not set the new flag; behaviour unchanged.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
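A sketch of the guarded use in perf_ip_adjust(); PPMU_HAS_SSLOT is the new flag, and MMCRA is recovered from regs->dsisr, where the exception path stashes it:

    static inline unsigned long perf_ip_adjust(struct pt_regs *regs)
    {
        unsigned long mmcra = regs->dsisr;

        if ((ppmu->flags & PPMU_HAS_SSLOT) &&
            (mmcra & MMCRA_SAMPLE_ENABLE)) {
            unsigned long slot =
                (mmcra & MMCRA_SLOT) >> MMCRA_SLOT_SHIFT;
            if (slot > 1)
                return 4 * (slot - 1);
        }
        return 0;
    }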
-
- 01 Feb 2013, 1 commit
-
-
By Sukadev Bhattiprolu

Make the generic perf events in POWER7 available via sysfs.

    $ ls /sys/bus/event_source/devices/cpu/events
    branch-instructions
    branch-misses
    cache-misses
    cache-references
    cpu-cycles
    instructions
    stalled-cycles-backend
    stalled-cycles-frontend

    $ cat /sys/bus/event_source/devices/cpu/events/cache-misses
    event=0x400f0

This patch is based on commits that implement this functionality on x86, e.g.:

    commit a4747393
    Author: Jiri Olsa <jolsa@redhat.com>
    Date: Wed Oct 10 14:53:11 2012 +0200
    perf/x86: Make hardware event translations available in sysfs

Changelog [v2]:
[Jiri Olsa] Drop EVENT_ID() macro since it is only used once.

Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Anton Blanchard <anton@au1.ibm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Stephane Eranian <eranian@google.com>
Cc: linuxppc-dev@ozlabs.org
Link: http://lkml.kernel.org/r/20130123062454.GD13720@us.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
- 29 Jan 2013, 1 commit
-
-
perf/Power: PERF_EVENT_IOC_ENABLE does not reenable event

If we disable a perf event because we exceeded the specified ->event_limit, power_pmu_stop() sets the PERF_HES_STOPPED flag on the event. If the application then re-enables the event using the PERF_EVENT_IOC_ENABLE ioctl, we don't ever clear this STOPPED flag. Consequently, user space is never notified of the event.

The following message has more background and a test case: http://lists.eecs.utk.edu/pipermail/ptools-perfapi/2012-October/002528.html

Used the following test cases to verify that this patch works on latest PAPI:

    $ papi.git/src/ctests/nonthread PAPI_TOT_CYC@5000000
    $ papi.git/src/ctests/overflow_single_event

Changelog [v2]:
[Paul Mackerras] Also clear the PERF_HES_UPTODATE flag since we are restarting the event; clean up comments and patch description.

Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
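A condensed sketch of the fix in the power_pmu_start() callback; the key line is zeroing hw.state, which clears both PERF_HES_STOPPED and PERF_HES_UPTODATE before the PMC is reprogrammed:

    local_irq_save(flags);
    perf_pmu_disable(event->pmu);

    event->hw.state = 0;    /* clear STOPPED and UPTODATE */

    left = local64_read(&event->hw.period_left);
    val = 0;
    if (left < 0x80000000L)
        val = 0x80000000L - left;
    write_pmc(event->hw.idx, val);

    perf_event_update_userpage(event);
    perf_pmu_enable(event->pmu);
    local_irq_restore(flags);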
-
- 10 Jan 2013, 2 commits
-
-
By Michael Neuling

On POWER7, when we have really small counts left before overflow, we can take a PMU IRQ, but the PMC gets wound back to just before the overflow. If the kernel is setting the PMC to a value just before the overflow, we can get interrupted again without the PMC making any progress (i.e. another buggy overflow). In this case, we can end up making no forward progress, with the PMC interrupt returning us to the same count over and over.

The patch below detects when we are making no forward progress (i.e. delta = 0) and then increases the amount left before the overflow. This stops us from locking up.
Signed-off-by: Michael Neuling <mikey@neuling.org>
Reviewed-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Anton Blanchard <anton@samba.org>
Cc: Linux PPC dev <linuxppc-dev@ozlabs.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
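A hedged sketch of the forward-progress check in record_and_restart(); check_and_compute_delta() is the existing rollback-aware helper, and the exact adjustment in the merged patch may differ. The idea is to grow the remaining count whenever delta stays at zero:

    prev = local64_read(&event->hw.prev_count);
    delta = check_and_compute_delta(prev, val);
    local64_add(delta, &event->count);

    /* If the PMC rolled back to exactly where we left it, we made
     * no progress; nudge "left" up so the next programmed period
     * is larger and the counter can get past the overflow point. */
    left = local64_read(&event->hw.period_left) - delta;
    if (delta == 0)
        left++;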
-
By Michael Neuling

If a PMC is about to overflow on a counter that's on an active perf event (i.e. less than 256 from the end) and a _different_ PMC overflows just at this time (a PMC that's not on an active perf event), we currently mark the event as found, but in reality it's not, as it's likely the other PMC that caused the IRQ. Since we mark it as found, the second catch-all for overflows doesn't run, and we never reset the overflowing PMC. Hence we keep hitting that same PMC IRQ over and over and don't reset the actual overflowing counter.

This is a rewrite of the perf interrupt handler for book3s to get around this. We now check to see if any of the PMCs have actually overflowed (i.e. >= 0x80000000). If yes, we record it for active counters and just reset it for inactive counters. If it hasn't overflowed, then we check to see if it's one of the buggy power7 counters and, if it is, record it and continue. If none of the PMCs match this, then we note that we couldn't find the PMC that caused the IRQ.
Signed-off-by: Michael Neuling <mikey@neuling.org>
Reviewed-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Anton Blanchard <anton@samba.org>
Cc: Linux PPC dev <linuxppc-dev@ozlabs.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
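A condensed sketch of the rewritten loop in perf_event_interrupt(); pmc_overflow() here means "value >= 0x80000000", and the buggy-POWER7 catch-all pass is omitted:

    /* Read all the PMCs once up front */
    for (i = 0; i < ppmu->n_counter; ++i)
        val[i] = read_pmc(i + 1);

    found = 0;
    for (i = 0; i < ppmu->n_counter; ++i) {
        if (!pmc_overflow(val[i]))
            continue;
        found = 1;
        active = 0;
        for (j = 0; j < cpuhw->n_events; ++j) {
            event = cpuhw->event[j];
            if (event->hw.idx == (i + 1)) {
                active = 1;
                record_and_restart(event, val[i], regs);
                break;
            }
        }
        if (!active)    /* inactive counter: just reset it */
            write_pmc(i + 1, 0);
    }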
-
- 18 Oct 2012, 1 commit
-
-
By Benjamin Herrenschmidt

This reverts commit 81331211. The revert was requested by the author of the patch, as it seems to cause system hangs with some low-frequency events.
-
- 27 Sep 2012, 1 commit
-
-
powerpc/perf: Sample only if SIAR-Valid bit is set in P7+

On POWER7+ two new bits (mmcra[35] and mmcra[36]) indicate whether the contents of SIAR and SDAR are valid. For marked instructions on P7+, we must save the contents of the SIAR and SDAR registers only if these new bits are set. This code/check for the SIAR-Valid bit is specific to P7+, so rather than waste a CPU-feature bit, use the PVR flag.

Note that Carl Love proposed a similar change for oprofile: https://lkml.org/lkml/2012/6/22/309

Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
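A sketch of the resulting check; PPMU_SIAR_VALID and POWER7P_MMCRA_SIAR_VALID are from this change, the function is condensed (the real one also considers whether the sample is marked):

    static inline int siar_valid(struct pt_regs *regs)
    {
        unsigned long mmcra = regs->dsisr;

        /* Only P7+ advertises the SIAR-Valid bit; the flag is set
         * at init time based on the PVR, not a CPU-feature bit. */
        if (ppmu->flags & PPMU_SIAR_VALID)
            return mmcra & POWER7P_MMCRA_SIAR_VALID;
        return 1;
    }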
-
- 05 Sep 2012, 1 commit
-
-
By Michael Ellerman

We have an old FIXME in reg.h which points out that we should standardise on PVR_foo for our PVR #defines. Currently we use PVR_ on 32-bit and PV_ on 64-bit. So do that rename and remove the FIXME.

Seeing as we're touching all but one usage of __is_processor(), rename it to something less ugly and more indicative of what it does, which is simply to check the PVR version.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 24 Aug 2012, 1 commit
-
-
By Sukadev Bhattiprolu

For certain speculative events on Power7, 'perf stat' reports a far higher event count than 'perf record' for the same event. As described in the following commit, a performance monitor exception is raised even when the performance events are rolled back:

    commit 0837e324
    Author: Anton Blanchard <anton@samba.org>
    Date: Wed Mar 9 14:38:42 2011 +1100

perf_event_interrupt() records an event only when an overflow occurs. But this check for overflow is a simple 'if (val < 0)'. Because the events are rolled back, this check for overflow fails and the event is not recorded. perf_event_interrupt() later uses pmc_overflow() to detect the overflow and resets the counters, and the events are lost completely.

To properly detect the overflow of rolled-back events, use pmc_overflow() even when recording events.

To reproduce:

    $ cat strcpy.c
    #include <stdio.h>
    #include <string.h>
    main()
    {
        char buf[256];
        alarm(5);
        while(1)
            strcpy(buf, "string1");
    }

    $ perf record -e r20014 ./strcpy
    $ perf report -n > report.1
    $ perf stat -e r20014 > report.2
    # Compare report.1 and report.2

Reported-by: Maynard Johnson <mpjohn@us.ibm.com>
Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
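A sketch of pmc_overflow() and the call-site change; the 256 bound matches the maximum rollback distance of these speculative events, and the call site is condensed:

    static int pmc_overflow(unsigned long val)
    {
        if ((int)val < 0)       /* genuine overflow */
            return 1;
        /* A rolled-back counter sits just below the overflow
         * point; within 256 counts as an overflow too. */
        if ((0x80000000UL - val) <= 256)
            return 1;
        return 0;
    }

    /* in perf_event_interrupt(): */
    val = read_pmc(event->hw.idx);
    if (pmc_overflow(val))      /* was: if ((int)val < 0) */
        record_and_restart(event, val, regs);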
-
- 10 Jul 2012, 1 commit
-
-
By Anton Blanchard

At the moment we always use the SIAR if the PMU supports continuous sampling. Unfortunately the SIAR and the PMU exception are not synchronised for non-marked events, so we can end up with callchains that don't make sense. The following patch checks the HV and PR bits for samples coming from userspace and always uses pt_regs for them. Userspace will never have interrupts off, so there is no real advantage to using the SIAR for non-marked events in userspace.

I had experimented with a patch that did a similar thing for kernel samples, but we lost a significant amount of information. I was unable to profile any of our early exception code, for example.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
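A hedged sketch of the use_siar decision in perf_read_regs() for this era; MMCRA_SIPR set means the sample was taken in problem state (userspace), and the exact chain of conditions in the merged patch may differ:

    if (TRAP(regs) != 0xf00)
        use_siar = 0;       /* not a PMU exception */
    else if (mmcra & MMCRA_SAMPLE_ENABLE)
        use_siar = 1;       /* marked event: SIAR is synchronised */
    else if (mmcra & MMCRA_SIPR)
        use_siar = 0;       /* unmarked userspace sample: use pt_regs */
    else
        use_siar = 1;

    regs->result = use_siar;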
-