- 26 April 2013, 4 commits
-
-
By Michael Ellerman

On power8 the presence or absence of SIPR depends on settings at runtime, so convert to using a dynamic flag for NO_SIPR. Existing backends that set NO_SIPR unconditionally simply set the dynamic flag.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Michael Ellerman

Add an accessor for regs->result so we can use it to store more flags in future.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Michael Ellerman

On power8 the SIPR and SIHV bits are not in MMCRA, so convert the routines to take regs and change the names accordingly.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Michael Ellerman

In perf_ip_adjust() we potentially use the MMCRA[SLOT] field to adjust the reported IP of a sampled instruction. Currently the logic is written so that if the backend does NOT have the PPMU_ALT_SIPR flag set then we assume MMCRA[SLOT] exists. However on power8 we do not want to set ALT_SIPR (it's in a third location), and we also do not have MMCRA[SLOT]. So add a new flag which only indicates whether MMCRA[SLOT] exists.

Naively we'd set it on everything except power6/7, because they set ALT_SIPR and we've reversed the polarity of the flag. But it's more complicated than that:

- mpc7450 is 32-bit and uses its own version of perf_ip_adjust() which doesn't use MMCRA[SLOT], so it doesn't need the new flag set and its behaviour is unchanged.
- PPC970 (and I assume power4) don't have MMCRA[SLOT], so they shouldn't have the new flag set. This is a behaviour change on those CPUs, though we were probably getting lucky and the bits in question were 0.
- power5 and power5+ set the new flag; behaviour unchanged.
- power6 and power7 do not set the new flag; behaviour unchanged.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
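
As a rough illustration of the shape this change takes, here is a hedged sketch in plain C — not the kernel's diff, and the flag and mask names are stand-ins — showing the slot adjustment gated on an explicit capability flag instead of being inferred from the absence of PPMU_ALT_SIPR:

```c
/* Hedged sketch; flag and mask names are illustrative stand-ins. */
#define PPMU_HAS_SSLOT      0x00000020UL  /* hypothetical: "MMCRA[SLOT] exists" */
#define MMCRA_SAMPLE_ENABLE 0x00000001UL  /* hypothetical sampling-enable bit   */
#define MMCRA_SLOT          0x07000000UL  /* hypothetical SLOT field mask       */
#define MMCRA_SLOT_SHIFT    24

/* Offset (in bytes) to apply to the sampled IP, or 0 if the backend
 * does not have an MMCRA[SLOT] field at all. */
static unsigned long ip_adjust(unsigned long mmcra, unsigned long pmu_flags)
{
	if (!(pmu_flags & PPMU_HAS_SSLOT))
		return 0;                       /* power8, PPC970, ... */

	if (mmcra & MMCRA_SAMPLE_ENABLE) {
		unsigned long slot = (mmcra & MMCRA_SLOT) >> MMCRA_SLOT_SHIFT;
		if (slot > 1)
			return 4 * (slot - 1);  /* 4-byte instructions per slot */
	}
	return 0;
}
```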
-
- 01 February 2013, 1 commit
-
-
By Sukadev Bhattiprolu

Make the generic perf events in POWER7 available via sysfs.

    $ ls /sys/bus/event_source/devices/cpu/events
    branch-instructions  branch-misses  cache-misses  cache-references
    cpu-cycles  instructions  stalled-cycles-backend  stalled-cycles-frontend

    $ cat /sys/bus/event_source/devices/cpu/events/cache-misses
    event=0x400f0

This patch is based on commits that implement this functionality on x86, e.g.:

    commit a4747393
    Author: Jiri Olsa <jolsa@redhat.com>
    Date:   Wed Oct 10 14:53:11 2012 +0200
    perf/x86: Make hardware event translations available in sysfs

Changelog [v2]:
- [Jiri Olsa] Drop the EVENT_ID() macro since it is only used once.

Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Anton Blanchard <anton@au1.ibm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Stephane Eranian <eranian@google.com>
Cc: linuxppc-dev@ozlabs.org
Link: http://lkml.kernel.org/r/20130123062454.GD13720@us.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
- 29 January 2013, 1 commit
-
-
perf/Power: PERF_EVENT_IOC_ENABLE does not reenable event

If we disable a perf event because we exceeded the specified ->event_limit, power_pmu_stop() sets the PERF_HES_STOPPED flag on the event. If the application then re-enables the event using the PERF_EVENT_IOC_ENABLE ioctl, we never clear this STOPPED flag. Consequently, user space is never notified of the event.

The following message has more background and a test case:

    http://lists.eecs.utk.edu/pipermail/ptools-perfapi/2012-October/002528.html

Used the following test cases to verify that this patch works on the latest PAPI:

    $ papi.git/src/ctests/nonthread PAPI_TOT_CYC@5000000
    $ papi.git/src/ctests/overflow_single_event

Changelog [v2]:
- [Paul Mackerras] Also clear the PERF_HES_UPTODATE flag since we are restarting the event; clean up comments and patch description.

Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 10 January 2013, 2 commits
-
-
By Michael Neuling

On POWER7, when we have a really small count left before overflow, we can take a PMU IRQ but have the PMC wound back to just before the overflow. If the kernel then sets the PMC to a value just before the overflow, we can be interrupted again without the PMC making any progress (i.e. another buggy overflow). In this case we end up making no forward progress, with the PMC interrupt returning us to the same count over and over.

The change below detects when we are making no forward progress (i.e. delta = 0) and then increases the amount left before the overflow. This stops us from locking up.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Reviewed-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Anton Blanchard <anton@samba.org>
Cc: Linux PPC dev <linuxppc-dev@ozlabs.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
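
Sketched in isolation, the idea (hedged — this is not the literal kernel diff) is that the reprogramming path refuses to rearm the counter at the exact same distance when the last interrupt made no progress:

```c
/* Hedged sketch: prev/val are successive 32-bit PMC reads, period_left is
 * what remains of the sampling period. */
static long next_period_left(unsigned long prev, unsigned long val,
			     long period_left)
{
	unsigned long delta = (val - prev) & 0xffffffffUL;
	long left = period_left - (long)delta;

	/* No forward progress since the last PMU interrupt: widen the
	 * distance to overflow so we cannot be re-interrupted at the
	 * same count forever. */
	if (delta == 0)
		left++;

	return left;
}
```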
-
By Michael Neuling

If a PMC is about to overflow on a counter that's backing an active perf event (i.e. less than 256 from the end) and a _different_ PMC overflows just at this time (one that's not backing an active perf event), we currently mark the event as found, but in reality it's not, as it's likely the other PMC that caused the IRQ. Since we mark it as found, the second catch-all pass for overflows doesn't run, and we never reset the overflowing PMC. Hence we keep hitting the same PMC IRQ over and over and don't reset the counter that actually overflowed.

This is a rewrite of the perf interrupt handler for book3s to get around that. We now check whether any of the PMCs have actually overflowed (i.e. >= 0x80000000). If yes, we record it for active counters and just reset it for inactive counters. If a PMC hasn't overflowed, we check whether it is one of the buggy POWER7 counters and, if it is, record it and continue. If none of the PMCs match, we make a note that we couldn't find the PMC that caused the IRQ.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Reviewed-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Anton Blanchard <anton@samba.org>
Cc: Linux PPC dev <linuxppc-dev@ozlabs.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
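
The resulting structure, as a hedged stand-alone sketch (stand-in data in plain C rather than the real book3s handler): a first pass over every PMC that has genuinely wrapped, a second pass over active counters for the POWER7 roll-back window, and a final note if nothing matched.

```c
#include <stdio.h>

#define NPMC 6

/* Counter sits within 256 of the overflow point: the POWER7 buggy case. */
static int p7_buggy_overflow(unsigned long val)
{
	return val != 0 && (0x80000000UL - val) <= 256;
}

/* pmcs[] holds freshly read counter values; active[i] is non-zero when a
 * perf_event is currently scheduled on PMC i+1. */
static void handle_pmu_interrupt(unsigned long pmcs[NPMC], const int active[NPMC])
{
	int i, found = 0;

	/* Pass 1: anything that has really overflowed (top bit set). */
	for (i = 0; i < NPMC; i++) {
		if (pmcs[i] >= 0x80000000UL) {
			found = 1;
			if (active[i])
				printf("record and restart PMC%d\n", i + 1);
			else
				pmcs[i] = 0;    /* inactive: just reset it */
		}
	}

	/* Pass 2: only if nothing wrapped, look for the POWER7 roll-back
	 * window on active counters. */
	if (!found) {
		for (i = 0; i < NPMC; i++) {
			if (active[i] && p7_buggy_overflow(pmcs[i])) {
				found = 1;
				printf("record and restart PMC%d (rolled back)\n", i + 1);
			}
		}
	}

	if (!found)
		printf("can't find the PMC that caused the IRQ\n");
}
```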
-
- 18 October 2012, 1 commit
-
-
By Benjamin Herrenschmidt

This reverts commit 81331211. The revert was requested by the author of that patch, as it seems to cause system hangs with some low-frequency events.
-
- 27 September 2012, 1 commit
-
-
powerpc/perf: Sample only if SIAR-Valid bit is set in P7+

On POWER7+, two new bits (mmcra[35] and mmcra[36]) indicate whether the contents of SIAR and SDAR are valid. For marked instructions on P7+, we must save the contents of the SIAR and SDAR registers only if these new bits are set. This check for the SIAR-Valid bit is specific to P7+, so rather than waste a CPU-feature bit we use the PVR.

Note that Carl Love proposed a similar change for oprofile:

    https://lkml.org/lkml/2012/6/22/309

Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
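
For orientation, mmcra[35] and mmcra[36] are IBM bit numbers (bit 0 is the most significant bit of the 64-bit register), so they translate to the masks below; the macro names and the P7+ test are a hedged sketch rather than the exact kernel code.

```c
/* IBM bit n of a 64-bit SPR corresponds to the mask 1UL << (63 - n). */
#define MMCRA_SIAR_VALID  (1UL << (63 - 35))   /* 0x10000000 */
#define MMCRA_SDAR_VALID  (1UL << (63 - 36))   /* 0x08000000 */

/* On P7+ a marked sample is only trustworthy when the valid bit is set;
 * on earlier CPUs the bit does not exist, so treat SIAR as always valid. */
static int siar_valid(unsigned long mmcra, int cpu_is_power7plus)
{
	if (!cpu_is_power7plus)
		return 1;
	return (mmcra & MMCRA_SIAR_VALID) != 0;
}
```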
-
- 05 September 2012, 1 commit
-
-
By Michael Ellerman

We have an old FIXME in reg.h which points out that we should standardise on PVR_foo for our PVR #defines. Currently we use PVR_ on 32-bit and PV_ on 64-bit. So do that rename and remove the FIXME. Seeing as we're touching all but one usage of __is_processor(), rename it to something less ugly and more indicative of what it does, which is simply to check the PVR version.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 24 August 2012, 1 commit
-
-
By Sukadev Bhattiprolu

For certain speculative events on POWER7, 'perf stat' reports a far higher event count than 'perf record' for the same event. As described in the following commit, a performance monitor exception is raised even when the performance events are rolled back:

    commit 0837e324
    Author: Anton Blanchard <anton@samba.org>
    Date:   Wed Mar 9 14:38:42 2011 +1100

perf_event_interrupt() records an event only when an overflow occurs, but this check for overflow is a simple 'if (val < 0)'. Because the events are rolled back, the check for overflow fails and the event is not recorded. perf_event_interrupt() later uses pmc_overflow() to detect the overflow, resets the counters, and the events are lost completely.

To properly detect the overflow of rolled-back events, use pmc_overflow() even when recording events.

To reproduce:

    $ cat strcpy.c
    #include <stdio.h>
    #include <string.h>
    main()
    {
            char buf[256];
            alarm(5);
            while(1)
                    strcpy(buf, "string1");
    }

    $ perf record -e r20014 ./strcpy
    $ perf report -n > report.1
    $ perf stat -e r20014 > report.2
    # Compare report.1 and report.2

Reported-by: Maynard Johnson <mpjohn@us.ibm.com>
Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
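
The difference between the two checks, as a small hedged sketch (val is a PMC value read in the interrupt handler): the plain sign test misses a counter that overflowed and was then rolled back below 0x80000000, which is exactly the case a pmc_overflow()-style test is meant to catch.

```c
#include <stdbool.h>

/* The old recording-path check: overflow means the top bit is set. */
static bool overflowed_simple(unsigned long val)
{
	return (int)val < 0;
}

/* Roll-back aware check: also accept a counter that sits just below the
 * overflow point after a speculative roll-back (POWER7). */
static bool overflowed_rollback_aware(unsigned long val)
{
	if ((int)val < 0)
		return true;
	return val != 0 && (0x80000000UL - val) <= 256;
}
```

For example, a PMC read of 0x7fffff80 fails the simple test even though the event did overflow and was rolled back; the second test accepts it.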
-
- 10 July 2012, 3 commits
-
-
By Anton Blanchard

At the moment we always use the SIAR if the PMU supports continuous sampling. Unfortunately the SIAR and the PMU exception are not synchronised for non-marked events, so we can end up with callchains that don't make sense. The following patch checks the HV and PR bits for samples coming from userspace and always uses pt_regs for them. Userspace will never have interrupts off, so there is no real advantage to using the SIAR for non-marked events in userspace.

I had experimented with a patch that did a similar thing for kernel samples, but we lost a significant amount of information; I was unable to profile any of our early exception code, for example.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
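
A hedged sketch of the resulting decision for the sample IP (the inputs stand in for the real MMCRA PR/HV tests; this is the idea, not the kernel's exact code):

```c
/* Return non-zero when SIAR should be reported as the sample IP.
 * sipr/sihv are the problem-state and hypervisor-state bits captured
 * with the sample; a marked event's SIAR is always synchronised. */
static int use_siar_for_ip(int is_marked_event, int sipr, int sihv)
{
	if (is_marked_event)
		return 1;
	if (sipr || sihv)
		return 0;   /* user (or hypervisor) sample: use pt_regs instead */
	return 1;           /* kernel-mode sample: SIAR is still useful */
}
```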
-
By Anton Blanchard

The logic to choose whether to use the SIAR or get the information out of pt_regs is going to get more complicated, so do it once in perf_read_regs. We overload regs->result, which is gross, but we are already doing it with regs->dsisr.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Anton Blanchard

We want to access the MMCRA_SIHV and MMCRA_SIPR bits elsewhere, so create mmcra_sihv and mmcra_sipr, which hide the differences between the old and new layout of the bits.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 09 May 2012, 1 commit
-
-
By Robert Richter

We always need to pass the last sample period to perf_sample_data_init(), otherwise the event distribution will be wrong. Thus, modify the function interface to take the required period as an argument. So basically a pattern like this:

    perf_sample_data_init(&data, ~0ULL);
    data.period = event->hw.last_period;

will now look like this:

    perf_sample_data_init(&data, ~0ULL, event->hw.last_period);

This avoids an uninitialized data.period and simplifies the code.

Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1333390758-10893-3-git-send-email-robert.richter@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 28 March 2012, 1 commit
-
-
By Benjamin Herrenschmidt

The 970 and POWER4 don't support "continuous sampling", which means that when we aren't in marked-instruction sampling mode (marked events), SIAR isn't updated with the last instruction sampled before the perf interrupt. On those processors we must thus use the exception SRR0 value as the sampled instruction pointer. Those processors also don't support the SIPR and SIHV bits in MMCRA, which means we need some kind of heuristic to decide whether SIAR values represent kernel or user addresses.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 23 February 2012, 1 commit
-
-
By Michael Ellerman

The perf code has grown a lot since it started, and is big enough to warrant its own subdirectory. For reference, it's ~60% bigger than the oprofile code. This declutters the kernel directory, makes it simpler to grep for "just perf stuff", and allows us to shorten some filenames. While we're at it, make it more obvious that we have two implementations of the core perf logic: one for (roughly) Book3S CPUs, which was the original implementation, and the other for Freescale embedded CPUs.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 16 February 2012, 1 commit
-
-
By Anton Blanchard

perf on POWER stopped working after commit e050e3f0 (perf: Fix broken interrupt rate throttling). That patch exposed a bug in the POWER perf_events code.

Since the PMCs count upwards and take an exception when the top bit is set, we want to write 0x80000000 - left in power_pmu_start. We were instead programming in left, which effectively disables the counter until we eventually hit 0x80000000. This could take seconds or longer. With the patch applied I get the expected number of samples:

    SAMPLE events: 9948

Signed-off-by: Anton Blanchard <anton@samba.org>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: <stable@kernel.org>
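
The arithmetic behind the fix, as a runnable toy: the PMCs count up and interrupt when bit 31 becomes set, so the value programmed into the PMC determines how many events remain until the exception.

```c
#include <stdio.h>

int main(void)
{
	unsigned long left = 100000;    /* events remaining in this sample period */

	/* Buggy: PMC programmed with 'left' itself. */
	unsigned long until_irq_buggy = 0x80000000UL - left;
	/* Fixed: PMC programmed with 0x80000000 - left. */
	unsigned long until_irq_fixed = left;

	printf("PMC = left:              %lu events until interrupt\n",
	       until_irq_buggy);        /* ~2.1 billion: counter looks disabled */
	printf("PMC = 0x80000000 - left: %lu events until interrupt\n",
	       until_irq_fixed);
	return 0;
}
```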
-
- 21 December 2011, 1 commit
-
-
By Peter Zijlstra

Put the logic to compute the event index into a per-pmu method. This is required because the x86 rules are weird and wonderful and don't match the capabilities of the current scheme. AFAIK only powerpc actually has a usable userspace read of the PMCs, but I'm not at all sure anybody actually used that. ARM is restored to the default since it currently does not support userspace access at all. All software events are provided with a method that reports their index as 0 (disabled).

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Michael Cree <mcree@orcon.net.nz>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
Cc: Anton Blanchard <anton@samba.org>
Cc: Eric B Munson <emunson@mgebm.net>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: Richard Kuo <rkuo@codeaurora.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Arun Sharma <asharma@fb.com>
Link: http://lkml.kernel.org/n/tip-dfydxodki16lylkt3gl2j7cw@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 19 July 2011, 1 commit
-
-
By Dmitry Eremin-Solenikov

This fixes the following warning:

    WARNING: arch/powerpc/kernel/built-in.o(.text+0x29768): Section mismatch in reference from the function .register_power_pmu() to the function .cpuinit.text:.power_pmu_notifier()
    The function .register_power_pmu() references the function __cpuinit .power_pmu_notifier().
    This is often because .register_power_pmu lacks a __cpuinit annotation or the annotation of .power_pmu_notifier is wrong.

Signed-off-by: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 01 July 2011, 1 commit
-
-
By Peter Zijlstra

The nmi parameter indicated whether we could do wakeups from the current context; if not, we would set some state and self-IPI, and let the resulting interrupt do the wakeup. For the various event classes:

- hardware: nmi=0; the PMI is in fact an NMI, or we run irq_work_run from the PMI tail (ARM etc.)
- tracepoint: nmi=0; since a tracepoint could fire from NMI context.
- software: nmi=[0,1]; some, like the schedule thing, cannot perform wakeups and hence need 0.

As one can see, there is very little nmi=1 usage, and the downside of not using it is that on some platforms some software events can have a jiffy delay in wakeup (when arch_irq_work_raise isn't implemented). The upside, however, is that we can remove the nmi parameter and save a bunch of conditionals in fast paths.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Michael Cree <mcree@orcon.net.nz>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
Cc: Anton Blanchard <anton@samba.org>
Cc: Eric B Munson <emunson@mgebm.net>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jason Wessel <jason.wessel@windriver.com>
Cc: Don Zickus <dzickus@redhat.com>
Link: http://lkml.kernel.org/n/tip-agjev8eu666tvknpb3iaj0fg@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 18 April 2011, 1 commit
-
-
By Eric B Munson

Because of speculative event roll back, it is possible for some event counters to decrease between reads on POWER7. This causes a problem with the way the counters are updated: deltas are calculated in a 64-bit value and the top 32 bits are masked. If the register value has decreased, this leaves us with a very large positive value added to the kernel counters.

This patch protects against this by skipping the update if the delta would be negative. This can lead to a lack of precision in the counter values, but from my testing the value is typically fewer than 10 samples at a time.

Signed-off-by: Eric B Munson <emunson@mgebm.net>
Cc: stable@kernel.org
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
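
A hedged sketch of the guard (the masked 32-bit delta is how the PMC difference is computed; the 256-event window reflects the roll-back bound described in the 16 March 2011 entry below):

```c
#include <stdint.h>

/* prev and val are successive reads of a 32-bit PMC. */
static uint64_t counter_delta(uint64_t prev, uint64_t val)
{
	uint64_t delta = (val - prev) & 0xffffffffUL;

	/* The register moved backwards slightly (speculative roll-back on
	 * POWER7): skip the update rather than add ~2^32 to the count. */
	if (prev > val && (prev - val) < 256)
		delta = 0;

	return delta;
}
```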
-
- 31 March 2011, 1 commit
-
-
By Lucas De Marchi

Fixes generated by 'codespell' and manually reviewed.

Signed-off-by: Lucas De Marchi <lucas.demarchi@profusion.mobi>
-
- 16 March 2011, 1 commit
-
-
By Anton Blanchard

Events on POWER7 can roll back if a speculative event doesn't eventually complete. Unfortunately, in some rare cases they will raise a performance monitor exception. We need to catch this to ensure we reset the PMC. In all cases the PMC will be 256 or fewer cycles from overflow.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: <stable@kernel.org> # as far back as it applies cleanly
LKML-Reference: <20110309143842.6c22845e@kryten>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 17 January 2011, 1 commit
-
-
By Anton Blanchard

When profiling a benchmark that is almost 100% userspace, I noticed some wildly inaccurate profiles that showed almost all time spent in the kernel. Closer examination shows we were programming a tiny number of cycles into the PMU after each overflow (about ~200 away from the next overflow). This gets us stuck in a loop which we eventually break out of by throttling the PMU (there are regular throttle/unthrottle events in the log).

It looks like we aren't setting event->hw.last_period to something sane, and the frequency-to-period calculations in perf are going haywire. With the following patch we find the correct period after a few interrupts and stay there. I also see no more throttle events.

Signed-off-by: Anton Blanchard <anton@samba.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: linuxppc-dev@lists.ozlabs.org
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
LKML-Reference: <20110117161742.5feb3761@kryten>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 16 December 2010, 1 commit
-
-
By Peter Zijlstra

Extend the perf_pmu_register() interface to allow for named and dynamic pmu types. Because we need to support the existing static types we cannot use dynamic types for everything, hence provide a type argument. If we want to enumerate the PMUs they need a name, so provide one.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20101117222056.259707703@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 19 October 2010, 1 commit
-
-
By Paul Mackerras

Commit c3f00c70 ("perf: Separate find_get_context() from event initialization") changed the generic perf_event code to call perf_event_alloc, which calls the arch-specific event_init code, before looking up the context for the new event. Unfortunately, power_pmu_event_init uses event->ctx->task to see whether the new event is a per-task event or a system-wide event, and thus crashes, since event->ctx is NULL at the point where power_pmu_event_init gets called.

(The reason it needs to know whether it is a per-task event is that there are some hardware events on Power systems which only count when the processor is not idle, and there are some fixed-function counters which count such events. For example, the "run cycles" event counts cycles when the processor is not idle. If the user asks to count cycles, we can use "run cycles" if this is a per-task event, since the processor is running when the task is running, by definition. We can't use "run cycles" if the user asks for "cycles" on a system-wide counter.)

Fortunately the information we need is in the event->attach_state field, so we just use that instead.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <20101019055535.GA10398@drongo>
Reported-by: Alexey Kardashevskiy <aik@au1.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
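
A hedged sketch of the changed test (PERF_ATTACH_TASK is the generic perf flag meaning "per-task event"; PPMU_ONLY_COUNT_RUN and the surrounding structure are illustrative stand-ins for the "run cycles" substitution described above):

```c
/* Stand-in definitions for the sketch. */
#define PPMU_ONLY_COUNT_RUN  0x01u     /* "RUN_*" event variants are usable */
#define PERF_ATTACH_TASK     0x04u     /* event is attached to a task */

struct event_stub { unsigned int attach_state; };

static unsigned int event_scheduling_flags(const struct event_stub *event)
{
	unsigned int flags = 0;

	/* Previously: if (event->ctx->task) ... which oopses because
	 * event->ctx is still NULL during event_init. */
	if (event->attach_state & PERF_ATTACH_TASK)
		flags |= PPMU_ONLY_COUNT_RUN;

	return flags;
}
```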
-
- 10 September 2010, 5 commits
-
-
By Peter Zijlstra

Replace pmu::{enable,disable,start,stop,unthrottle} with pmu::{add,del,start,stop}, all of which take a flags argument. The new interface extends the capability to stop a counter while keeping it scheduled on the PMU. We replace the throttled state with the generic stopped state. This also allows us to efficiently stop/start counters over certain code paths (like IRQ handlers). It also allows scheduling a counter without it starting, allowing for a generic frozen state (useful for rotating stopped counters).

The stopped state is implemented in two different ways, depending on how the architecture implemented the throttled state:

1) We disable the counter:
   a) the pmu has per-counter enable bits, so we flip that
   b) we program a NOP event, preserving the counter state
2) We store the counter state and ignore all read/overflow events

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: paulus <paulus@samba.org>
Cc: stephane eranian <eranian@googlemail.com>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Cyrill Gorcunov <gorcunov@gmail.com>
Cc: Lin Ming <ming.m.lin@intel.com>
Cc: Yanmin <yanmin_zhang@linux.intel.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
Cc: David Miller <davem@davemloft.net>
Cc: Michael Cree <mcree@orcon.net.nz>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Peter Zijlstra

Changes perf_disable() into perf_pmu_disable().

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: paulus <paulus@samba.org>
Cc: stephane eranian <eranian@googlemail.com>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Cyrill Gorcunov <gorcunov@gmail.com>
Cc: Lin Ming <ming.m.lin@intel.com>
Cc: Yanmin <yanmin_zhang@linux.intel.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
Cc: David Miller <davem@davemloft.net>
Cc: Michael Cree <mcree@orcon.net.nz>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Peter Zijlstra

Since the current perf_disable() usage is only an optimization, remove it for now. This eases the removal of the __weak hw_perf_enable() interface.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: paulus <paulus@samba.org>
Cc: stephane eranian <eranian@googlemail.com>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Cyrill Gorcunov <gorcunov@gmail.com>
Cc: Lin Ming <ming.m.lin@intel.com>
Cc: Yanmin <yanmin_zhang@linux.intel.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
Cc: David Miller <davem@davemloft.net>
Cc: Michael Cree <mcree@orcon.net.nz>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Peter Zijlstra

Simple registration interface for struct pmu; this provides the infrastructure for removing all the weak functions.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: paulus <paulus@samba.org>
Cc: stephane eranian <eranian@googlemail.com>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Cyrill Gorcunov <gorcunov@gmail.com>
Cc: Lin Ming <ming.m.lin@intel.com>
Cc: Yanmin <yanmin_zhang@linux.intel.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
Cc: David Miller <davem@davemloft.net>
Cc: Michael Cree <mcree@orcon.net.nz>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Peter Zijlstra

    sed -ie 's/const struct pmu\>/struct pmu/g' `git grep -l "const struct pmu\>"`

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: paulus <paulus@samba.org>
Cc: stephane eranian <eranian@googlemail.com>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Cyrill Gorcunov <gorcunov@gmail.com>
Cc: Lin Ming <ming.m.lin@intel.com>
Cc: Yanmin <yanmin_zhang@linux.intel.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
Cc: David Miller <davem@davemloft.net>
Cc: Michael Cree <mcree@orcon.net.nz>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 08 July 2010, 1 commit
-
-
By Matt Evans

When power_pmu_disable() removes the given event from a particular index into cpuhw->event[], it shuffles down the higher event[] entries. But this array is paired with cpuhw->events[] and cpuhw->flags[], so it should shuffle them similarly. If these arrays get out of sync, code such as power_check_constraints() will fail. This caused a bug where events were temporarily disabled and then failed to be re-enabled; subsequent code tried to write_pmc() with its (disabled) idx of 0, causing the message "oops trying to write PMC0".

This triggers the bug on POWER7, running a miss-heavy test:

    perf record -e L1-dcache-load-misses -e L1-dcache-store-misses ./misstest

Signed-off-by: Matt Evans <matt@ozlabs.org>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
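
A hedged sketch of the corrected shuffle (the structure and names are stand-ins shaped after the description above): all three parallel arrays are collapsed together so that index i keeps describing the same hardware counter.

```c
#define MAX_HWEVENTS 8

struct perf_event;                              /* opaque for the sketch */

struct cpu_hw_sketch {
	int n_events;
	struct perf_event *event[MAX_HWEVENTS]; /* the perf_event pointers    */
	unsigned long events[MAX_HWEVENTS];     /* raw event selector values  */
	unsigned int flags[MAX_HWEVENTS];       /* per-event constraint flags */
};

static void remove_event_at(struct cpu_hw_sketch *cpuhw, int i)
{
	while (++i < cpuhw->n_events) {
		cpuhw->event[i - 1]  = cpuhw->event[i];
		cpuhw->events[i - 1] = cpuhw->events[i];  /* previously left behind */
		cpuhw->flags[i - 1]  = cpuhw->flags[i];   /* previously left behind */
	}
	--cpuhw->n_events;
}
```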
-
- 09 June 2010, 2 commits
-
-
By Peter Zijlstra

Since all modification to event->count (and ->prev_count and ->period_left) is now local to a CPU, change them to local64_t so we avoid the LOCK'ed ops.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Peter Zijlstra

Clarify some of the transactional group-scheduling API details and change it so that a successful ->commit_txn also closes the transaction.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
LKML-Reference: <1274803086.5882.1752.camel@twins>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 11 May 2010, 1 commit
-
-
By Lin Ming

[paulus@samba.org: Set cpuhw->event[i]->hw.config in power_pmu_commit_txn.]

Signed-off-by: Lin Ming <ming.m.lin@intel.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20100508102841.GA10650@brick.ozlabs.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 11 March 2010, 1 commit
-
-
By Peter Zijlstra

Fix:

    arch/powerpc/kernel/perf_event.c:1334: error: 'power_pmu_notifier' undeclared (first use in this function)
    arch/powerpc/kernel/perf_event.c:1334: error: (Each undeclared identifier is reported only once
    arch/powerpc/kernel/perf_event.c:1334: error: for each function it appears in.)
    arch/powerpc/kernel/perf_event.c:1334: error: implicit declaration of function 'power_pmu_notifier'
    arch/powerpc/kernel/perf_event.c:1334: error: implicit declaration of function 'register_cpu_notifier'

due to commit 3f6da390 (perf: Rework and fix the arch CPU-hotplug hooks).

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 10 March 2010, 2 commits
-
-
By Peter Zijlstra

Remove the hw_perf_event_*() hotplug hooks in favour of per-PMU hotplug notifiers. This has the advantage of reducing the static weak interface as well as exposing all hotplug actions to the PMU. Use this to fix x86 hotplug usage where we did things in ONLINE which should have been done in UP_PREPARE or STARTING.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: paulus@samba.org
Cc: eranian@google.com
Cc: robert.richter@amd.com
Cc: fweisbec@gmail.com
Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
LKML-Reference: <20100305154128.736225361@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Peter Zijlstra

This makes it easier to extend perf_sample_data and fixes a bug on arm and sparc, which failed to set ->raw to NULL, which can cause crashes when combined with PERF_SAMPLE_RAW. It also optimizes PowerPC and tracepoint, because the struct initialization is forced to zero out the whole structure.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Jean Pihet <jpihet@mvista.com>
Reviewed-by: Frederic Weisbecker <fweisbec@gmail.com>
Acked-by: David S. Miller <davem@davemloft.net>
Cc: Jamie Iles <jamie.iles@picochip.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: stable@kernel.org
LKML-Reference: <20100304140100.315416040@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-