- 09 Nov 2012 (3 commits)
-
By Sudeep KarkadaNagesha

Multi-cluster ARMv7 systems may have CPU PMUs with different numbers of counters. This patch updates armv7_pmnc_counter_valid so that it takes a pmu argument and checks the counter validity against that. We also remove a number of redundant counter checks where the current PMU is not easily retrievable.

Signed-off-by: Sudeep KarkadaNagesha <Sudeep.KarkadaNagesha@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
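A minimal sketch of what the per-PMU validity check might look like (the macro and field names here are illustrative assumptions, not the patch itself):

    /* Sketch: validate a counter index against the PMU that owns it. */
    #define ARMV7_IDX_CYCLE_COUNTER 0
    #define ARMV7_IDX_COUNTER_LAST(cpu_pmu) \
        (ARMV7_IDX_CYCLE_COUNTER + (cpu_pmu)->num_events - 1)

    static inline int armv7_pmnc_counter_valid(struct arm_pmu *cpu_pmu, int idx)
    {
        return idx >= ARMV7_IDX_CYCLE_COUNTER &&
               idx <= ARMV7_IDX_COUNTER_LAST(cpu_pmu);
    }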
-
By Sudeep KarkadaNagesha

The arm_pmu functions have wildly varied parameters which can often be derived from struct perf_event. This patch changes the arm_pmu function prototypes so that struct perf_event pointers are passed in preference to fields that can be derived from the event.

Signed-off-by: Sudeep KarkadaNagesha <Sudeep.KarkadaNagesha@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
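As a rough illustration of the shape of the change (simplified prototypes, not the actual diff):

    /* Before (sketch): callbacks took pre-derived fields. */
    struct arm_pmu_old {
        void (*enable)(struct hw_perf_event *evt, int idx);
        void (*disable)(struct hw_perf_event *evt, int idx);
    };

    /* After (sketch): callbacks take the event and derive hwc/idx from
     * event->hw themselves. */
    struct arm_pmu_new {
        void (*enable)(struct perf_event *event);
        void (*disable)(struct perf_event *event);
    };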
-
By Sudeep KarkadaNagesha

Supporting multiple, heterogeneous CPU PMUs requires us to allocate the arm_pmu structures dynamically as the devices are probed. This patch removes the static structure definitions for each CPU PMU type and instead passes pointers to the PMU-specific init functions.

Signed-off-by: Sudeep KarkadaNagesha <Sudeep.KarkadaNagesha@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 23 Aug 2012 (2 commits)
-
By Will Deacon

The CPU PMU code is tightly coupled with generic ARM PMU handling code. This makes it cumbersome when trying to add support for other ARM PMUs (e.g. interconnect, L2 cache controller, bus) as the generic parts of the code are not readily reusable. This patch cleans up perf_event.c so that reusable code is exposed via header files to other potential PMU drivers. The CPU code is consistently named to identify it as such and also to prepare for moving it into a separate file.

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Will Deacon

The CPU PMU is probed using the current cpuid information as part of the early_initcall initialising the architecture perf backend. For architectures without NMI (such as ARM), this does not need to be performed early and can be deferred to the driver probe callback. This also allows us to probe the devicetree in preference to parsing the current cpuid, which may be invalid on a big.LITTLE multi-cluster system. This patch defers the PMU probing and uses the devicetree information when available.

Signed-off-by: Will Deacon <will.deacon@arm.com>
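A rough sketch of the kind of devicetree-driven probing this enables. The compatible strings are the usual ARM PMU bindings, but the probe structure, the cpu_pmu global and the probe_current_pmu fallback are assumptions for illustration:

    static const struct of_device_id cpu_pmu_of_match[] = {
        { .compatible = "arm,cortex-a15-pmu", .data = armv7_a15_pmu_init },
        { .compatible = "arm,cortex-a9-pmu",  .data = armv7_a9_pmu_init },
        { /* sentinel */ },
    };

    static int cpu_pmu_device_probe(struct platform_device *pdev)
    {
        const struct of_device_id *of_id =
            of_match_node(cpu_pmu_of_match, pdev->dev.of_node);
        int (*init_fn)(struct arm_pmu *);

        if (of_id) {
            /* Trust the devicetree rather than the boot CPU's cpuid. */
            init_fn = of_id->data;
            return init_fn(cpu_pmu);
        }
        /* Otherwise fall back to probing the current cpuid (hypothetical helper). */
        return probe_current_pmu(cpu_pmu);
    }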
-
- 10 Jul 2012 (1 commit)
-
By Will Deacon

In order to provide PMU name strings compatible with the OProfile user ABI, an enumeration of all PMUs is currently used by perf to identify each PMU uniquely. Unfortunately, this does not scale well in the presence of multiple PMUs and creates a single, global namespace across all PMUs in the system. This patch removes the enumeration and instead uses the name string for the PMU to map onto the OProfile variant. perf_pmu_name is implemented for CPU PMUs, which is all that OProfile cares about anyway.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 09 May 2012 (1 commit)
-
By Robert Richter

We always need to pass the last sample period to perf_sample_data_init(); otherwise the event distribution will be wrong. Thus, modify the function interface to take the required period as an argument. So basically a pattern like this:

    perf_sample_data_init(&data, ~0ULL);
    data.period = event->hw.last_period;

will now be like that:

    perf_sample_data_init(&data, ~0ULL, event->hw.last_period);

This avoids uninitialized data.period and simplifies the code.

Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1333390758-10893-3-git-send-email-robert.richter@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 24 Mar 2012 (1 commit)
-
By Will Deacon

Cortex-A7 implements an ARMv7-compatible PMU compliant with the PMUv2 architecture specification. This patch adds support for the PMU to the ARM perf backend.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 07 Mar 2012 (3 commits)
-
By Will Deacon

The PMU IRQ handlers in perf assume that if a counter has overflowed then perf must be responsible. In the paranoid world of crazy hardware, this could be false, so check that we do have a valid event before attempting to dereference NULL in the interrupt path.

Cc: <stable@vger.kernel.org>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
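In the interrupt handler, the defensive check amounts to something like this (a simplified sketch of the counter loop; the variable names are illustrative):

    for (idx = 0; idx < cpu_pmu->num_events; ++idx) {
        struct perf_event *event = cpuc->events[idx];

        /* A counter can overflow even though perf never claimed it, so
         * don't blindly dereference a NULL event. */
        if (!event)
            continue;

        /* ... read the counter, update the event, reprogram the period ... */
    }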
-
By Will Deacon

When disabling a counter on an ARMv7 PMU, we should also clear the overflow flag in case an overflow occurred whilst stopping the counter. This prevents a spurious overflow being picked up later and leading to either false accounting or a NULL dereference.

Cc: <stable@vger.kernel.org>
Reported-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
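A sketch of how this looks in the ARMv7 backend, using the architected PMU registers (PMINTENCLR and PMOVSR); the helper and macro names are illustrative:

    static inline void armv7_pmnc_disable_intens(int idx)
    {
        u32 counter = ARMV7_IDX_TO_COUNTER(idx);    /* assumed index mapping */

        /* Disable the overflow interrupt for this counter (PMINTENCLR)... */
        asm volatile("mcr p15, 0, %0, c9, c14, 2" : : "r" (BIT(counter)));
        isb();
        /* ...then clear any pending overflow flag for it (PMOVSR), so a
         * stale overflow is not picked up once the counter is reused. */
        asm volatile("mcr p15, 0, %0, c9, c12, 3" : : "r" (BIT(counter)));
        isb();
    }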
-
By Will Deacon

On ARM, the PMU does not stop counting after an overflow and therefore IRQ latency affects the new counter value read by the kernel. This is significant for non-sampling runs where it is possible for the new value to overtake the previous one, causing the delta to be out by up to max_period events. Commit a737823d ("ARM: 6835/1: perf: ensure overflows aren't missed due to IRQ latency") attempted to fix this problem by allowing interrupt handlers to pass an overflow flag to the event update function, causing the overflow calculation to assume that the counter passed through zero when going from prev to new. Unfortunately, this doesn't work when overflow occurs on the perf_task_tick path because we have the flag cleared and end up computing a large negative delta. This patch removes the overflow flag from armpmu_event_update and instead limits the sample_period to half of the max_period for non-sampling profiling runs.

Cc: <stable@vger.kernel.org>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
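The second half of the fix boils down to clamping the default period for non-sampling (counting) runs, roughly like this (placement and field names are a sketch):

    if (!hwc->sample_period) {
        /*
         * For counting runs, use at most half the counter range so that
         * IRQ latency after an overflow can never let the newly read
         * value overtake the previously read one.
         */
        hwc->sample_period = armpmu->max_period >> 1;
        hwc->last_period   = hwc->sample_period;
        local64_set(&hwc->period_left, hwc->sample_period);
    }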
-
- 03 Feb 2012 (1 commit)
-
By Will Deacon

Commit 89d6c0b5 ("perf, arch: Add generic NODE cache events") added empty NODE event definitions for the ARM PMU implementations. This was merged along with Cortex-A5 and Cortex-A15 PMU support, so they missed out on the original patch. This patch adds the empty definitions to Cortex-A5 and Cortex-A15.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
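The "empty definitions" are simply CACHE_OP_UNSUPPORTED entries in each core's cache map, along these lines (an excerpt-style sketch of one table entry):

    [C(NODE)] = {
        [C(OP_READ)] = {
            [C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
            [C(RESULT_MISS)]   = CACHE_OP_UNSUPPORTED,
        },
        [C(OP_WRITE)] = {
            [C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
            [C(RESULT_MISS)]   = CACHE_OP_UNSUPPORTED,
        },
        [C(OP_PREFETCH)] = {
            [C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
            [C(RESULT_MISS)]   = CACHE_OP_UNSUPPORTED,
        },
    },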
-
- 02 Dec 2011 (2 commits)
-
By Will Deacon

Commit 8f622422 ("perf events: Add generic front-end and back-end stalled cycle event definitions") added two new ABI events for counting stalled cycles. This patch adds support for these new events to the ARM perf implementation.

Cc: Jamie Iles <jamie@jamieiles.com>
Cc: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Will Deacon

This patch updates the ARMv7 perf event numbers so that:
(1) A consistent naming scheme is used between different CPUs.
(2) Only events actually used by Linux are described.
(3) Where possible, architected events are used in preference to CPU-specific events.
This results in the removal of a load of unused, hardcoded data and makes it clearer which events are supported on each PMU.

Cc: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 15 Oct 2011 (1 commit)
-
By Will Deacon

Using COHERENT_LINE_{MISS,HIT} for cache misses and references respectively is completely wrong. Instead, use the L1D events, which are a better and more useful approximation despite ignoring instruction traffic.

Reported-by: Alasdair Grant <alasdair.grant@arm.com>
Reported-by: Matt Horsnell <matt.horsnell@arm.com>
Reported-by: Michael Williams <michael.williams@arm.com>
Cc: stable@kernel.org
Cc: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 31 Aug 2011 (9 commits)
-
By Mark Rutland

Currently struct cpu_hw_events stores data on events running on a PMU associated with a CPU. As this data is general enough to be used for system PMUs, the name is a misnomer and may cause confusion when it is used for system PMUs. Additionally, 'armpmu' is commonly used as a parameter name for an instance of struct arm_pmu. The name is also used for a global instance which represents the CPU's PMU. As cpu_hw_events is now not tied to CPU PMUs, it is renamed to pmu_hw_events, with instances of it renamed similarly. As the global 'armpmu' is CPU-specific, it is renamed to cpu_pmu. This should make it clearer which code is generic, and which is coupled with the CPU.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Ashwin Chaugule <ashwinc@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Mark Rutland

Currently, mapping an event type to a hardware configuration value depends on the data being pointed to from struct arm_pmu. These fields (cache_map, event_map, raw_event_mask) are currently specific to CPU PMUs and do not serve the general case well. This patch replaces the event map pointers on struct arm_pmu with a new 'map_event' function pointer. Small shim functions are used to reuse the existing common code.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Ashwin Chaugule <ashwinc@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
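The shim idea, roughly (the generic helper and table names below are assumptions patterned on the existing CPU PMU code):

    struct arm_pmu {
        /* ... */
        int (*map_event)(struct perf_event *event);
    };

    /* Per-CPU shim that reuses the existing table-driven lookup. */
    static int armv7_a9_map_event(struct perf_event *event)
    {
        return armpmu_map_event(event, &armv7_a9_perf_map,
                                &armv7_a9_perf_cache_map, 0xFF);
    }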
-
By Mark Rutland

Currently, a single lock serialises access to CPU PMU registers. This global locking is unnecessary as PMU registers are local to the CPU they monitor. This patch replaces the global lock with a per-CPU lock. As the lock is in struct cpu_hw_events, PMUs providing a single cpu_hw_events instance can be locked globally.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Ashwin Chaugule <ashwinc@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
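Schematically, the lock moves into the per-CPU events structure (a sketch; the exact field layout is an assumption):

    struct cpu_hw_events {
        struct perf_event  **events;
        unsigned long      *used_mask;
        /* Serialises access to the PMU registers local to this CPU. */
        raw_spinlock_t     pmu_lock;
    };

    static DEFINE_PER_CPU(struct cpu_hw_events, cpu_hw_events);

A system PMU that provides a single cpu_hw_events instance is then still locked "globally" simply because every CPU shares that one instance and its lock.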
-
By Mark Rutland

Currently, pmu_hw_events::active_mask is used to keep track of which events are active in hardware. As we can stop counters and their interrupts, this is unnecessary.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Ashwin Chaugule <ashwinc@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Will Deacon

The Cortex-A15 PMU implements the PMUv2 specification and therefore has support for some mode exclusion. This patch adds support for excluding user, kernel and hypervisor counts from a given event.

Acked-by: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Will Deacon

The current ARMv7 PMU backend indexes event counters from two, with index zero being reserved and index one being used to represent the cycle counter. This patch tidies up the code by indexing from one instead (with zero for the cycle counter). This allows us to remove many of the accessor macros along with the counter enumeration and makes the code much more readable.

Acked-by: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Will Deacon

This patch ensures that integers are used to represent event indices in the ARMv7 PMU backend. This ensures consistency between functions and also with the arm_pmu structure.

Acked-by: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Will Deacon

The ARMv7 perf backend mixes up u32 and unsigned long, which is rather ugly. This patch makes the ARMv7 PMU code consistently use the u32 type instead.

Acked-by: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Mark Rutland

This patch removes const qualifiers from instances of struct arm_pmu, and from the functions initialising them, in preparation for generalising arm_pmu usage to system (a.k.a. uncore) PMUs. This will allow dynamically modifiable structures (locks, struct pmu) to be added as members of struct arm_pmu.

Acked-by: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 08 Jul 2011 (4 commits)
-
By Will Deacon

This patch adds support for the Cortex-A15 PMU to the ARMv7 perf-event backend.

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Will Deacon

This patch adds support for the Cortex-A5 PMU to the ARMv7 perf-event backend.

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Will Deacon

The PMUv2 specification reserves a number of event encodings for common events. This patch adds these events to the common event enumeration in preparation for PMUv2 cores, such as Cortex-A15.

Acked-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Will Deacon

The comment about measuring TLB misses and refills in the ARMv7 perf backend makes little sense and refers loosely to raw counters that should be used instead. This patch removes the comments to avoid any confusion.

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 01 Jul 2011 (2 commits)
-
By Peter Zijlstra

Add a NODE level to the generic cache events, which is used to measure local vs remote memory accesses. Like all other cache events, an ACCESS is HIT+MISS; if there is no way to distinguish between reads and writes, do reads only, etc. The below needs filling out for !x86 (which I filled out with unsupported events). I'm fairly sure ARM can leave it like that since it doesn't strike me as an architecture that even has NUMA support. SH might have something since it does appear to have some NUMA bits. Sparc64, PowerPC and MIPS certainly want a good look there since they clearly are NUMA capable.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: David Miller <davem@davemloft.net>
Cc: Anton Blanchard <anton@samba.org>
Cc: David Daney <ddaney@caviumnetworks.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1303508226.4865.8.camel@laptop
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Peter Zijlstra

The nmi parameter indicated whether we could do wakeups from the current context; if not, we would set some state and self-IPI and let the resulting interrupt do the wakeup. For the various event classes:

- hardware: nmi=0; the PMI is in fact an NMI, or we run irq_work_run from the PMI tail (ARM etc.)
- tracepoint: nmi=0; since a tracepoint could be from NMI context.
- software: nmi=[0,1]; some, like the schedule thing, cannot perform wakeups and hence need 0.

As one can see, there is very little nmi=1 usage, and the downside of not using it is that on some platforms some software events can have a jiffy delay in wakeup (when arch_irq_work_raise isn't implemented). The upside however is that we can remove the nmi parameter and save a bunch of conditionals in fast paths.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Michael Cree <mcree@orcon.net.nz>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
Cc: Anton Blanchard <anton@samba.org>
Cc: Eric B Munson <emunson@mgebm.net>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jason Wessel <jason.wessel@windriver.com>
Cc: Don Zickus <dzickus@redhat.com>
Link: http://lkml.kernel.org/n/tip-agjev8eu666tvknpb3iaj0fg@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 26 Mar 2011 (3 commits)
-
By Will Deacon

If a counter overflows during a perf stat profiling run it may overtake the last known value of the counter:

    0          prev    new           0xffffffff
    |----------|-------|----------------------|

In this case, the number of events that have occurred is (0xffffffff - prev) + new. Unfortunately, the event update code will not realise an overflow has occurred and will instead report the event delta as (new - prev), which may be considerably smaller than the real count. This patch adds an extra argument to armpmu_event_update which indicates whether or not an overflow has occurred. If an overflow has occurred then we use the maximum period of the counter to calculate the elapsed events.

Acked-by: Jamie Iles <jamie@jamieiles.com>
Reported-by: Ashwin Chaugule <ashwinc@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
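The resulting delta computation is, in sketch form (field names are assumptions; the overflow branch follows the "(0xffffffff - prev) + new" arithmetic described above):

    u64 delta;

    if (overflow)
        /* Counter wrapped: count the events either side of zero. */
        delta = armpmu->max_period - prev_raw_count + new_raw_count;
    else
        delta = (new_raw_count - prev_raw_count) & armpmu->max_period;

    local64_add(delta, &event->count);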
-
By Will Deacon

ARMv7 dictates that the interrupt-enable and count-enable registers for each PMU counter are UNKNOWN following core reset. This patch adds a new (optional) function pointer to struct arm_pmu for resetting the PMU state during init. The reset function is called on each CPU via an arch_initcall in the generic ARM perf_event code and allows the PMU backend to write sane values to any UNKNOWN registers.

Acked-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Will Deacon

The ARMv7 architecture does not guarantee that effects from co-processor writes are immediately visible to following instructions. This patch adds two isbs to the ARMv7 perf code:
(1) Immediately after selecting an event register, so that the PMU state following this instruction is consistent with the new event.
(2) Immediately before writing to the PMCR, so that any previous writes to the PMU have taken effect before (typically) enabling the counters.

Acked-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
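Schematically, on ARMv7 (a sketch using the architected CP15 PMU registers PMSELR and PMCR; the helper names are illustrative):

    /* (1) Select an event counter, then synchronise so that subsequent
     *     accesses really hit the counter just selected (PMSELR). */
    static inline void armv7_pmnc_select_counter(int idx)
    {
        asm volatile("mcr p15, 0, %0, c9, c12, 5" : : "r" (idx));
        isb();
    }

    /* (2) Synchronise before writing PMCR, so earlier PMU writes have taken
     *     effect before the counters are (typically) enabled. */
    static inline void armv7_pmnc_write(u32 val)
    {
        isb();
        asm volatile("mcr p15, 0, %0, c9, c12, 0" : : "r" (val));
    }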
-
- 04 Dec 2010 (2 commits)
-
By Will Deacon

For kernels built with PREEMPT_RT, critical sections protected by standard spinlocks are preemptible. This is not acceptable for perf because (a) we may be scheduled onto a different CPU whilst reading/writing banked PMU registers and (b) the latency when reading the PMU registers becomes unpredictable. This patch upgrades the pmu_lock spinlock to a raw_spinlock instead.

Reported-by: Jamie Iles <jamie@jamieiles.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
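The change itself is mechanical; a sketch (the function shown is illustrative, not taken from the patch):

    /* pmu_lock was previously an ordinary DEFINE_SPINLOCK(); under PREEMPT_RT
     * that would be preemptible, so it becomes a raw spinlock instead. */
    static DEFINE_RAW_SPINLOCK(pmu_lock);

    static void armv7pmu_enable_event_sketch(int idx)
    {
        unsigned long flags;

        raw_spin_lock_irqsave(&pmu_lock, flags);
        /* Banked PMU register accesses stay on this CPU, with bounded latency. */
        raw_spin_unlock_irqrestore(&pmu_lock, flags);
    }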
-
By Will Deacon

Russell reported a number of warnings coming from sparse when checking the ARM perf_event.c files:

    | perf_event.c seems to also have problems too:
    |
    |   CHECK   arch/arm/kernel/perf_event.c
    | arch/arm/kernel/perf_event.c:37:1: warning: symbol 'pmu_lock' was not declared. Should it be static?
    | arch/arm/kernel/perf_event.c:70:1: warning: symbol 'cpu_hw_events' was not declared. Should it be static?
    | arch/arm/kernel/perf_event.c:1006:1: warning: symbol 'armv6pmu_enable_event' was not declared. Should it be static?
    | arch/arm/kernel/perf_event.c:1113:1: warning: symbol 'armv6pmu_stop' was not declared. Should it be static?
    | arch/arm/kernel/perf_event.c:1956:6: warning: symbol 'armv7pmu_enable_event' was not declared. Should it be static?
    | arch/arm/kernel/perf_event.c:3072:14: warning: incorrect type in argument 1 (different address spaces)
    | arch/arm/kernel/perf_event.c:3072:14:    expected void const volatile [noderef] <asn:1>*<noident>
    | arch/arm/kernel/perf_event.c:3072:14:    got struct frame_tail *tail
    | arch/arm/kernel/perf_event.c:3074:49: warning: incorrect type in argument 2 (different address spaces)
    | arch/arm/kernel/perf_event.c:3074:49:    expected void const [noderef] <asn:1>*from
    | arch/arm/kernel/perf_event.c:3074:49:    got struct frame_tail *tail

This patch resolves these issues so we can live in silence again.

Reported-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 26 Nov 2010 (1 commit)
-
By Will Deacon

The ARM perf_event.c file contains all the PMU backends and, as new PMUs are introduced, will continue to grow. This patch follows the example of x86 and splits the PMU implementations into separate files which are then #included back into the main file. Compile-time guards are added to each PMU file to avoid compiling in code that is not relevant for the version of the architecture which we are targeting.

Acked-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
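The resulting structure looks roughly like this (the per-PMU file names reflect how the split ended up on ARM, but the guard macro shown is an assumption):

    /* arch/arm/kernel/perf_event.c -- generic ARM perf code, plus: */
    #include "perf_event_xscale.c"
    #include "perf_event_v6.c"
    #include "perf_event_v7.c"

    /* arch/arm/kernel/perf_event_v7.c -- sketch of a compile-time guard */
    #ifdef CONFIG_CPU_V7
    /* ARMv7 PMU backend proper ... */
    #else
    /* stub init functions so the generic code still links on non-v7 builds */
    #endif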
-