- 25 April 2016 (2 commits)
-
-
Committed by Ashok Kumar

Define all the ARMv8 recommended IMPLEMENTATION DEFINED events from section J3, "ARM recommendations for IMPLEMENTATION DEFINED event numbers", of ARM DDI 0487A.g.

Signed-off-by: Ashok Kumar <ashoks@broadcom.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
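For illustration only (not the patch itself): the ARM-recommended IMPLEMENTATION DEFINED event numbers live in the 0x40-0xBF range, so the additions amount to a block of definitions roughly like the following sketch; the macro names shown are assumptions.

```c
/* Sketch: a few of the ARM-recommended IMPLEMENTATION DEFINED event
 * numbers (0x40-0xBF range in ARM DDI 0487A.g). Names are illustrative.
 */
#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_RD		0x40
#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WR		0x41
#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_RD	0x42
#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_WR	0x43
```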
-
Committed by Ashok Kumar

Change all the common event name definitions to match ARM DDI 0487A.g. The SoC-specific event names follow the general naming style used in the file and do not come from any document. Rename ARMV8_A53_PERFCTR_PREFETCH_LINEFILL to ARMV8_A53_PERFCTR_PREF_LINEFILL to match the other SoC-specific event names, which use the _PREF_ style, and correct the typo l21 to l2i.

Signed-off-by: Ashok Kumar <ashoks@broadcom.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 29 March 2016 (1 commit)
-
-
Committed by Shannon Zhao

To use the ARMv8 PMU related register defines from the KVM code, we move the relevant definitions to the asm/perf_event.h header file and rename them with the prefix ARMV8_PMU_. This allows us to get rid of kvm_perf_event.h.

Signed-off-by: Anup Patel <anup.patel@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
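As a rough illustration of the kind of definitions such a shared header would carry (a sketch; the exact set and names are those of the patch, the values below simply follow the architectural PMCR_EL0 layout):

```c
/* Sketch: PMCR_EL0 control bits shared between the perf driver and KVM. */
#define ARMV8_PMU_PMCR_E	(1 << 0)  /* Enable all counters */
#define ARMV8_PMU_PMCR_P	(1 << 1)  /* Reset all event counters */
#define ARMV8_PMU_PMCR_C	(1 << 2)  /* Reset the cycle counter */
#define ARMV8_PMU_PMCR_LC	(1 << 6)  /* 64-bit cycle counter overflow */
#define ARMV8_PMU_PMCR_N_SHIFT	11        /* Number of implemented counters */
#define ARMV8_PMU_PMCR_N_MASK	0x1f
```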
-
- 01 March 2016 (2 commits)
-
-
Committed by Will Deacon

Commit 7175f059 ("arm64: perf: Enable PMCR long cycle counter bit") added initial support for a 64-bit cycle counter enabled using PMCR.LC. Unfortunately, that patch doesn't extend ARMV8_EVTYPE_MASK, so any attempts to set the enable bit are ignored by armv8pmu_pmcr_write. This patch extends the mask to include the new bit.

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Marc Zyngier

When the kernel is running in HYP (with VHE), it is necessary to include EL2 events if the user requests counting kernel or hypervisor events.

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
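Conceptually the change boils down to something like the sketch below (not the actual hunk; the helper and the EL2 include bit name are illustrative, the bit position follows PMEVTYPER<n>_EL0.NSH):

```c
#include <linux/perf_event.h>
#include <asm/virt.h>

#define ARMV8_PMU_INCLUDE_EL2	(1 << 27)	/* illustrative name */

/* Sketch: with VHE the kernel runs at EL2, so "kernel" events must also
 * count EL2 unless the user asked to exclude kernel/hypervisor events.
 */
static u32 armv8pmu_event_config(struct perf_event_attr *attr, u32 config_base)
{
	if (!attr->exclude_kernel && is_kernel_in_hyp_mode())
		config_base |= ARMV8_PMU_INCLUDE_EL2;
	return config_base;
}
```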
-
- 19 February 2016 (4 commits)
-
-
Committed by Jan Glauber

ARMv8.1 increases the PMU event number space to 16 bits, so increase the EVTYPE mask accordingly.

Signed-off-by: Jan Glauber <jglauber@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
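In terms of the mask itself, the change is simply a widening of the evtCount field from 10 to 16 bits, roughly (illustrative macro names, not the literal patch):

```c
/* Sketch: event number field of PMEVTYPER<n>_EL0. */
#define ARMV8_EVTYPE_EVENT_V8_0	0x3ff	/* ARMv8.0: 10-bit event space */
#define ARMV8_EVTYPE_EVENT_V8_1	0xffff	/* ARMv8.1: 16-bit event space */
```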
-
Committed by Jan Glauber

With the long cycle counter bit (LC) disabled, the cycle counter is not working on the ThunderX SoC (ThunderX only implements AArch64). Also, according to the documentation, LC == 0 is deprecated. To keep the code simple, the patch does not introduce 64-bit wide counter functions. Instead, writing the cycle counter always sets the upper 32 bits, so overflow interrupts are generated as before.

Original patch from Andrew Pinksi <Andrew.Pinksi@caviumnetworks.com>

Signed-off-by: Jan Glauber <jglauber@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
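The "always set the upper 32 bits" trick could look roughly like this (a sketch under the assumption of a helper with this shape, not the actual patch):

```c
#include <linux/types.h>

/* Sketch: with PMCR_EL0.LC set the cycle counter is 64 bits wide.
 * Writing the upper half as all-ones means the full 64-bit counter
 * overflows as soon as the lower 32 bits wrap, so overflow interrupts
 * keep firing at the same rate as a 32-bit counter.
 */
static void armv8pmu_write_cycle_counter(u64 value)
{
	value |= 0xffffffff00000000ULL;
	asm volatile("msr pmccntr_el0, %0" :: "r" (value));
}
```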
-
Committed by Jan Glauber

Support PMU events on Cavium's ThunderX SoC. ThunderX supports some additional counters compared to the default ARMv8 PMUv3:
- branch instructions counter
- stall frontend & backend counters
- L1 dcache load & store counters
- L1 icache counters
- iTLB & dTLB counters
- L1 dcache & icache prefetch counters

Signed-off-by: Jan Glauber <jglauber@cavium.com>
[will: capitalisation]
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Jan Glauber

The implemented Cortex-A57 events are, strictly speaking, not A57 specific. They are ARM recommended implementation defined events and can be found on other ARMv8 SoCs, such as Cavium ThunderX, too. Therefore rename these events so that they can be used by other implementations as well.

Signed-off-by: Jan Glauber <jglauber@cavium.com>
[will: capitalisation and ordering]
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 22 December 2015 (2 commits)
-
-
Committed by Will Deacon

Cortex-A72 has a PMUv3 implementation that is compatible with the PMU implemented by Cortex-A57. This patch hooks up the new compatible string so that the Cortex-A57 event mappings are used.

Signed-off-by: Will Deacon <will.deacon@arm.com>
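Hooking a new compatible string up to an existing init function is typically a one-line addition to the OF match table, along these lines (a sketch; the init function names are assumed to exist in the driver):

```c
/* Sketch: reuse the Cortex-A57 event map for Cortex-A72. */
static const struct of_device_id armv8_pmu_of_device_ids[] = {
	{ .compatible = "arm,cortex-a57-pmu", .data = armv8_a57_pmu_init },
	{ .compatible = "arm,cortex-a72-pmu", .data = armv8_a57_pmu_init },
	{ .compatible = "arm,armv8-pmuv3",    .data = armv8_pmuv3_init   },
	{},
};
```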
-
Committed by Will Deacon

It's all very well providing an events directory to userspace that details our events in terms of "event=0xNN", but if we don't define how to encode the "event" field in the perf attr.config, then it's a waste of time. This patch adds a single format entry to describe that the event field occupies the bottom 10 bits of our config field on ARMv8 (PMUv3).

Signed-off-by: Will Deacon <will.deacon@arm.com>
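The perf core already provides a helper for exactly this, so the addition is likely little more than the following sketch (the attribute array and group names are illustrative):

```c
#include <linux/perf_event.h>
#include <linux/sysfs.h>

/* Sketch: tell userspace that "event" maps onto config bits 0-9, so
 * tools can encode "armv8_pmuv3/event=0xNN/" themselves.
 */
PMU_FORMAT_ATTR(event, "config:0-9");

static struct attribute *armv8_pmuv3_format_attrs[] = {
	&format_attr_event.attr,
	NULL,
};

static struct attribute_group armv8_pmuv3_format_attr_group = {
	.name  = "format",
	.attrs = armv8_pmuv3_format_attrs,
};
```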
-
- 21 December 2015 (1 commit)
-
-
Committed by Lorenzo Pieralisi

The pmuserenr_el0 register value is architecturally UNKNOWN on reset. Current kernel code resets that register value iff the core pmu device is correctly probed in the kernel. On platforms with missing DT pmu nodes (or disabled perf events in the kernel), the pmu is not probed, therefore the pmuserenr_el0 register is not reset in the kernel, which means that its value retains the reset value that is architecturally UNKNOWN (the system may run with e.g. pmuserenr_el0 == 0x1, which means that PMU counter access is available at EL0, which must be disallowed).

This patch adds code that resets pmuserenr_el0 on cold boot and restores it on core resume from shutdown, so that the pmuserenr_el0 setup is always enforced in the kernel.

Cc: <stable@vger.kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
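The enforcement itself is a single register write on the cold-boot and resume paths; expressed as C with inline assembly it would look roughly like this (a sketch, the actual patch touches the low-level CPU setup/resume code):

```c
/* Sketch: clear PMUSERENR_EL0 so EL0 cannot access the PMU registers,
 * regardless of the architecturally UNKNOWN reset value.
 */
static inline void reset_pmuserenr_el0(void)
{
	asm volatile("msr pmuserenr_el0, xzr");
}
```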
-
- 17 November 2015 (2 commits)
-
-
Committed by Drew Richardson

Add additional information about the ARM architected hardware events to make the counters self-describing. This makes the hardware PMUs easier to use, as perf list contains the possible events instead of users having to refer to documentation such as the ARM TRMs.

Signed-off-by: Drew Richardson <drew.richardson@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Drew Richardson

The enums are not necessary, and this allows the event values to be used to construct static strings at compile time.

Signed-off-by: Drew Richardson <drew.richardson@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 07 October 2015 (3 commits)
-
-
Committed by Mark Rutland

The Cortex-A57 PMU supports a few events outside of the required PMUv3 set that are rather useful. This patch adds the event map data for said events.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Mark Rutland

The Cortex-A53 PMU supports a few events outside of the required PMUv3 set that are rather useful. This patch adds the event map data for said events.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Mark Rutland

Now that the arm_pmu framework has been factored out to drivers/perf, we can make use of it for arm64, gaining support for heterogeneous PMUs and unifying the two codebases before they diverge further. The as yet unused PMU name for PMUv3 is changed to armv8_pmuv3, matching the style previously applied to the 32-bit PMUs.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 27 July 2015 (4 commits)
-
-
Committed by Mark Rutland

Most of the cache events an architecture might support do not map well to those provided by the ARM architecture, and as such most entries in the event number maps are *_UNSUPPORTED. Unfortunately, as 0 is a valid physical event identifier, the *_UNSUPPORTED macros expand to a non-zero value and thus each unsupported event must be explicitly initialised as such. This leads to large diffs when adding support for a new CPU, and makes it difficult to spot the important information.

This patch follows arch/arm/ in making use of PERF_*_ALL_UNSUPPORTED macros to initialise all entries to *_UNSUPPORTED before overriding this for the specific events we actually support, resulting in a significant source code reduction.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
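The macros rely on GCC's designated range initialisers, so a whole map defaults to "unsupported" in one line and only the supported entries need spelling out; roughly (a sketch, the map and event macro names are illustrative):

```c
#include <linux/perf_event.h>

#define HW_OP_UNSUPPORTED	0xffff	/* non-zero: 0 is a valid event */

/* Sketch: default every entry to "unsupported", then override the few
 * events this CPU actually provides.
 */
#define PERF_MAP_ALL_UNSUPPORTED \
	[0 ... PERF_COUNT_HW_MAX - 1] = HW_OP_UNSUPPORTED

static const unsigned armv8_example_perf_map[PERF_COUNT_HW_MAX] = {
	PERF_MAP_ALL_UNSUPPORTED,
	[PERF_COUNT_HW_CPU_CYCLES]	= 0x11,	/* CPU_CYCLES */
	[PERF_COUNT_HW_INSTRUCTIONS]	= 0x08,	/* INST_RETIRED */
};
```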
-
Committed by Mark Rutland

We currently bundle the callchain handling code with the PMU code, despite the fact that the two are distinct, and the former can be useful even in the absence of the latter. Follow the example of arch/arm and factor the callchain handling into its own file, dependent on CONFIG_PERF_EVENTS rather than CONFIG_HW_PERF_EVENTS.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Sudeep Holla

arch_find_n_match_cpu_physical_id parses the device tree to get the device node for a given logical cpu index. However, since ARM PMUs get probed after the CPU device nodes are stashed while registering the cpus, we can use of_cpu_device_node_get to avoid another DT parse. This patch replaces arch_find_n_match_cpu_physical_id with of_cpu_device_node_get to reuse the stashed value directly.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
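The replacement is essentially one lookup swapped for another; a sketch of the pattern (the helper function here is hypothetical, not the exact hunk):

```c
#include <linux/of.h>
#include <linux/of_device.h>

/* Sketch: compare the stashed DT node of a logical CPU against the node
 * referenced by a PMU interrupt, without re-walking the device tree.
 */
static bool cpu_matches_dt_node(int cpu, struct device_node *pmu_irq_node)
{
	struct device_node *dn = of_cpu_device_node_get(cpu);
	bool match = (dn == pmu_irq_node);

	of_node_put(dn);	/* drop the reference taken above */
	return match;
}
```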
-
Committed by Suzuki K. Poulose

The arm64 pmu prints an error message in event_init() when no hardware PMU is available. This is pretty annoying, as it keeps printing the message for every single trial, unnecessarily flooding the kernel logs. The return code is sufficient for the user to figure out the reason.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 01 July 2015 (2 commits)
-
-
Committed by Shannon Zhao

Commit d795ef9a ("arm64: perf: don't warn about missing interrupt-affinity property for PPIs") added a check for PPIs so that we avoid parsing the interrupt-affinity property for these naturally affine interrupts. Unfortunately, this check can trigger an early (successful) return and we will not assign the value of cpu_pmu->plat_device. This patch fixes the issue.

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Stephen Boyd

It's possible, albeit unlikely, that using the of_node here will reference freed memory. Call of_node_put() after printing the name to be safe.

Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
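The pattern being enforced is the classic "drop the reference only after the last use", e.g. (an illustrative sketch, not the exact hunk):

```c
#include <linux/of.h>
#include <linux/printk.h>

/* Sketch: use the of_node (print its name) before dropping the reference. */
static void report_pmu_node(struct device_node *node)
{
	pr_info("attached PMU: %s\n", of_node_full_name(node));
	of_node_put(node);	/* safe: the name is no longer needed */
}
```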
-
- 19 May 2015 (1 commit)
-
-
Committed by Anders Roxell

Mark the PMU interrupts as non-threadable, as is the case with arch/arm: d9c3365b ("ARM: 7813/1: Mark pmu interupt IRQF_NO_THREAD").

Acked-by: Will Deacon <will.deacon@arm.com>
Suggested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
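In practice this means passing IRQF_NO_THREAD when the PMU interrupt is requested, so it is never force-threaded (e.g. on RT); roughly (a sketch, the wrapper and device-name string are illustrative):

```c
#include <linux/interrupt.h>

/* Sketch: PMU overflow interrupts must run in hard-IRQ context, so
 * request them with IRQF_NOBALANCING | IRQF_NO_THREAD.
 */
static int pmu_request_irq(int irq, irq_handler_t handler, void *dev)
{
	return request_irq(irq, handler,
			   IRQF_NOBALANCING | IRQF_NO_THREAD,
			   "arm-pmu", dev);
}
```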
-
- 12 May 2015 (1 commit)
-
-
Committed by Will Deacon

Commit d795ef9a ("arm64: perf: don't warn about missing interrupt-affinity property for PPIs") added a check for PPIs so that we avoid parsing the interrupt-affinity property for these naturally affine interrupts. Unfortunately, this check can trigger an early (successful) return and we will leak the irqs array. This patch fixes the issue by reordering the code so that the check is performed before any independent allocation.

Reported-by: David Binderman <dcb314@hotmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 30 April 2015 (2 commits)
-
-
Committed by Suzuki K. Poulose

With commit d5efd9cc ("arm64: pmu: add support for interrupt-affinity property"), we print a warning when we find a PMU SPI with a missing interrupt-affinity property in a pmu node. Unfortunately, we pass the wrong (NULL) device node to of_node_full_name, resulting in unhelpful messages such as:

hw perfevents: Failed to parse <no-node>/interrupt-affinity[0]

This patch fixes the name to that of the pmu node.

Fixes: d5efd9cc ("arm64: pmu: add support for interrupt-affinity property")
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Will Deacon

PPIs are affine by nature, so the interrupt-affinity property is not used and therefore we shouldn't print a warning in its absence.

Reported-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Reviewed-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 24 March 2015 (1 commit)
-
-
Committed by Will Deacon

Historically, the PMU devicetree bindings have expected SPIs to be listed in order of *logical* CPU number. This is problematic for bootloaders, especially when the boot CPU (logical ID 0) isn't listed first in the devicetree. This patch adds a new optional property, interrupt-affinity, to the PMU node, which allows the interrupt affinity to be described using a list of phandles to CPU nodes, with each entry in the list corresponding to the SPI at the same index in the interrupts property.

Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 20 March 2015 (1 commit)
-
-
Committed by Suzuki K. Poulose

The perf core implicitly rejects events spanning multiple HW PMUs, as in these cases the event->ctx will differ. However this validation is performed after pmu::event_init() is called in perf_init_event(), and thus pmu::event_init() may be called with a group leader from a different HW PMU.

The ARM64 PMU driver does not take this fact into account, and when validating groups assumes that it can call to_arm_pmu(event->pmu) for any HW event. When the event in question is from another HW PMU this is wrong, and results in dereferencing garbage.

This patch updates the ARM64 PMU driver to first test for and reject events from other PMUs, moving the to_arm_pmu and related logic after this test. Fixes a crash triggered by perf_fuzzer on Linux-4.0-rc2, with a CCI PMU present:

Bad mode in Synchronous Abort handler detected, code 0x86000006 -- IABT (current EL)
CPU: 0 PID: 1371 Comm: perf_fuzzer Not tainted 3.19.0+ #249
Hardware name: V2F-1XV7 Cortex-A53x2 SMM (DT)
task: ffffffc07c73a280 ti: ffffffc07b0a0000 task.ti: ffffffc07b0a0000
PC is at 0x0
LR is at validate_event+0x90/0xa8
pc : [<0000000000000000>] lr : [<ffffffc000090228>] pstate: 00000145
sp : ffffffc07b0a3ba0
[<          (null)>]           (null)
[<ffffffc0000907d8>] armpmu_event_init+0x174/0x3cc
[<ffffffc00015d870>] perf_try_init_event+0x34/0x70
[<ffffffc000164094>] perf_init_event+0xe0/0x10c
[<ffffffc000164348>] perf_event_alloc+0x288/0x358
[<ffffffc000164c5c>] SyS_perf_event_open+0x464/0x98c
Code: bad PC value

Also cleans up the code to use the arm_pmu only when we know that we are dealing with an arm pmu event.

Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Peter Ziljstra (Intel) <peterz@infradead.org>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
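The essence of the fix is to check which PMU the event belongs to before casting; approximately (a sketch of the idea, details of the real validate_event differ):

```c
#include <linux/perf_event.h>

/* Sketch: an event that belongs to another hardware PMU (or a software
 * event used as group leader) must not be cast with to_arm_pmu().
 */
static bool validate_event(struct pmu *pmu, struct pmu_hw_events *hw_events,
			   struct perf_event *event)
{
	struct arm_pmu *armpmu;

	if (is_software_event(event))
		return true;		/* software events mix freely */
	if (event->pmu != pmu)
		return false;		/* belongs to some other HW PMU */

	armpmu = to_arm_pmu(event->pmu);	/* now known to be safe */
	return armpmu->get_event_idx(hw_events, event) >= 0;
}
```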
-
- 04 December 2014 (1 commit)
-
-
Committed by Daniel Thompson

If the overflow threshold for a counter is set above or near the 0xffffffff boundary then the kernel may lose track of the overflow, causing only events that occur *after* the overflow to be recorded. Specifically, the problem occurs when the value of the performance counter overtakes its original programmed value due to wrap-around.

Typical solutions to this problem are either to avoid programming in values likely to be overtaken or to treat the overflow bit as the 33rd bit of the counter. It's somewhat fiddly to refactor the code to correctly handle the 33rd bit during irqsave sections (context switches, for example), so instead we take the simpler approach of avoiding values likely to be overtaken.

We set the limit to half of max_period because this matches the limit imposed in __hw_perf_event_init(). This causes a doubling of the interrupt rate for large threshold values; however, even with a very fast counter ticking at 4GHz the interrupt rate would only be ~1Hz.

Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
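The clamp itself is tiny: when (re)programming the counter, the period is limited to half the counter range, roughly (a sketch; the helper is illustrative, the real change lives in the period-setting path):

```c
#include <linux/kernel.h>
#include <linux/types.h>

/* Sketch: never program more than half the counter range, so the
 * overflow event cannot be overtaken by the wrapping counter.
 */
static u64 clamp_period(u64 left, u64 max_period)
{
	return min(left, max_period >> 1);
}
```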
-
- 03 October 2014 (1 commit)
-
-
Committed by Uwe Kleine-König

of_device_ids (i.e. compatible strings and the respective data) are not supposed to change at runtime. All functions working with of_device_ids provided by <linux/of.h> work with const of_device_ids. So mark the only non-const struct in arch/arm64 as const, too.

Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 07 April 2014 (1 commit)
-
-
Committed by Mark Salter

Recent arm64 builds using CONFIG_ARM64_64K_PAGES are failing with:

arch/arm64/kernel/perf_regs.c: In function ‘perf_reg_abi’:
arch/arm64/kernel/perf_regs.c:41:2: error: implicit declaration of function ‘is_compat_thread’
arch/arm64/kernel/perf_event.c:1398:2: error: unknown type name ‘compat_uptr_t’

This is due to some recent arm64 perf commits with compat support:

commit 23c7d70d: ARM64: perf: add support for frame pointer unwinding in compat mode
commit 2ee0d7fd: ARM64: perf: add support for perf registers API

Those patches make the arm64 kernel unbuildable if CONFIG_COMPAT is not defined, and CONFIG_ARM64_64K_PAGES depends on !CONFIG_COMPAT. This patch allows the arm64 kernel to build with and without CONFIG_COMPAT.

Signed-off-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 13 March 2014 (1 commit)
-
-
Committed by Jean Pihet

When profiling a 32-bit application, user space callchain unwinding using the frame pointer is performed in compat mode. The code is taken over from the AArch32 implementation and adapted to work on AArch64.

Signed-off-by: Jean Pihet <jean.pihet@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 20 December 2013 (1 commit)
-
-
Committed by Vinayak Kale

Add support for irq registration when the pmu interrupt is per-cpu.

Signed-off-by: Vinayak Kale <vkale@apm.com>
Signed-off-by: Tuan Phan <tphan@apm.com>
[will: tidied up cross-calling to pass &irq]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
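A per-CPU PMU interrupt (a PPI) is requested once with a per-CPU cookie and then enabled on each CPU, rather than issuing per-CPU request_irq calls as for SPIs; roughly (a sketch, the handler name and wrappers are assumed):

```c
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/smp.h>

static void enable_pmu_percpu_irq(void *data)
{
	int irq = *(int *)data;

	enable_percpu_irq(irq, IRQ_TYPE_NONE);	/* runs on each CPU */
}

/* Sketch: request the PPI once, then enable it on every online CPU. */
static int pmu_request_percpu_irq(int irq, irq_handler_t handler,
				  void __percpu *pmu_cookie)
{
	int err;

	err = request_percpu_irq(irq, handler, "arm-pmu", pmu_cookie);
	if (!err)
		on_each_cpu(enable_pmu_percpu_irq, &irq, 1);
	return err;
}
```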
-
- 29 October 2013 (1 commit)
-
-
Committed by Christoph Lameter

This is the ARM part of Christoph's patchset cleaning up the various uses of __get_cpu_var across the tree. The idea is to convert __get_cpu_var into either an explicit address calculation using this_cpu_ptr() or into a use of this_cpu operations that use the offset. Thereby address calculations are avoided and fewer registers are used when code is generated.

[will: fixed debug ref counting checks and pcpu array accesses]

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
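The conversion is mechanical; a before/after pair gives the flavour (a sketch with an illustrative per-CPU variable):

```c
#include <linux/percpu.h>

struct pmu_hw_events;	/* illustrative per-CPU state */

static DEFINE_PER_CPU(struct pmu_hw_events *, cpu_hw_events);

static struct pmu_hw_events *get_this_cpu_events(void)
{
	/* Before the cleanup this would have been written as:
	 *	return __get_cpu_var(cpu_hw_events);
	 * this_cpu_read()/this_cpu_ptr() compute the per-CPU access
	 * directly from the offset, avoiding the extra address math.
	 */
	return this_cpu_read(cpu_hw_events);
}
```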
-
- 25 October 2013 (1 commit)
-
-
Committed by Vinayak Kale

This patch fixes the ARMV8_EVTYPE_* macros, since the evtCount (event number) field width is 10 bits in the event selection register.

Signed-off-by: Vinayak Kale <vkale@apm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 20 August 2013 (4 commits)
-
-
Committed by Will Deacon

This is a port of f2fe09b0 ("ARM: 7663/1: perf: fix ARMv7 EVTYPE_MASK to include NSH bit") to arm64, which fixes the broken evtype mask to include the NSH bit, allowing profiling at EL2.

Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Will Deacon

This is a port of cb2d8b34 ("ARM: 7698/1: perf: fix group validation when using enable_on_exec") to arm64, which fixes the event validation checking so that events in the OFF state are still considered when enable_on_exec is true.

Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Will Deacon

This is a port of c95eb318 ("ARM: 7809/1: perf: fix event validation for software group leaders") to arm64, which fixes a panic in the arm64 perf backend found as a result of Vince's fuzzing tool.

Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Will Deacon

This is a port of d9f96635 ("ARM: 7810/1: perf: Fix array out of bounds access in armpmu_map_hw_event()") to arm64, which fixes an oops in the arm64 perf backend found as a result of Vince's fuzzing tool.

Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-