- 14 Oct 2010, 1 commit
-
-
Committed by Ingo Molnar

Fix this linux-next build failure that Stephen reported:

    arch/arm/kernel/perf_event.c: In function 'armpmu_event_init':
    arch/arm/kernel/perf_event.c:543: error: request for member 'num_events' in something not a structure or union

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: paulus <paulus@samba.org>
LKML-Reference: <20101014164925.4fa16b75.sfr@canb.auug.org.au>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 11 Oct 2010, 1 commit
-
-
Committed by Matt Fleming

The number of counters for the registered pmu is needed in a few places, so provide a helper function that returns this number.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Tested-by: Will Deacon <will.deacon@arm.com>
Acked-by: Paul Mundt <lethal@linux-sh.org>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Robert Richter <robert.richter@amd.com>
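As a rough illustration of the shape such a helper takes on ARM (not necessarily the exact patch), it can simply expose the num_events field of the active PMU descriptor that other commits in this log refer to:

    /*
     * Sketch: report how many hardware counters the active PMU provides.
     * Assumes the ARM backend keeps a pointer to the current arm_pmu
     * descriptor, which carries a num_events field.
     */
    int perf_num_counters(void)
    {
        return armpmu ? armpmu->num_events : 0;
    }
    EXPORT_SYMBOL_GPL(perf_num_counters);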
-
- 10 Sep 2010, 6 commits
-
-
Committed by Peter Zijlstra

Neither the overcommit nor the reservation sysfs parameter were actually working; remove them, as they'll only get in the way.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: paulus <paulus@samba.org>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Peter Zijlstra

Replace pmu::{enable,disable,start,stop,unthrottle} with pmu::{add,del,start,stop}, all of which take a flags argument.

The new interface extends the capability to stop a counter while keeping it scheduled on the PMU. We replace the throttled state with the generic stopped state.

This also allows us to efficiently stop/start counters over certain code paths (like IRQ handlers).

It also allows scheduling a counter without it starting, allowing for a generic frozen state (useful for rotating stopped counters).

The stopped state is implemented in two different ways, depending on how the architecture implemented the throttled state:

 1) We disable the counter:
    a) the pmu has per-counter enable bits, we flip that
    b) we program a NOP event, preserving the counter state

 2) We store the counter state and ignore all read/overflow events

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: paulus <paulus@samba.org>
Cc: stephane eranian <eranian@googlemail.com>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Cyrill Gorcunov <gorcunov@gmail.com>
Cc: Lin Ming <ming.m.lin@intel.com>
Cc: Yanmin <yanmin_zhang@linux.intel.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
Cc: David Miller <davem@davemloft.net>
Cc: Michael Cree <mcree@orcon.net.nz>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
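For orientation, here is an excerpt-style sketch of the callback shape described above; the PERF_EF_* flag names are the ones this interface introduced, and the comments paraphrase the commit message rather than quote kernel code:

    struct pmu {
        /* ... other members ... */

        /*
         * Schedule the event onto the PMU; start counting immediately
         * only if PERF_EF_START is set, otherwise leave it stopped.
         */
        int  (*add)(struct perf_event *event, int flags);

        /* Unschedule the event from the PMU. */
        void (*del)(struct perf_event *event, int flags);

        /*
         * Start/stop an already-scheduled event. PERF_EF_RELOAD asks for
         * the hardware period to be reprogrammed; PERF_EF_UPDATE asks for
         * the hardware count to be folded back into event->count.
         */
        void (*start)(struct perf_event *event, int flags);
        void (*stop)(struct perf_event *event, int flags);
    };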
-
Committed by Peter Zijlstra

Changes perf_disable() into perf_pmu_disable().

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: paulus <paulus@samba.org>
Cc: stephane eranian <eranian@googlemail.com>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Cyrill Gorcunov <gorcunov@gmail.com>
Cc: Lin Ming <ming.m.lin@intel.com>
Cc: Yanmin <yanmin_zhang@linux.intel.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
Cc: David Miller <davem@davemloft.net>
Cc: Michael Cree <mcree@orcon.net.nz>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Peter Zijlstra

Since the current perf_disable() usage is only an optimization, remove it for now. This eases the removal of the __weak hw_perf_enable() interface.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: paulus <paulus@samba.org>
Cc: stephane eranian <eranian@googlemail.com>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Cyrill Gorcunov <gorcunov@gmail.com>
Cc: Lin Ming <ming.m.lin@intel.com>
Cc: Yanmin <yanmin_zhang@linux.intel.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
Cc: David Miller <davem@davemloft.net>
Cc: Michael Cree <mcree@orcon.net.nz>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Peter Zijlstra

Simple registration interface for struct pmu; this provides the infrastructure for removing all the weak functions.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: paulus <paulus@samba.org>
Cc: stephane eranian <eranian@googlemail.com>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Cyrill Gorcunov <gorcunov@gmail.com>
Cc: Lin Ming <ming.m.lin@intel.com>
Cc: Yanmin <yanmin_zhang@linux.intel.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
Cc: David Miller <davem@davemloft.net>
Cc: Michael Cree <mcree@orcon.net.nz>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
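A minimal sketch of how an architecture backend plugs into such a registration interface; it assumes the early single-argument form of perf_pmu_register() (later kernels also take a name and a type), and the callback names are illustrative:

    static struct pmu armpmu = {
        .event_init = armpmu_event_init,   /* illustrative callback names */
        .add        = armpmu_add,
        .del        = armpmu_del,
        .start      = armpmu_start,
        .stop       = armpmu_stop,
        .read       = armpmu_read,
    };

    static int __init arm_pmu_init(void)
    {
        /* Early form: one argument; later kernels use (pmu, name, type). */
        return perf_pmu_register(&armpmu);
    }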
-
Committed by Peter Zijlstra

    sed -ie 's/const struct pmu\>/struct pmu/g' `git grep -l "const struct pmu\>"`

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: paulus <paulus@samba.org>
Cc: stephane eranian <eranian@googlemail.com>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Cyrill Gorcunov <gorcunov@gmail.com>
Cc: Lin Ming <ming.m.lin@intel.com>
Cc: Yanmin <yanmin_zhang@linux.intel.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
Cc: David Miller <davem@davemloft.net>
Cc: Michael Cree <mcree@orcon.net.nz>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 02 Sep 2010, 1 commit
-
-
Committed by Will Deacon

The validate_event function in the ARM perf events backend has the following problems:

 1.) Events that are disabled count towards the cost.
 2.) Events associated with other PMUs [for example, software events or breakpoints] do not count towards the cost, but do fail validation, causing the group to fail.

This patch changes validate_event so that it ignores events in the PERF_EVENT_STATE_OFF state or that are scheduled for other PMUs.

Reported-by: Pawel Moll <pawel.moll@arm.com>
Acked-by: Jamie Iles <jamie.iles@picochip.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
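The resulting check roughly treats off-state or foreign-PMU events as trivially valid before asking the backend whether a counter slot would be available; a sketch of that shape (symbol names follow the ARM backend of that era, not necessarily the exact diff):

    static int
    validate_event(struct cpu_hw_events *cpuc, struct perf_event *event)
    {
        struct hw_perf_event fake_event = event->hw;

        /* Ignore events owned by other PMUs and events that are switched off. */
        if (event->pmu != &pmu || event->state <= PERF_EVENT_STATE_OFF)
            return 1;

        /* Ask the backend whether a hardware counter could be allocated. */
        return armpmu->get_event_idx(cpuc, &fake_event) >= 0;
    }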
-
- 01 Sep 2010, 1 commit
-
-
Committed by Will Deacon

This is purely a cosmetic change to the ARM perf backend, because the current comments about the relationship between NMIs, interrupt context and perf_event_do_pending are misleading. This patch updates the comments so that they reflect what the code actually does (which is in line with other architectures).

Acked-by: Jamie Iles <jamie.iles@picochip.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 19 Aug 2010, 4 commits
-
-
Committed by Frederic Weisbecker

Store the kernel and user contexts from the generic layer instead of the archs; this gathers some repetitive code.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Tested-by: Will Deacon <will.deacon@arm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Stephane Eranian <eranian@google.com>
Cc: David Miller <davem@davemloft.net>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Borislav Petkov <bp@amd64.org>
-
Committed by Frederic Weisbecker

- Most archs use one callchain buffer per cpu, except x86, which needs to deal with NMIs. Provide a default perf_callchain_buffer() implementation that x86 overrides.

- Centralize all the kernel/user regs handling and invoke new arch handlers from there: perf_callchain_user() / perf_callchain_kernel(). That avoids all the user_mode(), current->mm checks and so on.

- Invert some parameters in the perf_callchain_*() helpers: entry to the left, regs to the right, following the traditional (dst, src) order.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Tested-by: Will Deacon <will.deacon@arm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Stephane Eranian <eranian@google.com>
Cc: David Miller <davem@davemloft.net>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Borislav Petkov <bp@amd64.org>
-
Committed by Frederic Weisbecker

callchain_store() is the same on every arch; inline it in perf_event.h and rename it to perf_callchain_store() to avoid any collision. This removes repetitive code.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Tested-by: Will Deacon <will.deacon@arm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Stephane Eranian <eranian@google.com>
Cc: David Miller <davem@davemloft.net>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Borislav Petkov <bp@amd64.org>
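The helper is essentially a bounds-checked append of one instruction pointer to the callchain buffer; a sketch of its likely form, assuming the perf field and limit names of the time:

    static inline void
    perf_callchain_store(struct perf_callchain_entry *entry, u64 ip)
    {
        /* Append one frame, dropping it silently once the buffer is full. */
        if (entry->nr < PERF_MAX_STACK_DEPTH)
            entry->ip[entry->nr++] = ip;
    }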
-
Committed by Frederic Weisbecker

Drop the TASK_RUNNING test on user tasks for callchains, as this check doesn't seem to make any sense. Also remove the test for !current, which is not supposed to happen, and the current->pid test, as this should be handled at the generic level with the exclude_idle attribute.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: David Miller <davem@davemloft.net>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Borislav Petkov <bp@amd64.org>
-
- 05 Jul 2010, 1 commit
-
-
Committed by Will Deacon

Hardware performance counters on ARM are 32 bits wide, but atomic64_t variables are used to represent counter data in the hw_perf_event structure.

The armpmu_event_update function right-shifts a signed 64-bit delta variable and adds the result to the event count. This can lead to shifting in sign bits if the MSB of the 32-bit counter value is set. This results in perf output such as:

    Performance counter stats for 'sleep 20':

    18446744073460670464  cycles             <-- 0xFFFFFFFFF12A6000
                 7783773  instructions       #  0.000 IPC
                     465  context-switches
                     161  page-faults
                 1172393  branches

            20.154242147  seconds time elapsed

This patch ensures that the delta value is treated as unsigned so that the right shift sets the upper bits to zero.

Cc: <stable@kernel.org>
Acked-by: Jamie Iles <jamie.iles@picochip.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
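The underlying pitfall is that an arithmetic (signed) right shift replicates the sign bit, so a 32-bit counter value with its top bit set becomes an enormous 64-bit delta; the following stand-alone user-space sketch (not kernel code) shows the difference and why shifting an unsigned value fixes it:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t prev = 0, now = 0xF12A6000u;   /* MSB of the 32-bit counter is set */
        int shift = 64 - 32;
        uint64_t raw = ((uint64_t)now << shift) - ((uint64_t)prev << shift);

        int64_t  sdelta = (int64_t)raw;   /* buggy: arithmetic shift sign-extends       */
        uint64_t udelta = raw;            /* fixed: logical shift zero-extends          */

        sdelta >>= shift;                 /* implementation-defined, but arithmetic on  */
        udelta >>= shift;                 /* the compilers the kernel cares about       */

        printf("signed delta:   0x%016llx\n", (unsigned long long)sdelta);
        printf("unsigned delta: 0x%016llx\n", (unsigned long long)udelta);
        return 0;
    }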
-
- 09 Jun 2010, 1 commit
-
-
Committed by Peter Zijlstra

Since all modifications to event->count (and ->prev_count and ->period_left) are now local to a cpu, change them to local64_t so we avoid the LOCK'ed ops.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
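In practice this means counter bookkeeping switches from atomic64_* to the cheaper local64_* accessors; a hedged sketch of the typical update pattern after the conversion (the helper name is illustrative, the field names are perf's):

    static void example_event_update(struct perf_event *event,
                                     struct hw_perf_event *hwc,
                                     u64 new_raw_count)
    {
        u64 prev_raw_count, delta;

        /* local64_* avoid LOCK-prefixed/atomic RMW ops for CPU-local data. */
        do {
            prev_raw_count = local64_read(&hwc->prev_count);
        } while (local64_cmpxchg(&hwc->prev_count, prev_raw_count,
                                 new_raw_count) != prev_raw_count);

        delta = new_raw_count - prev_raw_count;

        local64_add(delta, &event->count);
        local64_sub(delta, &hwc->period_left);
    }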
-
- 17 May 2010, 4 commits
-
-
Committed by Will Deacon

For OProfile to initialise oprofilefs correctly, it needs to know the number of counters it can represent. This patch adds a function to the ARM perf-events backend that returns the number of hardware counters available for the current PMU.

Cc: Jamie Iles <jamie.iles@picochip.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Will Deacon

The perf-events framework for ARM only supports v6 and v7 cores. This patch adds support for xscale v1 and v2 PMUs to perf, based on the OProfile drivers in arch/arm/oprofile/op_model_xscale.c.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Will Deacon

The ARM perf-events framework provides support for a number of different PMUs using struct arm_pmu. The char *name field of this struct can be used to identify the PMU, but this is cumbersome if used outside of perf.

This patch replaces the name string for a PMU with an enum, which holds a unique ID for the PMU being represented. This ID can be used to index an array of names within perf, so no functionality is lost. The presence of the ID field allows other kernel subsystems [currently oprofile] to use their own mappings for the PMU name.

Cc: Jean Pihet <jpihet@mvista.com>
Acked-by: Jamie Iles <jamie.iles@picochip.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Will Deacon

The current PMU infrastructure for ARM requires that the IRQs for the PMU device are fixed at compile time and are selected based on the ARCH_ or MACH_ flags. This has the disadvantage of tying the kernel down to a particular board as far as profiling is concerned.

This patch replaces the compile-time IRQ registration with a runtime mechanism which allows the IRQs to be registered with the framework as a platform_device. A further advantage of this change is that there is scope for registering different types of performance counters in the future by changing the id of the platform_device and attaching different resources to it.

Acked-by: Jamie Iles <jamie.iles@picochip.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
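Board support code can then describe its PMU interrupts as ordinary platform resources; a speculative sketch of such a registration (the "arm-pmu" device name, the id convention and the IRQ numbers are assumptions to be checked against the pmu.h of the kernel in question):

    #include <linux/platform_device.h>

    /* Hypothetical per-core PMU interrupt lines for a two-CPU board. */
    static struct resource board_pmu_resources[] = {
        {
            .start = 40,                 /* IRQ for CPU0's PMU (example number) */
            .end   = 40,
            .flags = IORESOURCE_IRQ,
        },
        {
            .start = 41,                 /* IRQ for CPU1's PMU (example number) */
            .end   = 41,
            .flags = IORESOURCE_IRQ,
        },
    };

    static struct platform_device board_pmu_device = {
        .name          = "arm-pmu",      /* name the ARM PMU code binds to */
        .id            = -1,             /* or a PMU-type id, depending on kernel version */
        .resource      = board_pmu_resources,
        .num_resources = ARRAY_SIZE(board_pmu_resources),
    };

    static int __init board_pmu_init(void)
    {
        return platform_device_register(&board_pmu_device);
    }
    device_initcall(board_pmu_init);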
-
- 13 Mar 2010, 2 commits
-
-
Committed by Will Deacon

The event selection mask for ARMv7 cores [ARMV7_EVTSEL_MASK] is incorrectly set to 0x7f. This means that the top bit of an event ID is ignored, so counting branch misses (id=0x10) and ISBs (id=0x90) gives the same results. This patch sets the event selection mask to the correct value of 0xff.

Signed-off-by: Jean Pihet <jpihet@mvista.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
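The essence of the fix is one more bit of mask applied before the event number is written to the per-counter event-type register (PMXEVTYPER); schematically, with an illustrative helper name:

    /*
     * Event selection mask for PMXEVTYPER. Was 0x7f, which silently dropped
     * bit 7 of the event ID (so 0x10 and 0x90 programmed the same event).
     */
    #define ARMV7_EVTSEL_MASK	0xff

    /*
     * Illustrative helper: program the event for the counter currently
     * selected via PMSELR (cp15 encoding per the ARMv7 ARM).
     */
    static inline void armv7_pmnc_write_evtsel(u32 evt)
    {
        u32 val = evt & ARMV7_EVTSEL_MASK;

        asm volatile("mcr p15, 0, %0, c9, c13, 1" : : "r" (val));
    }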
-
Committed by Will Deacon

If IRQ balancing is used on a multicore ARM system, PMU interrupt lines may be relocated onto CPUs other than the one causing the counter overflow. This can result in misattribution of events to the wrong core and, in the case that the CPU handling the interrupt has not experienced counter overflow, the interrupt can be disabled because the handler returns IRQ_NONE.

This patch adds the IRQF_NOBALANCING flag to the request_irq call in perf_events.c.

Acked-by: Jamie Iles <jamie.iles@picochip.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
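Concretely, the PMU interrupt is requested with the no-balancing flag so overflow handling stays pinned to the core that raised it; a sketch of the call (the wrapper name is illustrative, and IRQF_DISABLED reflects kernels of that era):

    #include <linux/interrupt.h>

    static int armpmu_reserve_irq(int irq, irq_handler_t handler)
    {
        int err;

        /*
         * IRQF_NOBALANCING keeps the line on the CPU whose counter
         * overflowed, so samples are attributed to the right core and
         * the handler never returns IRQ_NONE on an unrelated CPU.
         */
        err = request_irq(irq, handler,
                          IRQF_DISABLED | IRQF_NOBALANCING,
                          "armpmu", NULL);
        if (err)
            pr_warn("unable to request IRQ%d for ARM perf counters\n", irq);

        return err;
    }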
-
- 10 Mar 2010, 1 commit
-
-
Committed by Peter Zijlstra

This makes it easier to extend perf_sample_data and fixes a bug on arm and sparc, which failed to set ->raw to NULL, which can cause crashes when combined with PERF_SAMPLE_RAW.

It also optimizes PowerPC and tracepoint, because the struct initialization is forced to zero out the whole structure.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Jean Pihet <jpihet@mvista.com>
Reviewed-by: Frederic Weisbecker <fweisbec@gmail.com>
Acked-by: David S. Miller <davem@davemloft.net>
Cc: Jamie Iles <jamie.iles@picochip.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: stable@kernel.org
LKML-Reference: <20100304140100.315416040@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
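The change funnels sample setup through one small inline helper so every architecture initialises the risky fields the same way; a sketch of the helper and a typical call site (the two-argument form matches this era, later kernels added a period argument, and the extra nmi argument to perf_event_overflow() existed in 2010 kernels):

    static inline void perf_sample_data_init(struct perf_sample_data *data, u64 addr)
    {
        data->addr = addr;
        data->raw  = NULL;   /* the field arm/sparc previously left uninitialised */
    }

    /* In an arch overflow handler, roughly: */
    static void example_overflow(struct perf_event *event, struct pt_regs *regs)
    {
        struct perf_sample_data data;

        perf_sample_data_init(&data, 0);
        perf_event_overflow(event, 0, &data, regs);
    }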
-
- 13 Feb 2010, 2 commits
-
-
Committed by Jean PIHET

Adds Performance Events support for ARMv7 processors, using the PMNC unit in HW.

Supports the following:
 - Cortex-A8 and Cortex-A9 processors,
 - dynamic detection of the number of available counters, based on the PMCR value,
 - runtime detection of the CPU arch (v6 or v7) and model (Cortex-A8 or Cortex-A9)

Tested on OMAP3 (Cortex-A8) only.

Signed-off-by: Jean Pihet <jpihet@mvista.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
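The dynamic counter detection mentioned above reads the core's PMCR (PMNC) register, whose N field in bits [15:11] reports how many event counters the implementation provides; a sketch with illustrative function names:

    /* Read the ARMv7 Performance Monitor Control Register (PMCR/PMNC). */
    static inline u32 armv7_pmnc_read(void)
    {
        u32 val;

        asm volatile("mrc p15, 0, %0, c9, c12, 0" : "=r" (val));
        return val;
    }

    /* PMCR.N, bits [15:11]: number of event counters implemented by this core.
     * Backends typically add one more for the dedicated cycle counter. */
    static int armv7_read_num_pmnc_events(void)
    {
        return (armv7_pmnc_read() >> 11) & 0x1f;
    }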
-
Committed by Jamie Iles

This patch implements support for ARMv6 performance counters in the Linux performance events subsystem. ARMv6 architectures that have the performance counters should enable HW_PERF_EVENTS to get hardware performance events support in addition to the software events.

Note: only ARM Ltd ARM cores are supported.

This implementation also provides an ARM PMU abstraction layer to allow ARMv7 and others to be supported in the future by adding a new 'struct arm_pmu'.

Cc: Jean Pihet <jpihet@mvista.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jamie Iles <jamie.iles@picochip.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
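The abstraction layer amounts to a table of function pointers plus a couple of limits that each CPU family fills in; a hedged sketch of the descriptor (field names follow the early ARM backend and varied in later kernels):

    struct arm_pmu {
        irqreturn_t (*handle_irq)(int irq_num, void *dev);
        void        (*enable)(struct hw_perf_event *evt, int idx);
        void        (*disable)(struct hw_perf_event *evt, int idx);
        int         (*event_map)(int evt);        /* generic event -> HW event ID */
        u64         (*raw_event)(u64 config);     /* sanitise raw event codes     */
        int         (*get_event_idx)(struct cpu_hw_events *cpuc,
                                     struct hw_perf_event *hwc);
        u32         (*read_counter)(int idx);
        void        (*write_counter)(int idx, u32 val);
        void        (*start)(void);
        void        (*stop)(void);
        int         num_events;                   /* counters provided by this PMU   */
        u64         max_period;                   /* largest period a counter holds  */
    };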
-