1. 25 Apr 2016 (2 commits)
  2. 29 Mar 2016 (1 commit)
  3. 01 Mar 2016 (2 commits)
  4. 19 Feb 2016 (4 commits)
  5. 22 Dec 2015 (2 commits)
  6. 21 Dec 2015 (1 commit)
    •
      arm64: kernel: enforce pmuserenr_el0 initialization and restore · 60792ad3
      Authored by Lorenzo Pieralisi
      The pmuserenr_el0 register value is architecturally UNKNOWN on reset.
      Current kernel code resets that register value iff the core pmu device is
      correctly probed in the kernel. On platforms with missing DT pmu nodes (or
      disabled perf events in the kernel), the pmu is not probed, therefore the
      pmuserenr_el0 register is not reset in the kernel and retains its
      architecturally UNKNOWN reset value (the system may run with e.g.
      pmuserenr_el0 == 0x1, meaning that PMU counter access is allowed at
      EL0, which must be disallowed).
      
      This patch adds code that resets pmuserenr_el0 on cold boot and restores
      it on core resume from shutdown, so that the pmuserenr_el0 setup is
      always enforced in the kernel.
      
      Cc: <stable@vger.kernel.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      60792ad3
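      For illustration, a minimal sketch of the idea in kernel-style C. This
      is a hedged paraphrase: the actual patch issues the equivalent write in
      the low-level cpu setup and resume paths, and the helper name here is
      invented.

        /* Hedged sketch: clearing PMUSERENR_EL0 denies EL0 access to the
         * PMU counters regardless of the UNKNOWN reset value. Helper name
         * and call sites are illustrative, not the upstream code. */
        static inline void reset_pmuserenr_el0(void)
        {
                asm volatile("msr pmuserenr_el0, xzr");
        }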
  7. 17 Nov 2015 (2 commits)
  8. 07 Oct 2015 (3 commits)
  9. 27 Jul 2015 (4 commits)
    •
      arm64: perf: condense event number maps · ae2fb7ec
      Authored by Mark Rutland
      Most of the cache events an architecture might support do not map well
      to those provided by the ARM architecture, and as such most entries in
      the event number maps are *_UNSUPPORTED. Unfortunately, as 0 is a valid
      physical event identifier, the *_UNSUPPORTED macros expand to a non-zero
      value and thus each unsupported event must be explicitly initialised as
      such. This leads to large diffs when adding support for a new CPU, and
      makes it difficult to spot the important information.
      
      This patch follows arch/arm/ in making use of PERF_*_ALL_UNSUPPORTED
      macros to initialise all entries to *_UNSUPPORTED before overriding this
      for the specific events we actually support, resulting in a significant
      source code reduction.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      ae2fb7ec
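      The pattern relies on GNU C designated-initializer ranges, where later
      entries override earlier ones. A condensed sketch (the macro and event
      constants follow the kernel's naming, but the table below is
      abbreviated, not the exact upstream map):

        #define PERF_MAP_ALL_UNSUPPORTED \
                [0 ... PERF_COUNT_HW_MAX - 1] = HW_OP_UNSUPPORTED

        static const unsigned armv8_pmuv3_perf_map[PERF_COUNT_HW_MAX] = {
                PERF_MAP_ALL_UNSUPPORTED,
                /* Only the supported events need spelling out. */
                [PERF_COUNT_HW_CPU_CYCLES]   = ARMV8_PMUV3_PERFCTR_CLOCK_CYCLES,
                [PERF_COUNT_HW_INSTRUCTIONS] = ARMV8_PMUV3_PERFCTR_INSTR_EXECUTED,
        };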
    •
      arm64: perf: factor out callchain code · 52da443e
      Authored by Mark Rutland
      We currently bundle the callchain handling code with the PMU code,
      despite the fact the two are distinct, and the former can be useful even
      in the absence of the latter.
      
      Follow the example of arch/arm and factor the callchain handling into
      its own file dependent on CONFIG_PERF_EVENTS rather than
      CONFIG_HW_PERF_EVENTS.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      52da443e
    •
      arm64: perf: replace arch_find_n_match_cpu_physical_id with of_cpu_device_node_get · d09ce834
      Authored by Sudeep Holla
      arch_find_n_match_cpu_physical_id parses the device tree to get the
      device node for a given logical CPU index. However, since ARM PMUs are
      probed after the CPU device nodes have been stashed during CPU
      registration, we can use of_cpu_device_node_get to avoid another DT
      parse.
      
      This patch replaces arch_find_n_match_cpu_physical_id with
      of_cpu_device_node_get to reuse the stashed value directly instead.
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      d09ce834
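      Schematically, the change looks like this (a hedged sketch: the
      surrounding driver code is elided and the loop shape is an assumption,
      though both functions are part of the generic kernel/OF API):

        /* Before: re-walk the DT, matching each CPU's physical id. */
        for_each_possible_cpu(cpu)
                if (arch_find_n_match_cpu_physical_id(dn, cpu, NULL))
                        break;

        /* After: compare against the device_node stashed when the CPU
         * device was registered; no extra DT parse. (The real code must
         * of_node_put() the node returned by of_cpu_device_node_get.) */
        for_each_possible_cpu(cpu)
                if (dn == of_cpu_device_node_get(cpu))
                        break;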
    •
      arm64: perf: Remove unnecessary printk · 2d23ed04
      Authored by Suzuki K. Poulose
      The arm64 PMU driver prints an error message in event_init() when no
      hardware PMU is available. This is pretty annoying, as the message is
      printed on every single attempt, needlessly flooding the kernel logs.
      The return code is sufficient for the user to figure out the reason.
      Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      2d23ed04
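      In effect (an illustrative sketch only; the exact condition, message,
      and error code in the driver may differ):

        /* Before: noisy on every perf_event_open() attempt. */
        if (!cpu_pmu) {
                pr_err("hardware PMU not available\n");
                return -ENODEV;
        }

        /* After: the returned error code alone tells the story. */
        if (!cpu_pmu)
                return -ENODEV;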
  10. 01 Jul 2015 (2 commits)
  11. 19 May 2015 (1 commit)
  12. 12 May 2015 (1 commit)
  13. 30 Apr 2015 (2 commits)
  14. 24 Mar 2015 (1 commit)
    •
      arm64: pmu: add support for interrupt-affinity property · d5efd9cc
      Authored by Will Deacon
      Historically, the PMU devicetree bindings have expected SPIs to be
      listed in order of *logical* CPU number. This is problematic for
      bootloaders, especially when the boot CPU (logical ID 0) isn't listed
      first in the devicetree.
      
      This patch adds a new optional property, interrupt-affinity, to the
      PMU node, which allows the interrupt affinity to be described using
      a list of phandles to CPU nodes, with each entry in the list
      corresponding to the SPI at the same index in the interrupts property.
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      d5efd9cc
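      A rough sketch of the consuming side (the DTS fragment in the comment
      and the helper name are illustrative; only of_parse_phandle is the
      real OF API):

        /*
         * Illustrative DT fragment:
         *
         *   pmu {
         *           compatible = "arm,armv8-pmuv3";
         *           interrupts = <0 60 4>, <0 61 4>;
         *           interrupt-affinity = <&cpu0>, <&cpu1>;
         *   };
         */
        static struct device_node *pmu_irq_affinity(struct device_node *pmu, int i)
        {
                /* Entry i pairs with the SPI at index i in "interrupts". */
                return of_parse_phandle(pmu, "interrupt-affinity", i);
        }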
  15. 20 Mar 2015 (1 commit)
    •
      arm64: perf: reject groups spanning multiple HW PMUs · 8fff105e
      Authored by Suzuki K. Poulose
      The perf core implicitly rejects events spanning multiple HW PMUs, as in
      these cases the event->ctx will differ. However this validation is
      performed after pmu::event_init() is called in perf_init_event(), and
      thus pmu::event_init() may be called with a group leader from a
      different HW PMU.
      
      The ARM64 PMU driver does not take this fact into account, and when
      validating groups assumes that it can call to_arm_pmu(event->pmu) for
      any HW event. When the event in question is from another HW PMU this is
      wrong, and results in dereferencing garbage.
      
      This patch updates the ARM64 PMU driver to first test for and reject
      events from other PMUs, moving the to_arm_pmu and related logic after
      this test. Fixes a crash triggered by perf_fuzzer on Linux-4.0-rc2, with
      a CCI PMU present:
      
      Bad mode in Synchronous Abort handler detected, code 0x86000006 -- IABT (current EL)
      CPU: 0 PID: 1371 Comm: perf_fuzzer Not tainted 3.19.0+ #249
      Hardware name: V2F-1XV7 Cortex-A53x2 SMM (DT)
      task: ffffffc07c73a280 ti: ffffffc07b0a0000 task.ti: ffffffc07b0a0000
      PC is at 0x0
      LR is at validate_event+0x90/0xa8
      pc : [<0000000000000000>] lr : [<ffffffc000090228>] pstate: 00000145
      sp : ffffffc07b0a3ba0
      
      [<          (null)>]           (null)
      [<ffffffc0000907d8>] armpmu_event_init+0x174/0x3cc
      [<ffffffc00015d870>] perf_try_init_event+0x34/0x70
      [<ffffffc000164094>] perf_init_event+0xe0/0x10c
      [<ffffffc000164348>] perf_event_alloc+0x288/0x358
      [<ffffffc000164c5c>] SyS_perf_event_open+0x464/0x98c
      Code: bad PC value
      
      This also cleans up the code to use the arm_pmu only once we know
      that we are dealing with an arm_pmu event.
      
      Cc: Will Deacon <will.deacon@arm.com>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      8fff105e
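      The shape of the fix is roughly the following (a hedged sketch:
      validate_event and to_arm_pmu follow the driver's naming, and the
      remaining per-counter validation is elided):

        static int validate_event(struct pmu *pmu, struct pmu_hw_events *hw_events,
                                  struct perf_event *event)
        {
                struct arm_pmu *armpmu;

                if (is_software_event(event))
                        return 1;

                /*
                 * Reject groups spanning multiple HW PMUs: to_arm_pmu()
                 * is only safe to call once we know the event is ours.
                 */
                if (event->pmu != pmu)
                        return 0;

                armpmu = to_arm_pmu(event->pmu);
                /* ... validate against this PMU's counters ... */
                return 1;
        }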
  16. 04 Dec 2014 (1 commit)
    •
      arm64: perf: Prevent wraparound during overflow · cbbf2e6e
      Authored by Daniel Thompson
      If the overflow threshold for a counter is set above or near the
      0xffffffff boundary then the kernel may lose track of the overflow
      causing only events that occur *after* the overflow to be recorded.
      Specifically the problem occurs when the value of the performance counter
      overtakes its original programmed value due to wrap around.
      
      Typical solutions to this problem are either to avoid programming in
      values likely to be overtaken or to treat the overflow bit as the 33rd
      bit of the counter.
      
      It's somewhat fiddly to refactor the code to correctly handle the 33rd
      bit during irqsave sections (context switches, for example), so instead
      we take the simpler approach of avoiding values likely to be overtaken.
      
      We set the limit to half of max_period because this matches the limit
      imposed in __hw_perf_event_init(). This causes a doubling of the interrupt
      rate for large threshold values, however even with a very fast counter
      ticking at 4GHz the interrupt rate would only be ~1Hz.
      Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      cbbf2e6e
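      Concretely, the clamp looks something like this (a sketch: the
      surrounding period bookkeeping is elided, and the 32-bit max_period is
      an assumption about the counter width):

        static int armpmu_event_set_period(struct perf_event *event)
        {
                struct hw_perf_event *hwc = &event->hw;
                s64 left = local64_read(&hwc->period_left);
                u64 max_period = 0xffffffffULL;

                /* ... underflow/overflow handling elided ... */

                /*
                 * Limit the period to half the counter range: anything
                 * closer to the top may be overtaken by the still-ticking
                 * counter before the overflow IRQ is serviced.
                 */
                if (left > (max_period >> 1))
                        left = max_period >> 1;

                local64_set(&hwc->prev_count, (u64)-left);
                /* ... program the hardware counter ... */
                return 0;
        }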
  17. 03 Oct 2014 (1 commit)
  18. 07 Apr 2014 (1 commit)
    •
      arm64: fix !CONFIG_COMPAT build failures · ff268ff7
      Authored by Mark Salter
      Recent arm64 builds using CONFIG_ARM64_64K_PAGES are failing with:
      
        arch/arm64/kernel/perf_regs.c: In function ‘perf_reg_abi’:
        arch/arm64/kernel/perf_regs.c:41:2: error: implicit declaration of function ‘is_compat_thread’
      
        arch/arm64/kernel/perf_event.c:1398:2: error: unknown type name ‘compat_uptr_t’
      
      This is due to some recent arm64 perf commits with compat support:
      
        commit 23c7d70d:
          ARM64: perf: add support for frame pointer unwinding in compat mode
      
        commit 2ee0d7fd:
          ARM64: perf: add support for perf registers API
      
      Those patches make the arm64 kernel unbuildable if CONFIG_COMPAT is not
      defined and CONFIG_ARM64_64K_PAGES depends on !CONFIG_COMPAT. This patch
      allows the arm64 kernel to build with and without CONFIG_COMPAT.
      Signed-off-by: Mark Salter <msalter@redhat.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      ff268ff7
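      One plausible shape of such a fix is to provide a !CONFIG_COMPAT stub
      so callers need no #ifdef (a sketch; the flag test under CONFIG_COMPAT
      is illustrative):

        #ifdef CONFIG_COMPAT
        static inline int is_compat_thread(struct thread_info *thread)
        {
                return test_ti_thread_flag(thread, TIF_32BIT);
        }
        #else
        static inline int is_compat_thread(struct thread_info *thread)
        {
                return 0;       /* no compat tasks without CONFIG_COMPAT */
        }
        #endif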
  19. 13 Mar 2014 (1 commit)
  20. 20 Dec 2013 (1 commit)
  21. 29 Oct 2013 (1 commit)
  22. 25 Oct 2013 (1 commit)
  23. 20 Aug 2013 (4 commits)