1. 25 Apr 2017, 1 commit
  2. 11 Apr 2017, 2 commits
  3. 18 Nov 2016, 1 commit
    • KVM: arm64: Fix the issues when guest PMCCFILTR is configured · b112c84a
      Authored by Wei Huang
      KVM calls kvm_pmu_set_counter_event_type() when PMCCFILTR is configured.
      But this function can't deal with PMCCFILTR correctly, because the evtCount
      bits of PMCCFILTR, which are reserved as 0, conflict with the SW_INCR event
      type of the other PMXEVTYPER<n> registers. To fix it, when eventsel == 0, this
      function shouldn't return immediately; instead it needs to check further
      whether select_idx is ARMV8_PMU_CYCLE_IDX.
      
      Another issue is that KVM shouldn't blindly copy the eventsel bits of PMCCFILTR
      to attr.config. Instead it ought to convert the request to the
      "cpu cycle" event type (i.e. 0x11). Both checks are sketched after this entry.
      
      To support this patch and to prevent duplicated definitions, a limited
      set of ARMv8 perf event types were relocated from perf_event.c to
      asm/perf_event.h.
      
      Cc: stable@vger.kernel.org # 4.6+
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Wei Huang <wei@redhat.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
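      Below is a minimal, self-contained C sketch of the two checks described above. The
      event numbers (SW_INCR = 0x00, CPU_CYCLES = 0x11) and the cycle-counter index 31
      follow the ARMv8 PMU definitions, but the event-field mask and the helper
      pmu_backing_event() are illustrative assumptions, not the kernel patch itself.

          #include <stdint.h>

          /* Illustrative constants; the 10-bit event mask is an assumption based
           * on the evtCount field width used at the time. */
          #define ARMV8_PMU_EVTYPE_EVENT          0x3ffULL
          #define ARMV8_PMUV3_PERFCTR_SW_INCR     0x00
          #define ARMV8_PMUV3_PERFCTR_CPU_CYCLES  0x11
          #define ARMV8_PMU_CYCLE_IDX             31

          /*
           * Return the raw event code to program into perf's attr.config, or -1
           * if no backing perf event is needed (pure software-increment counter).
           */
          static int64_t pmu_backing_event(uint64_t data, uint64_t select_idx)
          {
                  uint64_t eventsel = data & ARMV8_PMU_EVTYPE_EVENT;

                  /*
                   * SW_INCR needs no backing perf event, but evtCount == 0 in
                   * PMCCFILTR is merely "reserved", so the cycle counter must not
                   * bail out here.
                   */
                  if (eventsel == ARMV8_PMUV3_PERFCTR_SW_INCR &&
                      select_idx != ARMV8_PMU_CYCLE_IDX)
                          return -1;

                  /*
                   * The cycle counter is backed by the architectural "CPU cycles"
                   * event rather than by whatever the reserved evtCount bits of
                   * PMCCFILTR happen to contain.
                   */
                  if (select_idx == ARMV8_PMU_CYCLE_IDX)
                          return ARMV8_PMUV3_PERFCTR_CPU_CYCLES;

                  return (int64_t)eventsel;
          }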
  4. 17 Sep 2016, 3 commits
  5. 09 Sep 2016, 1 commit
  6. 22 Aug 2016, 1 commit
  7. 25 Apr 2016, 6 commits
  8. 29 Mar 2016, 1 commit
  9. 01 Mar 2016, 2 commits
  10. 19 Feb 2016, 4 commits
  11. 22 Dec 2015, 2 commits
  12. 21 Dec 2015, 1 commit
    • arm64: kernel: enforce pmuserenr_el0 initialization and restore · 60792ad3
      Authored by Lorenzo Pieralisi
      The pmuserenr_el0 register value is architecturally UNKNOWN on reset.
      Current kernel code resets that register value iff the core pmu device is
      correctly probed in the kernel. On platforms with missing DT pmu nodes (or
      with perf events disabled in the kernel), the pmu is not probed, so
      pmuserenr_el0 is never reset by the kernel and retains its architecturally
      UNKNOWN reset value. The system may then run with, e.g., pmuserenr_el0 == 0x1,
      which grants EL0 access to the PMU counters and must be disallowed.
      
      This patch adds code that resets pmuserenr_el0 on cold boot and restores
      it on core resume from shutdown, so that the pmuserenr_el0 setup is
      always enforced in the kernel (a minimal sketch of the reset follows below).
      
      Cc: <stable@vger.kernel.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
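      A minimal sketch of the reset being enforced, assuming an AArch64 target. The
      patch itself performs the write in assembly on the CPU cold-boot and resume
      paths, so this inline-asm helper is purely illustrative and must execute at
      EL1 or above.

          /*
           * Illustrative only: clearing PMUSERENR_EL0 revokes EL0 access to the
           * PMU counters regardless of the architecturally UNKNOWN reset value.
           * Builds only with an AArch64 compiler; the kernel does the equivalent
           * write from its low-level CPU setup/resume code.
           */
          static inline void reset_pmuserenr_el0(void)
          {
                  __asm__ volatile("msr pmuserenr_el0, xzr");
          }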
  13. 17 Nov 2015, 2 commits
  14. 07 Oct 2015, 3 commits
  15. 27 Jul 2015, 4 commits
    • arm64: perf: condense event number maps · ae2fb7ec
      Authored by Mark Rutland
      Most of the cache events an architecture might support do not map well
      to those provided by the ARM architecture, and as such most entries in
      the event number maps are *_UNSUPPORTED. Unfortunately, as 0 is a valid
      physical event identifier, the *_UNSUPPORTED macros expand to a non-zero
      value and thus each unsupported event must be explicitly initialised as
      such. This leads to large diffs when adding support for a new CPU, and
      makes it difficult to spot the important information.
      
      This patch follows arch/arm/ in making use of PERF_*_ALL_UNSUPPORTED
      macros to initialise all entries to *_UNSUPPORTED before overriding this
      for the specific events we actually support, resulting in a significant
      source code reduction (see the sketch after this entry).
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
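      A compilable sketch of the pattern, assuming GCC's ranged designated
      initializers. The unsupported value and the two raw event numbers mirror the
      arch/arm64 code of that era, but armv8_example_perf_map and the locally
      re-defined PERF_COUNT_HW_* constants are stand-ins so the sketch builds on
      its own.

          /* Every slot starts out unsupported... */
          #define HW_OP_UNSUPPORTED              0xFFFF
          #define PERF_COUNT_HW_MAX              10   /* locally defined for the sketch */
          #define PERF_COUNT_HW_CPU_CYCLES       0
          #define PERF_COUNT_HW_INSTRUCTIONS     1

          #define PERF_MAP_ALL_UNSUPPORTED \
                  [0 ... PERF_COUNT_HW_MAX - 1] = HW_OP_UNSUPPORTED

          /* ...and only the events the CPU really implements are overridden. */
          static const unsigned int armv8_example_perf_map[PERF_COUNT_HW_MAX] = {
                  PERF_MAP_ALL_UNSUPPORTED,
                  [PERF_COUNT_HW_CPU_CYCLES]     = 0x11, /* ARMv8 CPU_CYCLES */
                  [PERF_COUNT_HW_INSTRUCTIONS]   = 0x08, /* ARMv8 INST_RETIRED */
          };

      A new CPU's map then only has to list the events it actually supports, which
      is the diff-size reduction the commit describes.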
    • arm64: perf: factor out callchain code · 52da443e
      Authored by Mark Rutland
      We currently bundle the callchain handling code with the PMU code,
      despite the fact that the two are distinct and the former can be useful
      even in the absence of the latter.
      
      Follow the example of arch/arm and factor the callchain handling into
      its own file dependent on CONFIG_PERF_EVENTS rather than
      CONFIG_HW_PERF_EVENTS.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: perf: replace arch_find_n_match_cpu_physical_id with of_cpu_device_node_get · d09ce834
      Authored by Sudeep Holla
      arch_find_n_match_cpu_physical_id parses the device tree to get the
      device node for a given logical cpu index. However, since ARM PMUs are
      probed after the CPU device nodes have been stashed during CPU
      registration, we can use of_cpu_device_node_get to avoid another DT parse.
      
      This patch replaces arch_find_n_match_cpu_physical_id with
      of_cpu_device_node_get so that the stashed node is reused directly
      (illustrated by the sketch after this entry).
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
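      A hedged, kernel-context sketch of the resulting lookup. dn_to_logical_cpu()
      is a hypothetical helper written for illustration, not the exact hunk, and
      device-node reference counting is elided for brevity.

          #include <linux/cpumask.h>
          #include <linux/errno.h>
          #include <linux/of.h>
          #include <linux/of_device.h>

          /*
           * Hypothetical helper: map a PMU interrupt's DT node back to a logical
           * CPU. of_cpu_device_node_get() returns the node stashed when the CPU
           * devices were registered, so no extra device-tree walk is needed.
           * Previously this comparison went through
           * arch_find_n_match_cpu_physical_id(), which re-parses the device tree.
           */
          static int dn_to_logical_cpu(struct device_node *dn)
          {
                  int cpu;

                  for_each_possible_cpu(cpu)
                          if (dn == of_cpu_device_node_get(cpu))
                                  return cpu;

                  return -EINVAL;
          }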
    • arm64: perf: Remove unnecessary printk · 2d23ed04
      Authored by Suzuki K. Poulose
      The arm64 PMU prints an error message in event_init() when
      no hardware PMU is available. This is pretty annoying, as
      it keeps printing the message on every single attempt, unnecessarily
      flooding the kernel logs. The return code is sufficient for
      the user to figure out the reason.
      Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  16. 01 Jul 2015, 2 commits
  17. 19 May 2015, 1 commit
  18. 12 May 2015, 1 commit
  19. 30 Apr 2015, 2 commits