1. 09 Feb, 2022 (4 commits)
  2. 08 Dec, 2021 (1 commit)
  3. 01 Dec, 2021 (1 commit)
  4. 17 Nov, 2021 (1 commit)
  5. 17 Oct, 2021 (1 commit)
  6. 20 Sep, 2021 (1 commit)
  7. 11 Aug, 2021 (1 commit)
  8. 02 Aug, 2021 (2 commits)
  9. 18 Jun, 2021 (2 commits)
  10. 22 Apr, 2021 (1 commit)
  11. 06 Mar, 2021 (1 commit)
  12. 03 Feb, 2021 (2 commits)
  13. 21 Jan, 2021 (1 commit)
  14. 27 Dec, 2020 (1 commit)
  15. 27 Nov, 2020 (5 commits)
  16. 29 Sep, 2020 (5 commits)
    • KVM: arm64: Mask out filtered events in PMCEID{0,1}_EL1 · 88865bec
      Committed by Marc Zyngier
      As we can now hide events from the guest, let's also adjust its view of
      PMCEID{0,1}_EL1 so that it can figure out why some common events are not
      counting as they should.
      
      The astute user can still look into the TRM for their CPU and find out
      they've been cheated, though. Nobody's perfect. (See the sketch after
      this entry.)
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      88865bec
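      A minimal sketch of the masking idea in standalone C, assuming the
      per-VM filter is a plain bitmap of allowed event numbers; mask_pmceid,
      filter_bitmap and base_event are illustrative names, not the kernel's
      actual symbols:

          #include <stdint.h>

          /* Clear PMCEID bits for common events the VM filter disallows. */
          static uint64_t mask_pmceid(uint64_t pmceid,
                                      const uint64_t *filter_bitmap,
                                      unsigned int base_event)
          {
              uint64_t mask = 0;

              /* Bit i of this PMCEID word advertises event (base_event + i). */
              for (unsigned int i = 0; i < 64; i++) {
                  unsigned int ev = base_event + i;

                  if (filter_bitmap[ev / 64] & (1ULL << (ev % 64)))
                      mask |= 1ULL << i;
              }

              return pmceid & mask;
          }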
    • KVM: arm64: Add PMU event filtering infrastructure · d7eec236
      Committed by Marc Zyngier
      It can be desirable to expose a PMU to a guest, and yet not want the
      guest to be able to count some of the implemented events (because this
      would give away information on shared resources, for example).
      
      For this, let's extend the PMUv3 device API, and offer a way to set up a
      bitmap of the allowed events (the default being no bitmap, and thus no
      filtering).
      
      Userspace can thus allow or deny ranges of events. The default policy
      depends on the "polarity" of the first filter that is set up (default
      deny if the filter allows events, and default allow if the filter
      denies events). This makes it possible to set up exactly what is
      allowed for a given guest.
      
      Note that although the ioctl is per-vcpu, the map of allowed events is
      global to the VM (it can be set up from any vcpu until the vcpu PMU is
      initialized). A sketch of the filtering logic follows this entry.
      Reviewed-by: Andrew Jones <drjones@redhat.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      d7eec236
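      A standalone C sketch of the default-polarity rule described above,
      assuming a 16-bit event space and a simple allow-bitmap; the names and
      FILTER_* constants are illustrative, not the UAPI the commit adds:

          #include <stdbool.h>
          #include <stdint.h>
          #include <string.h>

          #define MAX_EVENTS   (1 << 16)            /* 16-bit event space */
          #define BITMAP_WORDS (MAX_EVENTS / 64)

          enum filter_action { FILTER_ALLOW, FILTER_DENY };

          /* The first filter fixes the default: a first ALLOW filter means
           * "default deny" (bitmap starts all clear), and vice versa. */
          static void apply_filter(uint64_t *bitmap, bool *initialized,
                                   uint16_t base, uint16_t nevents,
                                   enum filter_action action)
          {
              if (!*initialized) {
                  memset(bitmap, action == FILTER_ALLOW ? 0x00 : 0xff,
                         BITMAP_WORDS * sizeof(uint64_t));
                  *initialized = true;
              }

              for (uint32_t ev = base; ev < (uint32_t)base + nevents; ev++) {
                  if (action == FILTER_ALLOW)
                      bitmap[ev / 64] |=  1ULL << (ev % 64);
                  else
                      bitmap[ev / 64] &= ~(1ULL << (ev % 64));
              }
          }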
    • KVM: arm64: Use event mask matching architecture revision · fd65a3b5
      Committed by Marc Zyngier
      The PMU code suffers from a small defect where we assume that the event
      number provided by the guest is always 16 bits wide, even if the CPU only
      implements the ARMv8.0 architecture. This isn't really problematic in
      the sense that the event number ends up in a system register, which crops
      it to the right width, but it still needs fixing.
      
      In order to make it work, let's probe the version of the PMU that the
      guest is going to use. This is done by temporarily creating a kernel
      event and looking at the PMUVer field that was saved at probe time in
      the associated arm_pmu structure. This in turn gets saved in the kvm
      structure, and is subsequently used to compute the event mask used
      throughout the PMU code (see the sketch below).
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      fd65a3b5
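      A sketch of the resulting mask selection, assuming PMUVer values along
      the lines of ID_AA64DFR0_EL1 (1 for the ARMv8.0 PMU, higher values for
      v8.1 and later); the encoding and names here are illustrative:

          #include <stdint.h>

          /* ARMv8.0 implements 10-bit event numbers; v8.1 and later
           * widen them to 16 bits. */
          static uint64_t pmu_event_mask(unsigned int pmuver)
          {
              if (pmuver == 1)                /* ARMv8.0 PMU */
                  return (1U << 10) - 1;      /* 0x3ff */

              return (1U << 16) - 1;          /* 0xffff */
          }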
    • KVM: arm64: Refactor PMU attribute error handling · 42223fb1
      Committed by Marc Zyngier
      The PMU emulation error handling is pretty messy when dealing with
      attributes. Let's refactor it so that there is less duplication and it
      is easy to extend later on.
      
      A functional change is that kvm_arm_pmu_v3_init() used to return
      -ENXIO when the PMU feature wasn't set. The error is now reported
      as -ENODEV, matching the documentation. -ENXIO is still returned
      when the interrupt isn't properly configured (see the sketch below).
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      42223fb1
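      A toy illustration of the resulting error convention, with hypothetical
      fields standing in for the real vcpu state:

          #include <errno.h>
          #include <stdbool.h>

          struct toy_vcpu {
              bool has_pmu_feature;    /* created with the PMU feature flag */
              bool pmu_irq_configured; /* overflow interrupt set up */
          };

          static int pmu_v3_init(const struct toy_vcpu *vcpu)
          {
              if (!vcpu->has_pmu_feature)
                  return -ENODEV;      /* feature not requested, per the docs */
              if (!vcpu->pmu_irq_configured)
                  return -ENXIO;       /* interrupt not properly configured */
              return 0;
          }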
    • KVM: arm64: pmu: Make overflow handler NMI safe · 95e92e45
      Committed by Julien Thierry
      kvm_vcpu_kick() is not NMI safe. When the overflow handler is called from
      NMI context, defer waking the vcpu to an irq_work queue.
      
      A vcpu can be freed by kvm_destroy_vm() while it is not running. Prevent
      running the irq_work for a non-existent vcpu by calling irq_work_sync()
      on the PMU destroy path. (See the sketch after this entry.)
      
      [Alexandru E.: Added irq_work_sync()]
      Signed-off-by: Julien Thierry <julien.thierry@arm.com>
      Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
      Tested-by: Sumit Garg <sumit.garg@linaro.org> (Developerbox)
      Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: kvm@vger.kernel.org
      Cc: kvmarm@lists.cs.columbia.edu
      Link: https://lore.kernel.org/r/20200924110706.254996-6-alexandru.elisei@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
      95e92e45
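      A kernel-style sketch of the pattern, with a hypothetical vpmu container
      standing in for the real PMU state:

          #include <linux/irq_work.h>
          #include <linux/kvm_host.h>

          struct vpmu {
              struct irq_work overflow_work;
              struct kvm_vcpu *vcpu;
          };

          /* Deferred part: runs in hard-IRQ context, where kicking is legal. */
          static void vpmu_kick_fn(struct irq_work *work)
          {
              struct vpmu *vpmu = container_of(work, struct vpmu, overflow_work);

              kvm_vcpu_kick(vpmu->vcpu);
          }

          static void vpmu_setup(struct vpmu *vpmu)
          {
              init_irq_work(&vpmu->overflow_work, vpmu_kick_fn);
          }

          static void vpmu_overflow(struct vpmu *vpmu)
          {
              /* ... update the emulated overflow status here ... */
              if (in_nmi())
                  irq_work_queue(&vpmu->overflow_work);  /* NMI-safe deferral */
              else
                  kvm_vcpu_kick(vpmu->vcpu);
          }

          static void vpmu_destroy(struct vpmu *vpmu)
          {
              /* Flush a pending kick so it cannot run against a freed vcpu. */
              irq_work_sync(&vpmu->overflow_work);
          }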
  17. 16 May, 2020 (1 commit)
  18. 28 Jan, 2020 (4 commits)
  19. 20 Oct, 2019 (3 commits)
    • KVM: arm64: pmu: Reset sample period on overflow handling · 8c3252c0
      Committed by Marc Zyngier
      The PMU emulation code uses the perf event sample period to trigger
      the overflow detection. This works fine for the *first* overflow,
      but results in a huge number of interrupts on the host, unrelated
      to the number of interrupts handled in the guest (a 20x factor is
      pretty common for the cycle counter). On a slow system (such as a
      SW model), this can result in the guest only making forward progress
      at a glacial pace.
      
      It turns out that the clue is in the name. The sample period is
      exactly that: a period. And once an overflow has occurred, the
      following period should be the full width of the associated
      counter, instead of whatever the guest had initially programmed.
      
      Reset the sample period to the architected value in the overflow
      handler, which now results in a number of host interrupts that is
      much closer to the number of interrupts in the guest (see the
      sketch after this entry).
      
      Fixes: b02386eb ("arm64: KVM: Add PMU overflow interrupt routing")
      Reviewed-by: Andrew Murray <andrew.murray@arm.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      8c3252c0
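      A standalone sketch of the period computation, assuming 32-bit event
      counters and a 64-bit cycle counter; the function name is illustrative:

          #include <stdint.h>

          /* After an overflow, the next period spans the counter's full
           * remaining range rather than the guest's initial value. */
          static uint64_t next_sample_period(uint64_t counter, int is_64bit)
          {
              uint64_t period = -counter;     /* distance to the next wrap */

              if (!is_64bit)
                  period &= 0xffffffffULL;    /* 32-bit counters wrap sooner */

              return period;
          }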
    • KVM: arm64: pmu: Set the CHAINED attribute before creating the in-kernel event · 725ce669
      Committed by Marc Zyngier
      The current convention for KVM to request a chained event from the
      host PMU is to set bit[0] in attr.config1 (PERF_ATTR_CFG1_KVM_PMU_CHAINED).
      
      But as it turns out, this bit gets set *after* we create the kernel
      event that backs our virtual counter, meaning that we never get
      a 64bit counter.
      
      Moving the setting to an earlier point solves the problem (see the
      sketch below).
      
      Fixes: 80f393a2 ("KVM: arm/arm64: Support chained PMU counters")
      Reviewed-by: Andrew Murray <andrew.murray@arm.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      725ce669
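      A kernel-style sketch of the corrected ordering; counter_is_chained()
      and the overflow_handler symbol are placeholders, and the CHAINED flag
      is the KVM-private attr.config1 bit named in the message above:

          #include <linux/perf_event.h>

          static struct perf_event *create_backing_event(struct kvm_pmc *pmc,
                                                 struct perf_event_attr *attr)
          {
              /* The perf core snapshots attr at creation time, so every
               * flag, including CHAINED, must be set *before* this call. */
              if (counter_is_chained(pmc))
                  attr->config1 |= PERF_ATTR_CFG1_KVM_PMU_CHAINED;

              return perf_event_create_kernel_counter(attr, -1, current,
                                                      overflow_handler, pmc);
          }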
    • KVM: arm64: pmu: Fix cycle counter truncation · f4e23cf9
      Committed by Marc Zyngier
      When a counter is disabled, its value is sampled before the event is
      disabled, and the value is written back to the shadow register.
      
      In that process, the value gets truncated to 32 bits, which is adequate
      for any counter but the cycle counter (defined as a 64-bit counter).
      
      This obviously results in a corrupted counter, and things like
      "perf record -e cycles" not working at all when run in a guest...
      A similar, but less critical, bug exists in kvm_pmu_get_counter_value.
      
      Make the truncation conditional on the counter not being the cycle
      counter, which results in a minor code reorganisation (see the sketch
      after this entry).
      
      Fixes: 80f393a2 ("KVM: arm/arm64: Support chained PMU counters")
      Reviewed-by: Andrew Murray <andrew.murray@arm.com>
      Reported-by: Julien Thierry <julien.thierry.kdev@gmail.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      f4e23cf9
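      A standalone sketch of the conditional truncation; index 31 is the
      architected cycle-counter slot in the ARMv8 PMU, and the function name
      is illustrative:

          #include <stdint.h>

          #define CYCLE_COUNTER_IDX 31    /* PMCCNTR_EL0 */

          /* Event counters are 32 bits wide; the cycle counter keeps all
           * 64 bits when written back to the shadow register. */
          static uint64_t shadow_value(unsigned int idx, uint64_t sampled)
          {
              if (idx != CYCLE_COUNTER_IDX)
                  sampled = (uint32_t)sampled;    /* truncate event counters */

              return sampled;
          }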
  20. 23 Jul, 2019 (1 commit)
  21. 05 Jul, 2019 (1 commit)