1. 10 Nov, 2020 1 commit
  2. 30 Oct, 2020 1 commit
  3. 30 Sep, 2020 1 commit
  4. 29 Sep, 2020 4 commits
    • KVM: arm64: Get rid of kvm_arm_have_ssbd() · 73114677
      Authored by Marc Zyngier
      kvm_arm_have_ssbd() is now completely unused, get rid of it.
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Signed-off-by: Will Deacon <will@kernel.org>
      73114677
    • arm64: Rewrite Spectre-v2 mitigation code · d4647f0a
      Authored by Will Deacon
      The Spectre-v2 mitigation code is pretty unwieldy and hard to maintain.
      This is largely due to it being written hastily, without much clue as to
      how things would pan out, and also because it ends up mixing policy and
      state in such a way that it is very difficult to figure out what's going
      on.
      
      Rewrite the Spectre-v2 mitigation so that it clearly separates state from
      policy and follows a more structured approach to handling the mitigation.
      Signed-off-by: Will Deacon <will@kernel.org>
      d4647f0a
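      A minimal standalone sketch of the state/policy split described above. The enum values
      are modelled on what this rewrite introduces; the policy struct and the decision helper
      are purely illustrative assumptions, not the actual arm64 interfaces.

      enum mitigation_state {
              SPECTRE_UNAFFECTED,     /* state: hardware is not affected */
              SPECTRE_MITIGATED,      /* state: a mitigation is in place */
              SPECTRE_VULNERABLE,     /* state: affected and unmitigated */
      };

      struct mitigation_policy {
              int mitigation_disabled;        /* e.g. "nospectre_v2" on the command line */
      };

      /* The decision is a pure function of (detected state, requested policy),
       * which is what makes the rewritten code easier to reason about. */
      static enum mitigation_state
      spectre_v2_decide(enum mitigation_state hw_state, struct mitigation_policy p)
      {
              if (hw_state == SPECTRE_UNAFFECTED)
                      return SPECTRE_UNAFFECTED;
              if (p.mitigation_disabled)
                      return SPECTRE_VULNERABLE;
              return SPECTRE_MITIGATED;       /* a hardening vector would be installed here */
      }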
    • KVM: arm64: Add PMU event filtering infrastructure · d7eec236
      Authored by Marc Zyngier
      It can be desirable to expose a PMU to a guest, and yet not want the
      guest to be able to count some of the implemented events (because this
      would give information on shared resources, for example).
      
      For this, let's extend the PMUv3 device API, and offer a way to set up a
      bitmap of the allowed events (the default being no bitmap, and thus no
      filtering).
      
      Userspace can thus allow/deny ranges of events. The default policy
      depends on the "polarity" of the first filter setup (default deny if the
      filter allows events, and default allow if the filter denies events).
      This allows setting up exactly what is allowed for a given guest.
      
      Note that although the ioctl is per-vcpu, the map of allowed events is
      global to the VM (it can be set up from any vcpu until the vcpu PMU is
      initialized).
      Reviewed-by: Andrew Jones <drjones@redhat.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      d7eec236
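      A minimal userspace sketch of the filter setup described above, assuming the arm64 vcpu
      device attribute names (KVM_ARM_VCPU_PMU_V3_CTRL / KVM_ARM_VCPU_PMU_V3_FILTER) and the
      struct kvm_pmu_event_filter layout from the uapi headers; verify these against your
      kernel headers before relying on them.

      #include <stdint.h>
      #include <sys/ioctl.h>
      #include <linux/kvm.h>

      /* Deny `nevents` events starting at `base`. If this is the first filter
       * installed and its action is DENY, the default policy for all other
       * events becomes "allow", as described in the commit message. */
      static int deny_pmu_event_range(int vcpu_fd, uint16_t base, uint16_t nevents)
      {
              struct kvm_pmu_event_filter filter = {
                      .base_event = base,
                      .nevents    = nevents,
                      .action     = KVM_PMU_EVENT_DENY,
              };
              struct kvm_device_attr attr = {
                      .group = KVM_ARM_VCPU_PMU_V3_CTRL,
                      .attr  = KVM_ARM_VCPU_PMU_V3_FILTER,
                      .addr  = (uint64_t)(uintptr_t)&filter,
              };

              /* Issued on one vcpu fd, but the map of allowed events is global
               * to the VM; it can be configured from any vcpu until the vcpu
               * PMU is initialized. */
              return ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr);
      }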
    • KVM: arm64: Use event mask matching architecture revision · fd65a3b5
      Authored by Marc Zyngier
      The PMU code suffers from a small defect where we assume that the event
      number provided by the guest is always 16 bits wide, even if the CPU only
      implements the ARMv8.0 architecture. This isn't really problematic, in
      the sense that the event number ends up in a system register which crops
      it to the right width, but it still needs fixing.
      
      In order to make it work, let's probe the version of the PMU that the
      guest is going to use. This is done by temporarily creating a kernel
      event and looking at the PMUVer field that has been saved at probe time
      in the associated arm_pmu structure. This in turn gets saved in the kvm
      structure, and subsequently used to compute the event mask that gets
      used throughout the PMU code.
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      fd65a3b5
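      A small sketch of the resulting mask selection. The helper name is hypothetical; the
      widths come from the architecture (10-bit event numbers on an ARMv8.0 PMU, 16-bit from
      ARMv8.1 onwards), and the PMUVer encoding shown is the usual ID_AA64DFR0_EL1 value for
      a v8.0 PMU.

      #include <stdint.h>

      #define PMUVER_V8_0     1       /* ID_AA64DFR0_EL1.PMUVer encoding for an ARMv8.0 PMU */

      /* Hypothetical helper: pick the event mask matching the PMU revision
       * probed from the host arm_pmu and cached in the kvm structure. */
      static uint16_t kvm_pmu_event_mask_for(unsigned int pmuver)
      {
              if (pmuver == PMUVER_V8_0)
                      return 0x3ff;           /* ARMv8.0: 10-bit event number space */
              return 0xffff;                  /* ARMv8.1 and later: 16-bit event numbers */
      }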
  5. 16 Sep, 2020 3 commits
  6. 11 Sep, 2020 2 commits
  7. 22 Aug, 2020 1 commit
    • KVM: Pass MMU notifier range flags to kvm_unmap_hva_range() · fdfe7cbd
      Authored by Will Deacon
      The 'flags' field of 'struct mmu_notifier_range' is used to indicate
      whether invalidate_range_{start,end}() are permitted to block. In the
      case of kvm_mmu_notifier_invalidate_range_start(), this field is not
      forwarded on to the architecture-specific implementation of
      kvm_unmap_hva_range() and therefore the backend cannot sensibly decide
      whether or not to block.
      
      Add an extra 'flags' parameter to kvm_unmap_hva_range() so that
      architectures know whether or not they are permitted to block.
      
      Cc: <stable@vger.kernel.org>
      Cc: Marc Zyngier <maz@kernel.org>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      Message-Id: <20200811102725.7121-2-will@kernel.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      fdfe7cbd
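      A condensed sketch of the shape of the change, with stand-in types so it reads on its
      own; in the kernel, struct kvm and MMU_NOTIFIER_RANGE_BLOCKABLE come from kvm_host.h
      and mmu_notifier.h. The point is simply that the notifier range's flags now reach the
      arch backend, which can test the blockable bit before deciding to sleep.

      #include <stdbool.h>

      #define MMU_NOTIFIER_RANGE_BLOCKABLE (1u << 0)  /* stand-in for the real flag */

      struct kvm;     /* opaque stand-in */

      static int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start,
                                     unsigned long end, unsigned int flags)
      {
              /* The backend can now check whether it may sleep while tearing
               * down stage-2 mappings for [start, end). */
              bool may_block = flags & MMU_NOTIFIER_RANGE_BLOCKABLE;

              (void)kvm; (void)start; (void)end; (void)may_block;
              return 0;
      }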
  8. 21 Aug, 2020 2 commits
  9. 28 Jul, 2020 1 commit
  10. 10 Jul, 2020 3 commits
  11. 07 Jul, 2020 9 commits
  12. 06 Jul, 2020 2 commits
  13. 10 Jun, 2020 1 commit
  14. 09 Jun, 2020 1 commit
  15. 29 May, 2020 1 commit
  16. 28 May, 2020 1 commit
  17. 25 May, 2020 1 commit
  18. 16 May, 2020 2 commits
    • KVM: arm64: Support enabling dirty log gradually in small chunks · c862626e
      Authored by Keqian Zhu
      There is already support for enabling the dirty log gradually in small
      chunks on x86, added in commit 3c9bd400 ("KVM: x86: enable dirty log
      gradually in small chunks"). This adds the same support for arm64.
      
      x86 still write-protects all huge pages when DIRTY_LOG_INITIALLY_ALL_SET
      is enabled. For arm64, however, both huge pages and normal pages can be
      write-protected gradually by userspace.
      
      On a Huawei Kunpeng 920 2.6GHz platform, I ran some tests on 128G Linux
      VMs with different page sizes. The memory pressure is 127G in each
      case. The time taken by memory_global_dirty_log_start in QEMU is listed
      below:
      
      Page Size      Before    After Optimization
        4K            650ms         1.8ms
        2M             4ms          1.8ms
        1G             2ms          1.8ms
      
      Besides the time reduction, the biggest improvement is that we minimize
      the performance side effects on the guest (from dissolving huge pages
      and marking memslots dirty) after enabling the dirty log.
      Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20200413122023.52583-1-zhukeqian1@huawei.com
      c862626e
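      A minimal userspace sketch of selecting this behaviour, assuming the constant names from
      linux/kvm.h (KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2, KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE,
      KVM_DIRTY_LOG_INITIALLY_SET). With the "initially set" mode, turning on
      KVM_MEM_LOG_DIRTY_PAGES for a memslot no longer write-protects everything up front;
      userspace write-protects memory in chunks as it clears the dirty log.

      #include <sys/ioctl.h>
      #include <linux/kvm.h>

      /* Enable manual dirty-log protection with the "initially all set" mode,
       * so write protection is applied gradually via KVM_CLEAR_DIRTY_LOG
       * rather than all at once when dirty logging is turned on. */
      static int enable_gradual_dirty_log(int vm_fd)
      {
              struct kvm_enable_cap cap = {
                      .cap     = KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2,
                      .args[0] = KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE |
                                 KVM_DIRTY_LOG_INITIALLY_SET,
              };

              return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
      }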
    • kvm: add halt-polling cpu usage stats · cb953129
      Authored by David Matlack
      Two new stats for exposing halt-polling cpu usage:
      halt_poll_success_ns
      halt_poll_fail_ns
      
      The sum of these two stats is the total CPU time spent polling. "success"
      means the VCPU polled until a virtual interrupt was delivered. "fail"
      means the VCPU had to schedule out (either because the maximum poll time
      was reached or it needed to yield the CPU).
      
      To avoid touching every arch's kvm_vcpu_stat struct, only update and
      export halt-polling cpu usage stats if we're on x86.
      
      Exporting CPU usage as a u64 in nanoseconds means we will overflow after
      ~500 years, which seems more than sufficient.
      Signed-off-by: David Matlack <dmatlack@google.com>
      Signed-off-by: Jon Cargille <jcargill@google.com>
      Reviewed-by: Jim Mattson <jmattson@google.com>
      
      Message-Id: <20200508182240.68440-1-jcargill@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      cb953129
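      The relationship between the two counters can be sketched as below; this is illustrative
      standalone code, not the kvm_main.c implementation.

      #include <stdint.h>
      #include <stdbool.h>

      struct vcpu_stat {
              uint64_t halt_poll_success_ns;  /* poll ended with a wakeup event */
              uint64_t halt_poll_fail_ns;     /* poll timed out or had to schedule out */
      };

      /* Either way the time was spent polling, so the total polling time is
       * always the sum of the two counters. */
      static void account_halt_poll(struct vcpu_stat *stat, uint64_t poll_ns,
                                    bool interrupt_arrived)
      {
              if (interrupt_arrived)
                      stat->halt_poll_success_ns += poll_ns;
              else
                      stat->halt_poll_fail_ns += poll_ns;
      }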
  19. 04 May, 2020 1 commit
  20. 24 Mar, 2020 1 commit
  21. 17 Feb, 2020 1 commit