1. 29 Oct 2019, 1 commit
    • KVM: arm64: Don't set HCR_EL2.TVM when S2FWB is supported · 5c401308
      Christoffer Dall authored
      On CPUs that support S2FWB (Armv8.4+), KVM configures the stage 2 page
      tables to override the memory attributes of memory accesses, regardless
      of the stage 1 page table configurations, and also when the stage 1 MMU
      is turned off.  This results in all memory accesses to RAM being
      cacheable, including during early boot of the guest.
      
      On CPUs without this feature, memory accesses were non-cacheable during
      boot until the guest turned on the stage 1 MMU, and we had to detect
      when the guest turned on the MMU, such that we could invalidate all cache
      entries and ensure a consistent view of memory with the MMU turned on.
      When the guest turned on the caches, we would call stage2_flush_vm()
      from kvm_toggle_cache().
      
      However, stage2_flush_vm() walks all the stage 2 tables, and calls
      __kvm_flush_dcache_pte, which on a system with S2FWB does ... absolutely
      nothing.
      
      We can avoid that whole song and dance, and simply not set TVM when
      creating a VM on a system that has S2FWB.
      Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Link: https://lore.kernel.org/r/20191028130541.30536-1-christoffer.dall@arm.com
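      A minimal sketch of the resulting logic (the function name is illustrative; it assumes the ARM64_HAS_STAGE2_FWB capability bit and the HCR_TVM/HCR_FWB definitions already present in kernels of that era, and is not a quote of the actual patch):

        static inline void vcpu_reset_hcr_sketch(struct kvm_vcpu *vcpu)
        {
                vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS;

                if (cpus_have_const_cap(ARM64_HAS_STAGE2_FWB)) {
                        /* Stage 2 forces cacheable attributes, so there is no
                         * need to notice the guest turning its MMU/caches on. */
                        vcpu->arch.hcr_el2 |= HCR_FWB;
                } else {
                        /* Trap VM-control register writes so kvm_toggle_cache()
                         * can flush when the guest enables its stage 1 MMU. */
                        vcpu->arch.hcr_el2 |= HCR_TVM;
                }
        }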
  2. 26 Jul 2019, 1 commit
    • KVM: arm64: Update kvm_arm_exception_class and esr_class_str for new EC · 6701c619
      Zenghui Yu authored
      We've added two ESR exception classes for new ARM hardware extensions:
      ESR_ELx_EC_PAC and ESR_ELx_EC_SVE, but failed to update the strings
      used in tracing and other debug.
      
      Let's update "kvm_arm_exception_class" for these two ECs, so that the
      new classes become visible to user space via the kvm_exit trace event.
      Also update "esr_class_str" for ESR_ELx_EC_PAC, so that we get more
      readable debug info.
      
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Dave Martin <Dave.Martin@arm.com>
      Reviewed-by: James Morse <james.morse@arm.com>
      Signed-off-by: Zenghui Yu <yuzenghui@huawei.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
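      Both tables are EC-to-string mappings; the additions are roughly of the shape sketched below (entries abbreviated, array name illustrative, ESR_ELx_EC_* values assumed to come from asm/esr.h):

        /* Sketch: the style of entry added for the new exception classes. */
        static const char *const esr_class_str_sketch[] = {
                [ESR_ELx_EC_UNKNOWN]    = "Unknown/Uncategorized",
                [ESR_ELx_EC_PAC]        = "PAC",        /* newly added */
                [ESR_ELx_EC_SVE]        = "SVE",
                /* ... the remaining classes ... */
        };

      kvm_arm_exception_class is the equivalent list of {value, name} pairs fed to __print_symbolic() in the kvm_exit trace event, which is what makes the new ECs readable in trace output.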
  3. 19 Jun 2019, 1 commit
  4. 20 Dec 2018, 2 commits
    • arm64: KVM: Avoid setting the upper 32 bits of VTCR_EL2 to 1 · df655b75
      Will Deacon authored
      Although bit 31 of VTCR_EL2 is RES1, we inadvertently end up setting all
      of the upper 32 bits to 1 as well because we define VTCR_EL2_RES1 as
      signed, which is sign-extended when assigning to kvm->arch.vtcr.
      
      Lucky for us, the architecture currently treats these upper bits as RES0
      so, whilst we've been naughty, we haven't set fire to anything yet.
      
      Cc: <stable@vger.kernel.org>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Christoffer Dall <christoffer.dall@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
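      The underlying pitfall is ordinary C sign extension. A small standalone demonstration (not kernel code) of why a signed bit-31 constant poisons a 64-bit field, and how an unsigned constant avoids it:

        #include <inttypes.h>
        #include <stdint.h>
        #include <stdio.h>

        /* What the original definition effectively amounted to: a signed int
         * with only bit 31 set, i.e. a negative value on common compilers. */
        #define RES1_SIGNED   (1 << 31)
        /* The fix: keep the constant unsigned so it zero-extends. */
        #define RES1_UNSIGNED (1U << 31)

        int main(void)
        {
                uint64_t vtcr_bad  = RES1_SIGNED;   /* sign-extends into the upper 32 bits */
                uint64_t vtcr_good = RES1_UNSIGNED; /* only bit 31 set */

                printf("signed:   0x%016" PRIx64 "\n", vtcr_bad);   /* 0xffffffff80000000 */
                printf("unsigned: 0x%016" PRIx64 "\n", vtcr_good);  /* 0x0000000080000000 */
                return 0;
        }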
    • KVM: arm/arm64: Fixup the kvm_exit tracepoint · 71a7e47f
      Christoffer Dall authored
      The kvm_exit tracepoint strangely always reported exits as being IRQs.
      This seems to be because either the __print_symbolic or the tracepoint
      macros use a variable named idx.
      
      Take this chance to update the fields in the tracepoint to reflect the
      concepts in the arm64 architecture that we pass to the tracepoint and
      move the exception type table to the same location and header files as
      the exits code.
      
      We also clear out the exception code to 0 for IRQ exits (which
      translates to UNKNOWN in text) to make it slightly less confusing to
      parse the trace output.
      Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
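      A rough sketch of the reporting pattern (the wrapper name is illustrative): asynchronous exits carry no valid ESR class, so 0, which prints as UNKNOWN, is passed instead:

        /* Sketch: report the exit type plus exception class to the tracepoint. */
        static void report_exit_sketch(struct kvm_vcpu *vcpu, int exception_index)
        {
                u32 esr_ec = 0;         /* 0 prints as "UNKNOWN" */

                /* Only synchronous traps carry a meaningful ESR_EL2 class. */
                if (exception_index == ARM_EXCEPTION_TRAP)
                        esr_ec = kvm_vcpu_trap_get_class(vcpu);

                trace_kvm_exit(exception_index, esr_ec, *vcpu_pc(vcpu));
        }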
  5. 14 Dec 2018, 2 commits
  6. 03 Oct 2018, 1 commit
  7. 01 Oct 2018, 6 commits
  8. 18 Sep 2018, 1 commit
  9. 09 Jul 2018, 1 commit
  10. 19 Mar 2018, 1 commit
  11. 26 Feb 2018, 1 commit
    • arm64/kvm: Prohibit guest LOR accesses · cc33c4e2
      Mark Rutland authored
      We don't currently limit guest accesses to the LOR registers, which we
      neither virtualize nor context-switch. As such, guests are provided with
      unusable information/controls, and are not isolated from each other (or
      the host).
      
      To prevent these issues, we can trap register accesses and present the
      illusion that LORegions are unsupported by the CPU. To do this, we mask
      ID_AA64MMFR1.LO, and set HCR_EL2.TLOR to trap accesses to the following
      registers:
      
      * LORC_EL1
      * LOREA_EL1
      * LORID_EL1
      * LORN_EL1
      * LORSA_EL1
      
      ... when trapped, we inject an UNDEFINED exception to EL1, simulating
      their non-existence.
      
      As noted in D7.2.67, when no LORegions are implemented, LoadLOAcquire
      and StoreLORelease must behave as LoadAcquire and StoreRelease
      respectively. We can ensure this by clearing LORC_EL1.EN when a CPU's
      EL2 is first initialized, as the host kernel will not modify this.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Vladimir Murzin <vladimir.murzin@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christoffer Dall <christoffer.dall@linaro.org>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: kvmarm@lists.cs.columbia.edu
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
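      On the trap side, the handler can be as simple as the sketch below (simplified; the real handler may special-case LORID_EL1 or consult the host's ID register, and the name here is illustrative):

        /* With HCR_EL2.TLOR set, LOR*_EL1 accesses from the guest trap to EL2.
         * Since ID_AA64MMFR1.LO is masked to zero for the guest, behave as if
         * the registers do not exist: inject an UNDEFINED exception. */
        static bool trap_loregion_sketch(struct kvm_vcpu *vcpu,
                                         struct sys_reg_params *p,
                                         const struct sys_reg_desc *r)
        {
                kvm_inject_undefined(vcpu);
                return false;   /* don't skip the instruction; the UNDEF is delivered instead */
        }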
  12. 16 Jan 2018, 1 commit
  13. 29 Nov 2017, 1 commit
  14. 03 Nov 2017, 2 commits
  15. 23 Jun 2017, 1 commit
  16. 03 Feb 2017, 1 commit
    • arm64: KVM: Save/restore the host SPE state when entering/leaving a VM · f85279b4
      Will Deacon authored
      The SPE buffer is virtually addressed, using the page tables of the CPU
      MMU. Unusually, this means that the EL0/1 page table may be live whilst
      we're executing at EL2 on non-VHE configurations. When VHE is in use,
      we can use the same property to profile the guest behind its back.
      
      This patch adds the relevant disabling and flushing code to KVM so that
      the host can make use of SPE without corrupting guest memory, and any
      attempts by a guest to use SPE will result in a trap.
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Alex Bennée <alex.bennee@linaro.org>
      Cc: Christoffer Dall <christoffer.dall@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
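      Very roughly, the host-side save path looks like the sketch below (function names illustrative; the enablement check and ordering are simplified relative to the real code, and it assumes the SYS_PMSCR_EL1 encoding and psb_csync()/dsb() helpers available in the arm64 tree of that period):

        /* Sketch: stop host SPE profiling and drain its buffer before the
         * guest's stage 2 translation is installed; restore it afterwards. */
        static u64 host_pmscr;

        static void spe_save_host_sketch(void)
        {
                host_pmscr = read_sysreg_s(SYS_PMSCR_EL1);
                if (!host_pmscr)
                        return;                         /* profiling not enabled */

                write_sysreg_s(0, SYS_PMSCR_EL1);       /* disable profiling */
                isb();

                psb_csync();                            /* push out buffered records */
                dsb(nsh);                               /* ...and wait for the writes */
        }

        static void spe_restore_host_sketch(void)
        {
                write_sysreg_s(host_pmscr, SYS_PMSCR_EL1);
        }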
  17. 08 Sep 2016, 1 commit
  18. 14 Jun 2016, 1 commit
  19. 10 May 2016, 1 commit
    • kvm: arm64: Enable hardware updates of the Access Flag for Stage 2 page tables · 06485053
      Catalin Marinas authored
      The ARMv8.1 architecture extensions introduce support for hardware
      updates of the access and dirty information in page table entries. With
      VTCR_EL2.HA enabled (bit 21), when the CPU accesses an IPA with the
      PTE_AF bit cleared in the stage 2 page table, instead of raising an
      Access Flag fault to EL2 the CPU sets the actual page table entry bit
      (10). To ensure that kernel modifications to the page table do not
      inadvertently revert a bit set by hardware updates, certain Stage 2
      software pte/pmd operations must be performed atomically.
      
      The main user of the AF bit is the kvm_age_hva() mechanism. The
      kvm_age_hva_handler() function performs a "test and clear young" action
      on the pte/pmd. This needs to be atomic in respect of automatic hardware
      updates of the AF bit. Since the AF bit is in the same position for both
      Stage 1 and Stage 2, the patch reuses the existing
      ptep_test_and_clear_young() functionality if
      __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG is defined. Otherwise, the
      existing pte_young/pte_mkold mechanism is preserved.
      
      The kvm_set_s2pte_readonly() (and the corresponding pmd equivalent) have
      to perform atomic modifications in order to avoid a race with updates of
      the AF bit. The arm64 implementation has been re-written using
      exclusives.
      
      Currently, kvm_set_s2pte_writable() (and pmd equivalent) take a pointer
      argument and modify the pte/pmd in place. However, these functions are
      only used on local variables rather than actual page table entries, so
      it makes more sense to follow the pte_mkwrite() approach for stage 1
      attributes. The change to kvm_s2pte_mkwrite() makes it clear that these
      functions do not modify the actual page table entries.
      
      The (pte|pmd)_mkyoung() uses on Stage 2 entries (setting the AF bit
      explicitly) do not need to be modified since hardware updates of the
      dirty status are not supported by KVM, so there is no possibility of
      losing such information.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
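      The flavour of atomicity needed is sketched below (name illustrative; the in-tree arm64 version achieves the same thing with exclusives or a cmpxchg on the pte value):

        /* Sketch: "test and clear young" that is safe against the CPU
         * setting the AF bit concurrently.  A cmpxchg loop replaces the
         * plain read-modify-write. */
        static bool s2_test_and_clear_young_sketch(pte_t *ptep)
        {
                pteval_t old, new;

                do {
                        old = READ_ONCE(pte_val(*ptep));
                        if (!(old & PTE_AF))
                                return false;           /* already old */
                        new = old & ~PTE_AF;
                } while (cmpxchg(&pte_val(*ptep), old, new) != old);

                return true;                            /* was young */
        }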
  20. 28 Apr 2016, 1 commit
  21. 21 Apr 2016, 3 commits
  22. 06 Apr 2016, 1 commit
    • arm64: KVM: Warn when PARange is less than 40 bits · 6141570c
      Marc Zyngier authored
      We always thought that 40 bits of PA range would be the minimum people
      would actually build. Anything less is terrifyingly small.
      
      Turns out that we were both right and wrong. Nobody has ever built
      such a system, but the ARM Foundation Model has a PARange set to 36 bits.
      Just because we can. Oh well. Now, the KVM API explicitly says that
      we offer a 40-bit PA space to the VM, so we shouldn't run KVM on
      the Foundation Model at all.
      
      That being said, this patch offers a less aggressive alternative, and
      loudly warns about the configuration being unsupported. You'll still
      be able to run VMs (at your own risk, though).
      
      This is just a workaround until we have a proper userspace API where
      we report the PARange to userspace.
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
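      A minimal sketch of the check (function name illustrative; the warning text and exact placement are assumptions):

        /* Sketch: warn, but keep going, when the CPU cannot back the 40-bit
         * IPA space that the KVM API promises to userspace.  In
         * ID_AA64MMFR0_EL1, PARange lives in bits [3:0]; the encoding 0x2
         * means 40 bits. */
        static void check_parange_sketch(void)
        {
                u64 mmfr0 = read_sysreg(id_aa64mmfr0_el1);
                unsigned int parange = mmfr0 & 0xf;

                if (parange < 0x2)
                        kvm_err("PARange below 40 bits; running VMs is unsupported\n");
        }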
  23. 31 Mar 2016, 1 commit
  24. 05 Mar 2016, 1 commit
  25. 01 Mar 2016, 2 commits
  26. 11 Feb 2016, 1 commit
  27. 25 Jan 2016, 1 commit
  28. 18 Dec 2015, 1 commit
  29. 23 Oct 2015, 1 commit
    • arm/arm64: KVM: Improve kvm_exit tracepoint · b5905dc1
      Christoffer Dall authored
      The ARM architecture only saves the exit class to the HSR (ESR_EL2 for
      arm64) on synchronous exceptions, not on asynchronous exceptions like an
      IRQ.  However, we only report the exception class on kvm_exit, which is
      confusing because an IRQ looks like it exited at some PC with the same
      reason as the previous exit.  Add a lookup table for the exception index
      and prepend the kvm_exit tracepoint text with the exception type to
      clarify this situation.
      
      Also resolve the exception class (EC) to a human-friendly text version
      so the trace output becomes immediately usable for debugging this code.
      
      Cc: Wei Huang <wei@redhat.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
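      In spirit, the added lookup is just an index-to-label table; the tracepoint consumes it through __print_symbolic(), but the idea is captured by this sketch (names illustrative):

        /* Sketch: short label for the exit index KVM already records, used to
         * prefix the kvm_exit trace text so IRQ exits no longer masquerade as
         * trapped exceptions. */
        static const char *exception_type_str_sketch(int exception_index)
        {
                switch (exception_index) {
                case ARM_EXCEPTION_IRQ:  return "IRQ";
                case ARM_EXCEPTION_TRAP: return "TRAP";
                default:                 return "UNKNOWN";
                }
        }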