1. 10 Nov 2020 (2 commits)
  2. 29 Sep 2020 (1 commit)
  3. 19 Sep 2020 (2 commits)
  4. 30 Jul 2020 (1 commit)
  5. 07 Jul 2020 (4 commits)
  6. 06 Jul 2020 (2 commits)
    • KVM: arm64: Rename HSR to ESR · 3a949f4c
      Committed by Gavin Shan
      kvm/arm32 isn't supported since commit 541ad015 ("arm: Remove
      32bit KVM host support"). So HSR isn't meaningful since then. This
      renames HSR to ESR accordingly. This shouldn't cause any functional
      changes:
      
         * Rename kvm_vcpu_get_hsr() to kvm_vcpu_get_esr() to make the
           function names self-explanatory.
         * Rename variables from @hsr to @esr to make them self-explanatory.
      
      Note that the renaming on uapi and tracepoint will cause ABI changes,
      which we should avoid. Specifically, there are 4 related source files
      in this regard:
      
         * arch/arm64/include/uapi/asm/kvm.h  (struct kvm_debug_exit_arch::hsr)
         * arch/arm64/kvm/handle_exit.c       (struct kvm_debug_exit_arch::hsr)
         * arch/arm64/kvm/trace_arm.h         (tracepoints)
         * arch/arm64/kvm/trace_handle_exit.h (tracepoints)
      Signed-off-by: Gavin Shan <gshan@redhat.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Acked-by: Andrew Scull <ascull@google.com>
      Link: https://lore.kernel.org/r/20200630015705.103366-1-gshan@redhat.com
      3a949f4c
    • KVM: arm64: Remove __hyp_text macro, use build rules instead · c50cb043
      Committed by David Brazdil
      With nVHE code now fully separated from the rest of the kernel, the effects of
      the __hyp_text macro (which had to be applied to all nVHE code) can be
      achieved with build rules instead. The macro used to:
        (a) move code to .hyp.text ELF section, now done by renaming .text using
            `objcopy`, and
        (b) `notrace` and `__noscs` would negate effects of CC_FLAGS_FTRACE and
            CC_FLAGS_SCS, respectively; now those flags are erased from
            KBUILD_CFLAGS (the same way as in the EFI stub).
      
      Note that by removing __hyp_text from code shared with VHE, all VHE code is now
      compiled into .text and without `notrace` and `__noscs`.
      
      Use of '.pushsection .hyp.text' removed from assembly files as this is now also
      covered by the build rules.
      
      For MAINTAINERS: should this need to be re-run, uses of the macro were
      removed with the following command. Formatting was fixed up manually.
      
        find arch/arm64/kvm/hyp -type f -name '*.c' -o -name '*.h' \
             -exec sed -i 's/ __hyp_text//g' {} +
      Signed-off-by: David Brazdil <dbrazdil@google.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20200625131420.71444-15-dbrazdil@google.com
      c50cb043
  7. 09 Jun 2020 (1 commit)
    • KVM: arm64: Save the host's PtrAuth keys in non-preemptible context · ef3e40a7
      Committed by Marc Zyngier
      When using the PtrAuth feature in a guest, we need to save the host's
      keys before allowing the guest to program them. For that, we dump
      them in a per-CPU data structure (the so-called host context).
      
      But both call sites that do this are in preemptible context,
      which may end up in disaster should the vcpu thread get preempted
      before reentering the guest.
      
      Instead, save the keys eagerly on each vcpu_load(). This has an
      increased overhead, but is at least safe.
      
      Cc: stable@vger.kernel.org
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      ef3e40a7
  8. 24 Mar 2020 (1 commit)
  9. 17 Mar 2020 (1 commit)
  10. 22 Feb 2020 (1 commit)
  11. 23 Jan 2020 (1 commit)
    • KVM: arm/arm64: Cleanup MMIO handling · 0e20f5e2
      Committed by Marc Zyngier
      Our MMIO handling is a bit odd, in the sense that it uses an
      intermediate per-vcpu structure to store the various decoded
      information that describe the access.
      
      But the same information is readily available in the HSR/ESR_EL2
      field, and we actually use this field to populate the structure.
      
      Let's simplify the whole thing by getting rid of the superfluous
      structure, saving a (tiny) bit of space in the vcpu structure.
      
      [32bit fix courtesy of Olof Johansson <olof@lixom.net>]
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      0e20f5e2
  12. 20 Jan 2020 (2 commits)
    • KVM: arm/arm64: Correct AArch32 SPSR on exception entry · 1cfbb484
      Committed by Mark Rutland
      Confusingly, there are three SPSR layouts that a kernel may need to deal
      with:
      
      (1) An AArch64 SPSR_ELx view of an AArch64 pstate
      (2) An AArch64 SPSR_ELx view of an AArch32 pstate
      (3) An AArch32 SPSR_* view of an AArch32 pstate
      
      When the KVM AArch32 support code deals with SPSR_{EL2,HYP}, it's either
      dealing with #2 or #3 consistently. On arm64 the PSR_AA32_* definitions
      match the AArch64 SPSR_ELx view, and on arm the PSR_AA32_* definitions
      match the AArch32 SPSR_* view.
      
      However, when we inject an exception into an AArch32 guest, we have to
      synthesize the AArch32 SPSR_* that the guest will see. Thus, an AArch64
      host needs to synthesize layout #3 from layout #2.
      
      This patch adds a new host_spsr_to_spsr32() helper for this, and makes
      use of it in the KVM AArch32 support code. For arm64 we need to shuffle
      the DIT bit around, and remove the SS bit, while for arm we can use the
      value as-is.
      
      I've open-coded the bit manipulation for now to avoid having to rework
      the existing PSR_* definitions into PSR64_AA32_* and PSR32_AA32_*
      definitions. I hope to perform a more thorough refactoring in future so
      that we can handle pstate view manipulation more consistently across the
      kernel tree.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
      Cc: stable@vger.kernel.org
      Link: https://lore.kernel.org/r/20200108134324.46500-4-mark.rutland@arm.com
      1cfbb484
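      The DIT/SS shuffle described above can be sketched in C. This is an
      illustrative model, not the kernel's code: it assumes DIT sits at bit 24
      (and SS at bit 21) in the SPSR_ELx view of an AArch32 pstate, and that
      DIT sits at bit 21 in the AArch32 SPSR_* view.

```c
#include <stdint.h>

#define BIT(n) (1UL << (n))

/* Assumed bit positions (see lead-in): SPSR_ELx view of an AArch32 pstate
 * has DIT at bit 24 and SS at bit 21; AArch32 SPSR has DIT at bit 21. */
#define SPSR_ELx_AA32_DIT BIT(24)
#define SPSR_ELx_SS       BIT(21)
#define SPSR_AA32_DIT     BIT(21)

/* Synthesize layout #3 (AArch32 SPSR_*) from layout #2 (SPSR_ELx view of an
 * AArch32 pstate): move DIT down to bit 21 and drop SS, which has no
 * AArch32 equivalent. */
static unsigned long host_spsr_to_spsr32(unsigned long spsr)
{
	unsigned long dit = !!(spsr & SPSR_ELx_AA32_DIT);

	spsr &= ~(SPSR_ELx_AA32_DIT | SPSR_ELx_SS);
	return spsr | (dit ? SPSR_AA32_DIT : 0);
}
```

      On arm, per the commit message, the value can be used as-is, so no
      conversion is needed there.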
    • KVM: arm64: Only sign-extend MMIO up to register width · b6ae256a
      Committed by Christoffer Dall
      On AArch64 you can do a sign-extended load to either a 32-bit or 64-bit
      register, and we should only sign extend the register up to the width of
      the register as specified in the operation (by using the 32-bit Wn or
      64-bit Xn register specifier).
      
      As it turns out, the architecture provides this decoding information in
      the SF ("Sixty-Four" -- how cute...) bit.
      
      Let's take advantage of this with the usual 32-bit/64-bit header file
      dance and do the right thing on AArch64 hosts.
      Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Cc: stable@vger.kernel.org
      Link: https://lore.kernel.org/r/20191212195055.5541-1-christoffer.dall@arm.com
      b6ae256a
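      A rough model of the fix, as an illustration rather than the kernel's
      actual helper (the names here are ours): sign-extend the loaded data up
      to the access width, then truncate to 32 bits when the SF bit indicates
      a Wn (32-bit) destination register.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative helper: `len` is the access width in bytes, `sign_extend`
 * mirrors the SSE bit, and `sixty_four` mirrors the SF bit selecting an
 * Xn vs. Wn destination register. */
static uint64_t mmio_read_extend(uint64_t data, unsigned len,
				 bool sign_extend, bool sixty_four)
{
	if (sign_extend) {
		unsigned shift = 64 - 8 * len;

		/* Arithmetic right shift replicates the sign bit. */
		data = (uint64_t)(((int64_t)(data << shift)) >> shift);
	}
	if (!sixty_four)
		data &= 0xffffffffUL; /* writes to Wn zero the upper half */
	return data;
}
```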
  13. 08 Nov 2019 (1 commit)
  14. 29 Oct 2019 (1 commit)
    • KVM: arm64: Don't set HCR_EL2.TVM when S2FWB is supported · 5c401308
      Committed by Christoffer Dall
      On CPUs that support S2FWB (Armv8.4+), KVM configures the stage 2 page
      tables to override the memory attributes of memory accesses, regardless
      of the stage 1 page table configurations, and also when the stage 1 MMU
      is turned off.  This results in all memory accesses to RAM being
      cacheable, including during early boot of the guest.
      
      On CPUs without this feature, memory accesses were non-cacheable during
      boot until the guest turned on the stage 1 MMU, and we had to detect
      when the guest turned on the MMU, such that we could invalidate all cache
      entries and ensure a consistent view of memory with the MMU turned on.
      When the guest turned on the caches, we would call stage2_flush_vm()
      from kvm_toggle_cache().
      
      However, stage2_flush_vm() walks all the stage 2 tables, and calls
      __kvm_flush_dcache_pte, which on a system with S2FWB does ... absolutely
      nothing.
      
      We can avoid that whole song and dance, and simply not set TVM when
      creating a VM on a system that has S2FWB.
      Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Link: https://lore.kernel.org/r/20191028130541.30536-1-christoffer.dall@arm.com
      5c401308
  15. 22 Oct 2019 (1 commit)
    • KVM: arm/arm64: Allow reporting non-ISV data aborts to userspace · c726200d
      Committed by Christoffer Dall
      For a long time, if a guest accessed memory outside of a memslot using
      any of the load/store instructions in the architecture which don't
      supply decoding information in the ESR_EL2 (the ISV bit is not set), the
      kernel would print the following message and terminate the VM as a
      result of returning -ENOSYS to userspace:
      
        load/store instruction decoding not implemented
      
      The reason behind this message is that KVM assumes that all accesses
      outside a memslot are MMIO accesses which should be handled by
      userspace, and we originally expected to eventually implement some sort
      of decoding of load/store instructions where the ISV bit was not set.
      
      However, it turns out that many of the instructions which don't provide
      decoding information on abort are not safe to use for MMIO accesses, and
      the remaining few that would potentially make sense to use on MMIO
      accesses, such as those with register writeback, are not used in
      practice.  It also turns out that fetching an instruction from guest
      memory can be a pretty horrible affair, involving stopping all CPUs on
      SMP systems, handling multiple corner cases of address translation in
      software, and more.  It doesn't appear likely that we'll ever implement
      this in the kernel.
      
      What is much more common is that a user has misconfigured his/her guest
      and is actually not accessing an MMIO region, but just hitting some
      random hole in the IPA space.  In this scenario, the error message above
      is almost misleading and has led to a great deal of confusion over the
      years.
      
      It is, nevertheless, ABI to userspace, and we therefore need to
      introduce a new capability that userspace explicitly enables to change
      behavior.
      
      This patch introduces KVM_CAP_ARM_NISV_TO_USER (NISV meaning Non-ISV)
      which does exactly that, and introduces a new exit reason to report the
      event to userspace.  User space can then emulate an exception to the
      guest, restart the guest, suspend the guest, or take any other
      appropriate action as per the policy of the running system.
      Reported-by: Heinrich Schuchardt <xypron.glpk@gmx.de>
      Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
      Reviewed-by: Alexander Graf <graf@amazon.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      c726200d
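      The resulting decision logic can be modeled as below; this is a
      simplified sketch of the behaviour the commit describes, not the
      in-kernel code.

```c
#include <stdbool.h>

/* Simplified model: outcome of a data abort outside a memslot. */
enum abort_outcome {
	DECODE_IN_KERNEL,  /* ISV set: ESR_EL2 carries the decode info */
	EXIT_TO_USERSPACE, /* no ISV, but KVM_CAP_ARM_NISV_TO_USER enabled */
	RETURN_ENOSYS,     /* no ISV, cap not enabled: legacy behaviour */
};

static enum abort_outcome handle_mmio_abort(bool isv, bool nisv_to_user)
{
	if (isv)
		return DECODE_IN_KERNEL;
	return nisv_to_user ? EXIT_TO_USERSPACE : RETURN_ENOSYS;
}
```

      With the capability enabled, userspace receives the new exit reason
      and can choose its own policy, as the message describes.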
  16. 05 Jul 2019 (2 commits)
    • KVM: arm64: Migrate _elx sysreg accessors to msr_s/mrs_s · fdec2a9e
      Committed by Dave Martin
      Currently, the {read,write}_sysreg_el*() accessors for accessing
      particular ELs' sysregs in the presence of VHE rely on some local
      hacks and define their system register encodings in a way that is
      inconsistent with the core definitions in <asm/sysreg.h>.
      
      As a result, it is necessary to add duplicate definitions for any
      system register that already needs a definition in sysreg.h for
      other reasons.
      
      This is a bit of a maintenance headache, and the reasons for the
      _el*() accessors working the way they do are a bit historical.
      
      This patch gets rid of the shadow sysreg definitions in
      <asm/kvm_hyp.h>, converts the _el*() accessors to use the core
      __msr_s/__mrs_s interface, and converts all call sites to use the
      standard sysreg #define names (i.e., upper case, with SYS_ prefix).
      
      This patch will conflict heavily anyway, so the opportunity
      to clean up some bad whitespace in the context of the changes is
      taken.
      
      The change exposes a few system registers that have no sysreg.h
      definition, due to msr_s/mrs_s being used in place of msr/mrs:
      additions are made in order to fill in the gaps.
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christoffer Dall <christoffer.dall@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Link: https://www.spinics.net/lists/kvm-arm/msg31717.html
      [Rebased to v4.21-rc1]
      Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
      [Rebased to v5.2-rc5, changelog updates]
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      fdec2a9e
    • KVM: arm/arm64: Add save/restore support for firmware workaround state · 99adb567
      Committed by Andre Przywara
      KVM implements the firmware interface for mitigating cache speculation
      vulnerabilities. Guests may use this interface to ensure mitigation is
      active.
      If we want to migrate such a guest to a host with a different support
      level for those workarounds, migration might need to fail, to ensure that
      critical guests don't lose their protection.
      
      Introduce a way for userland to save and restore the workarounds state.
      On restoring we do checks that make sure we don't downgrade our
      mitigation level.
      Signed-off-by: Andre Przywara <andre.przywara@arm.com>
      Reviewed-by: Eric Auger <eric.auger@redhat.com>
      Reviewed-by: Steven Price <steven.price@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      99adb567
  17. 19 Jun 2019 (1 commit)
  18. 24 Apr 2019 (1 commit)
    • KVM: arm/arm64: Context-switch ptrauth registers · 384b40ca
      Committed by Mark Rutland
      When pointer authentication is supported, a guest may wish to use it.
      This patch adds the necessary KVM infrastructure for this to work, with
      a semi-lazy context switch of the pointer auth state.
      
      The pointer authentication feature is only enabled when VHE is built
      into the kernel and present in the CPU implementation, so only VHE
      code paths are modified.
      
      When we schedule a vcpu, we disable guest usage of pointer
      authentication instructions and accesses to the keys. While these are
      disabled, we avoid context-switching the keys. When we trap the guest
      trying to use pointer authentication functionality, we change to eagerly
      context-switching the keys, and enable the feature. The next time the
      vcpu is scheduled out/in, we start again. However, the host key save is
      optimized and implemented inside the ptrauth instruction/register access
      trap.
      
      Pointer authentication consists of address authentication and generic
      authentication, and CPUs in a system might have varied support for
      either. Where support for either feature is not uniform, it is hidden
      from guests via ID register emulation, as a result of the cpufeature
      framework in the host.
      
      Unfortunately, address authentication and generic authentication cannot
      be trapped separately, as the architecture provides a single EL2 trap
      covering both. If we wish to expose one without the other, we cannot
      prevent a (badly-written) guest from intermittently using a feature
      which is not uniformly supported (when scheduled on a physical CPU which
      supports the relevant feature). Hence, this patch expects both types of
      authentication to be present in a CPU.
      
      This key switch is done from the guest enter/exit assembly as
      preparation for the upcoming in-kernel pointer authentication support.
      Hence, these key-switching routines are not implemented in C code, as
      they may cause pointer authentication key signing errors in some
      situations.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      [Only VHE, key switch in full assembly, vcpu_has_ptrauth checks
      , save host key in ptrauth exception trap]
      Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Reviewed-by: Julien Thierry <julien.thierry@arm.com>
      Cc: Christoffer Dall <christoffer.dall@arm.com>
      Cc: kvmarm@lists.cs.columbia.edu
      [maz: various fixups]
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      384b40ca
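      The semi-lazy scheme can be sketched as a toy state machine; this is
      our own simplification for illustration, not kernel code.

```c
#include <stdbool.h>

/* Toy model of the semi-lazy switch: ptrauth traps stay enabled until the
 * guest first uses the feature; only then are keys context-switched. */
struct toy_vcpu {
	bool trap_enabled; /* guest ptrauth use traps to the hypervisor */
	bool keys_loaded;  /* guest keys currently live in hardware */
};

/* On each schedule-in, go back to trapping and avoid switching keys. */
static void toy_vcpu_load(struct toy_vcpu *v)
{
	v->trap_enabled = true;
	v->keys_loaded = false;
}

/* First guest use of ptrauth: context-switch the keys and stop trapping
 * until the vcpu is next scheduled out. */
static void toy_handle_ptrauth_trap(struct toy_vcpu *v)
{
	v->keys_loaded = true;
	v->trap_enabled = false;
}
```

      A guest that never touches ptrauth thus never pays for the key switch.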
  19. 20 Feb 2019 (3 commits)
  20. 18 Dec 2018 (1 commit)
    • arm64: KVM: Consistently advance singlestep when emulating instructions · bd7d95ca
      Committed by Mark Rutland
      When we emulate a guest instruction, we don't advance the hardware
      singlestep state machine, and thus the guest will receive a software
      step exception after the next instruction which is not emulated by the
      host.
      
      We bodge around this in an ad-hoc fashion. Sometimes we explicitly check
      whether userspace requested a single step, and fake a debug exception
      from within the kernel. Other times, we advance the HW singlestep state
      and rely on the HW to generate the exception for us. Thus, the observed
      step behaviour differs for host and guest.
      
      Let's make this simpler and consistent by always advancing the HW
      singlestep state machine when we skip an instruction. Thus we can rely
      on the hardware to generate the singlestep exception for us, and never
      need to explicitly check for an active-pending step, nor do we need to
      fake a debug exception from the guest.
      
      Cc: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
      Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      bd7d95ca
  21. 21 Sep 2018 (1 commit)
  22. 21 Jul 2018 (1 commit)
  23. 09 Jul 2018 (2 commits)
  24. 06 Jul 2018 (1 commit)
  25. 04 May 2018 (1 commit)
    • KVM: arm64: Fix order of vcpu_write_sys_reg() arguments · 1975fa56
      Committed by James Morse
      A typo in kvm_vcpu_set_be()'s call:
      | vcpu_write_sys_reg(vcpu, SCTLR_EL1, sctlr)
      causes us to use the 32bit register value as an index into the sys_reg[]
      array, and sail off the end of the linear map when we try to bring up
      big-endian secondaries.
      
      | Unable to handle kernel paging request at virtual address ffff80098b982c00
      | Mem abort info:
      |  ESR = 0x96000045
      |  Exception class = DABT (current EL), IL = 32 bits
      |   SET = 0, FnV = 0
      |   EA = 0, S1PTW = 0
      | Data abort info:
      |   ISV = 0, ISS = 0x00000045
      |   CM = 0, WnR = 1
      | swapper pgtable: 4k pages, 48-bit VAs, pgdp = 000000002ea0571a
      | [ffff80098b982c00] pgd=00000009ffff8803, pud=0000000000000000
      | Internal error: Oops: 96000045 [#1] PREEMPT SMP
      | Modules linked in:
      | CPU: 2 PID: 1561 Comm: kvm-vcpu-0 Not tainted 4.17.0-rc3-00001-ga912e2261ca6-dirty #1323
      | Hardware name: ARM Juno development board (r1) (DT)
      | pstate: 60000005 (nZCv daif -PAN -UAO)
      | pc : vcpu_write_sys_reg+0x50/0x134
      | lr : vcpu_write_sys_reg+0x50/0x134
      
      | Process kvm-vcpu-0 (pid: 1561, stack limit = 0x000000006df4728b)
      | Call trace:
      |  vcpu_write_sys_reg+0x50/0x134
      |  kvm_psci_vcpu_on+0x14c/0x150
      |  kvm_psci_0_2_call+0x244/0x2a4
      |  kvm_hvc_call_handler+0x1cc/0x258
      |  handle_hvc+0x20/0x3c
      |  handle_exit+0x130/0x1ec
      |  kvm_arch_vcpu_ioctl_run+0x340/0x614
      |  kvm_vcpu_ioctl+0x4d0/0x840
      |  do_vfs_ioctl+0xc8/0x8d0
      |  ksys_ioctl+0x78/0xa8
      |  sys_ioctl+0xc/0x18
      |  el0_svc_naked+0x30/0x34
      | Code: 73620291 604d00b0 00201891 1ab10194 (957a33f8)
      |---[ end trace 4b4a4f9628596602 ]---
      
      Fix the order of the arguments.
      
      Fixes: 8d404c4c ("KVM: arm64: Rewrite system register accessors to read/write functions")
      CC: Christoffer Dall <cdall@cs.columbia.edu>
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      1975fa56
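      A toy model (entirely ours, not the kernel's) of why the swapped
      arguments crash: with the (value, index) order reversed, the large
      register value becomes an out-of-bounds index into sys_reg[].

```c
#include <stdint.h>
#include <stddef.h>

#define NR_SYS_REGS 8 /* toy size; the real array is larger */
static uint64_t sys_reg[NR_SYS_REGS];

/* Toy accessor: the correct argument order is (value, register index). */
static int toy_write_sys_reg(uint64_t val, size_t reg)
{
	if (reg >= NR_SYS_REGS)
		return -1; /* out of bounds; the real code had no such check
			    * and sailed off the end of the linear map */
	sys_reg[reg] = val;
	return 0;
}
```

      Calling it with the arguments swapped, as the typo did, turns a 32-bit
      register value into the array index and fails the bounds check.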
  26. 19 Mar 2018 (4 commits)