1. 16 May 2020 (6 commits)
  2. 01 May 2020 (1 commit)
    • KVM: arm64: Fix 32bit PC wrap-around · 0225fd5e
      By Marc Zyngier
      In the unlikely event that a 32bit vcpu traps into the hypervisor
      on an instruction that is located right at the end of the 32bit
      range, the emulation of that instruction is going to increment
      PC past the 32bit range. This isn't great, as userspace can then
      observe this value and get a bit confused.
      
Conversely, userspace can do things like (in the context of a 64bit
      guest that is capable of 32bit EL0) setting PSTATE to AArch64-EL0,
      set PC to a 64bit value, change PSTATE to AArch32-USR, and observe
      that PC hasn't been truncated. More confusion.
      
      Fix both by:
      - truncating PC increments for 32bit guests
      - sanitizing all 32bit regs every time a core reg is changed by
        userspace, and that PSTATE indicates a 32bit mode.
      
      Cc: stable@vger.kernel.org
      Acked-by: Will Deacon <will@kernel.org>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      0225fd5e
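      The truncation described above can be sketched as follows. This is a hypothetical model, not the kernel's actual implementation; the function name and parameters are assumptions for illustration only.

      ```c
      #include <stdint.h>
      #include <stdbool.h>

      /* Hypothetical sketch of the fix: after emulating an instruction we
       * advance PC, then truncate the result to 32 bits when the vcpu is in
       * an AArch32 mode, so PC can never escape the 32bit range. */
      static uint64_t advance_pc(uint64_t pc, bool is_aarch32, bool is_wide_instr)
      {
          pc += is_wide_instr ? 4 : 2;    /* 4-byte A32 vs 2-byte narrow T32 encoding */
          if (is_aarch32)
              pc = (uint32_t)pc;          /* wrap around at 2^32 */
          return pc;
      }
      ```

      With this, an increment from the last 32bit address wraps to 0 instead of leaking into the upper half of the register.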
  3. 30 April 2020 (2 commits)
  4. 02 April 2020 (1 commit)
  5. 24 March 2020 (1 commit)
  6. 18 March 2020 (1 commit)
    • KVM: arm64: limit PMU version to PMUv3 for ARMv8.1 · c854188e
      By Andrew Murray
      We currently expose the PMU version of the host to the guest via
      emulation of the DFR0_EL1 and AA64DFR0_EL1 debug feature registers.
However, many of the features offered beyond PMUv3 for ARMv8.1 are not
supported in KVM. Examples of this include support for the PMMIR
registers (added in PMUv3 for ARMv8.4) and 64-bit event counters
(added in PMUv3 for ARMv8.5).
      
      Let's trap the Debug Feature Registers in order to limit
      PMUVer/PerfMon in the Debug Feature Registers to PMUv3 for ARMv8.1
      to avoid unexpected behaviour.
      
      Both ID_AA64DFR0.PMUVer and ID_DFR0.PerfMon follow the "Alternative ID
      scheme used for the Performance Monitors Extension version" where 0xF
      means an IMPLEMENTATION DEFINED PMU is implemented, and values 0x0-0xE
are treated as an unsigned field (with 0x0 meaning no PMU is
      present). As we don't expect to expose an IMPLEMENTATION DEFINED PMU,
      and our cap is below 0xF, we can treat these fields as unsigned when
      applying the cap.
      Signed-off-by: Andrew Murray <andrew.murray@arm.com>
      Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      [Mark: make field names consistent, use perfmon cap]
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      c854188e
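      The unsigned capping described above can be sketched like this. A minimal model, not the kernel code; the shift matches the architectural PMUVer field position (ID_AA64DFR0_EL1 bits [11:8]), but the function name is an assumption.

      ```c
      #include <stdint.h>

      /* Hypothetical sketch of capping an ID register field. PMUVer/PerfMon
       * use the "alternative" ID scheme where 0xF means IMPLEMENTATION
       * DEFINED, but since our cap (PMUv3 for ARMv8.1, value 0x4) is below
       * 0xF we can treat the field as unsigned and simply clamp it. */
      #define PMUVER_SHIFT  8
      #define PMUVER_MASK   0xfULL
      #define PMUVER_8_1    0x4ULL   /* PMUv3 for ARMv8.1 */

      static uint64_t cap_pmuver(uint64_t dfr0)
      {
          uint64_t ver = (dfr0 >> PMUVER_SHIFT) & PMUVER_MASK;

          if (ver > PMUVER_8_1)
              ver = PMUVER_8_1;      /* also demotes the 0xF impdef value */

          dfr0 &= ~(PMUVER_MASK << PMUVER_SHIFT);
          dfr0 |= ver << PMUVER_SHIFT;
          return dfr0;
      }
      ```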
  7. 17 March 2020 (1 commit)
    • KVM: Remove unnecessary asm/kvm_host.h includes · 4d395762
      By Peter Xu
      Remove includes of asm/kvm_host.h from files that already include
      linux/kvm_host.h to make it more obvious that there is no ordering issue
      between the two headers.  linux/kvm_host.h includes asm/kvm_host.h to
      pick up architecture specific settings, and this will never change, i.e.
      including asm/kvm_host.h after linux/kvm_host.h may seem problematic,
      but in practice is simply redundant.
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      4d395762
  8. 14 March 2020 (1 commit)
    • arm64: kvm: hyp: use cpus_have_final_cap() · b5475d8c
      By Mark Rutland
      The KVM hyp code is only run after system capabilities have been
      finalized, and thus all const cap checks have been patched. This is
noted in __cpu_init_hyp_mode(), where we BUG() if called too early:
      
      | /*
      |  * Call initialization code, and switch to the full blown HYP code.
      |  * If the cpucaps haven't been finalized yet, something has gone very
      |  * wrong, and hyp will crash and burn when it uses any
      |  * cpus_have_const_cap() wrapper.
      |  */
      
      Given this, the hyp code can use cpus_have_final_cap() and avoid
      generating code to check the cpu_hwcaps array, which would be unsafe to
      run in hyp context.
      
This patch migrates the KVM hyp code to cpus_have_final_cap(), avoiding
      this redundant code generation, and making it possible to detect if we
      accidentally invoke this code too early. In the latter case, the BUG()
      in cpus_have_final_cap() will cause a hyp panic.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Marc Zyngier <maz@kernel.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      b5475d8c
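      The const/final distinction above can be modelled roughly as below. This is a hypothetical sketch of the semantics, not the kernel's patched-alternatives implementation; the bitmap and variable names are assumptions.

      ```c
      #include <stdbool.h>
      #include <assert.h>

      /* Hypothetical model: cpus_have_const_cap() can fall back to scanning
       * a kernel-mapped bitmap before caps are finalized (which hyp cannot
       * safely do), whereas cpus_have_final_cap() requires finalization to
       * have happened and so never needs the fallback. */
      static bool caps_finalized;
      static bool cap_bitmap[64];

      static bool cpus_have_const_cap(int cap)
      {
          /* safe to call early: reads the (kernel-mapped) bitmap */
          return cap_bitmap[cap];
      }

      static bool cpus_have_final_cap(int cap)
      {
          assert(caps_finalized);   /* models the BUG()/hyp panic on early use */
          return cap_bitmap[cap];
      }
      ```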
  9. 10 March 2020 (3 commits)
    • arm64: kvm: Modernize __smccc_workaround_1_smc_start annotations · 4db61fef
      By Mark Brown
      In an effort to clarify and simplify the annotation of assembly functions
      in the kernel new macros have been introduced. These replace ENTRY and
      ENDPROC with separate annotations for standard C callable functions,
      data and code with different calling conventions.
      
Using these for __smccc_workaround_1_smc is more involved than for most
symbols, as this symbol is annotated quite unusually: rather than just
having the explicit symbol, we define _start and _end symbols which we
then use to compute the length. This does not play at all nicely with
the new-style macros. Instead, define a constant for the size of the
function and use that in both the C code and for .org based size checks
in the assembly code.
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Marc Zyngier <maz@kernel.org>
      4db61fef
    • arm64: kvm: Modernize annotation for __bp_harden_hyp_vecs · 6e52aab9
      By Mark Brown
      We have recently introduced new macros for annotating assembly symbols
      for things that aren't C functions, SYM_CODE_START() and SYM_CODE_END(),
      in an effort to clarify and simplify our annotations of assembly files.
      
Using these for __bp_harden_hyp_vecs is more involved than for most
symbols, as this symbol is annotated quite unusually: rather than just
having the explicit symbol, we define _start and _end symbols which we
then use to compute the length. This does not play at all nicely with
the new-style macros. Since the size of the vectors is a known constant
which won't vary, the simplest thing to do is to drop the separate
_start and _end symbols and just use a #define for the size.
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Marc Zyngier <maz@kernel.org>
      6e52aab9
    • arm64: kvm: Annotate assembly using modern annotations · 617a2f39
      By Mark Brown
      In an effort to clarify and simplify the annotation of assembly functions
      in the kernel new macros have been introduced. These replace ENTRY and
      ENDPROC with separate annotations for standard C callable functions,
      data and code with different calling conventions.  Update the more
      straightforward annotations in the kvm code to the new macros.
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Marc Zyngier <maz@kernel.org>
      617a2f39
  10. 07 March 2020 (1 commit)
  11. 22 February 2020 (1 commit)
  12. 17 February 2020 (1 commit)
  13. 28 January 2020 (3 commits)
  14. 23 January 2020 (2 commits)
  15. 20 January 2020 (2 commits)
    • KVM: arm64: Correct PSTATE on exception entry · a425372e
      By Mark Rutland
      When KVM injects an exception into a guest, it generates the PSTATE
      value from scratch, configuring PSTATE.{M[4:0],DAIF}, and setting all
      other bits to zero.
      
      This isn't correct, as the architecture specifies that some PSTATE bits
      are (conditionally) cleared or set upon an exception, and others are
      unchanged from the original context.
      
      This patch adds logic to match the architectural behaviour. To make this
      simple to follow/audit/extend, documentation references are provided,
      and bits are configured in order of their layout in SPSR_EL2. This
      layout can be seen in the diagram on ARM DDI 0487E.a page C5-429.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
      Cc: stable@vger.kernel.org
      Link: https://lore.kernel.org/r/20200108134324.46500-2-mark.rutland@arm.com
      a425372e
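      For illustration, the bare-minimum PSTATE construction can be sketched as below. The bit positions follow the SPSR_EL2 layout in ARM DDI 0487; note that this sketch shows only the unconditional M[4:0]/DAIF part the old code also handled, not the conditional bits (e.g. SSBS, PAN) that this commit adds, and the function name is an assumption.

      ```c
      #include <stdint.h>

      /* SPSR_EL2 layout (ARM DDI 0487): D, A, I, F at bits 9:6,
       * M[4:0] at bits 4:0. */
      #define PSR_MODE_EL1h  0x05ULL
      #define PSR_F_BIT      (1ULL << 6)
      #define PSR_I_BIT      (1ULL << 7)
      #define PSR_A_BIT      (1ULL << 8)
      #define PSR_D_BIT      (1ULL << 9)

      /* Hypothetical sketch: target PSTATE for an exception taken to
       * AArch64 EL1, with all of DAIF masked and mode set to EL1h. */
      static uint64_t pstate_for_el1_exception(void)
      {
          return PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT | PSR_MODE_EL1h;
      }
      ```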
    • arm64: kvm: Fix IDMAP overlap with HYP VA · f5523423
      By Russell King
      Booting 5.4 on LX2160A reveals that KVM is non-functional:
      
      kvm: Limiting the IPA size due to kernel Virtual Address limit
      kvm [1]: IPA Size Limit: 43bits
      kvm [1]: IDMAP intersecting with HYP VA, unable to continue
      kvm [1]: error initializing Hyp mode: -22
      
      Debugging shows:
      
      kvm [1]: IDMAP page: 81a26000
      kvm [1]: HYP VA range: 0:22ffffffff
      
      as RAM is located at:
      
      80000000-fbdfffff : System RAM
      2080000000-237fffffff : System RAM
      
      Comparing this with the same kernel on Armada 8040 shows:
      
      kvm: Limiting the IPA size due to kernel Virtual Address limit
      kvm [1]: IPA Size Limit: 43bits
      kvm [1]: IDMAP page: 2a26000
      kvm [1]: HYP VA range: 4800000000:493fffffff
      ...
      kvm [1]: Hyp mode initialized successfully
      
      which indicates that hyp_va_msb is set, and is always set to the
      opposite value of the idmap page to avoid the overlap. This does not
      happen with the LX2160A.
      
      Further debugging shows vabits_actual = 39, kva_msb = 38 on LX2160A and
      kva_msb = 33 on Armada 8040. Looking at the bit layout of the HYP VA,
      there is still one bit available for hyp_va_msb. Set this bit
      appropriately. This allows KVM to be functional on the LX2160A, but
      without any HYP VA randomisation:
      
      kvm: Limiting the IPA size due to kernel Virtual Address limit
      kvm [1]: IPA Size Limit: 43bits
      kvm [1]: IDMAP page: 81a24000
      kvm [1]: HYP VA range: 4000000000:62ffffffff
      ...
      kvm [1]: Hyp mode initialized successfully
      
      Fixes: ed57cac8 ("arm64: KVM: Introduce EL2 VA randomisation")
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
      [maz: small additional cleanups, preserved case where the tag
       is legitimately 0 and we can just use the mask, Fixes tag]
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/E1ilAiY-0000MA-RG@rmk-PC.armlinux.org.uk
      f5523423
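      The bit-selection idea can be sketched as follows. This is a hypothetical reduction of the mechanism, not the kernel's VA-layout code; the function name and parameters are assumptions.

      ```c
      #include <stdint.h>

      /* Hypothetical sketch: when one top bit of the HYP VA is still free,
       * set it (hyp_va_msb) to the opposite of the corresponding bit of the
       * idmap page address, so the idmap can never fall inside the runtime
       * HYP VA range. */
      static uint64_t pick_hyp_va_msb(uint64_t idmap_page, unsigned int va_bits)
      {
          uint64_t msb = 1ULL << (va_bits - 1);

          /* opposite value of the idmap page's top bit */
          return (idmap_page & msb) ^ msb;
      }
      ```

      With vabits_actual = 39 and an idmap page at 0x81a26000 (top bit clear), this yields 0x4000000000, matching the fixed "HYP VA range: 4000000000:..." in the log above.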
  16. 17 January 2020 (1 commit)
  17. 16 January 2020 (3 commits)
  18. 15 January 2020 (2 commits)
    • arm64: Introduce ID_ISAR6 CPU register · 8e3747be
      By Anshuman Khandual
This adds the basic building blocks required for the ID_ISAR6 CPU register,
which identifies support for various instruction implementations in the
AArch32 state.
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Marc Zyngier <maz@kernel.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: linux-kernel@vger.kernel.org
      Cc: kvmarm@lists.cs.columbia.edu
      Acked-by: Marc Zyngier <maz@kernel.org>
      Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
      [will: Ensure SPECRES is treated the same as on A64]
      Signed-off-by: Will Deacon <will@kernel.org>
      8e3747be
    • arm64: nofpsmid: Handle TIF_FOREIGN_FPSTATE flag cleanly · 52f73c38
      By Suzuki K Poulose
We detect the absence of FP/SIMD after an incapable CPU is brought up,
and by then kernel threads are already running with TIF_FOREIGN_FPSTATE
set, as may early userspace applications (e.g. modprobe triggered from
initramfs) and init. This could cause the applications to loop forever
in do_notify_resume(), as we never clear the TIF flag once we know that
we don't support FP.
      
      Fix this by making sure that we clear the TIF_FOREIGN_FPSTATE flag
      for tasks which may have them set, as we would have done in the normal
      case, but avoiding touching the hardware state (since we don't support any).
      
Also, to make sure we handle these cases seamlessly, we categorise the
helper functions into two groups:
 1) Helpers for common core code, which take appropriate
    actions without knowing the current FPSIMD state of the CPU/task.
      
          e.g fpsimd_restore_current_state(), fpsimd_flush_task_state(),
              fpsimd_save_and_flush_cpu_state().
      
          We bail out early for these functions, taking any appropriate actions
          (e.g, clearing the TIF flag) where necessary to hide the handling
          from core code.
      
       2) Helpers used when the presence of FP/SIMD is apparent.
          i.e, save/restore the FP/SIMD register state, modify the CPU/task
          FP/SIMD state.
          e.g,
      
          fpsimd_save(), task_fpsimd_load() - save/restore task FP/SIMD registers
      
          fpsimd_bind_task_to_cpu()  \
                                      - Update the "state" metadata for CPU/task.
          fpsimd_bind_state_to_cpu() /
      
          fpsimd_update_current_state() - Update the fp/simd state for the current
                                          task from memory.
      
          These must not be called in the absence of FP/SIMD. Put in a WARNING
          to make sure they are not invoked in the absence of FP/SIMD.
      
      KVM also uses the TIF_FOREIGN_FPSTATE flag to manage the FP/SIMD state
      on the CPU. However, without FP/SIMD support we trap all accesses and
inject an undefined instruction. Thus we should never "load" guest state.
      Add a sanity check to make sure this is valid.
      
      Fixes: 82e0191a ("arm64: Support systems without FP/ASIMD")
      Cc: Will Deacon <will@kernel.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Marc Zyngier <maz@kernel.org>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      52f73c38
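      The "group 1" early bail-out can be sketched as below. A hypothetical model of the behaviour, not the kernel code; the struct, flag field, and variable names are assumptions.

      ```c
      #include <stdbool.h>

      /* Hypothetical model of a group-1 helper: when the system has no
       * FP/SIMD, do the bookkeeping that core code expects (clear the
       * TIF_FOREIGN_FPSTATE flag) without touching any hardware state. */
      struct task { bool tif_foreign_fpstate; };

      static bool system_supports_fpsimd;

      static void fpsimd_restore_current_state(struct task *t)
      {
          if (!system_supports_fpsimd) {
              /* bail out early: clear the flag so the task does not loop
               * forever in the return-to-user path */
              t->tif_foreign_fpstate = false;
              return;
          }
          /* ... load the task's FP/SIMD registers onto the CPU ... */
          t->tif_foreign_fpstate = false;
      }
      ```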
  19. 12 December 2019 (1 commit)
    • KVM: arm64: Ensure 'params' is initialised when looking up sys register · 1ce74e96
      By Will Deacon
      Commit 4b927b94 ("KVM: arm/arm64: vgic: Introduce find_reg_by_id()")
      introduced 'find_reg_by_id()', which looks up a system register only if
      the 'id' index parameter identifies a valid system register. As part of
      the patch, existing callers of 'find_reg()' were ported over to the new
      interface, but this breaks 'index_to_sys_reg_desc()' in the case that the
      initial lookup in the vCPU target table fails because we will then call
      into 'find_reg()' for the system register table with an uninitialised
      'param' as the key to the lookup.
      
      GCC 10 is bright enough to spot this (amongst a tonne of false positives,
      but hey!):
      
        | arch/arm64/kvm/sys_regs.c: In function ‘index_to_sys_reg_desc.part.0.isra’:
        | arch/arm64/kvm/sys_regs.c:983:33: warning: ‘params.Op2’ may be used uninitialized in this function [-Wmaybe-uninitialized]
        |   983 |   (u32)(x)->CRn, (u32)(x)->CRm, (u32)(x)->Op2);
        | [...]
      
      Revert the hunk of 4b927b94 which breaks 'index_to_sys_reg_desc()' so
      that the old behaviour of checking the index upfront is restored.
      
      Fixes: 4b927b94 ("KVM: arm/arm64: vgic: Introduce find_reg_by_id()")
      Signed-off-by: Will Deacon <will@kernel.org>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Cc: <stable@vger.kernel.org>
      Link: https://lore.kernel.org/r/20191212094049.12437-1-will@kernel.org
      1ce74e96
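      The bug class and the restored upfront check can be reduced to the following sketch. Everything here is hypothetical (the id decoding in particular), shown only to illustrate why an unvalidated index must never reach a struct-keyed lookup.

      ```c
      #include <string.h>

      /* The real key struct has this shape; the decode logic below is a
       * made-up stand-in for illustration. */
      struct sys_reg_params { unsigned int Op0, Op1, CRn, CRm, Op2; };

      static int index_ok(unsigned long id)
      {
          /* validate upfront, before any table lookup consults the key */
          return (id >> 16) == 0x13;   /* hypothetical "sys reg" id class */
      }

      static int lookup(unsigned long id, struct sys_reg_params *params)
      {
          if (!index_ok(id))
              return -1;               /* never search with a stale key */

          memset(params, 0, sizeof(*params));
          params->Op2 = id & 0x7;      /* fill the key only on the valid path */
          return 0;
      }
      ```

      The reverted hunk had inverted this ordering, so an invalid id reached the table search with 'params' still uninitialised, which is what GCC 10's -Wmaybe-uninitialized caught.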
  20. 07 December 2019 (1 commit)
  21. 06 December 2019 (2 commits)
  22. 28 October 2019 (1 commit)
  23. 26 October 2019 (2 commits)
    • arm64: KVM: Prevent speculative S1 PTW when restoring vcpu context · bd227553
      By Marc Zyngier
      When handling erratum 1319367, we must ensure that the page table
      walker cannot parse the S1 page tables while the guest is in an
      inconsistent state. This is done as follows:
      
      On guest entry:
      - TCR_EL1.EPD{0,1} are set, ensuring that no PTW can occur
      - all system registers are restored, except for TCR_EL1 and SCTLR_EL1
      - stage-2 is restored
      - SCTLR_EL1 and TCR_EL1 are restored
      
      On guest exit:
      - SCTLR_EL1.M and TCR_EL1.EPD{0,1} are set, ensuring that no PTW can occur
      - stage-2 is disabled
      - All host system registers are restored
      Reviewed-by: James Morse <james.morse@arm.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      bd227553
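      The EPD gating used in both steps above can be sketched as follows. A minimal model of the TCR_EL1 bit manipulation only (EPD0 is bit 7, EPD1 is bit 23 per the architecture); the helper names and the surrounding sequencing are assumptions.

      ```c
      #include <stdint.h>

      /* TCR_EL1.EPD0/EPD1 disable stage-1 walks for TTBR0/TTBR1. Setting
       * them first ensures no PTW can start while the rest of the guest
       * (or host) context is still inconsistent. */
      #define TCR_EPD0  (1ULL << 7)
      #define TCR_EPD1  (1ULL << 23)

      static uint64_t tcr_disable_ptw(uint64_t tcr)
      {
          return tcr | TCR_EPD0 | TCR_EPD1;
      }

      static uint64_t tcr_enable_ptw(uint64_t tcr)
      {
          return tcr & ~(TCR_EPD0 | TCR_EPD1);
      }
      ```

      On entry, the disabled value is written before the remaining system registers and stage-2 are restored; only then is the final TCR_EL1 (walks re-enabled) installed.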
    • arm64: KVM: Disable EL1 PTW when invalidating S2 TLBs · 37553941
      By Marc Zyngier
      When erratum 1319367 is being worked around, special care must
      be taken not to allow the page table walker to populate TLBs
      while we have the stage-2 translation enabled (which would otherwise
result in a bizarre mix of the host S1 and the guest S2).
      
      We enforce this by setting TCR_EL1.EPD{0,1} before restoring the S2
      configuration, and clear the same bits after having disabled S2.
      Reviewed-by: James Morse <james.morse@arm.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      37553941