1. 29 Sep 2020, 3 commits
  2. 28 Aug 2020, 2 commits
    • KVM: arm64: Survive synchronous exceptions caused by AT instructions · 88a84ccc
      Committed by James Morse
      KVM doesn't expect any synchronous exceptions when executing; any such
      exception leads to a panic(). AT instructions access the guest page
      tables, and can cause a synchronous external abort to be taken.
      
      The Arm ARM is unclear on what should happen if the guest has configured
      the hardware update of the access-flag, and a memory type in TCR_EL1 that
      does not support atomic operations. B2.2.6 "Possible implementation
      restrictions on using atomic instructions" from DDI0487F.a lists
      synchronous external abort as a possible behaviour of atomic instructions
      that target memory that isn't writeback cacheable, but the page table
      walker may behave differently.
      
      Make KVM robust to synchronous exceptions caused by AT instructions.
      Add a get_user() style helper for AT instructions that returns -EFAULT
      if an exception was generated.
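
      A minimal sketch of what such a helper could look like (the macro name
      and exact fixup sequence here are illustrative rather than the actual
      kernel code; the real helper must also preserve SPSR_EL2/ELR_EL2, which
      taking an exception at EL2 clobbers):

        #define __kvm_at_checked(at_op, addr)                           \
        ({                                                              \
                int __err = 0;                                          \
                                                                        \
                asm volatile(                                           \
                "1:     at      " at_op ", %1\n"                        \
                "       isb\n"                                          \
                "       b       3f\n"                                   \
                "2:     mov     %w0, %2\n"      /* fixup: flag fault */ \
                "3:\n"                                                  \
                /* record the 1b -> 2b fixup in KVM's EL2 extable */    \
                "       .pushsection    __kvm_ex_table, \"a\"\n"        \
                "       .align          3\n"                            \
                "       .long           (1b - .), (2b - .)\n"           \
                "       .popsection\n"                                  \
                : "+r" (__err)                                          \
                : "r" (addr), "i" (-EFAULT)                             \
                : "memory");                                            \
                                                                        \
                __err;                                                  \
        })

      A caller can then treat a faulting AT like a failed get_user(), e.g.
      abandon the fault-handling path when __kvm_at_checked("s1e1r", far)
      returns -EFAULT.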
      
      While KVM's version of the exception table mixes synchronous and
      asynchronous exceptions, only one of these can occur at each location.
      
      Re-enter the guest when the AT instructions take an exception on the
      assumption the guest will take the same exception. This isn't guaranteed
      to make forward progress, as the AT instructions may always walk the page
      tables, but guest execution may use the translation cached in the TLB.
      
      This isn't a problem: since commit 5dcd0fdb ("KVM: arm64: Defer guest
      entry when an asynchronous exception is pending"), KVM will return to the
      host to process IRQs, allowing the rest of the system to keep running.
      
      Cc: stable@vger.kernel.org # <v5.3: 5dcd0fdb ("KVM: arm64: Defer guest entry when an asynchronous exception is pending")
      Signed-off-by: James Morse <james.morse@arm.com>
      Reviewed-by: Marc Zyngier <maz@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • KVM: arm64: Add kvm_extable for vaxorcism code · e9ee186b
      Committed by James Morse
      KVM has a one instruction window where it will allow an SError exception
      to be consumed by the hypervisor without treating it as a hypervisor bug.
      This is used to consume asynchronous external aborts that were caused
      by the guest.
      
      As we are about to add another location that survives unexpected exceptions,
      generalise this code to make it behave like the host's extable.
      
      KVM's version has to be mapped to EL2 to be accessible on nVHE systems.
      
      The SError vaxorcism code is a one instruction window, so it has two
      entries in the extable. Because the KVM code is copied for VHE and nVHE,
      we end up with four entries, half of which correspond to code that isn't
      mapped.
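
      In sketch form, the shape this takes (the section and start/stop symbol
      names follow the description above; the C lookup is illustrative, as
      the real fixup is applied by the EL2 fault handler):

        /* each entry pairs a faulting instruction with its fixup,
         * both stored as offsets relative to the entry itself */
        struct kvm_extable_entry {
                int insn, fixup;
        };

        extern struct kvm_extable_entry __start___kvm_ex_table[];
        extern struct kvm_extable_entry __stop___kvm_ex_table[];

        /* on an unexpected EL2 exception: branch to the fixup if one
         * is registered, otherwise it really is a hypervisor bug */
        static bool kvm_fixup_exception(unsigned long pc, unsigned long *new_pc)
        {
                struct kvm_extable_entry *ent;

                for (ent = __start___kvm_ex_table;
                     ent < __stop___kvm_ex_table; ent++) {
                        if ((unsigned long)&ent->insn + ent->insn == pc) {
                                *new_pc = (unsigned long)&ent->fixup + ent->fixup;
                                return true;
                        }
                }
                return false;
        }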
      Signed-off-by: James Morse <james.morse@arm.com>
      Reviewed-by: Marc Zyngier <maz@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  3. 08 Jul 2020, 1 commit
    • KVM: arm64: Don't use has_vhe() for CHOOSE_HYP_SYM() · 6de7dd31
      Committed by Marc Zyngier
      The recently introduced CHOOSE_HYP_SYM() macro picks one symbol
      or another, depending on whether the kernel runs as a VHE
      hypervisor or not. For that, it uses the has_vhe() helper, which
      is itself implemented as a final capability.
      
      Unfortunately, __copy_hyp_vect_bpi now indirectly uses CHOOSE_HYP_SYM
      to get the __bp_harden_hyp_vecs symbol, using has_vhe() in the process.
      At this stage, the capability isn't final and things explode:
      
      [    0.000000] ACPI: SRAT not present
      [    0.000000] percpu: Embedded 34 pages/cpu s101264 r8192 d29808 u139264
      [    0.000000] Detected PIPT I-cache on CPU0
      [    0.000000] ------------[ cut here ]------------
      [    0.000000] kernel BUG at arch/arm64/include/asm/cpufeature.h:459!
      [    0.000000] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
      [    0.000000] Modules linked in:
      [    0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 5.8.0-rc4-00080-gd630681366e5 #1388
      [    0.000000] pstate: 80000085 (Nzcv daIf -PAN -UAO BTYPE=--)
      [    0.000000] pc : check_branch_predictor+0x3a4/0x408
      [    0.000000] lr : check_branch_predictor+0x2a4/0x408
      [    0.000000] sp : ffff800011693e90
      [    0.000000] x29: ffff800011693e90 x28: ffff8000116a1530
      [    0.000000] x27: ffff8000112c1008 x26: ffff800010ca6ff8
      [    0.000000] x25: ffff8000112c1000 x24: ffff8000116a1320
      [    0.000000] x23: 0000000000000000 x22: ffff8000112c1000
      [    0.000000] x21: ffff800010177120 x20: ffff8000116ae108
      [    0.000000] x19: 0000000000000000 x18: ffff800011965c90
      [    0.000000] x17: 0000000000022000 x16: 0000000000000003
      [    0.000000] x15: 00000000ffffffff x14: ffff8000118c3a38
      [    0.000000] x13: 0000000000000021 x12: 0000000000000022
      [    0.000000] x11: d37a6f4de9bd37a7 x10: 000000000000001d
      [    0.000000] x9 : 0000000000000000 x8 : ffff800011f8dad8
      [    0.000000] x7 : ffff800011965ad0 x6 : 0000000000000003
      [    0.000000] x5 : 0000000000000000 x4 : 0000000000000000
      [    0.000000] x3 : 0000000000000100 x2 : 0000000000000004
      [    0.000000] x1 : ffff8000116ae148 x0 : 0000000000000000
      [    0.000000] Call trace:
      [    0.000000]  check_branch_predictor+0x3a4/0x408
      [    0.000000]  update_cpu_capabilities+0x84/0x138
      [    0.000000]  init_cpu_features+0x2c0/0x2d8
      [    0.000000]  cpuinfo_store_boot_cpu+0x54/0x64
      [    0.000000]  smp_prepare_boot_cpu+0x2c/0x60
      [    0.000000]  start_kernel+0x16c/0x574
      [    0.000000] Code: 17ffffc7 91010281 14000198 17ffffca (d4210000)
      
      This is addressed using a two-fold process:
      - Replace has_vhe() with is_kernel_in_hyp_mode(), which tests
        whether we are running at EL2.
      - Make CHOOSE_HYP_SYM() return an *undefined* symbol when
        compiled in the nVHE hypervisor, as we really should never
        use this helper in the nVHE-specific code.
      
      With this in place, we're back to a bootable kernel again.
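
      A rough sketch of the resulting macro logic (simplified from the
      actual kvm_asm.h):

        #if defined(__KVM_NVHE_HYPERVISOR__)
        /* nVHE-specific code should never use CHOOSE_HYP_SYM();
         * referencing a deliberately undefined symbol makes any
         * such use fail at link time */
        extern void *__nvhe_undefined_symbol;
        #define CHOOSE_HYP_SYM(sym)     __nvhe_undefined_symbol
        #else
        /* is_kernel_in_hyp_mode() simply reads CurrentEL, so it is
         * safe to use before the capabilities are finalised */
        #define CHOOSE_HYP_SYM(sym)     (is_kernel_in_hyp_mode()        \
                                         ? CHOOSE_VHE_SYM(sym)          \
                                         : CHOOSE_NVHE_SYM(sym))
        #endif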
      
      Fixes: b877e984 ("KVM: arm64: Build hyp-entry.S separately for VHE/nVHE")
      Signed-off-by: Marc Zyngier <maz@kernel.org>
  4. 07 Jul 2020, 2 commits
  5. 06 Jul 2020, 5 commits
  6. 11 Jun 2020, 1 commit
  7. 09 Jun 2020, 1 commit
  8. 25 May 2020, 1 commit
  9. 16 May 2020, 1 commit
  10. 10 Mar 2020, 1 commit
    • arm64: kvm: Modernize __smccc_workaround_1_smc_start annotations · 4db61fef
      Committed by Mark Brown
      In an effort to clarify and simplify the annotation of assembly functions
      in the kernel, new macros have been introduced. These replace ENTRY and
      ENDPROC with separate annotations for standard C callable functions,
      data, and code with different calling conventions.
      
      Using these for __smccc_workaround_1_smc is more involved than for most
      symbols, as this symbol is annotated quite unusually: rather than just
      having the explicit symbol, we define _start and _end symbols which we
      then use to compute the length. This does not play at all nicely with
      the new style macros. Instead, define a constant for the size of the
      function and use that in both the C code and for .org based size checks
      in the assembly code.
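
      A sketch of the approach (abridged; the paired .org directives turn
      any size drift into a build error, since .org cannot move the location
      counter backwards, so only an exact match assembles):

        /* asm/kvm_asm.h */
        #define __SMCCC_WORKAROUND_1_SMC_SZ     36

        /* hyp-entry.S */
        SYM_DATA_START(__smccc_workaround_1_smc)
                /* ... the 36-byte workaround body ... */
        1:      .org __smccc_workaround_1_smc + __SMCCC_WORKAROUND_1_SMC_SZ
                .org 1b
        SYM_DATA_END(__smccc_workaround_1_smc)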
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Marc Zyngier <maz@kernel.org>
  11. 05 Jul 2019, 2 commits
    • KVM: arm64: Consume pending SError as early as possible · 0e5b9c08
      Committed by James Morse
      On systems with v8.2 we replace the 'vaxorcism' of guest SError with an
      alternative sequence that uses the ESB-instruction, then reads DISR_EL1.
      This saves the unmasking and remasking of asynchronous exceptions.
      
      We do this after we've saved the guest registers and restored the
      host's. Any SError that becomes pending due to this will be accounted
      to the guest, even though it actually occurred during host execution.
      
      Move the ESB-instruction as early as possible. Any guest SError
      will become pending due to this ESB-instruction and then be consumed
      to DISR_EL1 before the host touches anything.
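
      A simplified sketch of the vector preamble this produces (the real
      macro in hyp-entry.S also build-time checks the preamble length, as
      introduced by the commit below):

        .macro valid_vect target
                .align 7
                esb                             /* consume pending SError */
                stp     x0, x1, [sp, #-16]!     /* register-store: the tail */
                b       \target
        .endm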
      
      This lets us account for host/guest SError precisely on the guest
      exit exception boundary.
      
      Because the ESB-instruction now lands in the preamble section of
      the vectors, we need to add it to the unpatched indirect vectors
      too, and to any sequence that may be patched in over the top.
      
      The ESB-instruction always lives in the head of the vectors,
      to be before any memory write. Whereas the register-store always
      lives in the tail.
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    • KVM: arm64: Abstract the size of the HYP vectors pre-amble · 3dbf100b
      Committed by James Morse
      The EL2 vector hardening feature causes KVM to generate vectors for
      each type of CPU present in the system. The generated sequences already
      do some of the early guest-exit work (i.e. saving registers). To avoid
      duplication, the generated vectors branch to the original vector just
      after the preamble. This size is hard-coded.
      
      Adding new instructions to the HYP vector causes strange side effects,
      which are difficult to debug as the affected code is patched in at
      runtime.
      
      Add KVM_VECTOR_PREAMBLE to tell kvm_patch_vector_branch() how big
      the preamble is. The valid_vect macro can then validate this at
      build time.
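
      A sketch close to the resulting assembly (abridged; the constant lives
      in kvm_asm.h and covers two instructions once the ESB-instruction from
      the commit above joins the stp):

        /* asm/kvm_asm.h */
        #define KVM_VECTOR_PREAMBLE     (2 * AARCH64_INSN_SIZE)

        /* hyp-entry.S: turn preamble-size drift into a build error */
        .macro check_preamble_length start, end
                .if ((\end - \start) != KVM_VECTOR_PREAMBLE)
                        .error "KVM vector preamble length mismatch"
                .endif
        .endm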
      Reviewed-by: Julien Thierry <julien.thierry@arm.com>
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
  12. 19 Jun 2019, 1 commit
  13. 24 Apr 2019, 1 commit
  14. 20 Dec 2018, 2 commits
    • arm/arm64: KVM: Add ARM_EXCEPTION_IS_TRAP macro · 58466766
      Committed by Marc Zyngier
      32bit and 64bit use different symbols to identify the traps:
      32bit has a fine-grained approach (prefetch abort, data abort and HVC),
      while 64bit is pretty happy with just "trap".
      
      This has been fine so far, except that we now need to decode some
      of that in tracepoints that are common to both architectures.
      
      Introduce ARM_EXCEPTION_IS_TRAP which abstracts the trap symbols
      and make the tracepoint use it.
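
      A sketch of the two definitions (following the fine-grained 32bit codes
      named above; the exact spelling of the 32bit constants may differ):

        /* arm64: everything funnels through the single trap code */
        #define ARM_EXCEPTION_IS_TRAP(x)        ((x) == ARM_EXCEPTION_TRAP)

        /* arm: match each of the fine-grained trap codes */
        #define ARM_EXCEPTION_IS_TRAP(x)                        \
                ((x) == ARM_EXCEPTION_PREF_ABORT        ||      \
                 (x) == ARM_EXCEPTION_DATA_ABORT        ||      \
                 (x) == ARM_EXCEPTION_HVC)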
      Acked-by: Christoffer Dall <christoffer.dall@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    • KVM: arm/arm64: Fixup the kvm_exit tracepoint · 71a7e47f
      Committed by Christoffer Dall
      The kvm_exit tracepoint strangely always reported exits as being IRQs.
      This seems to be because either the __print_symbolic or the tracepoint
      macros use a variable named idx.
      
      Take this chance to update the fields in the tracepoint to reflect the
      concepts in the arm64 architecture that we pass to the tracepoint and
      move the exception type table to the same location and header files as
      the exits code.
      
      We also clear the exception code to 0 for IRQ exits (which
      translates to UNKNOWN in text) to make it slightly less confusing to
      parse the trace output.
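
      Condensed, the fixed tracepoint takes roughly this shape (a sketch,
      not the verbatim kernel source; kvm_arm_exception_type is the
      relocated exception type table):

        TRACE_EVENT(kvm_exit,
                TP_PROTO(int ret, unsigned int esr_ec, unsigned long vcpu_pc),
                TP_ARGS(ret, esr_ec, vcpu_pc),

                TP_STRUCT__entry(
                        __field(int,            ret)
                        __field(unsigned int,   esr_ec)
                        __field(unsigned long,  vcpu_pc)
                ),

                TP_fast_assign(
                        __entry->ret    = ARM_EXCEPTION_CODE(ret);
                        /* IRQ exits carry no ESR: report UNKNOWN (0) */
                        __entry->esr_ec = ARM_EXCEPTION_IS_TRAP(ret) ? esr_ec : 0;
                        __entry->vcpu_pc = vcpu_pc;
                ),

                TP_printk("%s: HSR_EC: 0x%04x, PC: 0x%08lx",
                          __print_symbolic(__entry->ret, kvm_arm_exception_type),
                          __entry->esr_ec, __entry->vcpu_pc)
        );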
      Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
  15. 19 Oct 2018, 1 commit
  16. 01 Oct 2018, 1 commit
  17. 02 Jun 2018, 1 commit
  18. 01 Jun 2018, 2 commits
  19. 25 May 2018, 1 commit
    • KVM: arm64: Repurpose vcpu_arch.debug_flags for general-purpose flags · fa89d31c
      Committed by Dave Martin
      In struct vcpu_arch, the debug_flags field is used to store
      debug-related flags about the vcpu state.
      
      Since we are about to add some more flags related to FPSIMD and
      SVE, it makes sense to add them to the existing flags field rather
      than adding new fields.  Since there is only one debug_flags flag
      defined so far, there is plenty of free space for expansion.
      
      In preparation for adding more flags, this patch renames the
      debug_flags field to simply "flags", and updates comments
      appropriately.
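
      In sketch form (abridged):

        struct kvm_vcpu_arch {
                /* ... */
                /* Miscellaneous vcpu state flags */
                u64 flags;              /* previously: u64 debug_flags */
        };

        /* the single flag defined so far */
        #define KVM_ARM64_DEBUG_DIRTY           (1 << 0)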
      
      The flag definitions are also moved to <asm/kvm_host.h>, since
      their presence in <asm/kvm_asm.h> was for purely historical
      reasons: these definitions are not used from asm any more, and are
      not very likely to be, as more Hyp asm is migrated to C.
      
      KVM_ARM64_DEBUG_DIRTY_SHIFT has not been used since commit
      1ea66d27 ("arm64: KVM: Move away from the assembly version of
      the world switch"), so this patch gets rid of that too.
      
      No functional change.
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
      Acked-by: Christoffer Dall <christoffer.dall@arm.com>
      [maz: fixed minor conflict]
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
  20. 20 May 2018, 1 commit
    • arm64: KVM: Use lm_alias() for kvm_ksym_ref() · 46c4a30b
      Committed by Mark Rutland
      For historical reasons, we open-code lm_alias() in kvm_ksym_ref().
      
      Let's use lm_alias() to avoid duplication and make things clearer.
      
      As we have to pull this from <linux/mm.h> (which is not safe for
      inclusion in assembly), we may as well move the kvm_ksym_ref()
      definition into the existing !__ASSEMBLY__ block.
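
      The result looks roughly like this (a sketch following the description
      above):

        #ifndef __ASSEMBLY__
        #include <linux/mm.h>

        /* hyp aliases of kernel symbols live in the linear map on non-VHE */
        #define kvm_ksym_ref(sym)                               \
                ({                                              \
                        void *val = &sym;                       \
                        if (!is_kernel_in_hyp_mode())           \
                                val = lm_alias(&sym);           \
                        val;                                    \
                })
        #endif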
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Christoffer Dall <christoffer.dall@arm.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: kvmarm@lists.cs.columbia.edu
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
  21. 12 Apr 2018, 1 commit
  22. 28 Mar 2018, 1 commit
  23. 20 Mar 2018, 1 commit
  24. 19 Mar 2018, 2 commits
    • KVM: arm64: Introduce VHE-specific kvm_vcpu_run · 3f5c90b8
      Committed by Christoffer Dall
      So far this is mostly (see below) a copy of the legacy non-VHE switch
      function, but we will start reworking these functions in separate
      directions to work on VHE and non-VHE in the most optimal way in later
      patches.
      
      The only difference after this patch between the VHE and non-VHE run
      functions is that we omit the branch-predictor variant-2 hardening for
      QC Falkor CPUs, because this workaround is specific to a series of
      non-VHE ARMv8.0 CPUs.
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    • KVM: arm64: Avoid storing the vcpu pointer on the stack · 4464e210
      Committed by Christoffer Dall
      We already have the percpu area for the host cpu state, which points to
      the VCPU, so there's no need to store the VCPU pointer on the stack on
      every context switch.  We can be a little more clever and just use
      tpidr_el2 for the percpu offset and load the VCPU pointer from the host
      context.
      
      This has the benefit of being able to retrieve the host context even
      when our stack is corrupted, and it has a potential performance benefit
      because we trade a store plus a load for an mrs and a load on a round
      trip to the guest.
      
      This does require us to calculate the percpu offset without including
      the offset from the kernel mapping of the percpu array to the linear
      mapping of the array (which is what we store in tpidr_el1), because a
      PC-relative generated address in EL2 is already giving us the hyp alias
      of the linear mapping of a kernel address.  We do this in
      __cpu_init_hyp_mode() by using kvm_ksym_ref().
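
      A simplified sketch of the resulting hyp assembly helpers (macro
      bodies condensed; HOST_CONTEXT_VCPU is the asm-offsets constant for
      the vcpu pointer within the host context):

        .macro get_host_ctxt reg, tmp
                adr_l   \reg, kvm_host_cpu_state
                mrs     \tmp, tpidr_el2         /* hyp per-cpu offset */
                add     \reg, \reg, \tmp
        .endm

        .macro get_vcpu_ptr vcpu, ctxt
                get_host_ctxt \ctxt, \vcpu
                ldr     \vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
                kern_hyp_va     \vcpu
        .endm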
      
      The code that accesses ESR_EL2 was previously using an alternative to
      use the _EL1 accessor on VHE systems, but this was actually unnecessary
      as the _EL1 accessor aliases the ESR_EL2 register on VHE, and the _EL2
      accessor does the same thing on both systems.
      
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      Reviewed-by: Andrew Jones <drjones@redhat.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
  25. 09 Jan 2018, 1 commit
  26. 06 Nov 2017, 1 commit
    • KVM: arm/arm64: Move timer save/restore out of the hyp code · 688c50aa
      Committed by Christoffer Dall
      As we are about to be lazy with saving and restoring the timer
      registers, we prepare by moving all possible timer configuration logic
      out of the hyp code.  All virtual timer registers can be programmed from
      EL1 and since the arch timer is always a level triggered interrupt we
      can safely do this with interrupts disabled in the host kernel on the
      way to the guest without taking vtimer interrupts in the host kernel
      (yet).
      
      The downside is that the cntvoff register can only be programmed from
      hyp mode, so we jump into hyp mode and back to program it.  This is also
      safe, because the host kernel doesn't use the virtual timer in the KVM
      code.  It may add a small performance penalty, but only until
      following commits move this operation to vcpu load/put.
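
      The hyp-mode helper for that jump looks roughly like this (a sketch;
      the u32 pair keeps the call ABI friendly to 32bit arm):

        void __hyp_text __kvm_timer_set_cntvoff(u32 cntvoff_low, u32 cntvoff_high)
        {
                u64 cntvoff = (u64)cntvoff_high << 32 | cntvoff_low;

                write_sysreg(cntvoff, cntvoff_el2);
        }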
      Signed-off-by: Christoffer Dall <cdall@linaro.org>
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
  27. 09 Apr 2017, 2 commits