1. 24 Feb 2021 (1 commit)
  2. 16 Nov 2020 (4 commits)
  3. 30 Sep 2020 (1 commit)
  4. 29 Sep 2020 (4 commits)
  5. 16 Sep 2020 (3 commits)
  6. 28 Aug 2020 (2 commits)
    • KVM: arm64: Survive synchronous exceptions caused by AT instructions · 88a84ccc
      Committed by James Morse
      KVM doesn't expect any synchronous exceptions when executing; any such
      exception leads to a panic(). AT instructions access the guest page
      tables, and can cause a synchronous external abort to be taken.
      
      The Arm ARM is unclear on what should happen if the guest has configured
      hardware update of the access flag together with a memory type in TCR_EL1
      that does not support atomic operations. B2.2.6 "Possible implementation
      restrictions on using atomic instructions" from DDI0487F.a lists
      synchronous external abort as a possible behaviour of atomic instructions
      that target memory that isn't writeback cacheable, but the page table
      walker may behave differently.
      
      Make KVM robust to synchronous exceptions caused by AT instructions.
      Add a get_user() style helper for AT instructions that returns -EFAULT
      if an exception was generated.
      
      While KVM's version of the exception table mixes synchronous and
      asynchronous exceptions, only one of these can occur at each location.
      
      Re-enter the guest when the AT instructions take an exception, on the
      assumption that the guest will take the same exception. This isn't
      guaranteed to make forward progress, as the AT instructions always walk
      the page tables while guest execution may use a translation cached in
      the TLB.
      
      This isn't a problem because, since commit 5dcd0fdb ("KVM: arm64: Defer
      guest entry when an asynchronous exception is pending"), KVM returns to
      the host to process IRQs, allowing the rest of the system to keep running.
      
      Cc: stable@vger.kernel.org # <v5.3: 5dcd0fdb ("KVM: arm64: Defer guest entry when an asynchronous exception is pending")
      Signed-off-by: James Morse <james.morse@arm.com>
      Reviewed-by: Marc Zyngier <maz@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      88a84ccc
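
      For illustration, a sketch of the get_user()-style helper described
      above, assuming a __kvm_at()-shaped macro and a __kvm_ex_table section;
      this shows the shape of the technique, not the verbatim upstream code:

      /* Run an AT instruction; if it takes a synchronous exception, the
       * extable fixup restores SPSR_EL2/ELR_EL2 and the macro evaluates
       * to -EFAULT. Assumes u64 and EFAULT from the usual kernel headers. */
      #define __kvm_at(at_op, addr)                                       \
      ({                                                                  \
              int __kvm_at_err = 0;                                       \
              u64 spsr, elr;                                              \
              asm volatile(                                               \
              "       mrs     %1, spsr_el2\n"                             \
              "       mrs     %2, elr_el2\n"                              \
              "1:     at      " at_op ", %3\n"                            \
              "       isb\n"                                              \
              "       b       9f\n"                                       \
              /* fixup: undo the exception's register side effects */     \
              "2:     msr     spsr_el2, %1\n"                             \
              "       msr     elr_el2, %2\n"                              \
              "       mov     %w0, %4\n"                                  \
              "9:\n"                                                      \
              "       .pushsection __kvm_ex_table, \"a\"\n"               \
              "       .align  3\n"                                        \
              "       .long   (1b - .), (2b - .)\n"                       \
              "       .popsection\n"                                      \
              : "+r" (__kvm_at_err), "=&r" (spsr), "=&r" (elr)            \
              : "r" (addr), "i" (-EFAULT));                               \
              __kvm_at_err;                                               \
      })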
    • KVM: arm64: Add kvm_extable for vaxorcism code · e9ee186b
      Committed by James Morse
      KVM has a one-instruction window where it will allow an SError exception
      to be consumed by the hypervisor without treating it as a hypervisor bug.
      This is used to consume asynchronous external aborts that were caused by
      the guest.
      
      As we are about to add another location that survives unexpected exceptions,
      generalise this code to make it behave like the host's extable.
      
      KVM's version has to be mapped to EL2 to be accessible on nVHE systems.
      
      The SError vaxorcism code is a one-instruction window, so it has two
      entries in the extable. Because the KVM code is copied for VHE and nVHE,
      we end up with four entries, half of which correspond to code that isn't
      mapped.
      Signed-off-by: James Morse <james.morse@arm.com>
      Reviewed-by: Marc Zyngier <maz@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      e9ee186b
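
      A sketch of what "behave like the host's extable" means in practice;
      the entry layout and the lookup here are illustrative assumptions, not
      the exact upstream structures:

      /* One entry per faulting location: where the fault can happen and
       * where to continue. Relative offsets keep the table
       * position-independent, which matters once it is mapped to EL2. */
      struct exception_table_entry {
              int insn;       /* offset from this field to the faulting insn */
              int fixup;      /* offset from this field to the fixup code */
      };

      extern struct exception_table_entry __start___kvm_ex_table[];
      extern struct exception_table_entry __stop___kvm_ex_table[];

      /* Called from the EL2 exception handler: if the faulting PC has a
       * registered fixup, return its address; 0 means a real hyp bug. */
      static unsigned long kvm_exception_fixup(unsigned long pc)
      {
              struct exception_table_entry *e;

              for (e = __start___kvm_ex_table; e < __stop___kvm_ex_table; e++)
                      if ((unsigned long)&e->insn + e->insn == pc)
                              return (unsigned long)&e->fixup + e->fixup;
              return 0;
      }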
  7. 06 Jul 2020 (4 commits)
  8. 30 Apr 2020 (1 commit)
  9. 10 Mar 2020 (3 commits)
    • arm64: kvm: Modernize __smccc_workaround_1_smc_start annotations · 4db61fef
      Committed by Mark Brown
      In an effort to clarify and simplify the annotation of assembly functions
      in the kernel, new macros have been introduced. These replace ENTRY and
      ENDPROC with separate annotations for standard C callable functions,
      data and code with different calling conventions.
      
      Using these for __smccc_workaround_1_smc is more involved than for most
      symbols, as this symbol is annotated quite unusually: rather than just
      defining the explicit symbol, we define _start and _end symbols which we
      then use to compute the length. This does not play at all nicely with the
      new-style macros. Instead, define a constant for the size of the function
      and use that in both the C code and for .org-based size checks in the
      assembly code.
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Marc Zyngier <maz@kernel.org>
      4db61fef
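
      The shape of the size-constant approach used here (and, in the next
      entry, for __bp_harden_hyp_vecs); the constant's value and the consuming
      function are assumptions for illustration:

      #include <string.h>

      /* One shared constant instead of _start/_end symbols; "4 instructions
       * of 4 bytes" is an assumed length for the SMC sequence. The assembly
       * can assert it at build time with:
       *      .org __smccc_workaround_1_smc + __SMCCC_WORKAROUND_1_SMC_SZ
       */
      #define __SMCCC_WORKAROUND_1_SMC_SZ    (4 * 4)

      extern char __smccc_workaround_1_smc[];

      /* ... and the C side sizes its copy with the same constant. */
      static void install_workaround(void *slot)
      {
              memcpy(slot, __smccc_workaround_1_smc,
                     __SMCCC_WORKAROUND_1_SMC_SZ);
      }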
    • arm64: kvm: Modernize annotation for __bp_harden_hyp_vecs · 6e52aab9
      Committed by Mark Brown
      We have recently introduced new macros for annotating assembly symbols
      for things that aren't C functions, SYM_CODE_START() and SYM_CODE_END(),
      in an effort to clarify and simplify our annotations of assembly files.
      
      Using these for __bp_harden_hyp_vecs is more involved than for most
      symbols, as this symbol is annotated quite unusually: rather than just
      defining the explicit symbol, we define _start and _end symbols which we
      then use to compute the length. This does not play at all nicely with the
      new-style macros. Since the size of the vectors is a known constant that
      won't vary, the simplest thing to do is to drop the separate _start and
      _end symbols and just use a #define for the size.
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Marc Zyngier <maz@kernel.org>
      6e52aab9
    • arm64: kvm: Annotate assembly using modern annotations · 617a2f39
      Committed by Mark Brown
      In an effort to clarify and simplify the annotation of assembly functions
      in the kernel, new macros have been introduced. These replace ENTRY and
      ENDPROC with separate annotations for standard C callable functions,
      data and code with different calling conventions.  Update the more
      straightforward annotations in the kvm code to the new macros.
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Marc Zyngier <maz@kernel.org>
      617a2f39
  10. 05 Jul 2019 (3 commits)
    • KVM: arm64: Consume pending SError as early as possible · 0e5b9c08
      Committed by James Morse
      On systems with v8.2 we replace the 'vaxorcism' of guest SError with an
      alternative sequence that uses the ESB instruction, then reads DISR_EL1.
      This saves the unmasking and remasking of asynchronous exceptions.
      
      We do this after we've saved the guest registers and restored the
      host's. Any SError that becomes pending due to this will be accounted
      to the guest, even though it actually occurred during host execution.
      
      Move the ESB instruction as early as possible. Any guest SError
      will become pending due to this ESB instruction and is then consumed
      into DISR_EL1 before the host touches anything.
      
      This lets us account for host/guest SError precisely on the guest
      exit exception boundary.
      
      Because the ESB instruction now lands in the preamble section of
      the vectors, we need to add it to the unpatched indirect vectors
      too, and to any sequence that may be patched in over the top.
      
      The ESB instruction always lives in the head of the vectors so that it
      comes before any memory write, whereas the register store always lives
      in the tail.
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      0e5b9c08
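
      The v8.2 sequence this entry moves earlier, sketched as a C helper; the
      helper name is illustrative, and the hint encoding is used so that
      assemblers without the RAS extension enabled still accept it:

      #include <stdint.h>

      /* ESB turns a pending SError into a value in DISR_EL1 instead of an
       * exception; reading DISR_EL1 afterwards tells us whether one was
       * pending, so it can be attributed precisely at the exit boundary. */
      static inline uint64_t consume_pending_serror(void)
      {
              uint64_t disr;

              asm volatile(
              "       hint    #16\n"                  /* esb */
              "       mrs     %0, s3_0_c12_c1_1\n"    /* disr_el1, by encoding */
              : "=r" (disr));
              return disr;
      }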
    • KVM: arm64: Make indirect vectors preamble behaviour symmetric · 5d994374
      Committed by James Morse
      The KVM indirect vectors support is a little complicated. Different CPUs
      may use different exception vectors for KVM that are generated at boot.
      Adding new instructions involves checking that all the possible
      combinations do the right thing.
      
      To make changes here easier to review, let's state what we expect of the
      preamble:
        1. The first vector run, must always run the preamble.
        2. Patching the head or tail of the vector shouldn't remove
           preamble instructions.
      
      Today, this is easy as we only have one instruction in the preamble.
      Change the unpatched tail of the indirect vector so that it always
      runs this, regardless of patching.
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      5d994374
    • KVM: arm64: Abstract the size of the HYP vectors pre-amble · 3dbf100b
      Committed by James Morse
      The EL2 vector hardening feature causes KVM to generate vectors for
      each type of CPU present in the system. The generated sequences already
      do some of the early guest-exit work (i.e. saving registers). To avoid
      duplication the generated vectors branch to the original vector just
      after the preamble. This size is hard-coded.
      
      Adding new instructions to the HYP vector causes strange side effects,
      which are difficult to debug as the affected code is patched in at
      runtime.
      
      Add KVM_VECTOR_PREAMBLE to tell kvm_patch_vector_branch() how big
      the preamble is. The valid_vect macro can then validate this at
      build time.
      Reviewed-by: Julien Thierry <julien.thierry@arm.com>
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      3dbf100b
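
      How such a constant gets consumed, sketched below; the two-instruction
      value (matching the ESB + register-stash preamble described in the
      entries above) and the helper are assumptions:

      #define AARCH64_INSN_SIZE       4
      /* preamble = ESB + the first stashing instruction (assumed) */
      #define KVM_VECTOR_PREAMBLE     (2 * AARCH64_INSN_SIZE)

      extern char __kvm_hyp_vector[];

      /* kvm_patch_vector_branch()-style use: aim the generated branch just
       * past the preamble, since the generated vectors already ran it. */
      static unsigned long vector_branch_target(void)
      {
              return (unsigned long)__kvm_hyp_vector + KVM_VECTOR_PREAMBLE;
      }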
  11. 19 Jun 2019 (1 commit)
  12. 20 Feb 2019 (1 commit)
  13. 07 Dec 2018 (1 commit)
    • arm64: entry: Place an SB sequence following an ERET instruction · 679db708
      Committed by Will Deacon
      Some CPUs can speculate past an ERET instruction and potentially perform
      speculative accesses to memory before processing the exception return.
      Since the register state is often controlled by a lower privilege level
      at the point of an ERET, this could potentially be used as part of a
      side-channel attack.
      
      This patch emits an SB sequence after each ERET so that speculation is
      held up on exception return.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      679db708
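
      Both forms of the barrier, sketched; real kernels select between them
      with the alternatives framework, and the raw encoding is used because
      only ARMv8.5-aware assemblers know the sb mnemonic:

      /* SB architecturally prevents speculation past the barrier. On CPUs
       * without the SB extension this encoding is undefined, hence the
       * dsb nsh + isb fallback below. */
      static inline void spec_barrier_sb(void)
      {
              asm volatile(".inst 0xd50330ff" ::: "memory");  /* sb */
      }

      static inline void spec_barrier_fallback(void)
      {
              asm volatile("dsb nsh\n\tisb" ::: "memory");
      }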
  14. 19 Oct 2018 (1 commit)
  15. 01 Jun 2018 (1 commit)
  16. 25 May 2018 (1 commit)
  17. 12 Apr 2018 (1 commit)
  18. 19 Mar 2018 (3 commits)
    • arm64: KVM: Allow far branches from vector slots to the main vectors · 71dcb8be
      Committed by Marc Zyngier
      So far, the branch from the vector slots to the main vectors can be at
      most 4GB away (the reach of ADRP), and this distance is known at compile
      time. If we were to remap the slots to an unrelated VA, things would
      break badly.
      
      A way to achieve VA independence would be to load the absolute
      address of the vectors (__kvm_hyp_vector), either using a constant
      pool or a series of movs, followed by an indirect branch.
      
      This patch implements the latter solution, using another instance
      of a patching callback. Note that since we have to save a register
      pair on the stack, we branch to the *second* instruction in the
      vectors in order to compensate for it. This also results in having
      to adjust this balance in the invalid vector entry point.
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      71dcb8be
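
      A self-contained sketch of generating the "series of movs" this entry
      describes; the hand-rolled encoder stands in for the kernel's insn
      helpers, and the function names are illustrative:

      #include <stdint.h>

      /* Encode "movz/movk xd, #imm16, lsl #(hw * 16)" (64-bit variants). */
      static uint32_t gen_movewide(unsigned rd, uint16_t imm16, unsigned hw,
                                   int keep)
      {
              uint32_t insn = keep ? 0xf2800000u : 0xd2800000u; /* movk : movz */

              insn |= (uint32_t)(hw & 3) << 21;       /* which 16-bit chunk */
              insn |= (uint32_t)imm16 << 5;
              insn |= rd & 0x1f;
              return insn;
      }

      /* Build the VA-independent absolute load: movz for bits [15:0], then
       * movk for each higher chunk; the real patching callback follows this
       * with an indirect branch ("br x0"). */
      static void emit_load_abs(uint32_t *out, unsigned rd, uint64_t addr)
      {
              unsigned hw;

              for (hw = 0; hw < 4; hw++)
                      out[hw] = gen_movewide(rd, (uint16_t)(addr >> (16 * hw)),
                                             hw, hw != 0);
      }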
    • arm64: KVM: Move stashing of x0/x1 into the vector code itself · 7e80f637
      Committed by Marc Zyngier
      All our useful entry points into the hypervisor start by saving x0
      and x1 on the stack. Let's move those into the vectors
      by introducing macros that annotate whether a vector is valid or
      not, thus indicating whether we want to stash registers or not.
      
      The only drawback is that we now also stash registers for el2_error,
      but this should never happen, and we pop them back right at the
      start of the handling sequence.
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Andrew Jones <drjones@redhat.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      7e80f637
    • KVM: arm64: Avoid storing the vcpu pointer on the stack · 4464e210
      Committed by Christoffer Dall
      We already have the percpu area for the host cpu state, which points to
      the VCPU, so there's no need to store the VCPU pointer on the stack on
      every context switch.  We can be a little more clever and just use
      tpidr_el2 for the percpu offset and load the VCPU pointer from the host
      context.
      
      This has the benefit of being able to retrieve the host context even
      when our stack is corrupted, and it has a potential performance benefit
      because we trade a store plus a load for an mrs and a load on a round
      trip to the guest.
      
      This does require us to calculate the percpu offset without including
      the offset from the kernel mapping of the percpu array to the linear
      mapping of the array (which is what we store in tpidr_el1), because a
      PC-relative generated address in EL2 is already giving us the hyp alias
      of the linear mapping of a kernel address.  We do this in
      __cpu_init_hyp_mode() by using kvm_ksym_ref().
      
      The code that accesses ESR_EL2 was previously using an alternative to
      use the _EL1 accessor on VHE systems, but this was actually unnecessary
      as the _EL1 accessor aliases the ESR_EL2 register on VHE, and the _EL2
      accessor does the same thing on both systems.
      
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      Reviewed-by: Andrew Jones <drjones@redhat.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      4464e210
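
      The retrieval path this enables, sketched; kvm_host_cpu_state as the
      per-cpu symbol matches the description above, the rest is illustrative:

      struct kvm_cpu_context;  /* host register state, in the per-cpu area */

      /* With tpidr_el2 holding the per-cpu offset, the host context is a
       * PC-relative symbol address plus that offset: no stack access, so
       * this works even when the stack is corrupted. */
      static inline struct kvm_cpu_context *host_ctxt(void)
      {
              extern struct kvm_cpu_context kvm_host_cpu_state;
              unsigned long off;

              asm("mrs %0, tpidr_el2" : "=r" (off));
              return (struct kvm_cpu_context *)
                      ((char *)&kvm_host_cpu_state + off);
      }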
  19. 07 Feb 2018 (1 commit)
  20. 13 Jan 2018 (2 commits)
    • KVM: arm64: Change hyp_panic()'s dependency on tpidr_el2 · c97e166e
      Committed by James Morse
      Make tpidr_el2 a cpu-offset for per-cpu variables in the same way the
      host uses tpidr_el1. This lets tpidr_el{1,2} have the same value, and
      on VHE they can be the same register.
      
      KVM calls hyp_panic() when anything unexpected happens. This may occur
      while a guest owns the EL1 registers. KVM stashes the vcpu pointer in
      tpidr_el2, which it uses to find the host context in order to restore
      the host EL1 registers before parachuting into the host's panic().
      
      The host context is a struct kvm_cpu_context allocated in the per-cpu
      area, and mapped to hyp. Given the per-cpu offset for this CPU, this is
      easy to find. Change hyp_panic() to take a pointer to the
      struct kvm_cpu_context. Wrap these calls with an asm function that
      retrieves the struct kvm_cpu_context from the host's per-cpu area.
      
      Copy the per-cpu offset from the host's tpidr_el1 into tpidr_el2 during
      kvm init. (Later patches will make this unnecessary for VHE hosts)
      
      We print out the vcpu pointer as part of the panic message. Add a back
      reference to the 'running vcpu' in the host cpu context to preserve this.
      Signed-off-by: James Morse <james.morse@arm.com>
      Reviewed-by: Christoffer Dall <cdall@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      c97e166e
    • KVM: arm64: Store vcpu on the stack during __guest_enter() · 32b03d10
      Committed by James Morse
      KVM uses tpidr_el2 as its private vcpu register, which makes sense for
      non-vhe world switch as only KVM can access this register. This means
      vhe Linux has to use tpidr_el1, which KVM has to save/restore as part
      of the host context.
      
      If the SDEI handler code runs behind KVM's back, it mustn't access any
      per-cpu variables. To allow this on systems with vhe we need to make
      the host use tpidr_el2, sparing KVM from saving/restoring it.
      
      __guest_enter() stores the host_ctxt on the stack; do the same with
      the vcpu.
      Signed-off-by: James Morse <james.morse@arm.com>
      Reviewed-by: Christoffer Dall <cdall@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      32b03d10
  21. 09 Apr 2017 (1 commit)