1. 19 March 2018, 7 commits
  2. 26 February 2018, 1 commit
  3. 12 February 2018, 1 commit
  4. 07 February 2018, 2 commits
  5. 16 January 2018, 2 commits
    • KVM: arm64: Save ESR_EL2 on guest SError · c60590b5
      Committed by James Morse
      When we exit a guest due to an SError, the vcpu fault info isn't updated
      with the ESR. Today this is only done for traps.
      
      The v8.2 RAS Extensions define ISS values for SError. Update the vcpu's
      fault_info with the ESR on SError so that handle_exit() can determine
      if this was a RAS SError and decode its severity.
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      c60590b5
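
      A minimal sketch of the change described above, in the spirit of the commit rather than its exact code: on an SError exit, record ESR_EL2 in the vcpu's fault info the same way trap exits already do. The struct, enum and helper names here are simplified stand-ins.

      struct vcpu_fault_info {
              unsigned long esr_el2;          /* exit syndrome */
              unsigned long far_el2;          /* faulting address, when valid */
      };

      enum exit_reason { EXIT_TRAP, EXIT_SERROR, EXIT_IRQ };  /* stand-ins */

      static inline unsigned long read_esr_el2(void)
      {
              unsigned long val;

              asm volatile("mrs %0, esr_el2" : "=r" (val));
              return val;
      }

      /* Previously only EXIT_TRAP recorded the syndrome; now SError does too,
       * so handle_exit() can decode a RAS SError's severity from the ISS. */
      static void save_exit_syndrome(struct vcpu_fault_info *fault, enum exit_reason why)
      {
              if (why == EXIT_TRAP || why == EXIT_SERROR)
                      fault->esr_el2 = read_esr_el2();
      }
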
    • KVM: arm64: Set an impdef ESR for Virtual-SError using VSESR_EL2. · 4715c14b
      Committed by James Morse
      Prior to v8.2's RAS Extensions, the HCR_EL2.VSE 'virtual SError' feature
      generated an SError with an implementation defined ESR_EL1.ISS, because we
      had no mechanism to specify the ESR value.
      
      On Juno this generates an all-zero ESR; the most significant bit, 'ISV',
      is clear, indicating that the remainder of the ISS field is invalid.
      
      With the RAS Extensions we have a mechanism to specify this value, and the
      most significant bit has a new meaning: 'IDS - Implementation Defined
      Syndrome'. An all-zero SError ESR now means: 'RAS error: Uncategorized'
      instead of 'no valid ISS'.
      
      Add KVM support for the VSESR_EL2 register to specify an ESR value when
      HCR_EL2.VSE generates a virtual SError. Change kvm_inject_vabt() to
      specify an implementation-defined value.
      
      We only need to restore the VSESR_EL2 value when HCR_EL2.VSE is set. KVM
      saves/restores this bit during __{,de}activate_traps(), and hardware clears
      the bit once the guest has consumed the virtual SError.
      
      Future patches may add an API (or KVM CAP) to pend a virtual SError with
      a specified ESR.
      
      Cc: Dongjiu Geng <gengdongjiu@huawei.com>
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: James Morse <james.morse@arm.com>
      Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      4715c14b
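
      A hedged sketch of the mechanism described above. The bit positions (HCR_EL2.VSE as bit 8, the ESR 'IDS' bit as bit 24) are taken from the ARMv8 architecture as understood here and should be treated as illustrative; the struct and function names are simplified stand-ins for the vcpu state and kvm_inject_vabt().

      #define HCR_VSE         (1UL << 8)      /* virtual SError pending (assumed bit) */
      #define ESR_ISS_IDS     (1UL << 24)     /* 'Implementation Defined Syndrome' (assumed bit) */

      struct vserror_state {
              unsigned long hcr_el2;          /* shadow of the guest's HCR_EL2 */
              unsigned long vsesr_el2;        /* value to load into VSESR_EL2 */
      };

      /* Pend a virtual SError with an explicitly implementation-defined syndrome,
       * so an all-zero ESR is never mistaken for 'RAS error: Uncategorized'. */
      static void inject_vabt(struct vserror_state *s)
      {
              s->vsesr_el2 = ESR_ISS_IDS;
              s->hcr_el2 |= HCR_VSE;
              /* VSESR_EL2 only needs restoring while VSE is set; hardware clears
               * VSE once the guest has consumed the virtual SError. */
      }
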
  6. 13 January 2018, 1 commit
    • KVM: arm64: Change hyp_panic()s dependency on tpidr_el2 · c97e166e
      Committed by James Morse
      Make tpidr_el2 a cpu-offset for per-cpu variables in the same way the
      host uses tpidr_el1. This lets tpidr_el{1,2} have the same value, and
      on VHE they can be the same register.
      
      KVM calls hyp_panic() when anything unexpected happens. This may occur
      while a guest owns the EL1 registers. KVM stashes the vcpu pointer in
      tpidr_el2, which it uses to find the host context in order to restore
      the host EL1 registers before parachuting into the host's panic().
      
      The host context is a struct kvm_cpu_context allocated in the per-cpu
      area, and mapped to hyp. Given the per-cpu offset for this CPU, this is
      easy to find. Change hyp_panic() to take a pointer to the
      struct kvm_cpu_context. Wrap these calls with an asm function that
      retrieves the struct kvm_cpu_context from the host's per-cpu area.
      
      Copy the per-cpu offset from the host's tpidr_el1 into tpidr_el2 during
      kvm init. (Later patches will make this unnecessary for VHE hosts)
      
      We print out the vcpu pointer as part of the panic message. Add a back
      reference to the 'running vcpu' in the host cpu context to preserve this.
      Signed-off-by: James Morse <james.morse@arm.com>
      Reviewed-by: Christoffer Dall <cdall@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      c97e166e
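
      A sketch of the idea, assuming simplified names: TPIDR_EL2 is loaded with the same per-cpu offset the host keeps in TPIDR_EL1, so EL2 code can locate the per-cpu host context, and hyp_panic() takes that context as an argument. The init function below would have to run at EL2 (reached via a hyp call in practice).

      struct kvm_cpu_context_sketch {
              unsigned long gp_regs[31];
              void *running_vcpu;             /* back-reference printed on panic */
      };

      static inline unsigned long read_tpidr_el1(void)
      {
              unsigned long v;

              asm volatile("mrs %0, tpidr_el1" : "=r" (v));
              return v;
      }

      static inline void write_tpidr_el2(unsigned long v)
      {
              asm volatile("msr tpidr_el2, %0" : : "r" (v));
      }

      /* Runs at EL2 during KVM init: mirror the host's per-cpu offset so
       * tpidr_el{1,2} hold the same value (and can be one register on VHE). */
      static void copy_percpu_offset_to_hyp(void)
      {
              write_tpidr_el2(read_tpidr_el1());
      }
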
  7. 09 January 2018, 3 commits
  8. 08 January 2018, 1 commit
  9. 30 November 2017, 1 commit
  10. 06 November 2017, 1 commit
    • KVM: arm/arm64: Move timer save/restore out of the hyp code · 688c50aa
      Committed by Christoffer Dall
      As we are about to be lazy with saving and restoring the timer
      registers, we prepare by moving all possible timer configuration logic
      out of the hyp code.  All virtual timer registers can be programmed from
      EL1 and since the arch timer is always a level triggered interrupt we
      can safely do this with interrupts disabled in the host kernel on the
      way to the guest without taking vtimer interrupts in the host kernel
      (yet).
      
      The downside is that the cntvoff register can only be programmed from
      hyp mode, so we jump into hyp mode and back to program it.  This is also
      safe, because the host kernel doesn't use the virtual timer in the KVM
      code.  It may add a small performance penalty, but only until the
      following commits, where we move this operation to vcpu load/put.
      Signed-off-by: Christoffer Dall <cdall@linaro.org>
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      688c50aa
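
      A sketch under the assumptions of the commit message: the virtual timer's control and compare registers are programmed from EL1 with interrupts disabled, while CNTVOFF_EL2 still needs a short excursion to hyp. The struct and the hyp-call wrapper below are hypothetical stand-ins.

      struct vtimer_state {
              unsigned long cntv_ctl;         /* enable / mask / status bits */
              unsigned long cntv_cval;        /* compare value */
              unsigned long cntvoff;          /* guest virtual offset (EL2-only register) */
      };

      static inline void write_cntv_ctl(unsigned long v)
      {
              asm volatile("msr cntv_ctl_el0, %0" : : "r" (v));
      }

      static inline void write_cntv_cval(unsigned long v)
      {
              asm volatile("msr cntv_cval_el0, %0" : : "r" (v));
      }

      /* Hypothetical wrapper: drop into hyp mode only to program CNTVOFF_EL2. */
      void hyp_set_cntvoff(unsigned long cntvoff);

      /* Called on the way into the guest, interrupts disabled in the host. */
      static void vtimer_restore_for_guest(const struct vtimer_state *t)
      {
              hyp_set_cntvoff(t->cntvoff);    /* the only part that needs EL2 */
              write_cntv_cval(t->cntv_cval);  /* programmable from EL1 */
              write_cntv_ctl(t->cntv_ctl);
      }
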
  11. 03 November 2017, 2 commits
    • arm64/sve: KVM: Prevent guests from using SVE · 17eed27b
      Committed by Dave Martin
      Until KVM has full SVE support, guests must not be allowed to
      execute SVE instructions.
      
      This patch enables the necessary traps, and also ensures that the
      traps are disabled again on exit from the guest so that the host
      can still use SVE if it wants to.
      
      On guest exit, high bits of the SVE Zn registers may have been
      clobbered as a side-effect of the execution of FPSIMD instructions in
      the guest.  The existing KVM host FPSIMD restore code is not
      sufficient to restore these bits, so this patch explicitly marks
      the CPU as not containing cached vector state for any task, thus
      forcing a reload on the next return to userspace.  This is an
      interim measure, in advance of adding full SVE awareness to KVM.
      
      This marking of cached vector state in the CPU as invalid is done
      using __this_cpu_write(fpsimd_last_state, NULL) in fpsimd.c.  Due
      to the repeated use of this rather obscure operation, it makes
      sense to factor it out as a separate helper with a clearer name.
      This patch factors it out as fpsimd_flush_cpu_state(), and ports
      all callers to use it.
      
      As a side effect of this refactoring, a this_cpu_write() in
      fpsimd_cpu_pm_notifier() is changed to __this_cpu_write().  This
      should be fine, since cpu_pm_enter() is supposed to be called only
      with interrupts disabled.
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
      Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      17eed27b
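
      A sketch of the trap half of the change, on a non-VHE host, assuming CPTR_EL2.TZ is the SVE trap bit (bit 8, per the architecture as read here); the flush half is simply the fpsimd_flush_cpu_state() helper the commit describes, which writes NULL to the per-cpu fpsimd_last_state pointer.

      #define CPTR_EL2_TZ     (1UL << 8)      /* trap SVE to EL2 (assumed bit position) */

      static inline unsigned long read_cptr_el2(void)
      {
              unsigned long v;

              asm volatile("mrs %0, cptr_el2" : "=r" (v));
              return v;
      }

      static inline void write_cptr_el2(unsigned long v)
      {
              asm volatile("msr cptr_el2, %0" : : "r" (v));
      }

      /* Guest entry: any SVE instruction in the guest now traps to EL2. */
      static void enable_sve_trap(void)
      {
              write_cptr_el2(read_cptr_el2() | CPTR_EL2_TZ);
      }

      /* Guest exit: clear the trap again so the host can keep using SVE. */
      static void disable_sve_trap(void)
      {
              write_cptr_el2(read_cptr_el2() & ~CPTR_EL2_TZ);
      }
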
    • arm64: KVM: Hide unsupported AArch64 CPU features from guests · 93390c0a
      Committed by Dave Martin
      Currently, a guest kernel sees the true CPU feature registers
      (ID_*_EL1) when it reads them using MRS instructions.  This means
      that the guest may observe features that are present in the
      hardware but the host doesn't understand or doesn't provide support
      for.  A guest may legitimately try to use such a feature as per the
      architecture, but use of the feature may trap instead of working
      normally, triggering undef injection into the guest.
      
      This is not a problem for the host, but the guest may go wrong when
      running on newer hardware than the host knows about.
      
      This patch hides from guest VMs any AArch64-specific CPU features
      that the host doesn't support, by exposing to the guest the
      sanitised versions of the registers computed by the cpufeatures
      framework, instead of the true hardware registers.  To achieve
      this, HCR_EL2.TID3 is now set for AArch64 guests, and emulation
      code is added to KVM to report the sanitised versions of the
      affected registers in response to MRS and register reads from
      userspace.
      
      The affected registers are removed from invariant_sys_regs[] (since
      the invariant_sys_regs handling is no longer quite correct for
      them) and added to sys_reg_descs[], with appropriate access(),
      get_user() and set_user() methods.  No runtime vcpu storage is
      allocated for the registers: instead, they are read on demand from
      the cpufeatures framework.  This may need modification in the
      future if there is a need for userspace to customise the features
      visible to the guest.
      
      Attempts by userspace to write the registers are handled similarly
      to the current invariant_sys_regs handling: writes are permitted,
      but only if they don't attempt to change the value.  This is
      sufficient to support VM snapshot/restore from userspace.
      
      Because of the additional registers, restoring a VM on an older
      kernel may not work unless userspace knows how to handle the extra
      VM registers exposed to the KVM user ABI by this patch.
      
      Under the principle of least damage, this patch makes no attempt to
      handle any of the other registers currently in
      invariant_sys_regs[], or to emulate registers for AArch32: however,
      these could be handled in a similar way in future, as necessary.
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      93390c0a
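
      A sketch of the emulation scheme described above, with assumed names: HCR_EL2.TID3 (taken here to be bit 15) makes guest MRS reads of the ID group-3 registers trap, and the handler returns the cpufeatures-sanitised value on demand rather than keeping per-vcpu storage. sanitised_id_reg_value() stands in for the real cpufeatures accessor.

      #define HCR_TID3        (1UL << 15)     /* trap ID group-3 register reads (assumed bit) */

      /* Hypothetical stand-in for the cpufeatures framework's sanitised view. */
      unsigned long sanitised_id_reg_value(unsigned int encoding);

      /* MRS trap handler: no per-vcpu storage, just read the sanitised value. */
      static unsigned long emulate_id_reg_read(unsigned int encoding)
      {
              return sanitised_id_reg_value(encoding);
      }

      /* Userspace writes are accepted only if they don't change the value,
       * which is enough to support VM snapshot/restore. */
      static int set_id_reg_from_user(unsigned int encoding, unsigned long val)
      {
              return (val == sanitised_id_reg_value(encoding)) ? 0 : -1; /* -EINVAL in-kernel */
      }
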
  12. 15 June 2017, 1 commit
  13. 16 May 2017, 1 commit
  14. 03 February 2017, 1 commit
    • arm64: KVM: Save/restore the host SPE state when entering/leaving a VM · f85279b4
      Committed by Will Deacon
      The SPE buffer is virtually addressed, using the page tables of the CPU
      MMU. Unusually, this means that the EL0/1 page table may be live whilst
      we're executing at EL2 on non-VHE configurations. When VHE is in use,
      we can use the same property to profile the guest behind its back.
      
      This patch adds the relevant disabling and flushing code to KVM so that
      the host can make use of SPE without corrupting guest memory, and any
      attempts by a guest to use SPE will result in a trap.
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Alex Bennée <alex.bennee@linaro.org>
      Cc: Christoffer Dall <christoffer.dall@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      f85279b4
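
      A sketch of the host-side save path the commit describes: stop profiling and drain the SPE buffer before handing the CPU to the guest, then restore on the way back. PMSCR_EL1 and the psb/dsb sequence follow the SPE architecture as understood here; the exact kernel sequence differs, so treat this as illustrative.

      static inline unsigned long read_pmscr_el1(void)
      {
              unsigned long v;

              asm volatile("mrs %0, pmscr_el1" : "=r" (v));
              return v;
      }

      static inline void write_pmscr_el1(unsigned long v)
      {
              asm volatile("msr pmscr_el1, %0" : : "r" (v));
      }

      static unsigned long host_pmscr;        /* saved host profiling control */

      static void spe_save_host_state(void)
      {
              host_pmscr = read_pmscr_el1();
              write_pmscr_el1(0);                             /* stop profiling */
              asm volatile("psb csync" : : : "memory");       /* push out buffered records */
              asm volatile("dsb nsh" : : : "memory");         /* wait for the buffer writes */
      }

      static void spe_restore_host_state(void)
      {
              write_pmscr_el1(host_pmscr);
      }
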
  15. 09 December 2016, 1 commit
    • arm64: KVM: pmu: Reset PMSELR_EL0.SEL to a sane value before entering the guest · 21cbe3cc
      Committed by Marc Zyngier
      The ARMv8 architecture allows the cycle counter to be configured
      by setting PMSELR_EL0.SEL==0x1f and then accessing PMXEVTYPER_EL0,
      hence accessing PMCCFILTR_EL0. But it disallows the use of
      PMSELR_EL0.SEL==0x1f to access the cycle counter itself through
      PMXEVCNTR_EL0.
      
      Linux itself doesn't violate this rule, but we may end up with
      PMSELR_EL0.SEL being set to 0x1f when we enter a guest. If that
      guest accesses PMXEVCNTR_EL0, the access may UNDEF at EL1,
      despite the guest not having done anything wrong.
      
      In order to avoid this unfortunate course of events (haha!), let's
      sanitize PMSELR_EL0 on guest entry. This ensures that the guest
      won't explode unexpectedly.
      
      Cc: stable@vger.kernel.org #4.6+
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      21cbe3cc
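
      A minimal sketch of the sanitisation described above, with a simplified accessor: PMSELR_EL0 is reset on every guest entry so stale host state (SEL==0x1f) can never make a guest's PMXEVCNTR_EL0 access UNDEF.

      static inline void write_pmselr_el0(unsigned long v)
      {
              asm volatile("msr pmselr_el0, %0" : : "r" (v));
      }

      /* Called on guest entry: 0 selects event counter 0, a value for which a
       * subsequent guest PMXEVCNTR_EL0 access is architecturally well defined. */
      static void sanitize_pmselr_on_guest_entry(void)
      {
              write_pmselr_el0(0);
      }
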
  16. 17 November 2016, 1 commit
    • arm64: Support systems without FP/ASIMD · 82e0191a
      Committed by Suzuki K Poulose
      The arm64 kernel assumes that FP/ASIMD units are always present
      and accesses the FP/ASIMD specific registers unconditionally. This
      could cause problems when they are absent. This patch adds support
      for handling systems without FP/ASIMD by skipping the register
      accesses within the kernel. For KVM, we trap the accesses
      to FP/ASIMD and inject an undefined instruction exception to the VM.
      
      The callers of the exported kernel_neon_begin_partial() should
      make sure that FP/ASIMD is supported.
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Christoffer Dall <christoffer.dall@linaro.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      [catalin.marinas@arm.com: add comment on the ARM64_HAS_NO_FPSIMD conflict and the new location]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      82e0191a
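
      A sketch of the KVM side of the behaviour described above, with stand-in helpers: when a trapped FP/ASIMD access arrives on a system without the unit, inject an undefined-instruction exception instead of trying to switch register state.

      #include <stdbool.h>

      struct vcpu;                                            /* opaque for this sketch */

      bool cpu_has_fpsimd(void);                              /* capability check, stand-in */
      void inject_undef_into_guest(struct vcpu *vcpu);        /* stand-in */
      void switch_in_guest_fpsimd(struct vcpu *vcpu);         /* stand-in */

      /* Handler for a guest FP/ASIMD access that trapped to the hypervisor. */
      static void handle_fpsimd_trap(struct vcpu *vcpu)
      {
              if (!cpu_has_fpsimd()) {
                      /* No unit: the guest's instruction cannot be made to work. */
                      inject_undef_into_guest(vcpu);
                      return;
              }
              switch_in_guest_fpsimd(vcpu);                   /* normal lazy-switch path */
      }
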
  17. 22 September 2016, 1 commit
    • arm64: KVM: Use static keys for selecting the GIC backend · 5a7a8426
      Committed by Vladimir Murzin
      Currently the GIC backend is selected via the alternatives framework, and
      this is fine. We are going to introduce vgic-v3 to the 32-bit world, where
      we don't have the patching framework at hand, so we can either check
      support for GICv3 every time we need to choose which backend to use, or
      try to optimise it by using static keys. The latter looks quite
      promising because we can share the logic involved in selecting the GIC
      backend between architectures if both use static keys.
      
      This patch moves arm64 from alternative to static keys framework for
      selecting GIC backend. For that we embed static key into vgic_global
      and enable the key during vgic initialisation based on what has
      already been exposed by the host GIC driver.
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
      5a7a8426
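
      A sketch of the static-key selection described above, using the kernel's jump-label API; the struct and function names below are simplified stand-ins for vgic_global and the real GICv2/GICv3 save paths.

      #include <linux/jump_label.h>

      void gicv2_save_state(void);    /* stand-in for the v2 backend */
      void gicv3_save_state(void);    /* stand-in for the v3 backend */

      static struct {
              struct static_key_false gicv3_cpuif;    /* enabled once at init */
      } vgic_global_sketch = {
              .gicv3_cpuif = STATIC_KEY_FALSE_INIT,
      };

      /* Called once during vgic initialisation, based on what the host GIC
       * driver exposed; after this the branch below is patched in place. */
      static void vgic_select_backend(bool have_gicv3_cpuif)
      {
              if (have_gicv3_cpuif)
                      static_branch_enable(&vgic_global_sketch.gicv3_cpuif);
      }

      /* World-switch fast path: no memory load needed to pick the backend. */
      static void vgic_save_state(void)
      {
              if (static_branch_unlikely(&vgic_global_sketch.gicv3_cpuif))
                      gicv3_save_state();
              else
                      gicv2_save_state();
      }
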
  18. 08 September 2016, 5 commits
  19. 17 August 2016, 1 commit
  20. 04 July 2016, 1 commit
  21. 22 June 2016, 1 commit
  22. 01 March 2016, 4 commits