  1. 26 July 2018, 1 commit
  2. 21 July 2018, 1 commit
    • arm64: KVM: Cleanup tpidr_el2 init on non-VHE · 9bc03f1d
      Committed by Marc Zyngier
      When running on a non-VHE system, we initialize tpidr_el2 to
      contain the per-CPU offset required to reach per-cpu variables.
      
      Actually, we initialize it twice: the first time as part of the
      EL2 initialization, by copying tpidr_el1 into its el2 counterpart,
      and another time by calling into __kvm_set_tpidr_el2.
      
      It turns out that the first part is wrong, as it includes the
      distance between the kernel mapping and the linear mapping, while
      EL2 only cares about the linear mapping. This was the last vestige
      of the first per-cpu use of tpidr_el2 that came in with SDEI.
      The only caller then was hyp_panic(), and it's now using the
      pc-relative get_host_ctxt() stuff, instead of kimage addresses
      from the literal pool.
      
      It is not a big deal, as we override it straight away, but it is
      slightly confusing. In order to clear said confusion, let's
      set this directly as part of the hyp-init code, and drop the
      ad-hoc HYP helper.
      Reviewed-by: James Morse <james.morse@arm.com>
      Acked-by: Christoffer Dall <christoffer.dall@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
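      As a rough illustration of what the per-CPU offset in tpidr_el2 buys the
      non-VHE hyp code, here is a minimal C sketch; hyp_percpu_offset() and
      hyp_this_cpu_ptr() are illustrative names, not the kernel's exact helpers:

          /* Sketch only: read the linear-map per-CPU offset stashed in TPIDR_EL2. */
          static inline unsigned long hyp_percpu_offset(void)
          {
                  unsigned long off;

                  /* hyp-init now writes this CPU's per-CPU offset here directly. */
                  asm volatile("mrs %0, tpidr_el2" : "=r" (off));
                  return off;
          }

          /* Resolve a per-CPU variable from EL2: base address plus per-CPU offset. */
          #define hyp_this_cpu_ptr(sym) \
                  ((typeof(&(sym)))((unsigned long)&(sym) + hyp_percpu_offset()))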
  3. 06 July 2018, 1 commit
  4. 01 June 2018, 2 commits
  5. 25 May 2018, 7 commits
    • KVM: arm64: Invoke FPSIMD context switch trap from C · cf412b00
      Committed by Dave Martin
      The conversion of the FPSIMD context switch trap code to C has added
      some overhead to calling it, due to the need to save registers that
      the procedure call standard defines as caller-saved.
      
      So, perhaps it is no longer worth invoking this trap handler quite
      so early.
      
      Instead, we can invoke it from fixup_guest_exit(), with little
      likelihood of increasing the overhead much further.
      
      As a convenience, this patch gives __hyp_switch_fpsimd() the same
      return semantics as fixup_guest_exit().  For now there is no
      possibility of a spurious FPSIMD trap, so the function always
      returns true, but this allows it to be tail-called with a single
      return statement.
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
      Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
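      A hedged sketch of the resulting shape; exit_was_fpsimd_trap() is a
      made-up predicate standing in for the real trap-class check:

          /* Sketch: true means "handled, re-enter the guest"; false means return to host. */
          static bool __hyp_switch_fpsimd(struct kvm_vcpu *vcpu)
          {
                  /* ... lazily switch FPSIMD registers from host to guest ... */
                  return true;    /* no spurious FPSIMD traps yet, so always resume */
          }

          static bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
          {
                  /* Matching return semantics allow a single-statement tail call. */
                  if (*exit_code == ARM_EXCEPTION_TRAP && exit_was_fpsimd_trap(vcpu))
                          return __hyp_switch_fpsimd(vcpu);

                  /* ... remaining exit fixups ... */
                  return false;
          }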
    • KVM: arm64: Fold redundant exit code checks out of fixup_guest_exit() · 7846b311
      Committed by Dave Martin
      The entire tail of fixup_guest_exit() is contained in if statements
      of the form if (x && *exit_code == ARM_EXCEPTION_TRAP).  As a result,
      we can check just once and bail out of the function early, allowing
      the remaining if conditions to be simplified.
      
      The only awkward case is where *exit_code is changed to
      ARM_EXCEPTION_EL1_SERROR in the case of an illegal GICv2 CPU
      interface access: in that case, the GICv3 trap handling code is
      skipped using a goto.  This avoids pointlessly evaluating the
      static branch check for the GICv3 case, even though we can't have
      vgic_v2_cpuif_trap and vgic_v3_cpuif_trap true simultaneously
      unless we have a GICv3 and GICv2 on the host: that sounds stupid,
      but I haven't satisfied myself that it can't happen.
      
      No functional change.
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
      Acked-by: Christoffer Dall <christoffer.dall@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
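      A simplified sketch of the control flow after the change; the
      exit_is_*() and handle_*() helpers are placeholders for the real vgic
      proxy code, not actual kernel functions:

          static bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
          {
                  /* All of the checks below only apply to trap exits: test once. */
                  if (*exit_code != ARM_EXCEPTION_TRAP)
                          goto exit;

                  if (static_branch_unlikely(&vgic_v2_cpuif_trap) &&
                      exit_is_gicv2_cpuif_access(vcpu)) {
                          if (!handle_gicv2_proxy(vcpu)) {
                                  /* Illegal GICv2 access: promote to an SError ... */
                                  *exit_code = ARM_EXCEPTION_EL1_SERROR;
                                  goto exit;      /* ... and skip the GICv3 check */
                          }
                          return true;
                  }

                  if (static_branch_unlikely(&vgic_v3_cpuif_trap) &&
                      exit_is_gicv3_sysreg_access(vcpu))
                          return handle_gicv3_trap(vcpu);

          exit:
                  return false;   /* return to the host run loop */
          }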
    • KVM: arm64: Remove redundant *exit_code changes in fpsimd_guest_exit() · ba4f4cb0
      Committed by Dave Martin
      In fixup_guest_exit(), there are a couple of cases where after
      checking what the exit code was, we assign it explicitly with the
      value it already had.
      
      Assuming this is not indicative of a bug, these assignments are not
      needed.
      
      This patch removes the redundant assignments, and simplifies some
      if-nesting that becomes trivial as a result.
      
      No functional change.
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Acked-by: Christoffer Dall <christoffer.dall@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
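      The pattern in question, sketched with a hypothetical try_handle_trap()
      helper rather than the actual call sites:

          /* Before: the value written back is the value already there. */
          if (*exit_code == ARM_EXCEPTION_TRAP) {
                  if (!try_handle_trap(vcpu))
                          *exit_code = ARM_EXCEPTION_TRAP;        /* no-op */
          }

          /* After: the assignment goes away and the nesting collapses. */
          if (*exit_code == ARM_EXCEPTION_TRAP)
                  try_handle_trap(vcpu);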
    • KVM: arm64: Save host SVE context as appropriate · 85acda3b
      Committed by Dave Martin
      This patch adds SVE context saving to the hyp FPSIMD context switch
      path.  This means that it is no longer necessary to save the host
      SVE state in advance of entering the guest, when in use.
      
      In order to avoid adding pointless complexity to the code, VHE is
      assumed if SVE is in use.  VHE is an architectural prerequisite for
      SVE, so there is no good reason to turn CONFIG_ARM64_VHE off in
      kernels that support both SVE and KVM.
      
      Historically, software models exist that can expose the
      architecturally invalid configuration of SVE without VHE, so if
      this situation is detected at kvm_init() time then KVM will be
      disabled.
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
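      A hedged sketch of the init-time check the last paragraph describes,
      built on the kernel's system_supports_sve() and has_vhe() predicates;
      placement and error handling are simplified:

          int kvm_arch_init(void *opaque)         /* heavily abridged */
          {
                  /*
                   * SVE architecturally requires VHE.  If a broken model exposes
                   * SVE without VHE, refuse to initialise KVM rather than grow
                   * extra complexity in the hyp FPSIMD/SVE switch code.
                   */
                  if (system_supports_sve() && !has_vhe())
                          return -ENODEV;

                  /* ... normal KVM initialisation continues ... */
                  return 0;
          }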
    • KVM: arm64: Optimise FPSIMD handling to reduce guest/host thrashing · e6b673b7
      Committed by Dave Martin
      This patch refactors KVM to align the host and guest FPSIMD
      save/restore logic with each other for arm64.  This reduces the
      number of redundant save/restore operations that must occur, and
      reduces the common-case IRQ blackout time during guest exit storms
      by saving the host state lazily and optimising away the need to
      restore the host state before returning to the run loop.
      
      Four hooks are defined in order to enable this:
      
       * kvm_arch_vcpu_run_map_fp():
         Called on PID change to map necessary bits of current to Hyp.
      
       * kvm_arch_vcpu_load_fp():
         Set up FP/SIMD for entering the KVM run loop (parse as
         "vcpu_load fp").
      
       * kvm_arch_vcpu_ctxsync_fp():
         Get FP/SIMD into a safe state for re-enabling interrupts after a
         guest exit back to the run loop.
      
         For arm64 specifically, this involves updating the host kernel's
         FPSIMD context tracking metadata so that kernel-mode NEON use
         will cause the vcpu's FPSIMD state to be saved back correctly
         into the vcpu struct.  This must be done before re-enabling
         interrupts because kernel-mode NEON may be used by softirqs.
      
       * kvm_arch_vcpu_put_fp():
         Save guest FP/SIMD state back to memory and dissociate from the
         CPU ("vcpu_put fp").
      
      Also, the arm64 FPSIMD context switch code is updated to enable it
      to save back FPSIMD state for a vcpu, not just current.  A few
      helpers drive this:
      
       * fpsimd_bind_state_to_cpu(struct user_fpsimd_state *fp):
         mark this CPU as having context fp (which may belong to a vcpu)
         currently loaded in its registers.  This is the non-task
         equivalent of the static function fpsimd_bind_to_cpu() in
         fpsimd.c.
      
       * task_fpsimd_save():
         exported to allow KVM to save the guest's FPSIMD state back to
         memory on exit from the run loop.
      
       * fpsimd_flush_state():
         invalidate any context's FPSIMD state that is currently loaded.
         Used to disassociate the vcpu from the CPU regs on run loop exit.
      
      These changes allow the run loop to enable interrupts (and thus
      softirqs that may use kernel-mode NEON) without having to save the
      guest's FPSIMD state eagerly.
      
      Some new vcpu_arch fields are added to make all this work.  Because
      host FPSIMD state can now be saved back directly into current's
      thread_struct as appropriate, host_cpu_context is no longer used
      for preserving the FPSIMD state.  However, it is still needed for
      preserving other things such as the host's system registers.  To
      avoid ABI churn, the redundant storage space in host_cpu_context is
      not removed for now.
      
      arch/arm is not addressed by this patch and continues to use its
      current save/restore logic.  It could provide implementations of
      the helpers later if desired.
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
      Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
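      A condensed sketch of where the four hooks sit relative to the run loop;
      the loop itself is heavily abridged, and kvm_arch_vcpu_run_map_fp()
      (called on PID change, before the loop is entered) is only noted in a
      comment:

          int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)      /* abridged sketch */
          {
                  int ret = 1;

                  /* kvm_arch_vcpu_run_map_fp() ran earlier, on PID change. */
                  kvm_arch_vcpu_load_fp(vcpu);            /* "vcpu_load fp" */

                  while (ret > 0) {
                          local_irq_disable();
                          ret = kvm_call_hyp(__kvm_vcpu_run, vcpu);

                          /*
                           * Make the vcpu's FPSIMD state safe against kernel-mode
                           * NEON in softirqs before interrupts are re-enabled.
                           */
                          kvm_arch_vcpu_ctxsync_fp(vcpu);
                          local_irq_enable();
                  }

                  kvm_arch_vcpu_put_fp(vcpu);             /* "vcpu_put fp": save and unbind */
                  return ret;
          }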
    • KVM: arm64: Repurpose vcpu_arch.debug_flags for general-purpose flags · fa89d31c
      Committed by Dave Martin
      In struct vcpu_arch, the debug_flags field is used to store
      debug-related flags about the vcpu state.
      
      Since we are about to add some more flags related to FPSIMD and
      SVE, it makes sense to add them to the existing flags field rather
      than adding new fields.  Since there is only one debug_flags flag
      defined so far, there is plenty of free space for expansion.
      
      In preparation for adding more flags, this patch renames the
      debug_flags field to simply "flags", and updates comments
      appropriately.
      
      The flag definitions are also moved to <asm/kvm_host.h>, since
      their presence in <asm/kvm_asm.h> was for purely historical
      reasons:  these definitions are not used from asm any more, and not
      very likely to be as more Hyp asm is migrated to C.
      
      KVM_ARM64_DEBUG_DIRTY_SHIFT has not been used since commit
      1ea66d27 ("arm64: KVM: Move away from the assembly version of
      the world switch"), so this patch gets rid of that too.
      
      No functional change.
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
      Acked-by: Christoffer Dall <christoffer.dall@arm.com>
      [maz: fixed minor conflict]
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
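      In outline, the change amounts to the following (field layout is
      illustrative, not the exact struct definition):

          struct kvm_vcpu_arch {
                  /* ... */
                  u64 flags;      /* was "debug_flags"; holds KVM_ARM64_DEBUG_DIRTY today,
                                   * with FPSIMD/SVE flags to follow */
          };

          /* Now lives in <asm/kvm_host.h>; the unused _SHIFT define is dropped. */
          #define KVM_ARM64_DEBUG_DIRTY   (1 << 0)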
    • KVM: arm64: Convert lazy FPSIMD context switch trap to C · ceda9fff
      Committed by Dave Martin
      To make the lazy FPSIMD context switch trap code easier to hack on,
      this patch converts it to C.
      
      This is not amazingly efficient, but the trap should typically only
      be taken once per host context switch.
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
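      A sketch, close in spirit but not identical to the new C handler: on the
      first FPSIMD access the trap is disabled, the host registers are saved
      lazily, and the guest's registers are loaded:

          void __hyp_text __hyp_switch_fpsimd(struct kvm_vcpu *vcpu)      /* sketch */
          {
                  /* Stop trapping FPSIMD accesses for this guest entry. */
                  if (has_vhe())
                          write_sysreg(read_sysreg(cpacr_el1) | CPACR_EL1_FPEN, cpacr_el1);
                  else
                          write_sysreg(read_sysreg(cptr_el2) & ~(u64)CPTR_EL2_TFP, cptr_el2);
                  isb();

                  /* Lazily save the host state, then load the guest's registers. */
                  __fpsimd_save_state(&vcpu->arch.host_cpu_context->gp_regs.fp_regs);
                  __fpsimd_restore_state(&vcpu->arch.ctxt.gp_regs.fp_regs);
          }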
  6. 04 May 2018, 1 commit
    • arm64: vgic-v2: Fix proxying of cpuif access · b220244d
      Committed by James Morse
      Proxying the cpuif accesses at EL2 makes use of vcpu_data_guest_to_host
      and co, which check the endianness, which call into vcpu_read_sys_reg...
      which isn't mapped at EL2 (it was inlined before, and got moved OoL
      with the VHE optimizations).
      
      The result is of course a nice panic. Let's add some specialized
      cruft to keep the broken platforms that require this hack alive.
      
      But, this code used vcpu_data_guest_to_host(), which expected us to
      write the value to host memory, instead we have trapped the guest's
      read or write to an mmio-device, and are about to replay it using the
      host's readl()/writel() which also perform swabbing based on the host
      endianness. This goes wrong when both host and guest are big-endian,
      as readl()/writel() will undo the guest's swabbing, causing the
      big-endian value to be written to device-memory.
      
      What needs doing?
      A big-endian guest will have pre-swabbed data before storing; undo this.
      If it's necessary for the host, writel() will re-swab it.
      
      For a read a big-endian guest expects to swab the data after the load.
      The host's readl() will correct for host endianness, giving us the
      device-memory's value in the register. For a big-endian guest, swab it
      as if we'd only done the load.
      
      For a little-endian guest, nothing needs doing as readl()/writel() leave
      the correct device-memory value in registers.
      
      Tested on Juno with that rarest of things: a big-endian 64K host.
      Based on a patch from Marc Zyngier.
      Reported-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Fixes: bf8feb39 ("arm64: KVM: vgic-v2: Add GICV access from HYP")
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
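      A minimal sketch of the swabbing rule spelled out above for the 32-bit
      GICV accesses; vcpu_is_be() is a placeholder for the real "is this vcpu
      big-endian" check, not an actual kernel helper:

          /* Replaying a trapped guest store to the GICV interface (sketch). */
          static void proxy_gicv_write32(struct kvm_vcpu *vcpu, void __iomem *addr, u32 data)
          {
                  if (vcpu_is_be(vcpu))           /* guest pre-swabbed the data: undo it */
                          data = swab32(data);

                  writel_relaxed(data, addr);     /* re-swabs again if the host needs it */
          }

          /* Replaying a trapped guest load (sketch). */
          static u32 proxy_gicv_read32(struct kvm_vcpu *vcpu, void __iomem *addr)
          {
                  u32 data = readl_relaxed(addr); /* device value, corrected for the host */

                  if (vcpu_is_be(vcpu))           /* guest will not swab again: do it here */
                          data = swab32(data);

                  return data;
          }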
  7. 12 April 2018, 2 commits
  8. 28 March 2018, 1 commit
  9. 20 March 2018, 1 commit
  10. 19 March 2018, 23 commits