1. 21 Feb, 2019: 31 commits
  2. 12 Feb, 2019: 9 commits
    •
      KVM: VMX: Use vcpu->arch.regs directly when saving/loading guest state · d5589204
      Authored by Sean Christopherson
      ...now that all other references to struct vcpu_vmx have been removed.
      
      Note that 'vmx' still needs to be passed into the asm blob in _ASM_ARG1
      as it is consumed by vmx_update_host_rsp().  And similar to that code,
      use _ASM_ARG2 in the assembly code to prepare for moving to proper asm,
      while explicitly referencing the exact registers in the clobber list for
      clarity in the short term and to avoid additional preprocessor games.
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      d5589204
    •
      KVM: VMX: Don't save guest registers after VM-Fail · f78d0971
      Authored by Sean Christopherson
      A failed VM-Enter (obviously) didn't succeed, meaning the CPU never
      executed an instruction in guest mode and so can't have changed the
      general-purpose registers.
      
      In addition to saving some instructions in the VM-Fail case, this also
      provides a separate path entirely and thus an opportunity to propagate
      the fail condition to vmx->fail via register without introducing undue
      pain.  Using a register, as opposed to directly referencing vmx->fail,
      eliminates the need to pass the offset of 'fail', which will simplify
      moving the code to proper assembly in future patches.
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      f78d0971
    •
      KVM: VMX: Invert the ordering of saving guest/host scratch reg at VM-Enter · 217aaff5
      Authored by Sean Christopherson
      Switching the ordering allows for an out-of-line path for VM-Fail
      that elides saving guest state but still shares the register clearing
      with the VM-Exit path.
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      217aaff5
    •
      KVM: VMX: Pass "launched" directly to the vCPU-run asm blob · c9afc58c
      Authored by Sean Christopherson
      ...and remove struct vcpu_vmx's temporary __launched variable.
      
      Eliminating __launched is a bonus, the real motivation is to get to the
      point where the only reference to struct vcpu_vmx in the asm code is
      to vcpu.arch.regs, which will simplify moving the blob to a proper asm
      file.  Note this also means the approach is deliberately different from
      what is used in nested_vmx_check_vmentry_hw().
      
      Use BL as it is a callee-saved register in both 32-bit and 64-bit ABIs,
      i.e. it can't be modified by vmx_update_host_rsp(), to avoid having to
      temporarily save/restore the launched flag.
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Reviewed-by: Jim Mattson <jmattson@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      c9afc58c
    •
      KVM: VMX: Update VMCS.HOST_RSP via helper C function · c09b03eb
      Authored by Sean Christopherson
      Providing a helper function to update HOST_RSP is visibly easier to
      read, and more importantly (for the future) eliminates two arguments to
      the VM-Enter assembly blob.  Reducing the number of arguments to the asm
      blob is for all intents and purposes a prerequisite to moving the code
      to a proper assembly routine.  It's not truly mandatory, but it greatly
      simplifies the future code, and the cost of the extra CALL+RET is
      negligible in the grand scheme.
      
      Note that although _ASM_ARG[1-3] can be used in the inline asm itself,
      the input/output constraints need to be manually defined.  gcc will
      actually compile with _ASM_ARG[1-3] specified as constraints, but what
      it ends up doing with the bogus constraint is unknown.
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      c09b03eb
    •
      KVM: VMX: Load/save guest CR2 via C code in __vmx_vcpu_run() · 47e97c09
      Authored by Sean Christopherson
      ...to eliminate its parameter and struct vcpu_vmx offset definition
      from the assembly blob.  Accessing CR2 from C versus assembly doesn't
      change the likelihood of taking a page fault (and modifying CR2) while
      it's loaded with the guest's value, so long as we don't do anything
      silly between accessing CR2 and VM-Enter/VM-Exit.
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Reviewed-by: Jim Mattson <jmattson@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      47e97c09
    •
      KVM: nVMX: Cache host_rsp on a per-VMCS basis · 5a878160
      Authored by Sean Christopherson
      Currently, host_rsp is cached on a per-vCPU basis, i.e. it's stored in
      struct vcpu_vmx.  In non-nested usage the caching is for all intents
      and purposes 100% effective, e.g. only the first VMLAUNCH needs to
      synchronize VMCS.HOST_RSP since the call stack to vmx_vcpu_run() is
      identical each and every time.  But when running a nested guest, KVM
      must invalidate the cache when switching the current VMCS as it can't
      guarantee the new VMCS has the same HOST_RSP as the previous VMCS.  In
      other words, the cache loses almost all of its efficacy when running a
      nested VM.
      
      Move host_rsp to struct vmcs_host_state, which is per-VMCS, so that it
      is cached on a per-VMCS basis and restores its 100% hit rate when
      nested VMs are in play.
      
      Note that the host_rsp cache for vmcs02 essentially "breaks" when
      nested early checks are enabled as nested_vmx_check_vmentry_hw() will
      see a different RSP at the time of its VM-Enter.  While it's possible
      to avoid even that VMCS.HOST_RSP synchronization, e.g. by employing a
      dedicated VM-Exit stack, there is little motivation for doing so as
      the overhead of two VMWRITEs (~55 cycles) is dwarfed by the overhead
      of the extra VMX transition (600+ cycles) and is a proverbial drop in
      the ocean relative to the total cost of a nested transition (10s of
      thousands of cycles).
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Reviewed-by: Jim Mattson <jmattson@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      5a878160
    •
      KVM: nVMX: Let the compiler select the reg for holding HOST_RSP · fbda0fd3
      Authored by Sean Christopherson
      ...and provide an explicit name for the constraint.  Naming the input
      constraint makes the code self-documenting and also avoids the fragility
      of numerically referring to constraints, e.g. %4 breaks badly whenever
      the constraints are modified.
      
      Explicitly using RDX was inherited from vCPU-run, i.e. completely
      arbitrary.  Even vCPU-run doesn't truly need to explicitly use RDX, but
      doing so is more robust as vCPU-run needs tight control over its
      register usage.
      
      Note that while the naming "conflict" between host_rsp and HOST_RSP
      is slightly confusing, the former will be renamed slightly in a
      future patch, at which point HOST_RSP is absolutely what is desired.
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Reviewed-by: Jim Mattson <jmattson@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      fbda0fd3
    •
      KVM: nVMX: Reference vmx->loaded_vmcs->launched directly · 74dfa278
      Authored by Sean Christopherson
      Temporarily propagating vmx->loaded_vmcs->launched to vmx->__launched
      is not functionally necessary, but rather was done historically to
      avoid passing both 'vmx' and 'loaded_vmcs' to the vCPU-run asm blob.
      Nested early checks inherited this behavior by virtue of copy+paste.
      
      A future patch will move HOST_RSP caching to be per-VMCS, i.e. store
      'host_rsp' in loaded VMCS.  Now that the reference to 'vmx->fail' is
      also gone from nested early checks, referencing 'loaded_vmcs' directly
      means we can drop the 'vmx' reference when introducing per-VMCS RSP
      caching.  And it means __launched can be dropped from struct vcpu_vmx
      if/when vCPU-run receives similar treatment.
      
      Note the use of a named register constraint for 'loaded_vmcs'.  Using
      RCX to hold 'vmx' was inherited from vCPU-run.  In the vCPU-run case,
      the scratch register needs to be explicitly defined as it is crushed
      when loading guest state, i.e. deferring to the compiler would corrupt
      the pointer.  Since nested early checks never load guest state, it's
      a-ok to let the compiler pick any register.  Naming the constraint
      avoids the fragility of referencing constraints via %1, %2, etc., which
      breaks horribly when modifying constraints, and generally makes the asm
      blob more readable.
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Reviewed-by: Jim Mattson <jmattson@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      74dfa278