1. 18 Jun, 2021 (1 commit)
  2. 25 May, 2021 (1 commit)
  3. 07 May, 2021 (7 commits)
    • KVM: SVM: Move GHCB unmapping to fix RCU warning · ce7ea0cf
      Tom Lendacky authored
      When an SEV-ES guest is running, the GHCB is unmapped as part of the
      vCPU run support. However, kvm_vcpu_unmap() triggers an RCU dereference
      warning with CONFIG_PROVE_LOCKING=y because the SRCU lock is released
      before invoking the vCPU run support.
      
      Move the GHCB unmapping into the prepare_guest_switch callback, which is
      invoked while still holding the SRCU lock, eliminating the RCU dereference
      warning.
      
      Fixes: 291bd20d ("KVM: SVM: Add initial support for a VMGEXIT VMEXIT")
      Reported-by: Borislav Petkov <bp@alien8.de>
      Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
      Message-Id: <b2f9b79d15166f2c3e4375c0d9bc3268b7696455.1620332081.git.thomas.lendacky@amd.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86: Prevent KVM SVM from loading on kernels with 5-level paging · 03ca4589
      Sean Christopherson authored
      Disallow loading KVM SVM if 5-level paging is supported.  In theory, NPT
      for L1 should simply work, but there are unknowns with respect to how
      the guest's MAXPHYADDR will be handled by hardware.
      
      Nested NPT is more problematic, as running an L1 VMM that is using
      2-level page tables requires stacking single-entry PDP and PML4 tables in
      KVM's NPT for L2, as there are no equivalent entries in L1's NPT to
      shadow.  Barring hardware magic, for 5-level paging, KVM would need to
      stack another layer to handle PML5.
      
      Opportunistically rename the lm_root pointer, which is used for the
      aforementioned stacking when shadowing 2-level L1 NPT, to pml4_root to
      call out that it's specifically for PML4.
      Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20210505204221.1934471-1-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86: Tie Intel and AMD behavior for MSR_TSC_AUX to guest CPU model · 61a05d44
      Sean Christopherson authored
      Squish the Intel and AMD emulation of MSR_TSC_AUX together and tie it to
      the guest CPU model instead of the host CPU behavior.  While not strictly
      necessary to avoid guest breakage, emulating cross-vendor "architecture"
      will provide consistent behavior for the guest, e.g. WRMSR fault behavior
      won't change if the vCPU is migrated to a host with divergent behavior.
      
      Note, the "new" kvm_is_supported_user_return_msr() checks do not add new
      functionality on either SVM or VMX.  On SVM, the equivalent was
      "tsc_aux_uret_slot < 0", and on VMX the check was buried in the
      vmx_find_uret_msr() call at the find_uret_msr label.
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20210504171734.1434054-15-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86: Move uret MSR slot management to common x86 · e5fda4bb
      Sean Christopherson authored
      Now that SVM and VMX both probe MSRs before "defining" user return slots
      for them, consolidate the code for probe+define into common x86 and
      eliminate the odd behavior of having the vendor code define the slot for
      a given MSR.
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20210504171734.1434054-14-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86: Add support for RDPID without RDTSCP · 36fa06f9
      Sean Christopherson authored
      Allow userspace to enable RDPID for a guest without also enabling RDTSCP.
      Aside from checking for RDPID support in the obvious flows, VMX also needs
      to set ENABLE_RDTSCP=1 when RDPID is exposed.
      
      For the record, there is no known scenario where enabling RDPID without
      RDTSCP is desirable.  But, both AMD and Intel architectures allow for the
      condition, i.e. this is purely to make KVM more architecturally accurate.
      
      Fixes: 41cd02c6 ("kvm: x86: Expose RDPID in KVM_GET_SUPPORTED_CPUID")
      Cc: stable@vger.kernel.org
      Reported-by: Reiji Watanabe <reijiw@google.com>
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20210504171734.1434054-8-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: SVM: Probe and load MSR_TSC_AUX regardless of RDTSCP support in host · 0caa0a77
      Sean Christopherson authored
      Probe MSR_TSC_AUX whether or not RDTSCP is supported in the host, and
      if probing succeeds, load the guest's MSR_TSC_AUX into hardware prior to
      VMRUN.  Because SVM doesn't support interception of RDPID, RDPID cannot
      be disallowed in the guest (without resorting to binary translation).
      Leaving the host's MSR_TSC_AUX in hardware would leak the host's value to
      the guest if RDTSCP is not supported.
      
      Note, there is also a kernel bug that prevents leaking the host's value.
      The host kernel initializes MSR_TSC_AUX if and only if RDTSCP is
      supported, even though the vDSO usage consumes MSR_TSC_AUX via RDPID.
      I.e. if RDTSCP is not supported, there is no host value to leak.  But,
      if/when the host kernel bug is fixed, KVM would start leaking MSR_TSC_AUX
      in the case where hardware supports RDPID but RDTSCP is unavailable for
      whatever reason.
      
      Probing MSR_TSC_AUX will also allow consolidating the probe and define
      logic in common x86, and will make it simpler to condition the existence
      of MSR_TSC_AUX (from the guest's perspective) on RDTSCP *or* RDPID.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20210504171734.1434054-7-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: SVM: Inject #UD on RDTSCP when it should be disabled in the guest · 3b195ac9
      Sean Christopherson authored
      Intercept RDTSCP to inject #UD if RDTSCP is disabled in the guest.
      
      Note, SVM does not support intercepting RDPID.  Unlike VMX's
      ENABLE_RDTSCP control, RDTSCP interception does not apply to RDPID.  This
      is a benign virtualization hole as the host kernel (incorrectly) sets
      MSR_TSC_AUX if RDTSCP is supported, and KVM loads the guest's MSR_TSC_AUX
      into hardware if RDTSCP is supported in the host, i.e. KVM will not leak
      the host's MSR_TSC_AUX to the guest.
      
      But, when the kernel bug is fixed, KVM will start leaking the host's
      MSR_TSC_AUX if RDPID is supported in hardware, but RDTSCP isn't available
      for whatever reason.  This leak will be remedied in a future commit.
      
      Fixes: 46896c73 ("KVM: svm: add support for RDTSCP")
      Cc: stable@vger.kernel.org
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20210504171734.1434054-4-seanjc@google.com>
      Reviewed-by: Jim Mattson <jmattson@google.com>
      Reviewed-by: Reiji Watanabe <reijiw@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  4. 06 May, 2021 (2 commits)
  5. 03 May, 2021 (1 commit)
    • KVM: nSVM: fix a few bugs in the vmcb02 caching logic · c74ad08f
      Maxim Levitsky authored
      * Define and use an invalid GPA (all ones) as the init value of the
        last and current nested vmcb physical addresses.
      
      * Reset the current vmcb12 gpa to the invalid value when leaving
        nested mode, similar to what is done on nested vmexit.
      
      * Reset the last seen vmcb12 address when disabling nested SVM,
        as it relies on vmcb02 fields which are freed at that point.
      
      Fixes: 4995a368 ("KVM: SVM: Use a separate vmcb for the nested L2 guest")
      Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
      Message-Id: <20210503125446.1353307-3-mlevitsk@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  6. 26 Apr, 2021 (12 commits)
  7. 22 Apr, 2021 (1 commit)
    • KVM: x86: Support KVM VMs sharing SEV context · 54526d1f
      Nathan Tempelman authored
      Add a capability for userspace to mirror SEV encryption context from
      one vm to another. On our side, this is intended to support a
      Migration Helper vCPU, but it can also be used generically to support
      other in-guest workloads scheduled by the host. The intention is for
      the primary guest and the mirror to have nearly identical memslots.
      
      The primary benefits of this are that:
      1) The VMs do not share KVM contexts (think APIC/MSRs/etc), so they
      can't accidentally clobber each other.
      2) The VMs can have different memory views, which is necessary for post-copy
      migration (the migration vCPUs on the target need to read and write to
      pages, when the primary guest would VMEXIT).
      
      This does not change the threat model for AMD SEV. Any memory involved
      is still owned by the primary guest and its initial state is still
      attested to through the normal SEV_LAUNCH_* flows. If userspace wanted
      to circumvent SEV, they could achieve the same effect by simply attaching
      a vCPU to the primary VM.
      This patch deliberately leaves userspace in charge of the memslots for the
      mirror, as it already has the power to mess with them in the primary guest.
      
      This patch does not support SEV-ES (much less SNP), as it does not
      handle handing off attested VMSAs to the mirror.
      
      For additional context, we need a Migration Helper because SEV PSP
      migration is far too slow for our live migration on its own. Using
      an in-guest migrator lets us speed this up significantly.
      Signed-off-by: Nathan Tempelman <natet@google.com>
      Message-Id: <20210408223214.2582277-1-natet@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  8. 20 Apr, 2021 (5 commits)
    • KVM: SVM: Define actual size of IOPM and MSRPM tables · 47903dc1
      Krish Sadhukhan authored
      Define the actual size of the IOPM and MSRPM tables so that the actual size
      can be used when initializing them and when checking the consistency of their
      physical address.
      These #defines are placed in svm.h so that they can be shared.
      Suggested-by: Sean Christopherson <seanjc@google.com>
      Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
      Message-Id: <20210412215611.110095-2-krish.sadhukhan@oracle.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: SVM: Enhance and clean up the vmcb tracking comment in pre_svm_run() · 44f1b558
      Sean Christopherson authored
      Explicitly document why a vmcb must be marked dirty and assigned a new
      asid when it will be run on a different cpu.  The "what" is relatively
      obvious, whereas the "why" requires reading the APM and/or KVM code.
      
      Opportunistically remove a spurious period and several unnecessary
      newlines in the comment.
      
      No functional change intended.
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20210406171811.4043363-5-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: SVM: Drop vcpu_svm.vmcb_pa · d1788191
      Sean Christopherson authored
      Remove vmcb_pa from vcpu_svm and simply read current_vmcb->pa directly in
      the one path where it is consumed.  Unlike svm->vmcb, use of the current
      vmcb's address is very limited, as evidenced by the fact that its use
      can be trimmed to a single dereference.
      
      Opportunistically add a comment about using vmcb01 for VMLOAD/VMSAVE,
      as at first glance using vmcb01 instead of vmcb_pa looks wrong.
      
      No functional change intended.
      
      Cc: Maxim Levitsky <mlevitsk@redhat.com>
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20210406171811.4043363-3-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: SVM: Don't set current_vmcb->cpu when switching vmcb · 17e5e964
      Sean Christopherson authored
      Do not update the new vmcb's last-run cpu when switching to a different
      vmcb.  If the vCPU is migrated between its last run and a vmcb switch,
      e.g. for nested VM-Exit, then setting the cpu without marking the vmcb
      dirty will lead to KVM running the vCPU on a different physical cpu with
      stale clean bit settings.
      
                                vcpu->cpu    current_vmcb->cpu    hardware
        pre_svm_run()           cpu0         cpu0                 cpu0,clean
        kvm_arch_vcpu_load()    cpu1         cpu0                 cpu0,clean
        svm_switch_vmcb()       cpu1         cpu1                 cpu0,clean
        pre_svm_run()           cpu1         cpu1                 kaboom
      
      Simply delete the offending code; unlike VMX, which needs to update the
      cpu at switch time due to the need to do VMPTRLD, SVM only cares about
      which cpu last ran the vCPU.
      
      Fixes: af18fa77 ("KVM: nSVM: Track the physical cpu of the vmcb vmrun through the vmcb")
      Cc: Cathy Avery <cavery@redhat.com>
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20210406171811.4043363-2-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: SVM: Make sure GHCB is mapped before updating · a3ba26ec
      Tom Lendacky authored
      Access to the GHCB is mainly in the VMGEXIT path and it is known that the
      GHCB will be mapped. But there are two paths where it is possible the GHCB
      might not be mapped.
      
      The sev_vcpu_deliver_sipi_vector() routine will update the GHCB to inform
      the caller of the AP Reset Hold NAE event that a SIPI has been delivered.
      However, if a SIPI is performed without a corresponding AP Reset Hold,
      then the GHCB might not be mapped (depending on the previous VMEXIT),
      which will result in a NULL pointer dereference.
      
      The svm_complete_emulated_msr() routine will update the GHCB to inform
      the caller of a RDMSR/WRMSR operation about any errors. While it is likely
      that the GHCB will be mapped in this situation, add a safe guard
      in this path to be certain a NULL pointer dereference is not encountered.
      
      Fixes: f1c6366e ("KVM: SVM: Add required changes to support intercepts under SEV-ES")
      Fixes: 647daca2 ("KVM: SVM: Add support for booting APs in an SEV-ES guest")
      Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: stable@vger.kernel.org
      Message-Id: <a5d3ebb600a91170fc88599d5a575452b3e31036.1617979121.git.thomas.lendacky@amd.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  9. 17 Apr, 2021 (1 commit)
    • KVM: nSVM: improve SYSENTER emulation on AMD · adc2a237
      Maxim Levitsky authored
      Currently, to support Intel->AMD migration, if the CPU vendor is
      GenuineIntel we emulate the full 64-bit value of the
      MSR_IA32_SYSENTER_{EIP|ESP} msrs, and we also emulate the
      sysenter/sysexit instructions in long mode.
      
      (The emulator still refuses to emulate sysenter in 64-bit mode, on the
      grounds that the code for that wasn't tested and likely has no users.)
      
      However when virtual vmload/vmsave is enabled, the vmload instruction will
      update these 32 bit msrs without triggering their msr intercept,
      which will lead to having stale values in kvm's shadow copy of these msrs,
      which relies on the intercept to be up to date.
      
      Fix/optimize this by doing the following:
      
      1. Enable the MSR intercepts for SYSENTER MSRs iff vendor=GenuineIntel
         (This is both a tiny optimization and also ensures that in case
         the guest cpu vendor is AMD, the msrs will be 32 bit wide as
         AMD defined).
      
      2. Store only the high 32 bit part of these msrs on interception and
         combine it with the hardware msr value on intercepted read/writes
         iff vendor=GenuineIntel.
      
      3. Disable vmload/vmsave virtualization if vendor=GenuineIntel.
         (It is somewhat insane to set vendor=GenuineIntel and still enable
         SVM for the guest but well whatever).
         Then zero the high 32 bit parts when kvm intercepts and emulates vmload.
      
      Thanks a lot to Paolo Bonzini for helping me fix this in the most
      correct way.
      
      This patch fixes nested migration of 32 bit nested guests, that was
      broken because incorrect cached values of SYSENTER msrs were stored in
      the migration stream if L1 changed these msrs with
      vmload prior to L2 entry.
      Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
      Message-Id: <20210401111928.996871-3-mlevitsk@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  10. 18 Mar, 2021 (1 commit)
    • x86: Fix various typos in comments · d9f6e12f
      Ingo Molnar authored
      Fix ~144 single-word typos in arch/x86/ code comments.
      
      Doing this in a single commit should reduce the churn.
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: linux-kernel@vger.kernel.org
  11. 15 Mar, 2021 (8 commits)
    • KVM: x86/mmu: Mark the PAE roots as decrypted for shadow paging · 4a98623d
      Sean Christopherson authored
      Set the PAE roots used as decrypted to play nice with SME when KVM is
      using shadow paging.  Explicitly skip setting the C-bit when loading
      CR3 for PAE shadow paging, even though it's completely ignored by the
      CPU.  The extra documentation is nice to have.
      
      Note, there are several subtleties at play with NPT.  In addition to
      legacy shadow paging, the PAE roots are used for SVM's NPT when either
      KVM is 32-bit (uses PAE paging) or KVM is 64-bit and shadowing 32-bit
      NPT.  However, 32-bit Linux, and thus KVM, doesn't support SME.  And
      64-bit KVM can happily set the C-bit in CR3.  This also means that
      keeping __sme_set(root) for 32-bit KVM when NPT is enabled is
      conceptually wrong, but functionally ok since SME is 64-bit only.
      Leave it as is to avoid unnecessary pollution.
      
      Fixes: d0ec49d4 ("kvm/x86/svm: Support Secure Memory Encryption within KVM")
      Cc: stable@vger.kernel.org
      Cc: Brijesh Singh <brijesh.singh@amd.com>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20210309224207.1218275-5-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86: Get active PCID only when writing a CR3 value · e83bc09c
      Sean Christopherson authored
      Retrieve the active PCID only when writing a guest CR3 value, i.e. don't
      get the PCID when using EPT or NPT.  The PCID is especially problematic
      for EPT as the bits have different meaning, and so the PCID and must be
      manually stripped, which is annoying and unnecessary.  And on VMX,
      getting the active PCID also involves reading the guest's CR3 and
      CR4.PCIDE, i.e. may add pointless VMREADs.
      
      Opportunistically rename the pgd/pgd_level params to root_hpa and
      root_level to better reflect their new roles.  Keep the function names,
      as "load the guest PGD" is still accurate/correct.
      
      Last, and probably least, pass root_hpa as a hpa_t/u64 instead of an
      unsigned long.  The EPTP holds a 64-bit value, even in 32-bit mode, so
      in theory EPT could support HIGHMEM for 32-bit KVM.  Never mind that
      doing so would require changing the MMU page allocators and reworking
      the MMU to use kmap().
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20210305183123.3978098-2-seanjc@google.com>
      Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86/mmu: Stop using software available bits to denote MMIO SPTEs · 8120337a
      Sean Christopherson authored
      Stop tagging MMIO SPTEs with specific available bits and instead detect
      MMIO SPTEs by checking for their unique SPTE value.  The value is
      guaranteed to be unique on shadow paging and NPT, as setting reserved
      physical address bits on any other type of SPTE would constitute a KVM
      bug.  Ditto for EPT, as creating a WX non-MMIO SPTE would also be a bug.
      
      Note, this approach is also future-compatible with TDX, which will need
      to reflect MMIO EPT violations as #VEs into the guest.  To create an EPT
      violation instead of a misconfig, TDX EPTs will need to have RWX=0.  But,
      MMIO SPTEs will also be the only case where KVM clears SUPPRESS_VE, so
      MMIO SPTEs will still be guaranteed to have a unique value within a given
      MMU context.
      
      The main motivation is to make it easier to reason about which types of
      SPTEs use which available bits.  As a happy side effect, this frees up
      two more bits for storing the MMIO generation.
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20210225204749.1512652-11-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: nSVM: Optimize vmcb12 to vmcb02 save area copies · 8173396e
      Cathy Avery authored
      Use the vmcb12 control clean field to determine which vmcb12.save
      registers were marked dirty in order to minimize register copies
      when switching from L1 to L2. Those vmcb12 registers marked as dirty need
      to be copied to L0's vmcb02 as they will be used to update the vmcb
      state cache for the L2 VMRUN.  In the case where we have a different
      vmcb12 from the last L2 VMRUN, all vmcb12.save registers must be
      copied over to vmcb02.save.
      
      Tested:
      kvm-unit-tests
      kvm selftests
      Fedora L1 L2
      Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Cathy Avery <cavery@redhat.com>
      Message-Id: <20210301200844.2000-1-cavery@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: SVM: Add support for Virtual SPEC_CTRL · d00b99c5
      Babu Moger authored
      Newer AMD processors have a feature to virtualize the use of the
      SPEC_CTRL MSR. Presence of this feature is indicated via CPUID
      function 0x8000000A_EDX[20]: GuestSpecCtrl. Hypervisors are not
      required to enable this feature since it is automatically enabled on
      processors that support it.
      
      A hypervisor may wish to impose speculation controls on guest
      execution or a guest may want to impose its own speculation controls.
      Therefore, the processor implements both host and guest
      versions of SPEC_CTRL.
      
      When in host mode, the host SPEC_CTRL value is in effect and writes
      update only the host version of SPEC_CTRL. On a VMRUN, the processor
      loads the guest version of SPEC_CTRL from the VMCB. When the guest
      writes SPEC_CTRL, only the guest version is updated. On a VMEXIT,
      the guest version is saved into the VMCB and the processor returns
      to only using the host SPEC_CTRL for speculation control. The guest
      SPEC_CTRL is located at offset 0x2E0 in the VMCB.
      
      The effective SPEC_CTRL setting is the guest SPEC_CTRL setting or'ed
      with the hypervisor SPEC_CTRL setting. This allows the hypervisor to
      ensure a minimum SPEC_CTRL if desired.
      
      This support also fixes an issue where a guest may sometimes see an
      inconsistent value for the SPEC_CTRL MSR on processors that support
      this feature. With the current SPEC_CTRL support, the first write to
      SPEC_CTRL is intercepted and the virtualized version of the SPEC_CTRL
      MSR is not updated. When the guest reads back the SPEC_CTRL MSR, it
      will be 0x0, instead of the actual expected value. There isn’t a
      security concern here, because the host SPEC_CTRL value is or’ed with
      the Guest SPEC_CTRL value to generate the effective SPEC_CTRL value.
      KVM writes the guest's virtualized SPEC_CTRL value to the SPEC_CTRL
      MSR just before the VMRUN, so it will always have the actual value
      even though it doesn’t appear that way in the guest. The guest will
      only see the proper value for the SPEC_CTRL register if the guest was
      to write to the SPEC_CTRL register again. With Virtual SPEC_CTRL
      support, the save area spec_ctrl is properly saved and restored.
      So, the guest will always see the proper value when it is read back.
      Signed-off-by: Babu Moger <babu.moger@amd.com>
      Message-Id: <161188100955.28787.11816849358413330720.stgit@bmoger-ubuntu>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: nSVM: always use vmcb01 for vmsave/vmload of guest state · cc3ed80a
      Maxim Levitsky authored
      This allows KVM to avoid copying these fields between vmcb01
      and vmcb02 on nested guest entry/exit.
      Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: SVM: move VMLOAD/VMSAVE to C code · fb0c4a4f
      Paolo Bonzini authored
      Thanks to the new macros that handle exception handling for SVM
      instructions, it is easier to just do the VMLOAD/VMSAVE in C.
      This is safe, as shown by the fact that the host reload is
      already done outside the assembly source.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: SVM: Skip intercepted PAUSE instructions after emulation · c8781fea
      Sean Christopherson authored
      Skip PAUSE after interception to avoid unnecessarily re-executing the
      instruction in the guest, e.g. after regaining control post-yield.
      This is a benign bug as KVM disables PAUSE interception if filtering is
      off, including the case where pause_filter_count is set to zero.
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20210205005750.3841462-10-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>