1. 21 Apr 2020, 9 commits
    • KVM: SVM: avoid infinite loop on NPF from bad address · e72436bc
      Committed by Paolo Bonzini
      When a nested page fault is taken from an address that does not have
      a memslot associated with it, kvm_mmu_do_page_fault returns RET_PF_EMULATE
      (via mmu_set_spte) and kvm_mmu_page_fault then invokes svm_need_emulation_on_page_fault.
      
      The default answer there is to return false, but in this case this just
      causes the page fault to be retried ad libitum.  Since this is not a
      fast path, and the only other case where it is taken is an erratum,
      just stick a kvm_vcpu_gfn_to_memslot check in there to detect the
      common case where the erratum is not happening.
      
      This fixes an infinite loop in the new set_memory_region_test.
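
      A minimal illustrative mock of the guard (the struct, lookup, and predicate here are stand-ins for KVM's kvm_vcpu_gfn_to_memslot() and svm_need_emulation_on_page_fault(), not the kernel's actual code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Mock memslot: a range of guest frame numbers backed by host memory. */
struct memslot { unsigned long base_gfn, npages; };

static const struct memslot *gfn_to_memslot(const struct memslot *slots,
                                            size_t n, unsigned long gfn)
{
    for (size_t i = 0; i < n; i++)
        if (gfn >= slots[i].base_gfn &&
            gfn < slots[i].base_gfn + slots[i].npages)
            return &slots[i];
    return NULL;            /* no memslot backs this gfn */
}

static bool need_emulation_on_page_fault(const struct memslot *slots,
                                         size_t n, unsigned long gfn)
{
    /* No memslot: emulate instead of retrying the NPF forever. */
    if (!gfn_to_memslot(slots, n, gfn))
        return true;
    return false;           /* common case: the erratum is not happening */
}
```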
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: X86: Improve latency for single target IPI fastpath · a9ab13ff
      Committed by Wanpeng Li
      IPI and timer writes are the main causes of MSR-write vmexits observed
      in cloud environments; let's optimize virtual IPI latency more
      aggressively to inject the target IPI as soon as possible.
      
      Running the kvm-unit-tests/vmexit.flat IPI test on an SKX server, with
      the adaptive advance lapic timer and adaptive halt-polling disabled to
      avoid interference, this patch gives another 7% improvement.
      
      w/o fastpath   -> x86.c fastpath      4238 -> 3543  16.4%
      x86.c fastpath -> vmx.c fastpath      3543 -> 3293     7%
      w/o fastpath   -> vmx.c fastpath      4238 -> 3293  22.3%
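
      A simplified sketch of the fastpath predicate (illustrative, not KVM's exact code; the bit positions follow the architectural x2APIC ICR layout): only a fixed-delivery, no-shorthand ICR write, i.e. the single-target IPI case, is eligible for immediate injection on the exit path.

```c
#include <assert.h>
#include <stdint.h>

static int is_single_target_ipi(uint64_t icr)
{
    uint64_t delivery  = (icr >> 8)  & 0x7;  /* 0 = fixed delivery */
    uint64_t shorthand = (icr >> 18) & 0x3;  /* 0 = no shorthand */
    return delivery == 0 && shorthand == 0;
}
```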
      
      Cc: Haiwei Li <lihaiwei@tencent.com>
      Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Message-Id: <20200410174703.1138-3-sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: SVM: Use do_machine_check to pass MCE to the host · 1c164cb3
      Committed by Uros Bizjak
      Use do_machine_check instead of INT $12 to pass the MCE to the host,
      which is the same approach VMX uses.
      
      On a related note, there is no reason to limit the use of do_machine_check
      to 64 bit targets, as is currently done for VMX. MCE handling works
      for both target families.
      
      The patch is only compile-tested, for both 64- and 32-bit targets;
      someone should test the passing of the exception by injecting
      some MCEs into the guest.
      
      For a future non-RFC patch, kvm_machine_check should be moved to an
      appropriate header file.
      
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
      Message-Id: <20200411153627.3474710-1-ubizjak@gmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86: Introduce KVM_REQ_TLB_FLUSH_CURRENT to flush current ASID · eeeb4f67
      Committed by Sean Christopherson
      Add KVM_REQ_TLB_FLUSH_CURRENT to allow optimized TLB flushing of VMX's
      EPTP/VPID contexts[*] from the KVM MMU and/or in a deferred manner, e.g.
      to flush L2's context during nested VM-Enter.
      
      Convert KVM_REQ_TLB_FLUSH to KVM_REQ_TLB_FLUSH_CURRENT in flows where
      the flush is directly associated with vCPU-scoped instruction emulation,
      i.e. MOV CR3 and INVPCID.
      
      Add a comment in vmx_vcpu_load_vmcs() above its KVM_REQ_TLB_FLUSH to
      make it clear that it deliberately requests a flush of all contexts.
      
      Service any pending flush request on nested VM-Exit as it's possible a
      nested VM-Exit could occur after requesting a flush for L2.  Add the
      same logic for nested VM-Enter; it's _extremely_ unlikely for a flush
      to be pending on nested VM-Enter, but it is theoretically possible
      (in the future) due to RSM (SMM) emulation.
      
      [*] Intel also has an Address Space Identifier (ASID) concept, e.g.
          EPTP+VPID+PCID == ASID, it's just not documented in the SDM because
          the rules of invalidation are different based on which piece of the
          ASID is being changed, i.e. whether the EPTP, VPID, or PCID context
          must be invalidated.
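
      An illustrative sketch (mock request bits, not kernel code) of how the two request kinds relate: a pending current-context flush is subsumed by an all-contexts flush, so servicing checks the stronger request first.

```c
#include <assert.h>

#define REQ_TLB_FLUSH_ALL      (1u << 0)  /* every EPTP/VPID context */
#define REQ_TLB_FLUSH_CURRENT  (1u << 1)  /* only the active context */

static unsigned int service_tlb_requests(unsigned int pending)
{
    if (pending & REQ_TLB_FLUSH_ALL)
        /* flushing all contexts covers the current one as well */
        pending &= ~(REQ_TLB_FLUSH_ALL | REQ_TLB_FLUSH_CURRENT);
    else if (pending & REQ_TLB_FLUSH_CURRENT)
        /* cheaper: flush only the current EPTP/VPID context */
        pending &= ~REQ_TLB_FLUSH_CURRENT;
    return pending;          /* requests still pending afterwards */
}
```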
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Message-Id: <20200320212833.3507-25-sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86: Rename ->tlb_flush() to ->tlb_flush_all() · 7780938c
      Committed by Sean Christopherson
      Rename ->tlb_flush() to ->tlb_flush_all() in preparation for adding a
      new hook to flush only the current ASID/context.
      
      Opportunistically replace the comment in vmx_flush_tlb() that explains
      why it flushes all EPTP/VPID contexts with a comment explaining why it
      unconditionally uses INVEPT when EPT is enabled.  I.e. rely on the "all"
      part of the name to clarify why it does global INVEPT/INVVPID.
      
      No functional change intended.
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Message-Id: <20200320212833.3507-23-sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: SVM: Document the ASID logic in svm_flush_tlb() · 4a41e43c
      Committed by Sean Christopherson
      Add a comment in svm_flush_tlb() to document why it flushes only the
      current ASID, even when it is invoked when flushing remote TLBs.
      
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Message-Id: <20200320212833.3507-22-sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: SVM: Wire up ->tlb_flush_guest() directly to svm_flush_tlb() · 72b38320
      Committed by Sean Christopherson
      Use svm_flush_tlb() directly for kvm_x86_ops->tlb_flush_guest() now that
      the @invalidate_gpa param to ->tlb_flush() is gone, i.e. the wrapper for
      ->tlb_flush_guest() is no longer necessary.
      
      No functional change intended.
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Message-Id: <20200320212833.3507-18-sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86: Drop @invalidate_gpa param from kvm_x86_ops' tlb_flush() · f55ac304
      Committed by Sean Christopherson
      Drop @invalidate_gpa from ->tlb_flush() and kvm_vcpu_flush_tlb() now
      that all callers pass %true for said param, or ignore the param (SVM has
      an internal call to svm_flush_tlb() in svm_flush_tlb_guest that somewhat
      arbitrarily passes %false).
      
      Remove __vmx_flush_tlb() as it is no longer used.
      
      No functional change intended.
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Message-Id: <20200320212833.3507-17-sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86: Move "flush guest's TLB" logic to separate kvm_x86_ops hook · e64419d9
      Committed by Sean Christopherson
      Add a dedicated hook to handle flushing TLB entries on behalf of the
      guest, i.e. for a paravirtualized TLB flush, and use it directly instead
      of bouncing through kvm_vcpu_flush_tlb().
      
      For VMX, change the effective implementation to never do
      INVEPT and flush only the current context, i.e. to always flush via
      INVVPID(SINGLE_CONTEXT).  The INVEPT performed by __vmx_flush_tlb() when
      @invalidate_gpa=false and enable_vpid=0 is unnecessary, as it will only
      flush guest-physical mappings; linear and combined mappings are flushed
      by VM-Enter when VPID is disabled, and changes in the guest page tables
      do not affect guest-physical mappings.
      
      When EPT and VPID are enabled, doing INVVPID is not required (by Intel's
      architecture) to invalidate guest-physical mappings, i.e. TLB entries
      that cache guest-physical mappings can live across INVVPID as the
      mappings are associated with an EPTP, not a VPID.  The intent of
      @invalidate_gpa is to inform vmx_flush_tlb() that it must "invalidate
      gpa mappings", i.e. do INVEPT and not simply INVVPID.  Other than nested
      VPID handling, which now calls vpid_sync_context() directly, the only
      scenario where KVM can safely do INVVPID instead of INVEPT (when EPT is
      enabled) is if KVM is flushing TLB entries from the guest's perspective,
      i.e. is only required to invalidate linear mappings.
      
      For SVM, flushing TLB entries from the guest's perspective can be done
      by flushing the current ASID, as changes to the guest's page tables are
      associated only with the current ASID.
      
      Adding a dedicated ->tlb_flush_guest() paves the way toward removing
      @invalidate_gpa, which is a potentially dangerous control flag as its
      meaning is not exactly crystal clear, even for those who are familiar
      with the subtleties of what mappings Intel CPUs are/aren't allowed to
      keep across various invalidation scenarios.
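
      The flush-scope reasoning above can be sketched as a small decision function (illustrative only, with invented enum names, not the kernel's code): for a guest-scoped flush, which only needs to invalidate linear mappings, INVEPT is never required.

```c
#include <assert.h>

enum flush_via { VIA_INVVPID_SINGLE, VIA_VMENTER };

static enum flush_via vmx_guest_flush_via(int enable_vpid)
{
    if (enable_vpid)
        /* INVVPID(SINGLE_CONTEXT) flushes linear and combined
         * mappings for this VPID; INVEPT is unnecessary because
         * guest-physical mappings are unaffected by guest paging. */
        return VIA_INVVPID_SINGLE;
    /* Without VPID, VM-Enter itself flushes linear and combined
     * mappings, so no explicit invalidation instruction is needed. */
    return VIA_VMENTER;
}
```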
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Message-Id: <20200320212833.3507-15-sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  2. 16 Apr 2020, 2 commits
  3. 14 Apr 2020, 1 commit
  4. 03 Apr 2020, 5 commits
  5. 31 Mar 2020, 3 commits
  6. 25 Mar 2020, 2 commits
  7. 23 Mar 2020, 1 commit
    • KVM: SVM: Issue WBINVD after deactivating an SEV guest · 2e2409af
      Committed by Tom Lendacky
      Currently, CLFLUSH is used to flush SEV guest memory before the guest is
      terminated (or a memory hotplug region is removed). However, CLFLUSH is
      not enough to ensure that SEV guest-tagged data is flushed from the cache.
      
      With 33af3a7e ("KVM: SVM: Reduce WBINVD/DF_FLUSH invocations"), the
      original WBINVD was removed. This then exposed crashes at random times
      because of a cache flush race with a page that had both a hypervisor and
      a guest tag in the cache.
      
      Restore the WBINVD when destroying an SEV guest and add a WBINVD to the
      svm_unregister_enc_region() function to ensure hotplug memory is flushed
      when removed. The DF_FLUSH can still be avoided at this point.
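
      A toy model of the race described above (purely illustrative, not kernel code): the same physical line can be cached once under the hypervisor's tag (ASID 0 here) and once under the SEV guest's ASID. CLFLUSH through a hypervisor mapping evicts only the hypervisor-tagged copy; WBINVD evicts every copy regardless of tag.

```c
#include <assert.h>
#include <stdbool.h>

/* One cached copy of a physical line, keyed by the ASID it was
 * brought in under (0 = hypervisor, nonzero = an SEV guest). */
struct cache_line { int asid; bool valid; };

static void clflush_via_hv(struct cache_line *l, int n)
{
    for (int i = 0; i < n; i++)
        if (l[i].asid == 0)
            l[i].valid = false;  /* only the hypervisor-tagged copy */
}

static void wbinvd_model(struct cache_line *l, int n)
{
    for (int i = 0; i < n; i++)
        l[i].valid = false;      /* all copies, any ASID */
}
```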
      
      Fixes: 33af3a7e ("KVM: SVM: Reduce WBINVD/DF_FLUSH invocations")
      Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
      Message-Id: <c8bf9087ca3711c5770bdeaafa3e45b717dc5ef4.1584720426.git.thomas.lendacky@amd.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  8. 21 Mar 2020, 1 commit
  9. 18 Mar 2020, 1 commit
  10. 17 Mar 2020, 15 commits