1. 04 Feb 2021, 9 commits
    • KVM: vmx/pmu: Emulate legacy freezing LBRs on virtual PMI · e6209a3b
      Authored by Like Xu
      The current vPMU only supports Architecture Version 2. According to
      Intel SDM section 17.4.7, "Freezing LBR and Performance Counters on
      PMI", if IA32_DEBUGCTL.Freeze_LBR_On_PMI = 1, the LBR stack is frozen
      on a virtual PMI, and KVM emulates this by clearing the LBR bit
      (bit 0) in IA32_DEBUGCTL. The guest then needs to set
      IA32_DEBUGCTL.LBR again to resume recording branches.
      Signed-off-by: Like Xu <like.xu@linux.intel.com>
      Reviewed-by: Andi Kleen <ak@linux.intel.com>
      Message-Id: <20210201051039.255478-9-like.xu@linux.intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      e6209a3b
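      A minimal C sketch of the freeze behavior described above, assuming
      the architectural DEBUGCTL bit layout (bit 0 = LBR, bit 11 =
      Freeze_LBR_On_PMI); guest_debugctl and emulate_pmi_freeze_lbr() are
      illustrative names, not KVM's internal symbols:

        #include <stdint.h>
        #include <stdio.h>

        #define DEBUGCTLMSR_LBR                 (1ULL << 0)
        #define DEBUGCTLMSR_FREEZE_LBRS_ON_PMI  (1ULL << 11)

        /* Illustrative vPMU state: the guest's view of IA32_DEBUGCTL. */
        static uint64_t guest_debugctl = DEBUGCTLMSR_LBR |
                                         DEBUGCTLMSR_FREEZE_LBRS_ON_PMI;

        /* On delivery of a virtual PMI, emulate the legacy freeze: clear
         * LBR (bit 0) if Freeze_LBR_On_PMI (bit 11) is set.  The guest
         * must set IA32_DEBUGCTL.LBR again to resume recording. */
        static void emulate_pmi_freeze_lbr(void)
        {
                if (guest_debugctl & DEBUGCTLMSR_FREEZE_LBRS_ON_PMI)
                        guest_debugctl &= ~DEBUGCTLMSR_LBR;
        }

        int main(void)
        {
                emulate_pmi_freeze_lbr();
                printf("DEBUGCTL after virtual PMI: %#llx\n",
                       (unsigned long long)guest_debugctl);
                return 0;
        }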
    • KVM: vmx/pmu: Pass-through LBR msrs when the guest LBR event is ACTIVE · 1b5ac322
      Authored by Like Xu
      In addition to DEBUGCTLMSR_LBR, any KVM trap caused by an access to
      the LBR MSRs results in the creation of a per-vCPU guest LBR event.
      
      If the guest LBR event is scheduled in along with the corresponding
      vCPU context, KVM passes all LBR record MSRs through to the guest. The
      LBR callstack mechanism implemented on the host helps to save/restore
      the guest LBR records across event context switches, which avoids a
      lot of overhead compared to saving/restoring tens of LBR MSRs (e.g.
      32 LBR record entries) on the much more frequent VMX transitions.
      
      To avoid reclaiming LBR resources from a higher priority host event,
      KVM always checks the existence and state of the guest LBR event as
      late as possible before VM-entry, as sketched below. A negative result
      cancels the pass-through state, which also prevents real register
      accesses and potential data leakage. If the host reclaims the LBR
      between two checks, the interception state and LBR records are safely
      preserved thanks to the native save/restore support of the guest LBR
      event.
      
      KVM emits a pr_warn() when the LBR hardware is unavailable to the
      guest LBR event. The administrator is expected to remind users that
      the guest results may be inaccurate if someone is using the LBR to
      record the hypervisor on the host side.
      Suggested-by: Andi Kleen <ak@linux.intel.com>
      Co-developed-by: Wei Wang <wei.w.wang@intel.com>
      Signed-off-by: Wei Wang <wei.w.wang@intel.com>
      Signed-off-by: Like Xu <like.xu@linux.intel.com>
      Reviewed-by: Andi Kleen <ak@linux.intel.com>
      Message-Id: <20210201051039.255478-7-like.xu@linux.intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      1b5ac322
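      A sketch of the late availability check described above, with
      hypothetical field and function names standing in for KVM's actual
      per-vCPU LBR bookkeeping:

        #include <stdbool.h>

        /* Illustrative stand-in for the vPMU's LBR state. */
        struct guest_lbr_state {
                bool event_exists;      /* guest LBR event was created */
                bool event_is_active;   /* host perf has it scheduled */
                bool msrs_passed_through;
        };

        /* Run as late as possible before VM-entry: keep the LBR MSRs
         * passed through only while the guest LBR event still owns the
         * hardware.  A negative result cancels pass-through, preventing
         * real register accesses and potential data leakage. */
        static void lbr_availability_check(struct guest_lbr_state *lbr)
        {
                lbr->msrs_passed_through = lbr->event_exists &&
                                           lbr->event_is_active;
        }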
    • KVM: vmx/pmu: Create a guest LBR event when vcpu sets DEBUGCTLMSR_LBR · 8e12911b
      Authored by Like Xu
      When the vCPU sets DEBUGCTLMSR_LBR in MSR_IA32_DEBUGCTLMSR, the KVM
      handler creates a guest LBR event that enables callstack mode and has
      no hardware counter assigned. Host perf schedules and enables this
      event as usual, but in an exclusive way; a sketch of such an event's
      attributes follows below.

      The guest LBR event is released when the vPMU is reset; soon, the
      lazy release mechanism used for vPMCs will be applied to this event
      as well.
      Suggested-by: Andi Kleen <ak@linux.intel.com>
      Co-developed-by: Wei Wang <wei.w.wang@intel.com>
      Signed-off-by: Wei Wang <wei.w.wang@intel.com>
      Signed-off-by: Like Xu <like.xu@linux.intel.com>
      Reviewed-by: Andi Kleen <ak@linux.intel.com>
      Message-Id: <20210201051039.255478-6-like.xu@linux.intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      8e12911b
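      A sketch of the kind of perf_event_attr such an exclusive,
      counter-less LBR event would use, assuming the kernel-internal perf
      API; the exact field values are an assumption based on the
      description above (callstack mode, guest-only, pinned), not
      necessarily the final upstream code:

        #include <linux/perf_event.h>

        static struct perf_event_attr guest_lbr_attr = {
                .type               = PERF_TYPE_RAW,
                .size               = sizeof(struct perf_event_attr),
                .pinned             = true,  /* scheduled exclusively */
                .exclude_host       = true,  /* record guest branches only */
                .sample_type        = PERF_SAMPLE_BRANCH_STACK,
                .branch_sample_type = PERF_SAMPLE_BRANCH_CALL_STACK |
                                      PERF_SAMPLE_BRANCH_USER,
        };

      The event itself would then be created on the host side with
      perf_event_create_kernel_counter(&guest_lbr_attr, -1, current,
      NULL, NULL).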
    • KVM: vmx/pmu: Add PMU_CAP_LBR_FMT check when guest LBR is enabled · c6462363
      Authored by Like Xu
      Userspace can set bits [5:0] of the IA32_PERF_CAPABILITIES MSR, which
      describe the format of the records stored in the LBR stack.

      LBR is enabled for the guest if host perf supports LBR (checked via
      x86_perf_get_lbr()) and the vCPU model is compatible with the host's.
      Signed-off-by: Like Xu <like.xu@linux.intel.com>
      Message-Id: <20210201051039.255478-4-like.xu@linux.intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      c6462363
    • KVM: vmx/pmu: Add PMU_CAP_LBR_FMT check when guest LBR is enabled · 9c9520ce
      Authored by Paolo Bonzini
      Userspace can set bits [5:0] of the IA32_PERF_CAPABILITIES MSR, which
      describe the format of the records stored in the LBR stack.

      LBR is enabled for the guest if host perf supports LBR (checked via
      x86_perf_get_lbr()) and the vCPU model is compatible with the host's.
      Signed-off-by: Like Xu <like.xu@linux.intel.com>
      Message-Id: <20210201051039.255478-4-like.xu@linux.intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      9c9520ce
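      A minimal sketch of the format check, assuming the [5:0] bit field
      described above; lbr_fmt_compatible() is an illustrative helper, and
      in real code the host value would come from reading
      MSR_IA32_PERF_CAPABILITIES:

        #include <stdbool.h>
        #include <stdint.h>

        #define PMU_CAP_LBR_FMT 0x3fULL  /* PERF_CAPABILITIES bits [5:0] */

        /* The guest's requested LBR record format must match the host's,
         * otherwise guest LBR stays disabled. */
        static bool lbr_fmt_compatible(uint64_t guest_cap, uint64_t host_cap)
        {
                return (guest_cap & PMU_CAP_LBR_FMT) ==
                       (host_cap & PMU_CAP_LBR_FMT);
        }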
    • KVM: x86/vmx: Make vmx_set_intercept_for_msr() non-static · 252e365e
      Authored by Like Xu
      To make code responsibilities clear, we may reuse and invoke
      vmx_set_intercept_for_msr() in other VMX-specific files (e.g.
      pmu_intel.c), so expose it in order to pass the LBR MSRs through
      later.
      Signed-off-by: Like Xu <like.xu@linux.intel.com>
      Reviewed-by: Andi Kleen <ak@linux.intel.com>
      Message-Id: <20210201051039.255478-2-like.xu@linux.intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      252e365e
    • KVM: VMX: read/write MSR_IA32_DEBUGCTLMSR from GUEST_IA32_DEBUGCTL · d855066f
      Authored by Like Xu
      SVM already has specific handlers for MSR_IA32_DEBUGCTLMSR in
      svm_get/set_msr, so the x86-common part can safely be moved to VMX.
      This allows KVM to store the bits it supports in GUEST_IA32_DEBUGCTL.

      Add vmx_supported_debugctl() to consolidate the logic that decides
      when to inject #GP.
      Signed-off-by: Like Xu <like.xu@linux.intel.com>
      Reviewed-by: Andi Kleen <ak@linux.intel.com>
      Message-Id: <20210108013704.134985-2-like.xu@linux.intel.com>
      [Merge parts of Chenyi Qiang's "KVM: X86: Expose bus lock debug exception
       to guest". - Paolo]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      d855066f
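      A sketch of the resulting MSR flow under kernel-internal assumptions:
      vmcs_read64()/vmcs_write64() are VMX's VMCS accessors, while
      vmx_get_debugctl()/vmx_set_debugctl() are illustrative wrappers
      rather than the verbatim upstream functions:

        /* RDMSR: the guest value now lives in the VMCS field. */
        static u64 vmx_get_debugctl(struct kvm_vcpu *vcpu)
        {
                return vmcs_read64(GUEST_IA32_DEBUGCTL);
        }

        /* WRMSR: mask against the supported bits; returning 1 makes the
         * common code inject #GP into the guest. */
        static int vmx_set_debugctl(struct kvm_vcpu *vcpu, u64 data)
        {
                if (data & ~vmx_supported_debugctl(vcpu))
                        return 1;

                vmcs_write64(GUEST_IA32_DEBUGCTL, data);
                return 0;
        }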
    • KVM: VMX: Enable bus lock VM exit · fe6b6bc8
      Authored by Chenyi Qiang
      A virtual machine can exploit bus locks to degrade overall system
      performance. A bus lock can be caused by a split locked access to
      write-back (WB) memory or by using locks on uncacheable (UC) memory.
      A bus lock is typically >1000 cycles slower than an atomic operation
      within a cache line. It also disrupts performance on other cores
      (which must wait for the bus lock to be released before their memory
      operations can complete).
      
      To address the threat, bus lock VM exit is introduced to notify the VMM
      when a bus lock was acquired, allowing it to enforce throttling or other
      policy based mitigations.
      
      A VMM can enable VM exits due to bus locks by setting a new "Bus Lock
      Detection" VM-execution control (bit 30 of the Secondary Processor-based
      VM-execution controls). If delivery of this VM exit was preempted by a
      higher priority VM exit (e.g. EPT misconfiguration, EPT violation, APIC
      access VM exit, APIC write VM exit, exception bitmap exiting), bit 26 of
      the exit reason field in the VMCS is set to 1.
      
      In the current implementation, KVM exposes this capability through
      KVM_CAP_X86_BUS_LOCK_EXIT. Userspace can query the supported mode
      bitmap (i.e. off and exit) and must enable the feature explicitly
      (it is disabled by default). If KVM detects a bus lock in the guest,
      it exits to userspace even when the current exit reason is handled by
      KVM internally, and sets a new flag, KVM_RUN_BUS_LOCK, in
      vcpu->run->flags to inform userspace that a bus lock was detected in
      the guest; see the sketch after this entry.
      
      Documentation for the bus lock VM exit is available in the latest
      "Intel Architecture Instruction Set Extensions Programming Reference".
      
      Document Link:
      https://software.intel.com/content/www/us/en/develop/download/intel-architecture-instruction-set-extensions-programming-reference.html
      Co-developed-by: Xiaoyao Li <xiaoyao.li@intel.com>
      Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
      Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
      Message-Id: <20201106090315.18606-4-chenyi.qiang@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      fe6b6bc8
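      A hedged userspace sketch of consuming this feature; note the commit
      text calls the run flag KVM_RUN_BUS_LOCK, while the merged uapi
      header spells it KVM_RUN_X86_BUS_LOCK:

        #include <linux/kvm.h>
        #include <stdio.h>
        #include <sys/ioctl.h>

        /* Opt in on the VM fd; the feature is disabled by default. */
        static int enable_bus_lock_exit(int vm_fd)
        {
                struct kvm_enable_cap cap = {
                        .cap     = KVM_CAP_X86_BUS_LOCK_EXIT,
                        .args[0] = KVM_BUS_LOCK_DETECTION_EXIT,
                };
                return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
        }

        /* After KVM_RUN returns, the flag may be set even when the exit
         * reason itself was handled inside KVM. */
        static void check_bus_lock(struct kvm_run *run)
        {
                if (run->flags & KVM_RUN_X86_BUS_LOCK)
                        fprintf(stderr, "bus lock in guest; throttling\n");
        }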
    • KVM: VMX: Convert vcpu_vmx.exit_reason to a union · 8e533240
      Authored by Sean Christopherson
      Convert vcpu_vmx.exit_reason from a u32 to a union (of size u32). The
      full VM_EXIT_REASON field consists of a 16-bit basic exit reason in
      bits 15:0 and single-bit modifiers in bits 31:16.
      
      Historically, KVM has only had to worry about handling the "failed
      VM-Entry" modifier, which could only be set in very specific flows and
      required dedicated handling.  I.e. manually stripping the FAILED_VMENTRY
      bit was a somewhat viable approach.  But even with only a single bit to
      worry about, KVM has had several bugs related to comparing a basic exit
      reason against the full exit reason stored in vcpu_vmx.
      
      Upcoming Intel features, e.g. SGX, will add new modifier bits that can
      be set on more or less any VM-Exit, as opposed to the significantly more
      restricted FAILED_VMENTRY, i.e. correctly handling everything in one-off
      flows isn't scalable.  Tracking exit reason in a union forces code to
      explicitly choose between consuming the full exit reason and the basic
      exit, and is a convenient way to document and access the modifiers.
      
      No functional change intended.
      
      Cc: Xiaoyao Li <xiaoyao.li@intel.com>
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
      Message-Id: <20201106090315.18606-2-chenyi.qiang@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      8e533240
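      A reduced sketch of such a union; the layout below follows the
      modifier bits named in the SDM and in this series (the upstream
      definition carries the full set):

        #include <stdint.h>

        union vmx_exit_reason {
                struct {
                        uint32_t basic             : 16; /* bits 15:0 */
                        uint32_t reserved          : 10; /* bits 25:16 */
                        uint32_t bus_lock_detected : 1;  /* bit 26 */
                        uint32_t enclave_mode      : 1;  /* bit 27, SGX */
                        uint32_t smi_pending_mtf   : 1;  /* bit 28 */
                        uint32_t smi_from_vmx_root : 1;  /* bit 29 */
                        uint32_t reserved30        : 1;  /* bit 30 */
                        uint32_t failed_vmentry    : 1;  /* bit 31 */
                };
                uint32_t full;
        };

      Handlers then consume exit_reason.basic explicitly, while checks such
      as "if (exit_reason.failed_vmentry)" document the modifier they test.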
  2. 02 Feb 2021, 1 commit
    • KVM: x86: Allow guests to see MSR_IA32_TSX_CTRL even if tsx=off · 7131636e
      Authored by Paolo Bonzini
      Userspace that does not know about KVM_GET_MSR_FEATURE_INDEX_LIST
      will generally use the default value for MSR_IA32_ARCH_CAPABILITIES.
      When this happens and the host has tsx=on, it is possible to end up with
      virtual machines that have HLE and RTM disabled, but TSX_CTRL available.
      
      If the fleet is then switched to tsx=off, kvm_get_arch_capabilities()
      will clear the ARCH_CAP_TSX_CTRL_MSR bit and it will not be possible to
      use the tsx=off hosts as migration destinations, even though the guests
      do not have TSX enabled.
      
      To allow this migration, allow guests to write to their TSX_CTRL MSR,
      while keeping the host MSR unchanged for the entire life of the guests.
      This ensures that TSX remains disabled and also saves MSR reads and
      writes, and it's okay to do because with tsx=off we know that guests will
      not have the HLE and RTM features in their CPUID.  (If userspace sets
      bogus CPUID data, we do not expect HLE and RTM to work in guests anyway).
      
      Cc: stable@vger.kernel.org
      Fixes: cbbaa272 ("KVM: x86: fix presentation of TSX feature in ARCH_CAPABILITIES")
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      7131636e
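      A sketch of the emulation idea under stated assumptions: the guest
      value is kept in an illustrative per-vCPU shadow field and the host
      MSR is never written, which is safe because with tsx=off HLE/RTM are
      absent from guest CPUID anyway:

        #include <stdint.h>

        #define TSX_CTRL_RTM_DISABLE (1ULL << 0)
        #define TSX_CTRL_CPUID_CLEAR (1ULL << 1)

        struct vcpu_tsx {
                uint64_t guest_tsx_ctrl;  /* hypothetical shadow value */
        };

        /* WRMSR(MSR_IA32_TSX_CTRL): validate and shadow, no real wrmsr. */
        static int tsx_ctrl_set(struct vcpu_tsx *v, uint64_t data)
        {
                if (data & ~(TSX_CTRL_RTM_DISABLE | TSX_CTRL_CPUID_CLEAR))
                        return 1;  /* reserved bits -> #GP */

                v->guest_tsx_ctrl = data;  /* host MSR stays untouched */
                return 0;
        }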
  3. 26 Jan 2021, 1 commit
    • kvm: tracing: Fix unmatched kvm_entry and kvm_exit events · d95df951
      Authored by Lorenzo Brescia
      On VMX, if we exit and then re-enter immediately without leaving
      the vmx_vcpu_run() function, the kvm_entry event is not logged.
      That means we will see one (or more) kvm_exit, without its (their)
      corresponding kvm_entry, as shown here:
      
       CPU-1979 [002] 89.871187: kvm_entry: vcpu 1
       CPU-1979 [002] 89.871218: kvm_exit:  reason MSR_WRITE
       CPU-1979 [002] 89.871259: kvm_exit:  reason MSR_WRITE
      
      It also seems possible for a kvm_entry event to be logged, but then
      we leave vmx_vcpu_run() right away (if vmx->emulation_required is
      true). In this case, we will have a spurious kvm_entry event in the
      trace.
      
      Fix these situations by moving trace_kvm_entry() inside vmx_vcpu_run()
      (where trace_kvm_exit() already is).
      
      A trace obtained with this patch applied looks like this:
      
       CPU-14295 [000] 8388.395387: kvm_entry: vcpu 0
       CPU-14295 [000] 8388.395392: kvm_exit:  reason MSR_WRITE
       CPU-14295 [000] 8388.395393: kvm_entry: vcpu 0
       CPU-14295 [000] 8388.395503: kvm_exit:  reason EXTERNAL_INTERRUPT
      
      Of course, not calling trace_kvm_entry() in common x86 code any
      longer means that we need to adjust the SVM side of things too.
      Signed-off-by: Lorenzo Brescia <lorenzo.brescia@edu.unito.it>
      Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
      Message-Id: <160873470698.11652.13483635328769030605.stgit@Wayrath>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      d95df951
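      A sketch of the new placement (illustrative, not the verbatim
      function; emulation_required() stands in for the real early-return
      condition):

        static void vmx_vcpu_run_sketch(struct kvm_vcpu *vcpu)
        {
                /* Bailing out early now happens before trace_kvm_entry(),
                 * so no spurious entry event can be logged. */
                if (emulation_required(vcpu))
                        return;

                trace_kvm_entry(vcpu);  /* moved in from common x86 code */
                /* ... VMLAUNCH/VMRESUME, guest executes, VM-exit ... */
                trace_kvm_exit(vcpu);   /* was already here */

                /* An immediate re-entry loops back through this function,
                 * so each kvm_exit is again paired with a kvm_entry. */
        }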
  4. 08 Jan 2021, 1 commit
    • KVM: SVM: Add support for booting APs in an SEV-ES guest · 647daca2
      Authored by Tom Lendacky
      Typically under KVM, an AP is booted using the INIT-SIPI-SIPI sequence:
      the guest vCPU's register state is updated and then the vCPU is run via
      VMRUN to begin execution of the AP. For an SEV-ES guest this won't work,
      because the guest register state is encrypted.
      
      Following the GHCB specification, the hypervisor must not alter the guest
      register state, so KVM must track an AP/vCPU boot. Should the guest want
      to park the AP, it must use the AP Reset Hold exit event in place of, for
      example, a HLT loop.
      
      First AP boot (first INIT-SIPI-SIPI sequence):
        Execute the AP (vCPU) as it was initialized and measured by the SEV-ES
        support. It is up to the guest to transfer control of the AP to the
        proper location.
      
      Subsequent AP boot:
        KVM will expect to receive an AP Reset Hold exit event indicating that
        the vCPU is being parked and will require an INIT-SIPI-SIPI sequence to
        awaken it. When the AP Reset Hold exit event is received, KVM will place
        the vCPU into a simulated HLT mode. Upon receiving the INIT-SIPI-SIPI
        sequence, KVM will make the vCPU runnable. It is again up to the guest
        to then transfer control of the AP to the proper location.
      
        To differentiate between an actual HLT and an AP Reset Hold, a new MP
        state is introduced, KVM_MP_STATE_AP_RESET_HOLD, which the vCPU is
        placed in upon receiving the AP Reset Hold exit event. Additionally, to
        communicate the AP Reset Hold exit event up to userspace (if needed), a
        new exit reason is introduced, KVM_EXIT_AP_RESET_HOLD.
      
      A new x86 ops function is introduced, vcpu_deliver_sipi_vector, in order
      to accomplish AP booting. For VMX, vcpu_deliver_sipi_vector is set to the
      original SIPI delivery function, kvm_vcpu_deliver_sipi_vector(). SVM adds
      a new function that, for non SEV-ES guests, invokes the original SIPI
      delivery function, kvm_vcpu_deliver_sipi_vector(), but for SEV-ES guests,
      implements the logic above.
      Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
      Message-Id: <e8fbebe8eb161ceaabdad7c01a5859a78b424d5e.1609791600.git.thomas.lendacky@amd.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      647daca2
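      A sketch of the two sides of this logic, with vcpu_is_sev_es() as a
      hypothetical predicate; the MP state and exit reason constants are
      the ones introduced by this commit:

        /* AP Reset Hold exit event: park the vCPU in a simulated HLT and
         * let userspace know, if it cares. */
        static int sev_es_ap_reset_hold(struct kvm_vcpu *vcpu)
        {
                vcpu->arch.mp_state = KVM_MP_STATE_AP_RESET_HOLD;
                vcpu->run->exit_reason = KVM_EXIT_AP_RESET_HOLD;
                return 0;
        }

        /* SVM's vcpu_deliver_sipi_vector hook. */
        static void svm_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector)
        {
                if (vcpu_is_sev_es(vcpu)) {
                        /* Encrypted state: never touch CS:IP, just make
                         * the parked vCPU runnable again; the guest moves
                         * the AP to the proper location itself. */
                        vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
                        return;
                }
                kvm_vcpu_deliver_sipi_vector(vcpu, vector);  /* legacy */
        }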
  5. 15 Dec 2020, 3 commits
  6. 12 Dec 2020, 1 commit
    • KVM: x86: reinstate vendor-agnostic check on SPEC_CTRL cpuid bits · 39485ed9
      Authored by Paolo Bonzini
      Until commit e7c587da ("x86/speculation: Use synthetic bits for
      IBRS/IBPB/STIBP"), KVM was testing both Intel and AMD CPUID bits before
      allowing the guest to write MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD.
      Testing only Intel bits on VMX processors, or only AMD bits on SVM
      processors, fails if the guests are created with the "opposite" vendor
      as the host.
      
      While at it, also tweak the host CPU check to use the vendor-agnostic
      feature bit X86_FEATURE_IBPB, since we only care about the availability
      of the MSR on the host here and not about specific CPUID bits.
      
      Fixes: e7c587da ("x86/speculation: Use synthetic bits for IBRS/IBPB/STIBP")
      Cc: stable@vger.kernel.org
      Reported-by: Denis V. Lunev <den@openvz.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      39485ed9
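      A sketch of the reinstated vendor-agnostic guest check;
      guest_cpuid_has() is KVM's existing helper, and the exact feature
      list below is an assumption following the commit's description
      (Intel SPEC_CTRL plus the AMD equivalents):

        static bool guest_has_spec_ctrl_msr(struct kvm_vcpu *vcpu)
        {
                return guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL) ||
                       guest_cpuid_has(vcpu, X86_FEATURE_AMD_STIBP) ||
                       guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS)  ||
                       guest_cpuid_has(vcpu, X86_FEATURE_AMD_SSBD);
        }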
  7. 15 Nov 2020, 8 commits
    • kvm: x86: Sink cpuid update into vendor-specific set_cr4 functions · 2259c17f
      Authored by Jim Mattson
      On emulated VM-entry and VM-exit, update the CPUID bits that reflect
      CR4.OSXSAVE and CR4.PKE.
      
      This fixes a bug where the CPUID bits could continue to reflect L2 CR4
      values after emulated VM-exit to L1. It also fixes a related bug where
      the CPUID bits could continue to reflect L1 CR4 values after emulated
      VM-entry to L2. The latter bug is mainly relevant to SVM, wherein
      CPUID is not a required intercept. However, it could also be relevant
      to VMX, because the code to conditionally update these CPUID bits
      assumes that the guest CPUID and the guest CR4 are always in sync.
      
      Fixes: 8eb3f87d ("KVM: nVMX: fix guest CR4 loading when emulating L2 to L1 exit")
      Fixes: 2acf923e ("KVM: VMX: Enable XSAVE/XRSTOR for guest")
      Fixes: b9baba86 ("KVM, pkeys: expose CPUID/CR4 to guest")
      Reported-by: Abhiroop Dabral <adabral@paloaltonetworks.com>
      Signed-off-by: Jim Mattson <jmattson@google.com>
      Reviewed-by: Ricardo Koller <ricarkol@google.com>
      Reviewed-by: Peter Shier <pshier@google.com>
      Cc: Haozhong Zhang <haozhong.zhang@intel.com>
      Cc: Dexuan Cui <dexuan.cui@intel.com>
      Cc: Huaitong Han <huaitong.han@intel.com>
      Message-Id: <20201029170648.483210-1-jmattson@google.com>
      2259c17f
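      A sketch of what "sink the update" means in practice, using KVM's
      cpuid_entry_change() helper; update_cpuid_from_cr4() is an
      illustrative name for the refresh each vendor's set_cr4 must now
      trigger:

        static void update_cpuid_from_cr4(struct kvm_vcpu *vcpu)
        {
                struct kvm_cpuid_entry2 *e;

                /* CPUID.1:ECX.OSXSAVE mirrors CR4.OSXSAVE. */
                e = kvm_find_cpuid_entry(vcpu, 1, 0);
                if (e)
                        cpuid_entry_change(e, X86_FEATURE_OSXSAVE,
                                kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE));

                /* CPUID.7.0:ECX.OSPKE mirrors CR4.PKE. */
                e = kvm_find_cpuid_entry(vcpu, 7, 0);
                if (e)
                        cpuid_entry_change(e, X86_FEATURE_OSPKE,
                                kvm_read_cr4_bits(vcpu, X86_CR4_PKE));
        }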
    • KVM: X86: Implement ring-based dirty memory tracking · fb04a1ed
      Authored by Peter Xu
      This patch is heavily based on previous work from Lei Cao
      <lei.cao@stratus.com> and Paolo Bonzini <pbonzini@redhat.com>. [1]
      
      KVM currently uses large bitmaps to track dirty memory.  These bitmaps
      are copied to userspace when userspace queries KVM for its dirty page
      information.  The use of bitmaps is mostly sufficient for live
      migration, as large parts of memory are dirtied from one log-dirty
      pass to another.  However, in a checkpointing system, the number of
      dirty pages is small and in fact it is often bounded---the VM is
      paused when it has dirtied a pre-defined number of pages. Traversing a
      large, sparsely populated bitmap to find set bits is time-consuming,
      as is copying the bitmap to user-space.

      A similar issue exists for live migration when the guest memory is
      huge but the page-dirtying workload is trivial.  In that case, for
      each dirty sync we need to pull the whole dirty bitmap to userspace
      and analyse every bit even if it is mostly zeros.

      The preferred data structure for the above scenarios is a dense list of
      guest frame numbers (GFN).  This patch series stores the dirty list in
      kernel memory that can be memory mapped into userspace to allow speedy
      harvesting (see the userspace sketch after this entry).
      
      This patch enables dirty ring for X86 only.  However it should be
      easily extended to other archs as well.
      
      [1] https://patchwork.kernel.org/patch/10471409/
      Signed-off-by: Lei Cao <lei.cao@stratus.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Message-Id: <20201001012222.5767-1-peterx@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      fb04a1ed
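      A hedged userspace sketch of the harvesting loop, using the uapi
      names this series introduces (struct kvm_dirty_gfn, the
      KVM_DIRTY_GFN_F_* flags, and the KVM_RESET_DIRTY_RINGS ioctl);
      the ring layout and wrap-around handling are simplified:

        #include <linux/kvm.h>
        #include <stdio.h>
        #include <sys/ioctl.h>

        /* Walk one vCPU's ring (mmapped from the vCPU fd), collect dirty
         * GFNs, then let KVM recycle the harvested entries. */
        static int harvest_ring(int vm_fd, struct kvm_dirty_gfn *ring,
                                unsigned int size, unsigned int *fetch)
        {
                while (ring[*fetch % size].flags & KVM_DIRTY_GFN_F_DIRTY) {
                        struct kvm_dirty_gfn *e = &ring[*fetch % size];

                        printf("slot %u, gfn offset %llu\n", e->slot,
                               (unsigned long long)e->offset);
                        e->flags |= KVM_DIRTY_GFN_F_RESET;  /* harvested */
                        (*fetch)++;
                }
                return ioctl(vm_fd, KVM_RESET_DIRTY_RINGS, 0);
        }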
    • KVM: X86: Don't track dirty for KVM_SET_[TSS_ADDR|IDENTITY_MAP_ADDR] · ff5a983c
      Authored by Peter Xu
      Originally, we had three code paths that could dirty a page without
      vCPU context on x86:
      
        - init_rmode_identity_map
        - init_rmode_tss
        - kvmgt_rw_gpa
      
      init_rmode_identity_map and init_rmode_tss are set up on the
      destination VM no matter what (and the guest cannot even see them), so
      it does not make sense to track them at all.
      
      To do this, allow __x86_set_memory_region() to return the userspace
      address that was just allocated to the caller.  Then, in both of the
      functions, we directly write to the userspace address instead of
      calling the kvm_write_*() APIs; a sketch follows below.
      
      Another trivial change is that we don't need to explicitly clear the
      identity page table root in init_rmode_identity_map(), because we will
      write to the whole page with 4M huge page entries no matter what.
      Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
      Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Message-Id: <20201001012044.5151-4-peterx@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      ff5a983c
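      A sketch of the new pattern for one of the two paths, with tss_base()
      and the error handling simplified; the point is the plain
      clear_user()/copy_to_user() on the returned address instead of
      kvm_write_guest():

        static int init_rmode_tss_sketch(struct kvm *kvm)
        {
                void __user *ua;

                /* Now returns the userspace address of the new slot. */
                ua = __x86_set_memory_region(kvm, TSS_PRIVATE_MEMSLOT,
                                             tss_base(kvm), PAGE_SIZE * 3);
                if (IS_ERR(ua))
                        return PTR_ERR(ua);

                if (clear_user(ua, PAGE_SIZE * 3))
                        return -EFAULT;
                /* ... then fill in the TSS fields via copy_to_user ... */
                return 0;
        }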
    • KVM: x86: fix apic_accept_events vs check_nested_events · 1c96dcce
      Authored by Paolo Bonzini
      vmx_apic_init_signal_blocked is buggy in that it returns true
      even in VMX non-root mode.  In non-root mode, however, INITs
      are not latched; they just cause a vmexit.  Previously, KVM was
      waiting for them to be processed in kvm_apic_accept_events, and in
      the meanwhile it ate the SIPIs that the processor received.
      
      However, in order to implement the wait-for-SIPI activity state,
      KVM will have to process KVM_APIC_SIPI in vmx_check_nested_events,
      and it will not be possible anymore to disregard SIPIs in non-root
      mode as the code is currently doing.
      
      By calling kvm_x86_ops.nested_ops->check_events, we can force a vmexit
      (with the side-effect of latching INITs) before incorrectly injecting
      an INIT or SIPI in a guest, and therefore vmx_apic_init_signal_blocked
      can do the right thing.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      1c96dcce
    • KVM: x86: Return bool instead of int for CR4 and SREGS validity checks · ee69c92b
      Authored by Sean Christopherson
      Rework the common CR4 and SREGS checks to return a bool instead of an
      int, i.e. true/false instead of 0/-EINVAL, and add "is" to the name to
      clarify the polarity of the return value (which is effectively inverted
      by this change).
      
      No functional change intended.
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Message-Id: <20201007014417.29276-6-sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      ee69c92b
    • KVM: x86: Move vendor CR4 validity check to dedicated kvm_x86_ops hook · c2fe3cd4
      Authored by Sean Christopherson
      Split out VMX's checks on CR4.VMXE to a dedicated hook, .is_valid_cr4(),
      and invoke the new hook from kvm_valid_cr4().  This fixes an issue where
      KVM_SET_SREGS would return success while failing to actually set CR4.
      
      Fixing the issue by explicitly checking kvm_x86_ops.set_cr4()'s return
      in __set_sregs() is not a viable option as KVM has already stuffed a
      variety of vCPU state.
      
      Note, kvm_valid_cr4() and is_valid_cr4() have different return types and
      inverted semantics.  This will be remedied in a future patch.
      
      Fixes: 5e1746d6 ("KVM: nVMX: Allow setting the VMXE bit in CR4")
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Message-Id: <20201007014417.29276-5-sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      c2fe3cd4
    • KVM: VMX: Drop explicit 'nested' check from vmx_set_cr4() · a447e38a
      Authored by Sean Christopherson
      Drop vmx_set_cr4()'s explicit check on the 'nested' module param now
      that common x86 handles the check by incorporating VMXE into the CR4
      reserved bits, via kvm_cpu_caps.  X86_FEATURE_VMX is set in kvm_cpu_caps
      (by vmx_set_cpu_caps()), if and only if 'nested' is true.
      
      No functional change intended.
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Message-Id: <20201007014417.29276-3-sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      a447e38a
    • KVM: VMX: Drop guest CPUID check for VMXE in vmx_set_cr4() · d3a9e414
      Authored by Sean Christopherson
      Drop vmx_set_cr4()'s somewhat hidden guest_cpuid_has() check on VMXE now
      that common x86 handles the check by incorporating VMXE into the CR4
      reserved bits, i.e. in cr4_guest_rsvd_bits.  This fixes a bug where KVM
      incorrectly rejects KVM_SET_SREGS with CR4.VMXE=1 if it's executed
      before KVM_SET_CPUID{,2}.
      
      Fixes: 5e1746d6 ("KVM: nVMX: Allow setting the VMXE bit in CR4")
      Reported-by: Stas Sergeev <stsp@users.sourceforge.net>
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Message-Id: <20201007014417.29276-2-sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      d3a9e414
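      A sketch of the mechanism both of the VMXE patches above rely on,
      with cr4_reserved_bits_sketch() and CR4_VALID_BITS as illustrative
      names: any CR4 bit whose feature flag the guest lacks becomes
      reserved, so common code rejects it without vendor-specific checks:

        static u64 cr4_reserved_bits_sketch(struct kvm_vcpu *vcpu)
        {
                u64 rsvd = ~CR4_VALID_BITS;  /* architecture-reserved */

                /* X86_FEATURE_VMX reaches guest CPUID only if nested=1
                 * (kvm_cpu_caps) and userspace exposes it. */
                if (!guest_cpuid_has(vcpu, X86_FEATURE_VMX))
                        rsvd |= X86_CR4_VMXE;
                /* ... same pattern for OSXSAVE, PKE, LA57, ... */
                return rsvd;
        }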
  8. 31 Oct 2020, 2 commits
  9. 24 Oct 2020, 1 commit
    • KVM: vmx: rename pi_init to avoid conflict with paride · a3ff25fc
      Authored by Paolo Bonzini
      allyesconfig results in:
      
      ld: drivers/block/paride/paride.o: in function `pi_init':
      (.text+0x1340): multiple definition of `pi_init'; arch/x86/kvm/vmx/posted_intr.o:posted_intr.c:(.init.text+0x0): first defined here
      make: *** [Makefile:1164: vmlinux] Error 1
      
      because commit:
      
      commit 8888cdd0
      Author: Xiaoyao Li <xiaoyao.li@intel.com>
      Date:   Wed Sep 23 11:31:11 2020 -0700
      
          KVM: VMX: Extract posted interrupt support to separate files
      
      added another pi_init(), though one already existed in the paride code.
      Reported-by: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      a3ff25fc
  10. 22 Oct 2020, 4 commits
  11. 20 Oct 2020, 1 commit
  12. 03 Oct 2020, 1 commit
  13. 29 Sep 2020, 1 commit
  14. 28 Sep 2020, 6 commits