1. 09 Jul, 2020 (6 commits)
  2. 15 Jun, 2020 (1 commit)
    • kvm/svm: disable KCSAN for svm_vcpu_run() · b95273f1
      Authored by Qian Cai
For some reason, running a simple qemu-kvm command with KCSAN will
      reset AMD hosts. It turns out svm_vcpu_run() could not be instrumented.
      Disable it for now.
      
       # /usr/libexec/qemu-kvm -name ubuntu-18.04-server-cloudimg -cpu host
      	-smp 2 -m 2G -hda ubuntu-18.04-server-cloudimg.qcow2
      
      === console output ===
      Kernel 5.6.0-next-20200408+ on an x86_64
      
      hp-dl385g10-05 login:
      
      <...host reset...>
      
      HPE ProLiant System BIOS A40 v1.20 (03/09/2018)
      (C) Copyright 1982-2018 Hewlett Packard Enterprise Development LP
      Early system initialization, please wait...
Signed-off-by: Qian Cai <cai@lca.pw>
Message-Id: <20200415153709.1559-1-cai@lca.pw>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3. 11 Jun, 2020 (1 commit)
  4. 08 Jun, 2020 (1 commit)
    • KVM: SVM: fix calls to is_intercept · fb7333df
      Authored by Paolo Bonzini
      is_intercept takes an INTERCEPT_* constant, not SVM_EXIT_*; because
      of this, the compiler was removing the body of the conditionals,
      as if is_intercept returned 0.
      
      This unveils a latent bug: when clearing the VINTR intercept,
      int_ctl must also be changed in the L1 VMCB (svm->nested.hsave),
      just like the intercept itself is also changed in the L1 VMCB.
      Otherwise V_IRQ remains set and, due to the VINTR intercept being clear,
      we get a spurious injection of a vector 0 interrupt on the next
      L2->L1 vmexit.
Reported-by: Qian Cai <cai@lca.pw>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5. 01 Jun, 2020 (12 commits)
  6. 28 May, 2020 (6 commits)
    • KVM: SVM: always update CR3 in VMCB · 978ce583
      Authored by Paolo Bonzini
      svm_load_mmu_pgd is delaying the write of GUEST_CR3 to prepare_vmcs02 as
      an optimization, but this is only correct before the nested vmentry.
      If userspace is modifying CR3 with KVM_SET_SREGS after the VM has
      already been put in guest mode, the value of CR3 will not be updated.
      Remove the optimization, which almost never triggers anyway.
This was added in commit 689f3bf2 ("KVM: x86: unify callbacks
      to load paging root", 2020-03-16) just to keep the two vendor-specific
      modules closer, but we'll fix VMX too.
      
      Fixes: 689f3bf2 ("KVM: x86: unify callbacks to load paging root")
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: nSVM: remove exit_required · bd279629
      Authored by Paolo Bonzini
      All events now inject vmexits before vmentry rather than after vmexit.  Therefore,
      exit_required is not set anymore and we can remove it.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: nSVM: inject exceptions via svm_check_nested_events · 7c86663b
      Authored by Paolo Bonzini
      This allows exceptions injected by the emulator to be properly delivered
      as vmexits.  The code also becomes simpler, because we can just let all
      L0-intercepted exceptions go through the usual path.  In particular, our
      emulation of the VMX #DB exit qualification is very much simplified,
      because the vmexit injection path can use kvm_deliver_exception_payload
      to update DR6.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86: enable event window in inject_pending_event · c9d40913
      Authored by Paolo Bonzini
      In case an interrupt arrives after nested.check_events but before the
      call to kvm_cpu_has_injectable_intr, we could end up enabling the interrupt
      window even if the interrupt is actually going to be a vmexit.  This is
      useless rather than harmful, but it really complicates reasoning about
      SVM's handling of the VINTR intercept.  We'd like to never bother with
      the VINTR intercept if V_INTR_MASKING=1 && INTERCEPT_INTR=1, because in
      that case there is no interrupt window and we can just exit the nested
      guest whenever we want.
      
      This patch moves the opening of the interrupt window inside
      inject_pending_event.  This consolidates the check for pending
      interrupt/NMI/SMI in one place, and makes KVM's usage of immediate
      exits more consistent, extending it beyond just nested virtualization.
      
      There are two functional changes here.  They only affect corner cases,
but overall they simplify inject_pending_event.
      
      - re-injection of still-pending events will also use req_immediate_exit
      instead of using interrupt-window intercepts.  This should have no impact
      on performance on Intel since it simply replaces an interrupt-window
      or NMI-window exit for a preemption-timer exit.  On AMD, which has no
equivalent of the preemption timer, it may incur some overhead but an
      actual effect on performance should only be visible in pathological cases.
      
      - kvm_arch_interrupt_allowed and kvm_vcpu_has_events will return true
      if an interrupt, NMI or SMI is blocked by nested_run_pending.  This
      makes sense because entering the VM will allow it to make progress
      and deliver the event.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86: Take an unsigned 32-bit int for has_emulated_msr()'s index · cb97c2d6
      Authored by Sean Christopherson
      Take a u32 for the index in has_emulated_msr() to match hardware, which
      treats MSR indices as unsigned 32-bit values.  Functionally, taking a
      signed int doesn't cause problems with the current code base, but could
      theoretically cause problems with 32-bit KVM, e.g. if the index were
      checked via a less-than statement, which would evaluate incorrectly for
      MSR indices with bit 31 set.
Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200218234012.7110-3-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86: simplify is_mmio_spte · e7581cac
      Authored by Paolo Bonzini
      We can simply look at bits 52-53 to identify MMIO entries in KVM's page
      tables.  Therefore, there is no need to pass a mask to kvm_mmu_set_mmio_spte_mask.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
7. 16 May, 2020 (4 commits)
  8. 14 May, 2020 (9 commits)