1. 31 Mar 2020 (5 commits)
  2. 26 Mar 2020 (2 commits)
  3. 24 Mar 2020 (4 commits)
    • KVM: VMX: Gracefully handle faults on VMXON · 4f6ea0a8
      Committed by Sean Christopherson
      Gracefully handle faults on VMXON, e.g. #GP due to VMX being disabled by
      BIOS, instead of letting the fault crash the system.  Now that KVM uses
      cpufeatures to query support instead of reading MSR_IA32_FEAT_CTL
      directly, it's possible for a bug in a different subsystem to cause KVM
      to incorrectly attempt VMXON[*].  Crashing the system is especially
      annoying if the system is configured such that hardware_enable() will
      be triggered during boot.
      
      Opportunistically rename @addr to @vmxon_pointer and use a named param
      to reference it in the inline assembly.
      
      Print 0xdeadbeef in the ultra-"rare" case that reading MSR_IA32_FEAT_CTL
      also faults.
      
      [*] https://lkml.kernel.org/r/20200226231615.13664-1-sean.j.christopherson@intel.com
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Message-Id: <20200321193751.24985-4-sean.j.christopherson@intel.com>
      Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      4f6ea0a8
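
      A minimal sketch of the fault-tolerant VMXON pattern described above,
      assuming the kernel's exception-table machinery (asm goto plus
      _ASM_EXTABLE); this is a reconstruction from the commit text, not the
      exact upstream diff:

          static int kvm_cpu_vmxon(u64 vmxon_pointer)
          {
                  u64 msr;

                  cr4_set_bits(X86_CR4_VMXE);

                  /* On a fault (e.g. #GP because BIOS disabled VMX), jump to
                   * the fault label instead of crashing the system. */
                  asm_volatile_goto("1: vmxon %[vmxon_pointer]\n\t"
                                    _ASM_EXTABLE(1b, %l[fault])
                                    : : [vmxon_pointer] "m"(vmxon_pointer)
                                    : : fault);
                  return 0;

          fault:
                  /* Report 0xdeadbeef if even the FEAT_CTL MSR read faults. */
                  WARN_ONCE(1, "VMXON faulted, MSR_IA32_FEAT_CTL (0x3a) = 0x%llx\n",
                            rdmsrl_safe(MSR_IA32_FEAT_CTL, &msr) ? 0xdeadbeef : msr);
                  cr4_clear_bits(X86_CR4_VMXE);
                  return -EFAULT;
          }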
    • KVM: VMX: Fold loaded_vmcs_init() into alloc_loaded_vmcs() · d260f9ef
      Committed by Sean Christopherson
      Subsume loaded_vmcs_init() into alloc_loaded_vmcs(), its only remaining
      caller, and drop the VMCLEAR on the shadow VMCS, which is guaranteed to
      be NULL.  loaded_vmcs_init() was previously used by loaded_vmcs_clear(),
      but loaded_vmcs_clear() also subsumed loaded_vmcs_init() to properly
      handle smp_wmb() with respect to VMCLEAR.
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Message-Id: <20200321193751.24985-3-sean.j.christopherson@intel.com>
      Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      d260f9ef
    • KVM: VMX: Always VMCLEAR in-use VMCSes during crash with kexec support · 31603d4f
      Committed by Sean Christopherson
      VMCLEAR all in-use VMCSes during a crash, even if kdump's NMI shootdown
      interrupted a KVM update of the percpu in-use VMCS list.
      
      Because NMIs are not blocked by disabling IRQs, it's possible that
      crash_vmclear_local_loaded_vmcss() could be called while the percpu list
      of VMCSes is being modified, e.g. in the middle of list_add() in
      vmx_vcpu_load_vmcs().  This potential corner case was called out in the
      original commit[*], but the analysis of its impact was wrong.
      
      Skipping the VMCLEARs is wrong because it all but guarantees that a
      loaded, and therefore cached, VMCS will live across kexec and corrupt
      memory in the new kernel.  Corruption will occur because the CPU's VMCS
      cache is non-coherent, i.e. not snooped, and so the writeback of VMCS
      memory on its eviction will overwrite random memory in the new kernel.
      The VMCS will live because the NMI shootdown also disables VMX, i.e. the
      in-progress VMCLEAR will #UD, and existing Intel CPUs do not flush the
      VMCS cache on VMXOFF.
      
      Furthermore, interrupting list_add() and list_del() is safe due to
      crash_vmclear_local_loaded_vmcss() using forward iteration.  list_add()
      ensures the new entry is not visible to forward iteration unless the
      entire add completes, via WRITE_ONCE(prev->next, new).  A bad "prev"
      pointer could be observed if the NMI shootdown interrupted list_del() or
      list_add(), but list_for_each_entry() does not consume ->prev.
      
      In addition to removing the temporary disabling of VMCLEAR, open code
      loaded_vmcs_init() in __loaded_vmcs_clear() and reorder VMCLEAR so that
      the VMCS is deleted from the list only after it's been VMCLEAR'd.
      Deleting the VMCS before VMCLEAR would allow a race where the NMI
      shootdown could arrive between list_del() and vmcs_clear() and thus
      neither flow would execute a successful VMCLEAR.  Alternatively, more
      code could be moved into loaded_vmcs_init(), but that gets rather silly
      as the only other user, alloc_loaded_vmcs(), doesn't need the smp_wmb()
      and would need to work around the list_del().
      
      Update the smp_*() comments related to the list manipulation, and
      opportunistically reword them to improve clarity.
      
      [*] https://patchwork.kernel.org/patch/1675731/#3720461
      
      Fixes: 8f536b76 ("KVM: VMX: provide the vmclear function and a bitmap to support VMCLEAR in kdump")
      Cc: stable@vger.kernel.org
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Message-Id: <20200321193751.24985-2-sean.j.christopherson@intel.com>
      Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      31603d4f
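
      A condensed sketch of the ordering described above: VMCLEAR before
      unlinking, so a crash NMI that interrupts the flow still reaches the
      VMCS via forward iteration (illustrative, based on the commit message):

          static void __loaded_vmcs_clear(void *arg)
          {
                  struct loaded_vmcs *loaded_vmcs = arg;

                  /* VMCLEAR first: a crash NMI walking the percpu list must
                   * still see this entry until the cached VMCS is flushed. */
                  vmcs_clear(loaded_vmcs->vmcs);

                  /* Only now hide the entry from forward iteration. */
                  list_del(&loaded_vmcs->loaded_vmcss_on_cpu_link);

                  /* Pairs with an smp_rmb() in vmx_vcpu_load_vmcs(): the
                   * list_del() must be visible before cpu == -1 lets another
                   * CPU re-add this VMCS to its own percpu list. */
                  smp_wmb();
                  loaded_vmcs->cpu = -1;
          }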
    • KVM: x86: Expose fast short REP MOV for supported cpuid · e3747407
      Committed by Zhenyu Wang
      For CPUs that support fast short REP MOV (X86_FEATURE_FSRM), e.g. Icelake
      and Tigerlake, expose the feature in KVM's supported CPUID as well.
      Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
      Message-Id: <20200323092236.3703-1-zhenyuw@linux.intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      e3747407
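
      Exposing such a feature bit typically amounts to one addition to the
      CPUID.7.0:EDX feature mask in KVM's cpuid code; a sketch (the
      surrounding F() entries are abbreviated context, not the literal diff):

          /* cpuid 7.0.edx */
          const u32 kvm_cpuid_7_0_edx_x86_features =
                  F(AVX512_4VNNIW) | F(SPEC_CTRL) | F(ARCH_CAPABILITIES) |
                  F(FSRM);        /* fast short REP MOV, newly exposed */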
  4. 18 Mar 2020 (6 commits)
  5. 17 Mar 2020 (23 commits)
    • KVM: VMX: access regs array in vmenter.S in its natural order · bb03911f
      Committed by Uros Bizjak
      Registers in "regs" array are indexed as rax/rcx/rdx/.../rsi/rdi/r8/...
      Reorder access to "regs" array in vmenter.S to follow its natural order.
      Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      bb03911f
    • KVM: nVMX: properly handle errors in nested_vmx_handle_enlightened_vmptrld() · b6a0653a
      Committed by Vitaly Kuznetsov
      nested_vmx_handle_enlightened_vmptrld() fails in two cases:
      - when we fail to kvm_vcpu_map() the supplied GPA
      - when revision_id is incorrect.
      Genuine Hyper-V raises #UD in the former case (at least with *some*
      incorrect GPAs) and does VMfailInvalid() in the latter. KVM doesn't do
      anything, so L1 just gets stuck retrying the same faulty VMLAUNCH.
      
      nested_vmx_handle_enlightened_vmptrld() has two call sites:
      nested_vmx_run() and nested_get_vmcs12_pages(). The former needs to raise
      #UD or VMfailInvalid accordingly; the latter doesn't need to do much: the
      failure there happens after migration when L2 was running (and L1 did
      something weird like writing to the VP assist page from a different vCPU),
      so just kill L1 with KVM_EXIT_INTERNAL_ERROR.
      Reported-by: Miaohe Lin <linmiaohe@huawei.com>
      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      [Squash kbuild autopatch. - Paolo]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      b6a0653a
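
      One way to make the two failure modes distinguishable to callers, per
      the commit description, is a small status enum; a sketch with
      illustrative names:

          enum nested_evmptrld_status {
                  EVMPTRLD_DISABLED,      /* eVMCS not in use */
                  EVMPTRLD_SUCCEEDED,
                  EVMPTRLD_VMFAIL,        /* bad revision_id -> VMfailInvalid */
                  EVMPTRLD_ERROR,         /* kvm_vcpu_map() failed -> kill L1 */
          };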
    • KVM: nVMX: stop abusing need_vmcs12_to_shadow_sync for eVMCS mapping · e942dbf8
      Committed by Vitaly Kuznetsov
      When vmx_set_nested_state() happens, we may not have all the required
      data to map enlightened VMCS: e.g. HV_X64_MSR_VP_ASSIST_PAGE MSR may not
      yet be restored so we need a postponed action. Currently, we (ab)use
      need_vmcs12_to_shadow_sync/nested_sync_vmcs12_to_shadow() for that but
      this is not ideal:
      - We may not need to sync anything if L2 is running
      - It is hard to propagate errors from nested_sync_vmcs12_to_shadow(),
       as we call it from vmx_prepare_switch_to_guest(), which happens just
       before VMLAUNCH; the code is not ready to handle errors there.
      
      Move eVMCS mapping to nested_get_vmcs12_pages() and request
      KVM_REQ_GET_VMCS12_PAGES; this seems to be less abusive in nature.
      It would probably be possible to introduce a specialized KVM_REQ_EVMCS_MAP,
      but it is undesirable to propagate eVMCS specifics all the way up to x86.c.
      
      Note, we don't need to request KVM_REQ_GET_VMCS12_PAGES from
      vmx_set_nested_state() directly as nested_vmx_enter_non_root_mode() already
      does that. Requesting KVM_REQ_GET_VMCS12_PAGES is done to document the
      (non-obvious) side-effect and to be future proof.
      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      e942dbf8
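
      A sketch of the deferred mapping under the request mechanism; the body
      is illustrative and reuses the status enum sketched above:

          static bool nested_get_vmcs12_pages(struct kvm_vcpu *vcpu)
          {
                  struct vcpu_vmx *vmx = to_vmx(vcpu);

                  /* Map the eVMCS here, after HV_X64_MSR_VP_ASSIST_PAGE has
                   * been restored, not at vmx_set_nested_state() time. */
                  if (vmx->nested.enlightened_vmcs_enabled &&
                      nested_vmx_handle_enlightened_vmptrld(vcpu, false) ==
                      EVMPTRLD_ERROR)
                          return false;

                  /* ... map APIC and MSR-bitmap pages as before ... */
                  return true;
          }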
    • KVM: nSVM: Remove an obsolete comment. · 0b664653
      Committed by Miaohe Lin
      The function does not return bool anymore.
      Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      0b664653
    • KVM: X86: correct meaningless kvm_apicv_activated() check · 8e205a6b
      Committed by Paolo Bonzini
      After test_and_set_bit() on kvm->arch.apicv_inhibit_reasons, calling
      kvm_apicv_activated() always returns false, because apicv_inhibit_reasons
      is guaranteed to be nonzero at that point.
      
      What the code wants to do, is check whether APICv was *already* active
      and if so skip the costly request; we can do this using cmpxchg.
      Reported-by: Miaohe Lin <linmiaohe@huawei.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      8e205a6b
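
      A sketch of the cmpxchg pattern: atomically update the inhibit mask and
      learn its old value, so "already (in)active" can be detected without a
      spurious request (illustrative; "activate" and "bit" are the assumed
      parameters of the surrounding update function):

          unsigned long old, new, expected;

          old = READ_ONCE(kvm->arch.apicv_inhibit_reasons);
          do {
                  expected = new = old;
                  if (activate)
                          __clear_bit(bit, &new);
                  else
                          __set_bit(bit, &new);
                  if (new == old)
                          break;  /* no change: skip the costly request */
                  old = cmpxchg(&kvm->arch.apicv_inhibit_reasons, expected, new);
          } while (old != expected);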
    • KVM: nVMX: Consolidate nested MTF checks to helper function · 212617db
      Committed by Oliver Upton
      commit 5ef8acbd ("KVM: nVMX: Emulate MTF when performing
      instruction emulation") introduced a helper to check the MTF
      VM-execution control in vmcs12. Change pre-existing check in
      nested_vmx_exit_reflected() to instead use the helper.
      Signed-off-by: Oliver Upton <oupton@google.com>
      Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
      Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      212617db
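
      The helper presumably reduces to a one-line vmcs12 control check along
      these lines (sketch):

          static inline bool nested_cpu_has_mtf(struct vmcs12 *vmcs12)
          {
                  return nested_cpu_has(vmcs12, CPU_BASED_MONITOR_TRAP_FLAG);
          }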
    • KVM: VMX: Micro-optimize vmexit time when not exposing PMU · 041bc42c
      Committed by Wanpeng Li
      Most cloud providers do not expose a PMU to guests, because of the poor
      performance of PMU emulation and because of security concerns.  However,
      KVM calls perf_guest_switch_get_msrs() and clear_atomic_switch_msr()
      unconditionally before each vmentry, even when no PMU is exposed to the
      guest.
      
      A ~2% reduction in vmexit time can be observed with
      kvm-unit-tests/vmexit.flat on my SKX server.
      
      Before patch:
      vmcall 1559
      
      After patch:
      vmcall 1529
      Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      041bc42c
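
      The optimization plausibly boils down to gating the perf MSR switch on
      the vCPU's PMU version, along these lines (sketch; vcpu_to_pmu() is the
      assumed accessor):

          /* Skip perf MSR switching entirely when no PMU is exposed;
           * a zeroed CPUID.0xA leaves pmu->version == 0. */
          if (vcpu_to_pmu(&vmx->vcpu)->version)
                  atomic_switch_perf_msrs(vmx);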
    • kvm: svm: Introduce GA Log tracepoint for AVIC · ab56f8e6
      Committed by Suravee Suthikulpanit
      The GA Log tracepoint is useful when debugging AVIC performance issues,
      as it can be used with perf to count the number of times the IOMMU AVIC
      injects interrupts through the slow path instead of injecting them
      directly into the target vCPU.
      Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      ab56f8e6
    • KVM: nSVM: avoid loss of pending IRQ/NMI before entering L2 · 78f2145c
      Committed by Paolo Bonzini
      This patch reproduces for nSVM the change that was made for nVMX in
      commit b5861e5c ("KVM: nVMX: Fix loss of pending IRQ/NMI before
      entering L2").  While I do not have a test that breaks without it, I
      cannot see why it would not be necessary since all events are unblocked
      by VMRUN's setting of GIF back to 1.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      78f2145c
    • KVM: nSVM: implement check_nested_events for interrupts · b518ba9f
      Committed by Paolo Bonzini
      The current implementation of physical interrupt delivery to a nested guest
      is quite broken.  It relies on svm_interrupt_allowed returning false if
      VINTR=1 so that the interrupt can be injected from enable_irq_window,
      but this does not work for guests that do not intercept HLT or that rely
      on clearing the host IF to block physical interrupts while L2 runs.
      
      This patch can be split into two logical parts, but including only
      one breaks tests, so I am combining both changes together.
      
      The first and easiest is simply to return true for svm_interrupt_allowed
      if HF_VINTR_MASK is set and HIF is set.  This way the semantics of
      svm_interrupt_allowed are respected: svm_interrupt_allowed being false
      does not mean "call enable_irq_window", it means "interrupts cannot
      be injected now".
      
      After doing this, however, we need another place to inject the
      interrupt, and fortunately we already have one, check_nested_events,
      which nested SVM does not implement but which is meant exactly for this
      purpose.  It is called before interrupts are injected, and it can
      therefore do the L2->L1 switch while leaving inject_pending_event
      none the wiser.
      
      This patch was developed together with Cathy Avery, who wrote the
      test and did a lot of the initial debugging.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      b518ba9f
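
      A sketch of what an SVM check_nested_events implementation for
      interrupts can look like (illustrative; the helper names are
      assumptions, not the exact upstream code):

          static int svm_check_nested_events(struct kvm_vcpu *vcpu)
          {
                  struct vcpu_svm *svm = to_svm(vcpu);
                  bool block = kvm_event_needs_reinjection(vcpu);

                  if (kvm_cpu_has_interrupt(vcpu) && nested_exit_on_intr(svm)) {
                          if (block)
                                  return -EBUSY;
                          /* Do the L2->L1 switch here, before injection,
                           * leaving inject_pending_event none the wiser. */
                          nested_svm_intr(svm);
                  }
                  return 0;
          }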
    • KVM: nSVM: ignore L1 interrupt window while running L2 with V_INTR_MASKING=1 · 64b5bd27
      Committed by Paolo Bonzini
      If a nested VM is started while an IRQ was pending and with
      V_INTR_MASKING=1, the behavior of the guest depends on host IF.  If it
      is 1, the VM should exit immediately, before executing the first
      instruction of the guest, because VMRUN sets GIF back to 1.
      
      If it is 0 and the host has VGIF, however, at the time of the VMRUN
      instruction L0 is running the guest with a pending interrupt window
      request.  This interrupt window request is completely irrelevant to
      L2, since IF only controls virtual interrupts, so this patch drops
      INTERCEPT_VINTR from the VMCB while running L2 under these circumstances.
      To simplify the code, both steps of enabling the interrupt window
      (setting the VINTR intercept and requesting a fake virtual interrupt
      in svm_inject_irq) are grouped in the svm_set_vintr function, and
      likewise for dismissing the interrupt window request in svm_clear_vintr.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      64b5bd27
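
      A sketch of the grouped interrupt-window setup (illustrative, modeled
      on the commit description):

          static void svm_set_vintr(struct vcpu_svm *svm)
          {
                  struct vmcb_control_area *control = &svm->vmcb->control;

                  set_intercept(svm, INTERCEPT_VINTR);

                  /* Request a dummy virtual interrupt purely to force a VINTR
                   * vmexit; real injection still goes through EVENTINJ. */
                  control->int_ctl |= V_IRQ_MASK |
                                      (0xf << V_INTR_PRIO_SHIFT);
                  mark_dirty(svm->vmcb, VMCB_INTR);
          }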
    • KVM: nSVM: do not change host intercepts while nested VM is running · b5ec2e02
      Committed by Paolo Bonzini
      Instead of touching the host intercepts so that the bitwise OR in
      recalc_intercepts just works, mask away uninteresting intercepts
      directly in recalc_intercepts.
      
      This is cleaner and keeps the logic in one place even for intercepts
      that can change even while L2 is running.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      b5ec2e02
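
      A sketch of masking in recalc_intercepts() rather than mutating the
      host state; the struct layout and field names are approximations of the
      2020-era code, not the literal diff:

          static void recalc_intercepts(struct vcpu_svm *svm)
          {
                  struct vmcb_control_area *c = &svm->vmcb->control;
                  struct vmcb_control_area *h = &svm->nested.hsave->control;
                  struct nested_state *g = &svm->nested;

                  c->intercept = h->intercept | g->intercept;

                  /* L1's interrupt window is meaningless to L2 when virtual
                   * interrupt masking is in effect, so mask it off here
                   * instead of clearing it in the host intercepts. */
                  if (g->int_ctl & V_INTR_MASKING_MASK)
                          c->intercept &= ~(1ULL << INTERCEPT_VINTR);
          }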
    • KVM: x86: rename set_cr3 callback and related flags to load_mmu_pgd · 727a7e27
      Committed by Paolo Bonzini
      The set_cr3 callback is not setting the guest CR3, it is setting the
      root of the guest page tables, either shadow or two-dimensional.
      To make this clearer as well as to indicate that the MMU calls it
      via kvm_mmu_load_cr3, rename it to load_mmu_pgd.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      727a7e27
    • KVM: x86: unify callbacks to load paging root · 689f3bf2
      Committed by Paolo Bonzini
      Similar to what kvm-intel.ko is doing, provide a single callback that
      merges svm_set_cr3, set_tdp_cr3 and nested_svm_set_tdp_cr3.
      
      This lets us unify the set_cr3 and set_tdp_cr3 entries in kvm_x86_ops.
      I'm doing that in this same patch because splitting it adds quite a bit
      of churn due to the need for forward declarations.  For the same reason
      the assignment to vcpu->arch.mmu->set_cr3 is moved to kvm_init_shadow_mmu
      from init_kvm_softmmu and nested_svm_init_mmu_context.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      689f3bf2
    • KVM: x86: Refactor kvm_cpuid() param that controls out-of-range logic · f91af517
      Committed by Sean Christopherson
      Invert and rename the kvm_cpuid() param that controls out-of-range logic
      to better reflect the semantics of the affected callers, i.e. callers
      that bypass the out-of-range logic do so because they are looking up an
      exact guest CPUID entry, e.g. to query the maxphyaddr.
      
      Similarly, rename kvm_cpuid()'s internal "found" to "exact" to clarify
      that it tracks whether or not the exact requested leaf was found, as
      opposed to any usable leaf being found.
      
      No functional change intended.
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      f91af517
    • KVM: x86: Refactor out-of-range logic to contain the madness · 09c7431e
      Committed by Sean Christopherson
      Move all of the out-of-range logic into a single helper,
      get_out_of_range_cpuid_entry(), to avoid an extra lookup of CPUID.0.0
      and to provide a single location for documenting the out-of-range
      behavior.
      
      No functional change intended.
      
      Cc: Jim Mattson <jmattson@google.com>
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      09c7431e
    • KVM: x86: Fix CPUID range checks for Hypervisor and Centaur classes · 8d892311
      Committed by Sean Christopherson
      Rework the masking in the out-of-range CPUID logic to handle the
      Hypervisor sub-classes, as well as the Centaur class if the guest
      virtual CPU vendor is Centaur.
      
      Masking against 0x80000000 only handles basic and extended leaves, which
      results in Hypervisor range checks being performed against the basic
      CPUID class, and Centaur range checks being performed against the
      Extended class.  E.g. if CPUID.0x40000000.EAX returns 0x4000000A and
      there is no entry for CPUID.0x40000006, then function 0x40000006 would
      be incorrectly reported as out of bounds.
      
      While there is no official definition of what constitutes a class, the
      convention established for Hypervisor classes effectively uses bits 31:8
      as the mask by virtue of checking for different bases in increments of
      0x100, e.g. KVM advertises its CPUID functions starting at 0x40000100
      when HyperV features are advertised at the default base of 0x40000000.
      
      The bad range check doesn't cause functional problems for any known VMM
      because out-of-range semantics only come into play if the exact entry
      isn't found, and VMMs either support a very limited Hypervisor range,
      e.g. the official KVM range is 0x40000000-0x40000001 (effectively no
      room for undefined leaves) or explicitly defines gaps to be zero, e.g.
      Qemu explicitly creates zeroed entries up to the Centaur and Hypervisor
      limits (the latter comes into play when providing HyperV features).
      
      The bad behavior can be visually confirmed by dumping CPUID output in
      the guest when running Qemu with a stable TSC, as Qemu extends the limit
      of range 0x40000000 to 0x40000010 to advertise VMware's cpuid_freq,
      without defining zeroed entries for 0x40000002 - 0x4000000f.
      
      Note, documentation of Centaur/VIA CPUs is hard to come by.  Designating
      0xc0000000 - 0xcfffffff as the Centaur class is a best guess as to the
      behavior of a real Centaur/VIA CPU.
      
      Fixes: 43561123 ("kvm: x86: Improve emulation of CPUID leaves 0BH and 1FH")
      Cc: Jim Mattson <jmattson@google.com>
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      8d892311
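
      A sketch of class-aware masking; the helper name and shape are
      hypothetical, but the masks follow the convention described above:

          /* Compute the base leaf of a CPUID "class" for range checking. */
          static u32 cpuid_class_base(u32 function, bool vcpu_is_centaur)
          {
                  /* Hypervisor classes at 0x40000000+ come in 0x100 blocks. */
                  if ((function & 0xf0000000) == 0x40000000)
                          return function & 0xffffff00;

                  /* Centaur class, honored only for Centaur virtual CPUs. */
                  if (vcpu_is_centaur && (function & 0xf0000000) == 0xc0000000)
                          return 0xc0000000;

                  /* Otherwise: basic (0x0) or extended (0x80000000) class. */
                  return function & 0x80000000;
          }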
    • KVM: x86: Extend AMD specific guest behavior to Hygon virtual CPUs · 23493d0a
      Committed by Sean Christopherson
      Extend guest_cpuid_is_amd() to cover Hygon virtual CPUs and rename it
      accordingly.  Hygon CPUs use an AMD-based core and so have the same
      basic behavior as AMD CPUs.
      
      Fixes: b8f4abb6 ("x86/kvm: Add Hygon Dhyana support to KVM")
      Cc: Pu Wen <puwen@hygon.cn>
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      23493d0a
    • KVM: x86: Add helpers to perform CPUID-based guest vendor check · 15608ed0
      Committed by Sean Christopherson
      Add helpers to provide CPUID-based guest vendor checks, i.e. to do the
      ugly register comparisons.  Use the new helpers to check for an AMD
      guest vendor in guest_cpuid_is_amd() as well as in the existing emulator
      flows.
      
      Using the new helpers fixes a _very_ theoretical bug where
      guest_cpuid_is_amd() would get a false positive on a non-AMD virtual CPU
      with a vendor string beginning with "Auth" due to the previous logic
      only checking EBX.  It also fixes a marginally less theoretical bug
      where guest_cpuid_is_amd() would incorrectly return false for a guest
      CPU with "AMDisbetter!" as its vendor string.
      
      Fixes: a0c0feb5 ("KVM: x86: reserve bit 8 of non-leaf PDPEs and PML4Es in 64-bit mode on AMD")
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      15608ed0
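
      A sketch of the register-compare helper; CPUID.0 returns the vendor
      string across EBX, EDX, ECX in that order, so "AuthenticAMD" packs as
      below (constants shown inline for clarity):

          static inline bool is_guest_vendor_amd(u32 ebx, u32 ecx, u32 edx)
          {
                  /* "AuthenticAMD" -> EBX="Auth", EDX="enti", ECX="cAMD" */
                  return ebx == 0x68747541 &&
                         edx == 0x69746e65 &&
                         ecx == 0x444d4163;
          }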
    • KVM: x86: Trace the original requested CPUID function in kvm_cpuid() · b7fb8488
      Committed by Jan Kiszka
      Trace the requested CPUID function instead of the effective function,
      e.g. if the requested function is out-of-range and KVM is emulating an
      Intel CPU, as the intent of the tracepoint is to show if the output came
      from the actual leaf as opposed to the max basic leaf via redirection.
      
      Similarly, leave "found" as is, i.e. report that an entry was found if
      and only if the requested entry was found.
      
      Fixes: 43561123 ("kvm: x86: Improve emulation of CPUID leaves 0BH and 1FH")
      Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
      [Sean: Drop "found" semantic change, reword changelog accordingly]
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      b7fb8488
    • KVM: CPUID: add support for supervisor states · 408e9a31
      Committed by Paolo Bonzini
      Current CPUID 0xd enumeration code does not support supervisor
      states, because KVM only supports setting IA32_XSS to zero.
      Change it instead to use a new variable supported_xss, to be
      set from the hardware_setup callback which is in charge of CPU
      capabilities.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      408e9a31
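
      A heavily simplified sketch of the new variable and where it gets set;
      KVM_SUPPORTED_XSS and the exact call sites are assumptions:

          u64 __read_mostly supported_xss;

          /* In the vendor hardware_setup callback (sketch): */
          if (boot_cpu_has(X86_FEATURE_XSAVES) && cpu_has_vmx_xsaves())
                  supported_xss = host_xss & KVM_SUPPORTED_XSS; /* assumed mask */
          else
                  supported_xss = 0;

          /* CPUID 0xD sub-leaf 1 can then enumerate supervisor states from
           * supported_xss instead of hardcoding IA32_XSS == 0. */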
    • KVM: x86: Move nSVM CPUID 0x8000000A handling into common x86 code · 25703874
      Committed by Sean Christopherson
      Handle CPUID 0x8000000A in the main switch in __do_cpuid_func() and drop
      ->set_supported_cpuid() now that both VMX and SVM implementations are
      empty.  Like leaf 0x14 (Intel PT) and leaf 0x8000001F (SEV), leaf
      0x8000000A is (obviously) vendor specific but can be queried in
      common code while respecting SVM's wishes by querying kvm_cpu_cap_has().
      Suggested-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      25703874
    • KVM: nSVM: Advertise and enable NRIPS for L1 iff nrips is enabled · 4eb87460
      Committed by Sean Christopherson
      Set NRIPS in KVM capabilities if and only if nrips=true, which naturally
      incorporates the boot_cpu_has() check, and set nrips_enabled only if the
      KVM capability is enabled.
      
      Note, previously KVM would set nrips_enabled based purely on userspace
      input, but at worst that would cause KVM to propagate garbage into L1,
      i.e. userspace would simply be hosing its VM.
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      4eb87460
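
      A sketch of the two halves of the change (kvm_cpu_cap_set() and
      guest_cpuid_has() are the assumed capability/CPUID helpers):

          /* Hardware setup: advertise NRIPS iff the module param is on and
           * the CPU actually supports it. */
          if (nrips && !boot_cpu_has(X86_FEATURE_NRIPS))
                  nrips = false;
          if (nrips)
                  kvm_cpu_cap_set(X86_FEATURE_NRIPS);

          /* Per-vCPU: honor userspace's CPUID only within what KVM itself
           * advertised, so garbage input can't turn on nrips_enabled. */
          svm->nrips_enabled = kvm_cpu_cap_has(X86_FEATURE_NRIPS) &&
                               guest_cpuid_has(&svm->vcpu, X86_FEATURE_NRIPS);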