1. 17 March 2018, 5 commits
    • KVM: X86: Provide a capability to disable MWAIT intercepts · 4d5422ce
      Committed by Wanpeng Li
      Allowing a guest to execute MWAIT without interception enables a guest
      to put a (physical) CPU into a power saving state, where it takes
      longer to return from than what may be desired by the host.
      
      Don't give a guest that power over a host by default. (Especially
      since nothing prevents a guest from using MWAIT even when it is not
      advertised via CPUID.)
      
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Jan H. Schönherr <jschoenh@amazon.de>
      Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      4d5422ce
    • KVM: x86: Emulate only IN/OUT instructions when accessing VMware backdoor · 04789b66
      Committed by Liran Alon
      Access to the VMware backdoor ports is done by one of the
      IN/OUT/INS/OUTS instructions. Access to these ports must be allowed
      even if the TSS I/O permission bitmap doesn't allow it.
      
      To handle this, VMX/SVM will be changed in future commits
      to intercept #GP which was raised by such access and
      handle it by calling x86 emulator to emulate instruction.
      If it was one of these instructions, the x86 emulator already handles
      it correctly (Since commit "KVM: x86: Always allow access to VMware
      backdoor I/O ports") by not checking these ports against TSS I/O
      permission bitmap.
      
      One may wonder why checking for these specific instructions is
      necessary, as we could just forward all #GPs to the x86 emulator.
      There are multiple reasons for not doing so:
      
      1. We don't want the x86 emulator to be reached easily by the guest
      simply by executing an instruction that raises #GP, as that exposes
      the x86 emulator as a bigger attack surface.
      
      2. The x86 emulator is incomplete and therefore certain instructions
      that can cause #GP cannot be emulated. One example is "INT x"
      (opcode 0xcd), which reaches emulate_int(), which can only emulate
      the instruction if the vCPU is in real mode.
      Signed-off-by: Liran Alon <liran.alon@oracle.com>
      Reviewed-by: Nikita Leshenko <nikita.leshchenko@oracle.com>
      Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      04789b66
    • KVM: x86: Add emulation_type to not raise #UD on emulation failure · e2366171
      Committed by Liran Alon
      The next commits are going to introduce support for accessing the
      VMware backdoor ports even though the guest's TSS I/O permission
      bitmap doesn't allow access. This mimics VMware hypervisor behavior.
      
      In order to support this, the next commits will change VMX/SVM to
      intercept the #GP raised by such an access and handle it by calling
      the x86 emulator to emulate the instruction. Since commit "KVM: x86:
      Always allow access to VMware backdoor I/O ports", the x86 emulator
      handles access to these I/O ports by not checking them against the
      TSS I/O permission bitmap.
      
      However, there could be cases in which the CPU raises #GP on an
      instruction that the x86 emulator fails to disassemble (because of
      an incomplete implementation, for example).
      
      In those cases, we would like the #GP intercept to forward the #GP
      as-is to the guest, as if there had been no intercept to begin with.
      However, the current emulator code always queues a #UD exception
      when the emulator fails (including on disassembly failures), which
      is not what is wanted in this flow.
      
      This commit addresses the issue by adding a new emulation_type flag
      that allows the #GP intercept handler to be notified when instruction
      emulation fails, without a #UD exception being queued.
      Signed-off-by: Liran Alon <liran.alon@oracle.com>
      Reviewed-by: Nikita Leshenko <nikita.leshchenko@oracle.com>
      Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      e2366171
    • KVM: x86: add kvm_fast_pio() to consolidate fast PIO code · dca7f128
      Committed by Sean Christopherson
      Add kvm_fast_pio() to consolidate duplicate code in VMX and SVM.
      Unexport kvm_fast_pio_in() and kvm_fast_pio_out().
      Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      dca7f128
    • x86/kvm/hyper-v: add reenlightenment MSRs support · a2e164e7
      Committed by Vitaly Kuznetsov
      A nested Hyper-V/Windows guest running on top of KVM will use the
      TSC page clocksource in two cases:
      - L0 exposes invariant TSC (CPUID.80000007H:EDX[8]).
      - L0 provides Hyper-V Reenlightenment support (CPUID.40000003H:EAX[13]).
      
      Exposing invariant TSC effectively blocks migration to hosts with
      different TSC frequencies; providing reenlightenment support will be
      needed when we start migrating nested workloads.
      
      Implement rudimentary support for reenlightenment MSRs. For now, these are
      just read/write MSRs with no effect.
      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Reviewed-by: Roman Kagan <rkagan@virtuozzo.com>
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      a2e164e7
  2. 07 March 2018, 1 commit
    • kvm: x86: hyperv: guest->host event signaling via eventfd · faeb7833
      Committed by Roman Kagan
      In Hyper-V, the fast guest->host notification mechanism is the
      SIGNAL_EVENT hypercall, with a single parameter of the connection ID to
      signal.
      
      Currently this hypercall incurs a user exit and requires userspace
      to decode the parameters and trigger the notification in what is
      potentially a different I/O context.
      
      To avoid the costly user exit, process this hypercall and signal the
      corresponding eventfd in KVM, similar to ioeventfd.  The association
      between the connection id and the eventfd is established via the newly
      introduced KVM_HYPERV_EVENTFD ioctl, and maintained in an
      (srcu-protected) IDR.
      Signed-off-by: Roman Kagan <rkagan@virtuozzo.com>
      Reviewed-by: David Hildenbrand <david@redhat.com>
      [asm/hyperv.h changes approved by KY Srinivasan. - Radim]
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      faeb7833
  3. 02 March 2018, 2 commits
  4. 24 February 2018, 1 commit
  5. 01 February 2018, 1 commit
  6. 16 January 2018, 1 commit
  7. 14 December 2017, 3 commits
  8. 06 December 2017, 2 commits
    • KVM: x86: fix APIC page invalidation · b1394e74
      Committed by Radim Krčmář
      Implementation of the unpinned APIC page didn't update the VMCS address
      cache when invalidation was done through range mmu notifiers.
      This became a problem when the page notifier was removed.
      
      Re-introduce the arch-specific helper and call it from ...range_start.
      Reported-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
      Fixes: 38b99173 ("kvm: vmx: Implement set_apic_access_page_addr")
      Fixes: 369ea824 ("mm/rmap: update to new mmu_notifier semantic v2")
      Cc: <stable@vger.kernel.org>
      Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
      Reviewed-by: Andrea Arcangeli <aarcange@redhat.com>
      Tested-by: Wanpeng Li <wanpeng.li@hotmail.com>
      Tested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      b1394e74
    • x86,kvm: move qemu/guest FPU switching out to vcpu_run · f775b13e
      Committed by Rik van Riel
      Currently, every time a VCPU is scheduled out, the host kernel will
      first save the guest FPU/xstate context, then load the qemu userspace
      FPU context, only to then immediately save the qemu userspace FPU
      context back to memory. When scheduling in a VCPU, the same extraneous
      FPU loads and saves are done.
      
      This could be avoided by moving from a model where the guest FPU is
      loaded and stored with preemption disabled, to a model where the
      qemu userspace FPU is swapped out for the guest FPU context for
      the duration of the KVM_RUN ioctl.
      
      This is done under the VCPU mutex, which is also taken when other
      tasks inspect the VCPU FPU context, so the code should already be
      safe for this change. That should come as no surprise, given that
      s390 already has this optimization.
      
      This can fix a bug where KVM calls get_user_pages while owning the
      FPU, and the file system ends up requesting the FPU again:
      
          [258270.527947]  __warn+0xcb/0xf0
          [258270.527948]  warn_slowpath_null+0x1d/0x20
          [258270.527951]  kernel_fpu_disable+0x3f/0x50
          [258270.527953]  __kernel_fpu_begin+0x49/0x100
          [258270.527955]  kernel_fpu_begin+0xe/0x10
          [258270.527958]  crc32c_pcl_intel_update+0x84/0xb0
          [258270.527961]  crypto_shash_update+0x3f/0x110
          [258270.527968]  crc32c+0x63/0x8a [libcrc32c]
          [258270.527975]  dm_bm_checksum+0x1b/0x20 [dm_persistent_data]
          [258270.527978]  node_prepare_for_write+0x44/0x70 [dm_persistent_data]
          [258270.527985]  dm_block_manager_write_callback+0x41/0x50 [dm_persistent_data]
          [258270.527988]  submit_io+0x170/0x1b0 [dm_bufio]
          [258270.527992]  __write_dirty_buffer+0x89/0x90 [dm_bufio]
          [258270.527994]  __make_buffer_clean+0x4f/0x80 [dm_bufio]
          [258270.527996]  __try_evict_buffer+0x42/0x60 [dm_bufio]
          [258270.527998]  dm_bufio_shrink_scan+0xc0/0x130 [dm_bufio]
          [258270.528002]  shrink_slab.part.40+0x1f5/0x420
          [258270.528004]  shrink_node+0x22c/0x320
          [258270.528006]  do_try_to_free_pages+0xf5/0x330
          [258270.528008]  try_to_free_pages+0xe9/0x190
          [258270.528009]  __alloc_pages_slowpath+0x40f/0xba0
          [258270.528011]  __alloc_pages_nodemask+0x209/0x260
          [258270.528014]  alloc_pages_vma+0x1f1/0x250
          [258270.528017]  do_huge_pmd_anonymous_page+0x123/0x660
          [258270.528021]  handle_mm_fault+0xfd3/0x1330
          [258270.528025]  __get_user_pages+0x113/0x640
          [258270.528027]  get_user_pages+0x4f/0x60
          [258270.528063]  __gfn_to_pfn_memslot+0x120/0x3f0 [kvm]
          [258270.528108]  try_async_pf+0x66/0x230 [kvm]
          [258270.528135]  tdp_page_fault+0x130/0x280 [kvm]
          [258270.528149]  kvm_mmu_page_fault+0x60/0x120 [kvm]
          [258270.528158]  handle_ept_violation+0x91/0x170 [kvm_intel]
          [258270.528162]  vmx_handle_exit+0x1ca/0x1400 [kvm_intel]
      
      No performance changes were detected in quick ping-pong tests on
      my 4 socket system, which is expected since an FPU+xstate load is
      on the order of 0.1us, while ping-ponging between CPUs is on the
      order of 20us, and somewhat noisy.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Suggested-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      [Fixed a bug where reset_vcpu called put_fpu without a preceding
       load_fpu, which happened inside the KVM_CREATE_VCPU ioctl. - Radim]
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      f775b13e
  9. 05 December 2017, 6 commits
    • KVM: SVM: Pin guest memory when SEV is active · 1e80fdc0
      Committed by Brijesh Singh
      The SEV memory encryption engine uses a tweak such that two identical
      plaintext pages at different locations will have different
      ciphertexts. So swapping or moving the ciphertext of two pages will
      not result in the plaintext being swapped. Relocating (or migrating)
      the physical backing pages of a SEV guest will therefore require
      additional steps. The current SEV key management spec does not
      provide commands to swap or migrate (move) ciphertext pages. For now,
      we pin the guest memory registered through the
      KVM_MEMORY_ENCRYPT_REG_REGION ioctl.
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: x86@kernel.org
      Cc: kvm@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
      1e80fdc0
    • KVM: SVM: Add support for KVM_SEV_LAUNCH_UPDATE_DATA command · 89c50580
      Committed by Brijesh Singh
      The command is used for encrypting the guest memory region using the VM
      encryption key (VEK) created during KVM_SEV_LAUNCH_START.
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: x86@kernel.org
      Cc: kvm@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Improvements-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      89c50580
    • KVM: SVM: Add support for KVM_SEV_LAUNCH_START command · 59414c98
      Committed by Brijesh Singh
      The KVM_SEV_LAUNCH_START command is used to create a memory encryption
      context within the SEV firmware. In order to do so, the guest owner
      should provide the guest's policy, its public Diffie-Hellman (PDH) key
      and session information. The command implements the LAUNCH_START flow
      defined in SEV spec Section 6.2.
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: x86@kernel.org
      Cc: kvm@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Improvements-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      59414c98
    • KVM: SVM: Add KVM_SEV_INIT command · 1654efcb
      Committed by Brijesh Singh
      The command initializes the SEV platform context and allocates a new ASID
      for this guest from the SEV ASID pool. The firmware must be initialized
      before we issue any guest launch commands to create a new memory encryption
      context.
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: x86@kernel.org
      Cc: kvm@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      1654efcb
    • KVM: Introduce KVM_MEMORY_ENCRYPT_{UN,}REG_REGION ioctl · 69eaedee
      Committed by Brijesh Singh
      If hardware supports memory encryption then the
      KVM_MEMORY_ENCRYPT_REG_REGION and KVM_MEMORY_ENCRYPT_UNREG_REGION
      ioctls can be used by userspace to register/unregister the guest
      memory regions which may contain encrypted data (e.g. guest RAM,
      PCI BARs, SMRAM, etc.).
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: x86@kernel.org
      Cc: kvm@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Improvements-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      69eaedee
    • KVM: Introduce KVM_MEMORY_ENCRYPT_OP ioctl · 5acc5c06
      Committed by Brijesh Singh
      If the hardware supports memory encryption then the
      KVM_MEMORY_ENCRYPT_OP ioctl can be used by qemu to issue
      platform-specific memory encryption commands.
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: x86@kernel.org
      Cc: kvm@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
      Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      5acc5c06
  10. 17 November 2017, 1 commit
  11. 19 October 2017, 1 commit
  12. 12 October 2017, 3 commits
    • KVM: nSVM: fix SMI injection in guest mode · 05cade71
      Committed by Ladi Prosek
      Entering SMM while running in guest mode wasn't working very well because several
      pieces of the vcpu state were left set up for nested operation.
      
      Some of the issues observed:
      
      * L1 was getting unexpected VM exits (using L1 interception controls but running
        in SMM execution environment)
      * MMU was confused (walk_mmu was still set to nested_mmu)
      * INTERCEPT_SMI was not emulated for L1 (KVM never injected SVM_EXIT_SMI)
      
      The Intel SDM actually prescribes that the logical processor "leave
      VMX operation" upon entering SMM (Section 34.14.1, Default Treatment
      of SMI Delivery). AMD doesn't seem to document this, but they provide
      fields in the SMM state-save area to stash the current state of SVM.
      What we need to do is basically get out of guest mode for the
      duration of SMM. All of this is completely transparent to L1, i.e.
      L1 is not given control and no L1-observable state changes.
      
      To avoid code duplication this commit takes advantage of the existing nested
      vmexit and run functionality, perhaps at the cost of efficiency. To get out of
      guest mode, nested_svm_vmexit is called, unchanged. Re-entering is performed using
      enter_svm_guest_mode.
      
      This commit fixes running Windows Server 2016 with Hyper-V enabled in a VM with
      OVMF firmware (OVMF_CODE-need-smm.fd).
      Signed-off-by: Ladi Prosek <lprosek@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      05cade71
    • KVM: x86: introduce ISA specific smi_allowed callback · 72d7b374
      Committed by Ladi Prosek
      Similar to NMI, there may be ISA-specific reasons why an SMI cannot
      be injected into the guest. This commit adds a new smi_allowed
      callback, to be implemented in the following commits.
      Signed-off-by: Ladi Prosek <lprosek@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      72d7b374
    • KVM: x86: introduce ISA specific SMM entry/exit callbacks · 0234bf88
      Committed by Ladi Prosek
      Entering and exiting SMM may require ISA-specific handling under
      certain circumstances. This commit adds two new callbacks with empty
      implementations. Actual functionality will be added in the following
      commits.
      
      * pre_enter_smm() is to be called when injecting an SMM, before any
        SMM-related vcpu state has been changed
      * pre_leave_smm() is to be called when emulating the RSM instruction,
        when the vcpu is in real mode and before any SMM-related vcpu state
        has been restored
      Signed-off-by: Ladi Prosek <lprosek@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      0234bf88
  13. 26 September 2017, 1 commit
    • x86/apic: Sanitize 32/64bit APIC callbacks · 64063505
      Committed by Thomas Gleixner
      The 32-bit and 64-bit implementations of
      default_cpu_present_to_apicid() and
      default_check_phys_apicid_present() are exactly the same, but are
      implemented and located differently.
      
      Move them to common APIC code and get rid of the pointless difference.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Juergen Gross <jgross@suse.com>
      Tested-by: Yu Chen <yu.c.chen@intel.com>
      Acked-by: Juergen Gross <jgross@suse.com>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Alok Kataria <akataria@vmware.com>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Rui Zhang <rui.zhang@intel.com>
      Cc: "K. Y. Srinivasan" <kys@microsoft.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Len Brown <lenb@kernel.org>
      Link: https://lkml.kernel.org/r/20170913213153.757329991@linutronix.de
      64063505
  14. 14 September 2017, 1 commit
  15. 13 September 2017, 1 commit
  16. 01 September 2017, 1 commit
  17. 25 August 2017, 6 commits
  18. 18 August 2017, 1 commit
  19. 10 August 2017, 1 commit
  20. 08 August 2017, 1 commit