1. 14 December 2017 (6 commits)
  2. 06 December 2017 (2 commits)
  3. 28 November 2017 (2 commits)
    • KVM: VMX: Fix vmx->nested freeing when no SMI handler · b7455825
      Authored by Wanpeng Li
      Reported by syzkaller:
      
         ------------[ cut here ]------------
         WARNING: CPU: 5 PID: 2939 at arch/x86/kvm/vmx.c:3844 free_loaded_vmcs+0x77/0x80 [kvm_intel]
         CPU: 5 PID: 2939 Comm: repro Not tainted 4.14.0+ #26
         RIP: 0010:free_loaded_vmcs+0x77/0x80 [kvm_intel]
         Call Trace:
          vmx_free_vcpu+0xda/0x130 [kvm_intel]
          kvm_arch_destroy_vm+0x192/0x290 [kvm]
          kvm_put_kvm+0x262/0x560 [kvm]
          kvm_vm_release+0x2c/0x30 [kvm]
          __fput+0x190/0x370
          task_work_run+0xa1/0xd0
          do_exit+0x4d2/0x13e0
          do_group_exit+0x89/0x140
          get_signal+0x318/0xb80
          do_signal+0x8c/0xb40
          exit_to_usermode_loop+0xe4/0x140
          syscall_return_slowpath+0x206/0x230
          entry_SYSCALL_64_fastpath+0x98/0x9a
      
      The syzkaller testcase executes VMXON/VMLAUNCH instructions, so the
      vmx->nested state is populated; it also issues the KVM_SMI ioctl.
      However, the testcase is just a simple C program and is not launched
      by something like SeaBIOS that implements an SMI handler. Commit
      05cade71 (KVM: nSVM: fix SMI injection in guest mode) leaves guest
      mode and sets nested.vmxon to false for the duration of SMM,
      following SDM 34.14.1 ("leave VMX operation" upon entering SMM). We
      can't allocate/free the vmx->nested state each time SMM is
      entered/exited since that would add overhead, so vmx_pre_enter_smm()
      marks nested.vmxon false even though the vmx->nested state is still
      populated. The expectation is that em_rsm() will set nested.vmxon
      back to true; however, neither the SMI handler nor RSM ever runs in
      this scenario because nothing like SeaBIOS is present. free_nested()
      then fails to free the vmx->nested state, since vmx->nested.vmxon is
      false, which results in the above warning.

      This patch fixes it by also handling the no-SMI-handler case.
      Luckily, vmx->nested.smm.vmxon is set from the value of
      vmx->nested.vmxon in vmx_pre_enter_smm(), so we can take advantage
      of it and free the vmx->nested state when L1 goes down.
      Reported-by: Dmitry Vyukov <dvyukov@google.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Reviewed-by: Liran Alon <liran.alon@oracle.com>
      Fixes: 05cade71 (KVM: nSVM: fix SMI injection in guest mode)
      Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: VMX: Fix rflags cache during vCPU reset · c37c2873
      Authored by Wanpeng Li
      Reported by syzkaller:
      
         *** Guest State ***
         CR0: actual=0x0000000080010031, shadow=0x0000000060000010, gh_mask=fffffffffffffff7
         CR4: actual=0x0000000000002061, shadow=0x0000000000000000, gh_mask=ffffffffffffe8f1
         CR3 = 0x000000002081e000
         RSP = 0x000000000000fffa  RIP = 0x0000000000000000
         RFLAGS=0x00023000         DR7 = 0x00000000000000
                ^^^^^^^^^^
         ------------[ cut here ]------------
         WARNING: CPU: 6 PID: 24431 at /home/kernel/linux/arch/x86/kvm//x86.c:7302 kvm_arch_vcpu_ioctl_run+0x651/0x2ea0 [kvm]
         CPU: 6 PID: 24431 Comm: reprotest Tainted: G        W  OE   4.14.0+ #26
         RIP: 0010:kvm_arch_vcpu_ioctl_run+0x651/0x2ea0 [kvm]
         RSP: 0018:ffff880291d179e0 EFLAGS: 00010202
         Call Trace:
          kvm_vcpu_ioctl+0x479/0x880 [kvm]
          do_vfs_ioctl+0x142/0x9a0
          SyS_ioctl+0x74/0x80
          entry_SYSCALL_64_fastpath+0x23/0x9a
      
      The failed vmentry is triggered by the following beautified testcase:
      
          #include <unistd.h>
          #include <sys/syscall.h>
          #include <string.h>
          #include <stdint.h>
          #include <linux/kvm.h>
          #include <fcntl.h>
          #include <sys/ioctl.h>
      
          long r[5];
          int main()
          {
              struct kvm_debugregs dr = { 0 };
      
              r[2] = open("/dev/kvm", O_RDONLY);
              r[3] = ioctl(r[2], KVM_CREATE_VM, 0);
              r[4] = ioctl(r[3], KVM_CREATE_VCPU, 7);
              struct kvm_guest_debug debug = {
                      .control = 0xf0403,
                      .arch = {
                              .debugreg[6] = 0x2,
                              .debugreg[7] = 0x2
                      }
              };
              ioctl(r[4], KVM_SET_GUEST_DEBUG, &debug);
              ioctl(r[4], KVM_RUN, 0);
          }
      
      The testcase tries to set up the processor-specific debug registers
      and configure the vCPU for handling guest debug events through
      KVM_SET_GUEST_DEBUG. The KVM_SET_GUEST_DEBUG ioctl gets and sets
      rflags in order to set the TF bit if single-stepping is requested.
      During vCPU reset, all register caches are marked available and the
      GUEST_RFLAGS VMCS field is reset to 0x2; however, the cached rflags
      value itself is not reset. vmx_get_rflags() therefore returns the
      stale cached value, which is 0 after boot, because the cache is
      marked available. Vmentry fails if reserved bit 1 of rflags is 0.

      This patch fixes it by resetting both the GUEST_RFLAGS VMCS field
      and its cache to 0x2 during vCPU reset.
      Reported-by: Dmitry Vyukov <dvyukov@google.com>
      Tested-by: Dmitry Vyukov <dvyukov@google.com>
      Reviewed-by: David Hildenbrand <david@redhat.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Nadav Amit <nadav.amit@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  4. 17 November 2017 (9 commits)
  5. 03 November 2017 (2 commits)
  6. 21 October 2017 (2 commits)
    • KVM: VMX: Fix VPID capability detection · 61f1dd90
      Authored by Wanpeng Li
      In my setup, EPT is not exposed to L1 but the VPID capability is,
      and it can be observed with the vmxcap tool in L1:
      INVVPID supported                        yes
      Individual-address INVVPID               yes
      Single-context INVVPID                   yes
      All-context INVVPID                      yes
      Single-context-retaining-globals INVVPID yes
      
      However, the vpid module parameter observed in L1 is always N: the
      cpu_has_vmx_invvpid() check in L1's KVM fails because
      vmx_capability.vpid is 0, and it is never read from the MSR since
      EPT is not exposed.

      VPID can be used to tag linear mappings even when EPT is not
      enabled. However, the current logic only detects the VPID capability
      when EPT is enabled. This patch fixes it.
      
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Jim Mattson <jmattson@google.com>
      Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
    • KVM: nVMX: Fix EPT switching advertising · 575b3a2c
      Authored by Wanpeng Li
      The vmxcap tool reports "EPTP Switching   yes" even when EPT is not
      exposed to L1.

      EPT switching is advertised unconditionally because it is emulated.
      However, it should be treated as an extended feature of EPT and not
      be advertised when EPT itself is not exposed. This patch fixes it.
      Reviewed-by: David Hildenbrand <david@redhat.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Jim Mattson <jmattson@google.com>
      Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
  7. 19 October 2017 (1 commit)
  8. 12 October 2017 (16 commits)