1. 14 December 2012 (2 commits)
  2. 06 December 2012 (1 commit)
  3. 02 December 2012 (1 commit)
  4. 01 December 2012 (2 commits)
    • KVM: x86: Emulate IA32_TSC_ADJUST MSR · ba904635
      Committed by Will Auld
      CPUID.7.0.EBX[1]=1 indicates that the IA32_TSC_ADJUST MSR (0x3b) is supported.
      
      The basic design is to emulate the MSR by allowing reads and writes to a
      guest vcpu-specific location that stores the value of the emulated MSR,
      while adding that value to the vmcs tsc_offset. In this way the
      IA32_TSC_ADJUST value will be included in all reads of the TSC MSR,
      whether through rdmsr or rdtsc. This is of course only as long as the
      "use TSC counter offsetting" VM-execution control is enabled, as well as
      the IA32_TSC_ADJUST control.
      
      However, because hardware will only return TSC + IA32_TSC_ADJUST +
      vmcs tsc_offset for a guest process when it does a rdtsc (with the correct
      settings), the value of our virtualized IA32_TSC_ADJUST must be stored in
      one of these three locations. The argument against storing it in the
      actual MSR is performance: the value is likely to be seldom used, while a
      save/restore would be required on every transition. IA32_TSC_ADJUST was
      created as a way to solve some issues with writing the TSC itself, so that
      is not an option either.
      
      The remaining option, defined above as our solution, has the problem of
      returning incorrect vmcs tsc_offset values (unless we intercept and fix
      them, which is not done here) as mentioned above. More problematic,
      however, is that storing the data in the vmcs tsc_offset has a different
      semantic effect on the system than using the actual MSR. This is
      illustrated in the following example:
      
      The hypervisor sets IA32_TSC_ADJUST, then the guest sets it, and a guest
      process performs a rdtsc. In this case the guest process will get
      TSC + IA32_TSC_ADJUST_hypervisor + vmcs tsc_offset, which includes
      IA32_TSC_ADJUST_guest. While the total system semantics change, the
      semantics as seen by the guest do not, and hence this will not cause a
      problem.
      Signed-off-by: Will Auld <will.auld@intel.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      ba904635
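
      A minimal userspace-style sketch of the bookkeeping described above, using
      hypothetical names (tsc_adjust for the per-vcpu MSR value, tsc_offset
      standing in for the vmcs TSC-offset field); it illustrates the idea rather
      than the kernel code:

      #include <stdint.h>
      #include <stdio.h>

      struct vcpu_model {
      	int64_t tsc_adjust;   /* emulated IA32_TSC_ADJUST value      */
      	int64_t tsc_offset;   /* stands in for the vmcs "TSC offset" */
      };

      /* guest wrmsr: remember the value and fold the delta into the offset */
      static void wrmsr_tsc_adjust(struct vcpu_model *v, int64_t data)
      {
      	v->tsc_offset += data - v->tsc_adjust;
      	v->tsc_adjust  = data;
      }

      /* guest rdmsr reads the stored value; rdtsc sees the offset in hardware */
      static int64_t rdmsr_tsc_adjust(const struct vcpu_model *v)
      {
      	return v->tsc_adjust;
      }

      static uint64_t rdtsc_guest(const struct vcpu_model *v, uint64_t host_tsc)
      {
      	return host_tsc + (uint64_t)v->tsc_offset;
      }

      int main(void)
      {
      	struct vcpu_model v = { 0, 0 };

      	wrmsr_tsc_adjust(&v, -1000);
      	printf("IA32_TSC_ADJUST=%lld rdtsc=%llu\n",
      	       (long long)rdmsr_tsc_adjust(&v),
      	       (unsigned long long)rdtsc_guest(&v, 500000));
      	return 0;
      }
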
    • KVM: x86: Add code to track call origin for msr assignment · 8fe8ab46
      Committed by Will Auld
      In order to track who initiated the call (host or guest) to modify an msr
      value, I have changed the function call parameters along the call path.
      The specific change is to add a struct pointer parameter that points to
      (index, data, caller) information rather than passing this information as
      individual parameters.
      
      The initial use for this capability is updating the IA32_TSC_ADJUST msr
      while setting the tsc value. It is anticipated that this capability will
      be useful for other tasks as well.
      Signed-off-by: Will Auld <will.auld@intel.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      8fe8ab46
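
      A sketch of the idea with illustrative types rather than the exact kernel
      definitions: bundle (index, data, caller) into one struct so every
      function on the wrmsr path can tell whether the write came from a host
      ioctl or from the guest:

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      struct msr_data {
      	bool     host_initiated;  /* true: host ioctl, false: guest wrmsr */
      	uint32_t index;           /* MSR number                           */
      	uint64_t data;            /* value being written                  */
      };

      /* consumers can branch on who initiated the write, e.g. only a
       * host-initiated write to the TSC should also update IA32_TSC_ADJUST */
      static int set_msr(const struct msr_data *msr)
      {
      	printf("MSR %#x <- %#llx (%s-initiated)\n", (unsigned)msr->index,
      	       (unsigned long long)msr->data,
      	       msr->host_initiated ? "host" : "guest");
      	return 0;
      }

      int main(void)
      {
      	struct msr_data m = { .host_initiated = true, .index = 0x3b, .data = 0 };

      	return set_msr(&m);
      }
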
  5. 28 November 2012 (7 commits)
  6. 14 November 2012 (3 commits)
  7. 13 November 2012 (1 commit)
    • KVM: x86: invalid opcode oops on SET_SREGS with OSXSAVE bit set (CVE-2012-4461) · 6d1068b3
      Committed by Petr Matousek
      On hosts without XSAVE support, an unprivileged local user can trigger an
      oops similar to the one below by setting the X86_CR4_OSXSAVE bit in the
      guest cr4 register using the KVM_SET_SREGS ioctl and later issuing the
      KVM_RUN ioctl.
      
      invalid opcode: 0000 [#2] SMP
      Modules linked in: tun ip6table_filter ip6_tables ebtable_nat ebtables
      ...
      Pid: 24935, comm: zoog_kvm_monito Tainted: G      D      3.2.0-3-686-pae
      EIP: 0060:[<f8b9550c>] EFLAGS: 00210246 CPU: 0
      EIP is at kvm_arch_vcpu_ioctl_run+0x92a/0xd13 [kvm]
      EAX: 00000001 EBX: 000f387e ECX: 00000000 EDX: 00000000
      ESI: 00000000 EDI: 00000000 EBP: ef5a0060 ESP: d7c63e70
       DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068
      Process zoog_kvm_monito (pid: 24935, ti=d7c62000 task=ed84a0c0
      task.ti=d7c62000)
      Stack:
       00000001 f70a1200 f8b940a9 ef5a0060 00000000 00200202 f8769009 00000000
       ef5a0060 000f387e eda5c020 8722f9c8 00015bae 00000000 ed84a0c0 ed84a0c0
       c12bf02d 0000ae80 ef7f8740 fffffffb f359b740 ef5a0060 f8b85dc1 0000ae80
      Call Trace:
       [<f8b940a9>] ? kvm_arch_vcpu_ioctl_set_sregs+0x2fe/0x308 [kvm]
      ...
       [<c12bfb44>] ? syscall_call+0x7/0xb
      Code: 89 e8 e8 14 ee ff ff ba 00 00 04 00 89 e8 e8 98 48 ff ff 85 c0 74
      1e 83 7d 48 00 75 18 8b 85 08 07 00 00 31 c9 8b 95 0c 07 00 00 <0f> 01
      d1 c7 45 48 01 00 00 00 c7 45 1c 01 00 00 00 0f ae f0 89
      EIP: [<f8b9550c>] kvm_arch_vcpu_ioctl_run+0x92a/0xd13 [kvm] SS:ESP
      0068:d7c63e70
      
      QEMU first retrieves the supported features via KVM_GET_SUPPORTED_CPUID
      and then sets them later. So the guest's X86_FEATURE_XSAVE should be
      masked out on hosts without X86_FEATURE_XSAVE, making kvm_set_cr4 with
      X86_CR4_OSXSAVE fail. Userspaces that allow specifying a guest cpuid with
      X86_FEATURE_XSAVE even on hosts that do not support it might be
      susceptible to this attack from inside the guest as well.
      
      Allow setting the X86_CR4_OSXSAVE bit only if the host has XSAVE support.
      Signed-off-by: Petr Matousek <pmatouse@redhat.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      6d1068b3
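
      A compact model of the check, with simplified plumbing (the CR4 bit value
      is real, the helper and globals are illustrative): reject CR4.OSXSAVE
      unless XSAVE is actually available to the guest, so KVM_SET_SREGS fails
      instead of KVM_RUN oopsing later:

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      #define X86_CR4_OSXSAVE (1ull << 18)

      /* would be derived from the guest cpuid, which QEMU masks by host support */
      static bool guest_cpuid_has_xsave;

      static int set_cr4(uint64_t cr4)
      {
      	if ((cr4 & X86_CR4_OSXSAVE) && !guest_cpuid_has_xsave)
      		return 1;          /* fail the ioctl instead of oopsing later */
      	/* ... commit the new cr4 value ... */
      	return 0;
      }

      int main(void)
      {
      	guest_cpuid_has_xsave = false;
      	printf("set_cr4(OSXSAVE) -> %d (non-zero = rejected)\n",
      	       set_cr4(X86_CR4_OSXSAVE));
      	return 0;
      }
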
  8. 01 November 2012 (1 commit)
    • KVM: x86: fix vcpu->mmio_fragments overflow · 87da7e66
      Committed by Xiao Guangrong
      After commit b3356bf0 (KVM: emulator: optimize "rep ins" handling), the
      pieces of io data can be collected and written to guest memory or MMIO
      together.
      
      Unfortunately, kvm splits the mmio access into 8-byte pieces and stores
      them in vcpu->mmio_fragments. If the guest uses "rep ins" to move large
      amounts of data, this will overflow vcpu->mmio_fragments.
      
      The bug can be exposed by isapc (-M isapc):
      
      [23154.818733] general protection fault: 0000 [#1] SMP DEBUG_PAGEALLOC
      [ ......]
      [23154.858083] Call Trace:
      [23154.859874]  [<ffffffffa04f0e17>] kvm_get_cr8+0x1d/0x28 [kvm]
      [23154.861677]  [<ffffffffa04fa6d4>] kvm_arch_vcpu_ioctl_run+0xcda/0xe45 [kvm]
      [23154.863604]  [<ffffffffa04f5a1a>] ? kvm_arch_vcpu_load+0x17b/0x180 [kvm]
      
      Actually, we can use one mmio_fragment to store a large mmio access and
      then split it when we pass the mmio exit info to userspace. After that,
      we only need two entries to store mmio info for accesses that cross mmio
      pages.
      Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      87da7e66
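
      A toy illustration of the approach, with simplified structures (not the
      kernel's): keep one fragment describing the whole access and carve off at
      most 8 bytes, the size of the kvm_run mmio buffer, on each exit to
      userspace:

      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      struct mmio_fragment {
      	uint64_t gpa;      /* guest physical address of the remainder */
      	uint8_t *data;     /* source/destination buffer               */
      	unsigned len;      /* bytes still to transfer                 */
      };

      /* hand the next chunk (max 8 bytes) to userspace and advance the fragment */
      static unsigned next_mmio_chunk(struct mmio_fragment *frag,
      				uint8_t run_buf[8], uint64_t *run_gpa)
      {
      	unsigned n = frag->len < 8 ? frag->len : 8;

      	*run_gpa = frag->gpa;
      	memcpy(run_buf, frag->data, n);   /* write case shown */
      	frag->gpa  += n;
      	frag->data += n;
      	frag->len  -= n;
      	return n;
      }

      int main(void)
      {
      	uint8_t buf[20] = { 0 }, run[8];
      	struct mmio_fragment f = { 0xa0000, buf, sizeof(buf) };

      	while (f.len) {
      		uint64_t gpa;
      		unsigned n = next_mmio_chunk(&f, run, &gpa);

      		printf("mmio exit: gpa=%#llx len=%u\n",
      		       (unsigned long long)gpa, n);
      	}
      	return 0;
      }
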
  9. 30 October 2012 (1 commit)
  10. 18 October 2012 (1 commit)
  11. 09 October 2012 (2 commits)
  12. 23 September 2012 (2 commits)
    • KVM: x86: Fix guest debug across vcpu INIT reset · c8639010
      Committed by Jan Kiszka
      If we reset a vcpu on INIT, we so far overwrote dr7 as provided by
      KVM_SET_GUEST_DEBUG and also cleared switch_db_regs unconditionally.
      
      Fix this by saving the dr7 used for guest debugging and calculating the
      effective register value as well as switch_db_regs on any potential
      change. This changes the focus of the set_guest_debug vendor op to
      update_db_bp_intercept.
      
      Found while trying to stop on start_secondary.
      Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
      c8639010
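
      A rough model of the recalculation, with illustrative field names and flag
      values: the debugger-provided dr7 is kept separately, and the effective
      dr7 plus switch_db_regs are recomputed whenever either side changes, so an
      INIT reset of the guest-visible dr7 no longer discards host breakpoints:

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      #define DR7_BP_EN_MASK         0xffu  /* local/global enable bits        */
      #define KVM_GUESTDBG_USE_HW_BP 0x2u   /* value here is illustrative only */

      struct dbg_state {
      	uint64_t guest_dr7;       /* dr7 as written by the guest / INIT    */
      	uint64_t guest_debug_dr7; /* dr7 requested via KVM_SET_GUEST_DEBUG */
      	unsigned guest_debug;     /* KVM_GUESTDBG_* flags                  */
      	uint64_t eff_dr7;         /* value actually programmed             */
      	bool     switch_db_regs;
      };

      static void recalc_dr7(struct dbg_state *s)
      {
      	s->eff_dr7 = (s->guest_debug & KVM_GUESTDBG_USE_HW_BP)
      			? s->guest_debug_dr7 : s->guest_dr7;
      	s->switch_db_regs = (s->eff_dr7 & DR7_BP_EN_MASK) != 0;
      }

      int main(void)
      {
      	struct dbg_state s = { .guest_debug_dr7 = 0x401,
      			       .guest_debug = KVM_GUESTDBG_USE_HW_BP };

      	s.guest_dr7 = 0;          /* INIT reset of the guest-visible dr7 */
      	recalc_dr7(&s);
      	printf("eff_dr7=%#llx switch_db_regs=%d\n",
      	       (unsigned long long)s.eff_dr7, s.switch_db_regs);
      	return 0;
      }
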
    • KVM: Add resampling irqfds for level triggered interrupts · 7a84428a
      Committed by Alex Williamson
      To emulate level triggered interrupts, add a resample option to
      KVM_IRQFD.  When specified, a new resamplefd is provided that notifies
      the user when the irqchip has been resampled by the VM.  This may, for
      instance, indicate an EOI.  Also in this mode, posting of an interrupt
      through an irqfd only asserts the interrupt.  On resampling, the
      interrupt is automatically de-asserted prior to user notification.
      This enables level triggered interrupts to be posted and re-enabled
      from vfio with no userspace intervention.
      
      All resampling irqfds can make use of a single irq source ID, so we
      reserve a new one for this interface.
      Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
      7a84428a
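
      A userspace-side sketch of how the new interface is used, assuming a
      <linux/kvm.h> that carries the resample support added here; error handling
      is omitted for brevity:

      #include <sys/eventfd.h>
      #include <sys/ioctl.h>
      #include <linux/kvm.h>

      /* Register an irqfd that only asserts 'gsi'; the resamplefd becomes
       * readable after the guest EOIs (the line is de-asserted first), so the
       * caller can re-check its device and re-post if it is still pending. */
      int register_resample_irqfd(int vm_fd, unsigned gsi,
      			    int *trigger_fd, int *resample_fd)
      {
      	struct kvm_irqfd irqfd = { 0 };

      	*trigger_fd  = eventfd(0, 0);   /* write 1 here to assert the irq */
      	*resample_fd = eventfd(0, 0);   /* readable after the guest EOI   */

      	irqfd.fd         = *trigger_fd;
      	irqfd.gsi        = gsi;
      	irqfd.flags      = KVM_IRQFD_FLAG_RESAMPLE;
      	irqfd.resamplefd = *resample_fd;

      	return ioctl(vm_fd, KVM_IRQFD, &irqfd);
      }
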
  13. 22 September 2012 (1 commit)
    • x86, kvm: fix kvm's usage of kernel_fpu_begin/end() · b1a74bf8
      Committed by Suresh Siddha
      Preemption is disabled between kernel_fpu_begin/end(), so it is not a good
      idea to use these routines in kvm_load/put_guest_fpu(), which can be very
      far apart.
      
      The kvm_load/put_guest_fpu() routines are already called with preemption
      disabled, and KVM already uses the preempt notifier to save the guest fpu
      state using kvm_put_guest_fpu().
      
      So introduce __kernel_fpu_begin/end() routines which don't touch
      preemption, and use them instead of kernel_fpu_begin/end() for KVM's model
      of saving/restoring guest FPU state.
      
      Also, with this change (and with the eagerFPU model), fix the host cr0.TS
      vm-exit state in the case of VMX. In the eagerFPU case, host cr0.TS is
      always clear, so there is no need to worry about it. For the traditional
      lazyFPU restore case, change the cr0.TS bit for the host state during
      vm-exit to be always clear; the cr0.TS bit is then set in
      __vmx_load_host_state() when the FPU state (guest FPU or the host task's
      FPU) is not active. This ensures that the host/guest FPU state is properly
      saved and restored during context switch, and that interrupts (using
      irq_fpu_usable()) do not stomp on the active FPU state.
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Link: http://lkml.kernel.org/r/1348164109.26695.338.camel@sbsiddha-desk.sc.intel.com
      Cc: Avi Kivity <avi@redhat.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      b1a74bf8
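
      The shape of the split, sketched with stand-in names rather than the
      actual implementation: the bare __ variants do only the FPU bookkeeping,
      the public wrappers add the preemption handling, and KVM, which already
      runs with preemption disabled at these points, calls the bare variants:

      /* illustrative stand-ins; the real routines live in the x86 FPU code */
      static void __kernel_fpu_begin_sketch(void)
      {
      	/* save the current owner's FPU state so the kernel can use the FPU */
      }

      static void __kernel_fpu_end_sketch(void)
      {
      	/* restore, or mark the state for lazy restore on return to the task */
      }

      static void kernel_fpu_begin_sketch(void)
      {
      	/* preempt_disable() would go here in the public wrapper */
      	__kernel_fpu_begin_sketch();
      }

      static void kernel_fpu_end_sketch(void)
      {
      	__kernel_fpu_end_sketch();
      	/* preempt_enable() would go here in the public wrapper */
      }

      /* kvm_load/put_guest_fpu() run under the preempt notifier, i.e. with
       * preemption already disabled, so they use the bare variants */
      static void kvm_load_guest_fpu_sketch(void)
      {
      	__kernel_fpu_begin_sketch();
      	/* restore the guest xsave/fxsave image */
      }

      static void kvm_put_guest_fpu_sketch(void)
      {
      	/* save the guest FPU state */
      	__kernel_fpu_end_sketch();
      }

      int main(void)
      {
      	kvm_load_guest_fpu_sketch();
      	kvm_put_guest_fpu_sketch();
      	kernel_fpu_begin_sketch();
      	kernel_fpu_end_sketch();
      	return 0;
      }
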
  14. 20 September 2012 (2 commits)
    • KVM: optimize apic interrupt delivery · 1e08ec4a
      Committed by Gleb Natapov
      Most interrupts are delivered to only one vcpu. Use pre-built tables to
      find the interrupt destination instead of looping through all vcpus. In
      logical mode, loop only through the vcpus in the logical cluster the irq
      is sent to.
      Signed-off-by: Gleb Natapov <gleb@redhat.com>
      Acked-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
      1e08ec4a
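
      A toy model of the precomputed tables (sizes and layout are illustrative,
      not the kernel's): physical destinations become a direct array lookup by
      APIC ID, and cluster-logical destinations index a per-cluster array with
      the low mask bits, so delivery no longer walks every vcpu:

      #include <stddef.h>
      #include <stdint.h>
      #include <stdio.h>

      #define NR_APIC_IDS 256
      #define NR_CLUSTERS 16
      #define CLUSTER_SZ  4      /* xAPIC cluster mode: 4-bit mask per cluster */

      struct vcpu { int id; };

      struct apic_map {
      	struct vcpu *phys_map[NR_APIC_IDS];               /* APIC ID -> vcpu */
      	struct vcpu *logical_map[NR_CLUSTERS][CLUSTER_SZ];
      };

      /* Fill 'out' with destination vcpus for one interrupt; no global loop. */
      static size_t pick_destinations(const struct apic_map *map, int logical,
      				uint8_t dest, struct vcpu **out)
      {
      	size_t n = 0;

      	if (!logical) {
      		if (map->phys_map[dest])
      			out[n++] = map->phys_map[dest];
      		return n;
      	}

      	/* cluster logical mode: high nibble = cluster, low nibble = mask */
      	for (int bit = 0; bit < CLUSTER_SZ; bit++)
      		if ((dest & (1u << bit)) && map->logical_map[dest >> 4][bit])
      			out[n++] = map->logical_map[dest >> 4][bit];
      	return n;
      }

      int main(void)
      {
      	static struct apic_map map;               /* zero-initialized */
      	struct vcpu cpu0 = { 0 }, cpu1 = { 1 };
      	struct vcpu *dst[8];

      	map.phys_map[1]       = &cpu1;
      	map.logical_map[0][0] = &cpu0;            /* cluster 0, mask bit 0 */

      	printf("physical dest 1   -> %zu vcpu(s)\n",
      	       pick_destinations(&map, 0, 1, dst));
      	printf("logical dest 0x01 -> %zu vcpu(s)\n",
      	       pick_destinations(&map, 1, 0x01, dst));
      	return 0;
      }
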
    • KVM: MMU: Optimize pte permission checks · 97d64b78
      Committed by Avi Kivity
      The walk_addr_generic() permission checks are a maze of branchy code that
      is performed four times per lookup. The result depends on the type of
      access, efer.nxe, cr0.wp, cr4.smep, and, in the near future, cr4.smap.
      
      Optimize this away by precalculating all variants and storing them in a
      bitmap.  The bitmap is recalculated when rarely-changing variables change
      (cr0, cr4) and is indexed by the often-changing variables (page fault error
      code, pte access permissions).
      
      The permission check is moved to the end of the loop, otherwise an SMEP
      fault could be reported as a false positive, when PDE.U=1 but PTE.U=0.
      Noted by Xiao Guangrong.
      
      The result is short, branch-free code.
      Reviewed-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
      97d64b78
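
      A toy version of the precalculation (the encoding is simplified; the
      kernel packs the result more tightly): the table is rebuilt when the
      rarely-changing state changes and is indexed by the page-fault error code
      and the accumulated pte access bits, so the per-iteration check becomes a
      single table read:

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      #define PFERR_WRITE (1u << 1)
      #define PFERR_USER  (1u << 2)
      #define PFERR_FETCH (1u << 4)

      #define ACC_WRITE 1u
      #define ACC_USER  2u
      #define ACC_EXEC  4u

      static uint8_t fault_map[32][8];      /* [pfec][pte_access] -> 1 = fault */

      /* called when rarely-changing state (cr0.wp, cr4.smep, efer.nxe) changes */
      static void update_permission_bitmap(bool cr0_wp, bool cr4_smep)
      {
      	for (unsigned pfec = 0; pfec < 32; pfec++)
      		for (unsigned acc = 0; acc < 8; acc++) {
      			bool w = acc & ACC_WRITE, u = acc & ACC_USER, x = acc & ACC_EXEC;
      			bool fault = false;

      			if ((pfec & PFERR_WRITE) && !w &&
      			    (cr0_wp || (pfec & PFERR_USER)))
      				fault = true;   /* write to a read-only mapping   */
      			if ((pfec & PFERR_USER) && !u)
      				fault = true;   /* user access to supervisor page */
      			if ((pfec & PFERR_FETCH) && !x)
      				fault = true;   /* fetch from a no-execute page   */
      			if (cr4_smep && (pfec & PFERR_FETCH) &&
      			    !(pfec & PFERR_USER) && u)
      				fault = true;   /* SMEP: kernel fetch, user page  */
      			fault_map[pfec][acc] = fault;
      		}
      }

      /* the hot path: one lookup, no branches on access type */
      static bool permission_fault(unsigned pfec, unsigned pte_access)
      {
      	return fault_map[pfec & 31][pte_access & 7];
      }

      int main(void)
      {
      	update_permission_bitmap(true, true);
      	printf("user write to read-only page faults: %d\n",
      	       permission_fault(PFERR_WRITE | PFERR_USER, ACC_USER));
      	return 0;
      }
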
  15. 19 September 2012 (1 commit)
  16. 18 September 2012 (1 commit)
    • KVM: make processes waiting on vcpu mutex killable · 9fc77441
      Committed by Michael S. Tsirkin
      The vcpu mutex can be held for an unlimited time, so taking it with
      mutex_lock on an ioctl is wrong: one process could be passed a vcpu fd and
      call this ioctl on the vcpu used by another process; it will then be
      unkillable until the owner exits.
      
      Call mutex_lock_killable instead and return its status. Note:
      mutex_lock_interruptible would be even nicer, but I am not sure all users
      are prepared to handle EINTR from these ioctls; they might misinterpret it
      as an error.
      
      Cleanup paths expect a vcpu that can't be used by any userspace, so there
      the lock will always succeed; catch bugs by calling BUG_ON.
      
      Catch callers that don't check the return value by adding __must_check.
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      9fc77441
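
      The resulting pattern, sketched with kernel-style names over illustrative
      scaffolding so it builds as a standalone program (the real mutex and
      signal handling obviously live in the kernel):

      #include <errno.h>
      #include <stdio.h>

      /* stand-in for the kernel's mutex; pretend a fatal signal is pending */
      struct mutex { int dummy; };
      static int fatal_signal_is_pending = 1;

      static int mutex_lock_killable(struct mutex *m)
      {
      	(void)m;
      	return fatal_signal_is_pending ? -EINTR : 0;
      }
      static void mutex_unlock(struct mutex *m) { (void)m; }

      struct vcpu { struct mutex mutex; };

      /* __must_check in the kernel expands to this attribute */
      static int vcpu_load(struct vcpu *v) __attribute__((warn_unused_result));

      /* vcpu_load() now propagates failure instead of blocking unkillably */
      static int vcpu_load(struct vcpu *v)
      {
      	if (mutex_lock_killable(&v->mutex))
      		return -EINTR;
      	/* ... make this the current vcpu ... */
      	return 0;
      }

      static long vcpu_ioctl(struct vcpu *v)
      {
      	int r = vcpu_load(v);          /* callers are forced to check */

      	if (r)
      		return r;              /* the process remains killable */
      	/* ... handle the ioctl ... */
      	mutex_unlock(&v->mutex);
      	return 0;
      }

      int main(void)
      {
      	struct vcpu v = { { 0 } };

      	printf("ioctl while a fatal signal is pending -> %ld\n", vcpu_ioctl(&v));
      	return 0;
      }
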
  17. 10 September 2012 (1 commit)
    • KVM: fix error paths for failed gfn_to_page() calls · 4484141a
      Committed by Xiao Guangrong
      This bug was triggered:
      [ 4220.198458] BUG: unable to handle kernel paging request at fffffffffffffffe
      [ 4220.203907] IP: [<ffffffff81104d85>] put_page+0xf/0x34
      ......
      [ 4220.237326] Call Trace:
      [ 4220.237361]  [<ffffffffa03830d0>] kvm_arch_destroy_vm+0xf9/0x101 [kvm]
      [ 4220.237382]  [<ffffffffa036fe53>] kvm_put_kvm+0xcc/0x127 [kvm]
      [ 4220.237401]  [<ffffffffa03702bc>] kvm_vcpu_release+0x18/0x1c [kvm]
      [ 4220.237407]  [<ffffffff81145425>] __fput+0x111/0x1ed
      [ 4220.237411]  [<ffffffff8114550f>] ____fput+0xe/0x10
      [ 4220.237418]  [<ffffffff81063511>] task_work_run+0x5d/0x88
      [ 4220.237424]  [<ffffffff8104c3f7>] do_exit+0x2bf/0x7ca
      
      The test case:
      
      /* headers assumed for a standalone build of the test */
      #include <stdio.h>
      #include <stdlib.h>
      #include <fcntl.h>
      #include <pthread.h>
      #include <sys/ioctl.h>
      #include <linux/kvm.h>
      
      #define die(fmt, args...)	do {		\
      	printf(fmt, ##args);		\
      	exit(-1);} while (0)
      
      static int create_vm(void)
      {
      	int sys_fd, vm_fd;
      
      	sys_fd = open("/dev/kvm", O_RDWR);
      	if (sys_fd < 0)
      		die("open /dev/kvm fail.\n");
      
      	vm_fd = ioctl(sys_fd, KVM_CREATE_VM, 0);
      	if (vm_fd < 0)
      		die("KVM_CREATE_VM fail.\n");
      
      	return vm_fd;
      }
      
      static int create_vcpu(int vm_fd)
      {
      	int vcpu_fd;
      
      	vcpu_fd = ioctl(vm_fd, KVM_CREATE_VCPU, 0);
      	if (vcpu_fd < 0)
      		die("KVM_CREATE_VCPU ioctl.\n");
      	printf("Create vcpu.\n");
      	return vcpu_fd;
      }
      
      static void *vcpu_thread(void *arg)
      {
      	int vm_fd = (int)(long)arg;
      
      	create_vcpu(vm_fd);
      	return NULL;
      }
      
      int main(int argc, char *argv[])
      {
      	pthread_t thread;
      	int vm_fd;
      
      	(void)argc;
      	(void)argv;
      
      	vm_fd = create_vm();
      	pthread_create(&thread, NULL, vcpu_thread, (void *)(long)vm_fd);
      	printf("Exit.\n");
      	return 0;
      }
      
      It is caused by releasing kvm->arch.ept_identity_map_addr, which is an
      error page.
      
      When the parent thread exits, it sends a KILL signal to the vcpu thread,
      which stops the vcpu thread from faulting pages in and potentially from
      allocating memory, so gfn_to_pfn/gfn_to_page may fail at this point.
      
      Fixed by checking the page before it is used.
      Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
      4484141a
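
      The shape of the fix, sketched with the kernel's function names but
      illustrative scaffolding (it compiles on its own but is not the actual
      kernel source): treat the page returned by gfn_to_page() as possibly an
      error page and bail out instead of caching it:

      #include <errno.h>
      #include <stdbool.h>

      struct page;
      struct kvm { struct page *ept_identity_pagetable; };

      /* provided by KVM; the vcpu thread's lookups can fail once the task
       * has a fatal signal pending, as in the test case above */
      extern struct page *gfn_to_page(struct kvm *kvm, unsigned long gfn);
      extern bool is_error_page(struct page *page);

      int alloc_identity_pagetable_sketch(struct kvm *kvm, unsigned long gfn)
      {
      	struct page *page = gfn_to_page(kvm, gfn);

      	if (is_error_page(page))
      		return -EFAULT;        /* do not store or put an error page */

      	kvm->ept_identity_pagetable = page;
      	return 0;
      }
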
  18. 06 September 2012 (4 commits)
  19. 05 September 2012 (3 commits)
  20. 31 August 2012 (1 commit)
  21. 28 August 2012 (2 commits)