1. 01 Dec 2012, 1 commit
    • KVM: x86: Add code to track call origin for msr assignment · 8fe8ab46
      Authored by Will Auld
      In order to track who initiated the call (host or guest) to modify an MSR
      value, I have changed the function call parameters along the call path. The
      specific change is to add a struct pointer parameter that points to (index,
      data, caller) information, rather than passing this information as
      individual parameters.
      
      The initial use of this capability is updating the IA32_TSC_ADJUST MSR while
      setting the TSC value. It is anticipated that this capability will be useful
      for other tasks.
      Signed-off-by: Will Auld <will.auld@intel.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
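
      As an aside, here is a minimal standalone C sketch of the idea, with the
      (index, data, caller) bundle modeled on a struct like the kernel's msr_data;
      the set_msr() function below is a made-up stand-in for the real call path:

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      /* One parameter bundles what used to be passed individually. */
      struct msr_data {
              bool host_initiated;    /* true: host ioctl wrote the MSR; false: guest WRMSR */
              uint32_t index;         /* MSR number, e.g. 0x3b for IA32_TSC_ADJUST */
              uint64_t data;          /* value being written */
      };

      /* Hypothetical sink at the end of the call path. */
      static int set_msr(struct msr_data *msr)
      {
              printf("%s write: MSR 0x%x = 0x%llx\n",
                     msr->host_initiated ? "host" : "guest",
                     msr->index, (unsigned long long)msr->data);
              /* A guest-initiated IA32_TSC_ADJUST write would also adjust the TSC here. */
              return 0;
      }

      int main(void)
      {
              struct msr_data msr = { .host_initiated = false, .index = 0x3b, .data = 0 };
              return set_msr(&msr);
      }
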
  2. 28 Nov 2012, 2 commits
  3. 22 Oct 2012, 1 commit
  4. 23 Sep 2012, 1 commit
    • KVM: x86: Fix guest debug across vcpu INIT reset · c8639010
      Authored by Jan Kiszka
      If we reset a vcpu on INIT, we have so far overwritten dr7 as provided by
      KVM_SET_GUEST_DEBUG, and we have also cleared switch_db_regs unconditionally.

      Fix this by saving the dr7 used for guest debugging and recalculating the
      effective register value, as well as switch_db_regs, on any potential
      change. This shifts the focus of the set_guest_debug vendor op to updating
      the debug/breakpoint intercepts (update_db_bp_intercept).
      
      Found while trying to stop on start_secondary.
      Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
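
      A rough standalone model of the recalculation described above (names and
      the enable-bit mask are illustrative, not the patch itself): the saved
      host-debug dr7 takes precedence over the guest's value, and switch_db_regs
      is derived from the result.

      #include <stdbool.h>
      #include <stdint.h>

      #define DR7_BP_EN_MASK 0xffULL   /* L0-L3/G0-G3 breakpoint enable bits */

      /* Recalculate on any change to guest dr7 or to the host debug state. */
      void recalc_debug_state(uint64_t guest_dr7, uint64_t host_debug_dr7,
                              bool host_uses_hw_bp,
                              uint64_t *effective_dr7, bool *switch_db_regs)
      {
              /* Host hardware breakpoints (KVM_SET_GUEST_DEBUG) override guest dr7. */
              *effective_dr7 = host_uses_hw_bp ? host_debug_dr7 : guest_dr7;
              /* Swap debug registers on entry/exit only if some breakpoint is enabled. */
              *switch_db_regs = (*effective_dr7 & DR7_BP_EN_MASK) != 0;
      }
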
  5. 17 Sep 2012, 1 commit
  6. 05 Sep 2012, 1 commit
  7. 06 Aug 2012, 1 commit
  8. 21 Jul 2012, 1 commit
  9. 12 Jul 2012, 1 commit
    • KVM: VMX: Implement PCID/INVPCID for guests with EPT · ad756a16
      Authored by Mao, Junjie
      This patch handles PCID/INVPCID for guests.
      
      Process-context identifiers (PCIDs) are a facility by which a logical
      processor may cache information for multiple linear-address spaces, so that
      the processor can retain cached information when software switches to a
      different linear-address space. Refer to Section 4.10.1 of the Intel 64 and
      IA-32 Architectures Software Developer's Manual, Volume 3A, for details.
      
      For guests with EPT, the PCID feature is enabled and INVPCID behaves as it
      would when running natively.
      For guests without EPT, the PCID feature is disabled and INVPCID triggers a
      #UD.
      Signed-off-by: Junjie Mao <junjie.mao@intel.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
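
      A simplified sketch of the resulting policy, in standalone C with invented
      names (the real logic lives in the VMX CPUID and execution-control setup):

      #include <stdbool.h>

      struct guest_caps {
              bool ept_enabled;       /* guest runs with EPT */
              bool cpuid_has_pcid;    /* CPUID.01H:ECX.PCID advertised to the guest */
              bool cpuid_has_invpcid; /* CPUID.(EAX=07H,ECX=0):EBX.INVPCID advertised */
      };

      /* CR4.PCIDE may only be set when PCID is usable, i.e. with EPT. */
      bool guest_may_set_cr4_pcide(const struct guest_caps *c)
      {
              return c->ept_enabled && c->cpuid_has_pcid;
      }

      /* With EPT, INVPCID runs natively (exec control enabled); otherwise it #UDs. */
      bool invpcid_runs_natively(const struct guest_caps *c)
      {
              return c->ept_enabled && c->cpuid_has_invpcid;
      }
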
  10. 06 Jun 2012, 1 commit
  11. 17 Apr 2012, 1 commit
  12. 08 Apr 2012, 1 commit
  13. 08 Mar 2012, 5 commits
  14. 05 Mar 2012, 2 commits
  15. 02 Mar 2012, 1 commit
  16. 27 Dec 2011, 1 commit
  17. 30 Oct 2011, 1 commit
    • KVM: SVM: Keep intercepting task switching with NPT enabled · f1c1da2b
      Authored by Jan Kiszka
      AMD processors apparently have a bug in the hardware task switching support
      when NPT is enabled. If the task switch triggers an NPF, we can get a wrong
      EXITINTINFO along with that fault. On resume, spurious exceptions may then
      be injected into the guest.
      
      We were able to reproduce this bug when our guest triggered a #SS and the
      handler was supposed to run in a separate task whose stack pages had not yet
      been touched.
      
      Work around the issue by continuing to emulate task switches even in
      NPT mode.
      Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
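
      Schematically, the workaround boils down to a single policy decision; a
      tiny sketch with an invented helper name:

      #include <stdbool.h>

      /* Should the task-switch intercept ever be dropped when NPT is enabled? */
      bool may_drop_task_switch_intercept(bool npt_enabled)
      {
              /*
               * No: a hardware task switch that also takes a nested page fault
               * can deliver a wrong EXITINTINFO, so task switches stay
               * intercepted and emulated regardless of NPT.
               */
              (void)npt_enabled;
              return false;
      }
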
  18. 26 Sep 2011, 6 commits
    • KVM: x86: Move kvm_trace_exit into atomic vmexit section · 1e2b1dd7
      Authored by Jan Kiszka
      This avoids events that caused the vmexit being recorded in the trace before
      the actual exit reason.
      Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
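
      A schematic of the ordering in standalone C (function names are invented;
      the point is only that the exit-reason trace fires in the IRQs-off section,
      before any event handling that emits its own traces):

      #include <stdint.h>
      #include <stdio.h>

      static void trace_exit(uint32_t reason)   { printf("kvm_exit reason=%u\n", reason); }
      static void trace_irq_injected(int vec)   { printf("kvm_inj_virq vec=%d\n", vec); }

      /* vmexit path: record the exit reason first, while still atomic. */
      void handle_vmexit(uint32_t exit_reason, int pending_irq)
      {
              trace_exit(exit_reason);                  /* atomic (IRQs-off) section */
              if (pending_irq >= 0)
                      trace_irq_injected(pending_irq);  /* later handling, traced after */
      }
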
    • KVM: SVM: Fix TSC MSR read in nested SVM · 45133eca
      Authored by Nadav Har'El
      When the TSC MSR is read by an L2 guest (when L1 allowed this MSR to be
      read without exit), we need to return L2's notion of the TSC, not L1's.
      
      The current code incorrectly returned the L1 TSC, because svm_get_msr() was
      also used in x86.c, where L1 semantics were assumed. Now that those call
      sites use the new svm_read_l1_tsc(), the MSR read can be fixed.
      Signed-off-by: Nadav Har'El <nyh@il.ibm.com>
      Tested-by: Joerg Roedel <joerg.roedel@amd.com>
      Acked-by: Joerg Roedel <joerg.roedel@amd.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
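
      A standalone model of the distinction (field names invented; in the real
      code the offsets live in the active VMCB and in the saved L1 state):

      #include <stdint.h>

      struct svm_tsc_state {
              uint64_t active_tsc_offset;  /* offset in the currently running VMCB (L1 or L2) */
              uint64_t l1_tsc_offset;      /* offset belonging to L1's VMCB */
      };

      static uint64_t host_tsc(void) { return 0; /* stand-in for reading the host TSC */ }

      /* RDMSR(IA32_TSC) emulation: an L2 read must see L2's notion of the TSC. */
      uint64_t emulated_tsc_msr_read(const struct svm_tsc_state *s)
      {
              return host_tsc() + s->active_tsc_offset;
      }

      /* Host-side code that explicitly wants L1's TSC uses the L1 offset instead. */
      uint64_t read_l1_tsc(const struct svm_tsc_state *s)
      {
              return host_tsc() + s->l1_tsc_offset;
      }
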
    • KVM: L1 TSC handling · d5c1785d
      Authored by Nadav Har'El
      KVM assumed in several places that reading the TSC MSR returns the value for
      L1. This is incorrect, because when L2 is running, the correct TSC read exit
      emulation is to return L2's value.
      
      We therefore add a new x86_ops function, read_l1_tsc, to use in places that
      specifically need to read the L1 TSC, NOT the TSC of the current level of
      guest.
      
      Note that one change, a single line in kvm_arch_vcpu_load(), is made
      redundant by a different patch sent by Zachary Amsden (and not yet applied):
      kvm_arch_vcpu_load() should not read the guest TSC at all, and if it did
      not, the call to kvm_get_msr() would not have needed to change to
      read_l1_tsc().
      
      [avi: moved callback to kvm_x86_ops tsc block]
      Signed-off-by: Nadav Har'El <nyh@il.ibm.com>
      Acked-by: Zachary Amsden <zamsden@gmail.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
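
      The shape of the change, sketched as an illustrative ops table (not the
      kernel's actual kvm_x86_ops layout): vendor code supplies read_l1_tsc, and
      common code calls it wherever it specifically needs L1's TSC rather than
      emulating a guest MSR read.

      #include <stdint.h>

      struct vcpu;    /* opaque in this sketch */

      struct x86_tsc_ops {
              /* TSC as seen by whichever guest level is currently running (L1 or L2). */
              uint64_t (*get_running_guest_tsc)(struct vcpu *v);
              /* Always L1's TSC, regardless of whether L2 is running. */
              uint64_t (*read_l1_tsc)(struct vcpu *v);
      };

      /* Common code tracking L1's clock must not go through MSR-read emulation. */
      uint64_t sample_l1_tsc(const struct x86_tsc_ops *ops, struct vcpu *v)
      {
              return ops->read_l1_tsc(v);
      }
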
    • KVM: MMU: Do not unconditionally read PDPTE from guest memory · e4e517b4
      Authored by Avi Kivity
      Architecturally, PDPTEs are cached in the PDPTRs when CR3 is reloaded.
      On SVM it is not possible to implement this, but on VMX it is possible and
      was indeed implemented until nested SVM changed this to read PDPTEs
      dynamically and unconditionally. This has a noticeable impact when running
      PAE guests.
      
      Fix by changing the MMU to read PDPTRs from the cache, falling back to
      reading from memory for the nested MMU.
      Signed-off-by: Avi Kivity <avi@redhat.com>
      Tested-by: Joerg Roedel <joerg.roedel@amd.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
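
      Schematically, the walker now picks its PDPTE source per MMU context; a
      standalone sketch with invented names (the kernel routes this through a
      per-MMU callback):

      #include <stdbool.h>
      #include <stdint.h>

      struct mmu_ctx {
              bool nested;                 /* the nested (L2) MMU has no PDPTR cache */
              uint64_t cached_pdptrs[4];   /* loaded when CR3 was set, per the SDM */
      };

      static uint64_t read_pdpte_from_guest_memory(const struct mmu_ctx *m, int index)
      {
              (void)m; (void)index;
              return 0;    /* stand-in for an actual guest-memory read */
      }

      /* PAE walk: use the architectural PDPTR cache, except for the nested MMU. */
      uint64_t get_pdpte(const struct mmu_ctx *m, int index)
      {
              if (m->nested)
                      return read_pdpte_from_guest_memory(m, index);
              return m->cached_pdptrs[index];
      }
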
    • KVM: Use __print_symbolic() for vmexit tracepoints · 0d460ffc
      Authored by Stefan Hajnoczi
      The vmexit tracepoints format the exit_reason to make it human-readable.
      Since the exit_reason depends on the instruction set (vmx or svm),
      formatting is handled with ftrace_print_symbols_seq() by referring to
      the appropriate exit reason table.
      
      However, the ftrace_print_symbols_seq() function is not meant to be used
      directly in tracepoints since it does not export the formatting table
      which userspace tools like trace-cmd and perf use to format traces.
      
      In practice perf dies when formatting vmexit-related events and
      trace-cmd falls back to printing the numeric value (with extra
      formatting code in the kvm plugin to paper over this limitation).  Other
      userspace consumers of vmexit-related tracepoints would be in similar
      trouble.
      
      To avoid significant changes to the kvm_exit tracepoint, this patch
      moves the vmx and svm exit reason tables into arch/x86/kvm/trace.h and
      selects the right table with __print_symbolic() depending on the
      instruction set.  Note that __print_symbolic() is designed for exporting
      the formatting table to userspace and allows trace-cmd and perf to work.
      Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
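
      For reference, a condensed sketch of the pattern (an abbreviated, made-up
      table and event name, not the real kvm_exit definition); __print_symbolic()
      is the stock tracepoint helper that also exports the table to userspace:

      /* Abbreviated table of (value, name) pairs for VMX exit reasons. */
      #define VMX_EXIT_REASONS                \
              { 0,  "EXCEPTION_NMI" },        \
              { 1,  "EXTERNAL_INTERRUPT" },   \
              { 12, "HLT" },                  \
              { 30, "IO_INSTRUCTION" }        /* ...abbreviated... */

      TRACE_EVENT(sample_vmexit,
              TP_PROTO(unsigned int exit_reason),
              TP_ARGS(exit_reason),
              TP_STRUCT__entry(
                      __field(unsigned int, exit_reason)
              ),
              TP_fast_assign(
                      __entry->exit_reason = exit_reason;
              ),
              /* __print_symbolic() records the table in the event's format file,
               * so perf and trace-cmd can decode the reason in userspace. */
              TP_printk("reason %s",
                        __print_symbolic(__entry->exit_reason, VMX_EXIT_REASONS))
      );
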
    • KVM: Record instruction set in all vmexit tracepoints · e097e5ff
      Authored by Stefan Hajnoczi
      The kvm_exit tracepoint recently added the isa argument to aid decoding
      exit_reason.  The semantics of exit_reason depend on the instruction set
      (vmx or svm) and the isa argument allows traces to be analyzed on other
      machines.
      
      Add the isa argument to kvm_nested_vmexit and kvm_nested_vmexit_inject
      so these tracepoints can also be self-describing.
      Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
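
      Building on the previous sketch, the ISA-dependent table selection can then
      happen at format time; a condensed fragment (it assumes VMX_EXIT_REASONS
      and SVM_EXIT_REASONS tables as above, with KVM_ISA_* values following the
      kernel's convention):

      #define KVM_ISA_VMX 1
      #define KVM_ISA_SVM 2

      /* Inside TP_printk(): pick the symbol table that matches the recorded ISA. */
      TP_printk("reason %s isa %s",
                (__entry->isa == KVM_ISA_VMX) ?
                        __print_symbolic(__entry->exit_reason, VMX_EXIT_REASONS) :
                        __print_symbolic(__entry->exit_reason, SVM_EXIT_REASONS),
                (__entry->isa == KVM_ISA_VMX) ? "vmx" : "svm")
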
  19. 12 Jul 2011, 1 commit
    • KVM: nVMX: Allow setting the VMXE bit in CR4 · 5e1746d6
      Authored by Nadav Har'El
      This patch allows the guest to enable the VMXE bit in CR4, which is a
      prerequisite to running VMXON.
      
      Whether setting the VMXE bit is allowed now depends on the architecture (svm
      or vmx), so the check has moved to kvm_x86_ops->set_cr4(). This function now
      returns an int: if kvm_x86_ops->set_cr4() returns 1, __kvm_set_cr4() will
      also return 1, which causes kvm_set_cr4() to throw a #GP.
      
      Turning on the VMXE bit is allowed only when the nested VMX feature is
      enabled, and turning it off is forbidden after a vmxon.
      Signed-off-by: Nadav Har'El <nyh@il.ibm.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
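
      A standalone model of the described return-value contract (names invented;
      only CR4.VMXE = bit 13 is architectural):

      #include <stdbool.h>

      #define CR4_VMXE (1UL << 13)

      struct vcpu_state {
              bool nested_vmx_enabled;   /* nested VMX exposed to this guest */
              bool vmxon_active;         /* guest has executed VMXON */
              unsigned long cr4;
      };

      /* Vendor check: return 1 to reject the write; the caller then injects #GP. */
      int vendor_set_cr4(struct vcpu_state *v, unsigned long new_cr4)
      {
              bool want_vmxe = new_cr4 & CR4_VMXE;
              bool have_vmxe = v->cr4 & CR4_VMXE;

              if (want_vmxe && !v->nested_vmx_enabled)
                      return 1;    /* VMXE only allowed with nested VMX enabled */
              if (!want_vmxe && have_vmxe && v->vmxon_active)
                      return 1;    /* VMXE cannot be cleared after VMXON */
              v->cr4 = new_cr4;
              return 0;
      }
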
  20. 22 May 2011, 2 commits
  21. 11 May 2011, 8 commits