1. 19 Jun 2014, 2 commits
  2. 24 Feb 2014, 1 commit
  3. 12 Dec 2013, 1 commit
  4. 07 Aug 2013, 1 commit
  5. 22 Apr 2013, 1 commit
  6. 17 Apr 2013, 1 commit
  7. 14 Mar 2013, 2 commits
  8. 13 Mar 2013, 1 commit
  9. 08 Mar 2013, 1 commit
  10. 06 Feb 2013, 1 commit
  11. 29 Jan 2013, 3 commits
  12. 15 Dec 2012, 1 commit
  13. 14 Dec 2012, 1 commit
  14. 05 Dec 2012, 1 commit
  15. 21 Sep 2012, 1 commit
  16. 12 Jul 2012, 1 commit
    •
      KVM: VMX: Implement PCID/INVPCID for guests with EPT · ad756a16
      Committed by Mao, Junjie
      This patch handles PCID/INVPCID for guests.
      
      Process-context identifiers (PCIDs) are a facility by which a logical processor
      may cache information for multiple linear-address spaces so that the processor
      may retain cached information when software switches to a different linear
      address space. Refer to section 4.10.1 in IA32 Intel Software Developer's Manual
      Volume 3A for details.
      
      For guests with EPT, the PCID feature is enabled and INVPCID behaves as if
      running natively.
      For guests without EPT, the PCID feature is disabled and INVPCID triggers #UD.
      Signed-off-by: Junjie Mao <junjie.mao@intel.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
      ad756a16
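The EPT-dependent policy above can be sketched as a tiny decision helper. This is an illustrative model only; `guest_invpcid` and `enum invpcid_result` are hypothetical names, not the actual KVM code.

```c
#include <stdbool.h>

/* Possible outcomes of a guest INVPCID execution (illustrative only). */
enum invpcid_result {
    INVPCID_NATIVE, /* runs natively, no intervention needed */
    INVPCID_UD      /* #UD injected into the guest */
};

/*
 * Hypothetical helper mirroring the policy in the commit message:
 * with EPT the PCID feature is exposed and INVPCID runs natively;
 * without EPT, PCID is hidden and INVPCID raises #UD.
 */
static enum invpcid_result guest_invpcid(bool ept_enabled)
{
    return ept_enabled ? INVPCID_NATIVE : INVPCID_UD;
}
```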
  17. 05 Jun 2012, 1 commit
  18. 26 Sep 2011, 1 commit
  19. 12 Jul 2011, 3 commits
    •
      KVM: nVMX: vmcs12 checks on nested entry · 7c177938
      Committed by Nadav Har'El
      This patch adds a bunch of tests of the validity of the vmcs12 fields,
      according to what the VMX spec and our implementation allows. If fields
      we cannot (or don't want to) honor are discovered, an entry failure is
      emulated.
      
      According to the spec, there are two types of entry failures: if the problem
      is in vmcs12's host-state or control fields, the VMLAUNCH instruction simply
      fails. But if the problem is found in the guest state, the behavior is more
      similar to that of an exit.
      Signed-off-by: Nadav Har'El <nyh@il.ibm.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      7c177938
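The two entry-failure classes described above can be sketched as follows; `struct vmcs12_lite`, its fields, and `check_vmcs12` are hypothetical illustrations (the real vmcs12 has many more fields and checks):

```c
#include <stdbool.h>

/* Entry outcome categories from the commit message (illustrative). */
enum entry_result {
    ENTRY_OK,
    ENTRY_FAIL_INSTR,   /* bad host-state/control fields: VMLAUNCH itself fails */
    ENTRY_FAIL_EXITLIKE /* bad guest state: emulate an exit-like entry failure */
};

/* Hypothetical summary of vmcs12 validity; not the real structure. */
struct vmcs12_lite {
    bool controls_valid;    /* control fields we can (and want to) honor */
    bool host_state_valid;  /* host-state fields are sane */
    bool guest_state_valid; /* guest-state fields are sane */
};

static enum entry_result check_vmcs12(const struct vmcs12_lite *v)
{
    if (!v->controls_valid || !v->host_state_valid)
        return ENTRY_FAIL_INSTR;
    if (!v->guest_state_valid)
        return ENTRY_FAIL_EXITLIKE;
    return ENTRY_OK;
}
```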
    •
      KVM: nVMX: Exiting from L2 to L1 · 4704d0be
      Committed by Nadav Har'El
      This patch implements nested_vmx_vmexit(), called when the nested L2 guest
      exits and we want to run its L1 parent and let it handle this exit.
      
      Note that this will not necessarily be called on every L2 exit. L0 may decide
      to handle a particular exit on its own, without L1's involvement; In that
      case, L0 will handle the exit, and resume running L2, without running L1 and
      without calling nested_vmx_vmexit(). The logic for deciding whether to handle
      a particular exit in L1 or in L0, i.e., whether to call nested_vmx_vmexit(),
      will appear in a separate patch below.
      Signed-off-by: Nadav Har'El <nyh@il.ibm.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      4704d0be
    •
      KVM: nVMX: Success/failure of VMX instructions. · 0140caea
      Committed by Nadav Har'El
      VMX instructions specify success or failure by setting certain RFLAGS bits.
      This patch contains common functions to do this, and they will be used in
      the following patches which emulate the various VMX instructions.
      Signed-off-by: Nadav Har'El <nyh@il.ibm.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      0140caea
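Per the Intel SDM, the conventions are: VMsucceed clears CF, PF, AF, ZF, SF and OF; VMfailInvalid sets CF and clears the rest; VMfailValid sets ZF and clears the rest (with an error number stored in the VM-instruction error field). A minimal sketch of these flag updates, with illustrative helper names rather than the kernel's:

```c
#include <stdint.h>

/* x86 RFLAGS arithmetic-flag bits involved in VMX instruction outcomes. */
#define X86_EFLAGS_CF (1UL << 0)
#define X86_EFLAGS_PF (1UL << 2)
#define X86_EFLAGS_AF (1UL << 4)
#define X86_EFLAGS_ZF (1UL << 6)
#define X86_EFLAGS_SF (1UL << 7)
#define X86_EFLAGS_OF (1UL << 11)

#define VMX_FLAGS_MASK (X86_EFLAGS_CF | X86_EFLAGS_PF | X86_EFLAGS_AF | \
                        X86_EFLAGS_ZF | X86_EFLAGS_SF | X86_EFLAGS_OF)

/* VMsucceed: clear all six arithmetic flags. */
static uint64_t vmx_succeed(uint64_t rflags)
{
    return rflags & ~VMX_FLAGS_MASK;
}

/* VMfailInvalid: set CF, clear the rest (no current VMCS to report through). */
static uint64_t vmx_fail_invalid(uint64_t rflags)
{
    return (rflags & ~VMX_FLAGS_MASK) | X86_EFLAGS_CF;
}

/* VMfailValid: set ZF, clear the rest (error number goes in the VMCS). */
static uint64_t vmx_fail_valid(uint64_t rflags)
{
    return (rflags & ~VMX_FLAGS_MASK) | X86_EFLAGS_ZF;
}
```

Non-arithmetic RFLAGS bits (IF, TF, and so on) pass through untouched in all three cases.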
  20. 12 Jan 2011, 3 commits
  21. 01 Aug 2010, 4 commits
  22. 19 May 2010, 2 commits
  23. 01 Mar 2010, 4 commits
  24. 03 Dec 2009, 1 commit
    •
      KVM: VMX: Add support for Pause-Loop Exiting · 4b8d54f9
      Committed by Zhai, Edwin
      New NHM processors will support Pause-Loop Exiting by adding 2 VM-execution
      control fields:
      PLE_Gap    - upper bound on the amount of time between two successive
                   executions of PAUSE in a loop.
      PLE_Window - upper bound on the amount of time a guest is allowed to execute in
                   a PAUSE loop
      
      If the time between this execution of PAUSE and the previous one exceeds
      PLE_Gap, the processor considers this PAUSE to belong to a new loop.
      Otherwise, the processor determines the total execution time of this loop
      (since the 1st PAUSE in this loop), and triggers a VM exit if the total
      time exceeds PLE_Window.
      * Refer to SDM volume 3b, sections 21.6.13 & 22.1.3.
      
      Pause-Loop Exiting can be used to detect Lock-Holder Preemption, where one VP
      is scheduled out after holding a spinlock, and other VPs waiting for the same
      lock are scheduled in, wasting CPU time.
      
      Our tests indicate that most spinlocks are held for less than 212 cycles.
      Performance tests show that with 2X LP over-commitment we can get a +2% perf
      improvement for kernel build (even more perf gain with more LPs).
      Signed-off-by: Zhai Edwin <edwin.zhai@intel.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      4b8d54f9
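The PLE_Gap/PLE_Window decision described above can be modeled in a few lines. This is a software sketch of the hardware behavior, with hypothetical names and times in abstract cycles:

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical model of Pause-Loop Exiting. ple_gap and ple_window
 * mirror the PLE_Gap and PLE_Window VM-execution control fields.
 */
struct ple_state {
    uint64_t loop_start; /* time of the 1st PAUSE in the current loop */
    uint64_t last_pause; /* time of the previous PAUSE */
};

/* Returns true if this PAUSE execution should trigger a VM exit. */
static bool ple_on_pause(struct ple_state *s, uint64_t now,
                         uint64_t ple_gap, uint64_t ple_window)
{
    if (now - s->last_pause > ple_gap) {
        /* Gap exceeded: this PAUSE starts a new loop, no exit. */
        s->loop_start = now;
        s->last_pause = now;
        return false;
    }
    s->last_pause = now;
    /* Same loop: exit if the loop's total time exceeds the window. */
    return now - s->loop_start > ple_window;
}
```

A long-spinning vCPU thus keeps accumulating loop time until it crosses PLE_Window and exits, letting the hypervisor schedule the lock holder instead.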
  25. 10 Sep 2009, 1 commit