1. 03 Nov 2016 (1 commit)
    • KVM: x86: drop TSC offsetting kvm_x86_ops to fix KVM_GET/SET_CLOCK · ea26e4ec
      Paolo Bonzini committed
      Since commit a545ab6a ("kvm: x86: add tsc_offset field to struct
      kvm_vcpu_arch", 2016-09-07) the offset between host and L1 TSC is
      cached and need not be fished out of the VMCS or VMCB.  This means
      that we can implement adjust_tsc_offset_guest and read_l1_tsc
      entirely in generic code.  The simplification is particularly
      significant for VMX code, where vmx->nested.vmcs01_tsc_offset
      was duplicating what is now in vcpu->arch.tsc_offset.  Therefore
      the vmcs01_tsc_offset can be dropped completely.
      
      More importantly, this fixes KVM_GET_CLOCK/KVM_SET_CLOCK
      which, after commit 108b249c ("KVM: x86: introduce get_kvmclock_ns",
      2016-09-01) called read_l1_tsc while the VMCS was not loaded.
      It thus returned bogus values on Intel CPUs.
      
      Fixes: 108b249c ("KVM: x86: introduce get_kvmclock_ns")
      Reported-by: Roman Kagan <rkagan@virtuozzo.com>
      Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
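      As an illustration of the simplification, here is a minimal C sketch of what caching the
      offset buys: with vcpu->arch.tsc_offset always valid, L1's TSC and offset adjustments become
      plain arithmetic that needs no loaded VMCS/VMCB. The struct and helpers below are illustrative
      stand-ins, not the kernel's definitions, and TSC scaling is omitted.

      #include <stdint.h>

      /* Illustrative stand-in for the relevant part of struct kvm_vcpu_arch. */
      struct vcpu_arch {
              uint64_t tsc_offset;    /* cached host-to-L1 TSC offset (added by a545ab6a) */
      };

      /*
       * With the offset cached, L1's view of the TSC is simple arithmetic on the
       * host TSC (scaling omitted); nothing has to be read back from the VMCS or
       * VMCB, so the vCPU does not even have to be loaded on the current CPU.
       */
      uint64_t read_l1_tsc(const struct vcpu_arch *arch, uint64_t host_tsc)
      {
              return host_tsc + arch->tsc_offset;
      }

      /*
       * Adjusting the guest offset is likewise a pure update of the cached value;
       * the real code still has the vendor module write the new offset into the
       * VMCS/VMCB before the next vmentry.
       */
      void adjust_tsc_offset_guest(struct vcpu_arch *arch, int64_t adjustment)
      {
              arch->tsc_offset += adjustment;
      }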
  2. 20 Sep 2016 (1 commit)
  3. 16 Sep 2016 (2 commits)
  4. 08 Sep 2016 (3 commits)
  5. 14 Jul 2016 (8 commits)
  6. 24 Jun 2016 (1 commit)
  7. 16 Jun 2016 (3 commits)
    • kvm: vmx: hook preemption timer support · 64672c95
      Yunhong Jiang committed
      Hook the VMX preemption timer to the "hv timer" functionality added
      by the previous patch.  This includes checking whether the feature is
      supported and whether it is broken on the CPU, the hooks to set up and
      clean up the VMX preemption timer, arming the timer on vmentry, and
      handling the vmexit.
      
      A module parameter controls whether the VMX preemption timer is used.
      Signed-off-by: Yunhong Jiang <yunhong.jiang@intel.com>
      [Move hv_deadline_tsc to struct vcpu_vmx, use -1 as the "unset" value.
       Put all VMX bits here.  Enable it by default #yolo. - Paolo]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
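      For context, a standalone sketch of the two SDM-defined pieces behind "checking if the
      feature is supported": the preemption timer is advertised by a pin-based VM-execution
      control, and its rate relative to the TSC comes from the low bits of MSR_IA32_VMX_MISC.
      The helper names are illustrative; in the kernel, the module parameter and the broken-CPU
      check mentioned above additionally decide whether the hooks get wired up.

      #include <stdbool.h>
      #include <stdint.h>

      #define MSR_IA32_VMX_PINBASED_CTLS      0x481
      #define MSR_IA32_VMX_MISC               0x485
      #define PIN_BASED_VMX_PREEMPTION_TIMER  (1u << 6)  /* "activate VMX-preemption timer" */

      /* The "allowed-1" settings of the pin-based controls are reported in the
       * high 32 bits of MSR_IA32_VMX_PINBASED_CTLS. */
      bool preemption_timer_supported(uint64_t vmx_pinbased_ctls_msr)
      {
              return (vmx_pinbased_ctls_msr >> 32) & PIN_BASED_VMX_PREEMPTION_TIMER;
      }

      /* Bits 4:0 of MSR_IA32_VMX_MISC report X such that the preemption timer
       * counts down by one every 2^X TSC ticks. */
      unsigned int preemption_timer_rate_shift(uint64_t vmx_misc_msr)
      {
              return vmx_misc_msr & 0x1f;
      }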
    • KVM: x86: support using the vmx preemption timer for tsc deadline timer · ce7a058a
      Yunhong Jiang committed
      The VMX preemption timer can be used to virtualize the TSC deadline timer.
      The VMX preemption timer is armed when the vCPU is running, and a VMExit
      will happen if the virtual TSC deadline timer expires.
      
      When the vCPU thread is blocked because of HLT, KVM will switch to use
      an hrtimer, and then go back to the VMX preemption timer when the vCPU
      thread is unblocked.
      
      This solution avoids the complexity of the OS hrtimer system and the
      cost of host timer interrupt handling, replacing them with a little
      math (for guest->host TSC and host TSC->preemption timer conversion)
      and a cheaper VM exit.  This benefits latency for isolated pCPUs.
      
      [A word about performance... Yunhong reported a 30% reduction in average
       latency from cyclictest.  I made a similar test with tscdeadline_latency
       from kvm-unit-tests, and measured
      
       - ~20 clock cycles loss (out of ~3200, so less than 1% but still
         statistically significant) in the worst case where the test halts
         just after programming the TSC deadline timer
      
       - ~800 clock cycles gain (25% reduction in latency) in the best case
         where the test busy waits.
      
       I removed the VMX bits from Yunhong's patch, to concentrate them in the
       next patch - Paolo]
      Signed-off-by: Yunhong Jiang <yunhong.jiang@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
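      The "little math" above amounts to two conversions, sketched below under the simplifying
      assumption of a 1:1 TSC scaling ratio: map the guest TSC deadline back to a host TSC value,
      then shift the remaining host-TSC delta by the rate reported in MSR_IA32_VMX_MISC bits 4:0
      to obtain the preemption-timer value. Function and parameter names are illustrative.

      #include <stdint.h>

      /*
       * guest_tsc = host_tsc + tsc_offset (TSC scaling ignored), so a guest-TSC
       * deadline maps back to a host-TSC value by subtracting the offset.
       */
      uint64_t guest_deadline_to_host_tsc(uint64_t guest_deadline_tsc, uint64_t tsc_offset)
      {
              return guest_deadline_tsc - tsc_offset;
      }

      /*
       * The preemption timer decrements once every 2^rate_shift host TSC ticks
       * (rate_shift comes from MSR_IA32_VMX_MISC bits 4:0), so the value to arm
       * it with is the remaining host-TSC delta shifted right.  The real code
       * must also handle deltas that overflow the 32-bit timer field.
       */
      uint32_t host_tsc_delta_to_timer_ticks(uint64_t host_tsc_now,
                                             uint64_t host_deadline_tsc,
                                             unsigned int rate_shift)
      {
              uint64_t delta = host_deadline_tsc > host_tsc_now ?
                               host_deadline_tsc - host_tsc_now : 0;

              return (uint32_t)(delta >> rate_shift);
      }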
    • kvm: svm: Fix implicit declaration for __default_cpu_present_to_apicid() · 7d669f50
      Suravee Suthikulpanit committed
      Commit 8221c137 ("svm: Manage vcpu load/unload when enable AVIC")
      introduces a build error due to an implicit function declaration
      when CONFIG_X86_32 is defined and CONFIG_X86_LOCAL_APIC is not
      (as reported by the Kbuild test robot, i386-randconfig-x0-06121009).
      
      This patch therefore introduces a kvm_cpu_get_apicid() wrapper
      around __default_cpu_present_to_apicid(), with additional handling
      for the case where CONFIG_X86_LOCAL_APIC is not defined.
      Reported-by: kbuild test robot <fengguang.wu@intel.com>
      Fixes: commit 8221c137 ("svm: Manage vcpu load/unload when enable AVIC")
      Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
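      A minimal sketch of the kind of wrapper described above, assuming kernel context
      (CONFIG_* macros, WARN_ON_ONCE() and BAD_APICID are the kernel's); it illustrates the
      fallback when CONFIG_X86_LOCAL_APIC is not defined rather than reproducing the patch
      verbatim.

      /* Illustrative wrapper; in the kernel this would live in a KVM header. */
      static inline int kvm_cpu_get_apicid(int mps_cpu)
      {
      #ifdef CONFIG_X86_LOCAL_APIC
              return __default_cpu_present_to_apicid(mps_cpu);
      #else
              /* No local APIC support compiled in: AVIC cannot be in use, so any
               * caller reaching this point is a bug; return an invalid APIC ID. */
              WARN_ON_ONCE(1);
              return BAD_APICID;
      #endif
      }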
  8. 19 May 2016 (6 commits)
  9. 13 May 2016 (1 commit)
    • KVM: halt_polling: provide a way to qualify wakeups during poll · 3491caf2
      Christian Borntraeger committed
      Some wakeups should not be considered a successful poll. For example, on
      s390 I/O interrupts are usually floating, which means that _ALL_ CPUs
      would be considered runnable - letting all vCPUs poll all the time for
      transactional-like workloads, even if one vCPU would be enough.
      This can result in huge CPU usage for large guests.
      This patch lets architectures provide a way to qualify wakeups as
      good or bad with regard to polling.
      
      For s390 the implementation will fence off halt polling for anything but
      known good, single-vCPU events. The s390 implementation for floating
      interrupts does a wakeup for one vCPU, but the interrupt will be delivered
      by whatever CPU checks first for a pending interrupt. We prefer the
      woken-up CPU by marking its poll as a "good" poll.
      This code will also mark several other wakeup reasons, such as IPIs or
      expired timers, as "good". It will of course also mark some events as
      not successful. Since KVM on z always runs as a second-level hypervisor,
      we prefer not to poll unless we are really sure, though.
      
      This patch successfully limits the CPU usage for cases like a uperf
      1-byte transactional ping-pong workload or wakeup-heavy workloads like
      OLTP, while still providing a proper speedup.
      
      It also introduces a new vcpu stat, "halt_poll_no_tuning", that marks
      wakeups that are considered not good for polling.
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Acked-by: Radim Krčmář <rkrcmar@redhat.com> (for an earlier version)
      Cc: David Matlack <dmatlack@google.com>
      Cc: Wanpeng Li <kernellwp@gmail.com>
      [Rename config symbol. - Paolo]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
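      The mechanism can be pictured with a small sketch: the architecture marks each wakeup as
      valid ("good") or not, and the generic halt-polling accounting only lets good wakeups count
      as successful polls; bad ones go into the new stat and do not tune halt_poll_ns upwards.
      All structure, field, and function names below are illustrative, not the kernel's actual ones.

      #include <stdbool.h>
      #include <stdint.h>

      struct vcpu_stats {
              uint64_t halt_successful_poll;  /* wakeups that justify polling */
              uint64_t halt_poll_no_tuning;   /* wakeups that must not grow halt_poll_ns */
      };

      struct vcpu {
              bool valid_wakeup;              /* set by arch code for known-good wakeups */
              struct vcpu_stats stat;
              uint64_t halt_poll_ns;
      };

      /* Called when a poll interval ends because the vCPU became runnable.
       * Architectures without the qualification hook would simply report
       * every wakeup as valid, preserving the old behaviour. */
      void account_poll_wakeup(struct vcpu *vcpu)
      {
              if (vcpu->valid_wakeup) {
                      vcpu->stat.halt_successful_poll++;
                      /* ...grow or keep halt_poll_ns as usual... */
              } else {
                      /* e.g. an s390 floating interrupt that another CPU will take:
                       * count it, but do not let it tune polling upwards. */
                      vcpu->stat.halt_poll_no_tuning++;
              }
              vcpu->valid_wakeup = false;
      }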
  10. 20 Apr 2016 (1 commit)
  11. 01 Apr 2016 (1 commit)
    • KVM: x86: reduce default value of halt_poll_ns parameter · 14ebda33
      Paolo Bonzini committed
      Windows lets applications choose the frequency of the timer tick,
      and in Windows 10 the maximum rate was changed from 1024 Hz to
      2048 Hz.  Unfortunately, because of the way the Windows API
      works, most applications that need a higher rate than the default
      64 Hz will just do
      
         timeGetDevCaps(&tc, sizeof(tc));
         timeBeginPeriod(tc.wPeriodMin);
      
      and pick the maximum rate.  This causes very high CPU usage when
      playing media or games on Windows 10, even if the guest does not
      actually use the CPU very much, because the frequent timer tick
      causes halt_poll_ns to kick in.
      
      There is no really good solution, especially because Microsoft
      could sooner or later bump the limit to 4096 Hz, but for now
      the best we can do is lower the upper limit for halt_poll_ns
      a bit. :-(
      Reported-by: Jon Panozzo <jonp@lime-technology.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
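      Some back-of-the-envelope arithmetic shows why a 2048 Hz tick interacts badly with halt
      polling; the halt_poll_ns value in this sketch is hypothetical, purely for illustration.

      #include <stdio.h>

      int main(void)
      {
              const double tick_hz = 2048.0;                  /* Windows 10 maximum timer rate */
              const double tick_period_us = 1e6 / tick_hz;    /* ~488 us between guest wakeups */
              const double halt_poll_us = 200.0;              /* hypothetical halt_poll_ns of 200,000 */

              /* If no other event arrives, each idle interval spends up to
               * halt_poll_us busy-polling before the vCPU really sleeps. */
              printf("tick period: %.0f us, worst-case poll share of idle time: %.0f%%\n",
                     tick_period_us, 100.0 * halt_poll_us / tick_period_us);
              return 0;
      }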
  12. 22 Mar 2016 (3 commits)
  13. 09 Mar 2016 (1 commit)
  14. 08 Mar 2016 (1 commit)
    • KVM: MMU: simplify last_pte_bitmap · 6bb69c9b
      Paolo Bonzini committed
      Branch-free code is fun and everybody knows how much Avi loves it,
      but last_pte_bitmap takes it a bit to the extreme.  Since the code
      is simply doing a range check, like
      
      	(level == 1 ||
      	 ((gpte & PT_PAGE_SIZE_MASK) && level < N))
      
      we can make it branch-free without storing the entire truth table;
      it is enough to cache N.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
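      A minimal sketch of the branch-free form of that range check with only N cached (called
      last_nonleaf here and assumed to be at least 2, i.e. the lowest level at which large pages
      are possible); the helper name and types are illustrative rather than the kernel's exact code.

      #include <stdbool.h>
      #include <stdint.h>

      #define PT_PAGE_SIZE_MASK (1ULL << 7)   /* the PS bit of a guest PTE */

      /*
       * Branch-free equivalent of
       *     level == 1 || ((gpte & PT_PAGE_SIZE_MASK) && level < last_nonleaf)
       * relying on unsigned underflow to set or clear bit 7.
       */
      bool is_last_gpte(unsigned int level, uint64_t gpte, unsigned int last_nonleaf)
      {
              /* level == 1: (level - 2) underflows to all-ones, which sets the PS
               * bit, so a level-1 entry is always treated as a leaf.  For
               * level >= 2 only low bits are set and the PS bit is untouched. */
              gpte |= (uint64_t)(level - 2u);

              /* level >= last_nonleaf: the difference is a small value with bit 7
               * clear, stripping PS; level < last_nonleaf: it underflows and keeps
               * PS.  (last_nonleaf >= 2 is assumed, as in real page tables.) */
              gpte &= (uint64_t)(level - last_nonleaf);

              return gpte & PT_PAGE_SIZE_MASK;
      }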
  15. 03 Mar 2016 (5 commits)
  16. 09 Feb 2016 (1 commit)
  17. 09 Jan 2016 (1 commit)