1. 04 Mar, 2014 (2 commits)
    • x86: kvm: introduce periodic global clock updates · 332967a3
      Andrew Jones authored
      Commit 0061d53d introduced a mechanism to execute a global clock
      update for a VM. We can apply this periodically in order to
      propagate host NTP corrections. Also, if all vcpus of a VM are
      pinned, then without an additional trigger no guest NTP correction
      can propagate at all, as the only current trigger is vcpu migration
      between host cpus. A periodic work item, sketched below, supplies
      that trigger.
      Signed-off-by: Andrew Jones <drjones@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      332967a3
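      A hedged reconstruction of how such a periodic trigger can be wired
      up with a delayed work item; KVMCLOCK_SYNC_PERIOD and the
      kvmclock_sync_work/kvmclock_update_work field names are assumptions
      for illustration, not necessarily the identifiers the commit uses:

        #define KVMCLOCK_SYNC_PERIOD (300 * HZ)  /* assumed period */

        static void kvmclock_sync_fn(struct work_struct *work)
        {
            struct delayed_work *dwork = to_delayed_work(work);
            struct kvm_arch *ka = container_of(dwork, struct kvm_arch,
                                               kvmclock_sync_work);
            struct kvm *kvm = container_of(ka, struct kvm, arch);

            /* queue the existing global clock update machinery now ... */
            schedule_delayed_work(&kvm->arch.kvmclock_update_work, 0);
            /* ... and re-arm ourselves for the next period */
            schedule_delayed_work(&kvm->arch.kvmclock_sync_work,
                                  KVMCLOCK_SYNC_PERIOD);
        }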
    • x86: kvm: rate-limit global clock updates · 7e44e449
      Andrew Jones authored
      When we update a vcpu's local clock, it may pick up an NTP
      correction. We can't wait an indeterminate amount of time for the
      other vcpus to pick up that correction, so commit 0061d53d
      introduced a global clock update. However, we can't request a
      global clock update on every vcpu load either (which is what
      happens if the TSC is marked as unstable). The solution is to
      rate-limit the global clock updates. Marcelo calculated that we
      should delay the global clock updates by no more than 0.1s, as
      follows (see the worked check below):

      Assume an NTP correction c is applied to one vcpu but not the
      other; then in n seconds the delta of the vcpus' system_timestamps
      will be c * n. If we assume a correction of 500ppm (worst case),
      the two vcpus will diverge by 50us in 0.1s, which is a considerable
      amount.
      Signed-off-by: Andrew Jones <drjones@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      7e44e449
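      The bound can be checked with a few lines of standalone C; the
      numbers are the ones from the message:

        #include <stdio.h>

        int main(void)
        {
            double c = 500e-6; /* worst-case NTP correction: 500 ppm */
            double n = 0.1;    /* seconds between global updates     */

            /* delta of the vcpu system_timestamps after n seconds */
            printf("divergence: %.0f us\n", c * n * 1e6); /* 50 us */
            return 0;
        }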
  2. 24 Feb, 2014 (1 commit)
  3. 04 Feb, 2014 (1 commit)
  4. 17 Jan, 2014 (2 commits)
    • KVM: SVM: Fix reading of DR6 · 73aaf249
      Jan Kiszka authored
      In contrast to VMX, SVM does not automatically transfer DR6 into
      the vcpu's arch.dr6. So if we face a DR6 read, we must consult a
      new vendor hook to obtain the current value. And as SVM now picks
      the DR6 state up from its VMCB, we also need a set callback in
      order to write updates of DR6 back to the VMCB (see the sketch
      below).
      
      Fixes a regression of 020df079.
      Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      73aaf249
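      A hedged sketch of the SVM side of such a hook pair; to_svm() and
      vmcb->save.dr6 follow the usual KVM/SVM conventions, but the bodies
      are reconstructed from the description rather than copied from the
      patch:

        static u64 svm_get_dr6(struct kvm_vcpu *vcpu)
        {
            /* SVM keeps the guest's DR6 in the VMCB save area */
            return to_svm(vcpu)->vmcb->save.dr6;
        }

        static void svm_set_dr6(struct kvm_vcpu *vcpu, unsigned long value)
        {
            to_svm(vcpu)->vmcb->save.dr6 = value;
            /* the real patch would also mark the VMCB debug state dirty */
        }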
    • add support for Hyper-V reference time counter · e984097b
      Vadim Rozenfeld authored
      Signed-off-by: Peter Lieven <pl@kamp.de>
      Signed-off-by: Gleb Natapov
      Signed-off-by: Vadim Rozenfeld <vrozenfe@redhat.com>
      
      After some consideration I decided to submit only Hyper-V reference
      counters support this time. I will submit iTSC support as a separate
      patch as soon as it is ready.
      
      v1 -> v2
      1. mark TSC page dirty as suggested by
          Eric Northup <digitaleric@google.com> and Gleb
      2. disable local irq when calling get_kernel_ns,
          as was done by Peter Lieven <pl@kamp.de>
      3. move check for TSC page enable from second patch
          to this one.
      
      v3 -> v4
          Get rid of ref counter offset.
      
      v4 -> v5
          replace __copy_to_user with kvm_write_guest
          when updating the iTSC page.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      e984097b
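      The reference counter follows the formula in the Hyper-V TLFS:
      reference time in 100ns units = ((virtual TSC * scale) >> 64) +
      offset. A standalone illustration with made-up clock rates:

        #include <stdint.h>
        #include <stdio.h>

        static uint64_t hv_ref_time(uint64_t tsc, uint64_t scale,
                                    int64_t offset)
        {
            /* 128-bit multiply, keep the high 64 bits (TLFS formula) */
            return (uint64_t)(((unsigned __int128)tsc * scale) >> 64)
                   + offset;
        }

        int main(void)
        {
            /* e.g. 2.5 GHz TSC -> 10 MHz (100ns-unit) reference time */
            uint64_t scale =
                (uint64_t)((((unsigned __int128)10000000) << 64)
                           / 2500000000ULL);

            /* one second of TSC ticks should read as 10^7 units */
            printf("%llu\n",
                   (unsigned long long)hv_ref_time(2500000000ULL,
                                                   scale, 0));
            return 0;
        }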
  5. 06 Nov, 2013 (2 commits)
  6. 31 Oct, 2013 (2 commits)
  7. 14 Oct, 2013 (1 commit)
  8. 03 Oct, 2013 (5 commits)
  9. 26 Aug, 2013 (1 commit)
  10. 07 Aug, 2013 (1 commit)
  11. 29 Jul, 2013 (1 commit)
  12. 20 Jul, 2013 (1 commit)
  13. 27 Jun, 2013 (5 commits)
  14. 26 Jun, 2013 (1 commit)
  15. 05 Jun, 2013 (2 commits)
    • KVM: MMU: reclaim the zapped-obsolete page first · 365c8868
      Xiao Guangrong authored
      As Marcelo pointed out:
      | "(retention of large number of pages while zapping)
      | can be fatal, it can lead to OOM and host crash"

      We introduce a list, kvm->arch.zapped_obsolete_pages, to link all
      the pages which have been deleted from the mmu cache but not yet
      actually freed. When page reclaim is needed, we always zap this
      kind of page first, as sketched below.
      Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
      Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
      Signed-off-by: Gleb Natapov <gleb@redhat.com>
      365c8868
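      A hedged fragment of the reclaim ordering; the helper names
      (kvm_mmu_commit_zap_page, prepare_zap_oldest_mmu_page) are the ones
      conventionally used in mmu.c of that era, but the exact shrinker
      structure here is illustrative:

        LIST_HEAD(invalid_list);

        spin_lock(&kvm->mmu_lock);
        if (!list_empty(&kvm->arch.zapped_obsolete_pages)) {
            /* already unlinked from the mmu: freeing them needs no
             * further zapping work */
            kvm_mmu_commit_zap_page(kvm,
                                    &kvm->arch.zapped_obsolete_pages);
        } else {
            prepare_zap_oldest_mmu_page(kvm, &invalid_list);
            kvm_mmu_commit_zap_page(kvm, &invalid_list);
        }
        spin_unlock(&kvm->mmu_lock);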
    • KVM: MMU: fast invalidate all pages · 5304b8d3
      Xiao Guangrong authored
      The current kvm_mmu_zap_all is really slow: it holds mmu-lock while
      it walks and zaps all shadow pages one by one, and it also needs to
      zap all of the guest pages' rmaps and all shadow pages' parent spte
      lists. Things become worse if the guest uses more memory or vcpus;
      it is not good for scalability.

      In this patch, we introduce a faster way to invalidate all shadow
      pages. KVM maintains a global mmu invalid generation number, stored
      in kvm->arch.mmu_valid_gen, and every shadow page stores the
      current global generation number into sp->mmu_valid_gen when it is
      created.

      When KVM needs to zap all shadow pages' sptes, it simply increases
      the global generation number and then reloads the root shadow pages
      on all vcpus. A vcpu will then create a new shadow page table
      according to the current generation number, which ensures the old
      pages are not used any more. The obsolete pages (those with
      sp->mmu_valid_gen != kvm->arch.mmu_valid_gen) are afterwards zapped
      using a lock-break technique, as modeled in the sketch below.
      Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
      Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
      Signed-off-by: Gleb Natapov <gleb@redhat.com>
      5304b8d3
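      The core of the trick fits in a few lines. A minimal userspace
      model (the mmu_valid_gen names mirror the commit; everything else
      is illustrative):

        #include <stdbool.h>
        #include <stdio.h>

        struct shadow_page { unsigned long mmu_valid_gen; };

        static unsigned long mmu_valid_gen; /* kvm->arch.mmu_valid_gen */

        static void create_page(struct shadow_page *sp)
        {
            sp->mmu_valid_gen = mmu_valid_gen; /* stamp current gen */
        }

        static void invalidate_all(void)
        {
            mmu_valid_gen++; /* every existing page is now obsolete */
        }

        static bool is_obsolete(const struct shadow_page *sp)
        {
            return sp->mmu_valid_gen != mmu_valid_gen;
        }

        int main(void)
        {
            struct shadow_page sp;

            create_page(&sp);
            printf("obsolete before: %d\n", is_obsolete(&sp)); /* 0 */
            invalidate_all();
            printf("obsolete after:  %d\n", is_obsolete(&sp)); /* 1 */
            return 0;
        }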
  16. 03 May, 2013 (1 commit)
  17. 28 Apr, 2013 (2 commits)
  18. 27 Apr, 2013 (1 commit)
  19. 22 Apr, 2013 (1 commit)
  20. 17 Apr, 2013 (2 commits)
    • KVM: VMX: Add the deliver posted interrupt algorithm · a20ed54d
      Yang Zhang authored
      Only deliver the posted interrupt when the target vcpu is running
      and there is no previous interrupt pending in the PIR (see the
      sketch below).
      Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
      Reviewed-by: Gleb Natapov <gleb@redhat.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      a20ed54d
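      A hedged sketch of that rule; the pi_test_and_set_* helpers name
      the PIR bits and the outstanding-notification flag of the
      posted-interrupt descriptor, and the details are reconstructed
      from the description, not copied from the patch:

        static void deliver_posted_interrupt(struct kvm_vcpu *vcpu,
                                             int vector)
        {
            struct vcpu_vmx *vmx = to_vmx(vcpu);

            /* already pending in the PIR: nothing more to do */
            if (pi_test_and_set_pir(vector, &vmx->pi_desc))
                return;

            /* first new bit: raise the notification flag, and send
             * the notification IPI only if the vcpu is in guest mode */
            if (!pi_test_and_set_on(&vmx->pi_desc) &&
                vcpu->mode == IN_GUEST_MODE)
                apic->send_IPI_mask(get_cpu_mask(vcpu->cpu),
                                    POSTED_INTR_VECTOR);
            else
                kvm_vcpu_kick(vcpu);
        }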
    • KVM: VMX: Enable acknowledge interrupt on vmexit · a547c6db
      Yang Zhang authored
      The "acknowledge interrupt on exit" feature controls processor behavior
      for external interrupt acknowledgement. When this control is set, the
      processor acknowledges the interrupt controller to acquire the
      interrupt vector on VM exit.
      
      After enabling this feature, an interrupt which arrived when target cpu is
      running in vmx non-root mode will be handled by vmx handler instead of handler
      in idt. Currently, vmx handler only fakes an interrupt stack and jump to idt
      table to let real handler to handle it. Further, we will recognize the interrupt
      and only delivery the interrupt which not belong to current vcpu through idt table.
      The interrupt which belonged to current vcpu will be handled inside vmx handler.
      This will reduce the interrupt handle cost of KVM.
      
      Also, interrupt enable logic is changed if this feature is turnning on:
      Before this patch, hypervior call local_irq_enable() to enable it directly.
      Now IF bit is set on interrupt stack frame, and will be enabled on a return from
      interrupt handler if exterrupt interrupt exists. If no external interrupt, still
      call local_irq_enable() to enable it.
      
      Refer to Intel SDM volum 3, chapter 33.2.
      Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
      Reviewed-by: Gleb Natapov <gleb@redhat.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      a547c6db
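      A hedged reconstruction of the exit-time dispatch; VM_EXIT_INTR_INFO,
      the INTR_INFO_* masks and gate_offset() are standard VMX/IDT
      definitions, while host_idt_base and build_frame_and_call() are
      illustrative stand-ins (the real dispatch is inline assembly):

        u32 intr_info = vmcs_read32(VM_EXIT_INTR_INFO);

        if ((intr_info & (INTR_INFO_VALID_MASK | INTR_INFO_INTR_TYPE_MASK))
            == (INTR_INFO_VALID_MASK | INTR_TYPE_EXT_INTR)) {
            /* the CPU already acknowledged the vector on VM exit */
            unsigned int vector = intr_info & INTR_INFO_VECTOR_MASK;
            gate_desc *desc = (gate_desc *)host_idt_base + vector;

            /* push a synthetic interrupt frame with IF set and jump
             * to the host IDT entry for this vector */
            build_frame_and_call(gate_offset(desc));
        } else {
            local_irq_enable(); /* no external interrupt pending */
        }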
  21. 14 Apr, 2013 (1 commit)
  22. 08 Apr, 2013 (2 commits)
  23. 02 Apr, 2013 (1 commit)
    • pmu: prepare for migration support · afd80d85
      Paolo Bonzini authored
      In order to migrate the PMU state correctly, we need to restore the
      values of MSR_CORE_PERF_GLOBAL_STATUS (a read-only register) and
      MSR_CORE_PERF_GLOBAL_OVF_CTRL (which has side effects when written).
      We also need to write the full 40-bit value of the performance counter,
      which would only be possible with a v3 architectural PMU's full-width
      counter MSRs.
      
      To distinguish host-initiated writes from the guest's, pass the
      full struct msr_data to kvm_pmu_set_msr (a sketch of the flag's
      use follows below).
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Gleb Natapov <gleb@redhat.com>
      afd80d85
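      struct msr_data carries the MSR index, the value and a
      host_initiated flag; a hedged sketch of how a PMU set_msr path can
      use it (the handler shape is illustrative):

        struct msr_data {
            bool host_initiated; /* write from userspace/migration */
            u32 index;           /* MSR number */
            u64 data;
        };

        static int set_global_status(struct kvm_pmu *pmu,
                                     struct msr_data *msr_info)
        {
            /* only a host-initiated write (e.g. migration restore)
             * may set the architecturally read-only register */
            if (msr_info->host_initiated) {
                pmu->global_status = msr_info->data;
                return 0;
            }
            return 1; /* reject guest writes */
        }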
  24. 20 Mar, 2013 (1 commit)