  1. 17 January 2014, 2 commits
    • KVM: SVM: Fix reading of DR6 · 73aaf249
      Jan Kiszka authored
      In contrast to VMX, SVM does not automatically transfer DR6 into the
      VCPU's arch.dr6. So if we face a DR6 read, we must consult a new vendor
      hook to obtain the current value. And as SVM now picks the DR6 state
      from its VMCB, we also need a set callback in order to write updates of
      DR6 back.
      
      Fixes a regression of 020df079.
      Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
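
      The fix above amounts to a vendor-hook indirection: the generic x86 code stops
      assuming vcpu->arch.dr6 is current and instead asks the vendor module for DR6,
      with a matching setter for writes. Below is a minimal user-space sketch of that
      pattern, not the kernel code itself; the names (struct vendor_ops,
      vmx_like_/svm_like_get_dr6, read_dr6) are illustrative.

        /* User-space model of the vendor-hook pattern described above. */
        #include <stdio.h>
        #include <stdint.h>

        struct vcpu {
            uint64_t arch_dr6;   /* generic copy, kept current on VMX-like hardware */
            uint64_t vmcb_dr6;   /* value living in the SVM-like control block */
        };

        struct vendor_ops {
            uint64_t (*get_dr6)(struct vcpu *v);
            void     (*set_dr6)(struct vcpu *v, uint64_t val);
        };

        /* VMX-like backend: arch_dr6 is always up to date, so just use it. */
        static uint64_t vmx_like_get_dr6(struct vcpu *v)               { return v->arch_dr6; }
        static void     vmx_like_set_dr6(struct vcpu *v, uint64_t val) { v->arch_dr6 = val; }

        /* SVM-like backend: DR6 lives in the control block, not in arch_dr6. */
        static uint64_t svm_like_get_dr6(struct vcpu *v)               { return v->vmcb_dr6; }
        static void     svm_like_set_dr6(struct vcpu *v, uint64_t val) { v->vmcb_dr6 = val; }

        /* Generic DR6 read path: always go through the vendor hook. */
        static uint64_t read_dr6(const struct vendor_ops *ops, struct vcpu *v)
        {
            return ops->get_dr6(v);
        }

        int main(void)
        {
            struct vendor_ops svm_ops = { svm_like_get_dr6, svm_like_set_dr6 };
            struct vcpu v = { .arch_dr6 = 0, .vmcb_dr6 = 0xffff0ff0 };

            /* Without the hook, the stale arch_dr6 (0) would have been returned. */
            printf("DR6 = %#llx\n", (unsigned long long)read_dr6(&svm_ops, &v));
            return 0;
        }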
    • add support for Hyper-V reference time counter · e984097b
      Vadim Rozenfeld authored
      Signed-off-by: Peter Lieven <pl@kamp.de>
      Signed-off-by: Gleb Natapov
      Signed-off-by: Vadim Rozenfeld <vrozenfe@redhat.com>
      
      After some consideration I decided to submit only Hyper-V reference
      counters support this time. I will submit iTSC support as a separate
      patch as soon as it is ready.
      
      v1 -> v2
      1. mark TSC page dirty as suggested by
          Eric Northup <digitaleric@google.com> and Gleb
      2. disable local irq when calling get_kernel_ns,
          as it was done by Peter Lieven <pl@kamp.de>
      3. move check for TSC page enable from second patch
          to this one.
      
      v3 -> v4
          Get rid of ref counter offset.
      
      v4 -> v5
          replace __copy_to_user with kvm_write_guest
          when updating the iTSC page.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
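
      For context, the Hyper-V reference TSC page lets a guest derive the 100 ns
      reference time from its own TSC without exiting: it reads a sequence number,
      a scale and an offset from the shared page, computes ((tsc * scale) >> 64) + offset,
      and retries if the sequence changed mid-read. The sketch below models that
      guest-side calculation under those assumptions; the field and helper names and
      the rdtsc stub are illustrative, and a sequence of 0 is treated as "page not usable".

        /* Guest-side model of a Hyper-V-style reference TSC page read. */
        #include <stdint.h>
        #include <stdio.h>

        struct ref_tsc_page {
            volatile uint32_t tsc_sequence;  /* 0: page invalid, fall back to the MSR */
            uint32_t reserved;
            volatile uint64_t tsc_scale;
            volatile int64_t  tsc_offset;
        };

        static uint64_t rdtsc_stub(void) { return 123456789ULL; } /* stand-in for rdtsc */

        /* Returns reference time in 100 ns units, or 0 if the page is not usable. */
        static uint64_t read_ref_counter(const struct ref_tsc_page *p)
        {
            uint32_t seq;
            uint64_t scale, ref;
            int64_t offset;

            do {
                seq = p->tsc_sequence;
                if (seq == 0)
                    return 0;                 /* caller should use the slow MSR path */
                scale  = p->tsc_scale;
                offset = p->tsc_offset;
                ref = (uint64_t)(((unsigned __int128)rdtsc_stub() * scale) >> 64) + offset;
            } while (seq != p->tsc_sequence); /* retry if the host updated the page */

            return ref;
        }

        int main(void)
        {
            struct ref_tsc_page page = { .tsc_sequence = 1, .tsc_scale = 1ULL << 63,
                                         .tsc_offset = 10 };
            printf("ref time = %llu\n", (unsigned long long)read_ref_counter(&page));
            return 0;
        }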
  2. 06 November 2013, 2 commits
  3. 31 October 2013, 2 commits
  4. 14 October 2013, 1 commit
  5. 03 October 2013, 5 commits
  6. 26 August 2013, 1 commit
  7. 07 August 2013, 1 commit
  8. 29 July 2013, 1 commit
  9. 20 July 2013, 1 commit
  10. 27 June 2013, 5 commits
  11. 26 June 2013, 1 commit
  12. 05 June 2013, 2 commits
    • KVM: MMU: reclaim the zapped-obsolete page first · 365c8868
      Xiao Guangrong authored
      As Marcelo pointed out:
      | "(retention of large number of pages while zapping)
      | can be fatal, it can lead to OOM and host crash"
      
      We introduce a list, kvm->arch.zapped_obsolete_pages, to link all
      the pages which are deleted from the mmu cache but not actually
      freed. When page reclaiming is needed, we always zap these pages
      first.
      Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
      Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
      Signed-off-by: Gleb Natapov <gleb@redhat.com>
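
      The ordering described above is the whole point: pages zapped during an
      invalidation pass sit on kvm->arch.zapped_obsolete_pages until they are actually
      freed, and the reclaim path drains that list before it touches live pages. A
      small user-space model of that priority follows; the list handling and names
      (struct mmu_cache, reclaim_pages) are illustrative, not the kernel's.

        /* Model of "reclaim the zapped-obsolete pages first". */
        #include <stdio.h>
        #include <stdlib.h>

        struct page {
            int id;
            struct page *next;
        };

        struct mmu_cache {
            struct page *active;           /* shadow pages still in use */
            struct page *zapped_obsolete;  /* zapped but not yet freed */
        };

        static void free_list(struct page **head, int max, const char *what)
        {
            while (*head && max--) {
                struct page *p = *head;
                *head = p->next;
                printf("freeing %s page %d\n", what, p->id);
                free(p);
            }
        }

        /* Reclaim: always drain the zapped-obsolete list before touching live pages. */
        static void reclaim_pages(struct mmu_cache *c, int nr)
        {
            if (c->zapped_obsolete)
                free_list(&c->zapped_obsolete, nr, "zapped-obsolete");
            else
                free_list(&c->active, nr, "active");   /* would be zap-then-free in KVM */
        }

        static struct page *push(struct page *head, int id)
        {
            struct page *p = malloc(sizeof(*p));
            p->id = id;
            p->next = head;
            return p;
        }

        int main(void)
        {
            struct mmu_cache c = { 0 };
            c.active = push(push(NULL, 1), 2);
            c.zapped_obsolete = push(push(NULL, 10), 11);
            reclaim_pages(&c, 2);   /* frees obsolete pages 11 and 10, leaves active alone */
            return 0;
        }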
    • KVM: MMU: fast invalidate all pages · 5304b8d3
      Xiao Guangrong authored
      The current kvm_mmu_zap_all is really slow: it holds mmu-lock while
      walking and zapping all shadow pages one by one, and it also has to
      zap every guest page's rmap and every shadow page's parent spte list.
      Things get worse as the guest uses more memory or vcpus, so it does
      not scale.
      
      This patch introduces a faster way to invalidate all shadow pages.
      KVM maintains a global MMU generation number, stored in
      kvm->arch.mmu_valid_gen, and every shadow page records the current
      global generation number in sp->mmu_valid_gen when it is created.
      
      When KVM needs to zap all shadow page sptes, it simply increases the
      global generation number and then reloads the root shadow pages on
      all vcpus. Each vcpu then builds a new shadow page table based on the
      current generation number, which ensures the old pages are no longer
      used. The obsolete pages (sp->mmu_valid_gen != kvm->arch.mmu_valid_gen)
      are then zapped using a lock-break technique.
      Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
      Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
      Signed-off-by: Gleb Natapov <gleb@redhat.com>
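
      The scheme reduces "zap all" to a counter bump plus a root reload request,
      with obsolescence detected by comparing generations. The sketch below is a
      stand-alone model of that idea; mmu_valid_gen mirrors the field named in the
      message, while the helper names are illustrative.

        /* Minimal model of generation-based "fast invalidate all pages". */
        #include <stdio.h>
        #include <stdbool.h>

        struct kvm_model { unsigned long mmu_valid_gen; };
        struct shadow_page { unsigned long mmu_valid_gen; int id; };

        static bool is_obsolete(const struct kvm_model *kvm, const struct shadow_page *sp)
        {
            return sp->mmu_valid_gen != kvm->mmu_valid_gen;
        }

        /* "Zap all": O(1) under the lock -- bump the generation and ask every vcpu
         * to reload its root; old pages become obsolete and are zapped lazily
         * (with lock breaking) afterwards. */
        static void fast_zap_all(struct kvm_model *kvm)
        {
            kvm->mmu_valid_gen++;
            /* here KVM would request a root reload on all vcpus */
        }

        static struct shadow_page alloc_page(const struct kvm_model *kvm, int id)
        {
            /* new pages inherit the current generation at creation time */
            return (struct shadow_page){ .mmu_valid_gen = kvm->mmu_valid_gen, .id = id };
        }

        int main(void)
        {
            struct kvm_model kvm = { .mmu_valid_gen = 0 };
            struct shadow_page old = alloc_page(&kvm, 1);

            fast_zap_all(&kvm);
            struct shadow_page fresh = alloc_page(&kvm, 2);

            printf("page %d obsolete: %d\n", old.id, is_obsolete(&kvm, &old));     /* 1 */
            printf("page %d obsolete: %d\n", fresh.id, is_obsolete(&kvm, &fresh)); /* 0 */
            return 0;
        }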
  13. 03 May 2013, 1 commit
  14. 28 April 2013, 2 commits
  15. 27 April 2013, 1 commit
  16. 22 April 2013, 1 commit
  17. 17 April 2013, 2 commits
    • KVM: VMX: Add the deliver posted interrupt algorithm · a20ed54d
      Yang Zhang authored
      Only deliver the posted interrupt when the target vcpu is running and
      there is no previous interrupt pending in the PIR.
      Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
      Reviewed-by: Gleb Natapov <gleb@redhat.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
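
      That delivery rule boils down to two test-and-set steps: set the vector's bit
      in the PIR and bail out if it was already pending, then set an "outstanding
      notification" flag and send the notification IPI only if that flag was clear
      and the target vcpu is in guest mode. The following is a rough user-space model
      of those steps; the names (struct pi_desc, deliver_posted) and the simplified
      flag handling are illustrative.

        /* Model of posted-interrupt delivery: two test-and-set operations. */
        #include <stdio.h>
        #include <stdbool.h>
        #include <stdint.h>

        struct pi_desc {
            uint64_t pir[4];          /* 256-bit pending interrupt requests bitmap */
            uint64_t on;              /* outstanding-notification flag */
        };

        enum vcpu_mode { OUTSIDE_GUEST_MODE, IN_GUEST_MODE };

        static bool test_and_set_bit64(uint64_t *word, unsigned bit)
        {
            uint64_t old = __atomic_fetch_or(word, 1ULL << bit, __ATOMIC_SEQ_CST);
            return (old >> bit) & 1;
        }

        static void deliver_posted(struct pi_desc *pi, int vector, enum vcpu_mode mode)
        {
            /* 1. Mark the vector pending; nothing more to do if it already was. */
            if (test_and_set_bit64(&pi->pir[vector / 64], vector % 64))
                return;

            /* 2. Notify only if no notification is already outstanding and the
             *    target vcpu is running in guest mode; otherwise kick it. */
            if (!test_and_set_bit64(&pi->on, 0) && mode == IN_GUEST_MODE)
                printf("send posted-interrupt notification IPI for vector %d\n", vector);
            else
                printf("vector %d recorded; kick/handle on next entry\n", vector);
        }

        int main(void)
        {
            struct pi_desc pi = { { 0 }, 0 };
            deliver_posted(&pi, 0x31, IN_GUEST_MODE);   /* sends the notification */
            deliver_posted(&pi, 0x31, IN_GUEST_MODE);   /* already pending: no-op */
            return 0;
        }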
    • KVM: VMX: Enable acknowledge interrupt on vmexit · a547c6db
      Yang Zhang authored
      The "acknowledge interrupt on exit" feature controls processor behavior
      for external interrupt acknowledgement. When this control is set, the
      processor acknowledges the interrupt controller to acquire the
      interrupt vector on VM exit.
      
      With this feature enabled, an interrupt that arrives while the target
      cpu is running in VMX non-root mode is handled by the VMX exit handler
      instead of the handler in the IDT. Currently, the VMX handler only
      fakes an interrupt stack frame and jumps to the IDT to let the real
      handler deal with it. Later, we will recognize the interrupt and only
      deliver interrupts that do not belong to the current vcpu through the
      IDT; interrupts that do belong to the current vcpu will be handled
      inside the VMX handler. This reduces KVM's interrupt handling cost.
      
      The interrupt enable logic also changes when this feature is turned
      on. Before this patch, the hypervisor called local_irq_enable() to
      enable interrupts directly. Now the IF bit is set in the faked
      interrupt stack frame, so interrupts are re-enabled on return from
      the interrupt handler when an external interrupt exists; if there is
      no external interrupt, local_irq_enable() is still called.
      
      Refer to Intel SDM volume 3, chapter 33.2.
      Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
      Reviewed-by: Gleb Natapov <gleb@redhat.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
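
      Conceptually, ack-on-exit changes the exit path from "re-enable IRQs and let the
      CPU deliver the pending interrupt through the IDT" to "read the already-acknowledged
      vector from the exit information and dispatch it ourselves". The sketch below
      models only that dispatch decision; the bit layout loosely follows the VM-exit
      interruption-information format, and the table and handler names are illustrative.

        /* Model of "acknowledge interrupt on exit" dispatch. */
        #include <stdio.h>
        #include <stdint.h>

        #define INTR_INFO_VALID   (1u << 31)
        #define INTR_INFO_VECTOR  0xffu

        typedef void (*irq_handler_t)(void);

        static void timer_handler(void)  { printf("timer interrupt handled\n"); }
        static irq_handler_t idt_model[256];   /* stand-in for the host IDT */

        static void handle_external_interrupt(uint32_t exit_intr_info)
        {
            if (!(exit_intr_info & INTR_INFO_VALID)) {
                /* No acknowledged interrupt: behave as before, just re-enable IRQs. */
                printf("local_irq_enable()\n");
                return;
            }

            unsigned vector = exit_intr_info & INTR_INFO_VECTOR;

            /* The real code builds a fake interrupt stack frame (with IF set, so IRQs
             * come back on the handler's return) and jumps to the IDT entry; here we
             * just call through a table. */
            idt_model[vector]();
        }

        int main(void)
        {
            idt_model[0xef] = timer_handler;
            handle_external_interrupt(INTR_INFO_VALID | 0xef);  /* acknowledged vector */
            handle_external_interrupt(0);                       /* no external interrupt */
            return 0;
        }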
  18. 14 April 2013, 1 commit
  19. 08 April 2013, 2 commits
  20. 02 April 2013, 1 commit
    • pmu: prepare for migration support · afd80d85
      Paolo Bonzini authored
      In order to migrate the PMU state correctly, we need to restore the
      values of MSR_CORE_PERF_GLOBAL_STATUS (a read-only register) and
      MSR_CORE_PERF_GLOBAL_OVF_CTRL (which has side effects when written).
      We also need to write the full 40-bit value of the performance counter,
      which would only be possible with a v3 architectural PMU's full-width
      counter MSRs.
      
      To distinguish host-initiated writes from the guest's, pass the
      full struct msr_data to kvm_pmu_set_msr.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Gleb Natapov <gleb@redhat.com>
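
      The mechanical part of the change is visible in the signature: once
      kvm_pmu_set_msr receives the whole msr_data, it can tell a host-initiated
      (migration) write from a guest write and accept restores of an otherwise
      read-only status register. A hedged sketch of that check follows; the struct
      layout and return convention are simplified and the pmu model is illustrative.

        /* Sketch of distinguishing host-initiated MSR writes from guest writes. */
        #include <stdio.h>
        #include <stdint.h>
        #include <stdbool.h>

        #define MSR_CORE_PERF_GLOBAL_STATUS 0x38e

        struct msr_data {
            bool     host_initiated;   /* set for ioctl-driven (migration) accesses */
            uint32_t index;
            uint64_t data;
        };

        struct pmu_model { uint64_t global_status; };

        /* Returns 0 on success, 1 if the write is rejected (guest #GP). */
        static int pmu_set_msr(struct pmu_model *pmu, const struct msr_data *msr)
        {
            switch (msr->index) {
            case MSR_CORE_PERF_GLOBAL_STATUS:
                if (msr->host_initiated) {        /* restoring saved state is allowed */
                    pmu->global_status = msr->data;
                    return 0;
                }
                return 1;                         /* read-only from the guest's view */
            default:
                return 1;
            }
        }

        int main(void)
        {
            struct pmu_model pmu = { 0 };
            struct msr_data from_guest = { false, MSR_CORE_PERF_GLOBAL_STATUS, 0x1 };
            struct msr_data from_host  = { true,  MSR_CORE_PERF_GLOBAL_STATUS, 0x1 };

            printf("guest write: %s\n", pmu_set_msr(&pmu, &from_guest) ? "rejected" : "ok");
            printf("host write:  %s\n", pmu_set_msr(&pmu, &from_host)  ? "rejected" : "ok");
            return 0;
        }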
  21. 20 March 2013, 1 commit
  22. 14 March 2013, 2 commits
  23. 13 March 2013, 1 commit
    • KVM: x86: Rework INIT and SIPI handling · 66450a21
      Jan Kiszka authored
      A VCPU sending INIT or SIPI to another VCPU races to set the remote
      VCPU's mp_state. If we were unlucky, KVM_MP_STATE_INIT_RECEIVED was
      overwritten by kvm_emulate_halt and thus got lost.
      
      This introduces APIC events for those two signals, keeping them in
      kvm_apic until kvm_apic_accept_events is run over the target vcpu
      context. kvm_apic_has_events reports to kvm_arch_vcpu_runnable whether
      there are pending events, and thus whether vcpu blocking should end.
      
      The patch comes with the side effect of effectively obsoleting
      KVM_MP_STATE_SIPI_RECEIVED. We still accept it from user space, but
      immediately translate it to KVM_MP_STATE_INIT_RECEIVED + KVM_APIC_SIPI.
      The vcpu itself will no longer enter the KVM_MP_STATE_SIPI_RECEIVED
      state. That also means we no longer exit to user space after receiving a
      SIPI event.
      
      Furthermore, we already reset the VCPU on INIT, only fixing up the code
      segment later on when SIPI arrives. Moreover, we fix INIT handling for
      the BSP: it never enters wait-for-SIPI but directly starts over on INIT.
      Tested-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
      Signed-off-by: Gleb Natapov <gleb@redhat.com>
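
      The race disappears because senders never touch the target's mp_state anymore:
      they only set bits in a pending-events mask, and the target vcpu consumes those
      bits from its own context before entering the guest. Below is a simplified,
      single-threaded model of that hand-off; names such as pending_events and
      accept_events follow the message, the rest is illustrative.

        /* Model of the INIT/SIPI rework: senders set bits, the owner applies them. */
        #include <stdio.h>
        #include <stdint.h>
        #include <stdbool.h>

        #define APIC_EVT_INIT (1u << 0)
        #define APIC_EVT_SIPI (1u << 1)

        enum mp_state { MP_RUNNABLE, MP_INIT_RECEIVED };

        struct vcpu_model {
            uint32_t pending_events;   /* written by senders, consumed by the owner */
            enum mp_state mp_state;    /* only ever written by the owner vcpu */
            uint8_t sipi_vector;
        };

        /* Sender side: never touches mp_state directly. */
        static void send_init(struct vcpu_model *v)
        {
            v->pending_events |= APIC_EVT_INIT;
        }

        static void send_sipi(struct vcpu_model *v, uint8_t vec)
        {
            v->sipi_vector = vec;
            v->pending_events |= APIC_EVT_SIPI;
        }

        static bool has_events(const struct vcpu_model *v)
        {
            return v->pending_events != 0;
        }

        /* Target side: run in the vcpu's own context before entering the guest. */
        static void accept_events(struct vcpu_model *v)
        {
            if (v->pending_events & APIC_EVT_INIT) {
                v->pending_events &= ~APIC_EVT_INIT;
                v->mp_state = MP_INIT_RECEIVED;         /* vcpu reset happens here */
                printf("INIT accepted: vcpu reset, waiting for SIPI\n");
            }
            if ((v->pending_events & APIC_EVT_SIPI) && v->mp_state == MP_INIT_RECEIVED) {
                v->pending_events &= ~APIC_EVT_SIPI;
                v->mp_state = MP_RUNNABLE;              /* start at the SIPI vector */
                printf("SIPI accepted: start at vector %#x\n", v->sipi_vector);
            }
        }

        int main(void)
        {
            struct vcpu_model ap = { 0, MP_RUNNABLE, 0 };
            send_init(&ap);
            send_sipi(&ap, 0x9f);
            if (has_events(&ap))   /* would make kvm_arch_vcpu_runnable() return true */
                accept_events(&ap);
            return 0;
        }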
  24. 12 March 2013, 1 commit