1. 12 Jun 2013: 1 commit
  2. 05 Jun 2013: 4 commits
  3. 16 May 2013: 1 commit
  4. 08 May 2013: 1 commit
  5. 03 May 2013: 1 commit
  6. 30 Apr 2013: 1 commit
  7. 28 Apr 2013: 2 commits
  8. 27 Apr 2013: 1 commit
  9. 22 Apr 2013: 3 commits
  10. 17 Apr 2013: 4 commits
  11. 16 Apr 2013: 1 commit
  12. 14 Apr 2013: 1 commit
  13. 08 Apr 2013: 1 commit
  14. 07 Apr 2013: 1 commit
  15. 02 Apr 2013: 1 commit
    • pmu: prepare for migration support · afd80d85
      Authored by Paolo Bonzini
      In order to migrate the PMU state correctly, we need to restore the
      values of MSR_CORE_PERF_GLOBAL_STATUS (a read-only register) and
      MSR_CORE_PERF_GLOBAL_OVF_CTRL (which has side effects when written).
      We also need to write the full 40-bit value of the performance counter,
      which would only be possible with a v3 architectural PMU's full-width
      counter MSRs.
      
      To distinguish host-initiated writes from the guest's, pass the
      full struct msr_data to kvm_pmu_set_msr (see the sketch after this
      entry).
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Gleb Natapov <gleb@redhat.com>
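
      A minimal sketch of the interface change, in C: with the whole struct
      msr_data available (the layout below matches the kernel's definition
      of this period), the PMU code can check host_initiated and allow
      writes that restore otherwise read-only state during migration.
      struct pmu_state and pmu_set_global_status are hypothetical
      simplifications for illustration, not the actual patch.

          #include <stdbool.h>
          #include <stdint.h>

          /* Mirrors the kernel's struct msr_data of this period. */
          struct msr_data {
              bool     host_initiated;  /* write came from userspace, e.g. KVM_SET_MSRS */
              uint32_t index;           /* MSR number */
              uint64_t data;            /* value being written */
          };

          /* Hypothetical, stripped-down stand-in for the vcpu's PMU state. */
          struct pmu_state {
              uint64_t global_status;   /* MSR_CORE_PERF_GLOBAL_STATUS */
          };

          /* A host-initiated write may restore the read-only GLOBAL_STATUS
           * register (needed when migrating PMU state in); the same write
           * from the guest fails as it would on hardware.  Returns 0 on
           * success, nonzero to signal #GP to the caller. */
          static int pmu_set_global_status(struct pmu_state *pmu,
                                           const struct msr_data *msr_info)
          {
              if (!msr_info->host_initiated)
                  return 1;  /* guest write to a read-only MSR: #GP */

              pmu->global_status = msr_info->data;
              return 0;
          }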
  16. 20 Mar 2013: 2 commits
  17. 19 Mar 2013: 1 commit
  18. 14 Mar 2013: 1 commit
    • KVM: x86: Optimize mmio spte zapping when creating/moving memslot · 982b3394
      Authored by Takuya Yoshikawa
      When we create or move a memory slot, we need to zap mmio sptes.
      Currently, zap_all() is used for this, and that causes two problems:
       - extra page faults after zapping mmu pages
       - long mmu_lock hold time during zapping mmu pages

      For the latter, Marcelo reported a disastrous mmu_lock hold time during
      hot-plug, which made the guest unresponsive for a long time.

      This patch takes a simple approach to fixing these problems: do not zap
      mmu pages unless they are marked mmio cached (see the sketch after this
      entry).  On our test box, this took only 50us for a 4GB guest, and we
      no longer saw millisecond-scale mmu_lock hold times.

      Note that we still need to do zap_all() for other cases, so further
      work is also needed; Xiao's work may be the one.
      Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
      Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
      Signed-off-by: Gleb Natapov <gleb@redhat.com>
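
      A minimal sketch of the approach, assuming each shadow page carries
      the mmio_cached flag the patch adds to struct kvm_mmu_page; struct
      mmu_page and the zap_page callback are hypothetical simplifications
      of the kernel's list and zap helpers, and mmu_lock handling is left
      out.

          #include <stdbool.h>
          #include <stddef.h>

          /* Stripped-down stand-in for a shadow page; mmio_cached is set
           * whenever an mmio spte is created in the page. */
          struct mmu_page {
              bool mmio_cached;
              struct mmu_page *next;    /* link in the active-pages list */
          };

          /* Zap only the pages that cache mmio sptes.  Ordinary pages stay
           * mapped, so the guest takes no extra page faults and the lock
           * is held far more briefly than with zap_all(). */
          static void zap_mmio_sptes(struct mmu_page *active_pages,
                                     void (*zap_page)(struct mmu_page *sp))
          {
              for (struct mmu_page *sp = active_pages; sp != NULL; sp = sp->next) {
                  if (!sp->mmio_cached)
                      continue;
                  zap_page(sp);
              }
          }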
  19. 13 Mar 2013: 1 commit
    • KVM: x86: Rework INIT and SIPI handling · 66450a21
      Authored by Jan Kiszka
      A VCPU sending INIT or SIPI to some other VCPU races for setting the
      remote VCPU's mp_state. If we were unlucky, KVM_MP_STATE_INIT_RECEIVED
      was overwritten by kvm_emulate_halt and thus got lost.

      This patch introduces APIC events for those two signals, keeping them
      in kvm_apic until kvm_apic_accept_events is run over the target vcpu
      context (sketched after this entry). kvm_apic_has_events reports to
      kvm_arch_vcpu_runnable whether there are pending events, and thus
      whether vcpu blocking should end.

      The patch comes with the side effect of effectively obsoleting
      KVM_MP_STATE_SIPI_RECEIVED. We still accept it from user space, but
      immediately translate it to KVM_MP_STATE_INIT_RECEIVED + KVM_APIC_SIPI.
      The vcpu itself will no longer enter the KVM_MP_STATE_SIPI_RECEIVED
      state. That also means we no longer exit to user space after receiving
      a SIPI event.

      Furthermore, we already reset the VCPU on INIT, only fixing up the code
      segment later when SIPI arrives. Moreover, we fix INIT handling for the
      BSP: it never enters wait-for-SIPI but directly starts over on INIT.
      Tested-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
      Signed-off-by: Gleb Natapov <gleb@redhat.com>
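
      A minimal, single-threaded sketch of the event-latching scheme, with
      all names simplified: the sending VCPU only sets a pending flag, and
      the mp_state transition happens later in the target vcpu's own
      context, which removes the race the commit describes. The real code
      latches events with atomic set_bit()/test_and_clear_bit() on
      apic->pending_events.

          #include <stdbool.h>

          enum mp_state { MP_RUNNABLE, MP_INIT_RECEIVED, MP_HALTED };

          struct vcpu {
              bool          is_bsp;        /* bootstrap processor? */
              bool          pending_init;  /* latched by the sending VCPU */
              bool          pending_sipi;  /* latched by the sending VCPU */
              unsigned char sipi_vector;
              enum mp_state mp_state;
          };

          static void vcpu_reset(struct vcpu *v) { (void)v; /* full reset elided */ }

          static void deliver_sipi(struct vcpu *v, unsigned char vector)
          {
              (void)v; (void)vector;       /* would load CS:IP from the vector */
          }

          /* Runs in the target vcpu's own context before it re-enters the
           * guest, so nothing else races with the mp_state transitions. */
          static void apic_accept_events(struct vcpu *v)
          {
              if (v->pending_init) {
                  v->pending_init = false;
                  vcpu_reset(v);
                  /* The BSP never enters wait-for-SIPI: it restarts directly. */
                  v->mp_state = v->is_bsp ? MP_RUNNABLE : MP_INIT_RECEIVED;
              }
              if (v->pending_sipi) {
                  v->pending_sipi = false;
                  if (v->mp_state == MP_INIT_RECEIVED) {
                      deliver_sipi(v, v->sipi_vector);
                      v->mp_state = MP_RUNNABLE;
                  }
              }
          }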
  20. 12 Mar 2013: 2 commits
  21. 05 Mar 2013: 3 commits
  22. 27 Feb 2013: 1 commit
  23. 20 Feb 2013: 1 commit
  24. 11 Feb 2013: 1 commit
  25. 05 Feb 2013: 1 commit
  26. 29 Jan 2013: 1 commit
  27. 24 Jan 2013: 1 commit