1. 26 Aug 2013, 2 commits
  2. 07 Aug 2013, 1 commit
  3. 29 Jul 2013, 3 commits
  4. 18 Jul 2013, 4 commits
  5. 27 Jun 2013, 2 commits
    • kvm: Add a tracepoint write_tsc_offset · 489223ed
      Authored by Yoshihiro YUNOMAE
      Add a tracepoint write_tsc_offset for tracing TSC offset changes.
      We want to merge ftrace trace data of guest OSes and the host OS, using
      the TSC as the timestamp, in chronological order. To do that we need the
      "TSC offset" value for each guest, because the TSC value on a guest is
      always the host TSC plus the guest's TSC offset. Given the TSC offset,
      we can calculate the host TSC value for each guest event from the offset
      and the event's TSC value. These host TSC values are then used to merge
      the trace data of the guests and the host in chronological order.
      (Note: the trace_clock of both the host and the guest must be set to
      x86-tsc in this case.)
      
      This tracepoint also records the vcpu_id, which can be used to merge
      trace data for SMP guests. A merge tool reads the TSC offset for each
      vcpu and then converts that vcpu's guest TSC values to host TSC values.
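      For illustration, a minimal sketch of the conversion such a merge tool
      would perform (the helper name guest_tsc_to_host_tsc is hypothetical and
      not part of this patch):

       #include <stdint.h>

       /*
        * guest_tsc = host_tsc + tsc_offset, so a guest event's timestamp is
        * mapped back to the host TSC domain by subtracting the per-vcpu
        * offset reported by the kvm_write_tsc_offset tracepoint.
        */
       static inline uint64_t guest_tsc_to_host_tsc(uint64_t guest_tsc,
                                                    int64_t tsc_offset)
       {
               return guest_tsc - (uint64_t)tsc_offset;
       }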
      
      The TSC offset is stored in the VMCS by vmx_write_tsc_offset() or
      vmx_adjust_tsc_offset(). KVM executes the former function when a guest
      boots; the latter is executed when the kvm clock is updated. Only the
      host can read the TSC offset value from the VMCS, so the host needs to
      output the TSC offset value whenever it changes.
      
      Since the TSC offset changes rarely, its events could be overwritten by
      other, more frequent events while tracing. To avoid that, I recommend
      using a dedicated tracing instance for this event:
      
      1. set up an instance before booting a guest
       # cd /sys/kernel/debug/tracing/instances
       # mkdir tsc_offset
       # cd tsc_offset
       # echo x86-tsc > trace_clock
       # echo 1 > events/kvm/kvm_write_tsc_offset/enable
      
      2. boot a guest
      Signed-off-by: Yoshihiro YUNOMAE <yoshihiro.yunomae.ez@hitachi.com>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Gleb Natapov <gleb@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Acked-by: Marcelo Tosatti <mtosatti@redhat.com>
      Signed-off-by: Gleb Natapov <gleb@redhat.com>
    • KVM: MMU: fast invalidate all mmio sptes · f8f55942
      Authored by Xiao Guangrong
      This patch introduces a very simple and scalable way to invalidate all
      mmio sptes - it need not walk any shadow pages or hold the mmu-lock.
      
      KVM maintains a global mmio valid generation-number, stored in
      kvm->memslots.generation, and every mmio spte stores the current global
      generation-number in its available bits when it is created.
      
      When KVM needs to zap all mmio sptes, it simply increases the global
      generation-number. When a guest does an mmio access, KVM intercepts the
      MMIO #PF, walks the shadow page table and gets the mmio spte. If the
      generation-number on the spte does not equal the global
      generation-number, KVM goes to the normal #PF handler to update the
      mmio spte.
      
      Since 19 bits are used to store the generation-number in an mmio spte,
      we zap all mmio sptes when the number wraps around.
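      For illustration only, a conceptual sketch of the generation check (the
      names below are made up for this sketch; the actual kernel code differs):

       #include <stdbool.h>
       #include <stdint.h>

       /* 19 available bits of the mmio spte hold the cached generation. */
       #define MMIO_GEN_MASK ((1ull << 19) - 1)

       /*
        * An mmio spte caches the generation that was current when it was
        * created; a mismatch with kvm->memslots.generation means the spte
        * is stale and must be rebuilt by the normal #PF path.
        */
       static bool mmio_spte_is_stale(uint64_t spte_gen, uint64_t global_gen)
       {
               return (spte_gen & MMIO_GEN_MASK) !=
                      (global_gen & MMIO_GEN_MASK);
       }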
      Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
      Reviewed-by: Gleb Natapov <gleb@redhat.com>
      Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  6. 26 Jun 2013, 1 commit
  7. 21 Jun 2013, 1 commit
  8. 18 Jun 2013, 1 commit
  9. 12 Jun 2013, 1 commit
  10. 05 Jun 2013, 4 commits
  11. 16 May 2013, 1 commit
  12. 08 May 2013, 1 commit
  13. 03 May 2013, 1 commit
  14. 30 Apr 2013, 1 commit
  15. 28 Apr 2013, 2 commits
  16. 27 Apr 2013, 1 commit
  17. 22 Apr 2013, 3 commits
  18. 17 Apr 2013, 4 commits
  19. 16 Apr 2013, 1 commit
  20. 14 Apr 2013, 1 commit
  21. 08 Apr 2013, 1 commit
  22. 07 Apr 2013, 1 commit
  23. 02 Apr 2013, 1 commit
    • pmu: prepare for migration support · afd80d85
      Authored by Paolo Bonzini
      In order to migrate the PMU state correctly, we need to restore the
      values of MSR_CORE_PERF_GLOBAL_STATUS (a read-only register) and
      MSR_CORE_PERF_GLOBAL_OVF_CTRL (which has side effects when written).
      We also need to write the full 40-bit value of the performance counter,
      which would only be possible with a v3 architectural PMU's full-width
      counter MSRs.
      
      To distinguish host-initiated writes from the guest's, pass the
      full struct msr_data to kvm_pmu_set_msr.
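      For illustration, a simplified sketch of the idea (the struct layout and
      the handler below are abbreviated for this sketch, not the actual KVM
      code):

       #include <stdbool.h>
       #include <stdint.h>

       struct msr_data {
               bool host_initiated;  /* write from userspace, e.g. migration restore */
               uint32_t index;       /* MSR number */
               uint64_t data;        /* value to write */
       };

       #define MSR_CORE_PERF_GLOBAL_STATUS 0x38e

       /* Only a host-initiated write may set the read-only GLOBAL_STATUS MSR. */
       static int pmu_set_msr_sketch(struct msr_data *msr)
       {
               if (msr->index == MSR_CORE_PERF_GLOBAL_STATUS &&
                   !msr->host_initiated)
                       return 1;   /* reject guest writes */
               /* ... restore or update PMU state from msr->data ... */
               return 0;
       }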
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Gleb Natapov <gleb@redhat.com>
  24. 20 Mar 2013, 1 commit