1. 15 Oct, 2013: 1 commit
  2. 03 Oct, 2013: 4 commits
  3. 30 Sep, 2013: 1 commit
    • KVM: Convert kvm_lock back to non-raw spinlock · 2f303b74
      Authored by Paolo Bonzini
      In commit e935b837 ("KVM: Convert kvm_lock to raw_spinlock"),
      the kvm_lock was made a raw lock.  However, the KVM mmu_shrink()
      function tries to grab the (non-raw) mmu_lock while the raw
      kvm_lock is held.  This leads to the following:
      
      BUG: sleeping function called from invalid context at kernel/rtmutex.c:659
      in_atomic(): 1, irqs_disabled(): 0, pid: 55, name: kswapd0
      Preemption disabled at:[<ffffffffa0376eac>] mmu_shrink+0x5c/0x1b0 [kvm]
      
      Pid: 55, comm: kswapd0 Not tainted 3.4.34_preempt-rt
      Call Trace:
       [<ffffffff8106f2ad>] __might_sleep+0xfd/0x160
       [<ffffffff817d8d64>] rt_spin_lock+0x24/0x50
       [<ffffffffa0376f3c>] mmu_shrink+0xec/0x1b0 [kvm]
       [<ffffffff8111455d>] shrink_slab+0x17d/0x3a0
       [<ffffffff81151f00>] ? mem_cgroup_iter+0x130/0x260
       [<ffffffff8111824a>] balance_pgdat+0x54a/0x730
       [<ffffffff8111fe47>] ? set_pgdat_percpu_threshold+0xa7/0xd0
       [<ffffffff811185bf>] kswapd+0x18f/0x490
       [<ffffffff81070961>] ? get_parent_ip+0x11/0x50
       [<ffffffff81061970>] ? __init_waitqueue_head+0x50/0x50
       [<ffffffff81118430>] ? balance_pgdat+0x730/0x730
       [<ffffffff81060d2b>] kthread+0xdb/0xe0
       [<ffffffff8106e122>] ? finish_task_switch+0x52/0x100
       [<ffffffff817e1e94>] kernel_thread_helper+0x4/0x10
       [<ffffffff81060c50>] ? __init_kthread_worker+0x
      
      After the previous patch, kvm_lock need not be a raw spinlock anymore,
      so change it back.
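
      For context, a minimal sketch of the problematic nesting on PREEMPT_RT
      (illustrative only; the lock names below are stand-ins, not the real
      kvm_lock/mmu_lock code): a raw_spinlock_t remains a true spinning lock,
      while a plain spinlock_t becomes a sleeping rt_mutex, so taking the
      latter while holding the former triggers the splat above.

       #include <linux/spinlock.h>

       static DEFINE_RAW_SPINLOCK(outer_raw_lock); /* stand-in for the raw kvm_lock */
       static DEFINE_SPINLOCK(inner_lock);         /* stand-in for the non-raw mmu_lock */

       static void buggy_nesting(void)
       {
               raw_spin_lock(&outer_raw_lock);
               /* On PREEMPT_RT this spin_lock() may sleep while preemption is
                * disabled by the raw lock -> "sleeping function called from
                * invalid context". */
               spin_lock(&inner_lock);
               spin_unlock(&inner_lock);
               raw_spin_unlock(&outer_raw_lock);
       }
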
      Reported-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: kvm@vger.kernel.org
      Cc: gleb@redhat.com
      Cc: jan.kiszka@siemens.com
      Reviewed-by: Gleb Natapov <gleb@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  4. 28 Aug, 2013: 2 commits
  5. 26 Aug, 2013: 2 commits
  6. 07 Aug, 2013: 1 commit
  7. 29 Jul, 2013: 3 commits
  8. 18 Jul, 2013: 4 commits
  9. 27 Jun, 2013: 2 commits
    • kvm: Add a tracepoint write_tsc_offset · 489223ed
      Authored by Yoshihiro YUNOMAE
      Add a tracepoint write_tsc_offset for tracing TSC offset changes.
      We want to merge ftrace trace data of guest OSes and the host OS, using
      the TSC as the timestamp, in chronological order. We need the "TSC offset"
      value for each guest when merging them, because the TSC value on a guest
      is always the host TSC plus the guest's TSC offset. Given the TSC offset,
      we can compute the host TSC value for each guest event from the event's
      TSC value, and those host TSC values are what lets us merge guest and
      host trace data in chronological order.
      (Note: the trace_clock of both the host and the guest must be set to
      x86-tsc in this case.)
      
      This tracepoint also records the vcpu_id, which can be used to merge
      trace data for SMP guests. A merge tool reads the TSC offset for each
      vcpu and then converts that vcpu's guest TSC values to host TSC values
      (a sketch of this conversion follows below).
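
      As a rough illustration (not part of this patch), a hypothetical merge
      tool could invert that relationship as follows; guest_to_host_tsc() is an
      illustrative name, not a real kernel or tool API:

       #include <stdint.h>

       /* guest_tsc = host_tsc + tsc_offset, so recover the host timestamp by
        * subtracting the offset reported by the kvm_write_tsc_offset event. */
       static inline uint64_t guest_to_host_tsc(uint64_t guest_tsc,
                                                int64_t tsc_offset)
       {
               return guest_tsc - (uint64_t)tsc_offset;
       }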
      
      The TSC offset is stored in the VMCS by vmx_write_tsc_offset() or
      vmx_adjust_tsc_offset(). KVM executes the former when a guest boots and
      the latter when the kvm clock is updated. Only the host can read the TSC
      offset value from the VMCS, so the host needs to output the TSC offset
      value whenever it changes.
      
      Since the TSC offset changes rarely, its events could be overwritten by
      other, more frequent events while tracing. To avoid that, I recommend
      using a dedicated tracing instance for this event:
      
      1. set up an instance before booting a guest
       # cd /sys/kernel/debug/tracing/instances
       # mkdir tsc_offset
       # cd tsc_offset
       # echo x86-tsc > trace_clock
       # echo 1 > events/kvm/kvm_write_tsc_offset/enable
      
      2. boot a guest
      Signed-off-by: Yoshihiro YUNOMAE <yoshihiro.yunomae.ez@hitachi.com>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Gleb Natapov <gleb@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Acked-by: Marcelo Tosatti <mtosatti@redhat.com>
      Signed-off-by: Gleb Natapov <gleb@redhat.com>
    • KVM: MMU: fast invalidate all mmio sptes · f8f55942
      Authored by Xiao Guangrong
      This patch introduces a very simple and scalable way to invalidate all
      mmio sptes - it does not need to walk any shadow pages or hold the
      mmu-lock.
      
      KVM maintains a global mmio valid generation-number, stored in
      kvm->memslots.generation, and every mmio spte stores the current global
      generation-number in its available bits when it is created.
      
      When KVM needs to zap all mmio sptes, it simply increases the global
      generation-number. When a guest does an mmio access, KVM intercepts the
      MMIO #PF, walks the shadow page table and gets the mmio spte. If the
      generation-number on the spte does not equal the global
      generation-number, it goes to the normal #PF handler to update the mmio
      spte (see the sketch below).
      
      Since 19 bits are used to store the generation-number in the mmio spte,
      we zap all mmio sptes when the number wraps around.
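
      A minimal, illustrative sketch of the check described above (names and
      bit layout here are hypothetical, not the actual KVM implementation):

       #include <stdbool.h>
       #include <stdint.h>

       /* Hypothetical layout: the low 19 bits hold the generation that was
        * current when the mmio spte was created. */
       #define MMIO_GEN_BITS 19
       #define MMIO_GEN_MASK ((1u << MMIO_GEN_BITS) - 1)

       /* True if the spte was created in the current generation; otherwise
        * the fault takes the slow path and recreates the mmio spte. */
       static inline bool mmio_spte_is_valid(uint32_t spte_gen, uint32_t global_gen)
       {
               return (spte_gen & MMIO_GEN_MASK) == (global_gen & MMIO_GEN_MASK);
       }

       /* Zapping all mmio sptes is just bumping the global generation number. */
       static inline void zap_all_mmio_sptes(uint32_t *global_gen)
       {
               (*global_gen)++;
       }
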
      Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
      Reviewed-by: Gleb Natapov <gleb@redhat.com>
      Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  10. 26 Jun, 2013: 1 commit
  11. 21 Jun, 2013: 1 commit
  12. 18 Jun, 2013: 1 commit
  13. 12 Jun, 2013: 1 commit
  14. 05 Jun, 2013: 4 commits
  15. 16 May, 2013: 1 commit
  16. 08 May, 2013: 1 commit
  17. 03 May, 2013: 1 commit
  18. 30 Apr, 2013: 1 commit
  19. 28 Apr, 2013: 2 commits
  20. 27 Apr, 2013: 1 commit
  21. 22 Apr, 2013: 3 commits
  22. 17 Apr, 2013: 2 commits