1. 18 Feb, 2012 (3 commits)
  2. 19 Jan, 2012 (1 commit)
  3. 13 Jan, 2012 (1 commit)
  4. 15 Dec, 2011 (1 commit)
  5. 13 Dec, 2011 (1 commit)
  6. 06 Dec, 2011 (4 commits)
  7. 05 Dec, 2011 (1 commit)
  8. 02 Dec, 2011 (1 commit)
  9. 08 Nov, 2011 (1 commit)
  10. 01 Nov, 2011 (1 commit)
  11. 27 Oct, 2011 (1 commit)
  12. 22 Oct, 2011 (5 commits)
  13. 19 Oct, 2011 (1 commit)
    • runstate: Allow user to migrate twice · 8a9236f1
      Committed by Luiz Capitulino
      It should be a matter of allowing the transition POSTMIGRATE ->
      FINISH_MIGRATE, but it turns out that the VM won't do the
      transition the second time because it's already stopped.
      
      So this commit also adds vm_stop_force_state(), which performs
      the transition even if the VM is already stopped (a sketch of the
      helper follows this entry).
      
      While there, also allow other states to migrate.
      Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
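      A minimal sketch of the helper this commit describes, assuming the
      runstate_is_running()/runstate_set() API of QEMU's cpus.c from this
      era; the body is illustrative, not the exact patch:
      
          /* Like vm_stop(), but force the runstate transition even if
           * the VM is already stopped, so that a second migration can
           * move POSTMIGRATE -> FINISH_MIGRATE. */
          void vm_stop_force_state(RunState state)
          {
              if (runstate_is_running()) {
                  vm_stop(state);       /* normal path: stop and set state */
              } else {
                  runstate_set(state);  /* already stopped: just record it */
              }
          }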
  14. 20 Sep, 2011 (1 commit)
  15. 16 Sep, 2011 (3 commits)
  16. 02 Sep, 2011 (1 commit)
    • main: force enabling of I/O thread · 12d4536f
      Committed by Anthony Liguori
      Enabling the I/O thread by default seems like an important part of declaring
      1.0.  Besides allowing true SMP support with KVM, the I/O thread means that the
      TCG VCPU doesn't have to multiplex itself with the I/O dispatch
      routines, which currently requires a (racy) signal-based alarm
      system.
      
      I know there have been concerns about performance.  I think so far the ones that
      have come up (virtio-net) are most likely due to secondary reasons like
      decreased batching.
      
      I think we ought to force enabling I/O thread early in 1.0 development and
      commit to resolving any lingering issues.
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
  17. 23 Aug, 2011 (2 commits)
  18. 21 Aug, 2011 (1 commit)
  19. 24 Jul, 2011 (1 commit)
  20. 17 Jul, 2011 (2 commits)
  21. 27 Jun, 2011 (1 commit)
  22. 24 Jun, 2011 (1 commit)
  23. 20 Jun, 2011 (1 commit)
  24. 16 Jun, 2011 (1 commit)
  25. 15 Apr, 2011 (3 commits)
    • qemu_next_deadline should not consider host-time timers · cb842c90
      Committed by Paolo Bonzini
      It is purely for icount-based virtual timers.  And now that we got
      the code right, rename the function to clarify the intended scope
      (a sketch of the icount-only deadline follows this entry).
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Tested-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
      Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
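      A sketch of the icount-only deadline computation, assuming the
      active_timers[] per-clock lists and qemu_get_clock_ns() helper of
      this era's qemu-timer.c; the post-rename identifier used here,
      qemu_next_icount_deadline, is an assumption, and the body is
      condensed:
      
          /* Virtual-clock nanoseconds until the next vm_clock timer
           * fires; host-time (realtime/host clock) timers are ignored.
           * Only meaningful when -icount drives vm_clock. */
          static int64_t qemu_next_icount_deadline(void)
          {
              /* Cap the wait so the insn counter keeps advancing even
               * when no virtual timer is pending. */
              int64_t delta = INT32_MAX;
      
              if (active_timers[QEMU_CLOCK_VIRTUAL]) {
                  delta = active_timers[QEMU_CLOCK_VIRTUAL]->expire_time -
                          qemu_get_clock_ns(vm_clock);
              }
              return MAX(delta, 0);
          }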
    • enable vm_clock to "warp" in the iothread+icount case · ab33fcda
      Committed by Paolo Bonzini
      The previous patch, however, is not enough, because if the virtual CPU
      goes to sleep waiting for a future timer interrupt to wake it up, qemu
      deadlocks.  The timer interrupt never comes because time is driven by
      icount, but the vCPU doesn't run any insns.
      
      You could say that VCPUs should never go to sleep in icount
      mode if there is a pending vm_clock timer; rather time should
      just warp to the next vm_clock event with no sleep ever taking place.
      Even better, you can sleep for an amount of real time related to
      the time left until the next event, so that the warps are not too
      visible externally; otherwise the guest could, for example, end up
      sending network packets continuously instead of every 100ms.
      
      This is what this patch implements (see the sketch after this
      entry).  qemu_clock_warp is called: 1)
      whenever a vm_clock timer is adjusted, to ensure the warp_timer is
      synchronized; 2) at strategic points in the CPU thread, to make sure
      the insn counter is synchronized before the CPU starts running.
      In any case, the warp_timer is disabled while the CPU is running,
      because the insn counter will then be making progress on its own.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Tested-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
      Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
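      A sketch of the warp logic described above, assuming this era's
      timer API (qemu_del_timer/qemu_mod_timer, rt_clock,
      all_cpu_threads_idle()); the icount_warp_timer and
      vm_clock_warp_start names follow the commit message's warp_timer
      wording but are assumptions, and the body is condensed:
      
          void qemu_clock_warp(QEMUClock *clock)
          {
              int64_t deadline;
      
              /* Warping only makes sense for vm_clock under -icount. */
              if (clock != vm_clock || !use_icount) {
                  return;
              }
      
              /* While any VCPU runs, the insn counter moves vm_clock
               * forward by itself, so keep the warp timer disabled. */
              qemu_del_timer(icount_warp_timer);
              if (!all_cpu_threads_idle()) {
                  return;
              }
      
              /* All VCPUs are asleep: instead of sleeping until the next
               * vm_clock event, arm a short real-time timer and jump
               * vm_clock forward when it fires. */
              vm_clock_warp_start = qemu_get_clock_ns(rt_clock);
              deadline = qemu_next_icount_deadline();
              if (deadline > 0) {
                  qemu_mod_timer(icount_warp_timer,
                                 vm_clock_warp_start + deadline);
              } else {
                  /* A deadline has already passed: run the timers now. */
                  qemu_notify_event();
              }
          }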
    • really fix -icount in the iothread case · 3b2319a3
      Committed by Paolo Bonzini
      The correct fix for -icount is to consider the biggest difference
      between iothread and non-iothread modes.  In the traditional model,
      CPUs run _before_ the iothread calls select (or WaitForMultipleObjects
      for Win32).  In the iothread model, CPUs run while the iothread
      isn't holding the mutex, i.e. _during_ those same calls.
      
      So, the iothread should always block as long as possible to let
      the CPUs run smoothly---the timeout might as well be infinite---and
      either the OS or the CPU thread itself will let the iothread know
      when something happens.  At this point, the iothread wakes up and
      interrupts the CPU.
      
      This is exactly the approach that this patch takes: when cpu_exec_all
      returns in -icount mode because a vm_clock deadline has been met, it
      wakes up the iothread to process the timers (see the sketch after
      this entry).  This is really the "bulk" of fixing icount.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Tested-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
      Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
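      A sketch of where that wakeup lands, assuming the TCG CPU-thread
      loop of this era's cpus.c; cpu_exec_all(), qemu_tcg_wait_io_event()
      and qemu_notify_event() are real entry points of that tree, but the
      loop shape is condensed and the deadline helper name is the same
      assumption as in the sketches above:
      
          /* TCG CPU thread, iothread build: the iothread blocks in
           * select() with an effectively infinite timeout, so after
           * running guest code we must kick it ourselves whenever a
           * vm_clock deadline has passed. */
          while (1) {
              cpu_exec_all();
              if (use_icount && qemu_next_icount_deadline() <= 0) {
                  qemu_notify_event();  /* wake the iothread to run timers */
              }
              qemu_tcg_wait_io_event();
          }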