1. 16 Sep 2011, 2 commits
    • RunState: Add additional states · f5bbfba1
      Committed by Luiz Capitulino
      Currently, only vm_start() and vm_stop() change the VM state.
      That is, the state is only changed when starting or stopping the VM.
      
      This commit adds the runstate_set() function, which makes it possible
      to also perform state transitions while the VM is stopped or running.
      
      Additional states are also added and the current state is stored.
      Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
    • Replace the VMSTOP macros with a proper state type · 1dfb4dd9
      Committed by Luiz Capitulino
      Today, when notifying a VM state change with vm_state_notify(),
      we pass a VMSTOP macro as the 'reason' argument. This is not ideal
      because the VMSTOP macros tell why qemu stopped, not exactly
      what the current VM state is.
      
      One example to demonstrate this problem is that vm_start() calls
      vm_state_notify() with reason=0, which turns out to be VMSTOP_USER.
      
      This commit fixes that by replacing the VMSTOP macros with a proper
      state type called RunState. (A sketch covering both commits follows
      these entries.)
      Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
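      Taken together, the two commits above replace the VMSTOP_* "reason"
      macros with an explicit state machine. Below is a minimal standalone
      sketch of that idea, not QEMU source: the real RunState enum has many
      more states and runstate_set() validates transitions against a table;
      apart from RunState, runstate_set() and vm_state_notify(), every name
      here is an illustrative assumption.

          #include <stdio.h>

          typedef enum RunState {
              RUN_STATE_PRELAUNCH,   /* created but never started */
              RUN_STATE_RUNNING,     /* executing guest code */
              RUN_STATE_PAUSED,      /* stopped by the user */
              RUN_STATE_SHUTDOWN,    /* guest requested shutdown */
              RUN_STATE_MAX
          } RunState;

          static RunState current_run_state = RUN_STATE_PRELAUNCH;

          /* Notifiers now receive the actual state instead of a VMSTOP_*
           * "reason", so vm_start() no longer has to pass a bogus
           * reason=0 (which used to decode as VMSTOP_USER). */
          static void vm_state_notify(int running, RunState state)
          {
              printf("running=%d state=%d\n", running, (int)state);
          }

          /* Unlike vm_start()/vm_stop(), this can move between any two
           * states; the real code also checks the transition is valid. */
          static void runstate_set(RunState new_state)
          {
              current_run_state = new_state;
          }

          static int runstate_check(RunState state)
          {
              return current_run_state == state;
          }

          int main(void)
          {
              runstate_set(RUN_STATE_RUNNING);
              vm_state_notify(1, RUN_STATE_RUNNING);
              runstate_set(RUN_STATE_PAUSED);    /* stop... */
              runstate_set(RUN_STATE_SHUTDOWN);  /* ...then a stopped-to-stopped transition */
              vm_state_notify(0, RUN_STATE_SHUTDOWN);
              printf("running now? %d\n", runstate_check(RUN_STATE_RUNNING));
              return 0;
          }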
  2. 02 Sep 2011, 1 commit
    • main: force enabling of I/O thread · 12d4536f
      Committed by Anthony Liguori
      Enabling the I/O thread by default seems like an important part of
      declaring 1.0.  Besides allowing true SMP support with KVM, the I/O
      thread means that the TCG VCPU doesn't have to multiplex itself with
      the I/O dispatch routines, which currently requires a (racy)
      signal-based alarm system.
      
      I know there have been concerns about performance.  I think the ones
      that have come up so far (virtio-net) are most likely due to
      secondary causes like decreased batching.
      
      I think we ought to force enabling the I/O thread early in 1.0
      development and commit to resolving any lingering issues. (A sketch
      of the resulting threading split follows this entry.)
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
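      For context on the model this commit makes mandatory: device
      emulation runs in a dedicated I/O thread under a global mutex, while
      vCPU threads run guest code with that mutex released, so the I/O
      thread can block in select() instead of juggling everything off a
      SIGALRM handler. A standalone sketch of the split using plain
      pthreads; everything below is illustrative, not QEMU source.

          #include <pthread.h>
          #include <stdio.h>
          #include <sys/select.h>
          #include <unistd.h>

          static pthread_mutex_t iothread_mutex = PTHREAD_MUTEX_INITIALIZER;

          /* vCPU thread: runs guest code with the global mutex released,
           * taking it only to touch shared device state. */
          static void *vcpu_thread(void *arg)
          {
              (void)arg;
              for (;;) {
                  pthread_mutex_lock(&iothread_mutex);
                  /* ... inject interrupts, synchronize device state ... */
                  pthread_mutex_unlock(&iothread_mutex);
                  usleep(1000);  /* stands in for executing guest insns */
              }
              return NULL;
          }

          int main(void)
          {
              pthread_t vcpu;
              pthread_create(&vcpu, NULL, vcpu_thread, NULL);
              for (int i = 0; i < 3; i++) {
                  /* I/O thread: sleep in select() with the mutex dropped,
                   * so the vCPU keeps running during the wait; no
                   * signal-based multiplexing is needed. */
                  struct timeval tv = { .tv_sec = 0, .tv_usec = 100000 };
                  select(0, NULL, NULL, NULL, &tv);
                  pthread_mutex_lock(&iothread_mutex);
                  printf("iothread: dispatching timers and fd handlers\n");
                  pthread_mutex_unlock(&iothread_mutex);
              }
              return 0;
          }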
  3. 23 Aug 2011, 2 commits
  4. 21 Aug 2011, 1 commit
  5. 24 Jul 2011, 1 commit
  6. 17 Jul 2011, 2 commits
  7. 27 Jun 2011, 1 commit
  8. 24 Jun 2011, 1 commit
  9. 20 Jun 2011, 1 commit
  10. 16 Jun 2011, 1 commit
  11. 15 Apr 2011, 3 commits
    • qemu_next_deadline should not consider host-time timers · cb842c90
      Committed by Paolo Bonzini
      It is purely for icount-based virtual timers.  And now that we have
      the code right, rename the function to clarify the intended scope.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Tested-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
      Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
    • enable vm_clock to "warp" in the iothread+icount case · ab33fcda
      Committed by Paolo Bonzini
      The previous patch, however, is not enough: if the virtual CPU
      goes to sleep waiting for a future timer interrupt to wake it up,
      qemu deadlocks.  The timer interrupt never comes because time is
      driven by icount, yet the vCPU doesn't run any insns.
      
      You could say that VCPUs should never go to sleep in icount
      mode if there is a pending vm_clock timer; rather, time should
      just warp to the next vm_clock event with no sleep ever taking place.
      Even better, you can sleep for some time related to the time left
      until the next event, so that the warps are not too visible
      externally; for example, you could be sending network packets
      continuously instead of every 100ms.
      
      This is what this patch implements.  qemu_clock_warp is called: 1)
      whenever a vm_clock timer is adjusted, to ensure the warp_timer is
      synchronized; 2) at strategic points in the CPU thread, to make sure
      the insn counter is synchronized before the CPU starts running.
      In any case, the warp_timer is disabled while the CPU is running,
      because the insn counter will then be making progress on its own.
      (A standalone sketch of the warp logic follows these entries.)
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Tested-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
      Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
    • really fix -icount in the iothread case · 3b2319a3
      Committed by Paolo Bonzini
      The correct fix for -icount is to consider the biggest difference
      between iothread and non-iothread modes.  In the traditional model,
      CPUs run _before_ the iothread calls select (or WaitForMultipleObjects
      for Win32).  In the iothread model, CPUs run while the iothread
      isn't holding the mutex, i.e. _during_ those same calls.
      
      So, the iothread should always block as long as possible to let
      the CPUs run smoothly (the timeout might as well be infinite), and
      either the OS or the CPU thread itself will let the iothread know
      when something happens.  At this point, the iothread wakes up and
      interrupts the CPU.
      
      This is exactly the approach that this patch takes: when cpu_exec_all
      returns in -icount mode, and it is because a vm_clock deadline has
      been met, it wakes up the iothread to process the timers.  This is
      really the "bulk" of fixing icount.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Tested-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
      Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
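      A standalone sketch of the warp idea described in the entries above.
      The names qemu_clock_warp and warp_timer come from the commit
      messages; the idle flag, the bias arithmetic, and the synchronous
      warp are simplified assumptions, not QEMU source (the real code arms
      the warp_timer and wakes the iothread instead of warping inline).

          #include <inttypes.h>
          #include <stdio.h>

          static int64_t icount_bias;          /* extra ns folded into vm_clock */
          static int64_t next_deadline = 100;  /* ns until the next vm_clock timer */
          static int cpus_idle = 1;            /* pretend every vCPU is asleep */

          /* If all vCPUs sleep, icount stops and the next vm_clock timer
           * would never fire; jump ("warp") virtual time to the deadline. */
          static void qemu_clock_warp(void)
          {
              if (!cpus_idle) {
                  return;  /* a running vCPU advances icount by itself */
              }
              if (next_deadline > 0) {
                  /* The real code arms the warp_timer and wakes the
                   * iothread so the timers get processed, keeping the
                   * warps less visible from outside the VM. */
                  icount_bias += next_deadline;
                  next_deadline = 0;
                  printf("warped vm_clock, bias=%" PRId64 " ns\n", icount_bias);
              }
          }

          int main(void)
          {
              qemu_clock_warp();  /* without this, the VM would deadlock */
              return 0;
          }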
  12. 21 Mar 2011, 1 commit
  13. 17 Mar 2011, 2 commits
  14. 16 Mar 2011, 2 commits
  15. 13 Mar 2011, 13 commits
  16. 14 Feb 2011, 6 commits