1. 07 Feb, 2010 (2 commits)
  2. 06 Feb, 2010 (1 commit)
  3. 05 Feb, 2010 (2 commits)
  4. 04 Feb, 2010 (1 commit)
    • kvm: Flush coalesced MMIO buffer periodically · 62a2744c
      Sheng Yang authored
      The default behavior of coalesced MMIO is to cache writes in a buffer until:
      1. The buffer is full, or
      2. The vcpu exits to QEMU for some other reason.
      
      But this can delay the writes for a long time under certain conditions:
      1. Each MMIO write is small.
      2. The interval between writes is large.
      3. The guest rarely needs input or access to other devices.
      
      This issue was observed on an experimental embedded system. The test image
      simply prints "test" once per second. The output under QEMU meets
      expectations, but the output under KVM is delayed by several seconds.
      
      Per Avi's suggestion, I hooked the coalesced MMIO buffer flush into the VGA
      update handler. This way, the vcpu does not need an explicit exit to QEMU
      to handle the issue (a sketch of the idea follows this entry).
      Signed-off-by: Sheng Yang <sheng@linux.intel.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      62a2744c
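      A minimal, self-contained sketch of the flush-from-the-refresh-handler idea
      follows. The ring layout and the names used here (coalesced_ring,
      replay_mmio_write, flush_coalesced_mmio_buffer, display_refresh) are
      illustrative stand-ins, not QEMU's or KVM's actual definitions.

      #include <stdint.h>
      #include <stdio.h>

      #define RING_SIZE 64

      struct coalesced_mmio {
          uint64_t phys_addr;
          uint32_t len;
          uint8_t  data[8];
      };

      struct coalesced_ring {
          uint32_t first;                        /* consumer index (userspace) */
          uint32_t last;                         /* producer index (kernel)    */
          struct coalesced_mmio slot[RING_SIZE];
      };

      /* Stand-in for replaying a buffered write against the emulated device. */
      static void replay_mmio_write(const struct coalesced_mmio *ent)
      {
          printf("MMIO write: addr=0x%llx len=%u\n",
                 (unsigned long long)ent->phys_addr, (unsigned)ent->len);
      }

      /* Drain everything the kernel has coalesced since the last flush. */
      static void flush_coalesced_mmio_buffer(struct coalesced_ring *ring)
      {
          while (ring->first != ring->last) {
              replay_mmio_write(&ring->slot[ring->first]);
              ring->first = (ring->first + 1) % RING_SIZE;
          }
      }

      /* Periodic handler, think of the VGA refresh timer: flushing here makes
       * small, infrequent guest writes visible without an explicit vcpu exit. */
      static void display_refresh(struct coalesced_ring *ring)
      {
          flush_coalesced_mmio_buffer(ring);
          /* ... redraw the screen ... */
      }

      int main(void)
      {
          struct coalesced_ring ring = { 0 };

          /* Simulate the kernel coalescing two small guest writes. */
          ring.slot[0] = (struct coalesced_mmio){ .phys_addr = 0xb8000, .len = 2 };
          ring.slot[1] = (struct coalesced_mmio){ .phys_addr = 0xb8002, .len = 2 };
          ring.last = 2;

          display_refresh(&ring);   /* both writes are replayed here */
          return 0;
      }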
  5. 27 Jan, 2010 (1 commit)
  6. 20 Dec, 2009 (1 commit)
  7. 19 Dec, 2009 (2 commits)
  8. 06 Dec, 2009 (1 commit)
  9. 30 Nov, 2009 (1 commit)
  10. 15 Oct, 2009 (1 commit)
  11. 05 Oct, 2009 (3 commits)
  12. 02 Oct, 2009 (2 commits)
  13. 12 Sep, 2009 (2 commits)
  14. 03 Sep, 2009 (1 commit)
  15. 28 Aug, 2009 (1 commit)
  16. 26 Aug, 2009 (1 commit)
  17. 24 Aug, 2009 (1 commit)
    • Unbreak large mem support by removing kqemu · 4a1418e0
      Anthony Liguori authored
      kqemu introduces a number of restrictions on the i386 target. The worst is
      that it prevents large memory support from working in the default build.
      
      Furthermore, kqemu is fundamentally flawed in a number of ways. It relies on
      the TSC as a time source, which is not reliable from userspace on a
      multiprocessor system. Since most modern processors are multicore, this
      severely limits the utility of kqemu.
      
      kvm is a viable alternative for people looking to accelerate qemu and has the
      benefit of being supported by the upstream Linux kernel. If someone can
      implement workarounds to remove the restrictions introduced by kqemu, I'm
      happy to avoid and/or revert this patch.
      
      N.B. kqemu will still function in the 0.11 series, but this patch removes it
      from the 0.12 series.
      
      Paul, please Ack or Nack this patch.
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
      4a1418e0
  18. 01 Aug, 2009 (1 commit)
  19. 28 Jul, 2009 (2 commits)
  20. 21 Jul, 2009 (1 commit)
  21. 17 Jul, 2009 (2 commits)
  22. 30 Jun, 2009 (1 commit)
  23. 22 Jun, 2009 (1 commit)
  24. 17 Jun, 2009 (4 commits)
  25. 16 Jun, 2009 (1 commit)
  26. 04 Jun, 2009 (1 commit)
    • fix gdbstub support for multiple threads in usermode, v3 · 1e9fa730
      Nathan Froyd authored
      When debugging multi-threaded programs, QEMU's gdb stub would report the
      correct number of threads (the qfThreadInfo and qsThreadInfo packets).
      However, the stub was unable to actually switch between threads (the T
      packet), since it would report every thread except the first as being
      dead.  Furthermore, the stub relied upon cpu_index as a reliable means
      of assigning IDs to the threads.  This was a bad idea; if you have this
      sequence of events:
      
      initial thread created
      new thread #1
      new thread #2
      thread #1 exits
      new thread #3
      
      thread #3 will have the same cpu_index as thread #1, which would confuse
      GDB.  (This problem is partly due to the remote protocol not having a
      good way to send thread creation/destruction events.)
      
      We fix this by using the host thread ID as the identifier passed to GDB
      when debugging a multi-threaded userspace program. The thread ID might
      wrap, but the same sort of problem with wrapping thread IDs would come up
      when debugging programs natively, so this doesn't represent a problem.
      (A sketch of this identifier choice follows this entry.)
      Signed-off-by: Nathan Froyd <froydnj@codesourcery.com>
      1e9fa730
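      A minimal sketch of the identifier choice described above: host thread ID
      in user mode, a cpu_index-style number otherwise. The struct and function
      names (cpu_state, gdb_thread_id) are illustrative, not the actual gdbstub
      code, and the Linux gettid() syscall stands in for however the stub records
      the host thread ID.

      #include <stdio.h>
      #include <sys/syscall.h>
      #include <unistd.h>

      struct cpu_state {
          int   cpu_index;   /* reused after a thread exits, so ambiguous for GDB */
          pid_t host_tid;    /* unique for the lifetime of the host thread        */
      };

      /* Pick the ID reported to GDB in thread-info and 'T' replies. */
      static long gdb_thread_id(const struct cpu_state *cpu, int user_mode)
      {
          if (user_mode) {
              return cpu->host_tid;      /* stays valid across guest thread churn */
          }
          return cpu->cpu_index + 1;     /* system mode: vcpus never go away;
                                            GDB thread IDs start at 1 */
      }

      int main(void)
      {
          struct cpu_state cpu = {
              .cpu_index = 0,
              .host_tid  = (pid_t)syscall(SYS_gettid),   /* Linux-specific */
          };

          printf("GDB would address this thread as %ld\n", gdb_thread_id(&cpu, 1));
          return 0;
      }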
  27. 22 May, 2009 (2 commits)