1. 05 Oct 2012, 1 commit
  2. 21 Sep 2012, 1 commit
  3. 09 Aug 2012, 1 commit
  4. 04 Aug 2012, 1 commit
    • Fixes related to processing of qemu's -numa option · ee785fed
      Authored by Chegu Vinod
      The -numa option to qemu is used to create [fake] numa nodes
      and expose them to the guest OS instance.
      
      There are a couple of issues with the -numa option:
      
      a) The maximum number of VCPUs that can be specified for a
         guest while using qemu's -numa option is 64. Due to a
         typecasting issue, when the number of VCPUs is > 32 the
         VCPUs don't show up under the specified [fake] numa nodes.

      b) KVM currently supports 160 VCPUs per guest, but qemu's
         -numa option only supports up to 64 VCPUs per guest.
      This patch addresses both issues.
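
      To illustrate the root cause of (a) and the direction of the fix
      for (b), here is a minimal, self-contained C sketch. It is not
      the actual vl.c code; it assumes the original parser built each
      node's CPU mask with a plain integer shift, and shows why a
      per-CPU bitmap removes both the 32- and 64-CPU limits:

      #include <stdio.h>
      #include <string.h>

      #define MAX_CPUS 160   /* KVM's per-guest VCPU limit cited above */
      #define BITS_PER_LONG (8 * (int)sizeof(unsigned long))

      /* One bit per possible VCPU, so a node's CPU set is no longer
       * capped at the width of a single integer (32, or 64 bits).   */
      static unsigned long node_cpumask[(MAX_CPUS + BITS_PER_LONG - 1)
                                        / BITS_PER_LONG];

      static void node_set_cpu(int cpu)
      {
          /* The buggy pattern was, in effect:  mask |= 1 << cpu;
           * '1' is a plain int, so the shift is undefined behaviour
           * once cpu >= 32, which is why VCPUs above 31 vanished
           * from their numa nodes.                                  */
          node_cpumask[cpu / BITS_PER_LONG] |= 1UL << (cpu % BITS_PER_LONG);
      }

      int main(void)
      {
          memset(node_cpumask, 0, sizeof(node_cpumask));
          for (int cpu = 0; cpu < 80; cpu++) {  /* > 64 CPUs now fit */
              node_set_cpu(cpu);
          }
          printf("cpu 70 set: %d\n",
                 !!(node_cpumask[70 / BITS_PER_LONG]
                    & (1UL << (70 % BITS_PER_LONG))));
          return 0;
      }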
      
      Below are examples of (a) and (b)
      
      a) >32 VCPUs are specified with the -numa option:
      
      /usr/local/bin/qemu-system-x86_64 \
      -enable-kvm \
      ... \
      -net tap,ifname=tap0,script=no,downscript=no \
      -vnc :4
      
      ...
      Upstream qemu:
      --------------
      
      QEMU 1.1.50 monitor - type 'help' for more information
      (qemu) info numa
      6 nodes
      node 0 cpus: 0 1 2 3 4 5 6 7 8 9 32 33 34 35 36 37 38 39 40 41
      node 0 size: 131072 MB
      node 1 cpus: 10 11 12 13 14 15 16 17 18 19 42 43 44 45 46 47 48 49 50 51
      node 1 size: 131072 MB
      node 2 cpus: 20 21 22 23 24 25 26 27 28 29 52 53 54 55 56 57 58 59
      node 2 size: 131072 MB
      node 3 cpus: 30
      node 3 size: 131072 MB
      node 4 cpus:
      node 4 size: 131072 MB
      node 5 cpus: 31
      node 5 size: 131072 MB
      
      With the patch applied:
      -----------------------
      
      QEMU 1.1.50 monitor - type 'help' for more information
      (qemu) info numa
      6 nodes
      node 0 cpus: 0 1 2 3 4 5 6 7 8 9
      node 0 size: 131072 MB
      node 1 cpus: 10 11 12 13 14 15 16 17 18 19
      node 1 size: 131072 MB
      node 2 cpus: 20 21 22 23 24 25 26 27 28 29
      node 2 size: 131072 MB
      node 3 cpus: 30 31 32 33 34 35 36 37 38 39
      node 3 size: 131072 MB
      node 4 cpus: 40 41 42 43 44 45 46 47 48 49
      node 4 size: 131072 MB
      node 5 cpus: 50 51 52 53 54 55 56 57 58 59
      node 5 size: 131072 MB
      
      b) >64 VCPUs are specified with the -numa option:
      
      /usr/local/bin/qemu-system-x86_64 \
      -enable-kvm \
      -cpu Westmere,+rdtscp,+pdpe1gb,+dca,+pdcm,+xtpr,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pclmuldq,+pbe,+tm,+ht,+ss,+acpi,... \
      ... \
      -vnc :4
      
      ...
      
      Upstream qemu:
      --------------
      
      only 63 CPUs in NUMA mode supported.
      only 64 CPUs in NUMA mode supported.
      QEMU 1.1.50 monitor - type 'help' for more information
      (qemu) info numa
      8 nodes
      node 0 cpus: 6 7 8 9 38 39 40 41 70 71 72 73
      node 0 size: 65536 MB
      node 1 cpus: 10 11 12 13 14 15 16 17 18 19 42 43 44 45 46 47 48 49 50 51 74 75 76 77 78 79
      node 1 size: 65536 MB
      node 2 cpus: 20 21 22 23 24 25 26 27 28 29 52 53 54 55 56 57 58 59 60 61
      node 2 size: 65536 MB
      node 3 cpus: 30 62
      node 3 size: 65536 MB
      node 4 cpus:
      node 4 size: 65536 MB
      node 5 cpus:
      node 5 size: 65536 MB
      node 6 cpus: 31 63
      node 6 size: 65536 MB
      node 7 cpus: 0 1 2 3 4 5 32 33 34 35 36 37 64 65 66 67 68 69
      node 7 size: 65536 MB
      
      With the patch applied:
      -----------------------
      
      QEMU 1.1.50 monitor - type 'help' for more information
      (qemu) info numa
      8 nodes
      node 0 cpus: 0 1 2 3 4 5 6 7 8 9
      node 0 size: 65536 MB
      node 1 cpus: 10 11 12 13 14 15 16 17 18 19
      node 1 size: 65536 MB
      node 2 cpus: 20 21 22 23 24 25 26 27 28 29
      node 2 size: 65536 MB
      node 3 cpus: 30 31 32 33 34 35 36 37 38 39
      node 3 size: 65536 MB
      node 4 cpus: 40 41 42 43 44 45 46 47 48 49
      node 4 size: 65536 MB
      node 5 cpus: 50 51 52 53 54 55 56 57 58 59
      node 5 size: 65536 MB
      node 6 cpus: 60 61 62 63 64 65 66 67 68 69
      node 6 size: 65536 MB
      node 7 cpus: 70 71 72 73 74 75 76 77 78 79
      node 7 size: 65536 MB
      Signed-off-by: Chegu Vinod <chegu_vinod@hp.com>, Jim Hull <jim.hull@hp.com>, Craig Hada <craig.hada@hp.com>
      Tested-by: Eduardo Habkost <ehabkost@redhat.com>
      Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
      Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
  5. 03 Aug 2012, 3 commits
  6. 21 Jul 2012, 1 commit
  7. 13 Apr 2012, 1 commit
  8. 30 Mar 2012, 2 commits
    • qtest: add clock management · 8156be56
      Authored by Paolo Bonzini
      This patch combines qtest and -icount to turn the vm_clock into a
      source that can be fully managed by the client. To this end, two
      new commands, clock_step and clock_set, are added. Hooking them
      up with libqtest is left as an exercise to the reader.
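
      As a sketch of how a client might drive these commands over the
      -qtest character device (the authoritative protocol description
      is in qtest.c; the reply format shown here is an assumption based
      on it, with '>' for client lines and '<' for server lines):

      > clock_step 1000000        advance the vm_clock by 1000000 ns
      < OK 1000000
      > clock_set 5000000         warp the vm_clock to an absolute 5000000 ns
      < OK 5000000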
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
    • qtest: add test framework · c7f0f3b1
      Authored by Anthony Liguori
      The idea behind qtest is pretty simple.  Instead of executing a CPU via TCG or
      KVM, rely on an external process to send events to the device model that the CPU
      would normally generate.
      
      qtest presents itself as an accelerator.  In addition, a new option is added to
      establish a qtest server (-qtest) that takes a character device.  This is what
      allows the external process to send CPU events to the device model.
      
      qtest uses a simple line-based protocol to send the events.
      Documentation of that protocol is in qtest.c.
      
      I considered reusing the monitor for this job, but adding
      interrupts would be a bit difficult, and so would logging.
      
      qtest has extensive logging support. All protocol commands are
      logged with timestamps using a new command-line option
      (-qtest-log). Logging is important since, ultimately, this is a
      feature for debugging.
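
      For illustration, a hypothetical invocation wiring the pieces
      together (the socket and log paths are made up; -machine
      accel=qtest selects the accelerator, -qtest attaches the control
      character device, and -qtest-log enables the timestamped log):

      qemu-system-x86_64 \
      -machine accel=qtest \
      -qtest unix:/tmp/qtest.sock,server,nowait \
      -qtest-log /tmp/qtest.log \
      -display none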
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
  9. 15 Mar 2012, 1 commit
  10. 18 Feb 2012, 4 commits
  11. 19 Jan 2012, 1 commit
  12. 13 Jan 2012, 1 commit
  13. 15 Dec 2011, 1 commit
  14. 13 Dec 2011, 1 commit
  15. 06 Dec 2011, 4 commits
  16. 05 Dec 2011, 1 commit
  17. 02 Dec 2011, 1 commit
  18. 08 Nov 2011, 1 commit
  19. 01 Nov 2011, 1 commit
  20. 27 Oct 2011, 1 commit
  21. 22 Oct 2011, 5 commits
  22. 19 Oct 2011, 1 commit
    • runstate: Allow user to migrate twice · 8a9236f1
      Authored by Luiz Capitulino
      It should be a matter of allowing the transition POSTMIGRATE ->
      FINISH_MIGRATE, but it turns out that the VM won't do the
      transition the second time because it's already stopped.
      
      So this commit also adds vm_stop_force_state() which performs
      the transition even if the VM is already stopped.
      
      While there, also allow other states to migrate.
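
      The essence of the new helper, sketched as a self-contained C
      program with QEMU's runstate machinery stubbed out (the real
      function lives in cpus.c; the stub names mirror QEMU's API but
      their bodies here are simplified assumptions):

      #include <stdbool.h>
      #include <stdio.h>

      typedef enum {
          RUN_STATE_RUNNING,
          RUN_STATE_FINISH_MIGRATE,
          RUN_STATE_POSTMIGRATE,
      } RunState;

      /* After the first migration the VM sits in POSTMIGRATE, stopped. */
      static RunState current_state = RUN_STATE_POSTMIGRATE;

      static bool runstate_is_running(void)
      {
          return current_state == RUN_STATE_RUNNING;
      }

      static void runstate_set(RunState s) { current_state = s; }

      /* Stand-in for vm_stop(): pauses VCPUs and sets the state. */
      static int vm_stop(RunState s) { runstate_set(s); return 0; }

      /* Perform the transition even if the VM is already stopped,
       * so a second migration can still reach FINISH_MIGRATE.     */
      static int vm_stop_force_state(RunState state)
      {
          if (runstate_is_running()) {
              return vm_stop(state);
          }
          runstate_set(state);
          return 0;
      }

      int main(void)
      {
          vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
          printf("state: %d\n", (int)current_state);
          return 0;
      }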
      Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
  23. 20 Sep 2011, 1 commit
  24. 16 Sep 2011, 3 commits
  25. 02 Sep 2011, 1 commit
    • main: force enabling of I/O thread · 12d4536f
      Authored by Anthony Liguori
      Enabling the I/O thread by default seems like an important part of
      declaring 1.0. Besides allowing true SMP support with KVM, the I/O
      thread means that the TCG VCPU doesn't have to multiplex itself
      with the I/O dispatch routines, which currently requires a (racy)
      signal-based alarm system.
      
      I know there have been concerns about performance. I think the
      ones that have come up so far (virtio-net) are most likely due to
      secondary causes like decreased batching.
      
      I think we ought to force-enable the I/O thread early in 1.0
      development and commit to resolving any lingering issues.
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>