1. 07 Dec, 2012 — 1 commit
  2. 06 Dec, 2012 — 1 commit
  3. 02 Nov, 2012 — 1 commit
  4. 29 Oct, 2012 — 1 commit
    • Z
      Add USB option in machine options · 094b287f
      Authored by zhlcindy@gmail.com
      When the -usb option is used, the global variable usb_enabled is
      set, and every platform creates one USB controller according to
      this variable. Global variables like this make the code hard
      to read.
      
      So this patch removes the global variable usb_enabled and adds a
      USB option to the machine options. All platforms then read the
      USB option value from the machine options.
      
      USB option of machine options will be set either by:
        * -usb
        * -machine type=pseries,usb=on
      
      Both these ways can work now. They both set USB option in
      machine options. In the future, the first way will be removed.
      Signed-off-by: Li Zhang <zhlcindy@linux.vnet.ibm.com>
      Acked-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Alexander Graf <agraf@suse.de>
      094b287f
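      A minimal C sketch of the pattern this commit describes: a board reads the USB flag from per-machine options instead of consulting a global (the struct and function names here are illustrative, not QEMU's actual API).

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for the machine options; in QEMU the flag
 * lives in the -machine option list, not a plain struct. */
typedef struct MachineOptions {
    bool usb;   /* set by "-usb" or "-machine type=...,usb=on" */
} MachineOptions;

/* After the patch, a board reads the flag from its options instead
 * of the old global usb_enabled. Returns the number of USB
 * controllers the board would create. */
static int board_create_usb(const MachineOptions *opts)
{
    return opts->usb ? 1 : 0;
}
```

      Both -usb and -machine usb=on end up setting the same option value, so a board only needs this single lookup.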
  5. 07 Oct, 2012 — 1 commit
  6. 26 Sep, 2012 — 2 commits
  7. 27 Aug, 2012 — 1 commit
  8. 24 Aug, 2012 — 1 commit
  9. 17 Aug, 2012 — 1 commit
    • D
      Allow QEMUMachine to override reset sequencing · be522029
      Authored by David Gibson
      The qemu_system_reset() function always performs the same basic actions
      on all machines.  This includes running all the reset handler hooks;
      however, the order in which they run is not always easily predictable.
      
      This patch splits the core of qemu_system_reset() - the invocation of
      the reset handlers - out into a new qemu_devices_reset() function.
      qemu_system_reset() will usually call qemu_devices_reset(), but that
      can now be overridden by a new reset method in the QEMUMachine
      structure.
      
      Individual machines can use this reset method, if necessary, to
      perform any extra, machine specific initializations which have to
      occur before or after the bulk of the reset handlers.  It's expected
      that the method will call qemu_devices_reset() at some point, but if
      the machine has really strange ordering requirements between device
      resets it could even override that with its own reset sequence (with
      great care, obviously).
      
      For a specific example of when this might be needed: a number of
      machines (but not PC) load images specified with -kernel or -initrd
      directly into the machine RAM before booting the guest.  This mostly
      works at the moment, but to make this actually safe requires that this
      load occurs after peripheral devices are reset - otherwise they could
      have active DMAs in progress which would clobber the in memory images.
      Some machines (notably pseries) also have other entry conditions which
      need to be set up as the last thing before executing in guest space -
      some of this could be considered "emulated firmware" in the sense that
      the actions of the firmware are emulated directly by qemu rather than
      by executing a firmware image within the guest.  When the platform's
      firmware to OS interface is sufficiently well specified, this saves
      time both in implementing the "firmware" and executing it.
      
      aliguori: don't unconditionally dereference current_machine
      Reviewed-by: Andreas Färber <afaerber@suse.de>
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
      be522029
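      The split described above can be sketched in a self-contained mock-up; the names mirror QEMU's, but the bodies are simplified stand-ins, not the real implementation.

```c
#include <assert.h>
#include <stddef.h>

static int devices_reset_count;   /* counts runs of the handler list */
static int image_loaded;          /* pseries-style: -kernel image in RAM? */

/* Core reset: invoke all registered reset handlers. */
static void qemu_devices_reset(void)
{
    devices_reset_count++;
}

typedef struct QEMUMachine {
    const char *name;
    void (*reset)(void);          /* optional machine-specific override */
} QEMUMachine;

static QEMUMachine *current_machine;

/* qemu_system_reset() defers to the machine's method when one is set
 * (and, per the aliguori note, never dereferences current_machine
 * unconditionally). */
static void qemu_system_reset(void)
{
    if (current_machine && current_machine->reset) {
        current_machine->reset();
    } else {
        qemu_devices_reset();
    }
}

/* A pseries-like machine: reset devices first, then load the kernel
 * image so no in-flight DMA can clobber it. */
static void pseries_reset(void)
{
    qemu_devices_reset();
    image_loaded = 1;
}
```

      The override still calls qemu_devices_reset(), as the commit expects; it only controls what happens around it.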
  10. 04 Aug, 2012 — 1 commit
    • C
      Fixes related to processing of qemu's -numa option · ee785fed
      Authored by Chegu Vinod
      The -numa option to qemu is used to create [fake] numa nodes
      and expose them to the guest OS instance.
      
      There are a couple of issues with the -numa option:
      
      a) The maximum number of VCPUs that can be specified for a guest
         with qemu's -numa option is 64. Due to a typecasting issue,
         when the number of VCPUs is > 32 the VCPUs don't show up
         under the specified [fake] numa nodes.
      
      b) KVM currently supports 160 VCPUs per guest, but qemu's -numa
         option only supports up to 64 VCPUs per guest.
      
      This patch addresses both issues.
      
      Below are examples of (a) and (b)
      
      a) >32 VCPUs are specified with the -numa option:
      
      /usr/local/bin/qemu-system-x86_64 \
      -enable-kvm \
      71:01:01 \
      -net tap,ifname=tap0,script=no,downscript=no \
      -vnc :4
      
      ...
      Upstream qemu :
      --------------
      
      QEMU 1.1.50 monitor - type 'help' for more information
      (qemu) info numa
      6 nodes
      node 0 cpus: 0 1 2 3 4 5 6 7 8 9 32 33 34 35 36 37 38 39 40 41
      node 0 size: 131072 MB
      node 1 cpus: 10 11 12 13 14 15 16 17 18 19 42 43 44 45 46 47 48 49 50 51
      node 1 size: 131072 MB
      node 2 cpus: 20 21 22 23 24 25 26 27 28 29 52 53 54 55 56 57 58 59
      node 2 size: 131072 MB
      node 3 cpus: 30
      node 3 size: 131072 MB
      node 4 cpus:
      node 4 size: 131072 MB
      node 5 cpus: 31
      node 5 size: 131072 MB
      
      With the patch applied :
      -----------------------
      
      QEMU 1.1.50 monitor - type 'help' for more information
      (qemu) info numa
      6 nodes
      node 0 cpus: 0 1 2 3 4 5 6 7 8 9
      node 0 size: 131072 MB
      node 1 cpus: 10 11 12 13 14 15 16 17 18 19
      node 1 size: 131072 MB
      node 2 cpus: 20 21 22 23 24 25 26 27 28 29
      node 2 size: 131072 MB
      node 3 cpus: 30 31 32 33 34 35 36 37 38 39
      node 3 size: 131072 MB
      node 4 cpus: 40 41 42 43 44 45 46 47 48 49
      node 4 size: 131072 MB
      node 5 cpus: 50 51 52 53 54 55 56 57 58 59
      node 5 size: 131072 MB
      
      b) >64 VCPUs specified with -numa option:
      
      /usr/local/bin/qemu-system-x86_64 \
      -enable-kvm \
      -cpu Westmere,+rdtscp,+pdpe1gb,+dca,+pdcm,+xtpr,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pclmuldq,+pbe,+tm,+ht,+ss,+acpi,+d-vnc :4
      
      ...
      
      Upstream qemu :
      --------------
      
      only 63 CPUs in NUMA mode supported.
      only 64 CPUs in NUMA mode supported.
      QEMU 1.1.50 monitor - type 'help' for more information
      (qemu) info numa
      8 nodes
      node 0 cpus: 6 7 8 9 38 39 40 41 70 71 72 73
      node 0 size: 65536 MB
      node 1 cpus: 10 11 12 13 14 15 16 17 18 19 42 43 44 45 46 47 48 49 50 51 74 75 76 77 78 79
      node 1 size: 65536 MB
      node 2 cpus: 20 21 22 23 24 25 26 27 28 29 52 53 54 55 56 57 58 59 60 61
      node 2 size: 65536 MB
      node 3 cpus: 30 62
      node 3 size: 65536 MB
      node 4 cpus:
      node 4 size: 65536 MB
      node 5 cpus:
      node 5 size: 65536 MB
      node 6 cpus: 31 63
      node 6 size: 65536 MB
      node 7 cpus: 0 1 2 3 4 5 32 33 34 35 36 37 64 65 66 67 68 69
      node 7 size: 65536 MB
      
      With the patch applied :
      -----------------------
      
      QEMU 1.1.50 monitor - type 'help' for more information
      (qemu) info numa
      8 nodes
      node 0 cpus: 0 1 2 3 4 5 6 7 8 9
      node 0 size: 65536 MB
      node 1 cpus: 10 11 12 13 14 15 16 17 18 19
      node 1 size: 65536 MB
      node 2 cpus: 20 21 22 23 24 25 26 27 28 29
      node 2 size: 65536 MB
      node 3 cpus: 30 31 32 33 34 35 36 37 38 39
      node 3 size: 65536 MB
      node 4 cpus: 40 41 42 43 44 45 46 47 48 49
      node 4 size: 65536 MB
      node 5 cpus: 50 51 52 53 54 55 56 57 58 59
      node 5 size: 65536 MB
      node 6 cpus: 60 61 62 63 64 65 66 67 68 69
      node 6 size: 65536 MB
      node 7 cpus: 70 71 72 73 74 75 76 77 78 79
      Signed-off-by: Chegu Vinod <chegu_vinod@hp.com>, Jim Hull <jim.hull@hp.com>, Craig Hada <craig.hada@hp.com>
      Tested-by: Eduardo Habkost <ehabkost@redhat.com>
      Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
      Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
      ee785fed
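      The typecasting issue in (a) is the classic 32-bit shift overflow when building a per-node CPU mask. A hedged illustration follows; the variable name in the comment is an assumption, not necessarily what the patch touched.

```c
#include <assert.h>
#include <stdint.h>

/* Pre-patch shape of the bug (sketch):
 *     node_cpumask[i] |= 1 << cpu;
 * The literal 1 is a 32-bit int, so for cpu >= 32 the shift is
 * undefined behaviour, and CPUs above 31 never land in the intended
 * node's mask, which matches the scrambled "info numa" output above. */
static uint64_t cpu_mask_bit(int cpu)
{
    /* Fixed: shift a 64-bit one, well-defined for cpu 0..63. */
    return UINT64_C(1) << cpu;
}
```

      Raising the per-guest limit toward KVM's 160 VCPUs additionally needs a representation wider than one 64-bit mask.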
  11. 29 Jun, 2012 — 1 commit
  12. 15 Mar, 2012 — 2 commits
    • L
      qapi: Convert migrate · e1c37d0e
      Authored by Luiz Capitulino
      The migrate command is one of those commands where HMP and QMP are
      completely mixed together. This made the conversion to the QAPI (which
      separates the command into QMP and HMP parts) a bit difficult.
      
      The first important change to be noticed is that this commit completes the
      removal of the Monitor object from migration code, started by the previous
      commit.
      
      Another important and tricky change is about supporting the non-detached
      mode. That is, if the user doesn't pass '-d' the migrate command will lock
      the monitor and will only release it when migration is finished.
      
      To support this in the new HMP command (hmp_migrate()), it is necessary
      to create a timer which runs every second and checks if the migration is
      still active. If it is, the timer callback will re-schedule itself to run
      one second in the future. If the migration has already finished, the
      monitor lock is released and the user can use it normally.
      
      All these changes should be transparent to the user.
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
      Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
      e1c37d0e
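      The self-rescheduling poll can be sketched as follows. This is a toy model: the real code arms a QEMU timer rather than looping, and the names are illustrative.

```c
#include <assert.h>
#include <stdbool.h>

static int remaining_ticks;       /* stands in for migration progress */
static bool monitor_locked;       /* non-detached mode holds the monitor */

static bool migration_is_active(void)
{
    return remaining_ticks > 0;
}

/* Timer callback: returns true when it wants to run again in one
 * second; releases the monitor once migration has finished. */
static bool migrate_poll_cb(void)
{
    if (migration_is_active()) {
        return true;              /* re-arm the timer */
    }
    monitor_locked = false;       /* migration done: release the monitor */
    return false;
}

/* Toy event loop standing in for QEMU's timer subsystem. */
static int run_timer_until_done(void)
{
    int ticks = 0;
    while (migrate_poll_cb()) {
        remaining_ticks--;        /* one second of migration progress */
        ticks++;
    }
    return ticks;
}
```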
    • L
      Purge migration of (almost) everything to do with monitors · 539de124
      Authored by Luiz Capitulino
      The Monitor object is passed back and forth within the migration/savevm
      code so that it can print errors and progress to the user.
      
      However, that approach assumes an HMP monitor and is completely
      invalid in QMP.
      
      This commit drops almost every single usage of the Monitor object;
      all monitor_printf() calls have been converted into DPRINTF() ones.
      
      There are a few remaining Monitor objects, those are going to be dropped
      by the next commit.
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
      Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
      539de124
  13. 25 Feb, 2012 — 3 commits
  14. 02 Feb, 2012 — 1 commit
  15. 19 Jan, 2012 — 1 commit
  16. 04 Jan, 2012 — 1 commit
    • A
      Add generic drive hotplugging · dd97aa8a
      Authored by Alexander Graf
      The monitor command for hotplugging is in i386-specific code. This is
      just plain wrong, as S390 has just learned how to do hotplugging too
      and needs to get drives for that.
      
      So let's add a generic copy in generic code that handles drive_add in
      a way that has no PCI dependencies. All PCI-specific code can then
      be handled in a PCI-specific function.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      
      ---
      
      v1 -> v2:
      
        - align generic drive_add to pci specific one
        - rework to split between generic and pci code
      
      v2 -> v3:
      
        - remove comment
      dd97aa8a
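      The generic/PCI split can be sketched like this; function names and return conventions are assumptions for illustration, not the commit's actual code.

```c
#include <assert.h>

static int generic_adds;          /* drives created (any architecture) */
static int pci_adds;              /* drives also attached to a PCI bus */

/* Generic part: create the drive itself; no PCI dependencies, so it
 * works on s390 as well as x86. */
static int drive_hot_add_generic(const char *opts)
{
    (void)opts;                   /* option parsing elided in this sketch */
    generic_adds++;
    return 0;
}

/* PCI-specific part: reuse the generic path, then do the PCI attach. */
static int pci_drive_hot_add(const char *opts)
{
    if (drive_hot_add_generic(opts) < 0) {
        return -1;
    }
    pci_adds++;
    return 0;
}
```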
  17. 06 Dec, 2011 — 1 commit
  18. 22 Oct, 2011 — 1 commit
  19. 19 Oct, 2011 — 1 commit
    • L
      runstate: Allow user to migrate twice · 8a9236f1
      Authored by Luiz Capitulino
      It should be a matter of allowing the transition POSTMIGRATE ->
      FINISH_MIGRATE, but it turns out that the VM won't make the
      transition a second time because it's already stopped.
      
      So this commit also adds vm_stop_force_state() which performs
      the transition even if the VM is already stopped.
      
      While at it, also allow other states to migrate.
      Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
      8a9236f1
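      A cut-down model of the stuck transition and the new forcing helper; the state set is reduced to the relevant values and is not QEMU's full runstate table.

```c
#include <assert.h>
#include <stdbool.h>

typedef enum { RUNNING, POSTMIGRATE, FINISH_MIGRATE } RunState;

static RunState state = RUNNING;
static bool vm_running = true;

/* Old behaviour: a stop request on an already-stopped VM does nothing,
 * so a second migration could never reach FINISH_MIGRATE. */
static bool vm_stop(RunState target)
{
    if (!vm_running) {
        return false;             /* already stopped: bail out */
    }
    vm_running = false;
    state = target;
    return true;
}

/* vm_stop_force_state(): perform the state transition even when the
 * VM is already stopped. */
static bool vm_stop_force_state(RunState target)
{
    if (vm_running) {
        return vm_stop(target);
    }
    state = target;               /* just record the new state */
    return true;
}
```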
  20. 04 Oct, 2011 — 3 commits
    • L
      qapi: Convert query-status · 1fa9a5e4
      Authored by Luiz Capitulino
      Please note that the RunState type as defined in sysemu.h and its
      runstate_as_string() function are being dropped in favor of the
      RunState type generated by the QAPI.
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
      Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
      1fa9a5e4
    • L
      RunState: Rename enum values as generated by the QAPI · 0461d5a6
      Authored by Luiz Capitulino
      The next commit will convert the query-status command to use the
      RunState type as generated by the QAPI.
      
      In order to "transparently" replace the current enum by the QAPI
      one, we have to make some changes to some enum values.
      
      As the changes are simple renames, I'll do them in one shot. The
      changes are:
      
       - Rename the prefix from RSTATE_ to RUN_STATE_
       - RUN_STATE_SAVEVM to RUN_STATE_SAVE_VM
       - RUN_STATE_IN_MIGRATE to RUN_STATE_INMIGRATE
       - RUN_STATE_PANICKED to RUN_STATE_INTERNAL_ERROR
       - RUN_STATE_POST_MIGRATE to RUN_STATE_POSTMIGRATE
       - RUN_STATE_PRE_LAUNCH to RUN_STATE_PRELAUNCH
       - RUN_STATE_PRE_MIGRATE to RUN_STATE_PREMIGRATE
       - RUN_STATE_RESTORE to RUN_STATE_RESTORE_VM
       - RUN_STATE_PRE_MIGRATE to RUN_STATE_FINISH_MIGRATE
      Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
      0461d5a6
    • L
      RunState: Drop the RSTATE_NO_STATE value · c4d11e38
      Authored by Luiz Capitulino
      The QAPI framework won't generate it, so we need to get rid of it.
      
      In order to do that, this commit makes RSTATE_PRE_LAUNCH the initial
      state and changes qemu_vmstop_requested() to use RSTATE_MAX.
      Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
      c4d11e38
  21. 16 Sep, 2011 — 6 commits
  22. 23 Aug, 2011 — 1 commit
  23. 29 Jul, 2011 — 1 commit
    • W
      Show a splash picture at startup · 3d3b8303
      Authored by wayne
          Added options to let qemu pass two configuration files to the BIOS:
      "bootsplash.bmp" and "etc/boot-menu-wait", which can be specified on the
      command line as
          -boot splash=P,splash-time=T
      where P is a jpg/bmp file name or an absolute path, and T has a maximum
      value of 0xffff, in ms. With these two options, if the user invokes qemu
      with menu=on, a splash picture is shown for the given time. For example:
          qemu -boot menu=on,splash=/root/boot.bmp,splash-time=5000
      shows boot.bmp for 5 seconds during boot-up. This feature requires the
      new SeaBIOS support, which can be obtained from git.
      Signed-off-by: Wayne Xia <xiawenc@linux.vnet.ibm.com>
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
      3d3b8303
  24. 16 Jun, 2011 — 1 commit
  25. 08 May, 2011 — 1 commit
  26. 16 Apr, 2011 — 4 commits