1. 31 May 2017 (1 commit)
  2. 23 May 2017 (1 commit)
    • shutdown: Prepare for use of an enum in reset/shutdown_request · aedbe192
      Committed by Eric Blake
      We want to track why a guest was shut down; in particular, being able
      to tell the difference between a guest request (such as an ACPI
      request) and a host request (such as SIGINT) will prove useful to
      libvirt. Since all requests eventually end up changing
      shutdown_requested in vl.c, the logical change is to make that value
      track the reason, rather than its current 0/1 contents.
      
      Since command-line options control whether a reset request is turned
      into a shutdown request instead, the same treatment is given to
      reset_requested.
      
      This patch adds an internal enum ShutdownCause that describes reasons
      that a shutdown can be requested, and changes qemu_system_reset() to
      pass the reason through, although for now nothing is actually changed
      with regard to what gets reported.  The enum could be exported via
      QAPI at a later date, if deemed necessary, but for now, there has not
      been a request to expose that much detail to end clients.
      
      For the most part, we turn 0 into SHUTDOWN_CAUSE_NONE, and 1 into
      SHUTDOWN_CAUSE_HOST_ERROR; the only specific case where we have enough
      information right now to use a different value is when we are reacting
      to a host signal.  It will take a further patch to edit all call-sites
      that can trigger a reset or shutdown request to properly pass in any
      other reasons; this patch includes TODOs to point such places out.
      
      qemu_system_reset() trades its 'bool report' parameter for a
      'ShutdownCause reason', with all non-zero values having the same
      effect; this lets us get rid of the weird #defines for VMRESET_*
      as synonyms for bools.
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20170515214114.15442-3-eblake@redhat.com>
      Reviewed-by: Markus Armbruster <armbru@redhat.com>
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
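The variable change described above can be sketched roughly as follows. This is a minimal illustration, not the actual vl.c code; only the NONE and HOST_ERROR values and the host-signal case are taken from the commit message, so treat the exact enumerator spellings as assumptions:

```c
#include <assert.h>

/* Sketch of the internal ShutdownCause enum the commit introduces. */
typedef enum ShutdownCause {
    SHUTDOWN_CAUSE_NONE,        /* was 0: no request pending */
    SHUTDOWN_CAUSE_HOST_ERROR,  /* was 1: generic host-side cause */
    SHUTDOWN_CAUSE_HOST_SIGNAL  /* e.g. SIGINT delivered to QEMU */
} ShutdownCause;

/* The vl.c flag now records the reason instead of a bare 0/1. */
static ShutdownCause shutdown_requested = SHUTDOWN_CAUSE_NONE;

/* The old boolean test becomes "is any cause recorded?". */
static int shutdown_is_requested(void)
{
    return shutdown_requested != SHUTDOWN_CAUSE_NONE;
}

static void qemu_system_shutdown_request(ShutdownCause reason)
{
    shutdown_requested = reason;
}
```

Call sites that previously stored 1 now pass a specific cause, and everything that only cared about "pending or not" keeps working unchanged.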
  3. 19 May 2017 (5 commits)
  4. 17 May 2017 (2 commits)
  5. 11 May 2017 (2 commits)
    • block: New BdrvChildRole.activate() for blk_resume_after_migration() · 4417ab7a
      Committed by Kevin Wolf
      Instead of manually calling blk_resume_after_migration() in migration
      code after doing bdrv_invalidate_cache_all(), integrate the BlockBackend
      activation with cache invalidation into a single function. This is
      achieved with a new callback in BdrvChildRole that is called by
      bdrv_invalidate_cache_all().
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
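The callback wiring can be sketched like this; a simplified model, not the real BdrvChildRole (which carries many more callbacks and state), with a toy parent list and an opaque flag standing in for BlockBackend activation:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-in for the child-role vtable; the new callback from
 * this commit is ->activate(). */
typedef struct BdrvChildRole {
    void (*activate)(void *opaque);
} BdrvChildRole;

typedef struct BdrvChild {
    const BdrvChildRole *role;
    void *opaque;
    struct BdrvChild *next;
} BdrvChild;

/* Stand-in for cache invalidation walking all nodes: after invalidating
 * a node, it fires each parent's ->activate() callback, so BlockBackends
 * are resumed without a separate blk_resume_after_migration() call in
 * the migration code. */
static void bdrv_invalidate_cache_all(BdrvChild *parents)
{
    for (BdrvChild *c = parents; c; c = c->next) {
        if (c->role->activate) {
            c->role->activate(c->opaque);
        }
    }
}

/* Example role implementation: mark the backend as resumed. */
static void blk_role_activate(void *opaque)
{
    *(int *)opaque = 1;
}

static const BdrvChildRole child_root = { .activate = blk_role_activate };
```

The point of the design is that activation becomes a property of the parent/child relationship, so any parent type can hook into invalidation the same way.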
    • migration: Unify block node activation error handling · ace21a58
      Committed by Kevin Wolf
      Migration code activates all block driver nodes on the destination when
      the migration completes. It does so by calling
      bdrv_invalidate_cache_all() and blk_resume_after_migration(). There is
      one code path for precopy and one for postcopy migration, resulting in
      four function calls, which used to have three different failure modes.
      
      This patch unifies the behaviour so that failure to activate all block
      nodes is non-fatal, but the error message is logged and the VM isn't
      automatically started. 'cont' will retry activating the block nodes.
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
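The unified completion behaviour can be sketched as follows; this is a constructed stand-in with hypothetical helper names (activate_all_block_nodes, migration_complete), not the actual migration code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

static bool vm_running;

/* Stand-in for activating all block nodes on the destination;
 * returns false on failure (the 'fail' flag simulates an error). */
static bool activate_all_block_nodes(bool fail)
{
    return !fail;
}

/* Unified completion path per the commit message: activation failure
 * is non-fatal, the error is logged, and autostart is suppressed so
 * 'cont' can retry activating the block nodes later. */
static void migration_complete(bool activation_fails, bool autostart)
{
    if (!activate_all_block_nodes(activation_fails)) {
        fprintf(stderr, "block node activation failed; not starting VM\n");
        return; /* VM stays paused instead of aborting */
    }
    if (autostart) {
        vm_running = true;
    }
}
```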
  6. 04 May 2017 (5 commits)
  7. 27 Apr 2017 (1 commit)
  8. 21 Apr 2017 (4 commits)
  9. 28 Feb 2017 (3 commits)
  10. 14 Feb 2017 (1 commit)
  11. 06 Feb 2017 (4 commits)
  12. 28 Jan 2017 (1 commit)
  13. 25 Jan 2017 (1 commit)
  14. 10 Jan 2017 (1 commit)
    • migration: allow to prioritize save state entries · f37bc036
      Committed by Peter Xu
      During migration, save state entries are saved/loaded without a specific
      order - we just traverse the savevm_state.handlers list and do it one by
      one. This might not be enough.
      
      There are requirements that we need to load specific device's vmstate
      first before others. For example, VT-d IOMMU contains DMA address
      remapping information, which is required by all the PCI devices to do
      address translations. We need to make sure IOMMU's device state is
      loaded before the rest of the PCI devices, so that DMA address
      translation can work properly.
      
      This patch provides a VMStateDescription.priority value that allows
      specifying the priority of the saved states. During loadvm, devices
      with higher vmsd priority are loaded first.
      
      Before this patch, the ordering requirement was possibly met only by
      the implicit assumption that entries are loaded in the same order in
      which the objects were created. A better way is to mark the priority
      explicitly in the VMStateDescription table, as this patch does.
      
      The current ordering logic is still naive and slow, but it is not on
      a critical path, so it is a workable solution for now.
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
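The naive ordering the commit message admits to might look like the sketch below; the MIG_PRI_* names follow the commit's intent (IOMMU before PCI devices), and the real savevm code keeps a richer SaveStateEntry list, so take the details as assumptions:

```c
#include <assert.h>
#include <string.h>

/* Sketch of the priority field added to VMStateDescription. */
typedef enum MigrationPriority {
    MIG_PRI_DEFAULT = 0,
    MIG_PRI_IOMMU       /* must be loaded before the PCI devices */
} MigrationPriority;

typedef struct VMStateDescription {
    const char *name;
    MigrationPriority priority;
} VMStateDescription;

/* Naive O(n^2) ordering pass, matching the commit's "naive and slow
 * but off the critical path" note: repeatedly pick the highest-priority
 * entry not yet taken (ties keep list order). Assumes n <= 16. */
static void loadvm_order(const VMStateDescription **in, int n,
                         const VMStateDescription **out)
{
    int taken[16] = {0};
    assert(n <= 16);
    for (int o = 0; o < n; o++) {
        int best = -1;
        for (int i = 0; i < n; i++) {
            if (!taken[i] &&
                (best < 0 || in[i]->priority > in[best]->priority)) {
                best = i;
            }
        }
        taken[best] = 1;
        out[o] = in[best];
    }
}
```

With an intel-iommu vmsd marked MIG_PRI_IOMMU, it is pulled ahead of plain PCI device entries regardless of registration order.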
  15. 27 Oct 2016 (1 commit)
  16. 24 Oct 2016 (1 commit)
  17. 13 Oct 2016 (1 commit)
  18. 13 Jul 2016 (2 commits)
    • hmp: show all of snapshot info on every block dev in output of 'info snapshots' · 0c204cc8
      Committed by Lin Ma
      Currently, the output of 'info snapshots' shows only fully available
      snapshots. This is opaque: it hides some snapshot information from
      users, and it is inconvenient when users want to inspect all of the
      snapshot information on every block device via the monitor.
      
      Following Kevin's and Max's proposals, the patch makes the output more detailed:
      (qemu) info snapshots
      List of snapshots present on all disks:
       ID        TAG                 VM SIZE                DATE       VM CLOCK
       --        checkpoint-1           165M 2016-05-22 16:58:07   00:02:06.813
      
      List of partial (non-loadable) snapshots on 'drive_image1':
       ID        TAG                 VM SIZE                DATE       VM CLOCK
       1         snap1                     0 2016-05-22 16:57:31   00:01:30.567
      Signed-off-by: Lin Ma <lma@suse.com>
      Message-id: 1467869164-26688-3-git-send-email-lma@suse.com
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>
    • hmp: use snapshot name to determine whether a snapshot is 'fully available' · 3a1ee711
      Committed by Lin Ma
      Currently QEMU uses the snapshot ID to determine whether a snapshot
      is fully available, which causes incorrect output in some scenarios.
      
      For instance:
      (qemu) info block
      drive_image1 (#block113): /opt/vms/SLES12-SP1-JeOS-x86_64-GM/disk0.qcow2
      (qcow2)
          Cache mode:       writeback
      
      drive_image2 (#block349): /opt/vms/SLES12-SP1-JeOS-x86_64-GM/disk1.qcow2
      (qcow2)
          Cache mode:       writeback
      (qemu)
      (qemu) info snapshots
      There is no snapshot available.
      (qemu)
      (qemu) snapshot_blkdev_internal drive_image1 snap1
      (qemu)
      (qemu) info snapshots
      There is no suitable snapshot available
      (qemu)
      (qemu) savevm checkpoint-1
      (qemu)
      (qemu) info snapshots
      ID        TAG                 VM SIZE                DATE       VM CLOCK
      1         snap1                     0 2016-05-22 16:57:31   00:01:30.567
      (qemu)
      
      $ qemu-img snapshot -l disk0.qcow2
      Snapshot list:
      ID        TAG                 VM SIZE                DATE       VM CLOCK
      1         snap1                     0 2016-05-22 16:57:31   00:01:30.567
      2         checkpoint-1           165M 2016-05-22 16:58:07   00:02:06.813
      
      $ qemu-img snapshot -l disk1.qcow2
      Snapshot list:
      ID        TAG                 VM SIZE                DATE       VM CLOCK
      1         checkpoint-1              0 2016-05-22 16:58:07   00:02:06.813
      
      The patch uses the snapshot name instead of the snapshot ID to
      determine whether a snapshot is fully available, and it prints '--'
      instead of a snapshot ID in the output, because the snapshot ID is
      not guaranteed to be the same on all images.
      For instance:
      (qemu) info snapshots
      List of snapshots present on all disks:
       ID        TAG                 VM SIZE                DATE       VM CLOCK
       --        checkpoint-1           165M 2016-05-22 16:58:07   00:02:06.813
      Signed-off-by: Lin Ma <lma@suse.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Message-id: 1467869164-26688-2-git-send-email-lma@suse.com
      Signed-off-by: Max Reitz <mreitz@redhat.com>
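The name-based check can be sketched as below; a simplified model in which a snapshot is "fully available" only if a snapshot with the same tag exists on every device, since IDs may differ between images. The struct here is a cut-down stand-in, not the real QEMUSnapshotInfo:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Cut-down snapshot record: only the tag matters for this check. */
typedef struct QEMUSnapshotInfo {
    char tag[256];
} QEMUSnapshotInfo;

typedef struct BlockDev {
    const QEMUSnapshotInfo *snaps;
    int nb_snaps;
} BlockDev;

static bool dev_has_snapshot_by_tag(const BlockDev *dev, const char *tag)
{
    for (int i = 0; i < dev->nb_snaps; i++) {
        if (!strcmp(dev->snaps[i].tag, tag)) {
            return true;
        }
    }
    return false;
}

/* A snapshot is fully available (loadable) only when every device
 * carries a snapshot with the same tag; matching by ID would wrongly
 * pair unrelated snapshots, as in the disk0/disk1 example above. */
static bool snapshot_fully_available(const BlockDev *devs, int nb_devs,
                                     const char *tag)
{
    for (int i = 0; i < nb_devs; i++) {
        if (!dev_has_snapshot_by_tag(&devs[i], tag)) {
            return false;
        }
    }
    return true;
}
```

Replaying the commit's example, 'checkpoint-1' exists on both disks (as ID 2 and ID 1 respectively) and is fully available, while 'snap1' exists only on disk0 and is merely partial.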
  19. 17 Jun 2016 (2 commits)
  20. 13 Jun 2016 (1 commit)