1. 06 Feb 2017, 1 commit
  2. 28 Jan 2017, 1 commit
  3. 25 Jan 2017, 1 commit
  4. 10 Jan 2017, 1 commit
    •
      migration: allow to prioritize save state entries · f37bc036
      Authored by Peter Xu
      During migration, save state entries are saved/loaded without a specific
      order - we just traverse the savevm_state.handlers list and do it one by
      one. This might not be enough.
      
      There are requirements that we need to load specific device's vmstate
      first before others. For example, VT-d IOMMU contains DMA address
      remapping information, which is required by all the PCI devices to do
      address translations. We need to make sure IOMMU's device state is
      loaded before the rest of the PCI devices, so that DMA address
      translation can work properly.
      
      This patch provides a VMStateDescription.priority field to specify the
      priority of save state entries. The loadvm operation loads devices with a
      higher vmsd priority first.
      
      Before this patch, the ordering requirement was possibly met only through
      the assumption that the load order matches the order in which objects are
      created. A better way is to mark the ordering explicitly in the
      VMStateDescription table, as this patch does.
      
      The current ordering logic is still naive and slow, but since this is not
      a critical path, it is a workable solution for now.
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  5. 27 Oct 2016, 1 commit
  6. 24 Oct 2016, 1 commit
  7. 13 Oct 2016, 1 commit
  8. 13 Jul 2016, 2 commits
    •
      hmp: show all of snapshot info on every block dev in output of 'info snapshots' · 0c204cc8
      Authored by Lin Ma
      Currently, the output of 'info snapshots' shows only fully available
      snapshots. This is opaque and hides some snapshot information from users,
      which is inconvenient when users want to see the full snapshot
      information for every block device via the monitor.
      
      Following Kevin's and Max's proposals, this patch makes the output more detailed:
      (qemu) info snapshots
      List of snapshots present on all disks:
       ID        TAG                 VM SIZE                DATE       VM CLOCK
       --        checkpoint-1           165M 2016-05-22 16:58:07   00:02:06.813
      
      List of partial (non-loadable) snapshots on 'drive_image1':
       ID        TAG                 VM SIZE                DATE       VM CLOCK
       1         snap1                     0 2016-05-22 16:57:31   00:01:30.567
      Signed-off-by: Lin Ma <lma@suse.com>
      Message-id: 1467869164-26688-3-git-send-email-lma@suse.com
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>
    •
      hmp: use snapshot name to determine whether a snapshot is 'fully available' · 3a1ee711
      Authored by Lin Ma
      Currently QEMU uses the snapshot ID to determine whether a snapshot is
      fully available, which causes incorrect output in some scenarios.
      
      For instance:
      (qemu) info block
      drive_image1 (#block113): /opt/vms/SLES12-SP1-JeOS-x86_64-GM/disk0.qcow2
      (qcow2)
          Cache mode:       writeback
      
      drive_image2 (#block349): /opt/vms/SLES12-SP1-JeOS-x86_64-GM/disk1.qcow2
      (qcow2)
          Cache mode:       writeback
      (qemu)
      (qemu) info snapshots
      There is no snapshot available.
      (qemu)
      (qemu) snapshot_blkdev_internal drive_image1 snap1
      (qemu)
      (qemu) info snapshots
      There is no suitable snapshot available
      (qemu)
      (qemu) savevm checkpoint-1
      (qemu)
      (qemu) info snapshots
      ID        TAG                 VM SIZE                DATE       VM CLOCK
      1         snap1                     0 2016-05-22 16:57:31   00:01:30.567
      (qemu)
      
      $ qemu-img snapshot -l disk0.qcow2
      Snapshot list:
      ID        TAG                 VM SIZE                DATE       VM CLOCK
      1         snap1                     0 2016-05-22 16:57:31   00:01:30.567
      2         checkpoint-1           165M 2016-05-22 16:58:07   00:02:06.813
      
      $ qemu-img snapshot -l disk1.qcow2
      Snapshot list:
      ID        TAG                 VM SIZE                DATE       VM CLOCK
      1         checkpoint-1              0 2016-05-22 16:58:07   00:02:06.813
      
      This patch uses the snapshot name instead of the snapshot ID to determine
      whether a snapshot is fully available, and prints '--' instead of the
      snapshot ID in the output, because the snapshot ID is not guaranteed to
      be the same across all images.
      For instance:
      (qemu) info snapshots
      List of snapshots present on all disks:
       ID        TAG                 VM SIZE                DATE       VM CLOCK
       --        checkpoint-1           165M 2016-05-22 16:58:07   00:02:06.813
      Signed-off-by: Lin Ma <lma@suse.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Message-id: 1467869164-26688-2-git-send-email-lma@suse.com
      Signed-off-by: Max Reitz <mreitz@redhat.com>
  9. 17 Jun 2016, 2 commits
  10. 13 Jun 2016, 1 commit
  11. 26 May 2016, 3 commits
  12. 24 May 2016, 1 commit
    •
      savevm: fail if migration blockers are present · 24f3902b
      Authored by Greg Kurz
      QEMU currently has two ways to prevent migration from occurring:
      - migration blocker when it depends on runtime state
      - VMStateDescription.unmigratable when migration is not supported at all
      
      This patch gathers all the logic into a single function that is called
      from both the savevm and the migrate paths.
      
      This fixes a bug with 9p, at least, where savevm would succeed and the
      following would happen in the guest after loadvm:
      
      $ ls /host
      ls: cannot access /host: Protocol error
      
      With this patch:
      
      (qemu) savevm foo
      Migration is disabled when VirtFS export path '/' is mounted in the guest
      using mount_tag 'host'
      Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
      Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
      Message-Id: <146239057139.11271.9011797645454781543.stgit@bahia.huguette.org>
      
      [Update subject according to Paolo's suggestion - Amit]
      Signed-off-by: Amit Shah <amit.shah@redhat.com>
  13. 23 May 2016, 2 commits
  14. 19 May 2016, 1 commit
  15. 23 Mar 2016, 1 commit
  16. 11 Mar 2016, 2 commits
  17. 28 Feb 2016, 1 commit
  18. 26 Feb 2016, 2 commits
    •
      migration (postcopy): move bdrv_invalidate_cache_all out of coroutine context · ea6a55bc
      Authored by Denis V. Lunev
      There is a possibility of hitting an assert in qcow2_get_specific_info
      because s->qcow_version is undefined. This happens when a VM is starting
      from a suspended state, i.e. it is processing an incoming migration,
      while 'info block' is called at the same time.
      
      The problem is that qcow2_invalidate_cache() closes the image and
      memset()s BDRVQcowState in the middle.
      
      The patch moves the processing of bdrv_invalidate_cache_all out of
      coroutine context for postcopy migration to avoid this. The function
      is called with the following stack:
        process_incoming_migration_co
        qemu_loadvm_state
        qemu_loadvm_state_main
        loadvm_process_command
        loadvm_postcopy_handle_run
      Signed-off-by: Denis V. Lunev <den@openvz.org>
      Tested-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Reviewed-by: Fam Zheng <famz@redhat.com>
      CC: Paolo Bonzini <pbonzini@redhat.com>
      CC: Juan Quintela <quintela@redhat.com>
      CC: Amit Shah <amit.shah@redhat.com>
      Message-Id: <1456304019-10507-3-git-send-email-den@openvz.org>
      Signed-off-by: Amit Shah <amit.shah@redhat.com>
    •
      migration: reorder code to make it symmetric · bdf46d64
      Authored by Wei Yang
      In qemu_savevm_state_complete_precopy(), QEMU iterates over each device,
      adding a JSON object and transferring the related state to the
      destination; the order of the last two steps can be refined.
      
      Current order:
      
          json_start_object()
          	save_section_header()
          	vmstate_save()
          json_end_object()
          	save_section_footer()
      
      After the change:
      
          json_start_object()
          	save_section_header()
          	vmstate_save()
          	save_section_footer()
          json_end_object()
      
      This patch reorders the code to make it symmetric. No functional change.
      Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
      Reviewed-by: Amit Shah <amit.shah@redhat.com>
      Message-Id: <1454626230-16334-1-git-send-email-richard.weiyang@gmail.com>
      Signed-off-by: Amit Shah <amit.shah@redhat.com>
  19. 05 Feb 2016, 3 commits
  20. 29 Jan 2016, 1 commit
  21. 13 Jan 2016, 4 commits
  22. 19 Nov 2015, 7 commits