1. 26 May 2016, 3 commits
  2. 24 May 2016, 2 commits
    • migration: regain control of images when migration fails to complete · fe904ea8
      Authored by Greg Kurz
      We currently have an error path during migration that can cause
      the source QEMU to abort:
      
      migration_thread()
        migration_completion()
          runstate_is_running() ----------------> true if guest is running
          bdrv_inactivate_all() ----------------> inactivate images
          qemu_savevm_state_complete_precopy()
           ... qemu_fflush()
                 socket_writev_buffer() --------> error because destination fails
               qemu_fflush() -------------------> set error on migration stream
        migration_completion() -----------------> set migrate state to FAILED
      migration_thread() -----------------------> break migration loop
        vm_start() -----------------------------> restart guest with inactive
                                                  images
      
      and you get:
      
      qemu-system-ppc64: socket_writev_buffer: Got err=104 for (32768/18446744073709551615)
      qemu-system-ppc64: /home/greg/Work/qemu/qemu-master/block/io.c:1342:bdrv_co_do_pwritev: Assertion `!(bs->open_flags & 0x0800)' failed.
      Aborted (core dumped)
      
      If we try postcopy with a similar scenario, we also get the writev error
      message but QEMU leaves the guest paused because entered_postcopy is true.
      
      We could possibly do the same with precopy and leave the guest paused.
      But since the historical default for migration errors is to restart the
      source, this patch adds a call to bdrv_invalidate_cache_all() instead.
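      The failure sequence and the fix can be condensed into a tiny,
      self-contained C simulation. The `*_sim` names below are illustrative
      stand-ins for QEMU's real functions, not its API:

      ```c
      /* Minimal sketch (not QEMU code) of the failure path this patch fixes. */
      #include <assert.h>
      #include <stdbool.h>
      #include <stdio.h>

      static bool images_inactive;

      static void bdrv_inactivate_all_sim(void)       { images_inactive = true; }
      static void bdrv_invalidate_cache_all_sim(void) { images_inactive = false; }

      /* Returns false to model the destination closing the socket mid-completion. */
      static bool migration_completion_sim(void)
      {
          bdrv_inactivate_all_sim();  /* source gives up image ownership */
          return false;               /* qemu_fflush() hits a socket error */
      }

      static void vm_start_sim(void)
      {
          /* Writing to an inactive image trips the bdrv_co_do_pwritev assertion;
           * images must have been re-activated before the guest runs again. */
          assert(!images_inactive);
      }

      static void migration_thread_sim(void)
      {
          if (!migration_completion_sim()) {
              /* The fix: regain control of the images before restarting. */
              bdrv_invalidate_cache_all_sim();
              vm_start_sim();
          }
      }

      int main(void)
      {
          migration_thread_sim();
          printf("guest restarted with active images\n");
          return 0;
      }
      ```

      Without the `bdrv_invalidate_cache_all_sim()` call, the assertion in
      `vm_start_sim()` fires, mirroring the abort shown above.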
      Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
      Message-Id: <146357896785.6003.11983081732454362715.stgit@bahia.huguette.org>
      Signed-off-by: Amit Shah <amit.shah@redhat.com>
    • savevm: fail if migration blockers are present · 24f3902b
      Authored by Greg Kurz
      QEMU currently has two ways to prevent migration from occurring:
      - migration blocker when it depends on runtime state
      - VMStateDescription.unmigratable when migration is not supported at all
      
      This patch gathers all the logic into a single function to be called from
      both the savevm and the migrate paths.
      
      This fixes a bug with 9p, at least, where savevm would succeed and the
      following would happen in the guest after loadvm:
      
      $ ls /host
      ls: cannot access /host: Protocol error
      
      With this patch:
      
      (qemu) savevm foo
      Migration is disabled when VirtFS export path '/' is mounted in the guest
      using mount_tag 'host'
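      The "single function" the patch describes can be sketched as a gate
      consulted by both paths; all names here are hypothetical illustrations,
      not QEMU's actual identifiers:

      ```c
      /* Sketch of gathering the two checks into one gate shared by savevm
       * and migrate; names are illustrative, not QEMU's. */
      #include <assert.h>
      #include <stdbool.h>
      #include <stddef.h>
      #include <stdio.h>

      struct blocker { const char *reason; struct blocker *next; };

      static struct blocker *migration_blockers;  /* runtime-state blockers */
      static bool vmsd_unmigratable;              /* VMStateDescription.unmigratable */

      /* Single entry point: NULL means migration may proceed, else a reason. */
      static const char *migration_is_blocked(void)
      {
          if (vmsd_unmigratable) {
              return "device state is not migratable";
          }
          if (migration_blockers) {
              return migration_blockers->reason;
          }
          return NULL;
      }

      int main(void)
      {
          struct blocker ninep = {
              "Migration is disabled when VirtFS export path '/' is mounted "
              "in the guest using mount_tag 'host'", NULL
          };

          assert(migration_is_blocked() == NULL);  /* nothing registered yet */

          migration_blockers = &ninep;             /* 9p adds a runtime blocker */
          assert(migration_is_blocked() != NULL);  /* savevm and migrate both fail */
          printf("blocked: %s\n", migration_is_blocked());
          return 0;
      }
      ```

      The point of the design is that savevm can no longer succeed where
      migrate would have refused: both consult the same predicate.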
      Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
      Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
      Message-Id: <146239057139.11271.9011797645454781543.stgit@bahia.huguette.org>
      
      [Update subject according to Paolo's suggestion - Amit]
      Signed-off-by: Amit Shah <amit.shah@redhat.com>
  3. 23 May 2016, 4 commits
  4. 19 May 2016, 2 commits
  5. 18 May 2016, 1 commit
  6. 23 Mar 2016, 2 commits
    • util: move declarations out of qemu-common.h · f348b6d1
      Authored by Veronia Bahaa
      Move declarations out of qemu-common.h for functions declared in
      utils/ files: e.g. include/qemu/path.h for utils/path.c.
      Move inline functions out of qemu-common.h into new files (e.g.
      include/qemu/bcd.h).
      Signed-off-by: Veronia Bahaa <veroniabahaa@gmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • include/qemu/osdep.h: Don't include qapi/error.h · da34e65c
      Authored by Markus Armbruster
      Commit 57cb38b3 included qapi/error.h into qemu/osdep.h to get the
      Error typedef.  Since then, we've moved to include qemu/osdep.h
      everywhere.  Its file comment explains: "To avoid getting into
      possible circular include dependencies, this file should not include
      any other QEMU headers, with the exceptions of config-host.h,
      compiler.h, os-posix.h and os-win32.h, all of which are doing a
      similar job to this file and are under similar constraints."
      qapi/error.h doesn't do a similar job, and it doesn't adhere to
      similar constraints: it includes qapi-types.h.  That's in excess of
      100KiB of crap most .c files don't actually need.
      
      Add the typedef to qemu/typedefs.h, and include that instead of
      qapi/error.h.  Include qapi/error.h in .c files that need it and don't
      get it now.  Include qapi-types.h in qom/object.h for uint16List.
      
      Update scripts/clean-includes accordingly.  Update it further to match
      reality: replace config.h by config-target.h, add sysemu/os-posix.h,
      sysemu/os-win32.h.  Update the list of includes in the qemu/osdep.h
      comment quoted above similarly.
      
      This reduces the number of objects depending on qapi/error.h from "all
      of them" to less than a third.  Unfortunately, the number depending on
      qapi-types.h shrinks only a little.  More work is needed for that one.
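      The dependency trick relied on here is that a struct can be
      forward-declared with a bare typedef, so most headers never need the
      full definition. A minimal stand-alone illustration (stand-ins for
      qemu/typedefs.h vs. the heavyweight qapi/error.h; the struct layout
      below is invented for the example):

      ```c
      /* Forward-typedef pattern: enough to declare pointers and prototypes
       * without pulling in the full definition. */
      #include <assert.h>
      #include <stddef.h>

      typedef struct Error Error;   /* what qemu/typedefs.h provides */

      void report(Error **errp);    /* compiles with the typedef alone */

      struct Error {                /* full definition, normally only in the
                                     * heavyweight header */
          const char *msg;
      };

      void report(Error **errp)
      {
          if (errp && *errp == NULL) {
              static Error example = { "example error" };
              *errp = &example;
          }
      }

      int main(void)
      {
          Error *err = NULL;
          report(&err);
          assert(err && err->msg);
          return 0;
      }
      ```

      Only the few .c files that dereference the struct need the full header,
      which is what shrinks the rebuild fan-out.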
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      [Fix compilation without the spice devel packages. - Paolo]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  7. 11 Mar 2016, 5 commits
  8. 08 Mar 2016, 1 commit
  9. 28 Feb 2016, 1 commit
  10. 26 Feb 2016, 4 commits
  11. 25 Feb 2016, 1 commit
  12. 23 Feb 2016, 1 commit
    • Postcopy+spice: Pass spice migration data earlier · b82fc321
      Authored by Dr. David Alan Gilbert
      Spice hooks the migration status changes to figure out when to
      transmit information to the new spice server; but the migration
      status in postcopy doesn't quite fit - the destination starts
      running before the end of the source migration.
      
      It's not a case of hanging off the migration status change to
      postcopy-active either, since that happens before we stop the
      guest CPU.
      
      Fix it by sending a notify just after sending the device state,
      and adding a flag that can be tested by the notify receiver.
      
      Symptom:
         spice handover doesn't work with the error:
         red_worker.c:11540:display_channel_wait_for_migrate_data: timeout
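      The shape of the fix — a notifier fired just after the device state is
      sent, plus a flag the receiver tests — can be sketched as follows; the
      names are illustrative stand-ins, not QEMU's or spice's actual symbols:

      ```c
      /* Sketch of the notify-after-device-state pattern described above. */
      #include <assert.h>
      #include <stdbool.h>
      #include <stdio.h>

      static bool migration_in_postcopy_after_devices;  /* flag added by the fix */
      static bool spice_data_sent;

      /* Receiver: the spice notifier checks the flag instead of relying on
       * status transitions, which fire too early for postcopy. */
      static void spice_migration_notify(void)
      {
          if (migration_in_postcopy_after_devices) {
              spice_data_sent = true;  /* safe: source CPUs already stopped */
          }
      }

      /* Source side: send device state, then raise the flag and notify. */
      static void postcopy_start_sim(void)
      {
          /* ... device state is serialized and sent here ... */
          migration_in_postcopy_after_devices = true;
          spice_migration_notify();
      }

      int main(void)
      {
          postcopy_start_sim();
          assert(spice_data_sent);  /* handover data reaches the new server */
          printf("spice migration data passed\n");
          return 0;
      }
      ```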
      Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Reviewed-by: Amit Shah <amit.shah@redhat.com>
      Message-id: 1456161452-25318-1-git-send-email-dgilbert@redhat.com
      Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
  13. 22 Feb 2016, 1 commit
  14. 16 Feb 2016, 1 commit
  15. 11 Feb 2016, 1 commit
  16. 09 Feb 2016, 1 commit
    • memory: RCU ram_list.dirty_memory[] for safe RAM hotplug · 5b82b703
      Authored by Stefan Hajnoczi
      Although accesses to ram_list.dirty_memory[] use atomics so multiple
      threads can safely dirty the bitmap, the data structure is not fully
      thread-safe yet.
      
      This patch handles the RAM hotplug case where ram_list.dirty_memory[] is
      grown.  ram_list.dirty_memory[] is changed from a regular bitmap to an
      RCU array of pointers to fixed-size bitmap blocks.  Threads can continue
      accessing bitmap blocks while the array is being extended.  See the
      comments in the code for an in-depth explanation of struct
      DirtyMemoryBlocks.
      
      I have tested that live migration with virtio-blk dataplane works.
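      The key property of the new structure — growing replaces only the array
      of pointers while the bitmap blocks themselves never move — can be shown
      in a single-threaded sketch (block size and all names are illustrative;
      QEMU's real version publishes the new array under RCU):

      ```c
      /* Sketch of an array of pointers to fixed-size bitmap blocks. */
      #include <assert.h>
      #include <stdlib.h>
      #include <string.h>

      #define BLOCK_BITS 256  /* bits per block; QEMU's blocks are much larger */
      #define BITS_PER_LONG (8 * sizeof(unsigned long))

      typedef struct {
          size_t nblocks;
          unsigned long **blocks;  /* array of pointers to fixed-size blocks */
      } DirtyBlocks;

      static void grow(DirtyBlocks *d, size_t new_nblocks)
      {
          unsigned long **new_blocks = calloc(new_nblocks, sizeof(*new_blocks));
          /* Copy the old block *pointers*; the blocks themselves never move,
           * so a reader still holding one (under rcu_read_lock() in QEMU)
           * stays valid while the array is being extended. */
          if (d->nblocks) {
              memcpy(new_blocks, d->blocks, d->nblocks * sizeof(*new_blocks));
          }
          for (size_t i = d->nblocks; i < new_nblocks; i++) {
              new_blocks[i] = calloc(BLOCK_BITS / BITS_PER_LONG,
                                     sizeof(unsigned long));
          }
          /* QEMU publishes the new array atomically and frees the old one
           * after a grace period; this sketch just swaps and frees. */
          free(d->blocks);
          d->blocks = new_blocks;
          d->nblocks = new_nblocks;
      }

      static void set_dirty(DirtyBlocks *d, size_t bit)
      {
          unsigned long *block = d->blocks[bit / BLOCK_BITS];
          block[(bit % BLOCK_BITS) / BITS_PER_LONG] |= 1UL << (bit % BITS_PER_LONG);
      }

      int main(void)
      {
          DirtyBlocks d = { 0, NULL };
          grow(&d, 1);
          set_dirty(&d, 5);

          unsigned long *old_block0 = d.blocks[0];
          grow(&d, 4);                          /* RAM hotplug extends the bitmap */
          assert(d.blocks[0] == old_block0);    /* existing block did not move */
          assert(d.blocks[0][0] & (1UL << 5));  /* dirty bit survived the grow */
          return 0;
      }
      ```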
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
      Message-Id: <1453728801-5398-2-git-send-email-stefanha@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  17. 05 Feb 2016, 6 commits
  18. 29 Jan 2016, 1 commit
  19. 22 Jan 2016, 1 commit
  20. 20 Jan 2016, 1 commit
    • block: Inactivate BDS when migration completes · 76b1c7fe
      Authored by Kevin Wolf
      So far, live migration with shared storage meant that the image is in a
      not-really-ready don't-touch-me state on the destination while the
      source is still actively using it, but after completing the migration,
      the image was fully opened on both sides. This is bad.
      
      This patch adds a block driver callback to inactivate images on the
      source before completing the migration. Inactivation means that it goes
      to a state as if it had just been live-migrated to the qemu instance on
      the source (i.e. BDRV_O_INACTIVE is set). You're then supposed to continue
      either on the source or on the destination, which takes ownership of the
      image.
      
      A typical migration looks like this now with respect to disk images:
      
      1. Destination qemu is started, the image is opened with
         BDRV_O_INACTIVE. The image is fully opened on the source.
      
      2. Migration is about to complete. The source flushes the image and
         inactivates it. Now both sides have the image opened with
         BDRV_O_INACTIVE and are expecting the other side to still modify it.
      
      3. One side (the destination on success) continues and calls
         bdrv_invalidate_all() in order to take ownership of the image again.
         This removes BDRV_O_INACTIVE on the resuming side; the flag remains
         set on the other side.
      
      This ensures that the same image isn't written to by both instances
      (unless both are resumed, but then you get what you deserve). This is
      important because .bdrv_close for non-BDRV_O_INACTIVE images could write
      to the image file, which is definitely forbidden while another host is
      using the image.
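      The three-step ownership handshake can be condensed into a small
      simulation; the `*_sim` functions are hypothetical stand-ins (only the
      BDRV_O_INACTIVE value, 0x0800, matches the assertion shown earlier):

      ```c
      /* Sketch of the BDRV_O_INACTIVE ownership handshake described above. */
      #include <assert.h>
      #include <stdbool.h>

      #define BDRV_O_INACTIVE 0x0800

      typedef struct { int open_flags; } BDS;

      static void bdrv_inactivate_sim(BDS *bs) { bs->open_flags |= BDRV_O_INACTIVE; }
      static void bdrv_invalidate_sim(BDS *bs) { bs->open_flags &= ~BDRV_O_INACTIVE; }
      static bool bdrv_writable_sim(BDS *bs)   { return !(bs->open_flags & BDRV_O_INACTIVE); }

      int main(void)
      {
          BDS src = { 0 };                /* 1. fully open on the source */
          BDS dst = { BDRV_O_INACTIVE };  /*    destination starts inactive */

          bdrv_inactivate_sim(&src);      /* 2. migration about to complete */
          assert(!bdrv_writable_sim(&src) && !bdrv_writable_sim(&dst));

          bdrv_invalidate_sim(&dst);      /* 3. destination takes ownership */
          assert(bdrv_writable_sim(&dst));
          assert(!bdrv_writable_sim(&src));  /* only one side may write */
          return 0;
      }
      ```

      At no point in the happy path do both sides have the flag cleared, which
      is exactly the invariant the commit describes.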
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: John Snow <jsnow@redhat.com>