1. 19 May 2016, 6 commits
  2. 12 May 2016, 9 commits
  3. 30 March 2016, 4 commits
  4. 17 March 2016, 4 commits
  5. 14 March 2016, 1 commit
      hmp: 'drive_add -n' for creating a node without BB · abb21ac3
      Committed by Kevin Wolf
      This patch adds an option to the drive_add HMP command to create only a
      BlockDriverState without a BlockBackend on top.
      
      The motivation for this is that libvirt needs to specify options to a
      migration target (specifically, detect-zeroes). drive-mirror doesn't
      allow specifying options, and the proper way to do this is to create the
      target BDS separately with blockdev-add (where you can specify options)
      and then use blockdev-mirror to that BDS.
      
      However, libvirt can't use blockdev-add as long as it is still
      experimental, and stabilising it is expected to take some more
      time, so in the meantime we need to resort to drive_add.
      
      The problem with drive_add is that so far it has always created a
      BB, and a BDS with a BB can't be used as a mirroring target as
      long as we don't support multiple BBs per BDS. We are working
      towards that goal, but it, too, will take some time.
      
      So the simplest way to provide the functionality now, without
      adding one-off options to the mirror QMP commands, is to extend
      drive_add so that it can create nodes without BBs (see the usage
      sketch after this entry).
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
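      A hypothetical invocation of the new flag, followed by the
      blockdev-mirror call the message describes. The dummy address
      argument, node name, file path, and option values are
      illustrative, not taken from the patch (HMP first, then QMP):

          (qemu) drive_add -n 0 file=/tmp/target.qcow2,node-name=target0,detect-zeroes=on

          { "execute": "blockdev-mirror",
            "arguments": { "device": "drive0", "target": "target0",
                           "sync": "full" } }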
  6. 03 February 2016, 5 commits
  7. 20 January 2016, 1 commit
      block: Inactivate BDS when migration completes · 76b1c7fe
      Committed by Kevin Wolf
      So far, live migration with shared storage meant that the image
      was in a not-really-ready, don't-touch-me state on the destination
      while the source was still actively using it, but after the
      migration completed, the image was fully opened on both sides.
      This is bad.
      
      This patch adds a block driver callback to inactivate images on
      the source before completing the migration. Inactivation means
      that the image goes into the same state as if it had just been
      live migrated to the source qemu instance (i.e. BDRV_O_INACTIVE
      is set). You're then supposed to continue either on the source or
      on the destination, and whichever side resumes takes ownership of
      the image.
      
      A typical migration looks like this now with respect to disk images:
      
      1. Destination qemu is started, the image is opened with
         BDRV_O_INACTIVE. The image is fully opened on the source.
      
      2. Migration is about to complete. The source flushes the image and
         inactivates it. Now both sides have the image opened with
         BDRV_O_INACTIVE and are expecting the other side to still modify it.
      
      3. One side (the destination on success) continues and calls
         bdrv_invalidate_all() in order to take ownership of the image again.
         This removes BDRV_O_INACTIVE on the resuming side; the flag remains
         set on the other side.
      
      This ensures that the same image isn't written to by both
      instances (unless both are resumed, but then you get what you
      deserve). This is important because .bdrv_close for
      non-BDRV_O_INACTIVE images could write to the image file, which is
      definitely forbidden while another host is using the image. (A toy
      model of this handoff follows this entry.)
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: John Snow <jsnow@redhat.com>
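      A minimal, self-contained C model of the three-step handoff
      described above. Only the flag name BDRV_O_INACTIVE comes from
      QEMU; the struct, functions, and flag value are simplified
      stand-ins, not the real implementation:

          #include <stdio.h>

          #define BDRV_O_INACTIVE 0x0800  /* real flag name; value illustrative */

          typedef struct BlockDriverState {
              const char *host;
              int open_flags;
          } BlockDriverState;

          /* Step 2: the source flushes the image and marks it inactive. */
          static void bdrv_inactivate(BlockDriverState *bs)
          {
              bs->open_flags |= BDRV_O_INACTIVE;
          }

          /* Step 3: the resuming side takes ownership of the image again. */
          static void bdrv_invalidate(BlockDriverState *bs)
          {
              bs->open_flags &= ~BDRV_O_INACTIVE;
          }

          /* Writes are refused while the other host may own the image. */
          static void bdrv_write(BlockDriverState *bs)
          {
              if (bs->open_flags & BDRV_O_INACTIVE) {
                  printf("%s: write refused (BDRV_O_INACTIVE set)\n", bs->host);
              } else {
                  printf("%s: write allowed\n", bs->host);
              }
          }

          int main(void)
          {
              /* Step 1: destination opens the image inactive. */
              BlockDriverState src = { "source", 0 };
              BlockDriverState dst = { "destination", BDRV_O_INACTIVE };

              bdrv_inactivate(&src);  /* step 2: migration about to complete */
              bdrv_write(&src);       /* refused: both sides inactive now */
              bdrv_write(&dst);

              bdrv_invalidate(&dst);  /* step 3: destination resumes */
              bdrv_write(&dst);       /* allowed: destination owns the image */
              return 0;
          }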
  8. 22 December 2015, 1 commit
  9. 18 December 2015, 6 commits
  10. 17 December 2015, 1 commit
  11. 12 November 2015, 2 commits