1. Jan 13, 2015 (1 commit)
      block: fix spoiling all dirty bitmaps by mirror and migration · c4237dfa
      Committed by Vladimir Sementsov-Ogievskiy
      Mirror and migration use dirty bitmaps for their purposes, and since
      commit [block: per caller dirty bitmap] they use their own bitmaps, not
      the global one. But they use the old functions bdrv_set_dirty and
      bdrv_reset_dirty, which change all dirty bitmaps.
      
      The named dirty bitmap series by Fam and Snow are affected: mirroring and
      migration will spoil all named dirty bitmaps not related to that
      mirroring or migration.
      
      This patch fixes this by adding bdrv_set_dirty_bitmap and
      bdrv_reset_dirty_bitmap, which change one specific bitmap. Also, to
      prevent such mistakes in the future, the old functions
      bdrv_(set,reset)_dirty are made static, for internal block-layer use only.
      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@parallels.com>
      CC: John Snow <jsnow@redhat.com>
      CC: Fam Zheng <famz@redhat.com>
      CC: Denis V. Lunev <den@openvz.org>
      CC: Stefan Hajnoczi <stefanha@redhat.com>
      CC: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: John Snow <jsnow@redhat.com>
      Reviewed-by: Fam Zheng <famz@redhat.com>
      Message-id: 1417081246-3593-1-git-send-email-vsementsov@parallels.com
      Signed-off-by: Max Reitz <mreitz@redhat.com>
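      The distinction the patch draws can be sketched in self-contained C. The structures below are heavily simplified stand-ins (a fixed bool array instead of HBitmap, no granularity handling), not QEMU's real definitions; only the function names mirror the commit.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Heavily simplified stand-ins for QEMU's real structures. */
typedef struct BdrvDirtyBitmap {
    bool dirty[64];                      /* one flag per sector */
    struct BdrvDirtyBitmap *next;        /* next caller's bitmap */
} BdrvDirtyBitmap;

typedef struct BlockDriverState {
    BdrvDirtyBitmap *dirty_bitmaps;      /* list of per-caller bitmaps */
} BlockDriverState;

/* New API: touch exactly one caller's bitmap. */
void bdrv_set_dirty_bitmap(BdrvDirtyBitmap *bitmap,
                           int64_t sector, int nb_sectors)
{
    for (int i = 0; i < nb_sectors; i++) {
        bitmap->dirty[sector + i] = true;
    }
}

void bdrv_reset_dirty_bitmap(BdrvDirtyBitmap *bitmap,
                             int64_t sector, int nb_sectors)
{
    for (int i = 0; i < nb_sectors; i++) {
        bitmap->dirty[sector + i] = false;
    }
}

/* Old behaviour, now static for internal block-layer use only:
 * a guest write must dirty every tracking bitmap. */
static void bdrv_set_dirty(BlockDriverState *bs,
                           int64_t sector, int nb_sectors)
{
    for (BdrvDirtyBitmap *bm = bs->dirty_bitmaps; bm; bm = bm->next) {
        bdrv_set_dirty_bitmap(bm, sector, nb_sectors);
    }
}
```

      A mirror job that calls bdrv_reset_dirty_bitmap on its own bitmap now leaves every other caller's bitmap untouched, which was the whole point of the fix.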
  2. Dec 16, 2014 (2 commits)
  3. Dec 12, 2014 (1 commit)
  4. Oct 20, 2014 (3 commits)
  5. Aug 20, 2014 (1 commit)
      block: Use g_new() & friends where that makes obvious sense · 5839e53b
      Committed by Markus Armbruster
      g_new(T, n) is neater than g_malloc(sizeof(T) * n).  It's also safer,
      for two reasons.  One, it catches multiplication overflowing size_t.
      Two, it returns T * rather than void *, which lets the compiler catch
      more type errors.
      
      Patch created with Coccinelle, with two manual changes on top:
      
      * Add const to bdrv_iterate_format() to keep the types straight
      
      * Convert the allocation in bdrv_drop_intermediate(), which Coccinelle
        inexplicably misses
      
      Coccinelle semantic patch:
      
          @@
          type T;
          @@
          -g_malloc(sizeof(T))
          +g_new(T, 1)
          @@
          type T;
          @@
          -g_try_malloc(sizeof(T))
          +g_try_new(T, 1)
          @@
          type T;
          @@
          -g_malloc0(sizeof(T))
          +g_new0(T, 1)
          @@
          type T;
          @@
          -g_try_malloc0(sizeof(T))
          +g_try_new0(T, 1)
          @@
          type T;
          expression n;
          @@
          -g_malloc(sizeof(T) * (n))
          +g_new(T, n)
          @@
          type T;
          expression n;
          @@
          -g_try_malloc(sizeof(T) * (n))
          +g_try_new(T, n)
          @@
          type T;
          expression n;
          @@
          -g_malloc0(sizeof(T) * (n))
          +g_new0(T, n)
          @@
          type T;
          expression n;
          @@
          -g_try_malloc0(sizeof(T) * (n))
          +g_try_new0(T, n)
          @@
          type T;
          expression p, n;
          @@
          -g_realloc(p, sizeof(T) * (n))
          +g_renew(T, p, n)
          @@
          type T;
          expression p, n;
          @@
          -g_try_realloc(p, sizeof(T) * (n))
          +g_try_renew(T, p, n)
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Reviewed-by: Jeff Cody <jcody@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
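      The overflow hazard can be shown without GLib. In the sketch below, checked_alloc and NEW are hypothetical names, not GLib API; real g_new() aborts on failure, whereas this sketch returns NULL so the check is observable.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Plain-C illustration of what g_new(T, n) adds over
 * g_malloc(sizeof(T) * n): the multiplication is checked for
 * size_t overflow before any memory is requested. */
static void *checked_alloc(size_t elem_size, size_t n)
{
    if (n != 0 && elem_size > SIZE_MAX / n) {
        return NULL;                    /* sizeof(T) * n would wrap */
    }
    return malloc(elem_size * n);
}

/* The cast also yields T * rather than void *, letting the
 * compiler catch assignments to the wrong pointer type. */
#define NEW(T, n) ((T *)checked_alloc(sizeof(T), (n)))
```

      With raw g_malloc(sizeof(T) * n), a huge n silently wraps to a small allocation; the checked form refuses it instead.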
  6. Aug 15, 2014 (1 commit)
  7. Jul 18, 2014 (1 commit)
  8. Jun 04, 2014 (1 commit)
      block: fix wrong order in live block migration setup · 1ac362cd
      Committed by chai wen
      The function init_blk_migration should be called before
      set_dirty_tracking, for the reasons below.
      
      If we want to track dirty blocks via dirty_maps on a BlockDriverState
      when doing live block migration, its corresponding BlkMigDevState must
      be added to block_mig_state.bmds_list first for subsequent processing.
      Otherwise set_dirty_tracking will do nothing on an empty list rather
      than allocating dirty_bitmaps for each device, and bdrv_get_dirty_count
      will then access bmds->dirty_maps directly and trigger a segfault.
      
      If set_dirty_tracking fails, qemu_savevm_state_cancel will handle
      the cleanup of init_blk_migration automatically.
      Reviewed-by: Fam Zheng <famz@redhat.com>
      Signed-off-by: chai wen <chaiw.fnst@cn.fujitsu.com>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
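      The ordering bug can be modelled in a few lines of self-contained C. The structures are toy versions; only the function names mirror the commit.

```c
#include <assert.h>
#include <stddef.h>

/* Toy model: set_dirty_tracking() only allocates bitmaps for devices
 * already on bmds_list, so it must run after init_blk_migration()
 * has populated that list. */
typedef struct BmdsEntry {
    int dirty_bitmap;                 /* 0 = not allocated yet */
    struct BmdsEntry *next;
} BmdsEntry;

typedef struct {
    BmdsEntry *bmds_list;
} BlockMigState;

static void init_blk_migration(BlockMigState *s, BmdsEntry *dev)
{
    dev->next = s->bmds_list;         /* add the device to the list */
    s->bmds_list = dev;
}

static int set_dirty_tracking(BlockMigState *s)
{
    int allocated = 0;
    for (BmdsEntry *e = s->bmds_list; e; e = e->next) {
        e->dirty_bitmap = 1;          /* pretend to allocate a bitmap */
        allocated++;
    }
    return allocated;                 /* 0 on an empty list: silent no-op */
}
```

      Run in the wrong order, tracking walks an empty list and allocates nothing, so any later access to a device's dirty_bitmap dereferences something that was never set up.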
  9. May 28, 2014 (1 commit)
      block: Replace in_use with operation blocker · 3718d8ab
      Committed by Fam Zheng
      This replaces BlockDriverState.in_use with op_blockers:
      
        - Call bdrv_op_block_all in place of bdrv_set_in_use(bs, 1).
      
        - Call bdrv_op_unblock_all in place of bdrv_set_in_use(bs, 0).
      
        - Check bdrv_op_is_blocked() in place of bdrv_in_use(bs).
      
          Specific operation types are used; e.g. before starting a block
          backup, bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_BACKUP, ...) is checked.
      
          There is one exception in block_job_create, where
          bdrv_op_blocker_is_empty() is used, because we don't know the
          operation type there. This doesn't matter because a few commits from
          now we will drop that check and move it to callers that _do_ know
          the type.
      
        - Check bdrv_op_blocker_is_empty() in place of assert(!bs->in_use).
      
      Note: at the moment there are only bdrv_op_block_all and
      bdrv_op_unblock_all callers, so although the checks are specific to op
      types, this change can still be seen as logic identical to the previous
      in_use flag. The difference is that error messages are improved thanks
      to the blocker error info.
      Signed-off-by: Fam Zheng <famz@redhat.com>
      Reviewed-by: Jeff Cody <jcody@redhat.com>
      Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
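      The mapping from the old flag to per-type blockers can be sketched as follows; the enum values and counter representation are illustrative, not QEMU's real op-blocker implementation (which keeps per-type lists of blockers carrying error info).

```c
#include <assert.h>
#include <stdbool.h>

/* Each operation type gets its own blocker count, so unrelated
 * operations need no longer be blocked together. */
typedef enum {
    BLOCK_OP_TYPE_BACKUP,
    BLOCK_OP_TYPE_MIRROR,
    BLOCK_OP_TYPE_MAX
} BlockOpType;

typedef struct {
    int blockers[BLOCK_OP_TYPE_MAX];
} BlockDriverState;

static void bdrv_op_block_all(BlockDriverState *bs)    /* was in_use = 1 */
{
    for (int i = 0; i < BLOCK_OP_TYPE_MAX; i++) bs->blockers[i]++;
}

static void bdrv_op_unblock_all(BlockDriverState *bs)  /* was in_use = 0 */
{
    for (int i = 0; i < BLOCK_OP_TYPE_MAX; i++) bs->blockers[i]--;
}

static bool bdrv_op_is_blocked(BlockDriverState *bs, BlockOpType op)
{
    return bs->blockers[op] > 0;      /* was bdrv_in_use(bs) */
}

static bool bdrv_op_blocker_is_empty(BlockDriverState *bs)
{
    for (int i = 0; i < BLOCK_OP_TYPE_MAX; i++) {
        if (bs->blockers[i] > 0) return false;
    }
    return true;                      /* was assert(!bs->in_use) */
}
```

      As the commit notes, with only block_all/unblock_all callers this behaves exactly like the old boolean, but the per-type structure is what later commits build on.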
  10. Apr 22, 2014 (1 commit)
  11. Nov 29, 2013 (1 commit)
      block: per caller dirty bitmap · e4654d2d
      Committed by Fam Zheng
      Previously a BlockDriverState had only one dirty bitmap, so only one
      caller (e.g. a block job) could keep track of writes. This changes the
      dirty bitmap to a list and creates a BdrvDirtyBitmap for each caller;
      the lifecycle is managed with these new functions:
      
          bdrv_create_dirty_bitmap
          bdrv_release_dirty_bitmap
      
      BdrvDirtyBitmap is a linked-list wrapper structure around HBitmap.
      
      In place of bdrv_set_dirty_tracking, a BdrvDirtyBitmap pointer argument
      is added to these functions, since each caller has its own dirty bitmap:
      
          bdrv_get_dirty
          bdrv_dirty_iter_init
          bdrv_get_dirty_count
      
      The bdrv_set_dirty and bdrv_reset_dirty prototypes are unchanged but now
      internally walk the list of all dirty bitmaps and set them one by one.
      Signed-off-by: Fam Zheng <famz@redhat.com>
      Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
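      The create/release lifecycle can be sketched as a toy list, with a plain counter standing in for HBitmap; this is an illustration of the structure, not QEMU's real code.

```c
#include <assert.h>
#include <stdlib.h>

/* Toy per-caller dirty bitmap list: each caller creates and releases
 * its own BdrvDirtyBitmap independently of the others. */
typedef struct BdrvDirtyBitmap {
    int dirty_count;                    /* stands in for an HBitmap */
    struct BdrvDirtyBitmap *next;
} BdrvDirtyBitmap;

typedef struct {
    BdrvDirtyBitmap *dirty_bitmaps;     /* head of the per-caller list */
} BlockDriverState;

BdrvDirtyBitmap *bdrv_create_dirty_bitmap(BlockDriverState *bs)
{
    BdrvDirtyBitmap *bm = calloc(1, sizeof(*bm));
    bm->next = bs->dirty_bitmaps;       /* push onto the list */
    bs->dirty_bitmaps = bm;
    return bm;
}

void bdrv_release_dirty_bitmap(BlockDriverState *bs, BdrvDirtyBitmap *bm)
{
    for (BdrvDirtyBitmap **p = &bs->dirty_bitmaps; *p; p = &(*p)->next) {
        if (*p == bm) {
            *p = bm->next;              /* unlink, then free */
            free(bm);
            return;
        }
    }
}

/* Unchanged prototype: walks the whole list and marks each bitmap. */
void bdrv_set_dirty(BlockDriverState *bs, int nb_sectors)
{
    for (BdrvDirtyBitmap *bm = bs->dirty_bitmaps; bm; bm = bm->next) {
        bm->dirty_count += nb_sectors;
    }
}
```

      A released bitmap simply drops out of the walk, so the remaining callers keep tracking writes undisturbed.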
  12. Nov 28, 2013 (2 commits)
  13. Sep 06, 2013 (1 commit)
  14. Jul 19, 2013 (1 commit)
  15. Mar 11, 2013 (7 commits)
  16. Feb 13, 2013 (1 commit)
      block-migration: fix pending() and iterate() return values · 6aaa9dae
      Committed by Stefan Hajnoczi
      The return value of .save_live_pending() is the number of bytes
      remaining.  This is just an estimate because we do not know how many
      blocks will be dirtied by the running guest.
      
      Currently our return value for .save_live_pending() is wrong because it
      includes dirty blocks but not in-flight bdrv_aio_readv() requests or
      unsent blocks.  Crucially, it also doesn't include the bulk phase where
      the entire device is transferred - therefore we risk completing block
      migration before all blocks have been transferred!
      
      The return value of .save_live_iterate() is the number of bytes
      transferred this iteration.  Currently we return whether there are bytes
      remaining, which is incorrect.
      
      Move the bytes remaining calculation into .save_live_pending() and
      really return the number of bytes transferred this iteration in
      .save_live_iterate().
      
      Also fix the %ld format specifier that was used for a uint64_t
      argument.  PRIu64 must be used to avoid warnings on 32-bit hosts.
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
      Reviewed-by: Juan Quintela <quintela@redhat.com>
      Message-id: 1360661835-28663-3-git-send-email-stefanha@redhat.com
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
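      The corrected accounting can be summarised in a small sketch: bytes pending must cover dirty blocks, in-flight reads, unsent blocks, and the untransferred remainder of the bulk phase. The struct and field names here are illustrative, not block-migration's actual variables.

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of a .save_live_pending()-style estimate. */
typedef struct {
    uint64_t dirty_blocks;    /* blocks redirtied by the running guest  */
    uint64_t inflight_reads;  /* submitted bdrv_aio_readv() not yet done */
    uint64_t unsent_blocks;   /* read into buffers but not yet sent     */
    uint64_t bulk_remaining;  /* bulk phase: device bytes not yet sent  */
    uint64_t block_size;      /* bytes per block                        */
} BlkMigProgress;

uint64_t blk_mig_bytes_pending(const BlkMigProgress *p)
{
    uint64_t blocks = p->dirty_blocks + p->inflight_reads
                      + p->unsent_blocks;
    /* Counting bulk_remaining is the crucial part: without it the
     * estimate can hit zero before the whole device is transferred. */
    return blocks * p->block_size + p->bulk_remaining;
}
```

      The estimate only reaches zero once the bulk phase is done and no dirty, in-flight, or unsent blocks remain, which is exactly when completing the migration is safe.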
  17. Feb 11, 2013 (3 commits)
  18. Jan 26, 2013 (2 commits)
  19. Dec 21, 2012 (1 commit)
      savevm: New save live migration method: pending · e4ed1541
      Committed by Juan Quintela
      The code currently does (simplified for clarity):
      
          if (qemu_savevm_state_iterate(s->file) == 1) {
             vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
             qemu_savevm_state_complete(s->file);
          }
      
      The problem here is that qemu_savevm_state_iterate() returns 1 when it
      knows that the remaining memory to send takes less than the max
      downtime.
      
      But this means that we could end up spending 2x max_downtime: one
      downtime in qemu_savevm_state_iterate, and the other in
      qemu_savevm_state_complete.
      
      Changed code to:
      
          pending_size = qemu_savevm_state_pending(s->file, max_size);
          DPRINTF("pending size %lu max %lu\n", pending_size, max_size);
          if (pending_size >= max_size) {
              ret = qemu_savevm_state_iterate(s->file);
          } else {
              vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
              qemu_savevm_state_complete(s->file);
          }
      
      So what we do is: at the current network speed, we calculate the
      maximum number of bytes we can send: max_size.
      
      Then we ask every save_live section how much it has pending. If the
      total is less than max_size, we move to the completion phase; otherwise
      we do another iteration.
      
      This makes things much simpler, because now individual sections don't
      have to calculate the bandwidth (it was impossible to do right from
      there).
      Signed-off-by: Juan Quintela <quintela@redhat.com>
      Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
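      The decision described above can be sketched as two small helpers: max_size derives from current bandwidth and the permitted downtime, and the pending estimate is compared against it. The names here are illustrative, not the exact QEMU variables.

```c
#include <assert.h>
#include <stdint.h>

typedef enum { MIG_ITERATE, MIG_COMPLETE } MigStep;

/* Bytes transferable within max_downtime at the current bandwidth. */
uint64_t mig_max_size(uint64_t bytes_per_sec, uint64_t max_downtime_ms)
{
    return bytes_per_sec * max_downtime_ms / 1000;
}

MigStep mig_next_step(uint64_t pending_size, uint64_t max_size)
{
    /* Complete only when everything still pending fits inside one
     * downtime window; otherwise keep iterating. */
    return pending_size >= max_size ? MIG_ITERATE : MIG_COMPLETE;
}
```

      This keeps the bandwidth math in one place instead of forcing each save_live section to estimate it on its own.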
  20. Dec 19, 2012 (4 commits)
  21. Oct 18, 2012 (3 commits)
  22. Sep 28, 2012 (1 commit)