1. 01 Mar 2017 (5 commits)
  2. 28 Feb 2017 (1 commit)
  3. 24 Feb 2017 (1 commit)
  4. 21 Feb 2017 (2 commits)
  5. 15 Nov 2016 (3 commits)
    • mirror: do not flush every time the disks are synced · bdffb31d
      Committed by Paolo Bonzini
      Flushing every time the disks are synced puts a huge strain on the
      disks when there are many concurrent migrations.  With this patch we
      only flush twice: just before emitting the BLOCK_JOB_READY event, and
      just before pivoting to the destination.  If management completes the
      job close to the BLOCK_JOB_READY event, the cost of the second flush
      should be small anyway.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Message-id: 20161109162008.27287-2-pbonzini@redhat.com
      Signed-off-by: Jeff Cody <jcody@redhat.com>
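      To illustrate the idea, a minimal sketch (not the literal patch) of
      where the two remaining flushes sit; the names follow the
      block/mirror.c of this era and the control flow is heavily condensed:

      static void coroutine_fn mirror_sync_points(MirrorBlockJob *s)
      {
          if (!s->synced) {
              /* Flush #1: make the copied data durable just before telling
               * management that source and destination are in sync. */
              if (blk_flush(s->common.blk) < 0) {
                  return; /* error handling elided */
              }
              block_job_event_ready(&s->common);
              s->synced = true;
          }
          if (s->should_complete) {
              /* Flush #2: once more just before pivoting to the destination.
               * Nothing is flushed in between any more. */
              blk_flush(s->common.blk);
              /* ...pivot to the target... */
          }
      }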
    • blockjob: add block_job_start · 5ccac6f1
      Committed by John Snow
      Instead of automatically starting jobs at creation time via backup_start
      et al, we'd like to return a job object pointer that can be started
      manually at a later point in time.
      
      For now, add the block_job_start mechanism and start the jobs
      automatically as we have been doing, with conversions job-by-job coming
      in later patches.
      
      Of note: cancellation of unstarted jobs will perform all the normal
      cleanup as if the job had started, particularly abort and clean. The
      only difference is that we will not emit any events, because the job
      never actually started.
      Signed-off-by: John Snow <jsnow@redhat.com>
      Message-id: 1478587839-9834-5-git-send-email-jsnow@redhat.com
      Signed-off-by: Jeff Cody <jcody@redhat.com>
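      A sketch of the new entry point, condensed from the blockjob.c of this
      series (it relies on the .start field introduced in the patch below):

      void block_job_start(BlockJob *job)
      {
          assert(job && !block_job_started(job) && job->paused &&
                 !job->busy && job->driver->start);
          /* The coroutine is created here rather than in block_job_create(),
           * so cancelling an unstarted job never has to "un-create" one. */
          job->co = qemu_coroutine_create(job->driver->start, job);
          if (--job->pause_count == 0) {
              job->paused = false;
              job->busy = true;
              qemu_coroutine_enter(job->co);
          }
      }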
    • blockjob: add .start field · a7815a76
      Committed by John Snow
      Add an explicit start field to specify the entrypoint. We already own
      the coroutine and manage its lifetime; let's take control of its
      creation, too.
      
      This will allow us to delay creation of the actual coroutine until we
      know we'll actually start a BlockJob in block_job_start. This avoids
      the sticky question of how to "un-create" a Coroutine that hasn't been
      started yet.
      Signed-off-by: John Snow <jsnow@redhat.com>
      Reviewed-by: Kevin Wolf <kwolf@redhat.com>
      Message-id: 1478587839-9834-4-git-send-email-jsnow@redhat.com
      Signed-off-by: Jeff Cody <jcody@redhat.com>
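      The new hook, roughly as added to BlockJobDriver, together with a
      driver wiring it up (backup_run stands in for the per-job coroutine
      function):

      struct BlockJobDriver {
          /* ... existing fields ... */

          /** Mandatory: entrypoint for the coroutine. */
          CoroutineEntry *start;
      };

      static const BlockJobDriver backup_job_driver = {
          .instance_size = sizeof(BackupBlockJob),
          .job_type      = BLOCK_JOB_TYPE_BACKUP,
          .start         = backup_run,
      };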
  6. 01 Nov 2016 (5 commits)
  7. 31 Oct 2016 (2 commits)
  8. 28 Oct 2016 (2 commits)
  9. 24 Oct 2016 (1 commit)
  10. 13 Sep 2016 (1 commit)
  11. 08 Aug 2016 (1 commit)
  12. 27 Jul 2016 (1 commit)
  13. 22 Jul 2016 (1 commit)
  14. 20 Jul 2016 (8 commits)
  15. 13 Jul 2016 (6 commits)
    • Improve block job rate limiting for small bandwidth values · f14a39cc
      Committed by Sascha Silbe
      ratelimit_calculate_delay() previously reset the accounting every time
      slice, no matter how much data had been processed before. This had (at
      least) two consequences:
      
      1. The minimum speed is rather large, e.g. 5 MiB/s for commit and stream.
      
         Not sure if there are real-world use cases where this would be a
         problem. Mirroring and backup over a slow link (e.g. DSL) would
         come to mind, though.
      
      2. Tests for block job operations (e.g. cancel) were rather racy
      
         All block jobs currently use a time slice of 100ms. That's a
         reasonable value to get smooth output during regular
         operation. However, this also meant that the state of block jobs
         changed every 100ms, no matter how low the configured limit was. On
         busy hosts, qemu often transferred additional chunks until the test
         case had a chance to cancel the job.
      
      Fix the block job rate limit code to delay for more than one time
      slice to address the above issues. To make it easier to handle
      oversized chunks we switch the semantics from returning a delay
      _before_ the current request to a delay _after_ the current
      request. If necessary, this delay consists of multiple time slice
      units.
      
      Since the mirror job sends multiple chunks in one go even if the rate
      limit was exceeded in between, we need to keep track of the start of
      the current time slice so we can correctly re-compute the delay for
      the updated amount of data.
      
      The minimum bandwidth now is 1 data unit per time slice. The block
      jobs are currently passing the amount of data transferred in sectors
      and using 100ms time slices, so this translates to 5120
      bytes/second. With chunk sizes usually being O(512KiB), tests have
      plenty of time (O(100s)) to operate on block jobs. The chance of a
      race condition now is fairly remote, except possibly on insanely
      loaded systems.
      Signed-off-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
      Message-id: 1467127721-9564-2-git-send-email-silbe@linux.vnet.ibm.com
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>
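      The reworked helper, close to what this patch puts into
      include/qemu/ratelimit.h (slightly condensed):

      typedef struct {
          int64_t slice_start_time;
          int64_t slice_end_time;
          uint64_t slice_quota;     /* max data units per time slice */
          uint64_t slice_ns;        /* slice length, currently 100 ms */
          uint64_t dispatched;
      } RateLimit;

      /* Account for @n data units and return the delay to apply *after* the
       * current request; 0 while the slice quota is not yet exhausted. */
      static inline int64_t ratelimit_calculate_delay(RateLimit *limit,
                                                      uint64_t n)
      {
          int64_t now = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
          uint64_t delay_slices;

          if (limit->slice_end_time < now) {
              /* Previous, possibly extended, time slice finished; reset the
               * accounting. */
              limit->slice_start_time = now;
              limit->slice_end_time = now + limit->slice_ns;
              limit->dispatched = 0;
          }

          limit->dispatched += n;
          if (limit->dispatched < limit->slice_quota) {
              /* We may send further data within the current time slice; no
               * need to delay the next request. */
              return 0;
          }

          /* Quota exceeded: an oversized chunk extends the slice by as many
           * whole slice units as it needs. */
          delay_slices = (limit->dispatched + limit->slice_quota - 1) /
                         limit->slice_quota;
          limit->slice_end_time = limit->slice_start_time +
                                  delay_slices * limit->slice_ns;
          return limit->slice_end_time - now;
      }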
    • coroutine: move entry argument to qemu_coroutine_create · 0b8b8753
      Committed by Paolo Bonzini
      In practice the entry argument is always known at creation time, and
      it is confusing that sometimes qemu_coroutine_enter is used with a
      non-NULL argument to re-enter a coroutine (this happens in
      block/sheepdog.c and tests/test-coroutine.c).  So pass the opaque value
      at creation time, for consistency with e.g. aio_bh_new.
      
      Mostly done with the following semantic patch:
      
      @ entry1 @
      expression entry, arg, co;
      @@
      - co = qemu_coroutine_create(entry);
      + co = qemu_coroutine_create(entry, arg);
        ...
      - qemu_coroutine_enter(co, arg);
      + qemu_coroutine_enter(co);
      
      @ entry2 @
      expression entry, arg;
      identifier co;
      @@
      - Coroutine *co = qemu_coroutine_create(entry);
      + Coroutine *co = qemu_coroutine_create(entry, arg);
        ...
      - qemu_coroutine_enter(co, arg);
      + qemu_coroutine_enter(co);
      
      @ entry3 @
      expression entry, arg;
      @@
      - qemu_coroutine_enter(qemu_coroutine_create(entry), arg);
      + qemu_coroutine_enter(qemu_coroutine_create(entry, arg));
      
      @ reentry @
      expression co;
      @@
      - qemu_coroutine_enter(co, NULL);
      + qemu_coroutine_enter(co);
      
      except for the aforementioned few places where the semantic patch
      stumbled (as expected) and for test_co_queue, which would otherwise
      produce an uninitialized variable warning.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Reviewed-by: Fam Zheng <famz@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
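      The resulting prototypes in include/qemu/coroutine.h:

      Coroutine *qemu_coroutine_create(CoroutineEntry *entry, void *opaque);
      void qemu_coroutine_enter(Coroutine *co);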
    • commit: Add 'job-id' parameter to 'block-commit' · fd62c609
      Committed by Alberto Garcia
      This patch adds a new optional 'job-id' parameter to 'block-commit',
      allowing the user to specify the ID of the block job to be created.
      Signed-off-by: Alberto Garcia <berto@igalia.com>
      Reviewed-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
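      In C terms the generated QMP handler gains the usual optional-argument
      pair and forwards the ID; a hedged sketch (the real blockdev.c handler
      and commit_start() take more arguments, elided here, and the mirror
      commands in the next entry follow the same pattern):

      void qmp_block_commit(bool has_job_id, const char *job_id,
                            const char *device, /* ... */ Error **errp)
      {
          /* ...lookup and validation of bs, base and top elided... */
          commit_start(has_job_id ? job_id : NULL, bs, base, top,
                       /* ... */ errp);
      }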
    • mirror: Add 'job-id' parameter to 'blockdev-mirror' and 'drive-mirror' · 71aa9867
      Committed by Alberto Garcia
      This patch adds a new optional 'job-id' parameter to 'blockdev-mirror'
      and 'drive-mirror', allowing the user to specify the ID of the block
      job to be created.
      
      The HMP 'drive_mirror' command remains unchanged.
      Signed-off-by: Alberto Garcia <berto@igalia.com>
      Reviewed-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
    • blockjob: Add 'job_id' parameter to block_job_create() · 7f0317cf
      Committed by Alberto Garcia
      When a new job is created, the job ID is taken from the device name of
      the BDS. This patch adds a new 'job_id' parameter to let the caller
      provide one instead.
      
      This patch also verifies that the ID is always unique and well-formed.
      That check causes problems in a couple of places where no ID was being
      set because the BDS does not have a device name.
      
      In the case of test_block_job_start() (from test-blockjob-txn.c) we
      can simply use this new 'job_id' parameter to set the missing ID.
      
      In the case of img_commit() (from qemu-img.c) we still don't have the
      API to make commit_active_start() set the job ID, so we solve it by
      setting a default value. We'll get rid of this as soon as we extend
      the API.
      Signed-off-by: Alberto Garcia <berto@igalia.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Reviewed-by: Kevin Wolf <kwolf@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
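      A condensed sketch of the ID handling this adds to block_job_create();
      id_wellformed() and block_job_get() are the helpers this series relies
      on:

      void *block_job_create(const char *job_id, const BlockJobDriver *driver,
                             BlockDriverState *bs, int64_t speed,
                             BlockCompletionFunc *cb, void *opaque,
                             Error **errp)
      {
          /* ... */
          if (job_id == NULL) {
              job_id = bdrv_get_device_name(bs);  /* old behavior */
          }
          if (!id_wellformed(job_id)) {
              error_setg(errp, "Invalid job ID '%s'", job_id);
              return NULL;
          }
          if (block_job_get(job_id)) {
              error_setg(errp, "Job ID '%s' already in use", job_id);
              return NULL;
          }
          /* ... */
          job->id = g_strdup(job_id);
          /* ... */
      }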
    • blockjob: Update description of the 'id' field · 9df229c3
      Committed by Alberto Garcia
      The 'id' field of the BlockJob structure will be able to hold any ID,
      not only a device name. This patch updates the description of that
      field and the error messages where it is being used.
      
      Soon we'll add the ability to set an arbitrary ID when creating a
      block job.
      Signed-off-by: Alberto Garcia <berto@igalia.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Reviewed-by: Kevin Wolf <kwolf@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>