1. 01 Mar 2017 (1 commit)
  2. 15 Nov 2016 (2 commits)
    • blockjob: add block_job_start · 5ccac6f1
      John Snow authored
      Instead of automatically starting jobs at creation time via backup_start
      et al, we'd like to return a job object pointer that can be started
      manually at a later point in time.
      
      For now, add the block_job_start mechanism and start the jobs
      automatically as we have been doing, with conversions job-by-job coming
      in later patches.
      
      Of note: cancellation of unstarted jobs will perform all the normal
      cleanup as if the job had started, particularly abort and clean. The
      only difference is that we will not emit any events, because the job
      never actually started.
      Signed-off-by: John Snow <jsnow@redhat.com>
      Message-id: 1478587839-9834-5-git-send-email-jsnow@redhat.com
      Signed-off-by: Jeff Cody <jcody@redhat.com>
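      A minimal sketch of what this start entry point could look like, based
      on the description above (a sketch, not the exact patch; field and
      helper names such as block_job_co_entry are assumptions):

      void block_job_start(BlockJob *job)
      {
          /* The job must exist, not have been started yet, and provide a
           * coroutine entrypoint via its driver. */
          assert(job && !block_job_started(job) && job->paused &&
                 job->driver && job->driver->start);
          /* The coroutine is only created here, not at job creation time. */
          job->co = qemu_coroutine_create(block_job_co_entry, job);
          job->paused = false;
          job->busy = true;
          qemu_coroutine_enter(job->co);
      }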
    • blockjob: add .start field · a7815a76
      John Snow authored
      Add an explicit start field to specify the entrypoint. We already
      own the coroutine itself and manage its lifetime; let's take
      control of its creation, too.
      
      This will allow us to delay creation of the actual coroutine until we
      know we'll actually start a BlockJob in block_job_start. This avoids
      the sticky question of how to "un-create" a Coroutine that hasn't been
      started yet.
      Signed-off-by: John Snow <jsnow@redhat.com>
      Reviewed-by: Kevin Wolf <kwolf@redhat.com>
      Message-id: 1478587839-9834-4-git-send-email-jsnow@redhat.com
      Signed-off-by: Jeff Cody <jcody@redhat.com>
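      The shape of the change is roughly the following (a sketch; the other
      BlockJobDriver fields are elided):

      struct BlockJobDriver {
          ...
          /* Coroutine entrypoint, invoked when the job is started via
           * block_job_start instead of at creation time. */
          void coroutine_fn (*start)(void *opaque);
          ...
      };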
  3. 01 Nov 2016 (3 commits)
  4. 31 Oct 2016 (1 commit)
  5. 13 Jul 2016 (4 commits)
    • Improve block job rate limiting for small bandwidth values · f14a39cc
      Sascha Silbe authored
      ratelimit_calculate_delay() previously reset the accounting every time
      slice, no matter how much data had been processed before. This had (at
      least) two consequences:
      
      1. The minimum speed is rather large, e.g. 5 MiB/s for commit and stream.
      
         Not sure if there are real-world use cases where this would be a
         problem. Mirroring and backup over a slow link (e.g. DSL) would
         come to mind, though.
      
      2. Tests for block job operations (e.g. cancel) were rather racy
      
         All block jobs currently use a time slice of 100ms. That's a
         reasonable value to get smooth output during regular
         operation. However, this also meant that the state of block jobs
         changed every 100ms, no matter how low the configured limit was. On
         busy hosts, qemu often transferred additional chunks until the test
         case had a chance to cancel the job.
      
      Fix the block job rate limit code to delay for more than one time
      slice to address the above issues. To make it easier to handle
      oversized chunks we switch the semantics from returning a delay
      _before_ the current request to a delay _after_ the current
      request. If necessary, this delay consists of multiple time slice
      units.
      
      Since the mirror job sends multiple chunks in one go even if the rate
      limit was exceeded in between, we need to keep track of the start of
      the current time slice so we can correctly re-compute the delay for
      the updated amount of data.
      
      The minimum bandwidth now is 1 data unit per time slice. The block
      jobs are currently passing the amount of data transferred in sectors
      and using 100ms time slices, so this translates to 5120
      bytes/second. With chunk sizes usually being O(512KiB), tests have
      plenty of time (O(100s)) to operate on block jobs. The chance of a
      race condition now is fairly remote, except possibly on insanely
      loaded systems.
      Signed-off-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
      Message-id: 1467127721-9564-2-git-send-email-silbe@linux.vnet.ibm.com
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>
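      A sketch of the resulting delay-after-request logic, following the
      description above (illustrative; see include/qemu/ratelimit.h for the
      real code):

      typedef struct {
          int64_t slice_start_time;
          int64_t slice_end_time;
          uint64_t slice_quota;   /* max data units per time slice */
          uint64_t slice_ns;      /* slice length, 100ms for block jobs */
          uint64_t dispatched;    /* data units sent in the current slice */
      } RateLimit;

      /* Returns the delay to apply _after_ dispatching n data units. */
      static inline int64_t ratelimit_calculate_delay(RateLimit *limit,
                                                      uint64_t n)
      {
          int64_t now = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);

          if (limit->slice_end_time < now) {
              /* Previous, possibly extended, time slice finished; reset
               * the accounting. */
              limit->slice_start_time = now;
              limit->slice_end_time = now + limit->slice_ns;
              limit->dispatched = 0;
          }

          limit->dispatched += n;
          if (limit->dispatched < limit->slice_quota) {
              /* We may send further data within the current time slice;
               * no need to delay the next request. */
              return 0;
          }

          /* Quota exceeded: extend the slice to cover the oversized chunk,
           * possibly by multiple time slice units, and delay until the end
           * of the extended slice. */
          limit->slice_end_time = limit->slice_start_time +
              (int64_t)((double)limit->dispatched / limit->slice_quota *
                        limit->slice_ns);
          return limit->slice_end_time - now;
      }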
    • coroutine: move entry argument to qemu_coroutine_create · 0b8b8753
      Paolo Bonzini authored
      In practice the entry argument is always known at creation time, and
      it is confusing that sometimes qemu_coroutine_enter is used with a
      non-NULL argument to re-enter a coroutine (this happens in
      block/sheepdog.c and tests/test-coroutine.c).  So pass the opaque value
      at creation time, for consistency with e.g. aio_bh_new.
      
      Mostly done with the following semantic patch:
      
      @ entry1 @
      expression entry, arg, co;
      @@
      - co = qemu_coroutine_create(entry);
      + co = qemu_coroutine_create(entry, arg);
        ...
      - qemu_coroutine_enter(co, arg);
      + qemu_coroutine_enter(co);
      
      @ entry2 @
      expression entry, arg;
      identifier co;
      @@
      - Coroutine *co = qemu_coroutine_create(entry);
      + Coroutine *co = qemu_coroutine_create(entry, arg);
        ...
      - qemu_coroutine_enter(co, arg);
      + qemu_coroutine_enter(co);
      
      @ entry3 @
      expression entry, arg;
      @@
      - qemu_coroutine_enter(qemu_coroutine_create(entry), arg);
      + qemu_coroutine_enter(qemu_coroutine_create(entry, arg));
      
      @ reentry @
      expression co;
      @@
      - qemu_coroutine_enter(co, NULL);
      + qemu_coroutine_enter(co);
      
      except for the aforementioned few places where the semantic patch
      stumbled (as expected) and for test_co_queue, which would otherwise
      produce an uninitialized variable warning.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Reviewed-by: Fam Zheng <famz@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
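      From a caller's point of view the conversion looks like this (my_entry
      and my_data are placeholder names):

      /* Before: the argument was passed on the first enter. */
      Coroutine *co = qemu_coroutine_create(my_entry);
      qemu_coroutine_enter(co, my_data);

      /* After: the argument is bound at creation time. */
      Coroutine *co = qemu_coroutine_create(my_entry, my_data);
      qemu_coroutine_enter(co);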
    • stream: Add 'job-id' parameter to 'block-stream' · 2323322e
      Alberto Garcia authored
      This patch adds a new optional 'job-id' parameter to 'block-stream',
      allowing the user to specify the ID of the block job to be created.
      
      The HMP 'block_stream' command remains unchanged.
      Signed-off-by: Alberto Garcia <berto@igalia.com>
      Reviewed-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
    • blockjob: Add 'job_id' parameter to block_job_create() · 7f0317cf
      Alberto Garcia authored
      When a new job is created, the job ID is taken from the device name of
      the BDS. This patch adds a new 'job_id' parameter to let the caller
      provide one instead.
      
      This patch also verifies that the ID is always unique and well-formed.
      This causes problems in a couple of places where no ID is being set,
      because the BDS does not have a device name.
      
      In the case of test_block_job_start() (from test-blockjob-txn.c) we
      can simply use this new 'job_id' parameter to set the missing ID.
      
      In the case of img_commit() (from qemu-img.c) we still don't have the
      API to make commit_active_start() set the job ID, so we solve it by
      setting a default value. We'll get rid of this as soon as we extend
      the API.
      Signed-off-by: Alberto Garcia <berto@igalia.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Reviewed-by: Kevin Wolf <kwolf@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
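      The extended signature looks roughly like this (a sketch based on the
      commit description; passing a NULL job_id falls back to deriving the
      ID from the device name, as before):

      void *block_job_create(const char *job_id, const BlockJobDriver *driver,
                             BlockDriverState *bs, int64_t speed,
                             BlockCompletionFunc *cb, void *opaque,
                             Error **errp);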
  6. 26 May 2016 (1 commit)
  7. 19 May 2016 (2 commits)
    • blockjob: Don't touch BDS iostatus · 66a0fae4
      Kevin Wolf authored
      Block jobs don't actually make use of the iostatus for their BDSes, but
      they manage a separate block job iostatus. Still, they require that it
      is enabled for the source BDS, enable it automatically for the target,
      and set the error handling mode, which ends up never being used by
      the job.
      
      This patch removes all of the BDS iostatus handling from the block job,
      which removes another few bs->blk accesses.
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
    • blockjob: Don't set iostatus of target · 81e254dc
      Kevin Wolf authored
      When block job errors were introduced, we assigned the iostatus of the
      target BDS "just in case". The field has never been accessible for the
      user because the target isn't listed in query-block.
      
      Before we can allow the user to have a second BlockBackend on the
      target, we need to clean this up. If anything, we would want to set the
      iostatus for the internal BB of the job (which we can always do later),
      but certainly not for a separate BB which the job doesn't even use.
      
      As a nice side effect, this gets rid of another bs->blk use.
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
  8. 29 Mar 2016 (1 commit)
  9. 23 Mar 2016 (1 commit)
    • include/qemu/osdep.h: Don't include qapi/error.h · da34e65c
      Markus Armbruster authored
      Commit 57cb38b3 included qapi/error.h into qemu/osdep.h to get the
      Error typedef.  Since then, we've moved to include qemu/osdep.h
      everywhere.  Its file comment explains: "To avoid getting into
      possible circular include dependencies, this file should not include
      any other QEMU headers, with the exceptions of config-host.h,
      compiler.h, os-posix.h and os-win32.h, all of which are doing a
      similar job to this file and are under similar constraints."
      qapi/error.h doesn't do a similar job, and it doesn't adhere to
      similar constraints: it includes qapi-types.h.  That's in excess of
      100KiB of crap most .c files don't actually need.
      
      Add the typedef to qemu/typedefs.h, and include that instead of
      qapi/error.h.  Include qapi/error.h in .c files that need it and don't
      get it now.  Include qapi-types.h in qom/object.h for uint16List.
      
      Update scripts/clean-includes accordingly.  Update it further to match
      reality: replace config.h by config-target.h, add sysemu/os-posix.h,
      sysemu/os-win32.h.  Update the list of includes in the qemu/osdep.h
      comment quoted above similarly.
      
      This reduces the number of objects depending on qapi/error.h from "all
      of them" to less than a third.  Unfortunately, the number depending on
      qapi-types.h shrinks only a little.  More work is needed for that one.
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      [Fix compilation without the spice devel packages. - Paolo]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
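      The core of the change is a one-line forward declaration: with the
      typedef below in qemu/typedefs.h, headers can use Error * without
      pulling in qapi/error.h.

      /* qemu/typedefs.h */
      typedef struct Error Error;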
  10. 20 Jan 2016 (1 commit)
  11. 24 Oct 2015 (1 commit)
  12. 16 Oct 2015 (2 commits)
  13. 23 Jun 2015 (2 commits)
  14. 03 Nov 2014 (1 commit)
    • block: let stream blockjob run in BDS AioContext · f3e69beb
      Stefan Hajnoczi authored
      The stream block job must run in the BlockDriverState AioContext so that
      it works with dataplane.
      
      The basics of acquiring the AioContext are easy in blockdev.c.
      
      The tricky part is the completion code which drops part of the backing
      file chain.  This must be done in the main loop where bdrv_unref() and
      bdrv_close() are safe to call.  Use block_job_defer_to_main_loop() to
      achieve that.
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Message-id: 1413889440-32577-9-git-send-email-stefanha@redhat.com
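      A sketch of the resulting split (illustrative; the marshalling of
      completion data between the two halves is elided):

      static void stream_complete(BlockJob *job, void *opaque)
      {
          /* Runs in the main loop, where bdrv_unref() and bdrv_close()
           * are safe to call while dropping the backing file chain. */
          ...
      }

      static void coroutine_fn stream_run(void *opaque)
      {
          StreamBlockJob *s = opaque;
          /* ... copy data in the BDS AioContext ... */
          block_job_defer_to_main_loop(&s->common, stream_complete, data);
      }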
  15. 20 Oct 2014 (1 commit)
  16. 18 Jul 2014 (1 commit)
  17. 01 Jul 2014 (1 commit)
    • block: add backing-file option to block-stream · 13d8cc51
      Jeff Cody authored
      On some image chains, QEMU may not always be able to resolve the
      filenames properly when updating the backing file of an image
      after a block job.
      
      For instance, certain relative pathnames may fail, or drives may
      have been specified originally by file descriptor (e.g. /dev/fd/???),
      or a relative protocol pathname may have been used.
      
      In these instances, QEMU may lack the information to be able to make
      the correct choice, but the user or management layer most likely does
      have that knowledge.
      
      With this extension to the block-stream API, the user is able to change
      the backing file of the active layer as part of the block-stream
      operation.
      
      This allows the change to be 'safe', in the sense that if the attempt
      to write the active image metadata fails, then the block-stream
      operation returns failure, without disrupting the guest.
      
      If a backing file string is not specified in the command, the backing
      file string to use is determined in the same manner as it was
      previously.
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Signed-off-by: Jeff Cody <jcody@redhat.com>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
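      On the completion side, the behaviour described above boils down to
      something like this (a sketch; variable names are illustrative and the
      error path is simplified):

      /* Write the user-supplied backing file string into the active
       * image's metadata; on failure, fail the block-stream operation
       * without disrupting the guest. */
      ret = bdrv_change_backing_file(bs, backing_file_str, base_fmt);
      if (ret < 0) {
          block_job_completed(&s->common, ret);
          return;
      }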
  18. 23 Jun 2014 (1 commit)
  19. 28 May 2014 (1 commit)
  20. 25 Jan 2014 (1 commit)
  21. 28 Nov 2013 (1 commit)
  22. 11 Oct 2013 (2 commits)
  23. 25 Sep 2013 (1 commit)
  24. 06 Sep 2013 (4 commits)
  25. 23 Aug 2013 (1 commit)
  26. 19 Aug 2013 (1 commit)
    • block: stop relying on io_flush() in bdrv_drain_all() · 88266f5a
      Stefan Hajnoczi authored
      If a block driver has no file descriptors to monitor but there are still
      active requests, it can return 1 from .io_flush().  This is used to spin
      during synchronous I/O.
      
      Stop relying on .io_flush() and instead check
      QLIST_EMPTY(&bs->tracked_requests) to decide whether there are active
      requests.
      
      This is the first step in removing .io_flush() so that event loops no
      longer need to have the concept of synchronous I/O.  Eventually we may
      be able to kill synchronous I/O completely by running everything in a
      coroutine, but that is future work.
      
      Note this patch moves bs->throttled_reqs initialization to bdrv_new() so
      that bdrv_requests_pending(bs) can safely access it.  In practice bs is
      g_malloc0() so the memory is already zeroed but it's safer to initialize
      the queue properly.
      
      We also need to fix up block/stream.c:close_unused_images() to prevent
      traversing a dangling pointer while it rearranges the backing file
      chain.  This is necessary since the new bdrv_drain_all() traverses the
      backing file chain.
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
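      A sketch of the new check (simplified; the real helper would also have
      to consider child BDSes such as the file and backing images):

      static bool bdrv_requests_pending(BlockDriverState *bs)
      {
          /* Active requests are tracked per BDS, so an empty list means
           * the device is quiescent. */
          if (!QLIST_EMPTY(&bs->tracked_requests)) {
              return true;
          }
          if (!qemu_co_queue_empty(&bs->throttled_reqs)) {
              return true;
          }
          return false;
      }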
  27. 28 Jun 2013 (1 commit)