1. 13 Mar 2017, 1 commit
  2. 07 Mar 2017, 2 commits
  3. 01 Mar 2017, 8 commits
  4. 15 Nov 2016, 2 commits
    • blockjob: add block_job_start · 5ccac6f1
      Authored by John Snow
      Instead of automatically starting jobs at creation time via backup_start
      et al, we'd like to return a job object pointer that can be started
      manually at a later point in time.
      
      For now, add the block_job_start mechanism and start the jobs
      automatically as we have been doing, with conversions job-by-job coming
      in later patches.
      
      Of note: cancellation of unstarted jobs will perform all the normal
      cleanup as if the job had started, particularly abort and clean. The
      only difference is that we will not emit any events, because the job
      never actually started.
      Signed-off-by: John Snow <jsnow@redhat.com>
      Message-id: 1478587839-9834-5-git-send-email-jsnow@redhat.com
      Signed-off-by: Jeff Cody <jcody@redhat.com>
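The split this commit describes can be sketched with a self-contained toy model (the struct, field, and function names below are illustrative stand-ins, not QEMU's real BlockJob API): cancelling a job that was created but never started still runs the abort/clean path, yet emits no completion event.

```c
#include <stdbool.h>

/* Toy stand-in for a block job; names are illustrative, not QEMU's ABI. */
typedef struct {
    bool started;
    int abort_calls;      /* times the .abort hook ran */
    int clean_calls;      /* times the .clean hook ran */
    int events_emitted;   /* completion events sent to the monitor */
} Job;

Job job_create(void)
{
    Job j = {0};          /* created, but nothing runs yet */
    return j;
}

void job_start(Job *j)
{
    j->started = true;    /* only now would the job coroutine be entered */
}

void job_cancel(Job *j)
{
    /* Normal cleanup happens whether or not the job ever ran... */
    j->abort_calls++;
    j->clean_calls++;
    /* ...but only a job that actually started emits events. */
    if (j->started) {
        j->events_emitted++;
    }
}
```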
    • blockjob: add .start field · a7815a76
      Authored by John Snow
      Add an explicit start field to specify the entrypoint. We already own
      the coroutine itself AND manage its lifetime; let's take control of
      its creation, too.
      
      This will allow us to delay creation of the actual coroutine until we
      know we'll actually start a BlockJob in block_job_start. This avoids
      the sticky question of how to "un-create" a Coroutine that hasn't been
      started yet.
      Signed-off-by: John Snow <jsnow@redhat.com>
      Reviewed-by: Kevin Wolf <kwolf@redhat.com>
      Message-id: 1478587839-9834-4-git-send-email-jsnow@redhat.com
      Signed-off-by: Jeff Cody <jcody@redhat.com>
  5. 01 Nov 2016, 3 commits
  6. 31 Oct 2016, 1 commit
  7. 28 Oct 2016, 1 commit
  8. 23 Sep 2016, 1 commit
  9. 21 Sep 2016, 1 commit
    • commit: get the overlay node before manipulating the backing chain · 4d6f8cbb
      Authored by Alberto Garcia
      The 'block-commit' command has a 'top' parameter to specify the
      topmost node from which the data is going to be copied.
      
         [E] <- [D] <- [C] <- [B] <- [A]
      
      In this case if [C] is the top node then this is the result:
      
         [E] <- [B] <- [A]
      
      [B] must be modified so its backing image string points to [E] instead
      of [C]. commit_start() takes care of reopening [B] in read-write
      mode, and commit_complete() puts it back in read-only mode once the
      operation has finished.
      
      In order to find [B] (the overlay node) we look for the node that has
      [C] (the top node) as its backing image. However in commit_complete()
      we're doing it after [C] has been removed from the chain, so [B] is
      never found and remains in read-write mode.
      
      This patch gets the overlay node before the backing chain is
      manipulated.
      Signed-off-by: Alberto Garcia <berto@igalia.com>
      Message-id: 1471836963-28548-1-git-send-email-berto@igalia.com
      Signed-off-by: Max Reitz <mreitz@redhat.com>
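The ordering bug is easy to model with a toy backing chain (an illustrative sketch only; real QEMU walks its own node links, not this struct). The overlay of 'top' is found by scanning from the active layer; once 'top' has been unlinked from the chain, the same scan finds nothing:

```c
#include <stddef.h>

/* Toy backing chain: 'backing' points toward the base image. */
typedef struct Node {
    const char *name;
    struct Node *backing;
} Node;

/* Find the overlay of 'top': the node whose backing image is 'top'. */
Node *find_overlay(Node *active, Node *top)
{
    Node *n;
    for (n = active; n != NULL; n = n->backing) {
        if (n->backing == top) {
            return n;
        }
    }
    return NULL;   /* 'top' is no longer in the chain */
}
```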
  10. 13 Jul 2016, 5 commits
    • Improve block job rate limiting for small bandwidth values · f14a39cc
      Authored by Sascha Silbe
      ratelimit_calculate_delay() previously reset the accounting every time
      slice, no matter how much data had been processed before. This had (at
      least) two consequences:
      
      1. The minimum speed is rather large, e.g. 5 MiB/s for commit and stream.
      
         Not sure if there are real-world use cases where this would be a
         problem. Mirroring and backup over a slow link (e.g. DSL) would
         come to mind, though.
      
      2. Tests for block job operations (e.g. cancel) were rather racy
      
         All block jobs currently use a time slice of 100ms. That's a
         reasonable value to get smooth output during regular
      operation. However, this also meant that the state of block jobs
         changed every 100ms, no matter how low the configured limit was. On
         busy hosts, qemu often transferred additional chunks until the test
         case had a chance to cancel the job.
      
      Fix the block job rate limit code to delay for more than one time
      slice to address the above issues. To make it easier to handle
      oversized chunks we switch the semantics from returning a delay
      _before_ the current request to a delay _after_ the current
      request. If necessary, this delay consists of multiple time slice
      units.
      
      Since the mirror job sends multiple chunks in one go even if the rate
      limit was exceeded in between, we need to keep track of the start of
      the current time slice so we can correctly re-compute the delay for
      the updated amount of data.
      
      The minimum bandwidth now is 1 data unit per time slice. The block
      jobs are currently passing the amount of data transferred in sectors
      and using 100ms time slices, so this translates to 5120
      bytes/second. With chunk sizes usually being O(512KiB), tests have
      plenty of time (O(100s)) to operate on block jobs. The chance of a
      race condition now is fairly remote, except possibly on insanely
      loaded systems.
      Signed-off-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
      Message-id: 1467127721-9564-2-git-send-email-silbe@linux.vnet.ibm.com
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>
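The delay-after semantics can be sketched like this (a simplified model; the struct layout and names are assumptions, not the actual qemu/ratelimit.h code): the delay returned for an oversized chunk spans as many whole time slices as the chunk needs, instead of resetting the accounting every slice.

```c
#include <stdint.h>

/* Hypothetical rate-limit state; field names are assumptions. */
typedef struct {
    int64_t slice_start_time;  /* start of the current time slice */
    int64_t slice_length;      /* duration of one slice, e.g. 100 (ms) */
    uint64_t slice_quota;      /* data units allowed per slice */
    uint64_t dispatched;       /* units sent since slice_start_time */
} RateLimit;

/* Account for 'n' units just transferred, then return how long the
 * caller must wait AFTER the request -- possibly several slices. */
int64_t ratelimit_delay_after(RateLimit *r, int64_t now, uint64_t n)
{
    if (now >= r->slice_start_time + r->slice_length) {
        /* A whole slice has elapsed; restart accounting from 'now'. */
        r->slice_start_time = now;
        r->dispatched = 0;
    }
    r->dispatched += n;
    if (r->dispatched <= r->slice_quota) {
        return 0;  /* still within budget for this slice */
    }
    /* Oversized chunk: wait until enough future slices cover it. */
    uint64_t slices_needed =
        (r->dispatched + r->slice_quota - 1) / r->slice_quota;
    int64_t slice_end =
        r->slice_start_time + (int64_t)slices_needed * r->slice_length;
    return slice_end - now;
}
```

With a 100 ms slice and a quota of 10 units, a 15-unit burst sent at the start of a slice yields a 200 ms delay (two slices), rather than being forgiven at the next slice boundary.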
    • commit: Fix use of error handling policy · 1e8fb7f1
      Authored by Kevin Wolf
      Commit implemented the 'enospc' policy as 'ignore' if the error was not
      ENOSPC. The QAPI documentation promises that it's treated as 'stop'.
      Using the common block job error handling function fixes this and also
      adds the missing QMP event.
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
    • coroutine: move entry argument to qemu_coroutine_create · 0b8b8753
      Authored by Paolo Bonzini
      In practice the entry argument is always known at creation time, and
      it is confusing that sometimes qemu_coroutine_enter is used with a
      non-NULL argument to re-enter a coroutine (this happens in
      block/sheepdog.c and tests/test-coroutine.c).  So pass the opaque value
      at creation time, for consistency with e.g. aio_bh_new.
      
      Mostly done with the following semantic patch:
      
      @ entry1 @
      expression entry, arg, co;
      @@
      - co = qemu_coroutine_create(entry);
      + co = qemu_coroutine_create(entry, arg);
        ...
      - qemu_coroutine_enter(co, arg);
      + qemu_coroutine_enter(co);
      
      @ entry2 @
      expression entry, arg;
      identifier co;
      @@
      - Coroutine *co = qemu_coroutine_create(entry);
      + Coroutine *co = qemu_coroutine_create(entry, arg);
        ...
      - qemu_coroutine_enter(co, arg);
      + qemu_coroutine_enter(co);
      
      @ entry3 @
      expression entry, arg;
      @@
      - qemu_coroutine_enter(qemu_coroutine_create(entry), arg);
      + qemu_coroutine_enter(qemu_coroutine_create(entry, arg));
      
      @ reentry @
      expression co;
      @@
      - qemu_coroutine_enter(co, NULL);
      + qemu_coroutine_enter(co);
      
      except for the aforementioned few places where the semantic patch
      stumbled (as expected) and for test_co_queue, which would otherwise
      produce an uninitialized variable warning.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Reviewed-by: Fam Zheng <famz@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
    • commit: Add 'job-id' parameter to 'block-commit' · fd62c609
      Authored by Alberto Garcia
      This patch adds a new optional 'job-id' parameter to 'block-commit',
      allowing the user to specify the ID of the block job to be created.
      Signed-off-by: Alberto Garcia <berto@igalia.com>
      Reviewed-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
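Usage might look like the following QMP command (the device name, job ID, and image path here are made-up examples, not values from the patch):

```json
{ "execute": "block-commit",
  "arguments": { "device": "drive0",
                 "job-id": "commit-drive0",
                 "top": "/images/snap2.qcow2" } }
```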
    • blockjob: Add 'job_id' parameter to block_job_create() · 7f0317cf
      Authored by Alberto Garcia
      When a new job is created, the job ID is taken from the device name of
      the BDS. This patch adds a new 'job_id' parameter to let the caller
      provide one instead.
      
      This patch also verifies that the ID is always unique and well-formed.
      This causes problems in a couple of places where no ID is being set,
      because the BDS does not have a device name.
      
      In the case of test_block_job_start() (from test-blockjob-txn.c) we
      can simply use this new 'job_id' parameter to set the missing ID.
      
      In the case of img_commit() (from qemu-img.c) we still don't have the
      API to make commit_active_start() set the job ID, so we solve it by
      setting a default value. We'll get rid of this as soon as we extend
      the API.
      Signed-off-by: Alberto Garcia <berto@igalia.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Reviewed-by: Kevin Wolf <kwolf@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  11. 05 Jul 2016, 2 commits
  12. 16 Jun 2016, 1 commit
  13. 26 May 2016, 1 commit
  14. 19 May 2016, 1 commit
    • blockjob: Don't touch BDS iostatus · 66a0fae4
      Authored by Kevin Wolf
      Block jobs don't actually make use of the iostatus for their BDSes, but
      they manage a separate block job iostatus. Still, they require that it
      is enabled for the source BDS and they enable it automatically for the
      target and set the error handling mode - which ends up never being used
      by the job.
      
      This patch removes all of the BDS iostatus handling from the block job,
      which removes another few bs->blk accesses.
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
  15. 23 Mar 2016, 1 commit
    • include/qemu/osdep.h: Don't include qapi/error.h · da34e65c
      Authored by Markus Armbruster
      Commit 57cb38b3 included qapi/error.h into qemu/osdep.h to get the
      Error typedef.  Since then, we've moved to include qemu/osdep.h
      everywhere.  Its file comment explains: "To avoid getting into
      possible circular include dependencies, this file should not include
      any other QEMU headers, with the exceptions of config-host.h,
      compiler.h, os-posix.h and os-win32.h, all of which are doing a
      similar job to this file and are under similar constraints."
      qapi/error.h doesn't do a similar job, and it doesn't adhere to
      similar constraints: it includes qapi-types.h.  That's in excess of
      100KiB of crap most .c files don't actually need.
      
      Add the typedef to qemu/typedefs.h, and include that instead of
      qapi/error.h.  Include qapi/error.h in .c files that need it and don't
      get it now.  Include qapi-types.h in qom/object.h for uint16List.
      
      Update scripts/clean-includes accordingly.  Update it further to match
      reality: replace config.h by config-target.h, add sysemu/os-posix.h,
      sysemu/os-win32.h.  Update the list of includes in the qemu/osdep.h
      comment quoted above similarly.
      
      This reduces the number of objects depending on qapi/error.h from "all
      of them" to less than a third.  Unfortunately, the number depending on
      qapi-types.h shrinks only a little.  More work is needed for that one.
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      [Fix compilation without the spice devel packages. - Paolo]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  16. 20 Jan 2016, 1 commit
  17. 11 Nov 2015, 1 commit
    • commit: reopen overlay_bs before base · 3db2bd55
      Authored by Alberto Garcia
      'block-commit' needs write access to two different nodes of the chain:
      
      - 'base', because that's where the data is written to.
      - the overlay of 'top', because it needs to update the backing file
        string to point to 'base' after the operation.
      
      Both images have to be opened in read-write mode, and commit_start()
      takes care of reopening them if necessary.
      
      With the current implementation, however, when overlay_bs is reopened
      in read-write mode it has the side effect of making 'base' read-only
      again, eventually making 'block-commit' fail.
      
      This needs to be fixed in bdrv_reopen(), but until we get to that it
      can be worked around simply by swapping the order of base and
      overlay_bs in the reopen queue.
      
      In order to reproduce this bug, overlay_bs needs to be initially in
      read-only mode. That is: the 'top' parameter of 'block-commit' cannot
      be the active layer nor its immediate backing chain.
      
      Cc: qemu-stable@nongnu.org
      Signed-off-by: Alberto Garcia <berto@igalia.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  18. 24 Oct 2015, 1 commit
  19. 14 Sep 2015, 1 commit
  20. 23 Jun 2015, 2 commits
  21. 03 Nov 2014, 1 commit
    • block: let commit blockjob run in BDS AioContext · 9e85cd5c
      Authored by Stefan Hajnoczi
      The commit block job must run in the BlockDriverState AioContext so that
      it works with dataplane.
      
      Acquire the AioContext in blockdev.c so starting the block job is safe.
      One detail here is that the bdrv_drain_all() must be moved inside the
      aio_context_acquire() region so requests cannot sneak in between the
      drain and acquire.
      
      The completion code in block/commit.c must perform backing chain
      manipulation and bdrv_reopen() from the main loop.  Use
      block_job_defer_to_main_loop() to achieve that.
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Message-id: 1413889440-32577-11-git-send-email-stefanha@redhat.com
  22. 20 Oct 2014, 1 commit
  23. 01 Jul 2014, 1 commit
    • block: extend block-commit to accept a string for the backing file · 54e26900
      Authored by Jeff Cody
      On some image chains, QEMU may not always be able to resolve the
      filenames properly, when updating the backing file of an image
      after a block commit.
      
      For instance, certain relative pathnames may fail, or drives may
      have been specified originally by file descriptor (e.g. /dev/fd/???),
      or a relative protocol pathname may have been used.
      
      In these instances, QEMU may lack the information to be able to make
      the correct choice, but the user or management layer most likely does
      have that knowledge.
      
      With this extension to the block-commit API, the user is able to change
      the backing file of the overlay image as part of the block-commit
      operation.
      
      This allows the change to be 'safe', in the sense that if the attempt
      to write the overlay image metadata fails, then the block-commit
      operation returns failure, without disrupting the guest.
      
      If the commit top is the active layer, then specifying the backing
      file string will be treated as an error (there is no overlay image
      to modify in that case).
      
      If a backing file string is not specified in the command, the backing
      file string to use is determined in the same manner as it was
      previously.
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Signed-off-by: Jeff Cody <jcody@redhat.com>
      Reviewed-by: Kevin Wolf <kwolf@redhat.com>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
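A hypothetical invocation (device and file names invented for illustration) that commits 'top' into its backing file and records the given string as the overlay's new backing file in one step:

```json
{ "execute": "block-commit",
  "arguments": { "device": "drive0",
                 "top": "/images/snap2.qcow2",
                 "backing-file": "snap1.qcow2" } }
```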