1. 28 June 2014 (1 commit)
  2. 27 June 2014 (1 commit)
  3. 26 June 2014 (9 commits)
  4. 25 June 2014 (1 commit)
  5. 23 June 2014 (4 commits)
  6. 16 June 2014 (2 commits)
  7. 04 June 2014 (5 commits)
    • throttle: add throttle_detach/attach_aio_context() · 13af91eb
      Committed by Stefan Hajnoczi
      Block I/O throttling uses timers and currently always adds them to the
      main loop.  Throttling will break if bdrv_set_aio_context() is used to
      move a BlockDriverState to a different AioContext.
      
      This patch adds throttle_detach/attach_aio_context() interfaces so that
      the throttling timers can be detached from the old AioContext and
      attached to the new one when a BlockDriverState is moved.
      Note that bdrv_set_aio_context() already drains all requests so we're
      sure no throttled requests are pending.
      
      The test cases need to be updated since the throttle_init() interface
      has changed.
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
      Reviewed-by: Benoit Canet <benoit@irqsave.net>
      13af91eb
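      A minimal sketch of the intended detach/re-attach pattern follows; the
      field and variable names (bs->io_limits_enabled, bs->throttle_state,
      new_context) are assumptions for illustration and may not match the
      tree exactly:

          /* Detach the throttle timers from the old event loop before the
           * BlockDriverState is moved, then re-attach them afterwards. */
          if (bs->io_limits_enabled) {
              throttle_detach_aio_context(&bs->throttle_state);
          }
          /* ... bs is moved to new_context by bdrv_set_aio_context() ... */
          if (bs->io_limits_enabled) {
              throttle_attach_aio_context(&bs->throttle_state, new_context);
          }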
    • block: add bdrv_set_aio_context() · dcd04228
      Committed by Stefan Hajnoczi
      Up until now all BlockDriverState instances have used the QEMU main loop
      for fd handlers, timers, and BHs.  This is not scalable on SMP guests
      and hosts so we need to move to a model with multiple event loops on
      different host CPUs.
      
      bdrv_set_aio_context() assigns the AioContext event loop to use for a
      particular BlockDriverState.  It first detaches the entire
      BlockDriverState graph from the current AioContext and then attaches to
      the new AioContext.
      
      This function will be used by virtio-blk data-plane to assign a
      BlockDriverState to its IOThread AioContext.  Make
      bdrv_set_aio_context() public since data-plane should not include
      block_int.h.
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
      dcd04228
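      A hedged sketch of how a data-plane user might call it; s->iothread and
      the surrounding structure are assumptions, only bdrv_set_aio_context(),
      iothread_get_aio_context() and qemu_get_aio_context() come from QEMU:

          /* Hand the whole BDS graph to the IOThread's event loop before
           * starting data-plane processing ... */
          bdrv_set_aio_context(bs, iothread_get_aio_context(s->iothread));

          /* ... and give it back to the main loop when data-plane stops. */
          bdrv_set_aio_context(bs, qemu_get_aio_context());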
    • block: acquire AioContext in bdrv_drain_all() · 9b536adc
      Committed by Stefan Hajnoczi
      Modify bdrv_drain_all() to take into account that BlockDriverState
      instances may be running in different AioContexts.
      
      This patch changes the implementation of bdrv_drain_all() while
      preserving the semantics.  Previously kicking throttled requests and
      checking for pending requests were done across all BlockDriverState
      instances in sequence.  Now we process each BlockDriverState in turn,
      making sure to acquire and release its AioContext.
      
      This prevents race conditions between the thread executing
      bdrv_drain_all() and the thread running the AioContext.
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
      9b536adc
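      The per-device locking pattern described above looks roughly like this
      (an illustrative sketch, not the literal patch):

          QTAILQ_FOREACH(bs, &bdrv_states, device_list) {
              AioContext *aio_context = bdrv_get_aio_context(bs);

              aio_context_acquire(aio_context);
              /* Poll this BDS's own event loop until all of its requests,
               * including previously throttled ones, have completed. */
              while (bdrv_requests_pending(bs)) {
                  aio_poll(aio_context, true);
              }
              aio_context_release(aio_context);
          }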
    • block: acquire AioContext in bdrv_*_all() · ed78cda3
      Committed by Stefan Hajnoczi
      bdrv_close_all(), bdrv_commit_all(), bdrv_flush_all(),
      bdrv_invalidate_cache_all(), and bdrv_clear_incoming_migration_all() are
      called by main loop code and touch all BlockDriverState instances.
      
      Some BlockDriverState instances may be running in another AioContext.
      Make sure to acquire the AioContext before closing the BlockDriverState.
      
      This will protect against race conditions once virtio-blk data-plane is
      using the BlockDriverState from another AioContext event loop.
      
      Note that this patch does not convert bdrv_drain_all() yet since that
      conversion is non-trivial.
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
      ed78cda3
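      For these functions the pattern is a simple acquire/operate/release
      around each device, e.g. for a flush (sketch only):

          QTAILQ_FOREACH(bs, &bdrv_states, device_list) {
              AioContext *aio_context = bdrv_get_aio_context(bs);

              aio_context_acquire(aio_context);
              bdrv_flush(bs);
              aio_context_release(aio_context);
          }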
    • block: use BlockDriverState AioContext · 2572b37a
      Committed by Stefan Hajnoczi
      Drop the assumption that we're using the main AioContext.  Convert
      qemu_aio_wait() to aio_poll() and qemu_bh_new() to aio_bh_new() so the
      BlockDriverState AioContext is used.
      
      Note there is still one qemu_aio_wait() left in bdrv_create() but we do
      not have a BlockDriverState there and only main loop code invokes this
      function.
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
      2572b37a
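      The conversion boils down to passing the BDS's own context explicitly,
      e.g. (illustrative; cb and opaque are placeholders):

          /* before: implicitly tied to the main loop */
          qemu_aio_wait();
          bh = qemu_bh_new(cb, opaque);

          /* after: use whatever AioContext the BDS is attached to */
          aio_poll(bdrv_get_aio_context(bs), true);
          bh = aio_bh_new(bdrv_get_aio_context(bs), cb, opaque);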
  8. 30 May 2014 (1 commit)
  9. 28 May 2014 (6 commits)
  10. 19 May 2014 (3 commits)
    • block: optimize zero writes with bdrv_write_zeroes · 465bee1d
      Committed by Peter Lieven
      This patch tries to optimize zero write requests by automatically
      using bdrv_write_zeroes if it is supported by the format.
      
      This significantly speeds up file system initialization and should
      also speed up the zero-write tests used to benchmark backend storage
      performance.
      
      I ran the following two tests on my internal SSD with a 50G QCOW2
      container and on attached iSCSI storage.
      
      a) mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/vdX
      
      QCOW2         [off]     [on]     [unmap]
      -----
      runtime:       14secs    1.1secs  1.1secs
      filesize:      937M      18M      18M
      
      iSCSI         [off]     [on]     [unmap]
      ----
      runtime:       9.3s      0.9s     0.9s
      
      b) dd if=/dev/zero of=/dev/vdX bs=1M oflag=direct
      
      QCOW2         [off]     [on]     [unmap]
      -----
      runtime:       246secs   18secs   18secs
      filesize:      51G       192K     192K
      throughput:    203M/s    2.3G/s   2.3G/s
      
      iSCSI*        [off]     [on]     [unmap]
      ----
      runtime:       8mins     45secs   33secs
      throughput:    106M/s    1.2G/s   1.6G/s
      allocated:     100%      100%     0%
      
       * The storage was connected via a 1 Gbit interface.
         It seems to handle writing zeroes internally via
         WRITESAME16 very quickly.
      Signed-off-by: Peter Lieven <pl@kamp.de>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      465bee1d
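      Conceptually, the write path detects an all-zero payload and upgrades
      it to a zero write; a sketch along these lines (helper and flag names
      are best-effort recollections, not quoted from the patch):

          /* If the guest wrote only zeroes, let the driver handle it as an
           * efficient zero write, optionally unmapping the range. */
          if (bs->detect_zeroes != BLOCKDEV_DETECT_ZEROES_OPTIONS_OFF &&
              qemu_iovec_is_zero(qiov)) {
              flags |= BDRV_REQ_ZERO_WRITE;
              if (bs->detect_zeroes == BLOCKDEV_DETECT_ZEROES_OPTIONS_UNMAP) {
                  flags |= BDRV_REQ_MAY_UNMAP;
              }
          }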
    • block: Allow JSON filenames · 4993f7ea
      Committed by Max Reitz
      If the filename given to bdrv_open() is prefixed with "json:", parse the
      rest as a JSON object and merge the result into the options QDict. If
      there are conflicts, the options QDict takes precedence.
      Signed-off-by: Max Reitz <mreitz@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      4993f7ea
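      For example, a complete option tree can then be passed wherever a plain
      filename is expected (hypothetical image name):

          qemu-img info \
              'json:{"driver": "qcow2", "file": {"driver": "file", "filename": "disk.qcow2"}}'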
    • block: Fix bdrv_is_allocated() for short backing files · e88ae226
      Committed by Kevin Wolf
      bdrv_is_allocated() shouldn't return true for sectors that are
      unallocated in the image but lie after the end of a short backing file,
      even though such sectors are (correctly) marked as containing zeros.
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      e88ae226
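      The distinction, in terms of the block-status bits, is roughly the
      following (a simplified sketch of the semantics, not the actual fix):

          ret = bdrv_get_block_status(bs, sector_num, nb_sectors, &pnum);
          /* "Allocated" must mean "this layer provides the data", not merely
           * "the data reads back as zeroes": sectors beyond the end of a
           * short backing file are zero, yet unallocated in this layer. */
          allocated = !!(ret & BDRV_BLOCK_DATA);    /* not BDRV_BLOCK_ZERO */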
  11. 10 May 2014 (1 commit)
  12. 30 April 2014 (6 commits)
    • block: Fix open_flags in bdrv_reopen() · f1f25a2e
      Committed by Kevin Wolf
      Use the same function as bdrv_open() to determine the right flags for
      bs->file. Without this, a reopen means that bs->file loses
      BDRV_O_CACHE_WB or BDRV_O_UNMAP if bs doesn't have it as well.
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      f1f25a2e
    • Revert "block: another bdrv_append fix" · 7e3d98dd
      Committed by Kevin Wolf
      This reverts commit 3a389e79. The commit
      was wrong and what it tried to fix just works today without any change.
      
      What the commit tried to fix:
      
          When creating live snapshots, the new image file is opened with
          BDRV_O_NO_BACKING because the whole backing chain is already opened.
          It is then appended to the chain using bdrv_append(). The result of
          this was that the image had a backing file, but BDRV_O_NO_BACKING
          was still set. This is obviously inconsistent.
      
          There used to be some places in qemu that closed an image and then
          opened it again, with its old flags (a bdrv_open()/close() sequence
          involves reopening the whole backing file chain, too). In this case
          the BDRV_O_NO_BACKING flag meant that the backing chain wasn't
          reopened and only the top layer was left.
      
          (Most, but not all of these places are replaced by bdrv_reopen()
          today, which doesn't touch the backing files at all.)
      
          Other places that looked at bs->open_flags weren't interested in
          BDRV_O_NO_BACKING, so no breakage there.
      
      What it actually did:
      
          The commit moved the BDRV_O_NO_BACKING flag to the backing file.
          Because the bdrv_open()/close() sequences only looked at the flags
          of the top level BlockDriverState and used it for the whole chain,
          the flag didn't hurt there any more. Obviously, it is still
          inconsistent because the backing file may have another backing file,
          but without practical impact.
      
          At the same time, it swapped all other flags. This is practically
          irrelevant as long as live snapshots only allow opening the new
          layer with the same flags as the old top layer. It still doesn't
          make any sense, and it is a time bomb that explodes as soon as the
          flags can differ.
      
          bdrv_append_temp_snapshot() is such a case: It adds the new flag
          BDRV_O_TEMPORARY for the temporary snapshot. The swapping of commit
          3a389e79 results in the following nonsensical configuration:
      
          bs->open_flags:                     BDRV_O_TEMPORARY cleared
          bs->file->open_flags:               BDRV_O_TEMPORARY set
          bs->backing_hd->open_flags:         BDRV_O_TEMPORARY set
          bs->backing_hd->file->open_flags:   BDRV_O_TEMPORARY cleared
      
          We're still lucky because the format layer ignores the flag and the
          protocol layer happens to get the right value, but sooner or later
          this is bound to go wrong...
      
      What the right fix would have been:
      
          Simply clear the BDRV_O_NO_BACKING flag when the BlockDriverState is
          appended to an existing backing file chain, because now it does have
          a backing file.
      
          Commit 4ddc07ca already implemented this silently in bdrv_append(),
          so we don't have to come up with a new fix.
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      7e3d98dd
    • block: Unlink temporary files in raw-posix/win32 · 8bfea15d
      Committed by Kevin Wolf
      Instead of having unlink() calls in the generic block layer, where we
      aren't even guaranteed to have a file name, move them to those block
      drivers that are actually used and that always have a filename. This
      gets rid of some #ifdefs as well.
      
      The patch also converts bs->is_temporary to a new BDRV_O_TEMPORARY open
      flag so that it is inherited in the protocol layer and the raw-posix and
      raw-win32 drivers can unlink the file.
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      8bfea15d
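      In the POSIX protocol driver the effect is roughly the following
      (sketch only; the win32 driver needs its own equivalent):

          /* Once the file descriptor is open we no longer need the name, so
           * a temporary image vanishes as soon as QEMU exits. */
          if (bs->open_flags & BDRV_O_TEMPORARY) {
              unlink(filename);
          }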
    • block: Remove BDRV_O_COPY_ON_READ for bs->file · 5669b44d
      Committed by Kevin Wolf
      Copy on Read makes sense on the format level where backing files are
      implemented, but it's not required on the protocol level. While it
      shouldn't actively break anything to have COR enabled on both layers,
      needless serialisation and allocation checks may impact performance.
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      5669b44d
    • block: Create bdrv_backing_flags() · 317fc44e
      Committed by Kevin Wolf
      Instead of manipulating flags inline, move the derivation of the flags
      of a backing file into a new function, next to the existing functions
      that derive flags for bs->file and for the block driver open function.
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      317fc44e
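      The derivation itself is small; roughly (a sketch, the exact set of
      flags cleared may differ from the patch):

          static int bdrv_backing_flags(int flags)
          {
              /* backing files are always opened read-only, and copy-on-read
               * only applies to the top layer */
              flags &= ~(BDRV_O_RDWR | BDRV_O_COPY_ON_READ);
              /* snapshot=on and temporary images are handled on top as well */
              flags &= ~(BDRV_O_SNAPSHOT | BDRV_O_TEMPORARY);
              return flags;
          }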
    • block: Create bdrv_inherited_flags() · 0b50cc88
      Committed by Kevin Wolf
      Instead of having bdrv_open_flags() as a function that creates flags for
      several unrelated places and then adding open-coded flags on top, create
      a new function that derives the flags for bs->file from the flags for bs.
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      0b50cc88
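      A sketch of what such a helper can look like (the precise flag handling
      is an assumption, not quoted from the patch):

          static int bdrv_inherited_flags(int flags)
          {
              /* bs->file is always a protocol-level BDS, never probed */
              flags |= BDRV_O_PROTOCOL;
              /* flags that only make sense on the format/top layer (see the
               * BDRV_O_COPY_ON_READ patch above) are not passed down */
              flags &= ~(BDRV_O_SNAPSHOT | BDRV_O_NO_BACKING |
                         BDRV_O_COPY_ON_READ);
              return flags;
          }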