1. 07 Feb 2015, 1 commit
    • block: add event when disk usage exceeds threshold · e2462113
      Committed by Francesco Romani
      Management applications such as oVirt (http://www.ovirt.org) make extensive
      use of thin-provisioned disk images.
      To let the guest run smoothly and avoid unnecessary pauses, oVirt sets
      a disk usage threshold (a so-called 'high water mark') based on how full
      the device is, and automatically extends the image once the threshold
      is reached or exceeded.
      
      In order to detect the crossing of the threshold, oVirt has no choice but
      to aggressively poll the QEMU monitor using the query-blockstats command.
      This leads to unnecessary system load, and the problem gets worse at scale:
      deployments with hundreds of VMs are no longer rare.
      
      To fix this, this patch adds:
      * A new monitor command `block-set-write-threshold', to set a mark for
        a given block device.
      * A new event `BLOCK_WRITE_THRESHOLD', emitted when a block device's
        usage exceeds the threshold.
      * A new `write_threshold' field in the `BlockDeviceInfo' structure,
        to report the configured threshold.
      
      This will allow the management application to monitor disk usage in a
      smarter and more efficient way, greatly reducing the need for polling.
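
      As a rough illustration (not part of this patch) of how a management
      application might consume these interfaces, the sketch below arms the
      threshold and waits for the event over a QMP UNIX socket in Python.
      The socket path, node name and threshold value are invented for the
      example, and error handling and event/return interleaving are ignored:

      import json
      import socket

      QMP_SOCK = "/var/run/qemu-vm0.qmp"      # assumed QMP socket path
      NODE = "drive0"                         # assumed node-name of the image
      THRESHOLD = 16 * 1024 ** 3              # assumed mark: 16 GiB

      def recv_msg(stream):
          # QMP emits one JSON object per CRLF-terminated line.
          return json.loads(stream.readline())

      def send_cmd(sock, name, arguments=None):
          msg = {"execute": name}
          if arguments:
              msg["arguments"] = arguments
          sock.sendall((json.dumps(msg) + "\r\n").encode())

      with socket.socket(socket.AF_UNIX) as sock:
          sock.connect(QMP_SOCK)
          stream = sock.makefile()
          recv_msg(stream)                    # greeting
          send_cmd(sock, "qmp_capabilities")  # enter command mode
          recv_msg(stream)

          # Arm the threshold once instead of polling query-blockstats.
          send_cmd(sock, "block-set-write-threshold",
                   {"node-name": NODE, "write-threshold": THRESHOLD})
          recv_msg(stream)

          # Wait for the event; a real manager would then extend the
          # image and re-arm the threshold.
          while True:
              msg = recv_msg(stream)
              if msg.get("event") == "BLOCK_WRITE_THRESHOLD":
                  print("threshold exceeded on", msg["data"]["node-name"],
                        "by", msg["data"]["amount-exceeded"], "bytes")
                  break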
      
      [Updated qemu-iotests 067 output to add the new 'write_threshold'
      property. --Stefan]
      [Changed g_assert_false() to g_assert(!...) to fix the build on older glib
      versions. --Kevin]
      Signed-off-by: Francesco Romani <fromani@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Message-id: 1421068273-692-1-git-send-email-fromani@redhat.com
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  2. 13 Dec 2014, 1 commit
  3. 10 Dec 2014, 2 commits
  4. 05 Oct 2014, 1 commit
  5. 19 May 2014, 1 commit
    • block: optimize zero writes with bdrv_write_zeroes · 465bee1d
      Committed by Peter Lieven
      This patch optimizes zero write requests by automatically using
      bdrv_write_zeroes when it is supported by the format.
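
      The underlying idea, detecting all-zero write payloads and turning them
      into explicit zero-write requests (optionally with unmap), can be
      sketched roughly as below. This is a simplified Python illustration, not
      the actual QEMU code; the write_zeroes/plain_write helpers are invented
      for the example:

      def buffer_is_zero(buf):
          # QEMU uses an optimized C helper; this is the naive equivalent.
          return not any(buf)

      def handle_write(dev, offset, buf, detect_zeroes="off"):
          # Convert all-zero payloads into zero-write requests when
          # detection is enabled ("on" or "unmap").
          if detect_zeroes != "off" and buffer_is_zero(buf):
              # "unmap" additionally lets the backend deallocate the range,
              # which is what keeps thin-provisioned images small.
              dev.write_zeroes(offset, len(buf),
                               may_unmap=(detect_zeroes == "unmap"))
          else:
              dev.plain_write(offset, buf)

      The [off]/[on]/[unmap] columns in the results below correspond to these
      three detection modes.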
      
      This significantly speeds up file system initialization and should also
      speed up zero-write tests used to measure backend storage performance.
      
      I ran the following two tests on my internal SSD with a
      50G QCOW2 container, and on an attached iSCSI storage.
      
      a) mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/vdX
      
      QCOW2         [off]     [on]     [unmap]
      -----
      runtime:       14secs    1.1secs  1.1secs
      filesize:      937M      18M      18M
      
      iSCSI         [off]     [on]     [unmap]
      ----
      runtime:       9.3s      0.9s     0.9s
      
      b) dd if=/dev/zero of=/dev/vdX bs=1M oflag=direct
      
      QCOW2         [off]     [on]     [unmap]
      -----
      runtime:       246secs   18secs   18secs
      filesize:      51G       192K     192K
      throughput:    203M/s    2.3G/s   2.3G/s
      
      iSCSI*        [off]     [on]     [unmap]
      ----
      runtime:       8mins     45secs   33secs
      throughput:    106M/s    1.2G/s   1.6G/s
      allocated:     100%      100%     0%
      
      * The storage was connected via a 1 Gbit interface.
        It seems to handle writing zeroes internally
        via WRITESAME16 very quickly.
      Signed-off-by: Peter Lieven <pl@kamp.de>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  6. 07 Nov 2013, 1 commit
  7. 11 Oct 2013, 1 commit