1. 10 Sep 2014, 3 commits
    • block: Make the block accounting functions operate on BlockAcctStats · 5366d0c8
      Authored by Benoît Canet
      This is the next step in decoupling the block accounting functions from
      BlockDriverState.
      In a future commit the BlockAcctStats structure will be moved from
      BlockDriverState into the device model structures.
      
      Note that bdrv_get_stats was introduced so device models can retrieve the
      BlockAcctStats structure of a BlockDriverState without being aware of its
      layout.
      This function should go away once BlockAcctStats is embedded in the device
      model structures.
      
      CC: Kevin Wolf <kwolf@redhat.com>
      CC: Stefan Hajnoczi <stefanha@redhat.com>
      CC: Keith Busch <keith.busch@intel.com>
      CC: Anthony Liguori <aliguori@amazon.com>
      CC: "Michael S. Tsirkin" <mst@redhat.com>
      CC: Paolo Bonzini <pbonzini@redhat.com>
      CC: Eric Blake <eblake@redhat.com>
      CC: Peter Maydell <peter.maydell@linaro.org>
      CC: Michael Tokarev <mjt@tls.msk.ru>
      CC: John Snow <jsnow@redhat.com>
      CC: Markus Armbruster <armbru@redhat.com>
      CC: Alexander Graf <agraf@suse.de>
      CC: Max Reitz <mreitz@redhat.com>
      Signed-off-by: Benoît Canet <benoit.canet@nodalink.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
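The accessor pattern the commit describes can be sketched as follows. This is a minimal, self-contained illustration, not QEMU's actual code: the structure fields here are invented placeholders, and only the shape of bdrv_get_stats() reflects the commit.

```c
#include <assert.h>

/* Hypothetical, simplified stand-ins for QEMU's structures. */
typedef struct BlockAcctStats {
    unsigned long nr_bytes[2];   /* illustrative read/write byte counters */
    unsigned long nr_ops[2];     /* illustrative read/write op counters */
} BlockAcctStats;

typedef struct BlockDriverState {
    /* ...many other fields in the real structure... */
    BlockAcctStats stats;
} BlockDriverState;

/* Accessor in the spirit of bdrv_get_stats(): device models obtain a
 * pointer to the accounting stats without depending on the layout of
 * BlockDriverState. */
static BlockAcctStats *bdrv_get_stats(BlockDriverState *bs)
{
    return &bs->stats;
}
```

Once BlockAcctStats lives directly in the device model structures, callers hold the stats pointer themselves and this indirection disappears, as the message notes.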
    • block: rename BlockAcctType members to start with BLOCK_ instead of BDRV_ · 28298fd3
      Authored by Benoît Canet
      The medium-term goal is to move the BlockAcctStats structure into the device models.
      (Capturing I/O accounting statistics in the device models is useful for billing.)
      This patch makes a small step in this direction by removing a reference to BDRV.
      
      CC: Kevin Wolf <kwolf@redhat.com>
      CC: Stefan Hajnoczi <stefanha@redhat.com>
      CC: Keith Busch <keith.busch@intel.com>
      CC: Anthony Liguori <aliguori@amazon.com>
      CC: "Michael S. Tsirkin" <mst@redhat.com>
      CC: Paolo Bonzini <pbonzini@redhat.com>
      CC: John Snow <jsnow@redhat.com>
      CC: Richard Henderson <rth@twiddle.net>
      CC: Markus Armbruster <armbru@redhat.com>
      CC: Alexander Graf <agraf@suse.de>
      Signed-off-by: Benoît Canet <benoit.canet@nodalink.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
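The rename is a pure prefix change on the accounting enum; a sketch of the resulting declaration (member names as the commit title implies, though the exact member list is an assumption):

```c
#include <assert.h>

/* Before this patch the members carried a BDRV_ prefix (BDRV_ACCT_READ,
 * BDRV_ACCT_WRITE, ...); dropping it prepares the accounting code for
 * moving out of the block layer into the device models. */
enum BlockAcctType {
    BLOCK_ACCT_READ,
    BLOCK_ACCT_WRITE,
    BLOCK_ACCT_FLUSH,
    BLOCK_MAX_IOTYPE,   /* sentinel: number of accounted I/O types */
};
```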
    • xen_disk: Plug memory leak on error path · cedccf13
      Authored by Markus Armbruster
      The Error object was leaked after a failed bdrv_new(). While there,
      streamline the control flow a bit.
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
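The bug class here is worth spelling out: an Error object filled in by a failing call must be consumed (propagated or freed) on the error path, or it leaks. A minimal self-contained sketch of the pattern, with toy stand-ins for QEMU's Error API rather than the real implementation:

```c
#include <assert.h>
#include <stdlib.h>

/* Toy stand-in for QEMU's Error object and helpers. */
typedef struct Error { const char *msg; } Error;

static void error_setg(Error **errp, const char *msg)
{
    if (errp) {
        Error *err = malloc(sizeof(*err));
        err->msg = msg;
        *errp = err;
    }
}

static void error_free(Error *err)
{
    free(err);
}

/* A callee that can fail and report an Error, like bdrv_new()'s callers. */
static int open_backend(int should_fail, Error **errp)
{
    if (should_fail) {
        error_setg(errp, "open failed");
        return -1;
    }
    return 0;
}

/* The fix: on the error path, consume the Error instead of dropping it. */
static int setup(int should_fail)
{
    Error *local_err = NULL;

    if (open_backend(should_fail, &local_err) < 0) {
        /* report local_err->msg here, then: */
        error_free(local_err);   /* without this call, the object leaks */
        return -1;
    }
    return 0;
}
```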
  2. 08 Sep 2014, 2 commits
    • pflash_cfi01: write flash contents to bdrv on incoming migration · 4c0cfc72
      Authored by Laszlo Ersek
      A drive that backs a pflash device is special:
      - it is very small,
      - its entire contents are kept in a RAMBlock at all times, covering the
        guest-phys address range that provides the guest's view of the emulated
        flash chip.
      
      The pflash device model keeps the drive (the host-side file) and the
      guest-visible flash contents in sync. When migrating the guest, the
      guest-visible flash contents (the RAMBlock) are migrated by default, but on
      the target host, the drive (the host-side file) remains in full sync with
      the RAMBlock only if:
      - the source and target hosts share the storage underlying the pflash
        drive,
      - or the migration requests full or incremental block migration too, which
        then covers all drives.
      
      Due to the special nature of pflash drives, the following scenario makes
      sense as well:
      - neither full nor incremental block migration (which would cover all
        drives) alongside the base migration (justified e.g. by shared storage
        for the "normal" (big) drives),
      - non-shared storage for pflash drives.
      
      In this case, currently only those portions of the flash drive are updated
      on the target disk that the guest reprograms while running on the target
      host.
      
      In order to restore accord, dump the entire flash contents to the bdrv in
      a post_load() callback.
      
      - The read-only check follows the other call-sites of pflash_update();
      - both "pfl->ro" and pflash_update() reflect / consider the case when
        "pfl->bs" is NULL;
      - the total size of the flash device is calculated as in
        pflash_cfi01_realize().
      
      When using shared storage, or requesting full or incremental block
      migration along with the normal migration, the patch should incur a
      harmless rewrite from the target side.
      
      It is assumed that, on the target host, RAM is loaded ahead of the call to
      pflash_post_load().
      Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Laszlo Ersek <lersek@redhat.com>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
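The post_load() behavior described above can be sketched as below. This is a simplified stand-in, not the device model's real code: the structure, field names, and the memcpy standing in for pflash_update() are all illustrative; only the guards (skip when there is no drive or it is read-only, run after RAM is loaded) mirror the commit message.

```c
#include <assert.h>
#include <string.h>

enum { FLASH_SIZE = 64 };   /* toy flash size; real size comes from the device */

typedef struct PFlash {
    unsigned char ram[FLASH_SIZE];   /* guest-visible contents (the RAMBlock) */
    unsigned char *bs;               /* backing drive contents; may be NULL */
    int ro;                          /* drive is read-only */
} PFlash;

/* post_load-style callback: after incoming migration, dump the entire
 * flash contents to the backing drive so it catches up with the RAMBlock.
 * Assumes RAM has already been loaded when this runs. */
static int pflash_post_load(PFlash *pfl)
{
    if (pfl->bs && !pfl->ro) {
        memcpy(pfl->bs, pfl->ram, FLASH_SIZE);   /* stands in for pflash_update() */
    }
    return 0;
}
```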
    • pflash_cfi01: fixup stale DPRINTF() calls · afeb25f9
      Authored by Laszlo Ersek
      Signed-off-by: Laszlo Ersek <lersek@redhat.com>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
  3. 29 Aug 2014, 1 commit
    • virtio-blk: allow drive_del with dataplane · 3255d1c2
      Authored by Stefan Hajnoczi
      Now that drive_del acquires the AioContext we can safely allow deleting
      the drive.  As with non-dataplane mode, all I/Os submitted by the guest
      after drive_del will return EIO.
      
      This patch makes hot unplug work with virtio-blk dataplane.  Previously
      drive_del reported an error because the device was busy.
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
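The two behaviors the message relies on, sketched as a self-contained toy: deletion happens under the drive's AioContext so the dataplane thread cannot race with it, and subsequent guest I/O completes with -EIO. The counter here merely stands in for the real aio_context_acquire()/aio_context_release() lock; the Drive structure and submit_io() are invented for illustration.

```c
#include <assert.h>

/* Toy stand-in for an AioContext; the counter represents its lock. */
typedef struct AioContext {
    int lock_depth;
} AioContext;

typedef struct Drive {
    AioContext *ctx;
    int deleted;    /* once set, guest I/O completes with -EIO */
} Drive;

static void aio_context_acquire(AioContext *ctx) { ctx->lock_depth++; }
static void aio_context_release(AioContext *ctx) { ctx->lock_depth--; }

/* drive_del-style removal: performed under the AioContext so the
 * dataplane thread cannot observe a half-deleted drive. */
static void drive_del(Drive *d)
{
    aio_context_acquire(d->ctx);
    d->deleted = 1;
    aio_context_release(d->ctx);
}

/* Guest I/O submitted after drive_del fails with EIO, as with
 * non-dataplane hot unplug. */
static int submit_io(Drive *d)
{
    return d->deleted ? -5 /* -EIO */ : 0;
}
```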
  4. 26 Aug 2014, 1 commit
  5. 20 Aug 2014, 3 commits
  6. 18 Aug 2014, 1 commit
  7. 16 Aug 2014, 4 commits
  8. 15 Aug 2014, 1 commit
  9. 15 Jul 2014, 2 commits
  10. 14 Jul 2014, 6 commits
  11. 07 Jul 2014, 1 commit
    • dataplane: submit I/O as a batch · dd67c1d7
      Authored by Ming Lei
      Before commit 580b6b2a (dataplane: use the QEMU block
      layer for I/O), dataplane for virtio-blk submitted block
      I/O as a batch.
      
      Commit 580b6b2a replaced the custom Linux AIO
      implementation (including batch submission) with the QEMU
      block layer, but it caused a ~40% throughput regression in
      virtio-blk performance; the loss of batch submission
      is one of the causes.
      
      This patch applies the newly introduced bdrv_io_plug() and
      bdrv_io_unplug() interfaces to support submitting I/O
      as a batch in the QEMU block layer; in my test, the change
      improves throughput by ~30% with 'aio=native'.
      
      My fio test script follows:
      
      	[global]
      	direct=1
      	size=4G
      	bsrange=4k-4k
      	timeout=40
      	numjobs=4
      	ioengine=libaio
      	iodepth=64
      	filename=/dev/vdc
      	group_reporting=1
      
      	[f]
      	rw=randread
      
      Results on one of my small machines (host: x86_64, 2 cores, 4 threads; guest: 4 cores):
      	- qemu master: 65K IOPS
      	- qemu master with these patches: 92K IOPS
      	- 2.0.0 release (dataplane using custom Linux AIO): 104K IOPS
      Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Ming Lei <ming.lei@canonical.com>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
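The plug/unplug batching idea can be shown with a small self-contained model: while "plugged", requests are queued instead of hitting the backend, and unplugging submits the whole queue in one go (one io_submit() for many requests is where the throughput win comes from). The structure and counters below are invented for illustration; only the bdrv_io_plug()/bdrv_io_unplug() names come from the commit.

```c
#include <assert.h>

enum { MAX_BATCH = 64 };   /* illustrative queue depth */

typedef struct BlockBackend {
    int plugged;             /* plug nesting counter */
    int queued[MAX_BATCH];
    int nqueued;
    int submissions;         /* how many times the backend was hit */
} BlockBackend;

static void flush_batch(BlockBackend *bb)
{
    if (bb->nqueued > 0) {
        bb->submissions++;   /* one submission covers the whole batch */
        bb->nqueued = 0;
    }
}

static void bdrv_io_plug(BlockBackend *bb)
{
    bb->plugged++;
}

static void bdrv_io_unplug(BlockBackend *bb)
{
    if (--bb->plugged == 0) {
        flush_batch(bb);
    }
}

static void submit_request(BlockBackend *bb, int req)
{
    if (bb->plugged) {
        if (bb->nqueued == MAX_BATCH) {
            flush_batch(bb);             /* queue full: flush early */
        }
        bb->queued[bb->nqueued++] = req; /* defer while plugged */
    } else {
        bb->submissions++;               /* unplugged: submit immediately */
    }
}
```

A dataplane-style caller plugs before draining the virtqueue, submits every request it finds, then unplugs once, so a burst of guest requests costs a single backend submission.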
  12. 01 Jul 2014, 5 commits
  13. 30 Jun 2014, 3 commits
  14. 28 Jun 2014, 7 commits