1. 20 Aug, 2014 (2 commits)
  2. 15 Aug, 2014 (4 commits)
  3. 18 Jul, 2014 (2 commits)
  4. 09 Jul, 2014 (1 commit)
    • block/backup: Fix hang for unaligned image size · d40593dd
      Authored by Kevin Wolf
      When doing a block backup of an image with an unaligned size (with
      respect to the BACKUP_CLUSTER_SIZE), qemu would check the allocation
      status of sectors after the end of the image. bdrv_is_allocated()
      returns a result that is valid for 0 sectors in this case, so the backup
      job ran into an endless loop.
      
      Stop looping when seeing a result valid for 0 sectors, we're at EOF then.
      
      The test case looks somewhat unrelated at first sight because I
      originally tried to reproduce a different suspected bug that turned
      out not to exist. It is still a good test case, and it accidentally
      found this one.
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
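The loop-termination fix described above can be sketched as follows. This is a minimal illustration with hypothetical names (`FakeImage`, `fake_is_allocated`, `backup_scan`), not QEMU's actual backup code, which queries bdrv_is_allocated() from its backup coroutine:

```c
#include <stdint.h>

/* Stand-in for a block device with an unaligned length in sectors. */
typedef struct {
    int64_t end_sectors;
} FakeImage;

/* Stand-in for bdrv_is_allocated(): reports via *valid how many sectors,
 * starting at 'sector', the answer covers. Past EOF that count is 0. */
static int64_t fake_is_allocated(const FakeImage *bs, int64_t sector,
                                 int64_t nb_sectors, int64_t *valid)
{
    if (sector >= bs->end_sectors) {
        *valid = 0;     /* result valid for 0 sectors: we are at EOF */
        return 0;
    }
    *valid = (sector + nb_sectors <= bs->end_sectors)
             ? nb_sectors : bs->end_sectors - sector;
    return 1;
}

/* Count the cluster-sized queries the backup loop makes before stopping.
 * Without the 'valid == 0' break, an image whose size is not a multiple
 * of the cluster size would make this loop spin forever on EOF. */
static int backup_scan(const FakeImage *bs, int64_t cluster_sectors)
{
    int queries = 0;
    int64_t sector = 0, valid;

    for (;;) {
        fake_is_allocated(bs, sector, cluster_sectors, &valid);
        if (valid == 0) {
            break;      /* the fix: a 0-sector answer means EOF */
        }
        sector += valid;
        queries++;
    }
    return queries;
}
```

For a 10-sector image scanned in 4-sector clusters, the loop answers 4, 4, and 2 sectors and then terminates on the 0-sector result.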
  5. 07 Jul, 2014 (2 commits)
  6. 01 Jul, 2014 (4 commits)
  7. 28 Jun, 2014 (6 commits)
  8. 27 Jun, 2014 (2 commits)
  9. 26 Jun, 2014 (4 commits)
  10. 25 Jun, 2014 (1 commit)
  11. 16 Jun, 2014 (2 commits)
  12. 28 May, 2014 (2 commits)
  13. 19 May, 2014 (8 commits)
    • block: optimize zero writes with bdrv_write_zeroes · 465bee1d
      Authored by Peter Lieven
      This patch optimizes zero write requests by automatically using
      bdrv_write_zeroes when the format supports it.

      This significantly speeds up file system initialization and should
      also speed up the zero-write tests commonly used to measure backend
      storage performance.
      
      I ran the following two tests on my internal SSD with a 50G QCOW2
      container and on attached iSCSI storage.
      
      a) mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/vdX
      
      QCOW2         [off]     [on]     [unmap]
      ---------------------------------------
      runtime:       14secs    1.1secs  1.1secs
      filesize:      937M      18M      18M

      iSCSI         [off]     [on]     [unmap]
      ---------------------------------------
      runtime:       9.3s      0.9s     0.9s
      
      b) dd if=/dev/zero of=/dev/vdX bs=1M oflag=direct
      
      QCOW2         [off]     [on]     [unmap]
      ---------------------------------------
      runtime:       246secs   18secs   18secs
      filesize:      51G       192K     192K
      throughput:    203M/s    2.3G/s   2.3G/s

      iSCSI*        [off]     [on]     [unmap]
      ---------------------------------------
      runtime:       8mins     45secs   33secs
      throughput:    106M/s    1.2G/s   1.6G/s
      allocated:     100%      100%     0%
      
      * The storage was connected via a 1Gbit interface. It seems to
        handle writing zeroes internally via WRITESAME16 very fast.
      Signed-off-by: Peter Lieven <pl@kamp.de>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
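The optimization above boils down to a zero-scan before the write is dispatched. A minimal sketch, with illustrative names (`buffer_is_all_zero`, `choose_write_path`) rather than QEMU's actual API:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Scan a request buffer; true if every byte is zero. */
static bool buffer_is_all_zero(const uint8_t *buf, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        if (buf[i] != 0) {
            return false;
        }
    }
    return true;
}

typedef enum { WRITE_DATA, WRITE_ZEROES } WritePath;

/* Route an all-zero request to the write-zeroes path when the format
 * supports it (a cheap metadata update instead of data I/O); otherwise
 * fall back to a regular data write. */
static WritePath choose_write_path(const uint8_t *buf, size_t len,
                                   bool format_supports_write_zeroes)
{
    if (format_supports_write_zeroes && buffer_is_all_zero(buf, len)) {
        return WRITE_ZEROES;
    }
    return WRITE_DATA;
}
```

This is why the runtime and on-disk footprint collapse in the benchmarks above: the mkfs and dd workloads issue almost exclusively all-zero writes.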
    • qcow1: Stricter backing file length check · d66e5cee
      Authored by Kevin Wolf
      Like qcow2 since commit 6d33e8e7, error out on invalid lengths instead
      of silently truncating them to 1023.
      
      Also don't rely on bdrv_pread() catching integer overflows that make len
      negative, but use unsigned variables in the first place.
      
      Cc: qemu-stable@nongnu.org
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Benoit Canet <benoit@irqsave.net>
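The point about unsigned variables can be sketched as follows; the function name and the exact bound are illustrative (taken from the 1023-byte truncation limit mentioned above), not copied from QEMU's qcow1 driver:

```c
#include <stdbool.h>
#include <stdint.h>

/* Validate an on-disk backing file name length before reading it.
 * Because 'len' is unsigned, an on-disk value with the top bit set
 * cannot turn negative and slip past a signed comparison; anything
 * over the limit is rejected outright instead of being silently
 * truncated to 1023 bytes. */
static bool backing_file_len_valid(uint32_t len)
{
    return len <= 1023;
}
```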
    • qcow1: Validate image size (CVE-2014-0223) · 46485de0
      Authored by Kevin Wolf
      A huge image size could cause s->l1_size to overflow. Make sure that
      images never require a L1 table larger than what fits in s->l1_size.
      
      Not only can this cause unbounded allocations, it can also lead to
      the allocation of a too-small L1 table, resulting in out-of-bounds
      array accesses (both reads and writes).
      
      Cc: qemu-stable@nongnu.org
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
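A hedged sketch of the kind of check described above. Each L1 entry covers `cluster_size * l2_entries` bytes of virtual disk, and computing the entry count from the image size must neither wrap nor exceed what the (signed) size field can hold. Names and the exact bound are illustrative, not QEMU's actual code:

```c
#include <stdbool.h>
#include <stdint.h>

/* Reject image sizes whose qcow1 L1 table would overflow the entry
 * count. 'cluster_bits' and 'l2_bits' together give the number of
 * virtual-disk bytes covered by one L1 entry. */
static bool qcow1_image_size_valid(uint64_t disk_size,
                                   uint32_t cluster_bits, uint32_t l2_bits)
{
    uint32_t shift = cluster_bits + l2_bits;

    /* Guard the rounding addition below against 64-bit wrap-around. */
    if (disk_size > UINT64_MAX - ((1ULL << shift) - 1)) {
        return false;
    }

    /* Number of L1 entries needed, rounded up. */
    uint64_t l1_size = (disk_size + (1ULL << shift) - 1) >> shift;

    /* Must fit a signed 32-bit entry count (like s->l1_size). */
    return l1_size <= INT32_MAX;
}
```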
    • qcow1: Validate L2 table size (CVE-2014-0222) · 42eb5817
      Authored by Kevin Wolf
      Too large L2 table sizes cause unbounded allocations. Images actually
      created by qemu-img only have 512 byte or 4k L2 tables.
      
      To keep things consistent with cluster sizes, allow ranges between 512
      bytes and 64k (in fact, down to 1 entry = 8 bytes is technically
      working, but L2 table sizes smaller than a cluster don't make a lot of
      sense).
      
      This also means that the number of bytes on the virtual disk that are
      described by the same L2 table is limited to at most 8k * 64k or 2^29,
      preventively avoiding any integer overflows.
      
      Cc: qemu-stable@nongnu.org
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Benoit Canet <benoit@irqsave.net>
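The accepted range above translates into a one-line bound on the header field. With 8-byte entries, a table of 512 bytes to 64k means `l2_bits` between 9 − 3 and 16 − 3. A sketch with an illustrative name:

```c
#include <stdbool.h>
#include <stdint.h>

/* Accept only L2 tables between 512 bytes and 64k. Each entry is
 * 8 bytes, so the table size in bytes is 1 << (l2_bits + 3); values
 * outside this range would trigger huge allocations. */
static bool qcow1_l2_bits_valid(uint32_t l2_bits)
{
    return l2_bits >= 9 - 3 && l2_bits <= 16 - 3;
}
```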
    • qcow1: Check maximum cluster size · 7159a45b
      Authored by Kevin Wolf
      Huge values for header.cluster_bits cause unbounded allocations (e.g.
      for s->cluster_cache) and crash qemu this way. Less huge values may
      survive those allocations, but can cause integer overflows later on.
      
      The only cluster sizes that qemu can create are 4k (for standalone
      images) and 512 (for images with backing files), so we can limit it
      to 64k.
      
      Cc: qemu-stable@nongnu.org
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Benoit Canet <benoit@irqsave.net>
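The bound described above, as a sketch. Since qemu only creates 512-byte and 4k clusters, anything outside 512 bytes to 64k (`cluster_bits` 9 to 16) can be rejected before any allocation is sized from it; the function name is illustrative:

```c
#include <stdbool.h>
#include <stdint.h>

/* Reject header.cluster_bits outside the 512-byte .. 64k range, before
 * it is used to size allocations such as the cluster cache. */
static bool qcow1_cluster_bits_valid(uint32_t cluster_bits)
{
    return cluster_bits >= 9 && cluster_bits <= 16;
}
```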
    • qemu-iotests: Fix blkdebug in VM drive in 030 · b5e51dd7
      Authored by Fam Zheng
      The test_stream_pause test in this class uses vm.pause_drive, which
      requires a blkdebug driver on top of the image; otherwise it is a
      no-op and the test run is nondeterministic.

      So add it.
      Signed-off-by: Fam Zheng <famz@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
    • qemu-iotests: Fix core dump suppression in test 039 · d530e342
      Authored by Markus Armbruster
      The shell script attempts to suppress core dumps like this:
      
          old_ulimit=$(ulimit -c)
          ulimit -c 0
          $QEMU_IO arg...
          ulimit -c "$old_ulimit"
      
      This breaks the test hard unless the limit was zero to begin with!
      ulimit sets both hard and soft limit by default, and (re-)raising the
      hard limit requires privileges.  Broken since it was added in commit
      dc68afe0.
      
      Could be fixed by adding -S to set only the soft limit, but I'm not
      sure how portable that is in practice.  Simply do it in a subshell
      instead, like this:
      
          (ulimit -c 0; exec $QEMU_IO arg...)
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Reviewed-by: Fam Zheng <famz@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
    • iotests: Add test for the JSON protocol · 4ad30336
      Authored by Max Reitz
      Add a test for the JSON protocol driver.
      Signed-off-by: Max Reitz <mreitz@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>