Commit 6b7d4c55 authored by Kevin Wolf, committed by Stefan Hajnoczi

qcow2: Fix copy_sectors() with VM state

bs->total_sectors is not the highest possible sector number that could
be involved in a copy on write operation: VM state is after the end of
the virtual disk. This resulted in wrong values for the number of
sectors to be copied (n).

The code that checks for the end of the image isn't required any more
because the code hasn't been calling the block layer's bdrv_read() for a
long time; instead, it directly calls qcow2_readv(), which doesn't error
out on VM state sector numbers.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Parent 8f4754ed
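For illustration, here is a small standalone sketch (not QEMU code; the names and values only mirror copy_sectors() and the test added below) of what the removed clamp does once the sector number belongs to VM state, i.e. lies beyond bs->total_sectors: the subtraction drives n_end far negative, n ends up non-positive, and the copy-on-write path copies nothing.

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    /* Example values only: a 64 MiB virtual disk with 512-byte sectors. */
    int64_t total_sectors = 64 * 1024 * 1024 / 512;
    /* VM state is addressed past the end of the virtual disk; the test
     * below stores it at the 1 TiB mark. */
    int64_t start_sect = 1099511627776LL / 512;
    int64_t n_start = 0, n_end = 8;              /* 4 KiB worth of sectors */

    /* The clamp removed by this commit: fine for guest clusters,
     * wrong for VM state clusters. */
    if (start_sect + n_end > total_sectors) {
        n_end = total_sectors - start_sect;      /* hugely negative */
    }

    int64_t n = n_end - n_start;
    printf("n = %" PRId64 "\n", n);              /* non-positive: nothing
                                                  * would be copied */
    return 0;
}

With the clamp gone, n is simply n_end - n_start, and the request goes straight to qcow2_readv(), which, as the message above says, does not error out on VM state sector numbers.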
@@ -359,15 +359,6 @@ static int coroutine_fn copy_sectors(BlockDriverState *bs,
     struct iovec iov;
     int n, ret;
 
-    /*
-     * If this is the last cluster and it is only partially used, we must only
-     * copy until the end of the image, or bdrv_check_request will fail for the
-     * bdrv_read/write calls below.
-     */
-    if (start_sect + n_end > bs->total_sectors) {
-        n_end = bs->total_sectors - start_sect;
-    }
-
     n = n_end - n_start;
     if (n <= 0) {
         return 0;
......
 #!/bin/bash
 #
-# Test loading internal snapshots where the L1 table of the snapshot
-# is smaller than the current L1 table.
+# qcow2 internal snapshots/VM state tests
 #
 # Copyright (C) 2011 Red Hat, Inc.
 #
@@ -45,6 +44,11 @@ _supported_fmt qcow2
 _supported_proto generic
 _supported_os Linux
 
+echo
+echo Test loading internal snapshots where the L1 table of the snapshot
+echo is smaller than the current L1 table.
+echo
+
 CLUSTER_SIZE=65536
 _make_test_img 64M
 $QEMU_IMG snapshot -c foo "$TEST_IMG"
@@ -59,6 +63,20 @@ $QEMU_IO -c 'write -b 0 4M' "$TEST_IMG" | _filter_qemu_io
 $QEMU_IMG snapshot -a foo "$TEST_IMG"
 _check_test_img
 
+echo
+echo Try using a huge VM state
+echo
+
+CLUSTER_SIZE=65536
+_make_test_img 64M
+{ $QEMU_IO -c "write -b -P 0x11 1T 4k" $TEST_IMG; } 2>&1 | _filter_qemu_io | _filter_testdir
+{ $QEMU_IMG snapshot -c foo $TEST_IMG; } 2>&1 | _filter_qemu_io | _filter_testdir
+{ $QEMU_IMG snapshot -a foo $TEST_IMG; } 2>&1 | _filter_qemu_io | _filter_testdir
+{ $QEMU_IO -c "read -b -P 0x11 1T 4k" $TEST_IMG; } 2>&1 | _filter_qemu_io | _filter_testdir
+_check_test_img
+
 # success, all done
 echo "*** done"
 rm -f $seq.full
......
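The test added above exercises this path: qemu-io's -b flag makes read and write go to the VM state rather than the virtual disk, and the 1T offset lies far beyond the 64M image, so the partial-cluster VM state write and the snapshot operations involve sector numbers past bs->total_sectors, exactly the case the removed clamp mishandled. A minimal sketch of running the regression test from a built QEMU tree (harness location and options are assumptions; adjust to the local setup); the reference output below is updated to match:

cd tests/qemu-iotests
./check -qcow2 029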
 QA output created by 029
+
+Test loading internal snapshots where the L1 table of the snapshot
+is smaller than the current L1 table.
+
 Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=67108864
 wrote 4096/4096 bytes at offset 0
 4 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
@@ -7,4 +11,13 @@ Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=16777216
 wrote 4194304/4194304 bytes at offset 0
 4 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
 No errors were found on the image.
+
+Try using a huge VM state
+
+Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=67108864
+wrote 4096/4096 bytes at offset 1099511627776
+4 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+read 4096/4096 bytes at offset 1099511627776
+4 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+No errors were found on the image.
 *** done