- 06 Feb 2020, 3 commits
-
By Alberto Garcia
qcow2_alloc_cluster_offset() and qcow2_get_cluster_offset() always return offsets that are cluster-aligned, so don't just check that they are sector-aligned. The check in qcow2_co_preadv_task() is also replaced by an assertion for the same reason.

Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 558ba339965f858bede4c73ce3f50f0c0493597d.1579374329.git.berto@igalia.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
By Alberto Garcia
The L1 table is read from disk using the byte-based bdrv_pread() and is never accessed beyond its last element, so there's no need to allocate more memory than that.

Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: b2e27214ec7b03a585931bcf383ee1ac3a641a10.1579374329.git.berto@igalia.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
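
As an aside, the saving is easy to picture with made-up numbers: the entry count and the ROUND_UP helper below are invented for this sketch, only the 8-byte L1 entry size is a qcow2 fact.

    /* Illustrative sketch only: compare an exact L1 allocation with one
     * rounded up to a 512-byte sector boundary.  The entry count and the
     * ROUND_UP macro are local to this example. */
    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    #define SECTOR_SIZE 512
    #define ROUND_UP(n, m) (((n) + (m) - 1) / (m) * (m))

    int main(void)
    {
        uint64_t l1_size = 100;                          /* hypothetical entry count */
        uint64_t exact   = l1_size * sizeof(uint64_t);   /* 800 bytes */
        uint64_t rounded = ROUND_UP(exact, SECTOR_SIZE); /* 1024 bytes */

        printf("exact allocation:  %" PRIu64 " bytes\n", exact);
        printf("sector-rounded:    %" PRIu64 " bytes\n", rounded);
        return 0;
    }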
-
By Alberto Garcia
This is a bit more efficient than having to allocate and free memory for each item. The default size (60) is enough for all the existing incompatible features or the "Unknown incompatible feature" message.

Suggested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20200115135626.19442-1-berto@igalia.com
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
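
The idea is simply a fixed stack buffer instead of per-item heap allocation; the standalone sketch below shows that pattern (the function name and formatting are invented here, only the 60-byte figure and the quoted message text come from the commit message).

    /* Sketch only: format the "unknown feature" text into a fixed-size
     * stack buffer instead of heap-allocating a string per feature. */
    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    static void report_unknown_feature(int bit)
    {
        char msg[60];  /* enough for any known feature name or this fallback text */

        snprintf(msg, sizeof(msg), "Unknown incompatible feature: %" PRIx64,
                 (uint64_t)1 << bit);
        fprintf(stderr, "%s\n", msg);
    }

    int main(void)
    {
        report_unknown_feature(3);
        return 0;
    }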
-
- 06 Jan 2020, 1 commit
-
By Andrey Shinkevich
QEMU currently supports writing compressed data of a size equal to one cluster. This patch allows writing QCOW2 compressed data that exceeds one cluster. Now, we split buffered data into separate clusters and write them compressed using the block/aio_task API.

Suggested-by: Pavel Butsykin <pbutsykin@virtuozzo.com>
Suggested-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 1575288906-551879-3-git-send-email-andrey.shinkevich@virtuozzo.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
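
The splitting itself is just a walk over the request in cluster-sized steps; the standalone loop below sketches that idea (cluster size, request size and the chunk handler are invented, and the real patch dispatches each chunk through the block/aio_task API rather than calling a helper directly).

    /* Standalone sketch: split a write request into cluster-sized chunks.
     * In the real patch each chunk becomes an aio_task that compresses and
     * writes one cluster; here we just print the chunk boundaries. */
    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    #define CLUSTER_SIZE (64 * 1024)   /* example cluster size */

    static void write_one_compressed_cluster(uint64_t offset, uint64_t bytes)
    {
        printf("compress+write chunk: offset=%" PRIu64 " bytes=%" PRIu64 "\n",
               offset, bytes);
    }

    int main(void)
    {
        uint64_t offset = 0;
        uint64_t bytes  = 200 * 1024;  /* a request larger than one cluster */

        while (bytes > 0) {
            uint64_t chunk = bytes < CLUSTER_SIZE ? bytes : CLUSTER_SIZE;

            write_one_compressed_cluster(offset, chunk);
            offset += chunk;
            bytes  -= chunk;
        }
        return 0;
    }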
-
- 19 Dec 2019, 1 commit
-
By Tuguoyi
The local_err check outside of the if block was necessary when it was introduced in commit d1258dd0 because it needed to be executed even if qcow2_load_autoloading_dirty_bitmaps() returned false. After some modifications that all required the error check to remain where it is, commit 9c98f145 finally moved the qcow2_load_dirty_bitmaps() call into the if block, so now the error check should be there, too.

Signed-off-by: Guoyi Tu <tu.guoyi@h3c.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
- 18 Dec 2019, 2 commits
-
By Alberto Garcia
There are a couple of places left in the qcow2 code that still do the calculation manually, so let's replace them.

Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
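
Purely as an illustration of this kind of cleanup — the commit message does not name the helper — the standalone demo below defines a local offset_into_cluster()-style helper and checks it against the usual manual bit-twiddling.

    /* Illustration only: a locally defined helper replacing the manual
     * "offset & (cluster_size - 1)" pattern; not the actual patch. */
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <inttypes.h>

    static uint64_t cluster_size = 64 * 1024;   /* example cluster size */

    static uint64_t offset_into_cluster(uint64_t offset)
    {
        return offset & (cluster_size - 1);
    }

    int main(void)
    {
        uint64_t offset = 3 * cluster_size + 4096;

        /* the manual calculation and the helper give the same result */
        assert((offset & (cluster_size - 1)) == offset_into_cluster(offset));
        printf("offset into cluster: %" PRIu64 "\n", offset_into_cluster(offset));
        return 0;
    }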
-
By Kevin Wolf
In the common case, qcow2_co_pwrite_zeroes() only modifies metadata, so we're fine with or without BDRV_REQ_NO_FALLBACK set. The only exception is when using an external data file, where the request is passed down to the block driver of the external data file. We are forwarding the BDRV_REQ_NO_FALLBACK flag there, though, so this is fine, too. Therefore, declare the flag supported.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
-
- 28 Oct 2019, 9 commits
-
By Max Reitz
This is a change in behavior, so all instances need a good justification. The comments added here should explain my reasoning. qed already had a comment that suggests it always expected bdrv_truncate()/blk_truncate() to behave as if exact=true were passed (c743849b came eight months before 55b949c8), so it was simply broken until now.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190918095144.955-8-mreitz@redhat.com
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
[mreitz: Changed comment in qed.c to explain why a new QED file must be empty, as requested and suggested by Maxim]
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
By Max Reitz
When truncating a format node, the @exact parameter is generally handled simply by virtue of the format storing the new size in the image metadata. Such formats do not need to pass on the parameter to their file nodes. There are exceptions, though:
- raw and crypto cannot store the image size, and thus must pass on @exact.
- When using qcow2 with an external data file, it just makes sense to keep its size in sync with the qcow2 virtual disk (because the external data file is the virtual disk). Therefore, we should pass @exact when truncating it.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190918095144.955-7-mreitz@redhat.com
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
By Max Reitz
We have two drivers (iscsi and file-posix) that (in some cases) return success from their .bdrv_co_truncate() implementation if the block device is larger than the requested offset, but cannot be shrunk. Some callers do not want that behavior, so this patch adds a new parameter that they can use to turn off that behavior. This patch just adds the parameter and lets the block/io.c and block/block-backend.c functions pass it around. All other callers always pass false and none of the implementations evaluate it, so that this patch does not change existing behavior. Future patches take care of that.

Suggested-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190918095144.955-5-mreitz@redhat.com
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
By Max Reitz
There is no reason why the format drivers need to truncate the protocol node when formatting it. When using the old .bdrv_co_create_opts() interface, the file will be created with no size option anyway, which generally gives it a size of 0. (Exceptions are block devices, which cannot be truncated anyway.) When using blockdev-create, the user must have given the file node some size anyway, so there is no reason why we should override that. qed is an exception: it needs the file to start completely empty (as explained by c743849b).

Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190918095144.955-4-mreitz@redhat.com
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
By Max Reitz
qcow2_check_read_snapshot_table() can perform consistency checks, but it cannot fix everything. Specifically, it cannot allocate new clusters, because that should wait until the refcount structures are known to be consistent (i.e., after qcow2_check_refcounts()). Thus, it cannot call qcow2_write_snapshots(). Do that in qcow2_check_fix_snapshot_table(), which is called after qcow2_check_refcounts(). Currently, there is nothing that would set result->corruptions, so this is a no-op. A follow-up patch will change that.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191011152814.14791-10-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
By Max Reitz
Reading the snapshot table can fail. That is a problem when we want to repair the image. Therefore, stop reading the snapshot table in qcow2_do_open() in check mode. Instead, add a new function qcow2_check_read_snapshot_table() that reads the snapshot table at a later point. In the future, we want to handle errors here and fix them.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191011152814.14791-9-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
By Max Reitz
qcow2 v3 requires every snapshot table entry to have two extra data fields: the 64-bit VM state size, and the virtual disk size. Both are optional for v2 images, so they may not be present. qcow2_upgrade() therefore should update the snapshot table to ensure all entries have these extra data fields.

Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1727347
Reported-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191011152814.14791-8-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
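
For orientation, the two mandatory extra fields amount to 16 bytes of per-entry data appended to each snapshot table entry (stored big-endian on disk); the struct below is only a sketch of that layout with invented field names, not the actual QEMU definition.

    /* Sketch of the two mandatory extra-data fields described above.  On
     * disk they are big-endian; the field names here are illustrative. */
    #include <stdint.h>
    #include <stdio.h>

    typedef struct SnapshotExtraData {
        uint64_t vm_state_size;      /* 64-bit size of the saved VM state */
        uint64_t virtual_disk_size;  /* virtual disk size at snapshot time */
    } SnapshotExtraData;

    int main(void)
    {
        /* qcow2_upgrade() has to make sure v2 entries grow these 16 bytes. */
        printf("extra data per snapshot entry: %zu bytes\n",
               sizeof(SnapshotExtraData));
        return 0;
    }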
-
By Max Reitz
This does not make sense right now, but it will make sense once we need to do more than just update s->qcow_version.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191011152814.14791-7-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
By Max Reitz
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191011152814.14791-4-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
- 25 Oct 2019, 1 commit
-
By Kevin Wolf
qcow2_detect_metadata_preallocation() calls qcow2_get_refcount(), which requires s->lock to be taken to protect its accesses to the refcount table and refcount blocks. However, nothing in this code path actually took the lock. This could cause the same cache entry to be used by two requests at the same time, for different tables at different offsets, resulting in image corruption. As it would be preferable to base the detection on consistent data (even though it's just heuristics), let's take the lock not only around the qcow2_get_refcount() calls, but around the whole function. This patch takes the lock in qcow2_co_block_status() earlier and asserts in qcow2_detect_metadata_preallocation() that we hold the lock.

Fixes: 69f47505
Cc: qemu-stable@nongnu.org
Reported-by: Michael Weiser <michael.weiser@gmx.de>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Tested-by: Michael Weiser <michael.weiser@gmx.de>
Reviewed-by: Michael Weiser <michael.weiser@gmx.de>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
-
- 18 Oct 2019, 3 commits
-
The only reason I can imagine for this strange code at the very end of bdrv_reopen_commit is the fact that bs->read_only is updated after calling drv->bdrv_reopen_commit in bdrv_reopen_commit. At the same time, prior to the previous commit, qcow2_reopen_bitmaps_rw did a wrong check for being writable, when actually it only needs a writable file child, not the node itself. So, as that is fixed, let's move things to the correct place.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Acked-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190927122355.7344-10-vsementsov@virtuozzo.com
Signed-off-by: John Snow <jsnow@redhat.com>
-
qcow2_reopen_bitmaps_ro wants to store bitmaps and then mark them all readonly. But the latter doesn't work, as qcow2_store_persistent_dirty_bitmaps removes bitmaps after storing. That is OK for inactivation but a bad idea for reopen-ro. And this leads to the following bug:

Assume we have persistent bitmap 'bitmap0'.
- Create external snapshot: bitmap0 is stored and therefore removed
- Commit snapshot: now we have no bitmaps
- Do some writes from guest (*): they are not marked in the bitmap
- Shutdown
- Start: bitmap0 is loaded as valid, but it is actually broken! It misses the writes (*)
- Incremental backup: it will be inconsistent

So, let's stop removing bitmaps on reopen-ro. But don't rejoice: reopening bitmaps to rw is broken too, so the whole scenario will not work after this patch and we still can't enable the corresponding test cases in iotest 260. Reopening bitmaps rw will be fixed in the following patches.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 20190927122355.7344-7-vsementsov@virtuozzo.com
Signed-off-by: John Snow <jsnow@redhat.com>
-
qmp_block_dirty_bitmap_add and do_block_dirty_bitmap_remove have acquired the aio context since 0a6c86d0. But this is not enough: we also must take the qcow2 mutex when accessing in-image metadata. This especially concerns freeing qcow2 clusters. To achieve this, move qcow2_can_store_new_dirty_bitmap and qcow2_remove_persistent_dirty_bitmap to coroutine context. Since we work in coroutines in the correct aio context, we don't need the context acquisition in blockdev.c anymore, so drop it.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 20190920082543.23444-4-vsementsov@virtuozzo.com
Signed-off-by: John Snow <jsnow@redhat.com>
-
- 10 Oct 2019, 3 commits
-
It improves performance for fragmented qcow2 images.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-id: 20190916175324.18478-6-vsementsov@virtuozzo.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
Similarly to the previous commit, prepare for parallelizing write-loop iterations.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190916175324.18478-5-vsementsov@virtuozzo.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
A further patch will run partial requests of iterations of qcow2_co_preadv in parallel for performance reasons. To prepare for this, separate the part that may be parallelized into its own function (qcow2_co_preadv_task). While at it, also move the reading of encrypted clusters into its own function, like it is done for compressed reads.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190916175324.18478-4-vsementsov@virtuozzo.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
- 16 Sep 2019, 2 commits
-
By Maxim Levitsky
* Change qcow2_co_{encrypt|decrypt} to just receive full host and guest offsets, and use this function directly instead of calling do_perform_cow_encrypt (which is removed by this patch).
* Adjust qcow2_co_encdec to take full host and guest offsets as well.
* Document the qcow2_co_{encrypt|decrypt} arguments to prevent the bug fixed in the former commit from hopefully happening again.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Message-id: 20190915203655.21638-3-mlevitsk@redhat.com
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
[mreitz: Let perform_cow() return the error value returned by qcow2_co_encrypt(), as proposed by Vladimir]
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
By Nir Soffer
Replace instances of:

    (n & (BDRV_SECTOR_SIZE - 1)) == 0

and:

    (n & ~BDRV_SECTOR_MASK) == 0

with:

    QEMU_IS_ALIGNED(n, BDRV_SECTOR_SIZE)

This reveals the intent of the code better, and makes it easier to locate the code checking alignment.

Signed-off-by: Nir Soffer <nsoffer@redhat.com>
Message-id: 20190827185913.27427-2-nsoffer@redhat.com
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
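
A quick standalone check confirms the three forms are equivalent; the constants and the QEMU_IS_ALIGNED macro are redefined locally here so the snippet compiles outside QEMU (in the tree they live in the block and osdep headers).

    /* Standalone equivalence check for the three alignment tests above. */
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    #define BDRV_SECTOR_SIZE 512ULL
    #define BDRV_SECTOR_MASK (~(BDRV_SECTOR_SIZE - 1))
    #define QEMU_IS_ALIGNED(n, m) (((n) % (m)) == 0)

    int main(void)
    {
        for (uint64_t n = 0; n < 4 * BDRV_SECTOR_SIZE; n++) {
            int a = (n & (BDRV_SECTOR_SIZE - 1)) == 0;
            int b = (n & ~BDRV_SECTOR_MASK) == 0;
            int c = QEMU_IS_ALIGNED(n, BDRV_SECTOR_SIZE);

            assert(a == b && b == c);
        }
        printf("all three alignment checks agree\n");
        return 0;
    }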
-
- 10 Sep 2019, 1 commit
-
By Alberto Garcia
The size of the qcow2 L2 cache defaults to 32 MB, which can easily be larger than the maximum amount of L2 metadata that the image can have. For example: with 64 KB clusters the user would need a qcow2 image with a virtual size of 256 GB in order to have 32 MB of L2 metadata.

Because of that, since commit b749562d we forbid the L2 cache to become larger than the maximum amount of L2 metadata for the image, calculated using this formula:

    uint64_t max_l2_cache = virtual_disk_size / (s->cluster_size / 8);

The problem with this formula is that the result should be rounded up to the cluster size because an L2 table on disk always takes one full cluster. For example, a 1280 MB qcow2 image with 64 KB clusters needs exactly 160 KB of L2 metadata, but we need 192 KB on disk (3 clusters) even if the last 32 KB of those are not going to be used. However QEMU rounds the numbers down and only creates 2 cache tables (128 KB), which is not enough for the image.

A quick test doing 4 KB random writes on a 1280 MB image gives me around 500 IOPS, while with the correct cache size I get 16K IOPS.

Cc: qemu-stable@nongnu.org
Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
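
The figures above can be reproduced with a few lines of arithmetic; the ROUND_UP macro below is a local stand-in, and the sizes are the ones quoted in the message.

    /* Worked example: L2 metadata for a 1280 MB image with 64 KB clusters,
     * before and after rounding up to the cluster size. */
    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    #define ROUND_UP(n, m) (((n) + (m) - 1) / (m) * (m))

    int main(void)
    {
        uint64_t cluster_size      = 64 * 1024;
        uint64_t virtual_disk_size = 1280ULL * 1024 * 1024;

        uint64_t max_l2_cache = virtual_disk_size / (cluster_size / 8);
        uint64_t rounded      = ROUND_UP(max_l2_cache, cluster_size);

        printf("exact L2 metadata:  %" PRIu64 " KB\n", max_l2_cache / 1024); /* 160 */
        printf("rounded to cluster: %" PRIu64 " KB (%" PRIu64 " clusters)\n",
               rounded / 1024, rounded / cluster_size);                      /* 192, 3 */
        return 0;
    }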
-
- 27 Aug 2019, 3 commits
-
Implement and use new interface to get rid of hd_qiov.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20190604161514.262241-13-vsementsov@virtuozzo.com
Message-Id: <20190604161514.262241-13-vsementsov@virtuozzo.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Implement and use new interface to get rid of hd_qiov.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20190604161514.262241-12-vsementsov@virtuozzo.com
Message-Id: <20190604161514.262241-12-vsementsov@virtuozzo.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Use buffer-based I/O in the encrypted case.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20190604161514.262241-11-vsementsov@virtuozzo.com
Message-Id: <20190604161514.262241-11-vsementsov@virtuozzo.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-
- 19 Aug 2019, 2 commits
-
By Max Reitz
If a qcow2 file is preallocated, it can no longer guarantee that it initially appears as filled with zeroes. So implement .bdrv_has_zero_init() by checking whether the file is preallocated; if so, forward the call to the underlying storage node, except for when it is encrypted: encrypted preallocated images always return effectively random data, so .bdrv_has_zero_init() must always return 0 for them. .bdrv_has_zero_init_truncate() can remain bdrv_has_zero_init_1(), because it presupposes PREALLOC_MODE_OFF.

Reported-by: Stefano Garzarella <sgarzare@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190724171239.8764-7-mreitz@redhat.com
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
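
The decision tree the message describes is small; the standalone function below only sketches it (the names, the bool flags and the stubbed-out forwarding are invented for illustration, not the actual qcow2 callback, which inspects the image header and asks the data file's driver).

    /* Illustrative sketch of the zero-init decision described above. */
    #include <stdbool.h>
    #include <stdio.h>

    static int file_node_has_zero_init(void)
    {
        return 1;  /* stand-in for forwarding the query to the storage node */
    }

    static int qcow2_like_has_zero_init(bool preallocated, bool encrypted)
    {
        if (!preallocated) {
            return 1;                      /* unallocated clusters read as zero */
        }
        if (encrypted) {
            return 0;                      /* preallocated + encrypted: random data */
        }
        return file_node_has_zero_init();  /* depends on the underlying storage */
    }

    int main(void)
    {
        printf("plain image:                 %d\n", qcow2_like_has_zero_init(false, false));
        printf("preallocated, encrypted:     %d\n", qcow2_like_has_zero_init(true, true));
        printf("preallocated, not encrypted: %d\n", qcow2_like_has_zero_init(true, false));
        return 0;
    }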
-
By Max Reitz
We need to implement .bdrv_has_zero_init_truncate() for every block driver that supports truncation and has a .bdrv_has_zero_init() implementation. Implement it the same way each driver implements .bdrv_has_zero_init(). This is at least not any more unsafe than what we had before.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190724171239.8764-5-mreitz@redhat.com
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
- 16 Aug 2019, 1 commit
-
By Markus Armbruster
In my "build everything" tree, changing qemu/main-loop.h triggers a recompile of some 5600 out of 6600 objects (not counting tests and objects that don't depend on qemu/osdep.h). It includes block/aio.h, which in turn includes qemu/event_notifier.h, qemu/notify.h, qemu/processor.h, qemu/qsp.h, qemu/queue.h, qemu/thread-posix.h, qemu/thread.h, qemu/timer.h, and a few more. Include qemu/main-loop.h only where it's needed. Touching it now recompiles only some 1700 objects. For block/aio.h and qemu/event_notifier.h, these numbers drop from 5600 to 2800. For the others, they shrink only slightly.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20190812052359.30071-21-armbru@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
-
- 08 Jul 2019, 1 commit
-
By Eric Blake
Commit b76b4f60 allowed '-o compat=v3' as an alias for the less-appealing '-o compat=1.1' for 'qemu-img create', since we want to use the QMP form as much as possible, but forgot to do likewise for qemu-img amend. Also, it doesn't help that '-o help' doesn't list our new preferred spellings.

Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
- 02 Jul 2019, 1 commit
-
By Andrey Shinkevich
This patch is used by the following 'block/stream: introduce a bottom node' patch. Instead of the base node, the caller may pass the node that has the base as its backing image to the function bdrv_is_allocated_above() with a new parameter include_base = true, and get rid of the dependency on the base that may change during commit/stream parallel jobs. Now, if the specified base is not found in the backing image chain, QEMU will abort.

Suggested-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-id: 1559152576-281803-2-git-send-email-andrey.shinkevich@virtuozzo.com
[mreitz: Squashed in the following as a rebase on conflicting patches:]
Message-id: e3cf99ae-62e9-8b6e-5a06-d3c8b9363b85@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
- 04 Jun 2019, 2 commits
-
By Kevin Wolf
This adds a new parameter to blk_new() which requires its callers to declare from which AioContext this BlockBackend is going to be used (or the locks of which AioContext need to be taken anyway). The given context is only stored and kept up to date when changing AioContexts. Actually applying the stored AioContext to the root node is saved for another commit.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Since 5daa74a6, drv_co_block_status digs into bs->file for an additional, more accurate search for holes inside regions reported as DATA by bs. This accuracy is not free: assume we have a qcow2 disk. Actually, qcow2 knows where the holes are and where the data is. But every block_status request calls lseek additionally. Assume a big disk, full of data; in any iterative copying block job (or img convert) we'll call lseek(HOLE) on every iteration, and each of these lseeks will have to iterate through all metadata up to the end of the file. That is obviously inefficient behavior. And for many scenarios we don't need this lseek at all.

However, lseek is needed when we have a metadata-preallocated image. So, let's detect the metadata-preallocation case and not dig into qcow2's protocol file in other cases. The idea is to compare the allocation size from the filesystem's point of view with the allocation size from qcow2's point of view (by refcounts). If the allocation in the fs is significantly lower, consider it the metadata-preallocation case.

iotest 102 changed, as our detector can't detect a shrunk file as metadata preallocation, which doesn't seem to be wrong, as with metadata preallocation we always have a valid file length. Two other iotests have a slight change in their QMP output sequence: active 'block-commit' returns earlier because the job coroutine yields earlier on a blocking operation. This operation is loading the refcount blocks in qcow2_detect_metadata_preallocation().

Suggested-by: Denis V. Lunev <den@openvz.org>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
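
The comparison described above boils down to a single ratio check; the sketch below shows that idea with invented numbers and an invented threshold (the real detector in qcow2_detect_metadata_preallocation() derives both sizes from the refcount table and the file's allocation info).

    /* Sketch of the detection idea only: compare what the filesystem has
     * actually allocated for the image file with what qcow2 believes is
     * allocated (via refcounts).  Numbers and the 10x threshold are invented. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static bool looks_like_metadata_preallocation(uint64_t fs_allocated_bytes,
                                                  uint64_t qcow2_allocated_bytes)
    {
        /* Metadata preallocation: the file length covers the whole virtual
         * disk, but the filesystem has only materialized a fraction of it. */
        return fs_allocated_bytes * 10 < qcow2_allocated_bytes;
    }

    int main(void)
    {
        uint64_t qcow2_allocated = 10ULL * 1024 * 1024 * 1024;  /* 10 GiB by refcounts */
        uint64_t fs_allocated    = 300ULL * 1024 * 1024;        /* 300 MiB on disk */

        printf("metadata preallocation detected: %s\n",
               looks_like_metadata_preallocation(fs_allocated, qcow2_allocated)
                   ? "yes" : "no");
        return 0;
    }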
-
- 29 May 2019, 4 commits
-
By Anton Nefedov
If COW areas of the newly allocated clusters are zeroes on the backing image, the efficient bdrv_write_zeroes(flags=BDRV_REQ_NO_FALLBACK) can be used on the whole cluster instead of writing explicit zero buffers later in perform_cow(). iotest 060: a write to the discarded cluster does not trigger COW anymore; use a backing image instead.

Signed-off-by: Anton Nefedov <anton.nefedov@virtuozzo.com>
Message-id: 20190516142749.81019-2-anton.nefedov@virtuozzo.com
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
Do encryption/decryption in threads, like it is already done for compression. This improves asynchronous encrypted io.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190506142741.41731-9-vsementsov@virtuozzo.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
Encryption will be done in threads; to benefit from that, we should move it out of the lock first.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190506142741.41731-8-vsementsov@virtuozzo.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
Background: decryption will be done in threads; to benefit from that, we should move it out of the lock first. But let's go further: it turns out that only qcow2_get_cluster_offset() needs locking, so reduce the locking to just that.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-id: 20190506142741.41731-7-vsementsov@virtuozzo.com
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
-