- 29 June 2018, 2 commits
Committed by Kevin Wolf

This moves the bdrv_truncate() implementation from block.c to block/io.c so that it has access to the tracked-requests infrastructure. This involves making refresh_total_sectors() public (in block_int.h).

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Committed by Kevin Wolf

bdrv_truncate() is an operation that can block (even for quite a long time, depending on the PreallocMode) in I/O paths that shouldn't block. Convert it to a coroutine_fn so that we have the infrastructure for drivers to make their .bdrv_co_truncate implementations asynchronous.

This change could potentially introduce new race conditions because bdrv_truncate() isn't necessarily executed atomically any more. Whether this is a problem needs to be evaluated for each block driver that supports truncate:

* file-posix/win32, gluster, iscsi, nfs, rbd, ssh, sheepdog: The protocol drivers are trivially safe because they don't actually yield yet, so there is no change in behaviour.
* copy-on-read, crypto, raw-format: Essentially just filter drivers that pass the request to a child node; no problem.
* qcow2: The implementation modifies metadata, so it needs to hold s->lock to be safe with concurrent I/O requests. In order to avoid double locking, this requires pulling the locking out into preallocate_co() and using qcow2_write_caches() instead of bdrv_flush().
* qed: Does a single header update; this is fine without locking.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
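As a hedged illustration of the qcow2 point above, here is a minimal sketch of a coroutine_fn truncate that guards its metadata update with the image lock (all names are hypothetical stand-ins, not the real qcow2 code):

    #include "qemu/osdep.h"
    #include "qemu/coroutine.h"

    /* Hypothetical driver state carrying a qcow2-style metadata lock. */
    typedef struct SketchState {
        CoMutex lock;
    } SketchState;

    /* Hypothetical metadata update; in real qcow2 this may yield. */
    static int sketch_do_grow(SketchState *s, int64_t offset)
    {
        return 0;
    }

    /* As a coroutine_fn the callback may yield, so concurrent I/O has
     * to be locked out for the duration of the metadata update. */
    static int coroutine_fn sketch_co_truncate(SketchState *s, int64_t offset)
    {
        int ret;

        qemu_co_mutex_lock(&s->lock);
        ret = sketch_do_grow(s, offset);
        qemu_co_mutex_unlock(&s->lock);
        return ret;
    }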
- 18 June 2018, 6 commits
Committed by Max Reitz

Currently, bdrv_replace_node() refuses to create loops from one BDS to itself only if the BDS to be replaced is the backing node of the BDS replacing it: Say there is a node A and a node B. Replacing B by A means making all references to B point to A. If B is a child of A (i.e. A has a reference to B), that would mean we would have to make this reference point to A itself -- so we'd create a loop.

bdrv_replace_node() (through should_update_child()) refuses to do so if B is the backing node of A. There is no reason why we should create loops if B is not the backing node of A, though. The BDS graph should never contain loops, so we should always refuse to create them. If B is a child of A and B is to be replaced by A, we should simply leave B in place there, because that is the most sensible choice.

A more specific argument: Putting filter drivers into the BDS graph is basically the same as appending an overlay to a backing chain. But the main child BDS of a filter driver is not "backing" but "file", so restricting the no-loop rule to backing nodes would fail here.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-id: 20180613181823.13618-7-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
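To illustrate the no-loop rule, here is a generic, self-contained sketch of a reachability check; it is not QEMU's actual should_update_child(), just the idea behind it:

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct Node Node;

    typedef struct Child {
        Node *node;
        struct Child *next;
    } Child;

    struct Node {
        Child *children;
    };

    /* True if 'target' is reachable from 'from' via child links.  When
     * replacing 'target' by 'from', any reference to 'target' found at
     * or below 'from' must be left in place, or the graph would gain a
     * cycle. */
    static bool reaches(Node *from, Node *target)
    {
        if (from == target) {
            return true;
        }
        for (Child *c = from->children; c != NULL; c = c->next) {
            if (reaches(c->node, target)) {
                return true;
            }
        }
        return false;
    }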
Committed by Kevin Wolf

bdrv_drain_all_*() used bdrv_next() to iterate over all root nodes and did a subtree drain for each of them. This works fine as long as the graph is static, but sadly, reality looks different.

If the graph changes so that root nodes are added or removed, we would have to compensate for this. bdrv_next() returns each root node only once even if it's the root node for multiple BlockBackends or for a monitor-owned block driver tree, which would only complicate things.

The much easier and more obviously correct way is to fundamentally change the way the functions work: iterate over all BlockDriverStates, no matter who owns them, and drain them individually. Compensation is only necessary when a new BDS is created inside a drain_all section. Removal of a BDS doesn't require any action because it's gone afterwards anyway.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
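A hedged sketch of the new iteration shape, assuming an all-states iterator along the lines of the bdrv_next_all_states() helper this series introduces:

    BlockDriverState *bs = NULL;

    while ((bs = bdrv_next_all_states(bs))) {
        AioContext *ctx = bdrv_get_aio_context(bs);

        /* Drain each node individually instead of subtree-draining
         * only the roots returned by bdrv_next(). */
        aio_context_acquire(ctx);
        bdrv_drained_begin(bs);
        aio_context_release(ctx);
    }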
Committed by Kevin Wolf

In the future, bdrv_drained_all_begin/end() will drain all individual nodes separately rather than whole subtrees. This means that we don't want to propagate the drain to all parents any more: if the parent is a BDS, it will already be drained separately. Recursing to all parents is unnecessary work and would make it an O(n²) operation.

Prepare the drain function for the changed drain_all by adding an ignore_bds_parents parameter to the internal implementation that prevents the propagation of the drain to BDS parents. We still (have to) propagate it to non-BDS parents like BlockBackends or Jobs because those are not drained separately.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Committed by Kevin Wolf

bdrv_do_drained_begin() is only safe if we have a single BDRV_POLL_WHILE() after quiescing all affected nodes. We cannot allow parent callbacks to introduce a nested polling loop that could cause graph changes while we're traversing the graph.

Split off bdrv_do_drained_begin_quiesce(), which only quiesces a single node without waiting for its requests to complete. These requests will be waited for in the BDRV_POLL_WHILE() call down the call chain.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Committed by Kevin Wolf

Anything can happen inside BDRV_POLL_WHILE(), including graph changes that may interfere with its callers (e.g. child list iteration in recursive callers of bdrv_do_drained_begin).

Switch to a single BDRV_POLL_WHILE() call for the whole subtree at the end of bdrv_do_drained_begin() to avoid such effects. The recursion now happens inside the loop condition. As the graph can only change between bdrv_drain_poll() calls, but not inside of one, doing the recursion here is safe.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
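A hedged sketch of the resulting control flow (argument lists simplified relative to the real functions):

    /* Quiesce the node, and recursively its children, without polling. */
    bdrv_do_drained_begin_quiesce(bs, parent);

    /* One event loop for the whole subtree: the recursion lives in the
     * loop condition, and the graph is stable while the condition is
     * being evaluated. */
    BDRV_POLL_WHILE(bs, bdrv_drain_poll(bs, true /* recursive */, parent));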
Committed by Kevin Wolf

We already requested that block jobs be paused in .bdrv_drained_begin, but no guarantee was made that the job was actually inactive at the point where bdrv_drained_begin() returned.

This introduces a new callback, BdrvChildRole.bdrv_drained_poll(), and uses it to make bdrv_drain_poll() consider block jobs using the node to be drained.

For the test case to work as expected, we have to switch from block_job_sleep_ns() to qemu_co_sleep_ns() so that the test job is even considered active and must be waited for when draining the node.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
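A hedged sketch of such a poll callback for a job-owned child (field names approximate; returning true means "still busy, keep polling"):

    static bool child_job_drained_poll_sketch(BdrvChild *c)
    {
        BlockJob *job = c->opaque;

        /* The node only counts as drained once the job has really
         * stopped issuing requests. */
        return !job->job.paused || job->job.busy;
    }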
- 15 June 2018, 1 commit
Committed by Max Reitz

There are numerous QDict functions that have been introduced for, and are used only by, the block layer. Move their declarations into a header file of their own to reflect that.

While qdict_extract_subqdict() is in fact used outside of the block layer (in util/qemu-config.c), it is still a function related very closely to how the block layer works with nested QDicts, namely by sometimes flattening them. Therefore, its declaration is put into this header as well, and util/qemu-config.c includes it with a comment stating exactly which function it needs.

Suggested-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20180509165530.29561-7-mreitz@redhat.com>
[Copyright note tweaked, superfluous includes dropped]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
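For context, a hedged example of the flattened-QDict convention the new header serves (assuming the header is named block/qdict.h):

    #include "qemu/osdep.h"
    #include "qapi/qmp/qdict.h"
    #include "block/qdict.h"

    void example(void)
    {
        QDict *opts = qdict_new();
        QDict *file_opts = NULL;

        /* Block-layer options are often kept flattened, as dotted keys. */
        qdict_put_str(opts, "file.driver", "file");
        qdict_put_str(opts, "file.filename", "test.img");

        /* Pull every "file.*" key out into its own sub-dict; this is
         * the helper that util/qemu-config.c also uses. */
        qdict_extract_subqdict(opts, &file_opts, "file.");

        qobject_unref(file_opts);
        qobject_unref(opts);
    }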
- 11 June 2018, 2 commits
Committed by Max Reitz

This is a useful function for the whole block layer, so make it public. At the same time, users outside of block.c probably do not need to make use of the reopen functionality, so rename the current function to bdrv_is_writable_after_reopen() and create a new bdrv_is_writable() function that just passes NULL to it for the reopen queue.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180606193702.7113-2-mreitz@redhat.com
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
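The resulting wrapper is a one-liner; a sketch of exactly what the message describes:

    bool bdrv_is_writable(BlockDriverState *bs)
    {
        /* No reopen queued: pass NULL to the reopen-aware variant. */
        return bdrv_is_writable_after_reopen(bs, NULL);
    }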
Committed by Max Reitz

Looking at the qcow2 code that is riddled with error_report() calls, this is really how it should have been from the start.

Along the way, turn the target_version/current_version comparisons at the beginning of qcow2_downgrade() into assertions (the caller has to make sure these conditions are met), and rephrase the error message on using compat=1.1 to get refcount widths other than 16 bits.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180509210023.20283-3-mreitz@redhat.com
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
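The conversion pattern, illustrated with a hypothetical call site (the message text is invented for the example):

    /* Before: printed straight to stderr, invisible to QMP callers. */
    error_report("Lazy refcounts require compat=1.1");

    /* After: attached to the caller's Error object instead. */
    error_setg(errp, "Lazy refcounts require compat=1.1");

    /* And the entry conditions of qcow2_downgrade() become assertions;
     * the caller must already have verified them. */
    assert(target_version < current_version);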
- 23 May 2018, 2 commits
Committed by Kevin Wolf

Now that we cancel all jobs, and not only block jobs, on shutdown, doing that in bdrv_close_all() isn't really appropriate any more. Move the job_cancel_sync_all() call to the callers, and only assert that there are no jobs running in bdrv_close_all().

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
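A hedged sketch of the resulting shutdown ordering:

    static void shutdown_sketch(void)
    {
        /* The caller now cancels every job, block job or not ... */
        job_cancel_sync_all();

        /* ... and bdrv_close_all() only asserts that none survived,
         * e.g. via assert(job_next(NULL) == NULL). */
        bdrv_close_all();
    }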
Committed by Kevin Wolf

This moves the top-level job completion and cancellation functions from BlockJob to Job.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
- 04 May 2018, 2 commits
Committed by Marc-André Lureau

For convenience and clarity, make it possible to call qobject_ref() at the time when the reference is associated with a variable or argument, by making qobject_ref() return the same pointer as given. Use that to simplify the callers.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180419150145.24795-5-marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
[Useless change to qobject_ref_impl() dropped, commit message improved slightly]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
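The simplification this enables, sketched on a hypothetical caller (dst, member and value are illustrative):

    /* Before: take the reference in a separate statement ... */
    qobject_ref(value);
    dst->member = value;

    /* After: take it exactly where the pointer is stored. */
    dst->member = qobject_ref(value);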
Committed by Marc-André Lureau

Now that we can safely call QOBJECT() on QObject * as well as its subtypes, we can have macros qobject_ref() / qobject_unref() that work everywhere, instead of having to use QINCREF() / QDECREF() for the subtypes and qobject_incref() / qobject_decref() for QObject itself.

The replacement is mechanical, except I broke a long line and added a cast in monitor_qmp_cleanup_req_queue_locked(): unlike qobject_decref(), qobject_unref() doesn't accept void *. Note that the new macros evaluate their argument exactly once, thus there is no need to shout them.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180419150145.24795-4-marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
[Rebased, semantic conflict resolved, commit message improved]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
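A short usage example of the unified macros (hedged; the QDict is just for illustration):

    QDict *dict = qdict_new();       /* starts with one reference */

    /* Previously QINCREF(dict) / QDECREF(dict); one pair of macros now
     * covers QObject * and all of its subtypes alike. */
    qobject_ref(dict);
    qobject_unref(dict);

    qobject_unref(dict);             /* drops the last reference */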
- 20 March 2018, 3 commits
Committed by Max Reitz

We have a clear replacement, so let's deprecate it.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-Id: <20180224154033.29559-8-mreitz@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Committed by Max Reitz

Instead of converting all "backing": null instances into "backing": "", handle a null value directly in bdrv_open_inherit(). This enables explicitly null backing links for json:{} filenames.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-Id: <20180224154033.29559-7-mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
[eblake: rebase to qobject_to() parameter order and qapi headers split]
Signed-off-by: Eric Blake <eblake@redhat.com>
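For example (a hedged illustration, not taken from the patch), a json:{} filename can now spell out that there is no backing file:

    qemu-img info 'json:{"driver": "qcow2",
                         "file": {"driver": "file", "filename": "top.qcow2"},
                         "backing": null}'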
Committed by Max Reitz

This patch was generated using the following Coccinelle script:

    @@
    expression Obj;
    @@
    (
    - qobject_to_qnum(Obj)
    + qobject_to(QNum, Obj)
    |
    - qobject_to_qstring(Obj)
    + qobject_to(QString, Obj)
    |
    - qobject_to_qdict(Obj)
    + qobject_to(QDict, Obj)
    |
    - qobject_to_qlist(Obj)
    + qobject_to(QList, Obj)
    |
    - qobject_to_qbool(Obj)
    + qobject_to(QBool, Obj)
    )

and a bit of manual fix-up for overly long lines and three places in tests/check-qjson.c that Coccinelle did not find.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-Id: <20180224154033.29559-4-mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
[eblake: swap order from qobject_to(o, X), rebase to master, also a fix to a latent false-positive compiler complaint about hw/i386/acpi-build.c]
Signed-off-by: Eric Blake <eblake@redhat.com>
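Typical use of the resulting macro (hedged example; assumes some QObject *obj in scope):

    /* qobject_to() is a type-checked downcast: it yields NULL when
     * 'obj' is not actually of the requested type. */
    QDict *dict = qobject_to(QDict, obj);

    if (dict) {
        qdict_put_str(dict, "key", "value");
    }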
- 19 March 2018, 2 commits
Committed by Fam Zheng

Reported-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Committed by Fam Zheng

Reopen flags are not synchronized according to the bdrv_reopen_queue_child() precedence until bdrv_reopen_prepare(). That is a bit too late: we already check the consistency in bdrv_check_perm() before that. This fixes the bug that, when a read-only node is reopened read-write with bdrv_reopen(), the flags for the backing child are wrong.

Before, we could recurse with flags.rw=1; now, role->inherit_options + update_flags_from_options will make sure to clear the bit when necessary. Note that this will not clear an explicitly set bit, as in the case of parallel block jobs (e.g. test_stream_parallel in 030), because the explicit options include 'read-only=false' (for an intermediate node used by a different job).

Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
- 09 March 2018, 5 commits
Committed by Kevin Wolf

Most callers have their own checks, but something like this should also be checked centrally. As it happens, x-blockdev-create can pass negative image sizes to format drivers (because there is no QAPI type that would reject negative numbers) and triggers the check added by this patch.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
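A hedged sketch of such a central check (the message wording is invented):

    /* Reject negative sizes before they ever reach a format driver. */
    if (size < 0) {
        error_setg(errp, "Image size must not be negative");
        return -EINVAL;
    }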
Committed by Kevin Wolf

We'll use a separate source file for image creation, and we need to check there whether the requested driver is whitelisted.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
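A hedged sketch of how such a whitelist check is typically used, assuming the existing bdrv_is_whitelisted() helper:

    BlockDriver *drv = bdrv_find_format(fmt);

    if (!drv || !bdrv_is_whitelisted(drv, false /* read_only */)) {
        error_setg(errp, "Driver '%s' not found or not whitelisted", fmt);
        return -ENOENT;
    }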
Committed by Kevin Wolf

Instead of passing a separate BlockDriverState * into qcow2_co_create(), make use of the BlockdevRef that is included in BlockdevCreateOptions.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
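A hedged sketch of resolving that reference inside the create function (option field names approximate):

    /* Open the node named by the BlockdevRef carried in the create
     * options instead of receiving it as an extra parameter. */
    BlockDriverState *bs = bdrv_open_blockdev_ref(qcow2_opts->file, errp);

    if (bs == NULL) {
        return -EIO;
    }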
Committed by Paolo Bonzini

Suggested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1516279431-30424-8-git-send-email-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Committed by Paolo Bonzini

QED's bdrv_invalidate_cache implementation would like to reuse functions that acquire/release the metadata locks. Call it from coroutine context to simplify the logic.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1516279431-30424-6-git-send-email-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
- 03 March 2018, 5 commits
Committed by Markus Armbruster

In my "build everything" tree, a change to the types in qapi-schema.json triggers a recompile of about 4800 out of 5100 objects. The previous commit split up qmp-commands.h, qmp-event.h, qmp-visit.h and qapi-types.h, but each of these headers still includes all of its shards. Reduce compile time by including just the shards we actually need.

To illustrate the benefits: adding a type to qapi/migration.json now recompiles some 2300 instead of 4800 objects. The next commit will improve things further.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20180211093607.27351-24-armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
[eblake: rebase to master]
Signed-off-by: Eric Blake <eblake@redhat.com>
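The mechanical shape of the change, as a hedged example:

    /* Before: one monolithic header drags in every QAPI type. */
    #include "qapi-types.h"

    /* After: include only the shard a file actually uses. */
    #include "qapi/qapi-types-block-core.h"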
Committed by Markus Armbruster

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20180211093607.27351-2-armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Committed by Stefan Hajnoczi

BlockDriver->bdrv_create() has been called from coroutine context since commit 5b7e1542 ("block: make bdrv_create adopt coroutine"). Make this explicit by renaming it to .bdrv_co_create_opts() and adding the coroutine_fn annotation. This makes it obvious to block driver authors that they may yield, use CoMutex, or use other coroutine_fn APIs.

bdrv_co_create is reserved for the QAPI-based version that Kevin is working on.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20170705102231.20711-2-stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
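A hedged sketch of the renamed callback in a driver's BlockDriver table (the raw names are illustrative):

    static int coroutine_fn raw_co_create_opts(const char *filename,
                                               QemuOpts *opts, Error **errp)
    {
        /* May yield, take a CoMutex, etc.: this runs in a coroutine. */
        return 0;
    }

    static BlockDriver bdrv_raw_sketch = {
        .format_name         = "raw",
        .bdrv_co_create_opts = raw_co_create_opts,   /* was .bdrv_create */
    };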
Committed by Stefan Hajnoczi

BlockBackend currently relies on BlockDriverState->in_flight to track requests for blk_drain(). There is a corner case where BlockDriverState->in_flight cannot be used, though: blk->root can be NULL when there is no medium. This results in a segfault when the NULL pointer is dereferenced.

Introduce a BlockBackend->in_flight counter for AIO requests so that it works even when blk->root == NULL.

Based on a patch by Kevin Wolf <kwolf@redhat.com>.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
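A hedged sketch of the accounting pattern (helper names and the wait field are hypothetical):

    /* Each AIO request bumps the BlockBackend's own counter, which is
     * valid even when there is no medium (blk->root == NULL). */
    static void blk_inc_in_flight_sketch(BlockBackend *blk)
    {
        atomic_inc(&blk->in_flight);
    }

    static void blk_dec_in_flight_sketch(BlockBackend *blk)
    {
        atomic_dec(&blk->in_flight);
        aio_wait_kick(&blk->wait);   /* wake up a pending blk_drain() */
    }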
Committed by Stefan Hajnoczi

BlockDriverState has the BDRV_POLL_WHILE() macro to wait on event loop activity while a condition evaluates to true. This is used to implement synchronous operations where it acts as a condvar between the IOThread running the operation and the main loop waiting for the operation. It can also be called from the thread that owns the AioContext, and in that case it's just a nested event loop.

BlockBackend needs this behavior but doesn't always have a BlockDriverState it can use. This patch extracts BDRV_POLL_WHILE() into the AioWait abstraction, which can be used with an AioContext and isn't tied to BlockDriverState any more.

This feature could be built directly into AioContext, but then all users would kick the event loop even if they signal different conditions. Imagine an AioContext with many BlockDriverStates: each time a request completes, every waiter would wake up and re-check its condition. It's nicer to keep a separate AioWait object for each condition instead. Please see "block/aio-wait.h" for details on the API.

The name AIO_WAIT_WHILE() avoids confusion between AIO_POLL_WHILE() and AioContext polling.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
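A hedged usage sketch of the new abstraction (the condition and context are illustrative):

    #include "block/aio-wait.h"

    AioWait wait;   /* one waiter object per condition */

    /* Run the event loop until the in-flight count drops to zero. */
    AIO_WAIT_WHILE(&wait, blk_get_aio_context(blk),
                   atomic_read(&blk->in_flight) > 0);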
- 10 February 2018, 1 commit
Committed by Eric Blake

We don't need the can_write_zeroes_with_unmap field in BlockDriverInfo, because it is redundant information alongside supported_zero_flags & BDRV_REQ_MAY_UNMAP. Note that BlockDriverInfo and supported_zero_flags are both per-device settings, rather than global state about the driver as a whole, which means one or both of these bits of information can already be conditional. Let's audit how they were set:

* crypto: always setting can_write_ to false is pointless (the struct starts life zero-initialized); no use of supported_
* nbd: just recently fixed to set can_write_ if supported_ includes MAY_UNMAP (thus this commit effectively reverts bca80059e and solves the problem mentioned there in a more global way)
* file-posix, iscsi, qcow2: can_write_ is conditional, while supported_ was unconditional; but passing MAY_UNMAP would fail with ENOTSUP if the condition wasn't met
* qed: can_write_ is unconditional, but pwrite_zeroes lacks support for MAY_UNMAP and supported_ is not set. Perhaps support can be added later (since it would be similar to qcow2), but for now claiming false is no real loss
* all other drivers: can_write_ is not set, and supported_ is either unset or a passthrough

Simplify the code by moving the conditional into supported_zero_flags for all drivers, then dropping the now-unused BDI field. For callers that relied on bdrv_can_write_zeroes_with_unmap(), we return the same per-device settings for drivers that had conditions (no observable change in behavior there), and we can now return true (instead of false) for drivers that support passthrough (for example, the commit driver), which gives those drivers the same fix as nbd just got in bca80059e. For callers that relied on supported_zero_flags, we now have a few more places that can avoid a wasted call to pwrite_zeroes() that would just fail with ENOTSUP.

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180126193439.20219-1-eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
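The check callers use instead of the removed BDI field, as a hedged fragment:

    /* Per-device capability test replacing the old BDI field. */
    if (bs->supported_zero_flags & BDRV_REQ_MAY_UNMAP) {
        /* Writing zeroes may unmap (discard) the affected blocks. */
    }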
- 09 February 2018, 6 commits
Committed by Markus Armbruster

qemu-common.h includes qemu/option.h, but most places that include the former don't actually need the latter. Drop the include, and add it to the places that actually need it.

While there, drop superfluous includes of both headers, and separate the #include directives from the file comment with a blank line.

This cleanup makes the number of objects depending on qemu/option.h drop from 4545 (out of 4743) to 284 in my "build everything" tree.

Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20180201111846.21846-20-armbru@redhat.com>
[Semantic conflict with commit bdd6a90a in block/nvme.c resolved]
Committed by Markus Armbruster

Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20180201111846.21846-15-armbru@redhat.com>
Committed by Markus Armbruster

Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20180201111846.21846-14-armbru@redhat.com>
Committed by Markus Armbruster

This cleanup makes the number of objects depending on qapi/qmp/qdict.h drop from 4550 (out of 4743) to 368 in my "build everything" tree. For qapi/qmp/qobject.h, the number drops from 4552 to 390. While there, separate the #include directives from the file comment with a blank line.

Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20180201111846.21846-13-armbru@redhat.com>
Committed by Markus Armbruster

Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20180201111846.21846-7-armbru@redhat.com>
[OSX breakage fixed]
Committed by Markus Armbruster

This cleanup makes the number of objects depending on qapi/error.h drop from 1910 (out of 4743) to 1612 in my "build everything" tree. While there, separate the #include directives from the file comment with a blank line, and drop a useless comment on why qemu/osdep.h is included first.

Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20180201111846.21846-5-armbru@redhat.com>
[Semantic conflict with commit 34e304e9 resolved, OSX breakage fixed]
- 22 December 2017, 3 commits
Committed by Kevin Wolf

The bdrv_reopen*() implementation doesn't like it if the graph is changed between queuing nodes for reopen and actually reopening them (one of the reasons is that queuing can be recursive). So instead of draining the device only in bdrv_reopen_multiple(), require that callers have already drained all affected nodes, and assert this in bdrv_reopen_queue().

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
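A hedged sketch of the calling convention this establishes (function signatures of that era, simplified):

    /* Callers must keep the subtree drained across queuing and reopen. */
    bdrv_subtree_drained_begin(bs);

    queue = bdrv_reopen_queue(NULL, bs, options, flags);
    ret = bdrv_reopen_multiple(bdrv_get_aio_context(bs), queue, errp);

    bdrv_subtree_drained_end(bs);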
Committed by Kevin Wolf

We need to remember how many of the drain sections that a node is in were recursive (i.e. a subtree drain rather than a node drain), so that they can be correctly applied when children are added or removed during the drained section.

With this change, it is safe to modify the graph even inside a bdrv_subtree_drained_begin/end() section.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Committed by Kevin Wolf

This is in preparation for subtree drains, i.e. drained sections that affect not only a single node but recursively all child nodes, too.

Calling the parent callbacks for drain is pointless when we just came from that parent node recursively, and it leads to multiple increases of bs->quiesce_counter in a single drain call. Don't do it.

In order for this to work correctly, the parent callback must be called for every bdrv_drain_begin/end() call, not only for the outermost one: if we have a node N with two parents A and B, recursive draining of A should cause the quiesce_counter of B to increase, because its child N is drained independently of B. If now B is recursively drained, too, A must increase its quiesce_counter because N is only now drained independently of A, even if N is just going from quiesce_counter 1 to 2.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-