- 19 May 2020, 2 commits
-
Submitted by Max Reitz
For now, it is always set to 0. Later patches in this series will ensure that all callers pass an appropriate combination of flags.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20200513110544.176672-6-mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Submitted by Max Reitz
This structure contains little more than parent callbacks for child state changes. It cannot really reflect a child's role, because different roles may overlap (as we will see when real roles are introduced), and because parents can have custom callbacks even when the child fulfills a standard role.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-Id: <20200513110544.176672-4-mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
- 10 September 2019, 1 commit
-
In job_finish_sync(), job_enter() should be enough for a job to make some progress, and draining is the wrong tool for it. So use job_enter() directly here and drop job_drain() along with all the related code that is no longer used.
Suggested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Tested-by: John Snow <jsnow@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
- 16 August 2019, 4 commits
-
Submitted by Markus Armbruster
In my "build everything" tree, changing qemu/main-loop.h triggers a recompile of some 5600 out of 6600 objects (not counting tests and objects that don't depend on qemu/osdep.h). It includes block/aio.h, which in turn includes qemu/event_notifier.h, qemu/notify.h, qemu/processor.h, qemu/qsp.h, qemu/queue.h, qemu/thread-posix.h, qemu/thread.h, qemu/timer.h, and a few more.

Include qemu/main-loop.h only where it's needed. Touching it now recompiles only some 1700 objects. For block/aio.h and qemu/event_notifier.h, these numbers drop from 5600 to 2800. For the others, they shrink only slightly.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20190812052359.30071-21-armbru@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
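A minimal sketch of the include hygiene this describes (the file name is only an example, not from the patch):

    /* hw/example-dev.c: pull in qemu/main-loop.h only because this file
     * actually calls main-loop APIs; files that merely use AioContext
     * types keep including block/aio.h and nothing more. */
    #include "qemu/osdep.h"      /* first include in every QEMU .c file */
    #include "qemu/main-loop.h"  /* needed here, dropped everywhere it isn't */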
-
Submitted by Kevin Wolf
This fixes devices like IDE that can still start new requests from I/O handlers in the CPU thread while the block backend is drained. The basic assumption is that in a drain section, no new requests should be allowed through a BlockBackend (blk_drained_begin/end don't exist; we get drain sections only on the node level). However, there are two special cases where requests should not be queued:

1. Block jobs: We already make sure that block jobs are paused in a drain section, so they won't start new requests. However, if drain_begin is called on the job's BlockBackend first, it can happen that we deadlock because the job stays busy until it reaches a pause point, which it can't if its requests aren't processed any more. The proper solution here would be to make all requests through the job's filter node instead of using a BlockBackend. For now, just disabling request queuing on the job BlockBackend is simpler.

2. Test cases where making requests through bdrv_* would be cumbersome because we'd need a BdrvChild. Since we already have the functionality to disable request queuing from 1., use it in tests, too, for convenience.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
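As a rough sketch of how a test might opt out of the new queuing (the helper name and surrounding setup are taken from this series and assumed here, not verified against the final API):

    /* Create a BlockBackend for the test and disable the request queuing
     * introduced by this change, so requests issued while the backend is
     * drained are passed through instead of being held back. */
    BlockBackend *blk = blk_new(qemu_get_aio_context(), BLK_PERM_ALL, BLK_PERM_ALL);
    blk_set_disable_request_queuing(blk, true);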
-
Submitted by Max Reitz
Add a test for what happens when you call bdrv_replace_child_noperm() in various drain situations ({old, new} child {drained, not drained}). Most importantly, if both the old and the new child are drained, the parent must not be undrained at any point.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Submitted by Max Reitz
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
- 19 July 2019, 2 commits
-
Submitted by Max Reitz
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Submitted by Max Reitz
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
- 04 June 2019, 5 commits
-
Submitted by Kevin Wolf
There is no reason to use the unchecked version in tests, especially when these are the last callers of bdrv_set_aio_context() outside of block.c.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
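A hedged sketch of what the checked call looks like in a test (assuming bdrv_try_set_aio_context() is the checked counterpart referred to above; bs and ctx are an existing node and the target context):

    /* The checked variant fails cleanly instead of silently moving a node
     * whose users cannot follow; tests typically pass &error_abort. */
    bdrv_try_set_aio_context(bs, ctx, &error_abort);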
-
Submitted by Kevin Wolf
A node should only be in a non-default AioContext if a user attached to it requires this. When the last parent of a node is gone, it can move back to the main AioContext.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Submitted by Kevin Wolf
So far, we only made sure that updating the AioContext of a node affected the whole subtree. However, if a node is newly attached to a new parent, we also need to make sure that both the subtree of the node and the parent are in the same AioContext. This tries to move the new child node to the parent's AioContext and returns an error if this isn't possible. BlockBackends now actually apply their AioContext to their root node.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Submitted by Kevin Wolf
This adds a new parameter to blk_new() which requires its callers to declare from which AioContext this BlockBackend is going to be used (or which AioContext's locks need to be taken anyway). The given context is only stored and kept up to date when changing AioContexts. Actually applying the stored AioContext to the root node is saved for another commit.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
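A sketch of the new calling convention under the assumption that the context is passed alongside the existing permission arguments (the permissions here are just placeholders):

    /* The BlockBackend declares up front which AioContext it will be used
     * from; here, the main loop context. */
    BlockBackend *blk = blk_new(qemu_get_aio_context(),
                                BLK_PERM_ALL, BLK_PERM_ALL);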
-
Submitted by Kevin Wolf
Add an Error parameter to blk_set_aio_context() and use bdrv_child_try_set_aio_context() internally to check whether all involved nodes can actually support the AioContext switch.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
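A hedged sketch of a caller after this change (blk and iothread_ctx are assumed to exist; treating the return value as an error indicator is an assumption based on the convention described here):

    Error *local_err = NULL;
    if (blk_set_aio_context(blk, iothread_ctx, &local_err) < 0) {
        /* e.g. some user of a node in the subtree cannot follow the switch */
        error_report_err(local_err);
    }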
-
- 29 May 2019, 1 commit
-
Submitted by Alberto Garcia
bdrv_unref_child() does the following things:

- Updates the child->bs->inherits_from pointer.
- Calls bdrv_detach_child() to remove the BdrvChild from bs->children.
- Calls bdrv_unref() to unref the child BlockDriverState.

When bdrv_unref_child() was introduced in commit 33a60407 it was not used in bdrv_close() because the drivers that had additional children (like quorum or blkverify) had already called bdrv_unref() on their children during their own close functions. This was changed later (in 0bd6e91a for quorum, in 3e586be0 for blkverify), so there's no reason not to use bdrv_unref_child() in bdrv_close() anymore. After this there's also no need to remove bs->backing and bs->file separately from the rest of the children, so bdrv_close() can be simplified.

Now bdrv_close() unrefs all children (before this patch it was only bs->file and bs->backing). As a result, none of the callers of bdrv_attach_child() should remove their reference to child_bs (because this function effectively steals that reference). This patch updates a couple of tests that were doing their own bdrv_unref().

Signed-off-by: Alberto Garcia <berto@igalia.com>
Message-id: 6d1d5feaa53aa1ab127adb73d605dc4503e3abd5.1557754872.git.berto@igalia.com
[mreitz: s/where/were/]
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
- 25 February 2019, 1 commit
-
Submitted by Kevin Wolf
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
-
- 22 February 2019, 1 commit
-
Use the new QEMU_IOVEC_INIT_BUF() instead of qemu_iovec_init_external( ... , 1), which simplifies the code.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20190218140926.333779-15-vsementsov@virtuozzo.com
Message-Id: <20190218140926.333779-15-vsementsov@virtuozzo.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
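A minimal before/after sketch of the simplification (buf and len are assumed to be an existing buffer and its length):

    /* Before: a single-element iovec had to be set up by hand. */
    struct iovec iov = { .iov_base = buf, .iov_len = len };
    QEMUIOVector qiov;
    qemu_iovec_init_external(&qiov, &iov, 1);

    /* After: one macro builds the same single-buffer QEMUIOVector. */
    QEMUIOVector qiov = QEMU_IOVEC_INIT_BUF(qiov, buf, len);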
-
- 02 October 2018, 1 commit
-
Submitted by Kevin Wolf
qemu_event_reset() must be called before the AIO request in a different iothread is submitted. Otherwise the request could be completed before we do the qemu_event_reset() and the test would hang in qemu_event_wait().
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Tested-by: Max Reitz <mreitz@redhat.com>
-
- 25 September 2018, 6 commits
-
Submitted by Kevin Wolf
For the block job drain test, don't only test draining the source and the target node, but create a backing chain for the source (source_backing <- source <- source_overlay) and test draining each of the nodes in it.

When using iothreads, the source node (and therefore the job) is in a different AioContext than the drain, which happens from the main thread. This way, the main thread waits in AIO_WAIT_WHILE() for the iothread to make progress, and aio_wait_kick() is required to notify it. The test validates that calling bdrv_wakeup() for a child or a parent node will actually notify AIO_WAIT_WHILE() instead of letting it hang.

Increase the sleep time a bit (to 1 ms) because the test case is racy and, with the shorter sleep, it didn't reproduce the bug it is supposed to test for me under 'rr record -n'. This was because bdrv_drain_invoke_entry() (in the main thread) was only called after the job had already reached the pause point, so we got a bdrv_dec_in_flight() from the main thread and the additional aio_wait_kick() when the job becomes idle (which is what we really wanted to test here) wasn't even necessary any more to make progress.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
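The waiting pattern being exercised, as a rough sketch (ctx and operation_done are placeholders, not names from the test):

    /* Main thread: poll until the iothread has made progress. */
    AIO_WAIT_WHILE(ctx, !operation_done);

    /* Iothread: after finishing the work the condition depends on,
     * wake up any AIO_WAIT_WHILE() callers in other threads. */
    operation_done = true;
    aio_wait_kick();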
-
Submitted by Kevin Wolf
Commit 89bd0305 changed the test case from using job_sleep_ns() to using qemu_co_sleep_ns() instead. Also, block_job_sleep_ns() became job_sleep_ns() in commit 5d43e86e. In both cases, some comments in the test case were not updated. Do that now.
Reported-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
-
Submitted by Kevin Wolf
This adds tests for calling AIO_WAIT_WHILE() in the .commit and .abort callbacks. Both reasons why .abort could be called for a single job are tested: either .run or .prepare could return an error.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
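The shape being tested, sketched with placeholders (still_have_inflight_work() stands in for whatever condition the real test waits on):

    /* Completion-phase callbacks such as .commit and .abort may themselves
     * need to wait for outstanding work before the job can finalize. */
    static void test_job_commit(Job *job)
    {
        AIO_WAIT_WHILE(job->aio_context, still_have_inflight_work());
    }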
-
Submitted by Kevin Wolf
This is a regression test for a deadlock that could occur in callbacks called from the aio_poll() in bdrv_drain_poll_top_level(). The AioContext lock wasn't released and therefore would be taken a second time in the callback. This would cause a possible AIO_WAIT_WHILE() in the callback to hang.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
-
Submitted by Kevin Wolf
This is a regression test for a deadlock that occurred in block job completion callbacks (via job_defer_to_main_loop) because the AioContext lock was taken twice: once in job_finish_sync() and then again in job_defer_to_main_loop_bh(). This would cause AIO_WAIT_WHILE() to hang.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
-
Submitted by Kevin Wolf
This extends the existing drain test with a block job to include variants where the block job runs in a different AioContext.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
-
- 31 August 2018, 4 commits
-
Submitted by John Snow
Utilize the job_exit shim by not calling job_defer_to_main_loop and, where applicable, converting the deferred callback into the job_exit callback. This converts backup, stream, create, and the unit tests all at once.

Most of these jobs do not see any changes to the order in which they clean up their resources, except the test-blockjob-txn test, which now puts down its bs before job_completed is called. This is safe for the same reason the reordering in the mirror job is safe: job_completed no longer runs under two locks, making the unref safe even if it causes a flush.

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180830015734.19765-7-jsnow@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
Submitted by John Snow
Jobs presently use an Error object in the case of the create job, and plain char strings in the case of generic errors elsewhere. Unify the two paths as just j->err, and remove the extra argument from job_completed. The integer error code for job_completed is kept for now, to be removed shortly in a separate patch.
Signed-off-by: John Snow <jsnow@redhat.com>
Message-id: 20180830015734.19765-3-jsnow@redhat.com
[mreitz: Dropped a superfluous g_strdup()]
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
Submitted by John Snow
Presently we codify the entry point for a job as the "start" callback, but a more apt name would be "run" to clarify the idea that when this function returns we consider the job to have "finished", except for any cleanup which occurs in separate callbacks later. As part of this clarification, change the signature to include an error object and a return code. The error pointer is not yet used, and the return code, while captured, will be overwritten by actions in the job_completed function.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180830015734.19765-2-jsnow@redhat.com
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
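A sketch of the renamed entry point (the job type and body are placeholders; only the callback shape follows the description above):

    /* "run" instead of "start": when this returns, the job's work is done;
     * later cleanup happens in separate completion callbacks. */
    static int coroutine_fn test_job_run(Job *job, Error **errp)
    {
        /* ... the job's actual work would go here ... */
        return 0;   /* return code is captured, per the note above */
    }

    static const JobDriver test_job_driver = {
        .instance_size = sizeof(Job),   /* placeholder; real jobs embed Job in their own struct */
        .run           = test_job_run,
    };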
-
Submitted by Marc-André Lureau
Spotted by ASAN:

=================================================================
==5378==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 65536 byte(s) in 1 object(s) allocated from:
    #0 0x7f788f83bc48 in malloc (/lib64/libasan.so.5+0xeec48)
    #1 0x7f788c9923c5 in g_malloc (/lib64/libglib-2.0.so.0+0x523c5)
    #2 0x5622a1fe37bc in coroutine_trampoline /home/elmarco/src/qq/util/coroutine-ucontext.c:116
    #3 0x7f788a15d75f in __correctly_grouped_prefixwc (/lib64/libc.so.6+0x4c75f)

(Broken in commit 4c8158e3.)
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-id: 20180809114417.28718-3-marcandre.lureau@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
- 10 July 2018, 1 commit
-
Submitted by Kevin Wolf
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
- 18 June 2018, 9 commits
-
Submitted by Kevin Wolf
This tests both adding and removing a node between bdrv_drain_all_begin() and bdrv_drain_all_end(), and enables the existing detach test for drain_all.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Submitted by Kevin Wolf
This adds a test case that goes wrong if bdrv_drain_invoke() calls aio_poll().
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Submitted by Kevin Wolf
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Submitted by Kevin Wolf
If bdrv_do_drained_begin() polls during its subtree recursion, the graph can change and mess up the bs->children iteration. Test that this doesn't happen.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Submitted by Max Reitz
This patch adds two bdrv-drain tests for what happens if some BDS goes away during the drainage. The basic idea is that you have a parent BDS with some child nodes. Then, you drain one of the children. Because of that, the party who actually owns the parent decides to (A) delete it, or (B) detach all its children from it -- both while the child is still being drained. A real-world case where this can happen is the mirror block job, which may exit if you drain one of its children.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Submitted by Kevin Wolf
We already requested that block jobs be paused in .bdrv_drained_begin, but no guarantee was made that the job was actually inactive at the point where bdrv_drained_begin() returned. This introduces a new callback BdrvChildRole.bdrv_drained_poll() and uses it to make bdrv_drain_poll() take block jobs using the drained node into account.

For the test case to work as expected, we have to switch from block_job_sleep_ns() to qemu_co_sleep_ns() so that the test job is even considered active and must be waited for when draining the node.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
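The sleep switch mentioned above, sketched (the duration is arbitrary and the surrounding job coroutine is assumed):

    /* Sleeping via qemu_co_sleep_ns() bypasses the job layer's pause/sleep
     * bookkeeping, so the test job still counts as busy and drain really
     * has to wait for it. */
    qemu_co_sleep_ns(QEMU_CLOCK_REALTIME, 100000);   /* 100 us */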
-
Submitted by Kevin Wolf
Since we use bdrv_do_drained_begin/end() for bdrv_drain_all_begin/end(), coroutine context is automatically left with a BH, preventing the deadlocks that made bdrv_drain_all*() unsafe in coroutine context. Now that we have even removed the old polling code as dead code, it's obvious that it's compatible now. Enable the coroutine test cases for bdrv_drain_all().
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Submitted by Kevin Wolf
bdrv_do_drain_begin/end() already implement everything that bdrv_drain_all_begin/end() need and currently still do manually: disable external events, call parent drain callbacks, call block driver callbacks.

It also does two more things. The first is incrementing bs->quiesce_counter; bdrv_drain_all() already stood out in the test case by behaving differently from the other drain variants, so adding this is not only safe, but in fact a bug fix. The second is calling bdrv_drain_recurse(); we already do that later in the same function in a loop, so basically doing an early first iteration doesn't hurt.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Submitted by Kevin Wolf
As long as nobody keeps the other I/O thread from working, there is no reason why bdrv_drain() wouldn't work with cross-AioContext events. The key is that the root request we're waiting for is in the AioContext we're polling (which it always is for bdrv_drain()), so that aio_poll() is woken up in the end. Add a test case that shows that it works. Remove the comment in bdrv_drain() that claims otherwise.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
- 30 May 2018, 1 commit
-
Submitted by Kevin Wolf
So far we relied on job->ret and strerror() to produce an error message for failed jobs. Not surprisingly, this tends to result in completely useless messages. This adds a Job.error field that can contain an error string for a failing job, and a parameter to job_completed() that sets the field. As a default, if NULL is passed, we continue to use strerror(job->ret). All existing callers are changed to pass NULL. They can be improved in separate patches.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
-
- 23 May 2018, 1 commit
-
Submitted by Kevin Wolf
The transition to the READY state was still performed in the BlockJob layer, in the same function that sent the BLOCK_JOB_READY QMP event. This patch brings the state transition to the Job layer and implements the QMP event using a notifier called from the Job layer, like we already do for other events related to state transitions.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
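A sketch of the Job-layer call a block job would now make when it becomes ready (assuming a mirror-style job struct s that embeds a BlockJob named common; the notifier wired to this transition is what emits BLOCK_JOB_READY):

    /* Inside the job's coroutine, once the job can be completed on demand. */
    job_transition_to_ready(&s->common.job);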
-