- 27 Sep 2018, 5 commits
Committed by Gerd Hoffmann
This patch adds EDID support to the qemu stdvga. It is turned off by default and can be enabled with the new edid property. The patch also adds xres and yres properties to specify the video mode you want the guest to use. This works only with edid enabled and an updated guest driver.

The mmio bar of the stdvga has some unused address space at the start. It was reserved just in case it would be needed for virtio, but it turned out not to be needed for that. So let's use that region to place the EDID data block.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-id: 20180925075646.25114-6-kraxel@redhat.com
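The new properties can then be set on the command line, e.g. -device VGA,edid=on,xres=1280,yres=800. A rough sketch of what declaring them as qdev properties looks like follows; the state struct and field names below are assumptions for illustration, not the exact QEMU code:

    /* Sketch only: assumes include/hw/qdev-properties.h; the state
     * struct and field names are illustrative assumptions. */
    static Property vga_pci_properties[] = {
        DEFINE_PROP_BOOL("edid", PCIVGAState, vga.edid_enabled, false),
        DEFINE_PROP_UINT16("xres", PCIVGAState, vga.xres, 0),
        DEFINE_PROP_UINT16("yres", PCIVGAState, vga.yres, 0),
        DEFINE_PROP_END_OF_LIST(),
    };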
Committed by Gerd Hoffmann
Add a define for EDID monitor properties.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-id: 20180925075646.25114-5-kraxel@redhat.com
Committed by Gerd Hoffmann
Create an I/O region for an EDID data block.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180925075646.25114-4-kraxel@redhat.com
Committed by Gerd Hoffmann
Helper function to figure out the size of an EDID blob by checking how many extensions are present. Both the base EDID blob and each extension are 128 bytes in size.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-id: 20180925075646.25114-3-kraxel@redhat.com
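A minimal sketch of that calculation, assuming the standard EDID layout where byte 126 of the 128-byte base block holds the extension count (the function name here is illustrative, not necessarily the one this commit adds):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Total blob size = base block plus extensions, 128 bytes each;
     * byte 126 of the base block is the extension count. */
    static size_t edid_blob_size(const uint8_t *edid)
    {
        static const uint8_t magic[8] =
            { 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00 };
        if (memcmp(edid, magic, sizeof(magic)) != 0) {
            return 0;  /* not a valid EDID base block */
        }
        return 128 * (size_t)(edid[126] + 1);
    }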
Committed by Gerd Hoffmann
EDID is a metadata format to describe monitors. On physical hardware the monitor has an eeprom with that data block, which can be read over the i2c bus. On a Linux system you can usually find the EDID data block in /sys/class/drm/$card/$connector/edid. xorg ships an edid-decode utility which you can use to turn the blob into readable form.

I think it would be a good idea to use EDID for virtual displays too. This needs changes in both qemu and guest kms drivers. This patch is the first step: it adds a generator for EDID blobs to qemu, with a qemu-edid test tool included.

With EDID we can pass more information to the guest: names and serial numbers, so the guest's display configuration has no boring "Unknown Monitor"; a list of video modes; display resolution, pretty important in case we want to add HiDPI support some day.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-id: 20180925075646.25114-2-kraxel@redhat.com
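As a usage note: on a typical Linux system such a blob can be inspected with, for example, edid-decode < /sys/class/drm/card0-HDMI-A-1/edid; the connector name is just an example and varies per machine.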
- 25 Sep 2018, 35 commits
Committed by Peter Maydell
Block layer patches:

- Drain fixes
- node-name parameters for block-commit
- Refactor block jobs to use transactional callbacks for exiting

# gpg: Signature made Tue 25 Sep 2018 16:12:44 BST
# gpg: using RSA key F407DB0061D5CF40
# gpg: Good signature from "Max Reitz <mreitz@redhat.com>"
# Primary key fingerprint: 91BE B60A 30DB 3E88 57D1 1829 F407 DB00 61D5 CF40

* remotes/xanclic/tags/pull-block-2018-09-25: (42 commits)
  test-bdrv-drain: Test draining job source child and parent
  block: Use a single global AioWait
  test-bdrv-drain: Fix outdated comments
  test-bdrv-drain: AIO_WAIT_WHILE() in job .commit/.abort
  job: Avoid deadlocks in job_completed_txn_abort()
  test-bdrv-drain: Test nested poll in bdrv_drain_poll_top_level()
  block: Remove aio_poll() in bdrv_drain_poll variants
  blockjob: Lie better in child_job_drained_poll()
  block-backend: Decrease in_flight only after callback
  block-backend: Fix potential double blk_delete()
  block-backend: Add .drained_poll callback
  block: Add missing locking in bdrv_co_drain_bh_cb()
  test-bdrv-drain: Test AIO_WAIT_WHILE() in completion callback
  job: Use AIO_WAIT_WHILE() in job_finish_sync()
  test-blockjob: Acquire AioContext around job_cancel_sync()
  test-bdrv-drain: Drain with block jobs in an I/O thread
  aio-wait: Increase num_waiters even in home thread
  blockjob: Wake up BDS when job becomes idle
  job: Fix missing locking due to mismerge
  job: Fix nested aio_poll() hanging in job_txn_apply
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Committed by Peter Maydell
HMP pull 2018-09-25

# gpg: Signature made Tue 25 Sep 2018 15:11:09 BST
# gpg: using RSA key 0516331EBC5BFDE7
# gpg: Good signature from "Dr. David Alan Gilbert (RH2) <dgilbert@redhat.com>"
# Primary key fingerprint: 45F5 C71B 4A0C B7FB 977A 9FA9 0516 331E BC5B FDE7

* remotes/dgilbert/tags/pull-hmp-20180925:
  qmp, hmp: add PCI subsystem id and vendor id to PCI info
  hmp: fix migrate status timer leak
  monitor: print message when using 'help' with an unknown command

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Committed by Peter Maydell
target-arm queue:
 * target/arm: Fix cpu_get_tb_cpu_state() for non-SVE CPUs
 * hw/arm/exynos4210: fix Exynos4210 UART support
 * hw/arm/virt-acpi-build: Add a check for memory-less NUMA nodes
 * arm: Add BBC micro:bit machine
 * aspeed/i2c: Fix interrupt handling bugs
 * hw/arm/smmu-common: Fix the name of the iommu memory regions
 * hw/arm/smmuv3: fix eventq recording and IRQ triggerring
 * hw/intc/arm_gic: Document QEMU interface
 * hw/intc/arm_gic: Drop GIC_BASE_IRQ macro
 * hw/net/pcnet-pci: Convert away from old_mmio accessors
 * hw/timer/cmsdk-apb-dualtimer: Add missing 'break' statements
 * aspeed/timer: fix compile breakage with clang 3.4.2
 * hw/arm/aspeed: change the FMC flash model of the AST2500 evb
 * hw/arm/aspeed: Minor code cleanups
 * target/arm: Start AArch32 CPUs with EL2 but not EL3 in Hyp mode

# gpg: Signature made Tue 25 Sep 2018 15:23:11 BST
# gpg: using RSA key 3C2525ED14360CDE
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>"
# gpg:                 aka "Peter Maydell <pmaydell@gmail.com>"
# gpg:                 aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>"
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE

* remotes/pmaydell/tags/pull-target-arm-20180925-1: (21 commits)
  target/arm: Start AArch32 CPUs with EL2 but not EL3 in Hyp mode
  aspeed/smc: fix some alignment issues
  hw/arm/aspeed: Add an Aspeed machine class
  hw/arm/aspeed: change the FMC flash model of the AST2500 evb
  aspeed/timer: fix compile breakage with clang 3.4.2
  hw/timer/cmsdk-apb-dualtimer: Add missing 'break' statements
  hw/net/pcnet-pci: Unify pcnet_ioport_read/write and pcnet_mmio_read/write
  hw/net/pcnet-pci: Convert away from old_mmio accessors
  hw/intc/arm_gic: Drop GIC_BASE_IRQ macro
  hw/intc/arm_gic: Document QEMU interface
  hw/arm/smmuv3: fix eventq recording and IRQ triggerring
  hw/arm/smmu-common: Fix the name of the iommu memory regions
  aspeed/i2c: Fix receive done interrupt handling
  aspeed/i2c: Handle receive command in separate function
  aspeed/i2c: interrupts should be cleared by software only
  arm: Add BBC micro:bit machine
  arm: Add Nordic Semiconductor nRF51 SoC
  MAINTAINERS: Add NRF51 entry
  hw/arm/virt-acpi-build: Add a check for memory-less NUMA nodes
  hw/arm/exynos4210: fix Exynos4210 UART support
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Committed by Peter Maydell
The ARMv8 architecture defines that an AArch32 CPU starts in SVC mode, unless EL2 is the highest available EL, in which case it starts in Hyp mode. (In ARMv7 a CPU with EL2 but not EL3 was not a valid configuration, but we don't specifically reject this if the user asks for one.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20180823135047.16525-1-peter.maydell@linaro.org
Committed by Cédric Le Goater
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180921161939.822-6-clg@kaod.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Committed by Cédric Le Goater
The code looks better: it removes duplicated lines and will ease the introduction of common properties for the Aspeed machines.

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180921161939.822-4-clg@kaod.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Committed by Cédric Le Goater
The AST2500 evb is shipped with a W25Q256, which has a non-volatile bit to make the chip operate in 4-byte address mode at power up. This should be an interesting feature to model, as it will exercise the SMC controllers and MMIO execution at boot time a bit more.

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-id: 20180921161939.822-3-clg@kaod.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Committed by Cédric Le Goater
In file included from /home/thuth/devel/qemu/hw/timer/aspeed_timer.c:16:
/home/thuth/devel/qemu/include/hw/misc/aspeed_scu.h:37:3: error: redefinition of typedef 'AspeedSCUState' is a C11 feature [-Werror,-Wtypedef-redefinition]
} AspeedSCUState;
  ^
/home/thuth/devel/qemu/include/hw/timer/aspeed_timer.h:27:31: note: previous definition is here
typedef struct AspeedSCUState AspeedSCUState;

Reported-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180921161939.822-2-clg@kaod.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
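Reduced to a minimal example, the clash is simply a repeated typedef of the same type, which only became legal in C11 (the struct body here is a placeholder):

    /* minimal reproduction of the error above */
    typedef struct AspeedSCUState AspeedSCUState;       /* forward typedef */
    typedef struct AspeedSCUState {
        int placeholder;
    } AspeedSCUState;                                   /* error before C11 */

The usual fix is to keep a single typedef and refer to 'struct AspeedSCUState' (or include the defining header) everywhere else.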
Committed by Peter Maydell
Add 'break' statements missing from a switch in the APB dual-timer write function. Spotted by Coverity as CID 1395626 and 1395633.

Reported-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180924123122.14549-1-peter.maydell@linaro.org
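The failure mode, reduced to a sketch with made-up register names: without the break, a write aimed at one register falls through into the next case and clobbers a second register:

    switch (offset) {
    case REG_CONTROL:        /* illustrative names, not the real registers */
        s->control = value;
        break;               /* without this, the write falls through... */
    case REG_LOAD:
        s->load = value;     /* ...and also hits this register */
        break;
    }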
Committed by Peter Maydell
The only difference between our implementation of the pcnet ioport accessors and the mmio accessors is that the former check BCR_DWIO to see what access widths are permitted for addresses in the aprom range (0x0..0xf). In fact our failure to do this in the mmio accessors is a bug (one which was fixed for the ioport accessors in commit 7ba79741 in 2011).

The data sheet for the Am79C970A does not describe the DWIO bit as applying only to I/O space mapped I/O resources and not to memory mapped I/O resources, and our MMIO accessors already honour DWIO for accesses in the 0x10..0x1f range (since the pcnet_ioport_{read,write}{w,l} functions check it). The data sheet for the later but compatible Am79C976 is clearer: it states specifically "DWIO mode applies to both I/O- and memory-mapped accesses." This seems to be reasonable evidence in favour of interpreting the Am79C970A spec as meaning the same.

(NB: Linux's pcnet driver only supports I/O accesses, so the MMIO access part of this device is probably untested anyway.)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Committed by Peter Maydell
Convert the pcnet-pci device away from using the old_mmio MemoryRegionOps accessor functions. This commit is a no-behaviour-change API conversion. (Since PCNET_PNPMMIO_SIZE is 0x20, the old "addr & 0x10" check and the new "addr < 0x10" check are exact opposites; the new code is phrased to be parallel with the pcnet_io_read/write functions.)

I have left a TODO comment marker because the similarity between the MMIO and IO accessor behaviour is suspicious and they could be combined, but this will be left to a different patch.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Committed by Peter Maydell
The GIC_BASE_IRQ macro is a leftover from when we shared code between the GICv2 and the v7M NVIC. Since the NVIC is now split off, GIC_BASE_IRQ is always 0, and we can just delete it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20180824161819.11085-1-peter.maydell@linaro.org
Committed by Peter Maydell
The GICv2's QEMU interface (sysbus MMIO regions, IRQs, etc.) is now quite complicated with the addition of the virtualization extensions. Add a comment in the header file which documents it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20180823103818.31189-1-peter.maydell@linaro.org
Committed by Eric Auger
The event queue management is broken today: event records are not properly written, as the EVT_SET_* macros were not updating the actual event record, and the event queue interrupt is not correctly triggered.

Fixes: bb981004 ("hw/arm/smmuv3: Event queue recording helper")
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Message-id: 20180921070138.10114-3-eric.auger@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Committed by Max Reitz
Block layer patches:

- Fix some jobs/drain/aio_poll related hangs
- commit: Add top-node/base-node options
- linux-aio: Fix locking for qemu_laio_process_completions()
- Fix use after free error in bdrv_open_inherit

# gpg: Signature made Tue Sep 25 15:54:01 2018 CEST
# gpg: using RSA key 7F09B272C88F2FD6
# gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>"
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6

* kevin/tags/for-upstream: (26 commits)
  test-bdrv-drain: Test draining job source child and parent
  block: Use a single global AioWait
  test-bdrv-drain: Fix outdated comments
  test-bdrv-drain: AIO_WAIT_WHILE() in job .commit/.abort
  job: Avoid deadlocks in job_completed_txn_abort()
  test-bdrv-drain: Test nested poll in bdrv_drain_poll_top_level()
  block: Remove aio_poll() in bdrv_drain_poll variants
  blockjob: Lie better in child_job_drained_poll()
  block-backend: Decrease in_flight only after callback
  block-backend: Fix potential double blk_delete()
  block-backend: Add .drained_poll callback
  block: Add missing locking in bdrv_co_drain_bh_cb()
  test-bdrv-drain: Test AIO_WAIT_WHILE() in completion callback
  job: Use AIO_WAIT_WHILE() in job_finish_sync()
  test-blockjob: Acquire AioContext around job_cancel_sync()
  test-bdrv-drain: Drain with block jobs in an I/O thread
  aio-wait: Increase num_waiters even in home thread
  blockjob: Wake up BDS when job becomes idle
  job: Fix missing locking due to mismerge
  job: Fix nested aio_poll() hanging in job_txn_apply
  ...

Signed-off-by: Max Reitz <mreitz@redhat.com>
Committed by Kevin Wolf
For the block job drain test, don't only test draining the source and the target node, but create a backing chain for the source (source_backing <- source <- source_overlay) and test draining each of the nodes in it.

When using iothreads, the source node (and therefore the job) is in a different AioContext than the drain, which happens from the main thread. This way, the main thread waits in AIO_WAIT_WHILE() for the iothread to make progress, and aio_wait_kick() is required to notify it. The test validates that calling bdrv_wakeup() for a child or a parent node will actually notify AIO_WAIT_WHILE() instead of letting it hang.

Increase the sleep time a bit (to 1 ms) because the test case is racy and with the shorter sleep, it didn't reproduce the bug it is supposed to test for me under 'rr record -n'. This was because bdrv_drain_invoke_entry() (in the main thread) was only called after the job had already reached the pause point, so we got a bdrv_dec_in_flight() from the main thread and the additional aio_wait_kick() when the job becomes idle (that we really wanted to test here) wasn't even necessary any more to make progress.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Committed by Kevin Wolf
When draining a block node, we recurse to its parent and for subtree drains also to its children. A single AIO_WAIT_WHILE() is then used to wait for bdrv_drain_poll() to become true, which depends on all of the nodes we recursed to. However, if the respective child or parent becomes quiescent and calls bdrv_wakeup(), only the AioWait of the child/parent is checked, while AIO_WAIT_WHILE() depends on the AioWait of the original node.

Fix this by using a single AioWait for all callers of AIO_WAIT_WHILE().

This may mean that the draining thread gets a few more unnecessary wakeups because an unrelated operation got completed, but we already wake it up when something _could_ have changed rather than only if it has certainly changed. Apart from that, drain is a slow path anyway. In theory it would be possible to use wakeups more selectively and still correctly, but the gains are likely not worth the additional complexity. In fact, this patch is a nice simplification for some places in the code.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
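Conceptually the resulting wait/kick protocol looks like this (pseudocode, not the literal AIO_WAIT_WHILE() macro; the helper names are invented for illustration): all waiters register on one global AioWait, so a bdrv_wakeup() from any node reaches every waiter:

    /* pseudocode; helper names invented for illustration */
    while (!quiescent(bs)) {
        register_on_global_aio_wait();      /* one AioWait shared by all waiters */
        poll_or_block();                    /* woken by aio_wait_kick() from anywhere */
        unregister_from_global_aio_wait();
    }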
Committed by Kevin Wolf
Commit 89bd0305 changed the test case from using job_sleep_ns() to using qemu_co_sleep_ns() instead. Also, block_job_sleep_ns() became job_sleep_ns() in commit 5d43e86e. In both cases, some comments in the test case were not updated. Do that now.

Reported-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Committed by Kevin Wolf
This adds tests for calling AIO_WAIT_WHILE() in the .commit and .abort callbacks. Both reasons why .abort could be called for a single job are tested: either .run or .prepare could return an error.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Committed by Kevin Wolf
Amongst others, job_finalize_single() calls the .prepare/.commit/.abort callbacks of the individual job driver. Recently, their use was adapted for all block jobs so that they involve code calling AIO_WAIT_WHILE() now. Such code must be called under the AioContext lock for the respective job, but without holding any other AioContext lock.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Committed by Kevin Wolf
This is a regression test for a deadlock that could occur in callbacks called from the aio_poll() in bdrv_drain_poll_top_level(). The AioContext lock wasn't released and therefore would be taken a second time in the callback. This would cause a possible AIO_WAIT_WHILE() in the callback to hang.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Committed by Kevin Wolf
bdrv_drain_poll_top_level() was buggy because it didn't release the AioContext lock of the node to be drained before calling aio_poll(). This way, callbacks called by aio_poll() would possibly take the lock a second time and run into a deadlock with a nested AIO_WAIT_WHILE() call.

However, it turns out that the aio_poll() call isn't actually needed any more. It was introduced in commit 91af091f, which is effectively reverted by this patch. The cases it was supposed to fix are now covered by bdrv_drain_poll(), which waits for block jobs to reach a quiescent state.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Committed by Kevin Wolf
Block jobs claim in .drained_poll() that they are in a quiescent state as soon as job->deferred_to_main_loop is true. This is obviously wrong: they still have a completion BH to run. We only get away with this because commit 91af091f added an unconditional aio_poll(false) to the drain functions, but this is bypassing the regular drain mechanisms.

However, just removing this and reporting that the job is still active doesn't work either: the completion callbacks themselves call drain functions (directly, or indirectly with bdrv_reopen), so they would deadlock then.

As a better lie, report the job as active as long as the BH is pending, but falsely call it quiescent from the point in the BH when the completion callback is called. At this point, nested drain calls won't deadlock because they ignore the job, and outer drains will wait for the job to really reach a quiescent state because the callback is already running.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Committed by Kevin Wolf
Request callbacks can do pretty much anything, including operations that will yield from the coroutine (such as draining the backend). In that case, a decreased in_flight would be visible to other code and could lead to a drain completing while the callback hasn't actually completed yet.

Note that reordering these operations forbids calling drain directly inside an AIO callback. As Paolo explains, indirectly calling it is okay:

- Calling it through a coroutine is okay, because then bdrv_drained_begin() goes through bdrv_co_yield_to_drain() and you have in_flight=2 when bdrv_co_yield_to_drain() yields, then soon in_flight=1 when the aio_co_wake() in the AIO callback completes, then in_flight=0 after the bottom half starts.

- Calling it through a bottom half would be okay too, as long as the AIO callback remembers to do inc_in_flight/dec_in_flight just like bdrv_co_yield_to_drain() and bdrv_co_drain_bh_cb() do.

A few more important cases that come to mind:

- A coroutine that yields because of I/O is okay, with a sequence similar to bdrv_co_yield_to_drain().

- A coroutine that yields with no I/O pending will correctly decrease in_flight to zero before yielding.

- Calling more AIO from the callback won't overflow the counter just because of mutual recursion, because AIO functions always yield at least once before invoking the callback.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
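A sketch of the reordering in the AIO completion path (names follow QEMU's block-backend code but are not guaranteed verbatim):

    static void blk_aio_complete(BlkAioEmAIOCB *acb)
    {
        if (acb->has_returned) {
            /* run the callback while in_flight still counts this request */
            acb->common.cb(acb->common.opaque, acb->rwco.ret);
            /* only now drop the count; a drain triggered from the callback
             * therefore cannot complete before the callback has finished */
            blk_dec_in_flight(acb->rwco.blk);
            qemu_aio_unref(acb);
        }
    }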
Committed by Kevin Wolf
blk_unref() first decreases the refcount of the BlockBackend and calls blk_delete() if the refcount reaches zero. Requests can still be in flight at this point; they are only drained during blk_delete(), and at that point arbitrary callbacks can run. If any callback takes a temporary BlockBackend reference, it will first increase the refcount to 1 and then decrease it to 0 again, triggering another blk_delete(). This will cause a use-after-free crash in the outer blk_delete().

Fix it by draining the BlockBackend before decreasing the refcount to 0. Assert in blk_ref() that it never takes the first refcount (which would mean that the BlockBackend is already being deleted).

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
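In outline, the fixed flow looks like this (simplified sketch; details may differ from the real block/block-backend.c):

    void blk_unref(BlockBackend *blk)
    {
        if (blk) {
            assert(blk->refcnt > 0);
            if (blk->refcnt > 1) {
                blk->refcnt--;
            } else {
                /* drain while we still hold the last reference, so a
                 * callback taking a temporary ref cannot re-trigger
                 * blk_delete() when it drops that ref again */
                blk_drain(blk);
                blk->refcnt = 0;
                blk_delete(blk);
            }
        }
    }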
Committed by Kevin Wolf
A bdrv_drain operation must ensure that all parents are quiesced; this includes BlockBackends. Otherwise, callbacks called by requests that are completed on the BDS layer, but not quite yet on the BlockBackend layer, could still create new requests.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Committed by Kevin Wolf
bdrv_do_drained_begin/end() assume that they are called with the AioContext lock of bs held. If we call drain functions from a coroutine with the AioContext lock held, we yield and schedule a BH to move out of coroutine context. This means that the lock for the home context of the coroutine is released and must be re-acquired in the bottom half.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
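In outline, the bottom half re-takes the lock that was released when the coroutine yielded (simplified sketch; the real callback does more bookkeeping):

    static void bdrv_co_drain_bh_cb(void *opaque)
    {
        BdrvCoDrainData *data = opaque;
        AioContext *ctx = bdrv_get_aio_context(data->bs);

        aio_context_acquire(ctx);      /* re-acquire what the yield released */
        /* ... perform the actual drained_begin/end work on data->bs ... */
        aio_context_release(ctx);
        aio_co_wake(data->co);         /* resume the waiting coroutine */
    }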
Committed by Kevin Wolf
This is a regression test for a deadlock that occurred in block job completion callbacks (via job_defer_to_main_loop) because the AioContext lock was taken twice: once in job_finish_sync() and then again in job_defer_to_main_loop_bh(). This would cause AIO_WAIT_WHILE() to hang.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Committed by Kevin Wolf
job_finish_sync() needs to release the AioContext lock of the job before calling aio_poll(). Otherwise, callbacks called by aio_poll() would possibly take the lock a second time and run into a deadlock with a nested AIO_WAIT_WHILE() call. Also, job_drain() without aio_poll() isn't necessarily enough to make progress on a job; it could depend on bottom halves to be executed.

Combine both open-coded while loops into a single AIO_WAIT_WHILE() call that solves both of these problems.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
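The result is roughly the following single wait (sketch only; the macro's exact argument list has changed across QEMU versions). AIO_WAIT_WHILE() releases the given AioContext around its poll, which is precisely what the open-coded loops failed to do:

    /* sketch: drain the job and poll until it has completed */
    AIO_WAIT_WHILE(job->aio_context,
                   (job_drain(job), !job_is_completed(job)));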
Committed by Kevin Wolf
All callers in QEMU proper hold the AioContext lock when calling job_finish_sync(). test-blockjob should do the same when it calls the function indirectly through job_cancel_sync().

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Committed by Kevin Wolf
This extends the existing drain test with a block job to include variants where the block job runs in a different AioContext.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Committed by Kevin Wolf
Even if AIO_WAIT_WHILE() is called in the home context of the AioContext, we still want to allow the condition to change depending on other threads as long as they kick the AioWait. Specifically, block jobs can be running in an I/O thread and should then be able to kick a drain in the main loop context.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Committed by Kevin Wolf
In the context of draining a BDS, the .drained_poll callback of block jobs is called. If this returns true (i.e. there is still some activity pending), the drain operation may call aio_poll() with blocking=true to wait for completion. As soon as the pending activity is completed and the job finally arrives in a quiescent state (i.e. its coroutine either yields with busy=false or terminates), the block job must notify the aio_poll() loop to wake up; otherwise we get a deadlock if both are running in different threads.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Committed by Kevin Wolf
job_completed() had a problem with double locking that was recently fixed independently by two different commits:

  "job: Fix nested aio_poll() hanging in job_txn_apply"
  "jobs: add exit shim"

One fix removed the first aio_context_acquire(), the other fix removed the other one. Now we have a bug again and the code is run without any locking. Add it back in one of the places.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Committed by Fam Zheng
All callers have acquired ctx already. Doing that again results in an aio_poll() hang. This fixes the problem that a BDRV_POLL_WHILE() in the callback cannot make progress because ctx is recursively locked, for example, when drive-backup finishes.

There are two callers of job_finalize():

    fam@lemon:~/work/qemu [master]$ git grep -w -A1 '^\s*job_finalize'
    blockdev.c:            job_finalize(&job->job, errp);
    blockdev.c-            aio_context_release(aio_context);
    --
    job-qmp.c:             job_finalize(job, errp);
    job-qmp.c-             aio_context_release(aio_context);
    --
    tests/test-blockjob.c: job_finalize(&job->job, &error_abort);
    tests/test-blockjob.c- assert(job->job.status == JOB_STATUS_CONCLUDED);

Ignoring the test, it's easy to see that both callers of job_finalize (and job_do_finalize) have acquired the context.

Cc: qemu-stable@nongnu.org
Reported-by: Gu Nini <ngu@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
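The calling convention this restores, in sketch form (error handling elided; job->aio_context per the job API):

    AioContext *ctx = job->aio_context;
    aio_context_acquire(ctx);
    job_finalize(job, errp);     /* must not acquire ctx itself */
    aio_context_release(ctx);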