- 18 Jun 2018, 23 commits
-
-
Committed by Max Reitz
In order to talk to the source BDS (and maybe in the future to the target BDS as well) directly, we need to convert our existing AIO requests into coroutine I/O requests.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Message-id: 20180613181823.13618-3-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
Committed by Max Reitz
When converting mirror's I/O to coroutines, we are going to need a point where these coroutines are created. mirror_perform() is going to be that point.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-id: 20180613181823.13618-2-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
Committed by Greg Kurz
Removing a drive with drive_del while it is being used to run an I/O-intensive workload can cause QEMU to crash. An AIO flush can yield at some point:

    blk_aio_flush_entry()
     blk_co_flush(blk)
      bdrv_co_flush(blk->root->bs)
       ...
        qemu_coroutine_yield()

and let the HMP command run, free blk->root, and give control back to the AIO flush:

    hmp_drive_del()
     blk_remove_bs()
      bdrv_root_unref_child(blk->root)
       child_bs = blk->root->bs
       bdrv_detach_child(blk->root)
        bdrv_replace_child(blk->root, NULL)
         blk->root->bs = NULL
        g_free(blk->root) <============== blk->root becomes stale
       bdrv_unref(child_bs)
        bdrv_delete(child_bs)
         bdrv_close()
          bdrv_drained_begin()
           bdrv_do_drained_begin()
            bdrv_drain_recurse()
             aio_poll()
              ...
              qemu_coroutine_switch()

and the AIO flush completion ends up dereferencing blk->root:

    blk_aio_complete()
     scsi_aio_complete()
      blk_get_aio_context(blk)
       bs = blk_bs(blk)
       i.e. bs = blk->root ? blk->root->bs : NULL
                 ^^^^^^^^^ stale

The problem is that we should avoid making block driver graph changes while we have in-flight requests. Let's drain all I/O for this BB before calling bdrv_root_unref_child().

Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
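The fix follows a general pattern: never mutate the graph while requests are still in flight; drain first, then detach. Below is a minimal, self-contained C sketch of that pattern (all types and function names are invented for illustration; this is not QEMU code): an asynchronous completion dereferences the root pointer, so the owner polls until the in-flight count hits zero before freeing it.

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct Root { int dummy; } Root;

typedef struct Backend {
    Root *root;        /* stands in for blk->root */
    int in_flight;     /* pending asynchronous requests */
} Backend;

/* Pretend event-loop step: complete one pending request.  The completion
 * dereferences the root, just like the AIO flush completion above. */
static void poll_one(Backend *blk)
{
    if (blk->in_flight > 0) {
        assert(blk->root != NULL);
        blk->in_flight--;
    }
}

/* Drain: poll until nothing is in flight any more. */
static void drain(Backend *blk)
{
    while (blk->in_flight > 0) {
        poll_one(blk);
    }
}

/* The fixed removal path: drain *before* the graph change frees the root. */
static void remove_root(Backend *blk)
{
    drain(blk);
    free(blk->root);
    blk->root = NULL;
}

int main(void)
{
    Backend blk = { .root = calloc(1, sizeof(Root)), .in_flight = 3 };

    remove_root(&blk);   /* safe: all completions ran before the free */
    printf("in_flight=%d root=%p\n", blk.in_flight, (void *)blk.root);
    return 0;
}
```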
-
Committed by Kevin Wolf
This tests both adding and removing a node between bdrv_drain_all_begin() and bdrv_drain_all_end(), and enables the existing detach test for drain_all.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Kevin Wolf
bdrv_drain_all_*() used bdrv_next() to iterate over all root nodes and did a subtree drain for each of them. This works fine as long as the graph is static, but sadly, reality looks different. If the graph changes so that root nodes are added or removed, we would have to compensate for this. bdrv_next() returns each root node only once even if it's the root node for multiple BlockBackends or for a monitor-owned block driver tree, which would only complicate things. The much easier and more obviously correct way is to fundamentally change the way the functions work: iterate over all BlockDriverStates, no matter who owns them, and drain them individually. Compensation is only necessary when a new BDS is created inside a drain_all section. Removal of a BDS doesn't require any action because it's gone afterwards anyway.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Kevin Wolf
In the future, bdrv_drain_all_begin/end() will drain all individual nodes separately rather than whole subtrees. This means that we don't want to propagate the drain to all parents any more: if the parent is a BDS, it will already be drained separately. Recursing to all parents is unnecessary work and would make it an O(n²) operation. Prepare the drain function for the changed drain_all by adding an ignore_bds_parents parameter to the internal implementation that prevents the propagation of the drain to BDS parents. We still (have to) propagate it to non-BDS parents like BlockBackends or Jobs because those are not drained separately.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Kevin Wolf
Before we can introduce a single polling loop for all nodes in bdrv_drain_all_begin(), we must make sure to run it outside of coroutine context, like we already do for bdrv_do_drained_begin().
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Kevin Wolf
bdrv_drain_all() wants to have a single polling loop for draining the in-flight requests of all nodes. This means that the AIO_WAIT_WHILE() condition relies on activity in multiple AioContexts, which is polled from the mainloop context. We must therefore call AIO_WAIT_WHILE() from the mainloop thread and use the AioWait notification mechanism. Just randomly picking the AioContext of any non-mainloop thread would work, but instead of bothering to find such a context in the caller, we can just as well accept NULL for ctx.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Kevin Wolf
This adds a test case that goes wrong if bdrv_drain_invoke() calls aio_poll().
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Kevin Wolf
We cannot allow aio_poll() in bdrv_drain_invoke(begin=true) until we're done with propagating the drain through the graph and are doing the single final BDRV_POLL_WHILE(). Just schedule the coroutine with the callback and increase bs->in_flight to make sure that the polling phase will wait for it.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Kevin Wolf
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Kevin Wolf
bdrv_do_drained_begin() is only safe if we have a single BDRV_POLL_WHILE() after quiescing all affected nodes. We cannot allow parent callbacks to introduce a nested polling loop that could cause graph changes while we're traversing the graph. Split off bdrv_do_drained_begin_quiesce(), which only quiesces a single node without waiting for its requests to complete. These requests will be waited for in the BDRV_POLL_WHILE() call down the call chain.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Kevin Wolf
If bdrv_do_drained_begin() polls during its subtree recursion, the graph can change and mess up the bs->children iteration. Test that this doesn't happen.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Kevin Wolf
Anything can happen inside BDRV_POLL_WHILE(), including graph changes that may interfere with its callers (e.g. child list iteration in recursive callers of bdrv_do_drained_begin). Switch to a single BDRV_POLL_WHILE() call for the whole subtree at the end of bdrv_do_drained_begin() to avoid such effects. The recursion now happens inside the loop condition. As the graph can only change between bdrv_drain_poll() calls, but not inside of a call, doing the recursion here is safe.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
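The structural change described here, polling once for the whole subtree with the recursive check evaluated inside the loop condition, can be illustrated with a small stand-alone sketch (illustrative names only, not QEMU's implementation): because the subtree is re-walked on every loop iteration, a graph change between iterations is simply picked up by the next evaluation of the condition.

```c
#include <stdbool.h>
#include <stdio.h>

#define MAX_CHILDREN 4

typedef struct Node {
    int in_flight;                        /* pending requests on this node */
    struct Node *children[MAX_CHILDREN];
} Node;

/* Recursive "is anything in this subtree still busy?" check; plays the role
 * of the condition that bdrv_drain_poll() provides in the real code. */
static bool subtree_busy(const Node *n)
{
    if (!n) {
        return false;
    }
    if (n->in_flight > 0) {
        return true;
    }
    for (int i = 0; i < MAX_CHILDREN; i++) {
        if (subtree_busy(n->children[i])) {
            return true;
        }
    }
    return false;
}

/* Pretend event-loop step: finish one request somewhere in the tree. */
static void poll_once(Node *n)
{
    if (!n) {
        return;
    }
    if (n->in_flight > 0) {
        n->in_flight--;
        return;
    }
    for (int i = 0; i < MAX_CHILDREN; i++) {
        poll_once(n->children[i]);
    }
}

int main(void)
{
    Node leaf = { .in_flight = 2 };
    Node root = { .in_flight = 1, .children = { &leaf } };

    /* One polling loop for the whole subtree; the recursion sits in the
     * loop condition, so it is re-evaluated after every event. */
    while (subtree_busy(&root)) {
        poll_once(&root);
    }
    printf("drained: root=%d leaf=%d\n", root.in_flight, leaf.in_flight);
    return 0;
}
```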
-
Committed by Max Reitz
This patch adds two bdrv-drain tests for what happens if some BDS goes away during the drainage. The basic idea is that you have a parent BDS with some child nodes. Then, you drain one of the children. Because of that, the party who actually owns the parent decides to (A) delete it, or (B) detach all its children from it -- both while the child is still being drained. A real-world case where this can happen is the mirror block job, which may exit if you drain one of its children.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Kevin Wolf
For bdrv_drain(), recursively waiting for child node requests is pointless because we didn't quiesce their parents, so new requests could come in anyway. Letting the function work only on a single node makes it more consistent. For subtree drains and drain_all, we already have the recursion in bdrv_do_drained_begin(), so the extra recursion doesn't add anything either. Remove the useless code.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
We already requested that block jobs be paused in .bdrv_drained_begin, but no guarantee was made that the job was actually inactive at the point where bdrv_drained_begin() returned. This introduces a new callback, BdrvChildRole.bdrv_drained_poll(), and uses it to make bdrv_drain_poll() consider block jobs using the node to be drained. For the test case to work as expected, we have to switch from block_job_sleep_ns() to qemu_co_sleep_ns() so that the test job is even considered active and must be waited for when draining the node.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
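The shape of such a .bdrv_drained_poll()-style hook can be sketched generically. The example below is a self-contained illustration under invented names (it is not the QEMU API): each parent exposes a poll callback that reports whether it still has activity, and the drain loop keeps polling until every callback reports quiescence.

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct Parent Parent;

/* A parent of the drained node: it reports via drained_poll() whether it is
 * still active, similar in spirit to the new callback described above. */
struct Parent {
    bool (*drained_poll)(Parent *p);   /* true => still busy */
    int pending_steps;                 /* work left before it is quiescent */
};

static bool job_drained_poll(Parent *p)
{
    return p->pending_steps > 0;
}

/* Pretend event-loop step: let the job progress towards its pause point. */
static void poll_once(Parent *p)
{
    if (p->pending_steps > 0) {
        p->pending_steps--;
    }
}

/* Drain does not return until every parent reports quiescence. */
static void drained_begin(Parent *parents[], int n)
{
    bool busy = true;
    while (busy) {
        busy = false;
        for (int i = 0; i < n; i++) {
            if (parents[i]->drained_poll(parents[i])) {
                poll_once(parents[i]);
                busy = true;
            }
        }
    }
}

int main(void)
{
    Parent job = { .drained_poll = job_drained_poll, .pending_steps = 3 };
    Parent *list[] = { &job };

    drained_begin(list, 1);
    printf("job quiescent: %s\n", job.pending_steps == 0 ? "yes" : "no");
    return 0;
}
```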
-
Committed by Kevin Wolf
Commit 91af091f added an additional aio_poll() to BDRV_POLL_WHILE() in order to make sure that all pending BHs are executed on drain. This was the wrong place to make the fix, as it is useless overhead for all other users of the macro and unnecessarily complicates the mechanism. This patch effectively reverts said commit (the context has changed a bit and the code has moved to AIO_WAIT_WHILE()) and instead polls in the loop condition for drain. The effect is probably hard to measure in any real-world use case because actual I/O will dominate, but if I run only the initialisation part of 'qemu-img convert', where it calls bdrv_block_status() for the whole image to find out how much data there is to copy, this phase actually needs only roughly half the time after this patch.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
Since we use bdrv_do_drained_begin/end() for bdrv_drain_all_begin/end(), coroutine context is automatically left with a BH, preventing the deadlocks that made bdrv_drain_all*() unsafe in coroutine context. Now that we have even removed the old polling code as dead code, it's obvious that it's compatible. Enable the coroutine test cases for bdrv_drain_all().
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
All involved nodes are already idle; we called bdrv_do_drain_begin() on them. The comment in the code suggested that this was not correct because the completion of a request on one node could spawn a new request on a different node (which might have been drained before, so we wouldn't drain the new request). In reality, new requests to different nodes aren't spawned out of nothing, but only in the context of a parent request, and they aren't submitted to random nodes, but only to child nodes. As long as we still poll for the completion of the parent request (which we do), draining each root node separately is good enough. Remove the additional polling code from bdrv_drain_all_begin() and replace it with an assertion that all nodes are already idle after we drained them separately.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
All callers pass false for the 'recursive' parameter now. Remove it.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
bdrv_do_drain_begin/end() already implement everything that bdrv_drain_all_begin/end() need and currently still do manually: disable external events, call parent drain callbacks, call block driver callbacks. bdrv_do_drain_begin/end() also does two more things. The first is incrementing bs->quiesce_counter; bdrv_drain_all() already stood out in the test case by behaving differently from the other drain variants, so adding this is not only safe, but in fact a bug fix. The second is calling bdrv_drain_recurse(); we already do that later in the same function in a loop, so basically doing an early first iteration doesn't hurt.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
As long as nobody keeps the other I/O thread from working, there is no reason why bdrv_drain() wouldn't work with cross-AioContext events. The key is that the root request we're waiting for is in the AioContext we're polling (which it always is for bdrv_drain()) so that aio_poll() is woken up in the end. Add a test case that shows that it works. Remove the comment in bdrv_drain() that claims otherwise.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
- 16 Jun 2018, 2 commits
-
-
Committed by Peter Maydell
Migration pull 2018-06-15

# gpg: Signature made Fri 15 Jun 2018 16:13:17 BST
# gpg: using RSA key 0516331EBC5BFDE7
# gpg: Good signature from "Dr. David Alan Gilbert (RH2) <dgilbert@redhat.com>"
# Primary key fingerprint: 45F5 C71B 4A0C B7FB 977A 9FA9 0516 331E BC5B FDE7

* remotes/dgilbert/tags/pull-migration-20180615a:
  migration: calculate expected_downtime with ram_bytes_remaining()
  migration/postcopy: Wake rate limit sleep on postcopy request
  migration: Wake rate limiting for urgent requests
  migration/postcopy: Add max-postcopy-bandwidth parameter
  migration: introduce migration_update_rates
  migration: fix counting xbzrle cache_miss_rate
  migration/block-dirty-bitmap: fix dirty_bitmap_load
  migration: Poison ramblock loops in migration
  migration: Fixes for non-migratable RAMBlocks
  typedefs: add QJSON

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Committed by Peter Maydell
Merge remote-tracking branch 'remotes/edgar/tags/edgar/xilinx-next-2018-06-15.for-upstream' into staging

xilinx-next-2018-06-15.for-upstream

# gpg: Signature made Fri 15 Jun 2018 15:32:47 BST
# gpg: using RSA key 29C596780F6BCA83
# gpg: Good signature from "Edgar E. Iglesias (Xilinx key) <edgar.iglesias@xilinx.com>"
# gpg: aka "Edgar E. Iglesias <edgar.iglesias@gmail.com>"
# Primary key fingerprint: AC44 FEDC 14F7 F1EB EDBF 4151 29C5 9678 0F6B CA83

* remotes/edgar/tags/edgar/xilinx-next-2018-06-15.for-upstream:
  target-microblaze: Rework NOP/zero instruction handling
  target-microblaze: mmu: Correct masking of output addresses

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
- 15 Jun 2018, 15 commits
-
-
Committed by Peter Maydell
Block layer patches:

- Fix options that work only with -drive or -blockdev, but not with both, because of QDict type confusion
- rbd: Add options 'auth-client-required' and 'key-secret'
- Remove deprecated -drive options serial/addr/cyls/heads/secs/trans
- rbd, iscsi: Remove deprecated 'filename' option
- Fix 'qemu-img map' crash with unaligned image size
- Improve QMP documentation for jobs

# gpg: Signature made Fri 15 Jun 2018 15:20:03 BST
# gpg: using RSA key 7F09B272C88F2FD6
# gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>"
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6

* remotes/kevin/tags/for-upstream: (26 commits)
  block: Remove dead deprecation warning code
  block: Remove deprecated -drive option serial
  block: Remove deprecated -drive option addr
  block: Remove deprecated -drive geometry options
  rbd: New parameter key-secret
  rbd: New parameter auth-client-required
  block: Fix -blockdev / blockdev-add for empty objects and arrays
  check-block-qdict: Cover flattening of empty lists and dictionaries
  check-block-qdict: Rename qdict_flatten()'s variables for clarity
  block-qdict: Simplify qdict_is_list() some
  block-qdict: Clean up qdict_crumple() a bit
  block-qdict: Tweak qdict_flatten_qdict(), qdict_flatten_qlist()
  block-qdict: Simplify qdict_flatten_qdict()
  block: Make remaining uses of qobject input visitor more robust
  block: Factor out qobject_input_visitor_new_flat_confused()
  block: Clean up a misuse of qobject_to() in .bdrv_co_create_opts()
  block: Fix -drive for certain non-string scalars
  block: Fix -blockdev for certain non-string scalars
  qobject: Move block-specific qdict code to block-qdict.c
  block: Add block-specific QDict header
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Committed by Peter Maydell
target-arm and miscellaneous queue:

* fix KVM state save/restore for GICv3 priority registers for high IRQ numbers
* hw/arm/mps2-tz: Put ethernet controller behind PPC
* hw/sh/sh7750: Convert away from old_mmio
* hw/m68k/mcf5206: Convert away from old_mmio
* hw/block/pflash_cfi02: Convert away from old_mmio
* hw/watchdog/wdt_i6300esb: Convert away from old_mmio
* hw/input/pckbd: Convert away from old_mmio
* hw/char/parallel: Convert away from old_mmio
* armv7m: refactor to get rid of armv7m_init() function
* arm: Don't crash if user tries to use a Cortex-M CPU without an NVIC
* hw/core/or-irq: Support more than 16 inputs to an OR gate
* cpu-defs.h: Document CPUIOTLBEntry 'addr' field
* cputlb: Pass cpu_transaction_failed() the correct physaddr
* CODING_STYLE: Define our preferred form for multiline comments
* Add and use new stn_*_p() and ldn_*_p() memory access functions
* target/arm: More parts of the upcoming SVE support
* aspeed_scu: Implement RNG register
* m25p80: add support for two bytes WRSR for Macronix chips
* exec.c: Handle IOMMUs being in the path of TCG CPU memory accesses
* target/arm: Allow ARMv6-M Thumb2 instructions

# gpg: Signature made Fri 15 Jun 2018 15:24:03 BST
# gpg: using RSA key 3C2525ED14360CDE
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>"
# gpg: aka "Peter Maydell <pmaydell@gmail.com>"
# gpg: aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>"
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE

* remotes/pmaydell/tags/pull-target-arm-20180615: (43 commits)
  target/arm: Allow ARMv6-M Thumb2 instructions
  exec.c: Handle IOMMUs in address_space_translate_for_iotlb()
  iommu: Add IOMMU index argument to translate method
  iommu: Add IOMMU index argument to notifier APIs
  iommu: Add IOMMU index concept to IOMMU API
  m25p80: add support for two bytes WRSR for Macronix chips
  aspeed_scu: Implement RNG register
  target/arm: Implement SVE Floating Point Arithmetic - Unpredicated Group
  target/arm: Implement SVE Integer Wide Immediate - Unpredicated Group
  target/arm: Implement FDUP/DUP
  target/arm: Implement SVE Integer Compare - Scalars Group
  target/arm: Implement SVE Predicate Count Group
  target/arm: Implement SVE Partition Break Group
  target/arm: Implement SVE Integer Compare - Immediate Group
  target/arm: Implement SVE Integer Compare - Vectors Group
  target/arm: Implement SVE Select Vectors Group
  target/arm: Implement SVE vector splice (predicated)
  target/arm: Implement SVE reverse within elements
  target/arm: Implement SVE copy to vector (predicated)
  target/arm: Implement SVE conditionally broadcast/extract element
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Committed by Julia Suvorova
ARMv6-M supports 6 Thumb2 instructions. This patch checks for these instructions and allows their execution. Like Thumb2 cores, ARMv6-M always interprets the BL instruction as 32-bit. This patch is required for future Cortex-M0 support.
Signed-off-by: Julia Suvorova <jusual@mail.ru>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20180612204632.28780-1-jusual@mail.ru
[PMM: move armv6m_insn[] and armv6m_mask[] closer to point of use, and mark 'const'. Check for M-and-not-v7 rather than M-and-6.]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
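A table-driven check like the one the commit describes (an armv6m_insn[]/armv6m_mask[] pair) boils down to testing (insn & mask) == pattern against a whitelist. The sketch below shows that shape with placeholder values; they are not the real ARMv6-M encodings.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Placeholder pattern/mask pairs -- NOT the real ARMv6-M encodings. */
static const uint32_t insn_pat[]  = { 0xf3b08040u, 0xf7f0a000u };
static const uint32_t insn_mask[] = { 0xfff0f0c0u, 0xf800d000u };

/* An instruction is accepted if it matches any whitelisted pattern. */
static bool insn_allowed(uint32_t insn)
{
    for (size_t i = 0; i < sizeof(insn_pat) / sizeof(insn_pat[0]); i++) {
        if ((insn & insn_mask[i]) == insn_pat[i]) {
            return true;
        }
    }
    return false;
}

int main(void)
{
    printf("allowed: %d\n", insn_allowed(0xf3b08040u));  /* matches entry 0 */
    printf("allowed: %d\n", insn_allowed(0x00000000u));  /* matches nothing */
    return 0;
}
```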
-
Committed by Peter Maydell
Currently we don't support board configurations that put an IOMMU in the path of the CPU's memory transactions, and instead just assert() if the memory region found in address_space_translate_for_iotlb() is an IOMMUMemoryRegion. Remove this limitation by having the function handle IOMMUs. This is mostly straightforward, but we must make sure we have a notifier registered for every IOMMU that a transaction has passed through, so that we can flush the TLB appropriately when any of the IOMMUs change their mappings.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180604152941.20374-5-peter.maydell@linaro.org
-
Committed by Peter Maydell
Add an IOMMU index argument to the translate method of IOMMUs. Since all of our current IOMMU implementations support only a single IOMMU index, this has no effect on the behaviour.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180604152941.20374-4-peter.maydell@linaro.org
-
Committed by Peter Maydell
Add support for multiple IOMMU indexes to the IOMMU notifier APIs. When initializing a notifier with iommu_notifier_init(), the caller must pass the IOMMU index that it is interested in. When a change happens, the IOMMU implementation must pass memory_region_notify_iommu() the IOMMU index that has changed and that notifiers must be called for. IOMMUs which support only a single index don't need to change. Callers which only really support working with IOMMUs with a single index can use the result of passing MEMTXATTRS_UNSPECIFIED to memory_region_iommu_attrs_to_index().
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180604152941.20374-3-peter.maydell@linaro.org
-
Committed by Peter Maydell
If an IOMMU supports mappings that care about the memory transaction attributes, then it no longer has a unique address -> output mapping, but more than one. We can represent these using an IOMMU index, analogous to TCG's mmu indexes.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180604152941.20374-2-peter.maydell@linaro.org
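The index idea can be shown with a toy model (not QEMU's API; all names and values here are illustrative): transaction attributes are first mapped to a small integer, and each integer selects its own address-to-output mapping, so one IOMMU can expose several translations.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Toy transaction attributes: here only a "secure" bit matters. */
typedef struct Attrs { unsigned secure : 1; } Attrs;

enum { IDX_NONSECURE = 0, IDX_SECURE = 1, IDX_COUNT = 2 };

/* Map attributes to an IOMMU index. */
static int attrs_to_index(Attrs a)
{
    return a.secure ? IDX_SECURE : IDX_NONSECURE;
}

/* Each index has its own address -> output mapping; here just an offset. */
static const uint64_t output_base[IDX_COUNT] = { 0x10000000u, 0x20000000u };

/* translate() takes the index as an extra argument and uses it to pick
 * which of the mappings applies. */
static uint64_t translate(uint64_t addr, int idx)
{
    return output_base[idx] + addr;
}

int main(void)
{
    Attrs secure = { .secure = 1 };
    Attrs normal = { .secure = 0 };

    printf("secure:    0x%" PRIx64 "\n", translate(0x1000, attrs_to_index(secure)));
    printf("nonsecure: 0x%" PRIx64 "\n", translate(0x1000, attrs_to_index(normal)));
    return 0;
}
```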
-
Committed by Cédric Le Goater
On Macronix chips, two bytes can be written to the WRSR. The first byte configures the status register and the second the configuration register. It is important to save the configuration value, as it contains the dummy cycle setting used in dual or quad I/O mode.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
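A simplified model of the behaviour being added might look like the following (a sketch with invented names, not the actual m25p80 code): the first data byte of a WRSR programs the status register, and an optional second byte programs the configuration register, which has to be retained because it carries the dummy-cycle setting.

```c
#include <stdint.h>
#include <stdio.h>

typedef struct Flash {
    uint8_t status_reg;
    uint8_t config_reg;   /* holds e.g. the dummy-cycle configuration */
    int wrsr_byte;        /* how many data bytes seen in the current WRSR */
} Flash;

/* Feed one data byte of a WRSR command into the model: byte 0 programs the
 * status register, byte 1 (if present) programs the configuration register. */
static void wrsr_data(Flash *f, uint8_t byte)
{
    if (f->wrsr_byte == 0) {
        f->status_reg = byte;
    } else if (f->wrsr_byte == 1) {
        f->config_reg = byte;
    }
    f->wrsr_byte++;
}

int main(void)
{
    Flash f = { 0 };

    wrsr_data(&f, 0x40);   /* first byte: status register */
    wrsr_data(&f, 0x07);   /* second byte: configuration register */
    printf("SR=0x%02x CR=0x%02x\n", f.status_reg, f.config_reg);
    return 0;
}
```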
-
Committed by Joel Stanley
The ASPEED SoCs contain a single register that returns random data when read. This models that register so that guests can use it. The random number data register has a corresponding control register; however, it returns data regardless of the state of the enabled bit, so the model follows this behaviour. When the qcrypto call fails we exit, as the guest uses the random number device to feed its entropy pool, which is used for cryptographic purposes.
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Joel Stanley <joel@jms.id.au>
Message-id: 20180613114836.9265-1-joel@jms.id.au
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
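The modelled behaviour, returning fresh data on every read of the data register regardless of the enable bit, can be sketched as follows (illustrative stand-alone code, not the QEMU device model; rand() merely stands in for the qcrypto random source).

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct Rng {
    uint32_t control;   /* contains an enable bit, ignored for data reads */
} Rng;

/* Read of the data register: returns fresh random bytes no matter what the
 * control register says, matching the behaviour described above. */
static uint32_t rng_data_read(Rng *r)
{
    (void)r->control;
    uint32_t v = 0;
    for (int i = 0; i < 4; i++) {
        v = (v << 8) | (uint32_t)(rand() & 0xff);
    }
    return v;
}

int main(void)
{
    Rng r = { .control = 0 };            /* "disabled", yet reads return data */

    printf("0x%08" PRIx32 "\n", rng_data_read(&r));
    printf("0x%08" PRIx32 "\n", rng_data_read(&r));
    return 0;
}
```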
-
Committed by Richard Henderson
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Committed by Richard Henderson
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Committed by Richard Henderson
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Committed by Richard Henderson
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-16-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Committed by Richard Henderson
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Committed by Richard Henderson
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-