- 15 May 2018, 1 commit

Submitted by Daniel Henrique Barboza
blk_get_aio_context() checks whether BlockDriverState bs is non-NULL, returning bdrv_get_aio_context(bs) if it is and qemu_get_aio_context() otherwise. However, bdrv_get_aio_context() in block.c already performs this check itself, also returning qemu_get_aio_context() if bs is NULL:

    AioContext *bdrv_get_aio_context(BlockDriverState *bs)
    {
        return bs ? bs->aio_context : qemu_get_aio_context();
    }

This patch simplifies blk_get_aio_context() to simply call bdrv_get_aio_context() instead of replicating the same logic.

Signed-off-by: Daniel Henrique Barboza <danielhb@linux.vnet.ibm.com>
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
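
The resulting helper presumably collapses to a one-liner along these lines (a sketch, assuming blk_bs() is the accessor used to fetch the BlockBackend's root BlockDriverState, which may be NULL):

    /* Sketch of the simplified helper: blk_bs() may return NULL, and
     * bdrv_get_aio_context() already handles that case. */
    AioContext *blk_get_aio_context(BlockBackend *blk)
    {
        return bdrv_get_aio_context(blk_bs(blk));
    }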

- 11 May 2018, 2 commits

Submitted by Stefan Hajnoczi
mincore(2) checks whether pages are resident. Use it to verify that page cache has been dropped.

You can trigger a verification failure by mmapping the image file from another process that loads a byte from a page, forcing it to become resident. bdrv_co_invalidate_cache() will fail while that process is alive.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Message-id: 20180427162312.18583-3-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
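
As a rough standalone illustration of the mincore(2) technique (not the patch's actual code): ask the kernel which pages of a mapping are resident; a set low bit in the result vector means the page is still in the page cache.

    #include <stdlib.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Return 1 if any page of the mapping [addr, addr + length) is resident,
     * 0 if none are, -1 on error. */
    static int any_page_resident(void *addr, size_t length)
    {
        size_t page = (size_t)sysconf(_SC_PAGESIZE);
        size_t pages = (length + page - 1) / page;
        unsigned char *vec = malloc(pages);
        int resident = 0;

        if (!vec) {
            return -1;
        }
        if (mincore(addr, length, vec) != 0) {
            free(vec);
            return -1;
        }
        for (size_t i = 0; i < pages; i++) {
            if (vec[i] & 1) {        /* low bit set: page is in core */
                resident = 1;
                break;
            }
        }
        free(vec);
        return resident;
    }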

Submitted by Stefan Hajnoczi
On Linux, posix_fadvise(POSIX_FADV_DONTNEED) invalidates pages*. Use this to drop the page cache on the destination host during shared storage migration. This way the destination host will read the latest copy of the data and will not use stale data from the page cache.

The flow is as follows:

1. Source host writes out all dirty pages and inactivates drives.
2. QEMU_VM_EOF is sent on the migration stream.
3. Destination host invalidates caches before accessing drives.

This patch enables live migration even with -drive cache.direct=off.

* Terms and conditions may apply, please see patch for details.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Message-id: 20180427162312.18583-2-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
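
A minimal sketch of the cache-dropping idea itself (illustrative, not the patch): flush first, then advise the kernel that the cached pages for the whole file are no longer needed.

    #include <fcntl.h>
    #include <unistd.h>

    /* Best-effort page cache drop for an open file descriptor. */
    static int drop_page_cache(int fd)
    {
        if (fsync(fd) < 0) {        /* don't discard dirty data */
            return -1;
        }
        /* offset 0, len 0 means "from the start to the end of the file" */
        return posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
    }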

- 08 May 2018, 3 commits

Submitted by Kevin Wolf
Both the option string for the 'redundancy' option and the SheepdogRedundancy object that is created accordingly could be leaked in error paths. This fixes the memory leaks.

Reported by Coverity (CID 1390614 and 1390641).

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-id: 20180503153509.22223-1-kwolf@redhat.com
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>

Submitted by Max Reitz
Commit b76e4458 made the mirror block job respect block-job-cancel's @force flag: with that flag set, it would now always really cancel, even post-READY.

Unfortunately, it had a side effect: without that flag set, it would now never cancel, not even before READY. Considering that this is an incompatible change and not noted anywhere in the commit or in the description of block-job-cancel's @force parameter, it seems unintentional and we should revert to the previous behavior, which is to immediately cancel the job when block-job-cancel is called before source and target are in sync (i.e. before the READY event).

Cc: qemu-stable@nongnu.org
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1572856
Reported-by: Yanan Fu <yfu@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20180501220509.14152-2-mreitz@redhat.com
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>

Submitted by Stefan Hajnoczi
Commit b76e4458 ("block/mirror: change the semantic of 'force' of block-job-cancel") accidentally removed the ratelimit in the mirror job.

Reintroduce the ratelimit but keep the block-job-cancel force=true behavior that was added in commit b76e4458. Note that block_job_sleep_ns() returns immediately when the job is cancelled. Therefore it's safe to unconditionally call block_job_sleep_ns() - a cancelled job does not sleep.

This commit fixes the non-deterministic qemu-iotests 185 output. The test relies on the ratelimit to make the job sleep until the 'quit' command is processed. Previously the job could complete before the 'quit' command was received since there was no ratelimit.

Cc: Liang Li <liliang.opensource@gmail.com>
Cc: Jeff Cody <jcody@redhat.com>
Cc: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20180424123527.19168-1-stefanha@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>

- 04 May 2018, 3 commits

Submitted by Eric Blake
The NBD spec is proposing a relaxation of NBD_CMD_BLOCK_STATUS where a server may have the final extent per context give a length beyond the original request, if it can easily prove that subsequent bytes have the same status, on the grounds that a client can take advantage of this information for fewer block status requests. Since qemu 2.12 as a client always sends NBD_CMD_FLAG_REQ_ONE, and rejects a server that sends extra length, the upstream NBD spec will probably limit this behavior to clients that don't request REQ_ONE semantics; but it doesn't hurt to relax qemu to always be permissive of this server behavior, even if it continues to use REQ_ONE.

CC: qemu-stable@nongnu.org
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180503222626.1303410-1-eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

Submitted by Marc-André Lureau
For convenience and clarity, make it possible to call qobject_ref() at the time when the reference is associated with a variable or argument, by making qobject_ref() return the same pointer as given. Use that to simplify the callers.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180419150145.24795-5-marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
[Useless change to qobject_ref_impl() dropped, commit message improved slightly]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
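
The convenience this enables is taking the reference right where the pointer is stored - an illustrative call site (the struct and field names are hypothetical):

    #include "qapi/qmp/qobject.h"    /* QEMU header providing QObject and qobject_ref() */

    typedef struct Example {
        QObject *options;
    } Example;

    static void copy_options(Example *dst, Example *src)
    {
        /* Before: two statements, reference taken separately:
         *     dst->options = src->options;
         *     qobject_ref(dst->options);
         * After: qobject_ref() returns its argument, so both happen at once. */
        dst->options = qobject_ref(src->options);
    }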

Submitted by Marc-André Lureau
Now that we can safely call QOBJECT() on QObject * as well as its subtypes, we can have macros qobject_ref() / qobject_unref() that work everywhere instead of having to use QINCREF() / QDECREF() for QObject and qobject_incref() / qobject_decref() for its subtypes.

The replacement is mechanical, except that I broke a long line and added a cast in monitor_qmp_cleanup_req_queue_locked(). Unlike qobject_decref(), qobject_unref() doesn't accept void *.

Note that the new macros evaluate their argument exactly once, thus no need to shout them.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180419150145.24795-4-marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
[Rebased, semantic conflict resolved, commit message improved]
Signed-off-by: Markus Armbruster <armbru@redhat.com>

- 16 April 2018, 1 commit

Checking for reopen by the existence of some bitmaps is wrong, as these may be other bitmaps, or, on the other hand, the user may have removed bitmaps. This criterion is bad. To simplify things and make the behavior more predictable, let's just add a flag to remember that we have already tried to load bitmaps on open and do not want to do it again.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-id: 20180411122606.367301-2-vsementsov@virtuozzo.com
[mreitz: Changed comment wording according to Eric Blake's suggestion]
Signed-off-by: Max Reitz <mreitz@redhat.com>

- 10 April 2018, 1 commit

Submitted by Kevin Wolf
Streaming and the commit block job only want to apply throttling when they actually copied data instead of skipping it, so they made the calculation of delay_ns conditional. However, delay_ns isn't reset when skipping some sectors, so instead of not waiting, the old delay is applied again.

Properly reset delay_ns where needed.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
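
The bug pattern and its fix look roughly like this (a schematic sketch with hypothetical helper names, not the actual job code):

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical stand-ins for the job's throttling and sleeping primitives. */
    int64_t compute_throttle_delay(uint64_t bytes_copied);
    void job_sleep_ns(int64_t ns);

    static void throttle_step(bool copied_data, uint64_t bytes, int64_t *delay_ns)
    {
        if (copied_data) {
            *delay_ns = compute_throttle_delay(bytes);
        } else {
            /* The fix: clear any stale delay when data was only skipped,
             * instead of re-applying the previous one on the next sleep. */
            *delay_ns = 0;
        }
        job_sleep_ns(*delay_ns);
    }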

- 05 April 2018, 1 commit

Submitted by Jeff Cody
Commit 4bfb2741 added some QAPIfication of option parsing in qemu_rbd_open(). We need to remove all the options we processed, otherwise in bdrv_open_inherit() we will think the remaining options are invalid.

(This needs to go in 2.12 to avoid a regression that prevents rbd from being opened.)

Suggested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>

- 03 April 2018, 4 commits

Submitted by Max Reitz
Storing the lseek() result in an int results in it overflowing when the file is at least 2 GB big. Then we have a 50 % chance of the result being "negative" and thus thinking an error occurred when actually everything went just fine. So we should use the correct type for storing the result: off_t.

Reported-by: Daniel P. Berrange <berrange@redhat.com>
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1549231
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180228131315.30194-2-mreitz@redhat.com
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
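
The general shape of the fix (an illustrative sketch, not the file-posix.c code itself):

    #define _GNU_SOURCE              /* for SEEK_DATA */
    #include <sys/types.h>
    #include <unistd.h>

    /* Find the next data offset at or after 'start'; returns -1 on error/EOF.
     * Using off_t (not int) keeps offsets >= 2 GB from being truncated. */
    static off_t next_data_offset(int fd, off_t start)
    {
        off_t data = lseek(fd, start, SEEK_DATA);
        if (data < 0) {
            /* With an int here, a perfectly valid large offset could also
             * look negative and be mistaken for an lseek() failure. */
            return -1;
        }
        return data;
    }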

Submitted by Kevin Wolf
The legacy command line interface gets the socket path from an option called 'socket'. QAPI in contrast uses SocketAddress, where the corresponding option is called 'path'.

Fix the gluster block driver to accept both 'socket' and 'path', with 'path' being the preferred syntax.

https://bugzilla.redhat.com/show_bug.cgi?id=1545155

Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-id: 20180403110810.25624-1-kwolf@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>

Submitted by Jeff Cody
In commit 223a23c1, we implemented a workaround in the gluster driver to handle invalid values returned for SEEK_DATA or SEEK_HOLE.

In some instances, these same invalid values can be seen in the posix file handler as well - for example, it has been reported on FUSE gluster mounts.

Calling assert() for these invalid values is overly harsh; we can safely return -EIO and allow this case to be treated as a "learned nothing" case (e.g., D4 / H4, as commented in the code).

This patch does the same thing that 223a23c1 did for gluster.c, except in file-posix.c.

Signed-off-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

Submitted by Kevin Wolf
The legacy command line interface gets the socket path from an option called 'socket'. QAPI in contrast uses SocketAddress, where the corresponding option is called 'path'.

Fix the gluster block driver to accept both 'socket' and 'path', with 'path' being the preferred syntax.

https://bugzilla.redhat.com/show_bug.cgi?id=1545155

Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>

- 02 April 2018, 1 commit

Submitted by Eric Blake
iotests 123 and 209 fail on 32-bit platforms. The culprit: sizeof(extent) is wrong; we want sizeof(*extent). But since the struct is 8 bytes, it happened to work on 64-bit platforms where the pointer is also 8 bytes (nasty).

Fixes: 78a33ab5
Reported-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180327210517.1804242-1-eblake@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
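
The pitfall in its general form (an illustrative sketch, not the NBD client code in question):

    #include <string.h>

    struct extent {
        unsigned int length;
        unsigned int flags;
    };                               /* 8 bytes */

    static void clear_extent(struct extent *extent)
    {
        /* Wrong: size of the pointer - 4 bytes on 32-bit hosts, 8 on 64-bit,
         * which only coincidentally matches the struct on 64-bit. */
        memset(extent, 0, sizeof(extent));

        /* Right: size of the pointed-to struct, on every platform. */
        memset(extent, 0, sizeof(*extent));
    }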

- 27 March 2018, 5 commits

Submitted by Laurent Vivier
Re-run Coccinelle script scripts/coccinelle/qobject.cocci.

Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Message-Id: <20180323143202.28879-5-lvivier@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Acked-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Acked-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>

Submitted by Stefan Hajnoczi
qemu_aio_coroutine_enter() is (indirectly) called recursively when processing co_queue_wakeup. This can lead to stack exhaustion.

This patch rewrites co_queue_wakeup in an iterative fashion (instead of recursive) with bounded memory usage to prevent stack exhaustion. qemu_co_queue_run_restart() is inlined into qemu_aio_coroutine_enter() and the qemu_coroutine_enter() call is turned into a loop to avoid recursion.

There is one change that is worth mentioning: previously, when coroutine A queued coroutine B, qemu_co_queue_run_restart() entered coroutine B from coroutine A. If A was terminating then it would still stay alive until B yielded. After this patch B is entered by A's parent so that A can be deleted immediately if it is terminating.

It is safe to make this change since B could never interact with A if it was terminating anyway.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20180322152834.12656-3-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
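
The recursion-to-loop transformation in schematic form (a conceptual sketch with hypothetical names, not QEMU's coroutine code):

    typedef struct Coroutine Coroutine;

    /* Hypothetical helpers standing in for the real coroutine internals. */
    void run_coroutine(Coroutine *co);      /* run until it yields or terminates */
    Coroutine *wakeup_queue_pop(void);      /* next queued coroutine, or NULL */

    /* Recursive shape (can exhaust the stack): entering a coroutine that queues
     * another one re-enters from within the first one's frame, and so on.
     * Iterative shape: the entering context drains the wakeup queue in a loop,
     * so the stack depth stays constant no matter how long the chain is. */
    void enter_and_drain(Coroutine *co)
    {
        while (co != NULL) {
            run_coroutine(co);
            co = wakeup_queue_pop();
        }
    }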

Submitted by yuchenlin
VMDK has a hard limitation on extent size, which is due to the fact that a grain table entry is only 32 bits wide; it can therefore only point to a grain located at an offset of up to 2^32. To avoid writing user data beyond this limitation and recording a useless offset in the grain table, we should return an error here.

Signed-off-by: yuchenlin <yuchenlin@synology.com>
Message-id: 20180322133337.28024-1-yuchenlin@synology.com
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>

On reopen with existing bitmaps, instead of loading the bitmaps, let's reopen them if needed. This also fixes bitmap migration through shared storage.

Consider this case: persistent bitmaps are stored on bdrv_inactivate. Then, on the destination, process_incoming_migration_bh() calls bdrv_invalidate_cache_all(), which leads to qcow2_load_autoloading_dirty_bitmaps(), which fails if the bitmaps are already loaded on destination start.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-id: 20180320170521.32152-3-vsementsov@virtuozzo.com
Signed-off-by: Max Reitz <mreitz@redhat.com>

Add a version of qcow2_reopen_bitmaps_rw which does the same work but also returns a hint about whether the header was updated or not. This will be used in the following fix for bitmap reloading after migration.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180320170521.32152-2-vsementsov@virtuozzo.com
Signed-off-by: Max Reitz <mreitz@redhat.com>

- 26 March 2018, 12 commits

Submitted by Kevin Wolf
It's unclear what the real maximum is, but we use a uint32_t to store the log size in vhdx_co_create(), so we should check that the given value fits in 32 bits.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>

Submitted by Kevin Wolf
error_setg_errno() is meant for cases where we got an errno from the OS that can add useful extra information to an error message. It's pointless if we pass a constant errno; these cases should use plain error_setg().

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>

Submitted by Kevin Wolf
Images with a non-power-of-two block size are invalid and cannot be opened. Reject such block sizes when creating an image.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
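
The classic bit trick for such a check (a generic sketch; QEMU also has an is_power_of_2() helper, but this shows the underlying test):

    #include <stdbool.h>
    #include <stdint.h>

    /* True iff n has exactly one bit set, i.e. n is a power of two. */
    static bool is_power_of_two(uint64_t n)
    {
        return n != 0 && (n & (n - 1)) == 0;
    }

    /* During image creation: if (!is_power_of_two(block_size)) reject the
     * value with an error instead of writing out an invalid image. */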

Submitted by Kevin Wolf
It's unclear what the real maximum cluster size is for the Parallels format, but let's at least make sure that we don't get integer overflows in our .bdrv_co_create implementation.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>

Submitted by Kevin Wolf
Commit e39e959e fixed an invalid assertion in the .bdrv_length implementation, but left a similar assertion in place for .bdrv_truncate. Instead of crashing when the user requests a too large image size, fail gracefully. A file size of exactly INT64_MAX caused failure before, but is actually legal.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

Submitted by Kevin Wolf
Use qemu_uuid_unparse() instead of uuid_unparse() to make vdi.c compile again when CONFIG_VDI_DEBUG is set. In order to prevent future bitrot, replace '#ifdef CONFIG_VDI_DEBUG' with 'if (VDI_DEBUG)' so that the compiler always sees the code.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
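
The anti-bitrot pattern in its general form (a generic sketch, not the vdi.c macros themselves): keep debug statements inside an always-compiled if so the compiler still parses and type-checks them when debugging is off.

    #include <stdio.h>

    #ifdef CONFIG_EXAMPLE_DEBUG          /* hypothetical config switch */
    #define EXAMPLE_DEBUG 1
    #else
    #define EXAMPLE_DEBUG 0
    #endif

    /* The body is always seen by the compiler; when EXAMPLE_DEBUG is 0 the
     * dead branch is simply optimized away. */
    #define DPRINTF(...)                          \
        do {                                      \
            if (EXAMPLE_DEBUG) {                  \
                fprintf(stderr, __VA_ARGS__);     \
            }                                     \
        } while (0)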

Submitted by Kevin Wolf
What static=on really does is what we call metadata preallocation for other block drivers. While we can still change the QMP interface, make it more consistent by using 'preallocation' for VDI, too.

This doesn't implement any new functionality, so the only supported preallocation modes are 'off' and 'metadata' for now.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>

Submitted by Alberto Garcia
When we try to allocate new clusters we first look for available ones starting from s->free_cluster_index and once we find them we increase their reference counts. Before we get to call update_refcount() to do this last step, s->free_cluster_index is already pointing to the next cluster after the ones we are trying to allocate.

During update_refcount() it may happen however that we also need to allocate a new refcount block in order to store the refcounts of these new clusters (and to complicate things further that may also require us to grow the refcount table). After all this we don't know if the clusters that we originally tried to allocate are still available, so we return -EAGAIN to ask the caller to restart the search for free clusters.

This is what can happen in a common scenario:

1) We want to allocate a new cluster and we see that cluster N is free.

2) We try to increase N's refcount but all refcount blocks are full, so we allocate a new one at N+1 (where s->free_cluster_index was pointing at).

3) Once we're done we return -EAGAIN to look again for a free cluster, but now s->free_cluster_index points at N+2, so that's the one we allocate. Cluster N remains unallocated and we have a hole in the qcow2 file.

This can be reproduced easily:

    qemu-img create -f qcow2 -o cluster_size=512 hd.qcow2 1M
    qemu-io -c 'write 0 124k' hd.qcow2

After this the image has 132608 bytes (256 clusters), and the refcount block is full. If we write 512 more bytes it should allocate two new clusters: the data cluster itself and a new refcount block.

    qemu-io -c 'write 124k 512' hd.qcow2

However the image has now three new clusters (259 in total), and the first one of them is empty (and unallocated):

    dd if=hd.qcow2 bs=512c skip=256 count=1 | hexdump -C

If we write larger amounts of data in the last step instead of the 512 bytes used in this example we can create larger holes in the qcow2 file.

What this patch does is reset s->free_cluster_index to its previous value when alloc_refcount_block() returns -EAGAIN. This way the caller will try to allocate again the original clusters if they are still free.

The output of iotest 026 also needs to be updated because now that images have no holes some tests fail at a different point and the number of leaked clusters is different.

Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
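
The shape of the fix, schematically (a simplified sketch with hypothetical helper names, not the actual qcow2 refcount code):

    #include <errno.h>
    #include <stdint.h>

    typedef struct AllocState {
        int64_t free_cluster_index;
    } AllocState;

    /* Hypothetical stand-ins for the real qcow2 internals. */
    int64_t pick_free_clusters(AllocState *s, uint64_t size);  /* advances free_cluster_index */
    int increase_refcounts(AllocState *s, int64_t offset, uint64_t size);

    static int64_t alloc_clusters_sketch(AllocState *s, uint64_t size)
    {
        for (;;) {
            /* Remember where the search started before it advances. */
            int64_t first_free = s->free_cluster_index;
            int64_t offset = pick_free_clusters(s, size);

            int ret = increase_refcounts(s, offset, size);
            if (ret == -EAGAIN) {
                /* A refcount block (or a bigger refcount table) had to be
                 * allocated; rewind the index so the original candidate
                 * clusters are not skipped - skipping them is what left
                 * holes in the image before the fix. */
                s->free_cluster_index = first_free;
                continue;
            }
            return ret < 0 ? ret : offset;
        }
    }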

Submitted by Fabiano Rosas
The blkreplay driver is not a protocol so it should implement bdrv_open instead of bdrv_file_open and not provide a protocol_name. Attempts to invoke this driver using protocol syntax (i.e. blkreplay:<filename:options:...>) will now fail gracefully:

    $ qemu-img info blkreplay:foo
    qemu-img: Could not open 'blkreplay:foo': Unknown protocol 'blkreplay'

Signed-off-by: Fabiano Rosas <farosas@linux.vnet.ibm.com>
Reviewed-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
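
For context, the difference comes down to which fields a BlockDriver advertises - a schematic comparison (the field names match QEMU's BlockDriver struct of that era, while the drivers and open functions shown here are made up for illustration):

    /* A protocol driver: selectable via "some-proto:..." syntax. */
    static BlockDriver bdrv_some_protocol = {
        .format_name    = "some-proto",
        .protocol_name  = "some-proto",     /* enables protocol syntax */
        .bdrv_file_open = some_proto_open,  /* hypothetical */
    };

    /* A filter/format driver such as blkreplay after this change:
     * no protocol_name, opened only via explicit options. */
    static BlockDriver bdrv_some_filter = {
        .format_name = "some-filter",
        .bdrv_open   = some_filter_open,    /* hypothetical */
    };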

Submitted by Fabiano Rosas
The throttle driver is not a protocol so it should implement bdrv_open instead of bdrv_file_open and not provide a protocol_name. Attempts to invoke this driver using protocol syntax (i.e. throttle:<filename:options:...>) will now fail gracefully:

    $ qemu-img info throttle:foo
    qemu-img: Could not open 'throttle:foo': Unknown protocol 'throttle'

Signed-off-by: Fabiano Rosas <farosas@linux.vnet.ibm.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

Submitted by Fabiano Rosas
The quorum driver is not a protocol so it should implement bdrv_open instead of bdrv_file_open and not provide a protocol_name. Attempts to invoke this driver using protocol syntax (i.e. quorum:<filename:options:...>) will now fail gracefully:

    $ qemu-img info quorum:foo
    qemu-img: Could not open 'quorum:foo': Unknown protocol 'quorum'

Signed-off-by: Fabiano Rosas <farosas@linux.vnet.ibm.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

Submitted by Fabiano Rosas
The protocol_name field is used when selecting a driver via protocol syntax (i.e. <protocol_name>:<filename:options:...>). Drivers that are only selected explicitly (e.g. driver=replication,mode=primary,...) should not have a protocol_name.

This patch removes the protocol_name field from the bdrv_replication structure so that attempts to invoke this driver using protocol syntax will fail gracefully:

    $ qemu-img info replication:foo
    qemu-img: Could not open 'replication:': Unknown protocol 'replication'

Buglink: https://bugs.launchpad.net/qemu/+bug/1726733
Signed-off-by: Fabiano Rosas <farosas@linux.vnet.ibm.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

- 20 March 2018, 3 commits

Set (and clear) histograms through the new command block-latency-histogram-set and show the new statistics in query-blockstats results.

For now, the command is marked experimental with prefix 'x-', to gain experience with the interface without being stuck with design decisions.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20180309165212.97144-3-vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
[eblake: fix typos, mention x- prefix in commit message]
Signed-off-by: Eric Blake <eblake@redhat.com>

Introduce latency histogram statistics for block devices. For each accounted operation type, the latency region [0, +inf) is divided into subregions by several points, and hits are then counted for each subregion.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20180309165212.97144-2-vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
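
The bucketing idea in isolation (a generic sketch, not the QEMU accounting code): given ascending boundary points p[0] < p[1] < ... < p[n-1], a latency x lands in bin i with p[i-1] <= x < p[i], plus one bin below p[0] and one above the last point.

    #include <stddef.h>
    #include <stdint.h>

    /* Map a latency to one of n_points + 1 bins defined by ascending points. */
    static size_t latency_bin(uint64_t latency_ns,
                              const uint64_t *points, size_t n_points)
    {
        size_t i = 0;
        while (i < n_points && latency_ns >= points[i]) {
            i++;
        }
        return i;    /* 0 .. n_points */
    }

    /* Example: with points {10000, 50000, 100000} (ns), a 70000 ns request
     * falls into bin 2, i.e. the [50000, 100000) subregion. */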

Submitted by Max Reitz
This patch was generated using the following Coccinelle script:

    @@
    expression Obj;
    @@
    (
    - qobject_to_qnum(Obj)
    + qobject_to(QNum, Obj)
    |
    - qobject_to_qstring(Obj)
    + qobject_to(QString, Obj)
    |
    - qobject_to_qdict(Obj)
    + qobject_to(QDict, Obj)
    |
    - qobject_to_qlist(Obj)
    + qobject_to(QList, Obj)
    |
    - qobject_to_qbool(Obj)
    + qobject_to(QBool, Obj)
    )

and a bit of manual fix-up for overly long lines and three places in tests/check-qjson.c that Coccinelle did not find.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-Id: <20180224154033.29559-4-mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
[eblake: swap order from qobject_to(o, X), rebase to master, also a fix to latent false-positive compiler complaint about hw/i386/acpi-build.c]
Signed-off-by: Eric Blake <eblake@redhat.com>
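
Usage-wise, the new macro folds the subtype into a single generic accessor - an illustrative call site (the surrounding function is hypothetical):

    #include "qapi/qmp/qdict.h"      /* QEMU headers, shown for illustration */
    #include "qapi/qmp/qlist.h"

    static void inspect(QObject *obj)
    {
        /* Before the conversion: one accessor per subtype,
         * e.g. QDict *dict = qobject_to_qdict(obj);
         * After: one generic macro that returns NULL when obj is not
         * actually of the requested type. */
        QDict *dict = qobject_to(QDict, obj);
        QList *list = qobject_to(QList, obj);
        (void)dict;
        (void)list;
    }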

- 19 March 2018, 3 commits

Submitted by Paolo Bonzini
This fails in Fedora 28.

Reported-by: Andreas Schwab <schwab@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

Submitted by Fam Zheng
Overriding flags violates the precedence rules of bdrv_reopen_queue_child. Just like the read-only option, no-flush should be put into the options. The same is done in bdrv_temp_snapshot_options.

Reported-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

Submitted by Liang Li
When doing drive-mirror to a low-speed shared storage, if there is a heavy block I/O write workload in the VM after the 'ready' event, the drive-mirror block job can't be canceled immediately; it keeps running until the heavy block I/O workload stops in the VM.

Libvirt depends on the current block-job-cancel semantics, which is that when used without a flag after the 'ready' event, the command blocks until data is in sync. However, these semantics are awkward in other situations; for example, people may use drive-mirror for realtime backups while still wanting to use block live migration. Libvirt cannot start a block live migration while another drive mirror is in progress, but the user would rather abandon the backup attempt as broken and proceed with the live migration than be stuck waiting for the current drive mirror backup to finish.

The drive-mirror command already includes a 'force' flag, which libvirt does not use, although it documented the flag as only being useful to quit a job which is paused. However, since quitting a paused job has the same effect as abandoning a backup in a non-paused job (namely, the destination file is not in sync, and the command completes immediately), we can just improve the documentation to make the force flag obviously useful.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Jeff Cody <jcody@redhat.com>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Max Reitz <mreitz@redhat.com>
Cc: Eric Blake <eblake@redhat.com>
Cc: John Snow <jsnow@redhat.com>
Reported-by: Huaitong Han <huanhuaitong@didichuxing.com>
Signed-off-by: Huaitong Han <huanhuaitong@didichuxing.com>
Signed-off-by: Liang Li <liliangleo@didichuxing.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>