- 28 April 2015, 40 commits
-
Committed by Michael S. Tsirkin

Update the virtio blk header from latest Linux, including comment fixups.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Message-id: 1428854036-12806-1-git-send-email-mst@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Paolo Bonzini

bdrv_aio_* APIs can use coroutines to achieve asynchronicity. However, the coroutine may terminate without having yielded back to the caller (for example because of something that invokes a nested event loop, or because the coroutine is doing nothing at all). In this case, the bdrv_aio_* API must delay the completion to the next iteration of the main loop, because bdrv_aio_* will never invoke the callback before returning.

This can be done with a bottom half, and indeed bdrv_aio_* is always using one for simplicity. It is possible to gain some performance (~3%) by avoiding this in the common case.

A new field in the BlockAIOCBCoroutine struct is set to true until the first time the coroutine has yielded to its creator, and completion goes through a new function bdrv_co_complete. If the flag is false, bdrv_co_complete invokes the callback immediately. If it is true, the caller will notice that the coroutine has completed and schedule the bottom half itself.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1427524638-28157-1-git-send-email-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Fam Zheng

Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-id: 1428069921-2957-5-git-send-email-famz@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Fam Zheng

Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-id: 1428069921-2957-4-git-send-email-famz@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Fam Zheng

This is necessary to suppress more IO requests from being generated from block job coroutines.

Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-id: 1428069921-2957-3-git-send-email-famz@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Fam Zheng

This patch changes block_job_pause to increase the pause counter and block_job_resume to decrease it. The counter will allow calling block_job_pause/block_job_resume unconditionally on a job when we need to suspend the IO temporarily. From now on, each block_job_resume must be paired with a block_job_pause to keep the counter balanced.

The user pause from QMP or HMP will only trigger block_job_pause once until it's resumed; this is achieved by adding a user_paused flag in BlockJob.

One occurrence of block_job_resume in mirror_complete is replaced with block_job_enter, which does what is necessary. In block_job_cancel, the cancel flag is good enough to instruct coroutines to quit the loop, so use block_job_enter to replace the unpaired block_job_resume.

Upon block job IO error, the user is notified about entering the pause state, so this pause belongs to the user pause; set the flag accordingly and expect a matching QMP resume.

[Extended doc comments as suggested by Paolo Bonzini <pbonzini@redhat.com>. --Stefan]

Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-id: 1428069921-2957-2-git-send-email-famz@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
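A minimal, self-contained sketch of the pause-counter idea described in this commit message. The struct layout and the names job_pause/job_resume/job_should_pause are illustrative stand-ins, not QEMU's actual BlockJob API:

```c
/* Sketch of a pause counter paired with a user_paused flag; names and
 * layout are hypothetical, not QEMU's real BlockJob definitions. */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct Job {
    int pause_count;    /* >0 means the job must stay paused */
    bool user_paused;   /* set only for pauses requested via QMP/HMP */
} Job;

static void job_pause(Job *job)
{
    job->pause_count++;
}

static void job_resume(Job *job)
{
    assert(job->pause_count > 0);   /* every resume pairs with a pause */
    job->pause_count--;
}

static bool job_should_pause(const Job *job)
{
    return job->pause_count > 0;
}

int main(void)
{
    Job job = {0};

    job_pause(&job);            /* e.g. an internal pause around a reopen */
    job_pause(&job);            /* e.g. a user pause from the monitor */
    job.user_paused = true;

    job_resume(&job);           /* internal caller resumes... */
    printf("paused: %d\n", job_should_pause(&job));  /* still paused: 1 */

    job_resume(&job);           /* ...user resume balances the counter */
    job.user_paused = false;
    printf("paused: %d\n", job_should_pause(&job));  /* running again: 0 */
    return 0;
}
```

The point of the counter is exactly what the message states: any number of internal and user-initiated pauses can stack, and the job only runs again once every pause has been matched by a resume.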
-
Committed by Fam Zheng

Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1427852740-24315-4-git-send-email-famz@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Fam Zheng

Reopen is used in block-commit. With this always-succeed operation, it is now possible to test committing to a null drive, by specifying "null-aio://" or "null-co://" as the backing image when creating the qcow2 image.

Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1427852740-24315-3-git-send-email-famz@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Fam Zheng

AioContext switch should just work because the requests will be drained, so the scheduled timer(s) on the old context will be freed.

Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1427852740-24315-2-git-send-email-famz@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Stefan Hajnoczi

The 'qemu coroutine <coroutine-address>' GDB command prints the backtrace for a CoroutineUContext. This is useful for peeking inside yielded coroutines that are waiting for file descriptor events, timers, etc.

For example:

    $ gdb tests/test-coroutine
    (gdb) b test_yield
    (gdb) r
    (gdb) b qemu_coroutine_enter
    (gdb) c
    (gdb) c
    Continuing.
    Breakpoint 2, qemu_coroutine_enter (co=0x555555c66520, opaque=0x0) at qemu-coroutine.c:103
    103     {
    (gdb) source scripts/qemu-gdb.py
    (gdb) qemu coroutine 0x555555c66520
    #0  0x000055555557a740 in qemu_coroutine_switch (from_=<optimized out>, to_=0x7ffff7f90a70, action=COROUTINE_YIELD) at coroutine-ucontext.c:177
    #1  0x0000555555566af9 in yield_5_times (opaque=0x7fffffffdbb7) at tests/test-coroutine.c:107
    #2  0x000055555557a7aa in coroutine_trampoline (i0=<optimized out>, i1=<optimized out>) at coroutine-ucontext.c:80
    #3  0x00007ffff08de000 in __start_context () at /lib64/libc.so.6

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1427409754-8556-1-git-send-email-stefanha@redhat.com
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Stefan Hajnoczi

This patch simplifies thread_pool_completion_bh().

The function first checks elem->state:

    if (elem->state != THREAD_DONE) {
        continue;
    }

It then goes on to check elem->state == THREAD_DONE although we already know this must be the case.

The QLIST_REMOVE() is duplicated down both branches of an if-else statement, so that can be lifted out as well.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1427992762-10126-1-git-send-email-stefanha@redhat.com
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Kevin Wolf

Fix the length of the zero-fill for the back, which was accidentally using the same value as for the front. This is caught by qemu-iotests 033.

For consistency, change the code for the front as well to use the length stored in the iov (it is the same value, copied four lines above).

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Acked-by: Jeff Cody <jcody@redhat.com>
-
Committed by Kevin Wolf

This is, amongst others, required for qemu-iotests 033 to run as intended on VHDX, which uses explicit bdrv_truncate() calls to bs->file when allocating new blocks.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
-
Committed by Kevin Wolf

This adds a regression test for some problems that the qemu-img convert rewrite just fixed.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
-
Committed by Kevin Wolf

The implementation of qemu-img convert is (a) messy, (b) buggy, and (c) less efficient than possible. The changes required to beat some sense into it are massive enough that incremental changes would only make my and the reviewers' life harder. So throw it away and reimplement it from scratch.

Let me give some examples what I mean by messy, buggy and inefficient:

(a) The copying logic of qemu-img convert has two separate branches for compressed and normal target images, which roughly do the same - except for a little code that handles actual differences between compressed and uncompressed images, and much more code that implements just a different set of optimisations and bugs. This is unnecessary code duplication, and makes the code for compressed output (unsurprisingly) suffer from bitrot.

The code for uncompressed output is run twice to count the total length for the progress bar. In the first run it just takes a shortcut and runs only half the loop, and when it's done, it toggles a boolean, jumps out of the loop with a backwards goto and starts over. Works, but pretty is something different.

(b) Converting while keeping a backing file (-B option) is broken in several ways. This includes not writing to the image file if the input has zero clusters or data filled with zeros (ignoring that the backing file will be visible instead).

It also doesn't correctly limit every iteration of the copy loop to sectors of the same status, so that too many sectors may be copied to the target image. For -B this gives an unexpected result, for other images it just does more work than necessary.

Conversion with a compressed target completely ignores any target backing file.

(c) qemu-img convert skips reading and writing an area if it knows from metadata that copying isn't needed (except for the bug mentioned above that ignores a status change in some cases). It does, however, read from the source even if it knows that it will read zeros, and then search for non-zero bytes in the read buffer, if it's possible that a write might be needed.

This reimplementation of the copying core reorganises the code to remove the duplication and have a much more obvious code flow, by essentially splitting the copy iteration loop into three parts:

1. Find the number of contiguous sectors of the same status at the current offset. (This can also be called in a separate loop before the copying loop in order to determine the total sectors for the progress bar.)

2. Read sectors. If the status implies that there is no data there to read (zero or unallocated cluster), don't do anything.

3. Write sectors depending on the status. If it's data, write it. If we want the backing file to be visible (with -B), don't write it. If it's zeroed, skip it if you can, otherwise use bdrv_write_zeroes() to optimise the write at least where possible.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
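An illustrative skeleton of the three-step copy loop described above, operating on in-memory buffers instead of real block drivers. The names (BLK_DATA/BLK_ZERO, contiguous_status) and the simplified status map are inventions for this sketch, not qemu-img's actual code:

```c
/* Toy model of the described copy loop: find a contiguous run of sectors
 * with the same status, read only when there is data, write accordingly. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define SECTOR_SIZE 512
#define NUM_SECTORS 8

typedef enum { BLK_DATA, BLK_ZERO } BlkStatus;

/* Step 1: how many sectors starting at 'sector' share the same status? */
static int contiguous_status(const BlkStatus *map, int sector, int total,
                             BlkStatus *status)
{
    int n = 1;
    *status = map[sector];
    while (sector + n < total && map[sector + n] == *status) {
        n++;
    }
    return n;
}

int main(void)
{
    uint8_t src[NUM_SECTORS * SECTOR_SIZE];
    uint8_t dst[NUM_SECTORS * SECTOR_SIZE];
    BlkStatus map[NUM_SECTORS] = {
        BLK_DATA, BLK_DATA, BLK_ZERO, BLK_ZERO,
        BLK_ZERO, BLK_DATA, BLK_ZERO, BLK_DATA,
    };

    memset(src, 0xab, sizeof(src));     /* pretend guest data */
    memset(dst, 0, sizeof(dst));        /* target starts out zeroed */

    for (int sector = 0; sector < NUM_SECTORS; ) {
        BlkStatus status;
        int n = contiguous_status(map, sector, NUM_SECTORS, &status);

        if (status == BLK_DATA) {
            /* Step 2: read only when there is real data to read.
             * Step 3: write it out. */
            memcpy(&dst[sector * SECTOR_SIZE], &src[sector * SECTOR_SIZE],
                   (size_t)n * SECTOR_SIZE);
        }
        /* BLK_ZERO: skip the read entirely; a real implementation would
         * only issue a write_zeroes-style call here if the target needs it
         * (e.g. no -B backing file that should stay visible). */

        printf("sectors %d..%d: %s\n", sector, sector + n - 1,
               status == BLK_DATA ? "copied" : "skipped (zero)");
        sector += n;
    }
    return 0;
}
```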
-
Committed by Kevin Wolf

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
-
Committed by Paolo Bonzini

This is the first step towards having fine-grained critical sections in dataplane threads, which resolves lock ordering problems between address_space_* functions (which need the BQL when doing MMIO, even after we complete RCU-based dispatch) and the AioContext.

Because AioContext does not use contention callbacks anymore, the unit test has to be changed.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1424449612-18215-4-git-send-email-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Paolo Bonzini

This is the first step in pushing down acquire/release, and will let rfifolock drop the contention callback feature.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1424449612-18215-3-git-send-email-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Paolo Bonzini

By using thread-local storage, aio_poll can stop using global data during g_poll_ns. This will make it possible to drop callbacks from rfifolock.

[Moved npfd = 0 assignment to end of walking_handlers region as suggested by Paolo. This resolves the assert(npfd == 0) assertion failure in pollfds_cleanup(). --Stefan]

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1424449612-18215-2-git-send-email-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Fam Zheng

Currently, throttle timers won't make any progress when the VCPU is not running, which would stall the request queue in utils, qtest, vm suspending, and live migration, without special handling.

Block jobs are confusingly inconsistent between with and without throttling: if the user sets a bps limit, stops the vm, then starts a block job, the block job will not make any progress; by contrast, if the user unsets the bps limit, or if it's not set, the block job will run normally.

After this patch, with the host clock, even if the VCPUs are stopped, the throttle queues will be processed.

This patch also enables the potential to add throttling to bdrv_drain_all. Currently all requests are drained immediately. In other words, whenever it is called, IO throttling goes ineffective (examples: system reset, migration and many block job operations). This is a loophole that a guest could exploit. If we use the host clock, we can later just trust the nested poll. This could be done on top.

Note that for qemu-iotests case 093, which uses qtest, we still keep the vm clock so the script can control the clock stepping in order to be deterministic.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-id: 1427268446-6426-1-git-send-email-famz@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Stefan Hajnoczi

The ffs(3) family of functions is not portable. MinGW doesn't always provide the function. Use ctz32() or ctz64() instead.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1427124571-28598-10-git-send-email-stefanha@redhat.com
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Stefan Hajnoczi

The lack of ffs(3) in the MinGW headers is a hint that we shouldn't rely on it. MinGW 4.9.2 does not make it available for linking when QEMU's ./configure --enable-debug is used (release builds are fine though).

Now that all QEMU code has been switched to ctz32() there is no need for ffs(3).

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1427124571-28598-9-git-send-email-stefanha@redhat.com
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Paolo Bonzini

Rewrite the loop using level &= level - 1 to clear the least significant bit after each iteration. This simplifies the loop and makes it easy to replace ffs(3) with ctz32().

Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1427124571-28598-8-git-send-email-stefanha@redhat.com
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
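A standalone illustration of the "level &= level - 1" pattern: visit each set bit from least to most significant using ctz32() instead of ffs(). The local ctz32() wrapper and the printf message are stand-ins for this sketch only (QEMU defines ctz32() in its own headers), and __builtin_ctz assumes GCC/Clang:

```c
#include <stdint.h>
#include <stdio.h>

/* Local stand-in mirroring QEMU's convention: ctz32(0) == 32. */
static inline int ctz32(uint32_t val)
{
    return val ? __builtin_ctz(val) : 32;   /* GCC/Clang builtin */
}

int main(void)
{
    uint32_t level = 0x0000a050;            /* bits 4, 6, 13, 15 set */

    while (level) {
        int bit = ctz32(level);             /* index of lowest set bit */
        printf("handling bit %d\n", bit);
        level &= level - 1;                 /* clear that bit, continue */
    }
    return 0;
}
```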
-
Committed by Stefan Hajnoczi

ffs() cannot be replaced with ctz32() when the argument might be zero, because ffs(0) returns 0 while ctz32(0) returns 32.

The ffs(3) call in sd_normal_command() is a special case though. It can be converted to ctz32() + 1 because the argument is never zero:

    if (!(req.arg >> 8) || (req.arg >> (ctz32(req.arg & ~0xff) + 1))) {
        ~~~~~~~~~~~~~~~
        ^--------------- req.arg cannot be zero

Cc: Markus Armbruster <armbru@redhat.com>
Cc: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1427124571-28598-7-git-send-email-stefanha@redhat.com
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Stefan Hajnoczi

There are a number of ffs(3) callers that do roughly:

    bit = ffs(val);
    if (bit) {
        do_something(bit - 1);
    }

This pattern can be converted to ctz32() like this:

    zeroes = ctz32(val);
    if (zeroes != 32) {
        do_something(zeroes);
    }

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1427124571-28598-6-git-send-email-stefanha@redhat.com
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
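A compilable before/after version of the conversion pattern above. do_something() and the ctz32() wrapper are stand-ins for this sketch (ffs() comes from POSIX <strings.h>, __builtin_ctz assumes GCC/Clang):

```c
#include <stdint.h>
#include <stdio.h>
#include <strings.h>    /* ffs() */

static inline int ctz32(uint32_t val)
{
    return val ? __builtin_ctz(val) : 32;   /* QEMU-style: ctz32(0) == 32 */
}

static void do_something(int bit_index)
{
    printf("lowest set bit is %d\n", bit_index);
}

static void old_style(uint32_t val)
{
    int bit = ffs(val);          /* 1-based index, 0 when val == 0 */
    if (bit) {
        do_something(bit - 1);
    }
}

static void new_style(uint32_t val)
{
    int zeroes = ctz32(val);     /* 0-based index, 32 when val == 0 */
    if (zeroes != 32) {
        do_something(zeroes);
    }
}

int main(void)
{
    old_style(0x40);  /* prints 6 */
    new_style(0x40);  /* prints 6 */
    old_style(0);     /* prints nothing */
    new_style(0);     /* prints nothing */
    return 0;
}
```

Both variants behave identically for every input, including zero, which is exactly what makes the mechanical conversion safe.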
-
Committed by Stefan Hajnoczi

This commit was generated mechanically by coccinelle from the following semantic patch:

    @@
    expression val;
    @@
    - (ffs(val) - 1)
    + ctz32(val)

The call sites have been audited to ensure the ffs(0) - 1 == -1 case never occurs (due to input validation, asserts, etc). Therefore we don't need to worry about the fact that ctz32(0) == 32.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1427124571-28598-5-git-send-email-stefanha@redhat.com
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Stefan Hajnoczi

It is not clear from the code how a 0 parameter should be handled by the hardware. Keep the same behavior as ffs(0) - 1 == -1.

Cc: Alexander Graf <agraf@suse.de>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1427124571-28598-4-git-send-email-stefanha@redhat.com
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Stefan Hajnoczi

It is not clear from the code how a 0 parameter should be handled by the hardware. Keep the same behavior as ffs(0) - 1 == -1.

Cc: Andrzej Zaborowski <balrog@zabor.org>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1427124571-28598-3-git-send-email-stefanha@redhat.com
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Stefan Hajnoczi

The binary search in sdp_uuid_match() only works when the number of elements to search is a power of two.

    lo = record->uuid;
    hi = record->uuids;
    while (hi >>= 1)
        if (lo[hi] <= val)
            lo += hi;
    return *lo == val;

I noticed that the record->uuids calculation in sdp_service_record_build() was suspect:

    record->uuids = 1 << ffs(record->uuids - 1);

Unlike most ffs(val) - 1 users, the expression is ffs(val - 1)!

Actually ffs() is the wrong function to use for power-of-2. Use pow2ceil() to achieve the correct effect. Now the record->uuid[] array is sized correctly and the binary search in sdp_uuid_match() should work.

I'm not sure how to run/test this code.

Cc: Andrzej Zaborowski <balrog@zabor.org>
Cc: qemu-stable@nongnu.org
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1427124571-28598-2-git-send-email-stefanha@redhat.com
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
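A quick standalone comparison of the suspect expression with a power-of-two ceiling. The pow2ceil() here is a local stand-in written for this sketch, assumed to behave like QEMU's helper of the same name (round up to the next power of two); ffs() is from POSIX <strings.h>:

```c
#include <stdint.h>
#include <stdio.h>
#include <strings.h>    /* ffs() */

/* Local stand-in: smallest power of two >= value (assumes value >= 1). */
static uint32_t pow2ceil(uint32_t value)
{
    uint32_t n = 1;
    while (n < value) {
        n <<= 1;
    }
    return n;
}

int main(void)
{
    /* The old "1 << ffs(uuids - 1)" sometimes matches and is sometimes far
     * too small: for 4 or 6 uuids it yields 2, so the array was undersized
     * and the halving search above could not cover all entries. */
    for (uint32_t uuids = 1; uuids <= 9; uuids++) {
        printf("uuids=%u  1 << ffs(uuids - 1) = %2u  pow2ceil(uuids) = %2u\n",
               uuids, 1u << ffs((int)(uuids - 1)), pow2ceil(uuids));
    }
    return 0;
}
```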
-
Committed by Alberto Garcia

Signed-off-by: Alberto Garcia <berto@igalia.com>
Message-id: 1426522925-14444-1-git-send-email-berto@igalia.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Yi Wang

The command "virsh create" will fail in such a condition: the vm has two disks, vda and vdb. vda has snapshot s1 with id "1"; vdb doesn't have s1 but has snapshot s2 with id "1".

When we want to run the command "virsh create s1", del_existing_snapshots() only deletes s1 in vda, and bdrv_snapshot_create() tries to create vdb's snapshot s1 with id "1", but id "1" already exists in vdb with name "s2"!

The simplest way is to call find_new_snapshot_id() unconditionally.

Signed-off-by: Yi Wang <up2wing@gmail.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
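A toy version of the "always pick a fresh id" idea, assuming integer ids and a plain array rather than QEMU's actual snapshot structures; the function name mirrors the one mentioned above but the implementation is purely illustrative:

```c
#include <stdio.h>

/* Return an id not used by any existing snapshot on this disk. */
static int find_new_snapshot_id(const int *ids, int count)
{
    int max = 0;
    for (int i = 0; i < count; i++) {
        if (ids[i] > max) {
            max = ids[i];
        }
    }
    return max + 1;   /* never reuses an id that is still present */
}

int main(void)
{
    /* vdb still has a snapshot with id 1 ("s2"), so a new snapshot on vdb
     * must not blindly reuse the id that was freed up on vda. */
    int vdb_ids[] = { 1 };
    printf("new id for vdb: %d\n", find_new_snapshot_id(vdb_ids, 1)); /* 2 */
    return 0;
}
```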
-
Committed by Peter Maydell

X86 queue, 2015-04-27 (v2)

    # gpg: Signature made Mon Apr 27 19:42:39 2015 BST using RSA key ID 984DC5A6
    # gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>"
    # gpg: WARNING: This key is not certified with sufficiently trusted signatures!
    # gpg:          It is not certain that the signature belongs to the owner.
    # Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF D1AA 2807 936F 984D C5A6

* remotes/ehabkost/tags/x86-pull-request:
  target-i386: Remove AMD feature flag aliases from CPU model table
  target-i386: X86CPU::xlevel2 QOM property
  target-i386: Make "level" and "xlevel" properties static
  qemu-config: Accept empty option values
  MAINTAINERS: Change status of X86 to Maintained
  MAINTAINERS: Add myself to X86

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Committed by Peter Maydell

NUMA queue, 2015-04-27

    # gpg: Signature made Mon Apr 27 19:02:19 2015 BST using RSA key ID 984DC5A6
    # gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>"
    # gpg: WARNING: This key is not certified with sufficiently trusted signatures!
    # gpg:          It is not certain that the signature belongs to the owner.
    # Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF D1AA 2807 936F 984D C5A6

* remotes/ehabkost/tags/numa-pull-request:
  MAINTAINERS: Add myself as NUMA code maintainer

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Committed by Peter Maydell

target-arm queue:
 * memory system updates to support transaction attributes
 * set user-mode and secure attributes for accesses made by ARM CPUs
 * rename c1_coproc to cpacr_el1
 * adjust id_aa64pfr0 when has_el3 CPU property disabled
 * allow ARMv8 SCR.SMD updates

    # gpg: Signature made Mon Apr 27 16:14:30 2015 BST using RSA key ID 14360CDE
    # gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>"

* remotes/pmaydell/tags/pull-target-arm-20150427:
  Allow ARMv8 SCR.SMD updates
  target-arm: Adjust id_aa64pfr0 when has_el3 CPU property disabled
  target-arm: rename c1_coproc to cpacr_el1
  target-arm: Check watchpoints against CPU security state
  target-arm: Use attribute info to handle user-only watchpoints
  target-arm: Add user-mode transaction attribute
  target-arm: Use correct memory attributes for page table walks
  target-arm: Honour NS bits in page tables
  Switch non-CPU callers from ld/st*_phys to address_space_ld/st*
  exec.c: Capture the memory attributes for a watchpoint hit
  exec.c: Add new address_space_ld*/st* functions
  exec.c: Make address_space_rw take transaction attributes
  exec.c: Convert subpage memory ops to _with_attrs
  Add MemTxAttrs to the IOTLB
  Make CPU iotlb a structure rather than a plain hwaddr
  memory: Replace io_mem_read/write with memory_region_dispatch_read/write
  memory: Define API for MemoryRegionOps to take attrs and return status

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Committed by Peter Maydell

spice: misc fixes.

    # gpg: Signature made Mon Apr 27 12:03:16 2015 BST using RSA key ID D3E87138
    # gpg: Good signature from "Gerd Hoffmann (work) <kraxel@redhat.com>"
    # gpg:                 aka "Gerd Hoffmann <gerd@kraxel.org>"
    # gpg:                 aka "Gerd Hoffmann (private) <kraxel@gmail.com>"

* remotes/spice/tags/pull-spice-20150427-1:
  spice: learn to hide cursor
  spice: set pointer position on hotspot
  spice: fix mouse cursor position
  spice: fix simple display on bigendian hosts
  monitor: Make client_migrate_info synchronous

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Committed by Eduardo Habkost

When the CPU vendor is AMD, the AMD feature alias bits on CPUID[0x80000001].EDX are already automatically copied from CPUID[1].EDX on x86_cpu_realizefn(). When the CPU vendor is Intel, those bits are reserved and should be zero. In either case, those bits shouldn't be set in the CPU model table.

Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
-
Committed by Eduardo Habkost

We already have "level" and "xlevel"; only "xlevel2" is missing.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
-
Committed by Eduardo Habkost

Static properties require only one line of code, much simpler than the existing code that requires writing new getters/setters.

As a nice side-effect, this fixes an existing bug where the setters were incorrectly allowing the properties to be changed after the CPU was already realized.

Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
-
Committed by Eduardo Habkost

Currently it is impossible to set an option in a config file to an empty string, because the parser matches only lines containing non-empty strings between double quotes.

As the sscanf() "[" conversion specifier only matches non-empty strings, add a special case for empty strings.

Reviewed-by: Eric Blake <eblake@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
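A small demonstration of the sscanf() behaviour this works around: the "%[...]" conversion cannot match an empty string, so a `key = ""` line needs its own pattern. The format strings below are illustrative, written in the style of an INI-like parser, not a copy of qemu-config's actual code:

```c
#include <stdio.h>

int main(void)
{
    char id[64], value[1024];
    const char *nonempty = " name = \"vm1\"";
    const char *empty    = " name = \"\"";

    /* Matches only when the quoted value has at least one character. */
    if (sscanf(nonempty, " %63s = \"%1023[^\"]\"", id, value) == 2) {
        printf("non-empty: id=%s value=%s\n", id, value);
    }

    /* The same pattern fails on an empty value: "[" matches nothing. */
    if (sscanf(empty, " %63s = \"%1023[^\"]\"", id, value) != 2) {
        printf("empty value: the \"[\" conversion did not match\n");
    }

    /* Separate special case for the empty string. */
    if (sscanf(empty, " %63s = \"\"", id) == 1) {
        printf("empty value accepted for id=%s\n", id);
    }
    return 0;
}
```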
-
Committed by Eduardo Habkost

"Odd Fixes" doesn't reflect the current status of target-i386. We have people looking after it now.

Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
-