- 09 Apr 2013, 24 commits
-
-
Committed by Paolo Bonzini
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini
This patch tackles all files that are compiled once, moving them to subdirectories of hw/.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini
Prepare the new directory structure.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini
Many of these should be cleaned up with proper qdev-/QOM-ification. Right now there are many catch-all headers in include/hw/ARCH that depend on cpu.h, and this makes it necessary to compile these files per-target. However, fixing this does not belong in these patches.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 06 Apr 2013, 15 commits
-
-
Committed by Blue Swirl
* 'arm-devs.next' of git://git.linaro.org/people/pmaydell/qemu-arm:
  hw/nand.c: Fix nand erase operation
  cadence_uart: Flush queued characters on reset
  pl330: Don't inhibit ES bits on INTEN
  pflash_cfi01: Implement migration support
  pflash_cfi01: Drop unused 'bypass' field
  hw/arm_gic_common: Use vmstate struct rather than save/load functions
  arm_gic: Fix sizes of state fields in preparation for vmstate support
  vmstate: Add support for two dimensional arrays
  hw/onenand.c: fix migration of dynamically allocated buffer "otp"
  hw/sd.c: fix migration of dynamically allocated buffer "buf"
  vmstate.h: introduce VMSTATE_BUFFER_POINTER_UNSAFE macro
  hw/arm_mptimer: Save the timer state
  pl050: Don't send always-constant is_mouse field
  hw/arm/nseries: don't print to stdout or stderr
-
Committed by Anthony Liguori
The char-flow refactoring introduced a busy-wait that depended on an action from the VCPU thread. However, the VCPU thread could never take that action: the busy-wait never dropped the BQL while running select(), starving the VCPU thread of the lock. Paolo doesn't want to drop this optimization for fear that we will stop detecting these busy waits. I'm afraid to keep this optimization even with the busy-wait fixed, because I think a similar problem can occur just with heavy I/O thread load manifesting itself as VCPU pauses. As a compromise, introduce an artificial timeout after a thousand iterations, but print a rate-limited warning when this happens. This lets us still detect when the condition occurs without it being a fatal error.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Message-id: 1365169560-11012-1-git-send-email-aliguori@us.ibm.com
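A minimal sketch of the compromise described here, assuming illustrative names (MAX_WAIT_ITERATIONS, the done() callback, warned_once) rather than the actual QEMU symbols: the wait is bounded and a rate-limited warning replaces a silent hang.

    #include <stdio.h>
    #include <stdbool.h>

    #define MAX_WAIT_ITERATIONS 1000

    static bool warned_once;   /* crude rate limit: warn only the first time */

    static bool wait_for_vcpu_action(bool (*done)(void *), void *opaque)
    {
        for (int i = 0; i < MAX_WAIT_ITERATIONS; i++) {
            if (done(opaque)) {
                return true;            /* the VCPU made progress in time */
            }
        }
        if (!warned_once) {
            fprintf(stderr, "warning: timeout waiting for VCPU action\n");
            warned_once = true;
        }
        return false;                   /* give up instead of spinning forever */
    }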
-
Committed by Paolo Bonzini
The character backend refactoring introduced an undesirable busy wait. The busy wait happens if can_read returns zero and there is data available on the character device's file descriptor. Then the I/O watch will fire continuously and, with TCG, the CPU thread will never run:

1) Char backend asks front end if it can write
2) Front end says no
3) poll() finds the char backend's descriptor is available
4) Goto (1)

What we really want is this (note that step 3 avoids the busy wait):

1) Char backend asks front end if it can write
2) Front end says no
3) poll() goes on without the char backend's descriptor
4) Goto (1) until qemu_chr_accept_input() is called
5) Char backend asks front end if it can write
6) Front end says yes
7) poll() finds the char backend's descriptor is available
8) Backend handler called

After this patch, the IOWatchPoll source and the watch source are separated. The IOWatchPoll is simply a hook that runs during the prepare phase on each main loop iteration. The hook adds/removes the actual source depending on the return value from can_read. A simple reproducer is:

    qemu-system-i386 -serial mon:stdio

... followed by banging on the terminal as much as you can. :) Without this patch, emulation will hang.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1365177573-11817-1-git-send-email-pbonzini@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
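A rough sketch of such a prepare-phase hook, with an assumed struct layout rather than the real QEMU one; the glib calls (g_io_create_watch, g_source_attach, g_source_destroy, g_source_unref) are the standard GSource API, everything else is illustrative.

    #include <glib.h>

    typedef struct {
        GSource parent;               /* must come first: this extends GSource */
        GIOChannel *channel;          /* the char device's channel */
        GSource *io_watch;            /* the real fd watch, attached on demand */
        int (*can_read)(void *opaque);
        void *opaque;
    } IOWatchPoll;

    static gboolean io_watch_poll_prepare(GSource *source, gint *timeout)
    {
        IOWatchPoll *iwp = (IOWatchPoll *)source;
        gboolean want_watch = iwp->can_read(iwp->opaque) > 0;

        if (want_watch && !iwp->io_watch) {
            /* front end can accept data again: start polling the fd */
            iwp->io_watch = g_io_create_watch(iwp->channel, G_IO_IN | G_IO_HUP);
            g_source_attach(iwp->io_watch, NULL);
        } else if (!want_watch && iwp->io_watch) {
            /* front end is full: drop the fd from poll() to avoid busy-waiting */
            g_source_destroy(iwp->io_watch);
            g_source_unref(iwp->io_watch);
            iwp->io_watch = NULL;
        }

        *timeout = -1;                /* the hook itself never wakes up poll() */
        return FALSE;                 /* nothing to dispatch from the hook */
    }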
-
Committed by Anthony Liguori
# By Peter Crosthwaite (2) and others
# Via Stefan Hajnoczi
* stefanha/trivial-patches:
  xilinx_zynq: Cleanup ssi_create_slave
  petalogix_ml605_mmu: Cleanup ssi_create_slave()
  target-s390: Fix SRNMT
  linux-user: Don't omit comma for strace of rt_sigaction()
  test-visitor-serialization: Fix some memory leaks
-
Committed by Anthony Liguori
# By Alex Bligh (2) and Felipe Franciosi (2)
# Via Stefano Stabellini
* sstabellini/xen-2013-04-05:
  Allow xen guests to plug disks of 1 TiB or more
  Introduce 64 bit integer write interface to xenstore
  Xen PV backend: Disable use of O_DIRECT by default as it results in crashes.
  Xen PV backend: Move call to bdrv_new from blk_init to blk_connect
-
Committed by Anthony Liguori
# By Stefan Hajnoczi (4) and Kevin Wolf (3)
# Via Kevin Wolf
* kwolf/for-anthony:
  qcow2: Fix L1 write error handling in qcow2_update_snapshot_refcount
  qcow2: Return real error in qcow2_update_snapshot_refcount
  block: clean up I/O throttling wait_time code
  block: drop duplicated slice extension code
  block: keep I/O throttling slice time constant
  block: fix I/O throttling accounting blind spot
  usb-storage: Forward serial number to scsi-disk
-
Committed by Kevin Wolf
It ignored the error code, and at least the 'goto fail' is obvious nonsense: it creates an endless loop (if the next attempt doesn't magically succeed) and leaves the in-memory L1 table in big-endian instead of converting it back. In error cases there is no point in writing an updated L1 table, so skip this part for them.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Kevin Wolf
This fixes the error message triggered by the following script:

    cat > /tmp/blkdebug.cfg <<EOF
    [inject-error]
    event = "cluster_free"
    errno = "28"
    immediately = "off"
    EOF

    $qemu_img create -f qcow2 test.qcow2 10G
    $qemu_img snapshot -c snap test.qcow2
    $qemu_img snapshot -d snap blkdebug:/tmp/blkdebug.cfg:test.qcow2

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Stefan Hajnoczi
The wait_time variable is in seconds. Reflect this in a comment and use NANOSECONDS_PER_SECOND instead of BLOCK_IO_SLICE_TIME * 10 (which happens to have the right value).
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Tested-By: Benoit Canet <benoit@irqsave.net>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
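A minimal illustration of the unit handling this cleanup is about. NANOSECONDS_PER_SECOND is redefined locally only to keep the snippet self-contained, and the helper name is an assumption, not the actual QEMU code.

    #include <stdint.h>

    #define NANOSECONDS_PER_SECOND 1000000000LL

    /* wait_time is tracked in seconds; convert explicitly instead of
     * relying on BLOCK_IO_SLICE_TIME * 10 happening to equal 1 s in ns */
    static int64_t wait_time_to_ns(double wait_time_seconds)
    {
        return (int64_t)(wait_time_seconds * NANOSECONDS_PER_SECOND);
    }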
-
Committed by Stefan Hajnoczi
The current slice is extended when an I/O request exceeds the limit. There is no need to extend the slice every time we check a request.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Tested-By: Benoit Canet <benoit@irqsave.net>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Stefan Hajnoczi
It is not necessary to adjust the slice time at runtime. We already extend the current slice in order to carry over accounting into the next slice. Changing the actual slice time value introduces oscillations: the guest may experience large changes in throughput or IOPS from one moment to the next when slice times are adjusted.
Reported-by: Benoît Canet <benoit@irqsave.net>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Tested-By: Benoit Canet <benoit@irqsave.net>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Stefan Hajnoczi
I/O throttling relies on bdrv_acct_done(), which is called when a request completes. This leaves a blind spot, since we only charge for completed requests, not submitted requests. For example, if there is 1 operation remaining in this time slice, the guest could submit 3 operations and they would all be submitted successfully, because they don't actually get accounted for until they complete. Originally we probably thought this was okay since the requests would be accounted when the time slice is extended. In practice it causes fluctuations: the guest can exceed its I/O limit and is punished for it later on. Account for I/O upon submission so that I/O limits are enforced properly.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Tested-By: Benoit Canet <benoit@irqsave.net>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
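An illustrative sketch of charging at submission time, with invented names and a deliberately simplified budget model; it is not the actual QEMU throttling code.

    #include <stdbool.h>

    typedef struct {
        double ops_limit_per_slice;   /* configured IOPS budget for the slice */
        double ops_charged;           /* operations already charged this slice */
    } ThrottleState;

    static bool throttle_try_submit(ThrottleState *ts)
    {
        if (ts->ops_charged + 1 > ts->ops_limit_per_slice) {
            return false;             /* over budget: caller must queue or delay */
        }
        ts->ops_charged += 1;         /* charge now, before the request runs,
                                         closing the completion-time blind spot */
        return true;
    }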
-
Committed by Kevin Wolf
usb-storage takes care to fetch the USB serial number from -drive options, but it neglected to pass its own 'serial' property to the scsi-disk it creates. With this patch, the 'serial' qdev property and the 'serial' option in -drive behave the same and correctly apply the serial number on both the USB and the SCSI level.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
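A hedged sketch of the forwarding idea: the helper name and surrounding context are assumptions, while qdev_prop_set_string() is QEMU's generic string-property setter (its header location varies by QEMU version, so no include is shown). This is only a sketch of the technique, not the actual patch.

    /* Copy usb-storage's own 'serial' property onto the scsi-disk device it
     * creates, so both the USB and the SCSI level report the same serial. */
    static void usb_msd_forward_serial(DeviceState *scsi_dev, const char *serial)
    {
        if (serial) {
            qdev_prop_set_string(scsi_dev, "serial", serial);
        }
    }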
-
Committed by Wendy Liang
Usually, a NAND erase operation has only 2 or 3 address cycles. We need to mask s->addr so that stale high-order bytes in the NAND address are zeroed before it is used as the erase address. This fixes the NAND erase operation in Linux. [PC: Generalised to work for any number of address cycles rather than just 3]
Signed-off-by: Wendy Liang <jliang@xilinx.com>
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: 1364967188-26711-1-git-send-email-peter.crosthwaite@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
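A small self-contained sketch of the masking described above, with assumed names (addr, addr_cycles): only the address bytes actually clocked in during the erase command are kept, and anything above them is treated as stale.

    #include <stdint.h>

    static uint64_t nand_erase_address(uint64_t addr, int addr_cycles)
    {
        /* each address cycle supplies one byte; higher bytes are leftovers
         * from a previous, longer command and must be cleared */
        uint64_t mask = (addr_cycles >= 8)
                        ? UINT64_MAX
                        : ((UINT64_C(1) << (addr_cycles * 8)) - 1);
        return addr & mask;
    }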
-
Committed by Peter Crosthwaite
Reset can be used to empty the rx FIFO. As the FIFO-full condition is used to return false from can_receive, queued rx data should accordingly be flushed on reset.
Cc: Wendy Liang <jliang@xilinx.com>
Cc: Jason Wu <huanyu@xilinx.com>
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Reported-by: Jason Wu <huanyu@xilinx.com>
Message-id: 494c1e005e225c915d295ddfd75d992ad2dabc3c.1364964526.git.peter.crosthwaite@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
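A minimal sketch of the idea, with an assumed FIFO layout rather than the real cadence_uart state: reset simply discards queued characters so can_receive stops reporting a full FIFO.

    #include <stdint.h>

    typedef struct {
        uint8_t rx_fifo[16];
        int rx_count;                 /* number of queued characters */
        int rx_wpos;                  /* write position in the ring */
    } UartState;

    static void uart_rx_reset(UartState *s)
    {
        s->rx_count = 0;              /* drop all queued rx data */
        s->rx_wpos = 0;               /* FIFO no longer reports full */
    }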
-
- 05 Apr 2013, 1 commit
-
-
Committed by Peter Crosthwaite
This if-else logic inhibits setting of the event status (ES) bits when interrupts are enabled. This is incorrect: ES should be set regardless of the INTEN state. INTEN only inhibits the signalling of events to PL330 threads, not the setting of the ES register.
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
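A sketch of the corrected behaviour with illustrative struct and field names (not the real pl330.c state): the event status bit is always recorded, and INTEN only decides whether the event is routed to the interrupt line or left for waiting DMA threads.

    #include <stdint.h>

    typedef struct {
        uint32_t inten;               /* interrupt enable mask */
        uint32_t ev_status;           /* event status (ES) bits */
        uint32_t irq_level;
    } PL330State;

    static void pl330_event(PL330State *s, int ev)
    {
        s->ev_status |= 1u << ev;     /* ES bit is set regardless of INTEN */
        if (s->inten & (1u << ev)) {
            s->irq_level |= 1u << ev; /* event routed to the interrupt line */
        } else {
            /* event would instead wake DMA threads waiting on it (not shown) */
        }
    }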
-