- 15 August 2014, 40 commits
-
Committed by Kevin Wolf
Some code in the block layer makes potentially huge allocations. Failure is not completely unexpected there, so avoid aborting qemu and handle out-of-memory situations gracefully. This patch addresses the allocations in the vhdx block driver.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
-
Committed by Kevin Wolf
Some code in the block layer makes potentially huge allocations. Failure is not completely unexpected there, so avoid aborting qemu and handle out-of-memory situations gracefully. This patch addresses the allocations in the vdi block driver.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
-
Committed by Kevin Wolf
Some code in the block layer makes potentially huge allocations. Failure is not completely unexpected there, so avoid aborting qemu and handle out-of-memory situations gracefully. This patch addresses the allocations in the rbd block driver.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
Some code in the block layer makes potentially huge allocations. Failure is not completely unexpected there, so avoid aborting qemu and handle out-of-memory situations gracefully. This patch addresses the allocations in the raw-win32 block driver.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
Some code in the block layer makes potentially huge allocations. Failure is not completely unexpected there, so avoid aborting qemu and handle out-of-memory situations gracefully. This patch addresses the allocations in the raw-posix block driver.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
Some code in the block layer makes potentially huge allocations. Failure is not completely unexpected there, so avoid aborting qemu and handle out-of-memory situations gracefully. This patch addresses the allocations in the qed block driver.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
-
Committed by Kevin Wolf
Some code in the block layer makes potentially huge allocations. Failure is not completely unexpected there, so avoid aborting qemu and handle out-of-memory situations gracefully. This patch addresses the allocations in the qcow2 block driver.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
Some code in the block layer makes potentially huge allocations. Failure is not completely unexpected there, so avoid aborting qemu and handle out-of-memory situations gracefully. This patch addresses the allocations in the qcow1 block driver.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
Some code in the block layer makes potentially huge allocations. Failure is not completely unexpected there, so avoid aborting qemu and handle out-of-memory situations gracefully. This patch addresses the allocations in the parallels block driver.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
-
Committed by Kevin Wolf
Some code in the block layer makes potentially huge allocations. Failure is not completely unexpected there, so avoid aborting qemu and handle out-of-memory situations gracefully. This patch addresses the allocations in the nfs block driver.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
-
Committed by Kevin Wolf
Some code in the block layer makes potentially huge allocations. Failure is not completely unexpected there, so avoid aborting qemu and handle out-of-memory situations gracefully. This patch addresses the allocations in the iscsi block driver.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
Reviewed-by: Eric Blake <eblake@redhat.com>
-
Committed by Kevin Wolf
Some code in the block layer makes potentially huge allocations. Failure is not completely unexpected there, so avoid aborting qemu and handle out-of-memory situations gracefully. This patch addresses the allocations in the dmg block driver.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
-
Committed by Kevin Wolf
Some code in the block layer makes potentially huge allocations. Failure is not completely unexpected there, so avoid aborting qemu and handle out-of-memory situations gracefully. This patch addresses the allocations in the curl block driver.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
-
Committed by Kevin Wolf
Some code in the block layer makes potentially huge allocations. Failure is not completely unexpected there, so avoid aborting qemu and handle out-of-memory situations gracefully. This patch addresses the allocations in the cloop block driver.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
-
Committed by Kevin Wolf
Some code in the block layer makes potentially huge allocations. Failure is not completely unexpected there, so avoid aborting qemu and handle out-of-memory situations gracefully. This patch addresses the allocations in the bochs block driver.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
-
Committed by Kevin Wolf
Some code in the block layer makes potentially huge allocations. Failure is not completely unexpected there, so avoid aborting qemu and handle out-of-memory situations gracefully. This patch addresses bounce buffer allocations in block.c. While at it, bdrv_commit() is converted from plain g_malloc() to qemu_try_blockalign().
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
This function returns NULL instead of aborting when an allocation fails.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
-
Committed by Jeff Cody
This updates the VDI corruption test to also test static VDI image creation, as well as the default dynamic image creation.
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Jeff Cody
Use the block layer to create, and write to, the image file in the VPC .bdrv_create() operation. This has a couple of benefits: images can now be created over protocols, hacks such as NOCOW are not needed in the image format driver, and the underlying file protocol appropriate for the host OS can be relied upon.
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Jeff Cody
Most QEMU code uses 'ret' for function return values. The VDI driver uses a mix of 'result' and 'ret'. This cleans that up, switching over to the standard 'ret' usage.
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Jeff Cody
Use the block layer to create, and write to, the image file in the VDI .bdrv_create() operation. This has a couple of benefits: images can now be created over protocols, hacks such as NOCOW are not needed in the image format driver, and the underlying file protocol appropriate for the host OS can be relied upon. Also includes some minor cleanup of the error handling.
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Jeff Cody
If bdrv_unref() is passed a NULL BDS pointer, it is safe to exit with no operation. This allows cleanup code to blindly call bdrv_unref() on a BDS that has been initialized to NULL.
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
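A minimal sketch of the cleanup style this enables. Only bdrv_unref() is the real QEMU call; example_create() and example_open_image() are hypothetical names used purely for illustration:

    /* Hypothetical .bdrv_create()-style function showing NULL-safe cleanup. */
    static int example_create(const char *filename, Error **errp)
    {
        BlockDriverState *bs = NULL;    /* initialized to NULL up front */
        int ret;

        ret = example_open_image(filename, &bs, errp);   /* hypothetical helper */
        if (ret < 0) {
            goto out;                   /* bs may still be NULL here */
        }

        /* ... write image header and metadata through bs ... */
        ret = 0;

    out:
        bdrv_unref(bs);                 /* with this patch, a no-op when bs == NULL */
        return ret;
    }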
-
Committed by Paolo Bonzini
This can be used to compute the cost of coroutine operations. In the end the cost of the function call is a few clock cycles, so it's pretty cheap for now, but it may become more relevant as the coroutine code is optimized. For example, here are the results on my machine:
    Function call   100000000 iterations: 0.173884 s
    Yield           100000000 iterations: 8.445064 s
    Lifecycle         1000000 iterations: 0.098445 s
    Nesting             10000 iterations of 1000 depth each: 7.406431 s
One yield takes 83 nanoseconds, one enter takes 97 nanoseconds, and one coroutine allocation takes (roughly, since some of the allocations in the nesting test do hit the pool) 739 nanoseconds:
    (8.445064 - 0.173884) * 10^9 / 100000000 = 82.7
    (0.098445 * 100 - 0.173884) * 10^9 / 100000000 = 96.7
    (7.406431 * 10 - 0.173884) * 10^9 / 100000000 = 738.9
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Jeff Cody
This patch contains several endian conversion fixes for VHDX, particularly for big-endian machines (multibyte values in VHDX are all stored on disk in LE format). Tests were done with the existing qemu-iotests on an IBM POWER7 (8406-71Y). This includes sample images created by Hyper-V, both with dirty logs and without. In addition, VHDX image files created (and written to) on a BE machine were tested on a LE machine, and vice versa.
Reported-by: Markus Armbruster <armbru@redhat.com>
Reported-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Jeff Cody
This adds an error check for an invalid descriptor entry signature when flushing the log descriptor entries.
Signed-off-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Stefan Hajnoczi
The thread pool has a race condition if two elements complete before thread_pool_completion_bh() runs: if element A's callback waits for element B using aio_poll(), it will deadlock since pool->completion_bh is not marked scheduled when the nested aio_poll() runs. Fix this by marking the BH scheduled while thread_pool_completion_bh() is executing. This way any nested aio_poll() loops will enter thread_pool_completion_bh() and complete the remaining elements.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Stefan Hajnoczi
EventNotifier is implemented using an eventfd or pipe. It therefore consumes file descriptors, which can be limited by rlimits and should therefore be used sparingly. Switch from EventNotifier to QEMUBH in thread-pool.c. Originally EventNotifier was used because qemu_bh_schedule() was not yet thread-safe.
Reported-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Stefan Hajnoczi
When a BlockDriverState is associated with a storage controller DeviceState we expect guest I/O. Use this opportunity to bump the coroutine pool size by 64. This patch ensures that the coroutine pool size scales with the number of drives attached to the guest. It should increase coroutine pool usage (which makes qemu_coroutine_create() fast) without hogging too much memory when fewer drives are attached.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
-
Committed by Stefan Hajnoczi
Allow coroutine users to adjust the pool size. For example, if the guest has multiple emulated disk drives we should keep around more coroutines.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
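A rough sketch of how a drive attach/detach path might use such a knob. The helper name qemu_coroutine_adjust_pool_size(), its signature, and the call sites are assumptions for illustration; only the per-drive reservation of 64 comes from the commit message above:

    #include "block/coroutine.h"    /* header path assumed */

    #define DRIVE_COROUTINE_POOL_RESERVATION 64   /* per-drive bump, per the entry above */

    /* Assumed API: grow or shrink the shared coroutine pool by a delta. */
    static void drive_attached(void)
    {
        qemu_coroutine_adjust_pool_size(DRIVE_COROUTINE_POOL_RESERVATION);
    }

    static void drive_detached(void)
    {
        qemu_coroutine_adjust_pool_size(-DRIVE_COROUTINE_POOL_RESERVATION);
    }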
-
Committed by Chrysostomos Nanakos
Signed-off-by: Chrysostomos Nanakos <cnanakos@grnet.gr>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Chrysostomos Nanakos
Introduce the new enum BlockdevOptionsArchipelago.
@volume: Name of the Archipelago volume image.
@mport: The port number on which mapperd is listening. This is optional and if not specified, QEMU will make Archipelago use the default port.
@vport: The port number on which vlmcd is listening. This is optional and if not specified, QEMU will make Archipelago use the default port.
@segment: #optional The name of the shared memory segment the Archipelago stack is using. This is optional and if not specified, QEMU will make Archipelago use the default value, 'archipelago'.
Signed-off-by: Chrysostomos Nanakos <cnanakos@grnet.gr>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Chrysostomos Nanakos
    qemu-img archipelago:<volumename>[/mport=<mapperd_port>[:vport=<vlmcd_port>][:segment=<segment_name>]] [size]
Signed-off-by: Chrysostomos Nanakos <cnanakos@grnet.gr>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Chrysostomos Nanakos
A VM image on an Archipelago volume can also be specified like this:
    file=archipelago:<volumename>[/mport=<mapperd_port>[:vport=<vlmcd_port>][:segment=<segment_name>]]
Examples:
    file=archipelago:my_vm_volume
    file=archipelago:my_vm_volume/mport=123
    file=archipelago:my_vm_volume/mport=123:vport=1234
    file=archipelago:my_vm_volume/mport=123:vport=1234:segment=my_segment
Signed-off-by: Chrysostomos Nanakos <cnanakos@grnet.gr>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Chrysostomos Nanakos
A VM image on an Archipelago volume is specified like this:
    file.driver=archipelago,file.volume=<volumename>[,file.mport=<mapperd_port>[,file.vport=<vlmcd_port>][,file.segment=<segment_name>]]
'archipelago' is the protocol.
'mport' is the port number on which mapperd is listening. This is optional and if not specified, QEMU will make Archipelago use the default port.
'vport' is the port number on which vlmcd is listening. This is optional and if not specified, QEMU will make Archipelago use the default port.
'segment' is the name of the shared memory segment the Archipelago stack is using. This is optional and if not specified, QEMU will make Archipelago use the default value, 'archipelago'.
Examples:
    file.driver=archipelago,file.volume=my_vm_volume
    file.driver=archipelago,file.volume=my_vm_volume,file.mport=123
    file.driver=archipelago,file.volume=my_vm_volume,file.mport=123,file.vport=1234
    file.driver=archipelago,file.volume=my_vm_volume,file.mport=123,file.vport=1234,file.segment=my_segment
Signed-off-by: Chrysostomos Nanakos <cnanakos@grnet.gr>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Chunyan Liu
Add nocow info to the 'qemu-img info' output to show whether the file currently has the NOCOW flag set or not.
Signed-off-by: Chunyan Liu <cyliu@suse.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
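On Linux, one plausible way to detect this flag for such reporting is the FS_IOC_GETFLAGS ioctl. The standalone sketch below is illustrative and not necessarily how qemu-img implements it; error handling is deliberately minimal:

    #include <stdbool.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>

    /* Report whether a file currently has the NOCOW attribute set
     * (meaningful on copy-on-write filesystems such as btrfs). */
    static bool file_has_nocow(const char *filename)
    {
        long attr = 0;
        bool nocow = false;
        int fd = open(filename, O_RDONLY);

        if (fd >= 0) {
            if (ioctl(fd, FS_IOC_GETFLAGS, &attr) == 0) {
                nocow = attr & FS_NOCOW_FL;
            }
            close(fd);
        }
        return nocow;
    }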
-
Committed by Fam Zheng
This drops the unnecessary bdrv_truncate() from, and also improves, the cluster allocation code path.
Before, when we needed a new cluster, get_cluster_offset truncated the image to bdrv_getlength() + cluster_size and returned the offset of the added area, i.e. the image length before truncating. This is not efficient, so it's now rewritten as:
- Save the extent file length when opening.
- When allocating a cluster, use the saved length as the cluster offset.
- Don't truncate the image, because we'll write data there anyway: just write data at the EOF position, in descending priority:
  * New user data (cluster allocation happens in a write request).
  * Filling data at the beginning and/or end of the new cluster, if not covered by user data: either backing file content (COW), or zero for standalone images.
One major benefit of this change shows up on host-mounted NFS images: even over a fast network, ftruncate is slow (see the example below), so dropping it significantly speeds up cluster allocation. Comparing by converting a cirros image (296M) to VMDK on an NFS mount point, over a 1Gbe LAN:
    $ time qemu-img convert cirros-0.3.1.img /mnt/a.raw -O vmdk
    Before:
    real    0m21.796s
    user    0m0.130s
    sys     0m0.483s
    After:
    real    0m2.017s
    user    0m0.047s
    sys     0m0.190s
We also get rid of unchecked bdrv_getlength() and bdrv_truncate(), and get a little more documentation in function comments. Tested that this passes qemu-iotests for all VMDK subformats.
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
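The performance win comes from dropping the extra metadata round trip: instead of growing the file with ftruncate() and then writing into the new tail, the write itself extends the file. A standalone POSIX sketch of the two approaches (this is not VMDK code; the cluster size is made up and error handling is omitted for brevity):

    #include <unistd.h>
    #include <sys/stat.h>

    enum { CLUSTER_SIZE = 64 * 1024 };   /* made-up cluster size */

    /* Old approach: truncate first, then write -- two metadata operations,
     * which is particularly slow on NFS mounts. */
    static off_t allocate_cluster_truncate(int fd, const void *data)
    {
        struct stat st;
        fstat(fd, &st);
        off_t cluster_offset = st.st_size;
        ftruncate(fd, cluster_offset + CLUSTER_SIZE);
        pwrite(fd, data, CLUSTER_SIZE, cluster_offset);
        return cluster_offset;
    }

    /* New approach: remember the file length and just write at EOF; the
     * write extends the file, so no separate ftruncate() is needed. */
    static off_t allocate_cluster_append(int fd, off_t *file_len, const void *data)
    {
        off_t cluster_offset = *file_len;
        pwrite(fd, data, CLUSTER_SIZE, cluster_offset);
        *file_len += CLUSTER_SIZE;
        return cluster_offset;
    }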
-
Committed by Fam Zheng
It's possible that our implementation diverges from the specification. Having a reference image in the test cases can detect such problems: a bug might let us read what we create ourselves while failing on a real VMDK.
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Stefan Hajnoczi
Update -device FOO,help to include QOM properties in addition to qdev properties. Devices are gradually adding more QOM properties that are not reflected as qdev properties. It is important to report all device properties since management tools like libvirt use this information (and the device-list-properties QMP command) to detect the presence of QEMU features. This patch reuses the device-list-properties QMP machinery to avoid code duplication.
Reported-by: Cole Robinson <crobinso@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Tested-by: Cole Robinson <crobinso@redhat.com>
-
Committed by Stefan Hajnoczi
The "hotplugged" device property was not reported before commit f4eb32b5 ("qmp: show QOM properties in device-list-properties"). Fix this difference.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
-
Committed by Stefan Hajnoczi
This document explains how IOThreads and the main loop are related, and especially how to write code that can run in an IOThread. Currently only virtio-blk-data-plane uses these techniques. The next obvious target is virtio-scsi; there has also been work on virtio-net.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
-