- 28 Mar 2013, 20 commits
-
Committed by Kevin Wolf
Instead of just checking once, in exactly this order, whether there are dependencies, non-COW clusters and new allocations, this starts looping around these. This way we can, for example, gather non-COW clusters after new allocations as long as the host cluster offsets stay contiguous. Once handle_dependencies() is extended so that COW areas of in-flight allocations can be overwritten, this allows us to continue gathering other clusters (we wouldn't be able to do that without this change because we would have missed a possible second dependency in one of the next clusters). This means that in the typical sequential write case, we can combine the COW overwrite of one cluster with the allocation of the next cluster as soon as something like Delayed COW is actually implemented. It is only by avoiding splitting requests this way that Delayed COW actually starts improving performance noticeably.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
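As a rough sketch of the loop structure this series builds towards (a compilable toy with stub helpers and hypothetical names, not the actual qcow2 code):

    #include <stdint.h>

    /* Stub stand-ins for the helpers named above; each maps up to 'bytes'
       bytes starting at 'guest' and returns how many it handled. */
    static void handle_dependencies(uint64_t guest, uint64_t bytes)
    {
        (void)guest; (void)bytes;   /* wait out overlapping in-flight COW */
    }
    static uint64_t handle_copied(uint64_t guest, uint64_t bytes, uint64_t *host)
    {
        (void)guest; (void)bytes; *host = 0; return 0;
    }
    static uint64_t handle_alloc(uint64_t guest, uint64_t bytes, uint64_t *host)
    {
        (void)guest; (void)bytes; *host = 0; return 0;
    }

    /* Keep gathering parts of the request while the host offsets of the
       gathered parts stay contiguous; stop at the first discontinuity. */
    static uint64_t gather_clusters(uint64_t guest, uint64_t bytes,
                                    uint64_t *host_start)
    {
        uint64_t gathered = 0, host = 0, next_host = 0;

        while (bytes > 0) {
            uint64_t n;

            handle_dependencies(guest + gathered, bytes);

            n = handle_copied(guest + gathered, bytes, &host);    /* non-COW part */
            if (n == 0) {
                n = handle_alloc(guest + gathered, bytes, &host); /* new part */
            }
            if (n == 0 || (gathered > 0 && host != next_host)) {
                break;  /* nothing left, or host offsets not contiguous */
            }
            if (gathered == 0) {
                *host_start = host;
            }
            next_host = host + n;
            gathered += n;
            bytes -= n;
        }
        return gathered;
    }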
-
Committed by Kevin Wolf
This patch mainly separates the indentation change from the semantic changes. All that really changes here is that everything moves into a while loop, all 'goto done' statements become 'break', and at the end of the loop a new 'break' is inserted.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
Instead of expecting a single l2meta, have a list of them. This makes it possible to keep a single I/O request for the guest data, even though multiple l2meta may be needed to describe both a COW overwrite and a new cluster allocation (the typical sequential write case).
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
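A trimmed-down model of the idea (field names are simplified; the real QCowL2Meta carries more state):

    #include <stddef.h>
    #include <stdint.h>

    typedef struct QCowL2Meta {
        uint64_t offset;          /* guest offset covered by this L2 update */
        uint64_t alloc_offset;    /* host offset of the clusters to point to */
        int nb_clusters;
        struct QCowL2Meta *next;  /* further allocations of the same request */
    } QCowL2Meta;

    /* One guest write, several L2 updates: after the data is on disk,
       walk the chain and commit every piece. */
    static void commit_l2_updates(QCowL2Meta *m)
    {
        while (m != NULL) {
            /* update the L2 entries and finish COW for this part ... */
            m = m->next;
        }
    }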
-
Committed by Kevin Wolf
This gets rid of nb_clusters and keep_clusters and the associated complicated calculations. Just advance the number of bytes that have been processed and everything is fine. To make things look more uniform, this patch advances the variables even after the last operation, even though they aren't used afterwards. A later patch will turn the whole thing into a loop, and then it actually starts making sense.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
This makes handle_alloc() and handle_copied() return byte-granularity host offsets instead of always returning the cluster start. This is required so that qcow2_alloc_cluster_offset() can stop aligning everything to cluster boundaries.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
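For illustration, computing the byte-granularity host offset from a cluster-aligned mapping (a minimal sketch assuming qcow2's default 64 KiB clusters):

    #include <stdint.h>
    #include <stdio.h>

    #define CLUSTER_BITS 16  /* 64 KiB clusters */

    static uint64_t offset_into_cluster(uint64_t offset)
    {
        return offset & ((1ULL << CLUSTER_BITS) - 1);
    }

    int main(void)
    {
        uint64_t guest_offset   = 0x12345;  /* unaligned guest write */
        uint64_t cluster_offset = 0x80000;  /* cluster start from the L2 lookup */

        /* return the exact host byte, not the cluster start: */
        uint64_t host_offset = cluster_offset + offset_into_cluster(guest_offset);
        printf("host offset: 0x%llx\n", (unsigned long long)host_offset);
        return 0;
    }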
-
Committed by Kevin Wolf
Look only for clusters that start at a given physical offset.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
Now *bytes is used to return the length of the area that can be written to without performing an allocation or COW.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
handle_copied() now uses its bytes parameter to determine how many clusters it should try to find.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
Things can be simplified a bit now. No semantic changes.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
The interface now works completely at byte granularity, and duplicated parameters are removed.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
handle_alloc() is now called with the offset at which the actual new allocation starts, instead of the offset at which the whole write request starts, part of which may already be processed.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
We already communicate the same information in *bytes.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
This moves some code that prepares the allocation of new clusters to where the actual allocation happens. This is the minimum required to be able to move it to a separate function in the next patch.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
This is a more precise description of what really constitutes a dependency. The behaviour doesn't change at this point because the COW area of the old request is still aligned to cluster boundaries, and therefore an overlap is detected whenever the requests touch any part of the same cluster.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
The old code detected an overlapping allocation even when the allocations didn't actually overlap but were only adjacent.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
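A minimal sketch of the corrected check, as a strict test on half-open byte ranges:

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Half-open byte ranges [start, end) overlap only if each one starts
       before the other ends; merely adjacent ranges (end1 == start2) do not. */
    static bool ranges_overlap(uint64_t start1, uint64_t end1,
                               uint64_t start2, uint64_t end2)
    {
        return start1 < end2 && start2 < end1;
    }

    int main(void)
    {
        assert(!ranges_overlap(0, 0x10000, 0x10000, 0x20000)); /* adjacent */
        assert(ranges_overlap(0, 0x10001, 0x10000, 0x20000));  /* 1-byte overlap */
        return 0;
    }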
-
Committed by Kevin Wolf
Handling overlapping allocations isn't just a detail of cluster allocation. It is rather one of three ways to get the host cluster offset for a write request:

1. If a request overlaps an in-flight allocation, the cluster offset can be taken from there (this is what handle_dependencies() will evolve into) or the request must simply wait until the allocation has completed. Accessing the L2 table is not valid in this case; it has outdated information.

2. Outside overlapping areas, check which clusters can be written to as they are, with no COW involved.

3. If a COW is required, allocate new clusters.

Changing the code to reflect this doesn't change the behaviour, because overlaps cannot exist for clusters that are kept in step 2. It does, however, make it easier for later patches to work on clusters that belong to an allocation that is still in flight.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
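The three cases, condensed into code shape (a compilable sketch with stub predicates and hypothetical names):

    #include <stdbool.h>
    #include <stdio.h>

    static bool overlaps_inflight_allocation(void) { return false; } /* case 1 */
    static bool cluster_is_writable_as_is(void)    { return true; }  /* case 2 */

    static const char *classify_write(void)
    {
        if (overlaps_inflight_allocation()) {
            return "reuse the in-flight allocation or wait; the L2 table is stale";
        } else if (cluster_is_writable_as_is()) {
            return "write in place, no COW";
        } else {
            return "allocate new clusters (COW)";                   /* case 3 */
        }
    }

    int main(void)
    {
        puts(classify_write());
        return 0;
    }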
-
Committed by Kevin Wolf
The unlock wakes up the next coroutine, but the currently running coroutine will lock it again before it yields, so this doesn't make a lot of sense.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
This should be based on the virtual disk size, not on the size of the image. Interesting observation: with some VM state stored in the image file, percentages higher than 100% are possible, even though snapshots themselves are ignored. This is a qcow2 bug to be fixed another day: the VM state should be discarded from the active L2 tables after completing the snapshot creation.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-
- 25 Mar 2013, 2 commits
-
Committed by Stefan Weil
The new parameter is not used yet. This part was missing in commit 787e4a85.
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Eric Blake <eblake@redhat.com>
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Liu Yuan
Commit 787e4a85 [block: Add options QDict to bdrv_file_open() prototypes] didn't update rbd.c accordingly.
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Liu Yuan <tailai.ly@taobao.com>
Reviewed-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-
- 23 Mar 2013, 7 commits
-
Committed by Kevin Wolf
A file name may only be specified if no host or socket path is specified. The latter two may not appear at the same time either.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
-
Committed by Kevin Wolf
The URL method already takes care to apply the default port when none is specified. Directly specifying driver-specific options required the port number until now. Allow leaving it out and apply the default.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
-
Committed by Kevin Wolf
In order to achieve this, the .bdrv_probe callbacks of all drivers must cope with this. The DMG driver is the only one that bases its decision on the filename, and it needs to be changed.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
-
Committed by Kevin Wolf
If a driver needs structured data and not just a string, it can now provide a .bdrv_parse_filename callback that parses the command line string into separate options. Keeping this separate from .bdrv_open_filename ensures that the preferred way of directly specifying the options always works as well if parsing the string works.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
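A shape sketch of such a callback; the Options stand-in and the exact prototype are assumptions for illustration, not copied from the QEMU headers:

    /* Stand-in for qemu's QDict; illustration only. */
    typedef struct Options Options;

    typedef struct BlockDriver {
        const char *format_name;
        /* turns a legacy filename string into structured options before open */
        void (*bdrv_parse_filename)(const char *filename, Options *options);
    } BlockDriver;

    static void nbd_parse_filename(const char *filename, Options *options)
    {
        /* e.g. split "nbd:host:port" and store "host" and "port" entries
           in the options, so the open path never sees the raw string */
        (void)filename;
        (void)options;
    }

    static const BlockDriver bdrv_nbd_sketch = {
        .format_name = "nbd",
        .bdrv_parse_filename = nbd_parse_filename,
    };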
-
Committed by Kevin Wolf
The existing parsers for the file name now parse everything into the bdrv_open() options QDict. Instead of using these parsers, you can now directly specify the options on the command line, like this:

    qemu-system-x86_64 -drive file=nbd:,file.port=1234,file.host=::1

Clearly the file=... part could use further improvement, but it's a start.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
-
Committed by Kevin Wolf
The NBD block driver supports a URL syntax, for which a URL parser returns separate hostname and port fields. It also supports the traditional qemu syntax encoded in a filename. Until now, after parsing the URL to get each piece of information, a new string was built to be fed to socket functions. Instead of building a string in the URL case that is immediately parsed again, parse the string in both cases and use the QemuOpts interface to qemu-sockets.c.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
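Illustrative only: a hypothetical helper that extracts host and port once, so that both syntaxes can feed the same socket-options path (the real code uses qemu's URI parser and QemuOpts):

    #include <stdio.h>
    #include <string.h>

    static int parse_host_port(const char *spec, char *host, size_t host_len,
                               char *port, size_t port_len)
    {
        const char *colon = strrchr(spec, ':');  /* naive; real code handles IPv6 */
        if (colon == NULL || colon == spec) {
            return -1;
        }
        snprintf(host, host_len, "%.*s", (int)(colon - spec), spec);
        snprintf(port, port_len, "%s", colon + 1);
        return 0;
    }

    int main(void)
    {
        char host[64], port[16];
        if (parse_host_port("localhost:10809", host, sizeof(host),
                            port, sizeof(port)) == 0) {
            printf("host=%s port=%s\n", host, port);
        }
        return 0;
    }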
-
Committed by Kevin Wolf
The new parameter is not used yet.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
-
- 19 Mar 2013, 2 commits
-
Committed by Kevin Wolf
qcow2_open() now needs to be passed an options QDict. This fixes a segfault on the migration target with qcow2.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Liu Yuan
Sheepdog (in neither quorum nor unsafe mode) will refuse to serve I/O requests when the number of alive nodes is less than the number of copies specified by the user. In this case it returns error code 0x19 to the QEMU client, which currently doesn't recognize it. This patch adds an error description for when the QEMU client receives it, instead of plainly printing 'Invalid error code'.
Cc: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Liu Yuan <tailai.ly@taobao.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
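A sketch of the kind of mapping involved; the 0x19 code comes from the commit text, while the message wording and function name here are illustrative:

    static const char *sd_strerror(int rc)
    {
        switch (rc) {
        case 0x19:
            /* fewer alive nodes than the redundancy the user requested */
            return "Too few living nodes to serve the request";
        default:
            return "Invalid error code";
        }
    }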
-
- 15 Mar 2013, 9 commits
-
Committed by Stefan Hajnoczi
Now that each AioContext has a ThreadPool and the main loop AioContext can be fetched with bdrv_get_aio_context(), we can eliminate the concept of a global thread pool from thread-pool.c. The submit functions must take a ThreadPool* argument. block/raw-posix.c and block/raw-win32.c use aio_get_thread_pool(bdrv_get_aio_context(bs)) to fetch the main loop's ThreadPool. tests/test-thread-pool.c must be updated to reflect the new thread_pool_submit() function prototypes.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
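A toy model of the API change, with stub types and a pool that runs work synchronously (the real ThreadPool dispatches to worker threads):

    #include <stdio.h>

    typedef int ThreadPoolFunc(void *arg);

    typedef struct ThreadPool { int unused; } ThreadPool;
    typedef struct AioContext { ThreadPool pool; } AioContext;

    /* before: a global pool and no pool argument; now callers pass the pool */
    static int thread_pool_submit(ThreadPool *pool, ThreadPoolFunc *func, void *arg)
    {
        (void)pool;       /* a real pool queues this on a worker thread */
        return func(arg);
    }

    static ThreadPool *aio_get_thread_pool(AioContext *ctx)
    {
        return &ctx->pool;
    }

    static int work(void *arg)
    {
        printf("running: %s\n", (const char *)arg);
        return 0;
    }

    int main(void)
    {
        AioContext main_loop_ctx = { { 0 } };
        /* raw-posix/raw-win32 style: fetch the main loop's pool, then submit */
        thread_pool_submit(aio_get_thread_pool(&main_loop_ctx), work, "blocking op");
        return 0;
    }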
-
Committed by MORITA Kazutaka
If an io_flush handler is not set, qemu_aio_wait doesn't invoke callbacks.
Signed-off-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by MORITA Kazutaka
Using a blocking socket in coroutine context reduces the chance of switching to other work. This patch makes the sheepdog driver always use a non-blocking fd.
Signed-off-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
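For reference, the standard POSIX way to put an fd into non-blocking mode (QEMU has its own helper for this; plain fcntl is shown for illustration):

    #include <fcntl.h>

    static int set_nonblocking(int fd)
    {
        int flags = fcntl(fd, F_GETFL, 0);
        if (flags < 0) {
            return -1;
        }
        return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
    }

With a non-blocking fd, an operation that would block fails with EAGAIN, so the coroutine can yield until the fd becomes ready instead of stalling the whole thread.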
-
Committed by Paolo Bonzini
Otherwise, live migration of the top layer will miss zero clusters and let the backing file show through. This also matches what is done in qed. QCOW2_CLUSTER_ZERO clusters are invalid in v2 image files. Check this directly in qcow2_get_cluster_offset() instead of replicating the test everywhere.
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
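A hedged sketch of the centralized check; the constant and parameter names approximate qcow2's but are simplified here:

    #include <errno.h>

    enum {
        QCOW2_CLUSTER_UNALLOCATED,
        QCOW2_CLUSTER_NORMAL,
        QCOW2_CLUSTER_COMPRESSED,
        QCOW2_CLUSTER_ZERO,
    };

    /* Done once in the cluster lookup path instead of in every caller. */
    static int check_cluster_type(int qcow_version, int cluster_type)
    {
        if (cluster_type == QCOW2_CLUSTER_ZERO && qcow_version < 3) {
            return -EIO;  /* zero clusters only exist in v3+ images */
        }
        return 0;
    }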
-
Committed by Stefan Hajnoczi
We already flush when the function completes. There is no need to flush after every compressed cluster.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Stefan Hajnoczi
The update_cluster_refcount() function increments/decrements a cluster's refcount and then returns the new refcount value. There is no need to flush, since both update_cluster_refcount() callers already take care of this:

1. qcow2_alloc_bytes() calls update_cluster_refcount() when compressed sectors will be appended to an existing cluster with enough free space. qcow2_alloc_bytes() already flushes, so there is no need to do so in update_cluster_refcount().

2. qcow2_update_snapshot_refcount() sets a cache dependency on refcounts if it needs to update L2 entries. It also flushes before completing.

Removing this flush significantly speeds up qcow2 snapshot creation:

    $ qemu-img create -f qcow2 test.qcow2 -o size=50G,preallocation=metadata
    $ time qemu-img snapshot -c new test.qcow2

Time drops from more than 3 minutes to under 1 second.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Stefan Hajnoczi
Users of qcow2_update_snapshot_refcount() do not flush consistently. qcow2_snapshot_create() flushes, but qcow2_snapshot_goto() and qcow2_snapshot_delete() do not. Solve this by moving the bdrv_flush() into qcow2_update_snapshot_refcount().
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Stefan Hajnoczi
Compressed writes use qcow2_alloc_bytes() to allocate space with byte granularity. The affected clusters' refcounts will be incremented, but we do not need to flush yet. Set an L2 cache dependency on the refcount block cache, so that the refcounts get written out before the L2 updates.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
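A minimal model of a flush-ordering dependency between two metadata caches (an illustration; the real qcow2_cache_set_dependency() differs in detail):

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct Cache {
        const char *name;
        bool dirty;
        struct Cache *depends_on;  /* must reach disk before this cache does */
    } Cache;

    static void cache_flush(Cache *c)
    {
        if (c->depends_on != NULL && c->depends_on->dirty) {
            cache_flush(c->depends_on);  /* refcounts land before L2 entries */
        }
        if (c->dirty) {
            printf("flushing %s\n", c->name);
            c->dirty = false;
        }
    }

    int main(void)
    {
        Cache refcounts = { "refcount blocks", true, NULL };
        Cache l2        = { "L2 tables",       true, &refcounts };
        cache_flush(&l2);  /* prints refcount blocks first, then L2 tables */
        return 0;
    }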
-
Committed by Stefan Hajnoczi
Since qcow2 metadata is cached, we need to flush the caches, not just the underlying file. Use bdrv_flush(bs) instead of bdrv_flush(bs->file). Also add the error return path for when bdrv_flush() fails, and move the flush after the check for qcow2_alloc_clusters() failure, so that the qcow2_alloc_clusters() error return value takes precedence.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-