- 26 June 2017, 40 commits
-
Committed by Kevin Wolf
Most of the qed code is now synchronous and matches the coroutine model. One notable exception is the serialisation between requests, which can still schedule a callback. Before we can replace this with coroutine locks, let's convert the driver's external interfaces to the coroutine versions. We need to be careful to handle both requests that call the completion callback directly from the calling coroutine (i.e. fully synchronous code) and requests that involve some callback, in which case we need to yield and wait for the completion callback coming from outside the coroutine. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Manos Pitsidianakis <el13635@mail.ntua.gr> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
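A minimal sketch of the yield-and-wait pattern described above, assuming hypothetical names (QEDCoRequest, qed_start_request()); qemu_coroutine_enter_if_inactive() and qemu_coroutine_yield() are the real QEMU coroutine APIs:

```c
typedef struct QEDCoRequest {
    Coroutine *co;   /* the coroutine issuing the request */
    int ret;
    bool done;
} QEDCoRequest;

/* Hypothetical entry point that kicks off the request machinery. */
static void qed_start_request(BlockDriverState *bs,
                              BlockCompletionFunc *cb, void *opaque);

/* Completion callback, possibly invoked from outside the coroutine. */
static void qed_co_request_cb(void *opaque, int ret)
{
    QEDCoRequest *req = opaque;

    req->ret = ret;
    req->done = true;
    /* Wake the coroutine only if it already yielded; on the fully
     * synchronous path this callback runs before any yield happens. */
    qemu_coroutine_enter_if_inactive(req->co);
}

static int coroutine_fn qed_co_request(BlockDriverState *bs)
{
    QEDCoRequest req = { .co = qemu_coroutine_self() };

    qed_start_request(bs, qed_co_request_cb, &req);
    while (!req.done) {
        qemu_coroutine_yield();
    }
    return req.ret;
}
```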
-
Committed by Kevin Wolf
Instead of calling itself recursively as the last thing, just convert qed_aio_next_io() into a loop. This patch is best reviewed with 'git show -w' because most of it is just whitespace changes. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
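An illustrative before/after of the transformation, using hypothetical helpers (Request, do_one_step(), complete()) rather than the actual qed code:

```c
typedef struct Request { bool finished; } Request;
static int do_one_step(Request *req);
static void complete(Request *req, int ret);

/* Before: the function re-invokes itself as its final action. */
static void process_next_recursive(Request *req)
{
    int ret = do_one_step(req);

    if (ret < 0 || req->finished) {
        complete(req, ret);
        return;
    }
    process_next_recursive(req);    /* tail call */
}

/* After: the tail call becomes another loop iteration; the body is
 * unchanged, which is why 'git show -w' makes the diff readable. */
static void process_next_loop(Request *req)
{
    while (1) {
        int ret = do_one_step(req);

        if (ret < 0 || req->finished) {
            complete(req, ret);
            return;
        }
    }
}
```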
-
Committed by Kevin Wolf
All callers pass ret = 0, so we can just remove it. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
Don't recurse into qed_aio_next_io() and qed_aio_complete() here, but just return an error code and let the caller handle it. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
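A hedged sketch of the conversion this and the following commits apply; the helper name and body are illustrative, not the actual qed functions:

```c
/* Before: the helper drives the state machine itself. */
static void do_step_cb(QEDAIOCB *acb, int ret)
{
    if (ret < 0) {
        qed_aio_complete(acb, ret);   /* recurse into completion */
        return;
    }
    /* ... do the actual work ... */
    qed_aio_next_io(acb, 0);          /* recurse into the next step */
}

/* After: the helper only reports its result; the single caller decides
 * whether to continue with qed_aio_next_io() or bail out into
 * qed_aio_complete(). */
static int do_step(QEDAIOCB *acb)
{
    /* ... do the actual work ... */
    return 0; /* or a negative errno on failure */
}
```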
-
Committed by Kevin Wolf
Don't recurse into qed_aio_next_io() and qed_aio_complete() here, but just return an error code and let the caller handle it. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
Don't recurse into qed_aio_next_io() and qed_aio_complete() here, but just return an error code and let the caller handle it. While refactoring qed_aio_write_alloc() to accommodate the change, qed_aio_write_zero_cluster() ended up with a single line, so I chose to inline that line and remove the function completely. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
Don't recurse into qed_aio_next_io() and qed_aio_complete() here, but just return an error code and let the caller handle it. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
Don't recurse into qed_aio_next_io() and qed_aio_complete() here, but just return an error code and let the caller handle it. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
Don't recurse into qed_aio_next_io() and qed_aio_complete() here, but just return an error code and let the caller handle it. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
qed_commit_l2_update() is unconditionally called at the end of qed_aio_write_l1_update(). Inline it. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
Note that this code is generally not running in coroutine context, so this is an actual blocking synchronous operation. We'll fix this in a moment. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
Note that this code is generally not running in coroutine context, so this is an actual blocking synchronous operation. We'll fix this in a moment. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
The GenericCB infrastructure isn't used any more. Remove it. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
Note that this code is generally not running in coroutine context, so this is an actual blocking synchronous operation. We'll fix this in a moment. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
Note that this code is generally not running in coroutine context, so this is an actual blocking synchronous operation. We'll fix this in a moment. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
With this change, qed_aio_write_prefill() and qed_aio_write_postfill() collapse into a single function. This is reflected by a rename of the combined function to qed_aio_write_cow(). Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
Note that this code is generally not running in coroutine context, so this is an actual blocking synchronous operation. We'll fix this in a moment. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
Note that this code is generally not running in coroutine context, so this is an actual blocking synchronous operation. We'll fix this in a moment. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
Instead of passing the return value to a callback, return it to the caller so that the callback can be inlined there. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
Note that this code is generally not running in coroutine context, so this is an actual blocking synchronous operation. We'll fix this in a moment. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Kevin Wolf
The qed driver serialises allocating write requests. When the active allocation is finished, the AIO callback is called, but after this, the next allocating request is immediately processed instead of leaving the coroutine. Resuming another allocation request in the same request coroutine means that the request now runs in the wrong coroutine. The following is one of the possible effects of this: The completed request will generally reenter its request coroutine in a bottom half, expecting that it completes the request in bdrv_driver_pwritev(). However, if the second request actually yielded before leaving the coroutine, the reused request coroutine is in an entirely different place and is reentered prematurely. Not a good idea. Let's make sure that we exit the coroutine after completing the first request by resuming the next allocating request only with a bottom half. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
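A sketch of the bottom-half fix described above; qed_restart_alloc_bh() and the queue access are illustrative, while aio_bh_schedule_oneshot() and bdrv_get_aio_context() are the real QEMU APIs:

```c
static void qed_restart_alloc_bh(void *opaque)
{
    BDRVQEDState *s = opaque;
    QEDAIOCB *next = QSIMPLEQ_FIRST(&s->allocating_write_reqs);

    /* Runs from the bottom half, i.e. after the completed request has
     * left its coroutine, so the next request gets a fresh context. */
    if (next) {
        qed_aio_start_io(next);
    }
}

/* At completion of the current allocating request, instead of starting
 * the next one directly: */
aio_bh_schedule_oneshot(bdrv_get_aio_context(s->bs),
                        qed_restart_alloc_bh, s);
```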
-
Committed by Alberto Garcia
We already have functions for doing these calculations, so let's use them instead of doing everything by hand. This makes the code a bit more readable. Signed-off-by: Alberto Garcia <berto@igalia.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
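For illustration only, assuming the helpers meant are qcow2's offset_into_cluster() and size_to_clusters() (an assumption; the commit message does not name them):

```c
/* Open-coded calculations ... */
uint64_t offset_in_cluster = guest_offset & (s->cluster_size - 1);
uint64_t nb_clusters = (bytes + s->cluster_size - 1) >> s->cluster_bits;

/* ... replaced by the existing helpers: */
offset_in_cluster = offset_into_cluster(s, guest_offset);
nb_clusters = size_to_clusters(s, bytes);
```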
-
Committed by Alberto Garcia
If the guest tries to write data that results in the allocation of a new cluster, instead of writing the guest data first and then the data from the COW regions, write everything together using one single I/O operation. This can improve the write performance by 25% or more, depending on several factors such as the media type, the cluster size and the I/O request size. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
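A hedged sketch of building that single request; the variable names (start_buf, guest_qiov, cluster_offset, etc.) are illustrative, while the qemu_iovec_*() and bdrv_co_pwritev() calls are the real QEMU APIs:

```c
QEMUIOVector qiov;
int ret;

qemu_iovec_init(&qiov, 2 + guest_qiov->niov);
qemu_iovec_add(&qiov, start_buf, start_len);               /* COW before */
qemu_iovec_concat(&qiov, guest_qiov, 0, guest_qiov->size); /* guest data */
qemu_iovec_add(&qiov, end_buf, end_len);                   /* COW after  */

/* One write instead of three. */
ret = bdrv_co_pwritev(bs->file, cluster_offset, qiov.size, &qiov, 0);
qemu_iovec_destroy(&qiov);
```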
-
Committed by Alberto Garcia
Instead of passing a single buffer pointer to do_perform_cow_write(), pass a QEMUIOVector. This will allow us to merge the write requests for the COW regions and the actual data into a single one. Although do_perform_cow_read() does not strictly need to change its API, we're doing it here as well for consistency. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Alberto Garcia
Reading both COW regions requires two separate requests, but it's perfectly possible to merge them and perform only one. This generally improves performance, particularly on rotating disk drives. The downside is that the data in the middle region is read but discarded. This patch takes a conservative approach and only merges reads when the size of the middle region is <= 16KB. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
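The heuristic, sketched with illustrative field names:

```c
/* Gap between the end of the first COW region and the start of the
 * second one; these bytes are read and then simply discarded. */
uint64_t gap = end_region->offset -
               (start_region->offset + start_region->nb_bytes);

if (gap <= 16 * 1024) {
    /* One read covering start region + gap + end region. */
} else {
    /* Two separate reads, as before. */
}
```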
-
Committed by Alberto Garcia
This patch splits do_perform_cow() into three separate functions to read, encrypt and write the COW regions. perform_cow() can now read both regions first, then encrypt them and finally write them to disk. The memory allocation is also done in this function now, using one single buffer large enough to hold both regions. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
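A hedged outline of the restructured flow; the perform_cow_*() names follow the commit message, but their signatures are assumptions:

```c
static int perform_cow_read(BlockDriverState *bs, QCowL2Meta *m, uint8_t *buf);
static int perform_cow_encrypt(BlockDriverState *bs, QCowL2Meta *m, uint8_t *buf);
static int perform_cow_write(BlockDriverState *bs, QCowL2Meta *m, uint8_t *buf);

static int perform_cow(BlockDriverState *bs, QCowL2Meta *m)
{
    /* One buffer large enough for both COW regions. */
    uint8_t *buf = qemu_blockalign(bs, m->cow_start.nb_bytes +
                                       m->cow_end.nb_bytes);
    int ret;

    ret = perform_cow_read(bs, m, buf);      /* read both regions */
    if (ret < 0) {
        goto out;
    }
    ret = perform_cow_encrypt(bs, m, buf);   /* encrypt them if needed */
    if (ret < 0) {
        goto out;
    }
    ret = perform_cow_write(bs, m, buf);     /* write them to disk */
out:
    qemu_vfree(buf);
    return ret;
}
```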
-
Committed by Alberto Garcia
Instead of calling perform_cow() twice with a different COW region each time, call it just once and make perform_cow() handle both regions. This patch simply moves code around. The next one will do the actual reordering of the COW operations. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Alberto Garcia
Qcow2COWRegion has two attributes: - The offset of the COW region from the start of the first cluster touched by the I/O request. Since it's always going to be positive and the maximum request size is at most INT_MAX, we can use a regular unsigned int to store this offset. - The size of the COW region in bytes. This is guaranteed to be >= 0, so we should use an unsigned type instead. On x86_64 this reduces the size of Qcow2COWRegion from 16 to 8 bytes. It will also help keep some assertions simpler now that we know that there are no negative numbers. The prototype of do_perform_cow() is also updated to reflect these changes. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
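The layout change, sketched in a form consistent with the message:

```c
/* Before: 16 bytes on x86_64 (8-byte offset, 4-byte size, plus 4 bytes
 * of alignment padding). */
struct cow_region_before {
    uint64_t offset;
    int nb_bytes;
};

/* After: 8 bytes, since the offset is positive and bounded by the
 * maximum request size (INT_MAX) and the size is never negative. */
struct cow_region_after {
    unsigned offset;
    unsigned nb_bytes;
};
```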
-
Committed by Alberto Garcia
We are using the return value of qcow2_encrypt_sectors() to detect problems but we are throwing away the returned Error since we have no way to report it to the user. Therefore we can simply get rid of the local Error variable and pass NULL instead. Alternatively we could try to figure out a way to pass the original error instead of simply returning -EIO, but that would be more invasive, so let's keep the current approach. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Stephen Bates
Add the ability for the NVMe model to support both the RDS and WDS modes in the Controller Memory Buffer. Although not currently supported in the upstream Linux kernel, a fork with support exists [1] and user-space test programs that build on this also exist [2]. Useful for testing CMB functionality in preparation for real CMB enabled NVMe devices (coming soon). [1] https://github.com/sbates130272/linux-p2pmem [2] https://github.com/sbates130272/p2pmem-test Signed-off-by: Stephen Bates <sbates@raithlin.com> Reviewed-by: Logan Gunthorpe <logang@deltatee.com> Reviewed-by: Keith Busch <keith.busch@intel.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Stefan Hajnoczi
Perform the savevm/loadvm test with both iothread on and off. This covers the recently found savevm/loadvm hang when iothread is enabled. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Stefan Hajnoczi
The legacy -hda option does not support -drive/-device parameters. They will be required by the next patch that extends this test case. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Stefan Hajnoczi
Avoid duplicating the QEMU command-line. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Stefan Hajnoczi
migration_incoming_state_destroy() uses qemu_fclose() on the vmstate file. Make sure to call it inside an AioContext acquire/release region. This fixes a 'qemu: qemu_mutex_unlock: Operation not permitted' abort in loadvm. This patch closes the vmstate file before ending the drained region. Previously we closed the vmstate file after ending the drained region. The order does not matter. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
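A sketch of the described ordering; migration_incoming_state_destroy(), the aio_context_*() calls and bdrv_drain_all_end() are real QEMU functions, but the surrounding control flow is illustrative:

```c
aio_context_acquire(aio_context);
/* Calls qemu_fclose() on the vmstate file, which needs the context. */
migration_incoming_state_destroy();
aio_context_release(aio_context);

/* End the drained region only after the file is closed. */
bdrv_drain_all_end();
```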
-
Committed by Alberto Garcia
There used to be throttle_timers_{detach,attach}_aio_context() calls in bdrv_set_aio_context(), but since 7ca7f0f6 they are now in blk_set_aio_context(). Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Committed by Kevin Wolf
This documents the driver-specific options for the raw, qcow2 and file block drivers in the man page. For everything else, we refer to the QAPI documentation. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
-