- 23 July 2013 (3 commits)
-
-
Committed by Michael R. Hines
Using the previous patches, we're now able to timestamp the SETUP state. Once we have this time, let the user know about it in the schema. Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Michael R. Hines <mrhines@us.ibm.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by Michael R. Hines
Code that does need to be visible is kept well contained inside this file, and this is the only new file added by the entire patch. This file includes the entire protocol and interfaces required to perform RDMA migration. Also, the configure and Makefile modifications to link this file are included. Full documentation is in docs/rdma.txt. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Chegu Vinod <chegu_vinod@hp.com> Tested-by: Chegu Vinod <chegu_vinod@hp.com> Tested-by: Michael R. Hines <mrhines@us.ibm.com> Signed-off-by: Michael R. Hines <mrhines@us.ibm.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by Michael R. Hines
This gives RDMA shared access to madvise() on the destination side when an entire chunk is found to be zero. Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Chegu Vinod <chegu_vinod@hp.com> Tested-by: Chegu Vinod <chegu_vinod@hp.com> Tested-by: Michael R. Hines <mrhines@us.ibm.com> Signed-off-by: Michael R. Hines <mrhines@us.ibm.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
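As a rough illustration of the idea (not QEMU's actual code), the sketch below shows how a destination can discard the backing pages of an all-zero chunk with madvise() instead of writing zeroes, so the memory stays unprovisioned while still reading back as zero; the chunk size and setup are invented for the example.

```c
/*
 * Standalone illustration (not QEMU code): dropping the backing pages
 * of an all-zero chunk with madvise() instead of writing zeroes.
 */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t chunk_size = 1 << 20;                 /* hypothetical 1 MiB "chunk" */
    void *chunk = mmap(NULL, chunk_size, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (chunk == MAP_FAILED) {
        perror("mmap");
        return EXIT_FAILURE;
    }

    memset(chunk, 0, chunk_size);                /* pretend the chunk was received */

    /* The received chunk is entirely zero: rather than keeping the freshly
     * faulted-in pages, advise the kernel to discard them. Reads still
     * return zeroes, but no physical memory stays committed. */
    if (madvise(chunk, chunk_size, MADV_DONTNEED) != 0) {
        perror("madvise");
    }

    printf("first byte after MADV_DONTNEED: %d\n", ((unsigned char *)chunk)[0]);
    munmap(chunk, chunk_size);
    return EXIT_SUCCESS;
}
```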
-
- 19 July 2013 (1 commit)
-
-
Committed by Peter Lieven
This patch adds an efficient encoding for zero blocks by adding a new flag indicating that a block is completely zero. Additionally, bdrv_write_zeros() is used at the destination to efficiently write these zeroes. Depending on the implementation, this avoids fully provisioning the destination target. Signed-off-by: Peter Lieven <pl@kamp.de> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
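A minimal standalone sketch of the sender-side decision described above; the flag value, block size and header layout are invented for illustration and are not the patch's actual wire format.

```c
/* Standalone sketch (flag value and header layout are invented, not QEMU's). */
#include <inttypes.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define DEMO_FLAG_ZERO_BLOCK 0x04       /* hypothetical "block is all zero" flag */
#define DEMO_BLOCK_SIZE      (1 << 20)

static bool block_is_zero(const uint8_t *buf, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        if (buf[i] != 0) {
            return false;
        }
    }
    return true;
}

int main(void)
{
    static uint8_t block[DEMO_BLOCK_SIZE];      /* zero-initialized payload */
    uint64_t sector = 2048;
    uint64_t flags = 0;

    if (block_is_zero(block, sizeof(block))) {
        /* Zero block: the header alone is enough; the destination can issue
         * an efficient zero write instead of receiving a buffer of zeroes. */
        flags |= DEMO_FLAG_ZERO_BLOCK;
        printf("sector %" PRIu64 ": send header only (flags=0x%" PRIx64 ")\n",
               sector, flags);
    } else {
        printf("sector %" PRIu64 ": send header + %d payload bytes\n",
               sector, DEMO_BLOCK_SIZE);
    }
    return 0;
}
```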
-
- 13 July 2013 (1 commit)
-
-
Committed by Chegu Vinod
The auto-converge migration capability allows the user to specify whether the live migration sequence should automatically detect and force convergence. Signed-off-by: Chegu Vinod <chegu_vinod@hp.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
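A rough sketch of the kind of convergence check such a capability implies: if the guest dirties memory faster than the stream can carry it for several iterations in a row, slow the guest down. The counters, thresholds and throttle step are invented; this is not the patch's actual mechanism.

```c
/* Standalone sketch of an auto-converge style check; all names, thresholds
 * and the throttle step are invented for illustration. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t dirtied_bytes;      /* bytes dirtied by the guest this iteration */
    uint64_t transferred_bytes;  /* bytes migrated this iteration */
    unsigned bad_iterations;     /* consecutive non-converging iterations */
    unsigned throttle_pct;       /* how hard the vCPUs are currently slowed */
} DemoMigState;

static void demo_autoconverge_check(DemoMigState *s)
{
    if (s->dirtied_bytes > s->transferred_bytes) {
        if (++s->bad_iterations >= 4 && s->throttle_pct < 99) {
            unsigned next = s->throttle_pct + 20;   /* slow the guest further */
            s->throttle_pct = next > 99 ? 99 : next;
            s->bad_iterations = 0;
            printf("not converging: throttling vCPUs to %u%%\n", s->throttle_pct);
        }
    } else {
        s->bad_iterations = 0;                      /* converging again */
    }
}

int main(void)
{
    DemoMigState s = { .dirtied_bytes = 900u << 20, .transferred_bytes = 600u << 20 };

    for (int i = 0; i < 8; i++) {
        demo_autoconverge_check(&s);   /* called once per migration iteration */
    }
    return 0;
}
```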
-
- 28 June 2013 (1 commit)
-
-
Committed by Peter Maydell
Fix compilation failures for linux-user targets following recent migration-related commits bd2fa51f and 43487c67. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 1372362818-4740-1-git-send-email-peter.maydell@linaro.org Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
-
- 27 June 2013 (6 commits)
-
-
Committed by Michael R. Hines
This capability allows you to disable dynamic chunk registration for better throughput on high-performance links. For example, with an 8 GB RAM virtual machine, all 8 GB of memory in active use, and the VM itself completely idle, over a 40 Gbps InfiniBand link: 1. x-rdma-pin-all disabled: total time approximately 7.5 seconds @ 9.5 Gbps. 2. x-rdma-pin-all enabled: total time approximately 4 seconds @ 26 Gbps. These numbers would of course scale up to whatever size of virtual machine you have to migrate using RDMA. Enabling this feature does *not* have any measurable effect on migration *downtime*. This is because, without this feature, all of the memory will already have been registered in advance during the bulk round and does not need to be re-registered during the successive iteration rounds. Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Chegu Vinod <chegu_vinod@hp.com> Reviewed-by: Eric Blake <eblake@redhat.com> Tested-by: Chegu Vinod <chegu_vinod@hp.com> Tested-by: Michael R. Hines <mrhines@us.ibm.com> Signed-off-by: Michael R. Hines <mrhines@us.ibm.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by Michael R. Hines
These are the prototypes and implementation of new hooks that RDMA takes advantage of to perform dynamic page registration. An optional hook is also introduced for a custom function to be able to override the default save_page function. Also included are the prototypes and accessor methods used by arch_init.c, which invoke functions inside savevm.c to call out to the hooks that may or may not have been overridden inside of QEMUFileOps. Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Chegu Vinod <chegu_vinod@hp.com> Tested-by: Chegu Vinod <chegu_vinod@hp.com> Tested-by: Michael R. Hines <mrhines@us.ibm.com> Signed-off-by: Michael R. Hines <mrhines@us.ibm.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
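A standalone sketch of the hook pattern being described: a table of optional function pointers where a transport such as RDMA can override the default save_page path, with the caller falling back to the default when a hook is absent or declines the page. The struct and function names are invented, not QEMU's QEMUFileOps definitions.

```c
/* Standalone sketch: optional transport hooks with a save_page override.
 * All names are invented for illustration. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define DEMO_SAVE_PAGE_NOT_HANDLED ((size_t)-1)

typedef struct DemoFileHooks {
    /* Called before/after each RAM iteration; may be NULL. */
    int (*before_ram_iterate)(void *opaque);
    int (*after_ram_iterate)(void *opaque);
    /* Optional override for sending one page; may be NULL. */
    size_t (*save_page)(void *opaque, uint64_t block_offset,
                        uint64_t offset, size_t size);
} DemoFileHooks;

static size_t default_save_page(uint64_t block_offset, uint64_t offset, size_t size)
{
    printf("default path: copy page at %llu+%llu (%zu bytes)\n",
           (unsigned long long)block_offset, (unsigned long long)offset, size);
    return size;
}

static size_t send_page(const DemoFileHooks *hooks, void *opaque,
                        uint64_t block_offset, uint64_t offset, size_t size)
{
    if (hooks->save_page) {
        size_t sent = hooks->save_page(opaque, block_offset, offset, size);
        if (sent != DEMO_SAVE_PAGE_NOT_HANDLED) {
            return sent;          /* the transport (e.g. RDMA) handled the page */
        }
    }
    return default_save_page(block_offset, offset, size);
}

static size_t rdma_like_save_page(void *opaque, uint64_t block_offset,
                                  uint64_t offset, size_t size)
{
    (void)opaque;
    printf("override: register and write page at %llu+%llu\n",
           (unsigned long long)block_offset, (unsigned long long)offset);
    return size;
}

int main(void)
{
    DemoFileHooks hooks = { .save_page = rdma_like_save_page };
    send_page(&hooks, NULL, 0, 4096, 4096);
    return 0;
}
```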
-
Committed by Michael R. Hines
RDMA uses this to flush the control channel before sending its own message to handle page registrations. Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Chegu Vinod <chegu_vinod@hp.com> Tested-by: Chegu Vinod <chegu_vinod@hp.com> Tested-by: Michael R. Hines <mrhines@us.ibm.com> Signed-off-by: Michael R. Hines <mrhines@us.ibm.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by Michael R. Hines
QEMUFileRDMA also has read and write modes. This function is now shared to reduce code duplication. Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Chegu Vinod <chegu_vinod@hp.com> Tested-by: Chegu Vinod <chegu_vinod@hp.com> Tested-by: Michael R. Hines <mrhines@us.ibm.com> Signed-off-by: Michael R. Hines <mrhines@us.ibm.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by Michael R. Hines
This exposes throughput (in megabits/sec) through QMP. Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Chegu Vinod <chegu_vinod@hp.com> Tested-by: Chegu Vinod <chegu_vinod@hp.com> Tested-by: Michael R. Hines <mrhines@us.ibm.com> Signed-off-by: Michael R. Hines <mrhines@us.ibm.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by Michael R. Hines
RDMA writes happen asynchronously, and thus the performance accounting also needs to be able to occur asynchronously. This allows callers to call into savevm.c to update f->pos, as well as into arch_init.c to update the acct_info structure with up-to-date values when the RDMA transfer actually completes. Reviewed-by: Juan Quintela <quintela@redhat.com> Tested-by: Chegu Vinod <chegu_vinod@hp.com> Tested-by: Michael R. Hines <mrhines@us.ibm.com> Signed-off-by: Michael R. Hines <mrhines@us.ibm.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
- 15 April 2013 (1 commit)
-
-
Committed by Kevin Wolf
Instead of breaking up RAM state into many small chunks, pass the iovec to the block layer for better performance. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-
- 05 April 2013 (2 commits)
-
-
Committed by Peter Maydell
Add support for migrating two-dimensional arrays, by defining a set of new macros VMSTATE_*_2DARRAY paralleling the existing VMSTATE_*_ARRAY macros. 2D arrays are handled the same as flat arrays for actual state serialization; the only difference is that the type check has to change for a 2D array. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Igor Mitsyanko <i.mitsyanko@gmail.com> Message-id: 1363975375-3166-2-git-send-email-peter.maydell@linaro.org
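A sketch of how a device might use one of the new macros, assuming a VMSTATE_UINT32_2DARRAY(field, State, n1, n2) variant with this argument order and QEMU's vmstate header on the include path; the device, field name and dimensions are invented, and this only builds inside the QEMU tree.

```c
/* Sketch of a 2D-array vmstate field; names and dimensions are invented. */
#include <stdint.h>
#include "migration/vmstate.h"

typedef struct DemoDeviceState {
    uint32_t regs[4][8];          /* hypothetical 4x8 register file */
} DemoDeviceState;

static const VMStateDescription vmstate_demo_device = {
    .name = "demo-device",
    .version_id = 1,
    .minimum_version_id = 1,
    .fields = (VMStateField[]) {
        /* Serialized exactly like a flat 32-element array; only the
         * compile-time type check differs from VMSTATE_UINT32_ARRAY. */
        VMSTATE_UINT32_2DARRAY(regs, DemoDeviceState, 4, 8),
        VMSTATE_END_OF_LIST()
    }
};
```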
-
Committed by Igor Mitsyanko
This macro can be used to migrate a dynamically allocated buffer of known size. Signed-off-by: Igor Mitsyanko <i.mitsyanko@gmail.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 1362923278-4080-2-git-send-email-i.mitsyanko@gmail.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
- 26 March 2013 (8 commits)
-
-
Committed by Orit Wasserman
This allows us to add a buffer to the iovec to send without copying it into the static buffer; the buffer will be sent later when qemu_fflush is called. Signed-off-by: Orit Wasserman <owasserm@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
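A standalone sketch of the copy-free idea: record the caller's pointer and length in an iovec and send everything later with one writev(), on the assumption that the caller keeps the buffer valid until the flush. Names are invented, not QEMU's QEMUFile API.

```c
/* Standalone sketch of copy-free buffering into an iovec, flushed later
 * with writev(); names are invented for illustration. */
#include <stdio.h>
#include <string.h>
#include <sys/uio.h>
#include <unistd.h>

#define DEMO_MAX_IOV 64

typedef struct {
    int fd;
    struct iovec iov[DEMO_MAX_IOV];
    int iovcnt;
} DemoFile;

/* Queue a caller-owned buffer without copying it. The caller must keep the
 * buffer alive and unmodified until demo_flush() runs. */
static int demo_put_buffer_async(DemoFile *f, const void *buf, size_t len)
{
    if (f->iovcnt == DEMO_MAX_IOV) {
        return -1;                     /* caller should flush first */
    }
    f->iov[f->iovcnt].iov_base = (void *)buf;
    f->iov[f->iovcnt].iov_len = len;
    f->iovcnt++;
    return 0;
}

static ssize_t demo_flush(DemoFile *f)
{
    ssize_t done = writev(f->fd, f->iov, f->iovcnt);
    f->iovcnt = 0;                     /* a real implementation handles short writes */
    return done;
}

int main(void)
{
    DemoFile f = { .fd = STDOUT_FILENO };
    const char *header = "header ", *payload = "payload\n";

    demo_put_buffer_async(&f, header, strlen(header));
    demo_put_buffer_async(&f, payload, strlen(payload));
    demo_flush(&f);                    /* one writev() sends both pieces */
    return 0;
}
```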
-
Committed by Orit Wasserman
This will allow us to write an iovec. Signed-off-by: Orit Wasserman <owasserm@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by Peter Lieven
During the bulk stage of RAM migration, if a page is a zero page, do not send it at all; the memory at the destination reads as zero anyway. Even if there is an madvise with QEMU_MADV_DONTNEED at the target upon receipt of a zero page, I have observed that the target starts swapping if the memory is overcommitted. It seems that the pages are dropped asynchronously. This patch also updates QMP to return the number of skipped pages in MigrationStats. Signed-off-by: Peter Lieven <pl@kamp.de> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
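A standalone sketch of the sender-side check during the bulk round: scan each page for all-zero content and count it as skipped instead of queuing it for transfer. The scan and the counter names are illustrative, not QEMU's actual code.

```c
/* Standalone sketch of skipping zero pages in the bulk round; the scan and
 * the "skipped" counter are illustrative only. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define DEMO_PAGE_SIZE 4096

static bool page_is_zero(const void *page)
{
    /* Guest pages are page-aligned, so scanning a word at a time is safe. */
    const uint64_t *p = page;
    for (size_t i = 0; i < DEMO_PAGE_SIZE / sizeof(*p); i++) {
        if (p[i] != 0) {
            return false;
        }
    }
    return true;
}

int main(void)
{
    static uint8_t ram[8][DEMO_PAGE_SIZE];     /* pretend guest RAM, all zero */
    uint64_t sent_pages = 0, skipped_pages = 0;

    ram[3][100] = 0xab;                        /* dirty one page */

    for (int i = 0; i < 8; i++) {
        if (page_is_zero(ram[i])) {
            skipped_pages++;                   /* would be reported via MigrationStats */
        } else {
            sent_pages++;                      /* would be queued for transfer */
        }
    }
    printf("sent=%llu skipped=%llu\n",
           (unsigned long long)sent_pages, (unsigned long long)skipped_pages);
    return 0;
}
```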
-
Committed by David Gibson
The VMSTATE_BUFFER_MULTIPLY macro is misnamed: it actually specifies a variably sized buffer with VMS_VBUFFER, so it should be named VMSTATE_VBUFFER_MULTIPLY. This patch fixes this (the macro had no current users under either name). In addition, unlike the other VMSTATE_VBUFFER variants, this macro did not specify VMS_POINTER. This patch fixes this bug as well. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by David Gibson
Currently the savevm code contains a VMSTATE_STRUCT_VARRAY_POINTER_INT32 helper (a variably sized array with the number of elements in an int32_t), but not VMSTATE_STRUCT_VARRAY_POINTER_UINT32 (... with the number of elements in a uint32_t). This patch (trivially) fixes the deficiency. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by David Gibson
The current savevm code includes VMSTATE helpers for a number of commonly used data types, but not for the float64 type used by the internal floating point emulation code. This patch fixes the deficiency. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by David Gibson
This adds an _EQUAL VMSTATE helper for target_ulongs, defined in terms of VMSTATE_UINT32_EQUAL or VMSTATE_UINT64_EQUAL as appropriate. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Juan Quintela <quintela@redhat.com>
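A sketch of how such a wrapper can be expressed in terms of the two helpers the commit names, selected on TARGET_LONG_BITS; this is only meaningful inside a QEMU target build, and the wrapper name here is illustrative rather than the macro's actual name.

```c
/* Sketch (inside the QEMU tree): dispatch a target_ulong equality check to
 * the 32-bit or 64-bit helper depending on the target. The wrapper name is
 * illustrative; check the QEMU headers for the real definition. */
#if TARGET_LONG_BITS == 64
#define VMSTATE_UINTTL_EQUAL_DEMO(_f, _s) VMSTATE_UINT64_EQUAL(_f, _s)
#else
#define VMSTATE_UINTTL_EQUAL_DEMO(_f, _s) VMSTATE_UINT32_EQUAL(_f, _s)
#endif
```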
-
Committed by David Gibson
The savevm code already includes a number of *_EQUAL helpers which act as sanity checks, verifying that the configuration of the saved state matches that of the machine we're loading into. Variants already exist for 8-bit, 16-bit and 32-bit integers, but not for 64-bit integers. This patch fills that hole, adding a UINT64 version. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
- 12 March 2013 (2 commits)
-
-
Committed by Andreas Färber
Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Eduardo Habkost <ehabkost@redhat.com> Signed-off-by: Andreas Färber <afaerber@suse.de>
-
Committed by Andreas Färber
This avoids adding a duplicate stub for CONFIG_USER_ONLY. Suggested-by: Eduardo Habkost <ehabkost@redhat.com> Reviewed-by: Eduardo Habkost <ehabkost@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Andreas Färber <afaerber@suse.de>
-
- 11 March 2013 (15 commits)
-
-
Committed by Peter Lieven
The page cache frees all data on finish, on resize, and if there is a collision on insert. So it should be the cache's responsibility to dup the data that is stored in the cache. Signed-off-by: Peter Lieven <pl@kamp.de> Signed-off-by: Orit Wasserman <owasserm@redhat.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Juan Quintela <quintela@redhat.com>
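A standalone sketch of the ownership rule being described: the cache duplicates the data on insert and frees its own copies on eviction and teardown, so callers keep ownership of their buffers. Names and structure are invented, not QEMU's page cache API.

```c
/* Standalone sketch of a cache that duplicates data on insert and frees its
 * own copies; names are illustrative only. */
#include <stdlib.h>
#include <string.h>

#define DEMO_SLOTS 128

typedef struct {
    void *data[DEMO_SLOTS];
    size_t item_size;
} DemoCache;

static int demo_cache_insert(DemoCache *c, unsigned long addr, const void *pdata)
{
    unsigned slot = addr % DEMO_SLOTS;
    void *copy = malloc(c->item_size);

    if (!copy) {
        return -1;
    }
    memcpy(copy, pdata, c->item_size);   /* the cache owns its own copy... */
    free(c->data[slot]);                 /* ...and frees what it evicts */
    c->data[slot] = copy;
    return 0;
}

static void demo_cache_fini(DemoCache *c)
{
    for (unsigned i = 0; i < DEMO_SLOTS; i++) {
        free(c->data[i]);                /* safe: free(NULL) is a no-op */
    }
}

int main(void)
{
    DemoCache c = { .item_size = 4096 };
    char page[4096] = { 1 };

    demo_cache_insert(&c, 0x1000, page); /* caller keeps ownership of 'page' */
    demo_cache_fini(&c);
    return 0;
}
```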
-
Committed by Paolo Bonzini
The indirection is useless now. Backends can open s->file directly. Reviewed-by: Orit Wasserman <owasserm@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by Paolo Bonzini
Rate limiting is now simply a byte counter; clients call qemu_file_rate_limit() manually to determine if they have to exit. So it is possible and simple to move the functionality to QEMUFile. This makes the remaining functionality of s->file redundant; in the next patch we can remove it and write directly to s->migration_file. Reviewed-by: Orit Wasserman <owasserm@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
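A standalone sketch of rate limiting as a plain byte counter: writes add to the counter, clients poll a limit check between chunks and back off when it trips, and the counter is reset once per time slice. Names are invented, not QEMU's QEMUFile API.

```c
/* Standalone sketch of byte-counter rate limiting; names are illustrative. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    int64_t bytes_xfer;   /* bytes written in the current time slice */
    int64_t xfer_limit;   /* budget for the slice */
} DemoFile;

static void demo_put_bytes(DemoFile *f, int64_t len)
{
    f->bytes_xfer += len;          /* accounting only; actual I/O happens elsewhere */
}

/* Clients call this between chunks and stop sending when it returns nonzero. */
static int demo_rate_limit(const DemoFile *f)
{
    return f->bytes_xfer >= f->xfer_limit;
}

/* Called once per slice (e.g. every 100 ms) to reset the budget. */
static void demo_reset_rate_limit(DemoFile *f)
{
    f->bytes_xfer = 0;
}

int main(void)
{
    DemoFile f = { .xfer_limit = 32 * 1024 };
    int chunks = 0;

    while (!demo_rate_limit(&f)) {
        demo_put_bytes(&f, 4096);
        chunks++;
    }
    printf("sent %d chunks before hitting the limit\n", chunks);
    demo_reset_rate_limit(&f);
    return 0;
}
```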
-
Committed by Paolo Bonzini
Second, drop the file descriptor indirection, and write directly to the QEMUFile. Reviewed-by: Orit Wasserman <owasserm@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by Paolo Bonzini
As a start, use QEMUFile to store the destination and close it. qemu_get_fd gets a file descriptor that will be used by the write callbacks. Reviewed-by: Orit Wasserman <owasserm@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by Paolo Bonzini
Reviewed-by: Orit Wasserman <owasserm@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by Paolo Bonzini
There is no reason for outgoing exec migration to do popen manually anymore (the reason used to be that we needed the FILE* to make it non-blocking). Use qemu_popen_cmd. Reviewed-by: Orit Wasserman <owasserm@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by Paolo Bonzini
Reviewed-by: Orit Wasserman <owasserm@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by Paolo Bonzini
Buffering was needed because blocking writes could take a long time and starve other threads seeking to grab the big QEMU mutex. Now that all writes (except within _complete callbacks) are done outside the big QEMU mutex, we do not need buffering at all. Reviewed-by: Orit Wasserman <owasserm@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by Paolo Bonzini
Only the migration_bitmap_sync() call needs the iothread lock. Reviewed-by: Orit Wasserman <owasserm@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by Paolo Bonzini
This makes it possible to do blocking writes directly to the socket, with no buffer in the middle. For RAM, only the migration_bitmap_sync() call needs the iothread lock. For block migration, it is needed by the block layer (including bdrv_drain_all and dirty bitmap access), but because some code is shared between iterate and complete, all of mig_save_device_dirty is run with the lock taken. In the savevm case, the iterate callback runs within the big lock. This is annoying because it complicates the rules. Luckily we do not need to do anything about it: the RAM iterate callback does not need the iothread lock, and block migration never runs during savevm. Reviewed-by: Orit Wasserman <owasserm@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
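A standalone sketch of the locking rule described here: the migration thread takes the big lock only around the shared-state bitmap sync and performs the blocking write without it. A pthread mutex stands in for QEMU's iothread lock (build with -lpthread); names are illustrative.

```c
/* Standalone sketch: blocking write outside the big lock, sync under it. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;

static void sync_dirty_bitmap(void)
{
    /* Touches state shared with the vCPU/iothread side. */
    printf("bitmap sync under the big lock\n");
}

static void send_ram_chunk(void)
{
    /* Blocking socket write: must NOT hold the big lock here. */
    printf("blocking write without the big lock\n");
}

static void *migration_thread(void *arg)
{
    (void)arg;
    for (int iter = 0; iter < 3; iter++) {
        pthread_mutex_lock(&big_lock);
        sync_dirty_bitmap();
        pthread_mutex_unlock(&big_lock);

        send_ram_chunk();
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;

    if (pthread_create(&tid, NULL, migration_thread, NULL) != 0) {
        return 1;
    }
    pthread_join(tid, NULL);
    return 0;
}
```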
-
Committed by Paolo Bonzini
This groups together the callbacks that later will have similar locking rules. Reviewed-by: Orit Wasserman <owasserm@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by Paolo Bonzini
Perform final cleanup in a bottom half, and add joining the thread to the series of cleanup actions. migrate_fd_error remains for connection errors, but it doesn't need to clean up anything anymore. Reviewed-by: Orit Wasserman <owasserm@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by Paolo Bonzini
Always use qemu_file_get_error to detect errors, since that is how QEMUFile itself drops I/O after an error occurs. There is no need to propagate and check return values all the time. Also remove the "complete" member, since we know that it is set (via migrate_fd_cleanup) only when the state changes. Reviewed-by: Orit Wasserman <owasserm@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by Paolo Bonzini
Right now, migration cannot entirely rely on QEMUFile's automatic drop of I/O after an error, because it does its "real" I/O outside the put_buffer callback. To fix this until buffering is gone, expose qemu_file_set_error, which we will use in buffered_flush. Similarly, buffered_flush is not a complete flush because some data may still reside in the QEMUFile's own buffer. This somewhat complicates the process of closing the migration thread. Again, when buffering is gone buffered_flush will disappear and calling qemu_fflush will not be needed; in the meanwhile, we expose the function for use in migration.c. Reviewed-by: Orit Wasserman <owasserm@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
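A standalone sketch of the sticky-error behaviour that exposing qemu_file_set_error relies on: the first error is recorded on the file, later writes become no-ops, and callers check the stored error at convenient points instead of threading return values everywhere. Names are invented, not QEMU's QEMUFile API.

```c
/* Standalone sketch of a sticky per-file error used to drop I/O after the
 * first failure; names are illustrative only. */
#include <errno.h>
#include <stdio.h>

typedef struct {
    int last_error;    /* 0, or the first -errno that occurred */
} DemoFile;

static void demo_file_set_error(DemoFile *f, int ret)
{
    if (f->last_error == 0) {
        f->last_error = ret;          /* keep only the first error */
    }
}

static int demo_file_get_error(const DemoFile *f)
{
    return f->last_error;
}

static void demo_put_byte(DemoFile *f, int v)
{
    if (f->last_error) {
        return;                       /* I/O is silently dropped after an error */
    }
    /* ... a real write would happen here; pretend the link broke ... */
    demo_file_set_error(f, -EPIPE);
    (void)v;
}

int main(void)
{
    DemoFile f = { 0 };

    demo_put_byte(&f, 1);
    demo_put_byte(&f, 2);             /* dropped: the error is already set */
    printf("error = %d\n", demo_file_get_error(&f));
    return 0;
}
```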
-