- 20 January 2020, 1 commit
-
-
Committed by Dr. David Alan Gilbert
When using hugepages, rate limiting is necessary within each huge page, since a 1G huge page can take a significant time to send and you otherwise end up with bursty behaviour. Fixes: 4c011c37 ("postcopy: Send whole huge pages") Reported-by: Lin Ma <LMa@suse.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
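A minimal self-contained sketch of the idea (names and structure are illustrative, not QEMU's actual code, which lives in migration/ram.c): the bandwidth budget is checked between the 4K target pages that make up one huge page, rather than only once per 1G huge page.

    /* Toy model of per-target-page rate limiting inside one huge page. */
    #include <stddef.h>
    #include <stdint.h>

    #define TARGET_PAGE_SIZE (4 * 1024)
    #define HUGE_PAGE_SIZE   (1024UL * 1024 * 1024)   /* 1G huge page */

    struct rate_limiter {
        uint64_t sent_this_period;   /* bytes sent since the last period */
        uint64_t budget_per_period;  /* allowed bytes per timer period   */
    };

    static void wait_for_next_period(struct rate_limiter *rl)
    {
        /* QEMU would sleep here until the bandwidth timer fires */
        rl->sent_this_period = 0;
    }

    /* Send one huge page, rate limiting between every 4K target page
     * instead of only once per huge page (the old, bursty behaviour). */
    static void send_huge_page(struct rate_limiter *rl,
                               void (*send_page)(size_t offset))
    {
        for (size_t off = 0; off < HUGE_PAGE_SIZE; off += TARGET_PAGE_SIZE) {
            if (rl->sent_this_period >= rl->budget_per_period) {
                wait_for_next_period(rl);
            }
            send_page(off);
            rl->sent_this_period += TARGET_PAGE_SIZE;
        }
    }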
-
- 30 October 2019, 1 commit
-
-
Committed by Jens Freimann
This patch adds a new migration state called wait-unplug. It is entered after the SETUP state if failover devices are present. It will transition into ACTIVE once all devices were successfully unplugged from the guest. So if a guest doesn't respond or takes a long time to honor the unplug request the user will see the migration state 'wait-unplug'. In the migration thread we query failover devices to check whether they are still pending the guest unplug. When all are unplugged the migration continues. If one device won't unplug, migration will stay in the wait-unplug state. Signed-off-by: Jens Freimann <jfreimann@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Message-Id: <20191029114905.6856-9-jfreimann@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
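A rough sketch of the step described above; the helper names below are assumptions standing in for QEMU's internal queries, not the actual API.

    /* Sketch only: poll until the guest has released all failover
     * devices, then move on to ACTIVE. */
    enum mig_state { MIG_SETUP, MIG_WAIT_UNPLUG, MIG_ACTIVE };

    extern int failover_devices_present(void);   /* assumed helper */
    extern int guest_unplug_pending(void);       /* assumed helper */
    extern void sleep_ms(int ms);                /* assumed helper */

    static enum mig_state after_setup(void)
    {
        if (!failover_devices_present()) {
            return MIG_ACTIVE;          /* no failover devices: skip the state */
        }
        /* reported state is now 'wait-unplug'; an unresponsive guest
         * keeps the migration parked here */
        while (guest_unplug_pending()) {
            sleep_ms(250);
        }
        return MIG_ACTIVE;
    }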
-
- 12 September 2019, 1 commit
-
-
Committed by Yury Kotov
This capability realizes simple source validation by UUID. It's useful for live migration between hosts. Signed-off-by: Yury Kotov <yury-kotov@yandex-team.ru> Message-Id: <20190903162246.18524-2-yury-kotov@yandex-team.ru> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
-
- 16 August 2019, 3 commits
-
-
Committed by Markus Armbruster
In my "build everything" tree, changing hw/qdev-properties.h triggers a recompile of some 2700 out of 6600 objects (not counting tests and objects that don't depend on qemu/osdep.h). Many places including hw/qdev-properties.h (directly or via hw/qdev.h) actually need only hw/qdev-core.h. Include hw/qdev-core.h there instead. hw/qdev.h is actually pointless: all it does is include hw/qdev-core.h and hw/qdev-properties.h, which in turn includes hw/qdev-core.h. Replace the remaining uses of hw/qdev.h by hw/qdev-properties.h. While there, delete a few superfluous inclusions of hw/qdev-core.h. Touching hw/qdev-properties.h now recompiles some 1200 objects. Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Daniel P. Berrangé" <berrange@redhat.com> Cc: Eduardo Habkost <ehabkost@redhat.com> Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Eduardo Habkost <ehabkost@redhat.com> Message-Id: <20190812052359.30071-22-armbru@redhat.com>
-
Committed by Markus Armbruster
Drop unnecessary inclusions from headers. Downgrade a few more to exec/hwaddr.h. Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-Id: <20190812052359.30071-17-armbru@redhat.com>
-
Committed by Markus Armbruster
migration/qemu-file.h neglects to include it even though it needs ram_addr_t. Fix that. Drop a few superfluous inclusions elsewhere. Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-Id: <20190812052359.30071-14-armbru@redhat.com>
-
- 15 August 2019, 1 commit
-
-
Committed by Wei Yang
MigrationState->bytes_xfer is only set to 0 in migrate_init(). Remove this unnecessary field. Signed-off-by: Wei Yang <richardw.yang@linux.intel.com> Message-Id: <20190402003106.17614-1-richardw.yang@linux.intel.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
-
- 15 July 2019, 1 commit
-
-
Committed by Peter Xu
Currently we are doing log_clear() right after log_sync(), which mostly keeps the old behavior when log_clear() was still part of log_sync(). This patch tries to further optimize the migration log_clear() code path to split huge log_clear()s into smaller chunks. We do this by splitting the whole guest memory region into memory chunks, whose size is decided by MigrationState.clear_bitmap_shift (an example will be given below). With that, we don't do the dirty bitmap clear operation on the remote node (e.g., KVM) when we fetch the dirty bitmap; instead we explicitly clear the dirty bitmap for a memory chunk the first time we send a page in that chunk. Here comes an example. Assuming the guest has 64G memory, before this patch the KVM ioctl KVM_CLEAR_DIRTY_LOG would be a single one covering 64G memory. After the patch, assuming a clear bitmap shift of 18, the memory chunk size on x86_64 will be 1UL<<18 * 4K = 1GB. Then instead of sending a big 64G ioctl, we'll send 64 small ioctls, each covering 1G of the guest memory. For each of the 64 small ioctls, we'll only send it if any page in that small chunk is about to be sent right away. Signed-off-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Message-Id: <20190603065056.25211-12-peterx@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
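The chunk-size arithmetic from that example, as a small standalone program; the values only mirror the numbers quoted above, the real logic lives in migration/ram.c.

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void)
    {
        uint64_t page_size = 4096;              /* x86_64 target page size  */
        uint64_t guest_mem = 64ULL << 30;       /* 64G of guest memory      */
        unsigned clear_bitmap_shift = 18;       /* example value from above */

        uint64_t chunk = (1ULL << clear_bitmap_shift) * page_size;  /* 1G */
        printf("chunk size: %" PRIu64 " bytes\n", chunk);
        printf("KVM_CLEAR_DIRTY_LOG ioctls needed: %" PRIu64 "\n",
               guest_mem / chunk);              /* 64 small ioctls vs. 1 big one */
        return 0;
    }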
-
- 12 June 2019, 1 commit
-
-
Committed by Markus Armbruster
No header includes qemu-common.h after this commit, as prescribed by qemu-common.h's file comment. Signed-off-by: Markus Armbruster <armbru@redhat.com> Message-Id: <20190523143508.25387-5-armbru@redhat.com> [Rebased with conflicts resolved automatically, except for include/hw/arm/xlnx-zynqmp.h hw/arm/nrf51_soc.c hw/arm/msf2-soc.c block/qcow2-refcount.c block/qcow2-cluster.c block/qcow2-cache.c target/arm/cpu.h target/lm32/cpu.h target/m68k/cpu.h target/mips/cpu.h target/moxie/cpu.h target/nios2/cpu.h target/openrisc/cpu.h target/riscv/cpu.h target/tilegx/cpu.h target/tricore/cpu.h target/unicore32/cpu.h target/xtensa/cpu.h; bsd-user/main.c and net/tap-bsd.c fixed up]
-
- 15 May 2019, 1 commit
-
-
Committed by Wei Yang
MigrationState->xfer_limit is only set to 0 in migrate_init(). Remove this unnecessary field. Signed-off-by: Wei Yang <richardw.yang@linux.intel.com> Message-Id: <20190326055726.10539-1-richardw.yang@linux.intel.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
-
- 02 April 2019, 1 commit
-
-
Committed by Markus Armbruster
This reverts commit 3df663e5. This reverts commit b605c47b. Command line option --only-migratable is for disallowing any configuration that can block migration. Initially, --only-migratable set global variable @only_migratable. Commit 3df663e5 "migration: move only_migratable to MigrationState" replaced it by MigrationState member @only_migratable. That was a mistake. First, it doesn't make sense on the design level. MigrationState captures the state of an individual migration, but --only-migratable isn't a property of an individual migration, it's a restriction on QEMU configuration. With fault tolerance, we could have several migrations at once. --only-migratable would certainly protect all of them. Storing it in MigrationState feels inappropriate. Second, it contributes to a dependency cycle that manifests itself as a bug now. Putting @only_migratable into MigrationState means it's available only after migration_object_init(). We can't set it before migration_object_init(), so we delay setting it with a global property (this is fixup commit b605c47b "migration: fix handling for --only-migratable"). We can't get it before migration_object_init(), so anything that uses it can only run afterwards. Since migrate_add_blocker() needs to obey --only-migratable, any code adding migration blockers can run only afterwards. This contributes to the following dependency cycle: * configure_blockdev() must run before machine_set_property() so machine properties can refer to block backends * machine_set_property() before configure_accelerator() so machine properties like kvm-irqchip get applied * configure_accelerator() before migration_object_init() so that Xen's accelerator compat properties get applied. * migration_object_init() before configure_blockdev() so configure_blockdev() can add migration blockers The cycle was closed when recent commit cda4aa9a "Create block backends before setting machine properties" added the first dependency, and satisfied it by violating the last one. This broke block backends that add migration blockers. Moving @only_migratable into MigrationState was a mistake. Revert it. This doesn't quite break the "migration_object_init() before configure_blockdev()" dependency, since migrate_add_blocker() still has another dependency on migration_object_init(). To be addressed in the next commit. Note that the reverted commit made -only-migratable sugar for -global migration.only-migratable=on under the hood. Documentation has only ever mentioned -only-migratable. This commit removes the arcane & undocumented alternative to -only-migratable again. Nobody should be using it. Conflicts: include/migration/misc.h migration/migration.c migration/migration.h vl.c Signed-off-by: Markus Armbruster <armbru@redhat.com> Message-Id: <20190401090827.20793-3-armbru@redhat.com> Reviewed-by: Igor Mammedov <imammedo@redhat.com>
-
- 26 March 2019, 1 commit
-
-
Committed by Juan Quintela
Libvirt doesn't want to expose (and explain) it. From now on we measure the size of the packets in bytes instead of pages, so it is the same independently of the architecture. We choose the page size of x86. Notice that in the following patch we make this variable. Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Markus Armbruster <armbru@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
- 06 March 2019, 3 commits
-
-
Committed by Juan Quintela
It will be used to store the uri parameters. We want this only for tcp, so we don't set it for other uris. We need it to know what port migration is running on. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com> dgilbert: Removed DummyStruct as suggested by Eric & Markus --
-
Committed by Yury Kotov
If ignore-shared capability is set then skip shared RAMBlocks during the RAM migration. Also, move qemu_ram_foreach_migratable_block (and rename) to the migration code, because it requires access to the migration capabilities. Signed-off-by: Yury Kotov <yury-kotov@yandex-team.ru> Message-Id: <20190215174548.2630-4-yury-kotov@yandex-team.ru> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
-
Committed by Yury Kotov
We want to use local migration to update QEMU for running guests. In this case we don't need to migrate shared (file backed) RAM. So, add a capability to ignore such blocks during live migration. Signed-off-by: Yury Kotov <yury-kotov@yandex-team.ru> Message-Id: <20190215174548.2630-3-yury-kotov@yandex-team.ru> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
-
- 05 March 2019, 1 commit
-
-
Committed by Dr. David Alan Gilbert
Switch the announcements to using the new announce timer. Move the code that does it to announce.c rather than savevm because it really has nothing to do with the actual migration. Migration starts the announce from bh's and so they're all in the main thread/bql, and so there's never any racing with the timers themselves. Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Jason Wang <jasowang@redhat.com>
-
- 23 January 2019, 2 commits
-
-
Committed by Xiao Guangrong
It introduces a new statistic, pages-per-second, as bandwidth or mbps is not enough to measure the performance of posting pages out: we have compression and xbzrle, which can significantly reduce the data size; pages-per-second is the one we want instead. Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com> Message-Id: <20190111063732.10484-2-xiaoguangrong@tencent.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com> With typos Eric spotted fixed
-
Committed by Fei Li
In our current code, when multifd is used during migration, if there is an error before the destination receives all new channels, the source keeps running, but the destination does not exit and keeps waiting until the source is killed deliberately. Fix this by dumping the specific error and letting users decide whether to quit from the destination side when failing to receive a packet via some channel. Also update the comment for multifd_recv_new_channel(). Cc: Dr. David Alan Gilbert <dgilbert@redhat.com> Cc: Peter Xu <peterx@redhat.com> Cc: Markus Armbruster <armbru@redhat.com> Signed-off-by: Fei Li <fli@suse.com> Reviewed-by: Peter Xu <peterx@redhat.com> Message-Id: <20190113140849.38339-3-lifei1214@126.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
-
- 31 October 2018, 1 commit
-
-
Committed by Jia Lina
During an active background migration, a snapshot will trigger a segmentation fault. As snapshot clears the "current_migration" struct and updates "to_dst_file" before it finds out that there is a migration task, migration accesses a null pointer in the "current_migration" struct and qemu crashes eventually. Signed-off-by: Jia Lina <jialina01@baidu.com> Signed-off-by: Chai Wen <chaiwen@baidu.com> Signed-off-by: Zhang Yu <zhangyu31@baidu.com> Message-Id: <20181026083620.10172-1-jialina01@baidu.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
-
- 22 August 2018, 2 commits
-
-
Committed by Xiao Guangrong
Instead of putting the main thread into a sleep state to wait for a free compression thread, we can directly post the page out as a normal page, which reduces latency and uses CPUs more efficiently. A parameter, compress-wait-thread, is introduced; it can be enabled if the user really wants the old behavior. Reviewed-by: Peter Xu <peterx@redhat.com> Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
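A sketch of the resulting policy; the function names are made up for illustration and do not match QEMU's internals.

    #include <stdbool.h>

    extern bool hand_to_idle_compress_thread(void *page);  /* assumed helper */
    extern void wait_for_compress_thread(void);            /* assumed helper */
    extern void send_as_normal_page(void *page);           /* assumed helper */

    static void post_page(void *page, bool compress_wait_thread)
    {
        if (hand_to_idle_compress_thread(page)) {
            return;                           /* a compression thread took it */
        }
        if (compress_wait_thread) {
            wait_for_compress_thread();       /* old behaviour: main thread sleeps */
            hand_to_idle_compress_thread(page);
        } else {
            send_as_normal_page(page);        /* new default: keep the CPU busy */
        }
    }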
-
Committed by Li Qiang
Currently, the default maximum CPU throttle for migration is 99 (CPU_THROTTLE_PCT_MAX). This is too big and can have a remarkable performance impact on the guest. We see a lot of packet latencies exceed 500ms when CPU_THROTTLE_PCT_MAX is reached. This patch set adds a new max-cpu-throttle parameter to limit the CPU throttle. Signed-off-by: Li Qiang <liq3ea@gmail.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
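Roughly, each auto-converge throttle step is then clamped to the configured maximum instead of the hard-coded 99%. A simplified sketch, not the exact QEMU code:

    /* Each auto-converge step raises the throttle percentage, but never
     * above the user-set max-cpu-throttle. */
    static int next_throttle_pct(int current, int increment, int max_cpu_throttle)
    {
        int pct = current + increment;
        return pct > max_cpu_throttle ? max_cpu_throttle : pct;
    }

With HMP this would look like something along the lines of 'migrate_set_parameter max-cpu-throttle 80', capping the guest CPU throttle at 80% instead of 99%.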
-
- 15 June 2018, 2 commits
-
-
Committed by Dr. David Alan Gilbert
Rate limiting sleeps the migration thread for a while when it runs out of bandwidth; but sometimes we want to wake up to get on with something more urgent (like a postcopy request). Here we use a semaphore with a timedwait instead of a simple sleep; incrementing the semaphore will wake it up sooner. Anything that consumes these urgent events must decrement the semaphore. Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Message-Id: <20180613102642.23995-3-dgilbert@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
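A sketch of the pattern using QEMU's semaphore primitives; qemu_sem_timedwait()/qemu_sem_post() are the real API, the surrounding functions here are simplified stand-ins.

    #include "qemu/osdep.h"
    #include "qemu/thread.h"      /* QemuSemaphore, qemu_sem_timedwait() */

    /* Migration thread, when it has used up its bandwidth budget: instead
     * of sleeping unconditionally, wait on the semaphore with a timeout so
     * an urgent request can wake it early. */
    static void rate_limit_wait(QemuSemaphore *urgent_sem, int ms)
    {
        qemu_sem_timedwait(urgent_sem, ms);   /* returns early if posted */
    }

    /* Whoever queues an urgent event (e.g. a postcopy page request)
     * posts the semaphore; the consumer of the event decrements it. */
    static void kick_urgent(QemuSemaphore *urgent_sem)
    {
        qemu_sem_post(urgent_sem);
    }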
-
Committed by Dr. David Alan Gilbert
The migration code should be using the RAMBLOCK_FOREACH_MIGRATABLE and qemu_ram_foreach_block_migratable, not the all-block versions; poison them so that we can't accidentally use them. Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Message-Id: <20180605162545.80778-3-dgilbert@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Cédric Le Goater <clg@kaod.org> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
-
- 04 June 2018, 1 commit
-
-
Committed by Xiao Guangrong
QEMU 3.0 enables strict checks for compression & decompression to make the migration more robust; that depends on the source fixing the internal design which triggers the unexpected error conditions. To make it work for migrating an old-version QEMU to 2.13 QEMU, we introduce this parameter to disable the error check on the destination, which is the default behavior for machine types older than 2.13; alternatively, the strict check can be enabled explicitly as follows: -M pc-q35-2.11 -global migration.decompress-error-check=true Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
- 16 May 2018, 10 commits
-
-
Committed by Peter Xu
Let's introduce a lock for that QEMUFile since we are going to operate on it in multiple threads. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20180502104740.12123-23-peterx@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by Peter Xu
The first allow-oob=true command. It's used on the destination side when the postcopy migration is paused and ready for a recovery. After execution, a new migration channel will be established for postcopy to continue. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20180502104740.12123-21-peterx@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com> --- s/2.12/2.13/
-
Committed by Peter Xu
This patch implements the first part of the core RAM resume logic for postcopy. ram_resume_prepare() is provided for the work. When the migration is interrupted by a network failure, the dirty bitmap on the source side will be meaningless, because even if a dirty bit is cleared, it is still possible that the sent page was lost along the way to the destination. Here, instead of continuing the migration with the old dirty bitmap on the source, we ask the destination side to send back its received bitmap, then invert it to be our initial dirty bitmap. The source side send thread will issue the MIG_CMD_RECV_BITMAP requests, once per ramblock, to ask for the received bitmap. On the destination side, MIG_RP_MSG_RECV_BITMAP will be issued, along with the requested bitmap. Data will be received on the return-path thread of the source, and the main migration thread will be notified when all the ramblock bitmaps are synchronized. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20180502104740.12123-17-peterx@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
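The inversion itself is simple; a self-contained sketch of the idea (QEMU does the per-ramblock equivalent with its bitmap helpers):

    #include <stdint.h>
    #include <stddef.h>

    /* Every page the destination has NOT confirmed as received becomes
     * dirty again on the source, so it is resent after the recovery. */
    static void recv_bitmap_to_dirty_bitmap(uint64_t *dirty,
                                            const uint64_t *received,
                                            size_t nwords)
    {
        for (size_t i = 0; i < nwords; i++) {
            dirty[i] = ~received[i];
        }
    }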
-
Committed by Peter Xu
Creating a new message to reply to MIG_CMD_POSTCOPY_RESUME. One uint32_t is used as payload to let the source know whether the destination is ready to continue the migration. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20180502104740.12123-15-peterx@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by Peter Xu
Introducing a new return path message MIG_RP_MSG_RECV_BITMAP to send the received bitmap of a ramblock back to the source. This is the reply message for MIG_CMD_RECV_BITMAP: it contains not only the header (including the ramblock name), but is also appended with the whole ramblock received bitmap on the destination side. When the source receives such a reply message (MIG_RP_MSG_RECV_BITMAP), it parses it and converts it to the dirty bitmap by inverting the bits. One thing to mention is that, when we send the recv bitmap, we do these extra things: - converting the bitmap to little endian, to support hosts using different endianness on src/dst. - doing proper alignment to 8 bytes, to support hosts using different word sizes (32/64 bits) on src/dst. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20180502104740.12123-13-peterx@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
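A self-contained sketch of those two portability tweaks; the helper name and exact layout are ours for illustration, not the wire format QEMU actually uses.

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    /* Serialize a bitmap as little-endian bytes (bit 0 in the LSB of byte
     * 0) and pad the length to a multiple of 8, so hosts with different
     * endianness or word size (32/64 bit) agree on the format. */
    static size_t serialize_recv_bitmap(uint8_t *out, const uint64_t *bitmap,
                                        size_t nbits)
    {
        size_t nbytes = (nbits + 7) / 8;
        size_t padded = (nbytes + 7) & ~(size_t)7;   /* align to 8 bytes */

        memset(out, 0, padded);
        for (size_t bit = 0; bit < nbits; bit++) {
            if (bitmap[bit / 64] & (1ULL << (bit % 64))) {
                out[bit / 8] |= (uint8_t)(1u << (bit % 8));
            }
        }
        return padded;                               /* bytes put on the wire */
    }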
-
Committed by Peter Xu
Allows the fault thread to stop handling page faults temporarily. When a network failure happens (and if we expect a recovery afterwards), we should not allow the fault thread to continue sending things to the source; instead, it should halt for a while until the connection is rebuilt. When the dest main thread notices the failure, it kicks the fault thread to switch to the pause state. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20180502104740.12123-7-peterx@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by Peter Xu
Let the thread pause for network issues. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20180502104740.12123-6-peterx@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by Peter Xu
When there is an IO error on the incoming channel (e.g., network down), instead of bailing out immediately, we allow the dst vm to switch to the new POSTCOPY_PAUSE state. Currently it is still simple - it waits on the new semaphore until someone pokes it for another attempt. One note is that here on the ram loading thread we cannot detect the POSTCOPY_ACTIVE state, but we need to detect the more specific POSTCOPY_INCOMING_RUNNING state, to make sure we have already loaded all the device states. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20180502104740.12123-5-peterx@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by Peter Xu
Now when the network goes down during postcopy, the source side will not fail the migration. Instead we convert the status into this new paused state, and we will wait for a rescue in the future. If a recovery is detected, migration_thread() will reset its local variables to prepare for that. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20180502104740.12123-4-peterx@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
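The control flow, very roughly; names and structure here are illustrative, the real logic sits in migration_thread() and the postcopy recovery path.

    /* On an I/O error during postcopy, park in a paused state and wait to
     * be kicked by a later recover/resume request instead of failing. */
    enum pc_state { PC_ACTIVE, PC_PAUSED, PC_RECOVERING, PC_FAILED };

    extern void wait_for_resume_request(void);   /* assumed: blocks on a semaphore */
    extern int  rebuild_channels(void);          /* assumed: reconnects the streams */

    static enum pc_state handle_postcopy_io_error(void)
    {
        for (;;) {
            /* state is now 'postcopy-paused'; the VM state is kept */
            wait_for_resume_request();
            if (rebuild_channels() == 0) {
                return PC_RECOVERING;   /* migration_thread() resets and resumes */
            }
            /* recovery attempt failed: stay paused, wait for another kick */
        }
    }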
-
Committed by Juan Quintela
We need to make sure that we have started all the multifd threads. Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
-
- 26 April 2018, 3 commits
-
-
Committed by Alexey Perevalov
Postcopy total blocktime is available on the destination side only, but query-migrate was possible only for the source. This patch adds the ability to call query-migrate on the destination. To be able to see postcopy blocktime, one needs to request the postcopy-blocktime capability. The query-migrate command will show the following sample result: {"return": {"postcopy-vcpu-blocktime": [115, 100], "status": "completed", "postcopy-blocktime": 100 }} postcopy-vcpu-blocktime contains a list, where the first item is the first vCPU in QEMU. This patch has a drawback: it combines the states of incoming and outgoing migration. Ongoing migration state will overwrite the incoming state. It looks better to separate query-migrate for incoming and outgoing migration, or to add a parameter to indicate the type of migration. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Alexey Perevalov <a.perevalov@samsung.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com> Message-Id: <1521742647-25550-7-git-send-email-a.perevalov@samsung.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
-
Committed by Alexey Perevalov
This patch adds a request to kernel space for UFFD_FEATURE_THREAD_ID, in case this feature is provided by the kernel. PostcopyBlocktimeContext is encapsulated inside postcopy-ram.c, due to it being a postcopy-only feature. It also defines the lifetime of a PostcopyBlocktimeContext instance. Information from the PostcopyBlocktimeContext instance will be provided long after the postcopy migration ends; the instance of PostcopyBlocktimeContext will live until QEMU exits, but the part of it (vcpu_addr, page_fault_vcpu_time) used only during calculation will be released when postcopy ends or fails. To enable postcopy blocktime calculation on the destination, one needs to request the proper capability (a patch for the documentation will be at the tail of the patch set). As an example, the following command enables that capability, assuming QEMU was started with the -chardev socket,id=charmonitor,path=/var/lib/migrate-vm-monitor.sock option to control it [root@host]#printf "{\"execute\" : \"qmp_capabilities\"}\r\n \ {\"execute\": \"migrate-set-capabilities\" , \"arguments\": { \"capabilities\": [ { \"capability\": \"postcopy-blocktime\", \"state\": true } ] } }" | nc -U /var/lib/migrate-vm-monitor.sock Or just with HMP (qemu) migrate_set_capability postcopy-blocktime on Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Alexey Perevalov <a.perevalov@samsung.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com> Message-Id: <1521742647-25550-3-git-send-email-a.perevalov@samsung.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
-
Committed by Alexey Perevalov
Right now it can be used on the destination side to enable vCPU blocktime calculation for postcopy live migration. vCPU blocktime is the time from when a vCPU thread was put into interruptible sleep until the memory page was copied and the thread woken up. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Alexey Perevalov <a.perevalov@samsung.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com> Message-Id: <1521742647-25550-2-git-send-email-a.perevalov@samsung.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
-
- 20 March 2018, 2 commits
-
-
Committed by Dr. David Alan Gilbert
Provide a helper to be used by shared waker functions to request shared pages from the source. The last_rb pointer is moved into the incoming state since this helper can update it as well as the main fault thread function. Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
Committed by Dr. David Alan Gilbert
Allow other userfaultfd's to be registered into the fault thread so that handlers for shared memory can get responses. Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
- 14 March 2018, 1 commit
-
-
Postcopy migration of dirty bitmaps. Only named dirty bitmaps are migrated. If the destination qemu already contains a dirty bitmap with the same name as a migrated bitmap (for the same node), then, if their granularities are the same, the migration will be done; otherwise an error will be generated. If the destination qemu doesn't contain such a bitmap, it will be created. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Message-id: 20180313180320.339796-12-vsementsov@virtuozzo.com [Changed '+' to '*' as per list discussion. --js] Signed-off-by: John Snow <jsnow@redhat.com>
-