- 19 June 2015, 5 commits
-
-
Committed by Jiri Denemark
We don't have an async job when reconnecting to existing domains after libvirtd restart. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
Committed by Jiri Denemark
Abort migration as soon as we detect that some of the disk mirrors failed; there is no point in trying to finish memory migration first. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
Committed by Jiri Denemark
Instead of cancelling disk mirrors sequentially, let's just call block-job-cancel for all migrating disks and then wait until all of them disappear. In case we cancel disk mirrors at the end of a successful migration, we also need to check that all block jobs completed successfully; otherwise we have to abort the migration. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
Committed by Jiri Denemark
By switching block jobs to use domain conditions, we can drop some pretty complicated code in NBD storage migration. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
Committed by Jiri Denemark
Because we are polling, we may detect some errors after we have asked QEMU for the migration status, even though they occurred before. If this happens and QEMU reports the migration completed successfully, we would happily report that the migration succeeded even though we should have cancelled it because of the other error. In practice it is not a big issue now, but it will become a much bigger issue once the check for storage migration status is moved inside the loop in qemuMigrationWaitForCompletion. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
- 18 June 2015, 3 commits
-
-
Committed by Pavel Boldin
https://bugzilla.redhat.com/show_bug.cgi?id=1203032 Implement a `migrate_disks' parameter for the QEMU driver. This multi-value parameter can be used to explicitly specify which block devices are to be migrated using the NBD server. Tunnelled migration using NBD is to be done. Signed-off-by: Pavel Boldin <pboldin@mirantis.com> Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
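A minimal client-side sketch of supplying this multi-value parameter through the public API (the parameter is exposed to clients as VIR_MIGRATE_PARAM_MIGRATE_DISKS; the destination URI and disk target names below are placeholders):

    #include <libvirt/libvirt.h>

    /* Migrate only vda and vdb over the NBD server. */
    static int migrate_selected_disks(virDomainPtr dom)
    {
        virTypedParameterPtr params = NULL;
        int nparams = 0, maxparams = 0;
        int ret = -1;

        if (virTypedParamsAddString(&params, &nparams, &maxparams,
                                    VIR_MIGRATE_PARAM_MIGRATE_DISKS, "vda") < 0 ||
            virTypedParamsAddString(&params, &nparams, &maxparams,
                                    VIR_MIGRATE_PARAM_MIGRATE_DISKS, "vdb") < 0)
            goto cleanup;

        if (virDomainMigrateToURI3(dom, "qemu+ssh://dst.example.com/system",
                                   params, nparams,
                                   VIR_MIGRATE_LIVE | VIR_MIGRATE_NON_SHARED_DISK) < 0)
            goto cleanup;

        ret = 0;
     cleanup:
        virTypedParamsFree(params, nparams);
        return ret;
    }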
-
Committed by Michal Privoznik
When playing with disk migration lately, I've noticed this warning in domain logs: WARNING: Image format was not specified for 'nbd://masina:49153/drive-virtio-disk0' and probing guessed raw. Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted. Specify the 'raw' format explicitly to remove the restrictions. So I started digging into the qemu source code to see what triggered the warning. I'd expect qemu to know the formats of the guest's disks since we tell it on the command line. This led me to qmp_drive_mirror(), where the following can be found: if (!has_format) { format = mode == NEW_IMAGE_MODE_EXISTING ? NULL : bs->drv->format_name; } So, format is automatically initialized from the disk iff mode != "existing". Unfortunately, in migration we are tied to using this mode (NBD doesn't support creating new images). Therefore the only way to avoid the warning is to pass the format explicitly. The discussion on the mailing list [1] resulted in code that always forces the NBD export to use the "raw" format. [1] https://www.redhat.com/archives/libvir-list/2015-June/msg00153.html Signed-off-by: Michal Privoznik <mprivozn@redhat.com> Signed-off-by: Pavel Boldin <pboldin@mirantis.com>
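Purely for illustration, the mirroring command then carries an explicit format, roughly of this shape (the drive name and NBD address are examples, not taken from the patch):

    /* Illustrative only: drive-mirror with an explicit format, so QEMU
     * does not probe the image. */
    static const char drive_mirror_cmd[] =
        "{\"execute\":\"drive-mirror\",\"arguments\":{"
        "\"device\":\"drive-virtio-disk0\","
        "\"target\":\"nbd://destination:49153/drive-virtio-disk0\","
        "\"format\":\"raw\","          /* explicit format: no probing */
        "\"mode\":\"existing\","       /* NBD cannot create new images */
        "\"sync\":\"full\"}}";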
-
Committed by Michal Privoznik
This function is returning a string (domain XML). Since d3ce7363, when it was first introduced, it was indented incorrectly: static char *qemuMigrationBeginPhase(..) Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
- 11 June 2015, 1 commit
-
-
Committed by Daniel P. Berrange
By default, getaddrinfo() will return addresses for both IPv4 and IPv6 if both protocols are enabled, and so the RPC code will listen/connect on both protocols too. There may be cases where it is desirable to restrict this to just one of the two protocols, so add an 'int family' parameter to all the TCP related APIs. Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
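At the socket layer this maps onto the standard getaddrinfo() hints mechanism; a minimal standalone sketch (not the libvirt RPC code) of restricting resolution to one family:

    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netdb.h>

    /* Resolve a listen address restricted to one protocol family.
     * family is AF_INET, AF_INET6, or AF_UNSPEC for "both". */
    static int resolve_listen_addr(const char *node, const char *service,
                                   int family, struct addrinfo **res)
    {
        struct addrinfo hints;
        int rc;

        memset(&hints, 0, sizeof(hints));
        hints.ai_family = family;
        hints.ai_socktype = SOCK_STREAM;
        hints.ai_flags = AI_PASSIVE;      /* suitable for bind() */

        rc = getaddrinfo(node, service, &hints, res);
        if (rc != 0) {
            fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rc));
            return -1;
        }
        return 0;   /* caller walks *res and calls freeaddrinfo() */
    }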
-
- 04 June 2015, 1 commit
-
-
Committed by Ján Tomko
-
- 21 May 2015, 1 commit
-
-
Committed by Jiri Denemark
Most virDomainDiskIndexByName callers do not care about the index; what they really want is a disk def pointer. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
- 15 May 2015, 4 commits
-
-
Committed by Jiri Denemark
When cancelling a drive mirror, always try to do so for all disks even if it fails for some of them, and report the first error we saw. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
Committed by Jiri Denemark
Avoid redoing the same filtering over and over every time we need to walk through all the disks which are being migrated. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
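A rough sketch of what such a shared filter amounts to (the types and names are illustrative, not the actual libvirt helper): shared, read-only and source-less disks are the ones skipped by storage migration.

    #include <stdbool.h>
    #include <stddef.h>

    /* Illustrative stand-in for virDomainDiskDef. */
    struct disk {
        const char *src;    /* NULL for source-less disks (e.g. empty cdrom) */
        bool shared;
        bool readonly;
    };

    /* Single place deciding whether a disk takes part in storage migration. */
    static bool disk_is_migrated(const struct disk *disk)
    {
        if (disk->src == NULL)               /* nothing to copy */
            return false;
        if (disk->shared || disk->readonly)  /* visible or immutable on the destination */
            return false;
        return true;
    }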
-
Committed by Jiri Denemark
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
Committed by Jiri Denemark
And move it to qemu_domain.[ch] because this API is QEMU-only. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
- 13 May 2015, 1 commit
-
-
Committed by zhang bo
As of eeb008db the variable is not used anymore. Drop it. Signed-off-by: Wang Yufei <james.wangyufei@huawei.com> Signed-off-by: Zhang Bo <oscar.zhangbo@huawei.com> Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
- 05 May 2015, 1 commit
-
-
Committed by Jiri Denemark
When migrating a domain while changing its name and using the VIR_MIGRATE_PERSIST_DEST flag, libvirt would fail to properly change the name in the persistent definition. The inconsistency results in weird behavior when dumping domain XML, destroying the domain, restarting libvirtd, and likely in several other situations. Since the new name is already stored in vm->def->name, we just need to make sure the persistent definition uses this new name too. https://bugzilla.redhat.com/show_bug.cgi?id=1076354 Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
- 04 May 2015, 1 commit
-
-
Committed by Jiri Denemark
Neither the migrate URI nor the listen address makes any sense for tunnelled migration. https://bugzilla.redhat.com/show_bug.cgi?id=1066375 https://bugzilla.redhat.com/show_bug.cgi?id=1073233 Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
- 29 April 2015, 1 commit
-
-
Committed by Michael Chapman
In qemuMigrationDriveMirror we can start all disk mirrors in parallel. We wait until they are all ready, or until one of them aborts. In qemuMigrationCancelDriveMirror, we wait until all mirrors are properly stopped. This is necessary to ensure that the destination VM is fully in sync with the (paused) source VM. If a drive mirror cannot be cancelled, then the destination is not in a consistent state; in this case it is not safe to continue with the migration. Signed-off-by: Michael Chapman <mike@very.puzzling.org>
-
- 24 April 2015, 2 commits
-
-
Committed by Jiri Denemark
virDomainGetJobStats is able to report statistics of a completed migration, but to get usable downtime and total time statistics both hosts have to keep synchronized time. To provide at least some estimate of the times even when NTP daemons are not running on both hosts, we can just ignore the time needed to transfer a migration cookie to the destination host. The result will still be inaccurate but a bit more predictable: the total/down time will just be at least what we report. https://bugzilla.redhat.com/show_bug.cgi?id=1213434
-
Committed by Michal Privoznik
This is basically turning qemuDomObjEndAPI into a more general function. Other drivers that get a reference to domain objects may benefit from this function too. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
- 22 April 2015, 1 commit
-
-
Committed by Peter Krempa
The hostdev check can error out right away.
-
- 21 April 2015, 1 commit
-
-
Committed by Cole Robinson
This needs to be specified in way too many places for a simple validation check. The ostype/arch/virttype validation checks later in DomainDefParseXML should catch most of the cases that this was covering.
-
- 14 April 2015, 2 commits
-
-
Committed by Huanle Han
1. 'last_good_net' indicates the index of the last successfully configured net, so def->nets[last_good_net] should also be cleaned up if an error occurs. 2. If an error occurs in 'virNetDevMacVLanVPortProfileRegisterCallback' (the second 'goto err_exit' in the loop), we should also do the 'virNetDevVPortProfileDisassociate' cleanup for the 'virNetDevVPortProfileAssociate' (the first code block in the loop). So we should consider the net successfully configured once the first code block in the loop finishes. Signed-off-by: Huanle Han <hanxueluo@gmail.com>
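A condensed sketch of the rollback pattern described above (the function names are illustrative stand-ins, not the real virNetDev* calls): the last-good index is advanced as soon as the first step succeeds, and the cleanup loop includes that index.

    /* Illustrative stand-ins for the association/registration calls. */
    int associate_vport(int i);
    int register_callback(int i);
    void disassociate_vport(int i);

    static int setup_nets(int nnets)
    {
        int i, last_good_net = -1;

        for (i = 0; i < nnets; i++) {
            if (associate_vport(i) < 0)
                goto err_exit;
            last_good_net = i;            /* net i now needs rollback too */
            if (register_callback(i) < 0)
                goto err_exit;
        }
        return 0;

     err_exit:
        for (i = 0; i <= last_good_net; i++)  /* inclusive of last_good_net */
            disassociate_vport(i);
        return -1;
    }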
-
Committed by Peter Krempa
Sacrifice a few lines of code in favor of the code being more readable.
-
- 13 April 2015, 2 commits
-
-
Committed by Michal Privoznik
When pre-creating storage for domains, we need to find the corresponding disk in the XML on the destination (the domain XML may differ there, e.g. the disk may be accessible under a different path). For better debugging, I'm printing all the info I received about a disk. But there was a typo when printing the disk capacity: "%lluu" instead of "%llu". Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Xing Lin
The problem with the previous implementation is that even when qemuMigrationUpdateJobStatus() detects that a migration job has completed, it still sleeps for 50 ms, which is unnecessary and only adds to the VM pause time. Signed-off-by: Xing Lin <xinglin@cs.utah.edu> Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
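In other words, the loop should only sleep while the job is still running; a trivial standalone sketch of the changed ordering (the status query is passed in as a stand-in callback):

    #include <stdbool.h>
    #include <unistd.h>

    /* Check first, sleep only if still active: a job that has just
     * completed no longer pays an extra 50 ms of pause time. */
    static void wait_for_migration(bool (*job_active)(void))
    {
        while (job_active())
            usleep(50 * 1000);   /* poll every 50 ms only while still running */
    }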
-
- 08 April 2015, 3 commits
-
-
Committed by Michael Chapman
qemuMigrationCookieAddNBD is usually called from within an async MIGRATION_OUT or MIGRATION_IN job, so it needs to start a nested job. (The one exception is during the Begin phase when change protection isn't enabled, but qemuDomainObjEnterMonitorAsync will behave the same as qemuDomainObjEnterMonitor in this case.)

This bug was encountered with a libvirt client that repeatedly queries the disk mirroring block job info during a migration. If one of these queries occurs just as the Perform migration cookie is baked, libvirt crashes. Relevant logs are as follows:

6701: warning : qemuDomainObjEnterMonitorInternal:1544 : This thread seems to be the async job owner; entering monitor without asking for a nested job is dangerous
[1] 6701: info : qemuMonitorSend:972 : QEMU_MONITOR_SEND_MSG: mon=0x7fefdc004700 msg={"execute":"query-block","id":"libvirt-629"}
[2] 6699: info : qemuMonitorIOWrite:503 : QEMU_MONITOR_IO_WRITE: mon=0x7fefdc004700 buf={"execute":"query-block","id":"libvirt-629"}
[3] 6704: info : qemuMonitorSend:972 : QEMU_MONITOR_SEND_MSG: mon=0x7fefdc004700 msg={"execute":"query-block-jobs","id":"libvirt-630"}
[4] 6699: info : qemuMonitorJSONIOProcessLine:203 : QEMU_MONITOR_RECV_REPLY: mon=0x7fefdc004700 reply={"return": [...], "id": "libvirt-629"}
6699: error : qemuMonitorJSONIOProcessLine:211 : internal error: Unexpected JSON reply '{"return": [...], "id": "libvirt-629"}'

At [1] qemuMonitorBlockStatsUpdateCapacity sends its request, then waits on mon->notify. At [2] the request is written out to the monitor socket. At [3] qemuMonitorBlockJobInfo sends its request, and also waits on mon->notify. The reply from the first request is received at [4]. However, qemuMonitorJSONIOProcessLine is not expecting this reply since the second request hadn't completed sending. The reply is dropped and an error is returned.

qemuMonitorIO signals mon->notify twice during its error handling, waking up both of the threads waiting on it. One of them clears mon->msg as it exits qemuMonitorSend; the other crashes:

qemuMonitorSend (mon=0x7fefdc004700, msg=<value optimized out>) at qemu/qemu_monitor.c:975
975         while (!mon->msg->finished) {
(gdb) print mon->msg
$1 = (qemuMonitorMessagePtr) 0x0

Signed-off-by: Michael Chapman <mike@very.puzzling.org>
-
Committed by Michael Chapman
If a VM migration is aborted, a disk mirror may be failed by QEMU before libvirt has a chance to cancel it. The disk->mirrorState remains at _ABORT in this case, and this breaks subsequent mirrorings of that disk. We should instead check the mirrorState directly and transition to _NONE if it is already aborted. Do the check *after* aborting the block job in QEMU to avoid a race. Signed-off-by: Michael Chapman <mike@very.puzzling.org>
-
Committed by Michael Chapman
If virCloseCallbacksSet fails, qemuMigrationBegin must return NULL to indicate that an error occurred. Signed-off-by: Michael Chapman <mike@very.puzzling.org>
-
- 02 April 2015, 1 commit
-
-
Committed by Shanzhi Yu
virDomainHasDiskMirror() currently detects only jobs that add the mirror elements. Since some operations like migration are interlocked by existing block jobs on the given domain, the check needs to be instrumented to check regular jobs too. This patch renames virDomainHasDiskMirror to virDomainHasDiskBlockjob and adds an argument that allows selecting whether it returns true only for block copy jobs, as those interlock making the domain persistent. The other two uses trigger on any block job type. Signed-off-by: Shanzhi Yu <shyu@redhat.com> Signed-off-by: Peter Krempa <pkrempa@redhat.com>
-
- 23 March 2015, 2 commits
-
-
Committed by Peter Krempa
Make sure that libvirt has all vital information needed to reliably represent configuration of guest's memory devices in case of a migration. This patch forbids migration in case the required slot number and module base address are not present (failed to be loaded from qemu via monitor).
-
Committed by Michael Chapman
Commit cf54c606 introduced the ability to create missing storage volumes during migration. For network disks, however, we may not necessarily be able to detect whether they already exist -- there is no straightforward way to map the disk to a storage volume, and even if there were, it is possible that no configured storage pool actually contains the disk. It is better to assume the network disk exists in this case, rather than aborting the migration completely. If the volume really is missing, QEMU will generate an appropriate error later in the migration. Signed-off-by: Michael Chapman <mike@very.puzzling.org>
-
- 19 March 2015, 2 commits
-
-
Committed by Eric Blake
In qemu 2.3, the migration status will include 'cancelling' in the window between when an asynchronous cancel has been requested and when the migration is actually halted. Previously, qemu hid this state and reported 'active'. Libvirt manages the sequence okay even when the string is unrecognized (that is, it will report an unknown state: "Migration: [ 69 %]^Cerror: internal error: unexpected migration status in cancelling", but the migration is still cancelled), but recognizing the string makes for a smoother user experience.
* src/qemu/qemu_monitor.h (QEMU_MONITOR_MIGRATION_STATUS_CANCELLING): Add enum.
* src/qemu/qemu_monitor.c (qemuMonitorMigrationStatus): Map it.
* src/qemu/qemu_migration.c (qemuMigrationUpdateJobStatus): Adjust clients.
* src/qemu/qemu_monitor_json.c (qemuMonitorJSONGetMigrationStatusReply): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
-
Committed by Laine Stump
virnetdevopenvswitch.h declares a few functions that can be called to add ports to and remove them from OVS bridges, and retrieve the migration data for a port. It does not contain any data definitions that are used by domain_conf.h. But for some reason domain_conf.h has been #including it; instead, the few files that actually use those functions should be directly #including virnetdevopenvswitch.h. This adds a few lines to the project, but saves all the files that don't need it from the extra computing, and makes the dependencies more clear cut.
-
- 06 March 2015, 1 commit
-
-
Committed by Pavel Hrdina
There was a mess in how we store the unlimited value for memory limits and how we handle values provided by the user. Internally there were two possible ways to store the unlimited value: as 0 or as VIR_DOMAIN_MEMORY_PARAM_UNLIMITED. Because we chose to store memory limits as unsigned long long, we cannot use -1 to represent unlimited. It's much easier for us to say that everything greater than or equal to VIR_DOMAIN_MEMORY_PARAM_UNLIMITED means unlimited and leave 0 as a valid value, despite the fact that it makes no sense to set a limit to 0. Remove the unnecessary function virCompareLimitUlong. The test update prevents 0 from being misused as unlimited in the future. Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1146539 Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
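A minimal sketch of the resulting convention (not the exact libvirt helper; the function name is made up):

    #include <libvirt/libvirt.h>
    #include <stdbool.h>

    /* Treat the marker value and anything above it as "no limit";
     * 0 stays a valid, if pointless, limit. */
    static bool mem_limit_is_unlimited(unsigned long long value)
    {
        return value >= VIR_DOMAIN_MEMORY_PARAM_UNLIMITED;
    }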
-
- 20 February 2015, 1 commit
-
-
Committed by Michal Privoznik
It will come in handy in the near future when we will filter some capabilities based on it. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
- 19 February 2015, 1 commit
-
-
Committed by Michal Privoznik
https://bugzilla.redhat.com/show_bug.cgi?id=1179678 When migrating with storage, libvirt iterates over the domain's disks and instructs qemu to migrate the ones we are interested in (shared, RO and source-less disks are skipped). The disks are migrated in series: no new disk is transferred until the previous one has been quiesced. This is checked on the qemu monitor via the 'query-block-jobs' command. If the disk has been quiesced, it practically went from copying its content to the mirroring state, where all disk writes are mirrored to the other side of the migration too.

Having said that, there's one inherent error in the design. The monitor command we use reports only active jobs, so if a job fails for whatever reason, we will not see it anymore in the command output. And this can happen fairly easily: just try to migrate a domain with storage. If the storage migration fails (e.g. due to ENOSPC on the destination), the guest is resumed on the destination and left running on a partly copied disk.

The proper fix is what even the comment in the code says: listen for qemu events instead of polling. If storage migration changes state, an event is emitted and we can act accordingly: either consider the disk copied and continue the process, or consider the disk mangled and abort the migration. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
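An illustrative sketch of the event-driven direction (the enum values and handler names are made up, not libvirt's types): a failed copy is reported by QEMU itself rather than silently disappearing from the polled job list.

    /* Illustrative event types and stand-in handlers. */
    enum block_job_event { BLOCK_JOB_READY, BLOCK_JOB_COMPLETED, BLOCK_JOB_FAILED };

    void mark_disk_ready(const char *disk);
    void abort_storage_migration(const char *disk);

    static void on_block_job_event(const char *disk, enum block_job_event ev)
    {
        switch (ev) {
        case BLOCK_JOB_READY:              /* disk entered the mirroring phase */
            mark_disk_ready(disk);
            break;
        case BLOCK_JOB_FAILED:             /* e.g. ENOSPC on the destination */
            abort_storage_migration(disk);
            break;
        case BLOCK_JOB_COMPLETED:
            break;
        }
    }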
-
- 11 February 2015, 1 commit
-
-
Committed by Luyao Huang
https://bugzilla.redhat.com/show_bug.cgi?id=1191355 When we attempt to migrate a vm with a migrateuri that has no scheme: # virsh migrate test4 --live qemu+ssh://lhuang/system --migrateuri 127.0.0.1 the target libvirtd will crash because uri->scheme is NULL in qemuMigrationPrepareDirect on this line: if (STRNEQ(uri->scheme, "tcp") && Add a value check before this line. Also fix a similar bug in doNativeMigrate, which could only happen when the destination libvirtd returned an incorrect URI. Signed-off-by: Luyao Huang <lhuang@redhat.com> Signed-off-by: Ján Tomko <jtomko@redhat.com>
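The fix boils down to validating the parsed scheme before dereferencing it; a standalone sketch (the struct and messages are illustrative, not libvirt's code):

    #include <stdio.h>
    #include <string.h>

    struct parsed_uri { const char *scheme; const char *server; };

    static int validate_migrate_uri(const struct parsed_uri *uri)
    {
        if (uri->scheme == NULL) {        /* e.g. user passed just "127.0.0.1" */
            fprintf(stderr, "missing scheme in migration URI\n");
            return -1;
        }
        /* only now is it safe to compare the scheme, as the original code does */
        if (strcmp(uri->scheme, "tcp") != 0) {
            /* ... remaining scheme handling elided ... */
        }
        return 0;
    }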
-
- 05 February 2015, 1 commit
-
-
Committed by Luyao Huang
Add the missing jump to the error label when the uuid in the migration cookie XML does not match the uuid of the migrated domain. Signed-off-by: Luyao Huang <lhuang@redhat.com> Signed-off-by: Ján Tomko <jtomko@redhat.com>
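A condensed sketch of the control-flow fix (the names are illustrative): the mismatch must take the error path instead of falling through.

    #include <stdio.h>
    #include <string.h>

    static int check_cookie_uuid(const char *cookie_uuid, const char *domain_uuid)
    {
        if (strcmp(cookie_uuid, domain_uuid) != 0) {
            fprintf(stderr, "uuid in migration cookie does not match the domain\n");
            goto error;                   /* this jump was the missing piece */
        }
        return 0;

     error:
        return -1;
    }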
-