- 19 Feb 2015, 1 commit

Committed by Michal Privoznik
https://bugzilla.redhat.com/show_bug.cgi?id=1179678

When migrating with storage, libvirt iterates over domain disks and instructs qemu to migrate the ones we are interested in (shared, read-only and source-less disks are skipped). The disks are migrated in series; no new disk transfer starts until the previous one has been quiesced. This is checked on the qemu monitor via the 'query-block-jobs' command. Once a disk has been quiesced, it has effectively moved from copying its content to the mirroring state, where all disk writes are mirrored to the other side of the migration too.

That said, there is one inherent flaw in the design. The monitor command we use reports only active jobs, so if a job fails for whatever reason, it no longer appears in the command output. And this can happen quite easily: just try to migrate a domain with storage. If the storage migration fails (e.g. due to ENOSPC on the destination), we resume the guest on the destination and let it run on a partly copied disk.

The proper fix is what even the comment in the code says: listen for qemu events instead of polling. When a storage migration job changes state, an event is emitted and we can act accordingly: either consider the disk copied and continue the process, or consider the disk mangled and abort the migration.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
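
To make the race concrete, here is a minimal sketch built on the public virDomainGetBlockJobInfo() API, which has the same limitation: it reports only active jobs. The disk name and polling interval are arbitrary examples, not libvirt's internals:

    #include <unistd.h>
    #include <libvirt/libvirt.h>

    /* Poll until the block job quiesces. Returns 1 once writes are
     * mirrored, 0 when the job vanished -- which is ambiguous: it may
     * have completed, or it may have failed and been reaped. */
    static int wait_for_mirror(virDomainPtr dom, const char *disk)
    {
        virDomainBlockJobInfo info;
        for (;;) {
            int r = virDomainGetBlockJobInfo(dom, disk, &info, 0);
            if (r < 0)
                return -1;      /* the query itself failed */
            if (r == 0)
                return 0;       /* job gone: success or failure? */
            if (info.cur == info.end)
                return 1;       /* quiesced: mirroring from now on */
            usleep(50 * 1000);
        }
    }

Listening for VIR_DOMAIN_EVENT_ID_BLOCK_JOB events removes the ambiguity, since a job failure is then reported explicitly rather than inferred from absence.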
-
- 11 Feb 2015, 1 commit

Committed by Luyao Huang
https://bugzilla.redhat.com/show_bug.cgi?id=1191355

When we attempt to migrate a vm with a migrateuri that has no scheme:

  # virsh migrate test4 --live qemu+ssh://lhuang/system --migrateuri 127.0.0.1

the target libvirtd will crash because uri->scheme is NULL in qemuMigrationPrepareDirect on this line:

  if (STRNEQ(uri->scheme, "tcp") &&

Add a value check before this line. Also fix a similar bug in doNativeMigrate, which could only happen when the destination libvirtd returned an incorrect URI.

Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
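
A sketch of the added guard, using libxml2's URI parser so the example is self-contained (libvirt's real code uses its internal virURIParse and virReportError helpers, and the accepted scheme list here is illustrative):

    #include <stdio.h>
    #include <string.h>
    #include <libxml/uri.h>

    static int check_migrate_uri(const char *uri_in)
    {
        int ret = -1;
        xmlURIPtr uri = xmlParseURI(uri_in);
        if (!uri)
            return -1;
        if (!uri->scheme) {              /* e.g. a bare "127.0.0.1" */
            fprintf(stderr, "missing scheme in migration URI: %s\n",
                    uri_in);
        } else if (strcmp(uri->scheme, "tcp") == 0 ||
                   strcmp(uri->scheme, "rdma") == 0) {
            ret = 0;                     /* safe to dereference scheme */
        }
        xmlFreeURI(uri);
        return ret;
    }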
-
- 05 Feb 2015, 1 commit

Committed by Luyao Huang
Add the missing jump to the error label when the uuid in the migration cookie XML does not match the uuid of the migrated domain.

Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
-
- 19 Jan 2015, 1 commit

Committed by Ján Tomko
Depending on the context, either error out if the domain has disappeared in the meantime, or just ignore the value to allow marking the function as ATTRIBUTE_RETURN_CHECK.
-
- 15 Jan 2015, 1 commit

Committed by Luyao Huang
https://bugzilla.redhat.com/show_bug.cgi?id=1181182

When we hit an error in qemuMigrationPrepareAny and jump to cleanup with rc < 0, we forget to clear priv->origname. The stale origname left in priv makes the next migration of this vm fail, because it generates a wrong cookie. This patch resets priv->origname to NULL when migration fails on the target host.

Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
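
The shape of the fix, as a hedged sketch; the struct and function are stand-ins for libvirt's qemuDomainObjPrivate and qemuMigrationPrepareAny, not the real code:

    #include <stdlib.h>

    struct mig_priv { char *origname; };

    /* On failure, drop the stale name so the next attempt does not
     * bake it into a wrong migration cookie. */
    static int finish_prepare(struct mig_priv *priv, int rc)
    {
        if (rc < 0) {
            free(priv->origname);
            priv->origname = NULL;
        }
        return rc;
    }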
-
- 14 Jan 2015, 1 commit

Committed by Daniel P. Berrange
The virDomainDefParse* and virDomainDefFormat* methods both accept the VIR_DOMAIN_XML_* flags defined in the public API, along with a set of other VIR_DOMAIN_XML_INTERNAL_* flags defined in domain_conf.c. This is seriously confusing & error prone for a number of reasons:

 - VIR_DOMAIN_XML_SECURE, VIR_DOMAIN_XML_MIGRATABLE and VIR_DOMAIN_XML_UPDATE_CPU are only relevant for the formatting operation
 - Some of the VIR_DOMAIN_XML_INTERNAL_* flags only apply to parse or to format, but not both.

This patch cleanly separates out the flags. There are two distinct sets, VIR_DOMAIN_DEF_PARSE_* and VIR_DOMAIN_DEF_FORMAT_*, that are used by the corresponding methods. The VIR_DOMAIN_XML_* flags received via public API calls must be converted to the VIR_DOMAIN_DEF_FORMAT_* flags where needed. The various calls to virDomainDefParse which hardcoded the use of the VIR_DOMAIN_XML_INACTIVE flag change to use the VIR_DOMAIN_DEF_PARSE_INACTIVE flag.
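
The conversion step could look roughly like this; the flag names are the real ones involved in this change, while the helper itself is only a sketch:

    static unsigned int
    xml_flags_to_format_flags(unsigned int flags)
    {
        unsigned int ret = 0;
        if (flags & VIR_DOMAIN_XML_SECURE)
            ret |= VIR_DOMAIN_DEF_FORMAT_SECURE;
        if (flags & VIR_DOMAIN_XML_MIGRATABLE)
            ret |= VIR_DOMAIN_DEF_FORMAT_MIGRATABLE;
        if (flags & VIR_DOMAIN_XML_UPDATE_CPU)
            ret |= VIR_DOMAIN_DEF_FORMAT_UPDATE_CPU;
        return ret;
    }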
-
- 23 Dec 2014, 1 commit

Committed by Martin Kletzander
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
-
- 21 Dec 2014, 1 commit

Committed by Martin Kletzander
There is one problem that causes various errors in the daemon. When a domain is waiting for a job, it is unlocked while waiting on the condition. However, if that domain is for example transient and being removed in another API (e.g. cancelling incoming migration), it gets unref'd. If the first call, which was waiting, fails to get the job, it unrefs the domain object, and because that was the last reference, the whole domain object is cleared. However, when finishing the call, the domain must be unlocked, and there is no way for the API to know whether it was cleaned up or not (unless there is some ugly temporary variable, but let's scratch that).

The root cause is that our APIs don't ref the objects they are using and all rely on the implicit reference the object holds while it is in the domain list. That reference can be removed while the API is waiting for a job. And because no API does its own ref'ing, it results in the ugly checking of the return value of virObjectUnref() that we have everywhere.

This patch changes qemuDomObjFromDomain() to ref the domain (using virDomainObjListFindByUUIDRef()) and adds qemuDomObjEndAPI() which should be the only function in which the return value of virObjectUnref() is checked. This makes all reference counting deterministic and makes the code a bit clearer.

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
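
The end-of-API helper is small; roughly this, modulo details in qemu_domain.c:

    /* The single place where virObjectUnref()'s return value matters:
     * if ours was the last reference, the object is already disposed
     * and must not be touched, let alone unlocked. */
    void
    qemuDomObjEndAPI(virDomainObjPtr *vm)
    {
        if (virObjectUnref(*vm))    /* still referenced elsewhere */
            virObjectUnlock(*vm);
        *vm = NULL;
    }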
-
- 17 Dec 2014, 1 commit

Committed by Eric Blake
A future patch will allow recursion into backing chains when collecting block stats. This patch should not change behavior, but merely moves out the common code that will be reused once recursion is enabled, and adds the parameter that will turn on recursion.

* src/qemu/qemu_monitor.h (qemuMonitorGetAllBlockStatsInfo)
(qemuMonitorBlockStatsUpdateCapacity): Add recursion parameter,
although it is ignored for now.
* src/qemu/qemu_monitor.c (qemuMonitorGetAllBlockStatsInfo)
(qemuMonitorBlockStatsUpdateCapacity): Likewise.
* src/qemu/qemu_monitor_json.h (qemuMonitorJSONGetAllBlockStatsInfo)
(qemuMonitorJSONBlockStatsUpdateCapacity): Likewise.
* src/qemu/qemu_monitor_json.c (qemuMonitorJSONGetAllBlockStatsInfo)
(qemuMonitorJSONBlockStatsUpdateCapacity): Add parameter, and split...
(qemuMonitorJSONGetOneBlockStatsInfo)
(qemuMonitorJSONBlockStatsUpdateCapacityOne): ...into helpers.
(qemuMonitorJSONGetBlockStatsInfo): Update caller.
* src/qemu/qemu_driver.c (qemuDomainGetStatsBlock): Update caller.
* src/qemu/qemu_migration.c (qemuMigrationCookieAddNBD): Likewise.

Signed-off-by: Eric Blake <eblake@redhat.com>
-
- 03 Dec 2014, 2 commits

Committed by Michal Privoznik
Based on the previous commit, we can now precreate missing volumes. While digging the functionality out of the storage driver would be nicer, if you've seen the code you know it's nearly impossible. So I'm going from the other end:

1) For a given disk target, the disk path is looked up.

2) For the disk path, a storage pool is looked up, a volume XML is constructed and then passed to virStorageVolCreateXML(), which has all the knowledge of how to create raw images, (encrypted) qcow(2) images, etc.

One of the advantages of this approach is that we don't have to care about image conversion - qemu does that for us. So, for instance, users can transform qcow2 into raw on migration (if the correct XML is passed to the migration API).

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
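
Step 2 in public-API terms, as a sketch; the pool name, volume name and capacity are placeholders:

    #include <libvirt/libvirt.h>

    static virStorageVolPtr
    precreate_volume(virConnectPtr conn)
    {
        const char *volxml =
            "<volume>"
            "  <name>vm-disk-b</name>"
            "  <capacity unit='bytes'>10737418240</capacity>"
            "  <target><format type='qcow2'/></target>"
            "</volume>";
        virStoragePoolPtr pool = virStoragePoolLookupByName(conn, "default");
        virStorageVolPtr vol = NULL;

        if (pool) {
            /* the storage driver knows how to build raw/qcow2/... */
            vol = virStorageVolCreateXML(pool, volxml, 0);
            virStoragePoolFree(pool);
        }
        return vol;
    }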
-
Committed by Michal Privoznik
Up 'til now, users need to precreate non-shared storage on migration themselves. This is not a very friendly requirement and we should do something about it. In this patch, the migration cookie is extended so that the <nbd/> section contains not only the NBD port, but also info on the disks being migrated. This patch sends a list of <disk target; disk size> pairs to the destination. The actual storage allocation is left for the next commit.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
- 01 Dec 2014, 2 commits

Committed by Wang Rui
If the job fails in qemuMigrationRun, we expect the jobinfo type to be FAILED. But the jobinfo type won't be updated until entering qemuMigrationWaitForCompletion. We should make sure it is updated in all conditions. Moreover, we can't use qemuMigrationUpdateJobStatus here because the job may fail inside libvirt, in which case we can't query the job status from QEMU.

Signed-off-by: Wang Rui <moon.wangrui@huawei.com>
-
Committed by Wang Rui
The migration job status is tracked in qemuMigrationUpdateJobStatus, which is called in qemuMigrationRun. But if the migration is cancelled before that point, such as in qemuMigrationDriveMirror, the jobinfo type won't be updated to CANCELLED. After this patch, we get jobinfo type CANCELLED if the migration is cancelled during the drive mirror. Moreover, we can't use qemuMigrationUpdateJobStatus here because from qemu's point of view it's just the drive mirror being cancelled; the migration hasn't even started yet.

Signed-off-by: Wang Rui <moon.wangrui@huawei.com>
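
Seen from a management application, the point of these two fixes is that the terminal job type becomes trustworthy. A minimal check using the public API (error handling trimmed):

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    static void report_job_outcome(virDomainPtr dom)
    {
        int type = VIR_DOMAIN_JOB_NONE;
        virTypedParameterPtr params = NULL;
        int nparams = 0;

        if (virDomainGetJobStats(dom, &type, &params, &nparams, 0) < 0)
            return;
        if (type == VIR_DOMAIN_JOB_FAILED)
            fprintf(stderr, "migration failed\n");
        else if (type == VIR_DOMAIN_JOB_CANCELLED)
            fprintf(stderr, "migration cancelled\n");
        virTypedParamsFree(params, nparams);
    }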
-
- 28 Nov 2014, 1 commit

Committed by Jiri Denemark
virReportSystemError is reserved for reporting system errors; calling it with VIR_ERR_* error codes produces error messages that do not make any sense, such as:

  internal error: guest failed to start: Kernel doesn't support user namespace: Link has been severed

We should prohibit wrong usage with a syntax-check rule.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
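
For illustration, the difference the rule guards against; both reporting helpers are real libvirt internals, and the message is the one quoted above:

    /* wrong: VIR_ERR_* codes are not errno values, so the appended
     * strerror() text is garbage ("Link has been severed") */
    virReportSystemError(VIR_ERR_INTERNAL_ERROR, "%s",
                         _("Kernel doesn't support user namespace"));

    /* right: pass a real errno to virReportSystemError ... */
    virReportSystemError(errno, "%s",
                         _("Kernel doesn't support user namespace"));

    /* ... or report libvirt error codes with virReportError */
    virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
                   _("Kernel doesn't support user namespace"));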
-
- 20 Nov 2014, 2 commits

Committed by Jiri Denemark
Oops, I forgot to squash one more instance of the same check in the previous commit (v1.2.10-144-g52691f99).

https://bugzilla.redhat.com/show_bug.cgi?id=1147331

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
Committed by Jiri Denemark
Any attempt to start a tunnelled migration with libvirtd that supports RDMA migration (specifically commit v1.2.8-226-ged22a474) crashes libvirtd on the destination host. The crash is inevitable because qemuMigrationPrepareAny is always called with a NULL protocol in case of tunnelled migration.

https://bugzilla.redhat.com/show_bug.cgi?id=1147331

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
- 15 Nov 2014, 1 commit

Committed by Martin Kletzander
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
-
- 14 Nov 2014, 1 commit

Committed by Jiri Denemark
We used to set migration capabilities only when a user asked for them in flags. This is fine when migration succeeds, since the QEMU process is killed in the end, but in case migration fails or is cancelled, some capabilities may remain turned on with no way to turn them off. To fix that, migration capabilities have to be turned on if requested, but explicitly turned off in case they were not requested and QEMU supports them.

https://bugzilla.redhat.com/show_bug.cgi?id=1163953

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
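
The logic of the fix, sketched with hypothetical helper names (libvirt's real code drives QEMU's migrate-set-capabilities command):

    /* Set every capability QEMU supports to an explicit state, so a
     * failed or cancelled migration cannot leave one switched on. */
    int cap;
    for (cap = 0; cap < MIG_CAP_LAST; cap++) {
        bool wanted = cap_requested_in_flags(flags, cap);
        if (!qemu_supports_cap(mon, cap)) {
            if (wanted)
                goto error;          /* requested but unsupported */
            continue;
        }
        if (set_migration_capability(mon, cap, wanted) < 0)
            goto error;              /* explicitly on OR off */
    }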
-
- 13 Nov 2014, 1 commit

Committed by Pavel Hrdina
Commit 6e5c79a1 tried to fix a deadlock between nwfilter{Define,Undefine} and domain startup, but the same deadlock exists for updating/attaching a network device to a domain. The deadlock was introduced by removing the global QEMU driver lock, because nwfilter was counting on that lock to ensure that all driver locks are held inside nwfilter{Define,Undefine}. This patch extends the usage of virNWFilterReadLockFilterUpdates to prevent the deadlock for all possible paths in the QEMU driver. LXC and UML drivers still have a global lock.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1143780

Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
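
The pattern being spread across the driver, sketched; the two lock helpers are real libvirt internals, while the attach call is a stand-in:

    virNWFilterReadLockFilterUpdates();   /* block concurrent (un)define */
    ret = attach_or_update_net_device(driver, vm, dev); /* hypothetical */
    virNWFilterUnlockFilterUpdates();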
-
- 06 Nov 2014, 1 commit

Committed by Ján Tomko
==404== 232 bytes in 1 blocks are definitely lost in loss record 669 of 758
==404==    at 0x4C2B934: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==404==    by 0x52A2BF3: virAlloc (viralloc.c:144)
==404==    by 0x1D49AD70: qemuMigrationCookieAddStatistics (qemu_migration.c:554)
==404==    by 0x1D49AD70: qemuMigrationBakeCookie (qemu_migration.c:1228)
==404==    by 0x1D4A43B8: qemuMigrationFinish (qemu_migration.c:5002)
==404==    by 0x1D4C9339: qemuDomainMigrateFinish3Params (qemu_driver.c:11526)

Introduced by commit 5d6fb963
-
- 04 Nov 2014, 1 commit

Committed by Weiwei Li
In qemuMigrationFinish, mig->nbd cannot be initialized by qemuMigrationEatCookie without the QEMU_MIGRATION_COOKIE_NBD flag, which causes qemuMigrationStopNBDServer to return early without stopping the NBD server properly.

Signed-off-by: Weiwei Li <nuonuoli@tencent.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
-
- 31 Oct 2014, 1 commit

Committed by Weiwei Li
Commit 3e1e16aa (Use a port from the migration range for NBD as well) changed NBD port allocation from remotePorts to migrationPorts, but did not change the port releasing process, which leads to an error after migrating several times (more than 64):

  error: internal error: Unable to find an unused port in range 'migration' (49152-49215)

https://bugzilla.redhat.com/show_bug.cgi?id=1159245

Signed-off-by: Weiwei Li <nuonuoli@tencent.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
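
The invariant the fix restores, sketched; the port allocator calls are real libvirt internals, and error handling is trimmed:

    unsigned short port = 0;

    if (virPortAllocatorAcquire(driver->migrationPorts, &port) < 0)
        goto cleanup;
    /* ... run the NBD server for storage migration ... */
    virPortAllocatorRelease(driver->migrationPorts, port); /* not remotePorts */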
-
- 22 Oct 2014, 1 commit

Committed by Peter Krempa
Also consider whitespace-only strings returned from the hook as an empty result.
-
- 15 Oct 2014, 1 commit

Committed by Chen Fan
If migration_host is specified as an IPv6 address without brackets, it is resolved to an incorrect address, such as tcp:2001:0DB8::1428:4444, while the correct address would be tcp:[2001:0DB8::1428]:4444. So we should add the brackets when parsing it.

Signed-off-by: Chen Fan <chen.fan.fnst@cn.fujitsu.com>
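
A self-contained sketch of the quoting rule (libvirt's real code builds the URI with its internal helpers):

    #include <stdio.h>
    #include <string.h>

    /* Bracket bare IPv6 hosts before appending the port, so the port
     * cannot be mistaken for another address group. */
    static int format_migrate_uri(char *buf, size_t len,
                                  const char *host, int port)
    {
        if (strchr(host, ':'))      /* colons => treat as IPv6 */
            return snprintf(buf, len, "tcp:[%s]:%d", host, port);
        return snprintf(buf, len, "tcp:%s:%d", host, port);
    }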
-
- 30 Sep 2014, 1 commit

Committed by Ján Tomko
Commit fba6bc47 introduced the non-migratable invtsc feature, breaking save/migration with host-model and host-passthrough. On hosts with this feature present it was automatically included in the CPU definition, regardless of QEMU support. Commit de0aeafe stopped including it by default for host-model, but failed to fix host-passthrough. This commit skips checking of CPU features with host-passthrough, since we don't pass them to QEMU (only -cpu host is passed), allowing domains using host-passthrough that were saved with the broken version of libvirtd to be restored.

https://bugzilla.redhat.com/show_bug.cgi?id=1147584
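
The essence of the change, sketched; the mode enum is the real libvirt one, while the surrounding function is abbreviated:

    /* With host-passthrough only "-cpu host" reaches QEMU, so there
     * are no individual feature flags to validate against the host. */
    if (def->cpu->mode == VIR_CPU_MODE_HOST_PASSTHROUGH)
        return 0;    /* skip the feature compatibility check */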
-
- 23 Sep 2014, 6 commits

Committed by Jiri Denemark
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
Committed by Michael R. Hines
RDMA live migration requires registering memory with the hardware, so QEMU offers a new 'capability' to pre-register / mlock() the guest memory in advance for higher RDMA performance before the migration begins. This capability is disabled by default, meaning QEMU will register the memory with the hardware on an on-demand basis. This patch exposes the capability with the following example usage:

  virsh migrate --live --rdma-pin-all --migrateuri rdma://hostname domain qemu+ssh://hostname/system

Signed-off-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
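
The same request through the public API, for reference; host names are placeholders, and peer-to-peer mode is used to match the dconnuri argument:

    #include <libvirt/libvirt.h>

    static int migrate_rdma_pinned(virDomainPtr dom)
    {
        virTypedParameterPtr params = NULL;
        int nparams = 0, maxparams = 0;
        int ret;

        if (virTypedParamsAddString(&params, &nparams, &maxparams,
                                    VIR_MIGRATE_PARAM_URI,
                                    "rdma://hostname") < 0)
            return -1;
        ret = virDomainMigrateToURI3(dom, "qemu+ssh://hostname/system",
                                     params, nparams,
                                     VIR_MIGRATE_LIVE |
                                     VIR_MIGRATE_PEER2PEER |
                                     VIR_MIGRATE_RDMA_PIN_ALL);
        virTypedParamsFree(params, nparams);
        return ret;
    }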
-
Committed by Michael R. Hines
This patch adds support for the RDMA protocol in migration URIs.

USAGE: $ virsh migrate --live --migrateuri rdma://hostname domain qemu+ssh://hostname/system

Since libvirt runs QEMU in a pretty restricted environment, several files need to be added to cgroup_device_acl (in qemu.conf) for QEMU to be able to access the host's infiniband hardware. Full documentation of the feature can be found on the QEMU wiki: http://wiki.qemu.org/Features/RDMALiveMigration

Signed-off-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
Committed by Jiri Denemark
Currently we only support TCP protocol for native QEMU migration but this is going to be changed. Let's make the code more general and remove hardcoded TCP protocol from several places.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
Committed by Jiri Denemark
For compatibility with old libvirt we need to support both tcp:host and tcp://host migration URIs. Let's make the code that parses them a bit cleaner.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
Committed by Michael R. Hines
RDMA migration uses the 'setup' state in QEMU to optionally lock all memory before the migration starts. The total time spent in this state is exposed as VIR_DOMAIN_JOB_SETUP_TIME. Additionally, QEMU also exports migration throughput (mbps) for both memory and disk, so let's add them too: VIR_DOMAIN_JOB_MEMORY_BPS, VIR_DOMAIN_JOB_DISK_BPS.

Signed-off-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
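
Reading the new fields via virDomainGetJobStats; the three field macros are the real libvirt names, and error handling is trimmed:

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    static void print_migration_rates(virDomainPtr dom)
    {
        int type, nparams = 0;
        virTypedParameterPtr params = NULL;
        unsigned long long setup = 0, mem_bps = 0, disk_bps = 0;

        if (virDomainGetJobStats(dom, &type, &params, &nparams, 0) < 0)
            return;
        virTypedParamsGetULLong(params, nparams,
                                VIR_DOMAIN_JOB_SETUP_TIME, &setup);
        virTypedParamsGetULLong(params, nparams,
                                VIR_DOMAIN_JOB_MEMORY_BPS, &mem_bps);
        virTypedParamsGetULLong(params, nparams,
                                VIR_DOMAIN_JOB_DISK_BPS, &disk_bps);
        printf("setup %llu ms, memory %llu bps, disk %llu bps\n",
               setup, mem_bps, disk_bps);
        virTypedParamsFree(params, nparams);
    }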
-
- 12 Sep 2014, 1 commit

Committed by Eric Blake
Upstream qemu 1.4 added some drive-mirror tunables not present when it was first introduced in 1.3. Management apps may want to set these in some cases (for example, without tuning granularity down to sector size, a copy may end up occupying more bytes than the original because an entire cluster is copied even when only a sector within the cluster is dirty, although tuning it down results in more CPU time to do the copy). I haven't personally needed to use the parameters, but since they exist, and since the new API supports virTypedParams, we might as well expose them.

Since the tuning parameters aren't often used, and omitted from the QMP command when unspecified, I think it is safe to rely on qemu 1.3 to issue an error about them being unsupported, rather than trying to create a new capability bit in libvirt. Meanwhile, all versions of qemu from 1.4 to 2.1 have a bug where a bad granularity (such as non-power-of-2) gives a poor message:

  error: internal error: unable to execute QEMU command 'drive-mirror': Invalid parameter 'drive-virtio-disk0'

because of abuse of QERR_INVALID_PARAMETER (which is supposed to name the parameter that was given a bad value, rather than the value passed to some other parameter). I don't see that a capability check will help, so we'll just live with it (and it has since been improved in upstream qemu).

* src/qemu/qemu_monitor.h (qemuMonitorDriveMirror): Add parameters.
* src/qemu/qemu_monitor.c (qemuMonitorDriveMirror): Likewise.
* src/qemu/qemu_monitor_json.h (qemuMonitorJSONDriveMirror): Likewise.
* src/qemu/qemu_monitor_json.c (qemuMonitorJSONDriveMirror): Likewise.
* src/qemu/qemu_driver.c (qemuDomainBlockCopyCommon): Likewise.
(qemuDomainBlockRebase, qemuDomainBlockCopy): Adjust callers.
* src/qemu/qemu_migration.c (qemuMigrationDriveMirror): Likewise.
* tests/qemumonitorjsontest.c (qemuMonitorJSONDriveMirror): Likewise.

Signed-off-by: Eric Blake <eblake@redhat.com>
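
Setting the tunables from a client, sketched with the public virDomainBlockCopy API; the disk name, destination XML and values are examples, and on qemu 1.3 the parameters are simply rejected, as discussed above:

    #include <libvirt/libvirt.h>

    static int start_tuned_copy(virDomainPtr dom)
    {
        virTypedParameterPtr params = NULL;
        int nparams = 0, maxparams = 0;
        int ret = -1;
        const char *dest =
            "<disk type='file'>"
            "  <source file='/var/lib/libvirt/images/copy.img'/>"
            "  <driver name='qemu' type='raw'/>"
            "</disk>";

        if (virTypedParamsAddUInt(&params, &nparams, &maxparams,
                                  VIR_DOMAIN_BLOCK_COPY_GRANULARITY,
                                  4096) < 0 ||     /* must be a power of 2 */
            virTypedParamsAddULLong(&params, &nparams, &maxparams,
                                    VIR_DOMAIN_BLOCK_COPY_BUF_SIZE,
                                    1 << 20) < 0)
            goto cleanup;
        ret = virDomainBlockCopy(dom, "vda", dest, params, nparams, 0);
     cleanup:
        virTypedParamsFree(params, nparams);
        return ret;
    }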
-
- 11 Sep 2014, 1 commit

Committed by John Ferlan
If qemuMigrationEatCookie() fails to set mig, we jump to cleanup:, which will call qemuMigrationCancelDriveMirror() without first checking whether mig == NULL.

Signed-off-by: John Ferlan <jferlan@redhat.com>
-
- 10 Sep 2014, 5 commits

Committed by Jiri Denemark
After the previous commit, migration statistics on the source and destination hosts are not equal because the destination updated the time statistics. Let's send the result back so that the same data can be queried on both sides of the migration.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
Committed by Jiri Denemark
The total time of a migration and the total downtime transferred from the source to the destination host do not include the transfer time to the destination host or the time elapsed before guest CPUs are resumed. Thus, the source libvirtd remembers when the migration started and when guest CPUs were paused. Both timestamps are transferred to the destination libvirtd, which uses them to compute the total migration time and total downtime. Obviously, this requires the time to be synchronized between the two hosts. The reported times are useless otherwise, but they would be equally useless if we didn't do this recomputation, so we don't lose anything by doing it.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
Committed by Jiri Denemark
When migrating a transient domain or migrating with the VIR_MIGRATE_UNDEFINE_SOURCE flag, the domain may disappear from the source host, and so will the migration statistics associated with it. We need to transfer the statistics at the end of a migration so that they can be queried on the destination host.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
Committed by Jiri Denemark
virDomainGetJobStats gains a new VIR_DOMAIN_JOB_STATS_COMPLETED flag that can be used to fetch statistics of a completed job rather than a currently running job.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
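
Usage from a client, as a short sketch (error handling trimmed):

    #include <libvirt/libvirt.h>

    /* After migration finishes, statistics can still be queried: */
    static void query_completed_job(virDomainPtr dom)
    {
        int type, nparams = 0;
        virTypedParameterPtr params = NULL;

        if (virDomainGetJobStats(dom, &type, &params, &nparams,
                                 VIR_DOMAIN_JOB_STATS_COMPLETED) == 0) {
            /* inspect e.g. VIR_DOMAIN_JOB_TIME_ELAPSED in params */
            virTypedParamsFree(params, nparams);
        }
    }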
-
Committed by Jiri Denemark
Job statistics data were tracked in several structures and variables. Let's make a new qemuDomainJobInfo structure which can be used as a single source of statistics data, as a preparation for storing data about a completed job.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
- 08 Sep 2014, 2 commits

Committed by Jiri Denemark
When QEMU fails during incoming migration after we successfully started it (i.e., during the Perform or Finish phase), we report a rather unhelpful message:

  Unable to read from monitor: Connection reset by peer

We already have code that takes error messages from QEMU's error output, but we disable it once QEMU successfully starts. This patch postpones this until the end of the Finish phase during incoming migration so that we can report a much better error message:

  internal error: early end of file from monitor: possible problem:
  Unknown savevm section or instance '0000:00:05.0/virtio-balloon' 0
  load of migration failed

https://bugzilla.redhat.com/show_bug.cgi?id=1090093

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
Committed by Peter Krempa
Be consistent with the naming of private defines. Also line up code correctly in a few places where the macro is used.
-