- 19 June 2015 (19 commits)
-
-
By Jiri Denemark
The QEMU_CAPS_SEAMLESS_MIGRATION capability says QEMU supports the SPICE_MIGRATE_COMPLETED event. Thus we can just drop all the code that polls query-spice and replace it with waiting for the event. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
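A minimal sketch of the event-driven wait this enables, assuming the event handler sets a flag such as priv->job.spiceMigrated (a name taken from this series) and signals the per-domain condition:

    /* Sketch only: wait for SPICE_MIGRATE_COMPLETED instead of polling
     * query-spice; the flag names are assumptions. */
    while (!priv->job.spiceMigrated && !priv->job.abortJob) {
        if (virDomainObjWait(vm) < 0)   /* woken by the event handler */
            return -1;
    }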
-
By Jiri Denemark
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Jiri Denemark
To avoid polling for asyncAbort flag changes. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Jiri Denemark
When libvirtd is restarted during migration, we properly cancel the ongoing migration (unless it managed to almost finish before the restart). But if we were also migrating storage using NBD, we would completely forget about the running disk mirrors. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Jiri Denemark
The "query-block-jobs" QMP command returns all running block jobs at once, while qemuMonitorBlockJobInfo would only report one. This is not very nice when we need to check several block jobs. This patch refactors the monitor code to always parse all block jobs and store them in a hash. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Jiri Denemark
We don't have an async job when reconnecting to existing domains after libvirtd restart. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Jiri Denemark
So that they can format private data (e.g., disk private data) stored elsewhere in the domain object. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Jiri Denemark
This patch reverts commit 76c61cdc. VIR_DOMAIN_DISK_MIRROR_STATE_ABORT says we asked for a block job to be aborted rather than saying it was aborted. Let's just use VIR_DOMAIN_DISK_MIRROR_STATE_NONE consistently whenever a block job finishes, since no caller depends on VIR_DOMAIN_DISK_MIRROR_STATE_ABORT (anymore) to check whether a block job failed or was cancelled. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Jiri Denemark
Abort migration as soon as we detect that some of the disk mirrors failed. There's no sense in trying to finish memory migration first. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Jiri Denemark
Instead of cancelling disk mirrors sequentially, let's just call block-job-cancel for all migrating disks and then wait until they all disappear. In case we cancel disk mirrors at the end of a successful migration, we also need to check that all block jobs completed successfully; otherwise we have to abort the migration. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
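Roughly, the new flow is "cancel everything, then wait"; a sketch with hypothetical helpers:

    /* Request cancellation of every disk mirror first... */
    for (i = 0; i < vm->def->ndisks; i++) {
        if (vm->def->disks[i]->mirror)
            cancelOneMirror(vm, vm->def->disks[i]);   /* hypothetical */
    }

    /* ...then wait for all jobs to disappear, inspecting each job's
     * final status as it goes away. */
    while (mirrorsStillRunning(vm))                   /* hypothetical */
        if (virDomainObjWait(vm) < 0)
            goto error;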
-
By Jiri Denemark
By switching block jobs to use domain conditions, we can drop some pretty complicated code in NBD storage migration. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Jiri Denemark
Because we are polling, we may detect some errors after we asked QEMU for migration status even though they occurred before. If this happens and QEMU reports the migration completed successfully, we would happily report the migration succeeded even though we should have cancelled it because of the other error. In practice it is not a big issue now, but it will become a much bigger issue once the check for storage migration status is moved inside the loop in qemuMigrationWaitForCompletion. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Jiri Denemark
The wrapper is useful for calling qemuBlockJobEventProcess with the event details stored in the disk's privateData, which is the most likely usage of qemuBlockJobEventProcess. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Jiri Denemark
Complex jobs, such as migration, need to monitor several events at once, which is impossible when each of the events uses its own condition variable. This patch adds a single condition variable to each domain object. This variable can be used instead of the other event-specific conditions. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
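The resulting pattern, sketched with the helpers this series introduces (virDomainObjWait and virDomainObjBroadcast; exact semantics assumed):

    /* Waiter: hold the domain lock; the wait atomically drops and
     * reacquires it. Any event of interest wakes us up. */
    virObjectLock(vm);
    while (!eventOfInterestArrived(vm))      /* hypothetical predicate */
        if (virDomainObjWait(vm) < 0)
            goto error;

    /* Event handler: record the state change, then wake all waiters. */
    recordEvent(vm);                         /* hypothetical */
    virDomainObjBroadcast(vm);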
-
By Michal Privoznik
Well, if a server is being destructed, all underlying services and their sockets should disappear with it. But due to a bug in our implementation this is not the case. Yes, we are closing the sockets, but that's not enough. We must also: 1) Unregister them from the event loop 2) Unref the service for each socket The last step is needed because each socket callback holds a reference to the service object. Since in the first step we are unregistering the callbacks, they no longer need the reference. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
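A sketch of the dispose path described above (loop structure and variable names are assumptions):

    /* For each socket of each service owned by the dying server: */
    virNetSocketRemoveIOCallback(sock);   /* 1) detach from the event loop */
    virNetSocketClose(sock);
    virObjectUnref(svc);                  /* 2) drop the reference the
                                           * socket callback held */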
-
By Michal Privoznik
Although highly unlikely, nobody says that virEventAddHandle() can't return 0 as a handle to a socket callback. It can't happen with our default implementation, since all watches will have a value of 1 or greater, but users can register their own callback functions (which can re-use unused watch IDs, for instance). If this is the case, weird things may happen. Also, there's a little bug I'm fixing: upon virNetSocketRemoveIOCallback(), the variable holding the callback ID was not reset, so calling AddIOCallback() once again would fail. Not that we are doing that right now, but we might. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
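The fix presumably boils down to treating only negative values as "no callback registered" and resetting the ID on removal, roughly:

    /* -1 (not 0) means "no callback registered". */
    if (sock->watch >= 0) {
        virEventRemoveHandle(sock->watch);
        sock->watch = -1;   /* so a later AddIOCallback() can succeed */
    }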
-
By Michal Privoznik
When going through the code I noticed that virNetSocketAddIOCallback() increases the reference counter of @socket. However, its counterpart RemoveIOCallback does not. It took me a while to realize this disproportion. The AddIOCallback registers our own callback which eventually calls the desired callback and then unrefs the @sock. Yeah, a bit complicated, but it works. So, let's note this hard-learned fact in a comment in RemoveIOCallback(). Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
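The added comment plausibly reads along these lines (a paraphrase, not the verbatim source):

    /* Note: we do not unref @sock here. virNetSocketAddIOCallback()
     * grabbed a reference, and the internal wrapper callback it
     * registered drops that reference once the handle goes away. */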
-
By Michal Privoznik
When setting up the daemon networking, new services are created. These services then have sockets to listen on. Once created, the service objects are added to the corresponding server object. However, during that process the server increases the reference counter of the service object. So, at the end of the function, we should decrease it again. This way the service objects will have only one reference, but that's okay since the servers are the only objects holding a reference. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
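In other words (a sketch; the third argument of virNetServerAddService is the mDNS entry name in this era of the code, and NULL here is just illustrative):

    if (virNetServerAddService(srv, svc, NULL) < 0)
        goto error;
    virObjectUnref(svc);   /* the server now holds the only reference */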
-
By Peter Krempa
Since VIR_DOMAIN_AFFECT_CURRENT is 0, the flag check does not make sense, as masking @flags with 0 will always evaluate to false.
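For illustration:

    /* VIR_DOMAIN_AFFECT_CURRENT == 0, so this condition can never hold: */
    if (flags & VIR_DOMAIN_AFFECT_CURRENT) {
        /* dead code: masking with 0 always yields 0 */
    }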
-
- 18 June 2015 (21 commits)
-
-
By John Ferlan
https://bugzilla.redhat.com/show_bug.cgi?id=1224233 Currently it's not possible to distinguish a fatal error (memory allocation failure or failure to open/read the directory) from the perhaps less fatal case of not finding the "block" device in the directory (which may be a disk entry without a block device). In the latter case, we shouldn't make the caller (virStorageBackendSCSIFindLUs) stop searching; rather, we should allow it to try reading the next directory entry. Signed-off-by: John Ferlan <jferlan@redhat.com> Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
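The caller-side distinction might look roughly like this (a sketch; getBlockDevice is the helper in this area of the code, but its exact contract here is an assumption):

    retval = getBlockDevice(host, bus, target, lun, &block_device);
    if (retval < 0)        /* fatal: allocation or directory error */
        goto cleanup;
    if (!block_device)     /* benign: no "block" entry for this LUN */
        continue;          /* keep scanning the next directory entry */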
-
By Pavel Boldin
Add a `virsh migrate' option `--migrate-disks' that allows the CLI user to explicitly specify the block devices to migrate. Signed-off-by: Pavel Boldin <pboldin@mirantis.com> Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
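An illustrative invocation (host and disk names invented; the comma-separated value format is an assumption):

    virsh migrate --live --copy-storage-all \
          --migrate-disks vda,vdb \
          guest qemu+ssh://dst.example.com/system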
-
By Pavel Boldin
https://bugzilla.redhat.com/show_bug.cgi?id=1203032 Implement a `migrate_disks' parameter for the QEMU driver. This multi-value parameter can be used to explicitly specify what block devices are to be migrated using the NBD server. Tunnelled migration using NBD remains to be done. Signed-off-by: Pavel Boldin <pboldin@mirantis.com> Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
By Pavel Boldin
The `virTypedParamsAddStringList' function provides an interface to add a NULL-terminated array of string values as a multi-value parameter to the params. Signed-off-by: Pavel Boldin <pboldin@mirantis.com> Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
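A usage sketch (VIR_MIGRATE_PARAM_MIGRATE_DISKS is the parameter name constant from the related migration patches):

    const char *disks[] = { "vda", "vdb", NULL };
    virTypedParameterPtr params = NULL;
    int nparams = 0;
    int maxparams = 0;

    if (virTypedParamsAddStringList(&params, &nparams, &maxparams,
                                    VIR_MIGRATE_PARAM_MIGRATE_DISKS,
                                    disks) < 0)
        goto error;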
-
By Pavel Boldin
Add a multikey API: * virTypedParamsFilter, which filters all the parameters with the specified name. * virTypedParamsGetStringList, which returns a list of all the values for the specified name and string type. Signed-off-by: Pavel Boldin <pboldin@mirantis.com> Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
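A sketch of consuming a multi-value parameter with the new helper (the signature is inferred from the commit text; doSomethingWith is hypothetical):

    const char **values = NULL;
    int nvalues;
    int i;

    if ((nvalues = virTypedParamsGetStringList(params, nparams,
                                               VIR_MIGRATE_PARAM_MIGRATE_DISKS,
                                               &values)) < 0)
        goto error;
    for (i = 0; i < nvalues; i++)
        doSomethingWith(values[i]);
    VIR_FREE(values);   /* the array only; strings stay owned by @params */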
-
By Pavel Boldin
Allow multi-value parameters to be built using the virTypedParamsAdd* functions by removing the check for duplicates. Signed-off-by: Pavel Boldin <pboldin@mirantis.com> Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
By Pavel Boldin
The `virTypedParamsValidate' function can now be instructed to allow multiple entries for some of the keys. For this, flag the type with the `VIR_TYPED_PARAM_MULTIPLE' flag. Add unit tests for this new behaviour. Signed-off-by: Pavel Boldin <pboldin@mirantis.com> Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
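For example, a validation spec accepting repeated occurrences of one key might look like this (a sketch; the varargs list ends with NULL):

    if (virTypedParamsValidate(params, nparams,
                               VIR_MIGRATE_PARAM_MIGRATE_DISKS,
                               VIR_TYPED_PARAM_STRING |
                               VIR_TYPED_PARAM_MULTIPLE,
                               NULL) < 0)
        goto error;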
-
By Michal Privoznik
When playing with disk migration lately, I noticed this warning in domain logs: WARNING: Image format was not specified for 'nbd://masina:49153/drive-virtio-disk0' and probing guessed raw. Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted. Specify the 'raw' format explicitly to remove the restrictions. So I started digging into the qemu source code to see what triggered the warning. I'd expect qemu to know the formats of the guest's disks since we tell it on the command line. This led me to qmp_drive_mirror(), where the following can be found:

    if (!has_format) {
        format = mode == NEW_IMAGE_MODE_EXISTING ? NULL : bs->drv->format_name;
    }

So, format is automatically initialized from the disk iff mode != "existing". Unfortunately, in migration we are tied to using this mode (NBD doesn't support creating new images). Therefore the only way to avoid this warning is to pass the format. The discussion on the mailing list [1] resulted in code that always forces the NBD export to the "raw" format. [1] https://www.redhat.com/archives/libvir-list/2015-June/msg00153.html Signed-off-by: Michal Privoznik <mprivozn@redhat.com> Signed-off-by: Pavel Boldin <pboldin@mirantis.com>
-
By Michal Privoznik
This function returns a string (the domain XML). Since d3ce7363, when it was first introduced, it was indented incorrectly: static char *qemuMigrationBeginPhase(..) Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
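The corrected declaration presumably follows the usual libvirt layout, with the return type on its own line:

    static char *
    qemuMigrationBeginPhase(/* ... */)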
-
By Michal Privoznik
The disk is not changed anywhere in the function. Mark this fact in the function header too. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
By Peter Krempa
-
By Peter Krempa
If virDomainObjGetDefs, used in qemuDomainPinIOThread, failed, the code would jump to the 'cleanup' label after acquiring the job, and thus the VM would stay locked forever. Introduced in commit cac6d639.
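A sketch of the corrected error path (the job helpers are the real qemu-driver functions; the surrounding structure is an assumption):

    if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0)
        goto cleanup;

    if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0)
        goto endjob;   /* was "goto cleanup", which leaked the job */

    /* ... actual work ... */

 endjob:
    qemuDomainObjEndJob(driver, vm);
 cleanup:
    virDomainObjEndAPI(&vm);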
-
By Peter Krempa
-
By Peter Krempa
The privileged flag will not change, while the configuration might. Make the 'privileged' flag a member of the driver again and mark it immutable. Should that ever change, add an accessor that will group reads of the state.
-
By Peter Krempa
Simplify the code by restructuring the control flow and reusing the better helper.
-
By Peter Krempa
Replace the for loops (each with a case statement inside) with temporary variables and a macro.
-
By Peter Krempa
Use virDomainObjGetDefs and sanitize the control flow.
-
By Peter Krempa
-
By Peter Krempa
virDomainObjGetOneDef is simpler to use than virDomainObjGetDefs.
-
By Peter Krempa
virDomainObjGetOneDef is simpler to use than virDomainObjGetDefs.
-
By Peter Krempa
virDomainObjGetOneDef is simpler to use than virDomainObjGetDefs.
-