- 23 July 2019, 1 commit
-
-
By John Ferlan

It's already dereferenced during the initialization and shouldn't be NULL, unless virJSONValueArraySize called after a virJSONValueObjectGetArray could return a NULL data entry. Found by Coverity.

Signed-off-by: John Ferlan <jferlan@redhat.com>
ACKed-by: Peter Krempa <pkrempa@redhat.com>
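The Coverity complaint here is the classic check-after-dereference pattern; a hedged illustration of that pattern (not the actual code from the patch):

    /* Coverity REVERSE_INULL: 'entry' was already dereferenced above, so a
     * later NULL check is either dead code or the dereference is the bug. */
    virJSONValuePtr entry = virJSONValueArrayGet(data, i);
    const char *name = virJSONValueObjectGetString(entry, "name"); /* deref */

    if (!entry)   /* flagged: checked for NULL only after the dereference */
        return -1;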
-
- 19 July 2019, 12 commits
-
-
By Jiri Denemark

Ever since the --parallel-connections option for virsh migrate was introduced, we did not properly check the return value of vshCommandOptInt. We would set the VIR_MIGRATE_PARAM_PARALLEL_CONNECTIONS parameter even if vshCommandOptInt returned 0 (which means --parallel-connections was not specified) when another int option checked earlier was specified with a nonzero value. Specifically, running virsh migrate with either --auto-converge-increment, --auto-converge-initial, --comp-mt-dthreads, --comp-mt-threads, or --comp-mt-level would set the VIR_MIGRATE_PARAM_PARALLEL_CONNECTIONS parameter, and if the --parallel option was not used, libvirt would complain

    error: invalid argument: Turn parallel migration on to tune it

even though the --parallel-connections option was not used at all.

https://bugzilla.redhat.com/show_bug.cgi?id=1726643

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
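A hedged sketch of the corrected pattern, using the public typed-parameter helper; the variable names and the surrounding cleanup logic are illustrative, not lifted from virsh:

    int connections = 0;
    int rv;

    /* vshCommandOptInt returns <0 on error, 0 when the option is absent,
     * and >0 when it was present and parsed; the bug was effectively
     * treating a value left over from an earlier option as "present" */
    if ((rv = vshCommandOptInt(ctl, cmd, "parallel-connections",
                               &connections)) < 0)
        goto cleanup;

    if (rv > 0 &&
        virTypedParamsAddInt(&params, &nparams, &maxparams,
                             VIR_MIGRATE_PARAM_PARALLEL_CONNECTIONS,
                             connections) < 0)
        goto cleanup;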
-
By Peter Krempa

Report in the logs when we don't find existing block job data and have to create it just to handle the job.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
By Peter Krempa

While this function does start a block job in the case when we would not be able to get our internal data for it, the handler sets the job state to QEMU_BLOCKJOB_STATE_RUNNING anyway, so qemuBlockJobStartupFinalize would just unref the job. Since the other use of qemuBlockJobStartupFinalize in the other part of the event handler was a bug, replace this one as well even though it would not cause problems.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
By Peter Krempa

The block job event handler qemuProcessHandleBlockJob looks at the block job data to see whether the job requires synchronous handling. Since the block job event may arrive before we continue the job handling (if the job has no data to copy), we could hit the state where the job is still set to QEMU_BLOCKJOB_STATE_NEW (we move it to QEMU_BLOCKJOB_STATE_RUNNING only after returning from the monitor). If the event handler used qemuBlockJobStartupFinalize it would unregister and free the job. Thankfully this is not a big problem for legacy blockjobs, as we don't need much data for them, but since we'd re-instantiate the job data structure we'd report the wrong job type for active commit, as qemu reports it as a regular commit job. Fix it by not using the qemuBlockJobStartupFinalize function in qemuProcessHandleBlockJob, as it is not starting the job anyway.

https://bugzilla.redhat.com/show_bug.cgi?id=1721375

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
By Wang Yechao

virCgroupRemove returns -1 when removing the cgroup fails, but qemuProcessStop contains retry code that expects a negative errno:

    retry:
        if ((ret = qemuRemoveCgroup(vm)) < 0) {
            if (ret == -EBUSY && (retries++ < 5)) {
                usleep(200*1000);
                goto retry;
            }
            VIR_WARN("Failed to remove cgroup for %s", vm->def->name);
        }

The return value of qemuRemoveCgroup will never be equal to -EBUSY, so change the value virCgroupRemove returns on failure.

Signed-off-by: Wang Yechao <wang.yechao255@zte.com.cn>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
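A minimal sketch of the direction of the fix, assuming the failure ultimately comes from an rmdir()-style call whose errno can be propagated (the helper name is hypothetical):

    #include <errno.h>
    #include <unistd.h>

    /* hypothetical helper: return -errno instead of a bare -1 so callers
     * like the qemuProcessStop retry loop can compare against -EBUSY */
    static int
    removeCgroupDir(const char *grppath)
    {
        if (rmdir(grppath) < 0 && errno != ENOENT)
            return -errno;   /* typically -EBUSY while tasks remain */
        return 0;
    }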
-
By Daniel P. Berrangé

Shutting down the daemon after 30 seconds of being idle is a little too aggressive. Especially when using 'virsh' in single-shot mode, as opposed to interactive shell mode, it would not be unusual to have more than 30 seconds between commands. This leads to the daemon shutting down and starting up between a series of commands. Increasing the shutdown timer to 2 minutes makes it less likely that the daemon will shut down while the user is in the middle of a series of commands.

Reviewed-by: Jim Fehlig <jfehlig@suse.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
-
By Daniel P. Berrangé

Instead of having each caller pass in the desired logfile name, pass in the binary name instead. The logging code can then derive a logfile name by appending ".log".

Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
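A self-contained sketch of the idea, using a plain asprintf-based helper rather than libvirt's own allocation wrappers (function and directory names are illustrative):

    #define _GNU_SOURCE
    #include <stdio.h>

    /* derive "<logdir>/<binary>.log" from the binary name, e.g.
     * "libvirtd" -> "/var/log/libvirt/libvirtd.log" */
    static char *
    makeLogFileName(const char *logdir, const char *binary)
    {
        char *path = NULL;

        if (asprintf(&path, "%s/%s.log", logdir, binary) < 0)
            return NULL;
        return path;
    }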
-
By Jonathon Jongsma

Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
By Silvan Kaiser

This adds detection of Quobyte as a shared file system for live migration.

Signed-off-by: Silvan Kaiser <silvan@quobyte.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
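Quobyte mounts appear as FUSE filesystems, so a hedged illustration of the detection idea follows; the "quobyte@" mount-source prefix and the exact checks are assumptions here, not lifted from the patch:

    #include <stdbool.h>
    #include <string.h>
    #include <sys/vfs.h>

    #define FUSE_SUPER_MAGIC 0x65735546  /* from linux/magic.h */

    static bool
    looksLikeQuobyte(const char *path, const char *mntSource)
    {
        struct statfs sb;

        if (statfs(path, &sb) < 0)
            return false;

        /* the FUSE magic alone is ambiguous; inspect the mount source too */
        return sb.f_type == FUSE_SUPER_MAGIC &&
               mntSource && strncmp(mntSource, "quobyte@", 8) == 0;
    }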
-
By Michal Privoznik

We have two functions: virPCIDeviceAddressIsEqual(), defined only on Linux, and virPCIDeviceAddressEqual(), defined everywhere. Both of them do the same thing. Drop the former in favour of the latter.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
-
By Peter Krempa

Commit 5ff46aaa added a new parameter but neglected to fix the NONNULL declarations.

Reported-by: Julio Faracco <jcfaracco@gmail.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
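The pitfall is that libvirt's NONNULL annotations are positional, so adding a parameter silently shifts every index. An illustrative prototype (not the actual function from commit 5ff46aaa):

    /* before: */
    int doThing(virDomainObjPtr vm, const char *name)
        ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2);

    /* after a leading 'driver' parameter is added, each index must move: */
    int doThing(virQEMUDriverPtr driver, virDomainObjPtr vm, const char *name)
        ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2) ATTRIBUTE_NONNULL(3);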
-
By Erik Skultety

virPCIGetSysfsFile is conditionally compiled only on Linux platforms.

Signed-off-by: Erik Skultety <eskultet@redhat.com>
-
- 18 July 2019, 27 commits
-
-
By Peter Krempa

When removing the disk frontend while any block job is still active, we need to transfer ownership of the backing chain to the job itself, as the job still holds a reference to the chain members and thus attempts to remove them would fail.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
By Peter Krempa

In cases when the disk frontend was unplugged while a blockjob was running, the blockjob inherits the backing chain. When the blockjob is then terminated we need to unplug the chain, as it will not be used any more.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
By Peter Krempa

The PR manager is a property of the format layer in qemu, so we need to be able to track it also in the chains of orphaned block jobs. Add a helper for qemu to look into the blockjob state as well.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
By Peter Krempa

When the guest unplugs the disk frontend, libvirt is responsible for deleting the backend. Since a running blockjob may still hold a reference to the backing chain, we have to store the metadata for the unplugged disk for future reference. This patch adds 'chain' and 'mirrorChain' fields to 'qemuBlockJobData' to keep them around with the job, along with the status XML machinery and tests. Later patches will then add code to change the ownership of the chain when unplugging the disk backend.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
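A rough sketch of the shape of the addition; only 'chain' and 'mirrorChain' come from the commit message, the remaining members are paraphrased from context rather than copied from qemu_blockjob.h:

    struct _qemuBlockJobData {
        virObject parent;

        char *name;
        virDomainDiskDefPtr disk;        /* may go away on frontend unplug */

        virStorageSourcePtr chain;       /* backing chain owned by the job
                                            once the disk frontend is gone */
        virStorageSourcePtr mirrorChain; /* chain of the copy/mirror target */

        /* ... type, state, error and synchronous-handling fields ... */
    };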
-
By Peter Krempa

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
By Peter Krempa

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
By Peter Krempa

Refresh the state of the jobs and process any events that might have happened while libvirt was not running. The job state processing requires some care to figure out whether a job needs to be bumped. For any invalid job we try our best to cancel it.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
By Peter Krempa

Add the infrastructure to handle block job events in the -blockdev era. Some complexity is required, as qemu does not bother to notify whether the job concluded successfully or failed; thus it's necessary to re-query the monitor. To minimize the possibility of stuck jobs, save the state into the XML prior to handling everything so that the reconnect code can potentially continue with the cleanup.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
By Peter Krempa

Add support for handling the event either synchronously or asynchronously using the event thread.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
By Peter Krempa

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
By Peter Krempa

With blockdev we'll need to use JOB_STATUS_CHANGE, so gate the old events by the blockdev capability.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
By Peter Krempa

This new state is entered when qemu has finished the job but libvirt does not yet know whether it was successful or not.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
By Peter Krempa

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
By Peter Krempa

Now that the blockjob handling code deals with the status XML, we don't need to save it explicitly when starting blockjobs.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
By Peter Krempa

Now that block job data is stored in the status XML portion, we need to make sure that everything which changes the state also saves the status XML. The job registering function is used while parsing the status XML, so in that case we need to skip the XML saving.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
By Peter Krempa

We need to store the block job state in the status XML so that we can properly recover any data when reconnecting after startup, and ultimately be able to perform any transition of the backing chain that happened while libvirt was not connected to the monitor. The first step is to record the name, type, state, and corresponding disk in the status XML. We also need to make sure that a broken blockjob does not make libvirt lose the VM, thus many of the errors just mark the job as invalid. Later on we'll cancel all invalid jobs.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
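A hypothetical formatter sketch using libvirt's virBuffer API; the element and attribute names are illustrative, not the actual status XML schema, and the enum-to-string helpers are the ones a later patch in this series introduces:

    static void
    blockJobFormat(virBufferPtr buf, qemuBlockJobDataPtr job)
    {
        virBufferAsprintf(buf, "<blockjob name='%s' type='%s' state='%s'",
                          job->name,
                          qemuBlockjobTypeToString(job->type),
                          qemuBlockjobStateTypeToString(job->state));
        if (job->disk)
            virBufferAsprintf(buf, " disk='%s'", job->disk->dst);
        virBufferAddLit(buf, "/>\n");
    }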
-
By Peter Krempa

The job data saved in the XML may be partially invalid, e.g. if something is missing. To prevent losing a domain with such a job, add a flag to the job data so that the job APIs can ignore such a job and we can just cancel it.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
By Peter Krempa

When parsing the status XML we need to register all existing jobs. Export the functions so that they are usable in other modules.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
By Peter Krempa

Later on we'll format these values into the status XML, so the from/to string functions will come in handy. The implementation also notes that these values will be used in the status XML, to discourage anybody from changing them.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
By Peter Krempa

Add the job structure to the table when instantiating a new job and remove it when the job terminates or fails.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
By Peter Krempa

Block jobs currently belong to disks only, so we can look up the block job data for them in the corresponding disks. This won't be the case when using blockdev, as certain jobs don't even correspond to a disk and most of them can run on a part of the backing chain. Add a global table of blockjobs which can be used to look up the data for the blockjobs when job events need to be processed. The table is a hash table organized by job name and holds a reference to the job. New and running jobs will later be added to this table. Reference counting will allow synchronous callers to reap the job state.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
By Peter Krempa

The legacy job handler does not look at the old job state, so we can update it earlier.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
By Peter Krempa

There's no need to do it if the job is not completed. The new helper makes it possible to do this with much less hassle in the correct place.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
By Peter Krempa

Rename and move qemuBlockJobTerminate to qemuBlockJobUnregister and separate the bits of qemuBlockJobDiskNew which register the job with the disk. This creates a unified interface for other APIs to use.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
By Peter Krempa

Simplify error paths.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
By Peter Krempa

Similarly to qemuDomainSaveStatus, add a helper to save the config XML, named qemuDomainSaveConfig.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
By Peter Krempa

Rename qemuDomainObjSaveJob and create a wrapper for it which does not require 'driver' to be passed, and export it so that other places can easily save the status XML without having to invoke virDomainSaveStatus, which has unwieldy parameters.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-