- 17 December 2019, 2 commits

Submitted by Michal Privoznik

With NVMe disks, one can start a blockjob with an NVMe disk that is not visible in the domain XML (at least not right away). Usually it's fairly easy to work around this limitation of qemuDomainGetMemLockLimitBytes() - for instance, for hostdevs we temporarily add the device to the domain def, let the function calculate the limit and then remove the device. But it's not so easy with virStorageSourcePtr - in some cases they aren't necessarily attached to a disk, and even when they are, it's done later in the process and frankly, I find it too complicated to use the simple trick we use with hostdevs.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>

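For context, here is a hedged sketch of the hostdev trick the message refers to - temporarily add the device to the definition, let the limit calculation see it, then detach it again. All types and names below are simplified stand-ins, not libvirt's actual code:

```c
#include <stddef.h>
#include <stdio.h>

/* Simplified stand-ins for the libvirt types involved. */
typedef struct { int dummy; } HostDev;

typedef struct {
    HostDev *hostdevs[8];
    size_t nhostdevs;
} DomainDef;

/* Stand-in for qemuDomainGetMemLockLimitBytes(): the real function
 * inspects def->hostdevs (among other things) to size the limit. */
static unsigned long long
getMemLockLimitBytes(const DomainDef *def)
{
    /* placeholder: 1 GiB base plus headroom per passed-through device */
    return (1ULL << 30) + def->nhostdevs * (1ULL << 30);
}

/* The hostdev trick: temporarily attach the device, compute the limit,
 * detach it again. A virStorageSource used by a blockjob has no such
 * slot in the definition, which is why the trick doesn't carry over. */
static unsigned long long
limitWithHostdev(DomainDef *def, HostDev *dev)
{
    unsigned long long limit;

    def->hostdevs[def->nhostdevs++] = dev;
    limit = getMemLockLimitBytes(def);
    def->hostdevs[--def->nhostdevs] = NULL;
    return limit;
}

int
main(void)
{
    DomainDef def = { { NULL }, 0 };
    HostDev dev = { 0 };

    printf("limit with hostdev: %llu\n", limitWithHostdev(&def, &dev));
    return 0;
}
```
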
Submitted by Michal Privoznik

Now that all callers of qemuDomainGetHostdevPath() handle /dev/vfio/vfio on their own, we can safely drop the handling in this function. In the near future, the decision whether a domain needs the VFIO file is going to include more device types than just virDomainHostdev.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>

- 10 December 2019, 4 commits

Submitted by Peter Krempa

Store the data of a backup job, along with the index counter for new backup jobs, in the status XML. Currently we support only one backup job, so there's no need for an array of jobs.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>

Submitted by Peter Krempa

We will want to use the async job infrastructure, along with all its APIs and events, for the backup job, so add backup as a new async job type.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>

Submitted by Peter Krempa

Introduce QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP and the converters and other plumbing needed to report statistics for the backup job.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>

Submitted by Peter Krempa

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>

- 09 December 2019, 1 commit

Submitted by Daniel P. Berrangé

Now that the domain XML APIs don't use virCapsPtr, we can stop passing it around many QEMU driver methods.

Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>

- 03 December 2019, 1 commit

Submitted by Peter Krempa

The function is now used only in qemu_process.c, so move it there and rename it 'qemuProcessPrepareQEMUCaps', which better describes what it does. The reworded comment now mentions that it also post-processes the caps for VM startup.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>

- 22 November 2019, 2 commits

Submitted by Peter Krempa

Add a helper which converts the PFLASH code file and variable file into the virStorageSource objects stored in private data, so that we can use them with -blockdev while keeping intact the infrastructure that determines the paths to the loaders. This is a temporary solution: once we want to support snapshots of the pflash, we will be forced to track the full backing chain in the XML. In the meanwhile, convert it partially so that we can stop using -drive.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

Submitted by Peter Krempa

To allow converting the pflash drives to blockdev, we need a virStorageSource so that we can use our helpers. Prior to converting the loader data to a virStorageSource, temporarily add private data which will house it.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

- 15 November 2019, 1 commit

Submitted by Jonathon Jongsma

Some layered products such as oVirt have requested a way to avoid being blocked by guest agent commands when querying a loaded VM. For example, many guest agent commands are polled periodically to monitor for changes, and rather than blocking the calling process, they'd prefer to simply time out when an agent query is taking too long. This patch adds a way for the user to specify a custom agent timeout that is applied to all agent commands.

One special case to note here is the 'guest-sync' command. 'guest-sync' is issued internally prior to calling any other command (for example, when libvirt wants to call 'guest-get-fsinfo', we first call 'guest-sync' and then call 'guest-get-fsinfo'). Previously, the 'guest-sync' command used a 5-second timeout (VIR_DOMAIN_QEMU_AGENT_COMMAND_DEFAULT), whereas the actual command that followed always blocked indefinitely (VIR_DOMAIN_QEMU_AGENT_COMMAND_BLOCK). As part of this patch, if a custom timeout is specified that is shorter than 5 seconds, this new timeout is also used for 'guest-sync'. If there is no custom timeout, or if the custom timeout is longer than 5 seconds, we continue to use the 5-second timeout.

Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

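The timeout selection described above boils down to a small rule. A minimal sketch, using hypothetical names and sentinel values (libvirt's own enum, VIR_DOMAIN_QEMU_AGENT_COMMAND_*, differs): 'guest-sync' uses the shorter of the 5-second default and any custom timeout, while the command itself uses the custom timeout or blocks:

```c
#include <stdio.h>

/* Hypothetical sentinels for this sketch only. */
#define TIMEOUT_BLOCK     -1   /* wait indefinitely */
#define SYNC_DEFAULT_SECS  5   /* the 5-second 'guest-sync' default */

/* Timeout for the internal 'guest-sync' issued before every command:
 * a custom timeout shorter than 5 seconds wins, otherwise keep the
 * 5-second default. */
static int
syncTimeout(int customSecs)
{
    if (customSecs > 0 && customSecs < SYNC_DEFAULT_SECS)
        return customSecs;
    return SYNC_DEFAULT_SECS;
}

/* The actual command blocks indefinitely unless a custom timeout was
 * configured. */
static int
commandTimeout(int customSecs)
{
    return customSecs > 0 ? customSecs : TIMEOUT_BLOCK;
}

int
main(void)
{
    printf("custom=3s:  sync=%ds cmd=%ds\n", syncTimeout(3), commandTimeout(3));
    printf("custom=30s: sync=%ds cmd=%ds\n", syncTimeout(30), commandTimeout(30));
    printf("no custom:  sync=%ds cmd=%ds (block)\n", syncTimeout(0), commandTimeout(0));
    return 0;
}
```
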
- 13 November 2019, 1 commit

Submitted by Jiri Denemark

The pconfig feature was enabled in QEMU 3.1.0 by accident. All other, newer versions do not support it, and it was removed from the Icelake-Server CPU model in QEMU. We don't normally change our CPU models even when QEMU does, to avoid breaking migrations between different versions of libvirt, but we can safely do so in this specific case: QEMU never supported enabling pconfig, so any domain that was able to start has pconfig disabled. With a small compatibility hack which explicitly disables pconfig when the CPU model equals Icelake-Server in the migratable domain definition, only one migration scenario stays broken (and there's nothing we can do about it): from any host to a host with libvirt < 5.10.0 and QEMU > 3.1.0.

https://bugzilla.redhat.com/show_bug.cgi?id=1749672

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

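A hedged sketch of the compatibility hack described above, using simplified stand-in types rather than libvirt's actual CPU definition structs:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Simplified stand-ins for libvirt's CPU definition types. */
typedef struct {
    const char *name;
    bool enabled;
} CPUFeature;

typedef struct {
    const char *model;
    CPUFeature *features;
    size_t nfeatures;
} CPUDef;

/* When formatting the migratable domain definition, explicitly disable
 * pconfig for Icelake-Server: QEMU never supported enabling it, so any
 * domain that managed to start has it disabled anyway. */
static void
disablePconfigForMigratableDef(CPUDef *cpu)
{
    size_t i;

    if (!cpu->model || strcmp(cpu->model, "Icelake-Server") != 0)
        return;

    for (i = 0; i < cpu->nfeatures; i++) {
        if (strcmp(cpu->features[i].name, "pconfig") == 0)
            cpu->features[i].enabled = false;
    }
}

int
main(void)
{
    CPUFeature features[] = { { "pconfig", true }, { "avx512f", true } };
    CPUDef cpu = { "Icelake-Server", features, 2 };

    disablePconfigForMigratableDef(&cpu);
    printf("pconfig enabled: %d\n", features[0].enabled);
    return 0;
}
```
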
- 12 November 2019, 1 commit

Submitted by Michal Privoznik

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>

- 21 October 2019, 3 commits

Submitted by Peter Krempa

The function does not do anything that could fail, so remove the return value.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>

Submitted by Peter Krempa

qemuDomainPrepareDiskSourceData historically prepared everything, but we've since split out the majority of the functionality, so it now sets up sources predominantly according to the configuration of the disk. One leftover bit was setting the gluster debug level from the driver config. Split this out into a separate function so that qemuDomainPrepareDiskSourceData prepares based only on the disk.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
ACKed-by: Eric Blake <eblake@redhat.com>

Submitted by Michal Privoznik

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>

- 15 October 2019, 3 commits

Submitted by Ján Tomko

Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

Submitted by Pavel Mores

When undefining a UEFI domain, its nvram file has to be handled properly as well. It's mandatory to use one of the --nvram and --keep-nvram options when 'virsh undefine <domain>' is issued for a UEFI domain. To fix the bug as reported, virsh should return an error message if neither option is used, and the nvram file should be removed when --nvram is given. The cause of the problem is that when qemuDomainUndefineFlags() is invoked on an inactive domain, the path to its nvram file is empty. This commit aims to fix this by formatting and filling in the path in time for the nvram removal code to run properly.

https://bugzilla.redhat.com/show_bug.cgi?id=1751596

Signed-off-by: Pavel Mores <pmores@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

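A minimal sketch of the kind of path formatting the fix needs, assuming libvirt's conventional default nvram location and <name>_VARS.fd naming; the helper name and layout are hypothetical:

```c
#include <glib.h>

/* Hypothetical sketch: when undefining an inactive UEFI domain the
 * nvram path may be unset, so derive the default path the same way
 * domain startup would, before the removal code runs. */
static char *
defaultNvramPath(const char *nvramDir, const char *domainName)
{
    return g_strdup_printf("%s/%s_VARS.fd", nvramDir, domainName);
}

int
main(void)
{
    g_autofree char *path =
        defaultNvramPath("/var/lib/libvirt/qemu/nvram", "guest1");

    g_print("would remove: %s\n", path);
    return 0;
}
```
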
Submitted by Ján Tomko

Introduced in GLib 2.10.

Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

- 30 September 2019, 2 commits

Submitted by Peter Krempa

Rather than having to fix five places once we support the combination, add a function that is called by all the blockjob/snapshot APIs.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>

Submitted by Peter Krempa

Finish the refactor by moving and renaming functions from qemu_domain.c.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>

- 27 September 2019, 1 commit

Submitted by Peter Krempa

Move it to qemu_domain.c and rename it to qemuDomainObjFromDomain. This will allow reusing it after splitting the checkpoint code out of qemu_driver.c.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>

- 25 September 2019, 2 commits

Submitted by Marc-André Lureau

Similar to qemu_tpm.c, add a unit with a few functions to start/stop the external vhost-user-gpu process and set up its cgroup. See the function documentation. The vhost-user connection fd is stored on the qemuDomainVideoPrivate struct.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>

Submitted by Marc-André Lureau

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>

- 16 September 2019, 1 commit

Submitted by Laine Stump

The same validation should be done for both static network devices and hotplugged devices, but the two paths are currently inconsistent. Move all the relevant validation from qemuBuildInterfaceCommandLine() into the new function qemuDomainValidateActualNetDef() and call the latter from the former.

Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

- 09 September 2019, 1 commit

Submitted by Eric Farman

Let's pull this hunk out into a function so that it can be reused in another code path that needs to do the same thing.

Signed-off-by: Eric Farman <farman@linux.ibm.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>

- 06 September 2019, 5 commits

Submitted by Marc-André Lureau

For a VM started and then migrated/saved without slirp helpers, prevent the automatic setup (as the VM would otherwise fail to migrate).

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

Submitted by Marc-André Lureau

Save and restore the slirp helper PID associated with a network interface, along with the probed features.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

Submitted by Marc-André Lureau

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

Submitted by Marc-André Lureau

Add dbusVMStates to keep a list of the dbus-vmstate objects needed for migration. They are put on the command line during startup, or qemuDBusVMStateAdd/Remove() will hotplug them as needed.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

Submitted by Peter Krempa

Since libvirt stores the backing chain into the XML in a nested way, it is a prime candidate for hitting libxml2's parsing limit of 256 layers. Introduce code which crawls the backing chain and verifies that it's not too deep. The maximum nesting is set to 200 layers so that there's still some space left for additional properties or nesting into snapshot XMLs. The check is applied to all disk use cases (starting, hotplug, media change) as well as block copy (which changes the image) and snapshots. We simply report an error and refuse the operation. Without this check, a restart of libvirtd would result in the status XML failing to parse and thus losing the VM.

https://bugzilla.redhat.com/show_bug.cgi?id=1524278

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>

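A minimal sketch of such a depth check, using a simplified stand-in for virStorageSource; the 200-layer cap leaves headroom below libxml2's limit of 256:

```c
#include <stddef.h>
#include <stdio.h>

/* Hypothetical, simplified stand-in for virStorageSource: each layer
 * of the backing chain points at the image backing it. */
typedef struct Source Source;
struct Source {
    const char *path;
    Source *backingStore;
};

/* Stay well below libxml2's nesting limit of 256 so that status XML,
 * which wraps the chain in extra elements, still parses. */
#define MAX_BACKING_DEPTH 200

/* Return 0 if the chain is acceptable, -1 if it nests too deeply. */
static int
checkBackingChainDepth(const Source *src)
{
    size_t depth = 0;

    for (; src; src = src->backingStore) {
        if (++depth > MAX_BACKING_DEPTH)
            return -1;
    }
    return 0;
}

int
main(void)
{
    Source base = { "/imgs/base.qcow2", NULL };
    Source top = { "/imgs/top.qcow2", &base };

    printf("depth ok: %s\n", checkBackingChainDepth(&top) == 0 ? "yes" : "no");
    return 0;
}
```
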
- 29 August 2019, 1 commit

Submitted by Peter Krempa

In addition to the data that libvirt needs and extracts internally, copy and store the whole 'props' JSON sub-object of the data returned by query-hotpluggable-cpus, for future use.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>

- 21 August 2019, 1 commit

Submitted by Ján Tomko

Now that virDomainXMLNamespace matches virXMLNamespace, we no longer need to keep both around.

Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>

- 09 August 2019, 2 commits

Submitted by Jiri Denemark

Since the qemuDomainDefPostParse callback requires qemuCaps, we need to make sure it gets the capabilities stored in the domain's private data if the domain is running. Passing NULL may trigger QEMU capabilities probing in case the QEMU binary changed in the meantime. When this happens while a running domain object is locked, a QMP event delivered to the domain before the capabilities probing finishes will deadlock the event loop. This patch fixes all paths leading to qemuDomainDefFormatBufInternal.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

Submitted by Jiri Denemark

Since the qemuDomainDefPostParse callback requires qemuCaps, we need to make sure it gets the capabilities stored in the domain's private data if the domain is running. Passing NULL may trigger QEMU capabilities probing in case the QEMU binary changed in the meantime. When this happens while a running domain object is locked, a QMP event delivered to the domain before the capabilities probing finishes will deadlock the event loop. This patch fixes all paths leading to qemuDomainDefCopy.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

- 29 July 2019, 2 commits

Submitted by Eric Blake

QEMU bitmap operations require knowing the node name associated with the format layer (the qcow2 file); as upcoming patches will be grabbing that information frequently, add a helper function to access it. Another potential benefit of this function is that it gives us a single place where we could insert a QMP node-name scraping call if we don't currently know the node name, for when -blockdev is not supported; however, the goal is that we hopefully never have to do that, because we instead scrape node names only at the point where they change.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

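A minimal sketch of what such an accessor looks like, with a simplified stand-in struct; the example node names follow libvirt's -blockdev naming convention but are illustrative only:

```c
#include <stdio.h>

/* Simplified stand-in: with -blockdev the format layer (e.g. the qcow2
 * driver) has its own node name, distinct from the node name of the
 * raw storage underneath it. */
typedef struct {
    const char *formatNodeName;   /* e.g. "libvirt-1-format" */
    const char *storageNodeName;  /* e.g. "libvirt-1-storage" */
} DiskSource;

/* Bitmap QMP commands address the format (qcow2) layer, so hand out
 * that node name; NULL signals that it isn't known (yet). */
static const char *
diskSourceFormatNodename(const DiskSource *src)
{
    return src ? src->formatNodeName : NULL;
}

int
main(void)
{
    DiskSource src = { "libvirt-1-format", "libvirt-1-storage" };

    printf("%s\n", diskSourceFormatNodename(&src));
    return 0;
}
```
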
Submitted by Eric Blake

A lot of this work heavily copies from the existing snapshot APIs. What's more, this patch is (intentionally) very similar to the checkpoint code just added in the test driver, to the point that qemu checkpoints are not fully usable with this patch alone, but it at least bisects and builds cleanly. The separation between patches is done because the grunt work of saving and restoring XML and tracking relations between checkpoints is common to the test driver, while the later patch adding integration with QMP is specific to qemu. Also note that the interlocking to prevent checkpoints and snapshots from existing at the same time will be a separate patch, to make it easier to revert that restriction once we finally round out the design for supporting interaction between the two concepts.

Signed-off-by: Eric Blake <eblake@redhat.com>

- 18 July 2019, 3 commits

Submitted by Peter Krempa

The PR manager is a property of the format layer in qemu, so we need to be able to track it also in the chains of orphaned block jobs. Add a helper for qemu that also looks into the blockjob state.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>

Submitted by Peter Krempa

Add support for handling the event either synchronously, or asynchronously via the event thread.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>

Submitted by Peter Krempa

Block jobs currently belong to disks only, so we can look up the block job data in the corresponding disks. This won't be the case when using blockdev, as certain jobs don't even correspond to a disk and most of them can run on a part of the backing chain. Add a global table of block jobs which can be used to look up the data for a blockjob when its job events need to be processed. The table is a hash table organized by job name and holds a reference to the job. New and running jobs will later be added to this table. Reference counting allows synchronous callers to reap the job state.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>

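A hedged sketch of a refcounted job table with lookup by the name QEMU reports in its events; all names here are hypothetical, and a linked list stands in for the real hash table to keep the sketch short:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical refcounted block job record. */
typedef struct BlockJob BlockJob;
struct BlockJob {
    char *name;
    unsigned int refs;
    BlockJob *next;
};

static BlockJob *jobs; /* the driver-global job table */

/* Look up a job by the name QEMU reports in its events, taking a
 * reference so a synchronous caller can still reap the job state
 * after the event handler has run. */
static BlockJob *
blockJobGet(const char *name)
{
    BlockJob *job;

    for (job = jobs; job; job = job->next) {
        if (strcmp(job->name, name) == 0) {
            job->refs++;
            return job;
        }
    }
    return NULL;
}

/* Drop a reference; unlink and free the record once unused. */
static void
blockJobPut(BlockJob *job)
{
    BlockJob **link;

    if (!job || --job->refs > 0)
        return;

    for (link = &jobs; *link; link = &(*link)->next) {
        if (*link == job) {
            *link = job->next;
            break;
        }
    }
    free(job->name);
    free(job);
}

int
main(void)
{
    BlockJob *job = calloc(1, sizeof(*job));

    job->name = strdup("copy-vda");
    job->refs = 1;               /* reference held by the table itself */
    job->next = jobs;
    jobs = job;

    BlockJob *found = blockJobGet("copy-vda"); /* event handler's reference */
    blockJobPut(found);          /* event handler done */
    blockJobPut(job);            /* table drops its reference; job is freed */
    return 0;
}
```
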