- 27 April 2020, 4 commits
-
-
Committed by Daniel Henrique Barboza
This patch adds the implementation of the SBBC pSeries feature, using the QEMU_CAPS_MACHINE_PSERIES_CAP_SBBC capability added in the previous patch. Like the previously added CFPC feature, SBBC can have the values "broken", "workaround" or "fixed". Extra code is required to handle it since it's not a regular tristate capability. This is the XML format for the cap: <features> <sbbc value='workaround'/> </features> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com> Signed-off-by: Michal Privoznik <mprivozn@redhat.com> Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
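For reference, a minimal sketch of a <features> block combining the two pSeries Spectre mitigations covered by this group of commits (SBBC here, CFPC below); the values shown are illustrative, and each attribute accepts "broken", "workaround" or "fixed":

  <features>
    <!-- illustrative values; choose the one matching the host/guest setup -->
    <cfpc value='workaround'/>
    <sbbc value='workaround'/>
  </features>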
-
Committed by Daniel Henrique Barboza
SBBC (Speculation Barrier Bounds Checking) is another capability related to Spectre mitigation efforts in Power processors. It was implemented in QEMU 2.12 by commit 09114fd81799. This patch introduces it as QEMU_CAPS_MACHINE_PSERIES_CAP_SBBC, to be implemented in the next patch. As with the already implemented CFPC, exposing this feature in the XML gives users a cleaner way to tune SBBC, given that not all hypervisor and guest setups support this Spectre mitigation. Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com> Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Daniel Henrique Barboza
This patch adds the implementation of the CFPC pSeries feature, using the QEMU_CAPS_MACHINE_PSERIES_CAP_CFPC capability added in the previous patch. CFPC can have the values "broken", "workaround" or "fixed". Extra code is required to handle it since it's not a regular tristate capability. This is the XML format for the cap: <features> <cfpc value='workaround'/> </features> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com> Signed-off-by: Michal Privoznik <mprivozn@redhat.com> Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Daniel Henrique Barboza
CFPC (Cache Flush on Privilege Change) is one of the capabilities added to QEMU to mitigate Spectre vulnerabilities in Power chips. It was implemented in QEMU 2.12 by commit 6898aed77f46. This capability is still used today due to differences in how the host setup (hardware and firmware/kernel) can handle this mitigation. Its default value also varies with the pseries machine version of the time. There are also certain OSes, like AIX, that might not support the default value of the pseries machine the guest uses. Exposing this in the Libvirt XML as a feature will allow users to tune CFPC values in a cleaner way, instead of hacking parameters into <qemu:commandline> elements. Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com> Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
- 24 April 2020, 6 commits
-
-
Committed by Han Han
This aio mode was added in Linux 5.1 [1] and QEMU 5.0.0 [2]; it delivers faster and more efficient I/O operations for the file, host_device and host_cdrom backends. Reference: [1]: https://lwn.net/Articles/810414/ [2]: https://lists.gnu.org/archive/html/qemu-devel/2020-01/msg07686.html Signed-off-by: Han Han <hhan@redhat.com> Reviewed-by: Peter Krempa <pkrempa@redhat.com>
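As an illustration, a minimal sketch of a disk definition selecting the new aio mode through the io attribute of the <driver> element; the image path and target device name are placeholders:

  <disk type='file' device='disk'>
    <!-- the image path and target name below are placeholders -->
    <driver name='qemu' type='qcow2' io='io_uring'/>
    <source file='/var/lib/libvirt/images/example.qcow2'/>
    <target dev='vda' bus='virtio'/>
  </disk>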
-
Committed by Han Han
Add the io_uring value to capability replies. The QEMU_CAPS_AIO_IO_URING capability will be used for the io_uring aio mode, introduced in QEMU 5.0 and Linux 5.1. Signed-off-by: Han Han <hhan@redhat.com> Reviewed-by: Peter Krempa <pkrempa@redhat.com>
-
Committed by Peter Krempa
qemuDomainSupportsCheckpointsBlockjobs checks whether the QEMU_CAPS_INCREMENTAL_BACKUP capability is supported to do the interlocking. Capabilities are not present when the VM isn't running, though, which would create false errors. Move the checks after the liveness check in the block job implementations. Signed-off-by: Peter Krempa <pkrempa@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Pavel Mores <pmores@redhat.com>
-
Committed by Peter Krempa
If a backup job fails midway it's hard to figure out what happened, since it runs asynchronously. Use the VIR_DOMAIN_JOB_ERRMSG job statistics field to pass through the error from the first failed backup blockjob, so that both the consumer of virDomainGetJobStats and the corresponding event can see the error. event 'job-completed' for domain backup-test: operation: 9, time_elapsed: 46, disk_total: 104857600, disk_processed: 10158080, disk_remaining: 94699520, success: 0, errmsg: No space left on device. virsh domjobinfo backup-test --completed --anystats: Job type: Failed, Operation: Backup, Time elapsed: 46 ms, File processed: 9.688 MiB, File remaining: 90.312 MiB, File total: 100.000 MiB, Error message: No space left on device. https://bugzilla.redhat.com/show_bug.cgi?id=1812827 Signed-off-by: Peter Krempa <pkrempa@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com>
-
Committed by Peter Krempa
The field can be used by jobs to add an optional error message to a completed (failed) job. Signed-off-by: Peter Krempa <pkrempa@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com>
-
Committed by Peter Krempa
In order to add a string to qemuDomainJobInfo we must ensure that it's freed and copied properly. Add helpers to copy and free the structure and adjust the code to use them properly for the new semantics. Additionally, allocation is changed to g_new0, as it includes the type and thus makes it very easy to grep for all allocations of a given type. Signed-off-by: Peter Krempa <pkrempa@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com>
-
- 23 April 2020, 8 commits
-
-
Committed by Peter Krempa
The function is mocked in qemuhotplugmock.so. Recent clang versions decided to inline it, so the mock stopped working, resulting in qemuhotplugtest wasting 15 seconds waiting for timeouts. Signed-off-by: Peter Krempa <pkrempa@redhat.com> Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Peter Krempa
Return error codes directly and fix the weird reporting of errors via a temporary variable. Signed-off-by: Peter Krempa <pkrempa@redhat.com> Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
Committed by Peter Krempa
Signed-off-by: Peter Krempa <pkrempa@redhat.com> Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
Committed by Peter Krempa
Use VIR_AUTOCLOSE to declare it and remove all internal closing of the file descriptor. This will allow getting rid of 'error' completely. Signed-off-by: Peter Krempa <pkrempa@redhat.com> Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
Committed by Peter Krempa
Signed-off-by: Peter Krempa <pkrempa@redhat.com> Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
Committed by Peter Krempa
In an attempt to simplify qemuDomainSaveImageOpen we need to add automatic pointer clearing for virQEMUSaveData. Signed-off-by: Peter Krempa <pkrempa@redhat.com> Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
Committed by Peter Krempa
Commit 21ad56e9 introduced a regression where a VM with a corrupted save image file would fail to start on the first attempt. This was caused by returning a wrong return code, as 'fd' was abused to also hold the return code. Since it's easy to miss this nuance, introduce a 'ret' variable for the return code and return its value in the error section. https://bugzilla.redhat.com/show_bug.cgi?id=1791522 Signed-off-by: Peter Krempa <pkrempa@redhat.com> Reviewed-by: Ján Tomko <jtomko@redhat.com> Reviewed-by: Pavel Mores <pmores@redhat.com>
-
Committed by Ján Tomko
Catch the individual usage not removed in previous commits. Signed-off-by: Ján Tomko <jtomko@redhat.com> Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
- 22 April 2020, 6 commits
-
-
Committed by Marc-André Lureau
The file doesn't use virSystemd functions directly. Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com> Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Marc-André Lureau
External devices are started before the cgroup is created. Add the DBus daemon to the VM cgroup with the rest of the external devices. Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com> Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Marc-André Lureau
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com> Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Marc-André Lureau
Allow calling qemuDBusStart() multiple times (as may be done by qemu-slirp already). Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com> Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Marc-André Lureau
The slirp helper process should be associated with the VM cgroup, like other helpers. Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com> Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Marc-André Lureau
Don't stop the DBus daemon if a slirp helper failed to start, as it may be shared with other helpers. Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com> Signed-off-by: Michal Privoznik <mprivozn@redhat.com> Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
- 21 April 2020, 1 commit
-
-
Committed by Marek Marczykowski-Górecki
e820_host is a Xen-specific option, only available for PV domains, that provides the domain a virtual e820 memory map based on the host one. It is enabled with a new Xen hypervisor feature, e.g.: <features> <xen> <e820_host state='on'/> </xen> </features> e820_host is required when using PCI passthrough and is generally considered safe for any PV kernel. e820_host is silently ignored if set in an HVM domain configuration. See the xl.cfg(5) man page in the Xen documentation for more details. Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com> Reviewed-by: Jim Fehlig <jfehlig@suse.com>
-
- 17 April 2020, 1 commit
-
-
Committed by Michal Privoznik
As explained in the previous commit, we need to relabel the file we are restoring the domain from. That is the FD that is passed to QEMU. If the file is not under /dev then the file inside the namespace is the very same as the one in the host, and regardless of using transactions, the file will be relabeled. But if the file is under /dev, then when using transactions only the copy inside the namespace is relabeled and the one in the host is not. But QEMU is actually reading from the one in the host. Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1772838 Signed-off-by: Michal Privoznik <mprivozn@redhat.com> Reviewed-by: Erik Skultety <eskultet@redhat.com>
-
- 16 April 2020, 3 commits
-
-
Committed by Michal Privoznik
Previously, we used virCapabilitiesDomainDataLookup() to fill in the machine type in the post parse callback if none was provided in the domain XML. If the machine type couldn't be filled in, an error was reported. After 4a4132b4 we changed it to virQEMUCapsGetPreferredMachine(), which can return NULL, but we no longer report an error and proceed with processing the post parse callbacks. This may lead to a crash because the code later on assumes def->os.machine is not NULL. Fixes: 4a4132b4 Signed-off-by: Michal Privoznik <mprivozn@redhat.com> Reviewed-by: Pavel Mores <pmores@redhat.com>
-
Committed by Michal Privoznik
When preparing to do a blockcopy, the mirror image is modified so that QEMU can access it. For instance, the mirror has seclabels set, and if it is an NVMe disk it is detached from the host, and so on. Usually, the restore is done upon successful finish of the blockcopy operation. But if something fails then we need to explicitly revoke the access to the mirror image (and thus reattach the NVMe disk back to the host). Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1822538 Signed-off-by: Michal Privoznik <mprivozn@redhat.com> Reviewed-by: Pavel Mores <pmores@redhat.com>
-
Committed by Lin Ma
When we do parallel migration, the multifd-channels migration parameter needs to be set on the destination side as well, before the incoming migration URI, unless we accept the default number of connections (2). Usually this can be correctly handled by libvirtd. But in this case, if we use p2p + xbzrle compression without the '--comp-xbzrle-cache' parameter, qemuMigrationParamsDump returns too early and the corresponding migration parameter will not be set on the destination side, which results in QEMU hanging. Reproducer: virsh migrate --live --p2p --comp-methods xbzrle --parallel --parallel-connections 3 GUEST qemu+ssh://dsthost/system or virsh migrate --live --p2p --compressed --parallel --parallel-connections 3 GUEST qemu+ssh://dsthost/system Signed-off-by: Lin Ma <lma@suse.com> Message-Id: <20200416044451.21134-1-lma@suse.com> Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
-
- 15 April 2020, 1 commit
-
-
Committed by Peter Krempa
We always tried to install a backing store for the image, even if it didn't make sense, e.g. for a full backup into a raw image. Additionally, we didn't record the backing file in the qcow2 metadata, so the image itself contained the diff of data but reading from it would be incomplete as it depends on the backing image. This patch fixes both issues by carefully installing the correct backing file when appropriate and also recording it in the metadata when creating the image. https://bugzilla.redhat.com/show_bug.cgi?id=1813310 Signed-off-by: Peter Krempa <pkrempa@redhat.com> Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
- 13 April 2020, 4 commits
-
-
Committed by Laine Stump
virDomainPCIAddressBusSetModel() is called for each PCI controller when building an address set prior to assigning PCI addresses to devices. This patch adds a new argument, allowHotplug, to that function that can be set to false if we know for certain that a particular controller won't support hotplug. The most interesting case is in qemuDomainPCIAddressSetCreate(), where the config of each existing controller is available while building the address set, so we can appropriately set allowHotplug = false when the user has "hotplug='off'" in the config of a controller that would normally support hotplug. In all other cases, it is set to true or false in accordance with the capability of the controller model. So far we aren't doing anything with this bus flag in the address set. Signed-off-by: Laine Stump <laine@redhat.com> Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Laine Stump
When the HOTPLUGGABLE flag was originally added, it was set for all the PCI controllers that accepted hotplugged devices, and requested for all devices that were auto-assigned to a controller. While we're still auto-assigning to the same list of controllers, those controllers may or may not support hotplug, so let's use the flag that fits what we're actually doing. Signed-off-by: Laine Stump <laine@redhat.com> Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Laine Stump
If a pcie-root-port or pcie-downstream-port has hotplug='off' in its <target> subelement, and if the qemu binary supports the hotplug=false option, then it will be added to the commandline for the pcie controller. This controller will then not allow any hotplug/unplug of devices while the guest is running (and the hotplug capability won't be advertised to the guest OS, so the guest OS also won't present unplugging of PCI devices as an option). <controller type='pci' model='pcie-root-port'> <target hotplug='off'/> </controller> For any PCI controllers other than pcie-downstream-port and pcie-root-port, or for qemu binaries that don't support the hotplug commandline option, an error will be logged during validation. Signed-off-by: Laine Stump <laine@redhat.com> Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Laine Stump
This caps flag is set when the qemu binary supports the option "hotplug" for pcie-root-port, ioh3420 (Intel pcie-root-port) and xio3130-downstream (Intel pcie-downstream-port). If it's available, it's possible to disable hotplugging/unplugging of devices on a particular port by adding ",hotplug=off" to the qemu device commandline. This option first appears in qemu-5.0.0. Signed-off-by: Laine Stump <laine@redhat.com> Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
- 08 April 2020, 3 commits
-
-
Committed by Bjoern Walk
Pass the packed option on the QEMU command line if the capability for packed virtqueues is detected and the parameter is set explicitly. Reviewed-by: Ján Tomko <jtomko@redhat.com> Reviewed-by: Boris Fiuczynski <fiuczy@linux.ibm.com> Signed-off-by: Bjoern Walk <bwalk@linux.ibm.com> Signed-off-by: Ján Tomko <jtomko@redhat.com>
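As an example of how the setting could look in a domain definition, here is a minimal sketch assuming the parameter is exposed as the packed attribute of a virtio device's <driver> element, shown on a network interface; the network name is a placeholder:

  <interface type='network'>
    <!-- 'default' is a placeholder network name -->
    <source network='default'/>
    <model type='virtio'/>
    <driver packed='on'/>
  </interface>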
-
Committed by Bjoern Walk
Add the capability for QEMU's packed virtqueues for virtio, which supposedly have better cache utilization and performance compared to the default split queues. Reviewed-by: Ján Tomko <jtomko@redhat.com> Reviewed-by: Boris Fiuczynski <fiuczy@linux.ibm.com> Signed-off-by: Bjoern Walk <bwalk@linux.ibm.com> Signed-off-by: Ján Tomko <jtomko@redhat.com>
-
Committed by Yi Li
Replace vm->def->disks[i] with the dom_disk variable, which is initialized to the same disk. Signed-off-by: Yi Li <yili@winhong.com> Reviewed-by: Pavel Mores <pmores@redhat.com> Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
- 07 April 2020, 3 commits
-
-
Committed by Michal Privoznik
So far, libvirt generates the following path for automatic dumps: $autoDumpPath/$id-$shortName-$timestamp where $autoDumpPath is where libvirt stores dumps of guests (e.g. /var/lib/libvirt/qemu/dump), $id is the domain ID and $shortName is a shortened version of the domain name. So for instance, the generated path may look something like this: /var/lib/libvirt/qemu/dump/1-QEMUGuest-2020-03-25-10:40:50 While in the case of the embed driver the following path would be generated by default: $root/lib/libvirt/qemu/dump/1-QEMUGuest-2020-03-25-10:40:50 which is not clashing with other embed drivers, we allow users to override the default and have all embed drivers use the same prefix. This can create clashing paths. Fortunately, we can reuse the approach for machined name generation (v6.1.0-178-gc9bd08ee) and include part of the hash of the root in the generated path. Signed-off-by: Michal Privoznik <mprivozn@redhat.com> Reviewed-by: Andrea Bolognani <abologna@redhat.com> Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
-
Committed by Michal Privoznik
So far, libvirt generates the following path for memory: $memoryBackingDir/$id-$shortName/ram-nodeN where $memoryBackingDir is the path where QEMU mmap()s memory for the guest (e.g. /var/lib/libvirt/qemu/ram), $id is the domain ID and $shortName is a shortened version of the domain name. So for instance, the generated path may look something like this: /var/lib/libvirt/qemu/ram/1-QEMUGuest/ram-node0 While in the case of the embed driver the following path would be generated by default: $root/lib/qemu/ram/1-QEMUGuest/ram-node0 which is not clashing with other embed drivers, we allow users to override the default and have all embed drivers use the same prefix. This can create clashing paths. Fortunately, we can reuse the approach for machined name generation (v6.1.0-178-gc9bd08ee) and include part of the hash of the root in the generated path. Note, the important change is in qemuGetMemoryBackingBasePath(). The rest is needed to pass the driver around. Signed-off-by: Michal Privoznik <mprivozn@redhat.com> Reviewed-by: Andrea Bolognani <abologna@redhat.com> Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
-
Committed by Michal Privoznik
So far, libvirt generates the following path for hugepages: $mnt/libvirt/qemu/$id-$shortName where $mnt is the mount point of the hugetlbfs corresponding to hugepages of the desired size (e.g. /dev/hugepages), $id is the domain ID and $shortName is a shortened version of the domain name. So for instance, the generated path may look something like this: /dev/hugepages/libvirt/qemu/1-QEMUGuest But this won't really work with the embed driver, because if there are two instances of the embed driver and they both want to start a domain with the same name and with hugepages, both drivers will generate the same path, which is not desired. Fortunately, we can reuse the approach for machined name generation (v6.1.0-178-gc9bd08ee) and include part of the hash of the root in the generated path. Note, the important change is in qemuGetBaseHugepagePath(). The rest is needed to pass the driver around. Signed-off-by: Michal Privoznik <mprivozn@redhat.com> Reviewed-by: Andrea Bolognani <abologna@redhat.com> Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
-