- 14 Aug 2019, 11 commits
-
-
Committed by Ján Tomko

Similar to how we cache the availability of machined.

Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
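For reference, a minimal sketch of the cached-availability pattern this refers to; the names (serviceAvailable, probeService) are illustrative stand-ins, not libvirt's actual identifiers:

    /* Cache the probe result: -1 = not checked yet,
     * 0 = unavailable, 1 = available. */
    static int serviceAvailable = -1;

    /* Hypothetical stand-in for the real availability probe. */
    static int
    probeService(void)
    {
        return 0;
    }

    static int
    serviceIsAvailable(void)
    {
        if (serviceAvailable < 0)
            serviceAvailable = probeService() ? 1 : 0;
        return serviceAvailable;
    }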
-
Committed by Ján Tomko

Split it out from virSystemdPMSupportTarget.

Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Ján Tomko

Look up the binary name upfront to avoid the error:

    Cannot find 'pm-is-supported' in path: No such file or directory

In that case, we just assume node suspend is not available.

Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
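A minimal sketch of the upfront-lookup approach, using libvirt's virFindFileInPath() helper; the wrapper function below is illustrative, not the actual code in virnodesuspend.c:

    #include <stdbool.h>
    #include <stdlib.h>

    /* libvirt helper (util/virfile.h): returns an allocated absolute
     * path, or NULL if the binary is not found in $PATH. */
    extern char *virFindFileInPath(const char *file);

    static bool
    pmIsSupportedAvailable(void)
    {
        char *binary = virFindFileInPath("pm-is-supported");

        if (!binary)
            return false;  /* assume node suspend is not available */

        free(binary);
        return true;
    }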
-
Committed by Ján Tomko

Use a 'binary' variable to hold it.

Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
-
Committed by Ján Tomko

Get rid of the ret variable as well as the cleanup label.

Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
-
Committed by Jiri Denemark

When QEMU supports flushing caches at the end of migration, we can safely allow migration even if disk/driver/@cache is neither "none" nor "directsync".

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Acked-by: Peter Krempa <pkrempa@redhat.com>
-
Committed by Jiri Denemark

QEMU 4.0.0 and newer automatically drops caches at the end of migration. Let's check for this capability so that we can allow migration when disk cache is turned on.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Acked-by: Peter Krempa <pkrempa@redhat.com>
-
Committed by Jiri Denemark

The original message was logically incorrect: "cache != none or cache != directsync" is always true. But even replacing "or" with "and" doesn't make it more readable for humans.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Acked-by: Peter Krempa <pkrempa@redhat.com>
-
Committed by Jiri Denemark

In the first stage of incoming migration (qemuMigrationDstPrepareAny) we call qemuMigrationEatCookie when there's no vm object created yet and thus we don't have any private data to pass. Broken by me in commit v5.6.0-109-gbf15b145.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
-
Committed by Jiri Denemark

This reverts commit 47cbc929. The section is no longer correct now that the patch switching to gnulib's "make coverage" has been reverted.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Acked-by: Eric Blake <eblake@redhat.com>
-
Committed by Jiri Denemark

This reverts commit f38d553e.

Gnulib's "make coverage" (or init-coverage, build-coverage, gen-coverage) is not a 1:1 replacement for the original configure option. Our old --enable-test-coverage seems to be close to gnulib's "make build-coverage", except gnulib runs lcov in that phase, and the build actually fails for me even before lcov is run. And since we want to be able to just build libvirt without running lcov, I suggest reverting to our own implementation.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Acked-by: Eric Blake <eblake@redhat.com>
-
- 13 Aug 2019, 2 commits
-
-
Committed by Daniel Henrique Barboza

If the /etc/qemu/firmware directory exists but is not readable, then qemuxml2xmltest fails. This is because once a domain XML is parsed it is validated. For that, domain capabilities are needed. However, when constructing domain capabilities, FW descriptors are loaded, and this is the point where the test fails, because it fails to open one of the directories.

Fixes: 5b9819ee ("domain capabilities: Expose firmware auto selection feature")

Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Laine Stump

Back in July 2010, commit 6ea90b84 (meant to resolve https://bugzilla.redhat.com/571991) added code to set the MAC address of any tap device to the associated guest interface's MAC, but with the first byte replaced with 0xFE. This was done in order to assure that:

1) the tap MAC and guest interface MAC were different (otherwise L2 forwarding through the tap would not work, and the kernel would repeatedly issue a warning stating as much), and

2) any bridge device that had one of these taps attached would *not* take on the MAC of the tap (leading to network instability as guests started and stopped).

A couple of years later, https://bugzilla.redhat.com/798467 was filed, complaining that a user could configure a tap-based guest interface to have a MAC address that itself had a first byte of 0xFE, silently (other than the kernel warning messages) resulting in a non-working configuration. This was fixed by commit 5d571045, which logged an error and failed the guest start / interface attach if the MAC's first byte was 0xFE.

Although this restriction only reduces the potential pool of MAC addresses from 2^46 (the last two bits of byte 1 must be set to 10) by 2^32 (still 4 orders of magnitude larger than the entire IPv4 address space), it also means that management software that autogenerates MAC addresses must have special code to avoid an 0xFE prefix.

Now, after 7 years, someone has noticed this restriction and requested that we remove it. So instead of failing when 0xFE is found as the first byte, this patch removes the restriction by simply replacing the first byte in the tap device MAC with 0xFA if the first byte in the guest interface MAC is 0xFE. 0xFA is the next-highest value that still has 10 as the lowest two bits; it still meets the requirement that the tap MAC be different from the guest interface MAC, and it is still high enough that there should never be an issue of the attached bridge device taking on the MAC of the tap.

The result is that *any* MAC can be chosen by management software (although it would still not work correctly if a multicast MAC (lowest bit of the first byte set to 1) was chosen, but that's a different issue).

Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
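For illustration, the resulting derivation can be sketched as below; this is a simplified rendering of the behaviour the message describes, not the actual libvirt code:

    #include <stdint.h>

    /* Derive a tap device MAC from the guest interface MAC: copy the
     * address, then force the first byte to 0xFE, or to 0xFA when the
     * guest MAC already starts with 0xFE, so the two always differ. */
    static void
    tapMacFromGuestMac(const uint8_t guest[6], uint8_t tap[6])
    {
        for (int i = 0; i < 6; i++)
            tap[i] = guest[i];
        tap[0] = (guest[0] == 0xFE) ? 0xFA : 0xFE;
    }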
-
- 12 Aug 2019, 6 commits
-
-
Committed by Andrea Bolognani

We have some early replies that don't quite match how QEMU 2.12.0 as released behaves.

Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Wim ten Have

Update the KVM feature tests for QEMU's kvm-hint-dedicated performance hint.

Signed-off-by: Wim ten Have <wim.ten.have@oracle.com>
Signed-off-by: Menno Lageman <menno.lageman@oracle.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Wim ten Have

QEMU version 2.12.1 introduced a performance feature under commit be7773268d98 ("target-i386: add KVM_HINTS_DEDICATED performance hint").

This patch adds a new KVM feature 'hint-dedicated' to set this performance hint for KVM guests. The feature is off by default.

To enable this hint and have libvirt add "-cpu host,kvm-hint-dedicated=on" to the QEMU command line, the following XML needs to be added to the guest's domain description, in conjunction with CPU mode='host-passthrough':

    <features>
      <kvm>
        <hint-dedicated state='on'/>
      </kvm>
    </features>
    ...
    <cpu mode='host-passthrough' .../>

Signed-off-by: Wim ten Have <wim.ten.have@oracle.com>
Signed-off-by: Menno Lageman <menno.lageman@oracle.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Ilias Stamatis

Signed-off-by: Ilias Stamatis <stamatis.iliass@gmail.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
-
Committed by Ilias Stamatis

Signed-off-by: Ilias Stamatis <stamatis.iliass@gmail.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
-
Committed by Andrea Bolognani

We don't include this information for any other library, and having it there means there are two places we need to change every time the required version is bumped. configure will provide the user with a nice error message, which includes the required version, if the libxml2 found on the system is too old.

Signed-off-by: Andrea Bolognani <abologna@redhat.com>
-
- 10 Aug 2019, 1 commit
-
-
Committed by Daniel P. Berrangé

The various distros have the following libxml2 versions:

    CentOS 7:         2.9.1
    Debian Stretch:   2.9.4
    FreeBSD Ports:    2.9.9
    Ubuntu 16.04 LTS: 2.9.3

Based on this sampling, we can reasonably bump the libxml2 minimum version to 2.9.1. The 'query_raw' struct field was added in version 2.6.28, so it can be assumed to exist.

Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
-
- 09 Aug 2019, 20 commits
-
-
Committed by Maxiwell S. Garcia

QEMU shows a warning message if a partial NUMA mapping is set. This patch adds a warning message in libvirt when editing the XML. It should become a hard error in the future, when QEMU removes this ability.

Signed-off-by: Maxiwell S. Garcia <maxiwell@linux.ibm.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Daniel P. Berrangé

Historically, URIs handled by the remote driver would always connect to the libvirtd UNIX socket. There will now be one daemon per driver, and each of these has its own UNIX sockets to connect to. It will still be possible to run the traditional monolithic libvirtd, though, which will have the original UNIX socket path.

In addition, there is a virtproxyd daemon that doesn't run any drivers, but provides proxying for clients accessing libvirt over IP sockets, or tunnelling to the legacy libvirtd UNIX socket path.

Finally, when running inside a daemon, the remote driver must not reject connections unconditionally. For example, the QEMU driver needs to be able to connect to the network driver. The remote driver must thus be willing to handle connections even when inside the daemon, provided no local driver is registered.

This refactoring enables the remote driver to connect to the per-driver daemons. The URI parameter "mode" accepts the values "auto", "direct" and "legacy" to control which daemons are connected to. The client side libvirt.conf config file also supports a "remote_mode" setting which is used if the URI parameter is not set. If neither the config file nor the URI parameter sets a mode, then "auto" is used, whereby the client looks to see which sockets actually exist right now.

The remote driver will only ever spawn the per-driver daemons, or the legacy libvirtd. It won't ever try to spawn virtproxyd, as that is only there for IP based connectivity, or for access from legacy remote clients.

If connecting to a remote host over any kind of ssh tunnel, for now we must assume only the legacy socket exists. A future patch will introduce a netcat replacement that is tailored for libvirt to make remote tunnelling easier.

The configure arg '--with-remote-default-mode=legacy|direct' allows packagers to set a default at build time. If not given, it will default to legacy mode. Eventually the default will switch to direct mode. Distros can choose to do the switch earlier if desired. The main blocker is testing and suitable SELinux/AppArmor policies.

Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
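A minimal client-side sketch of the "mode" URI parameter described above (virConnectOpen/virConnectClose are the standard libvirt API; the rest is illustrative):

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        /* "mode=direct" asks the remote driver to contact the per-driver
         * daemon (virtqemud) instead of the monolithic libvirtd;
         * "legacy" forces the libvirtd socket, and "auto" (the default)
         * probes which sockets actually exist. */
        virConnectPtr conn = virConnectOpen("qemu:///system?mode=direct");

        if (!conn) {
            fprintf(stderr, "failed to connect\n");
            return 1;
        }

        virConnectClose(conn);
        return 0;
    }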
-
Committed by Daniel P. Berrangé

The ssh, libssh, libssh2 & unix transports all need to use a UNIX socket path, and duplicate some of the same logic for error checking. Pull this out into a separate method to increase code sharing.

Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
-
Committed by Daniel P. Berrangé

Instead of open-coding a string -> enum conversion, use the enum helpers for the remote driver transport. The old code used STRCASEEQ, so we must force the URI transport to lowercase for the sake of backwards compatibility.

Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
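A self-contained sketch of that conversion pattern (libvirt's real implementation uses its VIR_ENUM_DECL/VIR_ENUM_IMPL helpers; the table and names below are illustrative):

    #include <ctype.h>
    #include <string.h>

    typedef enum {
        TRANSPORT_TLS,
        TRANSPORT_UNIX,
        TRANSPORT_SSH,
        TRANSPORT_LAST
    } transport;

    static const char *transportNames[TRANSPORT_LAST] = {
        "tls", "unix", "ssh",
    };

    /* Lowercase the input first so that e.g. "SSH" still matches,
     * preserving the old STRCASEEQ behaviour, then look up the
     * canonical (lowercase) enum name. */
    static int
    transportFromString(char *str)
    {
        for (char *p = str; *p; p++)
            *p = tolower((unsigned char)*p);
        for (int i = 0; i < TRANSPORT_LAST; i++) {
            if (strcmp(str, transportNames[i]) == 0)
                return i;
        }
        return -1;
    }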
-
Committed by Daniel P. Berrangé

The virtproxyd daemon is merely responsible for forwarding RPC calls to one of the other per-driver daemons. As such, it does not have any drivers loaded, and so the regular auto-probing logic will not work. We need it to be able to handle NULL URIs, though, so we must implement some kind of alternative probing logic.

When running as root this is quite crude. If a per-driver daemon is running, its UNIX socket will exist and we can assume it will accept connections. If the per-driver daemon is not running, but socket autostart is enabled, we again just assume it will accept connections. This is not great, however, because a default install may well have all sockets available for activation. IOW, the virtxend socket may exist despite the fact that the libxl driver will not actually work.

When running as non-root this is slightly easier, as we only have two drivers, QEMU and VirtualBox. These daemons will likely not be running, and socket activation won't be used either, as libvirt spawns the daemon on demand. So we just check whether the daemon actually is installed.

Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
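The root-mode heuristic boils down to a socket-existence check, roughly like the sketch below; the socket path is an assumption based on the per-driver daemon naming, and the real code also has to account for socket activation:

    #include <stdbool.h>
    #include <unistd.h>

    /* If the per-driver daemon's UNIX socket exists, assume the daemon
     * will accept connections on it. */
    static bool
    daemonProbablyAvailable(const char *sockpath)
    {
        return access(sockpath, F_OK) == 0;
    }

    /* e.g. daemonProbablyAvailable("/run/libvirt/virtqemud-sock") */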
-
Committed by Daniel P. Berrangé

When the client has a connection to one of the hypervisor specific daemons (eg virtqemud), the app may still expect to use the secondary driver APIs (storage, network, etc). None of these will be registered in the hypervisor daemon, so we must explicitly open a connection to each of the daemons for the secondary drivers we need.

We don't want to open these secondary driver connections at the same time as the primary connection is opened, though. That would mean that establishing a connection to virtqemud would immediately trigger activation of virtnetworkd, virtnwfilterd, etc despite the fact that these drivers may never be used by the app. Thus we only open the secondary driver connections at the time of first use by an API call.

Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
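A hypothetical sketch of the open-on-first-use pattern; the struct, field and URI below are illustrative, not libvirt's actual identifiers:

    #include <libvirt/libvirt.h>

    struct clientPrivate {
        virConnectPtr networkConn;  /* lazily opened secondary connection */
    };

    /* Open the connection to the network driver daemon only when an API
     * call first needs it, so that connecting to virtqemud does not
     * immediately activate virtnetworkd. */
    static virConnectPtr
    getNetworkConn(struct clientPrivate *priv)
    {
        if (!priv->networkConn)
            priv->networkConn = virConnectOpen("network:///system");
        return priv->networkConn;
    }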
-
Committed by Daniel P. Berrangé

The driver dispatch methods access the priv->conn variables directly. In the future we want to dynamically open the connections for the secondary drivers. Thus we want the methods to call a method to get the connection handle instead of assuming the private variable is non-NULL.

Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
-
Committed by Daniel P. Berrangé

If the event (un)registration methods are invoked while no connection is open, they jump to a cleanup block which unlocks a mutex that is not currently locked.

Reviewed-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
-
Committed by Daniel P. Berrangé

The driver dispatch methods access the priv->conn variables directly. In the future we want to dynamically open the connections for the secondary drivers. Thus we want the methods to call a method to get the connection handle instead of assuming the private variable is non-NULL.

Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
-
Committed by Daniel P. Berrangé

The client parameter is always used to get access to the private data struct.

Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
-
Committed by Daniel P. Berrangé

The admin client now supports addressing the per-driver daemons using the obvious URI schemes for each daemon, e.g. virtqemud:///system, virtqemud:///session, etc.

Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
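For illustration, such a URI can be used through the admin API like this (virAdmConnectOpen/virAdmConnectClose are the standard libvirt-admin entry points; the rest is a sketch):

    #include <stdio.h>
    #include <libvirt/libvirt-admin.h>

    int main(void)
    {
        /* Address the QEMU per-driver daemon's admin server directly,
         * rather than the monolithic libvirtd. */
        virAdmConnectPtr conn = virAdmConnectOpen("virtqemud:///system", 0);

        if (!conn) {
            fprintf(stderr, "failed to connect to virtqemud\n");
            return 1;
        }

        virAdmConnectClose(conn);
        return 0;
    }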
-
Committed by Daniel P. Berrangé

The virtvzd daemon will be responsible for providing the vz API driver functionality. The vz driver is still loaded by the main libvirtd daemon at this stage, so virtvzd must not be running at the same time.

Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
-
Committed by Daniel P. Berrangé

The virtbhyved daemon will be responsible for providing the bhyve API driver functionality. The bhyve driver is still loaded by the main libvirtd daemon at this stage, so virtbhyved must not be running at the same time.

Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
-
Committed by Daniel P. Berrangé

The virtvboxd daemon will be responsible for providing the vbox API driver functionality. The vbox driver is still loaded by the main libvirtd daemon at this stage, so virtvboxd must not be running at the same time.

Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
-
Committed by Daniel P. Berrangé

The virtlxcd daemon will be responsible for providing the lxc API driver functionality. The lxc driver is still loaded by the main libvirtd daemon at this stage, so virtlxcd must not be running at the same time.

Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
-
Committed by Daniel P. Berrangé

The virtqemud daemon will be responsible for providing the qemu API driver functionality. The qemu driver is still loaded by the main libvirtd daemon at this stage, so virtqemud must not be running at the same time.

Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
-
Committed by Daniel P. Berrangé

The virtxend daemon will be responsible for providing the libxl API driver functionality. The libxl driver is still loaded by the main libvirtd daemon at this stage, so virtxend must not be running at the same time.

This naming is slightly different from other drivers. With the libxl driver, the user still has a 'xen:///system' URI, and we provide it in a libvirt-daemon-xen RPM, which pulls in a libvirt-daemon-driver-libxl RPM. Arguably we could rename the libxl driver to "xen", since it is the only Xen driver we have these days, and that matches how we expose it to users in the URI naming.

Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
-
Committed by Daniel P. Berrangé

The virtnwfilterd daemon will be responsible for providing the nwfilter API driver functionality. The nwfilter driver is still loaded by the main libvirtd daemon at this stage, so virtnwfilterd must not be running at the same time.

Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
-
Committed by Daniel P. Berrangé

The virtnodedevd daemon will be responsible for providing the nodedev API driver functionality. The nodedev driver is still loaded by the main libvirtd daemon at this stage, so virtnodedevd must not be running at the same time.

Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
-
Committed by Daniel P. Berrangé

The virtstoraged daemon will be responsible for providing the storage API driver functionality. The storage driver is still loaded by the main libvirtd daemon at this stage, so virtstoraged must not be running at the same time.

Reviewed-by: Christophe de Dinechin <dinechin@redhat.com>
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
-