- 24 April 2017, 1 commit

Committed by Yuri Chornoivan
- 21 April 2017, 1 commit

Committed by Martin Kletzander
We are currently parsing only rx/frames/max because that's the only value that makes sense for us. The tun device just added support for this one, and the others are only supported by hardware devices, which we don't need to worry about since the only way we'd pass those to the domain is using <hostdev/> or <interface type='hostdev'/>. And in those cases the guest can modify the settings itself.

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
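For reference, a domain XML fragment carrying this setting looks roughly like the following sketch (the interface type and the max value are illustrative; rx/frames/max is the only coalesce setting parsed here):

    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
      <coalesce>
        <rx>
          <frames max='64'/>
        </rx>
      </coalesce>
    </interface>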
- 20 April 2017, 1 commit

Committed by Pavel Hrdina
The history of the USB controller for ppc64 guests is complex and goes back to libvirt 1.3.1, where the fun started. Prior to libvirt 1.3.1, if no model for the USB controller was specified we simply passed "-usb" on the QEMU command line. Since libvirt 1.3.1 there is a patch (8156493d) that fixes this by using "-device pci-ohci,..." instead, but it breaks migration to older libvirt versions, which was agreed to be acceptable. However, that patch didn't reflect the change in the domain XML and the model was still missing. Since libvirt 2.2.0 there is a patch (f55eaccb) that fixes the issue of not setting the USB model in the domain XML, which we need to know about in order not to break migration; since the default model was *pci-ohci*, it was used as the default in that patch as well. This patch tries to take all the previous changes into account and also changes the default for newly defined domains that don't specify any model for the USB controller. The VIR_DOMAIN_DEF_PARSE_ABI_UPDATE flag is set only if a new domain is defined or a new device is added into a domain, which means that in all other cases we will use the old *pci-ohci* model instead of the better and not broken *nec-usb-xhci* model.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1373184
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
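In domain XML terms, a newly defined ppc64 guest that omits the model now ends up with the equivalent of the following (the index is illustrative), while previously defined guests keep the old pci-ohci default:

    <controller type='usb' index='0' model='nec-usb-xhci'/>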
- 18 April 2017, 1 commit

Committed by Pavel Hrdina
Introduce new wrapper functions without *Machine* in the function name that take the whole virDomainDef structure as an argument and call the existing functions with *Machine* in the name. Change the arguments of the existing functions to *machine* and *arch*, because they don't need the whole virDomainDef structure and can then be used in places where we don't have a virDomainDef.

Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
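A minimal standalone sketch of the pattern described above (the names are illustrative stand-ins, not the actual libvirt functions):

    #include <string.h>

    /* stand-in for the parts of virDomainDef the helpers care about */
    typedef struct {
        const char *machine;
        int arch;
    } DomainDef;

    /* existing-style helper: operates on the bare machine type and arch,
     * so it can be used where no full definition is available */
    static int
    machineIsQ35(const char *machine, int arch)
    {
        (void)arch;
        return strcmp(machine, "q35") == 0 ||
               strncmp(machine, "pc-q35", strlen("pc-q35")) == 0;
    }

    /* new wrapper without *Machine* in the name: takes the whole
     * definition and simply forwards the two fields */
    static int
    domainIsQ35(const DomainDef *def)
    {
        return machineIsQ35(def->machine, def->arch);
    }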
- 10 April 2017, 1 commit

Committed by Marc Hartmayer
This way qemuDomainLogContextRef() and qemuDomainLogContextFree() are no longer needed. The name qemuDomainLogContextFree() was also somewhat misleading. Additionally, it becomes easier to turn qemuDomainLogContext into a self-locking object.

Signed-off-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
- 03 April 2017, 2 commits

Committed by Andrea Bolognani
Depending on the architecture, requirements for ACPI and UEFI can be different; more specifically, while on x86 UEFI requires ACPI, on aarch64 it's the other way around. Enforce these requirements when validating the domain, and make the error message more accurate by mentioning that they're not necessarily applicable to all architectures. Several aarch64 test cases had to be tweaked because they would have failed the validation step otherwise.
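A minimal standalone sketch of the dependency being enforced (the enum and function names are made up for illustration; the real check lives in the domain validation code):

    #include <stdbool.h>

    enum arch { ARCH_X86_64, ARCH_AARCH64 };

    static bool
    firmware_combo_is_valid(enum arch arch, bool acpi, bool uefi)
    {
        if (arch == ARCH_X86_64 && uefi && !acpi)
            return false;   /* on x86, UEFI firmware requires ACPI */
        if (arch == ARCH_AARCH64 && acpi && !uefi)
            return false;   /* on aarch64 it is the other way around */
        return true;
    }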
Committed by Michal Privoznik
Currently, if we want to zero out a disk source (e.g. due to startupPolicy when starting up a domain) we use virDomainDiskSetSource(disk, NULL). This works well for file-based storage (storage type file, dir, or block), but it doesn't work at all for other types like volume and network. So imagine that you have a domain with a CDROM configured whose source is a volume from an inactive pool. Because it is startupPolicy='optional', the CDROM is empty when the domain starts. However, the source element is not cleared out of the status XML, and thus when the daemon restarts and tries to reconnect to the domain it refreshes the disks (which fails - the storage pool is still not running) and the domain is killed.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
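A configuration of the kind described above might look roughly like this (the pool, volume and target names are made up for illustration):

    <disk type='volume' device='cdrom'>
      <source pool='images' volume='install.iso' startupPolicy='optional'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
    </disk>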
- 29 March 2017, 1 commit

Committed by Peter Krempa
When idx is 0, virStorageFileChainLookup returns the base (bottom) of the backing chain rather than the top, whereas the callers of qemuDomainGetStorageSourceByDevstr expect the top. Add a special case for idx == 0.
- 28 March 2017, 3 commits

Committed by Andrea Bolognani
For guests that use <memoryBacking><locked>, our only option is to remove the memory locking limit altogether.

Partially-resolves: https://bugzilla.redhat.com/1431793
Committed by Andrea Bolognani
Instead of having a separate function, we can simply return zero from the existing qemuDomainGetMemLockLimitBytes() to signal the caller that the memory locking limit doesn't need to be set for the guest. Having a single function instead of two makes it less likely that we will use the wrong value, which is exactly what happened when we started applying the limit that was meant for VFIO-using guests to <memoryBacking><locked>-using guests.
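A caller-side sketch of that convention, using simplified stand-in names rather than the actual libvirt API (the "guest RAM plus 1 GiB" rule for VFIO is only an illustration of a non-zero result):

    #include <stdio.h>

    /* 0 means "no memory locking limit needs to be set for this guest" */
    static unsigned long long
    getMemLockLimitBytes(int needsVFIO, unsigned long long memBytes)
    {
        return needsVFIO ? memBytes + (1ULL << 30) : 0;
    }

    int main(void)
    {
        unsigned long long bytes = getMemLockLimitBytes(1, 4ULL << 30);

        if (bytes > 0)   /* only touch RLIMIT_MEMLOCK when actually required */
            printf("would set RLIMIT_MEMLOCK to %llu bytes\n", bytes);
        return 0;
    }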
Committed by Andrea Bolognani
This reverts commit c2e60ad0. Turns out this check is excessively strict: there are ways other than <memtune><hard_limit> to raise the memory locking limit for QEMU processes, one prominent example being tweaking /etc/security/limits.conf.

Partially-resolves: https://bugzilla.redhat.com/1431793
- 27 March 2017, 6 commits

Committed by Erik Skultety
Since mdevs are just another type of VFIO device, we should increase the memory locking limit the same way we do for VFIO PCI devices.

Signed-off-by: Erik Skultety <eskultet@redhat.com>
Committed by Erik Skultety
As with all the other hostdev device types, grant the QEMU process access to /dev/vfio/<mediated_device_iommu_group>.

Signed-off-by: Erik Skultety <eskultet@redhat.com>
Committed by Erik Skultety
A mediated device will be identified by the UUID of the user pre-created mediated device, with 'model' now being a mandatory <hostdev> attribute representing the mediated device API. We also need to make sure that if the user explicitly provides a guest address for an mdev device, the address type matches the device API supported by that specific mediated device, and report an invalid-XML error otherwise. The resulting device XML:

    <devices>
      <hostdev mode='subsystem' type='mdev' model='vfio-pci'>
        <source>
          <address uuid='c2177883-f1bb-47f0-914d-32a22e3a8804'/>
        </source>
      </hostdev>
    </devices>

Signed-off-by: Erik Skultety <eskultet@redhat.com>
Committed by Peter Krempa
Committed by Peter Krempa
The code is currently simple, but if we later add node names, it will be necessary to generate the names based on the node name. Add a helper so that there's a central point to fix once we add self-generated node names.
Committed by Peter Krempa
Looks up a disk and its corresponding backing chain element by node name.
- 25 March 2017, 1 commit

Committed by John Ferlan
If the migration flags indicate this migration will be using TLS, then set up the destination during the prepare phase once the target domain has been started to add the TLS objects to perform the migration. This will create at least an "-object tls-creds-x509,endpoint=server,..." for TLS credentials and potentially an "-object secret,..." to handle the passphrase response to access the TLS credentials. The alias/id used for the TLS objects will contain "libvirt_migrate". Once the objects are created, the code will set the "tls-creds" and "tls-hostname" migration parameters to signify usage of TLS. During the Finish phase we'll be sure to attempt to clear the migration parameters and delete those objects (whether or not they were created). We'll also perform the same reset during recovery if we've reached FINISH3. If the migration isn't using TLS, then be sure to check if the migration parameters exist and clear them if so.
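The objects added on the destination correspond to QEMU object definitions along these lines (a sketch only: the exact object ids, certificate directory and whether a secret object is present depend on the configuration, and libvirt adds them through the monitor during the Prepare phase rather than on the command line shown here):

    -object secret,id=libvirt_migrate_tls_secret0,data=<base64-passphrase>,format=base64 \
    -object tls-creds-x509,id=libvirt_migrate_tls0,dir=/etc/pki/qemu,endpoint=server,verify-peer=yes,passwordid=libvirt_migrate_tls_secret0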
- 17 March 2017, 1 commit

Committed by Jiri Denemark
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
- 16 March 2017, 1 commit

Committed by Michal Privoznik
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
- 15 March 2017, 2 commits

Committed by Michal Privoznik
So, the majority of the code is just ready as-is. Well, with one slight change: differentiate between dimm and nvdimm in places like device alias generation, command line generation and so on. Speaking of the command line, we also need to append 'nvdimm=on' to the '-machine' argument so that the nvdimm feature is advertised in the ACPI tables properly.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
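Put together, the generated QEMU arguments end up looking roughly like this (the machine type, sizes and ids are illustrative):

    -machine pc-i440fx-2.9,nvdimm=on \
    -m 1G,slots=2,maxmem=2G \
    -object memory-backend-file,id=memnvdimm0,mem-path=/tmp/nvdimm,size=511M \
    -device nvdimm,id=nvdimm0,memdev=memnvdimm0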
Committed by Michal Privoznik
NVDIMM is a new type of memory introduced in QEMU 2.6. The idea is that we have a Non-Volatile memory module that keeps the data persistent across domain reboots. At the domain XML level, we already have a representation of 'dimm' modules. Long story short, NVDIMM will utilize the existing <memory/> element that lives under <devices/> by adding a new value 'nvdimm' to the existing @model attribute and introducing a new <path/> element for <source/> while reusing the other fields. The resulting XML would appear as:

    <memory model='nvdimm'>
      <source>
        <path>/tmp/nvdimm</path>
      </source>
      <target>
        <size unit='KiB'>523264</size>
        <node>0</node>
      </target>
      <address type='dimm' slot='0'/>
    </memory>

So far, this is just an XML parser/formatter extension. The QEMU driver implementation is in the next commit. For more info on NVDIMM visit the following web page: http://pmem.io/

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
- 13 March 2017, 1 commit

Committed by Michal Privoznik
https://bugzilla.redhat.com/show_bug.cgi?id=1431112

Yeah, that's right. A mount point doesn't have to be a directory. It can be a file too. However, the code that tries to preserve mount points under /dev in the new namespace for qemu does not account for that possibility.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
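A standalone sketch of the distinction that matters here (not the actual libvirt code): the recreated target under the new /dev has to be a directory for directory mounts and a plain empty file for file mounts before anything can be bind-mounted onto it:

    #include <fcntl.h>
    #include <sys/stat.h>
    #include <sys/types.h>
    #include <unistd.h>

    static int
    prepare_mount_target(const char *src, const char *dst)
    {
        struct stat sb;

        if (stat(src, &sb) < 0)
            return -1;

        if (S_ISDIR(sb.st_mode))
            return mkdir(dst, 0755);          /* directory mount point */

        /* file mount point: create an empty file to bind-mount over */
        int fd = open(dst, O_WRONLY | O_CREAT | O_EXCL, 0644);
        if (fd < 0)
            return -1;
        close(fd);
        return 0;
    }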
- 10 March 2017, 1 commit

Committed by Michal Privoznik
https://bugzilla.redhat.com/show_bug.cgi?id=1430634

If a qemu process has died, we get EOF on its monitor. At this point, since the qemu process was the only one running in the namespace, the kernel has already cleaned the namespace up, so any attempt of ours to enter it has to fail. This really happened in the bug linked above. We tried to attach a disk to qemu, and while we were in the monitor talking to qemu it just died. Therefore our code tried to do some rollback (e.g. deny the device in cgroups again, restore labels, etc.). However, during the rollback (esp. when restoring labels) we still thought that the domain had a namespace, so we used the secdriver's transactions. This failed as there is no namespace to enter.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
- 09 March 2017, 4 commits

Committed by Pavel Hrdina
When libvirtd is started we call qemuDomainRecheckInternalPaths to detect whether a domain has a VNC socket path generated by libvirt based on an option from qemu.conf. However, if we are parsing the status XML of a running domain, the existing socket path may also have been generated because the config XML uses the new <listen type='socket'/> element without specifying any socket. The current code doesn't distinguish how the socket was generated and always marks it as "fromConfig". We need to store the "autoGenerated" value in the status XML in order to preserve that information.

The difference between "fromConfig" and "autoGenerated" is important for migration: if the socket is based on "fromConfig" we don't print it into the migratable XML, and we assume that the user has properly configured qemu.conf on both hosts. However, if the socket is based on "autoGenerated" it means that a new feature was used, so we need to leave the socket in the migratable XML to make sure the migration fails if this feature is not supported on the destination.

Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
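Roughly, the "autoGenerated" case looks like this in the XML (the socket path is illustrative); in the status XML the generated path is filled in, which is why the origin of the path has to be recorded separately:

    <!-- config XML: the user asked for a socket but did not name one -->
    <graphics type='vnc'>
      <listen type='socket'/>
    </graphics>

    <!-- status XML: libvirt has filled in the generated path -->
    <graphics type='vnc'>
      <listen type='socket' socket='/var/lib/libvirt/qemu/domain-1-guest/vnc.sock'/>
    </graphics>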
Committed by John Ferlan
Rename 'secretUsageType' to 'usageType', since the 'secret' prefix is superfluous in a qemu*Secret* API.
Committed by John Ferlan
Building upon qemuDomainSecretInfoNew, create a helper which will build the secret used for TLS.

Signed-off-by: John Ferlan <jferlan@redhat.com>
Committed by John Ferlan
Create a helper which will create the secinfo used for disks, hostdevs, and chardevs.

Signed-off-by: John Ferlan <jferlan@redhat.com>
- 07 March 2017, 2 commits

Committed by Pavel Hrdina
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Committed by Pavel Hrdina
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
- 06 March 2017, 1 commit

Committed by Michal Privoznik
Now that we have some qemuSecurity wrappers over the virSecurityManager APIs, let's make sure everybody sticks with them. We have them for a reason, and calling a virSecurityManager API directly instead of the wrapper may lead to accidentally labelling a file on the host instead of in the namespace.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
- 21 February 2017, 1 commit

Committed by Marc Hartmayer
The functions called in virCommand() after fork() must be careful with regard to accessing any mutexes that may have been locked by other threads in the parent process. It is possible that another thread in the parent process holds the lock for the virQEMUDriver while fork() is called. This leads to a deadlock in the child process when 'virQEMUDriverGetConfig(driver)' is called, and therefore the handshake between the child and the parent process never completes. Ultimately the virDomainObjPtr will never be unlocked. It gets much worse if the other thread of the parent process, the one that holds the lock for the virQEMUDriver, tries to lock the already locked virDomainObj. This leads to a completely unresponsive libvirtd. It's possible to reproduce this case by calling 'virsh start XXX' and 'virsh managedsave XXX' in a tight loop for multiple domains.

This commit fixes the deadlock in the same way as described in commit 61b52d2e.

Signed-off-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
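A standalone sketch of the fork-safety pattern involved (not the libvirt code): whatever the child needs is copied out of lock-protected state while still in the parent, so the child never has to take a mutex that another parent thread may have held at fork() time:

    #include <pthread.h>
    #include <string.h>
    #include <sys/types.h>
    #include <unistd.h>

    static pthread_mutex_t driver_lock = PTHREAD_MUTEX_INITIALIZER;
    static char driver_logdir[256] = "/var/log/demo";   /* guarded by driver_lock */

    static void
    snapshot_driver_config(char *buf, size_t len)
    {
        /* take the lock in the parent, where doing so is safe */
        pthread_mutex_lock(&driver_lock);
        strncpy(buf, driver_logdir, len - 1);
        buf[len - 1] = '\0';
        pthread_mutex_unlock(&driver_lock);
    }

    static pid_t
    spawn_child(void)
    {
        char logdir[256];
        snapshot_driver_config(logdir, sizeof(logdir));

        pid_t pid = fork();
        if (pid == 0) {
            /* child: never lock driver_lock here -- if another parent thread
             * held it at fork() time it stays locked forever in this process */
            execlp("logger", "logger", logdir, (char *)NULL);
            _exit(127);
        }
        return pid;
    }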
- 20 February 2017, 4 commits

Committed by Michal Privoznik
When enabling virgl, qemu opens /dev/dri/render*. So far, we have not been allowing that in the devices CGroup, nor creating the file in the domain's namespace, thus requiring users to set the paths in qemu.conf. This, however, is suboptimal as it grants access to ALL qemu processes, even those which don't have virgl configured. Now that we have a way to specify the render node that qemu will use, we can be more cautious and enable just that one.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
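The render node in question is the one named in the domain XML, roughly like this (the device path is illustrative):

    <graphics type='spice'>
      <gl enable='yes' rendernode='/dev/dri/renderD128'/>
    </graphics>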
Committed by Michal Privoznik
So far, qemuDomainGetHostdevPath has no knowledge of the reason it is being called and thus reports /dev/vfio/vfio for every VFIO-backed device. This is suboptimal, as we want it to: a) report /dev/vfio/vfio on every addition or domain startup, b) report /dev/vfio/vfio only when the last VFIO device is being unplugged. If a domain is being stopped, the namespace and CGroup die with it, so there is no need to worry about that case. I mean, even when a domain that's exiting has more than one VFIO device assigned to it, this function does not clean up /dev/vfio/vfio in the CGroup nor in the namespace. But that doesn't matter.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Committed by Michal Privoznik
So far, we are allowing /dev/vfio/vfio in the devices cgroup unconditionally (and creating it in the namespace too), even if the domain has no hostdev assignment configured. This is a potential security hole. Therefore, when starting the domain (or hotplugging a hostdev) create & allow /dev/vfio/vfio too, but only if needed.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Committed by Michal Privoznik
Since these two functions are nearly identical (with qemuSetupHostdevCgroup actually calling virCgroupAllowDevicePath), we can have one function call the other and thus de-duplicate some code.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
- 15 February 2017, 2 commits

Committed by Michal Privoznik
The bare fact that the mount namespace is available is not enough for us to allow/enable the qemu namespaces feature. There are other requirements: we must copy all the ACL & SELinux labels, otherwise we might grant access that is administratively forbidden, or vice versa. At the same time, the check for namespace prerequisites is moved from domain startup time to the qemu.conf parser, as it doesn't make much sense to allow users to start a misconfigured libvirt just to find out they can't start a single domain.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Committed by Andrea Bolognani
mknod() is affected by the current umask, so we're not guaranteed the newly-created device node will have the right permissions. Call chmod(), which is not affected by the current umask, immediately afterwards to solve the issue.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1421036
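A minimal sketch of the resulting pattern (simplified, not the actual libvirt helper):

    #include <sys/stat.h>
    #include <sys/types.h>

    static int
    create_dev_node(const char *path, mode_t mode, dev_t dev)
    {
        /* the mode passed to mknod() gets masked by the process umask ... */
        if (mknod(path, S_IFCHR | mode, dev) < 0)
            return -1;
        /* ... so enforce the intended permissions explicitly; chmod() is
         * not subject to the umask */
        if (chmod(path, mode) < 0)
            return -1;
        return 0;
    }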
- 13 February 2017, 1 commit

Committed by Ján Tomko
This controller only allows up to 15 ports.

https://bugzilla.redhat.com/show_bug.cgi?id=1375417
- 09 February 2017, 1 commit

Committed by Michal Privoznik
Just like we need wrappers over other virSecurityManager APIs, we need them for virSecurityManagerSetImageLabel and virSecurityManagerRestoreImageLabel. Otherwise we might end up relabelling a device in the wrong namespace.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>