- 08 Feb 2017, 2 commits
-
-
By Michal Privoznik
Since we have qemuSecurity wrappers over virSecurityManagerSetHostdevLabel and virSecurityManagerRestoreHostdevLabel, we ought to use them instead of calling secdriver APIs directly. Without those wrappers the labelling won't be done in the correct namespace and thus won't apply to the nodes seen by qemu itself. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
By Laine Stump
libvirt was already able to set the host_mtu option when an MTU was explicitly given in the interface config (with <mtu size='n'/>), to set the MTU of a libvirt network in the network config (with the same named subelement), and would automatically set the MTU of any tap device to the MTU of the network. This patch ties it all together (for networks based on tap devices and either Linux host bridges or OVS bridges) by learning the MTU of the network (i.e. the bridge) during qemuInterfaceBridgeConnect() and returning that value so it can then be passed to qemuBuildNicDevStr(); qemuBuildNicDevStr() then sets host_mtu in the interface's command-line options. The result is that a higher MTU for all guests connecting to a particular network can be plumbed top to bottom simply by changing the MTU of the network (in libvirt's config for libvirt-managed networks, or directly on the bridge device for simple host bridges or OVS bridges managed outside of libvirt).
One open question: when migrating a guest from a host with an older libvirt to one with a newer libvirt, the guest may have *not* had the host_mtu option on the older machine, but *will* have it on the newer machine. Could this lead to incompatibilities between source and destination? (It likely depends on whether setting host_mtu has a practical effect on a guest that is already running - Maxime?) Likewise, we could run into problems when migrating from a newer libvirt to an older one - the guest would have been told of the higher MTU on the newer libvirt, then migrated to a host that didn't understand <mtu size='blah'/>. (If this really is a problem, it exists with or without the current patch.)
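For illustration only (these snippets are not from the commit), the mtu subelement can appear both in a network definition and in an individual interface definition; the network name, bridge name, and the 9000-byte size below are made-up examples:
<network>
  <name>mtu-net</name>
  <bridge name='virbr9'/>
  <mtu size='9000'/>
</network>
<interface type='network'>
  <source network='mtu-net'/>
  <model type='virtio'/>
  <mtu size='9000'/>
</interface>
With this patch, guests attached to such a network pick up the bridge's MTU through host_mtu even when the per-interface element is omitted.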
-
- 07 Feb 2017, 1 commit
-
-
By Michal Privoznik
The current ordering is as follows: 1) set the label, 2) create the device in the namespace, 3) allow the device in the cgroup. While this might work for now, it will definitely not work once the security driver uses transactions, because there would then be no device to relabel in the domain namespace - the device is only created in the second step. Swap steps 1) and 2) to allow the security driver to make more use of transactions. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
- 30 Jan 2017, 1 commit
-
-
- 20 Jan 2017, 1 commit
-
-
By Peter Krempa
The event needs to be emitted after the last monitor call, so that it's not possible to accidentally find the device in the XML while the vm object is unlocked. Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1414393
-
- 18 Jan 2017, 1 commit
-
-
By Peter Krempa
Move all the worker code into the appropriate file. This will also allow testing of cpu hotplug.
-
- 04 Jan 2017, 1 commit
-
-
By John Ferlan
https://bugzilla.redhat.com/show_bug.cgi?id=1405269 If a secret was not provided for what was determined to be a LUKS-encrypted disk (during virStorageFileGetMetadata processing, when called from qemuDomainDetermineDiskChain as a result of a hotplug attach via qemuDomainAttachDeviceDiskLive), then do not attempt to look it up (avoiding a libvirtd crash) and do not alter the format to "luks" when adding the disk; otherwise the device_add would fail with a message such as: "unable to execute QEMU command 'device_add': Property 'scsi-hd.drive' can't find value 'drive-scsi0-0-0-0'" because of the assumption that when format=luks, libvirt would have provided the secret to decrypt the volume. Access to unlock the volume is thus left to the application.
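For context, here is an illustrative (not from the commit) LUKS disk definition in which the passphrase secret is supplied; the image path and secret UUID are invented:
<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/var/lib/libvirt/images/luks-data.img'/>
  <target dev='sdb' bus='scsi'/>
  <encryption format='luks'>
    <secret type='passphrase' uuid='0a81f5b2-8403-4b23-9ad6-21ccd2f80d6f'/>
  </encryption>
</disk>
When the <secret> subelement is omitted, this fix makes libvirt skip the secret lookup and leave the format untouched, so unlocking the volume is left to the application inside the guest.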
-
- 15 Dec 2016, 4 commits
-
-
By Michal Privoznik
When attaching a device to a domain that's using a separate mount namespace, we must maintain the /dev entries in order for the qemu process to see them. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
By Michal Privoznik
When attaching a device to a domain that's using a separate mount namespace, we must maintain the /dev entries in order for the qemu process to see them. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
By Michal Privoznik
When attaching a device to a domain that's using a separate mount namespace, we must maintain the /dev entries in order for the qemu process to see them. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
By Michal Privoznik
When attaching a device to a domain that's using a separate mount namespace, we must maintain the /dev entries in order for the qemu process to see them. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
- 01 Dec 2016, 3 commits
-
-
By gaohaifeng
Two reasons: 1) in the non-hotplug path we already pass it, as can be seen in the libvirt function qemuBuildVhostuserCommandLine; 2) qemu uses this vector count to initialize the MSI-X table, and if we don't pass it, qemu falls back to its default value, which limits the number of interrupts the VM can use. Signed-off-by: gaohaifeng <gaohaifeng.gao@huawei.com>
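As an illustration (not part of the commit), a multiqueue vhost-user interface of the kind affected; the socket path and queue count are made up. The vectors= value that is now also passed on hotplug is derived from the queue count in the non-hotplug code path (typically 2 * queues + 2):
<interface type='vhostuser'>
  <source type='unix' path='/var/run/openvswitch/vhost-user0' mode='client'/>
  <model type='virtio'/>
  <driver queues='4'/>
</interface>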
-
By Eric Farman
Consider the following XML snippets:
$ cat scsicontroller.xml
<controller type='scsi' model='virtio-scsi' index='0'/>
$ cat scsihostdev.xml
<hostdev mode='subsystem' type='scsi'>
  <source>
    <adapter name='scsi_host0'/>
    <address bus='0' target='8' unit='1074151456'/>
  </source>
</hostdev>
If we create a guest that includes the contents of scsihostdev.xml but forget the virtio-scsi controller described in scsicontroller.xml, one is silently created for us. The same holds true when attaching a hostdev before the matching virtio-scsi controller. (See qemuDomainFindOrCreateSCSIDiskController for context.) Detaching the hostdev, followed by the controller, works well and the guest behaves appropriately. If we detach the virtio-scsi controller device first, any associated hostdevs are detached for us by the underlying virtio-scsi code (this is fine, since the connection is broken). But all is not well after that point: the guest is unable to receive new virtio-scsi devices (the attach commands succeed, but the devices never appear within the guest) and cannot even be shut down. While this is not libvirt's problem, we can prevent falling into this scenario by checking whether a controller is being used by any hostdev devices. The same is already done for disk elements today. Applying this patch and then using the XML snippets from earlier:
$ virsh detach-device guest_01 scsicontroller.xml
error: Failed to detach device from scsicontroller.xml
error: operation failed: device cannot be detached: device is busy
$ virsh detach-device guest_01 scsihostdev.xml
Device detached successfully
$ virsh detach-device guest_01 scsicontroller.xml
Device detached successfully
Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com> Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com> Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
-
By Laine Stump
If libvirtd is running unprivileged, it can open a device's PCI config data in sysfs but can only read the first 64 bytes. Yet as part of determining whether a device is Express or legacy PCI, qemuDomainDeviceCalculatePCIConnectFlags() will be updated in a future patch to call virPCIDeviceIsPCIExpress(), which tries to read beyond the first 64 bytes of the PCI config data and logs an error if the read is unsuccessful. In order to avoid creating a parallel "quiet" version of virPCIDeviceIsPCIExpress(), this patch passes a virQEMUDriverPtr down through all the call chains that initialize the qemuDomainFillDevicePCIConnectFlagsIterData, and saves the driver pointer with the rest of the iterdata so that it can be used by qemuDomainDeviceCalculatePCIConnectFlags(). The pointer isn't used yet, but will be used in an upcoming patch (which detects Express vs. legacy PCI for VFIO-assigned devices) to examine driver->privileged.
-
- 25 Nov 2016, 2 commits
-
-
By Eric Farman
Adjust the device string that is built for vhost-scsi devices so that it can be invoked from hotplug. On the QEMU command line, the file descriptors are expected to be numeric only. For hotplug, however, the file descriptors are expected to begin with at least one alphabetic character, else this error occurs:
# virsh attach-device guest_0001 ~/vhost.xml
error: Failed to attach device from /root/vhost.xml
error: internal error: unable to execute QEMU command 'getfd': Parameter 'fdname' expects a name not starting with a digit
We also close the file descriptor in this case, so that shutting down the guest cleans up the host cgroup entries and allows future guests to use vhost-scsi devices. (Otherwise the guest will silently end.) Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
-
By Eric Farman
We already have a "scsi" hostdev subsys type, which refers to a single LUN that is passed through to a guest. But what of cases where multiple LUNs are passed through via a single SCSI HBA, such as with the vhost-scsi target? Create a new hostdev subsys type that will carry this. Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
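For orientation (not text from the commit, and sketched from the schema as it was eventually released), a vhost-scsi host device of this new kind is expressed in the domain XML roughly as follows; the wwpn value is invented:
<hostdev mode='subsystem' type='scsi_host'>
  <source protocol='vhost' wwpn='naa.5030000000000001'/>
</hostdev>
Unlike the existing type='scsi' hostdev, this hands the guest the whole vhost-scsi target rather than a single LUN.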
-
- 23 Nov 2016, 2 commits
-
-
By Michal Privoznik
Just like in the previous commit, we are not updating CGroups on chardev hot(un-)plug, thus leaving qemu unable to access any non-default device that users are trying to hotplug. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
By Michal Privoznik
If users try to hotplug an RNG device with a backend other than /dev/random or /dev/urandom, the whole operation fails because qemu is unable to access the device. The problem is that we don't update the device CGroups during the operation. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
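For context (an illustrative snippet, not part of the commit), an RNG device with a non-default backend node of the kind this fix makes hotpluggable; /dev/hwrng is just an example path:
<rng model='virtio'>
  <backend model='random'>/dev/hwrng</backend>
</rng>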
-
- 15 Nov 2016, 1 commit
-
-
By Laine Stump
Before now, all the qemu hotplug functions assumed that every device to be hotplugged was a legacy PCI endpoint device (VIR_PCI_CONNECT_TYPE_PCI_DEVICE). This worked out "okay", because all devices *are* legacy PCI endpoint devices on x86/440fx machinetypes, and hotplug didn't work properly on machinetypes using PCIe anyway (hotplugging onto a legacy PCI slot doesn't work, and until commit b87703cf any attempt to manually specify a PCIe address for a hotplugged device was erroneously rejected). This patch makes all qemu hotplug operations honor the pciConnectFlags set by the single all-knowing function qemuDomainDeviceCalculatePCIConnectFlags(). This is done in three steps, but in a single commit since the same call sites would have to be touched at each step anyway:
1) add a flags argument to the hypervisor-agnostic virDomainPCIAddressEnsureAddr() (previously it hardcoded ..._PCI_DEVICE)
2) add a new qemu-specific function qemuDomainEnsurePCIAddress() which gets the correct pciConnectFlags for the device from qemuDomainDeviceConnectFlags() and then calls virDomainPCIAddressEnsureAddr()
3) in qemu_hotplug.c replace all calls to virDomainPCIAddressEnsureAddr() with calls to qemuDomainEnsurePCIAddress()
So in effect, we're putting a "shim" on top of all calls to virDomainPCIAddressEnsureAddr() that sets the right pciConnectFlags.
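Purely as an illustration (not from the commit), the sort of hotplug this enables: on a PCIe machinetype a virtio NIC can now be given an address on a PCIe controller such as a pcie-root-port rather than being forced onto a legacy PCI slot; the bus number below is made up:
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
</interface>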
-
- 14 Nov 2016, 1 commit
-
-
By Michal Privoznik
Coverity identified that this variable might be leaked, and it's right. If an error occurs and we have to roll back, control jumps to the try_remove label, where we save the current error (see 0e82fa4c for more info). However, inside that code a jump to another label is possible, thus leaking the error object. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
- 11 Nov 2016, 2 commits
-
-
By Peter Krempa
The memory device alias needs to be treated as machine ABI, as qemu uses it in the migration stream for section labels. To simplify this, generate the alias from the slot number unless an existing broken configuration is detected. With this patch the aliases are predictable, and even certain configurations which previously would not have been migratable are fixed. Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1359135
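For illustration (not taken from the commit), a DIMM whose slot is fixed in the domain XML; with this change the device alias is derived from that slot number (e.g. something like dimm2 for slot 2). The size, node, and slot values here are arbitrary:
<memory model='dimm'>
  <target>
    <size unit='KiB'>524288</size>
    <node>0</node>
  </target>
  <address type='dimm' slot='2'/>
</memory>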
-
By Peter Krempa
As with other devices, assign the slot number right away when adding the device. This makes the slot numbers static, as we do with other addressing elements, and it will ultimately simplify allocating the alias in a static way that does not break with qemu.
-
- 10 Nov 2016, 2 commits
-
-
By Michal Privoznik
https://bugzilla.redhat.com/show_bug.cgi?id=1386976 We have everything ready; actually, the only limitation was our check that denied hotplug of vhost-user. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
By Michal Privoznik
If there is an error hotplugging a net device (for whatever reason), a rollback operation is performed. However, while doing so, various helper functions that are called report errors of their own. This results in the original error being overwritten, which misleads the user. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
- 09 Nov 2016, 1 commit
-
-
By Prasanna Kumar Kalever
Propagate the selected or default level to qemu if it's supported. Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1376009 Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com> Signed-off-by: Peter Krempa <pkrempa@redhat.com>
-
- 03 Nov 2016, 1 commit
-
-
By Martin Kletzander
This is needed in order to migrate a domain with shmem devices: such a domain is not allowed to migrate while they are attached, so they have to be unplugged before the migration and plugged back in afterwards. Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
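For reference (an illustrative snippet, not from the commit), a minimal shared-memory device of the kind that can now be hot(un)plugged; the name and size are examples:
<shmem name='shmem0'>
  <size unit='M'>4</size>
</shmem>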
-
- 27 Oct 2016, 5 commits
-
-
By Ján Tomko
For domains with no USB address cache, we should not attempt to generate a USB address. https://bugzilla.redhat.com/show_bug.cgi?id=1387665
-
By Ján Tomko
This function should never need a cleanup section.
-
By Ján Tomko
Commit 19a148b7 dropped the cache from QEMU's private domain object. Assume by default that callers do not have the cache, and use a longer name for the internal functions that do. This makes the shorter 'virDomainVirtioSerialAddrAutoAssign' name available for a function that will not require the cache.
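For context (illustrative only, not part of the commit), the kind of virtio-serial address these helpers auto-assign when it is omitted from a channel definition; the controller, bus, and port numbers are examples:
<channel type='unix'>
  <target type='virtio' name='org.qemu.guest_agent.0'/>
  <address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>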
- 26 Oct 2016, 6 commits
-
-
By Gema Gomez
Support for virtio disks was added in commit id 'fceeeda2', but not for SCSI drives. Add the secret for the server when hotplugging a SCSI drive. No adjustments are needed for unplug, since that's handled during the qemuDomainDetachDiskDevice call to qemuDomainRemoveDiskDevice in the qemuDomainDetachDeviceDiskLive switch. Also add a test of the command-line processing to show the command-line options when adding a SCSI drive for the guest.
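By way of illustration (not part of the commit), a networked disk that carries a server secret and is exposed on a SCSI bus - the sort of configuration this change lets you hotplug; the host name, target IQN, and secret usage are invented:
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='iscsi' name='iqn.2016-10.com.example:target/1'>
    <host name='iscsi.example.com' port='3260'/>
  </source>
  <auth username='myuser'>
    <secret type='iscsi' usage='libvirtiscsi'/>
  </auth>
  <target dev='sdb' bus='scsi'/>
</disk>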
-
By John Ferlan
https://bugzilla.redhat.com/show_bug.cgi?id=1300776 Complete the implementation of support for TLS encryption on chardev TCP transports by adding the ability to hotplug a secret used to generate the passwordid for the TLS object for chardev, RNG, and redirdev devices. Fix up the order of object removal on failure to be the inverse of the attempted attach (for redirdev, chardev, rng) - for each, the TLS object was being removed before the chardev backend. Likewise, add the ability to hot-unplug that secret object as well, and make sure the unplug order is the inverse of the plug order. Signed-off-by: John Ferlan <jferlan@redhat.com>
-
By John Ferlan
Add the secret object so that 'passwordid=' can be added to the command line if there's a secret defined on the host for TCP chardev TLS objects. Preparing the secret involves adding the secinfo to the char source device prior to command-line processing. There are multiple possibilities for TCP chardev source backend usage; add a test for at least a serial chardev as an example.
-
By John Ferlan
Commit id '6e6b4bfc' added the object, but forgot the other end.
-
By John Ferlan
Remove the drive first, then the secobj and/or encobj if they exist. This is because the drive has a dependency on the secobj (the secret for the networked storage server) and/or the encobj (the secret for the LUKS-encrypted volume). Deleting either object first leaves a drive without its respective objects. Signed-off-by: John Ferlan <jferlan@redhat.com>
-
By John Ferlan
Commit id '2c322378' added the TLS object removal to the DetachChrDevice call when it should have been added to RemoveChrDevice, since that's the norm for similar processing (e.g. disk). Signed-off-by: John Ferlan <jferlan@redhat.com>
-
- 24 Oct 2016, 3 commits
-
-
By Pavel Hrdina
Add an optional tls='yes|no' attribute for a TCP chardev. For QEMU, setting the value to "no" allows a domain chardev channel to opt out of the host config 'chardev_tls' setting, while setting it to "yes" attempts to use a host TLS environment even when the host config 'chardev_tls' setting is disabled, as long as a TLS environment is configured via either the host config 'chardev_tls_x509_cert_dir' or 'default_tls_x509_cert_dir'. Signed-off-by: John Ferlan <jferlan@redhat.com> Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
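As an illustrative snippet (not from the commit), a TCP serial chardev that opts in to TLS regardless of the host default; the host, port, and mode values are examples:
<serial type='tcp'>
  <source mode='connect' host='127.0.0.1' service='5555' tls='yes'/>
  <protocol type='raw'/>
  <target port='0'/>
</serial>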
-
By Pavel Hrdina
Currently the union has only one member, so remove it. If there is ever a need to add a new type of source for a new bus, this will force the author to add a union and properly check the bus type before accessing any union member. Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
-
By John Ferlan
Commit id '2c322378' missed the nuance that the rng backend could be using a TCP chardev and, if TLS is enabled on the host, will thus need to have the TLS object added.
-