- 01 December 2016, 3 commits
-
-
By Eric Farman

Consider the following XML snippets:

    $ cat scsicontroller.xml
    <controller type='scsi' model='virtio-scsi' index='0'/>
    $ cat scsihostdev.xml
    <hostdev mode='subsystem' type='scsi'>
      <source>
        <adapter name='scsi_host0'/>
        <address bus='0' target='8' unit='1074151456'/>
      </source>
    </hostdev>

If we create a guest that includes the contents of scsihostdev.xml but forget the virtio-scsi controller described in scsicontroller.xml, one is silently created for us. The same holds true when attaching a hostdev before the matching virtio-scsi controller. (See qemuDomainFindOrCreateSCSIDiskController for context.)

Detaching the hostdev, followed by the controller, works well and the guest behaves appropriately. If we detach the virtio-scsi controller device first, any associated hostdevs are detached for us by the underlying virtio-scsi code (this is fine, since the connection is broken). But all is not well: the guest is unable to receive new virtio-scsi devices (the attach commands succeed, but the devices never appear within the guest) and cannot even be shut down after this point. While this is not libvirt's problem, we can avoid falling into this scenario by checking whether a controller is being used by any hostdev devices. The same is already done for disk elements today.

Applying this patch and then using the XML snippets from earlier:

    $ virsh detach-device guest_01 scsicontroller.xml
    error: Failed to detach device from scsicontroller.xml
    error: operation failed: device cannot be detached: device is busy

    $ virsh detach-device guest_01 scsihostdev.xml
    Device detached successfully

    $ virsh detach-device guest_01 scsicontroller.xml
    Device detached successfully

Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
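The shape of the added in-use check, as a minimal sketch assuming libvirt's usual domain definition structures (the function name controllerHasHostdevs is illustrative, not the real symbol):

    /* Return true if any hostdev sits on the given SCSI controller, in
     * which case detaching the controller must fail as "device is busy". */
    static bool
    controllerHasHostdevs(virDomainDefPtr def,
                          virDomainControllerDefPtr cont)
    {
        size_t i;

        for (i = 0; i < def->nhostdevs; i++) {
            virDomainDeviceInfoPtr info = def->hostdevs[i]->info;

            /* hostdevs attached to a SCSI controller use a drive address */
            if (info->type == VIR_DOMAIN_DEVICE_ADDRESS_TYPE_DRIVE &&
                info->addr.drive.controller == cont->idx)
                return true;
        }
        return false;
    }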
-
By Laine Stump

Although nearly all host devices that are assigned to guests using VFIO ("<hostdev>" devices in libvirt) are physically PCI Express devices, until now libvirt's PCI address assignment has always assigned them addresses on legacy PCI controllers in the guest, even if the guest's machinetype has a PCIe root bus (e.g. q35 and aarch64/virt). This patch tries to assign them to an address on a PCIe controller instead, when appropriate.

First we do some preliminary checks that might allow setting the flags without doing any extra work. If those conditions aren't met (and if libvirt is running privileged, so that it has the proper permissions), we perform the (relatively) time-consuming task of reading the device's PCI config to see if it is an Express device. If this is successful, the connect flags are set based on the result; but if we aren't able to read the PCI config (most likely because the device is not present on the system at the time of the check), we assume it is (or will be) an Express device, since that is almost always the case anyway.
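A self-contained sketch of the Express-vs-legacy test described above (not libvirt's exact code, which lives in virPCIDeviceIsPCIExpress). The offsets follow the PCI spec: the status register at 0x06 has bit 4 set when a capability list exists, the list head pointer sits at 0x34, and the PCI Express capability ID is 0x10:

    #include <fcntl.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <unistd.h>

    static bool
    deviceIsPCIExpress(const char *configPath)  /* .../DDDD:BB:SS.F/config */
    {
        uint8_t cfg[256];
        ssize_t got;
        uint8_t pos;
        int guard = 48;                   /* capability list can't be longer */
        int fd = open(configPath, O_RDONLY);

        if (fd < 0)
            return false;
        got = read(fd, cfg, sizeof(cfg));
        close(fd);

        /* An unprivileged reader only gets the first 64 bytes, which is
         * not enough to walk the capability list (see the related patch
         * in this series). */
        if (got < (ssize_t)sizeof(cfg) || !(cfg[0x06] & 0x10))
            return false;

        for (pos = cfg[0x34] & 0xFC; pos >= 0x40 && guard-- > 0;
             pos = cfg[pos + 1] & 0xFC) {
            if (cfg[pos] == 0x10)         /* PCI Express capability */
                return true;
        }
        return false;
    }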
-
By Laine Stump
If libvirtd is running unprivileged, it can open a device's PCI config data in sysfs, but can only read the first 64 bytes. But as part of determining whether a device is Express or legacy PCI, qemuDomainDeviceCalculatePCIConnectFlags() will be updated in a future patch to call virPCIDeviceIsPCIExpress(), which tries to read beyond the first 64 bytes of the PCI config data and fails with an error log if the read is unsuccessful. In order to avoid creating a parallel "quiet" version of virPCIDeviceIsPCIExpress(), this patch passes a virQEMUDriverPtr down through all the call chains that initialize the qemuDomainFillDevicePCIConnectFlagsIterData, and saves the driver pointer with the rest of the iterdata so that it can be used by qemuDomainDeviceCalculatePCIConnectFlags(). This pointer isn't used yet, but will be used in an upcoming patch (that detects Express vs legacy PCI for VFIO assigned devices) to examine driver->privileged.
-
- 29 November 2016, 2 commits
-
-
By Jiri Denemark

Restarting libvirtd on the source host at the end of migration, when the domain is already running on the destination, would cause image labels to be reset, effectively killing the domain. Commit e8d0166e fixed a similar issue on the destination host, but kept the source always resetting the labels, which was mostly correct except for the specific case handled by this patch.

https://bugzilla.redhat.com/show_bug.cgi?id=1343858

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Jiri Denemark

Post-copy migration needs bi-directional communication between the source and the destination QEMU processes, which is not supported by tunnelled migration.

https://bugzilla.redhat.com/show_bug.cgi?id=1371358

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
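The compatibility check itself is tiny; a sketch using the public flag names (the error wording is illustrative):

    if ((flags & VIR_MIGRATE_POSTCOPY) && (flags & VIR_MIGRATE_TUNNELLED)) {
        virReportError(VIR_ERR_ARGUMENT_UNSUPPORTED, "%s",
                       _("post-copy is not supported with tunnelled migration"));
        return -1;
    }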
-
- 28 November 2016, 2 commits
-
-
By Peter Krempa

Thanks to the complex capability caching code, virQEMUCapsProbeQMP was never called when we were starting a new qemu VM. On the other hand, when we are reconnecting to a qemu process we reload the capability list from the status XML file. This means that the flag preventing the function from being called was not set, and thus we partially reprobed some of the capabilities. The recent addition of CPU hotplug clears the QEMU_CAPS_QUERY_HOTPLUGGABLE_CPUS flag if the machine does not support it. The partial re-probe on reconnect then results in attempting to call the unsupported command and killing the VM. Remove the partial reprobe and depend on the stored capabilities. If it becomes necessary to reprobe the capabilities in the future, we should do a full reprobe rather than this partial one.
-
By Jiri Denemark

QEMU 2.8.0 adds support for unavailable-features in query-cpu-definitions replies. The unavailable-features array lists the CPU features which prevent a corresponding CPU model from being usable on the current host. The model can only be used when all of the unavailable features are disabled; an empty array means the CPU model can be used without modification.

We can use unavailable-features to provide CPU model usability info in the domain capabilities XML:

    <domainCapabilities>
      ...
      <cpu>
        <mode name='host-passthrough' supported='yes'/>
        <mode name='host-model' supported='yes'>
          <model fallback='allow'>Skylake-Client</model>
          ...
        </mode>
        <mode name='custom' supported='yes'>
          <model usable='yes'>qemu64</model>
          <model usable='yes'>qemu32</model>
          <model usable='no'>phenom</model>
          <model usable='yes'>pentium3</model>
          <model usable='yes'>pentium2</model>
          <model usable='yes'>pentium</model>
          <model usable='yes'>n270</model>
          <model usable='yes'>kvm64</model>
          <model usable='yes'>kvm32</model>
          <model usable='yes'>coreduo</model>
          <model usable='yes'>core2duo</model>
          <model usable='no'>athlon</model>
          <model usable='yes'>Westmere</model>
          <model usable='yes'>Skylake-Client</model>
          ...
        </mode>
      </cpu>
      ...
    </domainCapabilities>

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
- 26 November 2016, 13 commits
-
-
By Jiri Denemark

The "host" CPU model is supported by the special host-passthrough CPU mode, and users are not allowed to specify this model directly with the custom mode. Thus we should not advertise the "host" CPU model in domain capabilities. This worked well on architectures for which libvirt provides a list of supported CPU models in cpu_map.xml (since "host" is not in that list), but we need to explicitly filter the "host" model out for all other architectures.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Jiri Denemark

CPU models (and especially some additional details which we will start probing for later) differ depending on the accelerator. Thus we need to call query-cpu-definitions in both KVM and TCG mode to get all the data we want. Tests in tests/domaincapstest.c are temporarily switched to TCG to avoid squashing even more stuff into this single patch. They will all be switched back later in separate commits.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Jiri Denemark

This patch moves the CPU model formatting code from virQEMUCapsFormatCache into a separate function.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Jiri Denemark

This patch moves the CPU model parsing code from virQEMUCapsLoadCache into a separate function.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Jiri Denemark

The function just returned cached capabilities without checking whether they are still valid. We should check that and refresh the capabilities to make sure we don't return stale data. In other words, we should do what all the other lookup functions do.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Jiri Denemark

The function is made a little more readable, and the code which refreshes cached capabilities when they are no longer valid is moved into a separate function (virQEMUCapsCacheValidate) so that it can be reused in other places.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Jiri Denemark

If a user asked for KVM domain capabilities when KVM is not available, we would happily return data we got when probing through TCG and pretend it was relevant for KVM. Let's just report that KVM is not supported, to avoid confusion.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Jiri Denemark

When domain capabilities were introduced, we did not have enough data to decide whether KVM works on the host or not, and thus working legacy/VFIO device assignment was used as a witness. Now that we know whether KVM was enabled when probing QEMU capabilities (and thus that it is working), we can use this knowledge to provide a better default value for virttype.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
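A sketch of the improved default, assuming the capability flags discussed elsewhere in this series:

    if (virttype == VIR_DOMAIN_VIRT_NONE)
        virttype = virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM)
                   ? VIR_DOMAIN_VIRT_KVM
                   : VIR_DOMAIN_VIRT_QEMU;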
-
By Jiri Denemark

Since some capabilities may depend on the accelerator used when probing QEMU, the cache becomes invalid when KVM becomes available or when it is not available anymore.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Jiri Denemark

CPU-related capabilities may differ depending on the accelerator used when probing. Let's use KVM if available when probing QEMU, and fall back to TCG. The created capabilities already contain all we need to distinguish whether KVM or TCG was used:

- KVM was used when probing capabilities:
  QEMU_CAPS_KVM is set
  QEMU_CAPS_ENABLE_KVM is not set

- TCG was used and QEMU supports KVM, but it failed (e.g., missing kernel module or wrong /dev/kvm permissions):
  QEMU_CAPS_KVM is not set
  QEMU_CAPS_ENABLE_KVM is set

- KVM was not used and QEMU does not support it:
  QEMU_CAPS_KVM is not set
  QEMU_CAPS_ENABLE_KVM is not set

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Jiri Denemark

Let's set QEMU_CAPS_KVM and QEMU_CAPS_ENABLE_KVM early, so that the rest of the probing code can use these capabilities to handle KVM/TCG replies differently.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Jiri Denemark

Using -machine instead of -M for QMP probing is safe, because any QEMU binary capable of QMP probing supports -machine.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Jiri Denemark

The code that runs a new QEMU process to be used for probing capabilities is separated into four reusable functions, so that any code that wants to probe a QEMU process can just follow a few simple steps:

    cmd = virQEMUCapsInitQMPCommandNew(...);
    virQEMUCapsInitQMPCommandRun(cmd);
    /* talk to the running QEMU process using its QMP monitor */
    if (reprobeIsRequired) {
        virQEMUCapsInitQMPCommandAbort(cmd, ...);
        virQEMUCapsInitQMPCommandRun(cmd);
        /* talk to the running QEMU process again */
    }
    virQEMUCapsInitQMPCommandFree(cmd);

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
- 25 November 2016, 6 commits
-
-
By Michal Privoznik

We have a couple of functions that operate on NULL-terminated lists of strings. However, our naming sucks:

    virStringJoin
    virStringFreeList
    virStringFreeListCount
    virStringArrayHasString
    virStringGetFirstWithPrefix

We can do better:

    virStringListJoin
    virStringListFree
    virStringListFreeCount
    virStringListHasString
    virStringListGetFirstWithPrefix

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
By Boris Fiuczynski

If libvirt is compiled without NUMACTL support, starting libvirtd reports a libvirt internal error "NUMA isn't available on this host" without checking whether NUMA support is compiled into the libvirt binaries. This patch adds the missing NUMA support check to prevent the internal error. It also adds a check that the cgroup cpuset controller is available before using it. The error was noticed when libvirtd was restarted with running domains; on libvirtd start, qemuConnectCgroup gets called during qemuProcessReconnect.

Signed-off-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
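The shape of the added guards, sketched with the real utility helpers (the surrounding cgroup setup code is elided):

    if (virNumaIsAvailable() &&
        virCgroupHasController(priv->cgroup, VIR_CGROUP_CONTROLLER_CPUSET)) {
        /* only now is it safe to touch cpuset.mems for the domain */
    }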
-
By Eric Farman

Adjust the device string that is built for vhost-scsi devices so that it can be invoked from hotplug. On the QEMU command line, the file descriptors are expected to be numeric only. For hotplug, however, the file descriptors are expected to begin with at least one alphabetic character, else this error occurs:

    # virsh attach-device guest_0001 ~/vhost.xml
    error: Failed to attach device from /root/vhost.xml
    error: internal error: unable to execute QEMU command 'getfd':
           Parameter 'fdname' expects a name not starting with a digit

We also close the file descriptor in this case, so that shutting down the guest cleans up the host cgroup entries and allows future guests to use vhost-scsi devices. (Otherwise the guest will silently end.)

Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
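A sketch of a compliant fd name for the hotplug path, assuming libvirt's usual string helper (the exact prefix is illustrative):

    char *vhostfdName = NULL;

    if (virAsprintf(&vhostfdName, "vhostfd-%d", vhostfd) < 0)
        goto cleanup;
    /* pass vhostfd under vhostfdName via the QMP 'getfd' command, then
     * close our copy of the fd so the host cgroup entries are cleaned
     * up when the guest goes away */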
-
By Eric Farman

Open /dev/vhost-scsi and record the resulting file descriptor, so that the guest has access to the host device outside of the libvirt daemon. Pass this information, along with data parsed from the XML file, to build a device string for the qemu command line. That device string will be for either a vhost-scsi-ccw device in the case of an s390 machine, or vhost-scsi-pci for any other.

Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
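A condensed sketch of the host-side setup (error handling elided; ARCH_IS_S390 is libvirt's architecture test, and the device string layout is an assumption based on the message above):

    int vhostfd = open("/dev/vhost-scsi", O_RDWR);
    const char *model;

    if (vhostfd < 0)
        goto error;
    model = ARCH_IS_S390(def->os.arch) ? "vhost-scsi-ccw" : "vhost-scsi-pci";
    /* format something like "-device <model>,wwpn=...,vhostfd=<n>" */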
-
By Eric Farman

We already have a "scsi" hostdev subsys type, which refers to a single LUN that is passed through to a guest. But what of cases where multiple LUNs are passed through via a single SCSI HBA, such as with the vhost-scsi target? Create a new hostdev subsys type that will carry this.

Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
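For reference, the XML shape introduced by this series looks roughly like the following (the wwpn value is a placeholder):

    <hostdev mode='subsystem' type='scsi_host'>
      <source protocol='vhost' wwpn='naa.5123456789abcde0'/>
    </hostdev>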
-
By Eric Farman

Do all the stuff needed for the vhost-scsi capability in QEMU, so it's in place for our checks later.

Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
-
- 24 November 2016, 1 commit
-
-
By Marc Hartmayer

Remove the comment 'Set the migration source' as it isn't valid anymore; 'start it up' isn't useful either, as qemuProcessStart() is already a self-explanatory name.

Signed-off-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
-
- 23 November 2016, 7 commits
-
-
By Michal Privoznik

Just like in the previous commit, we are not updating CGroups on chardev hot(un-)plug, thus leaving qemu unable to access any non-default device that users try to hotplug.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
By Michal Privoznik

If users try to hotplug an RNG device with a backend different from /dev/random or /dev/urandom, the whole operation fails, as qemu is unable to access the device. The problem is that we don't update the device CGroups during the operation.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
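The essence of the fix, sketched with the cgroup utility call libvirt uses for this purpose (the surrounding hotplug function and the rngSourcePath variable are elided/hypothetical):

    if (virCgroupAllowDevicePath(priv->cgroup, rngSourcePath,
                                 VIR_CGROUP_DEVICE_RW) < 0)
        goto cleanup;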
-
By Nikolay Shirokovskiy

qemuDomainObjExitAgent is unsafe. First, it accesses the domain object without holding the domain lock. Second, it uses outdated logic that goes back to commit 79533da1 from 2009, when the code was quite different (the unref function, rather than merely unreferencing, unlocked and disposed of the object when the last reference was dropped, and left unlocking to the caller otherwise). Nowadays this logic may lead to disposing of a locked object, I guess. Another problem is that the callers of qemuDomainObjEnterAgent use the domain object again (namely priv->agent) without the domain lock. This patch addresses these two problems. qemuDomainGetAgent is dropped as unused.
-
By Nikolay Shirokovskiy
-
By Nikolay Shirokovskiy

Sometimes after a domain restart the agent is unavailable even though it is up and running in the guest. The diagnostic message is "QEMU guest agent is not available due to an error", that is, 'priv->agentError' is set. Investigation shows that 'priv->agent' is not NULL, so the error flag was probably set during the domain shutdown process and never cleaned up. The fix is quite simple: clean up the error flag unconditionally when the domain stops. The other hunks address other cases where the error flag is not cleaned up:

1. processSerialChangedEvent. We need to clean the error flag unconditionally here too. For example, if we fail to connect upon the first 'connected' event (setting the error flag) and then connect successfully on the second 'connected' event, the error flag would remain set erroneously and make the agent unavailable.

2. qemuProcessHandleAgentEOF. If the error flag is set and we get EOF, we need to change the state (and diagnostic) from 'error' to 'not connected'.
-
By Nikolay Shirokovskiy
-
By Nikolay Shirokovskiy

qemuConnectAgent returns -1 or -2 for different errors:

A. -1 when the connection to the guest agent fails.
B. -2 when the domain is destroyed during the connection attempt.

All qemuConnectAgent callers handle the first error the same way, so let's move this logic into qemuConnectAgent itself. The patched function returns 0 in case A and -1 in case B.
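After the change, a caller collapses to something like this sketch:

    if (qemuConnectAgent(driver, vm) < 0)
        goto stop;   /* the domain was destroyed while connecting */
    /* 0 covers both success and "agent not usable"; that error has
     * already been handled inside qemuConnectAgent */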
-
- 22 November 2016, 6 commits
-
-
By Marc Hartmayer

Use the util function virHostdevIsSCSIDevice() to simplify if statements.

Signed-off-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
-
By Marc Hartmayer

Add missing checks that a hostdev is a subsystem/SCSI device before accessing the union member 'subsys'/'scsi'. Also fix indentation and simplify qemuDomainObjCheckHostdevTaint().

Signed-off-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
-
By Sławek Kapłoński

Newline characters are now forbidden in domain names because they mess up virsh output and can be confusing for users. The name is validated in the drivers, after the XML has been parsed, to avoid making domains that were already created with a newline in their name disappear.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
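A sketch of the added validation, run in the drivers once the XML has been parsed (the error wording is illustrative):

    if (strchr(def->name, '\n')) {
        virReportError(VIR_ERR_XML_ERROR, "%s",
                       _("domain name must not contain newline characters"));
        goto cleanup;
    }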
-
By Peter Krempa

Commit 3f71c797 added a 'qemu_id' field to track the id of the cpu as reported by query-cpus. The patch did not include the changes necessary to propagate the id through the functions matching the data to the libvirt cpu structures, and thus all vcpus had id 0.
-
By Peter Krempa

Don't use qemuMonitorGetCPUInfo, which does a lot of matching to build a full picture that is unnecessary here and would be mostly discarded. Refresh only the vcpu 'halted' state, using data from query-cpus.
-
By Peter Krempa

We don't need to call qemuMonitorGetCPUInfo, which is a very inefficient way to get the data required to update the vcpu 'halted' state. Add a monitor helper that retrieves the halted state and returns it in a bitmap, so that it can be indexed easily.
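A sketch of the helper's shape, assuming libvirt's virBitmap API and the qemu_id field introduced earlier in this series (cpuentries/ncpuentries stand in for the parsed query-cpus reply):

    virBitmapPtr halted = virBitmapNew(maxcpus);
    size_t i;

    if (!halted)
        return NULL;
    for (i = 0; i < ncpuentries; i++) {
        if (cpuentries[i].halted)
            ignore_value(virBitmapSetBit(halted, cpuentries[i].qemu_id));
    }
    return halted;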
-