- 15 Nov 2016, 30 commits
-
-
Committed by Jiri Denemark
Guest CPU definitions with mode='custom' and a missing <vendor> element are expected to run on a host CPU from any vendor, as long as the required CPU model can be used as a guest CPU on the host. But even though no CPU vendor was explicitly requested, we would sometimes force one due to a bug in virCPUUpdate and virCPUTranslate. The bug would effectively forbid cross-vendor migrations even if they were previously working just fine. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
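For illustration, the kind of guest CPU definition affected here has mode='custom' and no <vendor> element (a minimal sketch; the model name is arbitrary):
<cpu mode='custom' match='exact'>
  <model fallback='allow'>Haswell</model>
</cpu>
With the fix, no vendor is forced onto such a CPU, so the domain can again migrate between hosts whose CPUs come from different vendors.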
-
Committed by Jiri Denemark
The PPC driver needs to convert POWERx_v* legacy CPU model names into POWERx to maintain backward compatibility with existing domains. This patch adds a new step to the guest CPU configuration workflow which CPU drivers can use to convert legacy CPU definitions. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
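As a hedged illustration (the exact legacy model name is assumed here, not taken from this commit), a definition such as:
<cpu mode='custom' match='exact'>
  <model>POWER8_v1.0</model>
</cpu>
would be converted by the new step into the plain POWER8 model.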
-
Committed by Jiri Denemark
The API is no longer used anywhere else since it was replaced by a much saner workflow utilizing new APIs that work on virCPUDefPtr directly: virCPUCompare, virCPUUpdate, and virCPUTranslate. Not testing the new workflow caused some bugs to be hidden. This patch reveals them, but doesn't attempt to fix them. To make sure all tests still pass after this patch, all affected test results are modified to pretend the tests succeeded. All of the bugs will be fixed in the following commits and the artificial modifications will be reverted. The following is the list of bugs in the new CPU model workflow:
- a guest CPU with mode='custom' and missing <vendor/> gets the vendor copied from the host's CPU (the vendor should only be copied to host-model CPUs):
  DO_TEST_UPDATE("x86", "host", "min", VIR_CPU_COMPARE_IDENTICAL)
  DO_TEST_UPDATE("x86", "host", "pentium3", VIR_CPU_COMPARE_IDENTICAL)
  DO_TEST_GUESTCPU("x86", "host-better", "pentium3", NULL, 0)
- when a guest CPU with mode='custom' needs to be translated into another model because the original model is not supported by a hypervisor, the result will have its vendor set to the vendor of the original CPU model as specified in cpu_map.xml, even if the original guest CPU XML didn't contain <vendor/>:
  DO_TEST_GUESTCPU("x86", "host", "guest", model486, 0)
  DO_TEST_GUESTCPU("x86", "host", "guest", models, 0)
  DO_TEST_GUESTCPU("x86", "host-Haswell-noTSX", "Haswell-noTSX", haswell, 0)
- legacy POWERx_v* model names are not recognized:
  DO_TEST_GUESTCPU("ppc64", "host", "guest-legacy", ppc_models, 0)
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
Committed by Jiri Denemark
The API doesn't change the array so let's make it constant. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
Committed by Jiri Denemark
Now that all tests pass NULL as the preferred model, we can just drop that test parameter. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
Committed by Jiri Denemark
In some cases the preferred model doesn't really do anything: the result remains the same even if it is removed. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
Committed by Jiri Denemark
Using a preferred model for guest CPUs with a forbidden fallback masks a bug in the code: it would just happily use another CPU model supported by a hypervisor even though doing so is explicitly forbidden in the CPU XML. This patch temporarily changes the expected result to -2, which is used when the result XML file cannot be found (here the file is intentionally absent, since the tested API should have failed). The result will be switched back to -1 a few commits later, when the original bug gets fixed. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
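For reference, the forbidden-fallback case under test looks like this in domain XML (a minimal sketch; the model name is arbitrary):
<cpu mode='custom' match='exact'>
  <model fallback='forbid'>Haswell</model>
</cpu>
With fallback='forbid', libvirt must fail rather than silently translate the CPU into a different model the hypervisor happens to support.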
-
Committed by Jiri Denemark
Using a preferred CPU model which is not in the list of CPU models supported by a hypervisor does not make sense. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
Committed by Jiri Denemark
Guest CPUs with match='minimum' should always be updated to match the host CPU model. Trying to get different results by supplying preferred models does not make sense. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
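A minimal sketch of such a definition (the model name is arbitrary):
<cpu mode='custom' match='minimum'>
  <model>Nehalem</model>
</cpu>
This requests at least a Nehalem, and updating always expands it to the host CPU model, so a preferred model could never change the outcome.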
-
Committed by Jiri Denemark
The new name is virCPUDataFormat. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
Committed by Jiri Denemark
The new name is virCPUDataParse. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
Committed by Jiri Denemark
The new name is virCPUModelIsAllowed. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
Committed by Jiri Denemark
The new name is virCPUGetModels. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
Committed by Maxim Nestratov
It was introduced by commit 7a51d9eb, which started to use monitor commands without acquiring a job, which is unsafe and leads to simultaneous access to the vm->mon structure by different threads. The crash backtrace is the following (shortened):
Program received signal SIGSEGV, Segmentation fault.
qemuMonitorSend (mon=mon@entry=0x7f4ef4000d20, msg=msg@entry=0x7f4f18e78640) at qemu/qemu_monitor.c:1011
1011    while (!mon->msg->finished) {
#0 qemuMonitorSend () at qemu/qemu_monitor.c:1011
#1 0x00007f691abdc720 in qemuMonitorJSONCommandWithFd () at qemu/qemu_monitor_json.c:298
#2 0x00007f691abde64a in qemuMonitorJSONCommand at qemu/qemu_monitor_json.c:328
#3 qemuMonitorJSONQueryCPUs at qemu/qemu_monitor_json.c:1408
#4 0x00007f691abcaebd in qemuMonitorGetCPUInfo (g@entry=false) at qemu/qemu_monitor.c:1931
#5 0x00007f691ab96863 in qemuDomainRefreshVcpuHalted at qemu/qemu_domain.c:6309
#6 0x00007f691ac0af99 in qemuDomainGetStatsVcpu at qemu/qemu_driver.c:18945
#7 0x00007f691abef921 in qemuDomainGetStats at qemu/qemu_driver.c:19469
#8 qemuConnectGetAllDomainStats at qemu/qemu_driver.c:19559
#9 0x00007f693382e806 in virConnectGetAllDomainStats at libvirt-domain.c:11546
#10 0x00007f6934470c40 in remoteDispatchConnectGetAllDomainStats at remote.c:6267
(gdb) p mon->msg
$1 = (qemuMonitorMessagePtr) 0x0
This change fixes it by calling qemuDomainRefreshVcpuHalted only when a job is acquired. Signed-off-by: Maxim Nestratov <mnestratov@virtuozzo.com>
-
Committed by Laine Stump
For machinetypes with a pci-root bus (all legacy PCI), libvirt will make a "fake" reservation for one extra slot prior to assigning addresses to unaddressed PCI endpoint devices in the domain. This will trigger auto-adding of a pci-bridge for the final device to be assigned an address *if that device would otherwise have been the last device on the last available pci-bridge*; it thus assures that there will always be at least one slot left open in the domain's bus topology for expansion. That is important both for hotplug (since a new pci-bridge can't be added while the guest is running) and for offline additions to the config (since adding a new device might otherwise in some cases require re-addressing existing devices, which we want to avoid). It's important to note that for the above case (legacy PCI), we must check for the special case of all slots on all buses being occupied *prior to assigning any addresses*, and avoid attempting to reserve the extra address in that case, because there is no free address in the existing topology and hence no place to auto-add a pci-bridge for expansion (i.e. it would always fail anyway). Since that condition can only be reached by manual intervention, this is acceptable. For machinetypes with pcie-root (Q35, aarch64 virt), libvirt's methodology for automatically expanding the bus topology is different: pcie-root-ports are plugged into slots (soon to be functions) of pcie-root as needed, and the new endpoint devices are assigned to the single slot in each pcie-root-port. This is done so that the devices are, by default, hotpluggable (the slots of pcie-root don't support hotplug, but the single slot of the pcie-root-port does). Since pcie-root-ports can only be plugged into pcie-root, and we don't auto-assign endpoint devices to the pcie-root slots, topology expansion doesn't compete with endpoint devices for slots, so we don't need to worry about checking for all "useful" slots being free *prior* to assigning addresses to new endpoint devices; as a matter of fact, if we attempt to reserve the open slots before the used slots, it can lead to errors. Instead this patch just reserves one slot for a "future potential" PCIe device after doing the assignment for actual devices, but only if the only PCI controller defined prior to starting address assignment was pcie-root, and only if we auto-added at least one PCI controller during address assignment. This assures two things: 1) reserving the open slots will only be done when the domain is initially defined, never at any time after, and 2) if the user understands enough about PCI controllers to be adding them manually, we don't mess up their plan by adding extras - if they know enough to add one pcie-root-port, or to manually assign addresses such that no pcie-root-ports are needed, they know enough to add extra pcie-root-ports if they want them (this could be called the "libguestfs clause", since libguestfs needs to be able to create domains with as few devices/controllers as possible). This is set to reserve a single free port for now, but could be increased in the future if public sentiment goes in that direction (it's easy to increase later, but essentially impossible to decrease).
-
Committed by Laine Stump
Real Q35 hardware has an ICH9 chip that includes several integrated devices at particular addresses (see the file docs/q35-chipset.cfg in the qemu source). libvirt already attempts to put the first two sets of ich9 USB2 controllers it finds at 00:1D.* and 00:1A.* to match the real hardware. This patch does the same for the ich9 "HD audio" device. The main inspiration for this patch is that currently the *only* device in a reasonable "workstation" type virtual machine config that requires a legacy PCI slot is the audio device. Without this patch, the standard Q35 machine created by virt-manager will have a dmi-to-pci-bridge and a pci-bridge just for the sound device; with the patch (and if you change the sound device model from the default "ich6" to "ich9"), the machine definition constructed by virt-manager has absolutely no legacy PCI controllers - any legacy PCI devices (e.g. video and sound) are on pcie-root as integrated devices.
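A sketch of the resulting placement (the 00:1B.0 address is assumed from qemu's docs/q35-chipset.cfg, not stated in this commit): an ich9 sound device with no address specified would end up as
<sound model='ich9'>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x1b' function='0x0'/>
</sound>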
-
Committed by Laine Stump
Previously we added a set of EHCI+UHCI controllers to Q35 machines to mimic real hardware as closely as possible, but recent discussions have pointed out that the nec-usb-xhci (USB3) controller is much more virtualization-friendly (it uses less CPU), so this patch switches the default for Q35 machinetypes to add an XHCI instead (if it's supported, which of course it *will* be). Since none of the existing test cases left USB controllers out of the input XML, a new Q35 test case was added which has *no* devices, so it ends up with only the defaults always put in by qemu, plus those added by libvirt.
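In other words, on a Q35 machine a config that specifies no USB controller at all, or simply
<controller type='usb' model='nec-xhci'/>
now ends up with a single XHCI controller instead of the EHCI+UHCI set (a sketch of the intent; see the new test case for the exact output).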
-
Committed by Laine Stump
Now that a dmi-to-pci-bridge is automatically added just as it's needed (when a pci-bridge is being added), we no longer have any need to force-add one to every single Q35 domain.
-
Committed by Laine Stump
A few of the qemu test cases assume that a dmi-to-pci-bridge will always be added at index 1, and so they omit it from the input data even though a pci-bridge is present at index 2, e.g.:
<controller type='pci' index='0' model='pcie-root'/>
<controller type='pci' index='2' model='pci-bridge'/>
Support for this odd practice was discussed on libvir-list and we decided that the complex code required to keep it working was not worth the headache of maintaining. So instead, this patch modifies the test cases to manually add a dmi-to-pci-bridge at index 1 (since an upcoming patch is going to eliminate the unconditional adding of dmi-to-pci-bridge). Because the auto-add was placing the dmi-to-pci-bridge later in the list (even though it has a lower index), the test output is also updated to account for the new order (which puts the pci controllers in index order).
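The corrected test input therefore lists all three controllers explicitly:
<controller type='pci' index='0' model='pcie-root'/>
<controller type='pci' index='1' model='dmi-to-pci-bridge'/>
<controller type='pci' index='2' model='pci-bridge'/>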
-
Committed by Laine Stump
Previously libvirt would only add pci-bridge devices automatically when an address was requested for a device that required a legacy PCI slot and none was available. This patch expands that support to dmi-to-pci-bridge (which is needed in order to add a pci-bridge on a machine with a pcie-root) and pcie-root-port (which is needed to add a hotpluggable PCIe device). It does *not* automatically add pcie-switch-upstream-ports or pcie-switch-downstream-ports (and currently there are no plans for that). Given the existing code to auto-add pci-bridge devices, automatically adding pcie-root-ports is fairly straightforward. The dmi-to-pci-bridge support is a bit tricky though, for a few reasons:
1) Although the only reason to add a dmi-to-pci-bridge is so that there is a reasonable place to plug in a pci-bridge controller, most of the time it's not the presence of a pci-bridge *in the config* that triggers the requirement to add a dmi-to-pci-bridge. Rather, it is the presence of a legacy PCI device in the config, which triggers auto-add of a pci-bridge, which in turn triggers auto-add of a dmi-to-pci-bridge (this is handled in virDomainPCIAddressSetGrow() - if there's a request to add a pci-bridge we check whether there is a suitable bus to plug it into; if not, we first add a dmi-to-pci-bridge).
2) Once there is a single dmi-to-pci-bridge on the system, there won't be a need for any more, even if it's full, as long as there is a pci-bridge with an open slot - you can also plug pci-bridges into existing pci-bridges. So we have to make sure we don't add a dmi-to-pci-bridge unless there aren't any dmi-to-pci-bridges *or* any pci-bridges.
3) Although it is strongly discouraged, it is legal for a pci-bridge to be plugged directly into pcie-root, and we don't want to auto-add a dmi-to-pci-bridge if there is already a pci-bridge that's been forced directly into pcie-root.
Although libvirt will now automatically create a dmi-to-pci-bridge when it's needed, the code that forces a dmi-to-pci-bridge onto all domains with pcie-root (in qemuDomainDefAddDefaultDevices()) still remains for now. That will be removed in a future patch. For now, the pcie-root-ports are added one per slot, which is a bit wasteful and means address assignment will fail after 31 total PCIe devices (30 if there are also some PCI devices), but it helps keep the changeset down for this patch. A future patch will have 8 pcie-root-ports sharing the functions of a single slot. An illustrative sketch of the auto-add chain follows this message.
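As a sketch of the auto-add chain described above (the device choice is illustrative): starting from an input config with only
<controller type='pci' index='0' model='pcie-root'/>
<sound model='ich6'/>
the legacy PCI sound device triggers auto-add of a pci-bridge, which in turn triggers auto-add of a dmi-to-pci-bridge, so the final config also contains
<controller type='pci' index='1' model='dmi-to-pci-bridge'/>
<controller type='pci' index='2' model='pci-bridge'/>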
-
Committed by Laine Stump
Andrea had the right idea when he disabled the "reserve an extra unused slot" bit for aarch64/virt. For *any* PCI Express-based machine it is pointless, since 1) an extra legacy PCI slot can't be used for hotplug, because hotplug into legacy PCI slots doesn't work on PCI Express machinetypes, and 2) even for "coldplug" expansion, everybody will want to expand using Express controllers, not legacy PCI. This patch eliminates the extra slot reservation unless the system has a pci-root (i.e. legacy PCI).
-
Committed by Laine Stump
The nec-usb-xhci device (which is a USB3 controller) has always presented itself as a PCI device when plugged into a legacy PCI slot, and a PCIe device when plugged into a PCIe slot, but libvirt has always auto-assigned it to a legacy PCI slot. This patch changes that behavior to auto-assign it to a PCIe slot on systems that have pcie-root (e.g. Q35 and aarch64/virt). Since we don't yet auto-create pcie-*-port controllers on demand, this means a config with an nec-xhci USB controller that has no PCI address assigned will also need to have an otherwise-unused pcie-*-port controller specified:
<controller type='pci' model='pcie-root-port'/>
<controller type='usb' model='nec-xhci'/>
(this assumes there is an otherwise-unused slot on pcie-root to accept the pcie-root-port).
-
Committed by Laine Stump
The e1000e is an emulated network device based on the Intel 82574, present in qemu 2.7.0 and later. Among other differences from the e1000, it presents itself as a PCIe device rather than a legacy PCI device. In order to get it assigned to a PCIe controller, this patch updates the flags setting for network devices when the model name is "e1000e". (Note that for some reason libvirt has never validated network device model names other than to check that there are no dangerous characters in them. That should probably change, but is the subject of another patch.) Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1343094
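A hedged example of an interface using the new model (the network name is an assumption):
<interface type='network'>
  <source network='default'/>
  <model type='e1000e'/>
</interface>
With this patch, such a device with no explicit <address> is assigned a PCIe slot rather than a legacy PCI one.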
-
Committed by Laine Stump
libvirt previously assigned nearly all devices to a "hotpluggable" legacy PCI slot even on machines with a PCIe root bus (and even though most such machines don't even support hotplug on legacy PCI slots!). Forcing all devices onto legacy PCI slots means that the domain will need a dmi-to-pci-bridge (to convert from PCIe to legacy PCI) and a pci-bridge (to provide hotpluggable legacy PCI slots which, again, usually aren't hotpluggable anyway). To help reduce the need for these legacy controllers, this patch tries to assign virtio-1.0-capable devices to PCIe slots whenever possible, by setting the appropriate connectFlags in virDomainCalculateDevicePCIConnectFlags(). Happily, when that function was written (just a few commits ago) it was created with a "virtioFlags" argument, set by both of its callers, which is the proper connectFlags to set for any virtio-*-pci device - depending on the arch/machinetype of the domain, and on whether or not the qemu binary supports virtio-1.0, that flag will have been set to either PCI or PCIe. This patch merely enables the functionality by setting the flags for the device to whatever is in virtioFlags, if the device is a virtio-*-pci device. NB: the first virtio video device will be placed directly on bus 0 slot 1 rather than on a pcie-root-port, due to the override for primary video devices in qemuDomainValidateDevicePCISlotsQ35(). Whether or not to change that is a topic of discussion, but this patch doesn't change that particular behavior. NB2: since the slot must be hotpluggable, and pcie-root (the PCIe root complex) does *not* support hotplug, suitable controllers must also be in the config (i.e. either pcie-root-port or pcie-downstream-port). For now, libvirt doesn't add those automatically, so if you put virtio devices in a config for a qemu that has PCIe-capable virtio devices, you'll need to add extra pcie-root-ports yourself. That requirement will be eliminated in a future patch; for now, it's simple to do this:
<controller type='pci' model='pcie-root-port'/>
<controller type='pci' model='pcie-root-port'/>
<controller type='pci' model='pcie-root-port'/>
...
Partially Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1330024
-
Committed by Laine Stump
This patch cleans up the connect flags for certain types/models of devices that aren't PCI, making them return 0. In the future that may be used as an indicator to the caller about whether or not a device needs a PCI address. For now the 0 is just ignored, except in virDomainPCIAddressEnsureAddr() (called during device hotplug), where in some cases the flags actually need to be re-set to PCI|HOTPLUGGABLE, just in case someone (in some old config) has manually set a PCI address for a device that isn't PCI.
-
Committed by Laine Stump
Before now, all the qemu hotplug functions assumed that every device to be hotplugged was a legacy PCI endpoint device (VIR_PCI_CONNECT_TYPE_PCI_DEVICE). This worked out "okay", because all devices *are* legacy PCI endpoint devices on x86/440fx machinetypes, and hotplug didn't work properly on machinetypes using PCIe anyway (hotplugging onto a legacy PCI slot doesn't work, and until commit b87703cf any attempt to manually specify a PCIe address for a hotplugged device would be erroneously rejected). This patch makes all qemu hotplug operations honor the pciConnectFlags set by the single all-knowing function qemuDomainDeviceCalculatePCIConnectFlags(). This is done in 3 steps, but in a single commit, since we would have to touch the other points at each step anyway:
1) add a flags argument to the hypervisor-agnostic virDomainPCIAddressEnsureAddr() (which previously hardcoded ..._PCI_DEVICE)
2) add a new qemu-specific function qemuDomainEnsurePCIAddress() which gets the correct pciConnectFlags for the device from qemuDomainDeviceConnectFlags(), then calls virDomainPCIAddressEnsureAddr()
3) in qemu_hotplug.c, replace all calls to virDomainPCIAddressEnsureAddr() with calls to qemuDomainEnsurePCIAddress()
So in effect, we're putting a "shim" on top of all calls to virDomainPCIAddressEnsureAddr() that sets the right pciConnectFlags.
-
Committed by Laine Stump
Set pciConnectFlags in each device's DeviceInfo and then use those flags later when validating existing addresses in qemuDomainCollectPCIAddress() and when assigning new addresses with qemuDomainPCIAddressReserveNextAddr() (rather than scattering the logic about which devices need which type of slot all over the place). Note that the exact flags set by qemuDomainDeviceCalculatePCIConnectFlags() are different from the flags previously set manually in qemuDomainCollectPCIAddress(), but this doesn't matter, because all validation of addresses in that case ignores the setting of the HOTPLUGGABLE flag and treats PCIE_DEVICE and PCI_DEVICE the same (this lax checking is done on purpose, because there are some things that we want to allow the user to specify manually, e.g. assigning a PCIe device to a PCI slot, that we *don't* ever want libvirt to do automatically). The flag settings that we *really* want to match are 1) the old flag settings in qemuDomainAssignDevicePCISlots() (which is HOTPLUGGABLE | PCI_DEVICE for everything except PCI controllers) and 2) the new flag settings done by qemuDomainDeviceCalculatePCIConnectFlags() (which are currently exactly that: HOTPLUGGABLE | PCI_DEVICE for everything except PCI controllers).
-
Committed by Laine Stump
The lowest-level function of this trio (qemuDomainDeviceCalculatePCIConnectFlags()) aims to be the single authority for the virDomainPCIConnectFlags to use for any given device with a particular arch/machinetype/qemu-binary. qemuDomainFillDevicePCIConnectFlags() sets info->pciConnectFlags in a single device (unless it has no virDomainDeviceInfo, in which case it's a no-op). qemuDomainFillAllPCIConnectFlags() sets info->pciConnectFlags in all devices that have a virDomainDeviceInfo. The latter two functions aren't called anywhere yet; this commit is just making them available. Later patches will replace the current hodge-podge of flag settings with calls to this single authority.
-
Committed by Laine Stump
These functions provide a simple one-line method of learning whether the current domain has a pci-root or pcie-root bus.
-
Committed by Pavel Glushchak
The domain XML generated in the Begin step should be passed to the Perform step in the VIR_MIGRATE_PARAM_DEST_XML parameter. Otherwise 'XML error: failed to parse xml document' is raised on the destination host, because the domain XML is NULL. Signed-off-by: Pavel Glushchak <pglushchak@virtuozzo.com>
-
- 14 Nov 2016, 10 commits
-
-
Committed by Erik Skultety
Although there was already an effort (b620bdee) to replace vshPrint occurrences with vshPrintExtra because of the '--quiet' flag, there were still some leftovers. This patch fixes them, hopefully for good. Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1356881 Signed-off-by: Erik Skultety <eskultet@redhat.com>
-
Committed by Erik Skultety
There were a few places in our virsh* code where, instead of calling vshError on failure, we called vshPrint. Signed-off-by: Erik Skultety <eskultet@redhat.com>
-
Committed by Michal Privoznik
The <pre/> section is rendered as-is on the page; that is, if all the lines are prefixed with 4 spaces, the rendered page will have them too. That becomes a problem if we put a box around such a <pre/>, because the content might not fit into it. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
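For example (a sketch of the markup in question), a docs source block written as
<pre>
    virsh list --all
</pre>
renders with the four leading spaces preserved on every line, which can push the content outside a surrounding box; hence this series strips the extra indentation inside <pre/> sections.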
-
Committed by Michal Privoznik
The <pre/> section is rendered as-is on the page; that is, if all the lines are prefixed with 4 spaces, the rendered page will have them too. That becomes a problem if we put a box around such a <pre/>, because the content might not fit into it. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Michal Privoznik
The <pre/> section is rendered as-is on the page; that is, if all the lines are prefixed with 4 spaces, the rendered page will have them too. That becomes a problem if we put a box around such a <pre/>, because the content might not fit into it. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Michal Privoznik
The <pre/> section is rendered as-is on the page; that is, if all the lines are prefixed with 4 spaces, the rendered page will have them too. That becomes a problem if we put a box around such a <pre/>, because the content might not fit into it. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Michal Privoznik
The <pre/> section is rendered as-is on the page; that is, if all the lines are prefixed with 4 spaces, the rendered page will have them too. That becomes a problem if we put a box around such a <pre/>, because the content might not fit into it. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Michal Privoznik
The <pre/> section is rendered as-is on the page; that is, if all the lines are prefixed with 4 spaces, the rendered page will have them too. That becomes a problem if we put a box around such a <pre/>, because the content might not fit into it. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Michal Privoznik
The <pre/> section is rendered as-is on the page; that is, if all the lines are prefixed with 4 spaces, the rendered page will have them too. That becomes a problem if we put a box around such a <pre/>, because the content might not fit into it. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Michal Privoznik
The <pre/> section is rendered as-is on the page; that is, if all the lines are prefixed with 4 spaces, the rendered page will have them too. That becomes a problem if we put a box around such a <pre/>, because the content might not fit into it. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-