- 21 Nov 2019, 1 commit
Authored by Jiri Denemark
When starting a domain without a CPU model specified in the domain XML, QEMU will choose a default one. That is fine unless the domain gets migrated to another host, because libvirt doesn't perform any CPU ABI checks and the virtual CPU provided by QEMU on the destination host can differ from the one on the source host.

With QEMU 4.2.0 we can probe for the default CPU model used by QEMU for a particular machine type and store it in the domain XML. This way the chosen CPU model is more visible to users and libvirt will make sure the guest sees the exact same CPU after migration.

Architecture-specific notes:

- aarch64: We only set the default CPU for TCG domains, as KVM requires an explicit "-cpu host" to work.
- ppc64: The default CPU for KVM is "host" thanks to some hacks in QEMU; we will translate the default model to the one corresponding to the host CPU ("POWER8" on a Power8 host, "POWER9" on a Power9 host, etc.). This is not a problem, as the corresponding CPU model is in fact an alias for "host". It is probably not ideal, but it's not wrong, and the default virtual CPU configured by libvirt is the same one QEMU would use. TCG uses various CPU models depending on the machine type and its version.
- s390x: The default CPU for KVM is "host", while TCG defaults to "qemu".
- x86_64: The default CPU model (qemu64) is not runnable on any host with KVM, but QEMU just disables the unavailable features and starts happily.

https://bugzilla.redhat.com/show_bug.cgi?id=1598151
https://bugzilla.redhat.com/show_bug.cgi?id=1598162

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
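A rough sketch of the effect (the model and attribute values are illustrative, not taken from the commit): instead of leaving the <cpu> element out and letting QEMU pick a model silently, the probed default is now written into the domain XML explicitly, e.g.

    <cpu mode='custom' match='exact' check='none'>
      <model fallback='forbid'>qemu64</model>
    </cpu>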
- 22 Aug 2019, 1 commit
Authored by Ján Tomko
In preparation for moving the validation to the parser, we need to supply the correct caps.

Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
- 26 Jul 2019, 1 commit
Authored by Stefan Berger
Add a test case for the TPM XML encryption parser and formatter.

Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
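For context, a minimal sketch (the secret UUID is a placeholder) of the kind of XML such a test exercises, pairing the emulator backend with an encryption secret:

    <tpm model='tpm-tis'>
      <backend type='emulator' version='2.0'>
        <encryption secret='6dd3e4a5-1d76-44ce-961f-f119f5aad935'/>
      </backend>
    </tpm>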
- 06 Jun 2018, 2 commits
Authored by Stefan Berger
This patch extends the TPM device XML with TPM 2.0 support. This only works for the emulator type of backend and looks as follows:

    <tpm model='tpm-tis'>
      <backend type='emulator' version='2.0'/>
    </tpm>

The swtpm process now has --tpm2 as an additional parameter:

    system_u:system_r:svirt_t:s0:c597,c632 tss 18477 11.8 0.0 28364 3868 ? Rs 11:13 13:50 /usr/bin/swtpm socket --daemon --ctrl type=unixio,path=/var/run/libvirt/qemu/swtpm/testvm-swtpm.sock,mode=0660 --tpmstate dir=/var/lib/libvirt/swtpm/testvm/tpm2,mode=0640 --log file=/var/log/swtpm/libvirt/qemu/testvm-swtpm.log --tpm2 --pid file=/var/run/libvirt/qemu/swtpm/testvm-swtpm.pid

The version of the TPM can be changed and the state of the TPM is preserved.

Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Authored by Stefan Berger
This patch adds support for an external swtpm TPM emulator. The XML for this type of TPM looks as follows:

    <tpm model='tpm-tis'>
      <backend type='emulator'/>
    </tpm>

The XML will currently only define a TPM 1.2.

Extend the documentation.
Add a test case testing the XML parser and formatter.

Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
- 03 May 2018, 1 commit
Authored by Stefan Berger
Enable the TPM CRB to be specified in the domain XML. This now allows describing the TPM device like this:

    <tpm model='tpm-crb'>
      <backend type='passthrough'>
        <device path='/dev/tpm0'/>
      </backend>
    </tpm>

Extend the XML schema to also allow tpm-crb.
Extend the documentation.
Add a test case for testing the XML parser and formatter.

Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
- 05 Dec 2017, 1 commit
Authored by Michal Privoznik
There's no reason for the files to have the qemuxml2xmlout- prefix since they all live under the qemuxml2xmloutdata directory.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
- 11 Apr 2017, 1 commit
Authored by Pavel Hrdina
Our test data used a lot of different qemu binary paths, and some of them were based on downstream systems. Note that there is one file where I had to add "accel=kvm", because the qemuargv2xml code parses "/usr/bin/kvm" as virt type="kvm".

Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
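A rough sketch of the mapping behind that note (binary paths and machine options are illustrative): previously the downstream binary name alone implied KVM, and after normalizing the path the accel option carries that information instead:

    /usr/bin/kvm -M pc                                  ->  <domain type='kvm'>
    /usr/bin/qemu-system-x86_64 -machine pc,accel=kvm   ->  <domain type='kvm'>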
- 10 Feb 2016, 1 commit
Authored by Cole Robinson
We use the PreFormat callback for this. Many test cases need to be extended to pass in the proper qemuCaps flags so AssignAddresses doesn't throw errors.

One test case (pcie-root-port-too-many) is dropped, since it was meant only for checking an error condition in qemuxml2argv, and once we add AssignAddresses it errors here too.

Long term I think AssignAddresses should be handled in qemu's PostParse callback, but that's not entirely straightforward. Handling it here means we can get the test suite churn over with.
- 09 Feb 2016, 1 commit
Authored by Cole Robinson
Most qemuxml2xml tests expect that the input XML is unchanged after parsing. This is unlike 99% of new qemu configs in the wild, which after initial parsing end up with stable PCI device addresses. The xml2xml bit doesn't currently hit that code path though, so most XML indeed does not change in testing.

Future patches will add the PCI address bits, which means most test cases will have different output. So let's do away with the hardcoded same-vs-different test split and always track a separate output file. Tests can still have the same input and output; it just necessitates two separate XML files.
- 27 Jan 2016, 1 commit
Authored by Pavel Hrdina
The current code was a little bit odd. At first we removed all possible implicit input devices from the domain definition, only to add them back later if any graphics device was defined while parsing the XML description. That's not all: while formatting the domain definition to an XML description we at first ignore any input devices with a bus other than USB and VIRTIO, and a few lines later we add the implicit input devices to the XML. This is a lot of code for nothing.

This patch may look more complicated than the original approach, but it is the preferred way to modify/add driver-specific bits only in those drivers and not deal with them in the common parsing/formatting functions.

The update is to add those implicit input devices into the config XML so it follows the real HW configuration visible to the guest OS.

There was also an inconsistency between our behavior and QEMU's: QEMU provides no way to disable those implicit input devices for the x86 architecture, so they are always available, even without a graphics device. The same applies to the XEN hypervisor. The VZ driver already does its part by putting the correct implicit devices into the live XML.

Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
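Concretely, for an x86 guest this means the config XML now shows the implicit devices explicitly; a minimal sketch (the PS/2 bus is the usual implicit case on x86):

    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>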
- 25 Apr 2013, 1 commit
Authored by Ján Tomko
    <controller type='pci' index='0' model='pci-root'/>

is auto-added to pc* machine types.

Without this controller, PCI bus 0 is not available and no PCI addresses are assigned by default.

Since older libvirt supported PCI bus 0 even without this controller, it is removed from the XML when migrating.
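For illustration (the slot value is arbitrary), once bus 0 exists a device address assigned on it looks like:

    <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>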
- 13 Apr 2013, 1 commit
Authored by Stefan Berger
Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
Reviewed-by: Corey Bryant <coreyb@linux.vnet.ibm.com>
Tested-by: Corey Bryant <coreyb@linux.vnet.ibm.com>