- 27 Jul, 2019: 2 commits
-
-
Submitted by Stefan Berger

In case of an incoming migration we do not need to run swtpm_setup with all the parameters; we only want the benefit of it creating a TPM state file for us that we can then label with an SELinux label. The actual state will be overwritten by the incoming state. So we have to pass an indicator for incomingMigration all the way down to the command line parameter generation for swtpm_setup.

Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
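A minimal sketch of the idea, assuming swtpm_setup's documented options (the helper and its wiring are made up, not the libvirt code): when the state will be replaced by the incoming migration anyway, only ask swtpm_setup to create the state file so it can be relabeled.

    /* Hypothetical sketch: build the swtpm_setup command differently for
     * an incoming migration, where full provisioning would be wasted. */
    #include <stdio.h>

    static void
    print_swtpm_setup_cmd(int incomingMigration, const char *statedir)
    {
        if (incomingMigration) {
            /* just create the state file; its content will be replaced */
            printf("swtpm_setup --tpm-state %s --overwrite\n", statedir);
        } else {
            /* normal provisioning, e.g. EK and EK certificate creation */
            printf("swtpm_setup --tpm-state %s --createek --create-ek-cert --overwrite\n",
                   statedir);
        }
    }

    int main(void)
    {
        print_swtpm_setup_cmd(1, "/var/lib/libvirt/swtpm/guest");
        print_swtpm_setup_cmd(0, "/var/lib/libvirt/swtpm/guest");
        return 0;
    }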
-
Submitted by Eric Blake

If we are using -blockdev, then node names are always available (because we set them). But when not using it, we have to scrape node names from QMP, and want to do so as infrequently as possible. We were scraping node names after reconnecting a new libvirtd to an existing guest (see qemuProcessReconnect), and after any block job that may have changed the set of node names we care about (legacy block jobs), but forgot to scrape the names when first starting a guest. Do so now in order to allow the checkpoint code to always have access to a node name without having to repeat a node name scrape itself.

Future patches may need to clean up qemuDomainSetBlockThreshold (if node names are always available, then it doesn't need to repeat a scrape) and/or hotplug and media changes (if the addition of new nodes can result in a null node name, then scraping at that point in time would be appropriate). But for now, this patch addresses only the most common instance of a missing node name.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
-
- 19 Jul, 2019: 1 commit
-
-
Submitted by Peter Krempa

The block job event handler qemuProcessHandleBlockJob looks at the block job data to see whether the job requires synchronous handling. Since the block job event may arrive before we continue the job handling (if the job has no data to copy), we could hit the state where the job is still set as QEMU_BLOCKJOB_STATE_NEW (we move it to QEMU_BLOCKJOB_STATE_RUNNING only after returning from the monitor). If the event handler used qemuBlockJobStartupFinalize, it would unregister and free the job.

Thankfully this is not a big problem for legacy block jobs, as we don't need much data for them, but since we'd re-instantiate the job data structure we'd report the wrong job type for active commit, as qemu reports it as a regular commit job.

Fix it by not using the qemuBlockJobStartupFinalize function in qemuProcessHandleBlockJob, as it is not starting the job anyway.

https://bugzilla.redhat.com/show_bug.cgi?id=1721375
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
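A simplified sketch of the race being guarded against (the enum and handler names are illustrative, not the actual libvirt symbols): the event can arrive while the job is still NEW, so the handler must not finalize and free it.

    /* Illustrative state machine: the starter moves the job from NEW to
     * RUNNING only after returning from the monitor, so an event handler
     * that frees jobs still in NEW would leave the starting thread with a
     * dangling pointer and force a lossy re-instantiation of the job. */
    typedef enum {
        BLOCKJOB_STATE_NEW,        /* registered, start not yet confirmed */
        BLOCKJOB_STATE_RUNNING,
        BLOCKJOB_STATE_COMPLETED,
    } blockJobState;

    struct blockJob {
        blockJobState state;
    };

    static void
    handleBlockJobEvent(struct blockJob *job)
    {
        if (job->state == BLOCKJOB_STATE_NEW)
            return;                /* let the starting thread finish first */

        /* ... otherwise process (and possibly free) the job ... */
    }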
-
- 18 Jul, 2019: 7 commits
-
-
Submitted by Peter Krempa

The PR manager is a property of the format layer in qemu, so we need to be able to track it also in the chains of orphaned block jobs. Add a helper for qemu to look also into the blockjob state.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
Submitted by Peter Krempa

Refresh the state of the jobs and process any events that might have happened while libvirt was not running. The job state processing requires some care to figure out whether a job needs to be bumped. For any invalid job, try our best to cancel it.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
Submitted by Peter Krempa

Add support for handling the event either synchronously or asynchronously using the event thread.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
Submitted by Peter Krempa

With blockdev we'll need to use JOB_STATUS_CHANGE, so gate the old events on the blockdev capability.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
Submitted by Peter Krempa

Now that block job data is stored in the status XML portion, we need to make sure that everything which changes the state also saves the status XML. The job registering function is used while parsing the status XML, so in that case we need to skip the XML saving.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
Submitted by Peter Krempa

Add the job structure to the table when instantiating a new job and remove it when the job terminates or fails.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
Submitted by Cole Robinson

Pass an xmlopt argument through all the needed network conf functions, as is done for domain XML handling. No functional change for now.

Reviewed-by: Laine Stump <laine@laine.org>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
-
- 21 Jun, 2019: 2 commits
-
-
Submitted by Peter Krempa

Filter out the given capabilities and set the domain taint flag if we've done so.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
Submitted by Peter Krempa

For testing purposes it's sometimes desirable to be able to control the presence of qemu capabilities. This adds the possibility to do so via the qemu namespace.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
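A rough illustration of the namespace syntax these two commits introduce (the capability names below are examples of libvirt's internal QEMU capability strings; consult the libvirt documentation for the exact schema):

    <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
      <name>testguest</name>
      <!-- ... the usual domain elements ... -->
      <qemu:capabilities>
        <qemu:add capability='blockdev'/>   <!-- pretend qemu has it -->
        <qemu:del capability='drive-boot'/> <!-- pretend qemu lacks it -->
      </qemu:capabilities>
    </domain>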
-
- 20 Jun, 2019: 11 commits
-
-
Submitted by Peter Krempa

When connecting to virtlogd fails, e.g. due to a wrong libvirtd SELinux process label, we'd report an utterly useless error message:

    $ virsh start upstream
    error: Failed to start domain upstream
    error: Cannot recv data: Connection reset by peer

Use virLastErrorPrefixMessage in the correct place to give a better sense of what's going on:

    $ virsh start upstream
    error: Failed to start domain upstream
    error: can't connect to virtlogd: Cannot recv data: Connection reset by peer

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
ACKed-by: Michal Privoznik <mprivozn@redhat.com>
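A standalone toy illustrating the pattern (this is not libvirt's error machinery, just the idea of prepending context while preserving the low-level detail):

    #include <stdio.h>
    #include <string.h>

    static char lastError[256];

    /* Prepend context to the saved error message, as the commit does via
     * virLastErrorPrefixMessage for the virtlogd connection failure. */
    static void
    prefixLastError(const char *prefix)
    {
        char buf[sizeof(lastError)];

        snprintf(buf, sizeof(buf), "%s: %s", prefix, lastError);
        strcpy(lastError, buf);
    }

    int main(void)
    {
        snprintf(lastError, sizeof(lastError),
                 "Cannot recv data: Connection reset by peer");
        prefixLastError("can't connect to virtlogd");
        printf("error: %s\n", lastError);
        return 0;
    }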
-
Submitted by Jiri Denemark

Without the "unavailable-features" CPU property we cannot properly detect whether a specific MSR feature we asked for (either explicitly or implicitly via a CPU model) was disabled by QEMU for some reason. Because this could break migration, snapshots, and save/restore operations, it's better to just forbid any use of MSR features with a QEMU which lacks the "unavailable-features" CPU property.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
Submitted by Ján Tomko

Always assume a JSON monitor was requested, since all the callers pass true anyway.

Signed-off-by: Ján Tomko <jtomko@redhat.com>
Acked-by: Peter Krempa <pkrempa@redhat.com>
-
Submitted by Ján Tomko

If we have a monitor, it is a JSON monitor.

Signed-off-by: Ján Tomko <jtomko@redhat.com>
Acked-by: Peter Krempa <pkrempa@redhat.com>
-
Submitted by Ján Tomko

Now that we no longer support the HMP monitor, remove some dead code.

Signed-off-by: Ján Tomko <jtomko@redhat.com>
Acked-by: Peter Krempa <pkrempa@redhat.com>
-
Submitted by Ján Tomko

Now that the virDomainQemuAttach API returns an error, we can remove the unused qemuProcessAttach function as well, deleting the only user that could possibly have requested opening a non-JSON monitor.

Signed-off-by: Ján Tomko <jtomko@redhat.com>
Acked-by: Peter Krempa <pkrempa@redhat.com>
-
Submitted by Michal Privoznik

If spawning qemu fails, we report an error and proceed to writing the status XML onto the disk. This is unnecessary, as we are sure that the domain is not running. At the same time, if virPidFileReadPath() fails it returns -errno. Use that in the error message; it may explain what went wrong.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
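A toy version of the -errno convention mentioned above (read_pidfile is made up; libvirt's virPidFileReadPath is only described, not reproduced):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>

    /* Return 0 on success and -errno on failure, so the caller can say why. */
    static int
    read_pidfile(const char *path, pid_t *pid)
    {
        FILE *f = fopen(path, "r");
        int val;

        if (!f)
            return -errno;             /* e.g. -ENOENT or -EACCES */
        if (fscanf(f, "%d", &val) != 1) {
            fclose(f);
            return -EINVAL;
        }
        fclose(f);
        *pid = val;
        return 0;
    }

    int main(void)
    {
        pid_t pid;
        int rc = read_pidfile("/run/missing-domain.pid", &pid);

        if (rc < 0)
            fprintf(stderr, "domain did not show up: %s\n", strerror(-rc));
        return 0;
    }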
-
Submitted by Jiri Denemark

When updating the guest CPU definition according to the vCPU actually created by QEMU, we want to use the generic qemuMonitorGetGuestCPU to get both CPUID and MSR features.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
Submitted by Jiri Denemark

It was never implemented or used for anything else anyway, mainly because it uses CPUID feature bits. The function is renamed to qemuMonitorGetGuestCPUx86 to make this explicit.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
Submitted by Jiri Denemark

Properly filter out features which should not be passed to QEMU because they were never supported by QEMU, or because they did nothing and QEMU dropped them. Currently they are just silently ignored by the command line generator. Let's make this process more visible and cleaner by dropping the features from the domain's active definition in qemuProcessUpdateGuestCPU.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
Submitted by Jiri Denemark

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
- 14 Jun, 2019: 1 commit
-
-
Submitted by Jie Wang

If libvirt receives the DISCONNECTED event and prDaemonRunning is set to false while qemuDomainRemoveDiskDevice() is executing, then qemuDomainRemoveDiskDevice() will fail to remove the pr-helper object because prDaemonRunning is false. But removing that check from qemuHotplugRemoveManagedPR() is not enough, because after removing the object through the monitor, qemuProcessKillManagedPRDaemon() is called, which contains the same check. Thus the pr-helper process might be left behind.

Signed-off-by: Jie Wang <wangjie88@huawei.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
- 12 Jun, 2019: 1 commit
-
-
Submitted by Peter Krempa

The hash table returned by qemuMonitorGetAllBlockJobInfo is organized by the frontend name (which skips the 'drive-' prefix). While our code properly matches the jobs to the disk, qemu needs the full job name including the 'drive-' prefix to be able to identify jobs. Fix this by adding an argument to qemuMonitorGetAllBlockJobInfo which makes it not modify the job name before filling the hash. This fixes a regression where users would not be able to cancel/pivot block jobs after restarting libvirtd while a block job is running.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
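A toy illustration of the naming mismatch (the alias is made up): QEMU identifies the job by the full drive name, while libvirt matches disks by the frontend part.

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* what QEMU reports and expects in block-job commands */
        const char *jobname = "drive-virtio-disk0";
        /* what libvirt uses to match the job to a disk frontend */
        const char *alias = jobname + strlen("drive-");

        printf("cancel/pivot must use: %s\n", jobname);
        printf("disk lookup key:       %s\n", alias);
        return 0;
    }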
-
- 06 Jun, 2019: 1 commit
-
-
Submitted by Andrea Bolognani

Commit 2f2254c7 attempted to fix a memory leak by ensuring cpumapToSet is always a freshly allocated bitmap, but regrettably introduced a NULL pointer access while doing so, because it called virBitmapCopy() without allocating the destination bitmap first. Solve the issue by using virBitmapNewCopy() instead.

Reported-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
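A sketch of the before/after pattern, relying only on the semantics stated above (virBitmapCopy() fills an already allocated bitmap, while virBitmapNewCopy() allocates the copy itself); this is illustrative, not the exact libvirt hunk:

    /* Before: cpumapToSet was never allocated, so copying into it
     * dereferences NULL. */
    virBitmapPtr cpumapToSet = NULL;

    if (virBitmapCopy(cpumapToSet, hostcpumap) < 0)   /* NULL pointer access */
        goto cleanup;

    /* After: let the helper allocate the destination itself. */
    if (!(cpumapToSet = virBitmapNewCopy(hostcpumap)))
        goto cleanup;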
-
- 04 Jun, 2019: 4 commits
-
-
Submitted by Andrea Bolognani

We're using VIR_AUTOPTR() for everything now, plus the cleanup section was not doing anything useful anyway.

Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
Submitted by Andrea Bolognani

In two out of three scenarios we are cleaning up properly after ourselves, but commit 5f2212c0 changed the remaining one in a way that caused it to start leaking cpumapToSet. Refactor the logic so that cpumapToSet is always a freshly allocated bitmap that gets cleaned up automatically thanks to VIR_AUTOPTR(); this also allows us to remove the hostcpumap variable.

Reported-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
Submitted by Andrea Bolognani

Ever since the feature was introduced with commit 0f8e7ae3, it has contained a logic error in that it attempted to use a NUMA node map where a CPU map was expected. Because of that, guests using <numatune> might fail to start:

    # virsh start guest
    error: Failed to start domain guest
    error: cannot set CPU affinity on process 40055: Invalid argument

This was particularly easy to trigger on POWER 8 machines, where secondary threads always show up as offline in the host: having

    <numatune>
      <memory mode='strict' placement='static' nodeset='1'/>
    </numatune>

in the guest configuration, for example, would result in libvirt trying to set the process affinity so that it would prefer running on CPU 1, but since that's a secondary thread and thus shows up as offline, the operation would fail, and so would starting the guest.

Use the newly introduced virNumaNodesetToCPUset() to convert the NUMA node map to a CPU map, which in the example above would be 48,56,64,72,80,88 - a valid input for virProcessSetAffinity().

https://bugzilla.redhat.com/show_bug.cgi?id=1703661
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
Submitted by Jiri Denemark

When migrating a domain with the invtsc CPU feature enabled, the TSC frequency of the destination host must match the frequency used when the domain was started on the source host, or the destination host has to support TSC scaling.

If the frequencies do not match and the destination host does not support TSC scaling, QEMU will fail to set the right TSC frequency when starting vCPUs on the destination and thus migration will fail. However, this happens quite late, after both hosts might have spent significant time transferring memory and perhaps even storage data. By adding the check to libvirt we can let migration fail before any data starts to be sent over.

If for some reason libvirt is unable to detect the host's TSC frequency or scaling support, we'll just let QEMU try and the migration will either succeed or fail later. Luckily, we mandate the TSC frequency to be explicitly set in the domain XML to even allow migration of domains with invtsc, so we can just check whether the requested frequency is compatible with the current host before starting QEMU.

https://bugzilla.redhat.com/show_bug.cgi?id=1641702
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
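A small self-contained sketch of the decision logic described above (the helper name and the unknown-state encoding are invented): refuse migration only when we positively know the destination cannot provide the required frequency.

    #include <stdbool.h>

    /* required_hz: frequency recorded in the domain XML (mandatory for
     * invtsc); host_hz: detected destination TSC frequency, 0 if unknown;
     * host_scaling: 1 = TSC scaling supported, 0 = not, -1 = unknown. */
    static bool
    tsc_migration_compatible(unsigned long long required_hz,
                             unsigned long long host_hz,
                             int host_scaling)
    {
        if (host_hz == 0 || host_scaling == -1)
            return true;           /* cannot detect: let QEMU try */
        if (required_hz == host_hz)
            return true;           /* exact match */
        return host_scaling == 1;  /* mismatch is fine only with scaling */
    }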
-
- 27 May, 2019: 1 commit
-
-
Submitted by Martin Kletzander

If the scheduler is set first, vCPU0 cannot be moved into its cpu,cpuacct cgroup. While it is not yet known whether this is a bug or not, it makes sense for us to set the scheduler later anyway, as otherwise the scheduler setting would be inherited by vCPU and I/O threads even when they do not have any such setting specified.

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
-
- 30 Apr, 2019: 1 commit
-
-
Submitted by Daniel P. Berrangé

This reverts commit 2f5e6502.

Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
-
- 18 Apr, 2019: 2 commits
-
-
Submitted by Michal Privoznik

It's funny how this went unnoticed for such a long time. Long story short, if a domain is configured with VIR_DOMAIN_NUMATUNE_MEM_STRICT, libvirt doesn't really honour that. This is because of 7e72ac78, after which libvirt allowed qemu to allocate memory just anywhere and only afterwards used some magic involving cpuset.memory_migrate and cpuset.mems to move the memory to the desired NUMA nodes. This was done to work around a KVM bug where KVM would fail if there wasn't a DMA zone available on the NUMA node.

Well, while the workaround might have stopped libvirt tickling the KVM bug, it also caused a bug on libvirt's side: if there is not enough memory on the configured NUMA node(s), then any attempt to start a domain should fail. Because of the way we play with guest memory, domains start just happily.

The solution is to move the child we've just forked into the emulator cgroup, set up cpuset.mems, and exec() qemu only after that. This basically reverts 7e72ac78, which was a workaround for the kernel bug. That bug was apparently fixed, because I've tested this successfully with a recent kernel.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
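A simplified standalone sketch of the ordering described above (the cgroup paths and guest name are made up, and libvirt really goes through its cgroup helpers): the child joins the emulator cgroup and pins cpuset.mems before exec(), so every qemu allocation is constrained from the start.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    /* Write a small string into a cgroup control file; abort the child on
     * failure so qemu is never exec()ed with unconstrained memory. */
    static void
    write_cgroup_file(const char *path, const char *val)
    {
        FILE *f = fopen(path, "w");

        if (!f || fputs(val, f) == EOF || fclose(f) == EOF) {
            perror(path);
            _exit(127);
        }
    }

    int main(void)
    {
        pid_t pid = fork();

        if (pid == 0) {
            char pidstr[32];

            snprintf(pidstr, sizeof(pidstr), "%d", (int)getpid());
            /* 1. join the emulator cgroup */
            write_cgroup_file("/sys/fs/cgroup/cpuset/machine/guest/emulator/tasks",
                              pidstr);
            /* 2. restrict allocations to the configured NUMA node(s) */
            write_cgroup_file("/sys/fs/cgroup/cpuset/machine/guest/emulator/cpuset.mems",
                              "1");
            /* 3. only now exec qemu: its memory is allocated under the limit */
            execlp("qemu-system-x86_64", "qemu-system-x86_64", "-version",
                   (char *) NULL);
            _exit(127);
        }
        if (pid > 0)
            waitpid(pid, NULL, 0);
        return 0;
    }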
-
Submitted by Daniel P. Berrangé

The call to resolve the actual network type will turn any NICs with type=network into one of the other types. Thus there should be no need to handle type=network in later switch() statements jumping off the actual type.

Reviewed-by: Cole Robinson <crobinso@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
-
- 16 Apr, 2019: 3 commits
-
-
Submitted by Daniel P. Berrangé

The APIs for allocating/notifying/removing network ports currently just take an internal domain interface struct. As a step towards turning these into public-facing APIs, add a virNetworkPtr argument to all of them.

Reviewed-by: Cole Robinson <crobinso@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
-
Submitted by Daniel P. Berrangé

The port allocation APIs are currently called unconditionally for all types of NIC, but (mostly) only do anything for NICs with type=network. The exception is the port allocation API, which does some validation even for NICs with type != network. Relying on this validation is flawed, however, since the network driver may not even be installed. IOW, virt drivers must not delegate validation to the network driver for NICs with type != network. This change allows us to report errors when the virtual network driver is not registered.

Reviewed-by: Cole Robinson <crobinso@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
-
Submitted by Martin Kletzander

This helps in scenarios where vCPUs run with a priority so high that they might starve the emulator thread. It also fits with the rest of the settings.

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
- 13 Apr, 2019: 1 commit
-
-
Submitted by Martin Kletzander

This does not cause a problem in usual scenarios thanks to us allowing CAP_DAC_OVERRIDE for the qemu process; however, in some scenarios this might be an issue because the directory is created with mkdtemp(3), which explicitly creates it with 0700 permissions, and qemu running as non-root cannot access it. The scenarios include:

- builds without CAPNG,
- running libvirtd in certain container configurations [1],
- and possibly others.

[1] https://github.com/kubevirt/kubevirt/pull/2181#issuecomment-481840304

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
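A minimal standalone illustration of the mkdtemp(3) behaviour the message relies on (the relaxed mode chosen here is just an example; a real fix must also consider ownership and security labels):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>

    int main(void)
    {
        char tmpl[] = "/tmp/qemu-XXXXXX";

        /* mkdtemp always creates the directory with mode 0700, so a qemu
         * process running as a different, unprivileged user cannot enter
         * it without CAP_DAC_OVERRIDE. */
        if (!mkdtemp(tmpl)) {
            perror("mkdtemp");
            return 1;
        }
        /* relax the permissions afterwards, as appropriate */
        if (chmod(tmpl, 0770) < 0) {
            perror("chmod");
            return 1;
        }
        printf("created %s\n", tmpl);
        return 0;
    }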
-
- 04 Apr, 2019: 2 commits
-
-
Submitted by Nikolay Shirokovskiy

Since the STOP event handler can use the pausedReason as sent to qemuProcessStopCPUs, we no longer need to send duplicate suspended lifecycle events, because we know what caused the stop along with extra details. This processing also allows us to remove the duplicated state change from qemuProcessStopCPUs.

Reviewed-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
-
Submitted by Nikolay Shirokovskiy

The map is based on existing cases in the code where we send a suspended event after changing the domain state to paused.

Reviewed-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
-