- 13 March 2020, 7 commits
-
-
Committed by Peter Krempa
For a long time we've masked out VIR_DOMAIN_BLOCK_COPY_SHALLOW if there's no backing chain for the copied disk, to simplify the code. One of the refactors of the block copy code meant that we no longer update the 'flags' variable, only the local copies. This was okay until commit ccd4228a, where we started storing the job flags in the block job data. Given that we modify how we call qemu, we should also modify @flags so that the correct value is recorded in the block job data.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
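A minimal sketch of the pattern described above; the structure and helper names here are illustrative assumptions, not libvirt's actual block-copy code:

    /* Sketch: clear the flag in @flags itself (not only in a local copy) so
     * that the value recorded in the job data matches what qemu was told.
     * struct demo_job and start_copy() are illustrative, not libvirt code. */
    #include <stdbool.h>
    #include <libvirt/libvirt.h>

    struct demo_job { unsigned int recorded_flags; };

    static void
    start_copy(struct demo_job *job, bool has_backing_chain, unsigned int flags)
    {
        if (!has_backing_chain)
            flags &= ~VIR_DOMAIN_BLOCK_COPY_SHALLOW;  /* mask @flags directly */

        /* ... build and issue the copy request to qemu using @flags ... */

        job->recorded_flags = flags;  /* now consistent with the qemu call */
    }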
-
Committed by Peter Krempa
Move the check of whether the job is already synchronised to the beginning of the function so that we don't try to do some of the steps necessary for pivoting before we actually want to pivot.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
-
Committed by Peter Krempa
The 'nfs' variable was set to -1 or -2 on agent failure. Cleanup then tried to free 'nfs' elements of the array, which resulted in a crash. Make 'nfs' a size_t and assign it only on a successful agent call.

https://bugzilla.redhat.com/show_bug.cgi?id=1812965

Broken by commit 599ae372

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
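A self-contained illustration of the fixed cleanup pattern, assuming the surrounding code looks roughly like this (the 'nfs' name is taken from the description above, everything else is illustrative):

    /* The crash pattern: a signed "count" of -1/-2 compared against an
     * unsigned index makes the cleanup loop walk a huge number of entries.
     * The fix: keep the count in a size_t and assign it only on success. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        char **fsinfo = NULL;
        int rc = -2;                  /* pretend the agent call failed */
        size_t nfs = 0;               /* stays 0 unless the call succeeded */

        if (rc >= 0)
            nfs = (size_t) rc;

        for (size_t i = 0; i < nfs; i++)   /* loops zero times after failure */
            free(fsinfo[i]);
        free(fsinfo);

        printf("cleaned up %zu entries\n", nfs);
        return 0;
    }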
-
Committed by Peter Krempa
The only caller doesn't check the value, and there are no real errors to report anyway.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
Committed by Michal Privoznik
The virQEMUCaps structure has a usedQMP member which in the past used to tell whether the qemu we are dealing with is capable of QMP. Well, we don't support HMP anymore (minus a few HMP passthrough commands, which are wrapped into QMP anyway) and the member is not really used.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
-
Committed by Nikolay Shirokovskiy
needReply, added in [1], looks redundant. Indeed, it is set to false only when mon->await_event is set too (the only exception is qemuAgentFSTrim, which is a mistake). However, it fixes the issue where qemuAgentCommand exits on the error path and mon->await_event is not reset. Let's instead reset mon->await_event properly. Also remove the "Woken up by event" debug message as it can be misleading: currently we can also get it if the monitor is closed due to a serial changed event. Anyway, both qemuAgentClose and qemuAgentNotifyEvent do their own logging.

[1] qemu: make sure agent returns error when required data are missing

Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Nikolay Shirokovskiy
Sync was introduced in [1] to check for guest agent presence. This check is racy, but in the era before serial events were available there was no better solution, I guess. When we do have the events, the purpose of the sync is different: it allows us to flush the stateless guest agent channel of remnants of previous communications. But we need to do that only once. Until we get a timeout on an issued command, the channel state is OK.

[1] qemu_agent: Issue guest-sync prior to every command

Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
- 12 March 2020, 1 commit
-
-
Committed by Michal Privoznik
If a disk has persistent reservations enabled, qemu-pr-helper might open not only /dev/mapper/control but also individual targets of the multipath device. We are already querying for them in CGroups, but now we have to create them in the namespace too. This was brought up in [1].

1: https://bugzilla.redhat.com/show_bug.cgi?id=1711045#c61

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Tested-by: Lin Ma <LMa@suse.com>
Reviewed-by: Jim Fehlig <jfehlig@suse.com>
-
- 11 March 2020, 5 commits
-
-
Committed by Daniel P. Berrangé
This converts the QEMU agent APIs to use the per-VM event loop, which involves switching from the virEvent APIs to the GMainContext / GSource APIs. A GSocket is used as a convenient way to create a GSource for a socket, but it is not yet used for actual I/O.

Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
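A minimal sketch of the GLib pattern referred to above: wrap an already-connected socket fd in a GSocket, create a GSource for it and attach that source to a per-VM GMainContext. The function and callback names are illustrative assumptions, not libvirt's actual agent code:

    /* Compile-check with: gcc -c demo.c $(pkg-config --cflags --libs gio-2.0) */
    #include <gio/gio.h>

    /* Called from whichever thread iterates the per-VM GMainContext. */
    static gboolean
    on_agent_readable(GSocket *sock, GIOCondition cond, gpointer opaque)
    {
        (void) sock; (void) cond; (void) opaque;
        /* read and dispatch agent data here */
        return G_SOURCE_CONTINUE;
    }

    static gboolean
    watch_agent_fd(int fd, GMainContext *context, GError **error)
    {
        GSocket *sock = g_socket_new_from_fd(fd, error); /* convenience wrapper */
        GSource *src;

        if (!sock)
            return FALSE;

        src = g_socket_create_source(sock, G_IO_IN, NULL);
        g_source_set_callback(src, (GSourceFunc) on_agent_readable, NULL, NULL);
        g_source_attach(src, context);   /* dispatched by the per-VM event loop */
        g_source_unref(src);
        g_object_unref(sock);            /* the source keeps its own reference */
        return TRUE;
    }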
-
Committed by Daniel P. Berrangé
We are dealing with the QEMU agent, not the monitor.

Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
-
Committed by Daniel P. Berrangé
This converts the QEMU monitor APIs to use the per-VM event loop, which involves switching from the virEvent APIs to the GMainContext / GSource APIs. A GSocket is used as a convenient way to create a GSource for a socket, but it is not yet used for actual I/O.

Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
-
Committed by Daniel P. Berrangé
In common with regular QEMU guests, QMP probing will need an event loop for handling monitor I/O operations.

Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
-
Committed by Daniel P. Berrangé
The event loop thread will be responsible for handling any per-domain I/O operations, most notably the QEMU monitor and agent sockets. We start this event loop when launching QEMU, but stopping the event loop is a little more complicated. The obvious idea is to stop it in qemuProcessStop(), but if we do that we risk losing the final events from the QEMU monitor, as they might not have been read by the event thread at the time we tell the thread to stop. The solution is to delay shutdown of the event thread until we have seen EOF from the QEMU monitor, and thus know there are no further events to process. Note that this assumes we don't have events to process from the QEMU agent.

Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
-
- 09 March 2020, 1 commit
-
-
Committed by Michal Privoznik
When preparing images for block jobs we modify their seclabels so that QEMU can open them. However, as mentioned in the previous commit, secdrivers base some of their decisions on whether the image they are working on is the top of the backing chain. Fortunately, in the places where we call the secdrivers we know this, and the information can be passed to them. The problem is the following: after the first blockcommit from the base to one of the parents, the XATTRs on the base image are not cleared, and therefore a second attempt to do another blockcommit fails. This is caused by the blockcommit code calling qemuSecuritySetImageLabel() over the base image, possibly multiple times (to ensure RW/RO access). A naive fix would be to call the restore function, but this is not possible, because that would deny QEMU access to the base image. Fortunately, we can use the fact that seclabels are remembered only for the top of the backing chain and not for the rest of it. And thanks to the previous commit we can tell the secdrivers which images are the top of the backing chain.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1803551

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
-
- 05 March 2020, 8 commits
-
-
Committed by Daniel P. Berrangé
Historically, threads are given a name based on the C function, and this name is used only inside libvirt. With OS-level thread naming this name is now visible to debuggers, but it also has to fit in 15 characters on Linux, so function names are too long in some cases.

Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
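A small self-contained example of the Linux limit mentioned above: the kernel stores 16 bytes per thread name (15 characters plus the terminating NUL), so pthread_setname_np() rejects anything longer. The thread name used here is purely illustrative:

    /* build: gcc -pthread demo.c */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <stdio.h>

    static void *worker(void *arg)
    {
        char name[16];
        (void) arg;

        /* Fits in 15 chars; a long function-derived name would fail (ERANGE). */
        pthread_setname_np(pthread_self(), "vm-demo");
        pthread_getname_np(pthread_self(), name, sizeof(name));
        printf("thread name: %s\n", name);
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        pthread_create(&tid, NULL, worker, NULL);
        pthread_join(tid, NULL);
        return 0;
    }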
-
Committed by Daniel P. Berrangé
The qemuMonitorOpenFD method has not been used since it was first introduced.

Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
-
Committed by Daniel P. Berrangé
Libvirt has never implicitly configured the QEMU agent to run on a PTY. In theory an end user may have written such an XML config, but this is reasonably unlikely, since when a bare <channel> is provided, libvirt will auto-expand it to a UNIX socket backend. With this change, a user who has used the PTY backend will have to switch to the UNIX backend if they wish to use libvirt APIs for interacting with the agent. This will not have a guest ABI impact.

Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
-
Committed by Peter Krempa
qemuMonitorJSONMakeCommandInternal does the full command construction if you pass in what would become the value of the 'arguments' key. Refactor the open-coded implementation to use the helper, and use modern cleanup helpers at the same time.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
-
Committed by Peter Krempa
Make it obvious that the function always returns a valid pointer and fix all callers.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
-
Committed by Michal Privoznik
I've found that if my virtlogd is socket-activated but the daemon isn't running yet, then virt-qemu-run is killed right after it tries to start the domain. The problem is that, because the default setting is to use virtlogd, the domain create code tries to connect to the virtlogd socket, which in turn tries to detect who is connecting (virNetSocketGetUNIXIdentity()) and, as a part of that, will try to open /proc/${PID_OF_SHIM}/stat, which is denied by SELinux:

    type=AVC msg=audit(1582903501.927:323): avc: denied { search } for \
    pid=1210 comm="virtlogd" name="1843" dev="proc" ino=37224 \
    scontext=system_u:system_r:virtlogd_t:s0-s0:c0.c1023 \
    tcontext=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 tclass=dir \
    permissive=0

Virtlogd reacts by closing the connection, which the shim sees as SIGPIPE. Since the default response to the signal is Term, we don't even get to reporting any error, nor to removing the temporary directory.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
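A small self-contained demonstration of the failure mode described above; the pipe here merely stands in for the virtlogd connection, and whether the shim's actual fix ignores SIGPIPE or handles it some other way is not stated in this log:

    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];

        /* Default SIGPIPE disposition is Term: without this line the write
         * below would kill the process before any error could be reported. */
        signal(SIGPIPE, SIG_IGN);

        if (pipe(fds) < 0)
            return 1;
        close(fds[0]);                      /* the peer closes the connection */

        if (write(fds[1], "x", 1) < 0)      /* now just fails with EPIPE */
            printf("write failed gracefully: %s\n", strerror(errno));
        return 0;
    }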
-
Committed by Michal Privoznik
When virt-qemu-run is run without any root directory specified on the command line, a temporary directory is made and used instead. But since we are using g_dir_make_tmp() to create the directory, it is going to have mode 0700. So even though we create the whole directory structure under it and label everything, QEMU is very likely not to have access. This is because in this case there is no qemu.conf and thus the distro-default UID:GID is used to run QEMU (e.g. qemu:kvm on Fedora). Change the mode of the temporary directory so that everybody has eXecute permission.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
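A hedged GLib sketch of the idea: create the temporary root (which g_dir_make_tmp() gives us with mode 0700) and then add the execute bits so a differently-privileged QEMU process can traverse it. The template name and the exact mode (0711) are assumptions for illustration:

    /* Compile-check with: gcc -c demo.c $(pkg-config --cflags --libs glib-2.0) */
    #include <glib.h>
    #include <glib/gstdio.h>

    static gchar *
    make_shared_root(void)
    {
        g_autoptr(GError) err = NULL;
        gchar *dir = g_dir_make_tmp("virt-qemu-run-XXXXXX", &err);  /* mode 0700 */

        if (!dir) {
            g_warning("cannot create temporary root: %s", err->message);
            return NULL;
        }

        if (g_chmod(dir, 0711) < 0)     /* grant eXecute to group and others */
            g_warning("cannot relax permissions on %s", dir);

        return dir;   /* caller frees with g_free() */
    }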
-
Committed by Michal Privoznik
Libvirt tries to forbid migration onto the same host, and it does that by checking whether the local and remote hostnames are the same and whether the local and remote UUIDs are the same. Well, the latter makes sense, but the former doesn't really, because libvirtd can be running inside a UTS namespace and the hostnames can then appear the same on both sides of a migration that is in fact between two different hosts. On the other hand, host UUIDs are unique, so rely on them when trying to prevent migration onto the same host.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1639596

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
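The resulting check boils down to a byte comparison of the two host UUIDs, roughly as in this illustrative sketch (VIR_UUID_BUFLEN is libvirt's public UUID buffer size; the function itself is not libvirt's actual code):

    #include <stdbool.h>
    #include <string.h>
    #include <libvirt/libvirt.h>

    /* Hostnames are deliberately not consulted: they may collide when
     * libvirtd runs inside a UTS namespace, while host UUIDs stay unique. */
    static bool
    is_same_host(const unsigned char src_uuid[VIR_UUID_BUFLEN],
                 const unsigned char dst_uuid[VIR_UUID_BUFLEN])
    {
        return memcmp(src_uuid, dst_uuid, VIR_UUID_BUFLEN) == 0;
    }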
-
- 04 March 2020, 17 commits
-
-
Committed by Peter Krempa
Use the 'flat' flag for 'query-named-block-nodes' in qemuBlockGetNamedNodeData if qemu supports QEMU_CAPS_QMP_QUERY_NAMED_BLOCK_NODES_FLAT. We don't need the nested data, so plumb in whether qemu supports the 'flat' output.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
Committed by Peter Krempa
Modern qemu allows skipping the redundant nested data in the output of query-named-block-nodes. Plumb in support for the argument that enables it.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
Committed by Peter Krempa
Replace qemuMonitorBlockGetNamedNodeData with qemuBlockGetNamedNodeData.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
Committed by Peter Krempa
Use g_autoptr to get rid of the cleanup section.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
Committed by Peter Krempa
Detect the presence of the flag and make it available internally as QEMU_CAPS_QMP_QUERY_NAMED_BLOCK_NODES_FLAT.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
Committed by Peter Krempa
The monitor password callback was removed a long time ago, but the callback type and variable were left around. Finish the cleanup.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
Committed by Ján Tomko
Format the 'vhost-user-fs' device on the QEMU command line. This device provides shared file system access using the FUSE protocol carried over virtio. The actual file server is implemented in an external vhost-user-fs device backend process.

https://bugzilla.redhat.com/show_bug.cgi?id=1694166

Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Tested-by: Andrea Bolognani <abologna@redhat.com>
-
Committed by Ján Tomko
Look into /usr/share/qemu/vhost-user to see whether we can find a suitable virtiofsd binary, in case the user did not provide one in the domain XML.

Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Tested-by: Andrea Bolognani <abologna@redhat.com>
-
Committed by Ján Tomko
Wire up the code to put virtiofsd in the emulator cgroup on domain startup.

Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Tested-by: Andrea Bolognani <abologna@redhat.com>
-
Committed by Ján Tomko
Start virtiofsd for each <filesystem> device using it. Pre-create the socket for communication with QEMU and pass it to virtiofsd. Note that virtiofsd needs to run as root.

https://bugzilla.redhat.com/show_bug.cgi?id=1694166

Introduced by QEMU commit a43efa34c7d7b628cbf1ec0fe60043e5c91043ea

Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Tested-by: Andrea Bolognani <abologna@redhat.com>
-
Committed by Ján Tomko
This is not yet supported.

Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Tested-by: Andrea Bolognani <abologna@redhat.com>
-
Committed by Ján Tomko
Reject unsupported configurations.

Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Tested-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
-
Committed by Ján Tomko
Add a 'virtiofsd_debug' option for tuning whether to run virtiofsd in debug mode.

Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Tested-by: Andrea Bolognani <abologna@redhat.com>
-
Committed by Ján Tomko
Introduce a new 'virtiofs' driver type for filesystem.

    <filesystem type='mount' accessmode='passthrough'>
      <driver type='virtiofs'/>
      <source dir='/path'/>
      <target dir='mount_tag'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </filesystem>

Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Tested-by: Andrea Bolognani <abologna@redhat.com>
-
Committed by Ján Tomko
Introduced by QEMU commit 98fc1ada4cf70af0f1df1a2d7183cf786fc7da05 ("virtio: add vhost-user-fs base device"), released in QEMU v4.2.0.

Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Tested-by: Andrea Bolognani <abologna@redhat.com>
-
Committed by Ján Tomko
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Tested-by: Andrea Bolognani <abologna@redhat.com>
-
Committed by Ján Tomko
Pass logManager to qemuExtDevicesStart for future use.

Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Tested-by: Andrea Bolognani <abologna@redhat.com>
-
- 27 February 2020, 1 commit
-
-
Committed by Pavel Hrdina
The default memlock limit is 64k, which is not enough to start a single VM. The requirement for one VM is 12k (8k for the eBPF map and 4k for the eBPF program); even so, creating the eBPF map and program fails with the 64k limit. By testing I figured out that the minimal limit to start a single VM with functional eBPF is 80k, and if I add 12k I can start another one. This leads to the following calculation: 80k as the memlock limit was enough to start a VM with eBPF, which means there is 68k of locked memory whose user I was not able to identify. So to get a number for 4096 VMs: 68k + 12k * 4096 = 49220k. If we round that up, we get a memory lock limit of 64M to support 4096 VMs with the default map size, which can hold 64 entries for devices. This should be good enough as a sane default, and users can change it if they need to.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1807090

Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
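The arithmetic above, restated as a small runnable check; the setrlimit() call at the end only illustrates how a 64 MiB RLIMIT_MEMLOCK would be applied to a process, it is not libvirt's actual code:

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        const unsigned long base_kib = 68;    /* unaccounted-for locked memory */
        const unsigned long per_vm_kib = 12;  /* 8 KiB eBPF map + 4 KiB program */
        const unsigned long vms = 4096;
        unsigned long needed_kib = base_kib + per_vm_kib * vms;   /* 49220 KiB */

        struct rlimit lim = { .rlim_cur = 64UL << 20, .rlim_max = 64UL << 20 };

        printf("needed: %lu KiB, rounded-up default: %llu bytes\n",
               needed_kib, (unsigned long long) lim.rlim_cur);

        if (setrlimit(RLIMIT_MEMLOCK, &lim) < 0)   /* raising it needs privilege */
            perror("setrlimit");
        return 0;
    }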
-