- 17 December 2019, 40 commits
-
Committed by Michal Privoznik

Some variables are not used outside of the for() loop. Move their declarations inside the loop to clean up the code a bit.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
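For illustration only, the kind of change meant here could look like this (a hypothetical hunk, not the actual patch):

    /* before: declared at function scope although only the loop uses them */
    size_t i;
    virDomainNetDefPtr net;

    for (i = 0; i < def->nnets; i++) {
        net = def->nets[i];
        /* ... work with net ... */
    }

    /* after: declarations scoped to the loop that uses them */
    for (size_t i = 0; i < def->nnets; i++) {
        virDomainNetDefPtr net = def->nets[i];
        /* ... work with net ... */
    }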
-
Committed by Michal Privoznik

When using the monolithic daemon, dom->conn has all the driver tables filled in properly and thus it's safe to call an API other than virDomain*(). However, when using split daemons, dom->conn has only the hypervisor driver table set (dom->conn->driver) and the rest is NULL. Therefore, if we want to call a non-domain API (virNetworkLookupByName() in this case), we have to obtain the cached connection object accessible via virGetConnectNetwork().

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
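A minimal sketch of the resulting pattern (error handling trimmed; @netname is an illustrative variable):

    virConnectPtr netconn = NULL;
    virNetworkPtr net = NULL;

    /* With split daemons, dom->conn carries only the hypervisor driver,
     * so fetch the cached connection to the network driver instead. */
    if (!(netconn = virGetConnectNetwork()))
        return -1;

    net = virNetworkLookupByName(netconn, netname);

    /* ... use net ... */

    virObjectUnref(net);
    virObjectUnref(netconn);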
-
Committed by Michal Privoznik

If we use glib alloc functions, we can drop the 'cleanup' label and the @rv variable, and also simplify the code a bit.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
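The pattern, roughly (an illustrative sketch rather than the actual hunk):

    /* g_autofree releases the buffer on every return path, so the
     * 'goto cleanup' bookkeeping and the @rv variable become needless. */
    g_autofree char *path = g_strdup_printf("%s/%s", dir, name);

    if (!virFileExists(path))
        return -1;

    return 0;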
-
Committed by Michal Privoznik

Some variables are not used outside of the for() loop. Move their declarations inside the loop to clean up the code a bit.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
-
Committed by Michal Privoznik

When using the monolithic daemon, dom->conn has all the driver tables filled in properly and thus it's safe to call an API other than virDomain*(). However, when using split daemons, dom->conn has only the hypervisor driver table set (dom->conn->driver) and the rest is NULL. Therefore, if we want to call a non-domain API (virNetworkLookupByName() in this case), we have to obtain the cached connection object accessible via virGetConnectNetwork().

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
-
Committed by Michal Privoznik

If we place qemuDomainInterfaceAddresses() a few lines below the two functions it's using, then we can drop the forward declarations of those functions.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
-
Committed by Pavel Mores

While at bugfixing, convert the whole function to new-style memory allocation handling.

Reviewed-by: Cole Robinson <crobinso@redhat.com>
Signed-off-by: Pavel Mores <pmores@redhat.com>
-
Committed by Ján Tomko

Pick 256k as the limit. While -Wno-frame-larger-than would make more sense for usage in our test suite, the -Wno variant seems to have no effect if -Wframe-larger-than was already specified. Use an (un)reasonably large value instead. Fixes the build with clang:

../../tests/cputest.c:964:1: error: stack frame size of 33176 bytes in function 'mymain' [-Werror,-Wframe-larger-than=]
mymain(void)
^
1 error generated.

Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
-
Committed by Ján Tomko

My commit e73889b6 split the -Wframe-larger-than warning setting into two different variables: STRICT_FRAME_LIMIT_CFLAGS for the library code and RELAXED_FRAME_LIMIT_CFLAGS, which was needed for the tests. Use the strict limit by default and specify the warning flag twice for the parts that require a larger stack frame, relying on the fact that the compiler will pick up the latter value.

Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
-
Committed by Michal Privoznik

This is slightly more complicated because an NVMe disk source is not a simple attribute of the <source/> element. The format in which the PCI address and namespace ID are printed is the same one QEMU accepts them in: nvme://XXXX:XX:XX.X/X

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
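Assuming the address and namespace sit in the usual virPCIDeviceAddress fields plus a namespace ID, the formatting could look roughly like this (a sketch, not the actual formatter):

    /* nvme://XXXX:XX:XX.X/X - controller PCI address, then namespace ID */
    g_autofree char *uri = g_strdup_printf("nvme://%04x:%02x:%02x.%u/%llu",
                                           addr.domain, addr.bus, addr.slot,
                                           addr.function, nsid);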
-
Committed by Michal Privoznik

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
-
Committed by Michal Privoznik

With NVMe disks, one can start a blockjob with an NVMe disk that is not visible in the domain XML (at least not right away). Usually, it's fairly easy to work around this limitation of qemuDomainGetMemLockLimitBytes() - for instance, for hostdevs we temporarily add the device to the domain def, let the function calculate the limit and then remove the device. But it's not so easy with virStorageSourcePtr - in some cases they are not necessarily attached to a disk. And even if they are, it's done later in the process and, frankly, I find it too complicated to be able to use the simple trick we use with hostdevs.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
-
Committed by Michal Privoznik

At the very beginning of the attach function, qemuDomainStorageSourceChainAccessAllow() is called, which modifies CGroups, locks and seclabels for the new disk and its backing chain. This must be followed by a counterpart which reverts all the changes if something goes wrong. This boils down to calling qemuDomainStorageSourceChainAccessRevoke(), which is done under the 'error' label. But not all failure branches jump there; some jump onto the 'cleanup' label where no revoke is done. Such a mistake is easy to make because the 'cleanup' label does exist. Therefore, dissolve the 'error' block into 'cleanup' and have everything jump onto the 'cleanup' label.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
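Schematically, the fix turns the two-label pattern into one label that revokes on failure; a trimmed sketch with hypothetical step names:

    int ret = -1;

    if (qemuDomainStorageSourceChainAccessAllow(driver, vm, disk->src) < 0)
        return -1;

    if (some_attach_step() < 0)
        goto cleanup;   /* every failure path now takes the same exit */

    ret = 0;
 cleanup:
    if (ret < 0)
        qemuDomainStorageSourceChainAccessRevoke(driver, vm, disk->src);
    return ret;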
-
Committed by Michal Privoznik

Because it is HMP we're dealing with here, there is nothing like a class of reply message, so we have to do some string comparison to guess whether the command failed. Well, with NVMe disks a whole new class of errors comes into play, because qemu needs to initialize the IOMMU and VFIO for them. You can see all the messages it may produce in qemu_vfio_init_pci().

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
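The check necessarily boils down to substring matching on the free-form HMP reply, along these lines (the exact phrases scanned for here are illustrative, not taken from the patch):

    /* HMP returns plain text rather than a structured error,
     * so scan the reply for known failure phrases. */
    if (strstr(reply, "Could not open") ||
        strstr(reply, "could not open disk image") ||
        strstr(reply, "VFIO")) {  /* IOMMU/VFIO init errors */
        virReportError(VIR_ERR_OPERATION_FAILED,
                       _("adding drive failed: %s"), reply);
        return -1;
    }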
-
Committed by Michal Privoznik

Now that we have everything prepared, we can generate the command line for NVMe disks.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
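For a disk at PCI address 0000:01:00.0 using namespace 1, the generated option would plausibly look like this (the exact property spelling is an assumption, not quoted from the patch):

    -drive file.driver=nvme,file.device=0000:01:00.0,file.namespace=1,format=raw,if=none,id=drive-virtio-disk0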
-
Committed by Michal Privoznik

This capability tracks whether qemu is capable of:

-drive file.driver=nvme

The feature was added in QEMU commit v2.12.0-rc0~104^2~2.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
-
Committed by Michal Privoznik

This function is currently not called for any type of storage source that is not considered 'local' (as defined by virStorageSourceIsLocalStorage()). Well, NVMe disks are not 'local' from that point of view, and therefore we will need to call this function more frequently.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
-
Committed by Michal Privoznik

If a domain has an NVMe disk configured, then we need to allow it in the devices CGroup so that qemu can access it. There is one caveat though - if an NVMe disk is read-only, we need the CGroup to allow write access too. This is because when opening the device, qemu does a couple of ioctl()-s which are considered writes.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
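In CGroup terms that means always requesting read-write access for the VFIO path; a rough sketch using libvirt's CGroup helper (@vfioPath and the surrounding context are assumed):

    /* Even a read-only NVMe disk needs RW here: qemu's open path
     * does ioctl()s that the devices controller treats as writes. */
    if (virCgroupAllowDevicePath(priv->cgroup, vfioPath,
                                 VIR_CGROUP_DEVICE_RW, false) < 0)
        return -1;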
-
Committed by Michal Privoznik

There are a couple of places where a domain with a VFIO device gets special treatment: in CGroups when enabling/disabling access to /dev/vfio/vfio, and when creating/removing nodes in the domain mount namespace. Well, an NVMe disk is a VFIO device too. Fortunately, we have this qemuDomainNeedsVFIO() function, which is the only place that needs adjustment.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
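After this change the function plausibly reduces to something like (a sketch; the NVMe helper is the one introduced a few commits below):

    bool
    qemuDomainNeedsVFIO(const virDomainDef *def)
    {
        return virDomainDefHasVFIOHostdev(def) ||
               virDomainDefHasNVMeDisk(def);
    }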
-
Committed by Michal Privoznik

If a domain has an NVMe disk configured, then we need to create the /dev/vfio/* paths in the domain's namespace so that qemu can open them.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
-
Committed by Michal Privoznik

We have this beautiful function that does crystal ball divination. The function is named qemuDomainGetMemLockLimitBytes() and it calculates the upper limit of how much locked memory a given guest is going to need. The function bases its guess on the devices defined for the domain. For instance, if there is a VFIO hostdev defined, then it adds 1GiB to the guessed maximum. Since NVMe disks are pretty much VFIO hostdevs (but not quite), we have to do the same sorcery.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
ACKed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
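The NVMe branch presumably mirrors the existing VFIO-hostdev math; roughly (a sketch, in KiB since that's what the function works with):

    /* All guest RAM may be locked for DMA, plus ~1GiB of headroom
     * for the IO space of the passed-through device(s). */
    if (virDomainDefHasNVMeDisk(def))
        memKB = virDomainDefGetMemoryTotal(def) + 1024 * 1024;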
-
Committed by Michal Privoznik

The qemu driver has its own wrappers around the virHostdev module (so that some arguments are filled in automatically). Extend these to include NVMe devices too.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
ACKed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
-
Committed by Michal Privoznik

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
ACKed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
-
Committed by Michal Privoznik

The device configs (which are actually the same one config) come from an NVMe disk of mine.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
ACKed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
-
Committed by Michal Privoznik

Now that we have the virNVMeDevice module (introduced in the previous commit), let's use it in virHostdev to track which NVMe devices are free to be used by a domain and which are taken.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
-
Committed by Michal Privoznik

This module will be used by virHostdevManager and it's inspired by the virPCIDevice module. They are very similar, except for what identifies an NVMe device: a PCI address AND a namespace ID. This means that an NVMe device can appear in a domain multiple times, each time with a different namespace.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
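The identity rule dictates the shape of the type; a plausible sketch of the core struct (field names are assumptions, not copied from the patch):

    struct _virNVMeDevice {
        virPCIDeviceAddress address;  /* PCI address of the NVMe controller */
        unsigned long long namespace; /* namespace ID; the same controller with
                                       * another namespace is another device */
        bool managed;                 /* detach from the host driver for us? */
        char *drvName;                /* driver currently using the device */
        char *domName;                /* domain currently using the device */
    };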
-
Committed by Michal Privoznik

This function will return true if any of the disks (or their backing chains) for a given domain contains an NVMe disk.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
ACKed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
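A minimal sketch of the walk (the inner check could equally use the helper from the commit below):

    bool
    virDomainDefHasNVMeDisk(const virDomainDef *def)
    {
        size_t i;

        for (i = 0; i < def->ndisks; i++) {
            virStorageSourcePtr src;

            /* Inspect the whole backing chain, not just the top image. */
            for (src = def->disks[i]->src; src; src = src->backingStore)
                if (src->type == VIR_STORAGE_TYPE_NVME)
                    return true;
        }
        return false;
    }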
-
Committed by Michal Privoznik

This function will return true if there's a storage source of type VIR_STORAGE_TYPE_NVME, and false otherwise.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
ACKed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
-
Committed by Michal Privoznik

To simplify the implementation, some restrictions are added. For instance, an NVMe disk can't go on any bus but virtio, has to be of type 'disk', and can't have startupPolicy set.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
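In the validation code these restrictions would come out as checks of roughly this shape (illustrative; the error messages are made up):

    if (disk->src->type == VIR_STORAGE_TYPE_NVME) {
        if (disk->bus != VIR_DOMAIN_DISK_BUS_VIRTIO) {
            virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                           _("NVMe disks are only supported on the virtio bus"));
            return -1;
        }
        if (disk->startupPolicy != VIR_DOMAIN_STARTUP_POLICY_DEFAULT) {
            virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                           _("startupPolicy is not supported for NVMe disks"));
            return -1;
        }
    }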
-
Committed by Michal Privoznik

There is this class of PCI devices that act like disks: NVMe. Therefore, they are both PCI devices and disks. While we already have <hostdev/> (and can assign an NVMe device to a domain successfully), we don't have a disk representation. There are three problems with PCI assignment in the case of an NVMe device:

1) domains with <hostdev/> can't be migrated
2) an NVMe device is assigned whole, there's no way to assign only a namespace
3) because hypervisors see <hostdev/> they don't put a block layer on top of it - users don't get all the fancy features like snapshots

NVMe namespaces are a way of splitting one continuous NVDIMM memory into smaller ones, effectively creating smaller NVMe-s (which can then be partitioned, LVMed, etc.)

Because of all of this, the following XML was chosen to model an NVMe device:

<disk type='nvme' device='disk'>
  <driver name='qemu' type='raw'/>
  <source type='pci' managed='yes' namespace='1'>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
-
Committed by Michal Privoznik

There are going to be more disk types that are considered unsafe with respect to migration. Therefore, move the error reporting call outside of the if() body and rework the if-else combo into a switch().

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
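The reworked shape is presumably along these lines (a sketch with a single reporting site; the case labels and message are illustrative):

    bool unsafe = false;

    switch ((virStorageType)disk->src->type) {
    case VIR_STORAGE_TYPE_NVME:
        /* a host-local PCI device can never migrate along */
        unsafe = true;
        break;
    default:
        break;
    }

    if (unsafe) {
        virReportError(VIR_ERR_MIGRATE_UNSAFE, "%s",
                       _("cannot migrate this disk configuration safely"));
        return false;
    }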
-
Committed by Michal Privoznik

This helper is cleaner than a plain memcpy() because one doesn't have to look into the virPCIDeviceAddress struct to check whether it contains any strings / pointers.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
ACKed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
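Since virPCIDeviceAddress holds only scalar fields, the helper can be a plain struct assignment; a sketch:

    void
    virPCIDeviceAddressCopy(virPCIDeviceAddressPtr dst,
                            const virPCIDeviceAddress *src)
    {
        /* Safe precisely because the struct contains no pointers:
         * a shallow copy is a full copy. */
        *dst = *src;
    }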
-
Committed by Michal Privoznik

In the near future we will have a list of PCI devices we want to re-attach to the host (held in a virPCIDeviceListPtr) but no virDomainHostdevDefPtr. That's okay, because virHostdevReAttachPCIDevices() works with virPCIDeviceListPtr mostly anyway, and the very few places where it needs virDomainHostdevDefPtr are not interesting for our case.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
ACKed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
-
Committed by Michal Privoznik

In the near future we will have a list of PCI devices we want to detach (held in a virPCIDeviceListPtr) but no virDomainHostdevDefPtr. That's okay, because virHostdevPreparePCIDevices() works with virPCIDeviceListPtr mostly anyway, and the very few places where it needs virDomainHostdevDefPtr are not interesting for our case.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
ACKed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
-
Committed by Michal Privoznik

Sometimes we have a PCI address but not a fully allocated virPCIDevice, and yet we still want to know its /dev/vfio/N path. Introduce the virPCIDeviceAddressGetIOMMUGroupDev() function exactly for that.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
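Given the existing virPCIDeviceAddressGetIOMMUGroupNum() helper, a plausible sketch of the new function:

    char *
    virPCIDeviceAddressGetIOMMUGroupDev(virPCIDeviceAddressPtr devAddr)
    {
        int groupNum;

        /* Resolve the IOMMU group from sysfs, then build the
         * matching /dev/vfio/N node path. */
        if ((groupNum = virPCIDeviceAddressGetIOMMUGroupNum(devAddr)) < 0)
            return NULL;

        return g_strdup_printf("/dev/vfio/%d", groupNum);
    }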
-
Committed by Michal Privoznik

The previous patches rendered some of the 'cleanup' labels needless. Drop them.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
-
Committed by Michal Privoznik

Now that all callers of qemuDomainGetHostdevPath() handle /dev/vfio/vfio on their own, we can safely drop the handling in this function. In the near future, the decision whether a domain needs the VFIO file is going to include more device types than just virDomainHostdev.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
-
Committed by Michal Privoznik

There are several variables which could be automatically freed upon return from the function. I'm not changing @tmpPaths (which is a string list) because it is going to be removed in the next commit.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
-
Committed by Michal Privoznik

In the near future, the decision what to do with /dev/vfio/vfio with respect to the domain namespace and CGroup is going to be moved out of qemuDomainGetHostdevPath(), because there will be other types of devices than hostdevs that need access to VFIO. All the functions that I'm changing (except qemuSetupHostdevCgroup()) assume that the hostdev we are adding to / removing from a VM is not in the definition yet (because of how qemuDomainNeedsVFIO() is written). Fortunately, this assumption holds. For qemuSetupHostdevCgroup(), the worst thing that may happen is that we allow /dev/vfio/vfio which was already allowed.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
-
Committed by Peter Krempa

In commit 45270337 I forgot to make sure that the tests pass. Add the missing capability to fix the test.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
-