- 26 July 2013, 3 commits
-
-
Committed by John Ferlan
Adjust these drivers to handle their Autostart functionality after each of the drivers has gone through its Initialization function.
-
Committed by Daniel P. Berrange
When detecting cgroups we must honour any controller whitelist the driver may have. Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
-
Committed by Daniel P. Berrange
Instead of requiring drivers to use a combination of calls to virCgroupNewDetect and virCgroupIsValidMachine, combine the two into virCgroupNewDetectMachine. Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
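A minimal sketch of how a caller might use the combined helper; the parameter list shown here is an assumption for illustration, not the verified libvirt signature.

    /* Sketch only: parameter names and order are assumptions. */
    virCgroupPtr cgroup = NULL;

    if (virCgroupNewDetectMachine(vm->def->name,  /* machine name */
                                  "qemu",         /* driver name */
                                  vm->pid,        /* leader PID */
                                  -1,             /* controllers: honour the driver's whitelist */
                                  &cgroup) < 0)
        goto cleanup;   /* detection failed or the cgroup does not belong to this VM */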
-
- 25 July 2013, 2 commits
-
-
Committed by Ján Tomko
Both virStoragePoolFree and virStorageVolFree reset the last error, which might lead to the cryptic message:
    An error occurred, but the cause is unknown
When the volume wasn't found, virStorageVolFree was called with NULL, leading to an error:
    invalid storage volume pointer in virStorageVolFree
This patch changes it to:
    Storage volume not found: no storage vol with matching name 'tomato'
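A sketch of the calling pattern this change implies (the surrounding variables are illustrative); the key point is never calling the *Free functions on NULL, so the original lookup error is preserved instead of being overwritten.

    virStorageVolPtr vol = virStorageVolLookupByName(pool, name);
    if (!vol)
        goto cleanup;   /* keeps "Storage volume not found: ..." as the last error */

    /* ... use vol ... */

 cleanup:
    if (vol)
        virStorageVolFree(vol);      /* only free valid pointers */
    if (pool)
        virStoragePoolFree(pool);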
-
Committed by Daniel P. Berrange
Convert the QEMU driver code to use the new atomic API for setup of cgroups. Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
-
- 24 July 2013, 8 commits
-
-
Committed by Martin Kletzander
In two places, the usage of open() is replaced with qemuOpenFile, as that is the preferred method in those cases. Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=963881
-
Committed by Martin Kletzander
The qemuOpenFile() function knew nothing about seclabels applied only to VMs, so if the seclabel differed from the "user:group" in the configuration, there could be issues opening files. Make qemuOpenFile() VM-aware, but only optionally: passing a NULL argument means skipping VM seclabel info completely. However, all current qemuOpenFile() callers look like they should use the VM seclabel info when there is any, so convert those calls as well. Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=869053
-
Committed by Laine Stump
Since PCI bridges, PCIe bridges, PCIe switches, and PCIe root ports all share the same namespace, they are all defined as controllers of type='pci' in libvirt (but with a differing model attribute). Each of these controllers has a certain connection type upstream, allows certain connection types downstream, and each can either allow a single downstream connection at slot 0, or connections from slots 1 - 31. Right now, we only support the pci-root and pci-bridge devices, both of which only allow PCI devices to connect, and both of which have usable slots 1 - 31.

In preparation for adding other types of controllers that have different capabilities, this patch 1) adds info to the qemuDomainPCIAddressBus object to indicate the capabilities, 2) sets those capabilities appropriately for pci-root and pci-bridge devices, and 3) validates that the controller being connected to is the proper type when allocating slots or validating that a user-selected slot is appropriate for a device. Having this infrastructure in place will make it much easier to add support for the other PCI controller types.

While it would be possible to do all the necessary checking by just storing the controller model in the qemuDomainPCIAddressBus, it greatly simplifies all the validation code to also keep a "flags", "minSlot" and "maxSlot" for each - that way we can just check those attributes rather than requiring a nearly identical switch statement everywhere we need to validate compatibility. You may notice many places where the flags are seemingly hard-coded to QEMU_PCI_CONNECT_HOTPLUGGABLE | QEMU_PCI_CONNECT_TYPE_PCI. This is currently the correct value for all PCI devices, and in the future will be the default, with small bits of code added to change the flags for the few devices which are the exceptions to this rule.

Finally, there are a few places with "FIXME" comments. Note that these aren't indicating places that are broken according to the currently supported devices; they are places that will need fixing when support for new PCI controller models is added.

To assure that there was no regression in the auto-allocation of PCI addresses or auto-creation of integrated pci-root, ide, and usb controllers, a new test case (pci-bridge-many-disks) has been added to both the qemuxml2argv and qemuxml2xml tests. This new test defines a domain with several dozen virtio disks but no pci-root or pci-bridges. The .args file of the new test case was created using libvirt sources from before this patch, and the test still passes after this patch has been applied.
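A sketch of why per-bus "flags", "minSlot" and "maxSlot" simplify the validation; the field and flag names follow the message above, but the struct layout and the helper function are illustrative, not the exact libvirt code.

    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {
        uint8_t      slots[32];   /* per-slot usage, as before                 */
        unsigned int flags;       /* QEMU_PCI_CONNECT_* capability bits        */
        unsigned int minSlot;     /* 0 for single-slot controllers             */
        unsigned int maxSlot;     /* 31 for pci-root / pci-bridge              */
    } qemuDomainPCIAddressBus;

    /* Compatibility checks collapse to attribute comparisons instead of a
     * near-identical switch statement at every call site: */
    static bool
    qemuPCIBusAcceptsDevice(qemuDomainPCIAddressBus *bus,
                            unsigned int devFlags,   /* e.g. QEMU_PCI_CONNECT_TYPE_PCI */
                            unsigned int slot)
    {
        return (bus->flags & devFlags) != 0 &&
               slot >= bus->minSlot && slot <= bus->maxSlot;
    }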
-
Committed by Laine Stump
Although these two enums are named ..._LAST, they really had the value of ..._SIZE. This patch changes their values so that, e.g., QEMU_PCI_ADDRESS_SLOT_LAST really is the slot number of the last slot on a PCI bus.
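A small illustrative sketch of the distinction; the numeric values assume a standard 32-slot PCI bus and the enum names are made up for the example, not copied from the source.

    /* "_SIZE" semantics: the number of slots, one past the last index. */
    enum { EXAMPLE_SLOT_SIZE = 32 };   /* valid slots: 0..31 */

    /* "_LAST" semantics after this patch: the index of the last slot. */
    enum { EXAMPLE_SLOT_LAST = 31 };

    /* so iteration changes from  for (i = 0; i < EXAMPLE_SLOT_SIZE; i++)
     * to                         for (i = 0; i <= EXAMPLE_SLOT_LAST; i++) */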
-
Committed by Laine Stump
The implicit IDE, USB, and video controllers provided by the PIIX3 chipset in the pc-* machinetypes are not present on other machinetypes, so we shouldn't be doing the special checking for them. This patch places those validation checks into a separate function that is only called for machine types that have a PIIX3 chip (which happens to be the i440fx-based pc-* machine types). One qemuxml2argv test data file had to be changed - the pseries-usb-multi test had included a piix3-usb-uhci device, which was being placed at a specific address, and also had slot 2 auto reserved for a video device, but the pseries virtual machine doesn't actually have a PIIX3 chip, so even if there was a piix3-usb-uhci driver for it, the device wouldn't need to reside at slot 1 function 2. I just changed the .argv file to have the generic slot info for the two devices that results when the special PIIX3 code isn't executed.
-
Committed by Laine Stump
qemuDomainPCIAddressBus was an array of QEMU_PCI_ADDRESS_SLOT_LAST uint8_t's, which worked fine as long as every PCI bus was identical. In the future, some PCI buses will allow connecting PCI devices and some will allow PCIe devices; also, some will only allow connection of a single device, while others will allow connecting 31 devices. In order to keep track of that information for each bus, we need to turn qemuDomainPCIAddressBus into a struct, for now with just one member: uint8_t slots[QEMU_PCI_ADDRESS_SLOT_LAST]; Additional members will come in later patches. The item in qemuDomainPCIAddressSet that contains the array of qemuDomainPCIAddressBus is now called "buses" to be more consistent with the already existing "nbuses" (and with the new "slots" array).
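A sketch of the transition described above; the array bound macro and member names come from the message, the typedef syntax and the shape of the address-set struct are illustrative assumptions.

    #include <stdint.h>
    #include <stddef.h>

    /* Before: a bus was a bare array of slot-usage bytes, e.g.
     *   typedef uint8_t qemuDomainPCIAddressBus[QEMU_PCI_ADDRESS_SLOT_LAST]; */

    /* After: a struct, so later patches can hang per-bus attributes off it. */
    typedef struct {
        uint8_t slots[QEMU_PCI_ADDRESS_SLOT_LAST];
    } qemuDomainPCIAddressBus;

    /* The address set now stores these in a member named "buses",
     * matching the existing "nbuses" counter: */
    struct _qemuDomainPCIAddressSet {
        qemuDomainPCIAddressBus *buses;
        size_t nbuses;
        /* ... */
    };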
-
Committed by Daniel P. Berrange
Currently the QEMU driver creates the VM's cgroup prior to forking, and then uses a virCommand hook to move the child into the cgroup. This won't work with systemd, whose APIs do the creation of cgroups + attachment of processes atomically. Fortunately we have a handshake taking place between the QEMU driver and the child process prior to QEMU being exec()d, which was introduced to allow setup of disk locking. By good fortune this synchronization point can be used to enable the QEMU driver to do atomic setup of cgroups, removing the use of the hook script. Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
-
Committed by Daniel P. Berrange
Use the new virCgroupNewDetect function to determine cgroup placement of existing running VMs. This will allow the legacy cgroup creation APIs to be removed entirely. Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
-
- 23 July 2013, 7 commits
-
-
Committed by John Ferlan
Split out into its own separate routine
-
Committed by John Ferlan
Make the secret fetching code common for qemuBuildRBDString() and qemuBuildDriveURIString() using the virDomainDiskDef.
-
Committed by John Ferlan
During qemuTranslateDiskSourcePool() execution, if the srcpool has been defined with authentication information, then for iSCSI pools copy the authentication and host information to virDomainDiskDef.
-
Committed by Peter Krempa
Due to a goto statement missed when refactoring in 2771f8b7, the error path was not taken when acquiring a domain job failed. This resulted in a crash afterwards, as an extra reference was removed from a domain object, leading to it being freed. An attempt to list the domains then led to a crash of the daemon. https://bugzilla.redhat.com/show_bug.cgi?id=928672
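An illustrative reconstruction of this class of bug; the helper named here is a real libvirt function, but the surrounding code is a placeholder, not the exact code around commit 2771f8b7.

    if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0)
        goto cleanup;    /* <-- this kind of goto was lost in the refactoring */

    /* Without the goto, execution falls through to the success path; the
     * later cleanup code drops a domain object reference that was never
     * taken, the refcount hits zero too early, the object is freed, and a
     * subsequent attempt to list domains walks freed memory and crashes
     * the daemon. */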
-
Committed by Osier Yang
The translation must be done before both the cgroup and security setup; otherwise, since the disk source is not yet translated, it might be skipped during cgroup and security setup.
-
Committed by John Ferlan
The difference from the already supported pool types (dir, fs, block) is that there are two modes for an iscsi pool (or network pools in the future): one can specify either to use the volume target path (the path shown on the host) with mode='host', or to use the remote URI that qemu supports (e.g. file=iscsi://example.org:6000/iqn.1992-01.com.example/1) with mode='direct'. For 'host' mode, the volume target path is copied into disk->src. For 'direct' mode, the corresponding info in the *one* pool source host def is copied to disk->hosts[0].
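A condensed sketch of the two branches described above; the mode enum constants are libvirt names, but the structure members and helper variables here are illustrative placeholders rather than the exact translation code.

    switch (def->srcpool->mode) {
    case VIR_DOMAIN_DISK_SOURCE_POOL_MODE_HOST:
        /* use the volume target path as seen on the host */
        def->src = volTargetPath;                 /* copied into disk->src */
        break;

    case VIR_DOMAIN_DISK_SOURCE_POOL_MODE_DIRECT:
        /* let qemu access the target directly, i.e. the
         * file=iscsi://example.org:6000/iqn.../1 form; the single pool
         * source host is copied into disk->hosts[0] */
        def->hosts[0].name = poolHostName;
        def->hosts[0].port = poolHostPort;
        break;
    }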
-
Committed by John Ferlan
Introduce a new helper to check if the disk source is of block type
-
- 22 July 2013, 5 commits
-
-
Committed by Daniel P. Berrange
Convert the remaining methods in vircgroup.c to report errors instead of returning errno values. Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
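A before/after sketch of the conversion pattern this and the following commit describe; virReportSystemError() is the real libvirt helper, while the surrounding check and the message text are only examples.

    /* Before: callers had to interpret a raw errno value. */
    if (access(path, F_OK) < 0)
        return -errno;

    /* After: a full libvirt error is reported at the point of failure and
     * the function simply returns -1. */
    if (access(path, F_OK) < 0) {
        virReportSystemError(errno,
                             _("Unable to access cgroup path '%s'"), path);
        return -1;
    }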
-
Committed by Daniel P. Berrange
Instead of returning raw errno values, report full libvirt errors in the virCgroupNew* functions. Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
-
Committed by Jiri Denemark
-
Committed by Jiri Denemark
-
Committed by Viktor Mihajlovski
The alias for hostdevs of type SCSI can be too long for QEMU if larger LUNs are encountered. Here's a real-life example:

  <hostdev mode='subsystem' type='scsi' managed='no'>
    <source>
      <adapter name='scsi_host0'/>
      <address bus='0' target='19' unit='1088634913'/>
    </source>
    <address type='drive' controller='0' bus='0' target='0' unit='0'/>
  </hostdev>

This results in a drive id that is too long, making QEMU complain:

  Property 'scsi-generic.drive' can't find value 'drive-hostdev-scsi_host0-0-19-1088634913'

This commit changes the alias back to the default hostdev$(index) scheme. Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
-
- 20 July 2013, 2 commits
-
-
Committed by Jiri Denemark
In case libvirtd is asked to unplug a device but the device is actually unplugged later, when libvirtd is not running, we need to detect that and remove such a device when libvirtd starts again and reconnects to running domains.
-
Committed by Jiri Denemark
This API provides a NULL-terminated list of devices which are currently attached to a QEMU domain.
-
- 19 July 2013, 2 commits
-
-
Committed by Jiri Denemark
-
Committed by Eric Blake
A future patch wants the DAC security manager to be able to safely get the supplemental group list for a given uid, but at the time of a fork rather than during initialization so as to pick up on live changes to the system's group database. This patch adds the framework, including the possibility of a pre-fork callback failing.

For now, any driver that implements a prefork callback must be robust against the possibility of being part of a security stack where a later element in the chain fails prefork. This means that drivers cannot do any action that requires a call to postfork for proper cleanup (no grabbing a mutex, for example). If this is too prohibitive in the future, we would have to switch to a transactioning sequence, where each driver has (up to) 3 callbacks: PreForkPrepare, PreForkCommit, and PreForkAbort, to either clean up or commit changes made during prepare.

* src/security/security_driver.h (virSecurityDriverPreFork): New callback.
* src/security/security_manager.h (virSecurityManagerPreFork): Change signature.
* src/security/security_manager.c (virSecurityManagerPreFork): Optionally call into driver, and allow returning failure.
* src/security/security_stack.c (virSecurityDriverStack): Wrap the handler for the stack driver.
* src/qemu/qemu_process.c (qemuProcessStart): Adjust caller.

Signed-off-by: Eric Blake <eblake@redhat.com>
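A sketch of the shape of the new callback; the callback typedef name comes from the changelog above, but the struct members and the stack wrapper are simplified assumptions, not the exact libvirt declarations.

    typedef int (*virSecurityDriverPreFork)(virSecurityManagerPtr mgr);

    struct _virSecurityDriver {
        /* ... existing callbacks ... */
        virSecurityDriverPreFork preFork;  /* may fail; there is no postfork,
                                            * so it must not need cleanup
                                            * (e.g. must not grab a mutex) */
    };

    /* The stack driver wraps the callback so that any nested manager
     * failing prefork fails the whole stack: */
    static int
    virSecurityStackPreFork(virSecurityManagerPtr mgr)
    {
        /* iterate the nested managers, returning -1 on the first failure */
        return 0;
    }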
-
- 18 July 2013, 11 commits
-
-
Committed by Jiri Denemark
-
Committed by Jiri Denemark
-
Committed by Jiri Denemark
-
Committed by Jiri Denemark
-
Committed by Peter Krempa
-
Committed by Osier Yang
When either "cpuset" of <vcpu> is specified, or the "placement" of <vcpu> is "auto", setting only cpuset.mems might cause the guest to fail to start. E.g. ("placement" of both <vcpu> and <numatune> is "auto"):

1) Related XMLs:
  <vcpu placement='auto'>4</vcpu>
  <numatune>
    <memory mode='strict' placement='auto'/>
  </numatune>

2) Host NUMA topology:
  % numactl --hardware
  available: 8 nodes (0-7)
  node 0 cpus: 0 4 8 12 16 20 24 28
  node 0 size: 16374 MB
  node 0 free: 11899 MB
  node 1 cpus: 32 36 40 44 48 52 56 60
  node 1 size: 16384 MB
  node 1 free: 15318 MB
  node 2 cpus: 2 6 10 14 18 22 26 30
  node 2 size: 16384 MB
  node 2 free: 15766 MB
  node 3 cpus: 34 38 42 46 50 54 58 62
  node 3 size: 16384 MB
  node 3 free: 15347 MB
  node 4 cpus: 3 7 11 15 19 23 27 31
  node 4 size: 16384 MB
  node 4 free: 15041 MB
  node 5 cpus: 35 39 43 47 51 55 59 63
  node 5 size: 16384 MB
  node 5 free: 15202 MB
  node 6 cpus: 1 5 9 13 17 21 25 29
  node 6 size: 16384 MB
  node 6 free: 15197 MB
  node 7 cpus: 33 37 41 45 49 53 57 61
  node 7 size: 16368 MB
  node 7 free: 15669 MB

4) cpuset.cpus will be set as (from the debug log):
  2013-05-09 16:50:17.296+0000: 417: debug : virCgroupSetValueStr:331 : Set value '/sys/fs/cgroup/cpuset/libvirt/qemu/toy/cpuset.cpus' to '0-63'

5) The advisory nodeset returned from querying numad (from the debug log):
  2013-05-09 16:50:17.295+0000: 417: debug : qemuProcessStart:3614 : Nodeset returned from numad: 1

6) cpuset.mems will be set as (from the debug log):
  2013-05-09 16:50:17.296+0000: 417: debug : virCgroupSetValueStr:331 : Set value '/sys/fs/cgroup/cpuset/libvirt/qemu/toy/cpuset.mems' to '0-7'

I.e., the domain process's memory is restricted to the first NUMA node, however it can use all of the CPUs, which will likely cause the domain process to fail to start because the kernel fails to allocate memory with the memory policy set to "strict".

  % tail -n 20 /var/log/libvirt/qemu/toy.log
  ...
  2013-05-09 05:53:32.972+0000: 7318: debug : virCommandHandshakeChild:377 : Handshake with parent is done
  char device redirected to /dev/pts/2 (label charserial0)
  kvm_init_vcpu failed: Cannot allocate memory
  ...

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
-
Committed by Martin Kletzander
When the user does not specify any model for a scsi controller, or worse, no controller at all, but libvirt automatically adds a scsi controller with no model, we are not searching for virtio-scsi and thus this can fail, for example on a qemu which doesn't support the lsi logic adapter. This means that when qemu on x86 doesn't support lsi53c895a and the user adds the following to an XML without any scsi controller:

  <disk ...>
    ...
    <target dev='sda'/>
  </disk>

libvirt fails like this:

  # virsh define asdf.xml
  error: Failed to define domain from asdf.xml
  error: internal error Unable to determine model for scsi controller

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=974943
-
Committed by Michal Privoznik
-
Committed by Michal Privoznik
Moreover, since virAsprintf now reports an OOM error itself, there's no need to call virReportOOMError in the error path.
-
Committed by Ján Tomko
When virAsprintf was changed from a function to a macro reporting the OOM error in dc6f2dad, it was documented as returning 0 on success. This is incorrect: it returns the number of bytes written, as asprintf does. Some of the functions were converted to use virAsprintf's return value directly, changing the return value on success from 0 to >= 0. For most of these this is not a problem, but the change in virPCIDriverDir breaks PCI passthrough. The return value check in virhashtest pre-dates the virAsprintf OOM conversion. vmwareMakePath seems to be unused.
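The safe calling convention this implies, as a short sketch; the path built here is only an example, not taken from virPCIDriverDir.

    char *buf = NULL;

    /* virAsprintf, like asprintf, returns the number of bytes written, so
     * success must be tested as ">= 0" (or failure as "< 0"), never by
     * comparing the result against 0. */
    if (virAsprintf(&buf, "%s/driver", "/sys/bus/pci/devices/0000:00:1f.2") < 0)
        return -1;    /* OOM was already reported by virAsprintf */

    /* WRONG: treats every successful, non-empty result as an error */
    /* if (virAsprintf(&buf, ...) != 0) goto error; */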
-
Committed by Daniel P. Berrange
Merge the virCommandPreserveFD / virCommandTransferFD methods into a single virCommandPassFD method, and use a new VIR_COMMAND_PASS_FD_CLOSE_PARENT flag to indicate the difference in behaviour. Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
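A sketch of the merged call; the flag name is taken from the message above, while the exact flag semantics shown in the comments are an interpretation of the described behaviour.

    /* Previously two methods:
     *   virCommandPreserveFD(cmd, fd);   -- parent keeps its copy of fd
     *   virCommandTransferFD(cmd, fd);   -- ownership moves, parent's copy is closed
     * Now a single entry point, with the flag expressing the difference: */
    virCommandPassFD(cmd, fd, 0);                                 /* fd stays open in the parent */
    virCommandPassFD(cmd, fd, VIR_COMMAND_PASS_FD_CLOSE_PARENT);  /* parent's copy is closed */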
-