- 24 July 2013, 1 commit
-
Committed by Daniel P. Berrange
Use the new virCgroupNewDetect function to determine the cgroup placement of existing running VMs. This will allow the legacy cgroup creation APIs to be removed entirely.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
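As a rough illustration of the direction, a driver reconnecting to a running VM could detect the existing cgroup from the VM's PID instead of re-creating it. A minimal C sketch, assuming virCgroupNewDetect takes the PID and an output group pointer (the exact signature and error-reporting behaviour are assumptions, not quoted from the patch):

    /* Sketch: attach to whatever cgroup the running VM already lives in,
     * as read from /proc/<pid>/cgroup, rather than using the legacy
     * creation APIs to rebuild the path from driver and VM names. */
    static int
    attachToRunningVM(pid_t vmPid, virCgroupPtr *group)
    {
        if (virCgroupNewDetect(vmPid, group) < 0)
            return -1;   /* assumed: a full libvirt error was reported */
        return 0;
    }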
-
- 23 July 2013, 7 commits
-
Committed by John Ferlan
Split out into its own separate routine
-
Committed by John Ferlan
Make the secret fetching code common for qemuBuildRBDString() and qemuBuildDriveURIString() using the virDomainDiskDef.
-
Committed by John Ferlan
During qemuTranslateDiskSourcePool() execution, if the srcpool has been defined with authentication information, then for iSCSI pools copy the authentication and host information into the virDomainDiskDef.
-
Committed by Peter Krempa
Due to a goto statement missed when refactoring in 2771f8b7, the error path was not taken when acquiring a domain job failed. This resulted in a crash, as an extra reference was removed from a domain object, leading to it being freed. An attempt to list the domains afterwards led to a crash of the daemon. https://bugzilla.redhat.com/show_bug.cgi?id=928672
-
Committed by Osier Yang
The translation must be done before both the cgroup and security setup; otherwise, since the disk source is not yet translated, it might be skipped during cgroup and security setup.
-
Committed by John Ferlan
The difference from the already supported pool types (dir, fs, block) is that there are two modes for an iscsi pool (or network pools in the future): one can specify either to use the volume target path (the path shown on the host) with mode='host', or to use the remote URI supported by qemu (e.g. file=iscsi://example.org:6000/iqn.1992-01.com.example/1) with mode='direct'. For 'host' mode, the volume target path is copied into disk->src. For 'direct' mode, the corresponding info in the *one* pool source host def is copied to disk->hosts[0].
-
Committed by John Ferlan
Introduce a new helper to check if the disk source is of block type
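A hypothetical sketch of what such a helper could look like; the function name and the volume-pool condition are illustrative assumptions, not the patch's actual code:

    /* Illustrative only: a disk source counts as a block device either
     * directly, or when a volume-backed source has been translated to
     * a block volume. */
    static bool
    diskSourceIsBlockType(virDomainDiskDefPtr def)
    {
        if (def->type == VIR_DOMAIN_DISK_TYPE_BLOCK)
            return true;
        if (def->type == VIR_DOMAIN_DISK_TYPE_VOLUME &&
            def->srcpool &&
            def->srcpool->voltype == VIR_STORAGE_VOL_BLOCK)
            return true;
        return false;
    }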
-
- 22 July 2013, 5 commits
-
Committed by Daniel P. Berrange
Convert the remaining methods in vircgroup.c to report errors instead of returning errno values.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
-
Committed by Daniel P. Berrange
Instead of returning raw errno values, report full libvirt errors in the virCgroupNew* functions.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
-
Committed by Jiri Denemark
-
Committed by Jiri Denemark
-
Committed by Viktor Mihajlovski
The alias for hostdevs of type SCSI can be too long for QEMU if large LUNs are encountered. Here's a real life example:

<hostdev mode='subsystem' type='scsi' managed='no'>
  <source>
    <adapter name='scsi_host0'/>
    <address bus='0' target='19' unit='1088634913'/>
  </source>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</hostdev>

This results in a drive id that is too long, with QEMU yelling:

Property 'scsi-generic.drive' can't find value 'drive-hostdev-scsi_host0-0-19-1088634913'

This commit changes the alias back to the default hostdev$(index) scheme.
Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
-
- 20 July 2013, 2 commits
-
Committed by Jiri Denemark
In case libvirtd is asked to unplug a device but the device is actually unplugged later, when libvirtd is not running, we need to detect that and remove such a device when libvirtd starts again and reconnects to running domains.
-
Committed by Jiri Denemark
This API provides a NULL-terminated list of devices which are currently attached to a QEMU domain.
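A usage sketch, assuming the new call is qemuMonitorGetDeviceAliases() returning the NULL-terminated list described above (the name, signature, and the surrounding priv->mon / cleanup helpers are inferred from context, not quoted from the patch):

    char **aliases = NULL;
    size_t i;

    /* Ask the monitor which devices QEMU currently has attached. */
    if (qemuMonitorGetDeviceAliases(priv->mon, &aliases) < 0)
        return -1;

    for (i = 0; aliases[i]; i++)   /* list is NULL-terminated */
        VIR_DEBUG("attached device: %s", aliases[i]);

    virStringFreeList(aliases);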
-
- 19 July 2013, 2 commits
-
Committed by Jiri Denemark
-
Committed by Eric Blake
A future patch wants the DAC security manager to be able to safely get the supplemental group list for a given uid, but at the time of a fork rather than during initialization, so as to pick up on live changes to the system's group database. This patch adds the framework, including the possibility of a pre-fork callback failing. For now, any driver that implements a prefork callback must be robust against the possibility of being part of a security stack where a later element in the chain fails prefork. This means that drivers cannot do any action that requires a call to postfork for proper cleanup (no grabbing a mutex, for example). If this is too prohibitive in the future, we would have to switch to a transactioning sequence, where each driver has (up to) 3 callbacks: PreForkPrepare, PreForkCommit, and PreForkAbort, to either clean up or commit changes made during prepare.

* src/security/security_driver.h (virSecurityDriverPreFork): New callback.
* src/security/security_manager.h (virSecurityManagerPreFork): Change signature.
* src/security/security_manager.c (virSecurityManagerPreFork): Optionally call into driver, and allow returning failure.
* src/security/security_stack.c (virSecurityDriverStack): Wrap the handler for the stack driver.
* src/qemu/qemu_process.c (qemuProcessStart): Adjust caller.

Signed-off-by: Eric Blake <eblake@redhat.com>
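To make the fail-fast constraint concrete, here is a rough C sketch (all names invented for illustration) of why a stack driver's prefork leaves earlier drivers un-rolled-back when a later one fails:

    /* Sketch only: run each nested driver's prefork callback in order.
     * If driver N fails, drivers 0..N-1 have already run and there is
     * no postfork/abort step to undo them -- hence the rule that a
     * prefork callback must not take locks or need later cleanup. */
    static int
    stackPreFork(virSecurityManagerPtr *nested, size_t n)
    {
        size_t i;
        for (i = 0; i < n; i++) {
            if (virSecurityManagerPreFork(nested[i]) < 0)
                return -1;
        }
        return 0;
    }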
-
- 18 July 2013, 11 commits
-
Committed by Jiri Denemark
-
Committed by Jiri Denemark
-
Committed by Jiri Denemark
-
Committed by Jiri Denemark
-
Committed by Peter Krempa
-
Committed by Osier Yang
When either "cpuset" of <vcpu> is specified, or the "placement" of <vcpu> is "auto", setting only cpuset.mems might cause the guest to fail to start. E.g. ("placement" of both <vcpu> and <numatune> is "auto"):

1) Related XMLs:
<vcpu placement='auto'>4</vcpu>
<numatune>
  <memory mode='strict' placement='auto'/>
</numatune>

2) Host NUMA topology:
% numactl --hardware
available: 8 nodes (0-7)
node 0 cpus: 0 4 8 12 16 20 24 28
node 0 size: 16374 MB
node 0 free: 11899 MB
node 1 cpus: 32 36 40 44 48 52 56 60
node 1 size: 16384 MB
node 1 free: 15318 MB
node 2 cpus: 2 6 10 14 18 22 26 30
node 2 size: 16384 MB
node 2 free: 15766 MB
node 3 cpus: 34 38 42 46 50 54 58 62
node 3 size: 16384 MB
node 3 free: 15347 MB
node 4 cpus: 3 7 11 15 19 23 27 31
node 4 size: 16384 MB
node 4 free: 15041 MB
node 5 cpus: 35 39 43 47 51 55 59 63
node 5 size: 16384 MB
node 5 free: 15202 MB
node 6 cpus: 1 5 9 13 17 21 25 29
node 6 size: 16384 MB
node 6 free: 15197 MB
node 7 cpus: 33 37 41 45 49 53 57 61
node 7 size: 16368 MB
node 7 free: 15669 MB

4) cpuset.cpus will be set as (from the debug log):
2013-05-09 16:50:17.296+0000: 417: debug : virCgroupSetValueStr:331 : Set value '/sys/fs/cgroup/cpuset/libvirt/qemu/toy/cpuset.cpus' to '0-63'

5) The advisory nodeset got from querying numad (from the debug log):
2013-05-09 16:50:17.295+0000: 417: debug : qemuProcessStart:3614 : Nodeset returned from numad: 1

6) cpuset.mems will be set as (from the debug log):
2013-05-09 16:50:17.296+0000: 417: debug : virCgroupSetValueStr:331 : Set value '/sys/fs/cgroup/cpuset/libvirt/qemu/toy/cpuset.mems' to '0-7'

I.e., the domain process's memory is restricted to the first NUMA node, but it can use all of the CPUs, which will likely cause the domain process to fail to start because the kernel fails to allocate memory with the memory policy set to "strict".

% tail -n 20 /var/log/libvirt/qemu/toy.log
...
2013-05-09 05:53:32.972+0000: 7318: debug : virCommandHandshakeChild:377 : Handshake with parent is done
char device redirected to /dev/pts/2 (label charserial0)
kvm_init_vcpu failed: Cannot allocate memory
...

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
-
Committed by Martin Kletzander
When the user does not specify any model for a scsi controller, or worse, no controller at all, but libvirt automatically adds a scsi controller with no model, we do not search for virtio-scsi, and thus this can fail, for example on a qemu which doesn't support the lsi logic adapter. This means that when qemu on x86 doesn't support lsi53c895a and the user adds the following to an XML without any scsi controller:

<disk ...>
  ...
  <target dev='sda'>
</disk>

libvirt fails like this:

# virsh define asdf.xml
error: Failed to define domain from asdf.xml
error: internal error Unable to determine model for scsi controller

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=974943
-
Committed by Michal Privoznik
-
Committed by Michal Privoznik
Moreover, since virAsprintf now reports the OOM error itself, there's no need to call virReportOOMError in the error path.
-
Committed by Ján Tomko
When virAsprintf was changed from a function to a macro reporting the OOM error in dc6f2dad, it was documented as returning 0 on success. This is incorrect; it returns the number of bytes written, as asprintf does. Some of the functions were converted to use virAsprintf's return value directly, changing the return value on success from 0 to >= 0. For most of these, this is not a problem, but the change in virPCIDriverDir breaks PCI passthrough. The return value check in virhashtest pre-dates the virAsprintf OOM conversion. vmwareMakePath seems to be unused.
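The safe calling idiom, then, is to test for a negative result rather than a nonzero one. A small sketch (the path variable names are made up for illustration):

    char *driverDir = NULL;

    /* virAsprintf returns the byte count (>= 0) on success, like
     * asprintf, and -1 on failure -- reporting the OOM error itself.
     * Comparing the result against 0 is exactly the virPCIDriverDir
     * class of bug; check for < 0 instead. */
    if (virAsprintf(&driverDir, "%s/driver", deviceSysfsPath) < 0)
        return NULL;   /* error already reported */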
-
Committed by Daniel P. Berrange
Merge the virCommandPreserveFD / virCommandTransferFD methods into a single virCommandPassFD method, and use a new VIR_COMMAND_PASS_FD_CLOSE_PARENT flag to indicate the difference in their behaviour.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
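Callers then distinguish the two old behaviours purely by the flag; a brief sketch (the command path and fd names are invented for illustration):

    virCommandPtr cmd = virCommandNew("/usr/bin/some-helper");

    /* Old virCommandTransferFD(): the child inherits the fd and the
     * parent's copy is closed once the command is running. */
    virCommandPassFD(cmd, logfd, VIR_COMMAND_PASS_FD_CLOSE_PARENT);

    /* Old virCommandPreserveFD(): the child inherits the fd but the
     * parent keeps its copy open. */
    virCommandPassFD(cmd, monitorfd, 0);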
-
- 17 July 2013, 7 commits
-
Committed by Michal Privoznik
In all qemu APIs we tend to prefer qemuDomObjFromDomain over virDomainObjListFindByUUID, but somehow qemuDomainGetSchedulerType was left unattended.
-
Committed by Jiri Denemark
-
Committed by Jiri Denemark
-
Committed by Jiri Denemark
-
Committed by Jiri Denemark
-
Committed by Jiri Denemark
-
Committed by Eric Blake
Introduced in commit 24b08219; compilation on RHEL 6.4 complained:

qemu/qemu_hotplug.c: In function 'qemuDomainAttachChrDevice':
qemu/qemu_hotplug.c:1257: error: declaration of 'remove' shadows a global declaration [-Wshadow]
/usr/include/stdio.h:177: error: shadowed declaration is here [-Wshadow]

* src/qemu/qemu_hotplug.c (qemuDomainAttachChrDevice): Avoid the name 'remove'.

Signed-off-by: Eric Blake <eblake@redhat.com>
-
- 16 July 2013, 5 commits
-
Committed by Peter Krempa
A part of the returned monitor response was freed twice, which caused crashes of the daemon when using guest agent cpu count retrieval:

# virsh vcpucount dom --guest

Introduced in v1.0.6-48-gc6afcb05
-
Committed by John Ferlan
Implement the new API that will handle setting the balloon driver statistics collection period in order to enable or disable the collection dynamically.
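For reference, a minimal client-side sketch of using the public API this series exposes, virDomainSetMemoryStatsPeriod, assuming an already-looked-up virDomainPtr named dom:

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    /* Collect balloon statistics every 5 seconds on the live domain;
     * a period of 0 disables collection again. */
    if (virDomainSetMemoryStatsPeriod(dom, 5, VIR_DOMAIN_AFFECT_LIVE) < 0)
        fprintf(stderr, "failed to set the collection period\n");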
-
Committed by John Ferlan
This patch adds qemuMonitorJSONGetMemoryStats() to fetch "guest-stats" on the balloon path using "qom-get", replacing the former mechanism, which looked through the data returned by "query-balloon" for the fields. The "query-balloon" code only returns 'actual' memory. Rather than duplicating the existing code, have the JSON API use the GetBalloonInfo API. A check in qemuMonitorGetMemoryStats() will be made to ensure the balloon driver path has been set. Since the underlying JSON code can return data not associated with the balloon driver, we don't fail on a failure to get the balloon path. Of course, since we've made the check, we can then set the ballooninit flag. Getting the path here is primarily due to the process reconnect path, which doesn't attempt to set the collection period.
-
Committed by John Ferlan
At vm startup and attach, attempt to set the balloon driver statistics collection period based on the value found in the domain xml file. This is not done at reconnect, since it's possible that a collection period was set on the live guest, and making the set-period call would reset it to whatever value is stored in the config file. Setting the stats collection period has a side effect of searching through the qom-list output for the virtio balloon driver and making sure that it has the right properties in order to allow setting of a collection period and eventually fetching of statistics. The walk through the qom-list is expensive, thus the balloon path is saved in the monitor private structure, along with a flag indicating that the initialization has already been attempted (in the event that a path is not found, there is no sense in checking again). This processing model conforms to the qom object model, which requires setting object properties after device startup. That is, it's not possible to pass the period along via the startup code, as it won't be recognized.
-
Committed by Alex Jia
If users haven't configured a guest agent, then qemuAgentCommand() will dereference a NULL 'mon' pointer, which causes a crash of libvirtd when using agent-based cpu (un)plug. With the patch, when the qemu-ga service isn't running in the guest, the expected error "error: Guest agent is not responding: Guest agent not available for now" is raised, and the error "error: argument unsupported: QEMU guest agent is not configured" is raised when the guest hasn't been configured with a guest agent.

GDB backtrace:
(gdb) bt
#0 virNetServerFatalSignal (sig=11, siginfo=<value optimized out>, context=<value optimized out>) at rpc/virnetserver.c:326
#1 <signal handler called>
#2 qemuAgentCommand (mon=0x0, cmd=0x7f39300017b0, reply=0x7f394b090910, seconds=-2) at qemu/qemu_agent.c:975
#3 0x00007f39429507f6 in qemuAgentGetVCPUs (mon=0x0, info=0x7f394b0909b8) at qemu/qemu_agent.c:1475
#4 0x00007f39429d9857 in qemuDomainGetVcpusFlags (dom=<value optimized out>, flags=9) at qemu/qemu_driver.c:4849
#5 0x00007f3957dffd8d in virDomainGetVcpusFlags (domain=0x7f39300009c0, flags=8) at libvirt.c:9843

How to reproduce?
# Start a guest without a guest agent configuration,
# then run the following command:
# virsh vcpucount foobar --guest
error: End of file while reading data: Input/output error
error: One or more references were leaked after disconnect from the hypervisor
error: Failed to reconnect to the hypervisor

RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=984821
Signed-off-by: Alex Jia <ajia@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
-