- 21 Jul 2011, 1 commit

By Wen Congyang
This patch implements period and quota tunable XML configuration and parsing. A quota or period of zero will be simply ignored.
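
For context, a minimal sketch (not part of the commit) of feeding such a <cputune> period/quota fragment to libvirt through the public define API; the element names follow the libvirt domain schema, while the surrounding domain XML is only a trimmed placeholder.

    /* Minimal sketch: defining a domain whose XML carries the period/quota
     * tunables.  The domain XML is deliberately incomplete for brevity. */
    #include <stdio.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        const char *xml =
            "<domain type='kvm'>"
            "  <name>demo</name>"
            "  <memory>524288</memory>"
            "  <vcpu>2</vcpu>"
            "  <cputune>"
            "    <period>100000</period>"  /* enforcement interval, microseconds */
            "    <quota>50000</quota>"     /* runtime per period; 0 would be ignored */
            "  </cputune>"
            "  <os><type>hvm</type></os>"
            "</domain>";

        virConnectPtr conn = virConnectOpen("qemu:///system");
        if (!conn)
            return 1;

        virDomainPtr dom = virDomainDefineXML(conn, xml);
        if (!dom)
            fprintf(stderr, "define failed\n");
        else
            virDomainFree(dom);

        virConnectClose(conn);
        return 0;
    }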
-
- 19 Jul 2011, 1 commit

By Eric Blake
There were two APIs in driver.c that were silently masking flags bits prior to calling out to the drivers, and several others that were explicitly masking flags bits. This is not forward-compatible - if we ever have that many flags in the future, then talking to an old server that masks out the flags would be indistinguishable from talking to a new server that can honor the flag. In general, libvirt.c should forward _all_ flags on to drivers, and only the drivers should reject unknown flags. In the case of virDrvSecretGetValue, the solution is to separate the internal driver callback function to have two parameters instead of one, with only one parameter affected by the public API. In the case of virDomainGetXMLDesc, it turns out that no one was ever mixing VIR_DOMAIN_XML_INTERNAL_STATUS with the dumpxml path in the first place; that internal flag was only used in saving and restoring state files, which happened to be in functions internal to a single file, so there is no mixing of the internal flag with a public flags argument. Additionally, virDomainMemoryStats passed a flags argument over RPC, but not to the driver.

* src/driver.h (VIR_DOMAIN_XML_FLAGS_MASK) (VIR_SECRET_GET_VALUE_FLAGS_MASK): Delete. (virDrvSecretGetValue): Separate out internal flags. (virDrvDomainMemoryStats): Provide missing flags argument.
* src/driver.c (verify): Drop unused check.
* src/conf/domain_conf.h (virDomainObjParseFile): Delete declaration. (virDomainXMLInternalFlags): Move...
* src/conf/domain_conf.c: ...here. Delete redundant include. (virDomainObjParseFile): Make static.
* src/libvirt.c (virDomainGetXMLDesc, virSecretGetValue): Update clients. (virDomainMemoryPeek, virInterfaceGetXMLDesc) (virDomainMemoryStats, virDomainBlockPeek, virNetworkGetXMLDesc) (virStoragePoolGetXMLDesc, virStorageVolGetXMLDesc) (virNodeNumOfDevices, virNodeListDevices, virNWFilterGetXMLDesc): Don't mask unknown flags.
* src/interface/netcf_driver.c (interfaceGetXMLDesc): Reject unknown flags.
* src/secret/secret_driver.c (secretGetValue): Update clients.
* src/remote/remote_driver.c (remoteSecretGetValue) (remoteDomainMemoryStats): Likewise.
* src/qemu/qemu_process.c (qemuProcessGetVolumeQcowPassphrase): Likewise.
* src/qemu/qemu_driver.c (qemudDomainMemoryStats): Likewise.
* daemon/remote.c (remoteDispatchDomainMemoryStats): Likewise.
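
A small illustration of the driver-side convention this commit moves towards; CHECK_FLAGS is a simplified stand-in for libvirt's internal virCheckFlags macro, shown only to make the "reject, don't mask" idea concrete.

    /* Sketch: libvirt.c forwards all flags, and each driver rejects the
     * bits it does not understand instead of silently masking them. */
    #include <stdio.h>

    #define MYDRV_XML_SECURE    (1u << 0)
    #define MYDRV_XML_INACTIVE  (1u << 1)

    #define CHECK_FLAGS(flags, allowed, retval)                      \
        do {                                                         \
            if ((flags) & ~(allowed)) {                              \
                fprintf(stderr, "unsupported flags 0x%x\n",          \
                        (flags) & ~(allowed));                       \
                return (retval);                                     \
            }                                                        \
        } while (0)

    static int mydrvDomainGetXMLDesc(unsigned int flags)
    {
        CHECK_FLAGS(flags, MYDRV_XML_SECURE | MYDRV_XML_INACTIVE, -1);
        /* a real driver would format and return the XML here */
        return 0;
    }

    int main(void)
    {
        if (mydrvDomainGetXMLDesc(1u << 5) < 0)
            fprintf(stderr, "unknown flag rejected by the driver, not masked\n");
        return 0;
    }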
-
- 14 Jul 2011, 1 commit

By Jiri Denemark
When creating a new qemu process we saved the domain status XML only after the process was fully set up and running. If libvirtd was killed before the whole setup finished, once libvirtd started again it didn't know anything about the new process and we ended up with an orphaned qemu process. Let's save the domain status XML as soon as we know the PID so that libvirtd can kill the process on restart.
-
- 13 Jul 2011, 5 commits

By Jiri Denemark
Detect and react to situations where libvirtd was restarted or killed while a job was active.
-
By Jiri Denemark
If libvirtd is restarted when a job is running, the new libvirtd process needs to know about that to be able to recover and roll back the operation.
-
By Jiri Denemark
Query commands are safe to be called during long running jobs (such as migration). This patch makes them all work without the need to special-case every single one of them.

The patch introduces a new job.asyncCond condition and an associated job.asyncJob which are dedicated to asynchronous (from the qemu monitor point of view) jobs that can take an arbitrarily long time to finish while the qemu monitor is still usable for other commands. The existing job.active (and job.cond condition) is used for all other synchronous jobs (including the commands run during an async job).

The locking schema is changed to use these two conditions. While an asyncJob is active, only an allowed set of synchronous jobs may run (the set can differ per asyncJob), so any method that communicates with the qemu monitor needs to check whether it is allowed to execute during the current asyncJob (if any). Once the check passes, the method needs to normally acquire job.cond to ensure no other command is running. Since the domain object lock is released during that time, an asyncJob could have been started in the meantime, so the method needs to recheck the first condition. Then, normal jobs set job.active and asynchronous jobs set job.asyncJob and optionally change the list of allowed job groups.

Since asynchronous jobs only set job.asyncJob, other allowed commands can still run while the domain object is unlocked (when communicating with a remote libvirtd or sleeping). To protect its own internal synchronous commands, the asynchronous job needs to start a special nested job before entering the qemu monitor. The nested job doesn't check asyncJob; it only acquires job.cond and sets job.active to block other jobs.
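
A toy model of this job scheme, with invented names and the two conditions collapsed onto one condition variable for brevity; it is only meant to make the locking rules above concrete, not to mirror libvirt's actual code.

    #include <pthread.h>
    #include <stdbool.h>

    typedef struct {
        pthread_mutex_t lock;        /* stands in for the domain object lock */
        pthread_cond_t  cond;
        bool            active;      /* a synchronous job is running         */
        bool            asyncActive; /* an asynchronous job is running       */
        unsigned int    allowedMask; /* sync job bits permitted during async */
    } JobState;

    static void beginJob(JobState *j, unsigned int jobBit, bool nested)
    {
        pthread_mutex_lock(&j->lock);
        /* A nested job (started by the async job itself) skips the async check. */
        while (j->active ||
               (!nested && j->asyncActive && !(j->allowedMask & jobBit)))
            pthread_cond_wait(&j->cond, &j->lock);
        j->active = true;
        pthread_mutex_unlock(&j->lock);
    }

    static void endJob(JobState *j)
    {
        pthread_mutex_lock(&j->lock);
        j->active = false;
        pthread_cond_broadcast(&j->cond);
        pthread_mutex_unlock(&j->lock);
    }

    static void beginAsyncJob(JobState *j, unsigned int allowedMask)
    {
        pthread_mutex_lock(&j->lock);
        while (j->active || j->asyncActive)
            pthread_cond_wait(&j->cond, &j->lock);
        j->asyncActive = true;
        j->allowedMask = allowedMask;   /* e.g. only "query" jobs during migration */
        pthread_mutex_unlock(&j->lock);
    }

    static void endAsyncJob(JobState *j)
    {
        pthread_mutex_lock(&j->lock);
        j->asyncActive = false;
        j->allowedMask = 0;
        pthread_cond_broadcast(&j->cond);
        pthread_mutex_unlock(&j->lock);
    }

    int main(void)
    {
        JobState j = { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER,
                       false, false, 0 };

        beginAsyncJob(&j, 1u << 0);     /* allow only job bit 0 (queries)      */
        beginJob(&j, 1u << 0, false);   /* a query job: allowed, proceeds      */
        endJob(&j);
        beginJob(&j, 0, true);          /* nested monitor job from async side  */
        endJob(&j);
        endAsyncJob(&j);
        return 0;
    }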
-
By Jiri Denemark
-
By Daniel P. Berrange
The LXC and UML drivers can both make use of auditing. Move the qemu_audit.{c,h} files to src/conf/domain_audit.{c,h}.

* src/conf/domain_audit.c: Rename from src/qemu/qemu_audit.c
* src/conf/domain_audit.h: Rename from src/qemu/qemu_audit.h
* src/Makefile.am: Remove qemu_audit.{c,h}, add domain_audit.{c,h}
* src/qemu/qemu_audit.h, src/qemu/qemu_cgroup.c, src/qemu/qemu_command.c, src/qemu/qemu_driver.c, src/qemu/qemu_hotplug.c, src/qemu/qemu_migration.c, src/qemu/qemu_process.c: Update for changed audit API names
-
- 12 Jul 2011, 2 commits

By Daniel P. Berrange
Given a PID, the QEMU driver reads /proc/$PID/cmdline and /proc/$PID/environ to get the configuration. This is fed into the ARGV->XML convertor to build an XML configuration for the process. /proc/$PID/exe is resolved to identify the full command binary path. After checking for name/uuid uniqueness, an attempt is made to connect to the monitor socket. If successful, 'info status' and 'info kvm' are issued to determine whether the CPUs are running and whether KVM is enabled.

* src/qemu/qemu_driver.c: Implement virDomainQemuAttach
* src/qemu/qemu_process.h, src/qemu/qemu_process.c: Add qemuProcessAttach to connect to the monitor of an existing QEMU process
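
A minimal sketch of attaching to an externally launched QEMU process via the qemu-specific API added here, assuming the libvirt-qemu headers and library are available; error handling is kept minimal.

    #include <stdio.h>
    #include <stdlib.h>
    #include <libvirt/libvirt.h>
    #include <libvirt/libvirt-qemu.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <qemu-pid>\n", argv[0]);
            return 1;
        }

        virConnectPtr conn = virConnectOpen("qemu:///system");
        if (!conn)
            return 1;

        /* libvirt reads /proc/<pid>/{cmdline,environ,exe}, rebuilds an XML
         * definition and connects to the monitor socket, as described above. */
        virDomainPtr dom = virDomainQemuAttach(conn, atoi(argv[1]), 0);
        if (!dom) {
            fprintf(stderr, "attach failed\n");
            virConnectClose(conn);
            return 1;
        }

        printf("attached to domain %s\n", virDomainGetName(dom));
        virDomainFree(dom);
        virConnectClose(conn);
        return 0;
    }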
-
By Daniel P. Berrange
Avoid re-formatting the pidfile path every time we need it. Create it once when starting the guest, and preserve it until the guest is shut down.

* src/libvirt_private.syms, src/util/util.c, src/util/util.h: Add virFileReadPidPath
* src/qemu/qemu_domain.h: Add pidfile field
* src/qemu/qemu_process.c: Store pidfile path in qemuDomainObjPrivate
-
- 06 Jul 2011, 1 commit

By Matthias Bolte
Some callers expected virFileMakePath to set errno, some expected it to return an errno value. Unify this to return 0 on success and -1 on error. Set errno to report detailed error information. Also optimize virFileMakePath if stat fails with an errno different from ENOENT.
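
A sketch of the unified contract from a caller's point of view; make_path_recursive below is a stand-in for the internal virFileMakePath helper, written only to show the "return 0/-1 and set errno" convention.

    #include <errno.h>
    #include <limits.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/stat.h>

    static int make_path_recursive(const char *path)
    {
        char buf[PATH_MAX];
        struct stat st;

        if (stat(path, &st) == 0)
            return 0;                  /* already exists */
        if (errno != ENOENT)
            return -1;                 /* stat failed for another reason */

        if (strlen(path) >= sizeof(buf)) {
            errno = ENAMETOOLONG;
            return -1;
        }
        strcpy(buf, path);

        /* create each missing component in turn */
        for (char *p = buf + 1; *p; p++) {
            if (*p != '/')
                continue;
            *p = '\0';
            if (mkdir(buf, 0777) < 0 && errno != EEXIST)
                return -1;
            *p = '/';
        }
        if (mkdir(buf, 0777) < 0 && errno != EEXIST)
            return -1;
        return 0;
    }

    int main(void)
    {
        if (make_path_recursive("/tmp/demo/a/b") < 0)
            fprintf(stderr, "mkdir failed: %s\n", strerror(errno));
        return 0;
    }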
-
- 04 Jul 2011, 2 commits

By Daniel P. Berrange
Add a new attribute to the <seclabel> XML to allow resource relabelling to be enabled with static label usage.

  <seclabel model='selinux' type='static' relabel='yes'>
    <label>system_u:system_r:svirt_t:s0:c392,c662</label>
  </seclabel>

* docs/schemas/domain.rng: Add relabel attribute
* src/conf/domain_conf.c, src/conf/domain_conf.h: Parse the 'relabel' attribute
* src/qemu/qemu_process.c: Unconditionally clear out the 'imagelabel' attribute
* src/security/security_apparmor.c: Skip based on 'relabel' attribute instead of label type
* src/security/security_selinux.c: Skip based on 'relabel' attribute instead of label type and fill in <imagelabel> attribute if relabel is enabled
-
By Daniel P. Berrange
Normally the dynamic labelling mode will always use a base label of 'svirt_t' for VMs. Introduce a <baselabel> field in the <seclabel> XML to allow this base label to be changed, e.g.

  <seclabel type='dynamic' model='selinux'>
    <baselabel>system_u:object_r:virt_t:s0</baselabel>
  </seclabel>

* docs/schemas/domain.rng: Add <baselabel>
* src/conf/domain_conf.c, src/conf/domain_conf.h: Parsing of base label
* src/qemu/qemu_process.c: Don't reset 'model' attribute if a base label is specified
* src/security/security_apparmor.c: Refuse to support base label
* src/security/security_selinux.c: Use 'baselabel' when generating label, if available
-
- 28 Jun 2011, 2 commits

By Daniel P. Berrange
The libvirt sanlock plugin is intentionally leaking a file descriptor to QEMU. To enable QEMU to use this FD under SELinux, it must be labelled correctly. We don't want to use svirt_image_t for this, since QEMU must not be allowed to actually use the FD, so instead we label it with svirt_t using virSecurityManagerSetProcessFDLabel.

* src/locking/domain_lock.c, src/locking/domain_lock.h, src/locking/lock_driver.h, src/locking/lock_driver_nop.c, src/locking/lock_driver_sanlock.c, src/locking/lock_manager.c, src/locking/lock_manager.h: Optionally pass an FD back to the hypervisor for security driver labelling
* src/qemu/qemu_process.c: Label the lock manager plugin FD with the process label
-
By Daniel P. Berrange
The virSecurityManagerSetFDLabel method is used to label file descriptors associated with disk images. There will shortly be a need to label other file descriptors in a different way, so the current name is ambiguous. Rename the method to virSecurityManagerSetImageFDLabel to clarify its purpose.

* src/libvirt_private.syms, src/qemu/qemu_migration.c, src/qemu/qemu_process.c, src/security/security_apparmor.c, src/security/security_dac.c, src/security/security_driver.h, src/security/security_manager.c, src/security/security_manager.h, src/security/security_selinux.c, src/security/security_stack.c: s/FDLabel/ImageFDLabel/
-
- 27 Jun 2011, 1 commit

By Osier Yang
There is no code between virSaveLastError and virGetLastError that will set an error, so remove the bogus code.
-
- 24 Jun 2011, 6 commits

By Eric Blake
This reverts commit 12cd77a0.

Conflicts:
    python/libvirt-override-virConnect.py
    python/libvirt-override.c
    src/remote/remote_protocol.x
-
By Eric Blake
Use NUMA's older nodemask_t (fixed-size map) rather than the newer 'struct bitmask' (variable-size) in order to still compile on RHEL 5, with its numactl-devel-0.9.8. * src/qemu/qemu_process.c [HAVE_NUMA]: Prefer back-compat mode. (qemuProcessInitNumaMemoryPolicy): Use older nodemask_t.
-
By Daniel P. Berrange
* src/qemu/qemu_process.c: Add missing _(...)
-
By Daniel P. Berrange
If an application is using libvirt + KVM as a piece of its internal infrastructure to perform a specific task, it can be desirable to guarantee the VM dies when the virConnectPtr disconnects from libvirtd. This ensures the app can't leak any VMs it was using. Adding VIR_DOMAIN_START_AUTOKILL as a flag when starting guests enables this to be done.

* include/libvirt/libvirt.h.in: Add VIR_DOMAIN_START_AUTOKILL
* src/qemu/qemu_driver.c: Support automatic killing of guests upon connection close
* tools/virsh.c: Add --autokill flag to 'start' and 'create' commands
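
A sketch of using the flag from an application, keeping the constant name used in this commit (a released libvirt may expose it under a different name, so treat it as illustrative); error handling is minimal.

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        virConnectPtr conn = virConnectOpen("qemu:///system");
        if (!conn)
            return 1;

        virDomainPtr dom = virDomainLookupByName(conn, "demo");
        if (dom) {
            /* When this connection closes, libvirtd kills the guest for us. */
            if (virDomainCreateWithFlags(dom, VIR_DOMAIN_START_AUTOKILL) < 0)
                fprintf(stderr, "start failed\n");
            virDomainFree(dom);
        }

        /* ... use the guest for the internal task ... */

        virConnectClose(conn);   /* guest is destroyed along with the connection */
        return 0;
    }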
-
By Daniel P. Berrange
Sometimes it is useful to be able to automatically destroy a guest when a connection is closed. For example, kill an incoming migration if the client managing the migration dies. This introduces a map between guest 'uuid' strings and virConnectPtr objects. When a connection is closed, any associated guests are killed off.

* src/qemu/qemu_conf.h: Add autokill hash table to qemu driver
* src/qemu/qemu_process.c, src/qemu/qemu_process.h: Add APIs for performing autokill of guests associated with a connection
* src/qemu/qemu_driver.c: Initialize autodestroy map
-
By Daniel P. Berrange
For controlled shutdown we issue a 'system_powerdown' command to the QEMU monitor. This triggers an ACPI event which (most) guest OSes wire up to a controlled shutdown. There is no equivalent ACPI event to trigger a controlled reboot, so this patch attempts to fake a reboot:

- In qemuDomainObjPrivatePtr we have a bool fakeReboot flag.
- The virDomainReboot method sets this flag and then triggers a normal 'system_powerdown'.
- The QEMU process is started with '-no-shutdown' so that the guest CPUs pause when it powers off the guest.
- When we receive the 'POWEROFF' event from the QEMU JSON monitor and fakeReboot is not set, we invoke the qemuProcessKill command and shutdown continues normally.
- If fakeReboot was set, we spawn a background thread which issues 'system_reset' to perform a warm reboot of the guest hardware, then issues 'cont' to start the CPUs again.

* src/qemu/qemu_command.c: Add -no-shutdown flag if we have JSON support
* src/qemu/qemu_domain.h: Add 'fakeReboot' flag to qemuDomainObjPrivate struct
* src/qemu/qemu_driver.c: Fake reboot using the system_powerdown command if JSON support is available
* src/qemu/qemu_monitor.c, src/qemu/qemu_monitor.h, src/qemu/qemu_monitor_json.c, src/qemu/qemu_monitor_json.h, src/qemu/qemu_monitor_text.c, src/qemu/qemu_monitor_text.h: Add binding for system_reset command
* src/qemu/qemu_process.c: Reset the guest & start CPUs if fakeReboot is set
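
An illustrative, compilable sketch of the fake-reboot control flow; the struct and helper functions are invented stand-ins for the QEMU driver's internals, and the background thread is folded inline for brevity.

    #include <stdbool.h>
    #include <stdio.h>

    struct dom { bool fakeReboot; };

    static void monitor_send(struct dom *d, const char *cmd)
    {
        (void)d;
        printf("monitor <- %s\n", cmd);
    }

    static void kill_qemu_process(struct dom *d)
    {
        (void)d;
        printf("tearing down QEMU process\n");
    }

    /* Reboot API: remember the intent, then do a normal ACPI powerdown. */
    static void do_reboot(struct dom *d)
    {
        d->fakeReboot = true;
        monitor_send(d, "system_powerdown");
    }

    /* POWEROFF event from the JSON monitor.  Because QEMU was started with
     * -no-shutdown, the guest CPUs are only paused at this point.  libvirt
     * runs the reset from a background thread; folded inline here. */
    static void on_poweroff_event(struct dom *d)
    {
        if (!d->fakeReboot) {
            kill_qemu_process(d);            /* regular controlled shutdown */
            return;
        }
        d->fakeReboot = false;
        monitor_send(d, "system_reset");     /* warm-reset the virtual hardware */
        monitor_send(d, "cont");             /* restart the CPUs */
    }

    int main(void)
    {
        struct dom d = { false };
        do_reboot(&d);
        on_poweroff_event(&d);               /* fake reboot path */
        on_poweroff_event(&d);               /* plain powerdown path */
        return 0;
    }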
-
- 23 Jun 2011, 2 commits

By Osier Yang
Move "VIR_FREE(buf)" into the "closelog" label, so that "buf" can be freed before returning.
-
By Jiri Denemark
We only care about NUMA availability if NUMA configuration is requested in domain XML.
-
- 21 Jun 2011, 1 commit

By Dirk Herrendoerfer
The following patch addresses the problem that when a PASSTHROUGH mode DIRECT NIC connection is made, the MAC address of the NIC is not automatically set to the configured VM MAC and reset again afterwards. The patch fixes this by setting and resetting the MAC, remembering the previous setting while the VM is running. This also works if libvirtd is restarted while the VM is running. The patch passes make syntax-check.
-
- 20 Jun 2011, 1 commit

By Osier Yang
Implemented by setting the NUMA policy between fork and exec as a hook, using libnuma. Only memory tuning on the domain process is supported currently. For a nodemask that is out of range, a soft warning is reported instead of a hard error in the libvirt layer. (The kernel will be silent as long as at least one of the set bits in the nodemask is valid on the host. E.g. for a host with two NUMA nodes, the kernel will be silent for nodemask "01010101".) So a soft warning is the only thing libvirt can do, as one might want to specify the NUMA policy for a node that doesn't exist yet but may appear soon via hotplug.
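
A rough sketch of such a hook using libnuma's version-1 compatibility interface (nodemask_t), which a later commit in this log switches to explicitly; the exact signatures should be checked against the installed numa.h.

    /* Would run between fork and exec in the real driver. */
    #define NUMA_VERSION1_COMPATIBILITY 1
    #include <numa.h>
    #include <stdio.h>

    static int bind_memory_to_node(int node)
    {
        nodemask_t mask;

        if (numa_available() < 0) {
            fprintf(stderr, "host is not a NUMA machine\n");
            return -1;
        }

        if (node > numa_max_node()) {
            /* Out-of-range nodes only get a soft warning, as explained
             * above: the node might appear later via hotplug. */
            fprintf(stderr, "warning: node %d not present on this host\n", node);
        }

        nodemask_zero(&mask);
        nodemask_set(&mask, node);
        numa_set_membind(&mask);   /* bind future allocations to this node */
        return 0;
    }

    int main(void)
    {
        return bind_memory_to_node(0) < 0 ? 1 : 0;
    }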
-
- 17 Jun 2011, 1 commit

By Jiri Denemark
-
- 15 Jun 2011, 1 commit

By Adam Litke
When an operation started by virDomainBlockPullAll completes (either with success or with failure), raise an event to indicate the final status. This allows an API user to avoid polling on virDomainBlockPullInfo if they would prefer to use the event mechanism.

* daemon/remote.c: Dispatch events to client
* include/libvirt/libvirt.h.in: Define event ID and callback signature
* src/conf/domain_event.c, src/conf/domain_event.h, src/libvirt_private.syms: Extend API to handle the new event
* src/qemu/qemu_driver.c: Connect to the QEMU monitor event for block_stream completion and emit a libvirt block pull event
* src/remote/remote_driver.c: Receive and dispatch events to application
* src/remote/remote_protocol.x: Wire protocol definition for the event
* src/qemu/qemu_monitor.c, src/qemu/qemu_monitor.h, src/qemu/qemu_monitor_json.c: Watch for BLOCK_STREAM_COMPLETED event from QEMU monitor

Signed-off-by: Adam Litke <agl@us.ibm.com>
-
- 14 Jun 2011, 3 commits

By Cole Robinson
If qemu supports -chardev, our char frontend aliases are e.g. 'charserial0', not just 'serial0'. Typically we don't use this code path because the ptys are scraped from stdout.
-
By Cole Robinson
Currently we forget to do this and have to fall back to 'info chardev' (which also fails; see the following patch).
-
By Taku Izumi
There is a case where CPU affinities for vcpus of qemu don't work correctly: if only one vcpupin setting entry is provided and it is not for vcpu0, it doesn't work.

  # virsh dumpxml VM
  ...
  <vcpu>4</vcpu>
  <cputune>
    <vcpupin vcpu='3' cpuset='9-11'/>
  </cputune>
  ...

  # virsh start VM
  Domain VM started

  # virsh vcpuinfo VM
  VCPU:           0
  CPU:            31
  State:          running
  CPU time:       2.5s
  CPU Affinity:   yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy

  VCPU:           1
  CPU:            12
  State:          running
  CPU time:       0.9s
  CPU Affinity:   yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy

  VCPU:           2
  CPU:            30
  State:          running
  CPU time:       1.5s
  CPU Affinity:   yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy

  VCPU:           3
  CPU:            13
  State:          running
  CPU time:       1.7s
  CPU Affinity:   yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy

This patch fixes this problem.

Signed-off-by: Taku Izumi <izumi.taku@jp.fujitsu.com>
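
The counterpart from the API side, a sketch pinning vcpu 3 to host CPUs 9-11 with the public virDomainPinVcpu call; the domain name is a placeholder and error handling is minimal.

    #include <stdio.h>
    #include <string.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        virConnectPtr conn = virConnectOpen("qemu:///system");
        if (!conn)
            return 1;

        virDomainPtr dom = virDomainLookupByName(conn, "VM");
        if (dom) {
            /* 40 host CPUs, as in the vcpuinfo output above */
            unsigned char cpumap[VIR_CPU_MAPLEN(40)];

            memset(cpumap, 0, sizeof(cpumap));
            for (int cpu = 9; cpu <= 11; cpu++)
                VIR_USE_CPU(cpumap, cpu);       /* set bits for CPUs 9, 10, 11 */

            if (virDomainPinVcpu(dom, 3, cpumap, sizeof(cpumap)) < 0)
                fprintf(stderr, "pinning failed\n");
            virDomainFree(dom);
        }

        virConnectClose(conn);
        return 0;
    }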
-
- 07 Jun 2011, 2 commits

By Cole Robinson
v2: Have virCommand cleanup intermediate process for us
v3: Preserve original FD closing behavior

Signed-off-by: Cole Robinson <crobinso@redhat.com>
-
By Osier Yang
-
- 04 Jun 2011, 1 commit

By Daniel P. Berrange
The error code from virKillProcess is returned in the errno variable, not the return value. This mistake caused the logs to be filled with errors when shutting down QEMU processes.

* src/qemu/qemu_process.c: Fix process kill check
-
- 03 Jun 2011, 1 commit

By Eric Blake
Detected by Coverity. This leaked a cpumap on every iteration of the loop. Leak introduced in commit 1cc4d025 (v0.9.0).

* src/qemu/qemu_process.c (qemuProcessSetVcpuAffinites): Plug leak, and hoist allocation outside loop
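
A generic illustration of the fix pattern (not the libvirt code itself): allocate once before the loop, free once after it.

    #include <stdlib.h>
    #include <string.h>

    static int set_affinities(int nvcpus, size_t maplen)
    {
        unsigned char *cpumap = calloc(1, maplen);   /* hoisted out of the loop */
        if (!cpumap)
            return -1;

        for (int i = 0; i < nvcpus; i++) {
            memset(cpumap, 0, maplen);
            /* ... fill the map and apply it for vcpu i ... */
        }

        free(cpumap);                                /* freed exactly once */
        return 0;
    }

    int main(void)
    {
        return set_affinities(4, 8) < 0 ? 1 : 0;
    }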
-
- 02 Jun 2011, 1 commit

By Daniel P. Berrange
The QEMU driver integrates with the lock manager infrastructure in a number of key places:

* During startup, a lock is acquired in between the fork & exec
* During startup, the libvirtd process acquires a lock before setting file labelling
* During shutdown, the libvirtd process acquires a lock before restoring file labelling
* During hotplug, unplug & media change the libvirtd process holds a lock while setting/restoring labels

The main content lock is only ever held by the QEMU child process, or libvirtd during VM shutdown. The rest of the operations only require libvirtd to hold the metadata locks, relying on the active QEMU still holding the content lock.

* src/qemu/qemu_conf.c, src/qemu/qemu_conf.h, src/qemu/libvirtd_qemu.aug, src/qemu/test_libvirtd_qemu.aug: Add config parameter for configuring lock managers
* src/qemu/qemu_driver.c: Add calls to the lock manager
-
- 29 May 2011, 1 commit

By Daniel P. Berrange
Currently whenever there is any failure with parsing the monitor, it is treated in the same way as end-of-file (i.e. QEMU quit) and the domain is terminated, if not already dead. With this change, failures in parsing the monitor stream do not result in the death of QEMU. The guest continues running unchanged, but all further use of the monitor will be disabled. The VMM_FAILURE event will be emitted, and the management application can decide when to kill/restart the guest to regain control.

* src/qemu/qemu_monitor.c, src/qemu/qemu_monitor.h: Run a different callback for monitor EOF vs error conditions
* src/qemu/qemu_process.c: Emit VMM_FAILURE event when monitor fails
-
- 16 May 2011, 2 commits

By Jiri Denemark
A qemu domain can get paused when libvirtd is stopped (e.g., because of I/O error) so we should check its current state when reconnecting to it.
-
By Jiri Denemark
This is implemented only in drivers which use virDomainObj; drivers that query the hypervisor for domain status need to be updated separately, in case their hypervisor supports this functionality. The reason is also saved into the domain state XML, so if a domain is not running (i.e., no state XML exists), the reason will be lost on libvirtd restart. I think this is an acceptable limitation.
-
- 12 May 2011, 1 commit

By Lai Jiangshan
These VIR_XXXX0 APIs are confusing; use the non-0-suffix APIs instead.

How do these conversions work? The magic is the gcc extension of '##': when __VA_ARGS__ is empty, '##' swallows the ',' in 'fmt,' to avoid a compile error.

Example (origin -> after CPP):
  high_level_api("%d", a_int)  ->  low_level_api("%d", a_int)
  high_level_api("a string")   ->  low_level_api("a string")

About 400 conversions. 8 special conversions:
  VIR_XXXX0("")                     -> VIR_XXXX("msg") (avoid empty format): 2 conversions
  VIR_XXXX0(string_literal_with_%)  -> VIR_XXXX(% -> %%): 0 conversions
  VIR_XXXX0(non_string_literal)     -> VIR_XXXX("%s", non_string_literal) (for security): 6 conversions

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
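
A self-contained demo of the '##' gcc extension the message describes; the macro names are illustrative only.

    #include <stdio.h>

    /* "##" swallows the trailing comma when __VA_ARGS__ is empty, so one
     * macro covers both the "format + args" and "plain message" cases. */
    #define low_level_api(fmt, ...)  fprintf(stderr, fmt "\n", ##__VA_ARGS__)
    #define high_level_api(fmt, ...) low_level_api(fmt, ##__VA_ARGS__)

    int main(void)
    {
        int a_int = 42;

        high_level_api("%d", a_int);   /* expands with the argument */
        high_level_api("a string");    /* empty __VA_ARGS__, no stray comma */
        return 0;
    }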
-