- 06 Feb 2014, 2 commits

Committed by Daniel P. Berrange
The NWFilter code has a deadlock race condition between the virNWFilter{Define,Undefine} APIs and the starting of guest VMs, due to mismatched lock ordering.

In the virNWFilter{Define,Undefine} codepaths the lock ordering is:

1. nwfilter driver lock
2. virt driver lock
3. nwfilter update lock
4. domain object lock

In the VM guest startup paths the lock ordering is:

1. virt driver lock
2. domain object lock
3. nwfilter update lock

As can be seen, the domain object and nwfilter update locks are not acquired in a consistent order. The fix used is to push the nwfilter update lock up to the top level, resulting in a lock ordering for virNWFilter{Define,Undefine} of:

1. nwfilter driver lock
2. nwfilter update lock
3. virt driver lock
4. domain object lock

and for VM start of:

1. nwfilter update lock
2. virt driver lock
3. domain object lock

This has the effect of serializing VM startup once again, even if no nwfilters are applied to the guest. There is also the possibility of deadlock due to a call graph loop via virNWFilterInstantiate and virNWFilterInstantiateFilterLate. These two problems mean the lock must be turned into a read/write lock instead of a plain mutex at the same time. The lock is used to serialize changes to the "driver->nwfilters" hash, so the write lock only needs to be held by the define/undefine methods. All other methods can rely on a read lock, which allows good concurrency.

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
(cherry picked from commit 6e5c79a1)

Conflicts:
    src/conf/nwfilter_conf.c - virReportOOMError() in context of one hunk.
    src/lxc/lxc_driver.c - functions renamed, and lxc object locking changed, creating a conflict in the context.
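
The resulting locking discipline is easier to see in code than in prose. Below is a minimal sketch using plain pthreads rather than libvirt's own lock wrappers; the function names are illustrative, not libvirt's:

    #include <pthread.h>

    static pthread_rwlock_t updateLock = PTHREAD_RWLOCK_INITIALIZER;

    /* Define/undefine are the only writers of the driver->nwfilters
     * hash, so only they need the exclusive write lock. */
    void filterDefine(void (*mutateHash)(void))
    {
        pthread_rwlock_wrlock(&updateLock);
        mutateHash();                  /* add or remove a filter definition */
        pthread_rwlock_unlock(&updateLock);
    }

    /* VM startup and filter instantiation only read the hash, so any
     * number of them may proceed concurrently under the read lock. */
    void filterInstantiate(void (*readHash)(void))
    {
        pthread_rwlock_rdlock(&updateLock);
        readHash();                    /* look up and apply filters */
        pthread_rwlock_unlock(&updateLock);
    }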

Committed by Daniel P. Berrange
The virConnectPtr is passed around loads of nwfilter code in order to provide it as a parameter to the callback registered by the virt drivers. None of the virt drivers use this param though, so it serves no purpose.

Avoiding the need to pass a virConnectPtr means that the nwfilterStateReload method no longer needs to open a bogus QEMU driver connection. This addresses a race condition that can lead to a crash on startup. The nwfilter driver starts before the QEMU driver and registers some callbacks with DBus to detect firewalld reload. If the firewalld reload happens while the QEMU driver is still starting up though, the nwfilterStateReload method will open a connection to the partially initialized QEMU driver and cause a crash.

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
(cherry picked from commit 999d72fb)

- 16 Jan 2014, 4 commits

Committed by Jiri Denemark
CVE-2013-6458

Every API that is going to begin a job should do that before fetching data from vm->def.

(cherry picked from commit 3b564259)

Committed by Jiri Denemark
Every API that is going to begin a job should do that before fetching data from vm->def.

(cherry picked from commit ff5f30b6)

Committed by Jiri Denemark
CVE-2013-6458

Every API that is going to begin a job should do that before fetching data from vm->def.

(cherry picked from commit f93d2caa)

Committed by Jiri Denemark
CVE-2013-6458

Generally, every API that is going to begin a job should do that before fetching data from vm->def. However, qemuDomainGetBlockInfo does not know whether it will have to start a job or not before checking vm->def. To avoid using a disk alias that might have been freed while we were waiting for a job, we use a copy of it. In case the disk was removed in the meantime, we will fail with a "cannot find statistics for device '...'" error message.

(cherry picked from commit b7992595)

- 15 Jan 2014, 1 commit

Committed by Jiri Denemark
CVE-2013-6458
https://bugzilla.redhat.com/show_bug.cgi?id=1043069

When virDomainDetachDeviceFlags is called concurrently with virDomainBlockStats, libvirtd may crash because qemuDomainBlockStats finds a disk in vm->def before getting a job on the domain and uses the disk pointer after getting the job. However, the domain is unlocked while waiting on the job condition, and thus the data behind the disk pointer may disappear. This happens when thread 1 runs virDomainDetachDeviceFlags and enters the monitor to actually remove the disk. Then another thread starts running virDomainBlockStats, finds the disk in vm->def, and while it is waiting on the job condition (owned by the first thread), the first thread finishes the disk removal. When the second thread gets the job, the memory pointed to by the disk pointer is already gone.

That said, every API that is going to begin a job should do that before fetching data from vm->def.

(cherry picked from commit db86da5c)
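
A minimal sketch of the corrected ordering; every name here is an illustrative stand-in, not libvirt's actual job machinery:

    #include <stddef.h>

    typedef struct { const char *alias; } Disk;
    typedef struct { Disk *disks; size_t ndisks; } DomainDef;
    typedef struct { DomainDef *def; int jobActive; } DomainObj;

    /* Waiting on the job condition may temporarily unlock the domain,
     * which is exactly when vm->def can change under our feet. */
    static int beginJob(DomainObj *vm) { vm->jobActive = 1; return 0; }
    static void endJob(DomainObj *vm) { vm->jobActive = 0; }

    static Disk *lookupDisk(DomainDef *def, size_t i)
    {
        return i < def->ndisks ? &def->disks[i] : NULL;
    }

    int domainBlockStats(DomainObj *vm, size_t diskIndex)
    {
        int ret = -1;
        Disk *disk;

        if (beginJob(vm) < 0)                   /* 1. acquire the job first */
            return -1;

        disk = lookupDisk(vm->def, diskIndex);  /* 2. only now touch vm->def */
        if (!disk)
            goto cleanup;

        /* ... query statistics for disk->alias ... */
        ret = 0;

    cleanup:
        endJob(vm);
        return ret;
    }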

- 20 Jul 2013, 1 commit

Committed by Alex Jia
CVE-2013-4154

If the user hasn't configured a guest agent, then qemuAgentCommand() will dereference a NULL 'mon' pointer, which crashes libvirtd when using agent-based CPU (un)plug. With the patch, when the qemu-ga service isn't running in the guest, the expected error "error: Guest agent is not responding: Guest agent not available for now" is raised, and the error "error: argument unsupported: QEMU guest agent is not configured" is raised when the guest hasn't been configured with a guest agent.

GDB backtrace:

(gdb) bt
#0  virNetServerFatalSignal (sig=11, siginfo=<value optimized out>, context=<value optimized out>) at rpc/virnetserver.c:326
#1  <signal handler called>
#2  qemuAgentCommand (mon=0x0, cmd=0x7f39300017b0, reply=0x7f394b090910, seconds=-2) at qemu/qemu_agent.c:975
#3  0x00007f39429507f6 in qemuAgentGetVCPUs (mon=0x0, info=0x7f394b0909b8) at qemu/qemu_agent.c:1475
#4  0x00007f39429d9857 in qemuDomainGetVcpusFlags (dom=<value optimized out>, flags=9) at qemu/qemu_driver.c:4849
#5  0x00007f3957dffd8d in virDomainGetVcpusFlags (domain=0x7f39300009c0, flags=8) at libvirt.c:9843

How to reproduce?

# Start a guest without a guest agent configured,
# then run the following command:
# virsh vcpucount foobar --guest
error: End of file while reading data: Input/output error
error: One or more references were leaked after disconnect from the hypervisor
error: Failed to reconnect to the hypervisor

RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=984821

Signed-off-by: Alex Jia <ajia@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
(cherry picked from commit 96518d43)
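
A sketch of the kind of guard the fix implies, with hypothetical types and the error messages taken from the commit text; the real code lives in the QEMU driver's agent helpers:

    typedef struct AgentMon AgentMon;

    typedef struct {
        AgentMon *agent;   /* NULL when no agent channel is configured */
        int agentError;    /* set while the in-guest service is unresponsive */
    } DomainPriv;

    /* Returns 1 when agent commands may proceed, 0 otherwise. */
    int agentAvailable(DomainPriv *priv, const char **errmsg)
    {
        if (!priv->agent) {
            *errmsg = "QEMU guest agent is not configured";
            return 0;
        }
        if (priv->agentError) {
            *errmsg = "Guest agent is not responding";
            return 0;
        }
        return 1;
    }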

- 10 Jul 2013, 1 commit

- 02 Jul 2013, 2 commits

Committed by Michal Privoznik
After abf75aea the compiler screams:

qemu/qemu_driver.c: In function 'qemuNodeDeviceDetachFlags':
qemu/qemu_driver.c:10693:9: error: 'domain' may be used uninitialized in this function [-Werror=maybe-uninitialized]
     pci = virPCIDeviceNew(domain, bus, slot, function);
     ^
qemu/qemu_driver.c:10693:9: error: 'bus' may be used uninitialized in this function [-Werror=maybe-uninitialized]
qemu/qemu_driver.c:10693:9: error: 'slot' may be used uninitialized in this function [-Werror=maybe-uninitialized]
qemu/qemu_driver.c:10693:9: error: 'function' may be used uninitialized in this function [-Werror=maybe-uninitialized]

Since the other functions, qemuNodeDeviceReAttach and qemuNodeDeviceReset, look exactly the same, I've initialized the variables there as well. However, I am still wondering why those functions don't matter to gcc while the first one does.

(cherry picked from commit bc09c5d3)
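
The shape of the fix, as a sketch with hypothetical names; initializing the out-variables gives every path defined values and is what silences -Wmaybe-uninitialized:

    #include <stdio.h>

    static int parsePciAddress(const char *s, unsigned *domain, unsigned *bus,
                               unsigned *slot, unsigned *function)
    {
        return sscanf(s, "%x:%x:%x.%x", domain, bus, slot, function) == 4
               ? 0 : -1;
    }

    int nodeDeviceDetachFlags(const char *addr)
    {
        unsigned domain = 0, bus = 0, slot = 0, function = 0;  /* the fix */

        if (parsePciAddress(addr, &domain, &bus, &slot, &function) < 0)
            return -1;
        /* ... pci = virPCIDeviceNew(domain, bus, slot, function); ... */
        return 0;
    }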

Committed by Ján Tomko
If qemuMonitorBlockJob returned 0, qemuDomainBlockPivot might return 0 even if an error occurred.

https://bugzilla.redhat.com/show_bug.cgi?id=977678
(cherry picked from commit c34107df)

- 26 Jun 2013, 1 commit

Committed by Laine Stump
The driver arg to virPCIDeviceDetach is no longer used (the name of the stub driver is now set in the virPCIDevice object, and virPCIDeviceDetach retrieves it from there). Remove it.

- 25 Jun 2013, 8 commits

Committed by Jiri Denemark

Committed by Jiri Denemark

Committed by Jiri Denemark

Committed by Jiri Denemark

Committed by Jiri Denemark

Committed by Jiri Denemark

Committed by Laine Stump
virPCIDeviceDetach would previously sometimes consume the input device object (to put it on the inactive list) and sometimes not. Avoiding memory leaks required checking beforehand to see if the device was already on the list, and freeing the device object in the caller only if there wasn't already an identical object on the inactive list.

This patch makes it consistent - virPCIDeviceDetach will *never* consume the input virPCIDevice object; if it needs to put one on the inactive list, it will create a copy and put *that* on the list. This way the caller knows that it is always their responsibility to free the device object they created.
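
A sketch of the resulting ownership rule with hypothetical names; the point is that detach stores a copy, so the caller always frees what it created:

    #include <stdlib.h>
    #include <string.h>

    typedef struct { char name[32]; } PciDev;
    typedef struct { PciDev **devs; size_t n; } DevList;

    static int listAdd(DevList *list, PciDev *dev)
    {
        PciDev **tmp = realloc(list->devs, (list->n + 1) * sizeof(*tmp));
        if (!tmp)
            return -1;
        list->devs = tmp;
        list->devs[list->n++] = dev;
        return 0;
    }

    /* Never consumes 'dev': on success the inactive list owns a copy. */
    int pciDetach(PciDev *dev, DevList *inactive)
    {
        PciDev *copy = malloc(sizeof(*copy));
        if (!copy)
            return -1;
        memcpy(copy, dev, sizeof(*copy));

        if (listAdd(inactive, copy) < 0) {
            free(copy);
            return -1;
        }
        return 0;   /* caller still owns and must free 'dev' */
    }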

Committed by Laine Stump
Previously stubDriver was always set from a string literal, so it was okay to use a const char * that wasn't freed when the virPCIDevice was freed. This will not be the case in the near future, so it is now a char* that is allocated in virPCIDeviceSetStubDriver() and freed during virPCIDeviceFree().
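
A sketch of the new string ownership, again with illustrative names rather than libvirt's exact ones:

    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        char *stubDriver;   /* was: const char * pointing at a literal */
    } PciDev;

    int pciSetStubDriver(PciDev *dev, const char *name)
    {
        char *copy = strdup(name);    /* allocated by the setter ... */
        if (!copy)
            return -1;
        free(dev->stubDriver);
        dev->stubDriver = copy;
        return 0;
    }

    void pciFree(PciDev *dev)
    {
        if (!dev)
            return;
        free(dev->stubDriver);        /* ... freed with the device */
        free(dev);
    }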

- 24 Jun 2013, 2 commits

Committed by Daniel P. Berrange
Insert calls to the ACL checking APIs in all QEMU driver entrypoints.

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>

Committed by Ján Tomko
We can only pass values up to LLONG_MAX through JSON, and since 1.5.0 QEMU checks at startup that the int64_t number is not negative.

https://bugzilla.redhat.com/show_bug.cgi?id=974010
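
The implied bound check, as a small sketch (hypothetical helper, error text paraphrased):

    #include <limits.h>

    int checkTuneValue(unsigned long long bytesSec, const char **errmsg)
    {
        /* QEMU reads the value into an int64_t and rejects negatives,
         * so anything above LLONG_MAX cannot be passed through JSON. */
        if (bytesSec > LLONG_MAX) {
            *errmsg = "block I/O throttle limit must not exceed LLONG_MAX";
            return -1;
        }
        return 0;
    }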

- 20 Jun 2013, 1 commit

Committed by John Ferlan
As a consequence of the cgroup layout changes from commit '632f78ca', the qemuDomainGetSchedulerParameters[Flags]() and qemuGetSchedulerType() APIs failed to return data for a non-running domain. This can be seen through a 'virsh schedinfo <domain>' command, which returns:

Scheduler      : Unknown
error: Requested operation is not valid: cgroup CPU controller is not mounted

Prior to that change a non-running domain would return:

Scheduler      : posix
cpu_shares     : 0
vcpu_period    : 0
vcpu_quota     : 0
emulator_period: 0
emulator_quota : 0

This patch restores the capability to return configuration-only data for a non-running domain, regardless of whether cgroups are available.

- 18 Jun 2013, 1 commit

Committed by Peter Krempa
Paolo Bonzini pointed out that it is actually possible to migrate a qemu instance that was paused due to an I/O error, and that it will be able to work on the destination if the storage is accessible there.

This patch introduces the flag VIR_MIGRATE_ABORT_ON_ERROR, which cancels the migration in case an I/O error happens while it is being performed; without the flag, such domains are now allowed to migrate. The flag could also cover other error reasons that may be introduced in the future.
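
Usage against the public API looks roughly like this; virDomainMigrateToURI and both flags are real libvirt symbols, while the wrapper itself is just an illustration:

    #include <libvirt/libvirt.h>

    int migrateAbortingOnIOError(virDomainPtr dom, const char *desturi)
    {
        /* Without VIR_MIGRATE_ABORT_ON_ERROR, a domain paused by an I/O
         * error may still migrate and resume on the destination once
         * its storage is accessible again. */
        unsigned long flags = VIR_MIGRATE_LIVE | VIR_MIGRATE_ABORT_ON_ERROR;

        return virDomainMigrateToURI(dom, desturi, flags, NULL, 0);
    }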

- 13 Jun 2013, 1 commit

Committed by Ján Tomko
Convert the input XML to the migratable format before using it in qemuDomainSaveImageOpen. The XML in the save image is migratable, i.e. it doesn't contain implicit controllers. If these controllers were in a non-default order in the input XML, the ABI check would fail. Removing and re-adding these controllers fixes it.

https://bugzilla.redhat.com/show_bug.cgi?id=834196

- 11 Jun 2013, 1 commit

Committed by Jiri Denemark
Avoid leaking a virDomainDef if the Prepare phase fails before it gets to qemuMigrationPrepareAny.

- 10 Jun 2013, 1 commit

Committed by Peter Krempa
This patch fixes the changes done in commit 29c1e913, which was pushed without implementing review feedback. The flag introduced by that patch is renamed to VIR_DOMAIN_VCPU_GUEST, and the documentation now makes the difference between regular hotplug and this new functionality more explicit. The virsh option that enables the use of the new flag is changed to "--guest", and the documentation is fixed too.
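
With the renamed flag, agent-based vCPU control looks like this from the public API; the flags and entry points are real libvirt symbols, the wrapper is illustrative:

    #include <libvirt/libvirt.h>

    /* Query how many vCPUs the guest agent reports online, then ask the
     * agent (not hotplug) to bring the count to 'n'. Requires a running
     * guest with a configured qemu guest agent. */
    int setGuestVisibleVcpus(virDomainPtr dom, unsigned int n)
    {
        unsigned int flags = VIR_DOMAIN_AFFECT_LIVE | VIR_DOMAIN_VCPU_GUEST;
        int cur = virDomainGetVcpusFlags(dom, flags);

        if (cur < 0)
            return -1;
        if ((unsigned int)cur == n)
            return 0;
        return virDomainSetVcpusFlags(dom, n, flags);
    }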

- 07 Jun 2013, 4 commits

Committed by Michal Privoznik
Currently, there's a code path on which the ncpuinfo variable is used uninitialized, which leads to a compiler warning:

qemu/qemu_driver.c: In function 'qemuDomainGetVcpusFlags':
qemu/qemu_driver.c:4573:9: error: 'ncpuinfo' may be used uninitialized in this function [-Werror=maybe-uninitialized]
     for (i = 0; i < ncpuinfo; i++) {
     ^

Committed by Peter Krempa
This patch adds support for agent-based cpu disabling and enabling to qemuDomainSetVcpusFlags() API.

Committed by Peter Krempa
This patch implements the VIR_DOMAIN_VCPU_AGENT flag for the qemuDomainGetVcpusFlags() libvirt API implementation.

Committed by Peter Krempa
The 'online' parameter has only two possible values. Use a bool for it.

- 06 Jun 2013, 1 commit

Committed by Ján Tomko
Found with 'git grep "= 1"'.

- 05 Jun 2013, 1 commit

Committed by Guannan Ren
The work was done at the time of snapshot XML string parsing:

    if (offline && def->memory &&
        def->memory != VIR_DOMAIN_SNAPSHOT_LOCATION_NONE) {
        virReportError(...);
    }

- 03 Jun 2013, 1 commit

Committed by Peter Krempa
The code for arbitrary guest agent passthrough had been horribly broken since its introduction. Fix it to correctly report errors.

- 31 May 2013, 3 commits

Committed by Peter Krempa
If snapshot creation failed, for example due to invalid use of the "REUSE_EXTERNAL" flag, libvirt killed access to the original image file instead of the new image file. On machines with selinux this kills the whole VM, as the selinux context is enforced immediately.

* qemu_driver.c: qemuDomainSnapshotUndoSingleDiskActive():
    - Kill access to the new image file instead of the old one.

Partially resolves: https://bugzilla.redhat.com/show_bug.cgi?id=906639

Committed by Peter Krempa
After deleting "WithDriver" from the async job function the code was unaligned.

Committed by Eric Blake
This is a recurring problem for cygwin :) For example, see commit 23a4df88.

qemu/qemu_driver.c: In function 'qemuStateInitialize':
qemu/qemu_driver.c:691:13: error: format '%d' expects type 'int', but argument 8 has type 'uid_t' [-Wformat]

* src/qemu/qemu_driver.c (qemuStateInitialize): Add casts.
* daemon/remote.c (remoteDispatchAuthList): Likewise.

Signed-off-by: Eric Blake <eblake@redhat.com>
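
The fix pattern is a plain cast at the format call; a self-contained sketch:

    #include <stdio.h>
    #include <sys/types.h>

    void reportUser(uid_t uid)
    {
        /* uid_t has no portable printf length modifier, so cast to a
         * known type; this keeps -Wformat quiet on cygwin and friends. */
        printf("running as uid %d\n", (int)uid);
    }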

- 24 May 2013, 1 commit

Committed by Martin Kletzander
Function qemuDomainSetBlockIoTune() was checking QEMU capabilities even when !(flags & VIR_DOMAIN_AFFECT_LIVE) and the domain was shut off, resulting in the following problem:

virsh # domstate asdf; blkdeviotune asdf vda --write-bytes-sec 100
shut off

error: Unable to change block I/O throttle
error: unsupported configuration: block I/O throttling not supported with this QEMU binary

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=965016
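
The corrected order of checks, sketched with one hypothetical helper; VIR_DOMAIN_AFFECT_LIVE is the real public flag:

    #include <libvirt/libvirt.h>

    int setBlockIoTuneChecked(unsigned int flags, int qemuSupportsThrottling)
    {
        /* Only a live change ever reaches the QEMU binary, so only then
         * does its capability matter; config-only updates on a shut-off
         * domain must not fail this check. */
        if ((flags & VIR_DOMAIN_AFFECT_LIVE) && !qemuSupportsThrottling)
            return -1;

        /* ... update live state and/or persistent config as requested ... */
        return 0;
    }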

- 23 May 2013, 1 commit

Committed by Michal Privoznik

- 21 May 2013, 1 commit

Committed by Osier Yang