- 10 Apr 2014, 3 commits
-
Committed by Martin Kletzander

by moving qemuAgentCommand() after qemuAgentCheckError().

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
(cherry picked from commit e9d09fe1)

Conflicts:
    src/qemu/qemu_agent.c -- label indentation (5922d05a),
                             comment removal (56874f01),
                             VIR_ALLOC refactor (e987a30d)
-
Committed by Martin Kletzander

In all the places where qemuAgentCommand() was called, we did a check
for errors in the reply. Unfortunately, some of those places called
qemuAgentCheckError() without checking for a non-null reply, which might
have resulted in a crash. So this patch makes the error checking part of
qemuAgentCommand() itself, which: a) makes it look better, b) makes the
check mandatory and, most importantly, c) checks for the errors if and
only if it is appropriate. This actually fixes a potential crasher when
qemuAgentCommand() returned 0 but reply was NULL. Having said that, it
*should* fix the following bug:

https://bugzilla.redhat.com/show_bug.cgi?id=1058149

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
(cherry picked from commit 5b3492fa)

Conflicts:
    src/qemu/qemu_agent.c -- vCPU functions (3099c063)
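To illustrate the shape of this fix, here is a minimal self-contained
sketch with toy types and hypothetical names (agent_command,
agent_check_error), not the real qemu_agent.c signatures: the error
check is owned by the command wrapper itself, so no caller can reach
the checker with a NULL reply.

    #include <stdio.h>
    #include <stdlib.h>

    /* Toy stand-in for the real agent reply object; illustrative only. */
    typedef struct {
        const char *error;  /* non-NULL when the guest agent reported an error */
    } AgentReply;

    /* Mirrors the role of qemuAgentCheckError(): must only see a non-NULL reply. */
    static int
    agent_check_error(const AgentReply *reply)
    {
        if (reply->error) {
            fprintf(stderr, "guest agent command failed: %s\n", reply->error);
            return -1;
        }
        return 0;
    }

    /* After the fix, the command wrapper performs the check itself, exactly
     * once, and only when a reply actually exists. */
    static int
    agent_command(AgentReply **reply_out)
    {
        AgentReply *reply = calloc(1, sizeof(*reply));

        if (!reply)
            return -1;
        /* ... send the command and populate the reply here ... */
        if (agent_check_error(reply) < 0) {
            free(reply);
            return -1;
        }
        *reply_out = reply;  /* callers get either a checked reply or -1 */
        return 0;
    }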
-
Committed by Peter Krempa

The code for arbitrary guest agent passthrough was horribly broken since
its introduction. Fix it to correctly report errors.

(cherry picked from commit 6e5b36d5)
-
- 20 Mar 2014, 1 commit
-
Committed by Michal Privoznik

Currently, we use pthread_sigmask(SIG_BLOCK, ...) prior to calling
poll(). This is okay, as we don't want poll() to be interrupted.
However, then - immediately as we fall out from the poll() - we try to
restore the original sigmask - again using SIG_BLOCK. But as the man
page says, SIG_BLOCK adds signals to the signal mask:

    SIG_BLOCK
        The set of blocked signals is the union of the current set and
        the set argument.

Therefore, when restoring the original mask, we need to completely
overwrite the one we set earlier, and hence we should be using:

    SIG_SETMASK
        The set of blocked signals is set to the argument set.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
(cherry picked from commit 3d4b4f5a)
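A compact sketch of the corrected pattern (a simplified wrapper, not the
actual libvirt event-loop code):

    #include <poll.h>
    #include <pthread.h>
    #include <signal.h>

    int
    poll_without_signals(struct pollfd *fds, nfds_t nfds, int timeout)
    {
        sigset_t blocked, oldmask;
        int ret;

        sigfillset(&blocked);
        pthread_sigmask(SIG_BLOCK, &blocked, &oldmask); /* save original mask */

        ret = poll(fds, nfds, timeout);

        /* SIG_BLOCK here would only union oldmask into the current (full)
         * mask, leaving every signal blocked; SIG_SETMASK overwrites it. */
        pthread_sigmask(SIG_SETMASK, &oldmask, NULL);
        return ret;
    }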
-
- 10 Mar 2014, 1 commit
-
Committed by Daniel P. Berrange

The nwfilter conf update mutex previously serialized updates to the
internal data structures for firewall rules, and updates to the firewall
itself. The latter was recently turned into a read/write lock, and
filter instantiation was allowed to proceed in parallel. It was believed
that this was OK, since each filter is created on a separate
iptables/ebtables chain.

It turns out that there is a subtle lock ordering problem on
virNWFilterObjPtr instances. __virNWFilterInstantiateFilter will hold a
lock on the virNWFilterObjPtr it is instantiating. This in turn invokes
virNWFilterInstantiate, which then invokes
virNWFilterDetermineMissingVarsRec, which then invokes
virNWFilterObjFindByName. This iterates over every single
virNWFilterObjPtr in the list, locking each one and checking its name.
So if two or more threads try to instantiate a filter in parallel, each
will hold one lock at the top level in the
__virNWFilterInstantiateFilter method, which will cause the other
thread(s) to deadlock in virNWFilterObjFindByName.

The fix is to add an exclusive mutex to serialize the execution of
__virNWFilterInstantiateFilter.

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
(cherry picked from commit 925de19e)

Conflicts:
    src/nwfilter/nwfilter_gentech_driver.c
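A sketch of the serialization, with hypothetical names standing in for
the nwfilter internals: one process-wide mutex means no two threads can
each hold one filter-object lock while walking (and locking) the whole
filter list, so the AB-BA deadlock cannot occur.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t instantiate_lock = PTHREAD_MUTEX_INITIALIZER;

    static int
    instantiate_filter_locked(const char *name)
    {
        /* ... lock the filter object, recurse, resolve variables ... */
        printf("instantiating %s\n", name);
        return 0;
    }

    int
    instantiate_filter(const char *name)
    {
        int ret;

        pthread_mutex_lock(&instantiate_lock);
        ret = instantiate_filter_locked(name);
        pthread_mutex_unlock(&instantiate_lock);
        return ret;
    }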
-
- 06 Feb 2014, 6 commits
-
Committed by Daniel P. Berrange

The NWFilter code has a deadlock race condition between the
virNWFilter{Define,Undefine} APIs and the starting of guest VMs, due to
mismatched lock ordering.

In the virNWFilter{Define,Undefine} codepaths the lock ordering is

  1. nwfilter driver lock
  2. virt driver lock
  3. nwfilter update lock
  4. domain object lock

In the VM guest startup paths the lock ordering is

  1. virt driver lock
  2. domain object lock
  3. nwfilter update lock

As can be seen, the domain object and nwfilter update locks are not
acquired in a consistent order. The fix used is to push the nwfilter
update lock up to the top level, resulting in a lock ordering for
virNWFilter{Define,Undefine} of

  1. nwfilter driver lock
  2. nwfilter update lock
  3. virt driver lock
  4. domain object lock

and for VM start of

  1. nwfilter update lock
  2. virt driver lock
  3. domain object lock

This has the effect of serializing VM startup once again, even if no
nwfilters are applied to the guest. There is also the possibility of
deadlock due to a call graph loop via virNWFilterInstantiate and
virNWFilterInstantiateFilterLate. These two problems mean the lock must
be turned into a read/write lock instead of a plain mutex at the same
time. The lock is used to serialize changes to the "driver->nwfilters"
hash, so the write lock only needs to be held by the define/undefine
methods. All other methods can rely on a read lock, which allows good
concurrency.

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
(cherry picked from commit 6e5c79a1)

Conflicts:
    src/conf/nwfilter_conf.c - virReportOOMError() in context of one hunk.
    src/lxc/lxc_driver.c - functions renamed, and lxc object locking
        changed, creating a conflict in the context.
    src/qemu/qemu_driver.c - qemuDomainStartWithFlags (called
        qemuDomainCreateWithFlags upstream) gets the domain object using
        qemuDomObjFromDomain() upstream, but virDomainObjListFindByUUID()
        in 1.0.4. This creates a small conflict in context.
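The resulting discipline can be sketched with plain pthread primitives
(hypothetical function names; the real code uses the virRWLock wrapper
introduced in the next commit):

    #include <pthread.h>

    /* The nwfilter update lock, now a rwlock acquired before the virt
     * driver and domain object locks. */
    static pthread_rwlock_t update_lock = PTHREAD_RWLOCK_INITIALIZER;

    void
    nwfilter_define_sketch(void)
    {
        /* (nwfilter driver lock already held at this point) */
        pthread_rwlock_wrlock(&update_lock);  /* writer: define/undefine only */
        /* ... lock virt driver, then domain object ... */
        pthread_rwlock_unlock(&update_lock);
    }

    void
    vm_start_sketch(void)
    {
        pthread_rwlock_rdlock(&update_lock);  /* reader: taken first */
        /* ... lock virt driver, then domain object: the same order as
         * define/undefine, so the AB-BA inversion is gone, and concurrent
         * VM starts only contend as readers. */
        pthread_rwlock_unlock(&update_lock);
    }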
-
Committed by Daniel P. Berrange

Add virRWLock backed by a POSIX rwlock primitive.

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
(cherry picked from commit c065984b)
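A minimal sketch of such a wrapper, with names of the form virRWLock*
chosen for illustration; the real implementation also propagates pthread
error codes:

    #include <pthread.h>

    typedef struct {
        pthread_rwlock_t lock;
    } virRWLockSketch;

    static inline int virRWLockInitSketch(virRWLockSketch *m)
    { return pthread_rwlock_init(&m->lock, NULL); }

    static inline void virRWLockReadSketch(virRWLockSketch *m)
    { pthread_rwlock_rdlock(&m->lock); }

    static inline void virRWLockWriteSketch(virRWLockSketch *m)
    { pthread_rwlock_wrlock(&m->lock); }

    static inline void virRWLockUnlockSketch(virRWLockSketch *m)
    { pthread_rwlock_unlock(&m->lock); }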
-
Committed by Daniel P. Berrange

The virConnectPtr is passed around loads of nwfilter code in order to
provide it as a parameter to the callback registered by the virt
drivers. None of the virt drivers use this param though, so it serves no
purpose.

Avoiding the need to pass a virConnectPtr means that the
nwfilterStateReload method no longer needs to open a bogus QEMU driver
connection. This addresses a race condition that can lead to a crash on
startup. The nwfilter driver starts before the QEMU driver and registers
some callbacks with DBus to detect a firewalld reload. If the firewalld
reload happens while the QEMU driver is still starting up though, the
nwfilterStateReload method will open a connection to the partially
initialized QEMU driver and cause a crash.

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
(cherry picked from commit 999d72fb)

Conflicts:
    src/nwfilter/nwfilter_driver.c
        - *EnsureACL*() was added after this branch was created, and
          caused two small conflicts in the context around a hunk.
        - nwfilterDriverReload() was renamed to nwfilterStateReload()
          upstream.
-
Committed by Daniel P. Berrange

The nwfilter driver only needs a reference to its private state object,
not a full virConnectPtr. Update the domUpdateCBStruct struct to have a
'void *opaque' field instead of a virConnectPtr.

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
(cherry picked from commit ebca369e)
-
Committed by Daniel P. Berrange

None of the virNWFilterDefParse* methods require a virConnectPtr arg, so
just drop it.

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
(cherry picked from commit b77b16ce)
-
Committed by Daniel P. Berrange

For inexplicable reasons, the nwfilter XML parser is intentionally
ignoring errors that arise during parsing. As well as meaning that users
don't get any feedback on their XML mistakes, this will lead it to
silently drop data in OOM conditions.

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
(cherry picked from commit 4f209434)
-
- 16 Jan 2014, 10 commits
-
Committed by Jiri Denemark

https://bugzilla.redhat.com/show_bug.cgi?id=1047577

When writing commit 173c2914, I missed the fact that
virNetServerClientClose unlocks the client object before actually
clearing client->sock, and thus it is possible to hit a window when
client->keepalive is NULL while client->sock is not NULL. I was thinking
client->sock == NULL was a better check for a closed connection, but
apparently we have to go with client->keepalive == NULL to actually fix
the crash.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
(cherry picked from commit 066c8ef6)
-
Committed by Jiri Denemark

https://bugzilla.redhat.com/show_bug.cgi?id=1047577

When a client closes its connection to libvirtd early during
virConnectOpen - more specifically, just after making the
REMOTE_PROC_CONNECT_SUPPORTS_FEATURE call to check if
VIR_DRV_FEATURE_PROGRAM_KEEPALIVE is supported, without even waiting for
the result - libvirtd may crash due to a race in keep-alive
initialization. Once it receives the REMOTE_PROC_CONNECT_SUPPORTS_FEATURE
call, the daemon's event loop delegates it to a worker thread. In case
the event loop detects EOF on the connection and calls
virNetServerClientClose before the worker thread starts to handle the
REMOTE_PROC_CONNECT_SUPPORTS_FEATURE call, client->keepalive will have
been disposed of by the time virNetServerClientStartKeepAlive gets called
from remoteDispatchConnectSupportsFeature. Because this flow is common to
both authenticated and read-only connections, even unprivileged clients
may cause the daemon to crash.

To avoid the crash, virNetServerClientStartKeepAlive needs to check if
the connection is still open before starting the keep-alive protocol.

Every libvirt release since 0.9.8 is affected by this bug.

(cherry picked from commit 173c2914)
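The check can be sketched with a toy client object (field names are
illustrative): the worker-thread entry point re-validates the connection
state under the client lock instead of assuming the socket survived the
handoff from the event loop. Per the follow-up fix above, the keepalive
pointer is the reliable indicator, since sock is cleared only after the
object is unlocked.

    #include <pthread.h>
    #include <stddef.h>

    typedef struct {
        pthread_mutex_t lock;
        void *sock;       /* cleared when the connection closes      */
        void *keepalive;  /* disposed of by the close path           */
    } Client;

    int
    client_start_keepalive(Client *client)
    {
        int ret = -1;

        pthread_mutex_lock(&client->lock);
        if (!client->keepalive)   /* connection already closed: bail out */
            goto cleanup;
        /* ... arm the keep-alive timer ... */
        ret = 0;
     cleanup:
        pthread_mutex_unlock(&client->lock);
        return ret;
    }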
-
Committed by Jiri Denemark

CVE-2013-6458

Every API that is going to begin a job should do that before fetching
data from vm->def.

(cherry picked from commit 3b564259)
-
Committed by Jiri Denemark

Every API that is going to begin a job should do that before fetching
data from vm->def.

(cherry picked from commit ff5f30b6)

Conflicts:
    src/qemu/qemu_driver.c - context
-
Committed by Jiri Denemark

CVE-2013-6458

Every API that is going to begin a job should do that before fetching
data from vm->def.

(cherry picked from commit f93d2caa)
-
Committed by Jiri Denemark

CVE-2013-6458

Generally, every API that is going to begin a job should do that before
fetching data from vm->def. However, qemuDomainGetBlockInfo does not
know whether it will have to start a job or not before checking vm->def.
To avoid using a disk alias that might have been freed while we were
waiting for a job, we use a copy of it. In case the disk was removed in
the meantime, we will fail with a "cannot find statistics for device
'...'" error message.

(cherry picked from commit b7992595)

Conflicts:
    src/qemu/qemu_driver.c - VIR_STRDUP not backported
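The pattern, sketched with stand-in helpers (lookup_disk is hypothetical;
this branch predates VIR_STRDUP, hence plain strdup):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Stand-in for re-resolving the disk in vm->def after the job is held. */
    static const char *
    lookup_disk(const char *alias)
    {
        (void)alias;
        return NULL;  /* pretend the disk was hot-unplugged meanwhile */
    }

    int
    get_block_info(const char *alias_in_def)
    {
        int ret = -1;
        char *alias = strdup(alias_in_def);  /* private copy, taken up front */

        if (!alias)
            return -1;

        /* ... begin the domain job: this may block while another thread
         *     detaches the disk and frees the string we copied from ... */

        if (!lookup_disk(alias)) {
            fprintf(stderr, "cannot find statistics for device '%s'\n", alias);
            goto cleanup;
        }
        ret = 0;
     cleanup:
        free(alias);
        return ret;
    }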
-
Committed by Jiri Denemark

CVE-2013-6458
https://bugzilla.redhat.com/show_bug.cgi?id=1043069

libvirtd may crash when virDomainDetachDeviceFlags is called concurrently
with virDomainBlockStats: qemuDomainBlockStats finds a disk in vm->def
before getting a job on the domain, and uses the disk pointer after
getting the job. However, the domain is unlocked while waiting on the
job condition, and thus the data behind the disk pointer may disappear.
This happens when thread 1 runs virDomainDetachDeviceFlags and enters
the monitor to actually remove the disk. Then another thread starts
running virDomainBlockStats, finds the disk in vm->def, and while it is
waiting on the job condition (owned by the first thread), the first
thread finishes the disk removal. When the second thread gets the job,
the memory pointed to by the disk pointer is already gone.

That said, every API that is going to begin a job should do that before
fetching data from vm->def.

(cherry picked from commit db86da5c)

Conflicts:
    src/qemu/qemu_driver.c - context: no ACLs
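The ordering rule itself, as a sketch with hypothetical names (begin_job
and end_job stand in for the real job-acquisition machinery):

    #include <stddef.h>

    typedef struct { const char *alias; } Disk;
    typedef struct { Disk *disks; size_t ndisks; } Def;
    typedef struct { Def *def; } Dom;

    /* Stubs: begin_job may unlock the domain and block on a condition
     * while other threads modify vm->def. */
    static int  begin_job(Dom *vm) { (void)vm; return 0; }
    static void end_job(Dom *vm)   { (void)vm; }
    static Disk *find_disk(Dom *vm, const char *path)
    { (void)vm; (void)path; return NULL; }

    int
    domain_block_stats(Dom *vm, const char *path)
    {
        Disk *disk;
        int ret = -1;

        /* Begin the job *before* touching vm->def: any pointer fetched
         * earlier could be freed by a concurrent device detach while we
         * wait for the job condition. */
        if (begin_job(vm) < 0)
            return -1;

        disk = find_disk(vm, path);  /* safe: def is stable under the job */
        if (!disk)
            goto endjob;
        /* ... query the stats through disk ... */
        ret = 0;
     endjob:
        end_job(vm);
        return ret;
    }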
-
Committed by Eric Blake

While working on v1.0.5-maint (the branch in use on Fedora 19) with the
host at Fedora 20, I got a failure in virstoragetest. I traced it to the
fact that we were using qemu-img to create a qcow2 file, but qemu-img
changed from creating v2 files by default in F19 to creating v3 files in
F20. Rather than leaving it up to qemu-img, it is better to write the
test to force testing of BOTH file formats (better code coverage and
all). This patch alone does not fix all the failures in v1.0.5-maint;
for that, we must decide to either teach the older branch to understand
v3 files, or to reject them outright as unsupported. But for upstream,
making the test less dependent on changing qemu-img defaults is always a
good thing.

* tests/virstoragetest.c (testPrepImages): Simplify creation of raw
  file; check if qemu supports compat and if so use it.

Signed-off-by: Eric Blake <eblake@redhat.com>
(cherry picked from commit 974e5914)

Conflicts:
    tests/virstoragetest.c - hardcode test to v2, since this branch
        doesn't handle v3 correctly
-
Committed by Eric Blake

Newer pod (hello rawhide) complains if you attempt to mix bullets and
non-bullets in the same list:

    virsh.pod around line 3177: Expected text after =item, not a bullet

As our intent was to nest an inner list, we make that explicit to keep
pod happy.

* tools/virsh.pod (ENVIRONMENT): Use correct pod syntax.

(cherry picked from commit 00d69b4a)
-
Committed by Jim Fehlig

Xen 4.3 fixes a mistake in the libxl event handler signature, where the
event owned by the application was defined as const. Detect this and
define the libvirt libxl event handler signature appropriately.

(cherry picked from commit 43b0ff5b)
-
- 18 Oct 2013, 1 commit
-
Committed by Zhou Yimin

Introduced by 7b87a3. When I quit a process which had only registered
VIR_DOMAIN_EVENT_ID_REBOOT, I got an error like:
"libvirt: XML-RPC error : internal error: domain event 0 not registered".
Then I added the following code, and it fixed the problem.

Signed-off-by: Zhou Yimin <zhouyimin@huawei.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
(cherry picked from commit 9712c251)
-
- 03 Oct 2013, 1 commit
-
Committed by Osier Yang

Introduced by commit 1daa4ba3. vshCommandOptStringReq returns 0 either
on *success* or when the option is not required && not present; both are
valid results, so erroring out whenever it returns 0 is not correct.
the caller, it doesn't have to check whether it

(cherry picked from commit 2a3a725c)
-
- 19 Sep 2013, 3 commits
-
Committed by Daniel P. Berrange

The 'stats' variable was not initialized to NULL, so if some early
validation of the RPC call fails, it is possible to jump to the
'cleanup' label and VIR_FREE an uninitialized pointer. This is a
security flaw, since the API can be called from a readonly connection,
which can trigger the validation checks.

This was introduced in release v0.9.1 onwards by

    commit 158ba873
    Author: Daniel P. Berrange <berrange@redhat.com>
    Date:   Wed Apr 13 16:21:35 2011 +0100

        Merge all returns paths from dispatcher into single path

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
(cherry picked from commit e7f400a1)

Conflicts:
    daemon/remote.c - context
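The underlying idiom, as a self-contained sketch: anything freed by a
shared cleanup path must be initialized, so early validation failures
cannot free garbage.

    #include <stdlib.h>

    int
    dispatch_call(int bad_input)
    {
        char *stats = NULL;  /* the bug: this was left uninitialized */
        int ret = -1;

        if (bad_input)       /* early validation failure jumps to cleanup */
            goto cleanup;

        stats = malloc(64);
        if (!stats)
            goto cleanup;
        /* ... fill in and send stats ... */
        ret = 0;

     cleanup:
        free(stats);         /* safe: NULL or a valid allocation */
        return ret;
    }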
-
Committed by Daniel P. Berrange

With the existing pkcheck (pid, start time) tuple for identifying the
process, there is a race condition in which a process can make a libvirt
RPC call and in another thread exec a setuid application, causing it to
change to effective UID 0. This in turn causes polkit to do its
permission check based on the wrong UID.

To address this, libvirt must get the UID the caller had at the time of
connect() (from SO_PEERCRED) and pass a (pid, start time, uid) triple to
the pkcheck program.

Signed-off-by: Colin Walters <walters@redhat.com>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
(cherry picked from commit 922b7fda)

Conflicts:
    src/access/viraccessdriverpolkit.c
        Resolution: Dropped file that does not exist in this branch.
-
Committed by Daniel P. Berrange

Since PIDs can be reused, polkit prefers to be given a (PID, start time)
pair. If given a PID on its own, it will attempt to look up the start
time in /proc/pid/stat, though this is subject to races. It is safer if
the client app resolves the PID start time itself, because as long as
the app has the client socket open, the client PID won't be reused.

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
(cherry picked from commit 979e9c56)

Conflicts:
    src/util/virprocess.c
    src/util/virstring.c
    src/util/virstring.h
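A sketch of how a client can resolve a PID's start time itself. On Linux
this is field 22 of /proc/<pid>/stat (1-based); the parse must skip past
the parenthesized comm field, which may itself contain spaces, so it
scans from the last ')'.

    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>

    int
    pid_start_time(pid_t pid, unsigned long long *starttime)
    {
        char path[64], buf[1024];
        FILE *fp;
        char *p;
        int i;

        snprintf(path, sizeof(path), "/proc/%d/stat", (int)pid);
        if (!(fp = fopen(path, "r")))
            return -1;
        if (!fgets(buf, sizeof(buf), fp)) {
            fclose(fp);
            return -1;
        }
        fclose(fp);

        if (!(p = strrchr(buf, ')')))   /* end of the comm field */
            return -1;
        /* Walk forward: after iteration i, p sits on the space before
         * field i, so stopping at i == 22 leaves p before starttime. */
        for (i = 3; i <= 22; i++) {
            if (!(p = strchr(p + 1, ' ')))
                return -1;
        }
        return sscanf(p + 1, "%llu", starttime) == 1 ? 0 : -1;
    }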
-
- 20 Aug 2013, 1 commit
-
Committed by Peter Krempa

The virBitmapParse function was calling the virBitmapIsSet() function -
which requires the caller to check the bounds of the bitmap - without
checking them. This resulted in crashes when parsing a bitmap string
that exceeded the bounds used as an argument.

This patch refactors the function to use virBitmapSetBit, without
checking if the bit is set (that function does the checks internally),
and then counts the bits in the bitmap afterwards (instead of keeping
track while parsing the string).

This patch also changes the "parse_error" label to the more common
"error". The refactor should also get rid of the need to call sa_assert
on the returned variable, as the call path should allow Coverity to
infer the possible return values.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=997367

Thanks to Alex Jia for tracking down the issue. This issue was
introduced by commit 0fc89098.

(cherry picked from commit 47b9127e)
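The key property, sketched with a toy bitmap type (names are
illustrative, not the virBitmap API): the set-bit helper does its own
bounds check, so a parser cannot write past the map, and the population
count happens once after parsing.

    #include <stdlib.h>

    typedef struct {
        size_t nbits;
        unsigned char *bits;
    } Bitmap;

    static int
    bitmap_set_bit(Bitmap *bm, size_t b)
    {
        if (b >= bm->nbits)   /* internal check: out-of-range input fails */
            return -1;        /* cleanly instead of corrupting memory     */
        bm->bits[b / 8] |= 1U << (b % 8);
        return 0;
    }

    /* Count set bits once, after parsing, instead of tracking during it. */
    static size_t
    bitmap_count(const Bitmap *bm)
    {
        size_t i, n = 0;

        for (i = 0; i < bm->nbits; i++)
            n += (bm->bits[i / 8] >> (i % 8)) & 1;
        return n;
    }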
-
- 11 Jul 2013, 2 commits
-
Committed by Ján Tomko

Don't reuse the return value of virStorageBackendFileSystemIsMounted:
if it's 0, we'd return it even if the mount command failed. Also, don't
report another error if it's -1, since one has already been reported.

Introduced by 258e06c8.

https://bugzilla.redhat.com/show_bug.cgi?id=981251
(cherry picked from commit 13fde7ce)
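In sketch form, with stub helpers standing in for the real storage
backend functions:

    /* Stubs: each reports its own error on failure. */
    static int is_mounted(void) { return 0; }  /* 1 mounted, 0 not, -1 error */
    static int run_mount(void)  { return -1; } /* 0 on success, -1 on error  */

    int
    pool_start_sketch(void)
    {
        int mounted = is_mounted();  /* kept in its own variable...        */

        if (mounted < 0)
            return -1;               /* ...and its error not re-reported   */
        if (!mounted && run_mount() < 0)
            return -1;               /* the bug returned `mounted` (0) here */
        return 0;
    }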
-
Committed by Ján Tomko

If qemuMonitorBlockJob returned 0, qemuDomainBlockPivot might return 0
even if an error occurred.

https://bugzilla.redhat.com/show_bug.cgi?id=977678
(cherry picked from commit c34107df)
-
- 01 Jul 2013, 4 commits
-
Committed by Dennis Chen

When creating a virtual FC HBA with virsh/the libvirt API, an error
message is returned: "error: Node device not found", and
'nodedev-dumpxml' shows wrong wwpn & wwnn information for the newly
created device.

This reverts f90af691, which switched wwpn & wwnn in the wrong place.
https://www.kernel.org/doc/Documentation/scsi/scsi_fc_transport.txt

Signed-off-by: xschen@tnsoft.com.cn
(cherry picked from commit 3c0d5e22)

Conflicts:
    src/storage/storage_backend_scsi.c
-
Committed by Ján Tomko

If networkUnplugBandwidth is called on a network which has no bandwidth
defined, print a warning instead of crashing. This can happen when
destroying a domain with bandwidth if bandwidth was removed from the
network after the domain was started.

https://bugzilla.redhat.com/show_bug.cgi?id=975359
(cherry picked from commit 658c932a)
-
Committed by Ján Tomko

Don't check for '\n' at the end of the file if zero bytes were read.

Found by valgrind:

    ==404== Invalid read of size 1
    ==404==    at 0x529B09F: virCgroupGetValueStr (vircgroup.c:540)
    ==404==    by 0x529AF64: virCgroupMoveTask (vircgroup.c:1079)
    ==404==    by 0x1EB475: qemuSetupCgroupForEmulator (qemu_cgroup.c:1061)
    ==404==    by 0x1D9489: qemuProcessStart (qemu_process.c:3801)
    ==404==    by 0x18557E: qemuDomainObjStart (qemu_driver.c:5787)
    ==404==    by 0x190FA4: qemuDomainCreateWithFlags (qemu_driver.c:5839)

Introduced by 0d0b4098.

https://bugzilla.redhat.com/show_bug.cgi?id=978356
(cherry picked from commit 306c49ff)
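The guard itself is a one-liner; a sketch:

    #include <string.h>

    void
    strip_trailing_newline(char *buf)
    {
        size_t len = strlen(buf);

        /* The bug read buf[len - 1] unconditionally: for an empty file,
         * len is 0 and buf[-1] is an invalid read, which is exactly what
         * valgrind flagged above. */
        if (len > 0 && buf[len - 1] == '\n')
            buf[len - 1] = '\0';
    }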
-
Committed by Ján Tomko

Free the old XML strings before overwriting them if the user has chosen
to re-edit the file or force the redefinition.

Found by Alex Jia while trying to reproduce another bug:
https://bugzilla.redhat.com/show_bug.cgi?id=977430#c3
(cherry picked from commit 1e3a2529)
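The leak-fix idiom, sketched with toy field names: release the previous
cycle's string before the re-edit/redefine path overwrites it.

    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        char *doc_edited;
    } EditState;

    static int
    store_edited_xml(EditState *edit, const char *new_doc)
    {
        char *dup = strdup(new_doc);

        if (!dup)
            return -1;
        free(edit->doc_edited);  /* the fix: old string freed, not leaked */
        edit->doc_edited = dup;
        return 0;
    }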
-
- 20 Jun 2013, 2 commits
-
Committed by John Ferlan

Cherry-picked from 38ada092.

As a consequence of the cgroup layout changes from commit 'cfed9ad4',
the lxcDomainGetSchedulerParameters[Flags]() and lxcGetSchedulerType()
APIs failed to return data for a non-running domain. This can be seen
through a 'virsh schedinfo <domain>' command, which returns:

    Scheduler      : Unknown
    error: Requested operation is not valid: cgroup CPU controller is not mounted

Prior to that change, a non-running domain would return:

    Scheduler      : posix
    cpu_shares     : 0
    vcpu_period    : 0
    vcpu_quota     : 0
    emulator_period: 0
    emulator_quota : 0

This patch will restore the capability to return configuration-only data
for a non-running domain, regardless of whether cgroups are available.

Conflicts:
    src/lxc/lxc_driver.c

* Resolved conflict by using the former lxcCgroupHasController() rather
  than virCgroupHasController().
* Needed to add the code to fetch the 'vm':

    vm = virDomainObjListFindByUUID(driver->domains, domain->uuid);
    if (vm == NULL) {
        virReportError(VIR_ERR_INTERNAL_ERROR,
                       _("No such domain %s"), domain->uuid);
        goto cleanup;
    }

* Used 'ret = strdup("posix");' rather than VIR_STRDUP(ret, "posix");
  and added the virReportOOMError(); on failure.
-
Committed by John Ferlan

Cherry-picked from b2375453.

As a consequence of the cgroup layout changes from commit '632f78ca',
the qemuDomainGetSchedulerParameters[Flags]() and qemuGetSchedulerType()
APIs failed to return data for a non-running domain. This can be seen
through a 'virsh schedinfo <domain>' command, which returns:

    Scheduler      : Unknown
    error: Requested operation is not valid: cgroup CPU controller is not mounted

Prior to that change, a non-running domain would return:

    Scheduler      : posix
    cpu_shares     : 0
    vcpu_period    : 0
    vcpu_quota     : 0
    emulator_period: 0
    emulator_quota : 0

This patch will restore the capability to return configuration-only data
for a non-running domain, regardless of whether cgroups are available.

Conflicts:
    src/qemu/qemu_driver.c

* Resolved conflict by using the former qemuCgroupHasController() rather
  than virCgroupHasController().
* Needed to add the code to fetch the 'vm':

    vm = virDomainObjListFindByUUID(driver->domains, dom->uuid);
    if (vm == NULL) {
        virReportError(VIR_ERR_INTERNAL_ERROR,
                       _("No such domain %s"), dom->uuid);
        goto cleanup;
    }

* Used 'ret = strdup("posix");' rather than VIR_STRDUP(ret, "posix");
  and added the virReportOOMError(); on failure.
-
- 18 Jun 2013, 3 commits
-
Committed by Ján Tomko

Don't free the stream on error if we've successfully added it to the
hash table, since it will be freed by the virChrdevHashEntryFree
callback. Preserve the error message before calling virStreamFree, since
that call resets the error.

Introduced by 47161382, crashing since 69218922.

Reported by Sergey Fionov on libvir-list.
(cherry picked from commit a32b4174)
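A sketch of the error-preservation pattern using libvirt's public error
API (inside the tree, the fix restores the saved error via the internal
error helpers instead of printing it):

    #include <stdio.h>
    #include <libvirt/libvirt.h>
    #include <libvirt/virterror.h>

    void
    fail_stream(virStreamPtr st)
    {
        virErrorPtr saved = virSaveLastError();  /* copy before it is reset */

        virStreamFree(st);                       /* resets the thread error */

        if (saved) {
            fprintf(stderr, "original failure: %s\n",
                    saved->message ? saved->message : "unknown");
            virFreeError(saved);
        }
    }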
-
Committed by Ján Tomko

Change the socket path to match the one used by the lockd driver.

https://bugzilla.redhat.com/show_bug.cgi?id=968128
(cherry picked from commit 70fe1295)
-
Committed by Ján Tomko

Use the host number itself when constructing the sysfs path, instead of
the variable we are trying to fill.

https://bugzilla.redhat.com/show_bug.cgi?id=973543
(cherry picked from commit 371c1551)
-
- 01 Jun 2013, 1 commit
-
Committed by Laine Stump

This should resolve:
https://bugzilla.redhat.com/show_bug.cgi?id=959191

The problem was that qemuUpdateActivePciHostdevs was returning 0
(success) when no hostdevs were present, but would otherwise return -1
(failure) even when it completed successfully. It is only called from
qemuProcessReconnect(), and when qemuProcessReconnect() got back an
error, it would not only stop reconnecting, but would terminate the
guest qemu process "to remove danger of it ending up running twice if
user tries to start it again later".

(This bug was introduced in commit 011cf7ad, which was pushed between
v1.0.2 and v1.0.3, so all maintenance branches from v1.0.3 up to 1.0.5
will need this one-line patch applied.)

(cherry picked from commit 2ea45647)
-
- 16 May 2013, 1 commit
-