- 17 January 2014, 2 commits
-
-
By Cole Robinson
-
By Daniel P. Berrange
Currently virDBusAddWatch does:
virEventAddHandle(fd, flags, virDBusWatchCallback, watch, NULL);
dbus_watch_set_data(watch, info, virDBusWatchFree);
Unfortunately this is racy: since the event loop is in a different thread, the virDBusWatchCallback method may be run before we get to calling dbus_watch_set_data. We must reverse the order of these calls. See https://bugzilla.redhat.com/show_bug.cgi?id=885445
Signed-off-by: Daniel P. Berrange <berrange@redhat.com> (cherry picked from commit 7d3a1c8b)
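A minimal sketch of the reordered registration described above, using the public libdbus and libvirt event-loop calls; the watchInfo struct, watchCallback and watchInfoFree below are simplified stand-ins for libvirt's internal ones, not the actual code:

#include <stdlib.h>
#include <dbus/dbus.h>
#include <libvirt/libvirt.h>

struct watchInfo {                 /* hypothetical per-watch state */
    DBusConnection *bus;
};

static void
watchInfoFree(void *opaque)
{
    free(opaque);
}

static void
watchCallback(int watch, int fd, int events, void *opaque)
{
    DBusWatch *dbwatch = opaque;
    /* Relies on the data attached below already being present. */
    struct watchInfo *info = dbus_watch_get_data(dbwatch);
    (void)watch; (void)fd; (void)events; (void)info;
}

static dbus_bool_t
addWatch(DBusWatch *watch, void *opaque)
{
    struct watchInfo *info = calloc(1, sizeof(*info));

    if (!info)
        return FALSE;
    info->bus = opaque;

    /* Attach the data BEFORE registering the fd: the event loop runs in
     * another thread and may fire watchCallback() as soon as
     * virEventAddHandle() returns. */
    dbus_watch_set_data(watch, info, watchInfoFree);

    if (virEventAddHandle(dbus_watch_get_unix_fd(watch), 0,
                          watchCallback, watch, NULL) < 0)
        return FALSE;
    return TRUE;
}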
-
- 16 January 2014, 8 commits
-
-
By Jiri Denemark
https://bugzilla.redhat.com/show_bug.cgi?id=1047577 When writing commit 173c2914, I missed the fact that virNetServerClientClose unlocks the client object before actually clearing client->sock, and thus it is possible to hit a window when client->keepalive is NULL while client->sock is not NULL. I was thinking client->sock == NULL was a better check for a closed connection but apparently we have to go with client->keepalive == NULL to actually fix the crash. Signed-off-by: Jiri Denemark <jdenemar@redhat.com> (cherry picked from commit 066c8ef6)
-
By Jiri Denemark
https://bugzilla.redhat.com/show_bug.cgi?id=1047577 When a client closes its connection to libvirtd early during virConnectOpen, more specifically just after making the REMOTE_PROC_CONNECT_SUPPORTS_FEATURE call to check if VIR_DRV_FEATURE_PROGRAM_KEEPALIVE is supported, without even waiting for the result, libvirtd may crash due to a race in keep-alive initialization. Once it receives the REMOTE_PROC_CONNECT_SUPPORTS_FEATURE call, the daemon's event loop delegates it to a worker thread. In case the event loop detects EOF on the connection and calls virNetServerClientClose before the worker thread starts to handle the REMOTE_PROC_CONNECT_SUPPORTS_FEATURE call, client->keepalive will have been disposed by the time virNetServerClientStartKeepAlive gets called from remoteDispatchConnectSupportsFeature. Because the flow is common for both authenticated and read-only connections, even unprivileged clients may cause the daemon to crash. To avoid the crash, virNetServerClientStartKeepAlive needs to check if the connection is still open before starting the keep-alive protocol. Every libvirt release since 0.9.8 is affected by this bug. (cherry picked from commit 173c2914)
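A simplified sketch of that check; the struct, its fields and the lock calls are stand-ins for libvirt's virNetServerClient internals, not the real declarations:

#include <pthread.h>
#include <stddef.h>

struct client {
    pthread_mutex_t lock;
    void *sock;        /* non-NULL while the connection is open */
    void *keepalive;   /* disposed by the close path */
};

static int
clientStartKeepAlive(struct client *client)
{
    int ret = -1;

    pthread_mutex_lock(&client->lock);

    /* The close path may already have run between the dispatch of
     * REMOTE_PROC_CONNECT_SUPPORTS_FEATURE and this call; bail out
     * instead of using a disposed keepalive object. */
    if (!client->sock || !client->keepalive)
        goto cleanup;

    /* ... start the keep-alive protocol here ... */
    ret = 0;

 cleanup:
    pthread_mutex_unlock(&client->lock);
    return ret;
}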
-
By Jiri Denemark
CVE-2013-6458 Every API that is going to begin a job should do that before fetching data from vm->def. (cherry picked from commit 3b564259)
-
By Jiri Denemark
Every API that is going to begin a job should do that before fetching data from vm->def. (cherry picked from commit ff5f30b6) Conflicts: src/qemu/qemu_driver.c - context
-
By Jiri Denemark
CVE-2013-6458 Every API that is going to begin a job should do that before fetching data from vm->def. (cherry picked from commit f93d2caa)
-
By Jiri Denemark
CVE-2013-6458 Generally, every API that is going to begin a job should do that before fetching data from vm->def. However, qemuDomainGetBlockInfo does not know whether it will have to start a job or not before checking vm->def. To avoid using a disk alias that might have been freed while we were waiting for a job, we use its copy. In case the disk was removed in the meantime, we will fail with a "cannot find statistics for device '...'" error message. (cherry picked from commit b7992595) Conflicts: src/qemu/qemu_driver.c - VIR_STRDUP not backported
-
By Jiri Denemark
CVE-2013-6458 https://bugzilla.redhat.com/show_bug.cgi?id=1043069 When virDomainDetachDeviceFlags is called concurrently with virDomainBlockStats, libvirtd may crash because qemuDomainBlockStats finds a disk in vm->def before getting a job on the domain and uses the disk pointer after getting the job. However, the domain is unlocked while waiting on the job condition, and thus the data behind the disk pointer may disappear. This happens when thread 1 runs virDomainDetachDeviceFlags and enters the monitor to actually remove the disk. Then another thread starts running virDomainBlockStats, finds the disk in vm->def, and while it is waiting on the job condition (owned by the first thread), the first thread finishes the disk removal. When the second thread gets the job, the memory pointed to by the disk pointer is already gone. That said, every API that is going to begin a job should do that before fetching data from vm->def. (cherry picked from commit db86da5c) Conflicts: src/qemu/qemu_driver.c - context: no ACLs
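The pattern these CVE-2013-6458 fixes converge on, as a rough sketch; the types and the beginJob/endJob helpers are hypothetical stand-ins for the qemu driver's internals, not the real code:

#include <stdio.h>
#include <string.h>

struct disk { const char *path; };
struct def  { struct disk *disks; size_t ndisks; };
struct dom  { struct def *def; };

int beginJob(struct dom *vm);      /* may unlock the domain and wait */
void endJob(struct dom *vm);

static struct disk *
findDisk(struct def *def, const char *path)
{
    for (size_t i = 0; i < def->ndisks; i++)
        if (strcmp(def->disks[i].path, path) == 0)
            return &def->disks[i];
    return NULL;
}

static int
domainBlockStats(struct dom *vm, const char *path)
{
    struct disk *disk;
    int ret = -1;

    /* Acquire the job first; only afterwards is vm->def stable. */
    if (beginJob(vm) < 0)
        return -1;

    /* Look the disk up only now, because it may have been detached
     * while we were waiting for the job. */
    if (!(disk = findDisk(vm->def, path))) {
        fprintf(stderr, "cannot find statistics for device '%s'\n", path);
        goto endjob;
    }

    /* ... query the monitor using 'disk' ... */
    ret = 0;

 endjob:
    endJob(vm);
    return ret;
}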
-
By Eric Blake
While working on v1.0.5-maint (the branch in use on Fedora 19) with the host at Fedora 20, I got a failure in virstoragetest. I traced it to the fact that we were using qemu-img to create a qcow2 file, but qemu-img changed from creating v2 files by default in F19 to creating v3 files in F20. Rather than leaving it up to qemu-img, it is better to write the test to force testing of BOTH file formats (better code coverage and all). This patch alone does not fix all the failures in v1.0.5-maint; for that, we must decide to either teach the older branch to understand v3 files, or to reject them outright as unsupported. But for upstream, making the test less dependent on changing qemu-img defaults is always a good thing.
* tests/virstoragetest.c (testPrepImages): Simplify creation of raw file; check if qemu supports compat and if so use it.
Signed-off-by: Eric Blake <eblake@redhat.com> (cherry picked from commit 974e5914) Conflicts: tests/virstoragetest.c - hardcode test to v2, since this branch doesn't handle v3 correctly
-
- 09 January 2014, 5 commits
-
-
By Zeng Junliang
If a migration is cancelled, the bitmap of the migration port should be cleaned up too. Signed-off-by: Zeng Junliang <zengjunliang@huawei.com> Signed-off-by: Jiri Denemark <jdenemar@redhat.com> (cherry picked from commit c92ca769)
-
By Michal Privoznik
Commit e3ef20d7 allows the user to configure the migration port range via qemu.conf. However, it forgot to update the augeas definition file, and even the test data was wrong. Signed-off-by: Michal Privoznik <mprivozn@redhat.com> (cherry picked from commit d9be5a71) Conflicts: missing support for changing migration listen_address src/qemu/libvirtd_qemu.aug src/qemu/test_libvirtd_qemu.aug.in
-
By Jiri Denemark
https://bugzilla.redhat.com/show_bug.cgi?id=1019053 (cherry picked from commit e3ef20d7) Conflicts: missing support for changing the migration listen address src/qemu/qemu.conf src/qemu/qemu_conf.c src/qemu/qemu_conf.h src/qemu/test_libvirtd_qemu.aug.in
-
By Wang Yufei
https://bugzilla.redhat.com/show_bug.cgi?id=1019053 When we migrate VMs concurrently, there's a chance that libvirtd on the destination assigns the same port to different migrations, which will lead to migration failure during the prepare phase on the destination. So we use virPortAllocator here to solve the problem. Signed-off-by: Wang Yufei <james.wangyufei@huawei.com> Signed-off-by: Jiri Denemark <jdenemar@redhat.com> (cherry picked from commit 0196845d) Conflicts: missing support for WebSockets and listen address; virAsprintf doesn't report OOM src/qemu/qemu_command.h src/qemu/qemu_conf.h src/qemu/qemu_driver.c src/qemu/qemu_migration.c
-
By Jiri Denemark
Whenever virPortAllocatorRelease is called with port == 0, it complains that the port is not in an allowed range, which is expected as the port was never allocated. Let's make virPortAllocatorRelease ignore 0 ports in the same way free() ignores NULL pointers. (cherry picked from commit 86dba8f3) Conflicts: missing VNC websocket support src/qemu/qemu_process.c
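A rough sketch of that behaviour, with a simplified allocator struct standing in for virPortAllocator:

struct portAllocator {
    unsigned short start;
    unsigned short end;
    /* ... bitmap of ports currently in use ... */
};

static int
portAllocatorRelease(struct portAllocator *pa, unsigned short port)
{
    if (port == 0)                  /* never allocated: ignore, like free(NULL) */
        return 0;

    if (port < pa->start || port > pa->end)
        return -1;                  /* genuinely out of range: report an error */

    /* ... clear the port's bit in the bitmap ... */
    return 0;
}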
-
- 20 December 2013, 2 commits
-
-
By Martin Kletzander
The function doesn't check whether the request is made for an active or inactive domain. Thus when the domain is not running it still tries accessing non-existing cgroups (priv->cgroup, which is NULL). I re-made the function in order for it to work the same way its qemu counterpart does.
Reproducer:
1) Define an LXC domain
2) Do 'virsh memtune <domain> --hard-limit 133T'
Backtrace:
Thread 6 (Thread 0x7fffec8c0700 (LWP 26826)):
#0  0x00007ffff70edcc4 in virCgroupPathOfController (group=0x0, controller=3, key=0x7ffff75734bd "memory.limit_in_bytes", path=0x7fffec8bf718) at util/vircgroup.c:1764
#1  0x00007ffff70e9206 in virCgroupSetValueStr (group=0x0, controller=3, key=0x7ffff75734bd "memory.limit_in_bytes", value=0x7fffe409f360 "1073741824") at util/vircgroup.c:669
#2  0x00007ffff70e98b4 in virCgroupSetValueU64 (group=0x0, controller=3, key=0x7ffff75734bd "memory.limit_in_bytes", value=1073741824) at util/vircgroup.c:740
#3  0x00007ffff70ee518 in virCgroupSetMemory (group=0x0, kb=1048576) at util/vircgroup.c:1904
#4  0x00007ffff70ee675 in virCgroupSetMemoryHardLimit (group=0x0, kb=1048576) at util/vircgroup.c:1944
#5  0x00005555557d54c8 in lxcDomainSetMemoryParameters (dom=0x7fffe40cc420, params=0x7fffe409f100, nparams=1, flags=0) at lxc/lxc_driver.c:774
#6  0x00007ffff72c20f9 in virDomainSetMemoryParameters (domain=0x7fffe40cc420, params=0x7fffe409f100, nparams=1, flags=0) at libvirt.c:4051
#7  0x000055555561365f in remoteDispatchDomainSetMemoryParameters (server=0x555555eb7e00, client=0x555555ec4b10, msg=0x555555eb94e0, rerr=0x7fffec8bfb70, args=0x7fffe40b8510) at remote_dispatch.h:7621
#8  0x00005555556133fd in remoteDispatchDomainSetMemoryParametersHelper (server=0x555555eb7e00, client=0x555555ec4b10, msg=0x555555eb94e0, rerr=0x7fffec8bfb70, args=0x7fffe40b8510, ret=0x7fffe40b84f0) at remote_dispatch.h:7591
#9  0x00007ffff73b293f in virNetServerProgramDispatchCall (prog=0x555555ec3ae0, server=0x555555eb7e00, client=0x555555ec4b10, msg=0x555555eb94e0) at rpc/virnetserverprogram.c:435
#10 0x00007ffff73b207f in virNetServerProgramDispatch (prog=0x555555ec3ae0, server=0x555555eb7e00, client=0x555555ec4b10, msg=0x555555eb94e0) at rpc/virnetserverprogram.c:305
#11 0x00007ffff73a4d2c in virNetServerProcessMsg (srv=0x555555eb7e00, client=0x555555ec4b10, prog=0x555555ec3ae0, msg=0x555555eb94e0) at rpc/virnetserver.c:165
#12 0x00007ffff73a4e8d in virNetServerHandleJob (jobOpaque=0x555555ec3e30, opaque=0x555555eb7e00) at rpc/virnetserver.c:186
#13 0x00007ffff7187f3f in virThreadPoolWorker (opaque=0x555555eb7ac0) at util/virthreadpool.c:144
#14 0x00007ffff718733a in virThreadHelper (data=0x555555eb7890) at util/virthreadpthread.c:161
#15 0x00007ffff468ed89 in start_thread (arg=0x7fffec8c0700) at pthread_create.c:308
#16 0x00007ffff3da26bd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
Signed-off-by: Martin Kletzander <mkletzan@redhat.com> (cherry picked from commit 9faf3f29) Conflicts: src/lxc/lxc_driver.c
-
By Martin Kletzander
The function doesn't check whether the request is made for an active or inactive domain. Thus when the domain is not running it still tries accessing non-existing cgroups (priv->cgroup, which is NULL). I re-made the function in order for it to work the same way its qemu counterpart does.
Reproducer:
1) Define an LXC domain
2) Do 'virsh memtune <domain>'
Backtrace:
Thread 6 (Thread 0x7fffec8c0700 (LWP 13387)):
#0  0x00007ffff70edcc4 in virCgroupPathOfController (group=0x0, controller=3, key=0x7ffff75734bd "memory.limit_in_bytes", path=0x7fffec8bf750) at util/vircgroup.c:1764
#1  0x00007ffff70e958c in virCgroupGetValueStr (group=0x0, controller=3, key=0x7ffff75734bd "memory.limit_in_bytes", value=0x7fffec8bf7c0) at util/vircgroup.c:705
#2  0x00007ffff70e9d29 in virCgroupGetValueU64 (group=0x0, controller=3, key=0x7ffff75734bd "memory.limit_in_bytes", value=0x7fffec8bf810) at util/vircgroup.c:804
#3  0x00007ffff70ee706 in virCgroupGetMemoryHardLimit (group=0x0, kb=0x7fffec8bf8a8) at util/vircgroup.c:1962
#4  0x00005555557d590f in lxcDomainGetMemoryParameters (dom=0x7fffd40024a0, params=0x7fffd40027a0, nparams=0x7fffec8bfa24, flags=0) at lxc/lxc_driver.c:826
#5  0x00007ffff72c28d3 in virDomainGetMemoryParameters (domain=0x7fffd40024a0, params=0x7fffd40027a0, nparams=0x7fffec8bfa24, flags=0) at libvirt.c:4137
#6  0x000055555563714d in remoteDispatchDomainGetMemoryParameters (server=0x555555eb7e00, client=0x555555ebaef0, msg=0x555555ebb3e0, rerr=0x7fffec8bfb70, args=0x7fffd40024e0, ret=0x7fffd4002420) at remote.c:1895
#7  0x00005555556052c4 in remoteDispatchDomainGetMemoryParametersHelper (server=0x555555eb7e00, client=0x555555ebaef0, msg=0x555555ebb3e0, rerr=0x7fffec8bfb70, args=0x7fffd40024e0, ret=0x7fffd4002420) at remote_dispatch.h:4050
#8  0x00007ffff73b293f in virNetServerProgramDispatchCall (prog=0x555555ec3ae0, server=0x555555eb7e00, client=0x555555ebaef0, msg=0x555555ebb3e0) at rpc/virnetserverprogram.c:435
#9  0x00007ffff73b207f in virNetServerProgramDispatch (prog=0x555555ec3ae0, server=0x555555eb7e00, client=0x555555ebaef0, msg=0x555555ebb3e0) at rpc/virnetserverprogram.c:305
#10 0x00007ffff73a4d2c in virNetServerProcessMsg (srv=0x555555eb7e00, client=0x555555ebaef0, prog=0x555555ec3ae0, msg=0x555555ebb3e0) at rpc/virnetserver.c:165
#11 0x00007ffff73a4e8d in virNetServerHandleJob (jobOpaque=0x555555ebc7e0, opaque=0x555555eb7e00) at rpc/virnetserver.c:186
#12 0x00007ffff7187f3f in virThreadPoolWorker (opaque=0x555555eb7ac0) at util/virthreadpool.c:144
#13 0x00007ffff718733a in virThreadHelper (data=0x555555eb7890) at util/virthreadpthread.c:161
#14 0x00007ffff468ed89 in start_thread (arg=0x7fffec8c0700) at pthread_create.c:308
#15 0x00007ffff3da26bd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
Signed-off-by: Martin Kletzander <mkletzan@redhat.com> (cherry picked from commit f8c1cb90) Conflicts: src/lxc/lxc_driver.c
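Both LXC memtune fixes above boil down to the same guard; a minimal sketch, with simplified stand-ins for virDomainObj and the cgroup getter (none of these names are the driver's real ones):

#include <stdio.h>

struct privData { void *cgroup; };          /* NULL while the domain is off */
struct domObj   { int active; struct privData *privateData; };

/* assumed helper, standing in for the cgroup memory getter */
int getMemoryHardLimit(void *cgroup, unsigned long long *kb);

static int
lxcGetMemoryHardLimit(struct domObj *vm, unsigned long long *kb)
{
    struct privData *priv = vm->privateData;

    if (!vm->active) {
        fprintf(stderr, "domain is not running\n");
        return -1;                           /* report instead of crashing */
    }

    /* priv->cgroup is only guaranteed non-NULL for running domains. */
    return getMemoryHardLimit(priv->cgroup, kb);
}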
-
- 15 December 2013, 4 commits
-
-
By Cole Robinson
-
By Christophe Fergeau
The array of sasl_callback_t callbacks which is passed to sasl_client_new() must be kept alive as long as the created sasl_conn_t object is alive, as cyrus-sasl uses this structure internally for things like logging, so the memory used for callbacks must only be freed after sasl_dispose() has been called. During testing of successful SASL logins with virsh -c qemu+tls:///system list --all I've been getting invalid read reports from valgrind:
==9237== Invalid read of size 8
==9237==    at 0x6E93B6F: _sasl_getcallback (common.c:1745)
==9237==    by 0x6E95430: _sasl_log (common.c:1850)
==9237==    by 0x16593D87: digestmd5_client_mech_dispose (digestmd5.c:4580)
==9237==    by 0x6E91653: client_dispose (client.c:332)
==9237==    by 0x6E9476A: sasl_dispose (common.c:851)
==9237==    by 0x4E225A1: virNetSASLSessionDispose (virnetsaslcontext.c:678)
==9237==    by 0x4CBC551: virObjectUnref (virobject.c:262)
==9237==    by 0x4E254D1: virNetSocketDispose (virnetsocket.c:1042)
==9237==    by 0x4CBC551: virObjectUnref (virobject.c:262)
==9237==    by 0x4E2701C: virNetSocketEventFree (virnetsocket.c:1794)
==9237==    by 0x4C965D3: virEventPollCleanupHandles (vireventpoll.c:583)
==9237==    by 0x4C96987: virEventPollRunOnce (vireventpoll.c:652)
==9237==    by 0x4C94730: virEventRunDefaultImpl (virevent.c:274)
==9237==    by 0x12C7BA: vshEventLoop (virsh.c:2407)
==9237==    by 0x4CD3D04: virThreadHelper (virthreadpthread.c:161)
==9237==    by 0x7DAEF32: start_thread (pthread_create.c:309)
==9237==    by 0x8C86EAC: clone (clone.S:111)
==9237== Address 0xe2d61b0 is 0 bytes inside a block of size 168 free'd
==9237==    at 0x4A07577: free (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==9237==    by 0x4C73827: virFree (viralloc.c:580)
==9237==    by 0x4DE4BC7: remoteAuthSASL (remote_driver.c:4219)
==9237==    by 0x4DE33D0: remoteAuthenticate (remote_driver.c:3639)
==9237==    by 0x4DDBFAA: doRemoteOpen (remote_driver.c:832)
==9237==    by 0x4DDC8DC: remoteConnectOpen (remote_driver.c:1031)
==9237==    by 0x4D8595F: do_open (libvirt.c:1239)
==9237==    by 0x4D863F3: virConnectOpenAuth (libvirt.c:1481)
==9237==    by 0x12762B: vshReconnect (virsh.c:337)
==9237==    by 0x12C9B0: vshInit (virsh.c:2470)
==9237==    by 0x12E9A5: main (virsh.c:3338)
This commit changes virNetSASLSessionNewClient() to take ownership of the SASL callbacks. Then we can free them in virNetSASLSessionDispose() after the corresponding sasl_conn_t has been freed. (cherry picked from commit 13fdc6d6)
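A sketch of the ownership rule against the public cyrus-sasl API; the saslSession struct below is a simplified stand-in for virNetSASLSession and error handling is trimmed:

#include <stdlib.h>
#include <sasl/sasl.h>

struct saslSession {
    sasl_conn_t *conn;
    sasl_callback_t *callbacks;        /* owned by the session */
};

static struct saslSession *
saslSessionNew(const char *service, const char *hostname,
               sasl_callback_t *callbacks)
{
    struct saslSession *sess = calloc(1, sizeof(*sess));

    if (!sess)
        return NULL;

    if (sasl_client_new(service, hostname, NULL, NULL,
                        callbacks, 0, &sess->conn) != SASL_OK) {
        free(sess);
        return NULL;
    }
    sess->callbacks = callbacks;       /* take ownership of the array */
    return sess;
}

static void
saslSessionFree(struct saslSession *sess)
{
    if (!sess)
        return;
    sasl_dispose(&sess->conn);         /* may still call into the callbacks */
    free(sess->callbacks);             /* only now is it safe to free them */
    free(sess);
}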
-
By Jiri Denemark
The previous attempt (commit d65e0e14) removed just one of two libvirt-guests restarts that happened on libvirt-client update. Let's remove the last one too :-) https://bugzilla.redhat.com/show_bug.cgi?id=962225
Signed-off-by: Jiri Denemark <jdenemar@redhat.com> (cherry picked from commit 604f79b3)
-
By Don Dugger
This Python interface code is returning -1 on errors for the `baselineCPU' API. Since this API is supposed to return a pointer, the error return value should really be VIR_PY_NONE. NB: I've checked all the other APIs in this file and this is the only pointer API that is returning -1. Signed-off-by: Don Dugger <donald.d.dugger@intel.com> (crobinso: Upstream in libvirt-python.git)
-
- 22 November 2013, 1 commit
-
-
By Cole Robinson
Restarting an active libvirt-guests.service is the equivalent of doing: /usr/libexec/libvirt-guests.sh stop /usr/libexec/libvirt-guests.sh start Which in a default configuration will managedsave every running VM, and then restore them. Certainly not something we should do every time the libvirt-client RPM is updated. Just drop the try-restart attempt, I don't know what purpose it serves anyways. https://bugzilla.redhat.com/show_bug.cgi?id=962225 (cherry picked from commit d65e0e14)
-
- 20 November 2013, 3 commits
-
-
By Daniel P. Berrange
If the host side of an LXC container console disconnected and the guest side continued to write data, until the PTY buffer filled up, the LXC controller would busy wait. It would repeatedly see POLLHUP from poll() and not disable the watch. This was due to some bogus logic detecting blocking conditions. Upon seeing a POLLHUP we must disable all reading & writing from the PTY, and set up the epoll to wake us up again when the connection comes back. Signed-off-by: Daniel P. Berrange <berrange@redhat.com> (cherry picked from commit 5087a5a0)
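A rough sketch of that hang-up handling using plain epoll; the console struct and the reconnect hook are simplified assumptions, not the controller's actual code:

#include <stdint.h>
#include <sys/epoll.h>

struct console {
    int epollFd;
    int ptyFd;
    int ptyWatched;      /* 1 while the PTY fd is registered with epoll */
};

static void
handlePtyEvent(struct console *con, uint32_t events)
{
    if (events & EPOLLHUP) {
        /* Peer is gone: stop watching the fd entirely, otherwise
         * epoll_wait() keeps returning the same HUP and we busy-loop. */
        if (con->ptyWatched &&
            epoll_ctl(con->epollFd, EPOLL_CTL_DEL, con->ptyFd, NULL) == 0)
            con->ptyWatched = 0;
        return;
    }
    /* ... normal read/write forwarding ... */
}

static void
handleReconnect(struct console *con)
{
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = con->ptyFd };

    /* Connection came back: start watching the PTY again. */
    if (!con->ptyWatched &&
        epoll_ctl(con->epollFd, EPOLL_CTL_ADD, con->ptyFd, &ev) == 0)
        con->ptyWatched = 1;
}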
-
By Cole Robinson
Possible fix for occasional libvirt-guests failure at boot time: https://bugzilla.redhat.com/show_bug.cgi?id=906009 (cherry picked from commit d9203675)
-
By Guido Günther
Syslog is socket activated since at least systemd v35, so we can drop this dependency. Debian's lintian otherwise complains about it. References: http://www.freedesktop.org/wiki/Software/systemd/syslog/ http://lintian.debian.org/tags/systemd-service-file-refers-to-obsolete-target.html (cherry picked from commit 3c9e40a1)
-
- 18 November 2013, 2 commits
-
-
By Jeremy Fitzhardinge
Rather than casting the virBitmap pointer to uint8_t* and then using the structure contents as a byte array, use the virBitmap API to determine the bitmap size and test each bit. Signed-off-by: Jeremy Fitzhardinge <jeremy@goop.org> (cherry picked from commit ba1bf100)
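A sketch of the idea; the bitmap struct and accessors below are simplified stand-ins for the virBitmap API, so the point is only the shape of the code (go through size/test accessors, never peek at the raw words):

#include <stdbool.h>
#include <stdlib.h>

struct bitmap {                     /* simplified stand-in for virBitmap */
    size_t nbits;
    unsigned long *map;
};

static size_t
bitmapSize(const struct bitmap *bm)
{
    return bm->nbits;
}

static bool
bitmapIsSet(const struct bitmap *bm, size_t bit)
{
    const size_t per = sizeof(*bm->map) * 8;

    return bit < bm->nbits && ((bm->map[bit / per] >> (bit % per)) & 1);
}

/* Build a byte-array CPU map from the bitmap via its accessors, without
 * assuming anything about the bitmap's internal layout. */
static int
bitmapToCpumap(const struct bitmap *bm, unsigned char **cpumap, size_t *len)
{
    size_t n = bitmapSize(bm);

    *len = (n + 7) / 8;
    if (!(*cpumap = calloc(*len ? *len : 1, 1)))
        return -1;

    for (size_t i = 0; i < n; i++)
        if (bitmapIsSet(bm, i))
            (*cpumap)[i / 8] |= 1 << (i % 8);
    return 0;
}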
-
By Laine Stump
This should resolve: https://bugzilla.redhat.com/show_bug.cgi?id=1012085 libvirt previously recognized NFS, GFS2, OCFS2, and AFS filesystems as "shared", and thus eligible for exceptions to certain rules/actions about chowning image files before handing them off to a guest. This patch widens the definition of "shared filesystem" to include SMB and CIFS filesystems (aka "Windows file sharing"); both of these use the same protocol, but different drivers so there are different magic numbers for each. (cherry picked from commit e4e73337)
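A rough sketch of that kind of check on Linux using statfs(); the NFS/SMB/CIFS magic numbers are the usual kernel values, and the other shared filesystems libvirt recognizes are omitted here for brevity:

#include <sys/vfs.h>
#include <stdbool.h>

#ifndef NFS_SUPER_MAGIC
# define NFS_SUPER_MAGIC  0x6969
#endif
#ifndef SMB_SUPER_MAGIC
# define SMB_SUPER_MAGIC  0x517B
#endif
#ifndef CIFS_SUPER_MAGIC
# define CIFS_SUPER_MAGIC 0xFF534D42
#endif

static bool
isSharedFilesystem(const char *path)
{
    struct statfs sb;

    if (statfs(path, &sb) < 0)
        return false;

    switch ((unsigned long)sb.f_type) {
    case NFS_SUPER_MAGIC:
    case SMB_SUPER_MAGIC:   /* older smbfs driver */
    case CIFS_SUPER_MAGIC:  /* cifs driver, same protocol */
        return true;
    default:
        return false;
    }
}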
-
- 13 November 2013, 1 commit
-
-
By Ján Tomko
When opening a new connection to the driver, nwfilterOpen only succeeds if the driverState has been allocated. Move the privilege check in driver initialization before the state allocation to disable the driver. This changes the nwfilter-define error from: error: cannot create config directory (null): Bad address To: this function is not supported by the connection driver: virNWFilterDefineXML https://bugzilla.redhat.com/show_bug.cgi?id=1029266 (cherry picked from commit b7829f95)
-
- 07 November 2013, 4 commits
-
-
By Jiri Denemark
Our configure.ac says: Not all versions of gnutls include -lgcrypt, and so we add it explicitly for the calls to gcry_control/check_version Thus we cannot rely on gnutls-devel to bring grcypt-devel as a dependency. (cherry picked from commit 3b50a711)
-
By Cole Robinson
-
By Michal Privoznik
Since 16bcb3 we have a regression. The hard_limit is set unconditionally. By default the limit is zero. Hence, if the user hasn't configured any, we set zero in the cgroup subsystem, making the kernel kill the corresponding qemu process immediately. The proper fix is to set hard_limit iff the user has configured one. (cherry picked from commit 94a24dd3) Conflicts: src/qemu/qemu_cgroup.c
-
By Michal Privoznik
This function is to guess the correct limit for maximal memory usage by qemu for given domain. This can never be guessed correctly, not to mention all the pains and sleepless nights this code has caused. Once somebody discovers algorithm to solve the Halting Problem, we can compute the limit algorithmically. But till then, this code should never see the light of the release again. (cherry picked from commit 16bcb3b6) Conflicts: src/qemu/qemu_cgroup.c src/qemu/qemu_command.c src/qemu/qemu_domain.c src/qemu/qemu_domain.h src/qemu/qemu_hotplug.c
-
- 18 October 2013, 2 commits
-
-
By Zhou Yimin
Introduced by 7b87a3. When I quit a process which only registers VIR_DOMAIN_EVENT_ID_REBOOT, I got an error like: "libvirt: XML-RPC error : internal error: domain event 0 not registered". Adding the following code fixed it. Signed-off-by: Zhou Yimin <zhouyimin@huawei.com> Signed-off-by: Eric Blake <eblake@redhat.com> (cherry picked from commit 9712c251)
-
By Martin Kletzander
Commit a0b6a36f "fixed" what abfff210 broke (URI precedence), but there was still one more thing missing to fix. When using virsh parameters to set up debugging, those weren't honored, because at the time debugging was initialized, the arguments weren't parsed yet. To make everything work as expected, we need to initialize the debugging twice: once before the arguments are parsed (so we can debug option parsing properly) and then again after these options are parsed. As a side effect, this patch also fixes a leak when virsh is run with multiple '-l' parameters. Signed-off-by: Martin Kletzander <mkletzan@redhat.com> (cherry picked from commit ac43da70)
-
- 15 October 2013, 6 commits
-
-
By Martin Kletzander
Commit abfff210 changed the order of vshParseArgv() and vshInit() in order to fix debugging of parameter parsing. However, vshInit() did a vshReconnect() even though ctl->name wasn't set according to the '-c' parameter yet. In order to keep both issues fixed, I've split vshInit() into vshInitDebug() and vshInit(). One simple memleak of ctl->name is fixed as a part of this patch, since it is related to the issue it's fixing. Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=999323 (cherry picked from commit a0b6a36f)
-
By Liuji (Jeremy)
After freeing the bitmap pointer, we must set the pointer to NULL. This avoids any further use of the freed memory behind the bitmap pointer. https://bugzilla.redhat.com/show_bug.cgi?id=1006710
Signed-off-by: Liuji (Jeremy) <jeremy.liu@huawei.com> (cherry picked from commit ef5d51d4)
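The pattern in question, as a trivial sketch (the struct is a hypothetical holder for the freed bitmap, not the driver's real type):

#include <stdlib.h>

struct nodeInfo {
    void *cpumap;          /* stand-in for the bitmap being freed */
};

static void
clearCpumap(struct nodeInfo *info)
{
    free(info->cpumap);
    info->cpumap = NULL;   /* the actual fix: leave no dangling pointer */
}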
-
By Daniel Hansel
Introduced by commit 3f029fb5, the RPM build was broken due to a missing LXC test case. Signed-off-by: Daniel Hansel <daniel.hansel@linux.vnet.ibm.com> (cherry picked from commit 6285c17f)
-
By Ján Tomko
Since 76b644c3 when the support for RAM filesystems was introduced, libvirt accepted the following XML:
<source usage='1024' unit='KiB'/>
This was parsed correctly and internally stored in bytes, but it was formatted as (with an extra 's'):
<source usage='1024' units='KiB'/>
When read again, this was treated as if the units were missing, meaning libvirt was unable to parse its own XML correctly. The usage attribute was documented as being in KiB, but it was not scaled if the unit was missing. Transient domains still worked, because this was balanced by an extra 'k' in the mount options. This patch:
* Changes the parser to use 'units' instead of 'unit', as the latter was never documented (fixing persistent domains) and some programs (libvirt-glib, libvirt-sandbox) already parse the 'units' attribute.
* Removes the extra 'k' from the tmpfs mount options, which is needed because now we parse our own XML correctly.
* Changes the default input unit to KiB to match the documentation, fixing: https://bugzilla.redhat.com/show_bug.cgi?id=1015689
(cherry picked from commit 3f029fb5) Conflicts: src/conf/domain_conf.c src/conf/domain_conf.h - missing format src/lxc/lxc_container.c - virAsprintf doesn't report OOM errors tests/lxcxml2xmltest.c - missing format test
-
By Michal Privoznik
After successful @cmd construction the memory where @keys points to is part of @cmd. Avoid double freeing it. (cherry picked from commit 3e8343e1)
-
By Jiri Denemark
https://bugzilla.redhat.com/show_bug.cgi?id=1006864 Commit 38ab1225 changed the default value of ret from true to false but forgot to set ret = true when job is NONE. Thus, virsh domjobinfo returned 1 when there was no job running for a domain but it used to (and should) return 0 in this case. (cherry picked from commit f084caae)
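A sketch of the corrected control flow; the names and types are simplified, not the qemu driver's actual ones:

#include <stdbool.h>

enum jobType { JOB_NONE, JOB_ACTIVE };

static bool
getJobInfo(enum jobType job, int *jobsRunning)
{
    bool ret = false;

    if (job == JOB_NONE) {
        *jobsRunning = 0;
        ret = true;        /* "no job" is a valid answer, not an error */
        goto cleanup;
    }

    /* ... fill in statistics for the active job ... */
    *jobsRunning = 1;
    ret = true;

 cleanup:
    return ret;
}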
-