- 16 January 2014, 3 commits
-
-
Submitted by Eric Blake
Ever since ACL filtering was added in commit 76397360 (v1.1.1), a user could still use event registration to obtain access to a domain that they could not normally access via virDomainLookup* or virConnectListAllDomains and friends. We already have the framework in the RPC generator for creating the filter, and previous cleanup patches got us to the point that we can now wire the filter through the entire object event stack. Furthermore, whether or not domain:getattr is honored, use of global events is a form of obtaining a list of domains, which is covered by connect:search_domains added in a93cd08f (v1.1.0). Ideally, we'd have a way to enforce connect:search_domains when doing global registrations while omitting that check on a per-domain registration. But this patch just unconditionally requires connect:search_domains, even when no list could be obtained, based on the following observations:

1. Administrators are unlikely to grant domain:getattr for one or all domains while still denying connect:search_domains - a user that is able to manage domains will want to be able to manage them efficiently, but efficient management includes being able to list the domains they can access. The idea of denying connect:search_domains while still granting access to individual domains is therefore not adding any real security, but just serves as a layer of obscurity to annoy the end user.

2. In the current implementation, domain events are filtered on the client; the server has no idea if a domain filter was requested, and must therefore assume that all domain event requests are global. Even if we fix the RPC protocol to allow for server-side filtering for newer client/server combos, making the connect:search_domains ACL check conditional on whether the domain argument was NULL won't benefit older clients.

Therefore, we choose to document that connect:search_domains is a prerequisite to any domain event management. Network events need the same treatment, with the obvious change of using connect:search_networks and network:getattr.

* src/access/viraccessperm.h (VIR_ACCESS_PERM_CONNECT_SEARCH_DOMAINS)
  (VIR_ACCESS_PERM_CONNECT_SEARCH_NETWORKS): Document additional effect of the permission.
* src/conf/domain_event.h (virDomainEventStateRegister)
  (virDomainEventStateRegisterID): Add new parameter.
* src/conf/network_event.h (virNetworkEventStateRegisterID): Likewise.
* src/conf/object_event_private.h (virObjectEventStateRegisterID): Likewise.
* src/conf/object_event.c (_virObjectEventCallback): Track a filter.
  (virObjectEventDispatchMatchCallback): Use filter.
  (virObjectEventCallbackListAddID): Register filter.
* src/conf/domain_event.c (virDomainEventFilter): New function.
  (virDomainEventStateRegister, virDomainEventStateRegisterID): Adjust callers.
* src/conf/network_event.c (virNetworkEventFilter): New function.
  (virNetworkEventStateRegisterID): Adjust caller.
* src/remote/remote_protocol.x (REMOTE_PROC_CONNECT_DOMAIN_EVENT_REGISTER)
  (REMOTE_PROC_CONNECT_DOMAIN_EVENT_REGISTER_ANY)
  (REMOTE_PROC_CONNECT_NETWORK_EVENT_REGISTER_ANY): Generate a filter, and require connect:search_domains instead of weaker connect:read.
* src/test/test_driver.c (testConnectDomainEventRegister)
  (testConnectDomainEventRegisterAny)
  (testConnectNetworkEventRegisterAny): Update callers.
* src/remote/remote_driver.c (remoteConnectDomainEventRegister)
  (remoteConnectDomainEventRegisterAny): Likewise.
* src/xen/xen_driver.c (xenUnifiedConnectDomainEventRegister)
  (xenUnifiedConnectDomainEventRegisterAny): Likewise.
* src/vbox/vbox_tmpl.c (vboxDomainGetXMLDesc): Likewise.
* src/libxl/libxl_driver.c (libxlConnectDomainEventRegister)
  (libxlConnectDomainEventRegisterAny): Likewise.
* src/qemu/qemu_driver.c (qemuConnectDomainEventRegister)
  (qemuConnectDomainEventRegisterAny): Likewise.
* src/uml/uml_driver.c (umlConnectDomainEventRegister)
  (umlConnectDomainEventRegisterAny): Likewise.
* src/network/bridge_driver.c (networkConnectNetworkEventRegisterAny): Likewise.
* src/lxc/lxc_driver.c (lxcConnectDomainEventRegister)
  (lxcConnectDomainEventRegisterAny): Likewise.

Signed-off-by: Eric Blake <eblake@redhat.com>
(cherry picked from commit f9f56340)

Conflicts:
	src/conf/object_event.c - not backporting event refactoring
	src/conf/object_event_private.h - likewise
	src/conf/network_event.c - not backporting network events
	src/conf/network_event.h - likewise
	src/network/bridge_driver.c - likewise
	src/access/viraccessperm.h - likewise
	src/remote/remote_protocol.x - likewise
	src/conf/domain_event.c - includes code that upstream has in object_event
	src/conf/domain_event.h - context
	src/libxl/libxl_driver.c - context
	src/lxc/lxc_driver.c - context
	src/remote/remote_driver.c - context, not backporting network events
	src/test/test_driver.c - context, not backporting network events
	src/uml/uml_driver.c - context
	src/xen/xen_driver.c - context
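To make the mechanism easier to picture, here is a minimal, hedged sketch of the per-callback filtering idea this change threads through the event stack. The type, struct, and helper names below (domainEventACLFilter, callbackEntry, mayDeliver) are illustrative assumptions rather than the actual object_event.c/domain_event.c symbols; only the concept - store an ACL filter alongside each registered callback and consult it before dispatching an event to that callback - comes from the commit message.

    /* Illustrative sketch only, not libvirt's internal API. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <libvirt/libvirt.h>

    typedef bool (*domainEventACLFilter)(virConnectPtr conn,
                                         const char *domname,
                                         const unsigned char *uuid);

    struct callbackEntry {
        int callbackID;
        virConnectPtr conn;
        domainEventACLFilter filter;   /* NULL means "no ACL filtering" */
    };

    /* Consulted for every event before it is delivered to this callback. */
    static bool
    mayDeliver(const struct callbackEntry *cb,
               const char *domname,
               const unsigned char *uuid)
    {
        return cb->filter == NULL || cb->filter(cb->conn, domname, uuid);
    }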
-
Submitted by Eric Blake
While running objecteventtest, it was found that valgrind pointed out the following memory leak:

==13464== 5 bytes in 1 blocks are definitely lost in loss record 7 of 134
==13464==    at 0x4A0887C: malloc (vg_replace_malloc.c:270)
==13464==    by 0x341F485E21: strdup (strdup.c:42)
==13464==    by 0x4CAE28F: virStrdup (virstring.c:554)
==13464==    by 0x4CF3CBE: virObjectEventCallbackListAddID (object_event.c:286)
==13464==    by 0x4CF49CA: virObjectEventStateRegisterID (object_event.c:729)
==13464==    by 0x4CF73FE: virDomainEventStateRegisterID (domain_event.c:1424)
==13464==    by 0x4D7358F: testConnectDomainEventRegisterAny (test_driver.c:6032)
==13464==    by 0x4D600C8: virConnectDomainEventRegisterAny (libvirt.c:19128)
==13464==    by 0x402409: testDomainStartStopEvent (objecteventtest.c:232)
==13464==    by 0x403451: virtTestRun (testutils.c:138)
==13464==    by 0x402012: mymain (objecteventtest.c:395)
==13464==    by 0x403AF2: virtTestMain (testutils.c:593)
==13464==

(cherry picked from commit 34d52b34)

Conflicts:
	src/conf/object_event.c - the 1.2.1 refactoring to object_event was not backported, so the change is applied directly in the older domain_event.c instead
-
Submitted by Michal Privoznik
The @list->callbacks array is inflated whenever a new event callback is added, e.g. via virDomainEventCallbackListAddID(). However, when freeing the list we freed the items within the array but forgot to free the array itself.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
(cherry picked from commit ea13a759)
-
- 15 January 2014, 7 commits
-
-
Submitted by Jiri Denemark
https://bugzilla.redhat.com/show_bug.cgi?id=1047577

When writing commit 173c2914, I missed the fact that virNetServerClientClose unlocks the client object before actually clearing client->sock, and thus it is possible to hit a window where client->keepalive is NULL while client->sock is not NULL. I was thinking client->sock == NULL would be a better check for a closed connection, but apparently we have to go with client->keepalive == NULL to actually fix the crash.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
(cherry picked from commit 066c8ef6)
-
Submitted by Jiri Denemark
https://bugzilla.redhat.com/show_bug.cgi?id=1047577

When a client closes its connection to libvirtd early during virConnectOpen, more specifically just after making the REMOTE_PROC_CONNECT_SUPPORTS_FEATURE call to check if VIR_DRV_FEATURE_PROGRAM_KEEPALIVE is supported, without even waiting for the result, libvirtd may crash due to a race in keep-alive initialization. Once it receives the REMOTE_PROC_CONNECT_SUPPORTS_FEATURE call, the daemon's event loop delegates it to a worker thread. If the event loop detects EOF on the connection and calls virNetServerClientClose before the worker thread starts to handle the REMOTE_PROC_CONNECT_SUPPORTS_FEATURE call, client->keepalive will already have been disposed by the time virNetServerClientStartKeepAlive gets called from remoteDispatchConnectSupportsFeature. Because the flow is common for both authenticated and read-only connections, even unprivileged clients may cause the daemon to crash.

To avoid the crash, virNetServerClientStartKeepAlive needs to check whether the connection is still open before starting the keep-alive protocol.

Every libvirt release since 0.9.8 is affected by this bug.

(cherry picked from commit 173c2914)
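A minimal sketch of the guard described above, assuming the field names from the commit message (client->sock, client->keepalive) and generic object locking; virKeepAliveStart() is the real keep-alive entry point, but the function below illustrates the check rather than reproducing the exact patch.

    int
    virNetServerClientStartKeepAliveSketch(virNetServerClientPtr client)
    {
        int ret = -1;

        virObjectLock(client);              /* assumed locking helper */

        /* If virNetServerClientClose() already ran from the event loop,
         * client->keepalive has been disposed; refuse to start keep-alive
         * instead of dereferencing it. */
        if (!client->sock || !client->keepalive)
            goto cleanup;

        ret = virKeepAliveStart(client->keepalive, 0, 0);

     cleanup:
        virObjectUnlock(client);
        return ret;
    }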
-
Submitted by Jiri Denemark
CVE-2013-6458

Every API that is going to begin a job should do that before fetching data from vm->def.

(cherry picked from commit 3b564259)
-
Submitted by Jiri Denemark
Every API that is going to begin a job should do that before fetching data from vm->def. (cherry picked from commit ff5f30b6)
-
Submitted by Jiri Denemark
CVE-2013-6458

Every API that is going to begin a job should do that before fetching data from vm->def.

(cherry picked from commit f93d2caa)
-
Submitted by Jiri Denemark
CVE-2013-6458

Generally, every API that is going to begin a job should do that before fetching data from vm->def. However, qemuDomainGetBlockInfo does not know whether it will have to start a job or not before checking vm->def. To avoid using a disk alias that might have been freed while we were waiting for a job, we use its copy. In case the disk was removed in the meantime, we will fail with a "cannot find statistics for device '...'" error message.

(cherry picked from commit b7992595)
-
Submitted by Jiri Denemark
CVE-2013-6458
https://bugzilla.redhat.com/show_bug.cgi?id=1043069

When virDomainDetachDeviceFlags is called concurrently to virDomainBlockStats, libvirtd may crash because qemuDomainBlockStats finds a disk in vm->def before getting a job on the domain and uses the disk pointer after getting the job. However, the domain is unlocked while waiting on the job condition and thus the data behind the disk pointer may disappear. This happens when thread 1 runs virDomainDetachDeviceFlags and enters the monitor to actually remove the disk. Then another thread starts running virDomainBlockStats, finds the disk in vm->def, and while it is waiting on the job condition (owned by the first thread), the first thread finishes the disk removal. When the second thread gets the job, the memory pointed to by the disk pointer is already gone.

That said, every API that is going to begin a job should do that before fetching data from vm->def.

(cherry picked from commit db86da5c)
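As a hedged sketch of the pattern these CVE-2013-6458 fixes enforce - acquire the job before dereferencing anything reachable from vm->def - assuming the 2013-era QEMU driver job primitives and a hypothetical lookupDiskByPath() helper standing in for the driver's disk lookup:

    static int
    qemuDomainBlockStatsSketch(virQEMUDriverPtr driver,
                               virDomainObjPtr vm,
                               const char *path)
    {
        virDomainDiskDefPtr disk;
        int ret = -1;

        /* Wrong (pre-fix) order: look the disk up here and then wait for the
         * job.  The domain object is unlocked while waiting, so a concurrent
         * virDomainDetachDeviceFlags() can free the disk under us. */

        if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0)
            return -1;

        /* Right order: only touch vm->def once we own the job. */
        if (!(disk = lookupDiskByPath(vm->def, path))) {   /* hypothetical helper */
            virReportError(VIR_ERR_INVALID_ARG, _("invalid path: %s"), path);
            goto endjob;
        }

        /* ... enter the monitor and query stats for disk->info.alias ... */
        ret = 0;

     endjob:
        if (!qemuDomainObjEndJob(driver, vm))
            vm = NULL;          /* era-specific idiom: domain may be gone */
        return ret;
    }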
-
- 09 January 2014, 4 commits
-
-
Submitted by Zeng Junliang
If a migration is cancelled, the bit reserved for its migration port should be cleared from the port bitmap as well.

Signed-off-by: Zeng Junliang <zengjunliang@huawei.com>
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
(cherry picked from commit c92ca769)
-
Submitted by Michal Privoznik
Commit e3ef20d7 allows users to configure the migration port range via qemu.conf. However, it forgot to update the augeas definition file, and the test data was wrong as well.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
(cherry picked from commit d9be5a71)

Conflicts:
	src/qemu/libvirtd_qemu.aug
	src/qemu/test_libvirtd_qemu.aug.in
-
Submitted by Jiri Denemark
https://bugzilla.redhat.com/show_bug.cgi?id=1019053

(cherry picked from commit e3ef20d7)

Conflicts (missing support for changing the migration listen address):
	src/qemu/qemu.conf
	src/qemu/qemu_conf.h
	src/qemu/test_libvirtd_qemu.aug.in
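For reference, the tunable this series exposes lives in qemu.conf; the excerpt below is a hedged example - the option names migration_port_min/migration_port_max and the 49152-49215 default range match the qemu.conf shipped around this release, but check the installed file for your exact version:

    # /etc/libvirt/qemu.conf (excerpt)
    # Port range the QEMU driver hands out to incoming migrations.
    migration_port_min = 49152
    migration_port_max = 49215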
-
Submitted by Wang Yufei
https://bugzilla.redhat.com/show_bug.cgi?id=1019053

When we migrate VMs concurrently, there is a chance that libvirtd on the destination assigns the same port to different migrations, which will lead to migration failure during the prepare phase on the destination. So we use virPortAllocator here to solve the problem.

Signed-off-by: Wang Yufei <james.wangyufei@huawei.com>
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
(cherry picked from commit 0196845d)

Conflicts (missing support for changing the migration listen address):
	src/qemu/qemu_migration.c
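A hedged sketch of the allocator pattern this relies on; virPortAllocatorAcquire() and virPortAllocatorRelease() are the real internal helpers, but their prototypes shifted between releases, so treat the signatures below as approximations of the 2013-era virportallocator.h rather than a definitive reference.

    #include "virportallocator.h"

    static int
    prepareWithAllocatedPort(virPortAllocatorPtr migrationPorts)
    {
        unsigned short port = 0;
        int ret = -1;

        /* Atomically reserves a free port from the configured range, so two
         * concurrent incoming migrations can no longer be handed the same
         * port by the destination daemon. */
        if (virPortAllocatorAcquire(migrationPorts, &port) < 0)
            return -1;

        /* ... start QEMU listening for the incoming migration on "port" ... */
        ret = 0;

        /* Release on every exit path, including cancelled or failed
         * migrations, otherwise the bit stays set and the port leaks. */
        virPortAllocatorRelease(migrationPorts, port);
        return ret;
    }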
-
- 29 December 2013, 1 commit
-
-
Submitted by Dario Faggioli
In libxlDomainGetNumaParameters(), call libxl_bitmap_init() as soon as possible, which avoids reaching 'cleanup:', where libxl_bitmap_dispose() happens, without having initialized the nodemap, and hence crashing after some invalid free()s:

 # ./daemon/libvirtd -v
 *** Error in `/home/xen/libvirt.git/daemon/.libs/lt-libvirtd': munmap_chunk(): invalid pointer: 0x00007fdd42592666 ***
 ======= Backtrace: =========
 /lib64/libc.so.6(+0x7bbe7)[0x7fdd3f767be7]
 /lib64/libxenlight.so.4.3(libxl_bitmap_dispose+0xd)[0x7fdd2c88c045]
 /home/xen/libvirt.git/daemon/.libs/../../src/.libs/libvirt_driver_libxl.so(+0x12d26)[0x7fdd2caccd26]
 /home/xen/libvirt.git/src/.libs/libvirt.so.0(virDomainGetNumaParameters+0x15c)[0x7fdd4247898c]
 /home/xen/libvirt.git/daemon/.libs/lt-libvirtd(+0x1d9a2)[0x7fdd42ecc9a2]
 /home/xen/libvirt.git/src/.libs/libvirt.so.0(virNetServerProgramDispatch+0x3da)[0x7fdd424e9eaa]
 /home/xen/libvirt.git/src/.libs/libvirt.so.0(+0x1a6f38)[0x7fdd424e3f38]
 /home/xen/libvirt.git/src/.libs/libvirt.so.0(+0xa81e5)[0x7fdd423e51e5]
 /home/xen/libvirt.git/src/.libs/libvirt.so.0(+0xa783e)[0x7fdd423e483e]
 /lib64/libpthread.so.0(+0x7c53)[0x7fdd3febbc53]
 /lib64/libc.so.6(clone+0x6d)[0x7fdd3f7e1dbd]

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>
(cherry picked from commit f9ee91d3)
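A minimal sketch of the init-before-cleanup pattern the fix applies; libxl_bitmap_init() and libxl_bitmap_dispose() are the libxl calls named above, while the wrapper function and the libxl_node_bitmap_alloc() usage are illustrative assumptions:

    #include <libxl.h>
    #include <libxl_utils.h>

    static int
    getNodemapSketch(libxl_ctx *ctx)
    {
        libxl_bitmap nodemap;
        int ret = -1;

        /* Initialize first, so every error path can safely jump to cleanup. */
        libxl_bitmap_init(&nodemap);

        if (libxl_node_bitmap_alloc(ctx, &nodemap, 0) < 0)
            goto cleanup;

        /* ... fill in and consume the nodemap ... */
        ret = 0;

     cleanup:
        /* Safe even if the bitmap was never allocated, because it was
         * zero-initialized above; without the init this disposes garbage. */
        libxl_bitmap_dispose(&nodemap);
        return ret;
    }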
-
- 20 December 2013, 2 commits
-
-
Submitted by Martin Kletzander
The function does not check whether the request is made for an active or inactive domain. Thus when the domain is not running it still tries accessing non-existing cgroups (priv->cgroup, which is NULL). I re-made the function so that it works the same way its QEMU counterpart does.

Reproducer:
 1) Define an LXC domain
 2) Do 'virsh memtune <domain> --hard-limit 133T'

Backtrace:
 Thread 6 (Thread 0x7fffec8c0700 (LWP 26826)):
 #0  0x00007ffff70edcc4 in virCgroupPathOfController (group=0x0, controller=3, key=0x7ffff75734bd "memory.limit_in_bytes", path=0x7fffec8bf718) at util/vircgroup.c:1764
 #1  0x00007ffff70e9206 in virCgroupSetValueStr (group=0x0, controller=3, key=0x7ffff75734bd "memory.limit_in_bytes", value=0x7fffe409f360 "1073741824") at util/vircgroup.c:669
 #2  0x00007ffff70e98b4 in virCgroupSetValueU64 (group=0x0, controller=3, key=0x7ffff75734bd "memory.limit_in_bytes", value=1073741824) at util/vircgroup.c:740
 #3  0x00007ffff70ee518 in virCgroupSetMemory (group=0x0, kb=1048576) at util/vircgroup.c:1904
 #4  0x00007ffff70ee675 in virCgroupSetMemoryHardLimit (group=0x0, kb=1048576) at util/vircgroup.c:1944
 #5  0x00005555557d54c8 in lxcDomainSetMemoryParameters (dom=0x7fffe40cc420, params=0x7fffe409f100, nparams=1, flags=0) at lxc/lxc_driver.c:774
 #6  0x00007ffff72c20f9 in virDomainSetMemoryParameters (domain=0x7fffe40cc420, params=0x7fffe409f100, nparams=1, flags=0) at libvirt.c:4051
 #7  0x000055555561365f in remoteDispatchDomainSetMemoryParameters (server=0x555555eb7e00, client=0x555555ec4b10, msg=0x555555eb94e0, rerr=0x7fffec8bfb70, args=0x7fffe40b8510) at remote_dispatch.h:7621
 #8  0x00005555556133fd in remoteDispatchDomainSetMemoryParametersHelper (server=0x555555eb7e00, client=0x555555ec4b10, msg=0x555555eb94e0, rerr=0x7fffec8bfb70, args=0x7fffe40b8510, ret=0x7fffe40b84f0) at remote_dispatch.h:7591
 #9  0x00007ffff73b293f in virNetServerProgramDispatchCall (prog=0x555555ec3ae0, server=0x555555eb7e00, client=0x555555ec4b10, msg=0x555555eb94e0) at rpc/virnetserverprogram.c:435
 #10 0x00007ffff73b207f in virNetServerProgramDispatch (prog=0x555555ec3ae0, server=0x555555eb7e00, client=0x555555ec4b10, msg=0x555555eb94e0) at rpc/virnetserverprogram.c:305
 #11 0x00007ffff73a4d2c in virNetServerProcessMsg (srv=0x555555eb7e00, client=0x555555ec4b10, prog=0x555555ec3ae0, msg=0x555555eb94e0) at rpc/virnetserver.c:165
 #12 0x00007ffff73a4e8d in virNetServerHandleJob (jobOpaque=0x555555ec3e30, opaque=0x555555eb7e00) at rpc/virnetserver.c:186
 #13 0x00007ffff7187f3f in virThreadPoolWorker (opaque=0x555555eb7ac0) at util/virthreadpool.c:144
 #14 0x00007ffff718733a in virThreadHelper (data=0x555555eb7890) at util/virthreadpthread.c:161
 #15 0x00007ffff468ed89 in start_thread (arg=0x7fffec8c0700) at pthread_create.c:308
 #16 0x00007ffff3da26bd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
(cherry picked from commit 9faf3f29)
-
Submitted by Martin Kletzander
The function does not check whether the request is made for an active or inactive domain. Thus when the domain is not running it still tries accessing non-existing cgroups (priv->cgroup, which is NULL). I re-made the function so that it works the same way its QEMU counterpart does.

Reproducer:
 1) Define an LXC domain
 2) Do 'virsh memtune <domain>'

Backtrace:
 Thread 6 (Thread 0x7fffec8c0700 (LWP 13387)):
 #0  0x00007ffff70edcc4 in virCgroupPathOfController (group=0x0, controller=3, key=0x7ffff75734bd "memory.limit_in_bytes", path=0x7fffec8bf750) at util/vircgroup.c:1764
 #1  0x00007ffff70e958c in virCgroupGetValueStr (group=0x0, controller=3, key=0x7ffff75734bd "memory.limit_in_bytes", value=0x7fffec8bf7c0) at util/vircgroup.c:705
 #2  0x00007ffff70e9d29 in virCgroupGetValueU64 (group=0x0, controller=3, key=0x7ffff75734bd "memory.limit_in_bytes", value=0x7fffec8bf810) at util/vircgroup.c:804
 #3  0x00007ffff70ee706 in virCgroupGetMemoryHardLimit (group=0x0, kb=0x7fffec8bf8a8) at util/vircgroup.c:1962
 #4  0x00005555557d590f in lxcDomainGetMemoryParameters (dom=0x7fffd40024a0, params=0x7fffd40027a0, nparams=0x7fffec8bfa24, flags=0) at lxc/lxc_driver.c:826
 #5  0x00007ffff72c28d3 in virDomainGetMemoryParameters (domain=0x7fffd40024a0, params=0x7fffd40027a0, nparams=0x7fffec8bfa24, flags=0) at libvirt.c:4137
 #6  0x000055555563714d in remoteDispatchDomainGetMemoryParameters (server=0x555555eb7e00, client=0x555555ebaef0, msg=0x555555ebb3e0, rerr=0x7fffec8bfb70, args=0x7fffd40024e0, ret=0x7fffd4002420) at remote.c:1895
 #7  0x00005555556052c4 in remoteDispatchDomainGetMemoryParametersHelper (server=0x555555eb7e00, client=0x555555ebaef0, msg=0x555555ebb3e0, rerr=0x7fffec8bfb70, args=0x7fffd40024e0, ret=0x7fffd4002420) at remote_dispatch.h:4050
 #8  0x00007ffff73b293f in virNetServerProgramDispatchCall (prog=0x555555ec3ae0, server=0x555555eb7e00, client=0x555555ebaef0, msg=0x555555ebb3e0) at rpc/virnetserverprogram.c:435
 #9  0x00007ffff73b207f in virNetServerProgramDispatch (prog=0x555555ec3ae0, server=0x555555eb7e00, client=0x555555ebaef0, msg=0x555555ebb3e0) at rpc/virnetserverprogram.c:305
 #10 0x00007ffff73a4d2c in virNetServerProcessMsg (srv=0x555555eb7e00, client=0x555555ebaef0, prog=0x555555ec3ae0, msg=0x555555ebb3e0) at rpc/virnetserver.c:165
 #11 0x00007ffff73a4e8d in virNetServerHandleJob (jobOpaque=0x555555ebc7e0, opaque=0x555555eb7e00) at rpc/virnetserver.c:186
 #12 0x00007ffff7187f3f in virThreadPoolWorker (opaque=0x555555eb7ac0) at util/virthreadpool.c:144
 #13 0x00007ffff718733a in virThreadHelper (data=0x555555eb7890) at util/virthreadpthread.c:161
 #14 0x00007ffff468ed89 in start_thread (arg=0x7fffec8c0700) at pthread_create.c:308
 #15 0x00007ffff3da26bd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
(cherry picked from commit f8c1cb90)
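Both memtune fixes boil down to the same guard; below is a hedged sketch of the get path, assuming the LXC driver's private data layout of that era (priv->cgroup, vm->def->mem.hard_limit). It illustrates the idea rather than reproducing the actual patch.

    static int
    lxcGetMemoryHardLimitSketch(virDomainObjPtr vm,
                                unsigned long long *limit)
    {
        virLXCDomainObjPrivatePtr priv = vm->privateData;

        if (virDomainObjIsActive(vm)) {
            /* priv->cgroup only exists while the container is running; for
             * an inactive domain it is NULL and must not be dereferenced. */
            return virCgroupGetMemoryHardLimit(priv->cgroup, limit);
        }

        /* Inactive domain: report the configured value instead of poking
         * non-existent cgroups. */
        *limit = vm->def->mem.hard_limit;
        return 0;
    }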
-
- 15 December 2013, 4 commits
-
-
Submitted by Cole Robinson
-
Submitted by Christophe Fergeau
The array of sasl_callback_t callbacks which is passed to sasl_client_new() must be kept alive as long as the created sasl_conn_t object is alive, as cyrus-sasl uses this structure internally for things like logging, so the memory used for callbacks must only be freed after sasl_dispose() has been called.

During testing of successful SASL logins with
 virsh -c qemu+tls:///system list --all
I've been getting invalid read reports from valgrind:

==9237== Invalid read of size 8
==9237==    at 0x6E93B6F: _sasl_getcallback (common.c:1745)
==9237==    by 0x6E95430: _sasl_log (common.c:1850)
==9237==    by 0x16593D87: digestmd5_client_mech_dispose (digestmd5.c:4580)
==9237==    by 0x6E91653: client_dispose (client.c:332)
==9237==    by 0x6E9476A: sasl_dispose (common.c:851)
==9237==    by 0x4E225A1: virNetSASLSessionDispose (virnetsaslcontext.c:678)
==9237==    by 0x4CBC551: virObjectUnref (virobject.c:262)
==9237==    by 0x4E254D1: virNetSocketDispose (virnetsocket.c:1042)
==9237==    by 0x4CBC551: virObjectUnref (virobject.c:262)
==9237==    by 0x4E2701C: virNetSocketEventFree (virnetsocket.c:1794)
==9237==    by 0x4C965D3: virEventPollCleanupHandles (vireventpoll.c:583)
==9237==    by 0x4C96987: virEventPollRunOnce (vireventpoll.c:652)
==9237==    by 0x4C94730: virEventRunDefaultImpl (virevent.c:274)
==9237==    by 0x12C7BA: vshEventLoop (virsh.c:2407)
==9237==    by 0x4CD3D04: virThreadHelper (virthreadpthread.c:161)
==9237==    by 0x7DAEF32: start_thread (pthread_create.c:309)
==9237==    by 0x8C86EAC: clone (clone.S:111)
==9237== Address 0xe2d61b0 is 0 bytes inside a block of size 168 free'd
==9237==    at 0x4A07577: free (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==9237==    by 0x4C73827: virFree (viralloc.c:580)
==9237==    by 0x4DE4BC7: remoteAuthSASL (remote_driver.c:4219)
==9237==    by 0x4DE33D0: remoteAuthenticate (remote_driver.c:3639)
==9237==    by 0x4DDBFAA: doRemoteOpen (remote_driver.c:832)
==9237==    by 0x4DDC8DC: remoteConnectOpen (remote_driver.c:1031)
==9237==    by 0x4D8595F: do_open (libvirt.c:1239)
==9237==    by 0x4D863F3: virConnectOpenAuth (libvirt.c:1481)
==9237==    by 0x12762B: vshReconnect (virsh.c:337)
==9237==    by 0x12C9B0: vshInit (virsh.c:2470)
==9237==    by 0x12E9A5: main (virsh.c:3338)

This commit changes virNetSASLSessionNewClient() to take ownership of the SASL callbacks. Then we can free them in virNetSASLSessionDispose() after the corresponding sasl_conn_t has been freed.

(cherry picked from commit 13fdc6d6)
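The ownership rule is easiest to see in plain cyrus-sasl terms. The hedged sketch below shows the underlying lifetime constraint with raw sasl_client_new()/sasl_dispose() calls; in libvirt this is the responsibility virNetSASLSessionNewClient() now takes over, and the service/host strings here are placeholders.

    #include <stdlib.h>
    #include <sasl/sasl.h>

    static int
    saslLifetimeSketch(void)
    {
        sasl_conn_t *conn = NULL;
        /* Must stay allocated until sasl_dispose() returns: cyrus-sasl keeps
         * this pointer and may use it internally, e.g. for logging. */
        sasl_callback_t *cbs = calloc(1, sizeof(*cbs));
        if (!cbs)
            return -1;
        cbs[0].id = SASL_CB_LIST_END;

        if (sasl_client_init(NULL) != SASL_OK)
            goto error;
        if (sasl_client_new("libvirt", "example.org", NULL, NULL,
                            cbs, 0, &conn) != SASL_OK)
            goto error;

        /* ... authenticate and use the connection ... */

        sasl_dispose(&conn);   /* may still reach back into cbs */
        free(cbs);             /* only now is it safe to free the callbacks */
        return 0;

     error:
        free(cbs);
        return -1;
    }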
-
Submitted by Jiri Denemark
The previous attempt (commit d65e0e14) removed just one of the two libvirt-guests restarts that happened on libvirt-client update. Let's remove the last one too :-)

https://bugzilla.redhat.com/show_bug.cgi?id=962225

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
(cherry picked from commit 604f79b3)
-
Submitted by Don Dugger
This Python interface code was returning -1 on errors for the `baselineCPU' API. Since this API is supposed to return a pointer, the error return value should really be VIR_PY_NONE.

NB: I've checked all the other APIs in this file and this is the only pointer API that is returning -1.

Signed-off-by: Don Dugger <donald.d.dugger@intel.com>
(crobinso: Upstream in libvirt-python.git)
-
- 10 December 2013, 6 commits
-
-
Submitted by Cole Robinson
We were unconditionally removing the device from the host list, when it should only be done on error. This fixes USB collision detection when hotplugging the same device to two guests. (cherry picked from commit 586b0ed8)
-
Submitted by Cole Robinson
If we hit a collision, we free the USB device while it is still part of our temporary USBDeviceList. When the list is unref'd, the device is free'd again. Make the initial device freeing dependent on whether it is present in the temporary list or not. (cherry picked from commit 5953a737)
-
Submitted by Cole Robinson
Similar to what Jiri did for cgroup setup/teardown in 05e149f9, push it all into the device handler functions so we can do the necessary prep work before claiming the device. This also fixes hotplugging USB devices by product/vendor (virt-manager's default behavior): https://bugzilla.redhat.com/show_bug.cgi?id=1016511 (cherry picked from commit ee414b5d)
-
Submitted by Cole Robinson
They aren't used outside of qemu_hotplug.c (cherry picked from commit 79776aa5)
-
Submitted by Jiri Denemark
https://bugzilla.redhat.com/show_bug.cgi?id=1025108

So far qemuSetupHostdevCGroup was called very early during hotplug, even before we knew whether the device we were about to hotplug was actually available. By calling the function later, we make sure QEMU won't be allowed to access devices used by other domains.

Another important effect of this change is that hotplugging USB devices specified by vendor and product (but not by their USB address) works again. This was broken since v1.0.5-171-g7d763aca, when the call to qemuFindHostdevUSBDevice was moved after the call to qemuSetupHostdevCGroup, which then used an uninitialized USB address.

(cherry picked from commit 05e149f9)
-
Submitted by Peter Krempa
To simplify future patches dealing with this code, simplify and refactor some conditions to switch statements. (cherry picked from commit 9d132989)
-
- 03 December 2013, 1 commit
-
-
Submitted by Peter Krempa
When doing an internal snapshot on a VM with sheepdog or RBD disks we would not set the flag to mark that the domain is using internal snapshots, and might end up creating a mixed snapshot. Move the setting of the variable to avoid this problem.

(cherry picked from commit d8cf91ae)
-
- 22 November 2013, 1 commit
-
-
Submitted by Cole Robinson
Restarting an active libvirt-guests.service is the equivalent of doing:

 /usr/libexec/libvirt-guests.sh stop
 /usr/libexec/libvirt-guests.sh start

which in a default configuration will managedsave every running VM and then restore them. Certainly not something we should do every time the libvirt-client RPM is updated. Just drop the try-restart attempt; I don't know what purpose it serves anyway.

https://bugzilla.redhat.com/show_bug.cgi?id=962225
(cherry picked from commit d65e0e14)
-
- 20 November 2013, 3 commits
-
-
Submitted by Daniel P. Berrange
If the host side of an LXC container console disconnected and the guest side continued to write data, until the PTY buffer filled up, the LXC controller would busy-wait. It would repeatedly see POLLHUP from poll() and not disable the watch. This was due to some bogus logic detecting blocking conditions. Upon seeing a POLLHUP we must disable all reading & writing from the PTY, and set up the epoll to wake us up again when the connection comes back.

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
(cherry picked from commit 5087a5a0)
-
Submitted by Cole Robinson
Possible fix for occasional libvirt-guests failure at boot time: https://bugzilla.redhat.com/show_bug.cgi?id=906009 (cherry picked from commit d9203675)
-
Submitted by Guido Günther
Syslog is socket activated since at least systemd v35, so we can drop this dependency. Debian's lintian otherwise complains about it.

References:
 http://www.freedesktop.org/wiki/Software/systemd/syslog/
 http://lintian.debian.org/tags/systemd-service-file-refers-to-obsolete-target.html

(cherry picked from commit 3c9e40a1)
-
- 18 November 2013, 3 commits
-
-
Submitted by Michael Avdienko
QEMU 1.6.0 introduced a new migration status: "setup". Libvirt does not expect such a string in QMP and refuses to migrate with the error "unexpected migration status in setup". This patch fixes it.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
(cherry picked from commit d35ae414)
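Conceptually the fix just widens the set of status strings the driver recognizes; the sketch below is illustrative (the enum and string table are not the exact qemu_monitor mapping) and only shows "setup" becoming a known in-progress state instead of an error.

    typedef enum {
        MIG_STATUS_INACTIVE,
        MIG_STATUS_SETUP,        /* new in QEMU 1.6.0 */
        MIG_STATUS_ACTIVE,
        MIG_STATUS_COMPLETED,
        MIG_STATUS_FAILED,
        MIG_STATUS_CANCELLED,
    } migStatus;

    static const char *const migStatusStr[] = {
        "inactive", "setup", "active", "completed", "failed", "cancelled",
    };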
-
Submitted by Jeremy Fitzhardinge
Rather than casting the virBitmap pointer to uint8_t* and then using the structure contents as a byte array, use the virBitmap API to determine the bitmap size and test each bit.

Signed-off-by: Jeremy Fitzhardinge <jeremy@goop.org>
(cherry picked from commit ba1bf100)
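A hedged sketch of accessor-based iteration; virBitmapNextSetBit() is a real virbitmap.h helper of that era, while the wrapper function is illustrative and assumes it is compiled inside the libvirt tree:

    #include <stdio.h>
    #include "virbitmap.h"

    static void
    dumpSetBits(virBitmapPtr map)
    {
        ssize_t bit = -1;

        /* Walks set bits without assuming anything about the bitmap's
         * internal word size or memory layout, unlike a raw uint8_t* cast. */
        while ((bit = virBitmapNextSetBit(map, bit)) >= 0)
            printf("bit %zd is set\n", bit);
    }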
-
Submitted by Laine Stump
This should resolve: https://bugzilla.redhat.com/show_bug.cgi?id=1012085 libvirt previously recognized NFS, GFS2, OCFS2, and AFS filesystems as "shared", and thus eligible for exceptions to certain rules/actions about chowning image files before handing them off to a guest. This patch widens the definition of "shared filesystem" to include SMB and CIFS filesystems (aka "Windows file sharing"); both of these use the same protocol, but different drivers so there are different magic numbers for each. (cherry picked from commit e4e73337)
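For reference, detection of this kind rests on statfs() filesystem magic numbers; below is a hedged, self-contained sketch using the standard Linux SMB/CIFS values (0x517B and 0xFF534D42). The real check lives in libvirt's file utility code and also covers NFS, GFS2, OCFS2, and AFS.

    #include <sys/vfs.h>

    #define SMB_SUPER_MAGIC  0x517B
    #define CIFS_SUPER_MAGIC 0xFF534D42

    /* Returns 1 if path sits on an SMB/CIFS mount, 0 if not, -1 on error. */
    static int
    isWindowsShare(const char *path)
    {
        struct statfs sb;

        if (statfs(path, &sb) < 0)
            return -1;

        return (unsigned int)sb.f_type == SMB_SUPER_MAGIC ||
               (unsigned int)sb.f_type == CIFS_SUPER_MAGIC;
    }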
-
- 13 November 2013, 3 commits
-
-
Submitted by Ján Tomko
When opening a new connection to the driver, nwfilterOpen only succeeds if the driverState has been allocated. Move the privilege check in driver initialization before the state allocation to disable the driver.

This changes the nwfilter-define error from:
 error: cannot create config directory (null): Bad address
to:
 this function is not supported by the connection driver: virNWFilterDefineXML

https://bugzilla.redhat.com/show_bug.cgi?id=1029266
(cherry picked from commit b7829f95)
-
Submitted by Ján Tomko
Since qemu-kvm 1.1 [1] (since 1.3 in upstream QEMU [2]) '-no-kvm-pit-reinjection' has been deprecated. Use -global kvm-pit.lost_tick_policy=discard instead.

https://bugzilla.redhat.com/show_bug.cgi?id=978719

[1] http://git.kernel.org/cgit/virt/kvm/qemu-kvm.git/commit/?id=4e4fa39
[2] http://git.qemu.org/?p=qemu.git;a=commitdiff;h=c21fb4f

(cherry picked from commit 1569fa14)

Conflicts (qemucapabilitiestest is not backported):
	tests/qemucapabilitiesdata/caps_1.2.2-1.caps
	tests/qemucapabilitiesdata/caps_1.2.2-1.replies
	tests/qemucapabilitiesdata/caps_1.3.1-1.caps
	tests/qemucapabilitiesdata/caps_1.3.1-1.replies
	tests/qemucapabilitiesdata/caps_1.4.2-1.caps
	tests/qemucapabilitiesdata/caps_1.4.2-1.replies
	tests/qemucapabilitiesdata/caps_1.5.3-1.caps
	tests/qemucapabilitiesdata/caps_1.5.3-1.replies
	tests/qemucapabilitiesdata/caps_1.6.0-1.caps
	tests/qemucapabilitiesdata/caps_1.6.0-1.replies
	tests/qemucapabilitiesdata/caps_1.6.50-1.caps
	tests/qemucapabilitiesdata/caps_1.6.50-1.replies
-
Submitted by Michal Privoznik
Since 86d90b3a (yes, my patch; again) we have supported NBD storage migration. However, on the error recovery path we got the steps reversed. The correct order is: return the NBD port to the virPortAllocator and then either unlock the vm or remove it from the driver, not vice versa.

==11192== Invalid write of size 4
==11192==    at 0x11488559: qemuMigrationPrepareAny (qemu_migration.c:2459)
==11192==    by 0x11488EA6: qemuMigrationPrepareDirect (qemu_migration.c:2652)
==11192==    by 0x114D1509: qemuDomainMigratePrepare3Params (qemu_driver.c:10332)
==11192==    by 0x519075D: virDomainMigratePrepare3Params (libvirt.c:7290)
==11192==    by 0x1502DA: remoteDispatchDomainMigratePrepare3Params (remote.c:4798)
==11192==    by 0x12DECA: remoteDispatchDomainMigratePrepare3ParamsHelper (remote_dispatch.h:5741)
==11192==    by 0x5212127: virNetServerProgramDispatchCall (virnetserverprogram.c:435)
==11192==    by 0x5211C86: virNetServerProgramDispatch (virnetserverprogram.c:305)
==11192==    by 0x520A8FD: virNetServerProcessMsg (virnetserver.c:165)
==11192==    by 0x520A9E1: virNetServerHandleJob (virnetserver.c:186)
==11192==    by 0x50DA78F: virThreadPoolWorker (virthreadpool.c:144)
==11192==    by 0x50DA11C: virThreadHelper (virthreadpthread.c:161)
==11192== Address 0x1368baa0 is 576 bytes inside a block of size 688 free'd
==11192==    at 0x4A07F5C: free (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==11192==    by 0x5079A2F: virFree (viralloc.c:580)
==11192==    by 0x11456C34: qemuDomainObjPrivateFree (qemu_domain.c:267)
==11192==    by 0x50F41B4: virDomainObjDispose (domain_conf.c:2034)
==11192==    by 0x50C2991: virObjectUnref (virobject.c:262)
==11192==    by 0x50F4CFC: virDomainObjListRemove (domain_conf.c:2361)
==11192==    by 0x1145C125: qemuDomainRemoveInactive (qemu_domain.c:2087)
==11192==    by 0x11488520: qemuMigrationPrepareAny (qemu_migration.c:2456)
==11192==    by 0x11488EA6: qemuMigrationPrepareDirect (qemu_migration.c:2652)
==11192==    by 0x114D1509: qemuDomainMigratePrepare3Params (qemu_driver.c:10332)
==11192==    by 0x519075D: virDomainMigratePrepare3Params (libvirt.c:7290)
==11192==    by 0x1502DA: remoteDispatchDomainMigratePrepare3Params (remote.c:4798)

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
(cherry picked from commit 1f2f879e)
-
- 12 November 2013, 2 commits
-
-
Submitted by Michal Privoznik
https://bugzilla.redhat.com/show_bug.cgi?id=1018897

If a PCI device is not bound to any driver (e.g. there's no PCI driver in the linux kernel yet) but users still want to pass the device through, we fail the whole operation as we fail to resolve the 'driver' link under the PCI device sysfs tree. Obviously, this is not a fatal error and it shouldn't be an error at all.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
(cherry picked from commit df4283a5)
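The sysfs side of this is simple enough to show; below is a hedged, self-contained sketch of treating a missing 'driver' symlink as "no driver bound" rather than a failure. The path layout is standard Linux PCI sysfs; the helper name is illustrative, not libvirt's.

    #include <errno.h>
    #include <limits.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Returns 1 if a driver is bound, 0 if none, -1 on real errors. */
    static int
    pciDeviceHasDriver(const char *deviceSysfsPath)
    {
        char link[PATH_MAX];
        char target[PATH_MAX];
        ssize_t len;

        snprintf(link, sizeof(link), "%s/driver", deviceSysfsPath);

        len = readlink(link, target, sizeof(target) - 1);
        if (len < 0) {
            if (errno == ENOENT)
                return 0;   /* unbound device: passthrough prep can proceed */
            return -1;      /* anything else is a genuine failure */
        }
        target[len] = '\0';
        return 1;
    }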
-
Submitted by Michal Privoznik
https://bugzilla.redhat.com/show_bug.cgi?id=1027096

If there's the following snippet in the domain XML, the domain will be lost upon daemon restart (if the domain was started prior to the restart):

 <seclabel type='dynamic' relabel='yes'/>

The problem is, the 'label', 'imagelabel' and 'baselabel' are parsed whenever the VIR_DOMAIN_XML_INACTIVE flag is *not* present or the label is static. The latter is not our case, obviously. So, when libvirtd starts up, it finds the domain state XML and parses it. During parsing, many XML flags are enabled, but not VIR_DOMAIN_XML_INACTIVE. Hence, our parser tries to extract 'label', 'imagelabel' and 'baselabel' from the XML, which fails for model='none'. This model - even though not specified in the XML - can be taken from the qemu-wide config file /etc/libvirt/qemu.conf. However, in order to know that we are dealing with model='none', the code in question must be moved forward a bit. Then a new check must be introduced. This is what the first two chunks are doing.

But this alone is not sufficient. The domain state XML won't contain the model attribute without a slight modification. The model should be inserted into the XML even if equal to 'none' when the state XML is being generated - what if the origin (the @security_driver variable in qemu.conf) changes between libvirtd restarts?

At the end, a test to catch this scenario is introduced.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
(cherry picked from commit 9fb3f957)
-