- 18 Feb 2014, 19 commits
-
-
Submitted by Daniel P. Berrange
Rewrite the lxcDomainAttachDeviceHostdevStorageLive function to use the virProcessRunInMountNamespace helper. This avoids the risk of a malicious guest replacing /dev with an absolute symlink, tricking the driver into changing the host OS filesystem. Signed-off-by: Daniel P. Berrange <berrange@redhat.com> (cherry picked from commit 1754c7f0) Conflicts: src/lxc/lxc_driver.c: OOM + cgroups error reporting
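The pattern used by this and the following two rewrites can be sketched roughly as below: the actual /dev manipulation moves into a callback that virProcessRunInMountNamespace executes inside the container's mount namespace, so guest-controlled symlinks can no longer redirect it onto the host. Only virProcessRunInMountNamespace is a real libvirt helper here; the struct and function names are hypothetical and error reporting is omitted, so treat this as a sketch rather than the code of the commit.

```c
#include <sys/stat.h>
#include <sys/types.h>

#include "virprocess.h"   /* libvirt-internal header providing the helper */

struct lxcDeviceMknodData {
    const char *path;     /* device path inside the container, e.g. "/dev/vdb" */
    mode_t mode;          /* S_IFBLK | permission bits */
    dev_t dev;            /* makedev(major, minor) */
};

/* Runs in the container's mount namespace, so "/dev" is the guest's /dev and
 * a symlink planted by the guest cannot point back at the host. The callback
 * runs after fork(), so it must stick to async-signal-safe calls. */
static int
lxcDeviceMknodHelper(pid_t pid, void *opaque)
{
    struct lxcDeviceMknodData *data = opaque;

    (void)pid;
    return mknod(data->path, data->mode, data->dev) < 0 ? -1 : 0;
}

static int
lxcAttachDeviceNode(pid_t initpid, struct lxcDeviceMknodData *data)
{
    /* delegate the filesystem change to a child running in the guest's namespace */
    return virProcessRunInMountNamespace(initpid, lxcDeviceMknodHelper, data);
}
```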
-
Submitted by Daniel P. Berrange
Rewrite the lxcDomainAttachDeviceHostdevSubsysUSBLive function to use the virProcessRunInMountNamespace helper. This avoids the risk of a malicious guest replacing /dev with an absolute symlink, tricking the driver into changing the host OS filesystem. Signed-off-by: Daniel P. Berrange <berrange@redhat.com> (cherry picked from commit 7fba01c1) Conflicts: src/lxc/lxc_driver.c: OOM + cgroups error reporting
-
Submitted by Daniel P. Berrange
Rewrite the lxcDomainAttachDeviceDiskLive function to use the virProcessRunInMountNamespace helper. This avoids the risk of a malicious guest replacing /dev with an absolute symlink, tricking the driver into changing the host OS filesystem. Signed-off-by: Daniel P. Berrange <berrange@redhat.com> (cherry picked from commit 4dd3a7d5) Conflicts: src/lxc/lxc_driver.c: OOM + cgroups error reporting and removal of user namespace integration
-
Submitted by Eric Blake
Use the helper virProcessRunInMountNamespace in lxcDomainShutdownFlags and lxcDomainReboot. Otherwise, a malicious guest could use symlinks to force the host to manipulate the wrong file in the host's namespace. Idea by Dan Berrange, based on an initial report by Reco <recoverym4n@gmail.com> at http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=732394 Signed-off-by: Eric Blake <eblake@redhat.com> (cherry picked from commit aebbcdd3) Conflicts: src/lxc/lxc_driver.c: OOM error reporting changes src/util/virinitctl.c: OOM error reporting changes
-
Submitted by Daniel P. Berrange
Implement virProcessRunInMountNamespace, which runs a callback of type virProcessNamespaceCallback in a container's namespace. This uses a child process to run the callback, since you can't change the mount namespace of a thread. This implies that callbacks have to be careful about what code they run due to async-safety rules. Idea by Dan Berrange, based on an initial report by Reco <recoverym4n@gmail.com> at http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=732394 Signed-off-by: Daniel Berrange <berrange@redhat.com> Signed-off-by: Eric Blake <eblake@redhat.com> (cherry picked from commit 7c72ef6f) Backport adjusted for OOM error reporting
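A simplified, self-contained sketch of the fork-plus-setns pattern the commit describes follows; this is not libvirt's exact implementation, which also wires error reporting back to the parent. The key point is that the mount namespace is per-process state a multi-threaded daemon cannot switch in place, hence the short-lived child.

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

typedef int (*nsCallback)(pid_t pid, void *opaque);

static int
runInMountNamespace(pid_t pid, nsCallback cb, void *opaque)
{
    char nspath[64];
    int fd, status;
    pid_t child;

    snprintf(nspath, sizeof(nspath), "/proc/%lld/ns/mnt", (long long)pid);
    if ((fd = open(nspath, O_RDONLY)) < 0)
        return -1;

    if ((child = fork()) < 0) {
        close(fd);
        return -1;
    }

    if (child == 0) {                 /* child: enter the namespace, run cb */
        if (setns(fd, CLONE_NEWNS) < 0)
            _exit(1);
        /* only async-signal-safe work is allowed here after fork() */
        _exit(cb(pid, opaque) < 0 ? 1 : 0);
    }

    close(fd);                        /* parent: reap the child */
    if (waitpid(child, &status, 0) < 0)
        return -1;
    return (WIFEXITED(status) && WEXITSTATUS(status) == 0) ? 0 : -1;
}
```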
-
Submitted by Daniel P. Berrange
Add a helper function which takes a file path and ensures that all directory components leading up to the file exist. In other words, it strips the filename part of the path and passes the result to virFileMakePath. Signed-off-by: Daniel P. Berrange <berrange@redhat.com> (cherry picked from commit c321bfc5)
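A standalone approximation of what such a helper does is sketched below. The real code simply strips the last component and hands the directory to virFileMakePath; this sketch creates the components itself with plain libc so it can stand alone.

```c
#include <errno.h>
#include <libgen.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>

static int
makeParentPath(const char *path)
{
    char *copy, *dir, *p;
    int ret = -1;

    if (!(copy = strdup(path)))
        return -1;
    dir = dirname(copy);              /* "/dev/net/tun" -> "/dev/net" */

    /* create each directory component in turn, ignoring ones that exist */
    for (p = dir + 1; ; p++) {
        if (*p == '/' || *p == '\0') {
            char saved = *p;
            *p = '\0';
            if (mkdir(dir, 0777) < 0 && errno != EEXIST)
                goto cleanup;
            *p = saved;
            if (saved == '\0')
                break;
        }
    }
    ret = 0;
 cleanup:
    free(copy);
    return ret;
}
```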
-
Submitted by Daniel P. Berrange
The check for whether the cgroup devices ACL is available is done quite late during LXC hotplug - in fact, after the device node has already been created in the container in some cases. Better to do it upfront so we fail immediately. Signed-off-by: Daniel P. Berrange <berrange@redhat.com> (cherry picked from commit c3eb12ca)
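A minimal sketch of such an upfront check is shown below. virCgroupHasController and the devices controller constant are libvirt's real helpers; the wrapper function and the error message are illustrative.

```c
static int
lxcEnsureDevicesCgroup(virLXCDomainObjPrivatePtr priv)
{
    if (!virCgroupHasController(priv->cgroup,
                                VIR_CGROUP_CONTROLLER_DEVICES)) {
        /* fail here, before any device node is created in the container */
        virReportError(VIR_ERR_OPERATION_INVALID, "%s",
                       _("devices cgroup isn't mounted"));
        return -1;
    }
    return 0;
}
```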
-
Submitted by Daniel P. Berrange
The LXC disk hotplug code was allowing block or character devices to be given as a disk. A disk is always a block device. Signed-off-by: Daniel P. Berrange <berrange@redhat.com> (cherry picked from commit d24e6b8b)
-
Submitted by Daniel P. Berrange
When detaching a USB device from an LXC guest we must remove the device from the cgroup ACL. Unfortunately we were telling the cgroup code to use the guest /dev path, not the host /dev path, and the guest device node had already been unlinked. This was, however, fortunate, since the code passed &priv->cgroup instead of priv->cgroup, so it would have crashed if the device node had been accessible. Signed-off-by: Daniel P. Berrange <berrange@redhat.com> (cherry picked from commit 2c2bec94)
-
Submitted by Daniel P. Berrange
After hotplugging a USB device, the LXC driver forgot to add the device def to the virDomainDefPtr. Signed-off-by: Daniel P. Berrange <berrange@redhat.com> (cherry picked from commit a537827d) Backport adjusted for OOM error reporting
-
Submitted by Daniel P. Berrange
The LXC code left the 'usb' component out of the path /dev/bus/usb/$BUSNUM/$DEVNUM, so it failed to actually set up cgroups for the device. This was in fact lucky, because the call to virLXCSetupHostUsbDeviceCgroup was also mistakenly passing '&priv->cgroup' instead of just 'priv->cgroup'. So once the path was fixed, libvirtd would then crash trying to access the bogus virCgroupPtr pointer. This would have been a security issue, were it not for the bogus path preventing the pointer dereference from being reached. Signed-off-by: Daniel P. Berrange <berrange@redhat.com> (cherry picked from commit c3648972)
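To see why the compiler let '&priv->cgroup' through, note that the value ends up in a void * callback argument, so a pointer-to-pointer is accepted as silently as the pointer itself. The toy program below, with made-up types rather than libvirt code, shows the shape of the mistake:

```c
#include <stdio.h>

typedef struct { const char *path; } virCgroup;
typedef virCgroup *virCgroupPtr;
typedef struct { virCgroupPtr cgroup; } lxcDomainPriv;

/* callback with an opaque parameter, as used for USB device file iteration */
static int
setupUsbCgroup(const char *devpath, void *opaque)
{
    virCgroupPtr cgroup = opaque;            /* expects the object itself */
    printf("allow %s in cgroup %s\n", devpath, cgroup->path);
    return 0;
}

int main(void)
{
    virCgroup cg = { "/machine/lxc-demo" };
    lxcDomainPriv priv = { &cg };

    setupUsbCgroup("/dev/bus/usb/001/002", priv.cgroup);   /* correct */
    /* setupUsbCgroup("/dev/bus/usb/001/002", &priv.cgroup);
     * BUG: also compiles, but the callee would then dereference a
     * virCgroupPtr* as if it were the cgroup, which is exactly the
     * crash described above. */
    return 0;
}
```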
-
Submitted by Daniel P. Berrange
virDomainDefCompatibleDevice blocks the use of USB if no USB controller is present. This is not correct for containers, since devices can be assigned directly regardless of any controllers. Signed-off-by: Daniel P. Berrange <berrange@redhat.com> (cherry picked from commit 7a44af96)
-
Submitted by Eric Blake
Our backing file chain code was not very robust to an ill-timed EINTR, which could lead to a short read causing us to randomly treat metadata differently than usual. But the existing virFileReadLimFD forces an error if we don't read the entire file, even though we only care about the header of the file. So add a new virFile function that does what we want. * src/util/virfile.h (virFileReadHeaderFD): New prototype. * src/util/virfile.c (virFileReadHeaderFD): New function. * src/libvirt_private.syms (virfile.h): Export it. * src/util/virstoragefile.c (virStorageFileGetMetadataInternal) (virStorageFileProbeFormatFromFD): Use it. Signed-off-by: Eric Blake <eblake@redhat.com> (cherry picked from commit 5327fad4) Conflicts: src/util/virstoragefile.c: OOM error reporting & buffer signedness
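The idea of the new helper can be sketched in isolation as follows; the real virFileReadHeaderFD sits in src/util/virfile.c and is built on libvirt's saferead, so the signature details here are illustrative. The behaviour that matters: tolerate EINTR, tolerate a file shorter than the requested header size, and just report how many bytes were read.

```c
#include <errno.h>
#include <stdlib.h>
#include <unistd.h>

/* Read up to maxlen bytes from fd into a freshly allocated buffer. Short
 * reads at EOF are fine, since we only want the header. Returns the number
 * of bytes read, or -1 on error. */
static ssize_t
readHeaderFD(int fd, size_t maxlen, char **buf)
{
    size_t got = 0;

    if (!(*buf = malloc(maxlen)))
        return -1;

    while (got < maxlen) {
        ssize_t r = read(fd, *buf + got, maxlen - got);
        if (r < 0) {
            if (errno == EINTR)
                continue;            /* the ill-timed EINTR case */
            free(*buf);
            *buf = NULL;
            return -1;
        }
        if (r == 0)                  /* EOF before maxlen: not an error */
            break;
        got += r;
    }
    return got;
}
```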
-
Submitted by Eric Blake
* src/lxc/lxc_controller.c (virLXCControllerSetupDisk): Fix typo. * src/lxc/lxc_driver.c (lxcDomainAttachDeviceDiskLive): Likewise. Signed-off-by: Eric Blake <eblake@redhat.com> (cherry picked from commit 8de47efd) Conflicts: src/lxc/lxc_controller.c: No userns support yet
-
Submitted by Chen Hanxiao
Free dst before lxcDomainAttachDeviceDiskLive returns. Signed-off-by: Chen Hanxiao <chenhanxiao@cn.fujitsu.com> (cherry picked from commit c82513ac)
-
Submitted by Hongwei Bi
The variable vroot should be freed at the cleanup label. (cherry picked from commit 46c9bce4)
-
Submitted by Yuri Chornoivan
Signed-off-by: Eric Blake <eblake@redhat.com> (cherry picked from commit 5b4c035b)
-
Submitted by Gao feng
Create the parent directory for a hostdev automatically when we start an LXC domain or attach a hostdev to an LXC domain. Signed-off-by: Gao feng <gaofeng@cn.fujitsu.com> (cherry picked from commit 468ee0bc)
-
Submitted by Gao feng
This helper function is used to create the parent directory for a hostdev which will be added to the container. If the parent directory of the hostdev doesn't exist, the mknod of the hostdev will fail, e.g. with /dev/net/tun. Signed-off-by: Gao feng <gaofeng@cn.fujitsu.com> (cherry picked from commit c0d8c7c8)
-
- 06 Feb 2014, 6 commits
-
-
Submitted by Daniel P. Berrange
The NWFilter code has a deadlock race condition between the virNWFilter{Define,Undefine} APIs and the starting of guest VMs, due to mismatched lock ordering. In the virNWFilter{Define,Undefine} codepaths the lock ordering is: 1. nwfilter driver lock, 2. virt driver lock, 3. nwfilter update lock, 4. domain object lock. In the VM guest startup paths the lock ordering is: 1. virt driver lock, 2. domain object lock, 3. nwfilter update lock. As can be seen, the domain object and nwfilter update locks are not acquired in a consistent order. The fix used is to push the nwfilter update lock up to the top level, resulting in a lock ordering for virNWFilter{Define,Undefine} of: 1. nwfilter driver lock, 2. nwfilter update lock, 3. virt driver lock, 4. domain object lock; and for VM start of: 1. nwfilter update lock, 2. virt driver lock, 3. domain object lock. This has the effect of serializing VM startup once again, even if no nwfilters are applied to the guest. There is also the possibility of deadlock due to a call graph loop via virNWFilterInstantiate and virNWFilterInstantiateFilterLate. These two problems mean the lock must be turned into a read/write lock instead of a plain mutex at the same time. The lock is used to serialize changes to the "driver->nwfilters" hash, so the write lock only needs to be held by the define/undefine methods. All other methods can rely on a read lock, which allows good concurrency. Signed-off-by: Daniel P. Berrange <berrange@redhat.com> (cherry picked from commit 6e5c79a1) Conflicts: src/conf/nwfilter_conf.c - virReportOOMError() in the context of one hunk. src/lxc/lxc_driver.c - functions renamed, and lxc object locking changed, creating a conflict in the context.
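In plain pthread terms (the real code uses libvirt's locking wrappers and, after the next patch, virRWLock), the resulting ordering looks roughly like this:

```c
#include <pthread.h>

static pthread_mutex_t nwfilterDriverLock = PTHREAD_MUTEX_INITIALIZER;
static pthread_rwlock_t filterUpdateLock  = PTHREAD_RWLOCK_INITIALIZER;
static pthread_mutex_t virtDriverLock     = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t domainObjLock      = PTHREAD_MUTEX_INITIALIZER;

static void nwfilterDefineOrUndefine(void)
{
    pthread_mutex_lock(&nwfilterDriverLock);   /* 1. nwfilter driver lock */
    pthread_rwlock_wrlock(&filterUpdateLock);  /* 2. update lock, exclusive:
                                                *    driver->nwfilters changes */
    pthread_mutex_lock(&virtDriverLock);       /* 3. virt driver lock */
    pthread_mutex_lock(&domainObjLock);        /* 4. domain object lock */
    /* ... modify the filter and re-instantiate it for running guests ... */
    pthread_mutex_unlock(&domainObjLock);
    pthread_mutex_unlock(&virtDriverLock);
    pthread_rwlock_unlock(&filterUpdateLock);
    pthread_mutex_unlock(&nwfilterDriverLock);
}

static void vmStartup(void)
{
    pthread_rwlock_rdlock(&filterUpdateLock);  /* 1. update lock, shared:
                                                *    filters are only read */
    pthread_mutex_lock(&virtDriverLock);       /* 2. virt driver lock */
    pthread_mutex_lock(&domainObjLock);        /* 3. domain object lock */
    /* ... instantiate the nwfilters referenced by the guest ... */
    pthread_mutex_unlock(&domainObjLock);
    pthread_mutex_unlock(&virtDriverLock);
    pthread_rwlock_unlock(&filterUpdateLock);
}
```

With both paths taking the update lock before any domain object lock, the two locks can no longer be acquired in opposite orders, and readers no longer block each other.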
-
Submitted by Daniel P. Berrange
Add virRWLock, backed by a POSIX rwlock primitive. Signed-off-by: Daniel P. Berrange <berrange@redhat.com> (cherry picked from commit c065984b)
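The wrapper is thin; a sketch of its likely shape follows (written from memory, so treat the names and signatures as assumptions rather than the exact src/util/virthread API):

```c
#include <pthread.h>

typedef struct virRWLock virRWLock;
typedef virRWLock *virRWLockPtr;
struct virRWLock {
    pthread_rwlock_t lock;
};

static int virRWLockInit(virRWLockPtr m)
{
    return pthread_rwlock_init(&m->lock, NULL) == 0 ? 0 : -1;
}

static void virRWLockRead(virRWLockPtr m)    { pthread_rwlock_rdlock(&m->lock); }
static void virRWLockWrite(virRWLockPtr m)   { pthread_rwlock_wrlock(&m->lock); }
static void virRWLockUnlock(virRWLockPtr m)  { pthread_rwlock_unlock(&m->lock); }
static void virRWLockDestroy(virRWLockPtr m) { pthread_rwlock_destroy(&m->lock); }
```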
-
Submitted by Daniel P. Berrange
The virConnectPtr is passed around much of the nwfilter code in order to provide it as a parameter to the callback registered by the virt drivers. None of the virt drivers use this param though, so it serves no purpose. Avoiding the need to pass a virConnectPtr means that the nwfilterStateReload method no longer needs to open a bogus QEMU driver connection. This addresses a race condition that can lead to a crash on startup. The nwfilter driver starts before the QEMU driver and registers some callbacks with DBus to detect a firewalld reload. If the firewalld reload happens while the QEMU driver is still starting up, though, the nwfilterStateReload method will open a connection to the partially initialized QEMU driver and cause a crash. Signed-off-by: Daniel P. Berrange <berrange@redhat.com> (cherry picked from commit 999d72fb)
-
Submitted by Daniel P. Berrange
The nwfilter driver only needs a reference to its private state object, not a full virConnectPtr. Update the domUpdateCBStruct struct to have a 'void *opaque' field instead of a virConnectPtr. Signed-off-by: Daniel P. Berrange <berrange@redhat.com> (cherry picked from commit ebca369e)
-
Submitted by Daniel P. Berrange
None of the virNWFilterDefParse* methods require a virConnectPtr arg, so just drop it. Signed-off-by: Daniel P. Berrange <berrange@redhat.com> (cherry picked from commit b77b16ce)
-
Submitted by Daniel P. Berrange
For inexplicable reasons, the nwfilter XML parser is intentionally ignoring errors that arise during parsing. As well as meaning that users don't get any feedback on their XML mistakes, this will lead it to silently drop data in OOM conditions. Signed-off-by: Daniel P. Berrange <berrange@redhat.com> (cherry picked from commit 4f209434)
-
- 16 Jan 2014, 9 commits
-
-
Submitted by Eric Blake
Ever since ACL filtering was added in commit 76397360 (v1.1.1), a user could still use event registration to obtain access to a domain that they could not normally access via virDomainLookup* or virConnectListAllDomains and friends. We already have the framework in the RPC generator for creating the filter, and previous cleanup patches got us to the point that we can now wire the filter through the entire object event stack. Furthermore, whether or not domain:getattr is honored, use of global events is a form of obtaining a list of domains, which is covered by connect:search_domains added in a93cd08f (v1.1.0). Ideally, we'd have a way to enforce connect:search_domains when doing global registrations while omitting that check on a per-domain registration. But this patch just unconditionally requires connect:search_domains, even when no list could be obtained, based on the following observations: 1. Administrators are unlikely to grant domain:getattr for one or all domains while still denying connect:search_domains - a user that is able to manage domains will want to be able to manage them efficiently, but efficient management includes being able to list the domains they can access. The idea of denying connect:search_domains while still granting access to individual domains is therefore not adding any real security, but just serves as a layer of obscurity to annoy the end user. 2. In the current implementation, domain events are filtered on the client; the server has no idea if a domain filter was requested, and must therefore assume that all domain event requests are global. Even if we fix the RPC protocol to allow for server-side filtering for newer client/server combos, making the connect:search_domains ACL check conditional on whether the domain argument was NULL won't benefit older clients. Therefore, we choose to document that connect:search_domains is a pre-requisite to any domain event management. Network events need the same treatment, with the obvious change of using connect:search_networks and network:getattr. * src/access/viraccessperm.h (VIR_ACCESS_PERM_CONNECT_SEARCH_DOMAINS) (VIR_ACCESS_PERM_CONNECT_SEARCH_NETWORKS): Document additional effect of the permission. * src/conf/domain_event.h (virDomainEventStateRegister) (virDomainEventStateRegisterID): Add new parameter. * src/conf/network_event.h (virNetworkEventStateRegisterID): Likewise. * src/conf/object_event_private.h (virObjectEventStateRegisterID): Likewise. * src/conf/object_event.c (_virObjectEventCallback): Track a filter. (virObjectEventDispatchMatchCallback): Use filter. (virObjectEventCallbackListAddID): Register filter. * src/conf/domain_event.c (virDomainEventFilter): New function. (virDomainEventStateRegister, virDomainEventStateRegisterID): Adjust callers. * src/conf/network_event.c (virNetworkEventFilter): New function. (virNetworkEventStateRegisterID): Adjust caller. * src/remote/remote_protocol.x (REMOTE_PROC_CONNECT_DOMAIN_EVENT_REGISTER) (REMOTE_PROC_CONNECT_DOMAIN_EVENT_REGISTER_ANY) (REMOTE_PROC_CONNECT_NETWORK_EVENT_REGISTER_ANY): Generate a filter, and require connect:search_domains instead of weaker connect:read. * src/test/test_driver.c (testConnectDomainEventRegister) (testConnectDomainEventRegisterAny) (testConnectNetworkEventRegisterAny): Update callers. * src/remote/remote_driver.c (remoteConnectDomainEventRegister) (remoteConnectDomainEventRegisterAny): Likewise. * src/xen/xen_driver.c (xenUnifiedConnectDomainEventRegister) (xenUnifiedConnectDomainEventRegisterAny): Likewise. * src/vbox/vbox_tmpl.c (vboxDomainGetXMLDesc): Likewise. * src/libxl/libxl_driver.c (libxlConnectDomainEventRegister) (libxlConnectDomainEventRegisterAny): Likewise. * src/qemu/qemu_driver.c (qemuConnectDomainEventRegister) (qemuConnectDomainEventRegisterAny): Likewise. * src/uml/uml_driver.c (umlConnectDomainEventRegister) (umlConnectDomainEventRegisterAny): Likewise. * src/network/bridge_driver.c (networkConnectNetworkEventRegisterAny): Likewise. * src/lxc/lxc_driver.c (lxcConnectDomainEventRegister) (lxcConnectDomainEventRegisterAny): Likewise. Signed-off-by: Eric Blake <eblake@redhat.com> (cherry picked from commit f9f56340) Conflicts: 1.1.0 had a framework for generating filter methods, but nothing actually used them. Therefore, the only leak in this branch was the failure to honor connect:search_domains, and that is fixed by backporting just the patch to remote_protocol.x to properly annotate ACL categories, and to viraccessperms.h to document the scope of the ACL.
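The filtering mechanism the message refers to boils down to storing an optional predicate with each registered callback and consulting it at dispatch time. The sketch below uses hypothetical names to show that shape; the real code is generated and checks the domain:getattr ACL inside the filter.

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct virDomainDef virDomainDef;            /* opaque for the sketch */

/* decide whether the registering client may see events for this object */
typedef bool (*virObjectEventFilter)(void *conn, virDomainDef *def);

typedef struct {
    int eventID;
    void (*cb)(virDomainDef *def, void *opaque);
    void *opaque;
    virObjectEventFilter filter;                      /* NULL: no extra filtering */
} virObjectEventCallbackSketch;

static void
dispatchEvent(virObjectEventCallbackSketch *cbs, size_t ncbs,
              virDomainDef *def, void *conn)
{
    size_t i;

    for (i = 0; i < ncbs; i++) {
        if (cbs[i].filter && !cbs[i].filter(conn, def))
            continue;                                 /* ACL denied: skip */
        cbs[i].cb(def, cbs[i].opaque);
    }
}
```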
-
Submitted by Eric Blake
While running objecteventtest, it was found that valgrind pointed out the following memory leak: ==13464== 5 bytes in 1 blocks are definitely lost in loss record 7 of 134 ==13464== at 0x4A0887C: malloc (vg_replace_malloc.c:270) ==13464== by 0x341F485E21: strdup (strdup.c:42) ==13464== by 0x4CAE28F: virStrdup (virstring.c:554) ==13464== by 0x4CF3CBE: virObjectEventCallbackListAddID (object_event.c:286) ==13464== by 0x4CF49CA: virObjectEventStateRegisterID (object_event.c:729) ==13464== by 0x4CF73FE: virDomainEventStateRegisterID (domain_event.c:1424) ==13464== by 0x4D7358F: testConnectDomainEventRegisterAny (test_driver.c:6032) ==13464== by 0x4D600C8: virConnectDomainEventRegisterAny (libvirt.c:19128) ==13464== by 0x402409: testDomainStartStopEvent (objecteventtest.c:232) ==13464== by 0x403451: virtTestRun (testutils.c:138) ==13464== by 0x402012: mymain (objecteventtest.c:395) ==13464== by 0x403AF2: virtTestMain (testutils.c:593) ==13464== (cherry picked from commit 34d52b34) Conflicts: src/conf/object_event.c - 1.2.1 refactoring to object_event not backported, so change applied directly in older domain_event.c instead
-
Submitted by Michal Privoznik
The @list->callbacks member is an array that is inflated whenever a new event is added, e.g. via virDomainEventCallbackListAddID(). However, when freeing the array, we freed the items within it but forgot to actually free the array itself. Signed-off-by: Michal Privoznik <mprivozn@redhat.com> (cherry picked from commit ea13a759)
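In code terms this is the classic "free the elements and the array" pattern; a condensed sketch follows (libvirt's VIR_FREE macro and the internal list layout are assumed, and the member names may differ from the real structure):

```c
static void
callbackListFreeSketch(virObjectEventCallbackListPtr list)
{
    size_t i;

    if (!list)
        return;

    for (i = 0; i < list->count; i++)
        VIR_FREE(list->callbacks[i]);   /* the items were already being freed */
    VIR_FREE(list->callbacks);          /* ...this is the line that was missing */
    VIR_FREE(list);
}
```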
-
Submitted by Jiri Denemark
https://bugzilla.redhat.com/show_bug.cgi?id=1047577 When writing commit 173c2914, I missed the fact that virNetServerClientClose unlocks the client object before actually clearing client->sock, and thus it is possible to hit a window where client->keepalive is NULL while client->sock is not NULL. I was thinking client->sock == NULL was a better check for a closed connection, but apparently we have to go with client->keepalive == NULL to actually fix the crash. Signed-off-by: Jiri Denemark <jdenemar@redhat.com> (cherry picked from commit 066c8ef6)
-
Submitted by Jiri Denemark
https://bugzilla.redhat.com/show_bug.cgi?id=1047577 When a client closes its connection to libvirtd early during virConnectOpen, more specifically just after making the REMOTE_PROC_CONNECT_SUPPORTS_FEATURE call to check if VIR_DRV_FEATURE_PROGRAM_KEEPALIVE is supported, without even waiting for the result, libvirtd may crash due to a race in keep-alive initialization. Upon receiving the REMOTE_PROC_CONNECT_SUPPORTS_FEATURE call, the daemon's event loop delegates it to a worker thread. In case the event loop detects EOF on the connection and calls virNetServerClientClose before the worker thread starts to handle the REMOTE_PROC_CONNECT_SUPPORTS_FEATURE call, client->keepalive will have been disposed by the time virNetServerClientStartKeepAlive gets called from remoteDispatchConnectSupportsFeature. Because the flow is common for both authenticated and read-only connections, even unprivileged clients may cause the daemon to crash. To avoid the crash, virNetServerClientStartKeepAlive needs to check if the connection is still open before starting the keep-alive protocol. Every libvirt release since 0.9.8 is affected by this bug. (cherry picked from commit 173c2914)
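The essence of the fix is a guard at the start of keep-alive setup; a rough sketch follows (simplified, not the exact libvirt code, with error reporting trimmed):

```c
static int
clientStartKeepAliveSketch(virNetServerClientPtr client)
{
    int ret = -1;

    virObjectLock(client);

    /* The connection may have been closed (and client->keepalive disposed)
     * between the RPC call being queued and this worker running it. */
    if (!client->keepalive)
        goto cleanup;

    ret = virKeepAliveStart(client->keepalive, 0, 0);

 cleanup:
    virObjectUnlock(client);
    return ret;
}
```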
-
Submitted by Jiri Denemark
CVE-2013-6458 Every API that is going to begin a job should do that before fetching data from vm->def. (cherry picked from commit 3b564259)
-
Submitted by Jiri Denemark
Every API that is going to begin a job should do that before fetching data from vm->def. (cherry picked from commit ff5f30b6)
-
Submitted by Jiri Denemark
CVE-2013-6458 Every API that is going to begin a job should do that before fetching data from vm->def. (cherry picked from commit f93d2caa)
-
Submitted by Jiri Denemark
CVE-2013-6458 Generally, every API that is going to begin a job should do that before fetching data from vm->def. However, qemuDomainGetBlockInfo does not know whether it will have to start a job or not before checking vm->def. To avoid using a disk alias that might have been freed while we were waiting for a job, we use a copy of it. In case the disk was removed in the meantime, we will fail with a "cannot find statistics for device '...'" error message. (cherry picked from commit b7992595)
-
- 15 Jan 2014, 1 commit
-
-
Submitted by Jiri Denemark
CVE-2013-6458 https://bugzilla.redhat.com/show_bug.cgi?id=1043069 When virDomainDetachDeviceFlags is called concurrently with virDomainBlockStats, libvirtd may crash because qemuDomainBlockStats finds a disk in vm->def before getting a job on the domain and uses the disk pointer after getting the job. However, the domain is unlocked while waiting on the job condition, and thus the data behind the disk pointer may disappear. This happens when thread 1 runs virDomainDetachDeviceFlags and enters the monitor to actually remove the disk. Then another thread starts running virDomainBlockStats, finds the disk in vm->def, and while it is waiting on the job condition (owned by the first thread), the first thread finishes the disk removal. When the second thread gets the job, the memory pointed to by the disk pointer is already gone. That said, every API that is going to begin a job should do that before fetching data from vm->def. (cherry picked from commit db86da5c)
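The pattern that all of these CVE-2013-6458 fixes converge on can be sketched as follows. It is condensed and illustrative: qemuDomainObjBeginJob/qemuDomainObjEndJob are the real job helpers, while the wrapper function, the STREQ-based lookup loop, and the omitted error reporting are just a sketch of the idea.

```c
static int
qemuDomainBlockStatsSketch(virQEMUDriverPtr driver,
                           virDomainObjPtr vm,
                           const char *path)
{
    virDomainDiskDefPtr disk = NULL;
    size_t i;
    int ret = -1;

    /* 1. Acquire the job first: while we wait for it, vm is unlocked and
     *    vm->def may be changed by a concurrent detach. */
    if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0)
        return -1;

    if (!virDomainObjIsActive(vm))
        goto endjob;

    /* 2. Only now look the disk up in vm->def; holding the job keeps the
     *    definition stable until we are done with it. */
    for (i = 0; i < vm->def->ndisks; i++) {
        if (STREQ(vm->def->disks[i]->dst, path)) {
            disk = vm->def->disks[i];
            break;
        }
    }
    if (!disk)
        goto endjob;

    /* ... enter the monitor and query stats for disk->info.alias ... */
    ret = 0;

 endjob:
    qemuDomainObjEndJob(driver, vm);
    return ret;
}
```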
-
- 20 Dec 2013, 2 commits
-
-
Submitted by Martin Kletzander
The function doesn't check whether the request is made for an active or inactive domain. Thus, when the domain is not running, it still tries to access non-existing cgroups (priv->cgroup, which is NULL). I reworked the function so that it works the same way its QEMU counterpart does. Reproducer: 1) Define an LXC domain 2) Do 'virsh memtune <domain> --hard-limit 133T' Backtrace: Thread 6 (Thread 0x7fffec8c0700 (LWP 26826)): #0 0x00007ffff70edcc4 in virCgroupPathOfController (group=0x0, controller=3, key=0x7ffff75734bd "memory.limit_in_bytes", path=0x7fffec8bf718) at util/vircgroup.c:1764 #1 0x00007ffff70e9206 in virCgroupSetValueStr (group=0x0, controller=3, key=0x7ffff75734bd "memory.limit_in_bytes", value=0x7fffe409f360 "1073741824") at util/vircgroup.c:669 #2 0x00007ffff70e98b4 in virCgroupSetValueU64 (group=0x0, controller=3, key=0x7ffff75734bd "memory.limit_in_bytes", value=1073741824) at util/vircgroup.c:740 #3 0x00007ffff70ee518 in virCgroupSetMemory (group=0x0, kb=1048576) at util/vircgroup.c:1904 #4 0x00007ffff70ee675 in virCgroupSetMemoryHardLimit (group=0x0, kb=1048576) at util/vircgroup.c:1944 #5 0x00005555557d54c8 in lxcDomainSetMemoryParameters (dom=0x7fffe40cc420, params=0x7fffe409f100, nparams=1, flags=0) at lxc/lxc_driver.c:774 #6 0x00007ffff72c20f9 in virDomainSetMemoryParameters (domain=0x7fffe40cc420, params=0x7fffe409f100, nparams=1, flags=0) at libvirt.c:4051 #7 0x000055555561365f in remoteDispatchDomainSetMemoryParameters (server=0x555555eb7e00, client=0x555555ec4b10, msg=0x555555eb94e0, rerr=0x7fffec8bfb70, args=0x7fffe40b8510) at remote_dispatch.h:7621 #8 0x00005555556133fd in remoteDispatchDomainSetMemoryParametersHelper (server=0x555555eb7e00, client=0x555555ec4b10, msg=0x555555eb94e0, rerr=0x7fffec8bfb70, args=0x7fffe40b8510, ret=0x7fffe40b84f0) at remote_dispatch.h:7591 #9 0x00007ffff73b293f in virNetServerProgramDispatchCall (prog=0x555555ec3ae0, server=0x555555eb7e00, client=0x555555ec4b10, msg=0x555555eb94e0) at rpc/virnetserverprogram.c:435 #10 0x00007ffff73b207f in virNetServerProgramDispatch (prog=0x555555ec3ae0, server=0x555555eb7e00, client=0x555555ec4b10, msg=0x555555eb94e0) at rpc/virnetserverprogram.c:305 #11 0x00007ffff73a4d2c in virNetServerProcessMsg (srv=0x555555eb7e00, client=0x555555ec4b10, prog=0x555555ec3ae0, msg=0x555555eb94e0) at rpc/virnetserver.c:165 #12 0x00007ffff73a4e8d in virNetServerHandleJob (jobOpaque=0x555555ec3e30, opaque=0x555555eb7e00) at rpc/virnetserver.c:186 #13 0x00007ffff7187f3f in virThreadPoolWorker (opaque=0x555555eb7ac0) at util/virthreadpool.c:144 #14 0x00007ffff718733a in virThreadHelper (data=0x555555eb7890) at util/virthreadpthread.c:161 #15 0x00007ffff468ed89 in start_thread (arg=0x7fffec8c0700) at pthread_create.c:308 #16 0x00007ffff3da26bd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113 Signed-off-by: Martin Kletzander <mkletzan@redhat.com> (cherry picked from commit 9faf3f29) Conflicts: src/lxc/lxc_driver.c
-
Submitted by Martin Kletzander
The function doesn't check whether the request is made for an active or inactive domain. Thus, when the domain is not running, it still tries to access non-existing cgroups (priv->cgroup, which is NULL). I reworked the function so that it works the same way its QEMU counterpart does. Reproducer: 1) Define an LXC domain 2) Do 'virsh memtune <domain>' Backtrace: Thread 6 (Thread 0x7fffec8c0700 (LWP 13387)): #0 0x00007ffff70edcc4 in virCgroupPathOfController (group=0x0, controller=3, key=0x7ffff75734bd "memory.limit_in_bytes", path=0x7fffec8bf750) at util/vircgroup.c:1764 #1 0x00007ffff70e958c in virCgroupGetValueStr (group=0x0, controller=3, key=0x7ffff75734bd "memory.limit_in_bytes", value=0x7fffec8bf7c0) at util/vircgroup.c:705 #2 0x00007ffff70e9d29 in virCgroupGetValueU64 (group=0x0, controller=3, key=0x7ffff75734bd "memory.limit_in_bytes", value=0x7fffec8bf810) at util/vircgroup.c:804 #3 0x00007ffff70ee706 in virCgroupGetMemoryHardLimit (group=0x0, kb=0x7fffec8bf8a8) at util/vircgroup.c:1962 #4 0x00005555557d590f in lxcDomainGetMemoryParameters (dom=0x7fffd40024a0, params=0x7fffd40027a0, nparams=0x7fffec8bfa24, flags=0) at lxc/lxc_driver.c:826 #5 0x00007ffff72c28d3 in virDomainGetMemoryParameters (domain=0x7fffd40024a0, params=0x7fffd40027a0, nparams=0x7fffec8bfa24, flags=0) at libvirt.c:4137 #6 0x000055555563714d in remoteDispatchDomainGetMemoryParameters (server=0x555555eb7e00, client=0x555555ebaef0, msg=0x555555ebb3e0, rerr=0x7fffec8bfb70, args=0x7fffd40024e0, ret=0x7fffd4002420) at remote.c:1895 #7 0x00005555556052c4 in remoteDispatchDomainGetMemoryParametersHelper (server=0x555555eb7e00, client=0x555555ebaef0, msg=0x555555ebb3e0, rerr=0x7fffec8bfb70, args=0x7fffd40024e0, ret=0x7fffd4002420) at remote_dispatch.h:4050 #8 0x00007ffff73b293f in virNetServerProgramDispatchCall (prog=0x555555ec3ae0, server=0x555555eb7e00, client=0x555555ebaef0, msg=0x555555ebb3e0) at rpc/virnetserverprogram.c:435 #9 0x00007ffff73b207f in virNetServerProgramDispatch (prog=0x555555ec3ae0, server=0x555555eb7e00, client=0x555555ebaef0, msg=0x555555ebb3e0) at rpc/virnetserverprogram.c:305 #10 0x00007ffff73a4d2c in virNetServerProcessMsg (srv=0x555555eb7e00, client=0x555555ebaef0, prog=0x555555ec3ae0, msg=0x555555ebb3e0) at rpc/virnetserver.c:165 #11 0x00007ffff73a4e8d in virNetServerHandleJob (jobOpaque=0x555555ebc7e0, opaque=0x555555eb7e00) at rpc/virnetserver.c:186 #12 0x00007ffff7187f3f in virThreadPoolWorker (opaque=0x555555eb7ac0) at util/virthreadpool.c:144 #13 0x00007ffff718733a in virThreadHelper (data=0x555555eb7890) at util/virthreadpthread.c:161 #14 0x00007ffff468ed89 in start_thread (arg=0x7fffec8c0700) at pthread_create.c:308 #15 0x00007ffff3da26bd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113 Signed-off-by: Martin Kletzander <mkletzan@redhat.com> (cherry picked from commit f8c1cb90) Conflicts: src/lxc/lxc_driver.c
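Both memtune fixes amount to the same guard: decide whether the live or the persistent configuration is being addressed, and only touch priv->cgroup in the live case when the domain is actually running. A condensed sketch follows; virCgroupGetMemoryHardLimit and virDomainObjIsActive are real helpers, while the wrapper itself is illustrative rather than the actual patch.

```c
static int
lxcDomainGetMemoryHardLimitSketch(virDomainObjPtr vm,
                                  virLXCDomainObjPrivatePtr priv,
                                  unsigned int flags,
                                  unsigned long long *hardLimit)
{
    if (flags & VIR_DOMAIN_AFFECT_LIVE) {
        if (!virDomainObjIsActive(vm))
            return -1;               /* previously this path went on to use
                                      * priv->cgroup == NULL and crashed */
        return virCgroupGetMemoryHardLimit(priv->cgroup, hardLimit);
    }

    /* config (or inactive) case: answer from the stored definition
     * instead of from cgroups */
    *hardLimit = vm->def->mem.hard_limit;
    return 0;
}
```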
-
- 13 Nov 2013, 1 commit
-
-
Submitted by Ján Tomko
When opening a new connection to the driver, nwfilterOpen only succeeds if the driverState has been allocated. Move the privilege check in driver initialization before the state allocation, so that the driver is cleanly disabled for unprivileged daemons. This changes the nwfilter-define error from "error: cannot create config directory (null): Bad address" to "this function is not supported by the connection driver: virNWFilterDefineXML". https://bugzilla.redhat.com/show_bug.cgi?id=1029266 (cherry picked from commit b7829f95)
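Schematically, the fix moves the privilege test ahead of the allocation in the driver's state-initialize callback, roughly as below. This is a sketch only; the surrounding signature, macros and globals are assumptions based on how libvirt state drivers are usually written, not the literal patch.

```c
static int
nwfilterStateInitialize(bool privileged,
                        virStateInhibitCallback callback ATTRIBUTE_UNUSED,
                        void *opaque ATTRIBUTE_UNUSED)
{
    if (!privileged)
        return 0;                     /* leave driverState NULL, so nwfilterOpen
                                       * later reports "not supported" instead
                                       * of half-initializing */

    if (VIR_ALLOC(driverState) < 0)
        return -1;

    /* ... create the config directory, load filters, register DBus
     *     handlers for firewalld, ... */
    return 0;
}
```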
-
- 21 Oct 2013, 1 commit
-
-
Submitted by Daniel P. Berrange
The virConnectDomainXMLToNative API should require 'connect:write', not 'connect:read', since it will trigger execution of the QEMU binaries listed in the XML. Also make the virConnectDomainXMLFromNative API require a full read-write connection and the 'connect:write' permission. Although the current impl doesn't trigger execution of QEMU, we should not rely on that impl detail from an API permissioning POV. Signed-off-by: Daniel P. Berrange <berrange@redhat.com> (cherry picked from commit 57687fd6)
-
- 18 Oct 2013, 1 commit
-
-
Submitted by Zhou Yimin
Introduced by 7b87a3. When I quit a process which had only registered VIR_DOMAIN_EVENT_ID_REBOOT, I got an error like: "libvirt: XML-RPC error : internal error: domain event 0 not registered". With this fix applied, the error no longer appears. Signed-off-by: Zhou Yimin <zhouyimin@huawei.com> Signed-off-by: Eric Blake <eblake@redhat.com> (cherry picked from commit 9712c251)
-