1. 16 January 2014 (3 commits)
    • event: filter global events by domain:getattr ACL [CVE-2014-0028] · 51afa9a2
      Committed by Eric Blake
      Ever since ACL filtering was added in commit 76397360 (v1.1.1), a
      user could still use event registration to obtain access to a
      domain that they could not normally access via virDomainLookup*
      or virConnectListAllDomains and friends.  We already have the
      framework in the RPC generator for creating the filter, and
      previous cleanup patches got us to the point that we can now
      wire the filter through the entire object event stack.
      
      Furthermore, whether or not domain:getattr is honored, use of
      global events is a form of obtaining a list of domains, which
      is covered by connect:search_domains added in a93cd08f (v1.1.0).
      Ideally, we'd have a way to enforce connect:search_domains when
      doing global registrations while omitting that check on a
      per-domain registration.  But this patch just unconditionally
      requires connect:search_domains, even when no list could be
      obtained, based on the following observations:
      1. Administrators are unlikely to grant domain:getattr for one
      or all domains while still denying connect:search_domains - a
      user that is able to manage domains will want to be able to
      manage them efficiently, but efficient management includes being
      able to list the domains they can access.  The idea of denying
      connect:search_domains while still granting access to individual
      domains is therefore not adding any real security, but just
      serves as a layer of obscurity to annoy the end user.
      2. In the current implementation, domain events are filtered
      on the client; the server has no idea if a domain filter was
      requested, and must therefore assume that all domain event
      requests are global.  Even if we fix the RPC protocol to
      allow for server-side filtering for newer client/server combos,
      making the connect:search_domains ACL check conditional on
      whether the domain argument was NULL won't benefit older clients.
      Therefore, we choose to document that connect:search_domains
      is a pre-requisite to any domain event management.
      
      Network events need the same treatment, with the obvious
      change of using connect:search_networks and network:getattr.
      
      * src/access/viraccessperm.h
      (VIR_ACCESS_PERM_CONNECT_SEARCH_DOMAINS)
      (VIR_ACCESS_PERM_CONNECT_SEARCH_NETWORKS): Document additional
      effect of the permission.
      * src/conf/domain_event.h (virDomainEventStateRegister)
      (virDomainEventStateRegisterID): Add new parameter.
      * src/conf/network_event.h (virNetworkEventStateRegisterID):
      Likewise.
      * src/conf/object_event_private.h (virObjectEventStateRegisterID):
      Likewise.
      * src/conf/object_event.c (_virObjectEventCallback): Track a filter.
      (virObjectEventDispatchMatchCallback): Use filter.
      (virObjectEventCallbackListAddID): Register filter.
      * src/conf/domain_event.c (virDomainEventFilter): New function.
      (virDomainEventStateRegister, virDomainEventStateRegisterID):
      Adjust callers.
      * src/conf/network_event.c (virNetworkEventFilter): New function.
      (virNetworkEventStateRegisterID): Adjust caller.
      * src/remote/remote_protocol.x
      (REMOTE_PROC_CONNECT_DOMAIN_EVENT_REGISTER)
      (REMOTE_PROC_CONNECT_DOMAIN_EVENT_REGISTER_ANY)
      (REMOTE_PROC_CONNECT_NETWORK_EVENT_REGISTER_ANY): Generate a
      filter, and require connect:search_domains instead of weaker
      connect:read.
      * src/test/test_driver.c (testConnectDomainEventRegister)
      (testConnectDomainEventRegisterAny)
      (testConnectNetworkEventRegisterAny): Update callers.
      * src/remote/remote_driver.c (remoteConnectDomainEventRegister)
      (remoteConnectDomainEventRegisterAny): Likewise.
      * src/xen/xen_driver.c (xenUnifiedConnectDomainEventRegister)
      (xenUnifiedConnectDomainEventRegisterAny): Likewise.
      * src/vbox/vbox_tmpl.c (vboxDomainGetXMLDesc): Likewise.
      * src/libxl/libxl_driver.c (libxlConnectDomainEventRegister)
      (libxlConnectDomainEventRegisterAny): Likewise.
      * src/qemu/qemu_driver.c (qemuConnectDomainEventRegister)
      (qemuConnectDomainEventRegisterAny): Likewise.
      * src/uml/uml_driver.c (umlConnectDomainEventRegister)
      (umlConnectDomainEventRegisterAny): Likewise.
      * src/network/bridge_driver.c
      (networkConnectNetworkEventRegisterAny): Likewise.
      * src/lxc/lxc_driver.c (lxcConnectDomainEventRegister)
      (lxcConnectDomainEventRegisterAny): Likewise.
      Signed-off-by: Eric Blake <eblake@redhat.com>
      (cherry picked from commit f9f56340)
      
      Conflicts:
      	src/conf/object_event.c - not backporting event refactoring
      	src/conf/object_event_private.h - likewise
      	src/conf/network_event.c - not backporting network events
      	src/conf/network_event.h - likewise
      	src/network/bridge_driver.c - likewise
      	src/access/viraccessperm.h - likewise
      	src/remote/remote_protocol.x - likewise
      	src/conf/domain_event.c - includes code that upstream has in object_event
      	src/conf/domain_event.h - context
      	src/libxl/libxl_driver.c - context
      	src/lxc/lxc_driver.c - context
      	src/remote/remote_driver.c - context, not backporting network events
      	src/test/test_driver.c - context, not backporting network events
      	src/uml/uml_driver.c - context
      	src/xen/xen_driver.c - context
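
      A minimal standalone sketch of the dispatch-time filtering described
      above, using made-up types (event_callback, acl_filter_fn) rather than
      libvirt's real virObjectEventDispatchMatchCallback code: a registered
      callback carries an optional ACL filter, and dispatch silently skips
      callbacks whose filter rejects the domain.

       #include <stdbool.h>
       #include <stdio.h>
       #include <string.h>

       typedef bool (*acl_filter_fn)(const char *domname, void *opaque);

       typedef struct {
           void (*dispatch)(const char *domname);
           acl_filter_fn filter;        /* NULL means "no filtering" */
           void *filter_opaque;
       } event_callback;

       /* Run the callback only if its filter (if any) allows the domain. */
       static void dispatch_event(const event_callback *cb, const char *domname)
       {
           if (cb->filter && !cb->filter(domname, cb->filter_opaque))
               return;                  /* caller lacks e.g. domain:getattr */
           cb->dispatch(domname);
       }

       static bool allow_only_dom1(const char *domname, void *opaque)
       {
           (void)opaque;
           return strcmp(domname, "dom1") == 0;
       }

       static void print_event(const char *domname)
       {
           printf("lifecycle event for %s\n", domname);
       }

       int main(void)
       {
           event_callback cb = { print_event, allow_only_dom1, NULL };
           dispatch_event(&cb, "dom1");  /* delivered */
           dispatch_event(&cb, "dom2");  /* filtered out */
           return 0;
       }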
    • Fix memory leak in virObjectEventCallbackListRemoveID() · 271c0e7b
      Committed by Eric Blake
      While running objecteventtest, it was found that valgrind pointed out the
      following memory leak:
      
      ==13464== 5 bytes in 1 blocks are definitely lost in loss record 7 of 134
      ==13464==    at 0x4A0887C: malloc (vg_replace_malloc.c:270)
      ==13464==    by 0x341F485E21: strdup (strdup.c:42)
      ==13464==    by 0x4CAE28F: virStrdup (virstring.c:554)
      ==13464==    by 0x4CF3CBE: virObjectEventCallbackListAddID (object_event.c:286)
      ==13464==    by 0x4CF49CA: virObjectEventStateRegisterID (object_event.c:729)
      ==13464==    by 0x4CF73FE: virDomainEventStateRegisterID (domain_event.c:1424)
      ==13464==    by 0x4D7358F: testConnectDomainEventRegisterAny (test_driver.c:6032)
      ==13464==    by 0x4D600C8: virConnectDomainEventRegisterAny (libvirt.c:19128)
      ==13464==    by 0x402409: testDomainStartStopEvent (objecteventtest.c:232)
      ==13464==    by 0x403451: virtTestRun (testutils.c:138)
      ==13464==    by 0x402012: mymain (objecteventtest.c:395)
      ==13464==    by 0x403AF2: virtTestMain (testutils.c:593)
      ==13464==
      
      (cherry picked from commit 34d52b34)
      
      Conflicts:
      	src/conf/object_event.c - 1.2.1 refactoring to object_event not
      backported, so change applied directly in older domain_event.c instead
    • virDomainEventCallbackListFree: Don't leak @list->callbacks · 4f169b0e
      Committed by Michal Privoznik
      The @list->callbacks is an array that is inflated whenever a new
      callback is added, e.g. via virDomainEventCallbackListAddID(). However,
      when we are freeing the array, we free the items within it but forget
      to free the array itself.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      (cherry picked from commit ea13a759)
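
      Both leaks above have the same shape: a list owns strdup'd strings and a
      growing array, and the teardown paths freed only part of that. A small
      standalone sketch of the corrected teardown, with illustrative stand-in
      types rather than libvirt's own:

       #include <stdlib.h>
       #include <string.h>

       typedef struct {
           int callbackID;
           char *eventName;             /* strdup'd when the callback is added */
       } callback;

       typedef struct {
           size_t count;
           callback **callbacks;        /* grown on every registration */
       } callback_list;

       static void callback_list_free(callback_list *list)
       {
           for (size_t i = 0; i < list->count; i++) {
               free(list->callbacks[i]->eventName);  /* the strdup'd name */
               free(list->callbacks[i]);
           }
           free(list->callbacks);       /* the array itself, previously forgotten */
           list->callbacks = NULL;
           list->count = 0;
       }

       int main(void)
       {
           callback_list list = { 0, NULL };
           list.callbacks = calloc(1, sizeof(*list.callbacks));
           list.callbacks[0] = calloc(1, sizeof(callback));
           list.callbacks[0]->eventName = strdup("lifecycle");
           list.count = 1;
           callback_list_free(&list);
           return 0;
       }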
  2. 15 January 2014 (7 commits)
    • Really don't crash if a connection closes early · 8342adef
      Committed by Jiri Denemark
      https://bugzilla.redhat.com/show_bug.cgi?id=1047577
      
      When writing commit 173c2914, I missed the fact virNetServerClientClose
      unlocks the client object before actually clearing client->sock and thus
      it is possible to hit a window when client->keepalive is NULL while
      client->sock is not NULL. I was thinking client->sock == NULL was a
      better check for a closed connection but apparently we have to go with
      client->keepalive == NULL to actually fix the crash.
      Signed-off-by: NJiri Denemark <jdenemar@redhat.com>
      (cherry picked from commit 066c8ef6)
    • Don't crash if a connection closes early · 2328d9a8
      Committed by Jiri Denemark
      https://bugzilla.redhat.com/show_bug.cgi?id=1047577
      
      When a client closes its connection to libvirtd early during
      virConnectOpen, more specifically just after making
      REMOTE_PROC_CONNECT_SUPPORTS_FEATURE call to check if
      VIR_DRV_FEATURE_PROGRAM_KEEPALIVE is supported without even waiting for
      the result, libvirtd may crash due to a race in keep-alive
      initialization. Once receiving the REMOTE_PROC_CONNECT_SUPPORTS_FEATURE
      call, the daemon's event loop delegates it to a worker thread. In case
      the event loop detects EOF on the connection and calls
      virNetServerClientClose before the worker thread starts to handle the
      REMOTE_PROC_CONNECT_SUPPORTS_FEATURE call, client->keepalive will be
      disposed by the time virNetServerClientStartKeepAlive gets called from
      remoteDispatchConnectSupportsFeature. Because the flow is common for
      both authenticated and read-only connections, even unprivileged clients
      may cause the daemon to crash.
      
      To avoid the crash, virNetServerClientStartKeepAlive needs to check if
      the connection is still open before starting keep-alive protocol.
      
      Every libvirt release since 0.9.8 is affected by this bug.
      
      (cherry picked from commit 173c2914)
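
      The guard added by these two patches can be pictured with a standalone
      sketch; the client type below is a simplified stand-in for
      virNetServerClient, and per the follow-up fix it is the keepalive
      pointer, not the socket, that reliably marks a closed connection:

       #include <pthread.h>
       #include <stdio.h>

       typedef struct {
           pthread_mutex_t lock;
           void *sock;          /* cleared only some time after close */
           void *keepalive;     /* cleared by the close path while locked */
       } client;

       static void client_start_keepalive(client *c)
       {
           pthread_mutex_lock(&c->lock);
           if (!c->keepalive) {         /* connection already closed */
               pthread_mutex_unlock(&c->lock);
               fprintf(stderr, "client closed early, not starting keep-alive\n");
               return;
           }
           /* ... arm the keep-alive timer here ... */
           pthread_mutex_unlock(&c->lock);
       }

       int main(void)
       {
           client c = { PTHREAD_MUTEX_INITIALIZER, NULL, NULL };
           client_start_keepalive(&c);  /* safe even though keepalive is NULL */
           return 0;
       }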
    • qemu: Fix job usage in virDomainGetBlockIoTune · a7844b9e
      Committed by Jiri Denemark
      CVE-2013-6458
      
      Every API that is going to begin a job should do that before fetching
      data from vm->def.
      
      (cherry picked from commit 3b564259)
    • qemu: Fix job usage in qemuDomainBlockCopy · 0c4822c1
      Committed by Jiri Denemark
      Every API that is going to begin a job should do that before fetching
      data from vm->def.
      
      (cherry picked from commit ff5f30b6)
    • qemu: Fix job usage in qemuDomainBlockJobImpl · 7354aaf4
      Committed by Jiri Denemark
      CVE-2013-6458
      
      Every API that is going to begin a job should do that before fetching
      data from vm->def.
      
      (cherry picked from commit f93d2caa)
    • qemu: Avoid using stale data in virDomainGetBlockInfo · 0e98442e
      Committed by Jiri Denemark
      CVE-2013-6458
      
      Generally, every API that is going to begin a job should do that before
      fetching data from vm->def. However, qemuDomainGetBlockInfo does not
      know whether it will have to start a job or not before checking vm->def.
      To avoid using disk alias that might have been freed while we were
      waiting for a job, we use its copy. In case the disk was removed in the
      meantime, we will fail with "cannot find statistics for device '...'"
      error message.
      
      (cherry picked from commit b7992595)
    • qemu: Do not access stale data in virDomainBlockStats · 1bfc35e3
      Committed by Jiri Denemark
      CVE-2013-6458
      https://bugzilla.redhat.com/show_bug.cgi?id=1043069
      
      When virDomainDetachDeviceFlags is called concurrently with
      virDomainBlockStats, libvirtd may crash because qemuDomainBlockStats
      finds a disk in vm->def before getting a job on a domain and uses the
      disk pointer after getting the job. However, the domain is unlocked
      while waiting on a job condition and thus data behind the disk pointer
      may disappear. This happens when thread 1 runs
      virDomainDetachDeviceFlags and enters monitor to actually remove the
      disk. Then another thread starts running virDomainBlockStats, finds the
      disk in vm->def, and while it's waiting on the job condition (owned by
      the first thread), the first thread finishes the disk removal. When the
      second thread gets the job, the memory pointed to by the disk pointer is
      already gone.
      
      That said, every API that is going to begin a job should do that before
      fetching data from vm->def.
      
      (cherry picked from commit db86da5c)
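
      The rule repeated in these fixes can be shown with a tiny standalone
      sketch (the types and begin_job/end_job stubs are illustrative, not the
      qemu driver's real code): acquire the job first, and only then look
      anything up in vm->def, because the domain is unlocked while the job is
      being waited for.

       #include <string.h>

       typedef struct { char alias[32]; } disk;
       typedef struct { disk *disks; size_t ndisks; } domain_def;
       typedef struct { domain_def *def; } domain;

       /* Stubs; the real BeginJob may block with the domain unlocked, so
        * pointers fetched from vm->def beforehand can be freed meanwhile. */
       static void begin_job(domain *vm) { (void)vm; }
       static void end_job(domain *vm)   { (void)vm; }

       static disk *find_disk(domain_def *def, const char *path)
       {
           for (size_t i = 0; i < def->ndisks; i++)
               if (strcmp(def->disks[i].alias, path) == 0)
                   return &def->disks[i];
           return NULL;
       }

       static int get_block_stats(domain *vm, const char *path)
       {
           begin_job(vm);                       /* job first ...     */
           disk *d = find_disk(vm->def, path);  /* ... lookup second */
           int ret = d ? 0 : -1;
           end_job(vm);
           return ret;
       }

       int main(void)
       {
           disk d = { "vda" };
           domain_def def = { &d, 1 };
           domain vm = { &def };
           return get_block_stats(&vm, "vda");
       }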
  3. 09 January 2014 (4 commits)
  4. 29 December 2013 (1 commit)
    • libxl: avoid crashing if calling `virsh numatune' on inactive domain · 5904ba60
      Committed by Dario Faggioli
      by, in libxlDomainGetNumaParameters(), calling libxl_bitmap_init() as soon as
      possible, which avoids getting to 'cleanup:', where libxl_bitmap_dispose()
      happens, without having initialized the nodemap, and hence crashing after some
      invalid free()-s:
      
       # ./daemon/libvirtd -v
       *** Error in `/home/xen/libvirt.git/daemon/.libs/lt-libvirtd': munmap_chunk(): invalid pointer: 0x00007fdd42592666 ***
       ======= Backtrace: =========
       /lib64/libc.so.6(+0x7bbe7)[0x7fdd3f767be7]
       /lib64/libxenlight.so.4.3(libxl_bitmap_dispose+0xd)[0x7fdd2c88c045]
       /home/xen/libvirt.git/daemon/.libs/../../src/.libs/libvirt_driver_libxl.so(+0x12d26)[0x7fdd2caccd26]
       /home/xen/libvirt.git/src/.libs/libvirt.so.0(virDomainGetNumaParameters+0x15c)[0x7fdd4247898c]
       /home/xen/libvirt.git/daemon/.libs/lt-libvirtd(+0x1d9a2)[0x7fdd42ecc9a2]
       /home/xen/libvirt.git/src/.libs/libvirt.so.0(virNetServerProgramDispatch+0x3da)[0x7fdd424e9eaa]
       /home/xen/libvirt.git/src/.libs/libvirt.so.0(+0x1a6f38)[0x7fdd424e3f38]
       /home/xen/libvirt.git/src/.libs/libvirt.so.0(+0xa81e5)[0x7fdd423e51e5]
       /home/xen/libvirt.git/src/.libs/libvirt.so.0(+0xa783e)[0x7fdd423e483e]
       /lib64/libpthread.so.0(+0x7c53)[0x7fdd3febbc53]
       /lib64/libc.so.6(clone+0x6d)[0x7fdd3f7e1dbd]
      Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
      Cc: Jim Fehlig <jfehlig@suse.com>
      Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>
      (cherry picked from commit f9ee91d3)
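
      The pattern behind the fix, as a standalone sketch (the bitmap type is a
      stand-in, not the real libxl_bitmap API): initialize everything the
      cleanup label will dispose before the first branch that can jump there,
      so an early exit never disposes uninitialized memory.

       #include <stdlib.h>

       typedef struct { unsigned char *map; size_t size; } bitmap;

       static void bitmap_init(bitmap *b)    { b->map = NULL; b->size = 0; }
       static void bitmap_dispose(bitmap *b) { free(b->map); b->map = NULL; b->size = 0; }

       static int get_numa_parameters(int domain_active)
       {
           int ret = -1;
           bitmap nodemap;

           bitmap_init(&nodemap);     /* done first, before any goto cleanup */

           if (!domain_active)
               goto cleanup;          /* previously reached dispose() on garbage */

           /* ... fill nodemap and report the parameters ... */
           ret = 0;

       cleanup:
           bitmap_dispose(&nodemap);  /* always safe now */
           return ret;
       }

       int main(void)
       {
           return get_numa_parameters(0) == -1 ? 0 : 1;
       }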
  5. 20 December 2013 (2 commits)
    • Fix crash in lxcDomainSetMemoryParameters · e98831d5
      Committed by Martin Kletzander
      The function doesn't check whether the request is made for an active or
      inactive domain.  Thus, when the domain is not running, it still tries
      to access non-existent cgroups (priv->cgroup, which is NULL).
      
      I reworked the function so that it works the same way its qemu
      counterpart does.
      
      Reproducer:
       1) Define an LXC domain
       2) Do 'virsh memtune <domain> --hard-limit 133T'
      
      Backtrace:
       Thread 6 (Thread 0x7fffec8c0700 (LWP 26826)):
       #0  0x00007ffff70edcc4 in virCgroupPathOfController (group=0x0, controller=3,
           key=0x7ffff75734bd "memory.limit_in_bytes", path=0x7fffec8bf718) at util/vircgroup.c:1764
       #1  0x00007ffff70e9206 in virCgroupSetValueStr (group=0x0, controller=3,
           key=0x7ffff75734bd "memory.limit_in_bytes", value=0x7fffe409f360 "1073741824")
           at util/vircgroup.c:669
       #2  0x00007ffff70e98b4 in virCgroupSetValueU64 (group=0x0, controller=3,
           key=0x7ffff75734bd "memory.limit_in_bytes", value=1073741824) at util/vircgroup.c:740
       #3  0x00007ffff70ee518 in virCgroupSetMemory (group=0x0, kb=1048576) at util/vircgroup.c:1904
       #4  0x00007ffff70ee675 in virCgroupSetMemoryHardLimit (group=0x0, kb=1048576)
           at util/vircgroup.c:1944
       #5  0x00005555557d54c8 in lxcDomainSetMemoryParameters (dom=0x7fffe40cc420,
           params=0x7fffe409f100, nparams=1, flags=0) at lxc/lxc_driver.c:774
       #6  0x00007ffff72c20f9 in virDomainSetMemoryParameters (domain=0x7fffe40cc420,
           params=0x7fffe409f100, nparams=1, flags=0) at libvirt.c:4051
       #7  0x000055555561365f in remoteDispatchDomainSetMemoryParameters (server=0x555555eb7e00,
           client=0x555555ec4b10, msg=0x555555eb94e0, rerr=0x7fffec8bfb70, args=0x7fffe40b8510)
           at remote_dispatch.h:7621
       #8  0x00005555556133fd in remoteDispatchDomainSetMemoryParametersHelper (server=0x555555eb7e00,
           client=0x555555ec4b10, msg=0x555555eb94e0, rerr=0x7fffec8bfb70, args=0x7fffe40b8510,
           ret=0x7fffe40b84f0) at remote_dispatch.h:7591
       #9  0x00007ffff73b293f in virNetServerProgramDispatchCall (prog=0x555555ec3ae0,
           server=0x555555eb7e00, client=0x555555ec4b10, msg=0x555555eb94e0)
           at rpc/virnetserverprogram.c:435
       #10 0x00007ffff73b207f in virNetServerProgramDispatch (prog=0x555555ec3ae0,
           server=0x555555eb7e00, client=0x555555ec4b10, msg=0x555555eb94e0)
           at rpc/virnetserverprogram.c:305
       #11 0x00007ffff73a4d2c in virNetServerProcessMsg (srv=0x555555eb7e00, client=0x555555ec4b10,
           prog=0x555555ec3ae0, msg=0x555555eb94e0) at rpc/virnetserver.c:165
       #12 0x00007ffff73a4e8d in virNetServerHandleJob (jobOpaque=0x555555ec3e30, opaque=0x555555eb7e00)
           at rpc/virnetserver.c:186
       #13 0x00007ffff7187f3f in virThreadPoolWorker (opaque=0x555555eb7ac0) at util/virthreadpool.c:144
       #14 0x00007ffff718733a in virThreadHelper (data=0x555555eb7890) at util/virthreadpthread.c:161
       #15 0x00007ffff468ed89 in start_thread (arg=0x7fffec8c0700) at pthread_create.c:308
       #16 0x00007ffff3da26bd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
      Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
      (cherry picked from commit 9faf3f29)
    • CVE-2013-6436: fix crash in lxcDomainGetMemoryParameters · 66247dc5
      Committed by Martin Kletzander
      The function doesn't check whether the request is made for an active or
      inactive domain.  Thus, when the domain is not running, it still tries
      to access non-existent cgroups (priv->cgroup, which is NULL).
      
      I reworked the function so that it works the same way its qemu
      counterpart does.
      
      Reproducer:
       1) Define an LXC domain
       2) Do 'virsh memtune <domain>'
      
      Backtrace:
       Thread 6 (Thread 0x7fffec8c0700 (LWP 13387)):
       #0  0x00007ffff70edcc4 in virCgroupPathOfController (group=0x0, controller=3,
           key=0x7ffff75734bd "memory.limit_in_bytes", path=0x7fffec8bf750) at util/vircgroup.c:1764
       #1  0x00007ffff70e958c in virCgroupGetValueStr (group=0x0, controller=3,
           key=0x7ffff75734bd "memory.limit_in_bytes", value=0x7fffec8bf7c0) at util/vircgroup.c:705
       #2  0x00007ffff70e9d29 in virCgroupGetValueU64 (group=0x0, controller=3,
           key=0x7ffff75734bd "memory.limit_in_bytes", value=0x7fffec8bf810) at util/vircgroup.c:804
       #3  0x00007ffff70ee706 in virCgroupGetMemoryHardLimit (group=0x0, kb=0x7fffec8bf8a8)
           at util/vircgroup.c:1962
       #4  0x00005555557d590f in lxcDomainGetMemoryParameters (dom=0x7fffd40024a0,
           params=0x7fffd40027a0, nparams=0x7fffec8bfa24, flags=0) at lxc/lxc_driver.c:826
       #5  0x00007ffff72c28d3 in virDomainGetMemoryParameters (domain=0x7fffd40024a0,
           params=0x7fffd40027a0, nparams=0x7fffec8bfa24, flags=0) at libvirt.c:4137
       #6  0x000055555563714d in remoteDispatchDomainGetMemoryParameters (server=0x555555eb7e00,
           client=0x555555ebaef0, msg=0x555555ebb3e0, rerr=0x7fffec8bfb70, args=0x7fffd40024e0,
           ret=0x7fffd4002420) at remote.c:1895
       #7  0x00005555556052c4 in remoteDispatchDomainGetMemoryParametersHelper (server=0x555555eb7e00,
           client=0x555555ebaef0, msg=0x555555ebb3e0, rerr=0x7fffec8bfb70, args=0x7fffd40024e0,
           ret=0x7fffd4002420) at remote_dispatch.h:4050
       #8  0x00007ffff73b293f in virNetServerProgramDispatchCall (prog=0x555555ec3ae0,
           server=0x555555eb7e00, client=0x555555ebaef0, msg=0x555555ebb3e0)
           at rpc/virnetserverprogram.c:435
       #9  0x00007ffff73b207f in virNetServerProgramDispatch (prog=0x555555ec3ae0,
           server=0x555555eb7e00, client=0x555555ebaef0, msg=0x555555ebb3e0)
           at rpc/virnetserverprogram.c:305
       #10 0x00007ffff73a4d2c in virNetServerProcessMsg (srv=0x555555eb7e00, client=0x555555ebaef0,
           prog=0x555555ec3ae0, msg=0x555555ebb3e0) at rpc/virnetserver.c:165
       #11 0x00007ffff73a4e8d in virNetServerHandleJob (jobOpaque=0x555555ebc7e0, opaque=0x555555eb7e00)
           at rpc/virnetserver.c:186
       #12 0x00007ffff7187f3f in virThreadPoolWorker (opaque=0x555555eb7ac0) at util/virthreadpool.c:144
       #13 0x00007ffff718733a in virThreadHelper (data=0x555555eb7890) at util/virthreadpthread.c:161
       #14 0x00007ffff468ed89 in start_thread (arg=0x7fffec8c0700) at pthread_create.c:308
       #15 0x00007ffff3da26bd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
      Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
      (cherry picked from commit f8c1cb90)
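
      Both memtune fixes boil down to the same guard, sketched below with
      simplified stand-in types (the real rework also falls back to the
      persistent config): refuse to touch priv->cgroup when the domain is not
      running, since it is NULL then.

       #include <stdio.h>

       typedef struct { void *cgroup; } domain_priv;   /* NULL when not running */
       typedef struct { int active; domain_priv priv; } domain;

       static int set_memory_hard_limit(domain *vm, unsigned long long kb)
       {
           if (!vm->active || !vm->priv.cgroup) {
               fprintf(stderr, "cgroup memory controller is not available "
                               "for an inactive domain\n");
               return -1;      /* instead of dereferencing a NULL cgroup */
           }
           /* ... the real code would call virCgroupSetMemoryHardLimit() ... */
           (void)kb;
           return 0;
       }

       int main(void)
       {
           domain vm = { 0, { NULL } };
           return set_memory_hard_limit(&vm, 1048576) == -1 ? 0 : 1;
       }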
  6. 15 December 2013 (4 commits)
    • Prep for release 1.1.3.2 · 69770f6a
      Committed by Cole Robinson
    • Tie SASL callbacks lifecycle to virNetSessionSASLContext · 38600eb4
      Committed by Christophe Fergeau
      The array of sasl_callback_t callbacks which is passed to sasl_client_new()
      must be kept alive as long as the created sasl_conn_t object is alive, as
      cyrus-sasl uses this structure internally for things like logging, so
      the memory used for the callbacks must only be freed after sasl_dispose()
      has been called.
      
      During testing of successful SASL logins with
      virsh -c qemu+tls:///system list --all
      I've been getting invalid read reports from valgrind
      
      ==9237== Invalid read of size 8
      ==9237==    at 0x6E93B6F: _sasl_getcallback (common.c:1745)
      ==9237==    by 0x6E95430: _sasl_log (common.c:1850)
      ==9237==    by 0x16593D87: digestmd5_client_mech_dispose (digestmd5.c:4580)
      ==9237==    by 0x6E91653: client_dispose (client.c:332)
      ==9237==    by 0x6E9476A: sasl_dispose (common.c:851)
      ==9237==    by 0x4E225A1: virNetSASLSessionDispose (virnetsaslcontext.c:678)
      ==9237==    by 0x4CBC551: virObjectUnref (virobject.c:262)
      ==9237==    by 0x4E254D1: virNetSocketDispose (virnetsocket.c:1042)
      ==9237==    by 0x4CBC551: virObjectUnref (virobject.c:262)
      ==9237==    by 0x4E2701C: virNetSocketEventFree (virnetsocket.c:1794)
      ==9237==    by 0x4C965D3: virEventPollCleanupHandles (vireventpoll.c:583)
      ==9237==    by 0x4C96987: virEventPollRunOnce (vireventpoll.c:652)
      ==9237==    by 0x4C94730: virEventRunDefaultImpl (virevent.c:274)
      ==9237==    by 0x12C7BA: vshEventLoop (virsh.c:2407)
      ==9237==    by 0x4CD3D04: virThreadHelper (virthreadpthread.c:161)
      ==9237==    by 0x7DAEF32: start_thread (pthread_create.c:309)
      ==9237==    by 0x8C86EAC: clone (clone.S:111)
      ==9237==  Address 0xe2d61b0 is 0 bytes inside a block of size 168 free'd
      ==9237==    at 0x4A07577: free (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
      ==9237==    by 0x4C73827: virFree (viralloc.c:580)
      ==9237==    by 0x4DE4BC7: remoteAuthSASL (remote_driver.c:4219)
      ==9237==    by 0x4DE33D0: remoteAuthenticate (remote_driver.c:3639)
      ==9237==    by 0x4DDBFAA: doRemoteOpen (remote_driver.c:832)
      ==9237==    by 0x4DDC8DC: remoteConnectOpen (remote_driver.c:1031)
      ==9237==    by 0x4D8595F: do_open (libvirt.c:1239)
      ==9237==    by 0x4D863F3: virConnectOpenAuth (libvirt.c:1481)
      ==9237==    by 0x12762B: vshReconnect (virsh.c:337)
      ==9237==    by 0x12C9B0: vshInit (virsh.c:2470)
      ==9237==    by 0x12E9A5: main (virsh.c:3338)
      
      This commit changes virNetSASLSessionNewClient() to take ownership of the SASL
      callbacks. Then we can free them in virNetSASLSessionDispose() after the corresponding
      sasl_conn_t has been freed.
      
      (cherry picked from commit 13fdc6d6)
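
      The ownership change can be pictured with a standalone sketch; the
      sasl-like names below are fakes standing in for cyrus-sasl, which keeps
      reading the callback array internally until the connection is disposed:

       #include <stdlib.h>

       typedef struct { const char *id; void *proc; } callback;   /* ~ sasl_callback_t */
       typedef struct { callback *cbs; } sasl_conn;                /* borrows the array */
       typedef struct { sasl_conn *conn; callback *owned_cbs; } session;

       static sasl_conn *fake_sasl_client_new(callback *cbs)
       {
           sasl_conn *c = malloc(sizeof(*c));
           c->cbs = cbs;               /* library keeps using this internally */
           return c;
       }

       static void fake_sasl_dispose(sasl_conn *c) { free(c); }

       static session *session_new(callback *cbs)   /* takes ownership of cbs */
       {
           session *s = malloc(sizeof(*s));
           s->owned_cbs = cbs;
           s->conn = fake_sasl_client_new(cbs);
           return s;
       }

       static void session_dispose(session *s)
       {
           fake_sasl_dispose(s->conn); /* may still read the callbacks */
           free(s->owned_cbs);         /* only free them afterwards */
           free(s);
       }

       int main(void)
       {
           callback *cbs = calloc(4, sizeof(*cbs));
           session *s = session_new(cbs);
           session_dispose(s);
           return 0;
       }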
    • spec: Don't save/restore running VMs on libvirt-client update · ddbd9138
      Committed by Jiri Denemark
      The previous attempt (commit d65e0e14) removed just one of two
      libvirt-guests restarts that happened on libvirt-client update. Let's
      remove the last one too :-)
      
      https://bugzilla.redhat.com/show_bug.cgi?id=962225
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
      (cherry picked from commit 604f79b3)
    • Return right error code for baselineCPU · 085e2fe0
      Committed by Don Dugger
      This Python interface code is returning -1 on errors for the
      `baselineCPU' API.  Since this API is supposed to return a pointer,
      the error return value should really be VIR_PY_NONE.
      
      NB:  I've checked all the other APIs in this file and this is the
      only pointer API that is returning -1.
      Signed-off-by: Don Dugger <donald.d.dugger@intel.com>
      
      (crobinso: Upstream in libvirt-python.git)
  7. 10 December 2013 (6 commits)
  8. 03 December 2013 (1 commit)
  9. 22 November 2013 (1 commit)
  10. 20 November 2013 (3 commits)
  11. 18 November 2013 (3 commits)
  12. 13 November 2013 (3 commits)
    • Disable nwfilter driver when running unprivileged · 22a1dd95
      Committed by Ján Tomko
      When opening a new connection to the driver, nwfilterOpen
      only succeeds if the driverState has been allocated.
      
      Move the privilege check in driver initialization before
      the state allocation to disable the driver.
      
      This changes the nwfilter-define error from:
      error: cannot create config directory (null): Bad address
      To:
      this function is not supported by the connection driver:
      virNWFilterDefineXML
      
      https://bugzilla.redhat.com/show_bug.cgi?id=1029266
      (cherry picked from commit b7829f95)
    • qemu: don't use deprecated -no-kvm-pit-reinjection · e20a2c77
      Committed by Ján Tomko
      Since qemu-kvm 1.1 [1] (since 1.3 in upstream QEMU [2]),
      '-no-kvm-pit-reinjection' has been deprecated.
      Use -global kvm-pit.lost_tick_policy=discard instead.
      
      https://bugzilla.redhat.com/show_bug.cgi?id=978719
      
      [1] http://git.kernel.org/cgit/virt/kvm/qemu-kvm.git/commit/?id=4e4fa39
      [2] http://git.qemu.org/?p=qemu.git;a=commitdiff;h=c21fb4f
      
      (cherry picked from commit 1569fa14)
      
      Conflicts:
      	tests/qemucapabilitiesdata/caps_1.2.2-1.caps
      	tests/qemucapabilitiesdata/caps_1.2.2-1.replies
      	tests/qemucapabilitiesdata/caps_1.3.1-1.caps
      	tests/qemucapabilitiesdata/caps_1.3.1-1.replies
      	tests/qemucapabilitiesdata/caps_1.4.2-1.caps
      	tests/qemucapabilitiesdata/caps_1.4.2-1.replies
      	tests/qemucapabilitiesdata/caps_1.5.3-1.caps
      	tests/qemucapabilitiesdata/caps_1.5.3-1.replies
      	tests/qemucapabilitiesdata/caps_1.6.0-1.caps
      	tests/qemucapabilitiesdata/caps_1.6.0-1.replies
      	tests/qemucapabilitiesdata/caps_1.6.50-1.caps
      	tests/qemucapabilitiesdata/caps_1.6.50-1.replies
      (qemucapabilitiestest is not backported)
    • qemu: Don't access vm->priv on unlocked domain · cc16220d
      Committed by Michal Privoznik
      Since 86d90b3a (yes, my patch; again) we have been supporting NBD storage
      migration. However, on the error recovery path we got the steps reversed.
      The correct order is: return the NBD port to the virPortAllocator and then
      either unlock the vm or remove it from the driver. Not vice versa.
      
      ==11192== Invalid write of size 4
      ==11192==    at 0x11488559: qemuMigrationPrepareAny (qemu_migration.c:2459)
      ==11192==    by 0x11488EA6: qemuMigrationPrepareDirect (qemu_migration.c:2652)
      ==11192==    by 0x114D1509: qemuDomainMigratePrepare3Params (qemu_driver.c:10332)
      ==11192==    by 0x519075D: virDomainMigratePrepare3Params (libvirt.c:7290)
      ==11192==    by 0x1502DA: remoteDispatchDomainMigratePrepare3Params (remote.c:4798)
      ==11192==    by 0x12DECA: remoteDispatchDomainMigratePrepare3ParamsHelper (remote_dispatch.h:5741)
      ==11192==    by 0x5212127: virNetServerProgramDispatchCall (virnetserverprogram.c:435)
      ==11192==    by 0x5211C86: virNetServerProgramDispatch (virnetserverprogram.c:305)
      ==11192==    by 0x520A8FD: virNetServerProcessMsg (virnetserver.c:165)
      ==11192==    by 0x520A9E1: virNetServerHandleJob (virnetserver.c:186)
      ==11192==    by 0x50DA78F: virThreadPoolWorker (virthreadpool.c:144)
      ==11192==    by 0x50DA11C: virThreadHelper (virthreadpthread.c:161)
      ==11192==  Address 0x1368baa0 is 576 bytes inside a block of size 688 free'd
      ==11192==    at 0x4A07F5C: free (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
      ==11192==    by 0x5079A2F: virFree (viralloc.c:580)
      ==11192==    by 0x11456C34: qemuDomainObjPrivateFree (qemu_domain.c:267)
      ==11192==    by 0x50F41B4: virDomainObjDispose (domain_conf.c:2034)
      ==11192==    by 0x50C2991: virObjectUnref (virobject.c:262)
      ==11192==    by 0x50F4CFC: virDomainObjListRemove (domain_conf.c:2361)
      ==11192==    by 0x1145C125: qemuDomainRemoveInactive (qemu_domain.c:2087)
      ==11192==    by 0x11488520: qemuMigrationPrepareAny (qemu_migration.c:2456)
      ==11192==    by 0x11488EA6: qemuMigrationPrepareDirect (qemu_migration.c:2652)
      ==11192==    by 0x114D1509: qemuDomainMigratePrepare3Params (qemu_driver.c:10332)
      ==11192==    by 0x519075D: virDomainMigratePrepare3Params (libvirt.c:7290)
      ==11192==    by 0x1502DA: remoteDispatchDomainMigratePrepare3Params (remote.c:4798)
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      (cherry picked from commit 1f2f879e)
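
      The corrected error-path ordering, as a small standalone sketch with
      illustrative types: read and release the NBD port while the domain's
      private data still exists, and only then tear the domain object down.

       #include <stdlib.h>

       typedef struct { unsigned short nbd_port; } domain_priv;
       typedef struct { domain_priv *priv; } domain;

       static void port_allocator_release(unsigned short port) { (void)port; }

       static void domain_remove(domain *vm)
       {
           free(vm->priv);             /* private data is gone after this */
           vm->priv = NULL;
       }

       static void prepare_error_path(domain *vm)
       {
           port_allocator_release(vm->priv->nbd_port);  /* port first */
           vm->priv->nbd_port = 0;
           domain_remove(vm);          /* doing this first caused the invalid write */
       }

       int main(void)
       {
           domain vm = { calloc(1, sizeof(domain_priv)) };
           vm.priv->nbd_port = 10809;
           prepare_error_path(&vm);
           return 0;
       }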
  13. 12 November 2013 (2 commits)
    • virpci: Don't error on unbinded devices · 79d347c9
      Committed by Michal Privoznik
      https://bugzilla.redhat.com/show_bug.cgi?id=1018897
      
      If a PCI device is not bound to any driver (e.g. there is not yet a PCI
      driver for it in the Linux kernel) but users still want to pass the device
      through, we fail the whole operation because we fail to resolve the
      'driver' link under the PCI device's sysfs tree. Obviously, this is not a
      fatal error and it shouldn't be an error at all.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      (cherry picked from commit df4283a5)
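
      A standalone sketch of the relaxed behaviour using plain readlink() (the
      sysfs path in main() is only an example): a missing 'driver' symlink
      means the device is simply not bound to any driver, and is reported as
      such rather than as an error.

       #include <errno.h>
       #include <limits.h>
       #include <stdio.h>
       #include <unistd.h>

       /* Returns 1 if a driver is bound, 0 if none, -1 on real errors. */
       static int pci_get_driver(const char *device_dir, char *drv, size_t drvlen)
       {
           char link[PATH_MAX];
           ssize_t n;

           snprintf(link, sizeof(link), "%s/driver", device_dir);
           n = readlink(link, drv, drvlen - 1);
           if (n < 0) {
               if (errno == ENOENT)
                   return 0;           /* unbound device: not fatal */
               return -1;
           }
           drv[n] = '\0';
           return 1;
       }

       int main(void)
       {
           char drv[PATH_MAX];
           int rc = pci_get_driver("/sys/bus/pci/devices/0000:00:00.0",
                                   drv, sizeof(drv));
           printf("driver bound: %d\n", rc);
           return rc < 0;
       }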
    • virSecurityLabelDefParseXML: Don't parse label on model='none' · 13cfcad6
      Committed by Michal Privoznik
      https://bugzilla.redhat.com/show_bug.cgi?id=1027096
      
      If there's the following snippet in the domain XML, the domain will be
      lost upon daemon restart (if the domain was started prior to the restart):
      
          <seclabel type='dynamic' relabel='yes'/>
      
      The problem is, the 'label', 'imagelabel' and 'baselabel' are parsed
      whenever VIR_DOMAIN_XML_INACTIVE is *not* present or the label is
      static. The latter is not our case, obviously. So, when libvirtd starts
      up, it finds the domain state XML and parses it. During parsing, many XML
      flags are enabled, but not VIR_DOMAIN_XML_INACTIVE. Hence, our parser
      tries to extract 'label', 'imagelabel' and 'baselabel' from the XML,
      which fails for model='none'. Err, this model - even though not specified
      in the XML - can be taken from the qemu-wide config file:
      /etc/libvirt/qemu.conf.
      
      However, in order to know we are dealing with model='none' the code in
      question must be moved forward a bit. Then a new check must be
      introduced. This is what the first two chunks are doing.
      
      But this alone is not sufficient. The domain state XML won't contain the
      model attribute without a slight modification. The model should be
      inserted into the XML even if it equals 'none' whenever the state XML is
      being generated - what if the origin (the @security_driver variable in
      qemu.conf) changes across libvirtd restarts?
      
      At the end, a test to catch this scenario is introduced.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      (cherry picked from commit 9fb3f957)