1. 15 Jan 2014, 2 commits
    • qemu: Avoid using stale data in virDomainGetBlockInfo · 0e98442e
      Authored by Jiri Denemark
      CVE-2013-6458
      
      Generally, every API that is going to begin a job should do that before
      fetching data from vm->def. However, qemuDomainGetBlockInfo does not
      know whether it will have to start a job or not before checking vm->def.
      To avoid using a disk alias that might have been freed while we were
      waiting for a job, we use a copy of it. In case the disk was removed in
      the meantime, we will fail with a "cannot find statistics for device
      '...'" error message.
      
      (cherry picked from commit b7992595)
    • qemu: Do not access stale data in virDomainBlockStats · 1bfc35e3
      Authored by Jiri Denemark
      CVE-2013-6458
      https://bugzilla.redhat.com/show_bug.cgi?id=1043069
      
      When virDomainDetachDeviceFlags is called concurrently with
      virDomainBlockStats, libvirtd may crash because qemuDomainBlockStats
      finds a disk in vm->def before getting a job on the domain and uses the
      disk pointer after getting the job. However, the domain is unlocked
      while waiting on the job condition, and thus the data behind the disk
      pointer may disappear. This happens when thread 1 runs
      virDomainDetachDeviceFlags and enters the monitor to actually remove
      the disk. Then another thread starts running virDomainBlockStats, finds
      the disk in vm->def, and while it's waiting on the job condition (owned
      by the first thread), the first thread finishes the disk removal. When
      the second thread gets the job, the memory pointed to by the disk
      pointer is already gone.

      That said, every API that is going to begin a job should do that before
      fetching data from vm->def.
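
      Sketched as code; the lookup helper shown (virDomainDiskIndexByName)
      is for illustration and not necessarily the exact call the patch uses:

       /* Acquire the job first; only then is it safe to consult vm->def. */
       if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0)
           goto cleanup;

       if ((idx = virDomainDiskIndexByName(vm->def, path, false)) < 0)
           goto endjob;
       disk = vm->def->disks[idx];   /* valid while we hold the job */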
      
      (cherry picked from commit db86da5c)
  2. 09 Jan 2014, 4 commits
  3. 29 Dec 2013, 1 commit
    • libxl: avoid crashing if calling `virsh numatune' on inactive domain · 5904ba60
      Authored by Dario Faggioli
      Fix this by calling libxl_bitmap_init() in libxlDomainGetNumaParameters()
      as early as possible. This avoids reaching 'cleanup:', where
      libxl_bitmap_dispose() happens, without having initialized the nodemap,
      which used to crash with invalid free()s (a sketch of the pattern
      follows the backtrace):
      
       # ./daemon/libvirtd -v
       *** Error in `/home/xen/libvirt.git/daemon/.libs/lt-libvirtd': munmap_chunk(): invalid pointer: 0x00007fdd42592666 ***
       ======= Backtrace: =========
       /lib64/libc.so.6(+0x7bbe7)[0x7fdd3f767be7]
       /lib64/libxenlight.so.4.3(libxl_bitmap_dispose+0xd)[0x7fdd2c88c045]
       /home/xen/libvirt.git/daemon/.libs/../../src/.libs/libvirt_driver_libxl.so(+0x12d26)[0x7fdd2caccd26]
       /home/xen/libvirt.git/src/.libs/libvirt.so.0(virDomainGetNumaParameters+0x15c)[0x7fdd4247898c]
       /home/xen/libvirt.git/daemon/.libs/lt-libvirtd(+0x1d9a2)[0x7fdd42ecc9a2]
       /home/xen/libvirt.git/src/.libs/libvirt.so.0(virNetServerProgramDispatch+0x3da)[0x7fdd424e9eaa]
       /home/xen/libvirt.git/src/.libs/libvirt.so.0(+0x1a6f38)[0x7fdd424e3f38]
       /home/xen/libvirt.git/src/.libs/libvirt.so.0(+0xa81e5)[0x7fdd423e51e5]
       /home/xen/libvirt.git/src/.libs/libvirt.so.0(+0xa783e)[0x7fdd423e483e]
       /lib64/libpthread.so.0(+0x7c53)[0x7fdd3febbc53]
       /lib64/libc.so.6(clone+0x6d)[0x7fdd3f7e1dbd]
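
      The fix boils down to the init-before-first-goto pattern; a sketch,
      where 'some_call_fails' is only a placeholder for the real checks:

        libxl_bitmap nodemap;

        libxl_bitmap_init(&nodemap);      /* before any 'goto cleanup' */

        if (some_call_fails)              /* placeholder condition */
            goto cleanup;

        /* ... fill in and use the nodemap ... */

        cleanup:
            libxl_bitmap_dispose(&nodemap);   /* now always safe */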
      Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
      Cc: Jim Fehlig <jfehlig@suse.com>
      Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>
      (cherry picked from commit f9ee91d3)
  4. 20 Dec 2013, 2 commits
    • Fix crash in lxcDomainSetMemoryParameters · e98831d5
      Authored by Martin Kletzander
      The function doesn't check whether the request is made for an active or
      an inactive domain.  Thus, when the domain is not running, it still
      tries to access non-existent cgroups (priv->cgroup, which is NULL).

      I reworked the function so that it works the same way its QEMU
      counterpart does.
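
      The shape of the fix, sketched and simplified (the real code also
      honours the live/config flags the way the QEMU driver does):

        if (!virDomainObjIsActive(vm)) {
            /* Not running: only the persistent definition may be
             * updated; priv->cgroup is NULL, so don't touch cgroups. */
        } else {
            /* Running: priv->cgroup is valid here. */
            if (virCgroupSetMemoryHardLimit(priv->cgroup,
                                            params[i].value.ul) < 0)
                goto cleanup;
        }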
      
      Reproducer:
       1) Define an LXC domain
       2) Do 'virsh memtune <domain> --hard-limit 133T'
      
      Backtrace:
       Thread 6 (Thread 0x7fffec8c0700 (LWP 26826)):
       #0  0x00007ffff70edcc4 in virCgroupPathOfController (group=0x0, controller=3,
           key=0x7ffff75734bd "memory.limit_in_bytes", path=0x7fffec8bf718) at util/vircgroup.c:1764
       #1  0x00007ffff70e9206 in virCgroupSetValueStr (group=0x0, controller=3,
           key=0x7ffff75734bd "memory.limit_in_bytes", value=0x7fffe409f360 "1073741824")
           at util/vircgroup.c:669
       #2  0x00007ffff70e98b4 in virCgroupSetValueU64 (group=0x0, controller=3,
           key=0x7ffff75734bd "memory.limit_in_bytes", value=1073741824) at util/vircgroup.c:740
       #3  0x00007ffff70ee518 in virCgroupSetMemory (group=0x0, kb=1048576) at util/vircgroup.c:1904
       #4  0x00007ffff70ee675 in virCgroupSetMemoryHardLimit (group=0x0, kb=1048576)
           at util/vircgroup.c:1944
       #5  0x00005555557d54c8 in lxcDomainSetMemoryParameters (dom=0x7fffe40cc420,
           params=0x7fffe409f100, nparams=1, flags=0) at lxc/lxc_driver.c:774
       #6  0x00007ffff72c20f9 in virDomainSetMemoryParameters (domain=0x7fffe40cc420,
           params=0x7fffe409f100, nparams=1, flags=0) at libvirt.c:4051
       #7  0x000055555561365f in remoteDispatchDomainSetMemoryParameters (server=0x555555eb7e00,
           client=0x555555ec4b10, msg=0x555555eb94e0, rerr=0x7fffec8bfb70, args=0x7fffe40b8510)
           at remote_dispatch.h:7621
       #8  0x00005555556133fd in remoteDispatchDomainSetMemoryParametersHelper (server=0x555555eb7e00,
           client=0x555555ec4b10, msg=0x555555eb94e0, rerr=0x7fffec8bfb70, args=0x7fffe40b8510,
           ret=0x7fffe40b84f0) at remote_dispatch.h:7591
       #9  0x00007ffff73b293f in virNetServerProgramDispatchCall (prog=0x555555ec3ae0,
           server=0x555555eb7e00, client=0x555555ec4b10, msg=0x555555eb94e0)
           at rpc/virnetserverprogram.c:435
       #10 0x00007ffff73b207f in virNetServerProgramDispatch (prog=0x555555ec3ae0,
           server=0x555555eb7e00, client=0x555555ec4b10, msg=0x555555eb94e0)
           at rpc/virnetserverprogram.c:305
       #11 0x00007ffff73a4d2c in virNetServerProcessMsg (srv=0x555555eb7e00, client=0x555555ec4b10,
           prog=0x555555ec3ae0, msg=0x555555eb94e0) at rpc/virnetserver.c:165
       #12 0x00007ffff73a4e8d in virNetServerHandleJob (jobOpaque=0x555555ec3e30, opaque=0x555555eb7e00)
           at rpc/virnetserver.c:186
       #13 0x00007ffff7187f3f in virThreadPoolWorker (opaque=0x555555eb7ac0) at util/virthreadpool.c:144
       #14 0x00007ffff718733a in virThreadHelper (data=0x555555eb7890) at util/virthreadpthread.c:161
       #15 0x00007ffff468ed89 in start_thread (arg=0x7fffec8c0700) at pthread_create.c:308
       #16 0x00007ffff3da26bd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
      Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
      (cherry picked from commit 9faf3f29)
    • CVE-2013-6436: fix crash in lxcDomainGetMemoryParameters · 66247dc5
      Authored by Martin Kletzander
      The function doesn't check whether the request is made for an active or
      an inactive domain.  Thus, when the domain is not running, it still
      tries to access non-existent cgroups (priv->cgroup, which is NULL).

      I reworked the function so that it works the same way its QEMU
      counterpart does.
      
      Reproducer:
       1) Define an LXC domain
       2) Do 'virsh memtune <domain>'
      
      Backtrace:
       Thread 6 (Thread 0x7fffec8c0700 (LWP 13387)):
       #0  0x00007ffff70edcc4 in virCgroupPathOfController (group=0x0, controller=3,
           key=0x7ffff75734bd "memory.limit_in_bytes", path=0x7fffec8bf750) at util/vircgroup.c:1764
       #1  0x00007ffff70e958c in virCgroupGetValueStr (group=0x0, controller=3,
           key=0x7ffff75734bd "memory.limit_in_bytes", value=0x7fffec8bf7c0) at util/vircgroup.c:705
       #2  0x00007ffff70e9d29 in virCgroupGetValueU64 (group=0x0, controller=3,
           key=0x7ffff75734bd "memory.limit_in_bytes", value=0x7fffec8bf810) at util/vircgroup.c:804
       #3  0x00007ffff70ee706 in virCgroupGetMemoryHardLimit (group=0x0, kb=0x7fffec8bf8a8)
           at util/vircgroup.c:1962
       #4  0x00005555557d590f in lxcDomainGetMemoryParameters (dom=0x7fffd40024a0,
           params=0x7fffd40027a0, nparams=0x7fffec8bfa24, flags=0) at lxc/lxc_driver.c:826
       #5  0x00007ffff72c28d3 in virDomainGetMemoryParameters (domain=0x7fffd40024a0,
           params=0x7fffd40027a0, nparams=0x7fffec8bfa24, flags=0) at libvirt.c:4137
       #6  0x000055555563714d in remoteDispatchDomainGetMemoryParameters (server=0x555555eb7e00,
           client=0x555555ebaef0, msg=0x555555ebb3e0, rerr=0x7fffec8bfb70, args=0x7fffd40024e0,
           ret=0x7fffd4002420) at remote.c:1895
       #7  0x00005555556052c4 in remoteDispatchDomainGetMemoryParametersHelper (server=0x555555eb7e00,
           client=0x555555ebaef0, msg=0x555555ebb3e0, rerr=0x7fffec8bfb70, args=0x7fffd40024e0,
           ret=0x7fffd4002420) at remote_dispatch.h:4050
       #8  0x00007ffff73b293f in virNetServerProgramDispatchCall (prog=0x555555ec3ae0,
           server=0x555555eb7e00, client=0x555555ebaef0, msg=0x555555ebb3e0)
           at rpc/virnetserverprogram.c:435
       #9  0x00007ffff73b207f in virNetServerProgramDispatch (prog=0x555555ec3ae0,
           server=0x555555eb7e00, client=0x555555ebaef0, msg=0x555555ebb3e0)
           at rpc/virnetserverprogram.c:305
       #10 0x00007ffff73a4d2c in virNetServerProcessMsg (srv=0x555555eb7e00, client=0x555555ebaef0,
           prog=0x555555ec3ae0, msg=0x555555ebb3e0) at rpc/virnetserver.c:165
       #11 0x00007ffff73a4e8d in virNetServerHandleJob (jobOpaque=0x555555ebc7e0, opaque=0x555555eb7e00)
           at rpc/virnetserver.c:186
       #12 0x00007ffff7187f3f in virThreadPoolWorker (opaque=0x555555eb7ac0) at util/virthreadpool.c:144
       #13 0x00007ffff718733a in virThreadHelper (data=0x555555eb7890) at util/virthreadpthread.c:161
       #14 0x00007ffff468ed89 in start_thread (arg=0x7fffec8c0700) at pthread_create.c:308
       #15 0x00007ffff3da26bd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
      Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
      (cherry picked from commit f8c1cb90)
  5. 15 Dec 2013, 4 commits
    • Prep for release 1.1.3.2 · 69770f6a
      Authored by Cole Robinson
    • Tie SASL callbacks lifecycle to virNetSessionSASLContext · 38600eb4
      Authored by Christophe Fergeau
      The array of sasl_callback_t callbacks which is passed to sasl_client_new()
      must be kept alive as long as the created sasl_conn_t object is alive, as
      cyrus-sasl uses this structure internally for things like logging, so
      the memory used for the callbacks must only be freed after sasl_dispose()
      has been called.

      During testing of successful SASL logins with
      virsh -c qemu+tls:///system list --all
      I've been getting invalid read reports from valgrind:
      
      ==9237== Invalid read of size 8
      ==9237==    at 0x6E93B6F: _sasl_getcallback (common.c:1745)
      ==9237==    by 0x6E95430: _sasl_log (common.c:1850)
      ==9237==    by 0x16593D87: digestmd5_client_mech_dispose (digestmd5.c:4580)
      ==9237==    by 0x6E91653: client_dispose (client.c:332)
      ==9237==    by 0x6E9476A: sasl_dispose (common.c:851)
      ==9237==    by 0x4E225A1: virNetSASLSessionDispose (virnetsaslcontext.c:678)
      ==9237==    by 0x4CBC551: virObjectUnref (virobject.c:262)
      ==9237==    by 0x4E254D1: virNetSocketDispose (virnetsocket.c:1042)
      ==9237==    by 0x4CBC551: virObjectUnref (virobject.c:262)
      ==9237==    by 0x4E2701C: virNetSocketEventFree (virnetsocket.c:1794)
      ==9237==    by 0x4C965D3: virEventPollCleanupHandles (vireventpoll.c:583)
      ==9237==    by 0x4C96987: virEventPollRunOnce (vireventpoll.c:652)
      ==9237==    by 0x4C94730: virEventRunDefaultImpl (virevent.c:274)
      ==9237==    by 0x12C7BA: vshEventLoop (virsh.c:2407)
      ==9237==    by 0x4CD3D04: virThreadHelper (virthreadpthread.c:161)
      ==9237==    by 0x7DAEF32: start_thread (pthread_create.c:309)
      ==9237==    by 0x8C86EAC: clone (clone.S:111)
      ==9237==  Address 0xe2d61b0 is 0 bytes inside a block of size 168 free'd
      ==9237==    at 0x4A07577: free (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
      ==9237==    by 0x4C73827: virFree (viralloc.c:580)
      ==9237==    by 0x4DE4BC7: remoteAuthSASL (remote_driver.c:4219)
      ==9237==    by 0x4DE33D0: remoteAuthenticate (remote_driver.c:3639)
      ==9237==    by 0x4DDBFAA: doRemoteOpen (remote_driver.c:832)
      ==9237==    by 0x4DDC8DC: remoteConnectOpen (remote_driver.c:1031)
      ==9237==    by 0x4D8595F: do_open (libvirt.c:1239)
      ==9237==    by 0x4D863F3: virConnectOpenAuth (libvirt.c:1481)
      ==9237==    by 0x12762B: vshReconnect (virsh.c:337)
      ==9237==    by 0x12C9B0: vshInit (virsh.c:2470)
      ==9237==    by 0x12E9A5: main (virsh.c:3338)
      
      This commit changes virNetSASLSessionNewClient() to take ownership of the SASL
      callbacks. Then we can free them in virNetSASLSessionDispose() after the corresponding
      sasl_conn_t has been freed.
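
      The ownership transfer can be pictured roughly like this; struct
      layout and names are simplified from virnetsaslcontext.c:

       struct _virNetSASLSession {
           virObjectLockable parent;
           sasl_conn_t *conn;
           sasl_callback_t *callbacks;   /* owned since this change */
       };

       void virNetSASLSessionDispose(void *obj)
       {
           virNetSASLSessionPtr sasl = obj;

           if (sasl->conn)
               sasl_dispose(&sasl->conn);
           /* Free the callbacks only after sasl_dispose(), since
            * cyrus-sasl may still use them (e.g. for logging). */
           VIR_FREE(sasl->callbacks);
       }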
      
      (cherry picked from commit 13fdc6d6)
    • spec: Don't save/restore running VMs on libvirt-client update · ddbd9138
      Authored by Jiri Denemark
      The previous attempt (commit d65e0e14) removed just one of two
      libvirt-guests restarts that happened on libvirt-client update. Let's
      remove the last one too :-)
      
      https://bugzilla.redhat.com/show_bug.cgi?id=962225
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
      (cherry picked from commit 604f79b3)
    • Return right error code for baselineCPU · 085e2fe0
      Authored by Don Dugger
      This Python interface code is returning -1 on errors for the
      `baselineCPU' API.  Since this API is supposed to return a pointer,
      the error return value should really be VIR_PY_NONE.

      NB: I've checked all the other APIs in this file and this is the
      only pointer-returning API that returns -1.
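
      In the binding code the fix amounts to something like this sketch
      (variable names illustrative):

       char *cpu = virConnectBaselineCPU(conn, xmlCPUs, ncpus, flags);

       if (cpu == NULL)
           return VIR_PY_NONE;   /* was -1: wrong for a pointer-returning API */
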
      Signed-off-by: Don Dugger <donald.d.dugger@intel.com>
      
      (crobinso: Upstream in libvirt-python.git)
  6. 10 Dec 2013, 6 commits
  7. 03 Dec 2013, 1 commit
  8. 22 Nov 2013, 1 commit
  9. 20 Nov 2013, 3 commits
  10. 18 Nov 2013, 3 commits
  11. 13 Nov 2013, 3 commits
    • Disable nwfilter driver when running unprivileged · 22a1dd95
      Authored by Ján Tomko
      When opening a new connection to the driver, nwfilterOpen
      only succeeds if the driverState has been allocated.
      
      Move the privilege check in driver initialization ahead of the state
      allocation, so that an unprivileged daemon leaves the driver disabled.
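
      Schematically (a sketch; driverState is the driver's global state
      pointer):

       static int nwfilterStateInitialize(bool privileged,
                                          virStateInhibitCallback callback,
                                          void *opaque)
       {
           if (!privileged)
               return 0;   /* driverState stays NULL: nwfilterOpen will refuse */

           if (VIR_ALLOC(driverState) < 0)
               return -1;
           /* ... normal initialization ... */
           return 0;
       }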
      
      This changes the nwfilter-define error from:
      error: cannot create config directory (null): Bad address
      To:
      this function is not supported by the connection driver:
      virNWFilterDefineXML
      
      https://bugzilla.redhat.com/show_bug.cgi?id=1029266
      (cherry picked from commit b7829f95)
    • qemu: don't use deprecated -no-kvm-pit-reinjection · e20a2c77
      Authored by Ján Tomko
      Since qemu-kvm 1.1 [1] (since 1.3 in upstream QEMU [2]),
      '-no-kvm-pit-reinjection' has been deprecated.
      Use '-global kvm-pit.lost_tick_policy=discard' instead.
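
      Concretely, for a domain using <timer name='pit' tickpolicy='discard'/>,
      the generated command line changes like this:

        # before (deprecated since qemu-kvm 1.1 / QEMU 1.3)
        qemu-kvm ... -no-kvm-pit-reinjection

        # after
        qemu-kvm ... -global kvm-pit.lost_tick_policy=discard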
      
      https://bugzilla.redhat.com/show_bug.cgi?id=978719
      
      [1] http://git.kernel.org/cgit/virt/kvm/qemu-kvm.git/commit/?id=4e4fa39
      [2] http://git.qemu.org/?p=qemu.git;a=commitdiff;h=c21fb4f
      
      (cherry picked from commit 1569fa14)
      
      Conflicts:
      	tests/qemucapabilitiesdata/caps_1.2.2-1.caps
      	tests/qemucapabilitiesdata/caps_1.2.2-1.replies
      	tests/qemucapabilitiesdata/caps_1.3.1-1.caps
      	tests/qemucapabilitiesdata/caps_1.3.1-1.replies
      	tests/qemucapabilitiesdata/caps_1.4.2-1.caps
      	tests/qemucapabilitiesdata/caps_1.4.2-1.replies
      	tests/qemucapabilitiesdata/caps_1.5.3-1.caps
      	tests/qemucapabilitiesdata/caps_1.5.3-1.replies
      	tests/qemucapabilitiesdata/caps_1.6.0-1.caps
      	tests/qemucapabilitiesdata/caps_1.6.0-1.replies
      	tests/qemucapabilitiesdata/caps_1.6.50-1.caps
      	tests/qemucapabilitiesdata/caps_1.6.50-1.replies
      (qemucapabilitiestest is not backported)
    • qemu: Don't access vm->priv on unlocked domain · cc16220d
      Authored by Michal Privoznik
      Since 86d90b3a (yes, my patch, again) we support NBD storage
      migration. However, on the error recovery path we got the steps
      reversed. The correct order is: return the NBD port to the
      virPortAllocator and then either unlock the vm or remove it from the
      driver, not vice versa.
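
      The corrected error path, sketched (field names abridged from
      qemu_migration.c):

        /* 'priv' lives inside 'vm': give the NBD port back while the
         * domain object is guaranteed to still be alive and locked. */
        virPortAllocatorRelease(driver->remotePorts, priv->nbdPort);
        priv->nbdPort = 0;

        /* Only afterwards unlock the domain or drop it entirely. */
        qemuDomainRemoveInactive(driver, vm);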
      
      ==11192== Invalid write of size 4
      ==11192==    at 0x11488559: qemuMigrationPrepareAny (qemu_migration.c:2459)
      ==11192==    by 0x11488EA6: qemuMigrationPrepareDirect (qemu_migration.c:2652)
      ==11192==    by 0x114D1509: qemuDomainMigratePrepare3Params (qemu_driver.c:10332)
      ==11192==    by 0x519075D: virDomainMigratePrepare3Params (libvirt.c:7290)
      ==11192==    by 0x1502DA: remoteDispatchDomainMigratePrepare3Params (remote.c:4798)
      ==11192==    by 0x12DECA: remoteDispatchDomainMigratePrepare3ParamsHelper (remote_dispatch.h:5741)
      ==11192==    by 0x5212127: virNetServerProgramDispatchCall (virnetserverprogram.c:435)
      ==11192==    by 0x5211C86: virNetServerProgramDispatch (virnetserverprogram.c:305)
      ==11192==    by 0x520A8FD: virNetServerProcessMsg (virnetserver.c:165)
      ==11192==    by 0x520A9E1: virNetServerHandleJob (virnetserver.c:186)
      ==11192==    by 0x50DA78F: virThreadPoolWorker (virthreadpool.c:144)
      ==11192==    by 0x50DA11C: virThreadHelper (virthreadpthread.c:161)
      ==11192==  Address 0x1368baa0 is 576 bytes inside a block of size 688 free'd
      ==11192==    at 0x4A07F5C: free (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
      ==11192==    by 0x5079A2F: virFree (viralloc.c:580)
      ==11192==    by 0x11456C34: qemuDomainObjPrivateFree (qemu_domain.c:267)
      ==11192==    by 0x50F41B4: virDomainObjDispose (domain_conf.c:2034)
      ==11192==    by 0x50C2991: virObjectUnref (virobject.c:262)
      ==11192==    by 0x50F4CFC: virDomainObjListRemove (domain_conf.c:2361)
      ==11192==    by 0x1145C125: qemuDomainRemoveInactive (qemu_domain.c:2087)
      ==11192==    by 0x11488520: qemuMigrationPrepareAny (qemu_migration.c:2456)
      ==11192==    by 0x11488EA6: qemuMigrationPrepareDirect (qemu_migration.c:2652)
      ==11192==    by 0x114D1509: qemuDomainMigratePrepare3Params (qemu_driver.c:10332)
      ==11192==    by 0x519075D: virDomainMigratePrepare3Params (libvirt.c:7290)
      ==11192==    by 0x1502DA: remoteDispatchDomainMigratePrepare3Params (remote.c:4798)
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      (cherry picked from commit 1f2f879e)
  12. 12 Nov 2013, 2 commits
    • virpci: Don't error on unbound devices · 79d347c9
      Authored by Michal Privoznik
      https://bugzilla.redhat.com/show_bug.cgi?id=1018897
      
      If a PCI device is not bound to any driver (e.g. there is not yet a
      PCI driver for it in the Linux kernel), but users still want to pass
      the device through, we fail the whole operation because we fail to
      resolve the 'driver' link under the PCI device sysfs tree. Obviously,
      this is not a fatal error and it shouldn't be an error at all.
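
      A sketch of the tolerant lookup ('drvlink' is an illustrative
      variable holding the sysfs 'driver' link path):

        /* in virPCIDeviceGetDriverPathAndName(), roughly: */
        if (!virFileExists(drvlink)) {
            /* Not bound to any driver: report "no driver", not an error. */
            *path = NULL;
            *name = NULL;
            return 0;
        }
        if (virFileResolveLink(drvlink, path) < 0)
            return -1;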
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      (cherry picked from commit df4283a5)
    • virSecurityLabelDefParseXML: Don't parse label on model='none' · 13cfcad6
      Authored by Michal Privoznik
      https://bugzilla.redhat.com/show_bug.cgi?id=1027096
      
      If there's the following snippet in the domain XML, the domain will be
      lost upon daemon restart (if the domain was started prior to the restart):
      
          <seclabel type='dynamic' relabel='yes'/>
      
      The problem is that 'label', 'imagelabel' and 'baselabel' are parsed
      whenever VIR_DOMAIN_XML_INACTIVE is *not* present or the label is
      static. The latter is not our case, obviously. So, when libvirtd starts
      up, it finds the domain state XML and parses it. During parsing, many
      XML flags are enabled, but not VIR_DOMAIN_XML_INACTIVE. Hence, our
      parser tries to extract 'label', 'imagelabel' and 'baselabel' from the
      XML, which fails for model='none'. This model, even though not
      specified in the XML, can be taken from the QEMU-wide config file
      /etc/libvirt/qemu.conf.

      However, in order to know that we are dealing with model='none', the
      code in question must be moved forward a bit, and then a new check must
      be introduced. This is what the first two chunks are doing.

      But this alone is not sufficient: the domain state XML won't contain
      the model attribute without a slight modification. The model should be
      inserted into the XML even if equal to 'none' when the state XML is
      being generated; after all, what if the origin (the @security_driver
      variable in qemu.conf) changes between libvirtd restarts?
      
      At the end, a test to catch this scenario is introduced.
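
      In XML terms, the intent is roughly the following (illustrative
      output, not a verbatim test case):

        <!-- inactive XML may leave the model implicit: -->
        <seclabel type='dynamic' relabel='yes'/>

        <!-- the state XML now records the resolved model, even 'none',
             so a restarted daemon can parse it unambiguously: -->
        <seclabel type='none' model='none'/>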
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      (cherry picked from commit 9fb3f957)
  13. 07 Nov 2013, 5 commits
    • Prep for release 1.1.3.1 · 9c9588b6
      Authored by Cole Robinson
    • Push RPM deps down into libvirt-daemon-driver-XXXX sub-RPMs · e25e2b2f
      Authored by Daniel P. Berrange
      For inexplicable reasons, many of the 3rd party package deps
      were left against the 'libvirt-daemon' RPM when the drivers
      were split out. This makes a minimal install heavier than
      it should be. Push them all down into libvirt-daemon-driver-XXX
      so they're only pulled in when truly needed.
      
      With this change applied, a minimal install of just the
      libvirt-daemon-driver-lxc RPM is reduced by 41 MB on a
      Fedora 19 host.
      Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
      (cherry picked from commit 23142ac9)
    • Fix race condition reconnecting to VMs & loading configs · b044210e
      Authored by Daniel P. Berrange
      The following sequence
      
       1. Define a persistent QEMU guest
       2. Start the QEMU guest
       3. Stop libvirtd
       4. Kill the QEMU process
       5. Start libvirtd
       6. List persistent guests
      
      At the last step, the previously running persistent guest
      will be missing. This is because of a race condition in the
      QEMU driver startup code, which does:
      
       1. Load all VM state files
       2. Spawn thread to reconnect to each VM
       3. Load all VM config files
      
      Only at the end of step 3 does the 'virDomainObjPtr' get
      marked as "persistent". There is therefore a window in which
      the thread reconnecting to the VM will remove the persistent
      VM from the list.
      The easy fix is to simply switch the order of steps 2 & 3.
      
      In addition, we must only attempt to reconnect
      to a VM which had a non-zero PID loaded from its state file.
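
      Condensed onto the real entry points (a sketch, argument lists elided
      into comments):

        /* qemuStateInitialize(), fixed ordering:                     */
        /* 1. load state files of running domains                     */
        /* 2. load persistent config files - marks domains persistent */
        /* 3. only now spawn the reconnect threads:                   */
        qemuProcessReconnectAll(conn, driver);

        /* ...and the reconnect path skips never-started guests: */
        if (vm->pid <= 0)
            return;   /* no live QEMU process recorded in the state file */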
      Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
      (cherry picked from commit f26701f5)
    • Fix leak of objects when reconnecting to QEMU instances · 5ddb57e0
      Authored by Daniel P. Berrange
      The 'error' cleanup block in qemuProcessReconnect() had a
      'return' statement in the middle of it. This caused a leak
      of virConnectPtr & virQEMUDriverConfigPtr instances. This
      was identified because netcf recently started checking its
      refcount at libvirtd shutdown:
      
      netcfStateCleanup:109 : internal error: Attempt to close netcf state driver with open connections
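
      The fixed shape of the block, sketched:

        error:
            /* recovery work ...; crucially, fall through instead of
             * returning, so both references are always released: */
            virObjectUnref(conn);
            virObjectUnref(cfg);
            return;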
      Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
      (cherry picked from commit 54a24112)
    • Don't update dom->persistent without lock held · 9311f8c6
      Authored by Daniel P. Berrange
      virDomainObjListLoadAllConfigs sets dom->persistent after
      having released its lock on the domain object. This exposes
      a possible race condition.
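
      That is, the flag must be flipped while the object lock is still
      held (sketch):

        virObjectLock(dom);
        dom->persistent = 1;
        virObjectUnlock(dom);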
      Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
      (cherry picked from commit b260a77e)
  14. 30 Oct 2013, 3 commits