1. 19 February 2014, 21 commits
  2. 06 February 2014, 3 commits
    • Push nwfilter update locking up to top level · 419ea630
      Committed by Daniel P. Berrange
      The NWFilter code has a deadlock race condition between
      the virNWFilter{Define,Undefine} APIs and the startup of guest
      VMs due to mismatched lock ordering.
      
      In the virNWFilter{Define,Undefine} codepaths the lock ordering
      is
      
        1. nwfilter driver lock
        2. virt driver lock
        3. nwfilter update lock
        4. domain object lock
      
      In the VM guest startup paths the lock ordering is
      
        1. virt driver lock
        2. domain object lock
        3. nwfilter update lock
      
      As can be seen the domain object and nwfilter update locks are
      not acquired in a consistent order.
      
      The fix used is to push the nwfilter update lock up to the top
      level, resulting in a lock ordering for virNWFilter{Define,Undefine}
      of
      
        1. nwfilter driver lock
        2. nwfilter update lock
        3. virt driver lock
        4. domain object lock
      
      and VM start using
      
        1. nwfilter update lock
        2. virt driver lock
        3. domain object lock
      
      This has the effect of serializing VM startup once again, even if
      no nwfilters are applied to the guest. There is also the possibility
      of deadlock due to a call graph loop via virNWFilterInstantiate
      and virNWFilterInstantiateFilterLate.
      
      These two problems mean the lock must at the same time be converted
      from a plain mutex into a read/write lock. The lock is used to
      serialize changes to the "driver->nwfilters" hash, so the write lock
      only needs to be held by the define/undefine methods; all other
      methods can rely on a read lock, which allows good concurrency. A
      rough sketch of the resulting locking follows this entry.
      Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
      (cherry picked from commit 6e5c79a1)
      
      Conflicts:
      	src/conf/nwfilter_conf.c
                - virReportOOMError() in context of one hunk.
      	src/lxc/lxc_driver.c
                - functions renamed, and lxc object locking changed, creating
                  a conflict in the context.
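
      A minimal sketch of the resulting scheme, assuming a global update
      lock guarding the driver->nwfilters hash. Only the virRWLock calls
      come from the companion commit below; the driver lock helpers and
      function names are illustrative:

        #include "virthread.h"      /* virRWLock API (companion commit) */

        /* Guards changes to driver->nwfilters; set up once with
         * virRWLockInit() when the nwfilter driver starts. */
        static virRWLock updateLock;

        /* Define/undefine paths modify the hash, so they take the update
         * lock in write mode, after the nwfilter driver lock but before
         * any virt driver or domain object lock. */
        static int
        nwfilterDefineExample(void)
        {
            nwfilterDriverLock();            /* 1. nwfilter driver lock */
            virRWLockWrite(&updateLock);     /* 2. update lock (write)  */
            /* 3. virt driver lock, 4. domain object lock,
             * then update driver->nwfilters ... */
            virRWLockUnlock(&updateLock);
            nwfilterDriverUnlock();
            return 0;
        }

        /* VM startup only reads the hash, so a read lock suffices and
         * guests keep starting concurrently. */
        static int
        vmStartExample(void)
        {
            virRWLockRead(&updateLock);      /* 1. update lock (read)   */
            /* 2. virt driver lock, 3. domain object lock,
             * instantiate any filters ... */
            virRWLockUnlock(&updateLock);
            return 0;
        }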
    • Add a read/write lock implementation · 75965943
      Committed by Daniel P. Berrange
      Add virRWLock, backed by a POSIX rwlock primitive; a rough sketch of
      such a wrapper follows this entry.
      Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
      (cherry picked from commit c065984b)
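
      The wrapper itself is thin; its shape is roughly the following
      (struct layout and naming abbreviated, so treat this as a sketch of
      a POSIX-backed read/write lock rather than the committed code):

        #include <errno.h>
        #include <pthread.h>

        typedef struct virRWLock virRWLock;
        struct virRWLock {
            pthread_rwlock_t lock;
        };

        int virRWLockInit(virRWLock *m)
        {
            int ret = pthread_rwlock_init(&m->lock, NULL);
            if (ret != 0) {
                errno = ret;   /* pthread calls return the error code */
                return -1;
            }
            return 0;
        }

        void virRWLockDestroy(virRWLock *m) { pthread_rwlock_destroy(&m->lock); }
        void virRWLockRead(virRWLock *m)    { pthread_rwlock_rdlock(&m->lock); }
        void virRWLockWrite(virRWLock *m)   { pthread_rwlock_wrlock(&m->lock); }
        void virRWLockUnlock(virRWLock *m)  { pthread_rwlock_unlock(&m->lock); }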
    • Don't ignore errors parsing nwfilter rules · 11a31c08
      Committed by Daniel P. Berrange
      For inexplicable reasons, the nwfilter XML parser intentionally
      ignores errors that arise during parsing. As well as meaning that
      users get no feedback on their XML mistakes, this leads it to
      silently drop data in OOM conditions. A minimal sketch of
      propagating such errors follows this entry.
      Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
      (cherry picked from commit 4f209434)
      
      Conflicts:
      	tests/nwfilterxml2xmltest.c
                - args to virNWFilterDefParseString are different, causing
                  small conflict in context.
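
      A minimal sketch of the behavioural change; the loop and helper
      names here are hypothetical, the point is only that a failed
      per-rule parse is now reported to the caller instead of being
      silently skipped:

        /* Hypothetical parsing loop: before the fix a NULL result was
         * ignored and the rule quietly dropped; now it aborts the parse. */
        static int
        parseRules(xmlNodePtr *nodes, size_t nnodes, virNWFilterDefPtr def)
        {
            size_t i;

            for (i = 0; i < nnodes; i++) {
                virNWFilterRuleDefPtr rule = virNWFilterRuleParseNode(nodes[i]);
                if (!rule)
                    return -1;  /* parse error (or OOM) reaches the caller */
                if (nwfilterDefAddRule(def, rule) < 0)  /* hypothetical helper */
                    return -1;
            }
            return 0;
        }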
  3. 17 January 2014, 2 commits
  4. 16 January 2014, 8 commits
    • Really don't crash if a connection closes early · 99f8d97a
      Committed by Jiri Denemark
      https://bugzilla.redhat.com/show_bug.cgi?id=1047577
      
      When writing commit 173c2914, I missed the fact that
      virNetServerClientClose unlocks the client object before actually
      clearing client->sock, and thus it is possible to hit a window in
      which client->keepalive is NULL while client->sock is not NULL. I
      was thinking client->sock == NULL was a better check for a closed
      connection, but apparently we have to go with client->keepalive ==
      NULL to actually fix the crash.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
      (cherry picked from commit 066c8ef6)
    • Don't crash if a connection closes early · c4d275c9
      Committed by Jiri Denemark
      https://bugzilla.redhat.com/show_bug.cgi?id=1047577
      
      When a client closes its connection to libvirtd early during
      virConnectOpen, more specifically just after making the
      REMOTE_PROC_CONNECT_SUPPORTS_FEATURE call to check whether
      VIR_DRV_FEATURE_PROGRAM_KEEPALIVE is supported, without even waiting
      for the result, libvirtd may crash due to a race in keep-alive
      initialization. On receiving the REMOTE_PROC_CONNECT_SUPPORTS_FEATURE
      call, the daemon's event loop delegates it to a worker thread. If the
      event loop detects EOF on the connection and calls
      virNetServerClientClose before the worker thread starts handling the
      REMOTE_PROC_CONNECT_SUPPORTS_FEATURE call, client->keepalive will
      already have been disposed of by the time
      virNetServerClientStartKeepAlive gets called from
      remoteDispatchConnectSupportsFeature. Because the flow is common to
      both authenticated and read-only connections, even unprivileged
      clients can cause the daemon to crash.
      
      To avoid the crash, virNetServerClientStartKeepAlive needs to check
      whether the connection is still open before starting the keep-alive
      protocol; a sketch of that guard follows this entry.
      
      Every libvirt release since 0.9.8 is affected by this bug.
      
      (cherry picked from commit 173c2914)
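
      A sketch of that guard, assuming virNetServerClient is a lockable
      object whose keepalive field is cleared on close; apart from
      virKeepAliveStart, the surrounding details are illustrative and
      error handling is trimmed:

        int virNetServerClientStartKeepAlive(virNetServerClientPtr client)
        {
            int ret = -1;

            virObjectLock(client);

            /* The event loop may already have closed the connection and
             * disposed of client->keepalive before the worker thread
             * dispatching SUPPORTS_FEATURE gets here. */
            if (!client->keepalive) {
                virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
                               _("connection is not open"));
                goto cleanup;
            }

            ret = virKeepAliveStart(client->keepalive, 0, 0);

         cleanup:
            virObjectUnlock(client);
            return ret;
        }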
    • qemu: Fix job usage in virDomainGetBlockIoTune · dee5fc75
      Committed by Jiri Denemark
      CVE-2013-6458
      
      Every API that is going to begin a job should do that before fetching
      data from vm->def. A sketch of this pattern follows this entry.
      
      (cherry picked from commit 3b564259)
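
      A sketch of the pattern these job fixes converge on; the lookup
      helper is illustrative and error paths are trimmed, while
      qemuDomainObjBeginJob/qemuDomainObjEndJob are the qemu driver's
      usual job primitives of that era:

        static int
        qemuDomainQueryExample(virQEMUDriverPtr driver,
                               virDomainObjPtr vm,
                               const char *path)
        {
            int ret = -1;
            virDomainDiskDefPtr disk;

            /* Acquire the job before touching vm->def: the definition can
             * change while we sleep waiting for another thread's job. */
            if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0)
                return -1;

            /* Only now is it safe to resolve 'path' against vm->def. */
            if (!(disk = lookupDiskByPath(vm->def, path)))  /* illustrative */
                goto endjob;

            /* ... perform the query using 'disk' ... */
            ret = 0;

         endjob:
            if (!qemuDomainObjEndJob(driver, vm))
                vm = NULL;
            return ret;
        }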
    • qemu: Fix job usage in qemuDomainBlockCopy · 0135324b
      Committed by Jiri Denemark
      Every API that is going to begin a job should do that before fetching
      data from vm->def.
      
      (cherry picked from commit ff5f30b6)
      
      Conflicts:
      	src/qemu/qemu_driver.c - context
    • qemu: Fix job usage in qemuDomainBlockJobImpl · 6cd87982
      Committed by Jiri Denemark
      CVE-2013-6458
      
      Every API that is going to begin a job should do that before fetching
      data from vm->def.
      
      (cherry picked from commit f93d2caa)
    • qemu: Avoid using stale data in virDomainGetBlockInfo · 92331918
      Committed by Jiri Denemark
      CVE-2013-6458
      
      Generally, every API that is going to begin a job should do that
      before fetching data from vm->def. However, qemuDomainGetBlockInfo
      cannot know whether it will have to start a job until it has checked
      vm->def. To avoid using a disk alias that might have been freed
      while we were waiting for a job, we use a copy of it. In case the
      disk was removed in the meantime, we will fail with a "cannot find
      statistics for device '...'" error message. A trimmed sketch of the
      copy-before-job approach follows this entry.
      
      (cherry picked from commit b7992595)
      
      Conflicts:
      	src/qemu/qemu_driver.c - VIR_STRDUP not backported
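
      A trimmed sketch of the copy-before-job approach described above;
      plain strdup() stands in for VIR_STRDUP (not backported on this
      branch) and the monitor interaction is elided:

        static int
        qemuDomainGetBlockInfoExample(virQEMUDriverPtr driver,
                                      virDomainObjPtr vm,
                                      virDomainDiskDefPtr disk)
        {
            int ret = -1;
            char *alias = NULL;

            /* Copy the alias while 'disk' is still known to be alive; once
             * we wait for the job, the disk may be detached and freed. */
            if (!(alias = strdup(disk->info.alias)))
                goto cleanup;

            if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0)
                goto cleanup;

            /* Use only the copied 'alias' from here on; if the disk was
             * removed in the meantime the monitor query fails with
             * "cannot find statistics for device '...'" instead of
             * touching freed memory. */
            /* ... enter the monitor and query using 'alias' ... */
            ret = 0;

            if (!qemuDomainObjEndJob(driver, vm))
                vm = NULL;

         cleanup:
            free(alias);
            return ret;
        }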
    • qemu: Do not access stale data in virDomainBlockStats · c67b0de0
      Committed by Jiri Denemark
      CVE-2013-6458
      https://bugzilla.redhat.com/show_bug.cgi?id=1043069
      
      When virDomainDetachDeviceFlags is called concurrently with
      virDomainBlockStats, libvirtd may crash because qemuDomainBlockStats
      finds a disk in vm->def before getting a job on the domain and uses
      the disk pointer after getting the job. However, the domain is
      unlocked while waiting on the job condition, so the data behind the
      disk pointer may disappear. This happens when thread 1 runs
      virDomainDetachDeviceFlags and enters the monitor to actually remove
      the disk. Then another thread starts running virDomainBlockStats,
      finds the disk in vm->def, and while it is waiting on the job
      condition (owned by the first thread), the first thread finishes the
      disk removal. When the second thread gets the job, the memory
      pointed to by the disk pointer is already gone.
      
      That said, every API that is going to begin a job should do that before
      fetching data from vm->def.
      
      (cherry picked from commit db86da5c)
      
      Conflicts:
      	src/qemu/qemu_driver.c - context: no ACLs
    • tests: be more explicit on qcow2 versions in virstoragetest · 979d1bac
      Committed by Eric Blake
      While working on v1.0.5-maint (the branch in use on Fedora 19)
      with the host at Fedora 20, I got a failure in virstoragetest.
      I traced it to the fact that we were using qemu-img to create a
      qcow2 file, but qemu-img changed from creating v2 files by
      default in F19 to creating v3 files in F20.  Rather than leaving
      it up to qemu-img, it is better to write the test to force
      testing of BOTH file formats (better code coverage and all).
      
      This patch alone does not fix all the failures in v1.0.5-maint;
      for that, we must decide to either teach the older branch to
      understand v3 files, or to reject them outright as unsupported.
      But for upstream, making the test less dependent on changing
      qemu-img defaults is always a good thing. A sketch of pinning the
      qcow2 version explicitly follows this entry.
      
      * tests/virstoragetest.c (testPrepImages): Simplify creation of
      raw file; check if qemu supports compat and if so use it.
      Signed-off-by: Eric Blake <eblake@redhat.com>
      (cherry picked from commit 974e5914)
      
      Conflicts:
      	tests/virstoragetest.c - hardcode test to v2, since this branch doesn't handle v3 correctly
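
      A sketch of pinning the qcow2 version explicitly; this is not the
      test's actual code, the helper name is made up and the probing of
      "qemu-img create -o ?" for compat support is omitted:

        #include "vircommand.h"

        /* Create a qcow2 image whose version is fixed by the caller
         * rather than left to qemu-img's distro-dependent default
         * (compat=0.10 is the v2 format, compat=1.1 the v3 format). */
        static int
        createQcow2(const char *path, bool v3)
        {
            int ret;
            virCommandPtr cmd = virCommandNewArgList("qemu-img", "create",
                                                     "-f", "qcow2",
                                                     "-o", v3 ? "compat=1.1"
                                                              : "compat=0.10",
                                                     path, "1M", NULL);

            ret = virCommandRun(cmd, NULL);
            virCommandFree(cmd);
            return ret;
        }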
  5. 09 January 2014, 5 commits
  6. 20 December 2013, 1 commit
    • Fix crash in lxcDomainSetMemoryParameters · 2e35d287
      Committed by Martin Kletzander
      The function doesn't check whether the request is made for an active
      or an inactive domain.  Thus, when the domain is not running, it
      still tries to access non-existent cgroups (priv->cgroup, which is
      NULL).
      
      I reworked the function so that it behaves the same way its qemu
      counterpart does; a sketch of the missing guard follows this entry.
      
      Reproducer:
       1) Define an LXC domain
       2) Do 'virsh memtune <domain> --hard-limit 133T'
      
      Backtrace:
       Thread 6 (Thread 0x7fffec8c0700 (LWP 26826)):
       #0  0x00007ffff70edcc4 in virCgroupPathOfController (group=0x0, controller=3,
           key=0x7ffff75734bd "memory.limit_in_bytes", path=0x7fffec8bf718) at util/vircgroup.c:1764
       #1  0x00007ffff70e9206 in virCgroupSetValueStr (group=0x0, controller=3,
           key=0x7ffff75734bd "memory.limit_in_bytes", value=0x7fffe409f360 "1073741824")
           at util/vircgroup.c:669
       #2  0x00007ffff70e98b4 in virCgroupSetValueU64 (group=0x0, controller=3,
           key=0x7ffff75734bd "memory.limit_in_bytes", value=1073741824) at util/vircgroup.c:740
       #3  0x00007ffff70ee518 in virCgroupSetMemory (group=0x0, kb=1048576) at util/vircgroup.c:1904
       #4  0x00007ffff70ee675 in virCgroupSetMemoryHardLimit (group=0x0, kb=1048576)
           at util/vircgroup.c:1944
       #5  0x00005555557d54c8 in lxcDomainSetMemoryParameters (dom=0x7fffe40cc420,
           params=0x7fffe409f100, nparams=1, flags=0) at lxc/lxc_driver.c:774
       #6  0x00007ffff72c20f9 in virDomainSetMemoryParameters (domain=0x7fffe40cc420,
           params=0x7fffe409f100, nparams=1, flags=0) at libvirt.c:4051
       #7  0x000055555561365f in remoteDispatchDomainSetMemoryParameters (server=0x555555eb7e00,
           client=0x555555ec4b10, msg=0x555555eb94e0, rerr=0x7fffec8bfb70, args=0x7fffe40b8510)
           at remote_dispatch.h:7621
       #8  0x00005555556133fd in remoteDispatchDomainSetMemoryParametersHelper (server=0x555555eb7e00,
           client=0x555555ec4b10, msg=0x555555eb94e0, rerr=0x7fffec8bfb70, args=0x7fffe40b8510,
           ret=0x7fffe40b84f0) at remote_dispatch.h:7591
       #9  0x00007ffff73b293f in virNetServerProgramDispatchCall (prog=0x555555ec3ae0,
           server=0x555555eb7e00, client=0x555555ec4b10, msg=0x555555eb94e0)
           at rpc/virnetserverprogram.c:435
       #10 0x00007ffff73b207f in virNetServerProgramDispatch (prog=0x555555ec3ae0,
           server=0x555555eb7e00, client=0x555555ec4b10, msg=0x555555eb94e0)
           at rpc/virnetserverprogram.c:305
       #11 0x00007ffff73a4d2c in virNetServerProcessMsg (srv=0x555555eb7e00, client=0x555555ec4b10,
           prog=0x555555ec3ae0, msg=0x555555eb94e0) at rpc/virnetserver.c:165
       #12 0x00007ffff73a4e8d in virNetServerHandleJob (jobOpaque=0x555555ec3e30, opaque=0x555555eb7e00)
           at rpc/virnetserver.c:186
       #13 0x00007ffff7187f3f in virThreadPoolWorker (opaque=0x555555eb7ac0) at util/virthreadpool.c:144
       #14 0x00007ffff718733a in virThreadHelper (data=0x555555eb7890) at util/virthreadpthread.c:161
       #15 0x00007ffff468ed89 in start_thread (arg=0x7fffec8c0700) at pthread_create.c:308
       #16 0x00007ffff3da26bd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
      Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
      (cherry picked from commit 9faf3f29)
      
      Conflicts:
      	src/lxc/lxc_driver.c
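
      The full fix reworks the function to mirror the qemu driver's
      handling of live versus persistent configuration; the essential
      missing guard, sketched here with the surrounding code trimmed, is
      to reject live tuning of a domain that is not running before
      priv->cgroup is ever dereferenced:

        /* priv->cgroup only exists while the container is running; bail
         * out before any virCgroupSet*() call sees a NULL group. */
        if (!virDomainObjIsActive(vm)) {
            virReportError(VIR_ERR_OPERATION_INVALID, "%s",
                           _("domain is not running"));
            goto cleanup;
        }

        if (virCgroupSetMemoryHardLimit(priv->cgroup, params[i].value.ul) < 0)
            goto cleanup;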