1. 18 Feb 2014, 19 commits
  2. 06 Feb 2014, 6 commits
    • Push nwfilter update locking up to top level · 2331e5c8
      Committed by Daniel P. Berrange
      The NWFilter code has a deadlock race condition between
      the virNWFilter{Define,Undefine} APIs and starting of guest
      VMs due to mis-matched lock ordering.
      
      In the virNWFilter{Define,Undefine} codepaths the lock ordering
      is
      
        1. nwfilter driver lock
        2. virt driver lock
        3. nwfilter update lock
        4. domain object lock
      
      In the VM guest startup paths the lock ordering is
      
        1. virt driver lock
        2. domain object lock
        3. nwfilter update lock
      
      As can be seen the domain object and nwfilter update locks are
      not acquired in a consistent order.
      
      The fix used is to push the nwfilter update lock up to the top
      level resulting in a lock ordering for virNWFilter{Define,Undefine}
      of
      
        1. nwfilter driver lock
        2. nwfilter update lock
        3. virt driver lock
        4. domain object lock
      
      and VM start using
      
        1. nwfilter update lock
        2. virt driver lock
        3. domain object lock
      
      This has the effect of serializing VM startup once again, even if
      no nwfilters are applied to the guest. There is also the possibility
      of deadlock due to a call graph loop via virNWFilterInstantiate
      and virNWFilterInstantiateFilterLate.
      
      These two problems mean the lock must be turned into a read/write
      lock instead of a plain mutex at the same time. The lock is used to
      serialize changes to the "driver->nwfilters" hash, so the write lock
      only needs to be held by the define/undefine methods. All other
      methods can rely on a read lock which allows good concurrency.
      Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
      (cherry picked from commit 6e5c79a1)
      
      Conflicts:
      	src/conf/nwfilter_conf.c
                - virReportOOMError() in context of one hunk.
      	src/lxc/lxc_driver.c
                - functions renamed, and lxc object locking changed, creating
                  a conflict in the context.
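      A minimal sketch of the locking pattern described above (hypothetical
      names, not libvirt's actual code): define/undefine take the write side
      of the update lock, while VM startup and other lookups only take the
      read side, so guest starts are no longer serialized against each other.

        #include <pthread.h>

        /* Stand-in for the nwfilter update lock protecting driver->nwfilters. */
        static pthread_rwlock_t updateLock = PTHREAD_RWLOCK_INITIALIZER;

        /* Writers: define/undefine mutate the filter hash. */
        void filter_define(void)
        {
            pthread_rwlock_wrlock(&updateLock);   /* exclusive access */
            /* ... add or remove an entry in the filter hash ... */
            pthread_rwlock_unlock(&updateLock);
        }

        /* Readers: VM startup only needs to look filters up. */
        void vm_start(void)
        {
            pthread_rwlock_rdlock(&updateLock);   /* shared access, concurrent */
            /* ... instantiate filters for the guest's interfaces ... */
            pthread_rwlock_unlock(&updateLock);
        }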
    • Add a read/write lock implementation · 8e48acae
      Committed by Daniel P. Berrange
      Add virRWLock backed up by a POSIX rwlock primitive
      Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
      (cherry picked from commit c065984b)
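      A sketch of what such a thin wrapper over the POSIX primitive might
      look like; the names mirror the description above, but the real
      virRWLock API may differ in detail.

        #include <pthread.h>

        typedef struct { pthread_rwlock_t lock; } virRWLock;

        int  virRWLockInit(virRWLock *m)    { return pthread_rwlock_init(&m->lock, NULL); }
        void virRWLockRead(virRWLock *m)    { pthread_rwlock_rdlock(&m->lock); }
        void virRWLockWrite(virRWLock *m)   { pthread_rwlock_wrlock(&m->lock); }
        void virRWLockUnlock(virRWLock *m)  { pthread_rwlock_unlock(&m->lock); }
        void virRWLockDestroy(virRWLock *m) { pthread_rwlock_destroy(&m->lock); }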
    • Remove use of virConnectPtr from all remaining nwfilter code · 64a9166b
      Committed by Daniel P. Berrange
      The virConnectPtr is passed around loads of nwfilter code in
      order to provide it as a parameter to the callback registered
      by the virt drivers. None of the virt drivers use this param
      though, so it serves no purpose.
      
      Avoiding the need to pass a virConnectPtr means that the
      nwfilterStateReload method no longer needs to open a bogus
      QEMU driver connection. This addresses a race condition that
      can lead to a crash on startup.
      
      The nwfilter driver starts before the QEMU driver and registers
      some callbacks with DBus to detect firewalld reload. If the
      firewalld reload happens while the QEMU driver is still starting
      up though, the nwfilterStateReload method will open a connection
      to the partially initialized QEMU driver and cause a crash.
      Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
      (cherry picked from commit 999d72fb)
    • Don't pass virConnectPtr in nwfilter 'struct domUpdateCBStruct' · 9d30a748
      Committed by Daniel P. Berrange
      The nwfilter driver only needs a reference to its private
      state object, not a full virConnectPtr. Update the domUpdateCBStruct
      struct to have a 'void *opaque' field instead of a virConnectPtr.
      Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
      (cherry picked from commit ebca369e)
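      A simplified illustration of the change in shape (hypothetical fields,
      not the real struct definition): the callback bookkeeping carries an
      opaque pointer to the driver's own state instead of a connection.

        /* Before: struct domUpdateCBStruct { virConnectPtr conn; ... };
         * After: only an opaque pointer to private driver state is kept. */
        struct domUpdateCBStruct {
            void *opaque;              /* nwfilter driver state, not a connection */
            /* ... other bookkeeping fields elided ... */
        };

        static void domUpdateCBInit(struct domUpdateCBStruct *cb, void *driverState)
        {
            cb->opaque = driverState;  /* no connection object is required */
        }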
    • Remove virConnectPtr arg from virNWFilterDefParse* · 3c7a39a2
      Committed by Daniel P. Berrange
      None of the virNWFilterDefParse* methods require a virConnectPtr
      arg, so just drop it
      Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
      (cherry picked from commit b77b16ce)
    • Don't ignore errors parsing nwfilter rules · d3b7a109
      Committed by Daniel P. Berrange
      For inexplicable reasons, the nwfilter XML parser is intentionally
      ignoring errors that arise during parsing. As well as meaning that
      users don't get any feedback on their XML mistakes, this will lead
      it to silently drop data in OOM conditions.
      Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
      (cherry picked from commit 4f209434)
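      A generic sketch of the difference in error handling (not the actual
      nwfilter parser): parse failures and OOM are propagated to the caller
      instead of being silently ignored.

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        /* Returns 0 on success, -1 on error; the old behaviour amounted to
         * returning 0 regardless and continuing with partial data. */
        static int parse_rule(const char *protocol, char **out)
        {
            if (protocol == NULL) {
                fprintf(stderr, "missing protocol attribute\n");
                return -1;              /* user now gets feedback on XML mistakes */
            }
            if ((*out = strdup(protocol)) == NULL)
                return -1;              /* OOM no longer drops data silently */
            return 0;
        }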
  3. 16 Jan 2014, 9 commits
    • event: filter global events by domain:getattr ACL [CVE-2014-0028] · cdf29d95
      Committed by Eric Blake
      Ever since ACL filtering was added in commit 76397360 (v1.1.1), a
      user could still use event registration to obtain access to a
      domain that they could not normally access via virDomainLookup*
      or virConnectListAllDomains and friends.  We already have the
      framework in the RPC generator for creating the filter, and
      previous cleanup patches got us to the point that we can now
      wire the filter through the entire object event stack.
      
      Furthermore, whether or not domain:getattr is honored, use of
      global events is a form of obtaining a list of domains, which
      is covered by connect:search_domains added in a93cd08f (v1.1.0).
      Ideally, we'd have a way to enforce connect:search_domains when
      doing global registrations while omitting that check on a
      per-domain registration.  But this patch just unconditionally
      requires connect:search_domains, even when no list could be
      obtained, based on the following observations:
      1. Administrators are unlikely to grant domain:getattr for one
      or all domains while still denying connect:search_domains - a
      user that is able to manage domains will want to be able to
      manage them efficiently, but efficient management includes being
      able to list the domains they can access.  The idea of denying
      connect:search_domains while still granting access to individual
      domains is therefore not adding any real security, but just
      serves as a layer of obscurity to annoy the end user.
      2. In the current implementation, domain events are filtered
      on the client; the server has no idea if a domain filter was
      requested, and must therefore assume that all domain event
      requests are global.  Even if we fix the RPC protocol to
      allow for server-side filtering for newer client/server combos,
      making the connect:search_domains ACL check conditional on
      whether the domain argument was NULL won't benefit older clients.
      Therefore, we choose to document that connect:search_domains
      is a pre-requisite to any domain event management.
      
      Network events need the same treatment, with the obvious
      change of using connect:search_networks and network:getattr.
      
      * src/access/viraccessperm.h
      (VIR_ACCESS_PERM_CONNECT_SEARCH_DOMAINS)
      (VIR_ACCESS_PERM_CONNECT_SEARCH_NETWORKS): Document additional
      effect of the permission.
      * src/conf/domain_event.h (virDomainEventStateRegister)
      (virDomainEventStateRegisterID): Add new parameter.
      * src/conf/network_event.h (virNetworkEventStateRegisterID):
      Likewise.
      * src/conf/object_event_private.h (virObjectEventStateRegisterID):
      Likewise.
      * src/conf/object_event.c (_virObjectEventCallback): Track a filter.
      (virObjectEventDispatchMatchCallback): Use filter.
      (virObjectEventCallbackListAddID): Register filter.
      * src/conf/domain_event.c (virDomainEventFilter): New function.
      (virDomainEventStateRegister, virDomainEventStateRegisterID):
      Adjust callers.
      * src/conf/network_event.c (virNetworkEventFilter): New function.
      (virNetworkEventStateRegisterID): Adjust caller.
      * src/remote/remote_protocol.x
      (REMOTE_PROC_CONNECT_DOMAIN_EVENT_REGISTER)
      (REMOTE_PROC_CONNECT_DOMAIN_EVENT_REGISTER_ANY)
      (REMOTE_PROC_CONNECT_NETWORK_EVENT_REGISTER_ANY): Generate a
      filter, and require connect:search_domains instead of weaker
      connect:read.
      * src/test/test_driver.c (testConnectDomainEventRegister)
      (testConnectDomainEventRegisterAny)
      (testConnectNetworkEventRegisterAny): Update callers.
      * src/remote/remote_driver.c (remoteConnectDomainEventRegister)
      (remoteConnectDomainEventRegisterAny): Likewise.
      * src/xen/xen_driver.c (xenUnifiedConnectDomainEventRegister)
      (xenUnifiedConnectDomainEventRegisterAny): Likewise.
      * src/vbox/vbox_tmpl.c (vboxDomainGetXMLDesc): Likewise.
      * src/libxl/libxl_driver.c (libxlConnectDomainEventRegister)
      (libxlConnectDomainEventRegisterAny): Likewise.
      * src/qemu/qemu_driver.c (qemuConnectDomainEventRegister)
      (qemuConnectDomainEventRegisterAny): Likewise.
      * src/uml/uml_driver.c (umlConnectDomainEventRegister)
      (umlConnectDomainEventRegisterAny): Likewise.
      * src/network/bridge_driver.c
      (networkConnectNetworkEventRegisterAny): Likewise.
      * src/lxc/lxc_driver.c (lxcConnectDomainEventRegister)
      (lxcConnectDomainEventRegisterAny): Likewise.
      Signed-off-by: Eric Blake <eblake@redhat.com>
      (cherry picked from commit f9f56340)
      
      Conflicts:
      	1.1.0 had a framework for generating filter methods, but
      nothing actually used them.  Therefore, the only leak in this
      branch was the failure to honor connect:search_domains, and that
      is fixed by backporting just the patch to remote_protocol.x to
      properly annotate ACL categories, and to viraccessperms.h to
      document the scope of the ACL.
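      A rough sketch of the idea behind the fix (hypothetical types and
      helpers, not libvirt's generated code): each registered callback can
      carry a filter that is consulted before an event is dispatched, so a
      client only sees events for objects its ACL allows it to access.

        #include <stdbool.h>
        #include <stddef.h>

        typedef struct { const char *domname; } event_t;

        /* Returns true if this client may see events for the object. */
        typedef bool (*event_filter)(const event_t *ev, void *opaque);

        struct callback {
            event_filter filter;       /* NULL means no extra filtering */
            void *filter_opaque;
            void (*dispatch)(const event_t *ev, void *opaque);
            void *opaque;
        };

        static void dispatch_event(struct callback *cb, const event_t *ev)
        {
            if (cb->filter && !cb->filter(ev, cb->filter_opaque))
                return;                /* ACL denied: drop the event */
            cb->dispatch(ev, cb->opaque);
        }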
    • Fix memory leak in virObjectEventCallbackListRemoveID() · 3e9f3f23
      Committed by Eric Blake
      While running objecteventtest, it was found that valgrind pointed out the
      following memory leak:
      
      ==13464== 5 bytes in 1 blocks are definitely lost in loss record 7 of 134
      ==13464==    at 0x4A0887C: malloc (vg_replace_malloc.c:270)
      ==13464==    by 0x341F485E21: strdup (strdup.c:42)
      ==13464==    by 0x4CAE28F: virStrdup (virstring.c:554)
      ==13464==    by 0x4CF3CBE: virObjectEventCallbackListAddID (object_event.c:286)
      ==13464==    by 0x4CF49CA: virObjectEventStateRegisterID (object_event.c:729)
      ==13464==    by 0x4CF73FE: virDomainEventStateRegisterID (domain_event.c:1424)
      ==13464==    by 0x4D7358F: testConnectDomainEventRegisterAny (test_driver.c:6032)
      ==13464==    by 0x4D600C8: virConnectDomainEventRegisterAny (libvirt.c:19128)
      ==13464==    by 0x402409: testDomainStartStopEvent (objecteventtest.c:232)
      ==13464==    by 0x403451: virtTestRun (testutils.c:138)
      ==13464==    by 0x402012: mymain (objecteventtest.c:395)
      ==13464==    by 0x403AF2: virtTestMain (testutils.c:593)
      ==13464==
      
      (cherry picked from commit 34d52b34)
      
      Conflicts:
      	src/conf/object_event.c - 1.2.1 refactoring to object_event not
      backported, so change applied directly in older domain_event.c instead
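      The leaked 5 bytes are the strdup'd key stored when the callback was
      registered; a minimal sketch of this kind of fix (hypothetical field
      names) is to free that string when the entry is removed.

        #include <stdlib.h>

        struct callback_entry {
            char *key;         /* strdup'd at registration time */
            int id;
        };

        static void remove_entry(struct callback_entry *entry)
        {
            free(entry->key);  /* previously leaked on removal */
            free(entry);
        }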
    • virDomainEventCallbackListFree: Don't leak @list->callbacks · 928a1a51
      Committed by Michal Privoznik
      The @list->callbacks is an array that is inflated whenever a new event
      is added, e.g. via virDomainEventCallbackListAddID(). However, when we
      are freeing the array, we free the items within it but forgot to
      actually free it.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      (cherry picked from commit ea13a759)
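      A minimal illustration of the fix (generic code, not the libvirt
      source): after freeing each element, the array that held them has to
      be freed as well.

        #include <stdlib.h>

        typedef struct { int id; } callback_t;

        struct cb_list {
            size_t count;
            callback_t **callbacks;   /* inflated as callbacks are added */
        };

        static void cb_list_free(struct cb_list *list)
        {
            for (size_t i = 0; i < list->count; i++)
                free(list->callbacks[i]);   /* items were already freed */
            free(list->callbacks);          /* the array itself was leaked */
            free(list);
        }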
    • Really don't crash if a connection closes early · c86813d5
      Committed by Jiri Denemark
      https://bugzilla.redhat.com/show_bug.cgi?id=1047577
      
      When writing commit 173c2914, I missed the fact virNetServerClientClose
      unlocks the client object before actually clearing client->sock and thus
      it is possible to hit a window when client->keepalive is NULL while
      client->sock is not NULL. I was thinking client->sock == NULL was a
      better check for a closed connection but apparently we have to go with
      client->keepalive == NULL to actually fix the crash.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
      (cherry picked from commit 066c8ef6)
    • Don't crash if a connection closes early · 700b39d0
      Committed by Jiri Denemark
      https://bugzilla.redhat.com/show_bug.cgi?id=1047577
      
      When a client closes its connection to libvirtd early during
      virConnectOpen, more specifically just after making
      REMOTE_PROC_CONNECT_SUPPORTS_FEATURE call to check if
      VIR_DRV_FEATURE_PROGRAM_KEEPALIVE is supported without even waiting for
      the result, libvirtd may crash due to a race in keep-alive
      initialization. Once receiving the REMOTE_PROC_CONNECT_SUPPORTS_FEATURE
      call, the daemon's event loop delegates it to a worker thread. In case
      the event loop detects EOF on the connection and calls
      virNetServerClientClose before the worker thread starts to handle
      REMOTE_PROC_CONNECT_SUPPORTS_FEATURE call, client->keepalive will be
      disposed by the time virNetServerClientStartKeepAlive gets called from
      remoteDispatchConnectSupportsFeature. Because the flow is common for
      both authenticated and read-only connections, even unprivileged clients
      may cause the daemon to crash.
      
      To avoid the crash, virNetServerClientStartKeepAlive needs to check if
      the connection is still open before starting keep-alive protocol.
      
      Every libvirt release since 0.9.8 is affected by this bug.
      
      (cherry picked from commit 173c2914)
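      A sketch of the guard described in both keep-alive fixes above
      (hypothetical structure and simplified locking): before arming
      keep-alive, the worker re-checks under the client lock whether the
      connection has already been torn down.

        #include <pthread.h>
        #include <stddef.h>

        struct client {
            pthread_mutex_t lock;
            void *sock;        /* cleared once the event loop closes the client */
            void *keepalive;   /* disposed of by virNetServerClientClose */
        };

        static int client_start_keepalive(struct client *c)
        {
            int ret = -1;
            pthread_mutex_lock(&c->lock);
            if (c->keepalive == NULL)      /* connection already closed */
                goto cleanup;
            /* ... arm the keep-alive timer here ... */
            ret = 0;
        cleanup:
            pthread_mutex_unlock(&c->lock);
            return ret;
        }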
    • qemu: Fix job usage in virDomainGetBlockIoTune · 8cc2474f
      Committed by Jiri Denemark
      CVE-2013-6458
      
      Every API that is going to begin a job should do that before fetching
      data from vm->def.
      
      (cherry picked from commit 3b564259)
    • qemu: Fix job usage in qemuDomainBlockCopy · ebac034d
      Committed by Jiri Denemark
      Every API that is going to begin a job should do that before fetching
      data from vm->def.
      
      (cherry picked from commit ff5f30b6)
    • qemu: Fix job usage in qemuDomainBlockJobImpl · 1478ebf2
      Committed by Jiri Denemark
      CVE-2013-6458
      
      Every API that is going to begin a job should do that before fetching
      data from vm->def.
      
      (cherry picked from commit f93d2caa)
    • qemu: Avoid using stale data in virDomainGetBlockInfo · c1f8276a
      Committed by Jiri Denemark
      CVE-2013-6458
      
      Generally, every API that is going to begin a job should do that before
      fetching data from vm->def. However, qemuDomainGetBlockInfo does not
      know whether it will have to start a job or not before checking vm->def.
      To avoid using disk alias that might have been freed while we were
      waiting for a job, we use its copy. In case the disk was removed in the
      meantime, we will fail with "cannot find statistics for device '...'"
      error message.
      
      (cherry picked from commit b7992595)
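      Where the job cannot be acquired first (as in GetBlockInfo above), a
      sketch of the work-around, with hypothetical helpers: duplicate the
      alias before waiting, and only dereference the copy afterwards.

        #include <pthread.h>
        #include <stdlib.h>
        #include <string.h>

        static pthread_mutex_t job = PTHREAD_MUTEX_INITIALIZER;  /* stand-in */

        /* 'alias' points into vm->def and may be freed while we wait for the
         * job, so take a private copy first and use only the copy later. */
        static char *copy_alias_then_wait(const char *alias)
        {
            char *copy = alias ? strdup(alias) : NULL;
            if (copy == NULL)
                return NULL;
            pthread_mutex_lock(&job);      /* may block; vm->def can change */
            /* ... query the monitor using 'copy'; a removed disk now fails
             *     with "cannot find statistics for device '...'" ... */
            pthread_mutex_unlock(&job);
            return copy;                   /* caller frees */
        }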
  4. 15 Jan 2014, 1 commit
    • qemu: Do not access stale data in virDomainBlockStats · 5efb9963
      Committed by Jiri Denemark
      CVE-2013-6458
      https://bugzilla.redhat.com/show_bug.cgi?id=1043069
      
      When virDomainDetachDeviceFlags is called concurrently to
      virDomainBlockStats: libvirtd may crash because qemuDomainBlockStats
      finds a disk in vm->def before getting a job on a domain and uses the
      disk pointer after getting the job. However, the domain is unlocked
      while waiting on a job condition and thus data behind the disk pointer
      may disappear. This happens when thread 1 runs
      virDomainDetachDeviceFlags and enters monitor to actually remove the
      disk. Then another thread starts running virDomainBlockStats, finds the
      disk in vm->def, and while it's waiting on the job condition (owned by
      the first thread), the first thread finishes the disk removal. When the
      second thread gets the job, the memory pointed to by the disk pointer is
      already gone.
      
      That said, every API that is going to begin a job should do that before
      fetching data from vm->def.
      
      (cherry picked from commit db86da5c)
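      A condensed sketch of the two orderings (hypothetical names; the real
      code acquires a per-domain job, not a plain mutex): a pointer fetched
      from vm->def before the job is owned may be freed by a concurrent
      detach, so the job must be taken first.

        #include <pthread.h>
        #include <stddef.h>

        struct disk { char *alias; };

        struct vm {
            pthread_mutex_t job;    /* stand-in for the job condition */
            struct disk *disks;
            size_t ndisks;
        };

        /* Buggy order:  disk = &vm->disks[idx];  then wait for the job.
         * Fixed order:  own the job first, only then read vm->def. */
        static struct disk *block_stats_begin(struct vm *vm, size_t idx)
        {
            pthread_mutex_lock(&vm->job);      /* begin the job first */
            if (idx >= vm->ndisks) {
                pthread_mutex_unlock(&vm->job);
                return NULL;                   /* disk vanished: fail cleanly */
            }
            return &vm->disks[idx];            /* safe while the job is held */
        }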
  5. 20 Dec 2013, 2 commits
    • Fix crash in lxcDomainSetMemoryParameters · 6933d055
      Committed by Martin Kletzander
      The function doesn't check whether the request is made for active or
      inactive domain.  Thus when the domain is not running it still tries
      accessing non-existing cgroups (priv->cgroup, which is NULL).
      
      I re-made the function in order for it to work the same way its qemu
      counterpart does.
      
      Reproducer:
       1) Define an LXC domain
       2) Do 'virsh memtune <domain> --hard-limit 133T'
      
      Backtrace:
       Thread 6 (Thread 0x7fffec8c0700 (LWP 26826)):
       #0  0x00007ffff70edcc4 in virCgroupPathOfController (group=0x0, controller=3,
           key=0x7ffff75734bd "memory.limit_in_bytes", path=0x7fffec8bf718) at util/vircgroup.c:1764
       #1  0x00007ffff70e9206 in virCgroupSetValueStr (group=0x0, controller=3,
           key=0x7ffff75734bd "memory.limit_in_bytes", value=0x7fffe409f360 "1073741824")
           at util/vircgroup.c:669
       #2  0x00007ffff70e98b4 in virCgroupSetValueU64 (group=0x0, controller=3,
           key=0x7ffff75734bd "memory.limit_in_bytes", value=1073741824) at util/vircgroup.c:740
       #3  0x00007ffff70ee518 in virCgroupSetMemory (group=0x0, kb=1048576) at util/vircgroup.c:1904
       #4  0x00007ffff70ee675 in virCgroupSetMemoryHardLimit (group=0x0, kb=1048576)
           at util/vircgroup.c:1944
       #5  0x00005555557d54c8 in lxcDomainSetMemoryParameters (dom=0x7fffe40cc420,
           params=0x7fffe409f100, nparams=1, flags=0) at lxc/lxc_driver.c:774
       #6  0x00007ffff72c20f9 in virDomainSetMemoryParameters (domain=0x7fffe40cc420,
           params=0x7fffe409f100, nparams=1, flags=0) at libvirt.c:4051
       #7  0x000055555561365f in remoteDispatchDomainSetMemoryParameters (server=0x555555eb7e00,
           client=0x555555ec4b10, msg=0x555555eb94e0, rerr=0x7fffec8bfb70, args=0x7fffe40b8510)
           at remote_dispatch.h:7621
       #8  0x00005555556133fd in remoteDispatchDomainSetMemoryParametersHelper (server=0x555555eb7e00,
           client=0x555555ec4b10, msg=0x555555eb94e0, rerr=0x7fffec8bfb70, args=0x7fffe40b8510,
           ret=0x7fffe40b84f0) at remote_dispatch.h:7591
       #9  0x00007ffff73b293f in virNetServerProgramDispatchCall (prog=0x555555ec3ae0,
           server=0x555555eb7e00, client=0x555555ec4b10, msg=0x555555eb94e0)
           at rpc/virnetserverprogram.c:435
       #10 0x00007ffff73b207f in virNetServerProgramDispatch (prog=0x555555ec3ae0,
           server=0x555555eb7e00, client=0x555555ec4b10, msg=0x555555eb94e0)
           at rpc/virnetserverprogram.c:305
       #11 0x00007ffff73a4d2c in virNetServerProcessMsg (srv=0x555555eb7e00, client=0x555555ec4b10,
           prog=0x555555ec3ae0, msg=0x555555eb94e0) at rpc/virnetserver.c:165
       #12 0x00007ffff73a4e8d in virNetServerHandleJob (jobOpaque=0x555555ec3e30, opaque=0x555555eb7e00)
           at rpc/virnetserver.c:186
       #13 0x00007ffff7187f3f in virThreadPoolWorker (opaque=0x555555eb7ac0) at util/virthreadpool.c:144
       #14 0x00007ffff718733a in virThreadHelper (data=0x555555eb7890) at util/virthreadpthread.c:161
       #15 0x00007ffff468ed89 in start_thread (arg=0x7fffec8c0700) at pthread_create.c:308
       #16 0x00007ffff3da26bd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
      Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
      (cherry picked from commit 9faf3f29)
      
      Conflicts:
      	src/lxc/lxc_driver.c
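      A minimal sketch of the guard that avoids the NULL-cgroup dereference
      (hypothetical structures; the real fix also mirrors the qemu driver's
      handling of inactive-domain config updates).

        #include <stdbool.h>
        #include <stddef.h>
        #include <stdio.h>

        struct lxc_dom {
            bool active;
            void *cgroup;    /* NULL while the domain is not running */
        };

        static int set_memory_params(struct lxc_dom *dom,
                                     unsigned long long hard_limit)
        {
            if (!dom->active || dom->cgroup == NULL) {
                fprintf(stderr, "domain is not running\n");
                return -1;   /* previously fell through and crashed */
            }
            /* ... write hard_limit to memory.limit_in_bytes via the cgroup ... */
            (void)hard_limit;
            return 0;
        }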
    • CVE-2013-6436: fix crash in lxcDomainGetMemoryParameters · 30a589bc
      Committed by Martin Kletzander
      The function doesn't check whether the request is made for active or
      inactive domain.  Thus when the domain is not running it still tries
      accessing non-existing cgroups (priv->cgroup, which is NULL).
      
      I re-made the function in order for it to work the same way its qemu
      counterpart does.
      
      Reproducer:
       1) Define an LXC domain
       2) Do 'virsh memtune <domain>'
      
      Backtrace:
       Thread 6 (Thread 0x7fffec8c0700 (LWP 13387)):
       #0  0x00007ffff70edcc4 in virCgroupPathOfController (group=0x0, controller=3,
           key=0x7ffff75734bd "memory.limit_in_bytes", path=0x7fffec8bf750) at util/vircgroup.c:1764
       #1  0x00007ffff70e958c in virCgroupGetValueStr (group=0x0, controller=3,
           key=0x7ffff75734bd "memory.limit_in_bytes", value=0x7fffec8bf7c0) at util/vircgroup.c:705
       #2  0x00007ffff70e9d29 in virCgroupGetValueU64 (group=0x0, controller=3,
           key=0x7ffff75734bd "memory.limit_in_bytes", value=0x7fffec8bf810) at util/vircgroup.c:804
       #3  0x00007ffff70ee706 in virCgroupGetMemoryHardLimit (group=0x0, kb=0x7fffec8bf8a8)
           at util/vircgroup.c:1962
       #4  0x00005555557d590f in lxcDomainGetMemoryParameters (dom=0x7fffd40024a0,
           params=0x7fffd40027a0, nparams=0x7fffec8bfa24, flags=0) at lxc/lxc_driver.c:826
       #5  0x00007ffff72c28d3 in virDomainGetMemoryParameters (domain=0x7fffd40024a0,
           params=0x7fffd40027a0, nparams=0x7fffec8bfa24, flags=0) at libvirt.c:4137
       #6  0x000055555563714d in remoteDispatchDomainGetMemoryParameters (server=0x555555eb7e00,
           client=0x555555ebaef0, msg=0x555555ebb3e0, rerr=0x7fffec8bfb70, args=0x7fffd40024e0,
           ret=0x7fffd4002420) at remote.c:1895
       #7  0x00005555556052c4 in remoteDispatchDomainGetMemoryParametersHelper (server=0x555555eb7e00,
           client=0x555555ebaef0, msg=0x555555ebb3e0, rerr=0x7fffec8bfb70, args=0x7fffd40024e0,
           ret=0x7fffd4002420) at remote_dispatch.h:4050
       #8  0x00007ffff73b293f in virNetServerProgramDispatchCall (prog=0x555555ec3ae0,
           server=0x555555eb7e00, client=0x555555ebaef0, msg=0x555555ebb3e0)
           at rpc/virnetserverprogram.c:435
       #9  0x00007ffff73b207f in virNetServerProgramDispatch (prog=0x555555ec3ae0,
           server=0x555555eb7e00, client=0x555555ebaef0, msg=0x555555ebb3e0)
           at rpc/virnetserverprogram.c:305
       #10 0x00007ffff73a4d2c in virNetServerProcessMsg (srv=0x555555eb7e00, client=0x555555ebaef0,
           prog=0x555555ec3ae0, msg=0x555555ebb3e0) at rpc/virnetserver.c:165
       #11 0x00007ffff73a4e8d in virNetServerHandleJob (jobOpaque=0x555555ebc7e0, opaque=0x555555eb7e00)
           at rpc/virnetserver.c:186
       #12 0x00007ffff7187f3f in virThreadPoolWorker (opaque=0x555555eb7ac0) at util/virthreadpool.c:144
       #13 0x00007ffff718733a in virThreadHelper (data=0x555555eb7890) at util/virthreadpthread.c:161
       #14 0x00007ffff468ed89 in start_thread (arg=0x7fffec8c0700) at pthread_create.c:308
       #15 0x00007ffff3da26bd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
      Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
      (cherry picked from commit f8c1cb90)
      
      Conflicts:
      	src/lxc/lxc_driver.c
  6. 13 Nov 2013, 1 commit
    • Disable nwfilter driver when running unprivileged · d3334a53
      Committed by Ján Tomko
      When opening a new connection to the driver, nwfilterOpen
      only succeeds if the driverState has been allocated.
      
      Move the privilege check in driver initialization before
      the state allocation to disable the driver.
      
      This changes the nwfilter-define error from:
      error: cannot create config directory (null): Bad address
      To:
      this function is not supported by the connection driver:
      virNWFilterDefineXML
      
      https://bugzilla.redhat.com/show_bug.cgi?id=1029266
      (cherry picked from commit b7829f95)
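      A schematic of the reordering (hypothetical names): when the daemon is
      unprivileged, initialization now bails out before driverState is
      allocated, so nwfilterOpen reports the driver as unsupported instead
      of failing later with a confusing error.

        #include <stdbool.h>
        #include <stdlib.h>

        struct nwfilterState { int unused; };      /* placeholder driver state */
        static struct nwfilterState *driverState;  /* open() needs this non-NULL */

        static int nwfilter_state_init(bool privileged)
        {
            if (!privileged)
                return 0;      /* leave driverState NULL: driver stays disabled */

            if ((driverState = calloc(1, sizeof(*driverState))) == NULL)
                return -1;
            /* ... create config dir, register firewalld DBus reload hook ... */
            return 0;
        }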
  7. 21 Oct 2013, 1 commit
  8. 18 Oct 2013, 1 commit