  1. 27 June, 2014 · 1 commit
  2. 06 May, 2014 · 1 commit
  3. 05 May, 2014 · 1 commit
    • libxl: fix framebuffer port setting for HVM domains · 05afc985
      Committed by Jim Fehlig
      libxl uses the libxl_vnc_info and libxl_sdl_info fields from the
      hvm union in libxl_domain_build_info struct when generating QEMU
      args for VNC or SDL.  These fields were left unset by the libxl
      driver, causing libxl to ignore any user settings.  E.g. with
      
        <graphics type='vnc' port='5950'/>
      
      port would be ignored and QEMU would instead be invoked with
      
        -vnc 127.0.0.1:0,to=99
      
      Unlike the libxl_domain_config struct, the libxl_domain_build_info
      contains only a single libxl_vnc_info and libxl_sdl_info, so
      populate these fields from the first vfb in
      libxl_domain_config->vfbs.
      Signed-off-by: Jim Fehlig <jfehlig@suse.com>
      Signed-off-by: David Kiarie <davidkiarie4@gmail.com>
      (cherry picked from commit b55cc5f4)
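      A minimal sketch of the shape of the fix (illustrative only, not the
      patch from commit b55cc5f4; the helper name populateBuildInfoVfb is
      made up, while the struct and field names follow the libxl IDL of
      that era):

        #include <libxl.h>
        #include <string.h>

        /* Propagate the first vfb's VNC/SDL settings into the HVM build
         * info so libxl passes the requested settings to QEMU. For a fixed
         * port such as 5950, libxl expects display = port - 5900 = 50 and
         * findunused disabled. */
        static void
        populateBuildInfoVfb(libxl_domain_config *d_config)
        {
            libxl_domain_build_info *b_info = &d_config->b_info;
            libxl_device_vfb *vfb;

            if (d_config->num_vfbs == 0)
                return;                      /* no graphics configured */

            vfb = &d_config->vfbs[0];

            b_info->u.hvm.vnc.enable = vfb->vnc.enable;
            b_info->u.hvm.vnc.display = vfb->vnc.display;
            b_info->u.hvm.vnc.findunused = vfb->vnc.findunused;
            if (vfb->vnc.listen)
                b_info->u.hvm.vnc.listen = strdup(vfb->vnc.listen);
            if (vfb->vnc.passwd)
                b_info->u.hvm.vnc.passwd = strdup(vfb->vnc.passwd);

            b_info->u.hvm.sdl.enable = vfb->sdl.enable;
        }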
  4. 03 May, 2014 · 1 commit
  5. 01 May, 2014 · 1 commit
  6. 10 April, 2014 · 4 commits
  7. 20 March, 2014 · 1 commit
    • virNetClientSetTLSSession: Restore original signal mask · 6b48d5e6
      Committed by Michal Privoznik
      Currently, we use pthread_sigmask(SIG_BLOCK, ...) prior to calling
      poll(). This is okay, as we don't want poll() to be interrupted.
      However, immediately after we fall out of poll() we try to restore
      the original sigmask, again using SIG_BLOCK. But as the man
      page says, SIG_BLOCK adds signals to the signal mask:
      
      SIG_BLOCK
            The set of blocked signals is the union of the current set and the set argument.
      
      Therefore, when restoring the original mask, we need to completely
      overwrite the one we set earlier and hence we should be using:
      
      SIG_SETMASK
            The set of blocked signals is set to the argument set.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      (cherry picked from commit 3d4b4f5a)
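      The underlying pattern, stripped of the libvirt wrappers, is a minimal
      sketch like the following (plain POSIX; wait_for_fd is a made-up
      example function):

        #include <poll.h>
        #include <signal.h>

        /* Block all signals while poll() runs, then restore the caller's
         * original mask. */
        static int
        wait_for_fd(int fd, int timeout_ms)
        {
            sigset_t blocked, oldmask;
            struct pollfd pfd = { .fd = fd, .events = POLLIN };
            int ret;

            sigfillset(&blocked);
            pthread_sigmask(SIG_BLOCK, &blocked, &oldmask); /* save old mask */

            ret = poll(&pfd, 1, timeout_ms);

            /* SIG_SETMASK overwrites the mask with oldmask; using SIG_BLOCK
             * here would merely keep everything blocked, which was the bug. */
            pthread_sigmask(SIG_SETMASK, &oldmask, NULL);

            return ret;
        }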
  8. 10 March, 2014 · 1 commit
    • Add a mutex to serialize updates to firewall · 78c35ad5
      Committed by Daniel P. Berrange
      The nwfilter conf update mutex previously serialized
      updates to the internal data structures for firewall
      rules, and updates to the firewall itself. The latter
      was recently turned into a read/write lock, and filter
      instantiation allowed to proceed in parallel. It was
      believed that this was ok, since each filter is created
      on a separate iptables/ebtables chain.
      
      It turns out that there is a subtle lock ordering problem
      on virNWFilterObjPtr instances. __virNWFilterInstantiateFilter
      will hold a lock on the virNWFilterObjPtr it is instantiating.
      This in turn invokes virNWFilterInstantiate which then invokes
      virNWFilterDetermineMissingVarsRec which then invokes
      virNWFilterObjFindByName. This iterates over every single
      virNWFilterObjPtr in the list, locking them and checking their
      name. So if two or more threads try to instantiate a filter in
      parallel, each of them will hold one lock at the top level in the
      __virNWFilterInstantiateFilter method, which will cause the
      other threads to deadlock in virNWFilterObjFindByName.
      
      The fix is to add an exclusive mutex to serialize the
      execution of __virNWFilterInstantiateFilter.
      Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
      (cherry picked from commit 925de19e)
      
      Conflicts:
      	src/nwfilter/nwfilter_gentech_driver.c
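      The serialization amounts to a single driver-wide mutex held across
      the instantiate call; a rough sketch of the idea (the function names
      below are illustrative, not the literal patch):

        #include <pthread.h>

        /* With one process-wide lock, only a single filter instantiation
         * runs at a time, so no two threads can each hold a
         * virNWFilterObjPtr lock while iterating (and locking) the full
         * filter list. */
        static pthread_mutex_t updateMutex = PTHREAD_MUTEX_INITIALIZER;

        static int instantiateFilterLocked(void *filter, void *opaque);

        static int
        instantiateFilterSerialized(void *filter, void *opaque)
        {
            int ret;

            pthread_mutex_lock(&updateMutex);
            ret = instantiateFilterLocked(filter, opaque);
            pthread_mutex_unlock(&updateMutex);

            return ret;
        }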
  9. 19 February, 2014 · 22 commits
  10. 06 February, 2014 · 3 commits
    • Push nwfilter update locking up to top level · 419ea630
      Committed by Daniel P. Berrange
      The NWFilter code has a deadlock race condition between
      the virNWFilter{Define,Undefine} APIs and starting of guest
      VMs due to mismatched lock ordering.
      
      In the virNWFilter{Define,Undefine} codepaths the lock ordering
      is
      
        1. nwfilter driver lock
        2. virt driver lock
        3. nwfilter update lock
        4. domain object lock
      
      In the VM guest startup paths the lock ordering is
      
        1. virt driver lock
        2. domain object lock
        3. nwfilter update lock
      
      As can be seen the domain object and nwfilter update locks are
      not acquired in a consistent order.
      
      The fix used is to push the nwfilter update lock up to the top
      level, resulting in a lock ordering for virNWFilter{Define,Undefine}
      of
      
        1. nwfilter driver lock
        2. nwfilter update lock
        3. virt driver lock
        4. domain object lock
      
      and VM start using
      
        1. nwfilter update lock
        2. virt driver lock
        3. domain object lock
      
      This has the effect of serializing VM startup once again, even if
      no nwfilters are applied to the guest. There is also the possibility
      of deadlock due to a call graph loop via virNWFilterInstantiate
      and virNWFilterInstantiateFilterLate.
      
      These two problems mean the lock must be turned into a read/write
      lock instead of a plain mutex at the same time. The lock is used to
      serialize changes to the "driver->nwfilters" hash, so the write lock
      only needs to be held by the define/undefine methods. All other
      methods can rely on a read lock which allows good concurrency.
      Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
      (cherry picked from commit 6e5c79a1)
      
      Conflicts:
      	src/conf/nwfilter_conf.c
                - virReportOOMError() in context of one hunk.
      	src/lxc/lxc_driver.c
                - functions renamed, and lxc object locking changed, creating
                  a conflict in the context.
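      Using the virRWLock API added in the entry below, the read/write split
      described here looks roughly like this (a sketch; the static lock and
      the wrapper function names are illustrative, and initialization via
      virRWLockInit at driver startup is assumed):

        #include "virthread.h"   /* virRWLock, virRWLockRead/Write/Unlock */

        /* One lock protecting the driver->nwfilters hash: define/undefine
         * take it for writing, VM start and filter instantiation only for
         * reading, so instantiations can run concurrently. */
        static virRWLock updateLock;

        static void nwfilterWriteLockUpdates(void)
        {
            virRWLockWrite(&updateLock);
        }

        static void nwfilterReadLockUpdates(void)
        {
            virRWLockRead(&updateLock);
        }

        static void nwfilterUnlockUpdates(void)
        {
            virRWLockUnlock(&updateLock);
        }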
    • Add a read/write lock implementation · 75965943
      Committed by Daniel P. Berrange
      Add virRWLock, backed by a POSIX rwlock primitive.
      Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
      (cherry picked from commit c065984b)
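      A read/write lock backed by the POSIX primitive is a thin wrapper; a
      sketch of what such an implementation can look like (not necessarily
      the exact libvirt code):

        #include <pthread.h>

        typedef struct {
            pthread_rwlock_t lock;
        } virRWLock;

        static int virRWLockInit(virRWLock *m)
        {
            return pthread_rwlock_init(&m->lock, NULL) == 0 ? 0 : -1;
        }

        static void virRWLockRead(virRWLock *m)
        {
            pthread_rwlock_rdlock(&m->lock);
        }

        static void virRWLockWrite(virRWLock *m)
        {
            pthread_rwlock_wrlock(&m->lock);
        }

        static void virRWLockUnlock(virRWLock *m)
        {
            pthread_rwlock_unlock(&m->lock);
        }

        static void virRWLockDestroy(virRWLock *m)
        {
            pthread_rwlock_destroy(&m->lock);
        }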
    • Don't ignore errors parsing nwfilter rules · 11a31c08
      Committed by Daniel P. Berrange
      For inexplicable reasons, the nwfilter XML parser is intentionally
      ignoring errors that arise during parsing. As well as meaning that
      users don't get any feedback on their XML mistakes, this will lead
      it to silently drop data in OOM conditions.
      Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
      (cherry picked from commit 4f209434)
      
      Conflicts:
      	tests/nwfilterxml2xmltest.c
                - args to virNWFilterDefParseString are different, causing
                  small conflict in context.
  11. 17 January, 2014 · 2 commits
  12. 16 January, 2014 · 2 commits
    • Really don't crash if a connection closes early · 99f8d97a
      Committed by Jiri Denemark
      https://bugzilla.redhat.com/show_bug.cgi?id=1047577
      
      When writing commit 173c2914, I missed the fact that virNetServerClientClose
      unlocks the client object before actually clearing client->sock, and thus
      it is possible to hit a window in which client->keepalive is NULL while
      client->sock is not. I was thinking client->sock == NULL was a
      better check for a closed connection, but apparently we have to go with
      client->keepalive == NULL to actually fix the crash.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
      (cherry picked from commit 066c8ef6)
    • Don't crash if a connection closes early · c4d275c9
      Committed by Jiri Denemark
      https://bugzilla.redhat.com/show_bug.cgi?id=1047577
      
      When a client closes its connection to libvirtd early during
      virConnectOpen, more specifically just after making
      REMOTE_PROC_CONNECT_SUPPORTS_FEATURE call to check if
      VIR_DRV_FEATURE_PROGRAM_KEEPALIVE is supported without even waiting for
      the result, libvirtd may crash due to a race in keep-alive
      initialization. Upon receiving the REMOTE_PROC_CONNECT_SUPPORTS_FEATURE
      call, the daemon's event loop delegates it to a worker thread. If
      the event loop detects EOF on the connection and calls
      virNetServerClientClose before the worker thread starts handling the
      REMOTE_PROC_CONNECT_SUPPORTS_FEATURE call, client->keepalive will be
      disposed of by the time virNetServerClientStartKeepAlive gets called from
      remoteDispatchConnectSupportsFeature. Because the flow is common for
      both authenticated and read-only connections, even unprivileged clients
      may cause the daemon to crash.
      
      To avoid the crash, virNetServerClientStartKeepAlive needs to check that
      the connection is still open before starting the keep-alive protocol.
      
      Every libvirt release since 0.9.8 is affected by this bug.
      
      (cherry picked from commit 173c2914)
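      The guard boils down to checking the client's state under its lock
      before touching the keepalive object; a sketch using the names from
      the commit messages (assumed to live in virnetserverclient.c, where
      the client struct fields are visible; not a verbatim copy of the
      patch):

        /* Refuse to start keep-alive on a client whose connection has
         * already been closed by the event loop. Per the follow-up fix
         * above, client->keepalive (not client->sock) is the reliable
         * indicator of a closed connection. */
        int virNetServerClientStartKeepAlive(virNetServerClientPtr client)
        {
            int ret = -1;

            virObjectLock(client);

            if (!client->keepalive)          /* connection already closed */
                goto cleanup;

            ret = virKeepAliveStart(client->keepalive, 0, 0);

         cleanup:
            virObjectUnlock(client);
            return ret;
        }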