1. 25 Jul 2013, 1 commit
2. 24 Jul 2013, 4 commits
• virLXCMonitorClose: Unlock domain while closing monitor · 4e5f0dd2
  Committed by Michal Privoznik
There's a race in the LXC driver that can cause a deadlock: if a domain is
destroyed immediately after being started, the deadlock can occur. When a domain
is started, the event loop tries to connect to the monitor. If the
connection succeeds, virLXCProcessMonitorInitNotify() is called with
@mon->client locked. The first thing that callee does is
virObjectLock(vm). So the locking order is: 1) @mon->client, 2) @vm.
      
However, if there's another thread executing virDomainDestroy on the
very same domain, the first thing done there is locking @vm. Then
the corresponding libvirt_lxc process is killed and the monitor is closed
by calling virLXCMonitorClose(), which tries to lock @mon->client
too. So the locking order is the reverse of the first case. This situation results
in a deadlock and an unresponsive libvirtd (since the event loop is involved).
      
The proper solution is to unlock @vm in virLXCMonitorClose prior to
entering virNetClientClose(). The deadlocked backtrace follows; a self-contained
sketch of the locking idea appears after it.
      
      Thread 25 (Thread 0x7f1b7c9b8700 (LWP 16312)):
      0  0x00007f1b80539714 in __lll_lock_wait () from /lib64/libpthread.so.0
      1  0x00007f1b8053516c in _L_lock_516 () from /lib64/libpthread.so.0
      2  0x00007f1b80534fbb in pthread_mutex_lock () from /lib64/libpthread.so.0
      3  0x00007f1b82a637cf in virMutexLock (m=0x7f1b3c0038d0) at util/virthreadpthread.c:85
      4  0x00007f1b82a4ccf2 in virObjectLock (anyobj=0x7f1b3c0038c0) at util/virobject.c:320
      5  0x00007f1b82b861f6 in virNetClientCloseInternal (client=0x7f1b3c0038c0, reason=3) at rpc/virnetclient.c:696
      6  0x00007f1b82b862f5 in virNetClientClose (client=0x7f1b3c0038c0) at rpc/virnetclient.c:721
      7  0x00007f1b6ee12500 in virLXCMonitorClose (mon=0x7f1b3c007210) at lxc/lxc_monitor.c:216
      8  0x00007f1b6ee129f0 in virLXCProcessCleanup (driver=0x7f1b68100240, vm=0x7f1b680ceb70, reason=VIR_DOMAIN_SHUTOFF_DESTROYED) at lxc/lxc_process.c:174
      9  0x00007f1b6ee14106 in virLXCProcessStop (driver=0x7f1b68100240, vm=0x7f1b680ceb70, reason=VIR_DOMAIN_SHUTOFF_DESTROYED) at lxc/lxc_process.c:710
      10 0x00007f1b6ee1aa36 in lxcDomainDestroyFlags (dom=0x7f1b5c002560, flags=0) at lxc/lxc_driver.c:1291
      11 0x00007f1b6ee1ab1a in lxcDomainDestroy (dom=0x7f1b5c002560) at lxc/lxc_driver.c:1321
      12 0x00007f1b82b05be5 in virDomainDestroy (domain=0x7f1b5c002560) at libvirt.c:2303
      13 0x00007f1b835a7e85 in remoteDispatchDomainDestroy (server=0x7f1b857419d0, client=0x7f1b8574ae40, msg=0x7f1b8574acf0, rerr=0x7f1b7c9b7c30, args=0x7f1b5c004a50) at remote_dispatch.h:3143
      14 0x00007f1b835a7d78 in remoteDispatchDomainDestroyHelper (server=0x7f1b857419d0, client=0x7f1b8574ae40, msg=0x7f1b8574acf0, rerr=0x7f1b7c9b7c30, args=0x7f1b5c004a50, ret=0x7f1b5c0029e0) at remote_dispatch.h:3121
      15 0x00007f1b82b93704 in virNetServerProgramDispatchCall (prog=0x7f1b8573af90, server=0x7f1b857419d0, client=0x7f1b8574ae40, msg=0x7f1b8574acf0) at rpc/virnetserverprogram.c:435
      16 0x00007f1b82b93263 in virNetServerProgramDispatch (prog=0x7f1b8573af90, server=0x7f1b857419d0, client=0x7f1b8574ae40, msg=0x7f1b8574acf0) at rpc/virnetserverprogram.c:305
      17 0x00007f1b82b8c0f6 in virNetServerProcessMsg (srv=0x7f1b857419d0, client=0x7f1b8574ae40, prog=0x7f1b8573af90, msg=0x7f1b8574acf0) at rpc/virnetserver.c:163
      18 0x00007f1b82b8c1da in virNetServerHandleJob (jobOpaque=0x7f1b8574dca0, opaque=0x7f1b857419d0) at rpc/virnetserver.c:184
      19 0x00007f1b82a64158 in virThreadPoolWorker (opaque=0x7f1b8573cb10) at util/virthreadpool.c:144
      20 0x00007f1b82a63ae5 in virThreadHelper (data=0x7f1b8574b9f0) at util/virthreadpthread.c:161
      21 0x00007f1b80532f4a in start_thread () from /lib64/libpthread.so.0
      22 0x00007f1b7fc4f20d in clone () from /lib64/libc.so.6
      
      Thread 1 (Thread 0x7f1b83546740 (LWP 16297)):
      0  0x00007f1b80539714 in __lll_lock_wait () from /lib64/libpthread.so.0
      1  0x00007f1b8053516c in _L_lock_516 () from /lib64/libpthread.so.0
      2  0x00007f1b80534fbb in pthread_mutex_lock () from /lib64/libpthread.so.0
      3  0x00007f1b82a637cf in virMutexLock (m=0x7f1b680ceb80) at util/virthreadpthread.c:85
      4  0x00007f1b82a4ccf2 in virObjectLock (anyobj=0x7f1b680ceb70) at util/virobject.c:320
      5  0x00007f1b6ee13bd7 in virLXCProcessMonitorInitNotify (mon=0x7f1b3c007210, initpid=4832, vm=0x7f1b680ceb70) at lxc/lxc_process.c:601
      6  0x00007f1b6ee11fd3 in virLXCMonitorHandleEventInit (prog=0x7f1b3c001f10, client=0x7f1b3c0038c0, evdata=0x7f1b8574a7d0, opaque=0x7f1b3c007210) at lxc/lxc_monitor.c:109
      7  0x00007f1b82b8a196 in virNetClientProgramDispatch (prog=0x7f1b3c001f10, client=0x7f1b3c0038c0, msg=0x7f1b3c003928) at rpc/virnetclientprogram.c:259
      8  0x00007f1b82b87030 in virNetClientCallDispatchMessage (client=0x7f1b3c0038c0) at rpc/virnetclient.c:1019
      9  0x00007f1b82b876bb in virNetClientCallDispatch (client=0x7f1b3c0038c0) at rpc/virnetclient.c:1140
      10 0x00007f1b82b87d41 in virNetClientIOHandleInput (client=0x7f1b3c0038c0) at rpc/virnetclient.c:1312
      11 0x00007f1b82b88f51 in virNetClientIncomingEvent (sock=0x7f1b3c0044e0, events=1, opaque=0x7f1b3c0038c0) at rpc/virnetclient.c:1832
      12 0x00007f1b82b9e1c8 in virNetSocketEventHandle (watch=3321, fd=54, events=1, opaque=0x7f1b3c0044e0) at rpc/virnetsocket.c:1695
      13 0x00007f1b82a272cf in virEventPollDispatchHandles (nfds=21, fds=0x7f1b8574ded0) at util/vireventpoll.c:498
      14 0x00007f1b82a27af2 in virEventPollRunOnce () at util/vireventpoll.c:645
      15 0x00007f1b82a25a61 in virEventRunDefaultImpl () at util/virevent.c:273
      16 0x00007f1b82b8e97e in virNetServerRun (srv=0x7f1b857419d0) at rpc/virnetserver.c:1097
      17 0x00007f1b8359db6b in main (argc=2, argv=0x7ffff98dbaa8) at libvirtd.c:1512
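To make the ordering concrete, here is a self-contained pthread sketch of the inversion and the fix; the two mutexes stand in for the @mon->client and @vm object locks, and all names are illustrative, not the actual libvirt code.

/* Illustration (not libvirt code): the fixed close path drops @vm before
 * taking @client, so the lock order can never invert against the event loop. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t client = PTHREAD_MUTEX_INITIALIZER; /* @mon->client */
static pthread_mutex_t vm     = PTHREAD_MUTEX_INITIALIZER; /* @vm          */

/* Event-loop thread: dispatches a monitor event with @client held, then
 * needs @vm (order: client -> vm), like virLXCProcessMonitorInitNotify(). */
static void *event_loop(void *arg)
{
    pthread_mutex_lock(&client);
    pthread_mutex_lock(&vm);
    /* ... update the domain object ... */
    pthread_mutex_unlock(&vm);
    pthread_mutex_unlock(&client);
    return arg;
}

/* Destroy path: entered with @vm held; closing the monitor needs @client.
 * Dropping @vm around the close is what the patch adds to virLXCMonitorClose. */
static void monitor_close_fixed(void)
{
    pthread_mutex_unlock(&vm);      /* unlock @vm before virNetClientClose() */
    pthread_mutex_lock(&client);    /* close the monitor connection */
    pthread_mutex_unlock(&client);
    pthread_mutex_lock(&vm);        /* re-acquire @vm before returning */
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, event_loop, NULL);

    pthread_mutex_lock(&vm);        /* virDomainDestroy locks the domain first */
    monitor_close_fixed();          /* without the unlock/relock this could
                                     * deadlock against the event-loop thread */
    pthread_mutex_unlock(&vm);

    pthread_join(t, NULL);
    printf("no deadlock\n");
    return 0;
}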
• lxc: Resolve Coverity warning · 8134b37d
  Committed by John Ferlan
      Commit 'c8695053' resulted in the following:
      
      Coverity error seen in the output:
          ERROR: REVERSE_INULL
          FUNCTION: lxcProcessAutoDestroy
      
Coverity flags this because 'dom' is dereferenced before it is checked for
NULL; the NULL check on 'dom' needs to happen before 'dom->persistent' is
accessed.
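For reference, this is the generic shape of a REVERSE_INULL finding and its fix; the struct and function names below are illustrative, not the actual lxcProcessAutoDestroy code.

/* Generic shape of REVERSE_INULL: the pointer is dereferenced first and only
 * NULL-checked afterwards.  Names are illustrative. */
struct dom {
    int persistent;
};

/* Flagged: 'd' is dereferenced before the NULL check, so the check is
 * either dead code or the dereference can crash. */
static int broken(struct dom *d)
{
    int keep = d->persistent;   /* dereference ... */
    if (!d)                     /* ... then the check Coverity complains about */
        return -1;
    return keep;
}

/* Fixed: check 'd' before touching 'd->persistent'. */
static int fixed(struct dom *d)
{
    if (!d)
        return -1;
    return d->persistent;
}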
• Create + setup cgroups atomically for LXC process · da704c87
  Committed by Daniel P. Berrange
      Currently the LXC driver creates the VM's cgroup prior to
      forking, and then libvirt_lxc moves the child process
      into the cgroup. This won't work with systemd whose APIs
      do the creation of cgroups + attachment of processes atomically.
      
Fortunately we can simply move the entire cgroup setup into
the libvirt_lxc child process, and make it take place before
forking into the background, so by the time virCommandRun
returns in the LXC driver, the cgroup is guaranteed to be
present.
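As a rough illustration of that ordering (not libvirt's code, which goes through its virCgroup helpers and real machine names), a child could create and join its cgroup via the raw cgroup-v1 filesystem before detaching; the path below is made up.

/* Rough sketch: the child joins its cgroup first, then forks into the
 * background, so its launcher never observes a missing cgroup. */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/types.h>

static void setup_cgroup_and_join(const char *path)
{
    char tasks[256];
    FILE *fp;

    if (mkdir(path, 0755) < 0 && errno != EEXIST)   /* create the group */
        exit(EXIT_FAILURE);

    snprintf(tasks, sizeof(tasks), "%s/tasks", path);
    if (!(fp = fopen(tasks, "w")))                   /* attach ourselves */
        exit(EXIT_FAILURE);
    fprintf(fp, "%d\n", (int)getpid());
    fclose(fp);
}

int main(void)
{
    /* 1. cgroup exists and contains us *before* we background ourselves */
    setup_cgroup_and_join("/sys/fs/cgroup/cpu/machine/lxc-demo");

    /* 2. only now fork into the background; whoever launched us (think
     *    virCommandRun in the driver) can rely on the cgroup being present */
    pid_t pid = fork();
    if (pid < 0)
        exit(EXIT_FAILURE);
    if (pid > 0)
        _exit(EXIT_SUCCESS);    /* foreground process exits */
    setsid();

    /* ... container init work continues here ... */
    return 0;
}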
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
• Auto-detect existing cgroup placement · 87b2e6fa
  Committed by Daniel P. Berrange
Use the new virCgroupNewDetect function to determine the cgroup
placement of existing running VMs. This will allow the legacy
cgroup creation APIs to be removed entirely.
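The underlying detection amounts to reading /proc/<pid>/cgroup; a minimal sketch of that idea (not the actual virCgroupNewDetect implementation, which lives in util/vircgroup.c) follows.

/* Detect an existing process's placement by parsing /proc/<pid>/cgroup,
 * where each line is "hierarchy-id:controllers:cgroup-path". */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>

static void print_cgroup_placement(pid_t pid)
{
    char path[64], line[512];
    FILE *fp;

    snprintf(path, sizeof(path), "/proc/%d/cgroup", (int)pid);
    if (!(fp = fopen(path, "r")))
        return;

    while (fgets(line, sizeof(line), fp))
        printf("%s", line);     /* e.g. "3:cpuacct,cpu:/machine/lxc-demo" */

    fclose(fp);
}

int main(void)
{
    print_cgroup_placement(getpid());
    return 0;
}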
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
3. 22 Jul 2013, 5 commits
4. 18 Jul 2013, 11 commits
5. 17 Jul 2013, 2 commits
• lxc_container: Don't call virGetGroupList during exec · 192a86ca
  Committed by Michal Privoznik
      Commit 75c12564 states that virGetGroupList must not be called
      between fork and exec, then commit ee777e99 promptly violated
      that for lxc.
      
      Patch originally posted by Eric Blake <eblake@redhat.com>.
• lxcCapsInit: Allocate primary security driver unconditionally · 37d96498
  Committed by Michal Privoznik
Currently, if the primary security driver is 'none', we skip
initializing caps->host.secModels. This means that later, when LXC domain
XML is parsed and <seclabel type='none'/> is found (see
virSecurityLabelDefsParseXML), the model name is not copied into the
seclabel. This leads to a subsequent crash in virSecurityManagerGenLabel,
where we call STREQ() on the model (note that we expect the model
to be non-NULL).
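In miniature, the crash comes from handing a NULL model name to a strcmp-based macro; the example below is illustrative, assuming STREQ is libvirt's usual strcmp wrapper.

/* Miniature of the failure: STREQ is a plain strcmp() wrapper, so a NULL
 * model name is dereferenced inside strcmp. */
#include <stdio.h>
#include <string.h>

#define STREQ(a, b) (strcmp(a, b) == 0)

int main(void)
{
    const char *model = NULL;   /* what an unfilled secModels entry leaves us */

    /* Calling STREQ(model, "none") unguarded would segfault, just like
     * virSecurityManagerGenLabel does; checking model first avoids it. */
    if (model && STREQ(model, "none"))
        printf("matched\n");

    return 0;
}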
6. 16 Jul 2013, 7 commits
7. 12 Jul 2013, 3 commits
• Add a couple of debug statements to LXC driver · f45dbdb2
  Committed by Daniel P. Berrange
When failing to start a container due to an inaccessible root
filesystem path, we did not log any meaningful error. Add a
few debug statements to assist diagnosis.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
• util: make virSetUIDGID async-signal-safe · ee777e99
  Committed by Eric Blake
      https://bugzilla.redhat.com/show_bug.cgi?id=964358
      
      POSIX states that multi-threaded apps should not use functions
      that are not async-signal-safe between fork and exec, yet we
were using getpwuid_r and initgroups.  Although rare, it is
possible to hit a deadlock in the child when it tries to grab
a mutex that was already held by another thread in the parent.
      I actually hit this deadlock when testing multiple domains
      being started in parallel with a command hook, with the following
      backtrace in the child:
      
       Thread 1 (Thread 0x7fd56bbf2700 (LWP 3212)):
       #0  __lll_lock_wait ()
           at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:136
       #1  0x00007fd5761e7388 in _L_lock_854 () from /lib64/libpthread.so.0
       #2  0x00007fd5761e7257 in __pthread_mutex_lock (mutex=0x7fd56be00360)
           at pthread_mutex_lock.c:61
       #3  0x00007fd56bbf9fc5 in _nss_files_getpwuid_r (uid=0, result=0x7fd56bbf0c70,
           buffer=0x7fd55c2a65f0 "", buflen=1024, errnop=0x7fd56bbf25b8)
           at nss_files/files-pwd.c:40
       #4  0x00007fd575aeff1d in __getpwuid_r (uid=0, resbuf=0x7fd56bbf0c70,
           buffer=0x7fd55c2a65f0 "", buflen=1024, result=0x7fd56bbf0cb0)
           at ../nss/getXXbyYY_r.c:253
       #5  0x00007fd578aebafc in virSetUIDGID (uid=0, gid=0) at util/virutil.c:1031
       #6  0x00007fd578aebf43 in virSetUIDGIDWithCaps (uid=0, gid=0, capBits=0,
           clearExistingCaps=true) at util/virutil.c:1388
       #7  0x00007fd578a9a20b in virExec (cmd=0x7fd55c231f10) at util/vircommand.c:654
       #8  0x00007fd578a9dfa2 in virCommandRunAsync (cmd=0x7fd55c231f10, pid=0x0)
           at util/vircommand.c:2247
       #9  0x00007fd578a9d74e in virCommandRun (cmd=0x7fd55c231f10, exitstatus=0x0)
           at util/vircommand.c:2100
       #10 0x00007fd56326fde5 in qemuProcessStart (conn=0x7fd53c000df0,
           driver=0x7fd55c0dc4f0, vm=0x7fd54800b100, migrateFrom=0x0, stdin_fd=-1,
           stdin_path=0x0, snapshot=0x0, vmop=VIR_NETDEV_VPORT_PROFILE_OP_CREATE,
           flags=1) at qemu/qemu_process.c:3694
       ...
      
      The solution is to split the work of getpwuid_r/initgroups into the
      unsafe portions (getgrouplist, called pre-fork) and safe portions
      (setgroups, called post-fork).
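A condensed sketch of that split using plain POSIX/glibc calls (the libvirt helpers carry extra parameters and error handling; this needs CAP_SETGID to actually change groups, as libvirtd has when running as root):

/* The non-async-signal-safe lookups run before fork(); the child only uses
 * async-signal-safe calls on its way to exec. */
#include <grp.h>
#include <pwd.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    uid_t uid = getuid();
    gid_t gid = getgid();
    gid_t groups[64];
    int ngroups = 64;
    struct passwd *pw = getpwuid(uid);              /* unsafe: parent only */

    if (!pw || getgrouplist(pw->pw_name, gid, groups, &ngroups) < 0)
        return EXIT_FAILURE;                        /* unsafe: parent only */

    pid_t child = fork();
    if (child == 0) {
        /* child: only async-signal-safe calls between fork and exec */
        if (setgroups(ngroups, groups) < 0 ||
            setgid(gid) < 0 ||
            setuid(uid) < 0)
            _exit(127);
        execlp("id", "id", (char *)NULL);
        _exit(127);
    }

    waitpid(child, NULL, 0);
    return 0;
}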
      
      * src/util/virutil.h (virSetUIDGID, virSetUIDGIDWithCaps): Adjust
      signature.
      * src/util/virutil.c (virSetUIDGID): Add parameters.
      (virSetUIDGIDWithCaps): Adjust clients.
      * src/util/vircommand.c (virExec): Likewise.
      * src/util/virfile.c (virFileAccessibleAs, virFileOpenForked)
      (virDirCreate): Likewise.
      * src/security/security_dac.c (virSecurityDACSetProcessLabel):
      Likewise.
      * src/lxc/lxc_container.c (lxcContainerSetID): Likewise.
      * configure.ac (AC_CHECK_FUNCS_ONCE): Check for setgroups, not
      initgroups.
Signed-off-by: Eric Blake <eblake@redhat.com>
• testutils: Resolve Coverity issues · 8283ef9e
  Committed by John Ferlan
Recent changes uncovered a NEGATIVE_RETURNS warning on the return value of
sysconf() used in a for loop in virtTestCaptureProgramExecChild() in
testutils.c.

Code review uncovered 3 other code paths with the same condition that
weren't found by Coverity, so those were fixed as well.
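Generically, the fix is to check sysconf()'s return value before using it as a loop bound; the _SC_ constant and loop below are illustrative, not the exact testutils.c code.

/* sysconf() may return -1, so the result must be checked before it is used
 * as a loop bound. */
#include <unistd.h>

static void close_extra_fds(void)
{
    long open_max = sysconf(_SC_OPEN_MAX);

    if (open_max < 0)           /* NEGATIVE_RETURNS: don't loop on -1 */
        open_max = 1024;        /* fall back to a sane default */

    for (long fd = 3; fd < open_max; fd++)
        close((int)fd);
}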
8. 11 Jul 2013, 2 commits
9. 10 Jul 2013, 2 commits
10. 09 Jul 2013, 3 commits