1. 03 October, 2013 2 commits
  2. 01 October, 2013 1 commit
  3. 30 September, 2013 1 commit
  4. 27 September, 2013 2 commits
  5. 26 September, 2013 1 commit
  6. 23 September, 2013 2 commits
  7. 17 September, 2013 1 commit
  8. 16 September, 2013 1 commit
  9. 12 September, 2013 3 commits
  10. 11 September, 2013 2 commits
  11. 09 September, 2013 1 commit
  12. 06 September, 2013 1 commit
  13. 05 September, 2013 1 commit
  14. 13 August, 2013 2 commits
  15. 08 August, 2013 1 commit
  16. 05 August, 2013 1 commit
    • Introduce max_queued_clients · 1199edb1
      Michal Privoznik committed
      This configuration knob lets the user set the length of the queue of
      connection requests waiting to be accept()-ed by the daemon. In other
      words, it just controls the @backlog passed to listen (a minimal
      standalone sketch follows this entry):
      
        int listen(int sockfd, int backlog);
      1199edb1
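      A minimal, self-contained sketch of what the @backlog value controls,
      assuming a plain TCP listener on the loopback interface. The port and
      queue length below are illustrative only and this is not libvirt's own
      listener code; in libvirt the knob surfaces as max_queued_clients in
      the daemon configuration.

        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <string.h>
        #include <sys/socket.h>
        #include <unistd.h>

        int main(void)
        {
            int backlog = 20;   /* stand-in for the value read from the config knob */
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            struct sockaddr_in addr;

            if (fd < 0)
                return 1;
            memset(&addr, 0, sizeof(addr));
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
            addr.sin_port = htons(16509);       /* illustrative port only */

            if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
                return 1;

            /* @backlog caps how many fully established connections may sit
             * in the kernel's accept queue before accept() drains them;
             * clients beyond that are typically refused or forced to retry. */
            if (listen(fd, backlog) < 0)
                return 1;

            close(fd);
            return 0;
        }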
  17. 02 August, 2013 1 commit
  18. 01 August, 2013 1 commit
  19. 30 July, 2013 1 commit
  20. 27 July, 2013 1 commit
  21. 26 July, 2013 3 commits
  22. 25 July, 2013 1 commit
  23. 24 July, 2013 4 commits
    • virLXCMonitorClose: Unlock domain while closing monitor · 4e5f0dd2
      Michal Privoznik committed
      There's a race in the lxc driver causing a deadlock. If a domain is
      destroyed immediately after being started, the deadlock can occur. When
      a domain is started, the event loop tries to connect to the monitor. If
      the connection succeeds, virLXCProcessMonitorInitNotify() is called with
      @mon->client locked. The first thing that callee does is
      virObjectLock(vm). So the locking order is: 1) @mon->client, 2) @vm.
      
      However, if there's another thread executing virDomainDestroy on the
      very same domain, the first thing done there is locking the @vm. Then
      the corresponding libvirt_lxc process is killed and the monitor is
      closed by calling virLXCMonitorClose(). That callee tries to lock
      @mon->client too, so the locking order is the reverse of the first
      case. This situation results in a deadlock and an unresponsive libvirtd
      (since the event loop is involved).
      
      The proper solution is to unlock the @vm in virLXCMonitorClose prior to
      entering virNetClientClose(); a minimal sketch of that unlock-before-close
      pattern follows this entry. See the backtrace below:
      
      Thread 25 (Thread 0x7f1b7c9b8700 (LWP 16312)):
      0  0x00007f1b80539714 in __lll_lock_wait () from /lib64/libpthread.so.0
      1  0x00007f1b8053516c in _L_lock_516 () from /lib64/libpthread.so.0
      2  0x00007f1b80534fbb in pthread_mutex_lock () from /lib64/libpthread.so.0
      3  0x00007f1b82a637cf in virMutexLock (m=0x7f1b3c0038d0) at util/virthreadpthread.c:85
      4  0x00007f1b82a4ccf2 in virObjectLock (anyobj=0x7f1b3c0038c0) at util/virobject.c:320
      5  0x00007f1b82b861f6 in virNetClientCloseInternal (client=0x7f1b3c0038c0, reason=3) at rpc/virnetclient.c:696
      6  0x00007f1b82b862f5 in virNetClientClose (client=0x7f1b3c0038c0) at rpc/virnetclient.c:721
      7  0x00007f1b6ee12500 in virLXCMonitorClose (mon=0x7f1b3c007210) at lxc/lxc_monitor.c:216
      8  0x00007f1b6ee129f0 in virLXCProcessCleanup (driver=0x7f1b68100240, vm=0x7f1b680ceb70, reason=VIR_DOMAIN_SHUTOFF_DESTROYED) at lxc/lxc_process.c:174
      9  0x00007f1b6ee14106 in virLXCProcessStop (driver=0x7f1b68100240, vm=0x7f1b680ceb70, reason=VIR_DOMAIN_SHUTOFF_DESTROYED) at lxc/lxc_process.c:710
      10 0x00007f1b6ee1aa36 in lxcDomainDestroyFlags (dom=0x7f1b5c002560, flags=0) at lxc/lxc_driver.c:1291
      11 0x00007f1b6ee1ab1a in lxcDomainDestroy (dom=0x7f1b5c002560) at lxc/lxc_driver.c:1321
      12 0x00007f1b82b05be5 in virDomainDestroy (domain=0x7f1b5c002560) at libvirt.c:2303
      13 0x00007f1b835a7e85 in remoteDispatchDomainDestroy (server=0x7f1b857419d0, client=0x7f1b8574ae40, msg=0x7f1b8574acf0, rerr=0x7f1b7c9b7c30, args=0x7f1b5c004a50) at remote_dispatch.h:3143
      14 0x00007f1b835a7d78 in remoteDispatchDomainDestroyHelper (server=0x7f1b857419d0, client=0x7f1b8574ae40, msg=0x7f1b8574acf0, rerr=0x7f1b7c9b7c30, args=0x7f1b5c004a50, ret=0x7f1b5c0029e0) at remote_dispatch.h:3121
      15 0x00007f1b82b93704 in virNetServerProgramDispatchCall (prog=0x7f1b8573af90, server=0x7f1b857419d0, client=0x7f1b8574ae40, msg=0x7f1b8574acf0) at rpc/virnetserverprogram.c:435
      16 0x00007f1b82b93263 in virNetServerProgramDispatch (prog=0x7f1b8573af90, server=0x7f1b857419d0, client=0x7f1b8574ae40, msg=0x7f1b8574acf0) at rpc/virnetserverprogram.c:305
      17 0x00007f1b82b8c0f6 in virNetServerProcessMsg (srv=0x7f1b857419d0, client=0x7f1b8574ae40, prog=0x7f1b8573af90, msg=0x7f1b8574acf0) at rpc/virnetserver.c:163
      18 0x00007f1b82b8c1da in virNetServerHandleJob (jobOpaque=0x7f1b8574dca0, opaque=0x7f1b857419d0) at rpc/virnetserver.c:184
      19 0x00007f1b82a64158 in virThreadPoolWorker (opaque=0x7f1b8573cb10) at util/virthreadpool.c:144
      20 0x00007f1b82a63ae5 in virThreadHelper (data=0x7f1b8574b9f0) at util/virthreadpthread.c:161
      21 0x00007f1b80532f4a in start_thread () from /lib64/libpthread.so.0
      22 0x00007f1b7fc4f20d in clone () from /lib64/libc.so.6
      
      Thread 1 (Thread 0x7f1b83546740 (LWP 16297)):
      0  0x00007f1b80539714 in __lll_lock_wait () from /lib64/libpthread.so.0
      1  0x00007f1b8053516c in _L_lock_516 () from /lib64/libpthread.so.0
      2  0x00007f1b80534fbb in pthread_mutex_lock () from /lib64/libpthread.so.0
      3  0x00007f1b82a637cf in virMutexLock (m=0x7f1b680ceb80) at util/virthreadpthread.c:85
      4  0x00007f1b82a4ccf2 in virObjectLock (anyobj=0x7f1b680ceb70) at util/virobject.c:320
      5  0x00007f1b6ee13bd7 in virLXCProcessMonitorInitNotify (mon=0x7f1b3c007210, initpid=4832, vm=0x7f1b680ceb70) at lxc/lxc_process.c:601
      6  0x00007f1b6ee11fd3 in virLXCMonitorHandleEventInit (prog=0x7f1b3c001f10, client=0x7f1b3c0038c0, evdata=0x7f1b8574a7d0, opaque=0x7f1b3c007210) at lxc/lxc_monitor.c:109
      7  0x00007f1b82b8a196 in virNetClientProgramDispatch (prog=0x7f1b3c001f10, client=0x7f1b3c0038c0, msg=0x7f1b3c003928) at rpc/virnetclientprogram.c:259
      8  0x00007f1b82b87030 in virNetClientCallDispatchMessage (client=0x7f1b3c0038c0) at rpc/virnetclient.c:1019
      9  0x00007f1b82b876bb in virNetClientCallDispatch (client=0x7f1b3c0038c0) at rpc/virnetclient.c:1140
      10 0x00007f1b82b87d41 in virNetClientIOHandleInput (client=0x7f1b3c0038c0) at rpc/virnetclient.c:1312
      11 0x00007f1b82b88f51 in virNetClientIncomingEvent (sock=0x7f1b3c0044e0, events=1, opaque=0x7f1b3c0038c0) at rpc/virnetclient.c:1832
      12 0x00007f1b82b9e1c8 in virNetSocketEventHandle (watch=3321, fd=54, events=1, opaque=0x7f1b3c0044e0) at rpc/virnetsocket.c:1695
      13 0x00007f1b82a272cf in virEventPollDispatchHandles (nfds=21, fds=0x7f1b8574ded0) at util/vireventpoll.c:498
      14 0x00007f1b82a27af2 in virEventPollRunOnce () at util/vireventpoll.c:645
      15 0x00007f1b82a25a61 in virEventRunDefaultImpl () at util/virevent.c:273
      16 0x00007f1b82b8e97e in virNetServerRun (srv=0x7f1b857419d0) at rpc/virnetserver.c:1097
      17 0x00007f1b8359db6b in main (argc=2, argv=0x7ffff98dbaa8) at libvirtd.c:1512
      4e5f0dd2
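      A compressed sketch of the lock-ordering idea behind the fix, using two
      plain pthread mutexes as stand-ins for @vm and @mon->client. The names
      and helper functions below are hypothetical and do not come from
      libvirt's sources; they only illustrate why dropping the @vm lock before
      taking the client lock removes the reversed ordering.

        #include <pthread.h>

        /* Stand-ins for the two locks involved: @vm and @mon->client. */
        static pthread_mutex_t vm_lock     = PTHREAD_MUTEX_INITIALIZER;
        static pthread_mutex_t client_lock = PTHREAD_MUTEX_INITIALIZER;

        /* Deadlock-prone shape: entered with vm_lock already held, then it
         * takes client_lock, while the event-loop path takes the same locks
         * in the opposite order (client_lock first, then vm_lock). */
        static void monitor_close_buggy(void)
        {
            pthread_mutex_lock(&client_lock);   /* can block forever */
            /* ... tear down the monitor connection ... */
            pthread_mutex_unlock(&client_lock);
        }

        /* Fixed shape, mirroring the commit: drop vm_lock before taking
         * client_lock and re-acquire it afterwards, so both paths agree on
         * the effective ordering client_lock -> vm_lock. */
        static void monitor_close_fixed(void)
        {
            pthread_mutex_unlock(&vm_lock);     /* unlock @vm first */
            pthread_mutex_lock(&client_lock);
            /* ... tear down the monitor connection ... */
            pthread_mutex_unlock(&client_lock);
            pthread_mutex_lock(&vm_lock);       /* restore the caller's lock */
        }

        int main(void)
        {
            pthread_mutex_lock(&vm_lock);       /* caller holds @vm, as during cleanup */
            monitor_close_buggy();              /* single-threaded here, so no hang */
            monitor_close_fixed();
            pthread_mutex_unlock(&vm_lock);
            return 0;
        }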
    • lxc: Resolve Coverity warning · 8134b37d
      John Ferlan committed
      Commit 'c8695053' resulted in the following:
      
      Coverity error seen in the output:
          ERROR: REVERSE_INULL
          FUNCTION: lxcProcessAutoDestroy
      
      The warning is raised because 'dom' is NULL-checked before the
      'dom->persistent' access even though 'dom' has already been
      dereferenced earlier in the function, so the check comes too late.
      A small illustrative sketch of the pattern follows this entry.
      8134b37d
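      A small illustrative sketch of the REVERSE_INULL pattern and its usual
      fix. The struct and function names below are hypothetical and are not
      libvirt's lxcProcessAutoDestroy code.

        /* Hypothetical stand-in for the domain object; illustration only. */
        struct dom { int persistent; };

        /* REVERSE_INULL shape: the pointer is dereferenced first and only
         * later checked for NULL, so the check is either dead or too late. */
        static void auto_destroy_buggy(struct dom *dom)
        {
            int transient = !dom->persistent;    /* dereference ... */
            if (dom && transient) {              /* ... then NULL check: flagged */
                /* tear down the transient domain */
            }
        }

        /* Cleaned-up shape: check the pointer once, up front, before any use. */
        static void auto_destroy_fixed(struct dom *dom)
        {
            if (!dom)
                return;
            if (!dom->persistent) {
                /* tear down the transient domain */
            }
        }

        int main(void)
        {
            struct dom d = { 1 };
            auto_destroy_buggy(&d);
            auto_destroy_fixed(&d);
            return 0;
        }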
    • Create + setup cgroups atomically for LXC process · da704c87
      Daniel P. Berrange committed
      Currently the LXC driver creates the VM's cgroup prior to
      forking, and then libvirt_lxc moves the child process
      into the cgroup. This won't work with systemd, whose APIs
      do the creation of cgroups + attachment of processes atomically.
      
      Fortunately we can simply move the entire cgroups setup into
      the libvirt_lxc child process. We make it take place before
      fork'ing into the background, so by the time virCommandRun
      returns in the LXC driver, the cgroup is guaranteed to be
      present. A generic sketch of this ordering follows this entry.
      Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
      da704c87
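      A generic sketch of the ordering the commit describes, using the raw
      cgroup filesystem rather than libvirt's virCgroup APIs. The cgroup path
      is illustrative and the code assumes a writable cgroup mount at
      /sys/fs/cgroup; it is not the libvirt_lxc implementation.

        #include <errno.h>
        #include <stdio.h>
        #include <sys/stat.h>
        #include <sys/types.h>
        #include <unistd.h>

        int main(void)
        {
            /* Illustrative cgroup path; libvirt computes its own layout. */
            const char *cg = "/sys/fs/cgroup/illustrative-lxc-demo";
            char procs[256];
            FILE *fp;
            pid_t pid;

            /* 1. Create the cgroup and place *this* process into it ... */
            if (mkdir(cg, 0755) < 0 && errno != EEXIST)
                return 1;
            snprintf(procs, sizeof(procs), "%s/cgroup.procs", cg);
            if (!(fp = fopen(procs, "w")))
                return 1;
            fprintf(fp, "%ld\n", (long)getpid());
            fclose(fp);

            /* 2. ... and only then fork into the background. Because the
             * cgroup work happens before daemonizing, a parent that waits
             * for this step (as virCommandRun does for libvirt_lxc) sees
             * the cgroup already present when its wait completes. */
            pid = fork();
            if (pid < 0)
                return 1;
            if (pid > 0)
                _exit(0);      /* parent exits; the child stays in the cgroup */
            setsid();

            /* ... container init work would continue here ... */
            return 0;
        }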
    • Auto-detect existing cgroup placement · 87b2e6fa
      Daniel P. Berrange committed
      Use the new virCgroupNewDetect function to determine the cgroup
      placement of existing running VMs. This will allow the legacy
      cgroups creation APIs to be removed entirely. A generic sketch of
      detecting a process's existing placement follows this entry.
      Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
      87b2e6fa
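      A generic sketch of what "detect existing cgroup placement" amounts to
      at the OS level: parsing /proc/self/cgroup. This only illustrates the
      mechanism; it is not virCgroupNewDetect's actual implementation.

        #include <stdio.h>
        #include <string.h>

        /* Print the cgroup placement of the current process by parsing
         * /proc/self/cgroup (lines look like "hierarchy:controllers:path"
         * on cgroup v1, or "0::path" on cgroup v2). */
        int main(void)
        {
            FILE *fp = fopen("/proc/self/cgroup", "r");
            char line[512];

            if (!fp)
                return 1;
            while (fgets(line, sizeof(line), fp)) {
                char *path = strrchr(line, ':');
                if (path)
                    printf("placement: %s", path + 1);  /* line is newline-terminated */
            }
            fclose(fp);
            return 0;
        }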
  24. 22 July, 2013 5 commits