1. 02 Oct 2018 (4 commits)
    • util: Data overrun may lead to divide by zero · efdbfe57
      Authored by John Ferlan
      Commit 87a8a30d added the function based on its virsh
      counterpart, but used an unsigned long long instead of a double,
      which limits the maximum result; an overrun of that integer
      range can also leave a zero divisor (a minimal sketch follows
      this entry).
      Signed-off-by: John Ferlan <jferlan@redhat.com>
      ACKed-by: Michal Privoznik <mprivozn@redhat.com>
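
      A minimal C sketch of this failure class, assuming nothing about
      libvirt's actual code: a product computed in 64-bit unsigned
      arithmetic can wrap (here to exactly zero), so a later division
      would fault, while the same computation in double stays finite.

        #include <stdio.h>

        int main(void)
        {
            /* 2^32 * 2^32 == 2^64, which wraps to exactly 0 in 64-bit
             * unsigned arithmetic; dividing by it would fault. */
            unsigned long long a = 1ULL << 32;
            unsigned long long bad_divisor = a * a;      /* wraps to 0 */
            double good_divisor = (double)a * (double)a; /* ~1.8e19 */

            printf("bad_divisor  = %llu\n", bad_divisor);
            printf("good_divisor = %g\n", good_divisor);

            if (bad_divisor == 0)
                printf("unsigned path would divide by zero\n");
            else
                printf("quotient = %llu\n", 100 / bad_divisor);
            printf("double quotient = %g\n", 100.0 / good_divisor);
            return 0;
        }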
    • tests: Inline a sysconf call for linuxCPUStatsToBuf · fddf9283
      Authored by John Ferlan
      While unlikely, sysconf(_SC_CLK_TCK) could fail, leading to
      indeterminate results in the subsequent division. So let's
      just remove the #define and inline the same calculation (a
      defensive sketch follows this entry).
      Signed-off-by: John Ferlan <jferlan@redhat.com>
      ACKed-by: Michal Privoznik <mprivozn@redhat.com>
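
      A minimal sketch of the defensive pattern, assuming a plain
      jiffies-to-seconds conversion (variable names are illustrative,
      not libvirt's):

        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            /* sysconf() returns -1 on error, so dividing by its result
             * without checking gives indeterminate values. */
            long ticks = sysconf(_SC_CLK_TCK);
            if (ticks <= 0) {
                fprintf(stderr, "sysconf(_SC_CLK_TCK) failed\n");
                return 1;
            }
            unsigned long long jiffies = 4242;  /* illustrative input */
            printf("seconds = %llu\n",
                   jiffies / (unsigned long long)ticks);
            return 0;
        }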
    • libxl: Fix possible object refcnt issue · 425b9f8a
      Authored by John Ferlan
      libxlDomainMigrationDstPrepare registers @args with
      virNetSocketAddIOCallback, using libxlMigrateDstReceive as the
      virNetSocketIOFunc @func, on the understanding that
      libxlMigrateDstReceive will virObjectUnref @args when it
      finishes; libxlDomainMigrationDstPrepare therefore does not
      Unref during normal processing.
      
      However, Coverity sees an issue with this. There can be @nsocks
      virNetSocketAddIOCallback registrations, but only one
      virObjectUnref. That means the first callback to finish will
      Unref, and subsequent callers may not get the @args (or @opaque)
      they expect. If virNetSocketNewListenTCP returns only one
      socket, this works; if it returns more than one, there's going
      to be a problem.
      
      To resolve this, since @args starts with one reference from
      virObjectNew, add one reference each time @args is registered
      via virNetSocketAddIOCallback. Then, since
      libxlDomainMigrationDstPrepare is finished with @args, move its
      virObjectUnref from the error: label to the done: label (error:
      falls through). That way, once the last IOCallback completes,
      @args is freed (a generic sketch of the pattern follows this
      entry).
      Signed-off-by: John Ferlan <jferlan@redhat.com>
      ACKed-by: Michal Privoznik <mprivozn@redhat.com>
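
      A self-contained sketch of the per-callback referencing scheme,
      using a toy refcount type rather than libvirt's virObject (all
      names here are illustrative):

        #include <stdio.h>
        #include <stdlib.h>

        typedef struct { int refs; } Args;

        static Args *args_new(void)
        {
            Args *a = calloc(1, sizeof(*a));
            if (!a)
                abort();
            a->refs = 1;                 /* creator's reference */
            return a;
        }

        static void args_ref(Args *a)   { a->refs++; }

        static void args_unref(Args *a)
        {
            if (--a->refs == 0) {
                printf("freeing args\n");
                free(a);
            }
        }

        /* Each registered I/O callback owns one reference and drops
         * it when its work is finished. */
        static void callback_finished(Args *a) { args_unref(a); }

        int main(void)
        {
            Args *a = args_new();        /* refs == 1 */
            int nsocks = 3;

            for (int i = 0; i < nsocks; i++)
                args_ref(a);             /* one reference per callback */

            args_unref(a);               /* creator done: drop initial ref */

            for (int i = 0; i < nsocks; i++)
                callback_finished(a);    /* last drop frees @args */
            return 0;
        }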
    • lxc: Only check @nparams in lxcDomainBlockStatsFlags · 6830c813
      Authored by John Ferlan
      Remove the "!params" check from the condition, since a caller
      could pass a non-NULL value there with @nparams of 0 and should
      still continue on. The external API only checks for a NULL
      @params when @nparams is non-zero (a sketch of this ordering
      follows this entry).
      
      Found by Coverity
      Signed-off-by: John Ferlan <jferlan@redhat.com>
      ACKed-by: Michal Privoznik <mprivozn@redhat.com>
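
      A hypothetical sketch of that check ordering (the function name
      and parameter count are made up for illustration, not the real
      driver entry point):

        #include <stddef.h>
        #include <stdio.h>

        #define NB_BLKSTAT_PARAM 4      /* illustrative count */

        static int
        block_stats_flags(void *params, int *nparams)
        {
            if (*nparams == 0) {
                /* Probe call: @params may be NULL; report the count. */
                *nparams = NB_BLKSTAT_PARAM;
                return 0;
            }
            if (params == NULL)
                return -1;      /* data requested but no buffer given */

            /* ... fill up to *nparams entries ... */
            return 0;
        }

        int main(void)
        {
            int n = 0;
            /* NULL @params with @nparams == 0 is valid: just probing. */
            if (block_stats_flags(NULL, &n) == 0)
                printf("driver reports %d parameters\n", n);
            return 0;
        }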
  2. 01 Oct 2018 (13 commits)
  3. 29 Sep 2018 (1 commit)
  4. 27 Sep 2018 (6 commits)
  5. 26 Sep 2018 (15 commits)
  6. 25 Sep 2018 (1 commit)
    • lxc_monitor: Avoid AB / BA lock race · 7882c6ec
      Authored by Mark Asselstine
      A deadlock can occur when autostarting an LXC domain 'guest',
      because two threads each take one lock and then wait for the
      lock the other holds (the classic AB-BA problem). Thread A takes
      and holds the 'vm' lock while attempting to take the 'client'
      lock; meanwhile, thread B takes and holds the 'client' lock
      while attempting to take the 'vm' lock.
      
      The potential for this can be seen as follows:
      
      Thread A:
      virLXCProcessAutostartDomain (takes vm lock)
       --> virLXCProcessStart
        --> virLXCProcessConnectMonitor
         --> virLXCMonitorNew
          --> virNetClientSetCloseCallback (wants client lock)
      
      Thread B:
      virNetClientIncomingEvent (takes client lock)
       --> virNetClientIOHandleInput
        --> virNetClientCallDispatch
         --> virNetClientCallDispatchMessage
          --> virNetClientProgramDispatch
           --> virLXCMonitorHandleEventInit
            --> virLXCProcessMonitorInitNotify (wants vm lock)
      
      Since these threads are scheduled independently and are
      preemptible, it is possible for the deadlock to occur where each
      thread acquires its first lock but both fail to get their second
      lock and spin forever. You get something like:
      
      virLXCProcessAutostartDomain (takes vm lock)
       --> virLXCProcessStart
        --> virLXCProcessConnectMonitor
         --> virLXCMonitorNew
      <...>
      virNetClientIncomingEvent (takes client lock)
       --> virNetClientIOHandleInput
        --> virNetClientCallDispatch
         --> virNetClientCallDispatchMessage
          --> virNetClientProgramDispatch
           --> virLXCMonitorHandleEventInit
            --> virLXCProcessMonitorInitNotify (wants vm lock but spins)
      <...>
          --> virNetClientSetCloseCallback (wants client lock but spins)
      
      Neither thread ever gets the lock it needs to be able to continue
      while holding the lock that the other thread needs.
      
      The actual window for preemption that can cause this deadlock is
      rather small, between the call to virNetClientProgramNew() and
      the execution of virNetClientSetCloseCallback(), both in
      virLXCMonitorNew(). But real-world use shows that this small
      window is enough.
      
      By moving the call to virNetClientSetCloseCallback() ahead of
      virNetClientProgramNew(), we close any possible chance of the
      deadlock taking place. There should be no other implications to
      the move, since the close callback (in the unlikely event it is
      called) will spin on the vm lock. The remaining work between the
      old call location of virNetClientSetCloseCallback() and the new
      location is unaffected by the move (a generic pthreads
      illustration follows this entry).
      Signed-off-by: Mark Asselstine <mark.asselstine@windriver.com>
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
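
      A generic pthreads illustration of the AB-BA pattern described
      above (not libvirt's code; the lock names only echo the commit
      message). If both threads win their first lock before either
      reaches its second, the program blocks forever:

        #include <pthread.h>
        #include <stdio.h>

        static pthread_mutex_t vm_lock     = PTHREAD_MUTEX_INITIALIZER;
        static pthread_mutex_t client_lock = PTHREAD_MUTEX_INITIALIZER;

        static void *thread_a(void *arg)      /* like virLXCProcessStart */
        {
            (void)arg;
            pthread_mutex_lock(&vm_lock);
            /* ...window for preemption... */
            pthread_mutex_lock(&client_lock); /* may block forever */
            pthread_mutex_unlock(&client_lock);
            pthread_mutex_unlock(&vm_lock);
            return NULL;
        }

        static void *thread_b(void *arg)      /* like the dispatch path */
        {
            (void)arg;
            pthread_mutex_lock(&client_lock);
            pthread_mutex_lock(&vm_lock);     /* may block forever */
            pthread_mutex_unlock(&vm_lock);
            pthread_mutex_unlock(&client_lock);
            return NULL;
        }

        int main(void)
        {
            pthread_t a, b;
            pthread_create(&a, NULL, thread_a, NULL);
            pthread_create(&b, NULL, thread_b, NULL);
            pthread_join(a, NULL);  /* can hang: classic AB-BA deadlock */
            pthread_join(b, NULL);
            printf("no deadlock on this run\n");
            return 0;
        }

      Note that the commit's fix is narrower than imposing a global
      lock order: it reorders the two calls inside virLXCMonitorNew()
      so the window in which the threads can interleave their lock
      acquisitions no longer exists.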