1. 09 Dec, 2015 1 commit
  2. 05 Dec, 2015 1 commit
    • qemu: include hostname in QEMU log files · 45c7b9e6
      Authored by Daniel P. Berrange
      Often when debugging bug reports one is given a copy of the file
      from /var/log/libvirt/qemu/$NAME.log along with other supporting
      files. In a number of cases I've been given sets of files which
      were from different machines. Including the hostname in the QEMU
      log file will help identify when the bug reporter is providing
      bad information.
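
      For illustration only, a minimal standalone sketch of stamping the host
      name into a per-domain log file on a POSIX host; the path, header format
      and function name below are assumptions made for this sketch, not the
      actual header libvirt writes:

      #include <stdio.h>
      #include <time.h>
      #include <unistd.h>

      /* Append a startup header that records the current host name, so a
       * log file copied off the machine still identifies where it came from. */
      static int log_startup_header(const char *path)
      {
          char host[256] = "unknown";
          time_t now = time(NULL);
          FILE *fp = fopen(path, "a");

          if (!fp)
              return -1;
          gethostname(host, sizeof(host) - 1);
          fprintf(fp, "%.24s: starting up, hostname: %s\n", ctime(&now), host);
          fclose(fp);
          return 0;
      }

      int main(void)
      {
          return log_startup_header("/tmp/example-vm.log") < 0 ? 1 : 0;
      }
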
      Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
  3. 01 Dec, 2015 1 commit
    • qemu_agent: fix deadlock in qemuProcessHandleAgentEOF · fe51174f
      Authored by Wang Yufei
      If VM A is shut down by the qemu agent at approximately the same time
      an agent EOF for VM A happens, there's a chance that a deadlock may occur:
      
      qemuProcessHandleAgentEOF in main thread
      A)  priv->agent = NULL; // A happens before B

          // deadlock when we try to get the agent lock, which is held by the worker thread
          qemuAgentClose(agent);
      
      qemuDomainObjExitAgent called by qemuDomainShutdownFlags in worker thread
      B)  hasRefs = virObjectUnref(priv->agent); // priv->agent is NULL,
                                                 // return false
          if (hasRefs)
              virObjectUnlock(priv->agent); //agent lock will not be released here
      
      To resolve the deadlock, during EOF close the agent first, then set
      priv->agent to NULL.
      
      This essentially reverts commit id '1020a504'. Note that commit id
      '362d0477' describes a possible/rare deadlock similar to the one seen in
      the monitor in commit id '25f582e3'; however, intervening changes,
      including commit id 'd960d06f', should have removed that deadlock.
      
      With this change, if EOF is called:
      
          Get VM lock
          Check if !priv->agent || priv->beingDestroyed, then unlock VM
          Call qemuAgentClose
          Unlock VM
      
      When qemuAgentClose is called
          Get Agent lock
          If Agent->fd open, close it
          Unlock Agent
          Unref Agent
      
      qemuDomainObjEnterAgent
          Enter with VM lock
          Get Agent lock
          Increase Agent refcnt
          Unlock VM
      
      After running agent command, calling qemuDomainObjExitAgent
          Enter with Agent lock
          Unref Agent
          If not last reference, unlock Agent
          Get VM lock
      
      If we are in the middle of an EnterAgent, Agent command, ExitAgent
      sequence and the EOF code is triggered, then the EOF code can get the
      VM lock, make its checks against !priv->agent || priv->beingDestroyed,
      and call qemuAgentClose. The AgentClose would wait to get the Agent
      lock. The other thread will eventually call ExitAgent, release the
      Agent lock, and unref the Agent. Once ExitAgent releases the Agent
      lock, AgentClose will get the Agent lock, close the fd, unlock the
      Agent, and unref the Agent. The final unref causes deletion of the
      Agent.
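
      A simplified, self-contained pthread model of the Enter/Exit/Close
      discipline described above; the type and function names (vm_t, agent_t,
      enter_agent, ...) are illustrative stand-ins for this sketch, not
      libvirt's real virObject/qemuAgent API:

      #include <pthread.h>
      #include <stdbool.h>
      #include <stdlib.h>

      typedef struct agent {
          pthread_mutex_t lock;
          int refs;
          int fd;                               /* -1 once closed */
      } agent_t;

      typedef struct vm {
          pthread_mutex_t lock;
          agent_t *agent;
          bool being_destroyed;
      } vm_t;

      /* Drop a reference while holding the Agent lock; unlock the Agent if it
       * survives, otherwise dispose of it (the last-reference case). */
      static void agent_unref_locked(agent_t *a)
      {
          if (--a->refs > 0) {
              pthread_mutex_unlock(&a->lock);
              return;
          }
          pthread_mutex_unlock(&a->lock);
          pthread_mutex_destroy(&a->lock);
          free(a);
      }

      /* AgentClose analogue: get the Agent lock, close the fd if open, then
       * drop the reference the VM held on the Agent. */
      static void agent_close(agent_t *a)
      {
          pthread_mutex_lock(&a->lock);
          if (a->fd >= 0)
              a->fd = -1;                       /* stand-in for close(a->fd) */
          agent_unref_locked(a);
      }

      /* EnterAgent analogue: called with the VM lock held; get the Agent lock,
       * take a reference, then release the VM lock while the command runs. */
      static agent_t *enter_agent(vm_t *vm)
      {
          agent_t *a = vm->agent;
          pthread_mutex_lock(&a->lock);
          a->refs++;
          pthread_mutex_unlock(&vm->lock);
          return a;
      }

      /* ExitAgent analogue: entered with the Agent lock held; unref (and
       * unlock) the Agent, then retake the VM lock. */
      static void exit_agent(vm_t *vm, agent_t *a)
      {
          agent_unref_locked(a);
          pthread_mutex_lock(&vm->lock);
      }

      /* EOF handler analogue: under the VM lock, bail out if the Agent is gone
       * or the domain is being destroyed; otherwise close the Agent *before*
       * clearing vm->agent, so a concurrent ExitAgent still sees a valid
       * object to unref and unlock. */
      static void handle_agent_eof(vm_t *vm)
      {
          pthread_mutex_lock(&vm->lock);
          if (!vm->agent || vm->being_destroyed) {
              pthread_mutex_unlock(&vm->lock);
              return;
          }
          agent_close(vm->agent);
          vm->agent = NULL;
          pthread_mutex_unlock(&vm->lock);
      }

      int main(void)
      {
          vm_t vm = { .agent = NULL, .being_destroyed = false };
          agent_t *a = calloc(1, sizeof(*a));

          if (!a)
              return 1;
          pthread_mutex_init(&vm.lock, NULL);
          pthread_mutex_init(&a->lock, NULL);
          a->refs = 1;                          /* reference held via vm.agent */
          a->fd = 3;
          vm.agent = a;

          pthread_mutex_lock(&vm.lock);
          agent_t *held = enter_agent(&vm);     /* VM unlocked, Agent locked   */
          exit_agent(&vm, held);                /* Agent unreffed, VM relocked */
          pthread_mutex_unlock(&vm.lock);

          handle_agent_eof(&vm);                /* closes and frees the Agent  */
          pthread_mutex_destroy(&vm.lock);
          return 0;
      }
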
      Signed-off-by: Wang Yufei <james.wangyufei@huawei.com>
      Reviewed-by: Ren Guannan <renguannan@huawei.com>
  4. 26 Nov, 2015 6 commits
  5. 25 Nov, 2015 3 commits
  6. 24 Nov, 2015 1 commit
    • qemu: pass the asyncJob to qemuProcessStartCPUs · 668a0fef
      Authored by Ján Tomko
      Now that new domains are started inside a QEMU_ASYNC_JOB_START job,
      we need to pass it down to qemuProcessStartCPUs too.
      
      This removes the warning:
      qemuDomainObjEnterMonitorInternal:1750 : This thread seems to be the
      async job owner; entering monitor without asking for a nested job is
      dangerous
      
      Introduced by commit 04c721f2; before that, this code path was only
      executed with QEMU_ASYNC_JOB_NONE.
      
      (This code is not executed on migration, because qemuMigrationPrepareAny
       sets the VIR_QEMU_PROCESS_START_PAUSED flag.)
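
      A rough illustration of the idea, with simplified stand-in names rather
      than libvirt's exact job API: the caller's async job has to be threaded
      down the call chain instead of being replaced by a 'none' value.

      #include <stdio.h>

      /* Simplified stand-in for the async job identifier. */
      typedef enum {
          ASYNC_JOB_NONE = 0,
          ASYNC_JOB_START,
      } async_job;

      /* Entering the monitor: when the caller owns an async job, it must say
       * so, otherwise a nested job cannot be requested safely. */
      static int enter_monitor(async_job job)
      {
          if (job == ASYNC_JOB_NONE)
              fprintf(stderr, "warning: entering monitor without a nested job\n");
          return 0;
      }

      /* StartCPUs analogue: forwards whatever job the caller is running in. */
      static int start_cpus(async_job job)
      {
          return enter_monitor(job);
      }

      int main(void)
      {
          /* Domain startup now runs inside a start job, so that job has to be
           * passed down rather than defaulting to ASYNC_JOB_NONE. */
          return start_cpus(ASYNC_JOB_START);
      }
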
  7. 19 Nov, 2015 14 commits
  8. 12 Nov, 2015 1 commit
  9. 10 Nov, 2015 1 commit
  10. 04 Nov, 2015 5 commits
  11. 26 Oct, 2015 4 commits
  12. 16 Oct, 2015 2 commits
    • qemu: Fix qemu startup check for QEMU_CAPS_OBJECT_IOTHREAD · cc2d49f9
      Authored by John Ferlan
      https://bugzilla.redhat.com/show_bug.cgi?id=1249981
      
      When qemuDomainPinIOThread was added in commit id 'fb562614', a check
      for the IOThread capability was not needed since a check for iothreadpids
      covered the condition where the support for IOThreads was not present.
      The iothreadpids array was only created if qemuProcessDetectIOThreadPIDs
      was able to query the monitor for IOThreads. It would only do that if
      the QEMU_CAPS_OBJECT_IOTHREAD capability was set.
      
      However, when iothreadids were added in commit id '8d4614a5', the check
      for iothreadpids was replaced by a search through the iothreadids[]
      array for the matching iothread_id. That left open the possibility that
      an iothreadids[] array was defined, but its entries essentially pointed
      to elements with only the 'iothread_id' field set, leaving the
      'thread_id' value at 0 and, eventually, the cpumap entry NULL.
      
      This was because the original IOThreads commit id '72edaae7' only added
      IOThread objects at startup if IOThreads were defined and the emulator
      had the IOThreads capability. The "capability failure" check was only
      done when a disk was assigned to an IOThread in qemuCheckIOThreads,
      because the initial implementation had no way to dynamically add
      IOThreads, while it was possible to dynamically add a disk to the
      domain. So the decision was: if the emulator supported it, add the
      IOThread objects; then, if a disk with an IOThread defined was added,
      check the capability and fail the attach if it was not there. If the
      capability was missing, the 'iothreads' value was essentially ignored.
      
      Eventually commit id 'a27ed6e7' allowed for the dynamic addition and
      deletion of IOThread objects. So it was no longer necessary to generate
      IOThread objects to dynamically attach a disk to. However, the startup
      and disk check code was not modified to reflect this.
      
      This patch will move the capability failure check to when IOThread
      objects are being added to the command line. Thus a domain that has
      IOThreads defined will not be started if the emulator doesn't support
      the capability. This means when qemuCheckIOThreads is called to add
      a disk, it's no longer necessary to check the capability. Instead the
      code can use the IOThreadFind call to indicate that the IOThread
      doesn't exist.
      
      Finally, because a domain could already be running with an
      iothreadids[] array of mostly empty elements defined prior to this
      change when libvirtd is restarted, qemuProcessDetectIOThreadPIDs will
      check whether niothreadids is non-zero when the QEMU_CAPS_OBJECT_IOTHREAD
      capability check fails, and remove the elements and the array if they
      exist.
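
      A rough, self-contained sketch of both checks; the struct layout and
      function names are illustrative assumptions for this sketch, not the
      actual libvirt definitions:

      #include <stdbool.h>
      #include <stdio.h>
      #include <stdlib.h>

      typedef struct {
          unsigned int iothread_id;
          int thread_id;                   /* 0 until detected from the monitor */
      } iothread_id_def;

      typedef struct {
          size_t niothreadids;
          iothread_id_def **iothreadids;
      } domain_def;

      /* Command-line build time: refuse to start a domain that defines
       * IOThreads when the emulator lacks the capability. */
      static int build_iothread_commandline(const domain_def *def, bool cap_iothread)
      {
          if (def->niothreadids == 0)
              return 0;
          if (!cap_iothread) {
              fprintf(stderr, "IOThreads are not supported by this QEMU binary\n");
              return -1;
          }
          for (size_t i = 0; i < def->niothreadids; i++)
              printf("-object iothread,id=iothread%u\n",
                     def->iothreadids[i]->iothread_id);
          return 0;
      }

      /* Detect-PIDs time: if the capability check fails, drop any stale,
       * mostly-empty iothreadids[] entries left over from an older libvirtd. */
      static void prune_stale_iothreadids(domain_def *def)
      {
          for (size_t i = 0; i < def->niothreadids; i++)
              free(def->iothreadids[i]);
          free(def->iothreadids);
          def->iothreadids = NULL;
          def->niothreadids = 0;
      }

      int main(void)
      {
          domain_def def = { 0, NULL };

          if (build_iothread_commandline(&def, false) < 0)
              return 1;
          prune_stale_iothreadids(&def);
          return 0;
      }
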
      
      With these changes in place, it turned out the cputune-numatune test
      was failing because the right capability bit wasn't set in the test. So
      this takes the opportunity to fix that and to add a test that is
      expected to fail when iothreads are defined and used without the
      correct capability.
    • qemu: Use 'niothreadids' instead of 'iothreads' · 4f8e8887
      Authored by John Ferlan
      Although theoretically both should be the same value, niothreadids
      should be used in preference to iothreads when performing comparisons.
      This leaves iothreads as a purely numeric value to be saved in the
      config file. The one exception to the rule is
      virDomainIOThreadIDDefArrayInit, where the iothreadids are generated
      from the iothreads count, since iothreadids were added after the
      initial iothreads support.
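
      For illustration, a small sketch of the rule; the struct layout here is
      an assumption for the sketch, not the actual virDomainDef:

      #include <stdbool.h>
      #include <stddef.h>

      typedef struct {
          unsigned int iothread_id;
      } iothread_id_def;

      typedef struct {
          unsigned int iothreads;          /* count persisted to the config file */
          size_t niothreadids;             /* authoritative number of entries    */
          iothread_id_def **iothreadids;
      } domain_def;

      /* Comparisons and lookups iterate over niothreadids, never over the
       * configured 'iothreads' count. */
      static bool iothread_defined(const domain_def *def, unsigned int id)
      {
          for (size_t i = 0; i < def->niothreadids; i++)
              if (def->iothreadids[i]->iothread_id == id)
                  return true;
          return false;
      }

      int main(void)
      {
          domain_def def = { 0, 0, NULL };

          return iothread_defined(&def, 1) ? 1 : 0;
      }
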