1. 19 Nov 2015, 1 commit
  2. 12 Nov 2015, 1 commit
  3. 10 Nov 2015, 1 commit
  4. 04 Nov 2015, 5 commits
  5. 26 Oct 2015, 4 commits
  6. 16 Oct 2015, 2 commits
    • qemu: Fix qemu startup check for QEMU_CAPS_OBJECT_IOTHREAD · cc2d49f9
      John Ferlan committed
      https://bugzilla.redhat.com/show_bug.cgi?id=1249981
      
      When qemuDomainPinIOThread was added in commit id 'fb562614', a check
      for the IOThread capability was not needed because the check for
      iothreadpids already covered the case where IOThread support was not
      present: the iothreadpids array was only created if
      qemuProcessDetectIOThreadPIDs was able to query the monitor for
      IOThreads, which it would only do if the QEMU_CAPS_OBJECT_IOTHREAD
      capability was set.
      
      However, when iothreadids were added in commit id '8d4614a5', the
      check for iothreadpids was replaced by a search through the
      iothreadids[] array for the matching iothread_id. That left open the
      possibility of an iothreadids[] array being defined whose entries had
      only the 'iothread_id' field filled in, leaving the 'thread_id' value
      at 0 and, eventually, the cpumap entry NULL.
      
      This was because the original IOThreads commit id '72edaae7' only
      added IOThread objects at startup if IOThreads were defined and the
      emulator had the IOThreads capability. The "capability failure" check
      was only done when a disk was assigned to an IOThread in
      qemuCheckIOThreads. The reason was that the initial implementation had
      no way to dynamically add IOThreads, but it was possible to
      dynamically add a disk to the domain. So the decision was: if the
      emulator supported IOThreads, add the IOThread objects at startup;
      then, if a disk with an IOThread defined was added later, the
      capability could be checked and the attach rejected if it was missing.
      This just meant the 'iothreads' value was essentially ignored when the
      capability was absent.
      
      Eventually commit id 'a27ed6e7' allowed for the dynamic addition and
      deletion of IOThread objects. So it was no longer necessary to generate
      IOThread objects to dynamically attach a disk to. However, the startup
      and disk check code was not modified to reflect this.
      
      This patch moves the capability failure check to the point where
      IOThread objects are added to the command line. Thus a domain that
      has IOThreads defined will not be started if the emulator doesn't
      support the capability. This means that when qemuCheckIOThreads is
      called to add a disk, it is no longer necessary to check the
      capability; instead the code can use the IOThreadFind call to report
      that the requested IOThread doesn't exist.
      
      Finally, because a domain could still be running with an iothreadids[]
      array defined prior to this change (with mostly empty elements) when
      libvirtd is restarted, qemuProcessDetectIOThreadPIDs will check
      whether niothreadids is non-zero when the QEMU_CAPS_OBJECT_IOTHREAD
      capability check fails, and will remove the elements and the array if
      they exist.
      
      With these changes in place, it turned out the cputune-numatune test
      was failing because the right capability bit wasn't set in the test.
      That was the opportunity to fix it and to add a test that is expected
      to fail when iothreads are defined and used without the emulator
      having the capability. The reordered check is sketched below.
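      To picture the new ordering (fail at command-line build time, plain
      lookup at disk-attach time), here is a minimal, self-contained C
      sketch. The types and helpers (QemuCaps, buildIOThreadCommandLine,
      findIOThreadByID) are illustrative stand-ins, not libvirt's actual
      structures or signatures.

        #include <stdbool.h>
        #include <stddef.h>
        #include <stdio.h>

        /* Illustrative stand-ins for the real libvirt structures. */
        typedef struct { bool has_object_iothread; } QemuCaps;
        typedef struct { unsigned int iothread_id; int thread_id; } IOThreadID;
        typedef struct { size_t niothreadids; IOThreadID *iothreadids; } DomainDef;

        /* Startup path: refuse to build the command line when IOThreads are
         * defined but the emulator lacks the capability. */
        static int
        buildIOThreadCommandLine(const DomainDef *def, const QemuCaps *caps)
        {
            if (def->niothreadids == 0)
                return 0;                       /* nothing to emit */

            if (!caps->has_object_iothread) {
                fprintf(stderr, "IOThreads not supported by this QEMU binary\n");
                return -1;                      /* domain start fails here */
            }

            for (size_t i = 0; i < def->niothreadids; i++)
                printf("-object iothread,id=iothread%u\n",
                       def->iothreadids[i].iothread_id);
            return 0;
        }

        /* Disk-attach path: no capability check needed; just look the
         * IOThread up and report "doesn't exist" when it is not defined. */
        static IOThreadID *
        findIOThreadByID(DomainDef *def, unsigned int iothread_id)
        {
            for (size_t i = 0; i < def->niothreadids; i++)
                if (def->iothreadids[i].iothread_id == iothread_id)
                    return &def->iothreadids[i];
            return NULL;
        }

        int main(void)
        {
            IOThreadID ids[] = { { .iothread_id = 1 }, { .iothread_id = 2 } };
            DomainDef def = { .niothreadids = 2, .iothreadids = ids };
            QemuCaps caps = { .has_object_iothread = false };

            /* Start is refused because the capability is missing. */
            if (buildIOThreadCommandLine(&def, &caps) < 0)
                return 1;
            /* Attaching a disk to iothread 3 would fail: it doesn't exist. */
            return findIOThreadByID(&def, 3) ? 0 : 2;
        }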
    • qemu: Use 'niothreadids' instead of 'iothreads' · 4f8e8887
      John Ferlan committed
      Although in theory both should hold the same value, niothreadids
      should be used in favor of iothreads when performing comparisons. This
      leaves iothreads as a purely numeric value to be saved in the config
      file.  The one exception to the rule is virDomainIOThreadIDDefArrayInit,
      where the iothreadids are generated from the iothreads count, since
      iothreadids were added after the initial iothreads support. The
      convention is sketched below.
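      A minimal sketch of the convention, where the array length drives
      lookups and the plain count is only persisted; the struct and helper
      below are illustrative stand-ins, not the real virDomainDef layout.

        #include <stdbool.h>
        #include <stddef.h>

        /* Illustrative stand-ins for the real definitions. */
        typedef struct { unsigned int iothread_id; } IOThreadID;
        typedef struct {
            unsigned int iothreads;   /* plain count, kept for the config file */
            size_t niothreadids;      /* authoritative for comparisons/walks */
            IOThreadID **iothreadids;
        } DomainDef;

        bool
        domainHasIOThread(const DomainDef *def, unsigned int id)
        {
            /* Walk niothreadids, not the 'iothreads' count. */
            for (size_t i = 0; i < def->niothreadids; i++)
                if (def->iothreadids[i]->iothread_id == id)
                    return true;
            return false;
        }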
  7. 08 Oct 2015, 1 commit
  8. 05 Oct 2015, 2 commits
  9. 24 Sep 2015, 2 commits
  10. 23 Sep 2015, 1 commit
  11. 18 Sep 2015, 1 commit
  12. 16 Sep 2015, 1 commit
    • virfile: Check for existence of dir in virFileDeleteTree · b421a708
      John Ferlan committed
      Commit id 'f1f68ca3' added code to remove the directory paths for
      auto-generated sockets, but that code could be called before the
      paths were created, resulting in error messages from
      virFileDeleteTree indicating that the file doesn't exist.

      Rather than forcing all callers to perform the non-NULL and existence
      checks, modify the virFileDeleteTree API to silently ignore a NULL
      input and non-existent directory trees, as sketched below.
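      A self-contained sketch of the tolerant behaviour, assuming POSIX;
      deleteTree is a simplified stand-in for virFileDeleteTree, not its
      real implementation.

        #include <errno.h>
        #include <sys/stat.h>

        /* Simplified stand-in: succeed silently when the path is NULL or the
         * directory tree was never created; fail only on real errors. */
        int
        deleteTree(const char *path)
        {
            struct stat sb;

            if (!path)
                return 0;                 /* nothing to do */

            if (stat(path, &sb) < 0) {
                if (errno == ENOENT)
                    return 0;             /* tree never existed: not an error */
                return -1;
            }

            /* ... recursively remove the directory contents here ... */
            return 0;
        }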
  13. 14 Sep 2015, 1 commit
  14. 09 Sep 2015, 1 commit
  15. 26 Aug 2015, 1 commit
  16. 24 Aug 2015, 1 commit
  17. 03 Aug 2015, 1 commit
    • qemu: Remove double unlock for domains · c43c661f
      Martin Kletzander committed
      The virDomainObjListRemove() function unlocks the domain it is given,
      due to legacy code.  Because of that code, which should be refactored,
      the final virObjectUnlock() cannot simply be removed, so instead lock
      the domain right back for qemu for now.  All calls to
      qemuDomainRemoveInactive() are followed by code that unlocks the
      domain again, plus the domain should be locked during
      qemuDomainObjEndJob(), so the right place to lock it is right after
      virDomainObjListRemove(); see the sketch below.

      The only place where this would cause a problem is the autodestroy
      callback, so we need to take another reference there and unref+unlock
      it afterwards.  Luckily, returning NULL from that function doesn't
      mean an error; it only means that the domain no longer needs to be
      unlocked.
      Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
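      The locking dance can be shown with a minimal pthread-based sketch;
      listRemove mirrors the described behaviour of virDomainObjListRemove
      (it unlocks the object it is handed) and removeInactive re-locks
      immediately afterwards.  All names here are illustrative, not the
      real libvirt APIs.

        #include <pthread.h>

        typedef struct { pthread_mutex_t lock; } DomainObj;

        /* Mirrors the legacy behaviour: the object comes in locked and
         * leaves this function unlocked. */
        static void
        listRemove(DomainObj *dom)
        {
            /* ... drop the domain from the list ... */
            pthread_mutex_unlock(&dom->lock);
        }

        /* Callers expect the object to stay locked (e.g. for EndJob), so
         * take the lock right back after listRemove(). */
        static void
        removeInactive(DomainObj *dom)
        {
            listRemove(dom);                 /* unlocks as a side effect */
            pthread_mutex_lock(&dom->lock);  /* take it right back */
        }

        int main(void)
        {
            DomainObj dom = { .lock = PTHREAD_MUTEX_INITIALIZER };
            pthread_mutex_lock(&dom.lock);
            removeInactive(&dom);
            pthread_mutex_unlock(&dom.lock); /* callers unlock again later */
            return 0;
        }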
  18. 31 Jul 2015, 2 commits
  19. 14 Jul 2015, 2 commits
    • qemu: process: Improve update of maximum balloon state at startup · c212e0c7
      Peter Krempa committed
      In commit 641a145d I added code that resets the balloon memory value
      to the full size prior to resuming the vCPUs, since the size certainly
      had not been reduced at that point.

      Since qemuProcessStart is also used in code paths with already booted
      guests (migration, save/restore), that assumption is not entirely
      true, as the guest might already have been running before.

      This patch adds a function that queries the monitor rather than
      assuming the full size, since a balloon event would not be reissued
      when we are recovering a saved migration state (see the sketch below).

      Additionally, the new function is also used when reconnecting to a VM
      after a libvirtd restart, since we might have missed a few balloon
      events while libvirtd was not running.
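      A sketch of the decision described above: ask the monitor for the
      current balloon size instead of assuming the full size.
      queryBalloonFromMonitor is a hypothetical stand-in for the monitor
      call, and the fallback to the full size is an assumption of this
      sketch, not libvirt's actual API.

        #include <stdbool.h>

        typedef struct {
            unsigned long long maxmem;   /* configured maximum (KiB) */
            unsigned long long curmem;   /* last known balloon size (KiB) */
        } DomainDef;

        /* Hypothetical monitor query; the real code talks to QEMU. */
        static bool
        queryBalloonFromMonitor(unsigned long long *actual)
        {
            *actual = 1024 * 1024;       /* pretend the guest ballooned to 1 GiB */
            return true;
        }

        /* Refresh the balloon state on (re)connect: prefer what the monitor
         * reports, since events emitted while libvirtd was down are lost. */
        static void
        refreshBalloon(DomainDef *def)
        {
            unsigned long long actual;

            if (queryBalloonFromMonitor(&actual))
                def->curmem = actual;
            else
                def->curmem = def->maxmem;   /* assume it was never shrunk */
        }

        int main(void)
        {
            DomainDef def = { .maxmem = 4ULL * 1024 * 1024, .curmem = 0 };
            refreshBalloon(&def);
            return def.curmem ? 0 : 1;
        }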
    • nodeinfo: Add sysfs_prefix to nodeGetCPUCount · f1a43a0f
      John Ferlan committed
      Add the sysfs_prefix argument to the call to allow tests to set the
      path to something other than SYSFS_SYSTEM_PATH, as sketched below.
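      A small sketch of the prefix plumbing, assuming the default is
      "/sys/devices/system" (SYSFS_SYSTEM_PATH); the counting logic is
      simplified and getCPUCount is an illustrative name, not the real
      implementation.

        #include <dirent.h>
        #include <stdio.h>

        #define SYSFS_SYSTEM_PATH "/sys/devices/system"

        /* Count cpuN entries under <prefix>/cpu; tests pass a fake prefix,
         * production callers pass NULL and get the real sysfs tree. */
        int
        getCPUCount(const char *sysfs_prefix)
        {
            const char *prefix = sysfs_prefix ? sysfs_prefix : SYSFS_SYSTEM_PATH;
            char path[4096];
            DIR *dir;
            struct dirent *ent;
            int count = 0;

            snprintf(path, sizeof(path), "%s/cpu", prefix);
            if (!(dir = opendir(path)))
                return -1;

            while ((ent = readdir(dir))) {
                unsigned int id;
                if (sscanf(ent->d_name, "cpu%u", &id) == 1)
                    count++;
            }
            closedir(dir);
            return count;
        }

        int main(void)
        {
            printf("%d\n", getCPUCount(NULL));
            return 0;
        }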
  20. 13 Jul 2015, 1 commit
    • qemuProcessHandleMigrationStatus: Update migration status more frequently · 45cc2fca
      Michal Privoznik committed
      After Jirka's migration patches, libvirt listens for migration events
      from qemu instead of actively polling the monitor. There is, however,
      a small regression (introduced in 6d2edb6a). The problem is that the
      current status of a migration job is updated in
      qemuProcessHandleMigrationStatus if and only if a migration job was
      started, yet eventually any asynchronous job may result in a
      migration. Therefore, since such a job is not strictly a migration
      job, the internal state was not updated and later checks failed (the
      corrected handling is sketched after this entry):
      
        virsh # save fedora22 /tmp/fedora22_ble.save
        error: Failed to save domain fedora22 to /tmp/fedora22_ble.save
        error: operation failed: domain save job: is not active
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
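      The fix can be pictured with a stub event handler: instead of updating
      the stored status only when the async job is specifically an outgoing
      migration, it updates it whenever any async job is active. The enum
      and handler below are simplified stand-ins, not libvirt's real types.

        #include <stdio.h>

        typedef enum {
            ASYNC_JOB_NONE,
            ASYNC_JOB_MIGRATION_OUT,
            ASYNC_JOB_SAVE,          /* 'virsh save' drives migration to a file */
            ASYNC_JOB_DUMP,
        } AsyncJob;

        typedef struct {
            AsyncJob asyncJob;
            int migrationStatus;     /* value carried by the QEMU event */
        } DomainPriv;

        /* Before the fix only MIGRATION_OUT jobs picked up the event, so a
         * save job never saw its status change; now any active async job
         * does. */
        static void
        handleMigrationStatus(DomainPriv *priv, int status)
        {
            if (priv->asyncJob == ASYNC_JOB_NONE) {
                printf("no async job in progress, ignoring event\n");
                return;
            }
            priv->migrationStatus = status;
        }

        int main(void)
        {
            DomainPriv priv = { .asyncJob = ASYNC_JOB_SAVE, .migrationStatus = 0 };
            handleMigrationStatus(&priv, 1 /* e.g. "active" */);
            return priv.migrationStatus == 1 ? 0 : 1;
        }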
  21. 10 Jul 2015, 7 commits
  22. 19 Jun 2015, 1 commit