1. 10 Nov, 2015 (2 commits)
  2. 05 Nov, 2015 (2 commits)
  3. 04 Nov, 2015 (5 commits)
  4. 29 Oct, 2015 (1 commit)
  5. 27 Oct, 2015 (1 commit)
  6. 26 Oct, 2015 (4 commits)
  7. 21 Oct, 2015 (1 commit)
  8. 16 Oct, 2015 (4 commits)
    • Close the source fd if the destination qemu exits during tunnelled migration · b39a1fe1
      Authored by Shivaprasad G Bhat
      Tunnelled migration can hang if the destination qemu exits despite all the
      ABI checks. This happens whenever the destination qemu exits before the
      source qemu notices that the transfer is complete. The savevm state checks
      at runtime can fail on the destination and cause qemu to error out, but
      the source qemu can't notice it, as the EPIPE is not propagated to it.
      qemuMigrationIOFunc() notices the stream being broken from virStreamSend()
      and cleans up the stream alone, so qemuMigrationWaitForCompletion() never
      reaches 100% transfer completion. It never breaks out either, since the
      ssh connection to the destination is healthy and the source qemu still
      thinks the migration is ongoing: the FD it transfers to is never closed
      or broken. So the migration hangs forever, and even Ctrl-C on virsh
      migrate is not honoured. Close the source-side FD when there is an error
      in the stream; that way, the source qemu updates itself and
      qemuMigrationWaitForCompletion() notices the failure.
      
      To be safe, close the FD for all kinds of errors. The error message is
      not copied for EPIPE, so that the destination's error is copied later
      instead.
      
      Note:
      Reproducible with repeated migrations between Power hosts running in different
      subcores-per-core modes.
      Signed-off-by: Shivaprasad G Bhat <sbhat@linux.vnet.ibm.com>
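      The mechanism the fix relies on can be sketched with a plain pipe (a minimal standalone sketch, not libvirt code; the function name is made up): once the reader side is closed, the writer's next write() fails with EPIPE, which is how the source qemu notices the broken tunnel after the source-side FD is closed.

      ```c
      #include <errno.h>
      #include <signal.h>
      #include <unistd.h>

      /* Sketch (not libvirt code): after the reader end of a pipe is closed,
       * the writer's next write() fails with EPIPE -- the mechanism that lets
       * the source qemu notice the broken tunnel once the FD is closed. */
      int write_after_reader_closed(void)
      {
          int fds[2];
          char byte = 'x';

          signal(SIGPIPE, SIG_IGN);  /* get EPIPE instead of a fatal signal */
          if (pipe(fds) < 0)
              return -1;
          close(fds[0]);             /* simulate closing the source-side FD */
          if (write(fds[1], &byte, 1) < 0 && errno == EPIPE) {
              close(fds[1]);
              return 0;              /* writer observed the broken stream */
          }
          close(fds[1]);
          return -1;
      }
      ```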
    • qemu: Fix qemu startup check for QEMU_CAPS_OBJECT_IOTHREAD · cc2d49f9
      Authored by John Ferlan
      https://bugzilla.redhat.com/show_bug.cgi?id=1249981
      
      When qemuDomainPinIOThread was added in commit id 'fb562614', a check
      for the IOThread capability was not needed since a check for iothreadpids
      covered the condition where the support for IOThreads was not present.
      The iothreadpids array was only created if qemuProcessDetectIOThreadPIDs
      was able to query the monitor for IOThreads. It would only do that if
      the QEMU_CAPS_OBJECT_IOTHREAD capability was set.
      
      However, when iothreadids were added in commit id '8d4614a5', the check
      for iothreadpids was replaced by a search through the iothreadids[] array
      for the matching iothread_id. That left open the possibility that an
      iothreadids[] array was defined, but its entries essentially pointed to
      elements with only the 'iothread_id' defined, leaving the 'thread_id'
      value 0 and eventually the cpumap entry NULL.
      
      This was because the original IOThreads commit id '72edaae7' only
      checked if IOThreads were defined and if the emulator had the IOThreads
      capability, then IOThread objects were added at startup. The "capability
      failure" check was only done when a disk was assigned to an IOThread in
      qemuCheckIOThreads. This was because the initial implementation had no way
      to dynamically add IOThreads, but it was possible to dynamically add a
      disk to the domain. So the decision was if the domain supported it, then
      add the IOThread objects. Then if a disk with an IOThread defined was
      added, it could check the capability and fail to add if not there. This
      just meant the 'iothreads' value was essentially ignored.
      
      Eventually commit id 'a27ed6e7' allowed for the dynamic addition and
      deletion of IOThread objects. So it was no longer necessary to generate
      IOThread objects to dynamically attach a disk to. However, the startup
      and disk check code was not modified to reflect this.
      
      This patch will move the capability failure check to when IOThread
      objects are being added to the command line. Thus a domain that has
      IOThreads defined will not be started if the emulator doesn't support
      the capability. This means when qemuCheckIOThreads is called to add
      a disk, it's no longer necessary to check the capability. Instead the
      code can use the IOThreadFind call to indicate that the IOThread
      doesn't exist.
      
      Finally, because a domain with iothreadids[] defined prior to this change
      (each element mostly empty) could still be running when libvirtd is
      restarted, qemuProcessDetectIOThreadPIDs will check whether there are
      niothreadids when the QEMU_CAPS_OBJECT_IOTHREAD capability check fails,
      and remove the elements and the array if they exist.
      
      With these changes in place, it turns out the cputune-numatune test was
      failing because the right bit wasn't set in the test. So the opportunity
      was used to fix that and to create a test that is expected to fail when
      some sort of iothreads is defined and used without the correct
      capability.
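      The startup-time refusal described above can be sketched roughly as follows (illustrative names only, not libvirt's actual structures or functions): when IOThreads are defined but the emulator lacks QEMU_CAPS_OBJECT_IOTHREAD, command-line building fails instead of silently ignoring the setting.

      ```c
      #include <stdbool.h>
      #include <stddef.h>
      #include <stdio.h>

      /* Hypothetical sketch of the startup-time check (names are made up). */
      typedef struct {
          size_t niothreadids;            /* IOThreads defined in the config */
          bool has_object_iothread_cap;   /* stands in for QEMU_CAPS_OBJECT_IOTHREAD */
      } demo_domain;

      int demo_build_iothread_commandline(const demo_domain *dom)
      {
          if (dom->niothreadids > 0 && !dom->has_object_iothread_cap) {
              fprintf(stderr, "IOThreads not supported by this QEMU binary\n");
              return -1;   /* fail startup instead of silently ignoring them */
          }
          /* ... would emit one -object iothread,id=... per defined IOThread ... */
          return 0;
      }
      ```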
    • qemu: Check for niothreads == 0 in qemuSetupCgroupForIOThreads · 10604cb8
      Authored by John Ferlan
      If there are no IOThreads defined, there is no sense in making the other
      checks.
    • qemu: Use 'niothreadids' instead of 'iothreads' · 4f8e8887
      Authored by John Ferlan
      Although theoretically both should be the same value, the niothreadids
      should be used in favor of iothreads when performing comparisons. This
      leaves the iothreads as a purely numeric value to be saved in the config
      file.  The one exception to the rule is virDomainIOThreadIDDefArrayInit
      where the iothreadids are being generated from the iothreads count since
      iothreadids were added after initial iothreads support.
  9. 09 Oct, 2015 (1 commit)
    • virJSONValueArraySize: return ssize_t · 4f77c48c
      Authored by Michal Privoznik
      The internal representation of a JSON array counts the items in size_t.
      However, for some reason, when asking for the count it is reported as
      int. Firstly, we need the function to return a signed type, as it
      returns -1 on an error. But not every system has int the same size as
      size_t. Therefore, let's return ssize_t.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
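      The signed-size pattern the commit describes can be sketched like this (a minimal standalone sketch; the struct and function names are made up, not libvirt's API): the count is stored as size_t, but the getter returns ssize_t so it can report -1 on error without truncating large counts the way int could.

      ```c
      #include <stddef.h>
      #include <sys/types.h>   /* ssize_t */

      /* Illustrative stand-in for a JSON array: count kept as size_t. */
      typedef struct {
          size_t nitems;
      } demo_json_array;

      /* Return the item count, or -1 on error -- hence the signed ssize_t,
       * which matches size_t's width unlike a plain int. */
      ssize_t demo_array_size(const demo_json_array *arr)
      {
          if (!arr)
              return -1;       /* error path needs a negative value */
          return (ssize_t)arr->nitems;
      }
      ```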
  10. 08 Oct, 2015 (1 commit)
  11. 07 Oct, 2015 (7 commits)
  12. 06 Oct, 2015 (5 commits)
  13. 05 Oct, 2015 (3 commits)
  14. 02 Oct, 2015 (3 commits)
    • qemu: Use memory-backing-file only when needed · 41c2aa72
      Authored by Martin Kletzander
      We are using memory-backing-file even when it's not needed, for example
      when the user requests hugepages for memory backing but does not specify
      any page size or memory node pinning.  This causes migrations to fail
      when migrating from an older libvirt that did not do this.  So, similarly
      to commit 7832fac8, which does it for memory-backend-ram, this commit
      makes it more generic and backend-agnostic: the backend is not used if
      there is no specific hugepage size requested, no nodeset the memory
      should be bound to, no memory access change required, and so on.
      
      Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1266856
      Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
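      The backend-needed decision the commit describes can be sketched as a simple predicate (field and function names here are illustrative assumptions, not libvirt's): a memory backend object is emitted only when some property cannot be expressed without one.

      ```c
      #include <stdbool.h>

      /* Hedged sketch of the "is a memory backend actually needed?" decision;
       * field names are made up for illustration. */
      typedef struct {
          bool specific_pagesize;   /* explicit hugepage size requested */
          bool nodeset_pinning;     /* memory bound to specific host nodes */
          bool access_change;       /* shared/private memory access override */
      } demo_memnode_cfg;

      /* Only use a backing object when a property requires it; otherwise the
       * plain, migration-compatible command line suffices. */
      bool demo_membacking_needed(const demo_memnode_cfg *cfg)
      {
          return cfg->specific_pagesize ||
                 cfg->nodeset_pinning ||
                 cfg->access_change;
      }
      ```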
    • qemu: Add -mem-path even with numa · a2dba3ce
      Authored by Martin Kletzander
      Since the introduction of the memory-backend-file object until now, we
      only added '-mem-path' for non-NUMA guests, and we used the parameters
      of the memory-backend-file object to specify the path to the hugetlbfs
      mount.  But hugepages can also be used without the memory-backend-file
      object, as they were before its introduction.  Let's just get this part
      of the code back and properly append '-mem-path' for NUMA guests as
      well, but only when the memory backend is not needed.
      
      This parameter is already applied when no NUMA is requested, and because
      we still use memory-backend-file unconditionally for hugepage-backed
      NUMA guests, this should not fire until later.
      Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
    • qemu: Extract -mem-path building into its own function · ad8ab88c
      Authored by Martin Kletzander
      That function is called qemuBuildMemPathStr() and will be used in other
      places in the future.  The change in the test suite is correct because
      -mem-prealloc only makes sense together with -mem-path (per the qemu
      documentation, html/qemu-doc.html).
      Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
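      A helper in the spirit of the extracted qemuBuildMemPathStr() could look like this (a hedged sketch: the name and plain snprintf buffer handling are illustrative, not libvirt's buffer API): it appends -mem-prealloc together with -mem-path, since the former only makes sense with the latter.

      ```c
      #include <stddef.h>
      #include <stdio.h>

      /* Illustrative sketch of a -mem-path building helper (made-up name and
       * buffer handling).  Emits -mem-prealloc alongside -mem-path, as the
       * two options belong together. */
      int demo_build_mem_path(char *buf, size_t buflen, const char *hugepage_path)
      {
          if (!hugepage_path)
              return -1;
          if (snprintf(buf, buflen, "-mem-prealloc -mem-path %s",
                       hugepage_path) >= (int)buflen)
              return -1;   /* output would have been truncated */
          return 0;
      }
      ```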