1. 14 Jun, 2019 (1 commit)
    • qemu: Try harder to remove pr-helper object and kill pr-helper process · 7a232286
      Authored by Jie Wang
      If libvirt receives a DISCONNECTED event and sets prDaemonRunning to
      false while qemuDomainRemoveDiskDevice() is running, then
      qemuDomainRemoveDiskDevice() fails to remove the pr-helper object
      because prDaemonRunning is false. But removing that check from
      qemuHotplugRemoveManagedPR() is not enough: after the object is
      removed through the monitor, qemuProcessKillManagedPRDaemon() is
      called, which contains the same check. Thus the pr-helper process
      might be left behind.
      Signed-off-by: Jie Wang <wangjie88@huawei.com>
      Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
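The race described above can be modelled with a few booleans. This is a minimal standalone sketch; the struct and field names are hypothetical stand-ins for libvirt's internal state, not its real API:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the race: names are invented, not the real libvirt
 * structures. */
typedef struct {
    bool prDaemonRunning;   /* cleared when the DISCONNECTED event arrives */
    bool objectRemoved;     /* pr-helper object removed via the monitor */
    bool processKilled;     /* pr-helper process killed and reaped */
} PrState;

/* Old behaviour: both cleanup steps bail out when the flag is already
 * false, so the helper process is left behind. */
static void remove_pr_old(PrState *s)
{
    if (!s->prDaemonRunning)
        return;                 /* early return skips all cleanup */
    s->objectRemoved = true;
    if (!s->prDaemonRunning)    /* same check inside the kill path */
        return;
    s->processKilled = true;
}

/* Fixed behaviour: try harder and clean up unconditionally. */
static void remove_pr_fixed(PrState *s)
{
    s->objectRemoved = true;    /* remove the object through the monitor */
    s->processKilled = true;    /* kill the helper even if the flag is false */
    s->prDaemonRunning = false;
}
```

The fix amounts to the second variant: perform both cleanup steps unconditionally instead of gating each on prDaemonRunning.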
  2. 12 Jun, 2019 (1 commit)
    • qemu: Use proper block job name when reconnecting to VM · 56c6893f
      Authored by Peter Krempa
      The hash table returned by qemuMonitorGetAllBlockJobInfo is organized by
      the frontend name (which skips the 'drive-' prefix). While our code
      properly matches the jobs to the disks, qemu needs the full job name
      including the 'drive-' prefix to be able to identify jobs.
      
      Fix this by adding an argument to qemuMonitorGetAllBlockJobInfo which
      does not modify the job name before filling the hash.
      
      This fixes a regression where users would not be able to cancel/pivot
      block jobs after restarting libvirtd while a blockjob is running.
      Signed-off-by: Peter Krempa <pkrempa@redhat.com>
      Reviewed-by: Ján Tomko <jtomko@redhat.com>
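The mismatch is easy to illustrate with a toy helper that rebuilds the full job name QEMU expects from the shortened frontend name. The helper is purely illustrative, not the actual fix (which instead keeps the unmodified name when filling the hash):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: re-add the "drive-" prefix that was stripped
 * when the frontend name was used as a hash key, producing the full
 * job name QEMU can identify. */
static void qemu_full_job_name(const char *frontend, char *buf, size_t len)
{
    snprintf(buf, len, "drive-%s", frontend);
}
```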
  3. 06 Jun, 2019 (1 commit)
  4. 04 Jun, 2019 (4 commits)
    • qemu: Drop cleanup label from qemuProcessInitCpuAffinity() · de563ebc
      Authored by Andrea Bolognani
      We're using VIR_AUTOPTR() for everything now, and the
      cleanup section was not doing anything useful anyway.
      Signed-off-by: Andrea Bolognani <abologna@redhat.com>
      Reviewed-by: Ján Tomko <jtomko@redhat.com>
    • qemu: Fix leak in qemuProcessInitCpuAffinity() · 2f2254c7
      Authored by Andrea Bolognani
      In two out of three scenarios we are cleaning up properly after
      ourselves, but commit 5f2212c0 has changed the remaining one
      in a way that caused it to start leaking cpumapToSet.
      
      Refactor the logic so that cpumapToSet is always a freshly
      allocated bitmap that gets cleaned up automatically thanks to
      VIR_AUTOPTR(); this also allows us to remove the hostcpumap
      variable.
      Reported-by: John Ferlan <jferlan@redhat.com>
      Signed-off-by: Andrea Bolognani <abologna@redhat.com>
      Reviewed-by: Ján Tomko <jtomko@redhat.com>
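The VIR_AUTOPTR() mechanism mentioned above is built on the compiler's cleanup attribute. Below is a minimal standalone model (GCC/Clang) with invented names, showing why a freshly allocated pointer annotated this way cannot leak on any return path:

```c
#include <assert.h>
#include <stdlib.h>

/* Minimal model of VIR_AUTOPTR(): a variable annotated with
 * __attribute__((cleanup)) is released on every exit path.
 * A counter makes the frees observable. */
static int frees = 0;

static void auto_free_str(char **p)
{
    free(*p);
    *p = NULL;
    frees++;
}
#define AUTOPTR_STR __attribute__((cleanup(auto_free_str)))

/* Invented stand-in for qemuProcessInitCpuAffinity(): cpumapToSet is
 * always freshly allocated and freed automatically, even on the error
 * path that previously leaked. */
static int use_bitmap(int fail_early)
{
    AUTOPTR_STR char *cpumapToSet = malloc(32);
    if (!cpumapToSet)
        return -1;
    if (fail_early)
        return -1;   /* freed here too: no leak on the error path */
    cpumapToSet[0] = 1;
    return 0;
}
```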
    • qemu: Fix qemuProcessInitCpuAffinity() · 5f2212c0
      Authored by Andrea Bolognani
      Ever since the feature was introduced with commit 0f8e7ae3,
      it has contained a logic error in that it attempted to use a NUMA
      node map where a CPU map was expected.
      
      Because of that, guests using <numatune> might fail to start:
      
        # virsh start guest
        error: Failed to start domain guest
        error: cannot set CPU affinity on process 40055: Invalid argument
      
      This was particularly easy to trigger on POWER 8 machines, where
      secondary threads always show up as offline in the host: having
      
        <numatune>
          <memory mode='strict' placement='static' nodeset='1'/>
        </numatune>
      
      in the guest configuration, for example, would result in libvirt
      trying to set the process affinity so that it would prefer
      running on CPU 1, but since that's a secondary thread and thus
      shows up as offline, the operation would fail, and so would
      starting the guest.
      
      Use the newly introduced virNumaNodesetToCPUset() to convert the
      NUMA node map to a CPU map, which in the example above would be
      48,56,64,72,80,88 - a valid input for virProcessSetAffinity().
      
      https://bugzilla.redhat.com/show_bug.cgi?id=1703661
      Signed-off-by: Andrea Bolognani <abologna@redhat.com>
      Reviewed-by: Ján Tomko <jtomko@redhat.com>
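The node-map-vs-CPU-map confusion can be seen with a toy version of the conversion. The topology below is hard-coded to match the POWER 8 example from the message (node 1 owning CPUs 48,56,64,72,80,88); the real virNumaNodesetToCPUset() queries the host topology instead:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define MAX_CPUS 96

/* Hard-coded toy topology: NUMA node 1 owns these CPUs, as in the
 * commit's POWER 8 example. */
static const int node1_cpus[] = { 48, 56, 64, 72, 80, 88 };

/* Expand a (single-node) NUMA node map into the map of its CPUs,
 * which is the kind of input virProcessSetAffinity() expects.
 * Passing the node map directly (CPU "1") was the bug. */
static void nodeset_to_cpuset(bool node1_set, bool cpus[MAX_CPUS])
{
    for (int i = 0; i < MAX_CPUS; i++)
        cpus[i] = false;
    if (node1_set)
        for (size_t i = 0; i < sizeof(node1_cpus) / sizeof(node1_cpus[0]); i++)
            cpus[node1_cpus[i]] = true;
}
```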
    • qemu: Check TSC frequency before starting QEMU · 7da62c91
      Authored by Jiri Denemark
      When migrating a domain with invtsc CPU feature enabled, the TSC
      frequency of the destination host must match the frequency used when the
      domain was started on the source host or the destination host has to
      support TSC scaling.
      
      If the frequencies do not match and the destination host does not
      support TSC scaling, QEMU will fail to set the right TSC frequency when
      starting vCPUs on the destination and the migration will fail. However,
      this happens quite late, after both hosts may have spent significant
      time transferring memory and perhaps even storage data.
      
      By adding the check to libvirt we can let migration fail before any data
      starts to be sent over. If for some reason libvirt is unable to detect
      the host's TSC frequency or scaling support, we'll just let QEMU try and
      the migration will either succeed or fail later.
      
      Luckily, we mandate TSC frequency to be explicitly set in the domain XML
      to even allow migration of domains with invtsc. We can just check
      whether the requested frequency is compatible with the current host
      before starting QEMU.
      
      https://bugzilla.redhat.com/show_bug.cgi?id=1641702
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
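The decision logic described above fits in a few lines. This is a hedged sketch with invented names, not the actual libvirt helper:

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the pre-start check: the configured invtsc frequency must
 * match the host's TSC frequency exactly, or the host must support
 * TSC scaling; if detection failed, let QEMU try anyway. */
static bool tsc_freq_compatible(unsigned long long required_hz,
                                unsigned long long host_hz,
                                bool host_has_tsc_scaling)
{
    if (host_hz == 0)            /* host frequency could not be detected */
        return true;
    if (required_hz == host_hz)  /* exact match: always fine */
        return true;
    return host_has_tsc_scaling; /* mismatch tolerable only with scaling */
}
```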
  5. 27 May, 2019 (1 commit)
  6. 30 Apr, 2019 (1 commit)
  7. 18 Apr, 2019 (2 commits)
    • qemu: Set up EMULATOR thread and cpuset.mems before exec()-ing qemu · 0eaa4716
      Authored by Michal Privoznik
      It's funny how this went unnoticed for such a long time. Long
      story short, if a domain is configured with
      VIR_DOMAIN_NUMATUNE_MEM_STRICT, libvirt doesn't really honour
      that. This is because of 7e72ac78, after which libvirt allowed
      qemu to allocate memory just anywhere and only afterwards used
      some magic involving cpuset.memory_migrate and cpuset.mems to
      move the memory to the desired NUMA nodes. This was done to
      work around a KVM bug where KVM would fail if there wasn't a
      DMA zone available on the NUMA node. Well, while the workaround
      may have stopped libvirt tickling the KVM bug, it also introduced
      a bug on the libvirt side: if there is not enough memory on the
      configured NUMA node(s), any attempt to start a domain should
      fail, but because of the way we play with guest memory, domains
      start just happily.
      
      The solution is to move the child we've just forked into emulator
      cgroup, set up cpuset.mems and exec() qemu only after that.
      
      This basically reverts 7e72ac78, which was a workaround
      for a kernel bug. That bug has apparently been fixed, because
      I've tested this successfully with a recent kernel.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
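The ordering the commit establishes can be sketched as a simple step log; the step names are illustrative, standing in for the real fork/cgroup/exec sequence:

```c
#include <assert.h>
#include <string.h>

/* Step log modelling the fixed ordering; names are illustrative,
 * not real libvirt function names. */
static const char *steps[4];
static int nsteps = 0;

static void step(const char *name)
{
    steps[nsteps++] = name;
}

/* Child setup: join the emulator cgroup and write cpuset.mems BEFORE
 * exec(), so qemu's very first allocations already obey the strict
 * NUMA policy instead of being migrated afterwards. */
static void start_child_fixed(void)
{
    step("fork");
    step("move child into emulator cgroup");
    step("set cpuset.mems");
    step("exec qemu");
}
```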
    • virt drivers: don't handle type=network after resolving actual network type · 2f5e6502
      Authored by Daniel P. Berrangé
      The call to resolve the actual network type will turn any NICs with
      type=network into one of the other types. Thus there should be no need
      to handle type=network in later switch() statements that dispatch on
      the actual type.
      Reviewed-by: Cole Robinson <crobinso@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
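A toy enum makes the invariant concrete: after resolution, the placeholder type can no longer appear, so later switch() statements need no case for it. The names and the always-bridge resolution below are purely illustrative:

```c
#include <assert.h>

/* Toy enum; names are invented, not libvirt's virDomainNetType. */
typedef enum {
    NET_TYPE_NETWORK,   /* placeholder type, resolved away before use */
    NET_TYPE_BRIDGE,
    NET_TYPE_DIRECT,
} NetType;

/* Hypothetical resolver: after this runs, NET_TYPE_NETWORK can no
 * longer reach later switch() statements. */
static NetType resolve_actual_type(NetType t)
{
    return t == NET_TYPE_NETWORK ? NET_TYPE_BRIDGE : t;
}
```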
  8. 16 Apr, 2019 (3 commits)
  9. 13 Apr, 2019 (1 commit)
  10. 04 Apr, 2019 (3 commits)
  11. 22 Mar, 2019 (1 commit)
  12. 12 Mar, 2019 (1 commit)
  13. 25 Feb, 2019 (2 commits)
  14. 20 Feb, 2019 (18 commits)