1. 30 April 2019, 1 commit
  2. 18 April 2019, 2 commits
    • qemu: Set up EMULATOR thread and cpuset.mems before exec()-ing qemu · 0eaa4716
      Michal Privoznik committed
      It's funny how this went unnoticed for such a long time. Long
      story short, if a domain is configured with
      VIR_DOMAIN_NUMATUNE_MEM_STRICT, libvirt doesn't really honour
      that. This is because of 7e72ac78, after which libvirt allowed
      qemu to allocate its memory just anywhere and only afterwards
      used some magic involving cpuset.memory_migrate and cpuset.mems
      to move the memory to the desired NUMA nodes. This was done to
      work around a KVM bug where KVM would fail if there wasn't a
      DMA zone available on the NUMA node. Well, while the workaround
      might have stopped libvirt tickling the KVM bug, it also
      introduced a bug on the libvirt side: if there is not enough
      memory on the configured NUMA node(s), any attempt to start a
      domain should fail. Because of the way we play with guest
      memory, domains start just happily anyway.
      
      The solution is to move the child we've just forked into the
      emulator cgroup, set up cpuset.mems, and only then exec() qemu.
      
      This basically reverts 7e72ac78, which was a workaround for a
      kernel bug. That bug has apparently been fixed, because I've
      tested this successfully with a recent kernel.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
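      To make the ordering concrete, here is a minimal sketch of the
      fork / cgroup-attach / exec() pattern, not libvirt's actual
      code: the cgroup path, the NUMA node number and the qemu
      invocation are illustrative assumptions, and it uses the plain
      cgroup v1 cpuset filesystem interface.

        #include <stdio.h>
        #include <unistd.h>
        #include <sys/types.h>
        #include <sys/wait.h>

        static int write_str(const char *path, const char *val)
        {
            FILE *fp = fopen(path, "w");
            int ok = fp && fputs(val, fp) != EOF;
            if (fp && fclose(fp) == EOF)
                ok = 0;
            return ok ? 0 : -1;
        }

        int main(void)
        {
            /* Assumed to exist already, with cpuset.cpus populated at
             * creation time (cpuset v1 refuses tasks otherwise). */
            const char *cg = "/sys/fs/cgroup/cpuset/machine/qemu-demo/emulator";
            char path[256], pidstr[32];
            pid_t child = fork();

            if (child == 0) {
                /* Child: a real implementation would block on a pipe
                 * until the parent finishes cgroup setup; a sleep
                 * keeps the sketch short. */
                sleep(1);
                execlp("qemu-system-x86_64", "qemu-system-x86_64",
                       "-m", "1024", (char *)NULL);
                _exit(127);   /* exec failed */
            }

            /* Parent: pin allocations to NUMA node 0 and attach the
             * child before it exec()s, so qemu never allocates
             * elsewhere. */
            snprintf(path, sizeof(path), "%s/cpuset.mems", cg);
            if (write_str(path, "0") < 0)
                perror("cpuset.mems");

            snprintf(path, sizeof(path), "%s/tasks", cg);
            snprintf(pidstr, sizeof(pidstr), "%d", (int)child);
            if (write_str(path, pidstr) < 0)
                perror("attach child");

            waitpid(child, NULL, 0);
            return 0;
        }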
    • virt drivers: don't handle type=network after resolving actual network type · 2f5e6502
      Daniel P. Berrangé committed
      The call to resolve the actual network type turns any NICs with
      type=network into one of the other types. Thus there should be
      no need to handle type=network in later switch() statements
      that branch on the actual type.
      Reviewed-by: Cole Robinson <crobinso@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
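      For illustration, a simplified sketch of the pattern this
      enforces; the enum values and function names are assumptions,
      not libvirt's real identifiers.

        #include <stdio.h>

        enum net_type { NET_TYPE_NETWORK, NET_TYPE_BRIDGE, NET_TYPE_DIRECT };

        /* Called only after the actual type has been resolved, so
         * NET_TYPE_NETWORK cannot legitimately appear here anymore. */
        static int connect_nic(enum net_type actual)
        {
            switch (actual) {
            case NET_TYPE_BRIDGE:
                /* attach a tap device to the bridge */
                return 0;
            case NET_TYPE_DIRECT:
                /* create a macvtap device */
                return 0;
            case NET_TYPE_NETWORK:
                /* resolved away earlier; reaching this is an internal error */
                fprintf(stderr, "unexpected type=network after resolution\n");
                return -1;
            }
            return -1;
        }

        int main(void)
        {
            return connect_nic(NET_TYPE_BRIDGE) == 0 ? 0 : 1;
        }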
  3. 16 April 2019, 3 commits
  4. 13 April 2019, 1 commit
  5. 04 April 2019, 3 commits
  6. 22 March 2019, 1 commit
  7. 12 March 2019, 1 commit
  8. 25 February 2019, 2 commits
  9. 20 February 2019, 18 commits
  10. 14 February 2019, 1 commit
  11. 08 February 2019, 1 commit
  12. 04 February 2019, 1 commit
  13. 01 February 2019, 1 commit
    • qemu: Rework setting process affinity · f136b831
      Michal Privoznik committed
      https://bugzilla.redhat.com/show_bug.cgi?id=1503284
      
      The way we currently start qemu, from a CPU affinity point of
      view, is as follows:

        1) the child process has its affinity set to all online CPUs
        (unless some vcpu pinning was given in the domain XML)

        2) once qemu is running, the cpuset cgroup is configured
        taking memory pinning into account

      The problem is that we let qemu allocate its memory just
      anywhere in 1) and then rely on 2) to be able to move the
      memory to the configured NUMA nodes. This might not always be
      possible (e.g. qemu might lock some parts of its memory) and
      is very suboptimal (copying large memory between NUMA nodes
      takes a significant amount of time).
      
      The solution is to set affinity to one of (in priority order):
        - The CPUs associated with NUMA memory affinity mask
        - The CPUs associated with emulator pinning
        - All online host CPUs
      
      Later (once QEMU has allocated its memory) we change this
      again to (again in priority order):
        - The CPUs associated with emulator pinning
        - The CPUs returned by numad
        - The CPUs associated with vCPU pinning
        - All online host CPUs
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
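      A condensed sketch of the first fallback chain (NUMA memory
      mask, then emulator pinning, then all online CPUs); the
      function name and parameters are simplified assumptions, not
      libvirt's real API.

        #define _GNU_SOURCE
        #include <sched.h>
        #include <unistd.h>
        #include <sys/types.h>

        int apply_startup_affinity(pid_t pid,
                                   const cpu_set_t *numa_mask,
                                   const cpu_set_t *emulator_pin)
        {
            cpu_set_t all;
            const cpu_set_t *chosen;

            if (numa_mask && CPU_COUNT(numa_mask) > 0) {
                chosen = numa_mask;      /* CPUs of the NUMA memory affinity mask */
            } else if (emulator_pin && CPU_COUNT(emulator_pin) > 0) {
                chosen = emulator_pin;   /* CPUs from emulator pinning */
            } else {
                CPU_ZERO(&all);          /* fall back to all online host CPUs */
                for (long i = 0; i < sysconf(_SC_NPROCESSORS_ONLN); i++)
                    CPU_SET(i, &all);
                chosen = &all;
            }
            return sched_setaffinity(pid, sizeof(cpu_set_t), chosen);
        }

        int main(void)
        {
            /* pid 0 means "the calling process"; NULL masks exercise
             * the all-online-CPUs fallback. */
            return apply_startup_affinity(0, NULL, NULL) == 0 ? 0 : 1;
        }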
  14. 31 January 2019, 2 commits
    • qemu: Label backing chain of user-provided target of blockCopy when starting the job · d56afb8e
      Peter Krempa committed
      Be more sensible when setting labels on the target of a
      virDomainBlockCopy operation. Previously we'd relabel
      everything for a copy job even if there was no unlabelled
      backing chain. Since we are also not sure whether the backing
      chain is shared, we don't relabel the chain on completion of
      the blockjob. This certainly won't play nice with the image
      permission relabelling feature.

      While this does not fix the case where the image is reused and
      has a backing chain, it certainly sanitizes all the other
      cases. Later on it will also allow doing the correct thing in
      cases where only one layer was introduced.

      The change is necessary because once -blockdev is used we will
      need to hotplug the backing chain, and thus labelling needs to
      be set up in advance rather than only at the time of pivot. To
      avoid multiple code paths, move the labelling now.
      Signed-off-by: Peter Krempa <pkrempa@redhat.com>
      Reviewed-by: John Ferlan <jferlan@redhat.com>
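      As a rough illustration of labelling the whole chain up front
      when the job starts (the types, names and print stand-in are
      hypothetical, not libvirt's):

        #include <stdio.h>

        struct storage_source {
            const char *path;
            struct storage_source *backing;   /* NULL terminates the chain */
        };

        static int print_label(const char *path)
        {
            printf("labelling %s\n", path);   /* stand-in for the real relabel call */
            return 0;
        }

        static int label_chain(struct storage_source *top,
                               int (*set_label)(const char *path))
        {
            struct storage_source *src;

            for (src = top; src; src = src->backing)
                if (set_label(src->path) < 0)
                    return -1;   /* fail job start if any layer can't be labelled */
            return 0;
        }

        int main(void)
        {
            struct storage_source base = { "base.qcow2", NULL };
            struct storage_source top  = { "copy-target.qcow2", &base };

            return label_chain(&top, print_label) == 0 ? 0 : 1;
        }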
    • qemu: domain: Allow overriding disk source in qemuDomainDetermineDiskChain · 33b0a3ba
      Peter Krempa committed
      When we need to detect a chain for an image which will become
      the new source for a disk (e.g. after a disk media change or a
      blockjob), we'd have to replace disk->src temporarily to do so.

      Move the 'disksrc' temporary variable to an argument and
      adjust callers.
      Signed-off-by: Peter Krempa <pkrempa@redhat.com>
      Reviewed-by: John Ferlan <jferlan@redhat.com>
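      The shape of the refactor, in a simplified sketch (the struct
      and function names here are assumptions, not the real
      signatures in libvirt):

        #include <stdio.h>

        struct storage_source { const char *path; };
        struct disk { struct storage_source *src; };

        /* Callers may pass an override source; NULL means "use
         * disk->src", so nobody has to swap disk->src in and out
         * around the call. */
        static int determine_disk_chain(struct disk *disk,
                                        struct storage_source *disksrc)
        {
            if (!disksrc)
                disksrc = disk->src;
            printf("detecting backing chain from %s\n", disksrc->path);
            return 0;
        }

        int main(void)
        {
            struct storage_source cur = { "current.qcow2" };
            struct storage_source new_src = { "new-media.iso" };
            struct disk d = { &cur };

            determine_disk_chain(&d, NULL);       /* regular startup path */
            determine_disk_chain(&d, &new_src);   /* media change / blockjob */
            return 0;
        }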
  15. 25 January 2019, 1 commit
  16. 23 January 2019, 1 commit