1. 15 April 2019, 4 commits
    • qemu: Don't cache microcode version · 673c62a3
      Committed by Jiri Denemark
      My earlier commit be46f613 was incomplete. It removed caching of the
      microcode version in the CPU driver, which means the capabilities XML
      now shows the correct microcode version. But the version is also
      cached in the QEMU capabilities cache, where it is used to detect
      whether we need to reprobe QEMU. By missing this second place, the
      original commit be46f613 made the situation even worse: libvirt would
      report the correct microcode version while still using the old host
      CPU model (visible in the domain capabilities XML).
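
      As an illustration of the idea only: the type and function names
      below are hypothetical, not libvirt's actual structures, which are
      considerably more involved. The staleness check amounts to recording
      the host microcode version at probe time and comparing it with the
      current value before trusting the cached QEMU capabilities.

          #include <stdbool.h>
          #include <stdio.h>

          /* Hypothetical, simplified view of a QEMU capabilities cache
           * entry: the microcode version that was current when the QEMU
           * binary was probed is stored next to the probed data. */
          typedef struct {
              unsigned int microcodeVersion;  /* host microcode at probe time */
              /* ... probed QEMU capabilities would live here ... */
          } QemuCapsCacheEntry;

          /* Return true when the cached entry can still be used; false
           * forces a reprobe of the QEMU binary. */
          static bool
          qemuCapsCacheEntryIsValid(const QemuCapsCacheEntry *entry,
                                    unsigned int hostMicrocodeVersion)
          {
              /* A microcode update can change the host CPU features QEMU
               * reports, so cached data from before the update is stale. */
              return entry->microcodeVersion == hostMicrocodeVersion;
          }

          int main(void)
          {
              QemuCapsCacheEntry cached = { .microcodeVersion = 0xb000038 };

              /* Pretend the host was updated to a newer microcode revision. */
              unsigned int hostNow = 0xb000040;

              if (!qemuCapsCacheEntryIsValid(&cached, hostNow))
                  printf("microcode changed: reprobing QEMU capabilities\n");

              return 0;
          }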
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
      Reviewed-by: Ján Tomko <jtomko@redhat.com>
    • Delete QEMU_CAPS_KQEMU and QEMU_CAPS_ENABLE_KQEMU · 5dd6e7f9
      Committed by Ján Tomko
      Support for kqemu was dropped in libvirt by commit 8e91a400, and even
      back then we never set these capabilities when doing QMP probing.
      
      Since no QEMU we aim to support has these, drop them completely.
      Signed-off-by: Ján Tomko <jtomko@redhat.com>
      Reviewed-by: Andrea Bolognani <abologna@redhat.com>
    • PPC64 support for NVIDIA V100 GPU with NVLink2 passthrough · 1a922648
      Committed by Daniel Henrique Barboza
      The NVIDIA V100 GPU has onboard RAM that is mapped into host memory
      and is accessible as normal RAM via an NVLink2 bridge. When the GPU is
      passed through to a guest, QEMU puts the NVIDIA RAM window in a
      non-contiguous area, above the PCI MMIO area that starts at 32TiB.
      This means that the NVIDIA RAM window starts at 64TiB and goes all the
      way up to 128TiB.
      
      As a consequence, the guest might request a 64-bit DMA window, for
      each PCI Host Bridge, that goes all the way up to 128TiB. However, the
      NVIDIA RAM window isn't counted as regular RAM, so it is taken into
      account only for the allocation of the Translation and Control Entry
      (TCE) table. For more information about how NVLink2 support works in
      QEMU, refer to the accepted implementation [1].
      
      This memory layout differs from the existing VFIO case, requiring its
      own formula. This patch changes the PPC64 code of
      @qemuDomainGetMemLockLimitBytes to:
      
      - detect if we have an NVLink2 bridge being passed through to the
      guest. This is done by using the @ppc64VFIODeviceIsNV2Bridge function
      added in the previous patch. The presence of an NVLink2 bridge in
      the guest means that we are dealing with the NVLink2 memory layout;
      
      - if an IBM NVLink2 bridge exists, passthroughLimit is calculated in a
      different way to account for the extra memory the TCE table can
      allocate (a rough sketch of the calculation follows this list). The
      64TiB..128TiB window is more than enough to fit all possible GPUs, so
      the memLimit is the same regardless of whether one or multiple V100
      GPUs are passed through.
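
      The sketch below is an illustration of the reasoning above, not the
      patch itself: the function name is hypothetical, the regular VFIO
      branch is simplified, and the TCE sizing assumes 4KiB IOMMU pages
      with 8-byte TCE entries; the authoritative formula lives in
      @qemuDomainGetMemLockLimitBytes in the patch referenced in [3].

          #include <stdbool.h>
          #include <stddef.h>
          #include <stdint.h>
          #include <stdio.h>

          #define KiB (1ULL << 10)
          #define MiB (1ULL << 20)
          #define GiB (1ULL << 30)
          #define TiB (1ULL << 40)

          /* Hypothetical sketch of a PPC64 memlock limit calculation.
           * Values are in bytes; the real code works on the parsed domain
           * definition and uses libvirt's own helpers. */
          static uint64_t
          ppc64MemLockLimitSketch(uint64_t maxMemoryBytes,
                                  size_t nPCIHostBridges,
                                  bool hasNVLink2Bridge)
          {
              uint64_t passthroughLimit;

              if (hasNVLink2Bridge) {
                  /* The guest may ask for a 64-bit DMA window per PHB that
                   * covers everything up to 128TiB, because the NVIDIA RAM
                   * window ends there.  That window is not guest RAM, so it
                   * only contributes to the TCE table size: with 4KiB IOMMU
                   * pages and 8-byte entries, mapping 128TiB costs
                   * 128TiB / 4KiB * 8 bytes per PHB.  8MiB is headroom for
                   * QEMU's own per-PHB bookkeeping. */
                  uint64_t tcePerPHB = 128 * TiB / (4 * KiB) * 8;

                  passthroughLimit = maxMemoryBytes +
                                     tcePerPHB * nPCIHostBridges +
                                     8 * MiB;
              } else {
                  /* Existing VFIO case (simplified): enough TCE space to
                   * map all of guest RAM once per PHB, plus headroom. */
                  passthroughLimit = maxMemoryBytes +
                                     maxMemoryBytes / 512 * nPCIHostBridges +
                                     8 * MiB;
              }

              return passthroughLimit;
          }

          int main(void)
          {
              uint64_t limit = ppc64MemLockLimitSketch(32 * GiB, 1, true);

              printf("memlock limit: %llu MiB\n",
                     (unsigned long long)(limit / MiB));
              return 0;
          }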
      
      Further reading explaining the background:
      [1] https://lists.gnu.org/archive/html/qemu-devel/2019-03/msg03700.html
      [2] https://www.redhat.com/archives/libvir-list/2019-March/msg00660.html
      [3] https://www.redhat.com/archives/libvir-list/2019-April/msg00527.html
      Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
      Reviewed-by: Erik Skultety <eskultet@redhat.com>
    • qemu_domain: NVLink2 bridge detection function for PPC64 · cc9f0380
      Committed by Daniel Henrique Barboza
      The NVLink2 support in QEMU detects NVLink2-capable devices by
      verifying the attributes of the VFIO memory region QEMU allocates for
      the NVIDIA GPUs. To set an adequate memLock limit, libvirt needs this
      information before a QEMU instance is even created, so querying QEMU
      is not possible and opening a VFIO window ourselves is too
      heavyweight.
      
      An alternative is presented in this patch, based on the following
      assumptions:
      
      - if we want GPU RAM to be available in the guest, an NVLink2 bridge
      must be passed through;
      
      - an unknown PCI device can be classified as an NVLink2 bridge
      if its device tree node has 'ibm,gpu', 'ibm,nvlink',
      'ibm,nvlink-speed' and 'memory-region'.
      
      This patch introduces a helper called @ppc64VFIODeviceIsNV2Bridge
      that checks the device tree node of a given PCI device and verifies
      whether it meets the criteria to be an NVLink2 bridge (a minimal
      sketch of the check is shown below). This new function will be used
      in a follow-up patch that, based on the first assumption, will set up
      the rlimits of the guest accordingly.
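
      A self-contained sketch of the check, assuming the device tree node
      is exposed through the PCI device's sysfs "of_node" directory. The
      helper name, the path handling, and the example PCI address are
      illustrative; the real @ppc64VFIODeviceIsNV2Bridge uses libvirt's
      internal file helpers.

          #include <stdbool.h>
          #include <stdio.h>
          #include <unistd.h>

          /* A PCI device is treated as an NVLink2 bridge when its device
           * tree node contains all four properties listed above. */
          static bool
          deviceLooksLikeNV2Bridge(const char *pciSysfsPath)
          {
              static const char *props[] = {
                  "ibm,gpu", "ibm,nvlink", "ibm,nvlink-speed", "memory-region",
              };
              char path[512];
              size_t i;

              for (i = 0; i < sizeof(props) / sizeof(props[0]); i++) {
                  int n = snprintf(path, sizeof(path), "%s/of_node/%s",
                                   pciSysfsPath, props[i]);
                  if (n < 0 || (size_t)n >= sizeof(path))
                      return false;

                  /* All four properties must be present. */
                  if (access(path, F_OK) != 0)
                      return false;
              }

              return true;
          }

          int main(void)
          {
              const char *dev = "/sys/bus/pci/devices/0004:04:00.0";

              printf("%s %s an NVLink2 bridge\n", dev,
                     deviceLooksLikeNV2Bridge(dev) ? "looks like" : "is not");
              return 0;
          }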
      Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
  2. 13 April 2019, 4 commits
  3. 12 April 2019, 5 commits
  4. 10 April 2019, 21 commits
  5. 04 April 2019, 5 commits
  6. 03 April 2019, 1 commit