1. 14 Feb 2020: 1 commit
  2. 07 Feb 2020: 2 commits
  3. 06 Feb 2020: 1 commit
  4. 30 Jan 2020: 2 commits
    • qemu: support interface <teaming> functionality · eb9f6cc4
      Authored by Laine Stump
      The QEMU driver uses the <teaming type='persistent|transient'
      persistent='blah'/> element to set up a "failover" pair of devices -
      the persistent device must be a virtio emulated NIC, with the only
      extra configuration being the addition of ",failover=on" to the device
      commandline, and the transient device must be a hostdev NIC
      (<interface type='hostdev'> or <interface type='network'> with a
      network that is a pool of SRIOV VFs) where the extra configuration is
      the addition of ",failover_pair_id=$aliasOfVirtio" to the device
      commandline. These new options are supported in QEMU 4.2.0 and later.
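      
      As a rough sketch of what those command line fragments look like (the
      device IDs, MAC address and VF host address below are made-up examples,
      not taken from this commit):
      
        -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:11:22:33:44:55,failover=on
        -device vfio-pci,host=0000:02:10.4,id=hostdev0,failover_pair_id=net0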
      
      Extra qemu-specific validation is added to ensure that the device
      type/model is appropriate and that the qemu binary supports these
      commandline options.
      
      The result of this will be:
      
      1) The virtio device presented to the guest will have an extra bit set
      in its PCI capabilities indicating that it can be used as a failover
      backup device. The virtio guest driver will need to be equipped to do
      something with this information - this is included in the Linux
      virtio-net driver in kernel 4.18 and above (and also backported to
      some older distro kernels). Unfortunately there is no way for libvirt
      to learn whether or not the guest driver supports failover - if it
      doesn't then the extra PCI capability will be ignored and the guest OS
      will just see two independent devices. (NB: the current virtio guest
      driver also requires that the MAC addresses of the two NICs match in
      order to pair them into a bond).
      
      2) When a migration is requested, QEMU will automatically unplug the
      transient/hostdev NIC from the guest on the source host before
      starting migration, and automatically re-plug a similar device after
      restarting the guest CPUs on the destination host. While the transient
      NIC is unplugged, all network traffic will go through the
      persistent/virtio device, but when the hostdev NIC is plugged in, it
      will get all the traffic. This means that in normal circumstances the
      guest gets the performance advantage of vfio-assigned "real hardware"
      networking, but it can still be migrated with the only downside being
      a performance penalty (due to using an emulated NIC) during the
      migration.
      Signed-off-by: Laine Stump <laine@redhat.com>
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
    • conf: parse/format <teaming> subelement of <interface> · fb0509d0
      Authored by Laine Stump
      The subelement <teaming> of <interface> devices is used to configure a
      simple teaming association between two interfaces in a domain. Example:
      
        <interface type='bridge'>
          <source bridge='br0'/>
          <model type='virtio'/>
          <mac address='00:11:22:33:44:55'/>
          <alias name='ua-backup0'/>
          <teaming type='persistent'/>
        </interface>
        <interface type='hostdev'>
          <source>
            <address type='pci' bus='0x02' slot='0x10' function='0x4'/>
          </source>
          <mac address='00:11:22:33:44:55'/>
          <teaming type='transient' persistent='ua-backup0'/>
        </interface>
      
      The interface with <teaming type='persistent'/> is assumed to always
      be present, while the interface with type='transient' may be
      unplugged and later re-plugged; the persistent='blah' attribute (and
      in the one currently available implementation, also the matching MAC
      addresses) is what associates the two devices with each other. It is
      up to the hypervisor and the guest network drivers to determine what
      to do with this information.
      Signed-off-by: Laine Stump <laine@redhat.com>
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
  5. 27 Jan 2020: 13 commits
  6. 25 Jan 2020: 1 commit
  7. 16 Jan 2020: 2 commits
  8. 13 Jan 2020: 2 commits
  9. 09 Jan 2020: 1 commit
  10. 19 Dec 2019: 2 commits
  11. 17 Dec 2019: 3 commits
    • qemu: Generate command line of NVMe disks · 8e2026cc
      Authored by Michal Privoznik
      Now that we have everything prepared, we can generate the command
      line for NVMe disks.
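      
      For illustration only (node names, PCI address and namespace number are
      hypothetical, not taken from this commit), the generated command line
      for an NVMe-backed disk would look roughly like:
      
        -blockdev '{"driver":"nvme","device":"0000:01:00.0","namespace":1,"node-name":"libvirt-1-storage"}'
        -blockdev '{"driver":"raw","file":"libvirt-1-storage","node-name":"libvirt-1-format"}'
        -device virtio-blk-pci,drive=libvirt-1-format,id=virtio-disk0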
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      Reviewed-by: Cole Robinson <crobinso@redhat.com>
    • schemas: Introduce disk type NVMe · e1b02289
      Authored by Michal Privoznik
      There is a class of PCI devices that act like disks: NVMe.
      Therefore, they are both PCI devices and disks. While we already
      have <hostdev/> (and can assign an NVMe device to a domain
      successfully) we don't have a disk representation. There are three
      problems with PCI assignment in the case of an NVMe device:
      
      1) domains with <hostdev/> can't be migrated
      
      2) NVMe device is assigned whole, there's no way to assign only a
         namespace
      
      3) Because hypervisors see <hostdev/> they don't put a block layer
         on top of it - users don't get all the fancy features like
         snapshots
      
      NVMe namespaces are a way of splitting one continuous NVMe storage
      device into smaller ones, effectively creating smaller NVMe devices
      (which can then be partitioned, LVMed, etc.)
      
      Because of all of this, the following XML was chosen to model an
      NVMe device:
      
        <disk type='nvme' device='disk'>
          <driver name='qemu' type='raw'/>
          <source type='pci' managed='yes' namespace='1'>
            <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
          </source>
          <target dev='vda' bus='virtio'/>
        </disk>
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      Reviewed-by: Cole Robinson <crobinso@redhat.com>
    • qemu: command: move qemuBuildGraphicsSPICECommandLine validation to qemu_domain.c · c19bb8c0
      Authored by Daniel Henrique Barboza
      Move the SPICE caps validation from qemuBuildGraphicsSPICECommandLine()
      to a new function called qemuDomainDeviceDefValidateSPICEGraphics().
      This function is called by qemuDomainDeviceDefValidateGraphics(),
      which in turn is called by qemuDomainDefValidate(), validating the graphics
      parameters at domain define time.
      
      This validation move exposed a flaw in the 'default-video-type' tests
      for the PPC64, AARCH64 and s390 archs. The XML was using 'spice' as
      the default video type, which isn't valid for those architectures.
      This went unnoticed until now because the SPICE validation was being
      done at 'virsh start' time, while the XML validation done in
      qemuxml2xmltest.c happens at define time.
      
      All other tests were adapted to account for SPICE validation at this
      earlier stage.
      Reviewed-by: Cole Robinson <crobinso@redhat.com>
      Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
  12. 03 Dec 2019: 1 commit
  13. 25 Nov 2019: 2 commits
  14. 22 Nov 2019: 1 commit
  15. 21 Nov 2019: 5 commits
  16. 15 Nov 2019: 1 commit