1. 06 October 2017, 1 commit
  2. 05 October 2017, 1 commit
  3. 04 October 2017, 1 commit
    • qemu: Support multiqueue virtio-blk · abca72fa
      Authored by Lin Ma
      qemu 2.7.0 introduced multiqueue virtio-blk (QEMU commit 2f27059).
      This patch introduces a new "queues" attribute on the disk's driver
      element. An example of the XML:
      
      <disk type='file' device='disk'>
        <driver name='qemu' type='qcow2' queues='4'/>
      
      The corresponding QEMU command line:
      
      -device virtio-blk-pci,scsi=off,num-queues=4,id=virtio-disk0
      Signed-off-by: Lin Ma <lma@suse.com>
      Signed-off-by: Ján Tomko <jtomko@redhat.com>
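      For reference, a complete disk element using the new attribute might
      look like the following (a sketch: the source path and target device
      are illustrative, not part of the original message):

      <disk type='file' device='disk'>
        <driver name='qemu' type='qcow2' queues='4'/>
        <source file='/var/lib/libvirt/images/guest.qcow2'/>
        <target dev='vda' bus='virtio'/>
      </disk>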
  4. 28 September 2017, 1 commit
    • util: Add TLS attributes to virStorageSource · f1705485
      Authored by Ashish Mittal
      Add an optional virTristateBool haveTLS to virStorageSource to
      manage whether a storage source will use TLS.
      
      Sample XML for a VxHS disk:
      
      <disk type='network' device='disk'>
        <driver name='qemu' type='raw' cache='none'/>
        <source protocol='vxhs' name='eb90327c-8302-4725-9e1b-4e85ed4dc251' tls='yes'>
          <host name='192.168.0.1' port='9999'/>
        </source>
        <target dev='vda' bus='virtio'/>
      </disk>
      
      Additionally, add a tlsFromConfig boolean to track whether the TLS
      setting came from the domain configuration or from the qemu.conf
      global setting, in order to decide whether to format the haveTLS
      setting for either a live or saved domain configuration file.
      
      Update qemuxml2xmltest to add a test showing the proper parsing.
      
      Also update the docs to describe the tls attribute.
      Signed-off-by: Ashish Mittal <Ashish.Mittal@veritas.com>
      Signed-off-by: John Ferlan <jferlan@redhat.com>
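      The qemu.conf side of this decision can be sketched as follows
      (a sketch assuming the vxhs_tls and vxhs_tls_x509_cert_dir keys
      added alongside this series; the certificate directory path is
      illustrative):

      # /etc/libvirt/qemu.conf
      # Enable TLS for all VxHS disks that do not override it per-domain.
      vxhs_tls = 1
      vxhs_tls_x509_cert_dir = "/etc/pki/libvirt-vxhs"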
  5. 21 September 2017, 1 commit
  6. 20 September 2017, 1 commit
    • docs: Add schema and docs for Veritas HyperScale (VxHS) · e6a7fa26
      Authored by Ashish Mittal
      Alter the schema to allow a VxHS block device. Sample XML is:
      
        <disk type='network' device='disk'>
          <driver name='qemu' type='raw' cache='none'/>
          <source protocol='vxhs' name='eb90327c-8302-4725-9e1b-4e85ed4dc251'>
            <host name='192.168.0.1' port='9999'/>
          </source>
          <target dev='vda' bus='virtio'/>
          <serial>eb90327c-8302-4725-9e1b-4e85ed4dc251</serial>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
        </disk>
      
      Update the HTML docs to describe the capability for VxHS.
      
      Alter the qemuxml2xmltest to validate the formatting.
      Signed-off-by: Ashish Mittal <Ashish.Mittal@veritas.com>
      Signed-off-by: John Ferlan <jferlan@redhat.com>
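      For orientation, the qemu driver patches later in this series turn
      such a source into a drive definition roughly like the following
      (a sketch; the dotted option names are assumed from QEMU's vxhs
      block driver, and the drive alias is illustrative):

      -drive file.driver=vxhs,file.vdisk-id=eb90327c-8302-4725-9e1b-4e85ed4dc251,\
      file.server.host=192.168.0.1,file.server.port=9999,format=raw,if=none,id=drive-virtio-disk0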
  7. 18 September 2017, 1 commit
  8. 11 September 2017, 1 commit
    • tests: merge iommu tests · 190a5bc1
      Authored by Ján Tomko
      Using intremap without <ioapic driver='qemu'/> does not work.
      Merge the tests to avoid a duplicate test once we start validating
      that combination (see the sketch below).
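      A configuration combining the two, as the merged test exercises,
      looks roughly like this (a sketch of the relevant XML fragments,
      not copied verbatim from the test files):

      <features>
        <ioapic driver='qemu'/>
      </features>
      ...
      <devices>
        <iommu model='intel'>
          <driver intremap='on'/>
        </iommu>
      </devices>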
  9. 05 September 2017, 1 commit
  10. 10 August 2017, 1 commit
  11. 03 August 2017, 1 commit
  12. 02 August 2017, 1 commit
  13. 21 July 2017, 1 commit
    • qemu: Enable NUMA node tag in pci-root for PPC64 · e5a05799
      Authored by Shivaprasad G Bhat
      This patch addresses on PPC the same aspects that bug 1103314
      addressed on x86.
      
      A PCI expander bus creates multiple primary PCI buses; each of these
      buses can be assigned a specific NUMA affinity, which on x86 is
      advertised through ACPI on a per-bus basis.
      
      For SPAPR, NUMA affinities are assigned on a per-PHB basis, and
      there is no mechanism for advertising NUMA affinities to a guest on
      a per-bus basis. So even if qemu-ppc manages to get some sort of
      multi-bus topology working using PXB, there is no way to expose the
      affinities of these buses to the guest; they can only be exposed on
      a per-PHB/per-domain basis.
      
      So this patch enables the NUMA node tag on the pci-root controller
      on PPC.
      
      The NUMA node is set through the numa_node option of the
      spapr-pci-host-bridge device. However, for the implicit PHB the only
      way to set numa_node is via the -global option, which applies to all
      PHBs unless explicitly overridden with the option on the respective
      PHB on the command line. The default PHB hosts only the emulated
      devices, so the patch prevents setting the NUMA node for the default
      PHB (see the sketch after this entry).
      Signed-off-by: Shivaprasad G Bhat <sbhat@linux.vnet.ibm.com>
      Reviewed-by: Andrea Bolognani <abologna@redhat.com>
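      With this change, a non-default PHB on PPC64 can carry a NUMA node
      tag along these lines (a sketch; the index and node values are
      illustrative):

      <controller type='pci' index='1' model='pci-root'>
        <model name='spapr-pci-host-bridge'/>
        <target index='1'>
          <node>1</node>
        </target>
      </controller>

      which would map to a qemu option roughly like
      -device spapr-pci-host-bridge,index=1,id=pci.1,numa_node=1.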
  14. 20 July 2017, 1 commit
  15. 15 July 2017, 5 commits
  16. 11 July 2017, 4 commits
  17. 20 June 2017, 1 commit
  18. 13 June 2017, 3 commits
  19. 08 June 2017, 3 commits
  20. 26 May 2017, 1 commit
  21. 16 May 2017, 2 commits
  22. 15 May 2017, 2 commits
  23. 21 April 2017, 1 commit
    • conf, docs: Add support for coalesce setting(s) · 523c9960
      Authored by Martin Kletzander
      We currently parse only rx/frames/max because that is the only value
      that makes sense for us. The tun device just added support for this
      one; the others are only supported by hardware devices, which we
      don't need to worry about, since the only way we'd pass those to the
      domain is via <hostdev/> or <interface type='hostdev'/>, and in
      those cases the guest can modify the settings itself (see the sketch
      below).
      Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
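      In domain XML this surfaces as a <coalesce> element under
      <interface>, for example (a sketch; the network name and frame count
      are illustrative):

      <interface type='network'>
        <source network='default'/>
        <model type='virtio'/>
        <coalesce>
          <rx>
            <frames max='64'/>
          </rx>
        </coalesce>
      </interface>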
  24. 11 April 2017, 2 commits
  25. 04 April 2017, 1 commit
  26. 27 March 2017, 1 commit