1. 27 April 2020, 3 commits
  2. 24 April 2020, 1 commit
  3. 21 April 2020, 2 commits
    • conf: add xen hypervisor feature 'passthrough' · fadbaa23
      Jim Fehlig authored
      'passthrough' is a Xen-specific guest configuration option, new to Xen
      4.13, that enables IOMMU mappings for a guest and hence controls
      whether it supports PCI passthrough. The default is disabled. See the
      xl.cfg(5) man page and xen.git commit babde47a3fe for more details.
      
      The default state of disabled prevents hotplugging PCI devices.
      However, if the guest configuration contains a PCI passthrough device
      at creation time, libxl will automatically enable 'passthrough', and
      subsequent hotplugging of PCI devices will also be possible. It is not
      possible to unconditionally enable 'passthrough', since that would
      introduce a migration incompatibility due to the guest ABI change.
      Instead, introduce another Xen hypervisor feature that can be used to
      enable guest PCI passthrough:
      
        <features>
          <xen>
            <passthrough state='on'/>
          </xen>
        </features>
      
      To allow finer control over how the IOMMU maps to the guest P2M
      table, the passthrough element also supports a 'mode' attribute, with
      values restricted to sync_pt and share_pt, matching the xl.cfg(5)
      'passthrough' setting.
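
      A sketch of the 'mode' attribute in use, assuming it combines with
      'state' in the same element as shown above (share_pt chosen for
      illustration):

        <features>
          <xen>
            <passthrough state='on' mode='share_pt'/>
          </xen>
        </features>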
      Signed-off-by: Jim Fehlig <jfehlig@suse.com>
      Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
    • conf: add xen specific feature: e820_host · b7d6648d
      Marek Marczykowski-Górecki authored
      e820_host is a Xen-specific option, only available for PV domains,
      that provides the domain with a virtual e820 memory map based on the
      host's. It is enabled with a new Xen hypervisor feature, e.g.
      
        <features>
          <xen>
            <e820_host state='on'/>
          </xen>
        </features>
      
      e820_host is required when using PCI passthrough and is generally
      considered safe for any PV kernel. e820_host is silently ignored if
      set in an HVM domain configuration. See the xl.cfg(5) man page in the
      Xen documentation for more details.
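
      For reference, the equivalent setting in a native xl.cfg(5) domain
      configuration is a simple boolean:

        e820_host=1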
      Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
      Reviewed-by: Jim Fehlig <jfehlig@suse.com>
  4. 13 April 2020, 1 commit
    • conf: new attribute "hotplug" for pci controllers · 78f4d5e6
      Laine Stump authored
      A <controller type='pci'...> element can now have a "hotplug"
      attribute in its <target> subelement. This is intended to control
      whether or not the slot(s) of the controller support
      hotplugging/unplugging a device:
      
         <controller type='pci' model='pcie-root-port'>
           <target hotplug='off'/>
         </controller>
      
      The default value of hotplug is "on".
      
      Since support for configuring such an option is hypervisor-dependent
      (and will vary among different types of PCI controllers even on a
      single hypervisor), no validation is done in this patch - that
      validation will be done in the patch that wires support for the
      setting into the hypervisor.
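
      As a sketch of what that wiring might look like for QEMU (an
      assumption based on QEMU's pcie-root-port 'hotplug' property, not
      something this patch implements), the setting would map to the
      controller's command line options:

        -device pcie-root-port,id=pci.3,chassis=3,hotplug=off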
      Signed-off-by: Laine Stump <laine@redhat.com>
      Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
  5. 10 April 2020, 1 commit
    • conf: Add a new xenbus controller option for event channels · 8e669b38
      Jim Fehlig authored
      Event channels are like PV interrupts and, in conjunction with grant
      frames, form a data transfer mechanism for PV drivers. They are also
      used for inter-processor interrupts. Guests with a large number of
      vcpus and/or many PV devices may need to increase the default maximum
      value of 1023. For this reason the native Xen config format supports
      the 'max_event_channels' setting. See the xl.cfg(5) man page for more
      details.
      
      Similar to the existing maxGrantFrames option, add a new xenbus
      controller option 'maxEventChannels', allowing the maximum value to be
      adjusted via libvirt.
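
      A sketch of the resulting XML, with the new attribute sitting on the
      xenbus controller alongside maxGrantFrames (values are illustrative):

        <controller type='xenbus' maxGrantFrames='64' maxEventChannels='2047'/>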
      Signed-off-by: Jim Fehlig <jfehlig@suse.com>
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
  6. 08 April 2020, 1 commit
  7. 06 April 2020, 1 commit
  8. 30 March 2020, 1 commit
  9. 27 March 2020, 1 commit
  10. 26 March 2020, 1 commit
  11. 24 March 2020, 1 commit
  12. 16 March 2020, 3 commits
  13. 04 March 2020, 3 commits
  14. 21 February 2020, 1 commit
  15. 14 February 2020, 2 commits
  16. 06 February 2020, 1 commit
  17. 30 January 2020, 1 commit
    • conf: parse/format <teaming> subelement of <interface> · fb0509d0
      Laine Stump authored
      The subelement <teaming> of <interface> devices is used to configure a
      simple teaming association between two interfaces in a domain. Example:
      
        <interface type='bridge'>
          <source bridge='br0'/>
          <model type='virtio'/>
          <mac address='00:11:22:33:44:55'/>
          <alias name='ua-backup0'/>
          <teaming type='persistent'/>
        </interface>
        <interface type='hostdev'>
          <source>
            <address type='pci' bus='0x02' slot='0x10' function='0x4'/>
          </source>
          <mac address='00:11:22:33:44:55'/>
          <teaming type='transient' persistent='ua-backup0'/>
        </interface>
      
      The interface with <teaming type='persistent'/> is assumed to always
      be present, while the interface with type='transient' may be
      unplugged and later re-plugged; the persistent='blah' attribute (and,
      in the one currently available implementation, also the matching MAC
      addresses) is what associates the two devices with each other. It is
      up to the hypervisor and the guest network drivers to determine what
      to do with this information.
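
      For context, in that one currently available implementation (QEMU's
      virtio-net failover; the mapping below is an assumption about the
      backend, not part of this patch), the pairing corresponds to device
      properties along these lines:

        -device virtio-net-pci,netdev=hostnet0,id=ua-backup0,failover=on
        -device vfio-pci,host=0000:02:10.4,failover_pair_id=ua-backup0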
      Signed-off-by: Laine Stump <laine@redhat.com>
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
  18. 25 January 2020, 1 commit
  19. 09 January 2020, 1 commit
  20. 25 December 2019, 1 commit
  21. 21 December 2019, 1 commit
  22. 19 December 2019, 1 commit
    • Introducing new address type='unassigned' for PCI hostdevs · 96999404
      Daniel Henrique Barboza authored
      This patch introduces a new PCI hostdev address type called
      'unassigned'. This new type gives users the option to add PCI
      hostdevs to the domain XML in an 'unassigned' state, meaning that the
      device exists in the domain and is managed by Libvirt like any
      regular PCI hostdev, but the guest does not have access to it.
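
      A sketch of the XML, assuming the new type is expressed on the
      hostdev's guest-side <address> element:

        <hostdev mode='subsystem' type='pci' managed='yes'>
          <source>
            <address domain='0x0000' bus='0x02' slot='0x10' function='0x0'/>
          </source>
          <address type='unassigned'/>
        </hostdev>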
      
      This adds extra options for managing PCI device binding inside
      Libvirt: for example, all the managed PCI hostdevs declared in the
      domain XML can be detached from the host and bound to the chosen
      driver while, at the same time, only a subset of these devices is
      made usable by the guest.
      
      The next patch will use this new address type in the QEMU driver to
      avoid adding unassigned devices to the QEMU launch command line.
      Reviewed-by: Cole Robinson <crobinso@redhat.com>
      Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
  23. 17 December 2019, 1 commit
    • schemas: Introduce disk type NVMe · e1b02289
      Michal Privoznik authored
      There is a class of PCI devices that act like disks: NVMe.
      Therefore, they are both PCI devices and disks. While we already
      have <hostdev/> (and can assign an NVMe device to a domain
      successfully) we don't have a disk representation. There are three
      problems with PCI assignment in the case of an NVMe device:

      1) domains with <hostdev/> can't be migrated

      2) an NVMe device is assigned whole; there's no way to assign only a
         namespace

      3) because hypervisors see <hostdev/> they don't put a block layer
         on top of it - users don't get all the fancy features like
         snapshots
      
      NVMe namespaces are a way of splitting one continuous NVMe storage
      device into smaller ones, effectively creating smaller NVMe devices
      (which can then be partitioned, LVMed, etc.)

      Because of all of this, the following XML was chosen to model an
      NVMe device:
      
        <disk type='nvme' device='disk'>
          <driver name='qemu' type='raw'/>
          <source type='pci' managed='yes' namespace='1'>
            <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
          </source>
          <target dev='vda' bus='virtio'/>
        </disk>
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      Reviewed-by: Cole Robinson <crobinso@redhat.com>
  24. 13 December 2019, 2 commits
    • conf: create memory bandwidth monitor. · 40a070ae
      Huaqiang authored
      The following domain configuration changes create two memory
      bandwidth monitors: one monitoring the bandwidth consumed by vCPU 0,
      the other for vCPU 5.
      
      ```
        <cputune>
          <memorytune vcpus='0-4'>
            <node id='0' bandwidth='20'/>
            <node id='1' bandwidth='30'/>
      +     <monitor vcpus='0'/>
          </memorytune>
      +   <memorytune vcpus='5'>
      +     <monitor vcpus='5'/>
      +   </memorytune>
        </cputune>
      ```
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
      Signed-off-by: Huaqiang <huaqiang.wang@intel.com>
    • cachetune schema: a looser check for the order of <cache> and <monitor> elements · 1d0c3c3a
      Huaqiang authored
      Originally, inside <cputune>/<cachetune>, the schema required the
      <cache> element to appear before <monitor>, so the following
      configuration was not permitted, even though it ought to be valid:
      
        <cputune>
          <cachetune vcpus='0-1'>
            <monitor level='3' vcpus='0-1'/>
                  ^
            |__ Not permitted originally because it appears
                before the <cache> element.
      
            <cache id='0' level='3' type='both' size='3' unit='MiB'/>
            <cache id='1' level='3' type='both' size='3' unit='MiB'/>
          </cachetune>
          ...
        </cputune>
      
      Additionally, make the schema stricter by treating the following
      configuration as invalid, since a <cachetune> should contain at least
      one <cache> or <monitor> element:
      
        <cputune>
          <cachetune vcpus='0-1'>
              ^
              |__ a <cachetune> SHOULD contain at least one <cache> or <monitor>
      
          </cachetune>
          ...
        </cputune>
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
      Signed-off-by: Huaqiang <huaqiang.wang@intel.com>
  25. 04 December 2019, 2 commits
  26. 15 November 2019, 1 commit
  27. 18 October 2019, 1 commit
    • conf: Add 'x' and 'y' resolution into video XML definition · 72862797
      Julio Faracco authored
      This commit adds a resolution element with attributes 'x' and 'y' to
      the video XML domain definition. Both properties were added to a new
      element called 'resolution', placed inside the 'model' element, and
      both are optional. The element does not follow the format of the
      QEMU properties 'xres' and 'yres'. Both the HTML documentation and
      the schema were changed too. This commit includes a simple test case
      to cover resolution for QEMU video models. The new XML format for
      resolution looks like:
      
          <model ...>
            <resolution x='800' y='600'/>
          </model>
      Reviewed-by: Cole Robinson <crobinso@redhat.com>
      Signed-off-by: Julio Faracco <jcfaracco@gmail.com>
  28. 10 October 2019, 2 commits
  29. 25 September 2019, 1 commit