1. 26 June 2020 (1 commit)
  2. 23 June 2020 (2 commits)
  3. 18 June 2020 (1 commit)
  4. 10 June 2020 (2 commits)
  5. 26 May 2020 (1 commit)
  6. 22 May 2020 (1 commit)
  7. 06 May 2020 (1 commit)
    • docs: Xen improvements · 57687260
      Jim Fehlig committed
      In formatdomain, using 'libxl' and 'xen' is redundant since they now
      both refer to the same driver. 'xen' predates 'libxl' and unambiguously
      identifies the Xen hypervisor, so drop the use of 'libxl'.
      
      In aclpolkit, the connection URI was erroneously identified as 'libxl'
      and the driver name as 'xenlight'. Change the URI to 'xen' and the driver name to 'Xen'.
      Signed-off-by: Jim Fehlig <jfehlig@suse.com>
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
  8. 27 April 2020 (3 commits)
  9. 24 April 2020 (1 commit)
  10. 22 April 2020 (1 commit)
  11. 21 April 2020 (2 commits)
    • conf: add xen hypervisor feature 'passthrough' · fadbaa23
      Jim Fehlig committed
      'passthrough' is a Xen-specific guest configuration option, new to Xen 4.13,
      that enables IOMMU mappings for a guest and hence controls whether it supports
      PCI passthrough. The default is disabled. See the xl.cfg(5) man page and
      xen.git commit babde47a3fe for more details.
      
      The default state of disabled prevents hotplugging PCI devices. However,
      if the guest configuration contains a PCI passthrough device at time of
      creation, libxl will automatically enable 'passthrough' and subsequent
      hotplugging of PCI devices will also be possible. It is not possible to
      unconditionally enable 'passthrough' since it would introduce a migration
      incompatibility due to guest ABI change. Instead, introduce another Xen
      hypervisor feature that can be used to enable guest PCI passthrough:
      
        <features>
          <xen>
            <passthrough state='on'/>
          </xen>
        </features>
      
      To allow finer control over how the IOMMU maps to the guest P2M table, the
      passthrough element also supports a 'mode' attribute with values restricted
      to sync_pt and share_pt, similar to the xl.cfg(5) 'passthrough' setting
      (see the sketch after this entry).
      Signed-off-by: Jim Fehlig <jfehlig@suse.com>
      Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
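      A minimal sketch of the 'mode' attribute described above; the attribute name and
      values come from the commit message, while the exact element placement mirrors the
      earlier example and should be treated as an assumption:

        <features>
          <xen>
            <passthrough state='on' mode='share_pt'/>
          </xen>
        </features>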
    • conf: add xen specific feature: e820_host · b7d6648d
      Marek Marczykowski-Górecki committed
      e820_host is a Xen-specific option, only available for PV domains, that
      provides the domain with a virtual e820 memory map based on the host's. It
      is enabled with a new Xen hypervisor feature, e.g.
      
        <features>
          <xen>
            <e820_host state='on'/>
          </xen>
        </features>
      
      e820_host is required when using PCI passthrough and is generally
      considered safe for any PV kernel. e820_host is silently ignored if set
      in an HVM domain configuration. See the xl.cfg(5) man page in the Xen
      documentation for more details.
      Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
      Reviewed-by: Jim Fehlig <jfehlig@suse.com>
  12. 15 April 2020 (1 commit)
  13. 13 April 2020 (1 commit)
    • conf: new attribute "hotplug" for pci controllers · 78f4d5e6
      Laine Stump committed
      A <controller type='pci'...> element can now have a "hotplug"
      attribute in the <target> subelement. This is intended to control
      whether or not the slot(s) of the controller support
      hotplugging/unplugging a device:
      
         <controller type='pci' model='pcie-root-port'>
           <target hotplug='off'/>
         </controller>
      
      The default value of hotplug is "on".
      
      Since support for configuring such an option is hypervisor-dependent
      (and will vary among different types of PCI controllers even on a
      single hypervisor), no validation is done in this patch - that
      validation will be done in the patch that wires support for the
      setting into the hypervisor.
      Signed-off-by: Laine Stump <laine@redhat.com>
      Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
  14. 10 April 2020 (2 commits)
    • formatdomain.html.in: document emulator/vcpu pin delay · f68601dd
      Daniel Henrique Barboza committed
      In a guest with only one vcpu, when pinning the emulator to, say, CPU184
      and vcpu0 to CPU0 of the host, the user might expect that only
      CPU0 and CPU184 of the host will be used by the guest.
      
      The reality is that Libvirt takes some time to honor the emulator
      and vcpu pinning, taking care of NUMA constraints first. This will
      result in other CPUs of the host being potentially used by the
      QEMU thread until the emulator/vcpu pinning is done. The user
      then might be confused by the output of 'virsh cpu-stats' in this
      scenario, showing around 200 microseconds of cycles being spent
      on other CPUs.
      
      Let's document this behavior, which is explained in detail in
      Libvirt commit v5.0.0-199-gf136b831, in the cputune section
      of formatdomain.html.in (a sketch of such a pinning configuration
      follows this entry).
      Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
      Reviewed-by: Andrea Bolognani <abologna@redhat.com>
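      A minimal sketch of the emulator/vcpu pinning scenario discussed above; the CPU
      numbers mirror the example in the message, while the cputune element shape follows
      the libvirt domain XML format rather than anything taken from this commit:

        <vcpu>1</vcpu>
        <cputune>
          <vcpupin vcpu='0' cpuset='0'/>
          <emulatorpin cpuset='184'/>
        </cputune>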
    • conf: Add a new xenbus controller option for event channels · 8e669b38
      Jim Fehlig committed
      Event channels are like PV interrupts and, in conjunction with grant frames,
      form a data transfer mechanism for PV drivers. They are also used for
      inter-processor interrupts. Guests with a large number of vcpus and/or
      many PV devices may need to increase the default maximum value of 1023.
      For this reason the native Xen config format supports the
      'max_event_channels' setting. See xl.cfg(5) man page for more details.
      
      Similar to the existing maxGrantFrames option, add a new xenbus controller
      option 'maxEventChannels', allowing the maximum value to be adjusted via
      libvirt (see the sketch after this entry).
      Signed-off-by: Jim Fehlig <jfehlig@suse.com>
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
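      The commit message does not include an XML example; a minimal sketch of how the
      new option could sit alongside maxGrantFrames on the xenbus controller (the
      numeric values are arbitrary illustrations, not defaults from the commit):

        <controller type='xenbus' maxGrantFrames='64' maxEventChannels='2047'/>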
  15. 08 April 2020 (1 commit)
  16. 06 April 2020 (2 commits)
    • formatdomain.html.in: fix 'sockets' info in topology element · 0895a0e7
      Daniel Henrique Barboza committed
      In the 'topology' element it is mentioned, regarding the sockets
      value, "They refer to the total number of CPU sockets".
      
      This is not accurate. What we're doing is calculating the number
      of sockets per NUMA node, which can be checked in the current
      implementation of virHostCPUGetInfoPopulateLinux(). Calculating
      the total number of sockets would break the topology sanity
      check nodes*sockets*cores*threads=online_cpus.
      
      This documentation fix is important to avoid user confusion when
      seeing the output of 'virsh capabilities' and expecting it to be
      equal to the output of 'lscpu'. E.g. on a POWER9 host, this 'lscpu'
      output:
      
      Architecture:        ppc64le
      Byte Order:          Little Endian
      CPU(s):              160
      On-line CPU(s) list: 0-159
      Thread(s) per core:  4
      Core(s) per socket:  20
      Socket(s):           2
      NUMA node(s):        2
      Model:               2.2 (pvr 004e 1202)
      Model name:          POWER9, altivec supported
      
      And this XML output from virsh capabilities:
      
          <cpu>
            <arch>ppc64le</arch>
            <model>POWER9</model>
            <vendor>IBM</vendor>
            <topology sockets='1' dies='1' cores='20' threads='4'/>
            (...)
          </cpu>
      
      Both are correct, as long as we mention in the Libvirt documentation
      that 'sockets' in the topology element represents the number of sockets
      per NUMA node.
      Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
      Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
    • conf: add 'multidevs' option · c3a18568
      Christian Schoenebeck committed
      Introduce new 'multidevs' option for filesystem.
      
        <filesystem type='mount' accessmode='mapped' multidevs='remap'>
          <source dir='/path'/>
          <target dir='mount_tag'/>
        </filesystem>
      
      This option prevents misbehaviour in the guest if a QEMU 9pfs export
      contains multiple devices, due to the potential file ID collisions
      this may otherwise cause.
      Signed-off-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
      Signed-off-by: Ján Tomko <jtomko@redhat.com>
      Reviewed-by: Ján Tomko <jtomko@redhat.com>
  17. 30 March 2020 (2 commits)
  18. 28 March 2020 (1 commit)
  19. 24 March 2020 (2 commits)
  20. 18 March 2020 (2 commits)
  21. 16 March 2020 (4 commits)
  22. 04 March 2020 (2 commits)
  23. 27 February 2020 (1 commit)
  24. 21 February 2020 (1 commit)
    • docs: Expand documentation for the tickpolicy timer attribute · 2f067570
      Andrea Bolognani committed
      The current documentation is fairly terse and not easy to decode
      for someone who's not intimately familiar with the inner workings
      of timer devices. Expand on it by providing a somewhat verbose
      description of the behavior each policy results in, as seen
      from both the guest OS and the host point of view (a sketch of the
      affected XML follows this entry).
      
      This is lifted directly from QEMU commit
      
        commit 2a7d957596786404c4ed16b089273de95a9580ad
        Author: Andrea Bolognani <abologna@redhat.com>
        Date:   Tue Feb 11 19:37:44 2020 +0100
      
          qapi: Expand documentation for LostTickPolicy
      
        v4.2.0-1442-g2a7d957596
      
      The original text also matched, word for word, the documentation
      found in QEMU.
      Signed-off-by: Andrea Bolognani <abologna@redhat.com>
      Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
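      A minimal sketch of the tickpolicy attribute whose documentation is being expanded;
      the timer name and policy value are illustrative choices, not taken from this commit:

        <clock offset='utc'>
          <timer name='rtc' tickpolicy='catchup'/>
        </clock>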
  25. 14 February 2020 (2 commits)