  1. 25 May 2018, 2 commits
  2. 12 Sep 2017, 1 commit
  3. 16 Jun 2017, 1 commit
    • Report more correct information for cache control · cc9f0521
      Martin Kletzander authored
      On some platforms the number of bits in the cbm_mask might not be
      divisible by 4 (or even by 2), so we need to count the bits
      properly.  A similar file, min_cbm_bits, is parsed and used
      correctly, but if its value is greater than one, we lose the
      information about granularity when reporting the data in
      capabilities.  To fix that, always report the granularity, and if
      it differs from the minimum, report that information as well.
      Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
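      A minimal sketch of the bit counting involved, assuming cbm_mask
      is read from resctrl as a hex string (an illustration only, not
      libvirt's C implementation):

        # Count usable bits in a hypothetical cbm_mask value.
        # len(mask) * 4 would overestimate for masks such as '7ff'
        # (11 bits set, not 12), hence the explicit popcount.
        def cbm_bits(mask_hex):
            return bin(int(mask_hex, 16)).count("1")

        assert cbm_bits("7ff") == 11  # bit count not divisible by 4
        assert cbm_bits("f") == 4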
  4. 05 Jun 2017, 1 commit
    • Expose resource control capabilities for caches · 0ab409cc
      Eli Qiao authored
      Add cache resource control to the capabilities XML. For CAT
      without CDP:
      
        <cache>
          <bank id='0' level='3' type='unified' size='15360' unit='KiB' cpus='0-5'>
            <control min='768' unit='KiB' scope='both' max_allocation='4'/>
          </bank>
        </cache>
      
      and with CDP:
      
        <cache>
          <bank id='0' level='3' type='unified' size='15360' unit='KiB' cpus='0-5'>
            <control min='768' unit='KiB' scope='code' max_allocation='4'/>
            <control min='768' unit='KiB' scope='data' max_allocation='4'/>
          </bank>
        </cache>
      
      Also add new test cases for vircaps2xmltest.
      Signed-off-by: Eli Qiao <liyong.qiao@intel.com>
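      A sketch of how a client might walk these elements, fed with the
      fragment above (Python's xml.etree; a real client would parse the
      full output of virConnectGetCapabilities):

        import xml.etree.ElementTree as ET

        cache = ET.fromstring("""
        <cache>
          <bank id='0' level='3' type='unified' size='15360' unit='KiB' cpus='0-5'>
            <control min='768' unit='KiB' scope='both' max_allocation='4'/>
          </bank>
        </cache>""")
        for bank in cache.iter('bank'):
            for ctl in bank.findall('control'):
                print('bank %s: scope=%s min=%s %s max_allocation=%s' % (
                    bank.get('id'), ctl.get('scope'), ctl.get('min'),
                    ctl.get('unit'), ctl.get('max_allocation')))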
  5. 09 May 2017, 1 commit
    • Add host cache information in capabilities · 4ad6a73b
      Martin Kletzander authored
      We're only adding info about L3 caches; we can add more later
      (just by changing one line), but for now that's more than enough
      without overwhelming anyone.

      An XML snippet showing how this looks (also seen as part of the
      commit):
      
        <cache>
          <bank id='0' level='3' type='both' size='8192' unit='KiB' cpus='0-7'/>
        </cache>
      Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
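      A sketch of listing the banks through the libvirt Python bindings
      (the qemu:///system URI is only an example of a local connection):

        import xml.etree.ElementTree as ET

        import libvirt

        conn = libvirt.open('qemu:///system')
        caps = ET.fromstring(conn.getCapabilities())
        # <cache> lives directly under <host> in the capabilities XML.
        for bank in caps.findall('./host/cache/bank'):
            print('L%s %s cache: %s %s, cpus %s' % (
                bank.get('level'), bank.get('type'), bank.get('size'),
                bank.get('unit'), bank.get('cpus')))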
  6. 29 Apr 2015, 1 commit
    • domain: conf: Drop unused OSTYPE_AIX · 066f7c7c
      Cole Robinson authored
      The phyp driver stuffed it into a DomainDefPtr during its
      attachdevice routine, but the value is never advertised via
      capabilities, so it should be safe to drop.
      
      Have the phyp driver use OSTYPE_LINUX, which is what it advertises via
      capabilities.
  7. 17 Sep 2014, 1 commit
  8. 19 Jun 2014, 1 commit
    • virCaps: expose pages info · 02129b7c
      Michal Privoznik authored
      There are two places where you'll find info on page sizes. The
      first one is under the <cpu/> element, where all supported page
      sizes are listed. The second one is under each <cell/> element,
      which refers to a concrete NUMA node; there, the size of each
      page pool is reported. So the capabilities XML looks something
      like this:
      
      <capabilities>
      
        <host>
          <uuid>01281cda-f352-cb11-a9db-e905fe22010c</uuid>
          <cpu>
            <arch>x86_64</arch>
            <model>Westmere</model>
            <vendor>Intel</vendor>
            <topology sockets='1' cores='1' threads='1'/>
            ...
            <pages unit='KiB' size='4'/>
            <pages unit='KiB' size='2048'/>
            <pages unit='KiB' size='1048576'/>
          </cpu>
          ...
          <topology>
            <cells num='4'>
              <cell id='0'>
                <memory unit='KiB'>4054408</memory>
                <pages unit='KiB' size='4'>1013602</pages>
                <pages unit='KiB' size='2048'>3</pages>
                <pages unit='KiB' size='1048576'>1</pages>
                <distances/>
                <cpus num='1'>
                  <cpu id='0' socket_id='0' core_id='0' siblings='0'/>
                </cpus>
              </cell>
              <cell id='1'>
                <memory unit='KiB'>4071072</memory>
                <pages unit='KiB' size='4'>1017768</pages>
                <pages unit='KiB' size='2048'>3</pages>
                <pages unit='KiB' size='1048576'>1</pages>
                <distances/>
                <cpus num='1'>
                  <cpu id='1' socket_id='0' core_id='0' siblings='1'/>
                </cpus>
              </cell>
              ...
            </cells>
          </topology>
          ...
        </host>
      
        <guest/>
      
      </capabilities>
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
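      A sketch of reading the per-node page pools out of one of the
      cells above (Python's xml.etree):

        import xml.etree.ElementTree as ET

        cell = ET.fromstring("""
        <cell id='0'>
          <memory unit='KiB'>4054408</memory>
          <pages unit='KiB' size='4'>1013602</pages>
          <pages unit='KiB' size='2048'>3</pages>
          <pages unit='KiB' size='1048576'>1</pages>
        </cell>""")
        for p in cell.findall('pages'):
            total = int(p.get('size')) * int(p.text)
            print('node %s: %s pages of %s KiB (%d KiB)' % (
                cell.get('id'), p.text, p.get('size'), total))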
  9. 04 Jun 2014, 1 commit
    • virCaps: Expose distance between host NUMA nodes · 8ba0a58f
      Michal Privoznik authored
      If a user or management application wants to create a guest, it
      may be useful to know the cost of internode latencies before the
      guest resources are pinned. For example:
      
      <capabilities>
      
        <host>
          ...
          <topology>
            <cells num='2'>
              <cell id='0'>
                <memory unit='KiB'>4004132</memory>
                <distances>
                  <sibling id='0' value='10'/>
                  <sibling id='1' value='20'/>
                </distances>
                <cpus num='2'>
                  <cpu id='0' socket_id='0' core_id='0' siblings='0'/>
                  <cpu id='2' socket_id='0' core_id='2' siblings='2'/>
                </cpus>
              </cell>
              <cell id='1'>
                <memory unit='KiB'>4030064</memory>
                <distances>
                  <sibling id='0' value='20'/>
                  <sibling id='1' value='10'/>
                </distances>
                <cpus num='2'>
                  <cpu id='1' socket_id='0' core_id='0' siblings='1'/>
                  <cpu id='3' socket_id='0' core_id='2' siblings='3'/>
                </cpus>
              </cell>
            </cells>
          </topology>
          ...
        </host>
        ...
      </capabilities>
      
      We can see that the distance from node 1 to node 0 is 20, and
      within a node it is 10.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
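      A sketch of collecting the values into a distance matrix (Python's
      xml.etree, using a trimmed version of the XML above):

        import xml.etree.ElementTree as ET

        cells = ET.fromstring("""
        <cells num='2'>
          <cell id='0'>
            <distances><sibling id='0' value='10'/><sibling id='1' value='20'/></distances>
          </cell>
          <cell id='1'>
            <distances><sibling id='0' value='20'/><sibling id='1' value='10'/></distances>
          </cell>
        </cells>""")
        dist = {(c.get('id'), s.get('id')): int(s.get('value'))
                for c in cells.iter('cell') for s in c.iter('sibling')}
        assert dist[('1', '0')] == 20  # between nodes
        assert dist[('0', '0')] == 10  # within a node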
  10. 26 Mar 2014, 1 commit
  11. 29 Oct 2013, 1 commit
    • capabilities: add baselabel per sec driver/virt type to secmodel · b51038a4
      Giuseppe Scrivano authored
      Expand the "secmodel" XML fragment of "host" with a sequence of
      baselabels that describe the default security context used by
      libvirt with a specific security model and virtualization type:
      
      <secmodel>
        <model>selinux</model>
        <doi>0</doi>
        <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
        <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
      </secmodel>
      <secmodel>
        <model>dac</model>
        <doi>0</doi>
        <baselabel type='kvm'>107:107</baselabel>
        <baselabel type='qemu'>107:107</baselabel>
      </secmodel>
      
      "baselabel" is driver-specific information, e.g. in the DAC security
      model, it indicates USER_ID:GROUP_ID.
      Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
      Signed-off-by: Eric Blake <eblake@redhat.com>
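      A sketch of enumerating the base labels (Python's xml.etree, fed
      with the fragment above wrapped in a <host> element):

        import xml.etree.ElementTree as ET

        host = ET.fromstring("""
        <host>
          <secmodel>
            <model>dac</model>
            <doi>0</doi>
            <baselabel type='kvm'>107:107</baselabel>
            <baselabel type='qemu'>107:107</baselabel>
          </secmodel>
        </host>""")
        for sec in host.findall('secmodel'):
            for bl in sec.findall('baselabel'):
                print(sec.findtext('model'), bl.get('type'), bl.text)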
  12. 01 Jul 2013, 1 commit
  13. 09 Mar 2013, 1 commit
  14. 22 Feb 2013, 1 commit
  15. 24 Jan 2013, 1 commit
    • schemas: Add schemas for more CPU topology information in the caps XML · 828820e2
      Peter Krempa authored
      This patch adds RNG schemas for more information in the topology
      output of the NUMA section of the capabilities XML.

      The added elements are designed to give management applications
      more information about the placement and topology of the
      processors in the system.
      
      A demonstration of the XML added by this patch:
      <capabilities>
        <host>
          <topology>
            <cells num='3'>
              <cell id='0'>
                <cpus num='4'> <!-- this is a node with Hyperthreading -->
                  <cpu id='0' socket_id='0' core_id='0' siblings='0-1'/>
                  <cpu id='1' socket_id='0' core_id='0' siblings='0-1'/>
                  <cpu id='2' socket_id='0' core_id='1' siblings='2-3'/>
                  <cpu id='3' socket_id='0' core_id='1' siblings='2-3'/>
                </cpus>
              </cell>
              <cell id='1'>
                <cpus num='4'> <!-- this is a node with modules (Bulldozer) -->
                  <cpu id='4' socket_id='0' core_id='2' siblings='4-5'/>
                  <cpu id='5' socket_id='0' core_id='3' siblings='4-5'/>
                  <cpu id='6' socket_id='0' core_id='4' siblings='6-7'/>
                  <cpu id='7' socket_id='0' core_id='5' siblings='6-7'/>
                </cpus>
              </cell>
              <cell id='2'>
                <cpus num='4'> <!-- this is a normal multi-core node -->
                  <cpu id='8' socket_id='1' core_id='0' siblings='8'/>
                  <cpu id='9' socket_id='1' core_id='1' siblings='9'/>
                  <cpu id='10' socket_id='1' core_id='2' siblings='10'/>
                  <cpu id='11' socket_id='1' core_id='3' siblings='11'/>
                </cpus>
              </cell>
            </cells>
          </topology>
        </host>
      </capabilities>
      
      The socket_id field identifies the physical socket the CPU is
      plugged into. This ID may not be identical to the physical socket
      ID reported by the kernel.

      The core_id identifies a core within a socket. This field, too,
      may not accurately represent physical IDs.

      The core_id is guaranteed to be unique within a cell and a socket.
      There may be duplicates between sockets. Only cores sharing a
      core_id within one cell and one socket can be considered threads.
      Cores sharing a core_id across separate cells are distinct cores.

      The siblings field is a list of the CPU ids the CPU is a sibling
      of - that is, its threads. The list is in cpuset format.
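      A sketch of recovering the thread grouping from these fields
      (Python's xml.etree, using the Hyperthreading cell above):

        import xml.etree.ElementTree as ET
        from collections import defaultdict

        cell = ET.fromstring("""
        <cell id='0'>
          <cpus num='4'>
            <cpu id='0' socket_id='0' core_id='0' siblings='0-1'/>
            <cpu id='1' socket_id='0' core_id='0' siblings='0-1'/>
            <cpu id='2' socket_id='0' core_id='1' siblings='2-3'/>
            <cpu id='3' socket_id='0' core_id='1' siblings='2-3'/>
          </cpus>
        </cell>""")
        cores = defaultdict(list)
        for cpu in cell.iter('cpu'):
            cores[(cpu.get('socket_id'), cpu.get('core_id'))].append(cpu.get('id'))
        for (socket, core), ids in sorted(cores.items()):
            tag = ' (threads)' if len(ids) > 1 else ''
            print('socket %s core %s: cpus %s%s' % (socket, core, ','.join(ids), tag))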
  16. 23 Jan 2013, 1 commit
  17. 21 Aug 2012, 1 commit
  18. 03 Aug 2012, 1 commit
    • Update xml schemas according to libvirt source · 37a10129
      Ján Tomko authored
      capability.rng: Guest features can be in any order.
      nodedev.rng: Added <driver> element, <capability> phys_function and
      virt_functions for PCI devices.
      storagepool.rng: Owner or group ID can be -1.
      
      schema tests: New capabilities and nodedev files; changed owner and
      group to -1 in pool-dir.xml.
      storage_conf: Print uid_t and gid_t as signed to storage pool XML.
  19. 08 Mar 2012, 1 commit
  20. 30 Nov 2011, 1 commit
    • Fix capabilities XML to use generic terms for suspend targets · ae5e5528
      Daniel P. Berrange authored
      The capabilities XML uses the x86-specific terms 'S3', 'S4'
      and 'Hybrid-Suspend'. Switch it to use the same terminology
      as the API constants and virsh options, e.g. 'suspend_mem',
      'suspend_disk' and 'suspend_hybrid'.
      
      * docs/formatcaps.html.in, docs/schemas/capability.rng,
        src/conf/capabilities.c: Rename suspend constants
  21. 29 Nov 2011, 1 commit
    • Add 'Hybrid-Suspend' power management discovery for the host · 302743f1
      Srivatsa S. Bhat authored
      Some systems support a feature known as 'Hybrid-Suspend', apart from the
      usual system-wide sleep states such as Suspend-to-RAM (S3) or Suspend-to-Disk
      (S4). Add the functionality to discover this power management feature and
      export it in the capabilities XML under the <power_management> tag.
  22. 22 Nov 2011, 1 commit
    • Export KVM Host Power Management capabilities · e352b164
      Srivatsa S. Bhat authored
      This patch exports KVM Host Power Management capabilities as XML
      so that higher-level systems management software can make use of
      the features available in the host.
      
      The script "pm-is-supported" (from the pm-utils package) is run to
      discover whether Suspend-to-RAM (S3) or Suspend-to-Disk (S4) is
      supported by the host. If either of them is supported, a new
      "<power_management>" tag is introduced in the XML under the <host>
      tag.

      If the query for power management features succeeds but the host
      does not support any such feature, the XML will contain an empty
      <power_management/> tag. If the PM query itself fails, the XML
      will not contain any "power_management" tag.
      
      To use this, new APIs could be implemented in libvirt to exploit power
      management features such as S3/S4.
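      A sketch of the resulting three-state contract (element names as
      renamed by commit ae5e5528 above; Python's xml.etree):

        import xml.etree.ElementTree as ET

        def pm_features(caps_xml):
            # None: the PM query failed (no tag at all).
            # []:   query succeeded, but no feature is supported.
            # else: the list of supported suspend targets.
            pm = ET.fromstring(caps_xml).find('./host/power_management')
            return None if pm is None else [child.tag for child in pm]

        xml = ('<capabilities><host><power_management>'
               '<suspend_mem/><suspend_disk/>'
               '</power_management></host></capabilities>')
        print(pm_features(xml))  # ['suspend_mem', 'suspend_disk']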
  23. 04 Nov 2011, 1 commit
  24. 08 Jul 2011, 1 commit
  25. 07 Jul 2010, 1 commit
    • cpu: Add support for CPU vendor · af53714f
      Jiri Denemark authored
      By specifying a <vendor> element in CPU requirements, a guest can
      be restricted to run only on CPUs from a given vendor. The host
      CPU vendor is also specified in the capabilities XML.

      The vendor is checked when migrating a guest but it's not forced,
      i.e., guests configured without a <vendor> element can be freely
      migrated.
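      A sketch of reading the host vendor back out (Python's xml.etree;
      the fragment mirrors the host <cpu> element shown in later
      commits):

        import xml.etree.ElementTree as ET

        caps = ET.fromstring("""
        <capabilities>
          <host>
            <cpu>
              <arch>x86_64</arch>
              <model>Westmere</model>
              <vendor>Intel</vendor>
            </cpu>
          </host>
        </capabilities>""")
        print(caps.findtext('./host/cpu/vendor'))  # Intel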
  26. 26 May 2010, 1 commit
    • Expose a host UUID in the capabilities XML · 60881161
      Daniel P. Berrange authored
      Allow for a host UUID in the capabilities XML. Local drivers
      will initialize this from the SMBIOS data. If a sanity check
      shows the SMBIOS UUID is invalid, allow an override from the
      libvirtd.conf configuration file.
      
      * daemon/libvirtd.c, daemon/libvirtd.conf: Support a host_uuid
        configuration option
      * docs/schemas/capability.rng: Add optional host uuid field
      * src/conf/capabilities.c, src/conf/capabilities.h: Include
        host UUID in XML
      * src/libvirt_private.syms: Export new uuid.h functions
      * src/lxc/lxc_conf.c, src/qemu/qemu_driver.c,
        src/uml/uml_conf.c: Set host UUID in capabilities
      * src/util/uuid.c, src/util/uuid.h: Support for host UUIDs
      * src/node_device/node_device_udev.c: Use the host UUID functions
      * tests/confdata/libvirtd.conf, tests/confdata/libvirtd.out: Add
        new host_uuid config option to test
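      The override is a host_uuid = "..." line in libvirtd.conf. A
      sketch of reading the resulting value back (libvirt Python
      bindings; the URI is only an example):

        import xml.etree.ElementTree as ET

        import libvirt

        conn = libvirt.open('qemu:///system')
        caps = ET.fromstring(conn.getCapabilities())
        print(caps.findtext('./host/uuid'))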
  27. 02 Mar 2010, 1 commit
    • maint: convert leading TABs in *.rng files to equivalent spaces · aa7847d3
      Jim Meyering authored
      * docs/schemas/capability.rng: Likewise.
      * docs/schemas/network.rng: Likewise.
      * docs/schemas/nodedev.rng: Likewise.
      * docs/schemas/storagepool.rng: Likewise.
      * docs/schemas/storagevol.rng: Likewise.
      Use these commands:
      t=$'\t'
      git ls-files | grep '\.rng$' | xargs grep -lE "^ *$t" \
        | xargs perl -MText::Tabs -ni -le \
          '$m=/^( *\t[ \t]*)(.*)/; print $m ? expand($1) . $2 : $_'
  28. 18 Dec 2009, 1 commit
    • XML schema for CPU flags · 6df8b363
      Jiri Denemark authored
      Firstly, CPU topology and model with optional features have to be
      advertised in host capabilities:
      
          <host>
              <cpu>
                  <arch>ARCHITECTURE</arch>
                  <features>
                      <!-- old-style features are here -->
                  </features>
                  <model>NAME</model>
                  <topology sockets="S" cores="C" threads="T"/>
                  <feature name="NAME"/>
              </cpu>
              ...
          </host>
      
      Secondly, drivers which support detailed CPU specification have to
      advertise it in guest capabilities:

          <guest>
              ...
              <features>
                  <cpuselection/>
              </features>
          </guest>
      
      And finally, CPU may be configured in domain XML configuration:
      
      <domain>
          ...
          <cpu match="MATCH">
              <model>NAME</model>
              <topology sockets="S" cores="C" threads="T"/>
              <feature policy="POLICY" name="NAME"/>
          </cpu>
      </domain>
      
      Where MATCH can be one of:
          - 'minimum'     specified CPU is the minimum requested CPU
          - 'exact'       disable all additional features provided by host CPU
          - 'strict'      fail if host CPU doesn't exactly match
      
      POLICY can be one of:
          - 'force'       turn on the feature, even if host doesn't have it
          - 'require'     fail if host doesn't have the feature
          - 'optional'    match host
          - 'disable'     turn off the feature, even if host has it
          - 'forbid'      fail if host has the feature
      
      'force' and 'disable' policies turn the feature on/off regardless
      of its availability on the host. 'force' is unlikely to be used
      but it's there for completeness, since Xen and VMware allow it.
      
      'require' and 'forbid' policies prevent a guest from being started on a host
      which doesn't/does have the feature. 'forbid' is for cases where you disable
      the feature but a guest may still try to access it anyway and you don't want
      it to succeed.
      
      The 'optional' policy sets the feature according to its
      availability on the host. When a guest is booted on a host that
      has the feature and is then migrated to another host, the policy
      changes to 'require', as we can't take the feature away from a
      running guest.
      
      The default policy for features provided by the host CPU but not
      specified in the domain configuration is set using the match
      attribute of the cpu tag. If 'minimum' match is requested,
      additional features will be treated as if they were specified
      with the 'optional' policy. 'exact' match implies the 'disable'
      policy and 'strict' match stands for the 'forbid' policy.
      
      * docs/schemas/capability.rng docs/schemas/domain.rng: extend the
        RelaxNG schemas to add CPU flags support
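      A sketch of checking whether a driver advertises the new guest
      feature (Python's xml.etree):

        import xml.etree.ElementTree as ET

        guest = ET.fromstring("""
        <guest>
          <features>
            <cpuselection/>
          </features>
        </guest>""")
        print(guest.find('./features/cpuselection') is not None)  # True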
  29. 10 Sep 2009, 3 commits
  30. 05 Aug 2009, 1 commit
    • Typo and comment fixes · 3879b334
      Aron Griffis authored
      * docs/schemas/*.rng: the comments were wrong
      * src/qemu_conf.c: typo in an error message
  31. 27 Jul 2009, 1 commit
  32. 16 Jul 2009, 1 commit
    • remove all trailing blank lines · 07613d20
      Jim Meyering authored
      by running this command:
      git ls-files -z | xargs -0 perl -pi -0777 -e 's/\n\n+$/\n/'
      This is in preparation for a stricter make syntax-check
      rule that will detect trailing blank lines.
  33. 03 Mar 2009, 1 commit
  34. 27 Jan 2009, 1 commit