1. 29 Jan 2020: 1 commit
  2. 28 Jan 2020: 3 commits
  3. 25 Jan 2020: 1 commit
  4. 24 Jan 2020: 2 commits
  5. 23 Jan 2020: 2 commits
  6. 17 Jan 2020: 3 commits
  7. 16 Jan 2020: 8 commits
  8. 13 Jan 2020: 2 commits
  9. 08 Jan 2020: 3 commits
  10. 07 Jan 2020: 3 commits
  11. 03 Jan 2020: 4 commits
  12. 21 Dec 2019: 1 commit
  13. 20 Dec 2019: 1 commit
  14. 19 Dec 2019: 2 commits
  15. 18 Dec 2019: 3 commits
    • conf: fix populating of fake NUMA in multi-node hosts · e67ccd3c
      Authored by Daniel P. Berrangé
      If the host OS doesn't have NUMA present, we fall back to
      populating fake NUMA info, and the code thus assumes only a
      single NUMA node.

      Unfortunately we also fall back to fake NUMA if numactl-devel
      was not present at build time, and in this case the host can
      still have multiple NUMA nodes. We create all the CPUs, but only
      the CPUs in the first node have any data filled in, resulting in
      capabilities like:
      
          <topology>
            <cells num='1'>
              <cell id='0'>
                <memory unit='KiB'>15977572</memory>
                <cpus num='48'>
                  <cpu id='0' socket_id='0' core_id='0' siblings='0'/>
                  <cpu id='1' socket_id='0' core_id='0' siblings='1'/>
                  <cpu id='2' socket_id='0' core_id='1' siblings='2'/>
                  <cpu id='3' socket_id='0' core_id='1' siblings='3'/>
                  <cpu id='4' socket_id='0' core_id='2' siblings='4'/>
                  <cpu id='5' socket_id='0' core_id='2' siblings='5'/>
                  <cpu id='6' socket_id='0' core_id='3' siblings='6'/>
                  <cpu id='7' socket_id='0' core_id='3' siblings='7'/>
                  <cpu id='8' socket_id='0' core_id='4' siblings='8'/>
                  <cpu id='9' socket_id='0' core_id='4' siblings='9'/>
                  <cpu id='10' socket_id='0' core_id='5' siblings='10'/>
                  <cpu id='11' socket_id='0' core_id='5' siblings='11'/>
                  <cpu id='0'/>
                  <cpu id='0'/>
                  <cpu id='0'/>
                  <cpu id='0'/>
                  <cpu id='0'/>
                  <cpu id='0'/>
                  <cpu id='0'/>
                  <cpu id='0'/>
                  <cpu id='0'/>
                  <cpu id='0'/>
                  <cpu id='0'/>
                </cpus>
              </cell>
            </cells>
          </topology>
      
      With this new code we get something slightly less broken:
      
          <topology>
            <cells num='4'>
              <cell id='0'>
                <memory unit='KiB'>15977572</memory>
                <cpus num='12'>
                  <cpu id='0' socket_id='0' core_id='0' siblings='0-1'/>
                  <cpu id='1' socket_id='0' core_id='0' siblings='0-1'/>
                  <cpu id='2' socket_id='0' core_id='1' siblings='2-3'/>
                  <cpu id='3' socket_id='0' core_id='1' siblings='2-3'/>
                  <cpu id='4' socket_id='0' core_id='2' siblings='4-5'/>
                  <cpu id='5' socket_id='0' core_id='2' siblings='4-5'/>
                  <cpu id='6' socket_id='0' core_id='3' siblings='6-7'/>
                  <cpu id='7' socket_id='0' core_id='3' siblings='6-7'/>
                  <cpu id='8' socket_id='0' core_id='4' siblings='8-9'/>
                  <cpu id='9' socket_id='0' core_id='4' siblings='8-9'/>
                  <cpu id='10' socket_id='0' core_id='5' siblings='10-11'/>
                  <cpu id='11' socket_id='0' core_id='5' siblings='10-11'/>
                </cpus>
              </cell>
              <cell id='0'>
                <memory unit='KiB'>15977572</memory>
                <cpus num='12'>
                  <cpu id='12' socket_id='0' core_id='0' siblings='12-13'/>
                  <cpu id='13' socket_id='0' core_id='0' siblings='12-13'/>
                  <cpu id='14' socket_id='0' core_id='1' siblings='14-15'/>
                  <cpu id='15' socket_id='0' core_id='1' siblings='14-15'/>
                  <cpu id='16' socket_id='0' core_id='2' siblings='16-17'/>
                  <cpu id='17' socket_id='0' core_id='2' siblings='16-17'/>
                  <cpu id='18' socket_id='0' core_id='3' siblings='18-19'/>
                  <cpu id='19' socket_id='0' core_id='3' siblings='18-19'/>
                  <cpu id='20' socket_id='0' core_id='4' siblings='20-21'/>
                  <cpu id='21' socket_id='0' core_id='4' siblings='20-21'/>
                  <cpu id='22' socket_id='0' core_id='5' siblings='22-23'/>
                  <cpu id='23' socket_id='0' core_id='5' siblings='22-23'/>
                </cpus>
              </cell>
            </cells>
          </topology>
      
      The topology at least now reflects what 'virsh nodeinfo' reports.
      The main remaining bug is that the CPU "id" values won't match
      what the Linux host actually uses (see the sketch after this
      entry).
      Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
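      Below is a minimal, self-contained C sketch of the assignment
      logic described above. It is not libvirt's actual code (the real
      logic lives in libvirt's fake-NUMA capabilities population path);
      the variable names and printed layout are illustrative. It shows
      why the fix yields sequential CPU ids per cell, and why those ids
      can diverge from the ids the Linux host really uses:

          #include <stdio.h>

          /*
           * Illustrative stand-in for fake-NUMA population: CPU ids are
           * assigned sequentially across cells, so they need not match
           * the host's real ids (the remaining bug noted above).
           */
          int main(void)
          {
              const int nodes = 4;    /* cells per 'virsh nodeinfo' */
              const int sockets = 1;  /* sockets per node  */
              const int cores = 6;    /* cores per socket  */
              const int threads = 2;  /* threads per core  */
              int id = 0;             /* sequential CPU id */

              for (int n = 0; n < nodes; n++) {
                  printf("cell %d:\n", n);
                  for (int s = 0; s < sockets; s++) {
                      for (int c = 0; c < cores; c++) {
                          /* all threads of one core are siblings */
                          int first = id;
                          for (int t = 0; t < threads; t++, id++) {
                              printf("  cpu id=%d socket_id=%d core_id=%d"
                                     " siblings=%d-%d\n",
                                     id, s, c, first, first + threads - 1);
                          }
                      }
                  }
              }
              return 0;
          }

      Run with the numbers above, this reproduces the layout of the
      "less broken" topology: the second cell starts at cpu id 12 with
      siblings 12-13, matching the XML sample.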
    • conf: avoid mem leak re-allocating fake NUMA capabilities · fb5aaf3d
      Authored by Daniel P. Berrangé
      The 'caps' object is already allocated when the fake NUMA
      initialization takes place, so allocating it again would leak
      the original object (see the sketch after this entry).
      Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
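      A minimal sketch of the leak pattern and the fix, using
      illustrative names rather than libvirt's real types: because the
      caller already allocated the object, the initialization must
      populate it in place instead of assigning a second allocation to
      the same pointer.

          #include <stdlib.h>

          typedef struct {
              size_t nnodes;   /* stand-in for the NUMA data */
          } HostNUMACaps;

          /*
           * The caller allocates 'caps' up front.  The buggy version of
           * the fake-NUMA init allocated a fresh object and overwrote
           * the pointer, leaking the original; the fix fills in the
           * existing object instead.
           */
          static void init_fake_numa(HostNUMACaps *caps)
          {
              caps->nnodes = 1;   /* populate, don't re-allocate */
          }

          int main(void)
          {
              HostNUMACaps *caps = calloc(1, sizeof(*caps)); /* pre-allocated */
              if (!caps)
                  return 1;
              init_fake_numa(caps);
              free(caps);
              return 0;
          }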
    • virCapabilitiesHostNUMAUnref: Accept NULL · 599f9c73
      Authored by Michal Privoznik
      Fortunately, this is not causing any problems right now, because
      glib performs the NULL check for us when the function is invoked
      via attribute cleanup. But in a future commit we will call this
      function explicitly on a struct member that might be NULL (see
      the sketch after this entry).
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
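      The pattern being adopted here is the free(3) convention: a
      cleanup function that tolerates NULL, so callers can invoke it
      unconditionally. A minimal sketch with illustrative names (the
      real function is virCapabilitiesHostNUMAUnref; its internals are
      not reproduced here):

          #include <stdlib.h>

          typedef struct {
              int refs;
              /* ... NUMA cell data ... */
          } HostNUMACaps;

          /* NULL-tolerant unref, following the free(3) convention:
           * callers may pass a struct member that was never populated
           * without checking it themselves. */
          static void host_numa_unref(HostNUMACaps *caps)
          {
              if (!caps)
                  return;              /* the early return this commit adds */
              if (--caps->refs == 0)
                  free(caps);
          }

          int main(void)
          {
              HostNUMACaps *caps = NULL;   /* e.g. a member left unset */
              host_numa_unref(caps);       /* safe no-op on NULL */
              return 0;
          }

      This mirrors the guard that glib's attribute cleanup already
      applies, as the commit message notes.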
  16. 17 Dec 2019: 1 commit