- 15 Jul, 2015 (2 commits)
-
-
Committed by Andrea Bolognani
-
Committed by Andrea Bolognani
Make sure sysfs_prefix, when present, is always the first argument to a function; don't use a different name to refer to it; check whether it is NULL (and hence SYSFS_SYSTEM_PATH should be used) only when using it directly, not when merely passing it down to another function; and always pass down the same value we were given when calling another function.
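For readers unfamiliar with the convention, here is a minimal C sketch of the calling pattern described above; the function names and the SYSFS_SYSTEM_PATH value are illustrative stand-ins, not libvirt's actual code. Only the leaf that consumes the prefix applies the NULL fallback; callers just pass the value through unchanged.

```c
#include <stdio.h>
#include <stdlib.h>

#define SYSFS_SYSTEM_PATH "/sys/devices/system"   /* fallback when no prefix given */

/* Leaf helper: the only place that uses the prefix directly, so the only
 * place that checks for NULL and substitutes SYSFS_SYSTEM_PATH. */
static int
readSysfsFile(const char *sysfs_prefix, const char *relpath,
              char *buf, size_t buflen)
{
    const char *prefix = sysfs_prefix ? sysfs_prefix : SYSFS_SYSTEM_PATH;
    char path[1024];
    FILE *fp;

    snprintf(path, sizeof(path), "%s/%s", prefix, relpath);
    if (!(fp = fopen(path, "r")))
        return -1;
    if (!fgets(buf, (int)buflen, fp)) {
        fclose(fp);
        return -1;
    }
    fclose(fp);
    return 0;
}

/* Caller: sysfs_prefix is the first argument, keeps its name, and is
 * passed down untouched -- no NULL check here. */
static int
printOnlineCpus(const char *sysfs_prefix)
{
    char buf[256];

    if (readSysfsFile(sysfs_prefix, "cpu/online", buf, sizeof(buf)) < 0)
        return -1;
    printf("online CPUs: %s", buf);
    return 0;
}

int
main(void)
{
    /* NULL means "use the real sysfs"; a test would pass its own prefix. */
    return printOnlineCpus(NULL) < 0 ? EXIT_FAILURE : EXIT_SUCCESS;
}
```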
-
- 14 Jul, 2015 (9 commits)
-
-
Committed by Kothapally Madhu Pavan
This patch handles the situation where a core is defective and therefore not in the present mask during boot. A host can also have empty sockets whose CPUs could only be brought online if a socket were added. In these cases the present mask contains only the CPUs that are actually in the sockets, even though some of them might be offline for other reasons. This patch excludes the CPUs that are absent because a socket is defective or empty by checking the present mask before reading the cpu directory. Otherwise, nodeinfo on such hosts always displays wrong output that counts the defective/empty sockets as a set of offline CPUs.
Signed-off-by: Kothapally Madhu Pavan <kmp@linux.vnet.ibm.com>
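A self-contained sketch of the check this patch describes; the present/online arrays and their values stand in for the bitmaps libvirt parses from sysfs and are made up for illustration.

```c
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define NCPUS 8

int
main(void)
{
    /* CPUs 4-7 sit in an empty socket: not present at all (illustrative). */
    bool present[NCPUS] = { 1, 1, 1, 1, 0, 0, 0, 0 };
    bool online[NCPUS]  = { 1, 1, 0, 1, 0, 0, 0, 0 };
    int offline = 0;

    for (int i = 0; i < NCPUS; i++) {
        if (!present[i])
            continue;       /* absent CPU: skip, do not count as offline */
        if (!online[i])
            offline++;      /* present but offline */
    }
    printf("offline CPUs: %d\n", offline);   /* prints 1, not 5 */
    return EXIT_SUCCESS;
}
```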
-
Committed by John Ferlan
Add the sysfs_prefix argument to the call to allow for setting the path for tests to something other than SYSFS_SYSTEM_PATH.
-
Committed by John Ferlan
Add the sysfs_prefix argument to the call to allow for setting the path for tests to something other than SYSFS_CPU_PATH, which is a derivative of SYSFS_SYSTEM_PATH. Use cpupath for nodeCapsInitNUMAFake and remove SYSFS_CPU_PATH.
-
Committed by John Ferlan
Add the sysfs_prefix argument to the call to allow for setting the path for tests to something other than SYSFS_SYSTEM_PATH.
-
Committed by John Ferlan
Add the sysfs_prefix argument to the call to allow for setting the path for tests to something other than SYSFS_SYSTEM_PATH.
-
Committed by John Ferlan
Add the sysfs_prefix argument to the call to allow for setting the path for tests to something other than SYSFS_SYSTEM_PATH.
-
Committed by John Ferlan
Add the sysfs_prefix argument to the call to allow for setting the path for tests to something other than SYSFS_SYSTEM_PATH.
-
Committed by John Ferlan
Add the sysfs_prefix argument to the call to allow for setting the path for tests to something other than SYSFS_SYSTEM_PATH.
-
Committed by John Ferlan
The API will print the path to the /cpu/present file using the sysfs_prefix. NB: this is set up for future patches which will allow local/test sysfs paths.
-
- 02 Jun, 2015 (2 commits)
-
-
Committed by Ján Tomko
Use a for loop instead of a while loop. Do not open-code c_isxdigit and virHexToBin.
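As a rough illustration of that cleanup, here is a standalone hex parser written in the same spirit; isxdigit() and the local hexToBin() are stand-ins for gnulib's c_isxdigit and libvirt's virHexToBin, and the function itself is not the libvirt code.

```c
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for virHexToBin: value of a single hex digit. */
static int
hexToBin(unsigned char c)
{
    if (c >= '0' && c <= '9')
        return c - '0';
    return tolower(c) - 'a' + 10;
}

/* Parse a hex string with a for loop and digit helpers instead of
 * hand-rolled range checks; returns -1 on a non-hex character. */
static long long
parseHex(const char *str)
{
    long long val = 0;

    for (size_t i = 0; str[i]; i++) {
        if (!isxdigit((unsigned char)str[i]))
            return -1;
        val = val * 16 + hexToBin((unsigned char)str[i]);
    }
    return val;
}

int
main(void)
{
    printf("%lld\n", parseHex("7f"));   /* prints 127 */
    return EXIT_SUCCESS;
}
```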
-
Committed by Ján Tomko
Use virFileReadAll, which reports an error when the file is larger than the specified maximum.
https://bugzilla.redhat.com/show_bug.cgi?id=1207849
-
- 28 May, 2015 (1 commit)
-
-
Committed by Kothapally Madhu Pavan
virsh capabilities lists offline CPUs as online when libvirt is compiled with the numactl option disabled. This fix lists the correct set of online CPUs.
-
- 27 Mar, 2015 (1 commit)
-
-
Committed by Wei Huang
Current libvirt can only handle up to 1023 bytes when it reads the Linux sysfs topology/thread_siblings file. This isn't enough for Linux distributions that support a larger value. This patch fixes the problem by using VIR_ALLOC()/VIR_FREE() instead of a fixed-size (1024) local char array. At the same time SYSFS_THREAD_SIBLINGS_LIST_LENGTH_MAX is increased to 8192, which should be large enough for the foreseeable future.
Signed-off-by: Wei Huang <wei@redhat.com>
-
- 13 Mar, 2015 (1 commit)
-
-
Committed by Ján Tomko
Add a helper that never returns an error and treats bits outside the bitmap range as false. Use it everywhere we use ignore_value on virBitmapGetBit or loop over the bitmap size.
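To make those semantics concrete, here is a standalone sketch of such a helper; the Bitmap struct and function name are stand-ins for libvirt's virBitmap API, and the only point is the "out-of-range reads as false, no error return" behaviour.

```c
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    size_t nbits;
    unsigned long *map;
} Bitmap;

#define BITS_PER_LONG (sizeof(unsigned long) * 8)

/* Never fails: any bit beyond the bitmap's size simply reads as false. */
static bool
bitmapIsBitSet(const Bitmap *bm, size_t bit)
{
    if (bit >= bm->nbits)
        return false;
    return !!(bm->map[bit / BITS_PER_LONG] & (1UL << (bit % BITS_PER_LONG)));
}

int
main(void)
{
    unsigned long words[1] = { 0x5 };              /* bits 0 and 2 set */
    Bitmap bm = { .nbits = 8, .map = words };

    /* Caller can probe any index without checking the size first. */
    for (size_t i = 0; i < 12; i++)
        printf("bit %zu: %d\n", i, bitmapIsBitSet(&bm, i));
    return EXIT_SUCCESS;
}
```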
-
- 23 Jan, 2015 (1 commit)
-
-
Committed by Ján Tomko
Per-CPU stats are only shown for present CPUs in the cgroups, but we were parsing only the largest CPU number from /sys/devices/system/cpu/present and looking for stats even for non-present CPUs. This resulted in:
  internal error: cpuacct parse error
-
- 10 Nov, 2014 (1 commit)
-
-
Committed by Jincheng Miao
nodeSetMemoryParameters() calls nodeSetMemoryParameterValue() to set parameters, but it only treats the return code '-2' as a failure. We should report an error whenever the return code is negative.
https://bugzilla.redhat.com/show_bug.cgi?id=1161541
Signed-off-by: Jincheng Miao <jmiao@redhat.com>
-
- 25 Sep, 2014 (2 commits)
-
-
Committed by Michal Privoznik
And add stubs to other drivers like lxc, qemu, uml and vbox.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Michal Privoznik
In the previous patch I changed the for loop bounds but forgot to 'git add' the changes that adapt the rest of the code.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
- 24 Sep, 2014 (1 commit)
-
-
Committed by Jincheng Miao
In nodeGetFreePages, if startCell is given as '0' and the max node number is also '0', the for-loop wouldn't be executed. So convert it to a while-loop.
Before:
  > virsh freepages --cellno 0 --pagesize 4
  error: internal error: no suitable info found
After:
  > virsh freepages --cellno 0 --pagesize 4
  4KiB: 472637
Signed-off-by: Jincheng Miao <jmiao@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
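A minimal standalone illustration of the loop-bound problem described above; the variable names and exact loop shapes are assumptions for illustration, not the original nodeGetFreePages code.

```c
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
    int startCell = 0, lastCell = 0;   /* single NUMA node, cell 0 only */

    /* Buggy shape: with an exclusive upper bound the body never runs
     * when startCell == lastCell, so cell 0 is silently skipped. */
    for (int cell = startCell; cell < lastCell; cell++)
        printf("for-loop visits cell %d\n", cell);

    /* Fixed shape: iterate while the cell is still <= the last one,
     * so the single cell 0 is visited. */
    int cell = startCell;
    while (cell <= lastCell) {
        printf("while-loop visits cell %d\n", cell);
        cell++;
    }
    return EXIT_SUCCESS;
}
```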
-
- 23 Sep, 2014 (2 commits)
-
-
Committed by Michal Privoznik
It's better to use a macro instead of an if-else construct.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Jincheng Miao
https://bugzilla.redhat.com/show_bug.cgi?id=1145050
Signed-off-by: Jincheng Miao <jmiao@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
- 11 Sep, 2014 (1 commit)
-
-
Committed by John Ferlan
If the virNumaGetNodeCPUs() call fails with -1, then jumping to cleanup with 'cpus == NULL' and calling virCapabilitiesClearHostNUMACellCPUTopology will cause issues.
Signed-off-by: John Ferlan <jferlan@redhat.com>
-
- 20 Aug, 2014 (1 commit)
-
-
Committed by Michal Privoznik
In case the host has 2 or more NUMA nodes, we fetch the CPU map for each node. However, we need to free the CPU map in between loops:
==29513== 96 (72 direct, 24 indirect) bytes in 3 blocks are definitely lost in loss record 951 of 1,264
==29513==    at 0x4C2A700: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==29513==    by 0x52AD24B: virAlloc (viralloc.c:144)
==29513==    by 0x52AF0E6: virBitmapNew (virbitmap.c:78)
==29513==    by 0x52FB720: virNumaGetNodeCPUs (virnuma.c:294)
==29513==    by 0x53C700B: nodeCapsInitNUMA (nodeinfo.c:1886)
==29513==    by 0x11759708: vboxCapsInit (vbox_common.c:398)
==29513==    by 0x11759CC4: vboxConnectOpen (vbox_common.c:514)
==29513==    by 0x53C965F: do_open (libvirt.c:1147)
==29513==    by 0x53C9EBC: virConnectOpen (libvirt.c:1317)
==29513==    by 0x142905: remoteDispatchConnectOpen (remote.c:1215)
==29513==    by 0x126ADF: remoteDispatchConnectOpenHelper (remote_dispatch.h:2346)
==29513==    by 0x5453D21: virNetServerProgramDispatchCall (virnetserverprogram.c:437)
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
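The general pattern of the fix, sketched with plain malloc/free standing in for virBitmapNew/virBitmapFree: whatever is allocated for one NUMA node must be released before the loop moves on to the next one. The helper and loop below are illustrative, not the libvirt code.

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative stand-in: allocates a fresh buffer on every call,
 * the way the per-node CPU map is allocated per iteration. */
static char *
getNodeCpuMap(int node, size_t *len)
{
    (void)node;
    *len = 16;
    return calloc(1, *len);
}

int
main(void)
{
    for (int node = 0; node < 4; node++) {
        size_t len = 0;
        char *cpumap = getNodeCpuMap(node, &len);

        if (!cpumap)
            return EXIT_FAILURE;
        /* ... use cpumap for this node ... */
        memset(cpumap, 0, len);

        free(cpumap);   /* release before the next iteration, not after the loop */
    }
    return EXIT_SUCCESS;
}
```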
-
- 24 Jun, 2014 (1 commit)
-
-
Committed by Michal Privoznik
On the Linux kernel, if huge pages are allocated, the size they cut off from memory is accounted under 'MemUsed' in the meminfo file. However, we want the sum to be subtracted from 'MemTotal'. This patch implements that. After this change, we can enable reporting of the ordinary system pages in the capability XML:

<capabilities>
  <host>
    <uuid>01281cda-f352-cb11-a9db-e905fe22010c</uuid>
    <cpu>
      <arch>x86_64</arch>
      <model>Haswell</model>
      <vendor>Intel</vendor>
      <topology sockets='1' cores='1' threads='1'/>
      <feature/>
      <pages unit='KiB' size='4'/>
      <pages unit='KiB' size='2048'/>
      <pages unit='KiB' size='1048576'/>
    </cpu>
    <power_management/>
    <migration_features/>
    <topology>
      <cells num='4'>
        <cell id='0'>
          <memory unit='KiB'>4048248</memory>
          <pages unit='KiB' size='4'>748382</pages>
          <pages unit='KiB' size='2048'>3</pages>
          <pages unit='KiB' size='1048576'>1</pages>
          <distances/>
          <cpus num='1'>
            <cpu id='0' socket_id='0' core_id='0' siblings='0'/>
          </cpus>
        </cell>
        ...
      </cells>
    </topology>
  </host>
</capabilities>

The beautiful thing about this: if you sum up all the <pages/> you'll get <memory/>.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
- 20 Jun, 2014 (3 commits)
-
-
Committed by Michal Privoznik
The virNodeParseSocket() function tries to get the socket ID from the 'topology/physical_package_id' file. However, on some architectures the file contains the constant -1, which in turn makes libvirt think the info extraction was unsuccessful. If that's the case, we need to overwrite the obtained integer with zero, like we do for other architectures.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Michal Privoznik
As in the previous commit, there are again some places where we can make a runtime decision instead of a compile-time one. This time it's whether 'topology/physical_package_id' is allowed to contain '-1' or not. In addition, the core ID is parsed differently on s390(x) than on the rest of the architectures.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Michal Privoznik
So far we have been making compile-time decisions about which architecture is used. However, for testing purposes it's much easier if we pass the host architecture as a parameter and then let the function decide which code snippet for extracting host CPU info will be used.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
- 19 Jun, 2014 (3 commits)
-
-
Committed by Michal Privoznik
And add stubs to other drivers like lxc, qemu, uml and vbox.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Michal Privoznik
There are two places where you'll find info on page sizes. The first one is under the <cpu/> element, where all supported page sizes are listed. The second one is under each <cell/> element, which refers to a concrete NUMA node; there the size of the page pool is reported. So the capabilities XML looks something like this:

<capabilities>
  <host>
    <uuid>01281cda-f352-cb11-a9db-e905fe22010c</uuid>
    <cpu>
      <arch>x86_64</arch>
      <model>Westmere</model>
      <vendor>Intel</vendor>
      <topology sockets='1' cores='1' threads='1'/>
      ...
      <pages unit='KiB' size='4'/>
      <pages unit='KiB' size='2048'/>
      <pages unit='KiB' size='1048576'/>
    </cpu>
    ...
    <topology>
      <cells num='4'>
        <cell id='0'>
          <memory unit='KiB'>4054408</memory>
          <pages unit='KiB' size='4'>1013602</pages>
          <pages unit='KiB' size='2048'>3</pages>
          <pages unit='KiB' size='1048576'>1</pages>
          <distances/>
          <cpus num='1'>
            <cpu id='0' socket_id='0' core_id='0' siblings='0'/>
          </cpus>
        </cell>
        <cell id='1'>
          <memory unit='KiB'>4071072</memory>
          <pages unit='KiB' size='4'>1017768</pages>
          <pages unit='KiB' size='2048'>3</pages>
          <pages unit='KiB' size='1048576'>1</pages>
          <distances/>
          <cpus num='1'>
            <cpu id='1' socket_id='0' core_id='0' siblings='1'/>
          </cpus>
        </cell>
        ...
      </cells>
    </topology>
    ...
  </host>
  <guest/>
</capabilities>

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Michal Privoznik
For future work we want to get info not only on the free memory but on the overall memory size too. That's why the function must get a new signature as well.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
- 11 Jun, 2014 (1 commit)
-
-
Committed by Eric Blake
Commit 8ba0a58f introduced a compiler warning that I hit during a run of ./autobuild.sh:

../../src/nodeinfo.c: In function 'nodeCapsInitNUMA':
../../src/nodeinfo.c:1853:43: error: 'nsiblings' may be used uninitialized in this function [-Werror=maybe-uninitialized]
     if (virCapabilitiesAddHostNUMACell(caps, n, memory,
                                           ^

Sure enough, nsiblings starts uninitialized, and is set by a call to virNodeCapsGetSiblingInfo, but that function fails to assign through the pointer if virNumaGetDistances fails.
* src/nodeinfo.c (nodeCapsInitNUMA): Initialize nsiblings.
Signed-off-by: Eric Blake <eblake@redhat.com>
-
- 04 Jun, 2014 (1 commit)
-
-
Committed by Michal Privoznik
If a user or management application wants to create a guest, it may be useful to know the cost of internode latencies before the guest's resources are pinned. For example:

<capabilities>
  <host>
    ...
    <topology>
      <cells num='2'>
        <cell id='0'>
          <memory unit='KiB'>4004132</memory>
          <distances>
            <sibling id='0' value='10'/>
            <sibling id='1' value='20'/>
          </distances>
          <cpus num='2'>
            <cpu id='0' socket_id='0' core_id='0' siblings='0'/>
            <cpu id='2' socket_id='0' core_id='2' siblings='2'/>
          </cpus>
        </cell>
        <cell id='1'>
          <memory unit='KiB'>4030064</memory>
          <distances>
            <sibling id='0' value='20'/>
            <sibling id='1' value='10'/>
          </distances>
          <cpus num='2'>
            <cpu id='1' socket_id='0' core_id='0' siblings='1'/>
            <cpu id='3' socket_id='0' core_id='2' siblings='3'/>
          </cpus>
        </cell>
      </cells>
    </topology>
    ...
  </host>
  ...
</capabilities>

We can see the distance from node 1 to node 0 is 20, while within a node it is 10.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
- 29 Apr, 2014 (1 commit)
-
-
Committed by Natanael Copa
This makes sure that errno is reset before readdir is called, even if the loop does a 'continue'. This fixes an issue with musl libc, which sets errno in sscanf; the subsequent 'continue' left errno set when readdir was called.
Signed-off-by: Natanael Copa <ncopa@alpinelinux.org>
Signed-off-by: Eric Blake <eblake@redhat.com>
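A small standalone example of the pattern the fix enforces: clear errno at the top of every iteration, so a 'continue' taken after a call that may have touched errno (such as sscanf on musl) cannot leak a stale value into the end-of-directory error check. The directory and the name matching are illustrative.

```c
#include <dirent.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
    DIR *dir = opendir("/sys/devices/system/cpu");
    struct dirent *ent;
    unsigned int cpu;

    if (!dir) {
        perror("opendir");
        return EXIT_FAILURE;
    }

    for (;;) {
        errno = 0;                   /* reset on every pass */
        if (!(ent = readdir(dir)))
            break;

        if (sscanf(ent->d_name, "cpu%u", &cpu) != 1)
            continue;                /* safe: errno is cleared again above */

        printf("found cpu%u\n", cpu);
    }
    if (errno != 0)                  /* only a real readdir failure ends up here */
        perror("readdir");

    closedir(dir);
    return EXIT_SUCCESS;
}
```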
-
- 25 Mar, 2014 (1 commit)
-
-
Committed by Ján Tomko
-
- 20 Mar, 2014 (1 commit)
-
-
Committed by Wojciech Macek
New functionalities:
- connectGetMaxVcpus - on bhyve, hardcode this value to 16
- nodeGetFreeMemory - do not use physmem_get on FreeBSD, since it might return a wrong value on systems with more than 100GB of RAM
- nodeGetCPUMap - wrapper only for the mapping function, currently not supported by FreeBSD
- nodeSet/GetMemoryParameters - wrappers only for future improvements, currently not supported by FreeBSD
-
- 18 Mar, 2014 (1 commit)
-
-
Committed by Daniel P. Berrange
Any source file which calls the logging APIs now needs to have a VIR_LOG_INIT("source.name") declaration at the start of the file. This provides a static variable of the virLogSource type.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
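For illustration, the declaration looks roughly like this at the top of a source file; the module name used here is an example choice, not necessarily the one used in the actual commit.

```c
#include "virlog.h"

VIR_LOG_INIT("nodeinfo");   /* defines the static virLogSource for this file */
```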
-
- 06 Feb, 2014 (1 commit)
-
-
Committed by Roman Bogorodskiy
The implementation obtains CPU usage information using the kern.cp_time and kern.cp_times sysctl(8) interfaces and reports CPU utilization.
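A minimal FreeBSD-only sketch of reading the aggregate counters through the same sysctl interface; it only prints raw ticks and leaves out the per-CPU kern.cp_times handling and the utilization math, so it illustrates the interface rather than the driver code.

```c
#include <sys/types.h>
#include <sys/resource.h>   /* CPUSTATES, CP_USER ... CP_IDLE */
#include <sys/sysctl.h>
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
    long cp_time[CPUSTATES];
    size_t len = sizeof(cp_time);
    long total = 0;

    /* Aggregate CPU time counters for the whole machine. */
    if (sysctlbyname("kern.cp_time", cp_time, &len, NULL, 0) < 0) {
        perror("sysctlbyname");
        return EXIT_FAILURE;
    }

    for (int i = 0; i < CPUSTATES; i++)
        total += cp_time[i];

    /* Everything except CP_IDLE counts as "busy" time. */
    printf("idle ticks: %ld of %ld total\n", cp_time[CP_IDLE], total);
    return EXIT_SUCCESS;
}
```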
-
- 01 Feb, 2014 (1 commit)
-
-
Committed by John Ferlan
Coverity complains about the default: label in libxl_driver.c not being reachable. That's by design for the code, and since the label isn't necessary and doesn't elicit any compiler or 'make check' warnings, just remove it rather than adding a coverity[dead_error_begin] tag. While I'm at it, lxc_driver.c and nodeinfo.c have the same design, so I removed the default labels and the existing coverity tags there too.
-