- 02 Nov, 2015 1 commit
-
-
Submitted by Jacob Tanenbaum
[root@hp-dl980g7-02 linux]# cpupower monitor
...
5472| 0| 1|******|******|******|******|| 0.00| 0.00| 0.00| 0.00| 0.00 *is offline
10567| 0| 159|******|******|******|******|| 0.00| 0.00| 0.00| 0.00| 0.00 *is offline
1661206560|859272560| 150|******|******|******|******|| 0.00| 0.00| 0.00| 0.00| 0.00 *is offline
1661206560|943093104| 140|******|******|******|******|| 0.00| 0.00| 0.00| 0.00| 0.00 *is offline

Because of this, cpupower also holds an incorrect value for the number of physical packages in the machine.

Changed cpupower to initialize the values of an offline cpu's socket and core to -1, warn the user that one or more cpus is/are offline, and not print statistics for offline cpus.

This fix hides offlined cores where topology cannot be accessed. With a recent kernel patch suggested by Prarit Bhargava it may be possible that soft-offlined cores' topology can still be parsed. This patch would then show which cores in which package/socket are offline, when sane topology information is available.

Signed-off-by: Jacob Tanenbaum <jtanenba@redhat.com>
Signed-off-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
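The change amounts to using a -1 socket/core as the "offline" marker and skipping such CPUs when printing, while warning the user once. Below is a minimal sketch of that idea; the struct fields and helper names are illustrative assumptions, not the actual cpupower source.

```c
/* Sketch only: skip CPUs whose topology could not be read (pkg/core == -1)
 * and warn the user instead of printing bogus statistics. */
#include <stdio.h>

struct cpuid_core_info {
	unsigned int cpu;
	int pkg;    /* physical package id, -1 if the CPU is offline */
	int core;   /* core id, -1 if the CPU is offline             */
};

static void print_cpu_stats(const struct cpuid_core_info *cpus, int nr_cpus)
{
	int offlined = 0;

	for (int i = 0; i < nr_cpus; i++) {
		if (cpus[i].pkg == -1 || cpus[i].core == -1) {
			offlined++;
			continue;    /* do not print stats for offline CPUs */
		}
		printf("%4u | pkg %2d | core %3d\n",
		       cpus[i].cpu, cpus[i].pkg, cpus[i].core);
	}
	if (offlined)
		fprintf(stderr,
			"warning: %d CPU(s) offline, statistics not shown\n",
			offlined);
}

int main(void)
{
	struct cpuid_core_info demo[] = {
		{ .cpu = 0, .pkg = 0,  .core = 0 },
		{ .cpu = 1, .pkg = 0,  .core = 1 },
		{ .cpu = 2, .pkg = -1, .core = -1 },   /* offline */
	};

	print_cpu_stats(demo, 3);
	return 0;
}
```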
-
- 29 Aug, 2015 1 commit
-
-
Submitted by Shreyas B. Prabhu
get_cpu_topology() tries to get topology info from all cpus by reading files in the topology sysfs dir. If a cpu is offlined, it doesn't have a topology dir, so this function fails and returns -1. This causes functions relying on get_cpu_topology() to fail. For example:

$ cpupower monitor
Cannot read number of available processors

Fix this by skipping fetching topology info for offline cpus.

Signed-off-by: Shreyas B. Prabhu <shreyas@linux.vnet.ibm.com>
Reported-by: Pavaman Subramaniyam <pavsubra@linux.vnet.ibm.com>
Acked-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
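A minimal sketch of the skip, assuming the standard sysfs layout and an illustrative helper name (cpu_has_topology is not the actual cpupower function): if a CPU's topology directory is missing, treat it as offline and move on instead of failing the whole topology read.

```c
/* Sketch: probe for the per-CPU topology directory before reading it. */
#include <stdio.h>

#define PATH_TO_CPU "/sys/devices/system/cpu/"

/* Returns 1 if the topology directory exists for this CPU, else 0. */
static int cpu_has_topology(unsigned int cpu)
{
	char path[256];
	FILE *f;

	snprintf(path, sizeof(path),
		 PATH_TO_CPU "cpu%u/topology/core_id", cpu);
	f = fopen(path, "r");
	if (!f)
		return 0;    /* offline CPUs have no topology directory */
	fclose(f);
	return 1;
}

int main(void)
{
	for (unsigned int cpu = 0; cpu < 8; cpu++) {
		if (!cpu_has_topology(cpu)) {
			printf("cpu%u: offline, skipping topology read\n", cpu);
			continue;
		}
		printf("cpu%u: reading topology\n", cpu);
	}
	return 0;
}
```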
-
- 28 Nov, 2012 3 commits
-
-
Submitted by Palmer Cox
The pkgs member of cpupower_topology is being used as the number of cpu packages. As the comment in get_cpu_topology notes, the package ids are not guaranteed to be contiguous. So, simply setting pkgs to the value of the highest physical_package_id doesn't actually provide a count of the number of cpu packages.

Instead, calculate pkgs by setting it to the number of distinct physical_package_id values, which is easy to do after the core_info structs are sorted. Calculating pkgs this way also has the nice benefit of getting rid of a sign-comparison warning that GCC 4.6 was reporting.

Signed-off-by: Palmer Cox <p@lmercox.com>
Signed-off-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
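A minimal sketch of counting distinct package ids after sorting; struct and field names are assumptions rather than the actual cpupower definitions. Because the records are sorted by package id, each transition to a new id marks a new package, even when ids are non-contiguous.

```c
/* Sketch: count distinct physical_package_id values after sorting. */
#include <stdio.h>
#include <stdlib.h>

struct cpuid_core_info {
	int pkg;    /* physical_package_id, -1 for offline CPUs */
	int core;
	unsigned int cpu;
};

static int cmp_pkg(const void *a, const void *b)
{
	const struct cpuid_core_info *x = a, *y = b;
	return x->pkg - y->pkg;
}

static unsigned int count_packages(struct cpuid_core_info *info, int nr_cpus)
{
	unsigned int pkgs = 0;
	int last_pkg = -1;

	qsort(info, nr_cpus, sizeof(*info), cmp_pkg);

	/* Each change of package id in the sorted array is a new package. */
	for (int i = 0; i < nr_cpus; i++) {
		if (info[i].pkg < 0)
			continue;               /* skip offline CPUs */
		if (info[i].pkg != last_pkg) {
			pkgs++;
			last_pkg = info[i].pkg;
		}
	}
	return pkgs;
}

int main(void)
{
	struct cpuid_core_info demo[] = {
		{ .pkg = 0 }, { .pkg = 3 }, { .pkg = 0 }, { .pkg = 3 }, { .pkg = -1 },
	};
	printf("packages: %u\n", count_packages(demo, 5));   /* prints 2 */
	return 0;
}
```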
-
Submitted by Palmer Cox
The cpu_info member of cpupower_topology was being declared as an unnamed structure. This member was then being malloced using the size of the parent cpupower_topology * the number of cpus. This works because cpu_info is smaller than cpupower_topology. However, there is no guarantee that will always be the case.

Making cpu_info its own top-level structure (named cpuid_core_info) allows for mallocing the actual size of this structure. This also lets us get rid of a redefinition of the structure in topology.c with slightly different field names.

Signed-off-by: Palmer Cox <p@lmercox.com>
Signed-off-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
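A sketch of the resulting layout, under the assumption of illustrative field names (only cpuid_core_info itself is named in the commit message): with a named top-level struct, the per-CPU array can be allocated with its own element size rather than the parent's.

```c
/* Sketch: named per-CPU struct allocated with its own size. */
#include <stdlib.h>

struct cpuid_core_info {                     /* named, top-level definition */
	int pkg;
	int core;
	unsigned int cpu;
};

struct cpupower_topology {
	unsigned int cores;
	unsigned int pkgs;
	unsigned int threads;
	struct cpuid_core_info *core_info;   /* was an unnamed struct member */
};

static int topology_alloc(struct cpupower_topology *t, int nr_cpus)
{
	/* Allocate sizeof(struct cpuid_core_info) per CPU, not the parent's size. */
	t->core_info = calloc(nr_cpus, sizeof(struct cpuid_core_info));
	return t->core_info ? 0 : -1;
}

int main(void)
{
	struct cpupower_topology topo = { 0 };

	if (topology_alloc(&topo, 8) == 0)
		free(topo.core_info);
	return 0;
}
```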
-
Submitted by Palmer Cox
Fix a variety of issues with sysfs_topology_read_file:

* The return value of sysfs_topology_read_file was not properly being checked for failure.
* The function was reading int-valued sysfs variables and then returning their value. So even if a caller tried to check the return value of this function, it could not tell a failure code apart from a legitimately negative value. This also conflicted with the comment on the function, which said that a return value of 0 indicated success.
* The function was parsing int-valued sysfs values with strtoul instead of strtol.
* The function was non-static even though it was only used in the file it was declared in.

Signed-off-by: Palmer Cox <p@lmercox.com>
Signed-off-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
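A minimal sketch of the corrected pattern, not the real sysfs_topology_read_file signature (the helper name and error codes here are assumptions): a static reader that returns 0 on success or a negative error code, and hands the parsed value (which may itself be negative, e.g. -1 for offline CPUs) back through an out-parameter, using strtol for a signed parse.

```c
/* Sketch: separate error reporting from the (possibly negative) value read. */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

static int sysfs_read_int(const char *path, int *result)
{
	char buf[64], *end;
	long val;
	FILE *f;

	f = fopen(path, "r");
	if (!f)
		return -ENOENT;                  /* e.g. the CPU is offline */
	if (!fgets(buf, sizeof(buf), f)) {
		fclose(f);
		return -EIO;
	}
	fclose(f);

	errno = 0;
	val = strtol(buf, &end, 10);             /* signed: -1 is a valid value */
	if (end == buf || errno)
		return -EINVAL;

	*result = (int)val;
	return 0;                                 /* 0 really means success */
}

int main(void)
{
	int core_id;

	if (sysfs_read_int("/sys/devices/system/cpu/cpu0/topology/core_id",
			   &core_id) == 0)
		printf("cpu0 core_id = %d\n", core_id);
	else
		fprintf(stderr, "could not read core_id\n");
	return 0;
}
```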
-
- 16 Aug, 2011 1 commit
-
-
Submitted by Thomas Renninger
Before, offlined CPUs were detected crudely: the code only checked whether topology parsing returned -1 values. But that is a valid case on Xen (and possibly other) kernels.

Do proper online/offline checking, and also take the CONFIG_HOTPLUG_CPU option into account (no /sys/devices/../cpuX/online file).

Signed-off-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
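A minimal sketch of such a check, assuming the standard sysfs path and an illustrative helper name: read the per-CPU online file when it exists, and treat a missing file (kernel built without CONFIG_HOTPLUG_CPU, or a CPU that cannot be hot-unplugged) as online rather than as an error.

```c
/* Sketch: online/offline detection via the sysfs "online" attribute. */
#include <stdio.h>

static int cpu_is_online(unsigned int cpu)
{
	char path[128];
	FILE *f;
	int online;

	snprintf(path, sizeof(path),
		 "/sys/devices/system/cpu/cpu%u/online", cpu);
	f = fopen(path, "r");
	if (!f)
		return 1;    /* no online file: CPU cannot be hot-unplugged */
	if (fscanf(f, "%d", &online) != 1)
		online = 1;
	fclose(f);
	return online;
}

int main(void)
{
	for (unsigned int cpu = 0; cpu < 4; cpu++)
		printf("cpu%u: %s\n", cpu,
		       cpu_is_online(cpu) ? "online" : "offline");
	return 0;
}
```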
-
- 30 Jul, 2011 2 commits
-
-
Submitted by Dominik Brodowski
Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
-
Submitted by Dominik Brodowski
CPU power consumption vs. performance tuning is no longer limited to CPU frequency switching: deep sleep states, traditional dynamic frequency scaling and hidden turbo/boost frequencies are tied closely together and depend on each other. The first two exist on different architectures like PPC, Itanium and ARM; the latter (so far) only on X86. On X86, the APU (CPU+GPU) will only run most efficiently if CPU and GPU have proper power management in place.

Users and developers want *one* tool to get an overview of what their system supports and to monitor and debug CPU power management in detail. The tool should compile and work on as many architectures as possible.

Once this tool stabilizes a bit, it is intended to replace the Intel-specific tools in tools/power/x86.

Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
-