Commit 02c3de11 authored by Linus Torvalds

Merge tag 'pm-4.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management updates from Rafael Wysocki:
 "The majority of changes go into the Operating Performance Points (OPP)
  framework and cpufreq this time, followed by devfreq and some
  scattered updates all over.

  The OPP changes are mostly related to switching over from RCU-based
  synchronization, that turned out to be overly complicated and
  problematic, to reference counting using krefs.

  In the cpufreq land there are core cleanups, documentation updates, a
  new driver for Broadcom BMIPS SoCs, a new cpufreq-dt sub-driver for TI
  SoCs that require special handling, ARM64 SoCs support for the qoriq
  driver, intel_pstate updates, powernv driver update and assorted
  fixes.

  The devfreq changes are mostly fixes related to the sysfs interface
  and some Exynos drivers updates.

  Apart from that, the cpuidle menu governor will support per-CPU PM QoS
  constraints for the wakeup latency now, some bugs in the wakeup IRQs
  framework are fixed, the generic power domains framework should handle
  asynchronous invocations of *noirq suspend/resume callbacks from now
  on, the analyze_suspend.py script is updated and there is a new tool
  for intel_pstate diagnostics.

  Specifics:

   - Operating Performance Points (OPP) framework fixes, cleanups and
     switch over from RCU-based synchronization to reference counting
     using krefs (Viresh Kumar, Wei Yongjun, Dave Gerlach)

   - cpufreq core cleanups and documentation updates (Viresh Kumar,
     Rafael Wysocki)

   - New cpufreq driver for Broadcom BMIPS SoCs (Markus Mayer)

   - New cpufreq-dt sub-driver for TI SoCs requiring special handling,
     like in the AM335x, AM437x, DRA7x, and AM57x families, along with
     new DT bindings for it (Dave Gerlach, Paul Gortmaker)

   - ARM64 SoCs support for the qoriq cpufreq driver (Tang Yuantian)

   - intel_pstate driver updates including a new sysfs knob to control
     the driver's operation mode and fixes related to the no_turbo sysfs
     knob and the hardware-managed P-states feature support (Rafael
     Wysocki, Srinivas Pandruvada)

   - New interface to export ultra-turbo frequencies for the powernv
     cpufreq driver (Shilpasri Bhat)

   - Assorted fixes for cpufreq drivers (Arnd Bergmann, Dan Carpenter,
     Wei Yongjun)

   - devfreq core fixes, mostly related to the sysfs interface exported
     by it (Chanwoo Choi, Chris Diamand)

   - Updates of the exynos-bus and exynos-ppmu devfreq drivers (Chanwoo
     Choi)

   - Device PM QoS extension to support CPUs and support for per-CPU
     wakeup (device resume) latency constraints in the cpuidle menu
     governor (Alex Shi)

   - Wakeup IRQs framework fixes (Grygorii Strashko)

   - Generic power domains framework update including a fix to make it
     handle asynchronous invocations of *noirq suspend/resume callbacks
     correctly (Ulf Hansson, Geert Uytterhoeven)

   - Assorted fixes and cleanups in the core suspend/hibernate code, PM
     QoS framework and x86 ACPI idle support code (Corentin Labbe, Geert
     Uytterhoeven, Geliang Tang, John Keeping, Nick Desaulniers)

   - Update of the analyze_suspend.py script to version 4.5 offering
     multiple improvements (Todd Brandt)

   - New tool for intel_pstate diagnostics using the pstate_sample
     tracepoint (Doug Smythies)"

* tag 'pm-4.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (85 commits)
  MAINTAINERS: cpufreq: add bmips-cpufreq.c
  PM / QoS: Fix memory leak on resume_latency.notifiers
  PM / Documentation: Spelling s/wrtie/write/
  PM / sleep: Fix test_suspend after sleep state rework
  cpufreq: CPPC: add ACPI_PROCESSOR dependency
  cpufreq: make ti-cpufreq explicitly non-modular
  cpufreq: Do not clear real_cpus mask on policy init
  tools/power/x86: Debug utility for intel_pstate driver
  AnalyzeSuspend: fix drag and zoom bug in javascript
  PM / wakeirq: report a wakeup_event on dedicated wakeup irq
  PM / wakeirq: Fix spurious wake-up events for dedicated wakeirqs
  PM / wakeirq: Enable dedicated wakeirq for suspend
  cpufreq: dt: Don't use generic platdev driver for ti-cpufreq platforms
  cpufreq: ti: Add cpufreq driver to determine available OPPs at runtime
  Documentation: dt: add bindings for ti-cpufreq
  PM / OPP: Expose _of_get_opp_desc_node as dev_pm_opp API
  cpufreq: qoriq: Don't look at clock implementation details
  cpufreq: qoriq: add ARM64 SoCs support
  PM / Domains: Provide dummy governors if CONFIG_PM_GENERIC_DOMAINS=n
  cpufreq: brcmstb-avs-cpufreq: remove unnecessary platform_set_drvdata()
  ...
What:		/sys/class/devfreq-event/event(x)/
Date:		January 2017
Contact:	Chanwoo Choi <cw00.choi@samsung.com>
Description:
		Provide a place in sysfs for the devfreq-event objects.
		This allows accessing various devfreq-event specific
		variables. The name of a devfreq-event object is denoted
		as 'event(x)', where 'x' is a unique number identifying
		each devfreq-event object.

What:		/sys/class/devfreq-event/event(x)/name
Date:		January 2017
Contact:	Chanwoo Choi <cw00.choi@samsung.com>
Description:
		The /sys/class/devfreq-event/event(x)/name attribute
		contains the name of the devfreq-event object. This
		attribute is read-only.

What:		/sys/class/devfreq-event/event(x)/enable_count
Date:		January 2017
Contact:	Chanwoo Choi <cw00.choi@samsung.com>
Description:
		The /sys/class/devfreq-event/event(x)/enable_count
		attribute contains the reference count of the
		devfreq-event object. If the device is enabled, the
		value of the attribute is greater than zero.
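The kernel-side counterpart of enable_count is the devfreq-event API in
include/linux/devfreq-event.h. A minimal, hypothetical sketch (the demo_*
name is invented here) of a consumer driver enabling an event device, which
is what drives enable_count above zero:

    #include <linux/devfreq-event.h>
    #include <linux/device.h>

    /* Grab the first event device referenced by this device's DT node
     * and enable it; each enable bumps the enable_count shown in sysfs. */
    static int demo_enable_event(struct device *dev)
    {
        struct devfreq_event_dev *edev;
        int ret;

        edev = devfreq_event_get_edev_by_phandle(dev, 0);
        if (IS_ERR(edev))
            return PTR_ERR(edev);

        ret = devfreq_event_enable_edev(edev);    /* enable_count++ */
        if (ret)
            return ret;

        /* ... read counters with devfreq_event_get_event() ... */

        return devfreq_event_disable_edev(edev);  /* enable_count-- */
    }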
@@ -8,6 +8,8 @@

		    Dominik Brodowski  <linux@brodo.de>
		    David Kimdon <dwhedon@debian.org>
		    Rafael J. Wysocki <rafael.j.wysocki@intel.com>
		    Viresh Kumar <viresh.kumar@linaro.org>

@@ -36,10 +38,11 @@
speed limits (like LCD drivers on ARM architecture). Additionally, the
kernel "constant" loops_per_jiffy is updated on frequency changes
here.

Reference counting of the cpufreq policies is done by cpufreq_cpu_get
and cpufreq_cpu_put, which make sure that the cpufreq driver is
correctly registered with the core, and will not be unloaded until
cpufreq_cpu_put is called. That also ensures that the respective cpufreq
policy doesn't get freed while being used.

2. CPUFreq notifiers
====================

@@ -69,18 +72,16 @@
CPUFreq policy notifier is called twice for a policy transition:
The phase is specified in the second argument to the notifier.

The third argument, a void *pointer, points to a struct cpufreq_policy
consisting of several values, including min and max (the lower and
upper frequencies, in kHz, of the new policy).

2.2 CPUFreq transition notifiers
--------------------------------

These are notified twice for each online CPU in the policy, when the
CPUfreq driver switches the CPU core frequency and this change has any
external implications.

The second argument specifies the phase - CPUFREQ_PRECHANGE or
CPUFREQ_POSTCHANGE.

@@ -90,6 +91,7 @@
values:
cpu	- number of the affected CPU
old	- old frequency
new	- new frequency
flags	- flags of the cpufreq driver

3. CPUFreq Table Generation with Operating Performance Point (OPP)
==================================================================
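As an illustration of the transition notifiers described above, a minimal
sketch of a module registering one against the 4.11 API (the demo_* names
are invented for illustration, not part of this patch):

    #include <linux/cpufreq.h>
    #include <linux/module.h>
    #include <linux/notifier.h>

    /* Called once for CPUFREQ_PRECHANGE and once for CPUFREQ_POSTCHANGE. */
    static int demo_transition(struct notifier_block *nb,
                               unsigned long phase, void *data)
    {
        struct cpufreq_freqs *freqs = data;

        if (phase == CPUFREQ_POSTCHANGE)
            pr_info("cpu%u: %u kHz -> %u kHz\n",
                    freqs->cpu, freqs->old, freqs->new);

        return NOTIFY_OK;
    }

    static struct notifier_block demo_nb = {
        .notifier_call = demo_transition,
    };

    static int __init demo_init(void)
    {
        return cpufreq_register_notifier(&demo_nb,
                                         CPUFREQ_TRANSITION_NOTIFIER);
    }

    static void __exit demo_exit(void)
    {
        cpufreq_unregister_notifier(&demo_nb,
                                    CPUFREQ_TRANSITION_NOTIFIER);
    }

    module_init(demo_init);
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");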
......
@@ -9,6 +9,8 @@

		Dominik Brodowski  <linux@brodo.de>
		Rafael J. Wysocki <rafael.j.wysocki@intel.com>
		Viresh Kumar <viresh.kumar@linaro.org>
@@ -49,49 +51,65 @@
using cpufreq_register_driver()

What shall this struct cpufreq_driver contain?

 .name - The name of this driver.

 .init - A pointer to the per-policy initialization function.

 .verify - A pointer to a "verification" function.

 .setpolicy _or_ .fast_switch _or_ .target _or_ .target_index - See
 below on the differences.

And optionally

 .flags - Hints for the cpufreq core.

 .driver_data - cpufreq driver specific data.

 .resolve_freq - Returns the most appropriate frequency for a target
 frequency. Doesn't change the frequency though.

 .get_intermediate and target_intermediate - Used to switch to a stable
 frequency while changing the CPU frequency.

 .get - Returns the current frequency of the CPU.

 .bios_limit - Returns HW/BIOS max frequency limitations for the CPU.

 .exit - A pointer to a per-policy cleanup function called during the
 CPU_POST_DEAD phase of the cpu hotplug process.

 .stop_cpu - A pointer to a per-policy stop function called during the
 CPU_DOWN_PREPARE phase of the cpu hotplug process.

 .suspend - A pointer to a per-policy suspend function which is called
 with interrupts disabled and _after_ the governor is stopped for the
 policy.

 .resume - A pointer to a per-policy resume function which is called
 with interrupts disabled and _before_ the governor is started again.

 .ready - A pointer to a per-policy ready function which is called after
 the policy is fully initialized.

 .attr - A pointer to a NULL-terminated list of "struct freq_attr" which
 allows exporting values to sysfs.

 .boost_enabled - If set, boost frequencies are enabled.

 .set_boost - A pointer to a per-policy function to enable/disable boost
 frequencies.
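As a rough sketch of how these callbacks fit together against the 4.11 API
(the foo_* names and frequencies are hypothetical, not taken from this
patch), a minimal target_index-style driver might look like:

    #include <linux/cpufreq.h>
    #include <linux/module.h>

    static struct cpufreq_frequency_table foo_freq_table[] = {
        { .driver_data = 0, .frequency =  500000 },    /* 500 MHz */
        { .driver_data = 1, .frequency = 1000000 },    /* 1 GHz */
        { .frequency = CPUFREQ_TABLE_END },
    };

    /* Called once per policy, not per CPU. */
    static int foo_init(struct cpufreq_policy *policy)
    {
        return cpufreq_table_validate_and_show(policy, foo_freq_table);
    }

    static int foo_target_index(struct cpufreq_policy *policy,
                                unsigned int index)
    {
        /* Program the hardware to foo_freq_table[index].frequency here. */
        return 0;
    }

    static struct cpufreq_driver foo_driver = {
        .name         = "foo-cpufreq",
        .init         = foo_init,
        .verify       = cpufreq_generic_frequency_table_verify,
        .target_index = foo_target_index,
        .attr         = cpufreq_generic_attr,
    };

    static int __init foo_driver_init(void)
    {
        return cpufreq_register_driver(&foo_driver);
    }
    module_init(foo_driver_init);
    MODULE_LICENSE("GPL");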
1.2 Per-CPU Initialization
--------------------------

Whenever a new CPU is registered with the device model, or after the
cpufreq driver registers itself, the per-policy initialization function
cpufreq_driver.init is called if no cpufreq policy existed for the CPU.
Note that the .init() and .exit() routines are called only once for the
policy and not for each CPU managed by the policy. It takes a struct
cpufreq_policy *policy as argument. What to do now?
If necessary, activate the CPUfreq support on your CPU.

@@ -117,47 +135,45 @@
policy->governor		must contain the "default policy" for
				this CPU. A few moments later,
				cpufreq_driver.setpolicy or
				cpufreq_driver.target/target_index is called
				with these values.
policy->cpus			Update this with the masks of the
				(online + offline) CPUs that do DVFS
				along with this CPU (i.e. that share
				clock/voltage rails with it).

For setting some of these values (cpuinfo.min[max]_freq, policy->min[max]), the
frequency table helpers might be helpful. See section 2 for more information
on them.
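For a hypothetical SoC where all CPUs share one clock, the .init() routine
described above might look like this sketch (reusing the invented
foo_freq_table from the earlier example; the latency value is made up):

    static int foo_init(struct cpufreq_policy *policy)
    {
        /* All CPUs DVFS together: put every possible CPU in the mask;
         * the core derives policy->related_cpus from it. */
        cpumask_setall(policy->cpus);

        policy->cpuinfo.transition_latency = 300 * 1000;  /* 300 us, in ns */

        /* Validates the table and fills in cpuinfo.min_freq/max_freq. */
        return cpufreq_table_validate_and_show(policy, foo_freq_table);
    }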
1.3 verify
----------

When the user decides a new policy (consisting of
"policy,governor,min,max") shall be set, this policy must be validated
so that incompatible values can be corrected. For verifying these
values, the cpufreq_verify_within_limits(struct cpufreq_policy *policy,
unsigned int min_freq, unsigned int max_freq) function might be helpful.
See section 2 for details on frequency table helpers.

You need to make sure that at least one valid frequency (or operating
range) is within policy->min and policy->max. If necessary, increase
policy->max first, and only if this is no solution, decrease policy->min.
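A sketch of a ->verify() callback under the assumption of fixed hardware
limits (the 500 MHz and 1 GHz values are hypothetical):

    static int foo_verify(struct cpufreq_policy *policy)
    {
        /* Clamp the requested limits to what the silicon supports (kHz). */
        cpufreq_verify_within_limits(policy, 500000, 1000000);
        return 0;
    }

Drivers that set up a frequency table in ->init() can usually just point
->verify at the generic cpufreq_generic_frequency_table_verify() helper
instead.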
1.4 target or target_index or setpolicy or fast_switch?
-------------------------------------------------------

Most cpufreq drivers or even most cpu frequency scaling algorithms
only allow the CPU frequency to be set to predefined fixed values. For
these, you use the ->target(), ->target_index() or ->fast_switch()
callbacks.

Some cpufreq capable processors switch the frequency between certain
limits on their own. These shall use the ->setpolicy() callback.

1.5. target/target_index
------------------------

The target_index call has two arguments: struct cpufreq_policy *policy,
and unsigned int index (into the exposed frequency table).
@@ -186,9 +202,20 @@
actual frequency must be determined using the following rules:

Here again the frequency table helper might assist you - see section 2
for details.

1.6. fast_switch
----------------

This function is used for frequency switching from the scheduler's
context. Not all drivers are expected to implement it, as sleeping from
within this callback isn't allowed. This callback must be highly
optimized to do switching as fast as possible.

This function has two arguments: struct cpufreq_policy *policy and
unsigned int target_frequency.
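A hypothetical ->fast_switch() sketch: it resolves the target to a table
entry and must program it without sleeping (foo_* is invented; a real
driver would write the frequency through some fast MMIO or MSR interface):

    static unsigned int foo_fast_switch(struct cpufreq_policy *policy,
                                        unsigned int target_freq)
    {
        /* Lowest table frequency at or above target_freq. */
        int index = cpufreq_table_find_index_l(policy, target_freq);

        /* Write the new frequency via a non-sleeping interface here. */

        return policy->freq_table[index].frequency;
    }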
1.7 setpolicy
-------------

The setpolicy call only takes a struct cpufreq_policy *policy as
argument. You need to set the lower limit of the in-processor or

@@ -198,7 +225,7 @@
setting when policy->policy is CPUFREQ_POLICY_PERFORMANCE, and a
powersaving-oriented setting when CPUFREQ_POLICY_POWERSAVE. Also check
the reference implementation in drivers/cpufreq/longrun.c

1.8 get_intermediate and target_intermediate
--------------------------------------------

Only for drivers with target_index() and CPUFREQ_ASYNC_NOTIFICATION unset.
@@ -222,42 +249,36 @@
failures as core would send notifications for that.

As most cpufreq processors only allow for being set to a few specific
frequencies, a "frequency table" with some functions might assist in
some work of the processor driver. Such a "frequency table" consists of
an array of struct cpufreq_frequency_table entries, with driver specific
values in "driver_data", the corresponding frequency in "frequency" and
flags set. At the end of the table, you need to add a
cpufreq_frequency_table entry with frequency set to CPUFREQ_TABLE_END.
And if you want to skip one entry in the table, set the frequency to
CPUFREQ_ENTRY_INVALID. The entries don't need to be sorted in any
particular order, but if they are, the cpufreq core will do DVFS a bit
more quickly, as the search for the best match is faster.

By calling cpufreq_table_validate_and_show(), the cpuinfo.min_freq and
cpuinfo.max_freq values are detected, and policy->min and policy->max
are set to the same values. This is helpful for the per-CPU
initialization stage.

cpufreq_frequency_table_verify() assures that at least one valid
frequency is within policy->min and policy->max, and all other criteria
are met. This is helpful for the ->verify call.

cpufreq_frequency_table_target() is the corresponding frequency table
helper for the ->target stage. Just pass the values to this function,
and it returns the index of the frequency table entry which contains
the frequency the CPU shall be set to.
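For instance, a hypothetical ->target() implementation could lean on that
helper like this (a sketch, assuming policy->freq_table was set up in
->init(); foo_* is invented):

    static int foo_target(struct cpufreq_policy *policy,
                          unsigned int target_freq, unsigned int relation)
    {
        int index = cpufreq_frequency_table_target(policy, target_freq,
                                                   relation);

        /* Program the hardware to policy->freq_table[index].frequency. */
        return 0;
    }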
The following macros can be used as iterators over cpufreq_frequency_table:

cpufreq_for_each_entry(pos, table) - iterates over all entries of the
frequency table.

cpufreq_for_each_valid_entry(pos, table) - iterates over all entries,
excluding CPUFREQ_ENTRY_INVALID frequencies.

Use the arguments "pos" - a cpufreq_frequency_table * as a loop cursor and
"table" - the cpufreq_frequency_table * you want to iterate over.
......
@@ -35,9 +35,9 @@
cpufreq stats provides following statistics (explained in detail below).
-  trans_table

All the statistics will be from the time the stats driver has been inserted
(or the time the stats were reset) to the time when a read of a particular
statistic is done. Obviously, the stats driver will not have any information
about the frequency transitions before the stats driver insertion.

--------------------------------------------------------------------------------
<mysystem>:/sys/devices/system/cpu/cpu0/cpufreq/stats # ls -l

@@ -110,25 +110,13 @@
Config Main Menu
	CPU Frequency scaling  --->
		[*] CPU Frequency scaling
		[*]   CPU frequency translation statistics

"CPU Frequency scaling" (CONFIG_CPU_FREQ) should be enabled to configure
cpufreq-stats.

"CPU frequency translation statistics" (CONFIG_CPU_FREQ_STAT) provides the
statistics, which include time_in_state, total_trans and trans_table.

Once this option is enabled and your CPU supports cpufrequency, you
will be able to see the CPU frequency statistics in /sysfs.
@@ -10,6 +10,8 @@

		Dominik Brodowski  <linux@brodo.de>
		some additions and corrections by Nico Golde <nico@ngolde.de>
		Rafael J. Wysocki <rafael.j.wysocki@intel.com>
		Viresh Kumar <viresh.kumar@linaro.org>

@@ -28,32 +30,27 @@
Contents:
2.3  Userspace
2.4  Ondemand
2.5  Conservative
2.6  Schedutil
3.   The Governor Interface in the CPUfreq Core
4.   References
1. What Is A CPUFreq Governor?
==============================

Most cpufreq drivers (except the intel_pstate and longrun) or even most
cpu frequency scaling algorithms only allow the CPU frequency to be set
to predefined fixed values. In order to offer dynamic frequency
scaling, the cpufreq core must be able to tell these drivers of a
"target frequency". So these specific drivers will be transformed to
offer a "->target/target_index/fast_switch()" call instead of the
"->setpolicy()" call. For set_policy drivers, all stays the same,
though.

How to decide what frequency within the CPUfreq policy should be used?
That's done using "cpufreq governors".
Basically, it's the following flow graph:

@@ -71,7 +68,7 @@
CPU can be set to switch independently |         CPU can only be set
      within specific "limits"         |       to specific frequencies

		    /			the limits of policy->{min,max}
		   /				\
		  /				 \
	Using the ->setpolicy call,	Using the ->target/target_index/fast_switch call,
	    the limits and the		 the frequency closest
	     "policy" is set.		 to target_freq is set.
					 It is assured that it
@@ -109,114 +106,159 @@
directory.

2.4 Ondemand
------------

The CPUfreq governor "ondemand" sets the CPU frequency depending on the
current system load. Load estimation is triggered by the scheduler
through the update_util_data->func hook; when triggered, cpufreq checks
the CPU-usage statistics over the last period and the governor sets the
CPU accordingly. The CPU must have the capability to switch the
frequency very quickly.

Sysfs files:

* sampling_rate:

  Measured in uS (10^-6 seconds), this is how often you want the kernel
  to look at the CPU usage and to make decisions on what to do about the
  frequency. Typically this is set to values of around '10000' or more.
  Its default value is (cmp. with users-guide.txt): transition_latency
  * 1000. Be aware that transition latency is in ns and sampling_rate
  is in us, so you get the same sysfs value by default. Sampling rate
  should always get adjusted considering the transition latency; to set
  the sampling rate 750 times as high as the transition latency in the
  bash (as said, 1000 is default), do:

  $ echo $(($(cat cpuinfo_transition_latency) * 750 / 1000)) > ondemand/sampling_rate

* sampling_rate_min:

  The sampling rate is limited by the HW transition latency:
  transition_latency * 100

  Or by kernel restrictions:
  - If CONFIG_NO_HZ_COMMON is set, the limit is 10ms fixed.
  - If CONFIG_NO_HZ_COMMON is not set or the nohz=off boot parameter is
    used, the limits depend on the CONFIG_HZ option:
    HZ=1000: min=20000us  (20ms)
    HZ=250:  min=80000us  (80ms)
    HZ=100:  min=200000us (200ms)

  The highest value of kernel and HW latency restrictions is shown and
  used as the minimum sampling rate.

* up_threshold:

  This defines what the average CPU usage between the samplings of
  'sampling_rate' needs to be for the kernel to make a decision on
  whether it should increase the frequency. For example, when it is set
  to its default value of '95' it means that between the checking
  intervals the CPU needs to be on average more than 95% in use to then
  decide that the CPU frequency needs to be increased.

* ignore_nice_load:

  This parameter takes a value of '0' or '1'. When set to '0' (its
  default), all processes are counted towards the 'cpu utilisation'
  value. When set to '1', processes that are run with a 'nice' value
  will not count (and thus be ignored) in the overall usage calculation.
  This is useful if you are running a CPU intensive calculation on your
  laptop that you do not care how long it takes to complete, as you can
  'nice' it and prevent it from taking part in the deciding process of
  whether to increase your CPU frequency.

* sampling_down_factor:

  This parameter controls the rate at which the kernel makes a decision
  on when to decrease the frequency while running at top speed. When set
  to 1 (the default), decisions to reevaluate load are made at the same
  interval regardless of current clock speed. But when set to greater
  than 1 (e.g. 100), it acts as a multiplier for the scheduling interval
  for reevaluating load when the CPU is at its top speed due to high
  load. This improves performance by reducing the overhead of load
  evaluation and helping the CPU stay at its top speed when truly busy,
  rather than shifting back and forth in speed. This tunable has no
  effect on behavior at lower speeds/lower CPU loads.

* powersave_bias:

  This parameter takes a value between 0 to 1000. It defines the
  percentage (times 10) value of the target frequency that will be
  shaved off of the target. For example, when set to 100 -- 10%, when
  the ondemand governor would have targeted 1000 MHz, it will target
  1000 MHz - (10% of 1000 MHz) = 900 MHz instead. This is set to 0
  (disabled) by default.

  When the AMD frequency sensitivity powersave bias driver --
  drivers/cpufreq/amd_freq_sensitivity.c -- is loaded, this parameter
  defines the workload frequency sensitivity threshold at which a lower
  frequency is chosen instead of the ondemand governor's original target.
  The frequency sensitivity is a hardware reported (on AMD Family 16h
  Processors and above) value between 0 to 100% that tells software how
  the performance of the workload running on a CPU will change when the
  frequency changes. A workload with sensitivity of 0% (memory/IO-bound)
  will not perform any better at a higher core frequency, whereas a
  workload with sensitivity of 100% (CPU-bound) will perform better the
  higher the frequency. When the driver is loaded, this is set to 400 by
  default -- for CPUs running workloads with a sensitivity value below
  40%, a lower frequency is chosen. Unloading the driver or writing 0
  will disable this feature.
2.5 Conservative
----------------

The CPUfreq governor "conservative", much like the "ondemand"
governor, sets the CPU frequency depending on the current usage. It
differs in behaviour in that it gracefully increases and decreases the
CPU speed rather than jumping to max speed the moment there is any load
on the CPU. This behaviour is more suitable in a battery powered
environment. The governor is tweaked in the same manner as the
"ondemand" governor through sysfs with the addition of:

* freq_step:

  This describes what percentage steps the cpu freq should be increased
  and decreased smoothly by. By default the cpu frequency will increase
  in 5% chunks of your maximum cpu frequency. You can change this value
  to anywhere between 0 and 100, where '0' will effectively lock your CPU
  at a speed regardless of its load whilst '100' will, in theory, make
  it behave identically to the "ondemand" governor.

* down_threshold:

  Same as the 'up_threshold' found for the "ondemand" governor but for
  the opposite direction. For example, when set to its default value of
  '20' it means that the CPU usage needs to be below 20% between
  samples to have the frequency decreased.

* sampling_down_factor:

  Similar functionality as in the "ondemand" governor. But in
  "conservative", it controls the rate at which the kernel makes a
  decision on when to decrease the frequency while running at any speed.
  Load for frequency increase is still evaluated every sampling rate.
2.6 Schedutil
-------------
The "schedutil" governor aims at better integration with the Linux
kernel scheduler. Load estimation is achieved through the scheduler's
Per-Entity Load Tracking (PELT) mechanism, which also provides
information about the recent load [1]. This governor currently does
load based DVFS only for tasks managed by CFS. RT and DL scheduler tasks
are always run at the highest frequency. Unlike all the other
governors, the code is located under the kernel/sched/ directory.
Sysfs files:
* rate_limit_us:
This contains a value in microseconds. The governor waits for
rate_limit_us time before reevaluating the load again, after it has
evaluated the load once.
For an in-depth comparison with the other governors refer to [2].
3. The Governor Interface in the CPUfreq Core
=============================================
@@ -225,26 +267,10 @@
A new governor must register itself with the CPUfreq core using
"cpufreq_register_governor". The struct cpufreq_governor, which has to
be passed to that function, must contain the following values:

governor->name - A unique name for this governor.
governor->owner - THIS_MODULE for the governor module (if appropriate).

plus a set of hooks to the functions implementing the governor's logic.
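A skeleton of such a governor, sketched against the 4.11 hooks
(start/stop/limits); the foo_* bodies are hypothetical and do no real
load tracking:

    #include <linux/cpufreq.h>
    #include <linux/module.h>

    static int foo_gov_start(struct cpufreq_policy *policy)
    {
        /* Allocate per-policy state and start load evaluation here. */
        return 0;
    }

    static void foo_gov_stop(struct cpufreq_policy *policy)
    {
        /* Stop load evaluation and free per-policy state here. */
    }

    static void foo_gov_limits(struct cpufreq_policy *policy)
    {
        /* policy->min/max changed; re-apply a frequency within them.
         * policy->rwsem is held here, so use the unlocked variant. */
        __cpufreq_driver_target(policy, policy->max, CPUFREQ_RELATION_H);
    }

    static struct cpufreq_governor foo_governor = {
        .name  = "foo",
        .owner = THIS_MODULE,
        .start = foo_gov_start,
        .stop  = foo_gov_stop,
        .limits = foo_gov_limits,
    };

    static int __init foo_gov_init(void)
    {
        return cpufreq_register_governor(&foo_governor);
    }
    module_init(foo_gov_init);
    MODULE_LICENSE("GPL");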
The CPUfreq governor may call the CPU processor driver using one of
these two functions:

@@ -258,12 +284,18 @@
int __cpufreq_driver_target(struct cpufreq_policy *policy,
			    unsigned int target_freq,
			    unsigned int relation);

target_freq must be within policy->min and policy->max, of course.

What's the difference between these two functions? When your governor is
in a direct code path of a call to governor callbacks, like
governor->start(), the policy->rwsem is still held in the cpufreq core,
and there's no need to lock it again (in fact, this would cause a
deadlock). So use __cpufreq_driver_target only in these cases. In all
other cases (for example, when there's a "daemonized" function that
wakes up every second), use cpufreq_driver_target to take policy->rwsem
before the command is passed to the cpufreq driver.
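In short (target_freq here is a hypothetical value within the policy
limits):

    /* From governor-owned context (kthread, workqueue): rwsem not held. */
    cpufreq_driver_target(policy, target_freq, CPUFREQ_RELATION_L);

    /* From inside a governor callback such as ->limits(): rwsem held. */
    __cpufreq_driver_target(policy, target_freq, CPUFREQ_RELATION_L);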
4. References
=============
[1] Per-entity load tracking: https://lwn.net/Articles/531853/
[2] Improvements in CPU frequency management: https://lwn.net/Articles/682391/
@@ -18,16 +18,29 @@
Documents in this directory:
----------------------------

amd-powernow.txt -	AMD powernow driver specific file.

boost.txt -		Frequency boosting support.

core.txt -		General description of the CPUFreq core and
			of CPUFreq notifiers.

cpu-drivers.txt -	How to implement a new cpufreq processor driver.

cpufreq-nforce2.txt -	nVidia nForce2 platform specific file.

cpufreq-stats.txt -	General description of sysfs cpufreq stats.

governors.txt -		What are cpufreq governors and how to
			implement them?

index.txt -		File index, Mailing list and Links (this document)

intel-pstate.txt -	Intel pstate cpufreq driver specific file.

pcc-cpufreq.txt -	PCC cpufreq driver specific file.

user-guide.txt -	User Guide to CPUFreq

@@ -35,9 +48,7 @@
Mailing List
------------
There is a CPU frequency changing CVS commit and general list where
you can report bugs, problems or submit patches. To post a message,
send an email to linux-pm@vger.kernel.org.

Links
-----

@@ -48,7 +59,7 @@
how to access the CVS repository:
* http://cvs.arm.linux.org.uk/

the CPUFreq Mailing list:
* http://vger.kernel.org/vger-lists.html#linux-pm

Clock and voltage scaling for the SA-1100:
* http://www.lartmaker.nl/projects/scaling
@@ -85,6 +85,21 @@
Sysfs will show :

Refer to "Intel® 64 and IA-32 Architectures Software Developer’s Manual
Volume 3: System Programming Guide" to understand ratios.
There is one more sysfs attribute in /sys/devices/system/cpu/intel_pstate/
that can be used for controlling the operation mode of the driver:
	status: Three settings are possible:
		"off"     - The driver is not in use at this time.
		"active"  - The driver works as a P-state governor (default).
		"passive" - The driver works as a regular cpufreq one and
			    collaborates with the generic cpufreq governors
			    (it sets P-states as requested by those governors).
The current setting is returned by reads from this attribute. Writing one
of the above strings to it changes the operation mode as indicated by that
string, if possible. If HW-managed P-states (HWP) are enabled, it is not
possible to change the driver's operation mode and attempts to write to
this attribute will fail.
cpufreq sysfs for Intel P-State

Since this driver registers with cpufreq, cpufreq sysfs is also presented.
......
@@ -18,7 +18,7 @@
Contents:
---------
1. Supported Architectures and Processors
1.1 ARM and ARM64
1.2 x86
1.3 sparc64
1.4 ppc

@@ -37,16 +37,10 @@
1. Supported Architectures and Processors
=========================================

1.1 ARM and ARM64
-----------------

Almost all ARM and ARM64 platforms support CPU frequency scaling.

1.2 x86
-------

@@ -69,6 +63,7 @@
Transmeta Crusoe
Transmeta Efficeon
VIA Cyrix 3 / C3
various processors on some ACPI 2.0-compatible systems [*]
And many more

[*] Only if "ACPI Processor Performance States" are available
to the ACPI<->BIOS interface.

@@ -147,10 +142,19 @@
"cpufreq" within the cpu-device directory
(e.g. /sys/devices/system/cpu/cpu0/cpufreq/ for the first CPU).

affected_cpus :			List of Online CPUs that require software
				coordination of frequency.

cpuinfo_cur_freq :		Current frequency of the CPU as obtained from
				the hardware, in KHz. This is the frequency
				the CPU actually runs at.

cpuinfo_min_freq :		this file shows the minimum operating
				frequency the processor can run at (in kHz)

cpuinfo_max_freq :		this file shows the maximum operating
				frequency the processor can run at (in kHz)

cpuinfo_transition_latency	The time it takes on this CPU to
				switch between two frequencies in nano
				seconds. If unknown or known to be

@@ -163,25 +167,30 @@
				userspace daemon. Make sure to not
				switch the frequency too often
				resulting in performance loss.

related_cpus :			List of Online + Offline CPUs that need
				software coordination of frequency.

scaling_available_frequencies : List of available frequencies, in KHz.

scaling_available_governors :	this file shows the CPUfreq governors
				available in this kernel. You can see the
				currently activated governor in

scaling_cur_freq :		Current frequency of the CPU as determined by
				the governor and cpufreq core, in KHz. This is
				the frequency the kernel thinks the CPU runs
				at.

scaling_driver :		this file shows what cpufreq driver is
				used to set the frequency on this CPU

scaling_governor,		and by "echoing" the name of another
				governor you can change it. Please note
				that some governors won't load - they only
				work on some specific architectures or
				processors.

scaling_min_freq and
scaling_max_freq		show the current "policy limits" (in
				kHz). By echoing new values into these

@@ -190,16 +199,11 @@
				first set scaling_max_freq, then
				scaling_min_freq.

scaling_setspeed		This can be read to get the currently programmed
				value by the governor. This can be written to
				change the current frequency for a group of
				CPUs, represented by a policy. This is supported
				currently only by the userspace governor.

bios_limit :			If the BIOS tells the OS to limit a CPU to
				lower frequencies, the user can read out the
......
TI CPUFreq and OPP bindings
================================
Certain TI SoCs, like those in the am335x, am437x, am57xx, and dra7xx
families support different OPPs depending on the silicon variant in use.
The ti-cpufreq driver can use revision and an efuse value from the SoC to
provide the OPP framework with supported hardware information. This is
used to determine which OPPs from the operating-points-v2 table get enabled
when it is parsed by the OPP framework.
Required properties:
--------------------
In 'cpus' nodes:
- operating-points-v2: Phandle to the operating-points-v2 table to use.
In 'operating-points-v2' table:
- compatible: Should be
- 'operating-points-v2-ti-cpu' for am335x, am43xx, and dra7xx/am57xx SoCs
- syscon: A phandle pointing to a syscon node representing the control module
register space of the SoC.
Optional properties:
--------------------
For each opp entry in 'operating-points-v2' table:
- opp-supported-hw: Two bitfields indicating:
1. Which revision of the SoC the OPP is supported by
2. Which eFuse bits indicate this OPP is available
A bitwise AND is performed against these values and if any bit
matches, the OPP gets enabled.
Example:
--------
/* From arch/arm/boot/dts/am33xx.dtsi */
cpus {
#address-cells = <1>;
#size-cells = <0>;
cpu@0 {
compatible = "arm,cortex-a8";
device_type = "cpu";
reg = <0>;
operating-points-v2 = <&cpu0_opp_table>;
clocks = <&dpll_mpu_ck>;
clock-names = "cpu";
clock-latency = <300000>; /* From omap-cpufreq driver */
};
};
/*
* cpu0 has different OPPs depending on SoC revision and some on revisions
* 0x2 and 0x4 have eFuse bits that indicate if they are available or not
*/
cpu0_opp_table: opp-table {
compatible = "operating-points-v2-ti-cpu";
syscon = <&scm_conf>;
/*
* The three following nodes are marked with opp-suspend
* because they can not be enabled simultaneously on a
* single SoC.
*/
opp50@300000000 {
opp-hz = /bits/ 64 <300000000>;
opp-microvolt = <950000 931000 969000>;
opp-supported-hw = <0x06 0x0010>;
opp-suspend;
};
opp100@275000000 {
opp-hz = /bits/ 64 <275000000>;
opp-microvolt = <1100000 1078000 1122000>;
opp-supported-hw = <0x01 0x00FF>;
opp-suspend;
};
opp100@300000000 {
opp-hz = /bits/ 64 <300000000>;
opp-microvolt = <1100000 1078000 1122000>;
opp-supported-hw = <0x06 0x0020>;
opp-suspend;
};
opp100@500000000 {
opp-hz = /bits/ 64 <500000000>;
opp-microvolt = <1100000 1078000 1122000>;
opp-supported-hw = <0x01 0xFFFF>;
};
opp100@600000000 {
opp-hz = /bits/ 64 <600000000>;
opp-microvolt = <1100000 1078000 1122000>;
opp-supported-hw = <0x06 0x0040>;
};
opp120@600000000 {
opp-hz = /bits/ 64 <600000000>;
opp-microvolt = <1200000 1176000 1224000>;
opp-supported-hw = <0x01 0xFFFF>;
};
opp120@720000000 {
opp-hz = /bits/ 64 <720000000>;
opp-microvolt = <1200000 1176000 1224000>;
opp-supported-hw = <0x06 0x0080>;
};
oppturbo@720000000 {
opp-hz = /bits/ 64 <720000000>;
opp-microvolt = <1260000 1234800 1285200>;
opp-supported-hw = <0x01 0xFFFF>;
};
oppturbo@800000000 {
opp-hz = /bits/ 64 <800000000>;
opp-microvolt = <1260000 1234800 1285200>;
opp-supported-hw = <0x06 0x0100>;
};
oppnitro@1000000000 {
opp-hz = /bits/ 64 <1000000000>;
opp-microvolt = <1325000 1298500 1351500>;
opp-supported-hw = <0x04 0x0200>;
};
};
@@ -123,6 +123,20 @@
Detailed correlation between sub-blocks and power line according to Exynos SoC:
		|--- FSYS
		|--- FSYS2

- In case of Exynos5433, there is VDD_INT power line as following:
	VDD_INT |--- G2D (parent device)
		|--- MSCL
		|--- GSCL
		|--- JPEG
		|--- MFC
		|--- HEVC
		|--- BUS0
		|--- BUS1
		|--- BUS2
		|--- PERIS (Fixed clock rate)
		|--- PERIC (Fixed clock rate)
		|--- FSYS (Fixed clock rate)

Example1:
	Show the AXI buses of Exynos3250 SoC. Exynos3250 divides the buses to
	power line (regulator). The MIF (Memory Interface) AXI bus is used to
......
...@@ -79,22 +79,6 @@ dependent subsystems such as cpufreq are left to the discretion of the SoC ...@@ -79,22 +79,6 @@ dependent subsystems such as cpufreq are left to the discretion of the SoC
specific framework which uses the OPP library. Similar care needs to be taken specific framework which uses the OPP library. Similar care needs to be taken
care to refresh the cpufreq table in cases of these operations. care to refresh the cpufreq table in cases of these operations.
WARNING on OPP List locking mechanism:
-------------------------------------------------
OPP library uses RCU for exclusivity. RCU allows the query functions to operate
in multiple contexts and this synchronization mechanism is optimal for
read-intensive operations on the data structures the OPP library caters to.
To ensure that the data retrieved are sane, users such as the SoC framework
should ensure that the section of code operating on OPP queries is locked
using RCU read locks. The opp_find_freq_{exact,ceil,floor},
opp_get_{voltage, freq, opp_count} functions fall into this category.
opp_{add,enable,disable} are updaters which use a mutex and implement their own
RCU locking mechanisms. These functions should *NOT* be called under RCU locks
or in other contexts that prevent blocking functions in RCU or mutex operations
from working.
2. Initial OPP List Registration
================================
The SoC implementation calls the dev_pm_opp_add function iteratively to add OPPs per
...@@ -137,15 +121,18 @@ functions return the matching pointer representing the opp if a match is
found, else returns error. These errors are expected to be handled by standard
error checks such as IS_ERR() and appropriate actions taken by the caller.
Callers of these functions shall call dev_pm_opp_put() after they have used the
OPP. Otherwise the memory for the OPP will never get freed, resulting in a
memory leak.
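As an illustration of the registration and find/put pattern (a minimal
sketch, not from the patch; the device pointer and the frequency/voltage
values are made up):

	#include <linux/pm_opp.h>

	static int soc_register_opps(struct device *dev)
	{
		struct dev_pm_opp *opp;
		int ret;

		/* Register two OPPs: 300 MHz @ 0.95 V and 600 MHz @ 1.1 V */
		ret = dev_pm_opp_add(dev, 300000000, 950000);
		if (ret)
			return ret;
		ret = dev_pm_opp_add(dev, 600000000, 1100000);
		if (ret)
			return ret;

		/* Look up an OPP and drop the reference once done with it */
		opp = dev_pm_opp_find_freq_exact(dev, 300000000, true);
		if (IS_ERR(opp))
			return PTR_ERR(opp);
		dev_pm_opp_put(opp);

		return 0;
	}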
dev_pm_opp_find_freq_exact - Search for an OPP based on an *exact* frequency and
availability. This function is especially useful to enable an OPP which
is not available by default.
Example: In a case when SoC framework detects a situation where a
higher frequency could be made available, it can use this function to
find the OPP prior to calling dev_pm_opp_enable to actually make it available.
rcu_read_lock();
opp = dev_pm_opp_find_freq_exact(dev, 1000000000, false);
rcu_read_unlock();
dev_pm_opp_put(opp);
/* don't operate on the pointer.. just do a sanity check.. */
if (IS_ERR(opp)) {
pr_err("frequency not disabled!\n");
...@@ -163,9 +150,8 @@ dev_pm_opp_find_freq_floor - Search for an available OPP which is *at most* the
frequency.
Example: To find the highest opp for a device:
freq = ULONG_MAX;
rcu_read_lock();
dev_pm_opp_find_freq_floor(dev, &freq);
rcu_read_unlock();
opp = dev_pm_opp_find_freq_floor(dev, &freq);
dev_pm_opp_put(opp);
dev_pm_opp_find_freq_ceil - Search for an available OPP which is *at least* the
provided frequency. This function is useful while searching for a
...@@ -173,17 +159,15 @@ dev_pm_opp_find_freq_ceil - Search for an available OPP which is *at least* the
frequency.
Example 1: To find the lowest opp for a device:
freq = 0;
rcu_read_lock();
dev_pm_opp_find_freq_ceil(dev, &freq);
rcu_read_unlock();
opp = dev_pm_opp_find_freq_ceil(dev, &freq);
dev_pm_opp_put(opp);
Example 2: A simplified implementation of a SoC cpufreq_driver->target:
soc_cpufreq_target(..)
{
/* Do stuff like policy checks etc. */
/* Find the best frequency match for the req */
rcu_read_lock();
opp = dev_pm_opp_find_freq_ceil(dev, &freq);
rcu_read_unlock();
dev_pm_opp_put(opp);
if (!IS_ERR(opp))
soc_switch_to_freq_voltage(freq);
else
...@@ -208,9 +192,8 @@ dev_pm_opp_enable - Make a OPP available for operation.
implementation might choose to do something as follows:
if (cur_temp < temp_low_thresh) {
/* Enable 1GHz if it was disabled */
rcu_read_lock();
opp = dev_pm_opp_find_freq_exact(dev, 1000000000, false);
rcu_read_unlock();
dev_pm_opp_put(opp);
/* just error check */
if (!IS_ERR(opp))
ret = dev_pm_opp_enable(dev, 1000000000);
...@@ -224,9 +207,8 @@ dev_pm_opp_disable - Make an OPP to be not available for operation
choose to do something as follows:
if (cur_temp > temp_high_thresh) {
/* Disable 1GHz if it was enabled */
rcu_read_lock();
opp = dev_pm_opp_find_freq_exact(dev, 1000000000, true);
rcu_read_unlock();
dev_pm_opp_put(opp);
/* just error check */
if (!IS_ERR(opp))
ret = dev_pm_opp_disable(dev, 1000000000);
...@@ -249,10 +231,9 @@ dev_pm_opp_get_voltage - Retrieve the voltage represented by the opp pointer.
soc_switch_to_freq_voltage(freq)
{
/* do things */
rcu_read_lock();
opp = dev_pm_opp_find_freq_ceil(dev, &freq);
v = dev_pm_opp_get_voltage(opp);
rcu_read_unlock();
dev_pm_opp_put(opp);
if (v)
regulator_set_voltage(.., v);
/* do other things */
...@@ -266,12 +247,12 @@ dev_pm_opp_get_freq - Retrieve the freq represented by the opp pointer.
{
/* do things.. */
max_freq = ULONG_MAX;
rcu_read_lock();
max_opp = dev_pm_opp_find_freq_floor(dev,&max_freq);
requested_opp = dev_pm_opp_find_freq_ceil(dev,&freq);
if (!IS_ERR(max_opp) && !IS_ERR(requested_opp))
r = soc_test_validity(max_opp, requested_opp);
rcu_read_unlock();
dev_pm_opp_put(max_opp);
dev_pm_opp_put(requested_opp);
/* do other things */
}
soc_test_validity(..)
...@@ -289,7 +270,6 @@ dev_pm_opp_get_opp_count - Retrieve the number of available opps for a device
soc_notify_coproc_available_frequencies()
{
/* Do things */
rcu_read_lock();
num_available = dev_pm_opp_get_opp_count(dev);
speeds = kzalloc(sizeof(u32) * num_available, GFP_KERNEL);
/* populate the table in increasing order */
...@@ -298,8 +278,8 @@ dev_pm_opp_get_opp_count - Retrieve the number of available opps for a device
speeds[i] = freq;
freq++;
i++;
dev_pm_opp_put(opp);
}
rcu_read_unlock();
soc_notify_coproc(AVAILABLE_FREQs, speeds, num_available);
/* Do other things */
......
...@@ -25,7 +25,7 @@ to be used subsequently to change to the one represented by that string.
Consequently, there are two ways to cause the system to go into the
Suspend-To-Idle sleep state. The first one is to write "freeze" directly to
/sys/power/state. The second one is to write "s2idle" to /sys/power/mem_sleep
and then to wrtie "mem" to /sys/power/state. Similarly, there are two ways
and then to write "mem" to /sys/power/state. Similarly, there are two ways
to cause the system to go into the Power-On Suspend sleep state (the strings to
write to the control files in that case are "standby" or "shallow" and "mem",
respectively) if that state is supported by the platform. In turn, there is
......
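The two-step Suspend-To-Idle method described in the hunk above can be
exercised from user space; a minimal sketch (assuming the standard sysfs
paths and a kernel built with CONFIG_SUSPEND, error handling trimmed):

	#include <stdio.h>

	static int write_sysfs(const char *path, const char *val)
	{
		FILE *f = fopen(path, "w");

		if (!f)
			return -1;
		fprintf(f, "%s\n", val);
		return fclose(f);
	}

	int main(void)
	{
		/* Select s2idle as the "mem" flavor, then trigger it */
		if (write_sysfs("/sys/power/mem_sleep", "s2idle"))
			return 1;
		return write_sysfs("/sys/power/state", "mem") ? 1 : 0;
	}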
...@@ -2692,6 +2692,13 @@ F: drivers/irqchip/irq-brcmstb*
F: include/linux/bcm963xx_nvram.h
F: include/linux/bcm963xx_tag.h
BROADCOM BMIPS CPUFREQ DRIVER
M: Markus Mayer <mmayer@broadcom.com>
M: bcm-kernel-feedback-list@broadcom.com
L: linux-pm@vger.kernel.org
S: Maintained
F: drivers/cpufreq/bmips-cpufreq.c
BROADCOM TG3 GIGABIT ETHERNET DRIVER
M: Siva Reddy Kallam <siva.kallam@broadcom.com>
M: Prashant Sreedharan <prashant@broadcom.com>
......
...@@ -24,7 +24,7 @@ CONFIG_ARM_APPENDED_DTB=y
CONFIG_ARM_ATAG_DTB_COMPAT=y
CONFIG_CMDLINE="root=/dev/ram0 rw ramdisk=8192 initrd=0x41000000,8M console=ttySAC1,115200 init=/linuxrc mem=256M"
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_STAT_DETAILS=y
CONFIG_CPU_FREQ_STAT=y
CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y
CONFIG_CPU_FREQ_GOV_POWERSAVE=m
CONFIG_CPU_FREQ_GOV_USERSPACE=m
......
...@@ -58,7 +58,7 @@ CONFIG_ZBOOT_ROM_BSS=0x0
CONFIG_ARM_APPENDED_DTB=y
CONFIG_ARM_ATAG_DTB_COMPAT=y
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_STAT_DETAILS=y
CONFIG_CPU_FREQ_STAT=y
CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y
CONFIG_CPU_IDLE=y
CONFIG_ARM_KIRKWOOD_CPUIDLE=y
......
...@@ -132,7 +132,7 @@ CONFIG_ARM_ATAG_DTB_COMPAT=y
CONFIG_KEXEC=y
CONFIG_EFI=y
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_STAT_DETAILS=y
CONFIG_CPU_FREQ_STAT=y
CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y
CONFIG_CPU_FREQ_GOV_POWERSAVE=m
CONFIG_CPU_FREQ_GOV_USERSPACE=m
......
...@@ -44,7 +44,7 @@ CONFIG_ZBOOT_ROM_BSS=0x0
CONFIG_ARM_APPENDED_DTB=y
CONFIG_ARM_ATAG_DTB_COMPAT=y
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_STAT_DETAILS=y
CONFIG_CPU_FREQ_STAT=y
CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y
CONFIG_CPU_IDLE=y
CONFIG_ARM_KIRKWOOD_CPUIDLE=y
......
...@@ -97,7 +97,7 @@ CONFIG_ZBOOT_ROM_BSS=0x0
CONFIG_CMDLINE="root=/dev/ram0 ro"
CONFIG_KEXEC=y
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_STAT_DETAILS=y
CONFIG_CPU_FREQ_STAT=y
CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y
CONFIG_CPU_FREQ_GOV_POWERSAVE=m
CONFIG_CPU_FREQ_GOV_USERSPACE=m
......
...@@ -38,7 +38,7 @@ CONFIG_ZBOOT_ROM_BSS=0x0
CONFIG_ARM_APPENDED_DTB=y
CONFIG_KEXEC=y
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_STAT_DETAILS=y
CONFIG_CPU_FREQ_STAT=y
CONFIG_CPU_FREQ_GOV_POWERSAVE=y
CONFIG_CPU_FREQ_GOV_USERSPACE=y
CONFIG_CPU_FREQ_GOV_ONDEMAND=y
......
...@@ -130,17 +130,16 @@ static int __init omap2_set_init_voltage(char *vdd_name, char *clk_name,
freq = clk_get_rate(clk);
clk_put(clk);
rcu_read_lock();
opp = dev_pm_opp_find_freq_ceil(dev, &freq);
if (IS_ERR(opp)) {
rcu_read_unlock();
pr_err("%s: unable to find boot up OPP for vdd_%s\n",
__func__, vdd_name);
goto exit;
}
bootup_volt = dev_pm_opp_get_voltage(opp);
rcu_read_unlock();
dev_pm_opp_put(opp);
if (!bootup_volt) {
pr_err("%s: unable to find voltage corresponding to the bootup OPP for vdd_%s\n",
__func__, vdd_name);
......
...@@ -1703,6 +1703,8 @@ config CPU_BMIPS
select WEAK_ORDERING
select CPU_SUPPORTS_HIGHMEM
select CPU_HAS_PREFETCH
select CPU_SUPPORTS_CPUFREQ
select MIPS_EXTERNAL_TIMER
help
Support for BMIPS32/3300/4350/4380 and BMIPS5000 processors.
......
...@@ -9,13 +9,20 @@ CONFIG_MIPS_O32_FP64_SUPPORT=y
# CONFIG_SWAP is not set
CONFIG_NO_HZ=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_RD_GZIP=y
CONFIG_EXPERT=y
# CONFIG_VM_EVENT_COUNTERS is not set
# CONFIG_SLUB_DEBUG is not set
# CONFIG_BLK_DEV_BSG is not set
# CONFIG_IOSCHED_DEADLINE is not set
# CONFIG_IOSCHED_CFQ is not set
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_STAT=y
CONFIG_CPU_FREQ_GOV_POWERSAVE=y
CONFIG_CPU_FREQ_GOV_USERSPACE=y
CONFIG_CPU_FREQ_GOV_ONDEMAND=y
CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
CONFIG_CPU_FREQ_GOV_SCHEDUTIL=y
CONFIG_BMIPS_CPUFREQ=y
CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_PACKET_DIAG=y
...@@ -24,7 +31,6 @@ CONFIG_INET=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_INET_LRO is not set
# CONFIG_INET_DIAG is not set
CONFIG_CFG80211=y
CONFIG_NL80211_TESTMODE=y
...@@ -34,8 +40,6 @@ CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
# CONFIG_STANDALONE is not set
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
CONFIG_PRINTK_TIME=y
CONFIG_BRCMSTB_GISB_ARB=y
CONFIG_MTD=y
CONFIG_MTD_CFI=y
CONFIG_MTD_CFI_INTELEXT=y
...@@ -51,16 +55,15 @@ CONFIG_USB_USBNET=y
# CONFIG_INPUT is not set
# CONFIG_SERIO is not set
# CONFIG_VT is not set
# CONFIG_DEVKMEM is not set
CONFIG_SERIAL_8250=y
# CONFIG_SERIAL_8250_DEPRECATED_OPTIONS is not set
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_SERIAL_OF_PLATFORM=y
# CONFIG_HW_RANDOM is not set
CONFIG_POWER_SUPPLY=y
CONFIG_POWER_RESET=y
CONFIG_POWER_RESET_BRCMSTB=y
CONFIG_POWER_RESET_SYSCON=y
CONFIG_POWER_SUPPLY=y
# CONFIG_HWMON is not set
CONFIG_USB=y
CONFIG_USB_EHCI_HCD=y
...@@ -82,6 +85,7 @@ CONFIG_CIFS=y
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ASCII=y
CONFIG_NLS_ISO8859_1=y
CONFIG_PRINTK_TIME=y
CONFIG_DEBUG_FS=y
CONFIG_MAGIC_SYSRQ=y
CONFIG_CMDLINE_BOOL=y
......
...@@ -40,7 +40,6 @@ CONFIG_PM_STD_PARTITION="/dev/hda3"
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_DEBUG=y
CONFIG_CPU_FREQ_STAT=m
CONFIG_CPU_FREQ_STAT_DETAILS=y
CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y
CONFIG_CPU_FREQ_GOV_POWERSAVE=m
CONFIG_CPU_FREQ_GOV_USERSPACE=m
......
...@@ -62,7 +62,6 @@ CONFIG_MPC8610_HPCD=y
CONFIG_GEF_SBC610=y
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_STAT=m
CONFIG_CPU_FREQ_STAT_DETAILS=y
CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE=y
CONFIG_CPU_FREQ_GOV_PERFORMANCE=y
CONFIG_CPU_FREQ_GOV_POWERSAVE=m
......
...@@ -25,7 +25,7 @@ CONFIG_SH_SH7785LCR=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_STAT_DETAILS=y
CONFIG_CPU_FREQ_STAT=y
CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y
CONFIG_SH_CPU_FREQ=y
CONFIG_HEARTBEAT=y
......
...@@ -12,7 +12,6 @@
#include <linux/sched.h>
#include <acpi/processor.h>
#include <asm/acpi.h>
#include <asm/mwait.h>
#include <asm/special_insns.h>
...@@ -89,7 +88,8 @@ static long acpi_processor_ffh_cstate_probe_cpu(void *_cx)
retval = 0;
/* If the HW does not support any sub-states in this C-state */
if (num_cstate_subtype == 0) {
pr_warn(FW_BUG "ACPI MWAIT C-state 0x%x not supported by HW (0x%x)\n", cx->address, edx_part);
pr_warn(FW_BUG "ACPI MWAIT C-state 0x%x not supported by HW (0x%x)\n",
cx->address, edx_part);
retval = -1;
goto out;
}
...@@ -104,8 +104,8 @@ static long acpi_processor_ffh_cstate_probe_cpu(void *_cx)
if (!mwait_supported[cstate_type]) {
mwait_supported[cstate_type] = 1;
printk(KERN_DEBUG
"Monitor-Mwait will be used to enter C-%d "
"state\n", cx->type);
"Monitor-Mwait will be used to enter C-%d state\n",
cx->type);
}
snprintf(cx->desc,
ACPI_CX_DESC_LEN, "ACPI FFH INTEL MWAIT 0x%x",
...@@ -166,6 +166,7 @@ EXPORT_SYMBOL_GPL(acpi_processor_ffh_cstate_enter);
static int __init ffh_cstate_init(void)
{
struct cpuinfo_x86 *c = &boot_cpu_data;
if (c->x86_vendor != X86_VENDOR_INTEL)
return -1;
......
...@@ -75,10 +75,8 @@ static int acpi_processor_ppc_notifier(struct notifier_block *nb,
struct acpi_processor *pr;
unsigned int ppc = 0;
if (event == CPUFREQ_START && ignore_ppc <= 0) {
ignore_ppc = 0;
return 0;
}
if (ignore_ppc < 0)
ignore_ppc = 0;
if (ignore_ppc)
return 0;
......
...@@ -17,6 +17,7 @@
#include <linux/of.h>
#include <linux/cpufeature.h>
#include <linux/tick.h>
#include <linux/pm_qos.h>
#include "base.h"
...@@ -376,6 +377,7 @@ int register_cpu(struct cpu *cpu, int num)
per_cpu(cpu_sys_devices, num) = &cpu->dev;
register_cpu_under_node(num, cpu_to_node(num));
dev_pm_qos_expose_latency_limit(&cpu->dev, 0);
return 0;
}
......
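The dev_pm_qos_expose_latency_limit() call above makes each CPU device
accept a resume-latency constraint (visible in sysfs as
power/pm_qos_resume_latency_us under the CPU device). A hedged kernel-side
sketch of adding such a constraint with the existing device PM QoS API (the
20 us budget and the helper name are made up for illustration):

	#include <linux/cpu.h>
	#include <linux/pm_qos.h>

	static struct dev_pm_qos_request cpu0_lat_req;

	/* Constrain CPU0's wakeup (resume) latency to 20 us */
	static int soc_limit_cpu0_wakeup_latency(void)
	{
		struct device *cpu_dev = get_cpu_device(0);

		if (!cpu_dev)
			return -ENODEV;
		return dev_pm_qos_add_request(cpu_dev, &cpu0_lat_req,
					      DEV_PM_QOS_RESUME_LATENCY, 20);
	}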
...@@ -130,7 +130,7 @@ static inline bool irq_safe_dev_in_no_sleep_domain(struct device *dev,
ret = pm_runtime_is_irq_safe(dev) && !genpd_is_irq_safe(genpd);
/* Warn once for each IRQ safe dev in no sleep domain */
/* Warn once if IRQ safe dev in no sleep domain */
if (ret)
dev_warn_once(dev, "PM domain %s will not be powered off\n",
genpd->name);
...@@ -201,7 +201,7 @@ static void genpd_sd_counter_inc(struct generic_pm_domain *genpd)
smp_mb__after_atomic();
}
static int genpd_power_on(struct generic_pm_domain *genpd, bool timed)
static int _genpd_power_on(struct generic_pm_domain *genpd, bool timed)
{
unsigned int state_idx = genpd->state_idx;
ktime_t time_start;
...@@ -231,7 +231,7 @@ static int genpd_power_on(struct generic_pm_domain *genpd, bool timed)
return ret;
}
static int genpd_power_off(struct generic_pm_domain *genpd, bool timed)
static int _genpd_power_off(struct generic_pm_domain *genpd, bool timed)
{
unsigned int state_idx = genpd->state_idx;
ktime_t time_start;
...@@ -262,10 +262,10 @@ static int genpd_power_off(struct generic_pm_domain *genpd, bool timed)
}
/**
* genpd_queue_power_off_work - Queue up the execution of genpd_poweroff().
* genpd_queue_power_off_work - Queue up the execution of genpd_power_off().
* @genpd: PM domain to power off.
*
* Queue up the execution of genpd_poweroff() unless it's already been done
* Queue up the execution of genpd_power_off() unless it's already been done
* before.
*/
static void genpd_queue_power_off_work(struct generic_pm_domain *genpd)
...@@ -274,14 +274,14 @@ static void genpd_queue_power_off_work(struct generic_pm_domain *genpd)
}
/**
* genpd_poweron - Restore power to a given PM domain and its masters.
* genpd_power_on - Restore power to a given PM domain and its masters.
* @genpd: PM domain to power up.
* @depth: nesting count for lockdep.
*
* Restore power to @genpd and all of its masters so that it is possible to
* resume a device belonging to it.
*/
static int genpd_poweron(struct generic_pm_domain *genpd, unsigned int depth)
static int genpd_power_on(struct generic_pm_domain *genpd, unsigned int depth)
{
struct gpd_link *link;
int ret = 0;
...@@ -300,7 +300,7 @@ static int genpd_poweron(struct generic_pm_domain *genpd, unsigned int depth)
genpd_sd_counter_inc(master);
genpd_lock_nested(master, depth + 1);
ret = genpd_poweron(master, depth + 1);
ret = genpd_power_on(master, depth + 1);
genpd_unlock(master);
if (ret) {
...@@ -309,7 +309,7 @@ static int genpd_poweron(struct generic_pm_domain *genpd, unsigned int depth)
}
}
ret = genpd_power_on(genpd, true);
ret = _genpd_power_on(genpd, true);
if (ret)
goto err;
...@@ -368,14 +368,14 @@ static int genpd_dev_pm_qos_notifier(struct notifier_block *nb,
}
/**
* genpd_poweroff - Remove power from a given PM domain.
* genpd_power_off - Remove power from a given PM domain.
* @genpd: PM domain to power down.
* @is_async: PM domain is powered down from a scheduled work
*
* If all of the @genpd's devices have been suspended and all of its subdomains
* have been powered down, remove power from @genpd.
*/
static int genpd_poweroff(struct generic_pm_domain *genpd, bool is_async)
static int genpd_power_off(struct generic_pm_domain *genpd, bool is_async)
{
struct pm_domain_data *pdd;
struct gpd_link *link;
...@@ -427,13 +427,13 @@ static int genpd_poweroff(struct generic_pm_domain *genpd, bool is_async)
/*
* If sd_count > 0 at this point, one of the subdomains hasn't
* managed to call genpd_poweron() for the master yet after
* incrementing it. In that case genpd_poweron() will wait
* managed to call genpd_power_on() for the master yet after
* incrementing it. In that case genpd_power_on() will wait
* for us to drop the lock, so we can call .power_off() and let
* the genpd_poweron() restore power for us (this shouldn't
* the genpd_power_on() restore power for us (this shouldn't
* happen very often).
*/
ret = genpd_power_off(genpd, true);
ret = _genpd_power_off(genpd, true);
if (ret)
return ret;
}
...@@ -459,7 +459,7 @@ static void genpd_power_off_work_fn(struct work_struct *work)
genpd = container_of(work, struct generic_pm_domain, power_off_work);
genpd_lock(genpd);
genpd_poweroff(genpd, true);
genpd_power_off(genpd, true);
genpd_unlock(genpd);
}
...@@ -578,7 +578,7 @@ static int genpd_runtime_suspend(struct device *dev)
return 0;
genpd_lock(genpd);
genpd_poweroff(genpd, false);
genpd_power_off(genpd, false);
genpd_unlock(genpd);
return 0;
...@@ -618,7 +618,7 @@ static int genpd_runtime_resume(struct device *dev)
}
genpd_lock(genpd);
ret = genpd_poweron(genpd, 0);
ret = genpd_power_on(genpd, 0);
genpd_unlock(genpd);
if (ret)
...@@ -658,7 +658,7 @@ static int genpd_runtime_resume(struct device *dev)
if (!pm_runtime_is_irq_safe(dev) ||
(pm_runtime_is_irq_safe(dev) && genpd_is_irq_safe(genpd))) {
genpd_lock(genpd);
genpd_poweroff(genpd, 0);
genpd_power_off(genpd, 0);
genpd_unlock(genpd);
}
...@@ -674,9 +674,9 @@ static int __init pd_ignore_unused_setup(char *__unused)
__setup("pd_ignore_unused", pd_ignore_unused_setup);
/**
* genpd_poweroff_unused - Power off all PM domains with no devices in use.
* genpd_power_off_unused - Power off all PM domains with no devices in use.
*/
static int __init genpd_poweroff_unused(void)
static int __init genpd_power_off_unused(void)
{
struct generic_pm_domain *genpd;
...@@ -694,7 +694,7 @@ static int __init genpd_poweroff_unused(void)
return 0;
}
late_initcall(genpd_poweroff_unused);
late_initcall(genpd_power_off_unused);
#if defined(CONFIG_PM_SLEEP) || defined(CONFIG_PM_GENERIC_DOMAINS_OF)
...@@ -727,18 +727,20 @@ static bool genpd_dev_active_wakeup(struct generic_pm_domain *genpd,
}
/**
* genpd_sync_poweroff - Synchronously power off a PM domain and its masters.
* genpd_sync_power_off - Synchronously power off a PM domain and its masters.
* @genpd: PM domain to power off, if possible.
* @use_lock: use the lock.
* @depth: nesting count for lockdep.
*
* Check if the given PM domain can be powered off (during system suspend or
* hibernation) and do that if so. Also, in that case propagate to its masters.
*
* This function is only called in "noirq" and "syscore" stages of system power
* transitions, so it need not acquire locks (all of the "noirq" callbacks are
* executed sequentially, so it is guaranteed that it will never run twice in
* parallel).
* transitions. The "noirq" callbacks may be executed asynchronously, thus in
* these cases the lock must be held.
*/
static void genpd_sync_poweroff(struct generic_pm_domain *genpd)
static void genpd_sync_power_off(struct generic_pm_domain *genpd, bool use_lock,
unsigned int depth)
{
struct gpd_link *link;
...@@ -751,26 +753,35 @@ static void genpd_sync_poweroff(struct generic_pm_domain *genpd)
/* Choose the deepest state when suspending */
genpd->state_idx = genpd->state_count - 1;
genpd_power_off(genpd, false);
_genpd_power_off(genpd, false);
genpd->status = GPD_STATE_POWER_OFF;
list_for_each_entry(link, &genpd->slave_links, slave_node) {
genpd_sd_counter_dec(link->master);
genpd_sync_poweroff(link->master);
if (use_lock)
genpd_lock_nested(link->master, depth + 1);
genpd_sync_power_off(link->master, use_lock, depth + 1);
if (use_lock)
genpd_unlock(link->master);
}
}
/**
* genpd_sync_poweron - Synchronously power on a PM domain and its masters.
* genpd_sync_power_on - Synchronously power on a PM domain and its masters.
* @genpd: PM domain to power on.
* @use_lock: use the lock.
* @depth: nesting count for lockdep.
*
* This function is only called in "noirq" and "syscore" stages of system power
* transitions, so it need not acquire locks (all of the "noirq" callbacks are
* executed sequentially, so it is guaranteed that it will never run twice in
* parallel).
* transitions. The "noirq" callbacks may be executed asynchronously, thus in
* these cases the lock must be held.
*/
static void genpd_sync_poweron(struct generic_pm_domain *genpd)
static void genpd_sync_power_on(struct generic_pm_domain *genpd, bool use_lock,
unsigned int depth)
{
struct gpd_link *link;
...@@ -778,11 +789,18 @@ static void genpd_sync_poweron(struct generic_pm_domain *genpd)
return;
list_for_each_entry(link, &genpd->slave_links, slave_node) {
genpd_sync_poweron(link->master);
genpd_sd_counter_inc(link->master);
if (use_lock)
genpd_lock_nested(link->master, depth + 1);
genpd_sync_power_on(link->master, use_lock, depth + 1);
if (use_lock)
genpd_unlock(link->master);
}
genpd_power_on(genpd, false);
_genpd_power_on(genpd, false);
genpd->status = GPD_STATE_ACTIVE;
}
...@@ -888,13 +906,10 @@ static int pm_genpd_suspend_noirq(struct device *dev)
return ret;
}
/*
* Since all of the "noirq" callbacks are executed sequentially, it is
* guaranteed that this function will never run twice in parallel for
* the same PM domain, so it is not necessary to use locking here.
*/
genpd_lock(genpd);
genpd->suspended_count++;
genpd_sync_poweroff(genpd);
genpd_sync_power_off(genpd, true, 0);
genpd_unlock(genpd);
return 0;
}
...@@ -919,13 +934,10 @@ static int pm_genpd_resume_noirq(struct device *dev)
if (dev->power.wakeup_path && genpd_dev_active_wakeup(genpd, dev))
return 0;
/*
* Since all of the "noirq" callbacks are executed sequentially, it is
* guaranteed that this function will never run twice in parallel for
* the same PM domain, so it is not necessary to use locking here.
*/
genpd_sync_poweron(genpd);
genpd_lock(genpd);
genpd_sync_power_on(genpd, true, 0);
genpd->suspended_count--;
genpd_unlock(genpd);
if (genpd->dev_ops.stop && genpd->dev_ops.start)
ret = pm_runtime_force_resume(dev);
...@@ -1002,22 +1014,20 @@ static int pm_genpd_restore_noirq(struct device *dev)
return -EINVAL;
/*
* Since all of the "noirq" callbacks are executed sequentially, it is
* guaranteed that this function will never run twice in parallel for
* the same PM domain, so it is not necessary to use locking here.
*
* At this point suspended_count == 0 means we are being run for the
* first time for the given domain in the present cycle.
*/
genpd_lock(genpd);
if (genpd->suspended_count++ == 0)
/*
* The boot kernel might put the domain into arbitrary state,
* so make it appear as powered off to genpd_sync_poweron(),
* so make it appear as powered off to genpd_sync_power_on(),
* so that it tries to power it on in case it was really off.
*/
genpd->status = GPD_STATE_POWER_OFF;
genpd_sync_poweron(genpd);
genpd_sync_power_on(genpd, true, 0);
genpd_unlock(genpd);
if (genpd->dev_ops.stop && genpd->dev_ops.start)
ret = pm_runtime_force_resume(dev);
...@@ -1072,9 +1082,9 @@ static void genpd_syscore_switch(struct device *dev, bool suspend)
if (suspend) {
genpd->suspended_count++;
genpd_sync_poweroff(genpd);
genpd_sync_power_off(genpd, false, 0);
} else {
genpd_sync_poweron(genpd);
genpd_sync_power_on(genpd, false, 0);
genpd->suspended_count--;
}
}
...@@ -2043,7 +2053,7 @@ int genpd_dev_pm_attach(struct device *dev)
dev->pm_domain->sync = genpd_dev_pm_sync;
genpd_lock(pd);
ret = genpd_poweron(pd, 0);
ret = genpd_power_on(pd, 0);
genpd_unlock(pd);
out:
return ret ? -EPROBE_DEFER : 0;
......
...@@ -32,13 +32,7 @@ LIST_HEAD(opp_tables);
/* Lock to allow exclusive modification to the device and opp lists */
DEFINE_MUTEX(opp_table_lock);
#define opp_rcu_lockdep_assert() \
static void dev_pm_opp_get(struct dev_pm_opp *opp);
do { \
RCU_LOCKDEP_WARN(!rcu_read_lock_held() && \
!lockdep_is_held(&opp_table_lock), \
"Missing rcu_read_lock() or " \
"opp_table_lock protection"); \
} while (0)
static struct opp_device *_find_opp_dev(const struct device *dev,
struct opp_table *opp_table)
...@@ -52,38 +46,46 @@ static struct opp_device *_find_opp_dev(const struct device *dev,
return NULL;
}
static struct opp_table *_find_opp_table_unlocked(struct device *dev)
{
struct opp_table *opp_table;
list_for_each_entry(opp_table, &opp_tables, node) {
if (_find_opp_dev(dev, opp_table)) {
_get_opp_table_kref(opp_table);
return opp_table;
}
}
return ERR_PTR(-ENODEV);
}
/**
* _find_opp_table() - find opp_table struct using device pointer
* @dev: device pointer used to lookup OPP table
*
* Search OPP table for one containing matching device. Does a RCU reader
* operation to grab the pointer needed.
* Search OPP table for one containing matching device.
*
* Return: pointer to 'struct opp_table' if found, otherwise -ENODEV or
* -EINVAL based on type of error.
*
* Locking: For readers, this function must be called under rcu_read_lock().
* opp_table is a RCU protected pointer, which means that opp_table is valid
* as long as we are under RCU lock.
*
* For Writers, this function must be called with opp_table_lock held.
* The callers must call dev_pm_opp_put_opp_table() after the table is used.
*/
struct opp_table *_find_opp_table(struct device *dev)
{
struct opp_table *opp_table;
opp_rcu_lockdep_assert();
if (IS_ERR_OR_NULL(dev)) {
pr_err("%s: Invalid parameters\n", __func__);
return ERR_PTR(-EINVAL);
}
list_for_each_entry_rcu(opp_table, &opp_tables, node)
if (_find_opp_dev(dev, opp_table))
return opp_table;
return ERR_PTR(-ENODEV);
mutex_lock(&opp_table_lock);
opp_table = _find_opp_table_unlocked(dev);
mutex_unlock(&opp_table_lock);
return opp_table;
}
/**
...@@ -94,29 +96,15 @@ struct opp_table *_find_opp_table(struct device *dev)
* return 0
*
* This is useful only for devices with single power supply.
*
* Locking: This function must be called under rcu_read_lock(). opp is a rcu
* protected pointer. This means that opp which could have been fetched by
* opp_find_freq_{exact,ceil,floor} functions is valid as long as we are
* under RCU lock. The pointer returned by the opp_find_freq family must be
* used in the same section as the usage of this function with the pointer
* prior to unlocking with rcu_read_unlock() to maintain the integrity of the
* pointer.
*/
unsigned long dev_pm_opp_get_voltage(struct dev_pm_opp *opp)
{
struct dev_pm_opp *tmp_opp;
unsigned long v = 0;
opp_rcu_lockdep_assert();
tmp_opp = rcu_dereference(opp);
if (IS_ERR_OR_NULL(tmp_opp))
if (IS_ERR_OR_NULL(opp)) {
pr_err("%s: Invalid parameters\n", __func__);
else
v = tmp_opp->supplies[0].u_volt;
return v;
return 0;
}
return opp->supplies[0].u_volt;
}
EXPORT_SYMBOL_GPL(dev_pm_opp_get_voltage);
...@@ -126,29 +114,15 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_get_voltage);
*
* Return: frequency in hertz corresponding to the opp, else
* return 0
*
* Locking: This function must be called under rcu_read_lock(). opp is a rcu
* protected pointer. This means that opp which could have been fetched by
* opp_find_freq_{exact,ceil,floor} functions is valid as long as we are
* under RCU lock. The pointer returned by the opp_find_freq family must be
* used in the same section as the usage of this function with the pointer
* prior to unlocking with rcu_read_unlock() to maintain the integrity of the
* pointer.
*/
unsigned long dev_pm_opp_get_freq(struct dev_pm_opp *opp)
{
struct dev_pm_opp *tmp_opp;
unsigned long f = 0;
opp_rcu_lockdep_assert();
tmp_opp = rcu_dereference(opp);
if (IS_ERR_OR_NULL(tmp_opp) || !tmp_opp->available)
if (IS_ERR_OR_NULL(opp) || !opp->available) {
pr_err("%s: Invalid parameters\n", __func__);
else
f = tmp_opp->rate;
return f;
return 0;
}
return opp->rate;
}
EXPORT_SYMBOL_GPL(dev_pm_opp_get_freq);
...@@ -161,28 +135,15 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_get_freq);
* quickly. Running on them for longer times may overheat the chip.
*
* Return: true if opp is turbo opp, else false.
*
* Locking: This function must be called under rcu_read_lock(). opp is a rcu
* protected pointer. This means that opp which could have been fetched by
* opp_find_freq_{exact,ceil,floor} functions is valid as long as we are
* under RCU lock. The pointer returned by the opp_find_freq family must be
* used in the same section as the usage of this function with the pointer
* prior to unlocking with rcu_read_unlock() to maintain the integrity of the
* pointer.
*/
bool dev_pm_opp_is_turbo(struct dev_pm_opp *opp)
{
struct dev_pm_opp *tmp_opp;
opp_rcu_lockdep_assert();
tmp_opp = rcu_dereference(opp);
if (IS_ERR_OR_NULL(tmp_opp) || !tmp_opp->available) {
if (IS_ERR_OR_NULL(opp) || !opp->available) {
pr_err("%s: Invalid parameters\n", __func__);
return false;
}
return tmp_opp->turbo;
return opp->turbo;
}
EXPORT_SYMBOL_GPL(dev_pm_opp_is_turbo);
...@@ -191,52 +152,29 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_is_turbo);
* @dev: device for which we do this operation
*
* Return: This function returns the max clock latency in nanoseconds.
*
* Locking: This function takes rcu_read_lock().
*/
unsigned long dev_pm_opp_get_max_clock_latency(struct device *dev)
{
struct opp_table *opp_table;
unsigned long clock_latency_ns;
rcu_read_lock();
opp_table = _find_opp_table(dev);
if (IS_ERR(opp_table))
clock_latency_ns = 0;
return 0;
else
clock_latency_ns = opp_table->clock_latency_ns_max;
rcu_read_unlock();
dev_pm_opp_put_opp_table(opp_table);
return clock_latency_ns;
}
EXPORT_SYMBOL_GPL(dev_pm_opp_get_max_clock_latency);
static int _get_regulator_count(struct device *dev)
{
struct opp_table *opp_table;
int count;
rcu_read_lock();
opp_table = _find_opp_table(dev);
if (!IS_ERR(opp_table))
count = opp_table->regulator_count;
else
count = 0;
rcu_read_unlock();
return count;
}
/**
* dev_pm_opp_get_max_volt_latency() - Get max voltage latency in nanoseconds
* @dev: device for which we do this operation
*
* Return: This function returns the max voltage latency in nanoseconds.
*
* Locking: This function takes rcu_read_lock().
*/
unsigned long dev_pm_opp_get_max_volt_latency(struct device *dev)
{
...@@ -250,35 +188,33 @@ unsigned long dev_pm_opp_get_max_volt_latency(struct device *dev)
unsigned long max;
} *uV;
count = _get_regulator_count(dev);
opp_table = _find_opp_table(dev);
if (IS_ERR(opp_table))
return 0;
count = opp_table->regulator_count;
/* Regulator may not be required for the device */
if (!count)
return 0;
goto put_opp_table;
regulators = kmalloc_array(count, sizeof(*regulators), GFP_KERNEL);
if (!regulators)
return 0;
goto put_opp_table;
uV = kmalloc_array(count, sizeof(*uV), GFP_KERNEL);
if (!uV)
goto free_regulators;
rcu_read_lock();
opp_table = _find_opp_table(dev);
if (IS_ERR(opp_table)) {
rcu_read_unlock();
goto free_uV;
}
memcpy(regulators, opp_table->regulators, count * sizeof(*regulators));
mutex_lock(&opp_table->lock);
for (i = 0; i < count; i++) {
uV[i].min = ~0;
uV[i].max = 0;
list_for_each_entry_rcu(opp, &opp_table->opp_list, node) {
list_for_each_entry(opp, &opp_table->opp_list, node) {
if (!opp->available)
continue;
...@@ -289,7 +225,7 @@ unsigned long dev_pm_opp_get_max_volt_latency(struct device *dev)
}
}
rcu_read_unlock();
mutex_unlock(&opp_table->lock);
/*
* The caller needs to ensure that opp_table (and hence the regulator)
...@@ -301,10 +237,11 @@ unsigned long dev_pm_opp_get_max_volt_latency(struct device *dev)
latency_ns += ret * 1000;
}
free_uV:
kfree(uV);
free_regulators:
kfree(regulators);
put_opp_table:
dev_pm_opp_put_opp_table(opp_table);
return latency_ns;
}
...@@ -317,8 +254,6 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_get_max_volt_latency);
*
* Return: This function returns the max transition latency, in nanoseconds, to
* switch from one OPP to other.
*
* Locking: This function takes rcu_read_lock().
*/
unsigned long dev_pm_opp_get_max_transition_latency(struct device *dev)
{
...@@ -328,32 +263,29 @@ unsigned long dev_pm_opp_get_max_transition_latency(struct device *dev)
EXPORT_SYMBOL_GPL(dev_pm_opp_get_max_transition_latency);
/**
* dev_pm_opp_get_suspend_opp() - Get suspend opp
* dev_pm_opp_get_suspend_opp_freq() - Get frequency of suspend opp in Hz
* @dev: device for which we do this operation
*
* Return: This function returns pointer to the suspend opp if it is
* defined and available, otherwise it returns NULL.
* Return: This function returns the frequency of the OPP marked as suspend_opp
* if one is available, else returns 0;
*
* Locking: This function must be called under rcu_read_lock(). opp is a rcu
* protected pointer. The reason for the same is that the opp pointer which is
* returned will remain valid for use with opp_get_{voltage, freq} only while
* under the locked area. The pointer returned must be used prior to unlocking
* with rcu_read_unlock() to maintain the integrity of the pointer.
*/
struct dev_pm_opp *dev_pm_opp_get_suspend_opp(struct device *dev)
unsigned long dev_pm_opp_get_suspend_opp_freq(struct device *dev)
{
struct opp_table *opp_table;
unsigned long freq = 0;
opp_rcu_lockdep_assert();
opp_table = _find_opp_table(dev);
if (IS_ERR(opp_table) || !opp_table->suspend_opp ||
!opp_table->suspend_opp->available)
return NULL;
if (IS_ERR(opp_table))
return 0;
if (opp_table->suspend_opp && opp_table->suspend_opp->available)
freq = dev_pm_opp_get_freq(opp_table->suspend_opp);
dev_pm_opp_put_opp_table(opp_table);
return opp_table->suspend_opp;
return freq;
}
EXPORT_SYMBOL_GPL(dev_pm_opp_get_suspend_opp);
EXPORT_SYMBOL_GPL(dev_pm_opp_get_suspend_opp_freq);
/**
* dev_pm_opp_get_opp_count() - Get number of opps available in the opp table
...@@ -361,8 +293,6 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_get_suspend_opp);
*
* Return: This function returns the number of available opps if there are any,
* else returns 0 if none or the corresponding error value.
*
* Locking: This function takes rcu_read_lock().
*/
int dev_pm_opp_get_opp_count(struct device *dev)
{
...@@ -370,23 +300,24 @@ int dev_pm_opp_get_opp_count(struct device *dev)
struct dev_pm_opp *temp_opp;
int count = 0;
rcu_read_lock();
opp_table = _find_opp_table(dev);
if (IS_ERR(opp_table)) {
count = PTR_ERR(opp_table);
dev_err(dev, "%s: OPP table not found (%d)\n",
__func__, count);
goto out_unlock;
return count;
}
list_for_each_entry_rcu(temp_opp, &opp_table->opp_list, node) {
mutex_lock(&opp_table->lock);
list_for_each_entry(temp_opp, &opp_table->opp_list, node) {
if (temp_opp->available)
count++;
}
out_unlock:
rcu_read_unlock();
mutex_unlock(&opp_table->lock);
dev_pm_opp_put_opp_table(opp_table);
return count;
}
EXPORT_SYMBOL_GPL(dev_pm_opp_get_opp_count);
@@ -411,11 +342,8 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_get_opp_count);
 * This provides a mechanism to enable an opp which is not available currently
 * or the opposite as well.
 *
- * Locking: This function must be called under rcu_read_lock(). opp is a rcu
- * protected pointer. The reason for the same is that the opp pointer which is
- * returned will remain valid for use with opp_get_{voltage, freq} only while
- * under the locked area. The pointer returned must be used prior to unlocking
- * with rcu_read_unlock() to maintain the integrity of the pointer.
+ * The callers are required to call dev_pm_opp_put() for the returned OPP after
+ * use.
 */
 struct dev_pm_opp *dev_pm_opp_find_freq_exact(struct device *dev,
 					      unsigned long freq,
@@ -424,8 +352,6 @@ struct dev_pm_opp *dev_pm_opp_find_freq_exact(struct device *dev,
 	struct opp_table *opp_table;
 	struct dev_pm_opp *temp_opp, *opp = ERR_PTR(-ERANGE);

-	opp_rcu_lockdep_assert();
-
 	opp_table = _find_opp_table(dev);
 	if (IS_ERR(opp_table)) {
 		int r = PTR_ERR(opp_table);
@@ -434,14 +360,22 @@ struct dev_pm_opp *dev_pm_opp_find_freq_exact(struct device *dev,
 		return ERR_PTR(r);
 	}

-	list_for_each_entry_rcu(temp_opp, &opp_table->opp_list, node) {
+	mutex_lock(&opp_table->lock);
+
+	list_for_each_entry(temp_opp, &opp_table->opp_list, node) {
 		if (temp_opp->available == available &&
 				temp_opp->rate == freq) {
 			opp = temp_opp;
+
+			/* Increment the reference count of OPP */
+			dev_pm_opp_get(opp);
 			break;
 		}
 	}

+	mutex_unlock(&opp_table->lock);
+	dev_pm_opp_put_opp_table(opp_table);
+
 	return opp;
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_find_freq_exact);
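From this point on, every successful dev_pm_opp_find_freq_*() lookup returns an OPP holding a reference, and the caller must drop it with dev_pm_opp_put(). A usage sketch under the new contract (hypothetical caller, not part of this patch):

	struct dev_pm_opp *opp;
	unsigned long u_volt;

	opp = dev_pm_opp_find_freq_exact(dev, 1000000000, true); /* 1 GHz, assumed rate */
	if (IS_ERR(opp))
		return PTR_ERR(opp);

	u_volt = dev_pm_opp_get_voltage(opp);
	dev_pm_opp_put(opp); /* drop the reference taken by the lookup */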
@@ -451,14 +385,21 @@ static noinline struct dev_pm_opp *_find_freq_ceil(struct opp_table *opp_table,
 {
 	struct dev_pm_opp *temp_opp, *opp = ERR_PTR(-ERANGE);

-	list_for_each_entry_rcu(temp_opp, &opp_table->opp_list, node) {
+	mutex_lock(&opp_table->lock);
+
+	list_for_each_entry(temp_opp, &opp_table->opp_list, node) {
 		if (temp_opp->available && temp_opp->rate >= *freq) {
 			opp = temp_opp;
 			*freq = opp->rate;
+
+			/* Increment the reference count of OPP */
+			dev_pm_opp_get(opp);
 			break;
 		}
 	}

+	mutex_unlock(&opp_table->lock);
+
 	return opp;
 }
@@ -477,18 +418,14 @@ static noinline struct dev_pm_opp *_find_freq_ceil(struct opp_table *opp_table,
 * ERANGE:	no match found for search
 * ENODEV:	if device not found in list of registered devices
 *
- * Locking: This function must be called under rcu_read_lock(). opp is a rcu
- * protected pointer. The reason for the same is that the opp pointer which is
- * returned will remain valid for use with opp_get_{voltage, freq} only while
- * under the locked area. The pointer returned must be used prior to unlocking
- * with rcu_read_unlock() to maintain the integrity of the pointer.
+ * The callers are required to call dev_pm_opp_put() for the returned OPP after
+ * use.
 */
 struct dev_pm_opp *dev_pm_opp_find_freq_ceil(struct device *dev,
 					     unsigned long *freq)
 {
 	struct opp_table *opp_table;
+	struct dev_pm_opp *opp;

-	opp_rcu_lockdep_assert();
-
 	if (!dev || !freq) {
 		dev_err(dev, "%s: Invalid argument freq=%p\n", __func__, freq);
@@ -499,7 +436,11 @@ struct dev_pm_opp *dev_pm_opp_find_freq_ceil(struct device *dev,
 	if (IS_ERR(opp_table))
 		return ERR_CAST(opp_table);

-	return _find_freq_ceil(opp_table, freq);
+	opp = _find_freq_ceil(opp_table, freq);
+
+	dev_pm_opp_put_opp_table(opp_table);
+
+	return opp;
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_find_freq_ceil);
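The same get/put rule allows walking the whole table without RCU. A sketch of the canonical iteration pattern, matching what dev_pm_opp_init_cpufreq_table() does further down (hypothetical caller, not part of this patch):

	struct dev_pm_opp *opp;
	unsigned long rate;

	for (rate = 0; !IS_ERR(opp = dev_pm_opp_find_freq_ceil(dev, &rate)); rate++) {
		/* 'rate' now holds the matched OPP's frequency; use it here */
		dev_pm_opp_put(opp); /* release before moving to the next OPP */
	}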
@@ -518,11 +459,8 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_find_freq_ceil);
 * ERANGE:	no match found for search
 * ENODEV:	if device not found in list of registered devices
 *
- * Locking: This function must be called under rcu_read_lock(). opp is a rcu
- * protected pointer. The reason for the same is that the opp pointer which is
- * returned will remain valid for use with opp_get_{voltage, freq} only while
- * under the locked area. The pointer returned must be used prior to unlocking
- * with rcu_read_unlock() to maintain the integrity of the pointer.
+ * The callers are required to call dev_pm_opp_put() for the returned OPP after
+ * use.
 */
 struct dev_pm_opp *dev_pm_opp_find_freq_floor(struct device *dev,
 					      unsigned long *freq)
@@ -530,8 +468,6 @@ struct dev_pm_opp *dev_pm_opp_find_freq_floor(struct device *dev,
 	struct opp_table *opp_table;
 	struct dev_pm_opp *temp_opp, *opp = ERR_PTR(-ERANGE);

-	opp_rcu_lockdep_assert();
-
 	if (!dev || !freq) {
 		dev_err(dev, "%s: Invalid argument freq=%p\n", __func__, freq);
 		return ERR_PTR(-EINVAL);
@@ -541,7 +477,9 @@ struct dev_pm_opp *dev_pm_opp_find_freq_floor(struct device *dev,
 	if (IS_ERR(opp_table))
 		return ERR_CAST(opp_table);

-	list_for_each_entry_rcu(temp_opp, &opp_table->opp_list, node) {
+	mutex_lock(&opp_table->lock);
+
+	list_for_each_entry(temp_opp, &opp_table->opp_list, node) {
 		if (temp_opp->available) {
 			/* go to the next node, before choosing prev */
 			if (temp_opp->rate > *freq)
@@ -550,6 +488,13 @@ struct dev_pm_opp *dev_pm_opp_find_freq_floor(struct device *dev,
 			opp = temp_opp;
 		}
 	}

+	/* Increment the reference count of OPP */
+	if (!IS_ERR(opp))
+		dev_pm_opp_get(opp);
+	mutex_unlock(&opp_table->lock);
+	dev_pm_opp_put_opp_table(opp_table);
+
 	if (!IS_ERR(opp))
 		*freq = opp->rate;
@@ -557,34 +502,6 @@ struct dev_pm_opp *dev_pm_opp_find_freq_floor(struct device *dev,
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_find_freq_floor);

-/*
- * The caller needs to ensure that opp_table (and hence the clk) isn't freed,
- * while clk returned here is used.
- */
-static struct clk *_get_opp_clk(struct device *dev)
-{
-	struct opp_table *opp_table;
-	struct clk *clk;
-
-	rcu_read_lock();
-
-	opp_table = _find_opp_table(dev);
-	if (IS_ERR(opp_table)) {
-		dev_err(dev, "%s: device opp doesn't exist\n", __func__);
-		clk = ERR_CAST(opp_table);
-		goto unlock;
-	}
-
-	clk = opp_table->clk;
-	if (IS_ERR(clk))
-		dev_err(dev, "%s: No clock available for the device\n",
-			__func__);
-
-unlock:
-	rcu_read_unlock();
-	return clk;
-}
-
 static int _set_opp_voltage(struct device *dev, struct regulator *reg,
 			    struct dev_pm_opp_supply *supply)
 {
@@ -680,8 +597,6 @@ static int _generic_set_opp(struct dev_pm_set_opp_data *data)
 *
 * This configures the power-supplies and clock source to the levels specified
 * by the OPP corresponding to the target_freq.
- *
- * Locking: This function takes rcu_read_lock().
 */
 int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
 {
@@ -700,9 +615,19 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
 		return -EINVAL;
 	}

-	clk = _get_opp_clk(dev);
-	if (IS_ERR(clk))
-		return PTR_ERR(clk);
+	opp_table = _find_opp_table(dev);
+	if (IS_ERR(opp_table)) {
+		dev_err(dev, "%s: device opp doesn't exist\n", __func__);
+		return PTR_ERR(opp_table);
+	}
+
+	clk = opp_table->clk;
+	if (IS_ERR(clk)) {
+		dev_err(dev, "%s: No clock available for the device\n",
+			__func__);
+		ret = PTR_ERR(clk);
+		goto put_opp_table;
+	}

 	freq = clk_round_rate(clk, target_freq);
 	if ((long)freq <= 0)
@@ -714,16 +639,8 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
 	if (old_freq == freq) {
 		dev_dbg(dev, "%s: old/new frequencies (%lu Hz) are same, nothing to do\n",
 			__func__, freq);
-		return 0;
-	}
-
-	rcu_read_lock();
-
-	opp_table = _find_opp_table(dev);
-	if (IS_ERR(opp_table)) {
-		dev_err(dev, "%s: device opp doesn't exist\n", __func__);
-		rcu_read_unlock();
-		return PTR_ERR(opp_table);
+		ret = 0;
+		goto put_opp_table;
 	}

 	old_opp = _find_freq_ceil(opp_table, &old_freq);
@@ -737,8 +654,7 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
 		ret = PTR_ERR(opp);
 		dev_err(dev, "%s: failed to find OPP for freq %lu (%d)\n",
 			__func__, freq, ret);
-		rcu_read_unlock();
-		return ret;
+		goto put_old_opp;
 	}

 	dev_dbg(dev, "%s: switching OPP: %lu Hz --> %lu Hz\n", __func__,
@@ -748,8 +664,8 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
 	/* Only frequency scaling */
 	if (!regulators) {
-		rcu_read_unlock();
-		return _generic_set_opp_clk_only(dev, clk, old_freq, freq);
+		ret = _generic_set_opp_clk_only(dev, clk, old_freq, freq);
+		goto put_opps;
 	}

 	if (opp_table->set_opp)
@@ -773,28 +689,26 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
 	data->new_opp.rate = freq;
 	memcpy(data->new_opp.supplies, opp->supplies, size);

-	rcu_read_unlock();
+	ret = set_opp(data);

-	return set_opp(data);
+put_opps:
+	dev_pm_opp_put(opp);
+put_old_opp:
+	if (!IS_ERR(old_opp))
+		dev_pm_opp_put(old_opp);
+put_opp_table:
+	dev_pm_opp_put_opp_table(opp_table);
+
+	return ret;
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_set_rate);

 /* OPP-dev Helpers */
-static void _kfree_opp_dev_rcu(struct rcu_head *head)
-{
-	struct opp_device *opp_dev;
-
-	opp_dev = container_of(head, struct opp_device, rcu_head);
-	kfree_rcu(opp_dev, rcu_head);
-}
-
 static void _remove_opp_dev(struct opp_device *opp_dev,
 			    struct opp_table *opp_table)
 {
 	opp_debug_unregister(opp_dev, opp_table);
 	list_del(&opp_dev->node);
-	call_srcu(&opp_table->srcu_head.srcu, &opp_dev->rcu_head,
-		  _kfree_opp_dev_rcu);
+	kfree(opp_dev);
 }

 struct opp_device *_add_opp_dev(const struct device *dev,
@@ -809,7 +723,7 @@ struct opp_device *_add_opp_dev(const struct device *dev,
 	/* Initialize opp-dev */
 	opp_dev->dev = dev;
-	list_add_rcu(&opp_dev->node, &opp_table->dev_list);
+	list_add(&opp_dev->node, &opp_table->dev_list);

 	/* Create debugfs entries for the opp_table */
 	ret = opp_debug_register(opp_dev, opp_table);
@@ -820,26 +734,12 @@ struct opp_device *_add_opp_dev(const struct device *dev,
 	return opp_dev;
 }

-/**
- * _add_opp_table() - Find OPP table or allocate a new one
- * @dev:	device for which we do this operation
- *
- * It tries to find an existing table first, if it couldn't find one, it
- * allocates a new OPP table and returns that.
- *
- * Return: valid opp_table pointer if success, else NULL.
- */
-static struct opp_table *_add_opp_table(struct device *dev)
+static struct opp_table *_allocate_opp_table(struct device *dev)
 {
 	struct opp_table *opp_table;
 	struct opp_device *opp_dev;
 	int ret;

-	/* Check for existing table for 'dev' first */
-	opp_table = _find_opp_table(dev);
-	if (!IS_ERR(opp_table))
-		return opp_table;
-
 	/*
 	 * Allocate a new OPP table. In the infrequent case where a new
 	 * device is needed to be added, we pay this penalty.
@@ -867,50 +767,45 @@ static struct opp_table *_add_opp_table(struct device *dev)
 			ret);
 	}

-	srcu_init_notifier_head(&opp_table->srcu_head);
+	BLOCKING_INIT_NOTIFIER_HEAD(&opp_table->head);
 	INIT_LIST_HEAD(&opp_table->opp_list);
+	mutex_init(&opp_table->lock);
+	kref_init(&opp_table->kref);

 	/* Secure the device table modification */
-	list_add_rcu(&opp_table->node, &opp_tables);
+	list_add(&opp_table->node, &opp_tables);
 	return opp_table;
 }

-/**
- * _kfree_device_rcu() - Free opp_table RCU handler
- * @head:	RCU head
- */
-static void _kfree_device_rcu(struct rcu_head *head)
+void _get_opp_table_kref(struct opp_table *opp_table)
 {
-	struct opp_table *opp_table = container_of(head, struct opp_table,
-						   rcu_head);
-
-	kfree_rcu(opp_table, rcu_head);
+	kref_get(&opp_table->kref);
 }

-/**
- * _remove_opp_table() - Removes a OPP table
- * @opp_table: OPP table to be removed.
- *
- * Removes/frees OPP table if it doesn't contain any OPPs.
- */
-static void _remove_opp_table(struct opp_table *opp_table)
+struct opp_table *dev_pm_opp_get_opp_table(struct device *dev)
 {
-	struct opp_device *opp_dev;
+	struct opp_table *opp_table;

-	if (!list_empty(&opp_table->opp_list))
-		return;
+	/* Hold our table modification lock here */
+	mutex_lock(&opp_table_lock);

-	if (opp_table->supported_hw)
-		return;
+	opp_table = _find_opp_table_unlocked(dev);
+	if (!IS_ERR(opp_table))
+		goto unlock;

-	if (opp_table->prop_name)
-		return;
+	opp_table = _allocate_opp_table(dev);

-	if (opp_table->regulators)
-		return;
+unlock:
+	mutex_unlock(&opp_table_lock);

-	if (opp_table->set_opp)
-		return;
+	return opp_table;
+}
+EXPORT_SYMBOL_GPL(dev_pm_opp_get_opp_table);
+
+static void _opp_table_kref_release(struct kref *kref)
+{
+	struct opp_table *opp_table = container_of(kref, struct opp_table, kref);
+	struct opp_device *opp_dev;

 	/* Release clk */
 	if (!IS_ERR(opp_table->clk))
@@ -924,63 +819,60 @@ static void _remove_opp_table(struct opp_table *opp_table)
 	/* dev_list must be empty now */
 	WARN_ON(!list_empty(&opp_table->dev_list));

-	list_del_rcu(&opp_table->node);
-	call_srcu(&opp_table->srcu_head.srcu, &opp_table->rcu_head,
-		  _kfree_device_rcu);
+	mutex_destroy(&opp_table->lock);
+	list_del(&opp_table->node);
+	kfree(opp_table);
+
+	mutex_unlock(&opp_table_lock);
 }

-/**
- * _kfree_opp_rcu() - Free OPP RCU handler
- * @head:	RCU head
- */
-static void _kfree_opp_rcu(struct rcu_head *head)
+void dev_pm_opp_put_opp_table(struct opp_table *opp_table)
 {
-	struct dev_pm_opp *opp = container_of(head, struct dev_pm_opp, rcu_head);
-
-	kfree_rcu(opp, rcu_head);
+	kref_put_mutex(&opp_table->kref, _opp_table_kref_release,
+		       &opp_table_lock);
+}
+EXPORT_SYMBOL_GPL(dev_pm_opp_put_opp_table);
+
+void _opp_free(struct dev_pm_opp *opp)
+{
+	kfree(opp);
 }

-/**
- * _opp_remove() - Remove an OPP from a table definition
- * @opp_table:	points back to the opp_table struct this opp belongs to
- * @opp:	pointer to the OPP to remove
- * @notify:	OPP_EVENT_REMOVE notification should be sent or not
- *
- * This function removes an opp definition from the opp table.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * It is assumed that the caller holds required mutex for an RCU updater
- * strategy.
- */
-void _opp_remove(struct opp_table *opp_table, struct dev_pm_opp *opp,
-		 bool notify)
+static void _opp_kref_release(struct kref *kref)
 {
+	struct dev_pm_opp *opp = container_of(kref, struct dev_pm_opp, kref);
+	struct opp_table *opp_table = opp->opp_table;
+
 	/*
 	 * Notify the changes in the availability of the operable
 	 * frequency/voltage list.
 	 */
-	if (notify)
-		srcu_notifier_call_chain(&opp_table->srcu_head,
-					 OPP_EVENT_REMOVE, opp);
+	blocking_notifier_call_chain(&opp_table->head, OPP_EVENT_REMOVE, opp);
 	opp_debug_remove_one(opp);
-	list_del_rcu(&opp->node);
-	call_srcu(&opp_table->srcu_head.srcu, &opp->rcu_head, _kfree_opp_rcu);
+	list_del(&opp->node);
+	kfree(opp);

-	_remove_opp_table(opp_table);
+	mutex_unlock(&opp_table->lock);
+	dev_pm_opp_put_opp_table(opp_table);
 }

+static void dev_pm_opp_get(struct dev_pm_opp *opp)
+{
+	kref_get(&opp->kref);
+}
+
+void dev_pm_opp_put(struct dev_pm_opp *opp)
+{
+	kref_put_mutex(&opp->kref, _opp_kref_release, &opp->opp_table->lock);
+}
+EXPORT_SYMBOL_GPL(dev_pm_opp_put);
+
 /**
 * dev_pm_opp_remove() - Remove an OPP from OPP table
 * @dev:	device for which we do this operation
 * @freq:	OPP to remove with matching 'freq'
 *
 * This function removes an opp from the opp table.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
 */
 void dev_pm_opp_remove(struct device *dev, unsigned long freq)
 {
@@ -988,12 +880,11 @@ void dev_pm_opp_remove(struct device *dev, unsigned long freq)
 	struct opp_table *opp_table;
 	bool found = false;

-	/* Hold our table modification lock here */
-	mutex_lock(&opp_table_lock);
-
 	opp_table = _find_opp_table(dev);
 	if (IS_ERR(opp_table))
-		goto unlock;
+		return;
+
+	mutex_lock(&opp_table->lock);

 	list_for_each_entry(opp, &opp_table->opp_list, node) {
 		if (opp->rate == freq) {
@@ -1002,28 +893,23 @@ void dev_pm_opp_remove(struct device *dev, unsigned long freq)
 		}
 	}

-	if (!found) {
+	mutex_unlock(&opp_table->lock);
+
+	if (found) {
+		dev_pm_opp_put(opp);
+	} else {
 		dev_warn(dev, "%s: Couldn't find OPP with freq: %lu\n",
 			 __func__, freq);
-		goto unlock;
 	}

-	_opp_remove(opp_table, opp, true);
-unlock:
-	mutex_unlock(&opp_table_lock);
+	dev_pm_opp_put_opp_table(opp_table);
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_remove);

-struct dev_pm_opp *_allocate_opp(struct device *dev,
-				 struct opp_table **opp_table)
+struct dev_pm_opp *_opp_allocate(struct opp_table *table)
 {
 	struct dev_pm_opp *opp;
 	int count, supply_size;
-	struct opp_table *table;
-
-	table = _add_opp_table(dev);
-	if (!table)
-		return NULL;

 	/* Allocate space for at least one supply */
 	count = table->regulator_count ? table->regulator_count : 1;
@@ -1031,17 +917,13 @@ struct dev_pm_opp *_allocate_opp(struct device *dev,
 	/* allocate new OPP node and supplies structures */
 	opp = kzalloc(sizeof(*opp) + supply_size, GFP_KERNEL);
-	if (!opp) {
-		kfree(table);
+	if (!opp)
 		return NULL;
-	}

 	/* Put the supplies at the end of the OPP structure as an empty array */
 	opp->supplies = (struct dev_pm_opp_supply *)(opp + 1);
 	INIT_LIST_HEAD(&opp->node);

-	*opp_table = table;
-
 	return opp;
 }
@@ -1067,11 +949,21 @@ static bool _opp_supported_by_regulators(struct dev_pm_opp *opp,
 	return true;
 }

+/*
+ * Returns:
+ * 0: On success. And appropriate error message for duplicate OPPs.
+ * -EBUSY: For OPP with same freq/volt and is available. The callers of
+ *  _opp_add() must return 0 if they receive -EBUSY from it. This is to make
+ *  sure we don't print error messages unnecessarily if different parts of
+ *  kernel try to initialize the OPP table.
+ * -EEXIST: For OPP with same freq but different volt or is unavailable. This
+ *  should be considered an error by the callers of _opp_add().
+ */
 int _opp_add(struct device *dev, struct dev_pm_opp *new_opp,
 	     struct opp_table *opp_table)
 {
 	struct dev_pm_opp *opp;
-	struct list_head *head = &opp_table->opp_list;
+	struct list_head *head;
 	int ret;

 	/*
@@ -1082,7 +974,10 @@ int _opp_add(struct device *dev, struct dev_pm_opp *new_opp,
 	 * loop, don't replace it with head otherwise it will become an infinite
 	 * loop.
 	 */
-	list_for_each_entry_rcu(opp, &opp_table->opp_list, node) {
+	mutex_lock(&opp_table->lock);
+	head = &opp_table->opp_list;
+
+	list_for_each_entry(opp, &opp_table->opp_list, node) {
 		if (new_opp->rate > opp->rate) {
 			head = &opp->node;
 			continue;
@@ -1098,12 +993,21 @@ int _opp_add(struct device *dev, struct dev_pm_opp *new_opp,
 			 new_opp->supplies[0].u_volt, new_opp->available);

 		/* Should we compare voltages for all regulators here ? */
-		return opp->available &&
-		       new_opp->supplies[0].u_volt == opp->supplies[0].u_volt ? 0 : -EEXIST;
+		ret = opp->available &&
+		      new_opp->supplies[0].u_volt == opp->supplies[0].u_volt ? -EBUSY : -EEXIST;
+
+		mutex_unlock(&opp_table->lock);
+		return ret;
 	}

+	list_add(&new_opp->node, head);
+	mutex_unlock(&opp_table->lock);
+
 	new_opp->opp_table = opp_table;
-	list_add_rcu(&new_opp->node, head);
+	kref_init(&new_opp->kref);
+
+	/* Get a reference to the OPP table */
+	_get_opp_table_kref(opp_table);

 	ret = opp_debug_create_one(new_opp, opp_table);
 	if (ret)
@@ -1121,6 +1025,7 @@ int _opp_add(struct device *dev, struct dev_pm_opp *new_opp,
 /**
 * _opp_add_v1() - Allocate a OPP based on v1 bindings.
+ * @opp_table:	OPP table
 * @dev:	device for which we do this operation
 * @freq:	Frequency in Hz for this OPP
 * @u_volt:	Voltage in uVolts for this OPP
@@ -1133,12 +1038,6 @@ int _opp_add(struct device *dev, struct dev_pm_opp *new_opp,
 * NOTE: "dynamic" parameter impacts OPPs added by the dev_pm_opp_of_add_table
 * and freed by dev_pm_opp_of_remove_table.
 *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
- *
 * Return:
 * 0		On success OR
 *		Duplicate OPPs (both freq and volt are same) and opp->available
@@ -1146,22 +1045,16 @@ int _opp_add(struct device *dev, struct dev_pm_opp *new_opp,
 *		Duplicate OPPs (both freq and volt are same) and !opp->available
 * -ENOMEM	Memory allocation failure
 */
-int _opp_add_v1(struct device *dev, unsigned long freq, long u_volt,
-		bool dynamic)
+int _opp_add_v1(struct opp_table *opp_table, struct device *dev,
+		unsigned long freq, long u_volt, bool dynamic)
 {
-	struct opp_table *opp_table;
 	struct dev_pm_opp *new_opp;
 	unsigned long tol;
 	int ret;

-	/* Hold our table modification lock here */
-	mutex_lock(&opp_table_lock);
-
-	new_opp = _allocate_opp(dev, &opp_table);
-	if (!new_opp) {
-		ret = -ENOMEM;
-		goto unlock;
-	}
+	new_opp = _opp_allocate(opp_table);
+	if (!new_opp)
+		return -ENOMEM;

 	/* populate the opp table */
 	new_opp->rate = freq;
@@ -1173,22 +1066,23 @@ int _opp_add_v1(struct device *dev, unsigned long freq, long u_volt,
 	new_opp->dynamic = dynamic;

 	ret = _opp_add(dev, new_opp, opp_table);
-	if (ret)
+	if (ret) {
+		/* Don't return error for duplicate OPPs */
+		if (ret == -EBUSY)
+			ret = 0;
 		goto free_opp;
-
-	mutex_unlock(&opp_table_lock);
+	}

 	/*
 	 * Notify the changes in the availability of the operable
 	 * frequency/voltage list.
 	 */
-	srcu_notifier_call_chain(&opp_table->srcu_head, OPP_EVENT_ADD, new_opp);
+	blocking_notifier_call_chain(&opp_table->head, OPP_EVENT_ADD, new_opp);
+
 	return 0;

 free_opp:
-	_opp_remove(opp_table, new_opp, false);
-unlock:
-	mutex_unlock(&opp_table_lock);
+	_opp_free(new_opp);
+
 	return ret;
 }
@@ -1202,27 +1096,16 @@ int _opp_add_v1(struct device *dev, unsigned long freq, long u_volt,
 * specify the hierarchy of versions it supports. OPP layer will then enable
 * OPPs, which are available for those versions, based on its 'opp-supported-hw'
 * property.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
 */
-int dev_pm_opp_set_supported_hw(struct device *dev, const u32 *versions,
-				unsigned int count)
+struct opp_table *dev_pm_opp_set_supported_hw(struct device *dev,
+			const u32 *versions, unsigned int count)
 {
 	struct opp_table *opp_table;
-	int ret = 0;
-
-	/* Hold our table modification lock here */
-	mutex_lock(&opp_table_lock);
+	int ret;

-	opp_table = _add_opp_table(dev);
-	if (!opp_table) {
-		ret = -ENOMEM;
-		goto unlock;
-	}
+	opp_table = dev_pm_opp_get_opp_table(dev);
+	if (!opp_table)
+		return ERR_PTR(-ENOMEM);

 	/* Make sure there are no concurrent readers while updating opp_table */
 	WARN_ON(!list_empty(&opp_table->opp_list));
@@ -1243,65 +1126,40 @@ int dev_pm_opp_set_supported_hw(struct device *dev, const u32 *versions,
 	}

 	opp_table->supported_hw_count = count;
-	mutex_unlock(&opp_table_lock);

-	return 0;
+	return opp_table;

 err:
-	_remove_opp_table(opp_table);
-unlock:
-	mutex_unlock(&opp_table_lock);
+	dev_pm_opp_put_opp_table(opp_table);

-	return ret;
+	return ERR_PTR(ret);
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_set_supported_hw);

 /**
 * dev_pm_opp_put_supported_hw() - Releases resources blocked for supported hw
- * @dev: Device for which supported-hw has to be put.
+ * @opp_table: OPP table returned by dev_pm_opp_set_supported_hw().
 *
 * This is required only for the V2 bindings, and is called for a matching
 * dev_pm_opp_set_supported_hw(). Until this is called, the opp_table structure
 * will not be freed.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
 */
-void dev_pm_opp_put_supported_hw(struct device *dev)
+void dev_pm_opp_put_supported_hw(struct opp_table *opp_table)
 {
-	struct opp_table *opp_table;
-
-	/* Hold our table modification lock here */
-	mutex_lock(&opp_table_lock);
-
-	/* Check for existing table for 'dev' first */
-	opp_table = _find_opp_table(dev);
-	if (IS_ERR(opp_table)) {
-		dev_err(dev, "Failed to find opp_table: %ld\n",
-			PTR_ERR(opp_table));
-		goto unlock;
-	}
-
 	/* Make sure there are no concurrent readers while updating opp_table */
 	WARN_ON(!list_empty(&opp_table->opp_list));

 	if (!opp_table->supported_hw) {
-		dev_err(dev, "%s: Doesn't have supported hardware list\n",
-			__func__);
-		goto unlock;
+		pr_err("%s: Doesn't have supported hardware list\n",
+		       __func__);
+		return;
 	}

 	kfree(opp_table->supported_hw);
 	opp_table->supported_hw = NULL;
 	opp_table->supported_hw_count = 0;

-	/* Try freeing opp_table if this was the last blocking resource */
-	_remove_opp_table(opp_table);
-unlock:
-	mutex_unlock(&opp_table_lock);
+	dev_pm_opp_put_opp_table(opp_table);
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_put_supported_hw);
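Since dev_pm_opp_set_supported_hw() now returns the opp_table it pinned, the matching put takes that pointer instead of a device. A driver-side sketch (the version mask is an assumed value, not part of this patch):

	static const u32 versions[] = { 0x2 }; /* assumed SoC revision mask */
	struct opp_table *opp_table;

	opp_table = dev_pm_opp_set_supported_hw(dev, versions, ARRAY_SIZE(versions));
	if (IS_ERR(opp_table))
		return PTR_ERR(opp_table);

	/* ... add and use OPPs ... */

	dev_pm_opp_put_supported_hw(opp_table); /* drops the table reference */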
@@ -1314,26 +1172,15 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_put_supported_hw);
 * specify the extn to be used for certain property names. The properties to
 * which the extension will apply are opp-microvolt and opp-microamp. OPP core
 * should postfix the property name with -<name> while looking for them.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
 */
-int dev_pm_opp_set_prop_name(struct device *dev, const char *name)
+struct opp_table *dev_pm_opp_set_prop_name(struct device *dev, const char *name)
 {
 	struct opp_table *opp_table;
-	int ret = 0;
-
-	/* Hold our table modification lock here */
-	mutex_lock(&opp_table_lock);
+	int ret;

-	opp_table = _add_opp_table(dev);
-	if (!opp_table) {
-		ret = -ENOMEM;
-		goto unlock;
-	}
+	opp_table = dev_pm_opp_get_opp_table(dev);
+	if (!opp_table)
+		return ERR_PTR(-ENOMEM);

 	/* Make sure there are no concurrent readers while updating opp_table */
 	WARN_ON(!list_empty(&opp_table->opp_list));
@@ -1352,63 +1199,37 @@ int dev_pm_opp_set_prop_name(struct device *dev, const char *name)
 		goto err;
 	}

-	mutex_unlock(&opp_table_lock);
-	return 0;
+	return opp_table;

 err:
-	_remove_opp_table(opp_table);
-unlock:
-	mutex_unlock(&opp_table_lock);
+	dev_pm_opp_put_opp_table(opp_table);

-	return ret;
+	return ERR_PTR(ret);
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_set_prop_name);

 /**
 * dev_pm_opp_put_prop_name() - Releases resources blocked for prop-name
- * @dev: Device for which the prop-name has to be put.
+ * @opp_table: OPP table returned by dev_pm_opp_set_prop_name().
 *
 * This is required only for the V2 bindings, and is called for a matching
 * dev_pm_opp_set_prop_name(). Until this is called, the opp_table structure
 * will not be freed.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
 */
-void dev_pm_opp_put_prop_name(struct device *dev)
+void dev_pm_opp_put_prop_name(struct opp_table *opp_table)
 {
-	struct opp_table *opp_table;
-
-	/* Hold our table modification lock here */
-	mutex_lock(&opp_table_lock);
-
-	/* Check for existing table for 'dev' first */
-	opp_table = _find_opp_table(dev);
-	if (IS_ERR(opp_table)) {
-		dev_err(dev, "Failed to find opp_table: %ld\n",
-			PTR_ERR(opp_table));
-		goto unlock;
-	}
-
 	/* Make sure there are no concurrent readers while updating opp_table */
 	WARN_ON(!list_empty(&opp_table->opp_list));

 	if (!opp_table->prop_name) {
-		dev_err(dev, "%s: Doesn't have a prop-name\n", __func__);
-		goto unlock;
+		pr_err("%s: Doesn't have a prop-name\n", __func__);
+		return;
 	}

 	kfree(opp_table->prop_name);
 	opp_table->prop_name = NULL;

-	/* Try freeing opp_table if this was the last blocking resource */
-	_remove_opp_table(opp_table);
-unlock:
-	mutex_unlock(&opp_table_lock);
+	dev_pm_opp_put_opp_table(opp_table);
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_put_prop_name);
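dev_pm_opp_put_prop_name() follows the same pairing. Sketch (the "slow" property name is hypothetical, not part of this patch):

	struct opp_table *opp_table = dev_pm_opp_set_prop_name(dev, "slow");

	if (IS_ERR(opp_table))
		return PTR_ERR(opp_table);
	/* ... */
	dev_pm_opp_put_prop_name(opp_table);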
@@ -1455,12 +1276,6 @@ static void _free_set_opp_data(struct opp_table *opp_table)
 * well.
 *
 * This must be called before any OPPs are initialized for the device.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
 */
 struct opp_table *dev_pm_opp_set_regulators(struct device *dev,
 					    const char * const names[],
@@ -1470,13 +1285,9 @@ struct opp_table *dev_pm_opp_set_regulators(struct device *dev,
 	struct regulator *reg;
 	int ret, i;

-	mutex_lock(&opp_table_lock);
-
-	opp_table = _add_opp_table(dev);
-	if (!opp_table) {
-		ret = -ENOMEM;
-		goto unlock;
-	}
+	opp_table = dev_pm_opp_get_opp_table(dev);
+	if (!opp_table)
+		return ERR_PTR(-ENOMEM);

 	/* This should be called before OPPs are initialized */
 	if (WARN_ON(!list_empty(&opp_table->opp_list))) {
@@ -1518,7 +1329,6 @@ struct opp_table *dev_pm_opp_set_regulators(struct device *dev,
 	if (ret)
 		goto free_regulators;

-	mutex_unlock(&opp_table_lock);
 	return opp_table;

 free_regulators:
@@ -1529,9 +1339,7 @@ struct opp_table *dev_pm_opp_set_regulators(struct device *dev,
 	opp_table->regulators = NULL;
 	opp_table->regulator_count = 0;
 err:
-	_remove_opp_table(opp_table);
-unlock:
-	mutex_unlock(&opp_table_lock);
+	dev_pm_opp_put_opp_table(opp_table);

 	return ERR_PTR(ret);
 }
@@ -1540,22 +1348,14 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_set_regulators);
 /**
 * dev_pm_opp_put_regulators() - Releases resources blocked for regulator
 * @opp_table: OPP table returned from dev_pm_opp_set_regulators().
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
 */
 void dev_pm_opp_put_regulators(struct opp_table *opp_table)
 {
 	int i;

-	mutex_lock(&opp_table_lock);
-
 	if (!opp_table->regulators) {
 		pr_err("%s: Doesn't have regulators set\n", __func__);
-		goto unlock;
+		return;
 	}

 	/* Make sure there are no concurrent readers while updating opp_table */
@@ -1570,11 +1370,7 @@ void dev_pm_opp_put_regulators(struct opp_table *opp_table)
 	opp_table->regulators = NULL;
 	opp_table->regulator_count = 0;

-	/* Try freeing opp_table if this was the last blocking resource */
-	_remove_opp_table(opp_table);
-unlock:
-	mutex_unlock(&opp_table_lock);
+	dev_pm_opp_put_opp_table(opp_table);
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_put_regulators);
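dev_pm_opp_set_regulators() already returned the opp_table; what changes is that its error paths and the put now ride on the table's kref instead of _remove_opp_table(). Pairing sketch (the supply name is an assumption, not part of this patch):

	static const char * const names[] = { "vdd" }; /* assumed supply */
	struct opp_table *opp_table;

	opp_table = dev_pm_opp_set_regulators(dev, names, ARRAY_SIZE(names));
	if (IS_ERR(opp_table))
		return PTR_ERR(opp_table);
	/* ... */
	dev_pm_opp_put_regulators(opp_table);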
@@ -1587,29 +1383,19 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_put_regulators);
 * regulators per device), instead of the generic OPP set rate helper.
 *
 * This must be called before any OPPs are initialized for the device.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
 */
-int dev_pm_opp_register_set_opp_helper(struct device *dev,
-		int (*set_opp)(struct dev_pm_set_opp_data *data))
+struct opp_table *dev_pm_opp_register_set_opp_helper(struct device *dev,
+			int (*set_opp)(struct dev_pm_set_opp_data *data))
 {
 	struct opp_table *opp_table;
 	int ret;

 	if (!set_opp)
-		return -EINVAL;
-
-	mutex_lock(&opp_table_lock);
+		return ERR_PTR(-EINVAL);

-	opp_table = _add_opp_table(dev);
-	if (!opp_table) {
-		ret = -ENOMEM;
-		goto unlock;
-	}
+	opp_table = dev_pm_opp_get_opp_table(dev);
+	if (!opp_table)
+		return ERR_PTR(-ENOMEM);

 	/* This should be called before OPPs are initialized */
 	if (WARN_ON(!list_empty(&opp_table->opp_list))) {
@@ -1625,47 +1411,28 @@ int dev_pm_opp_register_set_opp_helper(struct device *dev,
 	opp_table->set_opp = set_opp;

-	mutex_unlock(&opp_table_lock);
-	return 0;
+	return opp_table;

 err:
-	_remove_opp_table(opp_table);
-unlock:
-	mutex_unlock(&opp_table_lock);
+	dev_pm_opp_put_opp_table(opp_table);

-	return ret;
+	return ERR_PTR(ret);
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_register_set_opp_helper);

 /**
 * dev_pm_opp_register_put_opp_helper() - Releases resources blocked for
 *					  set_opp helper
- * @dev: Device for which custom set_opp helper has to be cleared.
+ * @opp_table: OPP table returned from dev_pm_opp_register_set_opp_helper().
 *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
+ * Release resources blocked for platform specific set_opp helper.
 */
-void dev_pm_opp_register_put_opp_helper(struct device *dev)
+void dev_pm_opp_register_put_opp_helper(struct opp_table *opp_table)
 {
-	struct opp_table *opp_table;
-
-	mutex_lock(&opp_table_lock);
-
-	/* Check for existing table for 'dev' first */
-	opp_table = _find_opp_table(dev);
-	if (IS_ERR(opp_table)) {
-		dev_err(dev, "Failed to find opp_table: %ld\n",
-			PTR_ERR(opp_table));
-		goto unlock;
-	}
-
 	if (!opp_table->set_opp) {
-		dev_err(dev, "%s: Doesn't have custom set_opp helper set\n",
-			__func__);
-		goto unlock;
+		pr_err("%s: Doesn't have custom set_opp helper set\n",
+		       __func__);
+		return;
 	}

 	/* Make sure there are no concurrent readers while updating opp_table */
@@ -1673,11 +1440,7 @@ void dev_pm_opp_register_put_opp_helper(struct device *dev)
 	opp_table->set_opp = NULL;

-	/* Try freeing opp_table if this was the last blocking resource */
-	_remove_opp_table(opp_table);
-unlock:
-	mutex_unlock(&opp_table_lock);
+	dev_pm_opp_put_opp_table(opp_table);
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_register_put_opp_helper);
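The set_opp helper registration is converted the same way and now hands back the pinned opp_table. Sketch (my_set_opp is a hypothetical callback, not part of this patch):

	static int my_set_opp(struct dev_pm_set_opp_data *data)
	{
		/* program clk/regulators from data->old_opp and data->new_opp */
		return 0;
	}

	struct opp_table *opp_table;

	opp_table = dev_pm_opp_register_set_opp_helper(dev, my_set_opp);
	if (IS_ERR(opp_table))
		return PTR_ERR(opp_table);
	/* ... */
	dev_pm_opp_register_put_opp_helper(opp_table);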
@@ -1691,12 +1454,6 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_register_put_opp_helper);
 * The opp is made available by default and it can be controlled using
 * dev_pm_opp_enable/disable functions.
 *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
- *
 * Return:
 * 0		On success OR
 *		Duplicate OPPs (both freq and volt are same) and opp->available
@@ -1706,7 +1463,17 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_register_put_opp_helper);
 */
 int dev_pm_opp_add(struct device *dev, unsigned long freq, unsigned long u_volt)
 {
-	return _opp_add_v1(dev, freq, u_volt, true);
+	struct opp_table *opp_table;
+	int ret;
+
+	opp_table = dev_pm_opp_get_opp_table(dev);
+	if (!opp_table)
+		return -ENOMEM;
+
+	ret = _opp_add_v1(opp_table, dev, freq, u_volt, true);
+
+	dev_pm_opp_put_opp_table(opp_table);
+
+	return ret;
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_add);
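Note the -EBUSY handling above: duplicate OPPs with identical frequency and voltage are now silently accepted, so different parts of the kernel may populate the same table. Probe-time sketch (frequencies and voltages are assumed values, not part of this patch):

	ret = dev_pm_opp_add(dev, 500000000, 1000000);      /* 500 MHz @ 1.0 V */
	if (!ret)
		ret = dev_pm_opp_add(dev, 1000000000, 1200000); /* 1 GHz @ 1.2 V */
	if (ret)
		return ret;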
@@ -1716,41 +1483,30 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_add);
 * @freq:		OPP frequency to modify availability
 * @availability_req:	availability status requested for this opp
 *
- * Set the availability of an OPP with an RCU operation, opp_{enable,disable}
- * share a common logic which is isolated here.
+ * Set the availability of an OPP, opp_{enable,disable} share a common logic
+ * which is isolated here.
 *
 * Return: -EINVAL for bad pointers, -ENOMEM if no memory available for the
 * copy operation, returns 0 if no modification was done OR modification was
 * successful.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks to
- * keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex locking or synchronize_rcu() blocking calls cannot be used.
 */
 static int _opp_set_availability(struct device *dev, unsigned long freq,
 				 bool availability_req)
 {
 	struct opp_table *opp_table;
-	struct dev_pm_opp *new_opp, *tmp_opp, *opp = ERR_PTR(-ENODEV);
+	struct dev_pm_opp *tmp_opp, *opp = ERR_PTR(-ENODEV);
 	int r = 0;

-	/* keep the node allocated */
-	new_opp = kmalloc(sizeof(*new_opp), GFP_KERNEL);
-	if (!new_opp)
-		return -ENOMEM;
-
-	mutex_lock(&opp_table_lock);
-
 	/* Find the opp_table */
 	opp_table = _find_opp_table(dev);
 	if (IS_ERR(opp_table)) {
 		r = PTR_ERR(opp_table);
 		dev_warn(dev, "%s: Device OPP not found (%d)\n", __func__, r);
-		goto unlock;
+		return r;
 	}

+	mutex_lock(&opp_table->lock);
+
 	/* Do we have the frequency? */
 	list_for_each_entry(tmp_opp, &opp_table->opp_list, node) {
 		if (tmp_opp->rate == freq) {
@@ -1758,6 +1514,7 @@ static int _opp_set_availability(struct device *dev, unsigned long freq,
 			break;
 		}
 	}
+
 	if (IS_ERR(opp)) {
 		r = PTR_ERR(opp);
 		goto unlock;
@@ -1766,29 +1523,20 @@ static int _opp_set_availability(struct device *dev, unsigned long freq,
 	/* Is update really needed? */
 	if (opp->available == availability_req)
 		goto unlock;

-	/* copy the old data over */
-	*new_opp = *opp;
-
-	/* plug in new node */
-	new_opp->available = availability_req;
-
-	list_replace_rcu(&opp->node, &new_opp->node);
-	mutex_unlock(&opp_table_lock);
-	call_srcu(&opp_table->srcu_head.srcu, &opp->rcu_head, _kfree_opp_rcu);
+	opp->available = availability_req;

 	/* Notify the change of the OPP availability */
 	if (availability_req)
-		srcu_notifier_call_chain(&opp_table->srcu_head,
-					 OPP_EVENT_ENABLE, new_opp);
+		blocking_notifier_call_chain(&opp_table->head, OPP_EVENT_ENABLE,
+					     opp);
 	else
-		srcu_notifier_call_chain(&opp_table->srcu_head,
-					 OPP_EVENT_DISABLE, new_opp);
+		blocking_notifier_call_chain(&opp_table->head,
+					     OPP_EVENT_DISABLE, opp);

-	return 0;
 unlock:
-	mutex_unlock(&opp_table_lock);
-	kfree(new_opp);
+	mutex_unlock(&opp_table->lock);
+	dev_pm_opp_put_opp_table(opp_table);
+
 	return r;
 }
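With the list under opp_table->lock, availability is flipped in place instead of the old allocate-copy-list_replace_rcu() dance, and the notification moves to a blocking chain. Caller-side sketch (the frequency is an assumed value, not part of this patch):

	/* temporarily fence off an OPP, e.g. for thermal reasons */
	ret = dev_pm_opp_disable(dev, 1200000000);
	/* ... later ... */
	ret = dev_pm_opp_enable(dev, 1200000000);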
@@ -1801,12 +1549,6 @@ static int _opp_set_availability(struct device *dev, unsigned long freq,
 * corresponding error value. It is meant to be used for users an OPP available
 * after being temporarily made unavailable with dev_pm_opp_disable.
 *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function indirectly uses RCU and mutex locks to keep the
- * integrity of the internal data structures. Callers should ensure that
- * this function is *NOT* called under RCU protection or in contexts where
- * mutex locking or synchronize_rcu() blocking calls cannot be used.
- *
 * Return: -EINVAL for bad pointers, -ENOMEM if no memory available for the
 * copy operation, returns 0 if no modification was done OR modification was
 * successful.
@@ -1827,12 +1569,6 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_enable);
 * control by users to make this OPP not available until the circumstances are
 * right to make it available again (with a call to dev_pm_opp_enable).
 *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function indirectly uses RCU and mutex locks to keep the
- * integrity of the internal data structures. Callers should ensure that
- * this function is *NOT* called under RCU protection or in contexts where
- * mutex locking or synchronize_rcu() blocking calls cannot be used.
- *
 * Return: -EINVAL for bad pointers, -ENOMEM if no memory available for the
 * copy operation, returns 0 if no modification was done OR modification was
 * successful.
@@ -1844,41 +1580,78 @@ int dev_pm_opp_disable(struct device *dev, unsigned long freq)
 EXPORT_SYMBOL_GPL(dev_pm_opp_disable);

 /**
- * dev_pm_opp_get_notifier() - find notifier_head of the device with opp
- * @dev:	device pointer used to lookup OPP table.
+ * dev_pm_opp_register_notifier() - Register OPP notifier for the device
+ * @dev:	Device for which notifier needs to be registered
+ * @nb:	Notifier block to be registered
 *
- * Return: pointer to notifier head if found, otherwise -ENODEV or
- * -EINVAL based on type of error casted as pointer. value must be checked
- * with IS_ERR to determine valid pointer or error result.
- *
- * Locking: This function must be called under rcu_read_lock(). opp_table is a
- * RCU protected pointer. The reason for the same is that the opp pointer which
- * is returned will remain valid for use with opp_get_{voltage, freq} only while
- * under the locked area. The pointer returned must be used prior to unlocking
- * with rcu_read_unlock() to maintain the integrity of the pointer.
+ * Return: 0 on success or a negative error value.
 */
-struct srcu_notifier_head *dev_pm_opp_get_notifier(struct device *dev)
+int dev_pm_opp_register_notifier(struct device *dev, struct notifier_block *nb)
 {
-	struct opp_table *opp_table = _find_opp_table(dev);
+	struct opp_table *opp_table;
+	int ret;

+	opp_table = _find_opp_table(dev);
 	if (IS_ERR(opp_table))
-		return ERR_CAST(opp_table); /* matching type */
+		return PTR_ERR(opp_table);
+
+	ret = blocking_notifier_chain_register(&opp_table->head, nb);
+
+	dev_pm_opp_put_opp_table(opp_table);
+
+	return ret;
+}
+EXPORT_SYMBOL(dev_pm_opp_register_notifier);
+
+/**
+ * dev_pm_opp_unregister_notifier() - Unregister OPP notifier for the device
+ * @dev:	Device for which notifier needs to be unregistered
+ * @nb:	Notifier block to be unregistered
+ *
+ * Return: 0 on success or a negative error value.
+ */
+int dev_pm_opp_unregister_notifier(struct device *dev,
+				   struct notifier_block *nb)
+{
+	struct opp_table *opp_table;
+	int ret;
+
+	opp_table = _find_opp_table(dev);
+	if (IS_ERR(opp_table))
+		return PTR_ERR(opp_table);
+
+	ret = blocking_notifier_chain_unregister(&opp_table->head, nb);
+
+	dev_pm_opp_put_opp_table(opp_table);

-	return &opp_table->srcu_head;
+	return ret;
 }
-EXPORT_SYMBOL_GPL(dev_pm_opp_get_notifier);
+EXPORT_SYMBOL(dev_pm_opp_unregister_notifier);

 /*
 * Free OPPs either created using static entries present in DT or even the
 * dynamically added entries based on remove_all param.
 */
-void _dev_pm_opp_remove_table(struct device *dev, bool remove_all)
+void _dev_pm_opp_remove_table(struct opp_table *opp_table, struct device *dev,
+			      bool remove_all)
 {
-	struct opp_table *opp_table;
 	struct dev_pm_opp *opp, *tmp;

-	/* Hold our table modification lock here */
-	mutex_lock(&opp_table_lock);
+	/* Find if opp_table manages a single device */
+	if (list_is_singular(&opp_table->dev_list)) {
+		/* Free static OPPs */
+		list_for_each_entry_safe(opp, tmp, &opp_table->opp_list, node) {
+			if (remove_all || !opp->dynamic)
+				dev_pm_opp_put(opp);
+		}
+	} else {
+		_remove_opp_dev(_find_opp_dev(dev, opp_table), opp_table);
+	}
+}
+
+void _dev_pm_opp_find_and_remove_table(struct device *dev, bool remove_all)
+{
+	struct opp_table *opp_table;

 	/* Check for existing table for 'dev' */
 	opp_table = _find_opp_table(dev);
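Clients that used to grab the srcu notifier head directly now go through the register/unregister wrappers above. Sketch of a client (the callback and its behaviour are hypothetical, not part of this patch):

	static int my_opp_notify(struct notifier_block *nb, unsigned long event,
				 void *data)
	{
		/* 'data' is the struct dev_pm_opp * that changed */
		if (event == OPP_EVENT_ADD || event == OPP_EVENT_REMOVE)
			pr_debug("OPP list changed\n");
		return NOTIFY_OK;
	}

	static struct notifier_block my_opp_nb = {
		.notifier_call = my_opp_notify,
	};

	ret = dev_pm_opp_register_notifier(dev, &my_opp_nb);
	/* ... */
	dev_pm_opp_unregister_notifier(dev, &my_opp_nb);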
@@ -1890,22 +1663,12 @@ void _dev_pm_opp_remove_table(struct device *dev, bool remove_all)
 			IS_ERR_OR_NULL(dev) ?
 			"Invalid device" : dev_name(dev),
 			error);
-		goto unlock;
+		return;
 	}

-	/* Find if opp_table manages a single device */
-	if (list_is_singular(&opp_table->dev_list)) {
-		/* Free static OPPs */
-		list_for_each_entry_safe(opp, tmp, &opp_table->opp_list, node) {
-			if (remove_all || !opp->dynamic)
-				_opp_remove(opp_table, opp, true);
-		}
-	} else {
-		_remove_opp_dev(_find_opp_dev(dev, opp_table), opp_table);
-	}
+	_dev_pm_opp_remove_table(opp_table, dev, remove_all);

-unlock:
-	mutex_unlock(&opp_table_lock);
+	dev_pm_opp_put_opp_table(opp_table);
 }

 /**
@@ -1914,15 +1677,9 @@ void _dev_pm_opp_remove_table(struct device *dev, bool remove_all)
 *
 * Free both OPPs created using static entries present in DT and the
 * dynamically added entries.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function indirectly uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
 */
 void dev_pm_opp_remove_table(struct device *dev)
 {
-	_dev_pm_opp_remove_table(dev, true);
+	_dev_pm_opp_find_and_remove_table(dev, true);
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_remove_table);
@@ -42,11 +42,6 @@
  *
  * WARNING: It is important for the callers to ensure refreshing their copy of
  * the table if any of the mentioned functions have been invoked in the interim.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Since we just use the regular accessor functions to access the internal data
- * structures, we use RCU read lock inside this function. As a result, users of
- * this function DONOT need to use explicit locks for invoking.
  */
 int dev_pm_opp_init_cpufreq_table(struct device *dev,
 				  struct cpufreq_frequency_table **table)
@@ -56,19 +51,13 @@ int dev_pm_opp_init_cpufreq_table(struct device *dev,
 	int i, max_opps, ret = 0;
 	unsigned long rate;
 
-	rcu_read_lock();
-
 	max_opps = dev_pm_opp_get_opp_count(dev);
-	if (max_opps <= 0) {
-		ret = max_opps ? max_opps : -ENODATA;
-		goto out;
-	}
+	if (max_opps <= 0)
+		return max_opps ? max_opps : -ENODATA;
 
 	freq_table = kcalloc((max_opps + 1), sizeof(*freq_table), GFP_ATOMIC);
-	if (!freq_table) {
-		ret = -ENOMEM;
-		goto out;
-	}
+	if (!freq_table)
+		return -ENOMEM;
 
 	for (i = 0, rate = 0; i < max_opps; i++, rate++) {
 		/* find next rate */
@@ -83,6 +72,8 @@ int dev_pm_opp_init_cpufreq_table(struct device *dev,
 		/* Is Boost/turbo opp ? */
 		if (dev_pm_opp_is_turbo(opp))
 			freq_table[i].flags = CPUFREQ_BOOST_FREQ;
+
+		dev_pm_opp_put(opp);
 	}
 
 	freq_table[i].driver_data = i;
@@ -91,7 +82,6 @@ int dev_pm_opp_init_cpufreq_table(struct device *dev,
 	*table = &freq_table[0];
 
 out:
-	rcu_read_unlock();
 	if (ret)
 		kfree(freq_table);
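The same conversion repeats in every helper below: instead of wrapping lookups in rcu_read_lock()/rcu_read_unlock(), a caller now takes a counted reference from the find helpers and drops it with dev_pm_opp_put() once the OPP's fields have been read. A minimal sketch of the pattern (my_dev and target_freq are placeholder names):

	unsigned long freq = target_freq;
	struct dev_pm_opp *opp;
	unsigned long volt;

	opp = dev_pm_opp_find_freq_ceil(my_dev, &freq);
	if (IS_ERR(opp))
		return PTR_ERR(opp);

	volt = dev_pm_opp_get_voltage(opp);
	dev_pm_opp_put(opp);	/* drop the reference the find helper took */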
@@ -147,12 +137,6 @@ void _dev_pm_opp_cpumask_remove_table(const struct cpumask *cpumask, bool of)
  * This removes the OPP tables for CPUs present in the @cpumask.
  * This should be used to remove all the OPPs entries associated with
  * the cpus in @cpumask.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
  */
 void dev_pm_opp_cpumask_remove_table(const struct cpumask *cpumask)
 {
@@ -169,12 +153,6 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_cpumask_remove_table);
  * @cpumask.
  *
  * Returns -ENODEV if OPP table isn't already present.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
  */
 int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev,
 				const struct cpumask *cpumask)
@@ -184,13 +162,9 @@ int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev,
 	struct device *dev;
 	int cpu, ret = 0;
 
-	mutex_lock(&opp_table_lock);
-
 	opp_table = _find_opp_table(cpu_dev);
-	if (IS_ERR(opp_table)) {
-		ret = PTR_ERR(opp_table);
-		goto unlock;
-	}
+	if (IS_ERR(opp_table))
+		return PTR_ERR(opp_table);
 
 	for_each_cpu(cpu, cpumask) {
 		if (cpu == cpu_dev->id)
@@ -213,8 +187,8 @@ int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev,
 		/* Mark opp-table as multiple CPUs are sharing it now */
 		opp_table->shared_opp = OPP_TABLE_ACCESS_SHARED;
 	}
-unlock:
-	mutex_unlock(&opp_table_lock);
+
+	dev_pm_opp_put_opp_table(opp_table);
 
 	return ret;
 }
@@ -229,12 +203,6 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_set_sharing_cpus);
  *
  * Returns -ENODEV if OPP table isn't already present and -EINVAL if the OPP
  * table's status is access-unknown.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
  */
 int dev_pm_opp_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask)
 {
@@ -242,17 +210,13 @@ int dev_pm_opp_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask)
 	struct opp_table *opp_table;
 	int ret = 0;
 
-	mutex_lock(&opp_table_lock);
-
 	opp_table = _find_opp_table(cpu_dev);
-	if (IS_ERR(opp_table)) {
-		ret = PTR_ERR(opp_table);
-		goto unlock;
-	}
+	if (IS_ERR(opp_table))
+		return PTR_ERR(opp_table);
 
 	if (opp_table->shared_opp == OPP_TABLE_ACCESS_UNKNOWN) {
 		ret = -EINVAL;
-		goto unlock;
+		goto put_opp_table;
 	}
 
 	cpumask_clear(cpumask);
@@ -264,8 +228,8 @@ int dev_pm_opp_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask)
 		cpumask_set_cpu(cpu_dev->id, cpumask);
 	}
 
-unlock:
-	mutex_unlock(&opp_table_lock);
+put_opp_table:
+	dev_pm_opp_put_opp_table(opp_table);
 
 	return ret;
 }
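For reference, a sketch of how a cpufreq driver would typically pair these two helpers during policy setup; my_init_shared_opps and its arguments are hypothetical names, not taken from this commit:

static int my_init_shared_opps(struct device *cpu_dev, struct cpumask *cpus)
{
	int ret;

	/* Mark every CPU in @cpus as sharing cpu_dev's OPP table. */
	ret = dev_pm_opp_set_sharing_cpus(cpu_dev, cpus);
	if (ret)
		return ret;

	/* The mask can later be reconstructed from the table itself. */
	return dev_pm_opp_get_sharing_cpus(cpu_dev, cpus);
}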
......
@@ -24,9 +24,11 @@
 static struct opp_table *_managed_opp(const struct device_node *np)
 {
-	struct opp_table *opp_table;
+	struct opp_table *opp_table, *managed_table = NULL;
+
+	mutex_lock(&opp_table_lock);
 
-	list_for_each_entry_rcu(opp_table, &opp_tables, node) {
+	list_for_each_entry(opp_table, &opp_tables, node) {
 		if (opp_table->np == np) {
 			/*
 			 * Multiple devices can point to the same OPP table and
@@ -35,14 +37,18 @@ static struct opp_table *_managed_opp(const struct device_node *np)
 			 * But the OPPs will be considered as shared only if the
 			 * OPP table contains a "opp-shared" property.
 			 */
-			if (opp_table->shared_opp == OPP_TABLE_ACCESS_SHARED)
-				return opp_table;
+			if (opp_table->shared_opp == OPP_TABLE_ACCESS_SHARED) {
+				_get_opp_table_kref(opp_table);
+				managed_table = opp_table;
+			}
 
-			return NULL;
+			break;
 		}
 	}
 
-	return NULL;
+	mutex_unlock(&opp_table_lock);
+
+	return managed_table;
 }
 
 void _of_init_opp_table(struct opp_table *opp_table, struct device *dev)
@@ -229,34 +235,28 @@ static int opp_parse_supplies(struct dev_pm_opp *opp, struct device *dev,
  * @dev:	device pointer used to lookup OPP table.
  *
  * Free OPPs created using static entries present in DT.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function indirectly uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
  */
 void dev_pm_opp_of_remove_table(struct device *dev)
 {
-	_dev_pm_opp_remove_table(dev, false);
+	_dev_pm_opp_find_and_remove_table(dev, false);
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_of_remove_table);
 
 /* Returns opp descriptor node for a device, caller must do of_node_put() */
-static struct device_node *_of_get_opp_desc_node(struct device *dev)
+struct device_node *dev_pm_opp_of_get_opp_desc_node(struct device *dev)
 {
 	/*
-	 * TODO: Support for multiple OPP tables.
-	 *
 	 * There should be only ONE phandle present in "operating-points-v2"
 	 * property.
 	 */
 
 	return of_parse_phandle(dev->of_node, "operating-points-v2", 0);
 }
+EXPORT_SYMBOL_GPL(dev_pm_opp_of_get_opp_desc_node);
 
 /**
  * _opp_add_static_v2() - Allocate static OPPs (As per 'v2' DT bindings)
+ * @opp_table:	OPP table
  * @dev:	device for which we do this operation
  * @np:		device node
  *
@@ -264,12 +264,6 @@ static struct device_node *_of_get_opp_desc_node(struct device *dev)
  * opp can be controlled using dev_pm_opp_enable/disable functions and may be
  * removed by dev_pm_opp_remove.
  *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
- *
  * Return:
  * 0		On success OR
  *		Duplicate OPPs (both freq and volt are same) and opp->available
@@ -278,22 +272,17 @@ static struct device_node *_of_get_opp_desc_node(struct device *dev)
  * -ENOMEM	Memory allocation failure
  * -EINVAL	Failed parsing the OPP node
  */
-static int _opp_add_static_v2(struct device *dev, struct device_node *np)
+static int _opp_add_static_v2(struct opp_table *opp_table, struct device *dev,
+			      struct device_node *np)
 {
-	struct opp_table *opp_table;
 	struct dev_pm_opp *new_opp;
 	u64 rate;
 	u32 val;
 	int ret;
 
-	/* Hold our table modification lock here */
-	mutex_lock(&opp_table_lock);
-
-	new_opp = _allocate_opp(dev, &opp_table);
-	if (!new_opp) {
-		ret = -ENOMEM;
-		goto unlock;
-	}
+	new_opp = _opp_allocate(opp_table);
+	if (!new_opp)
+		return -ENOMEM;
 
 	ret = of_property_read_u64(np, "opp-hz", &rate);
 	if (ret < 0) {
@@ -327,8 +316,12 @@ static int _opp_add_static_v2(struct device *dev, struct device_node *np)
 		goto free_opp;
 
 	ret = _opp_add(dev, new_opp, opp_table);
-	if (ret)
+	if (ret) {
+		/* Don't return error for duplicate OPPs */
+		if (ret == -EBUSY)
+			ret = 0;
 		goto free_opp;
+	}
 
 	/* OPP to select on device suspend */
 	if (of_property_read_bool(np, "opp-suspend")) {
@@ -345,8 +338,6 @@ static int _opp_add_static_v2(struct device *dev, struct device_node *np)
 	if (new_opp->clock_latency_ns > opp_table->clock_latency_ns_max)
 		opp_table->clock_latency_ns_max = new_opp->clock_latency_ns;
 
-	mutex_unlock(&opp_table_lock);
-
 	pr_debug("%s: turbo:%d rate:%lu uv:%lu uvmin:%lu uvmax:%lu latency:%lu\n",
 		 __func__, new_opp->turbo, new_opp->rate,
 		 new_opp->supplies[0].u_volt, new_opp->supplies[0].u_volt_min,
@@ -356,13 +347,12 @@ static int _opp_add_static_v2(struct device *dev, struct device_node *np)
 	 * Notify the changes in the availability of the operable
 	 * frequency/voltage list.
 	 */
-	srcu_notifier_call_chain(&opp_table->srcu_head, OPP_EVENT_ADD, new_opp);
+	blocking_notifier_call_chain(&opp_table->head, OPP_EVENT_ADD, new_opp);
 	return 0;
 
 free_opp:
-	_opp_remove(opp_table, new_opp, false);
-unlock:
-	mutex_unlock(&opp_table_lock);
+	_opp_free(new_opp);
+
 	return ret;
 }
@@ -373,41 +363,35 @@ static int _of_add_opp_table_v2(struct device *dev, struct device_node *opp_np)
 	struct opp_table *opp_table;
 	int ret = 0, count = 0;
 
-	mutex_lock(&opp_table_lock);
-
 	opp_table = _managed_opp(opp_np);
 	if (opp_table) {
 		/* OPPs are already managed */
 		if (!_add_opp_dev(dev, opp_table))
 			ret = -ENOMEM;
-		mutex_unlock(&opp_table_lock);
-		return ret;
+		goto put_opp_table;
 	}
-	mutex_unlock(&opp_table_lock);
+
+	opp_table = dev_pm_opp_get_opp_table(dev);
+	if (!opp_table)
+		return -ENOMEM;
 
 	/* We have opp-table node now, iterate over it and add OPPs */
 	for_each_available_child_of_node(opp_np, np) {
 		count++;
 
-		ret = _opp_add_static_v2(dev, np);
+		ret = _opp_add_static_v2(opp_table, dev, np);
 		if (ret) {
 			dev_err(dev, "%s: Failed to add OPP, %d\n", __func__,
 				ret);
-			goto free_table;
+			_dev_pm_opp_remove_table(opp_table, dev, false);
+			goto put_opp_table;
 		}
 	}
 
 	/* There should be one of more OPP defined */
-	if (WARN_ON(!count))
-		return -ENOENT;
-
-	mutex_lock(&opp_table_lock);
-
-	opp_table = _find_opp_table(dev);
-	if (WARN_ON(IS_ERR(opp_table))) {
-		ret = PTR_ERR(opp_table);
-		mutex_unlock(&opp_table_lock);
-		goto free_table;
+	if (WARN_ON(!count)) {
+		ret = -ENOENT;
+		goto put_opp_table;
 	}
 
 	opp_table->np = opp_np;
@@ -416,12 +400,8 @@ static int _of_add_opp_table_v2(struct device *dev, struct device_node *opp_np)
 	else
 		opp_table->shared_opp = OPP_TABLE_ACCESS_EXCLUSIVE;
 
-	mutex_unlock(&opp_table_lock);
-
-	return 0;
-
-free_table:
-	dev_pm_opp_of_remove_table(dev);
+put_opp_table:
+	dev_pm_opp_put_opp_table(opp_table);
 
 	return ret;
 }
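Callers of this path are unchanged: a platform driver still sees only dev_pm_opp_of_add_table()/dev_pm_opp_of_remove_table(). An illustrative probe sketch (my_probe is a placeholder, not from this commit):

static int my_probe(struct platform_device *pdev)
{
	int ret;

	/* Parses operating-points-v2 (or the old binding) from DT. */
	ret = dev_pm_opp_of_add_table(&pdev->dev);
	if (ret)
		return ret;

	/* ... look OPPs up with dev_pm_opp_find_freq_*() here ... */

	dev_pm_opp_of_remove_table(&pdev->dev);
	return 0;
}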
@@ -429,9 +409,10 @@ static int _of_add_opp_table_v2(struct device *dev, struct device_node *opp_np)
 /* Initializes OPP tables based on old-deprecated bindings */
 static int _of_add_opp_table_v1(struct device *dev)
 {
+	struct opp_table *opp_table;
 	const struct property *prop;
 	const __be32 *val;
-	int nr;
+	int nr, ret = 0;
 
 	prop = of_find_property(dev->of_node, "operating-points", NULL);
 	if (!prop)
@@ -449,18 +430,27 @@ static int _of_add_opp_table_v1(struct device *dev)
 		return -EINVAL;
 	}
 
+	opp_table = dev_pm_opp_get_opp_table(dev);
+	if (!opp_table)
+		return -ENOMEM;
+
 	val = prop->value;
 	while (nr) {
 		unsigned long freq = be32_to_cpup(val++) * 1000;
 		unsigned long volt = be32_to_cpup(val++);
 
-		if (_opp_add_v1(dev, freq, volt, false))
-			dev_warn(dev, "%s: Failed to add OPP %ld\n",
-				 __func__, freq);
+		ret = _opp_add_v1(opp_table, dev, freq, volt, false);
+		if (ret) {
+			dev_err(dev, "%s: Failed to add OPP %ld (%d)\n",
+				__func__, freq, ret);
+			_dev_pm_opp_remove_table(opp_table, dev, false);
+			break;
+		}
 		nr -= 2;
 	}
 
-	return 0;
+	dev_pm_opp_put_opp_table(opp_table);
+
+	return ret;
 }
 /**
@@ -469,12 +459,6 @@ static int _of_add_opp_table_v1(struct device *dev)
  *
  * Register the initial OPP table with the OPP library for given device.
  *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function indirectly uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
- *
  * Return:
  * 0		On success OR
  *		Duplicate OPPs (both freq and volt are same) and opp->available
@@ -495,7 +479,7 @@ int dev_pm_opp_of_add_table(struct device *dev)
 	 * OPPs have two version of bindings now. The older one is deprecated,
 	 * try for the new binding first.
 	 */
-	opp_np = _of_get_opp_desc_node(dev);
+	opp_np = dev_pm_opp_of_get_opp_desc_node(dev);
 	if (!opp_np) {
 		/*
 		 * Try old-deprecated bindings for backward compatibility with
@@ -519,12 +503,6 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_of_add_table);
  *
  * This removes the OPP tables for CPUs present in the @cpumask.
  * This should be used only to remove static entries created from DT.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
  */
 void dev_pm_opp_of_cpumask_remove_table(const struct cpumask *cpumask)
 {
@@ -537,12 +515,6 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_of_cpumask_remove_table);
  * @cpumask:	cpumask for which OPP table needs to be added.
  *
  * This adds the OPP tables for CPUs present in the @cpumask.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
  */
 int dev_pm_opp_of_cpumask_add_table(const struct cpumask *cpumask)
 {
@@ -590,12 +562,6 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_of_cpumask_add_table);
  * This updates the @cpumask with CPUs that are sharing OPPs with @cpu_dev.
  *
  * Returns -ENOENT if operating-points-v2 isn't present for @cpu_dev.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
  */
 int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev,
 				   struct cpumask *cpumask)
@@ -605,7 +571,7 @@ int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev,
 	int cpu, ret = 0;
 
 	/* Get OPP descriptor node */
-	np = _of_get_opp_desc_node(cpu_dev);
+	np = dev_pm_opp_of_get_opp_desc_node(cpu_dev);
 	if (!np) {
 		dev_dbg(cpu_dev, "%s: Couldn't find opp node.\n", __func__);
 		return -ENOENT;
@@ -630,7 +596,7 @@ int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev,
 	}
 
 	/* Get OPP descriptor node */
-	tmp_np = _of_get_opp_desc_node(tcpu_dev);
+	tmp_np = dev_pm_opp_of_get_opp_desc_node(tcpu_dev);
 	if (!tmp_np) {
 		dev_err(tcpu_dev, "%s: Couldn't find opp node.\n",
 			__func__);
......
@@ -16,11 +16,11 @@
 
 #include <linux/device.h>
 #include <linux/kernel.h>
+#include <linux/kref.h>
 #include <linux/list.h>
 #include <linux/limits.h>
 #include <linux/pm_opp.h>
-#include <linux/rculist.h>
-#include <linux/rcupdate.h>
+#include <linux/notifier.h>
 
 struct clk;
 struct regulator;
@@ -51,11 +51,9 @@ extern struct list_head opp_tables;
  * @node:	opp table node. The nodes are maintained throughout the lifetime
  *		of boot. It is expected only an optimal set of OPPs are
  *		added to the library by the SoC framework.
- *		RCU usage: opp table is traversed with RCU locks. node
- *		modification is possible realtime, hence the modifications
- *		are protected by the opp_table_lock for integrity.
  *		IMPORTANT: the opp nodes should be maintained in increasing
  *		order.
+ * @kref:	for reference count of the OPP.
  * @available:	true/false - marks if this OPP as available or not
  * @dynamic:	not-created from static DT entries.
  * @turbo:	true if turbo (boost) OPP
@@ -65,7 +63,6 @@ extern struct list_head opp_tables;
  * @clock_latency_ns: Latency (in nanoseconds) of switching to this OPP's
  *		frequency from any other OPP's frequency.
  * @opp_table:	points back to the opp_table struct this opp belongs to
- * @rcu_head:	RCU callback head used for deferred freeing
  * @np:		OPP's device node.
  * @dentry:	debugfs dentry pointer (per opp)
  *
@@ -73,6 +70,7 @@ extern struct list_head opp_tables;
  */
 struct dev_pm_opp {
 	struct list_head node;
+	struct kref kref;
 
 	bool available;
 	bool dynamic;
@@ -85,7 +83,6 @@ struct dev_pm_opp {
 	unsigned long clock_latency_ns;
 
 	struct opp_table *opp_table;
-	struct rcu_head rcu_head;
 
 	struct device_node *np;
@@ -98,7 +95,6 @@ struct dev_pm_opp {
  * struct opp_device - devices managed by 'struct opp_table'
  * @node:	list node
  * @dev:	device to which the struct object belongs
- * @rcu_head:	RCU callback head used for deferred freeing
  * @dentry:	debugfs dentry pointer (per device)
  *
  * This is an internal data structure maintaining the devices that are managed
@@ -107,7 +103,6 @@ struct dev_pm_opp {
 struct opp_device {
 	struct list_head node;
 	const struct device *dev;
-	struct rcu_head rcu_head;
 
 #ifdef CONFIG_DEBUG_FS
 	struct dentry *dentry;
@@ -125,12 +120,11 @@ enum opp_table_access {
  * @node:	table node - contains the devices with OPPs that
  *		have been registered. Nodes once added are not modified in this
  *		table.
- *		RCU usage: nodes are not modified in the table of opp_table,
- *		however addition is possible and is secured by opp_table_lock
- * @srcu_head:	notifier head to notify the OPP availability changes.
- * @rcu_head:	RCU callback head used for deferred freeing
+ * @head:	notifier head to notify the OPP availability changes.
  * @dev_list:	list of devices that share these OPPs
  * @opp_list:	table of opps
+ * @kref:	for reference count of the table.
+ * @lock:	mutex protecting the opp_list.
  * @np:		struct device_node pointer for opp's DT node.
  * @clock_latency_ns_max: Max clock latency in nanoseconds.
  * @shared_opp: OPP is shared between multiple devices.
@@ -151,18 +145,15 @@ enum opp_table_access {
  * This is an internal data structure maintaining the link to opps attached to
  * a device. This structure is not meant to be shared to users as it is
  * meant for book keeping and private to OPP library.
- *
- * Because the opp structures can be used from both rcu and srcu readers, we
- * need to wait for the grace period of both of them before freeing any
- * resources. And so we have used kfree_rcu() from within call_srcu() handlers.
  */
 struct opp_table {
 	struct list_head node;
 
-	struct srcu_notifier_head srcu_head;
-	struct rcu_head rcu_head;
+	struct blocking_notifier_head head;
 	struct list_head dev_list;
 	struct list_head opp_list;
+	struct kref kref;
+	struct mutex lock;
 
 	struct device_node *np;
 	unsigned long clock_latency_ns_max;
@@ -190,14 +181,17 @@ struct opp_table {
 };
 
 /* Routines internal to opp core */
+void _get_opp_table_kref(struct opp_table *opp_table);
 struct opp_table *_find_opp_table(struct device *dev);
 struct opp_device *_add_opp_dev(const struct device *dev, struct opp_table *opp_table);
-void _dev_pm_opp_remove_table(struct device *dev, bool remove_all);
-struct dev_pm_opp *_allocate_opp(struct device *dev, struct opp_table **opp_table);
+void _dev_pm_opp_remove_table(struct opp_table *opp_table, struct device *dev, bool remove_all);
+void _dev_pm_opp_find_and_remove_table(struct device *dev, bool remove_all);
+struct dev_pm_opp *_opp_allocate(struct opp_table *opp_table);
+void _opp_free(struct dev_pm_opp *opp);
 int _opp_add(struct device *dev, struct dev_pm_opp *new_opp, struct opp_table *opp_table);
-void _opp_remove(struct opp_table *opp_table, struct dev_pm_opp *opp, bool notify);
-int _opp_add_v1(struct device *dev, unsigned long freq, long u_volt, bool dynamic);
+int _opp_add_v1(struct opp_table *opp_table, struct device *dev, unsigned long freq, long u_volt, bool dynamic);
 void _dev_pm_opp_cpumask_remove_table(const struct cpumask *cpumask, bool of);
-struct opp_table *_add_opp_table(struct device *dev);
 
 #ifdef CONFIG_OF
 void _of_init_opp_table(struct opp_table *opp_table, struct device *dev);
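Taken together, the header now describes a plain kref-plus-mutex lifecycle. A sketch of how core code is expected to pin and release a table after this change (comments mark the assumed kref semantics; this is illustrative, not copied from the commit):

	struct opp_table *opp_table;

	opp_table = dev_pm_opp_get_opp_table(dev);	/* finds or allocates, takes a kref */
	if (!opp_table)
		return -ENOMEM;

	mutex_lock(&opp_table->lock);
	/* ... walk or modify opp_table->opp_list ... */
	mutex_unlock(&opp_table->lock);

	dev_pm_opp_put_opp_table(opp_table);	/* kref_put(); table is freed with its last user */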
......
@@ -281,7 +281,7 @@ void dev_pm_qos_constraints_destroy(struct device *dev)
 	dev->power.qos = ERR_PTR(-ENODEV);
 	spin_unlock_irq(&dev->power.lock);
 
-	kfree(c->notifiers);
+	kfree(qos->resume_latency.notifiers);
 	kfree(qos);
 
 out:
......
@@ -141,6 +141,13 @@ static irqreturn_t handle_threaded_wake_irq(int irq, void *_wirq)
 	struct wake_irq *wirq = _wirq;
 	int res;
 
+	/* Maybe abort suspend? */
+	if (irqd_is_wakeup_set(irq_get_irq_data(irq))) {
+		pm_wakeup_event(wirq->dev, 0);
+
+		return IRQ_HANDLED;
+	}
+
 	/* We don't want RPM_ASYNC or RPM_NOWAIT here */
 	res = pm_runtime_resume(wirq->dev);
 	if (res < 0)
@@ -183,6 +190,9 @@ int dev_pm_set_dedicated_wake_irq(struct device *dev, int irq)
 	wirq->irq = irq;
 	irq_set_status_flags(irq, IRQ_NOAUTOEN);
 
+	/* Prevent deferred spurious wakeirqs with disable_irq_nosync() */
+	irq_set_status_flags(irq, IRQ_DISABLE_UNLAZY);
+
 	/*
 	 * Consumer device may need to power up and restore state
 	 * so we use a threaded irq.
@@ -312,8 +322,12 @@ void dev_pm_arm_wake_irq(struct wake_irq *wirq)
 	if (!wirq)
 		return;
 
-	if (device_may_wakeup(wirq->dev))
+	if (device_may_wakeup(wirq->dev)) {
+		if (wirq->status & WAKE_IRQ_DEDICATED_ALLOCATED)
+			enable_irq(wirq->irq);
+
 		enable_irq_wake(wirq->irq);
+	}
 }
 
 /**
@@ -328,6 +342,10 @@ void dev_pm_disarm_wake_irq(struct wake_irq *wirq)
 	if (!wirq)
 		return;
 
-	if (device_may_wakeup(wirq->dev))
+	if (device_may_wakeup(wirq->dev)) {
 		disable_irq_wake(wirq->irq);
+
+		if (wirq->status & WAKE_IRQ_DEDICATED_ALLOCATED)
+			disable_irq_nosync(wirq->irq);
+	}
 }
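For context, a driver consumes this machinery only through dev_pm_set_dedicated_wake_irq(); arming and disarming are done by the PM core around system suspend/resume. An illustrative probe fragment, where dev and irq are placeholders:

	ret = dev_pm_set_dedicated_wake_irq(dev, irq);
	if (ret)
		return ret;

	/* Let platform/userspace policy decide whether wakeup is enabled. */
	device_init_wakeup(dev, true);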
@@ -633,16 +633,12 @@ static int find_lut_index_for_rate(struct tegra_dfll *td, unsigned long rate)
 	struct dev_pm_opp *opp;
 	int i, uv;
 
-	rcu_read_lock();
-
 	opp = dev_pm_opp_find_freq_ceil(td->soc->dev, &rate);
-	if (IS_ERR(opp)) {
-		rcu_read_unlock();
+	if (IS_ERR(opp))
 		return PTR_ERR(opp);
-	}
-	uv = dev_pm_opp_get_voltage(opp);
 
-	rcu_read_unlock();
+	uv = dev_pm_opp_get_voltage(opp);
+	dev_pm_opp_put(opp);
 
 	for (i = 0; i < td->i2c_lut_size; i++) {
 		if (regulator_list_voltage(td->vdd_reg, td->i2c_lut[i]) == uv)
@@ -1440,8 +1436,6 @@ static int dfll_build_i2c_lut(struct tegra_dfll *td)
 	struct dev_pm_opp *opp;
 	int lut;
 
-	rcu_read_lock();
-
 	rate = ULONG_MAX;
 	opp = dev_pm_opp_find_freq_floor(td->soc->dev, &rate);
 	if (IS_ERR(opp)) {
@@ -1449,6 +1443,7 @@ static int dfll_build_i2c_lut(struct tegra_dfll *td)
 		goto out;
 	}
 	v_max = dev_pm_opp_get_voltage(opp);
+	dev_pm_opp_put(opp);
 
 	v = td->soc->cvb->min_millivolts * 1000;
 	lut = find_vdd_map_entry_exact(td, v);
@@ -1465,6 +1460,8 @@ static int dfll_build_i2c_lut(struct tegra_dfll *td)
 		if (v_opp <= td->soc->cvb->min_millivolts * 1000)
 			td->dvco_rate_min = dev_pm_opp_get_freq(opp);
 
+		dev_pm_opp_put(opp);
+
 		for (;;) {
 			v += max(1, (v_max - v) / (MAX_DFLL_VOLTAGES - j));
 			if (v >= v_opp)
@@ -1496,8 +1493,6 @@ static int dfll_build_i2c_lut(struct tegra_dfll *td)
 	ret = 0;
 
 out:
-	rcu_read_unlock();
-
 	return ret;
 }
......
@@ -37,14 +37,6 @@ config CPU_FREQ_STAT
 
 	  If in doubt, say N.
 
-config CPU_FREQ_STAT_DETAILS
-	bool "CPU frequency transition statistics details"
-	depends on CPU_FREQ_STAT
-	help
-	  Show detailed CPU frequency transition table in sysfs.
-
-	  If in doubt, say N.
-
 choice
 	prompt "Default CPUFreq governor"
 	default CPU_FREQ_DEFAULT_GOV_USERSPACE if ARM_SA1100_CPUFREQ || ARM_SA1110_CPUFREQ
@@ -271,6 +263,16 @@ config IA64_ACPI_CPUFREQ
 endif
 
 if MIPS
+config BMIPS_CPUFREQ
+	tristate "BMIPS CPUfreq Driver"
+	help
+	  This option adds a CPUfreq driver for BMIPS processors with
+	  support for configurable CPU frequency.
+
+	  For now, BMIPS5 chips are supported (such as the Broadcom 7425).
+
+	  If in doubt, say N.
+
 config LOONGSON2_CPUFREQ
 	tristate "Loongson2 CPUFreq Driver"
 	help
@@ -332,7 +334,7 @@ endif
 
 config QORIQ_CPUFREQ
 	tristate "CPU frequency scaling driver for Freescale QorIQ SoCs"
-	depends on OF && COMMON_CLK && (PPC_E500MC || ARM)
+	depends on OF && COMMON_CLK && (PPC_E500MC || ARM || ARM64)
 	depends on !CPU_THERMAL || THERMAL
 	select CLK_QORIQ
 	help
......
@@ -247,6 +247,17 @@ config ARM_TEGRA124_CPUFREQ
 	help
 	  This adds the CPUFreq driver support for Tegra124 SOCs.
 
+config ARM_TI_CPUFREQ
+	bool "Texas Instruments CPUFreq support"
+	depends on ARCH_OMAP2PLUS
+	help
+	  This driver enables valid OPPs on the running platform based on
+	  values contained within the SoC in use. Enable this in order to
+	  use the cpufreq-dt driver on all Texas Instruments platforms that
+	  provide dt based operating-points-v2 tables with opp-supported-hw
+	  data provided. Required for cpufreq support on AM335x, AM437x,
+	  DRA7x, and AM57x platforms.
+
 config ARM_PXA2xx_CPUFREQ
 	tristate "Intel PXA2xx CPUfreq driver"
 	depends on PXA27x || PXA25x
@@ -257,7 +268,7 @@ config ARM_PXA2xx_CPUFREQ
 
 config ACPI_CPPC_CPUFREQ
 	tristate "CPUFreq driver based on the ACPI CPPC spec"
-	depends on ACPI
+	depends on ACPI_PROCESSOR
 	select ACPI_CPPC_LIB
 	default n
 	help
......
@@ -77,6 +77,7 @@ obj-$(CONFIG_ARM_SPEAR_CPUFREQ)		+= spear-cpufreq.o
 obj-$(CONFIG_ARM_STI_CPUFREQ)		+= sti-cpufreq.o
 obj-$(CONFIG_ARM_TEGRA20_CPUFREQ)	+= tegra20-cpufreq.o
 obj-$(CONFIG_ARM_TEGRA124_CPUFREQ)	+= tegra124-cpufreq.o
+obj-$(CONFIG_ARM_TI_CPUFREQ)		+= ti-cpufreq.o
 obj-$(CONFIG_ARM_VEXPRESS_SPC_CPUFREQ)	+= vexpress-spc-cpufreq.o
 obj-$(CONFIG_ACPI_CPPC_CPUFREQ)		+= cppc_cpufreq.o
 obj-$(CONFIG_MACH_MVEBU_V7)		+= mvebu-cpufreq.o
@@ -98,6 +99,7 @@ obj-$(CONFIG_POWERNV_CPUFREQ)		+= powernv-cpufreq.o
 # Other platform drivers
 obj-$(CONFIG_AVR32_AT32AP_CPUFREQ)	+= at32ap-cpufreq.o
 obj-$(CONFIG_BFIN_CPU_FREQ)		+= blackfin-cpufreq.o
+obj-$(CONFIG_BMIPS_CPUFREQ)		+= bmips-cpufreq.o
 obj-$(CONFIG_CRIS_MACH_ARTPEC3)		+= cris-artpec3-cpufreq.o
 obj-$(CONFIG_ETRAXFS)			+= cris-etraxfs-cpufreq.o
 obj-$(CONFIG_IA64_ACPI_CPUFREQ)		+= ia64-acpi-cpufreq.o
......
/*
* CPU frequency scaling for Broadcom BMIPS SoCs
*
* Copyright (c) 2017 Broadcom
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation version 2.
*
* This program is distributed "as is" WITHOUT ANY WARRANTY of any
* kind, whether express or implied; without even the implied warranty
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/cpufreq.h>
#include <linux/module.h>
#include <linux/of_address.h>
#include <linux/slab.h>
/* for mips_hpt_frequency */
#include <asm/time.h>
#define BMIPS_CPUFREQ_PREFIX "bmips"
#define BMIPS_CPUFREQ_NAME BMIPS_CPUFREQ_PREFIX "-cpufreq"
#define TRANSITION_LATENCY (25 * 1000) /* 25 us */
#define BMIPS5_CLK_DIV_SET_SHIFT 0x7
#define BMIPS5_CLK_DIV_SHIFT 0x4
#define BMIPS5_CLK_DIV_MASK 0xf
enum bmips_type {
BMIPS5000,
BMIPS5200,
};
struct cpufreq_compat {
const char *compatible;
unsigned int bmips_type;
unsigned int clk_mult;
unsigned int max_freqs;
};
#define BMIPS(c, t, m, f) { \
.compatible = c, \
.bmips_type = (t), \
.clk_mult = (m), \
.max_freqs = (f), \
}
static struct cpufreq_compat bmips_cpufreq_compat[] = {
BMIPS("brcm,bmips5000", BMIPS5000, 8, 4),
BMIPS("brcm,bmips5200", BMIPS5200, 8, 4),
{ }
};
static struct cpufreq_compat *priv;
static int htp_freq_to_cpu_freq(unsigned int clk_mult)
{
return mips_hpt_frequency * clk_mult / 1000;
}
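/*
 * Worked example with hypothetical numbers (not from the driver's docs):
 * for a mips_hpt_frequency of 125000000 Hz and a clk_mult of 8, this
 * returns 125000000 * 8 / 1000 = 1000000 kHz, i.e. 1 GHz. The frequency
 * table built below then halves that per index, giving 1 GHz, 500 MHz,
 * 250 MHz and 125 MHz for max_freqs == 4.
 */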
static struct cpufreq_frequency_table *
bmips_cpufreq_get_freq_table(const struct cpufreq_policy *policy)
{
struct cpufreq_frequency_table *table;
unsigned long cpu_freq;
int i;
cpu_freq = htp_freq_to_cpu_freq(priv->clk_mult);
table = kmalloc((priv->max_freqs + 1) * sizeof(*table), GFP_KERNEL);
if (!table)
return ERR_PTR(-ENOMEM);
for (i = 0; i < priv->max_freqs; i++) {
table[i].frequency = cpu_freq / (1 << i);
table[i].driver_data = i;
}
table[i].frequency = CPUFREQ_TABLE_END;
return table;
}
static unsigned int bmips_cpufreq_get(unsigned int cpu)
{
unsigned int div;
uint32_t mode;
switch (priv->bmips_type) {
case BMIPS5200:
case BMIPS5000:
mode = read_c0_brcm_mode();
div = ((mode >> BMIPS5_CLK_DIV_SHIFT) & BMIPS5_CLK_DIV_MASK);
break;
default:
div = 0;
}
return htp_freq_to_cpu_freq(priv->clk_mult) / (1 << div);
}
static int bmips_cpufreq_target_index(struct cpufreq_policy *policy,
unsigned int index)
{
unsigned int div = policy->freq_table[index].driver_data;
switch (priv->bmips_type) {
case BMIPS5200:
case BMIPS5000:
change_c0_brcm_mode(BMIPS5_CLK_DIV_MASK << BMIPS5_CLK_DIV_SHIFT,
(1 << BMIPS5_CLK_DIV_SET_SHIFT) |
(div << BMIPS5_CLK_DIV_SHIFT));
break;
default:
return -ENOTSUPP;
}
return 0;
}
static int bmips_cpufreq_exit(struct cpufreq_policy *policy)
{
kfree(policy->freq_table);
return 0;
}
static int bmips_cpufreq_init(struct cpufreq_policy *policy)
{
struct cpufreq_frequency_table *freq_table;
int ret;
freq_table = bmips_cpufreq_get_freq_table(policy);
if (IS_ERR(freq_table)) {
ret = PTR_ERR(freq_table);
pr_err("%s: couldn't determine frequency table (%d).\n",
BMIPS_CPUFREQ_NAME, ret);
return ret;
}
ret = cpufreq_generic_init(policy, freq_table, TRANSITION_LATENCY);
if (ret)
bmips_cpufreq_exit(policy);
else
pr_info("%s: registered\n", BMIPS_CPUFREQ_NAME);
return ret;
}
static struct cpufreq_driver bmips_cpufreq_driver = {
.flags = CPUFREQ_NEED_INITIAL_FREQ_CHECK,
.verify = cpufreq_generic_frequency_table_verify,
.target_index = bmips_cpufreq_target_index,
.get = bmips_cpufreq_get,
.init = bmips_cpufreq_init,
.exit = bmips_cpufreq_exit,
.attr = cpufreq_generic_attr,
.name = BMIPS_CPUFREQ_PREFIX,
};
static int __init bmips_cpufreq_probe(void)
{
struct cpufreq_compat *cc;
struct device_node *np;
for (cc = bmips_cpufreq_compat; cc->compatible; cc++) {
np = of_find_compatible_node(NULL, "cpu", cc->compatible);
if (np) {
of_node_put(np);
priv = cc;
break;
}
}
/* We hit the guard element of the array. No compatible CPU found. */
if (!cc->compatible)
return -ENODEV;
return cpufreq_register_driver(&bmips_cpufreq_driver);
}
device_initcall(bmips_cpufreq_probe);
MODULE_AUTHOR("Markus Mayer <mmayer@broadcom.com>");
MODULE_DESCRIPTION("CPUfreq driver for Broadcom BMIPS SoCs");
MODULE_LICENSE("GPL");
@@ -878,7 +878,6 @@ static int brcm_avs_prepare_init(struct platform_device *pdev)
 	iounmap(priv->avs_intr_base);
 unmap_base:
 	iounmap(priv->base);
-	platform_set_drvdata(pdev, NULL);
 
 	return ret;
 }
@@ -1042,7 +1041,6 @@ static int brcm_avs_cpufreq_remove(struct platform_device *pdev)
 	priv = platform_get_drvdata(pdev);
 	iounmap(priv->base);
 	iounmap(priv->avs_intr_base);
-	platform_set_drvdata(pdev, NULL);
 
 	return 0;
 }
......
@@ -87,8 +87,6 @@ static const struct of_device_id machines[] __initconst = {
 	{ .compatible = "socionext,uniphier-ld11", },
 	{ .compatible = "socionext,uniphier-ld20", },
 
-	{ .compatible = "ti,am33xx", },
-	{ .compatible = "ti,dra7", },
 	{ .compatible = "ti,omap2", },
 	{ .compatible = "ti,omap3", },
 	{ .compatible = "ti,omap4", },
......
@@ -148,7 +148,6 @@ static int cpufreq_init(struct cpufreq_policy *policy)
 	struct private_data *priv;
 	struct device *cpu_dev;
 	struct clk *cpu_clk;
-	struct dev_pm_opp *suspend_opp;
 	unsigned int transition_latency;
 	bool fallback = false;
 	const char *name;
@@ -252,11 +251,7 @@ static int cpufreq_init(struct cpufreq_policy *policy)
 	policy->driver_data = priv;
 	policy->clk = cpu_clk;
 
-	rcu_read_lock();
-	suspend_opp = dev_pm_opp_get_suspend_opp(cpu_dev);
-	if (suspend_opp)
-		policy->suspend_freq = dev_pm_opp_get_freq(suspend_opp) / 1000;
-	rcu_read_unlock();
+	policy->suspend_freq = dev_pm_opp_get_suspend_opp_freq(cpu_dev) / 1000;
 
 	ret = cpufreq_table_validate_and_show(policy, freq_table);
 	if (ret) {
......
@@ -1078,15 +1078,11 @@ static struct cpufreq_policy *cpufreq_policy_alloc(unsigned int cpu)
 	return NULL;
 }
 
-static void cpufreq_policy_put_kobj(struct cpufreq_policy *policy, bool notify)
+static void cpufreq_policy_put_kobj(struct cpufreq_policy *policy)
 {
 	struct kobject *kobj;
 	struct completion *cmp;
 
-	if (notify)
-		blocking_notifier_call_chain(&cpufreq_policy_notifier_list,
-					     CPUFREQ_REMOVE_POLICY, policy);
-
 	down_write(&policy->rwsem);
 	cpufreq_stats_free_table(policy);
 	kobj = &policy->kobj;
@@ -1104,7 +1100,7 @@ static void cpufreq_policy_put_kobj(struct cpufreq_policy *policy, bool notify)
 	pr_debug("wait complete\n");
 }
 
-static void cpufreq_policy_free(struct cpufreq_policy *policy, bool notify)
+static void cpufreq_policy_free(struct cpufreq_policy *policy)
 {
 	unsigned long flags;
 	int cpu;
@@ -1117,7 +1113,7 @@ static void cpufreq_policy_free(struct cpufreq_policy *policy, bool notify)
 		per_cpu(cpufreq_cpu_data, cpu) = NULL;
 	write_unlock_irqrestore(&cpufreq_driver_lock, flags);
 
-	cpufreq_policy_put_kobj(policy, notify);
+	cpufreq_policy_put_kobj(policy);
 	free_cpumask_var(policy->real_cpus);
 	free_cpumask_var(policy->related_cpus);
 	free_cpumask_var(policy->cpus);
@@ -1170,8 +1166,6 @@ static int cpufreq_online(unsigned int cpu)
 	if (new_policy) {
 		/* related_cpus should at least include policy->cpus. */
 		cpumask_copy(policy->related_cpus, policy->cpus);
-		/* Clear mask of registered CPUs */
-		cpumask_clear(policy->real_cpus);
 	}
 
 	/*
@@ -1244,17 +1238,12 @@ static int cpufreq_online(unsigned int cpu)
 			goto out_exit_policy;
 
 		cpufreq_stats_create_table(policy);
-		blocking_notifier_call_chain(&cpufreq_policy_notifier_list,
-				CPUFREQ_CREATE_POLICY, policy);
 
 		write_lock_irqsave(&cpufreq_driver_lock, flags);
 		list_add(&policy->policy_list, &cpufreq_policy_list);
 		write_unlock_irqrestore(&cpufreq_driver_lock, flags);
 	}
 
-	blocking_notifier_call_chain(&cpufreq_policy_notifier_list,
-			CPUFREQ_START, policy);
-
 	ret = cpufreq_init_policy(policy);
 	if (ret) {
 		pr_err("%s: Failed to initialize policy for cpu: %d (%d)\n",
@@ -1282,7 +1271,7 @@ static int cpufreq_online(unsigned int cpu)
 	if (cpufreq_driver->exit)
 		cpufreq_driver->exit(policy);
 out_free_policy:
-	cpufreq_policy_free(policy, !new_policy);
+	cpufreq_policy_free(policy);
 	return ret;
 }
@@ -1403,7 +1392,7 @@ static void cpufreq_remove_dev(struct device *dev, struct subsys_interface *sif)
 	remove_cpu_dev_symlink(policy, dev);
 
 	if (cpumask_empty(policy->real_cpus))
-		cpufreq_policy_free(policy, true);
+		cpufreq_policy_free(policy);
 }
 
 /**
......
@@ -24,9 +24,7 @@ struct cpufreq_stats {
 	unsigned int last_index;
 	u64 *time_in_state;
 	unsigned int *freq_table;
-#ifdef CONFIG_CPU_FREQ_STAT_DETAILS
 	unsigned int *trans_table;
-#endif
 };
 
 static int cpufreq_stats_update(struct cpufreq_stats *stats)
@@ -45,9 +43,7 @@ static void cpufreq_stats_clear_table(struct cpufreq_stats *stats)
 	unsigned int count = stats->max_state;
 
 	memset(stats->time_in_state, 0, count * sizeof(u64));
-#ifdef CONFIG_CPU_FREQ_STAT_DETAILS
 	memset(stats->trans_table, 0, count * count * sizeof(int));
-#endif
 	stats->last_time = get_jiffies_64();
 	stats->total_trans = 0;
 }
@@ -83,7 +79,6 @@ static ssize_t store_reset(struct cpufreq_policy *policy, const char *buf,
 	return count;
 }
 
-#ifdef CONFIG_CPU_FREQ_STAT_DETAILS
 static ssize_t show_trans_table(struct cpufreq_policy *policy, char *buf)
 {
 	struct cpufreq_stats *stats = policy->stats;
@@ -128,7 +123,6 @@ static ssize_t show_trans_table(struct cpufreq_policy *policy, char *buf)
 	return len;
 }
 cpufreq_freq_attr_ro(trans_table);
-#endif
 
 cpufreq_freq_attr_ro(total_trans);
 cpufreq_freq_attr_ro(time_in_state);
@@ -138,9 +132,7 @@ static struct attribute *default_attrs[] = {
 	&total_trans.attr,
 	&time_in_state.attr,
 	&reset.attr,
-#ifdef CONFIG_CPU_FREQ_STAT_DETAILS
 	&trans_table.attr,
-#endif
 	NULL
 };
 static struct attribute_group stats_attr_group = {
@@ -199,9 +191,7 @@ void cpufreq_stats_create_table(struct cpufreq_policy *policy)
 
 	alloc_size = count * sizeof(int) + count * sizeof(u64);
 
-#ifdef CONFIG_CPU_FREQ_STAT_DETAILS
 	alloc_size += count * count * sizeof(int);
-#endif
 
 	/* Allocate memory for time_in_state/freq_table/trans_table in one go */
 	stats->time_in_state = kzalloc(alloc_size, GFP_KERNEL);
@@ -210,9 +200,7 @@ void cpufreq_stats_create_table(struct cpufreq_policy *policy)
 
 	stats->freq_table = (unsigned int *)(stats->time_in_state + count);
 
-#ifdef CONFIG_CPU_FREQ_STAT_DETAILS
 	stats->trans_table = stats->freq_table + count;
-#endif
 
 	stats->max_state = count;
 
@@ -258,8 +246,6 @@ void cpufreq_stats_record_transition(struct cpufreq_policy *policy,
 	cpufreq_stats_update(stats);
 
 	stats->last_index = new_index;
-#ifdef CONFIG_CPU_FREQ_STAT_DETAILS
 	stats->trans_table[old_index * stats->max_state + new_index]++;
-#endif
 	stats->total_trans++;
 }
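With the config option gone, the transition matrix is always carved out of the same single allocation. It is stored row-major, so the count of switches from frequency index i to index j lives at i * max_state + j. A conceptual accessor for that layout (trans_count is not part of the kernel, just an illustration of the indexing):

static unsigned int trans_count(struct cpufreq_stats *stats,
				unsigned int i, unsigned int j)
{
	/* row i = old frequency index, column j = new frequency index */
	return stats->trans_table[i * stats->max_state + j];
}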
@@ -118,12 +118,10 @@ static int init_div_table(void)
unsigned int tmp, clk_div, ema_div, freq, volt_id;
struct dev_pm_opp *opp;
rcu_read_lock();
cpufreq_for_each_entry(pos, freq_tbl) {
opp = dev_pm_opp_find_freq_exact(dvfs_info->dev,
pos->frequency * 1000, true);
if (IS_ERR(opp)) {
rcu_read_unlock();
dev_err(dvfs_info->dev,
"failed to find valid OPP for %u KHZ\n",
pos->frequency);
@@ -140,6 +138,7 @@ static int init_div_table(void)
/* Calculate EMA */
volt_id = dev_pm_opp_get_voltage(opp);
volt_id = (MAX_VOLTAGE - volt_id) / VOLTAGE_STEP;
if (volt_id < PMIC_HIGH_VOLT) {
ema_div = (CPUEMA_HIGH << P0_7_CPUEMA_SHIFT) |
@@ -157,9 +156,9 @@ static int init_div_table(void)
__raw_writel(tmp, dvfs_info->base + XMU_PMU_P0_7 + 4 *
(pos - freq_tbl));
dev_pm_opp_put(opp);
}
rcu_read_unlock();
return 0;
}
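The same RCU-to-kref conversion repeats in the drivers below. A minimal, self-contained sketch of the new calling convention (the helper is illustrative; dev_pm_opp_find_freq_exact(), dev_pm_opp_get_voltage() and dev_pm_opp_put() are the real API):

#include <linux/err.h>
#include <linux/pm_opp.h>

/* Illustrative helper: look up an OPP, read its voltage, drop the kref.
 * The lookup now takes a reference on the returned OPP, so the caller
 * pairs it with dev_pm_opp_put() instead of rcu_read_lock()/unlock().
 */
static int example_opp_voltage(struct device *dev, unsigned long freq_hz,
			       unsigned long *volt_uv)
{
	struct dev_pm_opp *opp;

	opp = dev_pm_opp_find_freq_exact(dev, freq_hz, true);
	if (IS_ERR(opp))
		return PTR_ERR(opp);

	*volt_uv = dev_pm_opp_get_voltage(opp);
	dev_pm_opp_put(opp);	/* drop the reference taken by the lookup */

	return 0;
}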
......
@@ -53,16 +53,15 @@ static int imx6q_set_target(struct cpufreq_policy *policy, unsigned int index)
freq_hz = new_freq * 1000;
old_freq = clk_get_rate(arm_clk) / 1000;
rcu_read_lock();
opp = dev_pm_opp_find_freq_ceil(cpu_dev, &freq_hz);
if (IS_ERR(opp)) {
rcu_read_unlock();
dev_err(cpu_dev, "failed to find OPP for %ld\n", freq_hz);
return PTR_ERR(opp);
}
volt = dev_pm_opp_get_voltage(opp);
rcu_read_unlock();
dev_pm_opp_put(opp);
volt_old = regulator_get_voltage(arm_reg);
dev_dbg(cpu_dev, "%u MHz, %ld mV --> %u MHz, %ld mV\n",
@@ -321,14 +320,15 @@ static int imx6q_cpufreq_probe(struct platform_device *pdev)
* freq_table initialised from OPP is therefore sorted in the
* same order.
*/
rcu_read_lock();
opp = dev_pm_opp_find_freq_exact(cpu_dev,
freq_table[0].frequency * 1000, true);
min_volt = dev_pm_opp_get_voltage(opp);
dev_pm_opp_put(opp);
opp = dev_pm_opp_find_freq_exact(cpu_dev,
freq_table[--num].frequency * 1000, true);
max_volt = dev_pm_opp_get_voltage(opp);
rcu_read_unlock();
dev_pm_opp_put(opp);
ret = regulator_set_voltage_time(arm_reg, min_volt, max_volt);
if (ret > 0)
transition_latency += ret * 1000;
......
@@ -358,6 +358,8 @@ static struct pstate_funcs pstate_funcs __read_mostly;
static int hwp_active __read_mostly;
static bool per_cpu_limits __read_mostly;
static bool driver_registered __read_mostly;
#ifdef CONFIG_ACPI
static bool acpi_ppc;
#endif
@@ -394,6 +396,7 @@ static struct perf_limits *limits = &performance_limits;
static struct perf_limits *limits = &powersave_limits;
#endif
static DEFINE_MUTEX(intel_pstate_driver_lock);
static DEFINE_MUTEX(intel_pstate_limits_lock);
#ifdef CONFIG_ACPI
@@ -538,7 +541,6 @@ static void intel_pstate_exit_perf_limits(struct cpufreq_policy *policy)
acpi_processor_unregister_performance(policy->cpu);
}
#else
static inline void intel_pstate_init_acpi_perf_limits(struct cpufreq_policy *policy)
{
@@ -873,6 +875,9 @@ static void intel_pstate_hwp_set(struct cpufreq_policy *policy)
rdmsrl_on_cpu(cpu, MSR_HWP_CAPABILITIES, &cap);
hw_min = HWP_LOWEST_PERF(cap);
if (limits->no_turbo)
hw_max = HWP_GUARANTEED_PERF(cap);
else
hw_max = HWP_HIGHEST_PERF(cap);
range = hw_max - hw_min;
@@ -887,11 +892,6 @@ static void intel_pstate_hwp_set(struct cpufreq_policy *policy)
adj_range = max_perf_pct * range / 100;
max = hw_min + adj_range;
if (limits->no_turbo) {
hw_max = HWP_GUARANTEED_PERF(cap);
if (hw_max < max)
max = hw_max;
}
value &= ~HWP_MAX_PERF(~0L);
value |= HWP_MAX_PERF(max);
@@ -1007,37 +1007,59 @@ static int pid_param_get(void *data, u64 *val)
}
DEFINE_SIMPLE_ATTRIBUTE(fops_pid_param, pid_param_get, pid_param_set, "%llu\n");
static struct dentry *debugfs_parent;
struct pid_param {
char *name;
void *value;
struct dentry *dentry;
};
static struct pid_param pid_files[] = {
{"sample_rate_ms", &pid_params.sample_rate_ms},
{"d_gain_pct", &pid_params.d_gain_pct},
{"i_gain_pct", &pid_params.i_gain_pct},
{"deadband", &pid_params.deadband},
{"setpoint", &pid_params.setpoint},
{"p_gain_pct", &pid_params.p_gain_pct},
{NULL, NULL}
{"sample_rate_ms", &pid_params.sample_rate_ms, },
{"d_gain_pct", &pid_params.d_gain_pct, },
{"i_gain_pct", &pid_params.i_gain_pct, },
{"deadband", &pid_params.deadband, },
{"setpoint", &pid_params.setpoint, },
{"p_gain_pct", &pid_params.p_gain_pct, },
{NULL, NULL, }
};
static void __init intel_pstate_debug_expose_params(void)
static void intel_pstate_debug_expose_params(void)
{
struct dentry *debugfs_parent;
int i = 0;
int i;
debugfs_parent = debugfs_create_dir("pstate_snb", NULL);
if (IS_ERR_OR_NULL(debugfs_parent))
return;
while (pid_files[i].name) {
debugfs_create_file(pid_files[i].name, 0660,
for (i = 0; pid_files[i].name; i++) {
struct dentry *dentry;
dentry = debugfs_create_file(pid_files[i].name, 0660,
debugfs_parent, pid_files[i].value,
&fops_pid_param);
i++;
if (!IS_ERR(dentry))
pid_files[i].dentry = dentry;
}
}
static void intel_pstate_debug_hide_params(void)
{
int i;
if (IS_ERR_OR_NULL(debugfs_parent))
return;
for (i = 0; pid_files[i].name; i++) {
debugfs_remove(pid_files[i].dentry);
pid_files[i].dentry = NULL;
}
debugfs_remove(debugfs_parent);
debugfs_parent = NULL;
}
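A condensed sketch of the create/remove pairing the two functions above implement, with illustrative names; each created dentry is recorded so the new "status" attribute can later tear the directory down cleanly:

#include <linux/debugfs.h>

struct example_param {
	const char *name;
	u32 *value;
	struct dentry *dentry;		/* recorded for later removal */
};

static struct dentry *example_dir;

static void example_expose(struct example_param *p, int n)
{
	int i;

	example_dir = debugfs_create_dir("example", NULL);
	if (IS_ERR_OR_NULL(example_dir))
		return;
	for (i = 0; i < n; i++)
		p[i].dentry = debugfs_create_u32(p[i].name, 0660,
						 example_dir, p[i].value);
}

static void example_hide(struct example_param *p, int n)
{
	int i;

	for (i = 0; i < n; i++) {
		debugfs_remove(p[i].dentry);	/* NULL-safe */
		p[i].dentry = NULL;
	}
	debugfs_remove(example_dir);
	example_dir = NULL;
}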
/************************** debugfs end ************************/
/************************** sysfs begin ************************/
@@ -1048,6 +1070,34 @@ static void __init intel_pstate_debug_expose_params(void)
return sprintf(buf, "%u\n", limits->object); \
}
static ssize_t intel_pstate_show_status(char *buf);
static int intel_pstate_update_status(const char *buf, size_t size);
static ssize_t show_status(struct kobject *kobj,
struct attribute *attr, char *buf)
{
ssize_t ret;
mutex_lock(&intel_pstate_driver_lock);
ret = intel_pstate_show_status(buf);
mutex_unlock(&intel_pstate_driver_lock);
return ret;
}
static ssize_t store_status(struct kobject *a, struct attribute *b,
const char *buf, size_t count)
{
char *p = memchr(buf, '\n', count);
int ret;
mutex_lock(&intel_pstate_driver_lock);
ret = intel_pstate_update_status(buf, p ? p - buf : count);
mutex_unlock(&intel_pstate_driver_lock);
return ret < 0 ? ret : count;
}
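store_status() above trims a trailing newline before parsing; the same calculation in isolation (helper name illustrative; kernel code would pull memchr() from <linux/string.h>):

/* Length of buf up to, but not including, the first newline, if any. */
static size_t example_trim_len(const char *buf, size_t count)
{
	const char *p = memchr(buf, '\n', count);

	return p ? (size_t)(p - buf) : count;
}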
static ssize_t show_turbo_pct(struct kobject *kobj,
struct attribute *attr, char *buf)
{
@@ -1055,12 +1105,22 @@ static ssize_t show_turbo_pct(struct kobject *kobj,
int total, no_turbo, turbo_pct;
uint32_t turbo_fp;
mutex_lock(&intel_pstate_driver_lock);
if (!driver_registered) {
mutex_unlock(&intel_pstate_driver_lock);
return -EAGAIN;
}
cpu = all_cpu_data[0];
total = cpu->pstate.turbo_pstate - cpu->pstate.min_pstate + 1;
no_turbo = cpu->pstate.max_pstate - cpu->pstate.min_pstate + 1;
turbo_fp = div_fp(no_turbo, total);
turbo_pct = 100 - fp_toint(mul_fp(turbo_fp, int_tofp(100)));
mutex_unlock(&intel_pstate_driver_lock);
return sprintf(buf, "%u\n", turbo_pct);
}
@@ -1070,8 +1130,18 @@ static ssize_t show_num_pstates(struct kobject *kobj,
struct cpudata *cpu;
int total;
mutex_lock(&intel_pstate_driver_lock);
if (!driver_registered) {
mutex_unlock(&intel_pstate_driver_lock);
return -EAGAIN;
}
cpu = all_cpu_data[0];
total = cpu->pstate.turbo_pstate - cpu->pstate.min_pstate + 1;
mutex_unlock(&intel_pstate_driver_lock);
return sprintf(buf, "%u\n", total);
}
@@ -1080,12 +1150,21 @@ static ssize_t show_no_turbo(struct kobject *kobj,
{
ssize_t ret;
mutex_lock(&intel_pstate_driver_lock);
if (!driver_registered) {
mutex_unlock(&intel_pstate_driver_lock);
return -EAGAIN;
}
update_turbo_state();
if (limits->turbo_disabled)
ret = sprintf(buf, "%u\n", limits->turbo_disabled);
else
ret = sprintf(buf, "%u\n", limits->no_turbo);
mutex_unlock(&intel_pstate_driver_lock);
return ret;
}
@@ -1099,12 +1178,20 @@ static ssize_t store_no_turbo(struct kobject *a, struct attribute *b,
if (ret != 1)
return -EINVAL;
mutex_lock(&intel_pstate_driver_lock);
if (!driver_registered) {
mutex_unlock(&intel_pstate_driver_lock);
return -EAGAIN;
}
mutex_lock(&intel_pstate_limits_lock);
update_turbo_state();
if (limits->turbo_disabled) {
pr_warn("Turbo disabled by BIOS or unavailable on processor\n");
mutex_unlock(&intel_pstate_limits_lock);
mutex_unlock(&intel_pstate_driver_lock);
return -EPERM;
}
@@ -1114,6 +1201,8 @@ static ssize_t store_no_turbo(struct kobject *a, struct attribute *b,
intel_pstate_update_policies();
mutex_unlock(&intel_pstate_driver_lock);
return count;
}
@@ -1127,6 +1216,13 @@ static ssize_t store_max_perf_pct(struct kobject *a, struct attribute *b,
if (ret != 1)
return -EINVAL;
mutex_lock(&intel_pstate_driver_lock);
if (!driver_registered) {
mutex_unlock(&intel_pstate_driver_lock);
return -EAGAIN;
}
mutex_lock(&intel_pstate_limits_lock);
limits->max_sysfs_pct = clamp_t(int, input, 0 , 100);
@@ -1142,6 +1238,8 @@ static ssize_t store_max_perf_pct(struct kobject *a, struct attribute *b,
intel_pstate_update_policies();
mutex_unlock(&intel_pstate_driver_lock);
return count;
}
@@ -1155,6 +1253,13 @@ static ssize_t store_min_perf_pct(struct kobject *a, struct attribute *b,
if (ret != 1)
return -EINVAL;
mutex_lock(&intel_pstate_driver_lock);
if (!driver_registered) {
mutex_unlock(&intel_pstate_driver_lock);
return -EAGAIN;
}
mutex_lock(&intel_pstate_limits_lock);
limits->min_sysfs_pct = clamp_t(int, input, 0 , 100);
@@ -1170,12 +1275,15 @@ static ssize_t store_min_perf_pct(struct kobject *a, struct attribute *b,
intel_pstate_update_policies();
mutex_unlock(&intel_pstate_driver_lock);
return count;
}
show_one(max_perf_pct, max_perf_pct);
show_one(min_perf_pct, min_perf_pct);
define_one_global_rw(status);
define_one_global_rw(no_turbo);
define_one_global_rw(max_perf_pct);
define_one_global_rw(min_perf_pct);
@@ -1183,6 +1291,7 @@ define_one_global_ro(turbo_pct);
define_one_global_ro(num_pstates);
static struct attribute *intel_pstate_attributes[] = {
&status.attr,
&no_turbo.attr,
&turbo_pct.attr,
&num_pstates.attr,
@@ -1364,48 +1473,71 @@ static int core_get_max_pstate_physical(void)
return (value >> 8) & 0xFF;
}
static int core_get_max_pstate(void)
static int core_get_tdp_ratio(u64 plat_info)
{
u64 tar;
u64 plat_info;
int max_pstate;
int err;
rdmsrl(MSR_PLATFORM_INFO, plat_info);
max_pstate = (plat_info >> 8) & 0xFF;
err = rdmsrl_safe(MSR_TURBO_ACTIVATION_RATIO, &tar);
if (!err) {
/* Do some sanity checking for safety */
/* Check how many TDP levels present */
if (plat_info & 0x600000000) {
u64 tdp_ctrl;
u64 tdp_ratio;
int tdp_msr;
int err;
/* Get the TDP level (0, 1, 2) to get ratios */
err = rdmsrl_safe(MSR_CONFIG_TDP_CONTROL, &tdp_ctrl);
if (err)
goto skip_tar;
return err;
tdp_msr = MSR_CONFIG_TDP_NOMINAL + (tdp_ctrl & 0x3);
/* TDP MSR are continuous starting at 0x648 */
tdp_msr = MSR_CONFIG_TDP_NOMINAL + (tdp_ctrl & 0x03);
err = rdmsrl_safe(tdp_msr, &tdp_ratio);
if (err)
goto skip_tar;
return err;
/* For level 1 and 2, bits[23:16] contain the ratio */
if (tdp_ctrl)
if (tdp_ctrl & 0x03)
tdp_ratio >>= 16;
tdp_ratio &= 0xff; /* ratios are only 8 bits long */
if (tdp_ratio - 1 == tar) {
max_pstate = tar;
pr_debug("max_pstate=TAC %x\n", max_pstate);
} else {
goto skip_tar;
pr_debug("tdp_ratio %x\n", (int)tdp_ratio);
return (int)tdp_ratio;
}
return -ENXIO;
}
static int core_get_max_pstate(void)
{
u64 tar;
u64 plat_info;
int max_pstate;
int tdp_ratio;
int err;
rdmsrl(MSR_PLATFORM_INFO, plat_info);
max_pstate = (plat_info >> 8) & 0xFF;
tdp_ratio = core_get_tdp_ratio(plat_info);
if (tdp_ratio <= 0)
return max_pstate;
if (hwp_active) {
/* Turbo activation ratio is not used on HWP platforms */
return tdp_ratio;
}
err = rdmsrl_safe(MSR_TURBO_ACTIVATION_RATIO, &tar);
if (!err) {
int tar_levels;
/* Do some sanity checking for safety */
tar_levels = tar & 0xff;
if (tdp_ratio - 1 == tar_levels) {
max_pstate = tar_levels;
pr_debug("max_pstate=TAC %x\n", max_pstate);
}
}
skip_tar:
return max_pstate;
}
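A worked example of the MSR arithmetic in core_get_tdp_ratio() above, using only what the code's own comment states (the config-TDP MSRs are contiguous from 0x648, i.e. MSR_CONFIG_TDP_NOMINAL):

/* For tdp_ctrl == 2: 0x648 + 2 = 0x64A is the level-2 register, and
 * since the level is non-zero the 8-bit ratio sits in bits [23:16]
 * of the value read back.
 */
static u32 example_tdp_level_msr(u64 tdp_ctrl)
{
	return 0x648 + (tdp_ctrl & 0x03);	/* 0x648 = MSR_CONFIG_TDP_NOMINAL */
}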
@@ -2072,6 +2204,20 @@ static int intel_pstate_set_policy(struct cpufreq_policy *policy)
static int intel_pstate_verify_policy(struct cpufreq_policy *policy)
{
struct cpudata *cpu = all_cpu_data[policy->cpu];
struct perf_limits *perf_limits;
if (policy->policy == CPUFREQ_POLICY_PERFORMANCE)
perf_limits = &performance_limits;
else
perf_limits = &powersave_limits;
update_turbo_state();
policy->cpuinfo.max_freq = perf_limits->turbo_disabled ||
perf_limits->no_turbo ?
cpu->pstate.max_freq :
cpu->pstate.turbo_freq;
cpufreq_verify_within_cpu_limits(policy);
if (policy->policy != CPUFREQ_POLICY_POWERSAVE &&
@@ -2299,6 +2445,111 @@ static struct cpufreq_driver intel_cpufreq = {
static struct cpufreq_driver *intel_pstate_driver = &intel_pstate;
static void intel_pstate_driver_cleanup(void)
{
unsigned int cpu;
get_online_cpus();
for_each_online_cpu(cpu) {
if (all_cpu_data[cpu]) {
if (intel_pstate_driver == &intel_pstate)
intel_pstate_clear_update_util_hook(cpu);
kfree(all_cpu_data[cpu]);
all_cpu_data[cpu] = NULL;
}
}
put_online_cpus();
}
static int intel_pstate_register_driver(void)
{
int ret;
ret = cpufreq_register_driver(intel_pstate_driver);
if (ret) {
intel_pstate_driver_cleanup();
return ret;
}
mutex_lock(&intel_pstate_limits_lock);
driver_registered = true;
mutex_unlock(&intel_pstate_limits_lock);
if (intel_pstate_driver == &intel_pstate && !hwp_active &&
pstate_funcs.get_target_pstate != get_target_pstate_use_cpu_load)
intel_pstate_debug_expose_params();
return 0;
}
static int intel_pstate_unregister_driver(void)
{
if (hwp_active)
return -EBUSY;
if (intel_pstate_driver == &intel_pstate && !hwp_active &&
pstate_funcs.get_target_pstate != get_target_pstate_use_cpu_load)
intel_pstate_debug_hide_params();
mutex_lock(&intel_pstate_limits_lock);
driver_registered = false;
mutex_unlock(&intel_pstate_limits_lock);
cpufreq_unregister_driver(intel_pstate_driver);
intel_pstate_driver_cleanup();
return 0;
}
static ssize_t intel_pstate_show_status(char *buf)
{
if (!driver_registered)
return sprintf(buf, "off\n");
return sprintf(buf, "%s\n", intel_pstate_driver == &intel_pstate ?
"active" : "passive");
}
static int intel_pstate_update_status(const char *buf, size_t size)
{
int ret;
if (size == 3 && !strncmp(buf, "off", size))
return driver_registered ?
intel_pstate_unregister_driver() : -EINVAL;
if (size == 6 && !strncmp(buf, "active", size)) {
if (driver_registered) {
if (intel_pstate_driver == &intel_pstate)
return 0;
ret = intel_pstate_unregister_driver();
if (ret)
return ret;
}
intel_pstate_driver = &intel_pstate;
return intel_pstate_register_driver();
}
if (size == 7 && !strncmp(buf, "passive", size)) {
if (driver_registered) {
if (intel_pstate_driver != &intel_pstate)
return 0;
ret = intel_pstate_unregister_driver();
if (ret)
return ret;
}
intel_pstate_driver = &intel_cpufreq;
return intel_pstate_register_driver();
}
return -EINVAL;
}
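The accepted strings above give userspace a three-way switch; a minimal userspace sketch (the path is the one intel_pstate registers for its global attributes, and error handling is kept to the bare minimum):

#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/sys/devices/system/cpu/intel_pstate/status", "w");

	if (!f)
		return 1;
	/* accepted values, per intel_pstate_update_status(): off, active, passive */
	fputs("passive", f);
	return fclose(f) != 0;
}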
static int no_load __initdata;
static int no_hwp __initdata;
static int hwp_only __initdata;
@@ -2486,9 +2737,9 @@ static const struct x86_cpu_id hwp_support_ids[] __initconst = {
static int __init intel_pstate_init(void)
{
int cpu, rc = 0;
const struct x86_cpu_id *id;
struct cpu_defaults *cpu_def;
int rc = 0;
if (no_load)
return -ENODEV;
@@ -2520,45 +2771,29 @@ static int __init intel_pstate_init(void)
if (intel_pstate_platform_pwr_mgmt_exists())
return -ENODEV;
if (!hwp_active && hwp_only)
return -ENOTSUPP;
pr_info("Intel P-state driver initializing\n"); pr_info("Intel P-state driver initializing\n");
all_cpu_data = vzalloc(sizeof(void *) * num_possible_cpus()); all_cpu_data = vzalloc(sizeof(void *) * num_possible_cpus());
if (!all_cpu_data) if (!all_cpu_data)
return -ENOMEM; return -ENOMEM;
if (!hwp_active && hwp_only)
goto out;
intel_pstate_request_control_from_smm(); intel_pstate_request_control_from_smm();
rc = cpufreq_register_driver(intel_pstate_driver);
if (rc)
goto out;
if (intel_pstate_driver == &intel_pstate && !hwp_active &&
pstate_funcs.get_target_pstate != get_target_pstate_use_cpu_load)
intel_pstate_debug_expose_params();
intel_pstate_sysfs_expose_params(); intel_pstate_sysfs_expose_params();
if (hwp_active) mutex_lock(&intel_pstate_driver_lock);
pr_info("HWP enabled\n"); rc = intel_pstate_register_driver();
mutex_unlock(&intel_pstate_driver_lock);
if (rc)
return rc; return rc;
out:
get_online_cpus();
for_each_online_cpu(cpu) {
if (all_cpu_data[cpu]) {
if (intel_pstate_driver == &intel_pstate)
intel_pstate_clear_update_util_hook(cpu);
kfree(all_cpu_data[cpu]); if (hwp_active)
} pr_info("HWP enabled\n");
}
put_online_cpus(); return 0;
vfree(all_cpu_data);
return -ENODEV;
} }
device_initcall(intel_pstate_init); device_initcall(intel_pstate_init);
......
@@ -232,16 +232,14 @@ static int mtk_cpufreq_set_target(struct cpufreq_policy *policy,
freq_hz = freq_table[index].frequency * 1000;
rcu_read_lock();
opp = dev_pm_opp_find_freq_ceil(cpu_dev, &freq_hz);
if (IS_ERR(opp)) {
rcu_read_unlock();
pr_err("cpu%d: failed to find OPP for %ld\n",
policy->cpu, freq_hz);
return PTR_ERR(opp);
}
vproc = dev_pm_opp_get_voltage(opp);
rcu_read_unlock();
dev_pm_opp_put(opp);
/*
* If the new voltage or the intermediate voltage is higher than the
@@ -411,16 +409,14 @@ static int mtk_cpu_dvfs_info_init(struct mtk_cpu_dvfs_info *info, int cpu)
/* Search a safe voltage for intermediate frequency. */
rate = clk_get_rate(inter_clk);
rcu_read_lock();
opp = dev_pm_opp_find_freq_ceil(cpu_dev, &rate);
if (IS_ERR(opp)) {
rcu_read_unlock();
pr_err("failed to get intermediate opp for cpu%d\n", cpu);
ret = PTR_ERR(opp);
goto out_free_opp_table;
}
info->intermediate_voltage = dev_pm_opp_get_voltage(opp);
rcu_read_unlock();
dev_pm_opp_put(opp);
info->cpu_dev = cpu_dev;
info->proc_reg = proc_reg;
......
@@ -63,16 +63,14 @@ static int omap_target(struct cpufreq_policy *policy, unsigned int index)
freq = ret;
if (mpu_reg) {
rcu_read_lock();
opp = dev_pm_opp_find_freq_ceil(mpu_dev, &freq);
if (IS_ERR(opp)) {
rcu_read_unlock();
dev_err(mpu_dev, "%s: unable to find MPU OPP for %d\n",
__func__, new_freq);
return -EINVAL;
}
volt = dev_pm_opp_get_voltage(opp);
rcu_read_unlock();
dev_pm_opp_put(opp);
tol = volt * OPP_TOLERANCE / 100;
volt_old = regulator_get_voltage(mpu_reg);
}
......
@@ -144,6 +144,7 @@ static struct powernv_pstate_info {
unsigned int max;
unsigned int nominal;
unsigned int nr_pstates;
bool wof_enabled;
} powernv_pstate_info;
/* Use following macros for conversions between pstate_id and index */
@@ -203,6 +204,7 @@ static int init_powernv_pstates(void)
const __be32 *pstate_ids, *pstate_freqs;
u32 len_ids, len_freqs;
u32 pstate_min, pstate_max, pstate_nominal;
u32 pstate_turbo, pstate_ultra_turbo;
power_mgt = of_find_node_by_path("/ibm,opal/power-mgt");
if (!power_mgt) {
@@ -225,8 +227,29 @@ static int init_powernv_pstates(void)
pr_warn("ibm,pstate-nominal not found\n");
return -ENODEV;
}
if (of_property_read_u32(power_mgt, "ibm,pstate-ultra-turbo",
&pstate_ultra_turbo)) {
powernv_pstate_info.wof_enabled = false;
goto next;
}
if (of_property_read_u32(power_mgt, "ibm,pstate-turbo",
&pstate_turbo)) {
powernv_pstate_info.wof_enabled = false;
goto next;
}
if (pstate_turbo == pstate_ultra_turbo)
powernv_pstate_info.wof_enabled = false;
else
powernv_pstate_info.wof_enabled = true;
next:
pr_info("cpufreq pstate min %d nominal %d max %d\n", pstate_min, pr_info("cpufreq pstate min %d nominal %d max %d\n", pstate_min,
pstate_nominal, pstate_max); pstate_nominal, pstate_max);
pr_info("Workload Optimized Frequency is %s in the platform\n",
(powernv_pstate_info.wof_enabled) ? "enabled" : "disabled");
pstate_ids = of_get_property(power_mgt, "ibm,pstate-ids", &len_ids); pstate_ids = of_get_property(power_mgt, "ibm,pstate-ids", &len_ids);
if (!pstate_ids) { if (!pstate_ids) {
...@@ -268,6 +291,13 @@ static int init_powernv_pstates(void) ...@@ -268,6 +291,13 @@ static int init_powernv_pstates(void)
powernv_pstate_info.nominal = i; powernv_pstate_info.nominal = i;
else if (id == pstate_min) else if (id == pstate_min)
powernv_pstate_info.min = i; powernv_pstate_info.min = i;
if (powernv_pstate_info.wof_enabled && id == pstate_turbo) {
int j;
for (j = i - 1; j >= (int)powernv_pstate_info.max; j--)
powernv_freqs[j].flags = CPUFREQ_BOOST_FREQ;
}
} }
/* End of list marker entry */ /* End of list marker entry */
...@@ -305,9 +335,12 @@ static ssize_t cpuinfo_nominal_freq_show(struct cpufreq_policy *policy, ...@@ -305,9 +335,12 @@ static ssize_t cpuinfo_nominal_freq_show(struct cpufreq_policy *policy,
struct freq_attr cpufreq_freq_attr_cpuinfo_nominal_freq = struct freq_attr cpufreq_freq_attr_cpuinfo_nominal_freq =
__ATTR_RO(cpuinfo_nominal_freq); __ATTR_RO(cpuinfo_nominal_freq);
#define SCALING_BOOST_FREQS_ATTR_INDEX 2
static struct freq_attr *powernv_cpu_freq_attr[] = { static struct freq_attr *powernv_cpu_freq_attr[] = {
&cpufreq_freq_attr_scaling_available_freqs, &cpufreq_freq_attr_scaling_available_freqs,
&cpufreq_freq_attr_cpuinfo_nominal_freq, &cpufreq_freq_attr_cpuinfo_nominal_freq,
&cpufreq_freq_attr_scaling_boost_freqs,
NULL, NULL,
}; };
...@@ -1013,11 +1046,22 @@ static int __init powernv_cpufreq_init(void) ...@@ -1013,11 +1046,22 @@ static int __init powernv_cpufreq_init(void)
register_reboot_notifier(&powernv_cpufreq_reboot_nb); register_reboot_notifier(&powernv_cpufreq_reboot_nb);
opal_message_notifier_register(OPAL_MSG_OCC, &powernv_cpufreq_opal_nb); opal_message_notifier_register(OPAL_MSG_OCC, &powernv_cpufreq_opal_nb);
rc = cpufreq_register_driver(&powernv_cpufreq_driver); if (powernv_pstate_info.wof_enabled)
if (!rc) powernv_cpufreq_driver.boost_enabled = true;
return 0; else
powernv_cpu_freq_attr[SCALING_BOOST_FREQS_ATTR_INDEX] = NULL;
rc = cpufreq_register_driver(&powernv_cpufreq_driver);
if (rc) {
pr_info("Failed to register the cpufreq driver (%d)\n", rc); pr_info("Failed to register the cpufreq driver (%d)\n", rc);
goto cleanup_notifiers;
}
if (powernv_pstate_info.wof_enabled)
cpufreq_enable_boost_support();
return 0;
cleanup_notifiers:
unregister_all_notifiers();
clean_chip_info();
out:
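A simplified sketch of what the CPUFREQ_BOOST_FREQ marking above buys: the cpufreq core treats flagged table entries as usable only while boost is enabled. This is condensed from the core's table-walk behaviour, so details are approximate:

#include <linux/cpufreq.h>

static bool example_entry_usable(const struct cpufreq_frequency_table *pos,
				 bool boost_enabled)
{
	if (pos->frequency == CPUFREQ_ENTRY_INVALID)
		return false;
	/* boost-flagged entries are skipped while boost is disabled */
	return boost_enabled || !(pos->flags & CPUFREQ_BOOST_FREQ);
}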
......
@@ -100,9 +100,6 @@ static int pmi_notifier(struct notifier_block *nb,
/* Should this really be called for CPUFREQ_ADJUST and CPUFREQ_NOTIFY
* policy events?)
*/
if (event == CPUFREQ_START)
return 0;
node = cbe_cpu_to_node(policy->cpu);
pr_debug("got notified, event=%lu, node=%u\n", event, node);
......
@@ -11,6 +11,7 @@
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/clk.h>
#include <linux/clk-provider.h>
#include <linux/cpufreq.h>
#include <linux/cpu_cooling.h>
#include <linux/errno.h>
@@ -37,53 +38,20 @@ struct cpu_data {
struct thermal_cooling_device *cdev;
};
/*
* Don't use cpufreq on this SoC -- used when the SoC would have otherwise
* matched a more generic compatible.
*/
#define SOC_BLACKLIST 1
/**
* struct soc_data - SoC specific data
* @freq_mask: mask the disallowed frequencies
* @flag: unique flags
* @flags: SOC_xxx
*/
struct soc_data {
u32 freq_mask[4];
u32 flags;
u32 flag;
};
#define FREQ_MASK 1
/* see hardware specification for the allowed frqeuencies */
static const struct soc_data sdata[] = {
{ /* used by p2041 and p3041 */
.freq_mask = {0x8, 0x8, 0x2, 0x2},
.flag = FREQ_MASK,
},
{ /* used by p5020 */
.freq_mask = {0x8, 0x2},
.flag = FREQ_MASK,
},
{ /* used by p4080, p5040 */
.freq_mask = {0},
.flag = 0,
},
};
/*
* the minimum allowed core frequency, in Hz
* for chassis v1.0, >= platform frequency
* for chassis v2.0, >= platform frequency / 2
*/
static u32 min_cpufreq;
static const u32 *fmask;
#if defined(CONFIG_ARM)
static int get_cpu_physical_id(int cpu)
{
return topology_core_id(cpu);
}
#else
static int get_cpu_physical_id(int cpu)
{
return get_hard_smp_processor_id(cpu);
}
#endif
static u32 get_bus_freq(void)
{
struct device_node *soc;
@@ -101,9 +69,10 @@ static u32 get_bus_freq(void)
return sysfreq;
}
static struct device_node *cpu_to_clk_node(int cpu)
static struct clk *cpu_to_clk(int cpu)
{
struct device_node *np, *clk_np;
struct device_node *np;
struct clk *clk;
if (!cpu_present(cpu))
return NULL;
@@ -112,37 +81,28 @@ static struct device_node *cpu_to_clk_node(int cpu)
if (!np)
return NULL;
clk_np = of_parse_phandle(np, "clocks", 0);
if (!clk_np)
return NULL;
clk = of_clk_get(np, 0);
of_node_put(np);
return clk;
return clk_np;
}
/* traverse cpu nodes to get cpu mask of sharing clock wire */
static void set_affected_cpus(struct cpufreq_policy *policy)
{
struct device_node *np, *clk_np;
struct cpumask *dstp = policy->cpus;
struct clk *clk;
int i;
np = cpu_to_clk_node(policy->cpu);
if (!np)
return;
for_each_present_cpu(i) {
clk_np = cpu_to_clk_node(i);
if (!clk_np)
clk = cpu_to_clk(i);
if (IS_ERR(clk)) {
pr_err("%s: no clock for cpu %d\n", __func__, i);
continue;
}
if (clk_np == np)
if (clk_is_match(policy->clk, clk))
cpumask_set_cpu(i, dstp);
of_node_put(clk_np);
}
of_node_put(np);
}
/* reduce the duplicated frequencies in frequency table */
@@ -198,10 +158,11 @@ static void freq_table_sort(struct cpufreq_frequency_table *freq_table,
static int qoriq_cpufreq_cpu_init(struct cpufreq_policy *policy)
{
struct device_node *np, *pnode;
struct device_node *np;
int i, count, ret;
u32 freq, mask;
u32 freq;
struct clk *clk;
const struct clk_hw *hwclk;
struct cpufreq_frequency_table *table;
struct cpu_data *data;
unsigned int cpu = policy->cpu;
@@ -221,17 +182,13 @@ static int qoriq_cpufreq_cpu_init(struct cpufreq_policy *policy)
goto err_nomem2;
}
pnode = of_parse_phandle(np, "clocks", 0);
if (!pnode) {
pr_err("%s: could not get clock information\n", __func__);
goto err_nomem2;
}
count = of_property_count_strings(pnode, "clock-names");
hwclk = __clk_get_hw(policy->clk);
count = clk_hw_get_num_parents(hwclk);
data->pclk = kcalloc(count, sizeof(struct clk *), GFP_KERNEL);
if (!data->pclk) {
pr_err("%s: no memory\n", __func__);
goto err_node;
goto err_nomem2;
}
table = kcalloc(count + 1, sizeof(*table), GFP_KERNEL);
@@ -240,22 +197,10 @@ static int qoriq_cpufreq_cpu_init(struct cpufreq_policy *policy)
goto err_pclk;
}
if (fmask)
mask = fmask[get_cpu_physical_id(cpu)];
else
mask = 0x0;
for (i = 0; i < count; i++) {
clk = of_clk_get(pnode, i);
clk = clk_hw_get_parent_by_index(hwclk, i)->clk;
data->pclk[i] = clk;
freq = clk_get_rate(clk);
/*
* the clock is valid if its frequency is not masked
* and large than minimum allowed frequency.
*/
if (freq < min_cpufreq || (mask & (1 << i)))
table[i].frequency = CPUFREQ_ENTRY_INVALID;
else
table[i].frequency = freq / 1000;
table[i].driver_data = i;
}
@@ -282,7 +227,6 @@ static int qoriq_cpufreq_cpu_init(struct cpufreq_policy *policy)
policy->cpuinfo.transition_latency = u64temp + 1;
of_node_put(np);
of_node_put(pnode);
return 0;
@@ -290,10 +234,7 @@ static int qoriq_cpufreq_cpu_init(struct cpufreq_policy *policy)
kfree(table);
err_pclk:
kfree(data->pclk);
err_node:
of_node_put(pnode);
err_nomem2:
policy->driver_data = NULL;
kfree(data);
err_np:
of_node_put(np);
@@ -357,12 +298,25 @@ static struct cpufreq_driver qoriq_cpufreq_driver = {
.attr = cpufreq_generic_attr,
};
static const struct soc_data blacklist = {
.flags = SOC_BLACKLIST,
};
static const struct of_device_id node_matches[] __initconst = {
{ .compatible = "fsl,p2041-clockgen", .data = &sdata[0], },
{ .compatible = "fsl,p3041-clockgen", .data = &sdata[0], },
{ .compatible = "fsl,p5020-clockgen", .data = &sdata[1], },
{ .compatible = "fsl,p4080-clockgen", .data = &sdata[2], },
{ .compatible = "fsl,p5040-clockgen", .data = &sdata[2], },
/* e6500 cannot use cpufreq due to erratum A-008083 */
{ .compatible = "fsl,b4420-clockgen", &blacklist },
{ .compatible = "fsl,b4860-clockgen", &blacklist },
{ .compatible = "fsl,t2080-clockgen", &blacklist },
{ .compatible = "fsl,t4240-clockgen", &blacklist },
{ .compatible = "fsl,ls1012a-clockgen", },
{ .compatible = "fsl,ls1021a-clockgen", },
{ .compatible = "fsl,ls1043a-clockgen", },
{ .compatible = "fsl,ls1046a-clockgen", },
{ .compatible = "fsl,ls1088a-clockgen", },
{ .compatible = "fsl,ls2080a-clockgen", },
{ .compatible = "fsl,p4080-clockgen", },
{ .compatible = "fsl,qoriq-clockgen-1.0", },
{ .compatible = "fsl,qoriq-clockgen-2.0", }, { .compatible = "fsl,qoriq-clockgen-2.0", },
{} {}
}; };
...@@ -380,16 +334,12 @@ static int __init qoriq_cpufreq_init(void) ...@@ -380,16 +334,12 @@ static int __init qoriq_cpufreq_init(void)
match = of_match_node(node_matches, np); match = of_match_node(node_matches, np);
data = match->data; data = match->data;
if (data) {
if (data->flag)
fmask = data->freq_mask;
min_cpufreq = get_bus_freq();
} else {
min_cpufreq = get_bus_freq() / 2;
}
of_node_put(np);
if (data && data->flags & SOC_BLACKLIST)
return -ENODEV;
ret = cpufreq_register_driver(&qoriq_cpufreq_driver);
if (!ret)
pr_info("Freescale QorIQ CPU frequency scaling driver\n");
......
@@ -400,7 +400,6 @@ static int s3c2416_cpufreq_driver_init(struct cpufreq_policy *policy)
rate = clk_get_rate(s3c_freq->hclk);
if (rate < 133 * 1000 * 1000) {
pr_err("cpufreq: HCLK not at 133MHz\n");
clk_put(s3c_freq->hclk);
ret = -EINVAL;
goto err_armclk;
}
......
@@ -160,6 +160,7 @@ static int sti_cpufreq_set_opp_info(void)
int pcode, substrate, major, minor;
int ret;
char name[MAX_PCODE_NAME_LEN];
struct opp_table *opp_table;
reg_fields = sti_cpufreq_match();
if (!reg_fields) {
@@ -211,20 +212,20 @@ static int sti_cpufreq_set_opp_info(void)
snprintf(name, MAX_PCODE_NAME_LEN, "pcode%d", pcode);
ret = dev_pm_opp_set_prop_name(dev, name);
opp_table = dev_pm_opp_set_prop_name(dev, name);
if (ret) {
if (IS_ERR(opp_table)) {
dev_err(dev, "Failed to set prop name\n");
return ret;
return PTR_ERR(opp_table);
}
version[0] = BIT(major);
version[1] = BIT(minor);
version[2] = BIT(substrate);
ret = dev_pm_opp_set_supported_hw(dev, version, VERSION_ELEMENTS);
opp_table = dev_pm_opp_set_supported_hw(dev, version, VERSION_ELEMENTS);
if (ret) {
if (IS_ERR(opp_table)) {
dev_err(dev, "Failed to set supported hardware\n");
return ret;
return PTR_ERR(opp_table);
}
dev_dbg(dev, "pcode: %d major: %d minor: %d substrate: %d\n",
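Both hunks above follow the same OPP-core change: helpers that used to return an int now hand back a struct opp_table pointer, with failures reported via ERR_PTR. A minimal error-handling sketch under that assumption:

#include <linux/err.h>
#include <linux/pm_opp.h>

static int example_set_supported_hw(struct device *dev,
				    const u32 *version, unsigned int count)
{
	struct opp_table *opp_table;

	opp_table = dev_pm_opp_set_supported_hw(dev, version, count);
	if (IS_ERR(opp_table))
		return PTR_ERR(opp_table);	/* no more int return codes */

	return 0;
}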
......
/*
* TI CPUFreq/OPP hw-supported driver
*
* Copyright (C) 2016-2017 Texas Instruments, Inc.
* Dave Gerlach <d-gerlach@ti.com>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/cpu.h>
#include <linux/io.h>
#include <linux/mfd/syscon.h>
#include <linux/init.h>
#include <linux/of.h>
#include <linux/of_platform.h>
#include <linux/pm_opp.h>
#include <linux/regmap.h>
#include <linux/slab.h>
#define REVISION_MASK 0xF
#define REVISION_SHIFT 28
#define AM33XX_800M_ARM_MPU_MAX_FREQ 0x1E2F
#define AM43XX_600M_ARM_MPU_MAX_FREQ 0xFFA
#define DRA7_EFUSE_HAS_OD_MPU_OPP 11
#define DRA7_EFUSE_HAS_HIGH_MPU_OPP 15
#define DRA7_EFUSE_HAS_ALL_MPU_OPP 23
#define DRA7_EFUSE_NOM_MPU_OPP BIT(0)
#define DRA7_EFUSE_OD_MPU_OPP BIT(1)
#define DRA7_EFUSE_HIGH_MPU_OPP BIT(2)
#define VERSION_COUNT 2
struct ti_cpufreq_data;
struct ti_cpufreq_soc_data {
unsigned long (*efuse_xlate)(struct ti_cpufreq_data *opp_data,
unsigned long efuse);
unsigned long efuse_fallback;
unsigned long efuse_offset;
unsigned long efuse_mask;
unsigned long efuse_shift;
unsigned long rev_offset;
};
struct ti_cpufreq_data {
struct device *cpu_dev;
struct device_node *opp_node;
struct regmap *syscon;
const struct ti_cpufreq_soc_data *soc_data;
};
static unsigned long amx3_efuse_xlate(struct ti_cpufreq_data *opp_data,
unsigned long efuse)
{
if (!efuse)
efuse = opp_data->soc_data->efuse_fallback;
/* AM335x and AM437x use "OPP disable" bits, so invert */
return ~efuse;
}
static unsigned long dra7_efuse_xlate(struct ti_cpufreq_data *opp_data,
unsigned long efuse)
{
unsigned long calculated_efuse = DRA7_EFUSE_NOM_MPU_OPP;
/*
* The efuse on dra7 and am57 parts contains a specific
* value indicating the highest available OPP.
*/
switch (efuse) {
case DRA7_EFUSE_HAS_ALL_MPU_OPP:
case DRA7_EFUSE_HAS_HIGH_MPU_OPP:
calculated_efuse |= DRA7_EFUSE_HIGH_MPU_OPP;
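/* deliberate fall-through: parts fused for the high OPP also provide OD */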
case DRA7_EFUSE_HAS_OD_MPU_OPP:
calculated_efuse |= DRA7_EFUSE_OD_MPU_OPP;
}
return calculated_efuse;
}
static struct ti_cpufreq_soc_data am3x_soc_data = {
.efuse_xlate = amx3_efuse_xlate,
.efuse_fallback = AM33XX_800M_ARM_MPU_MAX_FREQ,
.efuse_offset = 0x07fc,
.efuse_mask = 0x1fff,
.rev_offset = 0x600,
};
static struct ti_cpufreq_soc_data am4x_soc_data = {
.efuse_xlate = amx3_efuse_xlate,
.efuse_fallback = AM43XX_600M_ARM_MPU_MAX_FREQ,
.efuse_offset = 0x0610,
.efuse_mask = 0x3f,
.rev_offset = 0x600,
};
static struct ti_cpufreq_soc_data dra7_soc_data = {
.efuse_xlate = dra7_efuse_xlate,
.efuse_offset = 0x020c,
.efuse_mask = 0xf80000,
.efuse_shift = 19,
.rev_offset = 0x204,
};
/**
* ti_cpufreq_get_efuse() - Parse and return efuse value present on SoC
* @opp_data: pointer to ti_cpufreq_data context
* @efuse_value: Set to the value parsed from efuse
*
* Returns error code if efuse not read properly.
*/
static int ti_cpufreq_get_efuse(struct ti_cpufreq_data *opp_data,
u32 *efuse_value)
{
struct device *dev = opp_data->cpu_dev;
u32 efuse;
int ret;
ret = regmap_read(opp_data->syscon, opp_data->soc_data->efuse_offset,
&efuse);
if (ret) {
dev_err(dev,
"Failed to read the efuse value from syscon: %d\n",
ret);
return ret;
}
efuse = (efuse & opp_data->soc_data->efuse_mask);
efuse >>= opp_data->soc_data->efuse_shift;
*efuse_value = opp_data->soc_data->efuse_xlate(opp_data, efuse);
return 0;
}
/**
* ti_cpufreq_get_rev() - Parse and return rev value present on SoC
* @opp_data: pointer to ti_cpufreq_data context
* @revision_value: Set to the value parsed from revision register
*
* Returns error code if revision not read properly.
*/
static int ti_cpufreq_get_rev(struct ti_cpufreq_data *opp_data,
u32 *revision_value)
{
struct device *dev = opp_data->cpu_dev;
u32 revision;
int ret;
ret = regmap_read(opp_data->syscon, opp_data->soc_data->rev_offset,
&revision);
if (ret) {
dev_err(dev,
"Failed to read the revision number from syscon: %d\n",
ret);
return ret;
}
*revision_value = BIT((revision >> REVISION_SHIFT) & REVISION_MASK);
return 0;
}
static int ti_cpufreq_setup_syscon_register(struct ti_cpufreq_data *opp_data)
{
struct device *dev = opp_data->cpu_dev;
struct device_node *np = opp_data->opp_node;
opp_data->syscon = syscon_regmap_lookup_by_phandle(np,
"syscon");
if (IS_ERR(opp_data->syscon)) {
dev_err(dev,
"\"syscon\" is missing, cannot use OPPv2 table.\n");
return PTR_ERR(opp_data->syscon);
}
return 0;
}
static const struct of_device_id ti_cpufreq_of_match[] = {
{ .compatible = "ti,am33xx", .data = &am3x_soc_data, },
{ .compatible = "ti,am4372", .data = &am4x_soc_data, },
{ .compatible = "ti,dra7", .data = &dra7_soc_data },
{},
};
static int ti_cpufreq_init(void)
{
u32 version[VERSION_COUNT];
struct device_node *np;
const struct of_device_id *match;
struct ti_cpufreq_data *opp_data;
int ret;
np = of_find_node_by_path("/");
match = of_match_node(ti_cpufreq_of_match, np);
if (!match)
return -ENODEV;
opp_data = kzalloc(sizeof(*opp_data), GFP_KERNEL);
if (!opp_data)
return -ENOMEM;
opp_data->soc_data = match->data;
opp_data->cpu_dev = get_cpu_device(0);
if (!opp_data->cpu_dev) {
pr_err("%s: Failed to get device for CPU0\n", __func__);
return -ENODEV;
}
opp_data->opp_node = dev_pm_opp_of_get_opp_desc_node(opp_data->cpu_dev);
if (!opp_data->opp_node) {
dev_info(opp_data->cpu_dev,
"OPP-v2 not supported, cpufreq-dt will attempt to use legacy tables.\n");
goto register_cpufreq_dt;
}
ret = ti_cpufreq_setup_syscon_register(opp_data);
if (ret)
goto fail_put_node;
/*
* OPPs determine whether or not they are supported based on
* two metrics:
* 0 - SoC Revision
* 1 - eFuse value
*/
ret = ti_cpufreq_get_rev(opp_data, &version[0]);
if (ret)
goto fail_put_node;
ret = ti_cpufreq_get_efuse(opp_data, &version[1]);
if (ret)
goto fail_put_node;
of_node_put(opp_data->opp_node);
ret = PTR_ERR_OR_ZERO(dev_pm_opp_set_supported_hw(opp_data->cpu_dev,
version, VERSION_COUNT));
if (ret) {
dev_err(opp_data->cpu_dev,
"Failed to set supported hardware\n");
goto fail_put_node;
}
register_cpufreq_dt:
platform_device_register_simple("cpufreq-dt", -1, NULL, 0);
return 0;
fail_put_node:
of_node_put(opp_data->opp_node);
return ret;
}
device_initcall(ti_cpufreq_init);
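An illustrative sketch of the matching rule the two version words above feed into: as far as the OPP core's supported-hw check goes, each opp-supported-hw cell is AND-masked against the corresponding version word, and the OPP stays enabled only if every word matches (simplified rendering, names illustrative):

static bool example_opp_supported(const u32 *opp_hw, const u32 *version,
				  unsigned int count)
{
	unsigned int i;

	for (i = 0; i < count; i++)
		if (!(opp_hw[i] & version[i]))
			return false;	/* one mismatched word disables the OPP */
	return true;
}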
@@ -19,6 +19,7 @@
#include <linux/tick.h>
#include <linux/sched.h>
#include <linux/math64.h>
#include <linux/cpu.h>
/*
* Please note when changing the tuning values:
@@ -280,17 +281,23 @@ static unsigned int get_typical_interval(struct menu_device *data)
static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev)
{
struct menu_device *data = this_cpu_ptr(&menu_devices);
struct device *device = get_cpu_device(dev->cpu);
int latency_req = pm_qos_request(PM_QOS_CPU_DMA_LATENCY);
int i;
unsigned int interactivity_req;
unsigned int expected_interval;
unsigned long nr_iowaiters, cpu_load;
int resume_latency = dev_pm_qos_read_value(device);
if (data->needs_update) {
menu_update(drv, dev);
data->needs_update = 0;
}
/* resume_latency is 0 means no restriction */
if (resume_latency && resume_latency < latency_req)
latency_req = resume_latency;
/* Special case when user has set very strict latency requirement */
if (unlikely(latency_req == 0))
return 0;
@@ -357,9 +364,9 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev)
if (s->disabled || su->disable)
continue;
if (s->target_residency > data->predicted_us)
continue;
break;
if (s->exit_latency > latency_req)
continue;
break;
data->last_state_idx = i;
}
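The dev_pm_qos_read_value() input above comes from per-CPU device PM QoS; a hedged sketch of how a driver could register such a wakeup-latency constraint (request placement and the helper name are illustrative):

#include <linux/pm_qos.h>

static int example_limit_cpu_wakeup(struct device *cpu_dev,
				    struct dev_pm_qos_request *req,
				    s32 latency_us)
{
	/* constrains the resume latency menu_select() now honours per CPU */
	return dev_pm_qos_add_request(cpu_dev, req,
				      DEV_PM_QOS_RESUME_LATENCY, latency_us);
}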
......
@@ -306,7 +306,7 @@ struct devfreq_event_dev *devfreq_event_add_edev(struct device *dev,
struct devfreq_event_desc *desc)
{
struct devfreq_event_dev *edev;
static atomic_t event_no = ATOMIC_INIT(0);
static atomic_t event_no = ATOMIC_INIT(-1);
int ret;
if (!dev || !desc)
@@ -329,7 +329,7 @@ struct devfreq_event_dev *devfreq_event_add_edev(struct device *dev,
edev->dev.class = devfreq_event_class;
edev->dev.release = devfreq_event_release_edev;
dev_set_name(&edev->dev, "event.%d", atomic_inc_return(&event_no) - 1);
dev_set_name(&edev->dev, "event%d", atomic_inc_return(&event_no));
ret = device_register(&edev->dev);
if (ret < 0) {
put_device(&edev->dev);
......
@@ -111,18 +111,16 @@ static void devfreq_set_freq_table(struct devfreq *devfreq)
return;
}
rcu_read_lock();
for (i = 0, freq = 0; i < profile->max_state; i++, freq++) {
opp = dev_pm_opp_find_freq_ceil(devfreq->dev.parent, &freq);
if (IS_ERR(opp)) {
devm_kfree(devfreq->dev.parent, profile->freq_table);
profile->max_state = 0;
rcu_read_unlock();
return;
}
dev_pm_opp_put(opp);
profile->freq_table[i] = freq;
}
rcu_read_unlock();
}
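The loop above is the common post-RCU iteration idiom; a standalone sketch that walks every OPP by repeatedly rounding up and immediately dropping the reference (helper name illustrative):

#include <linux/err.h>
#include <linux/pm_opp.h>

static void example_walk_opps(struct device *dev)
{
	unsigned long freq = 0;
	struct dev_pm_opp *opp;

	while (!IS_ERR(opp = dev_pm_opp_find_freq_ceil(dev, &freq))) {
		dev_pm_opp_put(opp);	/* only the frequency is kept */
		pr_info("OPP: %lu Hz\n", freq);
		freq++;			/* step past this OPP */
	}
}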
/**
@@ -130,7 +128,7 @@ static void devfreq_set_freq_table(struct devfreq *devfreq)
* @devfreq: the devfreq instance
* @freq: the update target frequency
*/
static int devfreq_update_status(struct devfreq *devfreq, unsigned long freq)
int devfreq_update_status(struct devfreq *devfreq, unsigned long freq)
{
int lev, prev_lev, ret = 0;
unsigned long cur_time;
@@ -166,6 +164,7 @@ static int devfreq_update_status(struct devfreq *devfreq, unsigned long freq)
devfreq->last_stat_updated = cur_time;
return ret;
}
EXPORT_SYMBOL(devfreq_update_status);
/**
* find_devfreq_governor() - find devfreq governor from name
@@ -474,11 +473,15 @@ static int devfreq_notifier_call(struct notifier_block *nb, unsigned long type,
}
/**
* _remove_devfreq() - Remove devfreq from the list and release its resources.
* @devfreq: the devfreq struct
* devfreq_dev_release() - Callback for struct device to release the device.
* @dev: the devfreq device
*
* Remove devfreq from the list and release its resources.
*/
static void _remove_devfreq(struct devfreq *devfreq)
static void devfreq_dev_release(struct device *dev)
{
struct devfreq *devfreq = to_devfreq(dev);
mutex_lock(&devfreq_list_lock);
if (IS_ERR(find_device_devfreq(devfreq->dev.parent))) {
mutex_unlock(&devfreq_list_lock);
@@ -499,19 +502,6 @@ static void _remove_devfreq(struct devfreq *devfreq)
kfree(devfreq);
}
/**
* devfreq_dev_release() - Callback for struct device to release the device.
* @dev: the devfreq device
*
* This calls _remove_devfreq() if _remove_devfreq() is not called.
*/
static void devfreq_dev_release(struct device *dev)
{
struct devfreq *devfreq = to_devfreq(dev);
_remove_devfreq(devfreq);
}
/**
* devfreq_add_device() - Add devfreq feature to the device
* @dev: the device to add devfreq feature.
@@ -527,6 +517,7 @@ struct devfreq *devfreq_add_device(struct device *dev,
{
struct devfreq *devfreq;
struct devfreq_governor *governor;
static atomic_t devfreq_no = ATOMIC_INIT(-1);
int err = 0;
if (!dev || !profile || !governor_name) {
@@ -538,15 +529,14 @@ struct devfreq *devfreq_add_device(struct device *dev,
devfreq = find_device_devfreq(dev);
mutex_unlock(&devfreq_list_lock);
if (!IS_ERR(devfreq)) {
dev_err(dev, "%s: Unable to create devfreq for the device. It already has one.\n", __func__);
dev_err(dev, "%s: Unable to create devfreq for the device.\n",
__func__);
err = -EINVAL;
goto err_out;
}
devfreq = kzalloc(sizeof(struct devfreq), GFP_KERNEL);
if (!devfreq) {
dev_err(dev, "%s: Unable to create devfreq for the device\n",
__func__);
err = -ENOMEM;
goto err_out;
}
@@ -569,18 +559,21 @@ struct devfreq *devfreq_add_device(struct device *dev,
mutex_lock(&devfreq->lock);
}
dev_set_name(&devfreq->dev, "%s", dev_name(dev));
dev_set_name(&devfreq->dev, "devfreq%d",
atomic_inc_return(&devfreq_no));
err = device_register(&devfreq->dev); err = device_register(&devfreq->dev);
if (err) { if (err) {
mutex_unlock(&devfreq->lock); mutex_unlock(&devfreq->lock);
goto err_out; goto err_out;
} }
devfreq->trans_table = devm_kzalloc(&devfreq->dev, sizeof(unsigned int) * devfreq->trans_table = devm_kzalloc(&devfreq->dev,
sizeof(unsigned int) *
devfreq->profile->max_state * devfreq->profile->max_state *
devfreq->profile->max_state, devfreq->profile->max_state,
GFP_KERNEL); GFP_KERNEL);
devfreq->time_in_state = devm_kzalloc(&devfreq->dev, sizeof(unsigned long) * devfreq->time_in_state = devm_kzalloc(&devfreq->dev,
sizeof(unsigned long) *
devfreq->profile->max_state, devfreq->profile->max_state,
GFP_KERNEL); GFP_KERNEL);
devfreq->last_stat_updated = jiffies; devfreq->last_stat_updated = jiffies;
...@@ -939,6 +932,9 @@ static ssize_t governor_store(struct device *dev, struct device_attribute *attr, ...@@ -939,6 +932,9 @@ static ssize_t governor_store(struct device *dev, struct device_attribute *attr,
if (df->governor == governor) { if (df->governor == governor) {
ret = 0; ret = 0;
goto out; goto out;
} else if (df->governor->immutable || governor->immutable) {
ret = -EINVAL;
goto out;
} }
if (df->governor) { if (df->governor) {
...@@ -968,13 +964,33 @@ static ssize_t available_governors_show(struct device *d, ...@@ -968,13 +964,33 @@ static ssize_t available_governors_show(struct device *d,
struct device_attribute *attr, struct device_attribute *attr,
char *buf) char *buf)
{ {
struct devfreq_governor *tmp_governor; struct devfreq *df = to_devfreq(d);
ssize_t count = 0; ssize_t count = 0;
mutex_lock(&devfreq_list_lock); mutex_lock(&devfreq_list_lock);
list_for_each_entry(tmp_governor, &devfreq_governor_list, node)
/*
* The devfreq with immutable governor (e.g., passive) shows
* only own governor.
*/
if (df->governor->immutable) {
count = scnprintf(&buf[count], DEVFREQ_NAME_LEN,
"%s ", df->governor_name);
/*
* The devfreq device shows the registered governor except for
* immutable governors such as passive governor .
*/
} else {
struct devfreq_governor *governor;
list_for_each_entry(governor, &devfreq_governor_list, node) {
if (governor->immutable)
continue;
count += scnprintf(&buf[count], (PAGE_SIZE - count - 2), count += scnprintf(&buf[count], (PAGE_SIZE - count - 2),
"%s ", tmp_governor->name); "%s ", governor->name);
}
}
mutex_unlock(&devfreq_list_lock); mutex_unlock(&devfreq_list_lock);
/* Truncate the trailing space */ /* Truncate the trailing space */
...@@ -1112,17 +1128,16 @@ static ssize_t available_frequencies_show(struct device *d, ...@@ -1112,17 +1128,16 @@ static ssize_t available_frequencies_show(struct device *d,
ssize_t count = 0; ssize_t count = 0;
unsigned long freq = 0; unsigned long freq = 0;
rcu_read_lock();
do { do {
opp = dev_pm_opp_find_freq_ceil(dev, &freq); opp = dev_pm_opp_find_freq_ceil(dev, &freq);
if (IS_ERR(opp)) if (IS_ERR(opp))
break; break;
dev_pm_opp_put(opp);
count += scnprintf(&buf[count], (PAGE_SIZE - count - 2), count += scnprintf(&buf[count], (PAGE_SIZE - count - 2),
"%lu ", freq); "%lu ", freq);
freq++; freq++;
} while (1); } while (1);
rcu_read_unlock();
/* Truncate the trailing space */ /* Truncate the trailing space */
if (count) if (count)
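The loop above is the shape every OPP walk takes once RCU is gone: each successful dev_pm_opp_find_freq_ceil() hands back a reference that the caller must drop with dev_pm_opp_put(). A minimal, self-contained sketch of the same pattern; walk_opps() and its output text are illustrative, not from the patch:

	/* Sketch: enumerating all OPPs with the kref-based API. */
	#include <linux/pm_opp.h>

	static void walk_opps(struct device *dev)
	{
		struct dev_pm_opp *opp;
		unsigned long freq = 0;

		while (!IS_ERR(opp = dev_pm_opp_find_freq_ceil(dev, &freq))) {
			dev_pm_opp_put(opp);	/* drop the reference taken for us */
			pr_info("available OPP: %lu Hz\n", freq);
			freq++;			/* step past this OPP before the next lookup */
		}
	}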
@@ -1224,11 +1239,8 @@ subsys_initcall(devfreq_init);
  * @freq:	The frequency given to target function
  * @flags:	Flags handed from devfreq framework.
  *
- * Locking: This function must be called under rcu_read_lock(). opp is a rcu
- * protected pointer. The reason for the same is that the opp pointer which is
- * returned will remain valid for use with opp_get_{voltage, freq} only while
- * under the locked area. The pointer returned must be used prior to unlocking
- * with rcu_read_unlock() to maintain the integrity of the pointer.
+ * The callers are required to call dev_pm_opp_put() for the returned OPP after
+ * use.
  */
 struct dev_pm_opp *devfreq_recommended_opp(struct device *dev,
					   unsigned long *freq,
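The new contract is easiest to see at a call site. A sketch of the calling convention inside a driver's target() callback, mirroring what the exynos-bus and rk3399 hunks further down do (dev, freq and flags are assumed to be the callback's parameters):

	/* Sketch: post-RCU usage of devfreq_recommended_opp(). */
	struct dev_pm_opp *opp;
	unsigned long new_freq, new_volt;

	opp = devfreq_recommended_opp(dev, freq, flags);
	if (IS_ERR(opp))
		return PTR_ERR(opp);

	new_freq = dev_pm_opp_get_freq(opp);
	new_volt = dev_pm_opp_get_voltage(opp);
	dev_pm_opp_put(opp);	/* drop the reference taken on our behalf */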
@@ -1265,18 +1277,7 @@ EXPORT_SYMBOL(devfreq_recommended_opp);
  */
 int devfreq_register_opp_notifier(struct device *dev, struct devfreq *devfreq)
 {
-	struct srcu_notifier_head *nh;
-	int ret = 0;
-	rcu_read_lock();
-	nh = dev_pm_opp_get_notifier(dev);
-	if (IS_ERR(nh))
-		ret = PTR_ERR(nh);
-	rcu_read_unlock();
-	if (!ret)
-		ret = srcu_notifier_chain_register(nh, &devfreq->nb);
-	return ret;
+	return dev_pm_opp_register_notifier(dev, &devfreq->nb);
 }
 EXPORT_SYMBOL(devfreq_register_opp_notifier);
@@ -1292,18 +1293,7 @@ EXPORT_SYMBOL(devfreq_register_opp_notifier);
  */
 int devfreq_unregister_opp_notifier(struct device *dev, struct devfreq *devfreq)
 {
-	struct srcu_notifier_head *nh;
-	int ret = 0;
-	rcu_read_lock();
-	nh = dev_pm_opp_get_notifier(dev);
-	if (IS_ERR(nh))
-		ret = PTR_ERR(nh);
-	rcu_read_unlock();
-	if (!ret)
-		ret = srcu_notifier_chain_unregister(nh, &devfreq->nb);
-	return ret;
+	return dev_pm_opp_unregister_notifier(dev, &devfreq->nb);
 }
 EXPORT_SYMBOL(devfreq_unregister_opp_notifier);
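The two wrappers above now lean entirely on the simplified OPP notifier API, which hides the SRCU chain behind the OPP core. A sketch of registering directly against it; my_opp_event and my_nb are hypothetical, while OPP_EVENT_ADD/OPP_EVENT_REMOVE are the events defined in <linux/pm_opp.h>:

	#include <linux/pm_opp.h>
	#include <linux/notifier.h>

	static int my_opp_event(struct notifier_block *nb, unsigned long event,
				void *data)
	{
		switch (event) {
		case OPP_EVENT_ADD:
		case OPP_EVENT_REMOVE:
			/* re-evaluate frequency limits here */
			break;
		}
		return NOTIFY_OK;
	}

	static struct notifier_block my_nb = { .notifier_call = my_opp_event };

	/* dev_pm_opp_register_notifier(dev, &my_nb);   -- on probe   */
	/* dev_pm_opp_unregister_notifier(dev, &my_nb); -- on remove  */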
...
@@ -17,13 +17,13 @@
 #include <linux/module.h>
 #include <linux/of_address.h>
 #include <linux/platform_device.h>
+#include <linux/regmap.h>
 #include <linux/suspend.h>
 #include <linux/devfreq-event.h>
 #include "exynos-ppmu.h"
 struct exynos_ppmu_data {
-	void __iomem *base;
 	struct clk *clk;
 };
@@ -33,6 +33,7 @@ struct exynos_ppmu {
 	unsigned int num_events;
 	struct device *dev;
+	struct regmap *regmap;
 	struct exynos_ppmu_data ppmu;
 };
@@ -107,20 +108,28 @@ static int exynos_ppmu_find_ppmu_id(struct devfreq_event_dev *edev)
 static int exynos_ppmu_disable(struct devfreq_event_dev *edev)
 {
 	struct exynos_ppmu *info = devfreq_event_get_drvdata(edev);
+	int ret;
 	u32 pmnc;
 	/* Disable all counters */
-	__raw_writel(PPMU_CCNT_MASK |
-		     PPMU_PMCNT0_MASK |
-		     PPMU_PMCNT1_MASK |
-		     PPMU_PMCNT2_MASK |
-		     PPMU_PMCNT3_MASK,
-		     info->ppmu.base + PPMU_CNTENC);
+	ret = regmap_write(info->regmap, PPMU_CNTENC,
+			   PPMU_CCNT_MASK |
			   PPMU_PMCNT0_MASK |
			   PPMU_PMCNT1_MASK |
			   PPMU_PMCNT2_MASK |
+			   PPMU_PMCNT3_MASK);
+	if (ret < 0)
+		return ret;
 	/* Disable PPMU */
-	pmnc = __raw_readl(info->ppmu.base + PPMU_PMNC);
+	ret = regmap_read(info->regmap, PPMU_PMNC, &pmnc);
+	if (ret < 0)
+		return ret;
+
 	pmnc &= ~PPMU_PMNC_ENABLE_MASK;
-	__raw_writel(pmnc, info->ppmu.base + PPMU_PMNC);
+	ret = regmap_write(info->regmap, PPMU_PMNC, pmnc);
+	if (ret < 0)
+		return ret;
 	return 0;
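The read-modify-write pairs introduced above are the direct, checked translation of the old __raw_readl()/__raw_writel() sequence. regmap also offers regmap_update_bits(), which would fold each pair into one call; a hedged alternative sketch, not what this patch does:

	/* Alternative sketch: one regmap_update_bits() call replaces the
	 * regmap_read()/regmap_write() pair that clears the enable bit. */
	ret = regmap_update_bits(info->regmap, PPMU_PMNC,
				 PPMU_PMNC_ENABLE_MASK, 0);
	if (ret < 0)
		return ret;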
@@ -129,29 +138,42 @@ static int exynos_ppmu_set_event(struct devfreq_event_dev *edev)
 {
 	struct exynos_ppmu *info = devfreq_event_get_drvdata(edev);
 	int id = exynos_ppmu_find_ppmu_id(edev);
+	int ret;
 	u32 pmnc, cntens;
 	if (id < 0)
		return id;
 	/* Enable specific counter */
-	cntens = __raw_readl(info->ppmu.base + PPMU_CNTENS);
+	ret = regmap_read(info->regmap, PPMU_CNTENS, &cntens);
+	if (ret < 0)
+		return ret;
+
 	cntens |= (PPMU_CCNT_MASK | (PPMU_ENABLE << id));
-	__raw_writel(cntens, info->ppmu.base + PPMU_CNTENS);
+	ret = regmap_write(info->regmap, PPMU_CNTENS, cntens);
+	if (ret < 0)
+		return ret;
 	/* Set the event of Read/Write data count */
-	__raw_writel(PPMU_RO_DATA_CNT | PPMU_WO_DATA_CNT,
-		     info->ppmu.base + PPMU_BEVTxSEL(id));
+	ret = regmap_write(info->regmap, PPMU_BEVTxSEL(id),
+			   PPMU_RO_DATA_CNT | PPMU_WO_DATA_CNT);
+	if (ret < 0)
+		return ret;
 	/* Reset cycle counter/performance counter and enable PPMU */
-	pmnc = __raw_readl(info->ppmu.base + PPMU_PMNC);
+	ret = regmap_read(info->regmap, PPMU_PMNC, &pmnc);
+	if (ret < 0)
+		return ret;
+
 	pmnc &= ~(PPMU_PMNC_ENABLE_MASK
			| PPMU_PMNC_COUNTER_RESET_MASK
			| PPMU_PMNC_CC_RESET_MASK);
 	pmnc |= (PPMU_ENABLE << PPMU_PMNC_ENABLE_SHIFT);
 	pmnc |= (PPMU_ENABLE << PPMU_PMNC_COUNTER_RESET_SHIFT);
 	pmnc |= (PPMU_ENABLE << PPMU_PMNC_CC_RESET_SHIFT);
-	__raw_writel(pmnc, info->ppmu.base + PPMU_PMNC);
+	ret = regmap_write(info->regmap, PPMU_PMNC, pmnc);
+	if (ret < 0)
+		return ret;
 	return 0;
 }
@@ -161,40 +183,64 @@ static int exynos_ppmu_get_event(struct devfreq_event_dev *edev,
 {
 	struct exynos_ppmu *info = devfreq_event_get_drvdata(edev);
 	int id = exynos_ppmu_find_ppmu_id(edev);
-	u32 pmnc, cntenc;
+	unsigned int total_count, load_count;
+	unsigned int pmcnt3_high, pmcnt3_low;
+	unsigned int pmnc, cntenc;
+	int ret;
 	if (id < 0)
		return -EINVAL;
 	/* Disable PPMU */
-	pmnc = __raw_readl(info->ppmu.base + PPMU_PMNC);
+	ret = regmap_read(info->regmap, PPMU_PMNC, &pmnc);
+	if (ret < 0)
+		return ret;
+
 	pmnc &= ~PPMU_PMNC_ENABLE_MASK;
-	__raw_writel(pmnc, info->ppmu.base + PPMU_PMNC);
+	ret = regmap_write(info->regmap, PPMU_PMNC, pmnc);
+	if (ret < 0)
+		return ret;
 	/* Read cycle count */
-	edata->total_count = __raw_readl(info->ppmu.base + PPMU_CCNT);
+	ret = regmap_read(info->regmap, PPMU_CCNT, &total_count);
+	if (ret < 0)
+		return ret;
+	edata->total_count = total_count;
 	/* Read performance count */
 	switch (id) {
 	case PPMU_PMNCNT0:
 	case PPMU_PMNCNT1:
 	case PPMU_PMNCNT2:
-		edata->load_count
-			= __raw_readl(info->ppmu.base + PPMU_PMNCT(id));
+		ret = regmap_read(info->regmap, PPMU_PMNCT(id), &load_count);
+		if (ret < 0)
+			return ret;
+		edata->load_count = load_count;
		break;
 	case PPMU_PMNCNT3:
-		edata->load_count =
-			((__raw_readl(info->ppmu.base + PPMU_PMCNT3_HIGH) << 8)
-			| __raw_readl(info->ppmu.base + PPMU_PMCNT3_LOW));
+		ret = regmap_read(info->regmap, PPMU_PMCNT3_HIGH, &pmcnt3_high);
+		if (ret < 0)
+			return ret;
+		ret = regmap_read(info->regmap, PPMU_PMCNT3_LOW, &pmcnt3_low);
+		if (ret < 0)
+			return ret;
+		edata->load_count = ((pmcnt3_high << 8) | pmcnt3_low);
		break;
 	default:
		return -EINVAL;
 	}
 	/* Disable specific counter */
-	cntenc = __raw_readl(info->ppmu.base + PPMU_CNTENC);
+	ret = regmap_read(info->regmap, PPMU_CNTENC, &cntenc);
+	if (ret < 0)
+		return ret;
+
 	cntenc |= (PPMU_CCNT_MASK | (PPMU_ENABLE << id));
-	__raw_writel(cntenc, info->ppmu.base + PPMU_CNTENC);
+	ret = regmap_write(info->regmap, PPMU_CNTENC, cntenc);
+	if (ret < 0)
+		return ret;
 	dev_dbg(&edev->dev, "%s (event: %ld/%ld)\n", edev->desc->name,
		edata->load_count, edata->total_count);
@@ -214,36 +260,93 @@ static const struct devfreq_event_ops exynos_ppmu_ops = {
 static int exynos_ppmu_v2_disable(struct devfreq_event_dev *edev)
 {
 	struct exynos_ppmu *info = devfreq_event_get_drvdata(edev);
+	int ret;
 	u32 pmnc, clear;
 	/* Disable all counters */
 	clear = (PPMU_CCNT_MASK | PPMU_PMCNT0_MASK | PPMU_PMCNT1_MASK
		| PPMU_PMCNT2_MASK | PPMU_PMCNT3_MASK);
-	__raw_writel(clear, info->ppmu.base + PPMU_V2_FLAG);
-	__raw_writel(clear, info->ppmu.base + PPMU_V2_INTENC);
-	__raw_writel(clear, info->ppmu.base + PPMU_V2_CNTENC);
-	__raw_writel(clear, info->ppmu.base + PPMU_V2_CNT_RESET);
-	__raw_writel(0x0, info->ppmu.base + PPMU_V2_CIG_CFG0);
-	__raw_writel(0x0, info->ppmu.base + PPMU_V2_CIG_CFG1);
-	__raw_writel(0x0, info->ppmu.base + PPMU_V2_CIG_CFG2);
-	__raw_writel(0x0, info->ppmu.base + PPMU_V2_CIG_RESULT);
-	__raw_writel(0x0, info->ppmu.base + PPMU_V2_CNT_AUTO);
-	__raw_writel(0x0, info->ppmu.base + PPMU_V2_CH_EV0_TYPE);
-	__raw_writel(0x0, info->ppmu.base + PPMU_V2_CH_EV1_TYPE);
-	__raw_writel(0x0, info->ppmu.base + PPMU_V2_CH_EV2_TYPE);
-	__raw_writel(0x0, info->ppmu.base + PPMU_V2_CH_EV3_TYPE);
-	__raw_writel(0x0, info->ppmu.base + PPMU_V2_SM_ID_V);
-	__raw_writel(0x0, info->ppmu.base + PPMU_V2_SM_ID_A);
-	__raw_writel(0x0, info->ppmu.base + PPMU_V2_SM_OTHERS_V);
-	__raw_writel(0x0, info->ppmu.base + PPMU_V2_SM_OTHERS_A);
-	__raw_writel(0x0, info->ppmu.base + PPMU_V2_INTERRUPT_RESET);
+	ret = regmap_write(info->regmap, PPMU_V2_FLAG, clear);
+	if (ret < 0)
+		return ret;
+
+	ret = regmap_write(info->regmap, PPMU_V2_INTENC, clear);
+	if (ret < 0)
+		return ret;
+
+	ret = regmap_write(info->regmap, PPMU_V2_CNTENC, clear);
+	if (ret < 0)
+		return ret;
+
+	ret = regmap_write(info->regmap, PPMU_V2_CNT_RESET, clear);
+	if (ret < 0)
+		return ret;
+
+	ret = regmap_write(info->regmap, PPMU_V2_CIG_CFG0, 0x0);
+	if (ret < 0)
+		return ret;
+
+	ret = regmap_write(info->regmap, PPMU_V2_CIG_CFG1, 0x0);
+	if (ret < 0)
+		return ret;
+
+	ret = regmap_write(info->regmap, PPMU_V2_CIG_CFG2, 0x0);
+	if (ret < 0)
+		return ret;
+
+	ret = regmap_write(info->regmap, PPMU_V2_CIG_RESULT, 0x0);
+	if (ret < 0)
+		return ret;
+
+	ret = regmap_write(info->regmap, PPMU_V2_CNT_AUTO, 0x0);
+	if (ret < 0)
+		return ret;
+
+	ret = regmap_write(info->regmap, PPMU_V2_CH_EV0_TYPE, 0x0);
+	if (ret < 0)
+		return ret;
+
+	ret = regmap_write(info->regmap, PPMU_V2_CH_EV1_TYPE, 0x0);
+	if (ret < 0)
+		return ret;
+
+	ret = regmap_write(info->regmap, PPMU_V2_CH_EV2_TYPE, 0x0);
+	if (ret < 0)
+		return ret;
+
+	ret = regmap_write(info->regmap, PPMU_V2_CH_EV3_TYPE, 0x0);
+	if (ret < 0)
+		return ret;
+
+	ret = regmap_write(info->regmap, PPMU_V2_SM_ID_V, 0x0);
+	if (ret < 0)
+		return ret;
+
+	ret = regmap_write(info->regmap, PPMU_V2_SM_ID_A, 0x0);
+	if (ret < 0)
+		return ret;
+
+	ret = regmap_write(info->regmap, PPMU_V2_SM_OTHERS_V, 0x0);
+	if (ret < 0)
+		return ret;
+
+	ret = regmap_write(info->regmap, PPMU_V2_SM_OTHERS_A, 0x0);
+	if (ret < 0)
+		return ret;
+
+	ret = regmap_write(info->regmap, PPMU_V2_INTERRUPT_RESET, 0x0);
+	if (ret < 0)
+		return ret;
 	/* Disable PPMU */
-	pmnc = __raw_readl(info->ppmu.base + PPMU_V2_PMNC);
+	ret = regmap_read(info->regmap, PPMU_V2_PMNC, &pmnc);
+	if (ret < 0)
+		return ret;
+
 	pmnc &= ~PPMU_PMNC_ENABLE_MASK;
-	__raw_writel(pmnc, info->ppmu.base + PPMU_V2_PMNC);
+	ret = regmap_write(info->regmap, PPMU_V2_PMNC, pmnc);
+	if (ret < 0)
+		return ret;
 	return 0;
 }
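The v2 disable path now issues eighteen near-identical checked writes. A table-driven loop is a common way to keep such sequences readable; a hedged sketch of that alternative (the array name is hypothetical, and "info" is the driver state from the function above), not what the patch does:

	/* Hypothetical condensed form of the zero-clearing writes above. */
	static const unsigned int ppmu_v2_clear_regs[] = {
		PPMU_V2_CIG_CFG0, PPMU_V2_CIG_CFG1, PPMU_V2_CIG_CFG2,
		PPMU_V2_CIG_RESULT, PPMU_V2_CNT_AUTO,
		PPMU_V2_CH_EV0_TYPE, PPMU_V2_CH_EV1_TYPE,
		PPMU_V2_CH_EV2_TYPE, PPMU_V2_CH_EV3_TYPE,
		PPMU_V2_SM_ID_V, PPMU_V2_SM_ID_A,
		PPMU_V2_SM_OTHERS_V, PPMU_V2_SM_OTHERS_A,
		PPMU_V2_INTERRUPT_RESET,
	};
	int i, ret;

	for (i = 0; i < ARRAY_SIZE(ppmu_v2_clear_regs); i++) {
		ret = regmap_write(info->regmap, ppmu_v2_clear_regs[i], 0x0);
		if (ret < 0)
			return ret;
	}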
@@ -251,30 +354,43 @@ static int exynos_ppmu_v2_disable(struct devfreq_event_dev *edev)
 static int exynos_ppmu_v2_set_event(struct devfreq_event_dev *edev)
 {
 	struct exynos_ppmu *info = devfreq_event_get_drvdata(edev);
+	unsigned int pmnc, cntens;
 	int id = exynos_ppmu_find_ppmu_id(edev);
-	u32 pmnc, cntens;
+	int ret;
 	/* Enable all counters */
-	cntens = __raw_readl(info->ppmu.base + PPMU_V2_CNTENS);
+	ret = regmap_read(info->regmap, PPMU_V2_CNTENS, &cntens);
+	if (ret < 0)
+		return ret;
+
 	cntens |= (PPMU_CCNT_MASK | (PPMU_ENABLE << id));
-	__raw_writel(cntens, info->ppmu.base + PPMU_V2_CNTENS);
+	ret = regmap_write(info->regmap, PPMU_V2_CNTENS, cntens);
+	if (ret < 0)
+		return ret;
 	/* Set the event of Read/Write data count */
 	switch (id) {
 	case PPMU_PMNCNT0:
 	case PPMU_PMNCNT1:
 	case PPMU_PMNCNT2:
-		__raw_writel(PPMU_V2_RO_DATA_CNT | PPMU_V2_WO_DATA_CNT,
-				info->ppmu.base + PPMU_V2_CH_EVx_TYPE(id));
+		ret = regmap_write(info->regmap, PPMU_V2_CH_EVx_TYPE(id),
+				   PPMU_V2_RO_DATA_CNT | PPMU_V2_WO_DATA_CNT);
+		if (ret < 0)
+			return ret;
		break;
 	case PPMU_PMNCNT3:
-		__raw_writel(PPMU_V2_EVT3_RW_DATA_CNT,
-				info->ppmu.base + PPMU_V2_CH_EVx_TYPE(id));
+		ret = regmap_write(info->regmap, PPMU_V2_CH_EVx_TYPE(id),
+				   PPMU_V2_EVT3_RW_DATA_CNT);
+		if (ret < 0)
+			return ret;
		break;
 	}
 	/* Reset cycle counter/performance counter and enable PPMU */
-	pmnc = __raw_readl(info->ppmu.base + PPMU_V2_PMNC);
+	ret = regmap_read(info->regmap, PPMU_V2_PMNC, &pmnc);
+	if (ret < 0)
+		return ret;
+
 	pmnc &= ~(PPMU_PMNC_ENABLE_MASK
			| PPMU_PMNC_COUNTER_RESET_MASK
			| PPMU_PMNC_CC_RESET_MASK
@@ -284,7 +400,10 @@ static int exynos_ppmu_v2_set_event(struct devfreq_event_dev *edev)
 	pmnc |= (PPMU_ENABLE << PPMU_PMNC_COUNTER_RESET_SHIFT);
 	pmnc |= (PPMU_ENABLE << PPMU_PMNC_CC_RESET_SHIFT);
 	pmnc |= (PPMU_V2_MODE_MANUAL << PPMU_V2_PMNC_START_MODE_SHIFT);
-	__raw_writel(pmnc, info->ppmu.base + PPMU_V2_PMNC);
+
+	ret = regmap_write(info->regmap, PPMU_V2_PMNC, pmnc);
+	if (ret < 0)
+		return ret;
 	return 0;
 }
@@ -294,37 +413,61 @@ static int exynos_ppmu_v2_get_event(struct devfreq_event_dev *edev,
 {
 	struct exynos_ppmu *info = devfreq_event_get_drvdata(edev);
 	int id = exynos_ppmu_find_ppmu_id(edev);
-	u32 pmnc, cntenc;
-	u32 pmcnt_high, pmcnt_low;
-	u64 load_count = 0;
+	int ret;
+	unsigned int pmnc, cntenc;
+	unsigned int pmcnt_high, pmcnt_low;
+	unsigned int total_count, count;
+	unsigned long load_count = 0;
 	/* Disable PPMU */
-	pmnc = __raw_readl(info->ppmu.base + PPMU_V2_PMNC);
+	ret = regmap_read(info->regmap, PPMU_V2_PMNC, &pmnc);
+	if (ret < 0)
+		return ret;
+
 	pmnc &= ~PPMU_PMNC_ENABLE_MASK;
-	__raw_writel(pmnc, info->ppmu.base + PPMU_V2_PMNC);
+	ret = regmap_write(info->regmap, PPMU_V2_PMNC, pmnc);
+	if (ret < 0)
+		return ret;
 	/* Read cycle count and performance count */
-	edata->total_count = __raw_readl(info->ppmu.base + PPMU_V2_CCNT);
+	ret = regmap_read(info->regmap, PPMU_V2_CCNT, &total_count);
+	if (ret < 0)
+		return ret;
+	edata->total_count = total_count;
 	switch (id) {
 	case PPMU_PMNCNT0:
 	case PPMU_PMNCNT1:
 	case PPMU_PMNCNT2:
-		load_count = __raw_readl(info->ppmu.base + PPMU_V2_PMNCT(id));
+		ret = regmap_read(info->regmap, PPMU_V2_PMNCT(id), &count);
+		if (ret < 0)
+			return ret;
+		load_count = count;
		break;
 	case PPMU_PMNCNT3:
-		pmcnt_high = __raw_readl(info->ppmu.base + PPMU_V2_PMCNT3_HIGH);
-		pmcnt_low = __raw_readl(info->ppmu.base + PPMU_V2_PMCNT3_LOW);
-		load_count = ((u64)((pmcnt_high & 0xff)) << 32)
-			   + (u64)pmcnt_low;
+		ret = regmap_read(info->regmap, PPMU_V2_PMCNT3_HIGH,
+				  &pmcnt_high);
+		if (ret < 0)
+			return ret;
+		ret = regmap_read(info->regmap, PPMU_V2_PMCNT3_LOW, &pmcnt_low);
+		if (ret < 0)
+			return ret;
+		load_count = ((u64)(pmcnt_high & 0xff) << 32) + (u64)pmcnt_low;
		break;
 	}
 	edata->load_count = load_count;
 	/* Disable all counters */
-	cntenc = __raw_readl(info->ppmu.base + PPMU_V2_CNTENC);
+	ret = regmap_read(info->regmap, PPMU_V2_CNTENC, &cntenc);
+	if (ret < 0)
+		return ret;
+
 	cntenc |= (PPMU_CCNT_MASK | (PPMU_ENABLE << id));
-	__raw_writel(cntenc, info->ppmu.base + PPMU_V2_CNTENC);
+	ret = regmap_write(info->regmap, PPMU_V2_CNTENC, cntenc);
+	if (ret < 0)
+		return ret;
 	dev_dbg(&edev->dev, "%25s (load: %ld / %ld)\n", edev->desc->name,
		edata->load_count, edata->total_count);
@@ -411,10 +554,19 @@ static int of_get_devfreq_events(struct device_node *np,
 	return 0;
 }
-static int exynos_ppmu_parse_dt(struct exynos_ppmu *info)
+static struct regmap_config exynos_ppmu_regmap_config = {
+	.reg_bits = 32,
+	.val_bits = 32,
+	.reg_stride = 4,
+};
+
+static int exynos_ppmu_parse_dt(struct platform_device *pdev,
+				struct exynos_ppmu *info)
 {
 	struct device *dev = info->dev;
 	struct device_node *np = dev->of_node;
+	struct resource *res;
+	void __iomem *base;
 	int ret = 0;
 	if (!np) {
@@ -423,10 +575,17 @@ static int exynos_ppmu_parse_dt(struct exynos_ppmu *info)
 	}
 	/* Maps the memory mapped IO to control PPMU register */
-	info->ppmu.base = of_iomap(np, 0);
-	if (IS_ERR_OR_NULL(info->ppmu.base)) {
-		dev_err(dev, "failed to map memory region\n");
-		return -ENOMEM;
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	base = devm_ioremap_resource(dev, res);
+	if (IS_ERR(base))
+		return PTR_ERR(base);
+
+	exynos_ppmu_regmap_config.max_register = resource_size(res) - 4;
+	info->regmap = devm_regmap_init_mmio(dev, base,
+					&exynos_ppmu_regmap_config);
+	if (IS_ERR(info->regmap)) {
+		dev_err(dev, "failed to initialize regmap\n");
+		return PTR_ERR(info->regmap);
 	}
 	info->ppmu.clk = devm_clk_get(dev, "ppmu");
@@ -438,15 +597,10 @@ static int exynos_ppmu_parse_dt(struct exynos_ppmu *info)
 	ret = of_get_devfreq_events(np, info);
 	if (ret < 0) {
		dev_err(dev, "failed to parse exynos ppmu dt node\n");
-		goto err;
+		return ret;
 	}
 	return 0;
-err:
-	iounmap(info->ppmu.base);
-	return ret;
 }
 static int exynos_ppmu_probe(struct platform_device *pdev)
@@ -463,7 +617,7 @@ static int exynos_ppmu_probe(struct platform_device *pdev)
 	info->dev = &pdev->dev;
 	/* Parse dt data to get resource */
-	ret = exynos_ppmu_parse_dt(info);
+	ret = exynos_ppmu_parse_dt(pdev, info);
 	if (ret < 0) {
		dev_err(&pdev->dev,
			"failed to parse devicetree for resource\n");
@@ -476,8 +630,7 @@ static int exynos_ppmu_probe(struct platform_device *pdev)
 	if (!info->edev) {
		dev_err(&pdev->dev,
			"failed to allocate memory devfreq-event devices\n");
-		ret = -ENOMEM;
-		goto err;
+		return -ENOMEM;
 	}
 	edev = info->edev;
 	platform_set_drvdata(pdev, info);
@@ -488,17 +641,16 @@ static int exynos_ppmu_probe(struct platform_device *pdev)
			ret = PTR_ERR(edev[i]);
			dev_err(&pdev->dev,
				"failed to add devfreq-event device\n");
-			goto err;
+			return PTR_ERR(edev[i]);
		}
+
+		pr_info("exynos-ppmu: new PPMU device registered %s (%s)\n",
+			dev_name(&pdev->dev), desc[i].name);
 	}
 	clk_prepare_enable(info->ppmu.clk);
 	return 0;
-err:
-	iounmap(info->ppmu.base);
-	return ret;
 }
 static int exynos_ppmu_remove(struct platform_device *pdev)
@@ -506,7 +658,6 @@ static int exynos_ppmu_remove(struct platform_device *pdev)
 	struct exynos_ppmu *info = platform_get_drvdata(pdev);
 	clk_disable_unprepare(info->ppmu.clk);
-	iounmap(info->ppmu.base);
 	return 0;
 }
...
@@ -103,18 +103,17 @@ static int exynos_bus_target(struct device *dev, unsigned long *freq, u32 flags)
 	int ret = 0;
 	/* Get new opp-bus instance according to new bus clock */
-	rcu_read_lock();
 	new_opp = devfreq_recommended_opp(dev, freq, flags);
 	if (IS_ERR(new_opp)) {
		dev_err(dev, "failed to get recommended opp instance\n");
-		rcu_read_unlock();
		return PTR_ERR(new_opp);
 	}
 	new_freq = dev_pm_opp_get_freq(new_opp);
 	new_volt = dev_pm_opp_get_voltage(new_opp);
+	dev_pm_opp_put(new_opp);
+
 	old_freq = bus->curr_freq;
-	rcu_read_unlock();
 	if (old_freq == new_freq)
		return 0;
@@ -147,8 +146,8 @@ static int exynos_bus_target(struct device *dev, unsigned long *freq, u32 flags)
 	}
 	bus->curr_freq = new_freq;
-	dev_dbg(dev, "Set the frequency of bus (%lukHz -> %lukHz)\n",
-			old_freq/1000, new_freq/1000);
+	dev_dbg(dev, "Set the frequency of bus (%luHz -> %luHz, %luHz)\n",
+			old_freq, new_freq, clk_get_rate(bus->clk));
 out:
 	mutex_unlock(&bus->lock);
@@ -214,17 +213,16 @@ static int exynos_bus_passive_target(struct device *dev, unsigned long *freq,
 	int ret = 0;
 	/* Get new opp-bus instance according to new bus clock */
-	rcu_read_lock();
 	new_opp = devfreq_recommended_opp(dev, freq, flags);
 	if (IS_ERR(new_opp)) {
		dev_err(dev, "failed to get recommended opp instance\n");
-		rcu_read_unlock();
		return PTR_ERR(new_opp);
 	}
 	new_freq = dev_pm_opp_get_freq(new_opp);
+	dev_pm_opp_put(new_opp);
+
 	old_freq = bus->curr_freq;
-	rcu_read_unlock();
 	if (old_freq == new_freq)
		return 0;
@@ -241,8 +239,8 @@ static int exynos_bus_passive_target(struct device *dev, unsigned long *freq,
 	*freq = new_freq;
 	bus->curr_freq = new_freq;
-	dev_dbg(dev, "Set the frequency of bus (%lukHz -> %lukHz)\n",
-			old_freq/1000, new_freq/1000);
+	dev_dbg(dev, "Set the frequency of bus (%luHz -> %luHz, %luHz)\n",
+			old_freq, new_freq, clk_get_rate(bus->clk));
 out:
 	mutex_unlock(&bus->lock);
@@ -358,16 +356,14 @@ static int exynos_bus_parse_of(struct device_node *np,
 	rate = clk_get_rate(bus->clk);
-	rcu_read_lock();
 	opp = devfreq_recommended_opp(dev, &rate, 0);
 	if (IS_ERR(opp)) {
		dev_err(dev, "failed to find dev_pm_opp\n");
-		rcu_read_unlock();
		ret = PTR_ERR(opp);
		goto err_opp;
 	}
 	bus->curr_freq = dev_pm_opp_get_freq(opp);
-	rcu_read_unlock();
+	dev_pm_opp_put(opp);
 	return 0;
...
@@ -38,4 +38,6 @@ extern void devfreq_interval_update(struct devfreq *devfreq,
 extern int devfreq_add_governor(struct devfreq_governor *governor);
 extern int devfreq_remove_governor(struct devfreq_governor *governor);
+extern int devfreq_update_status(struct devfreq *devfreq, unsigned long freq);
+
 #endif /* _GOVERNOR_H */
@@ -59,14 +59,14 @@ static int devfreq_passive_get_target_freq(struct devfreq *devfreq,
	 * list of the parent device, because in this case *freq is a
	 * temporary value decided by the ondemand governor.
	 */
-	rcu_read_lock();
	opp = devfreq_recommended_opp(parent_devfreq->dev.parent, freq, 0);
-	rcu_read_unlock();
	if (IS_ERR(opp)) {
		ret = PTR_ERR(opp);
		goto out;
	}
+	dev_pm_opp_put(opp);
+
	/*
	 * Get the OPP table's index of the frequency decided by the
	 * governor of the parent device.
@@ -112,6 +112,11 @@ static int update_devfreq_passive(struct devfreq *devfreq, unsigned long freq)
	if (ret < 0)
		goto out;
+	if (devfreq->profile->freq_table
+		&& (devfreq_update_status(devfreq, freq)))
+		dev_err(&devfreq->dev,
+			"Couldn't update frequency transition information.\n");
+
	devfreq->previous_freq = freq;
 out:
@@ -179,6 +184,7 @@ static int devfreq_passive_event_handler(struct devfreq *devfreq,
 static struct devfreq_governor devfreq_passive = {
	.name = "passive",
+	.immutable = 1,
	.get_target_freq = devfreq_passive_get_target_freq,
	.event_handler = devfreq_passive_event_handler,
 };
...
 /*
- * linux/drivers/devfreq/governor_simpleondemand.c
+ * linux/drivers/devfreq/governor_userspace.c
  *
  * Copyright (C) 2011 Samsung Electronics
  *	MyungJoo Ham <myungjoo.ham@samsung.com>
@@ -50,7 +50,6 @@ static ssize_t store_freq(struct device *dev, struct device_attribute *attr,
 	unsigned long wanted;
 	int err = 0;
-
 	mutex_lock(&devfreq->lock);
 	data = devfreq->data;
@@ -112,7 +111,13 @@ static int userspace_init(struct devfreq *devfreq)
 static void userspace_exit(struct devfreq *devfreq)
 {
-	sysfs_remove_group(&devfreq->dev.kobj, &dev_attr_group);
+	/*
+	 * Remove the sysfs entry, unless this is being called after
+	 * device_del(), which should have done this already via kobject_del().
+	 */
+	if (devfreq->dev.kobj.sd)
+		sysfs_remove_group(&devfreq->dev.kobj, &dev_attr_group);
+
 	kfree(devfreq->data);
 	devfreq->data = NULL;
 }
...
@@ -91,17 +91,13 @@ static int rk3399_dmcfreq_target(struct device *dev, unsigned long *freq,
 	unsigned long target_volt, target_rate;
 	int err;
-	rcu_read_lock();
 	opp = devfreq_recommended_opp(dev, freq, flags);
-	if (IS_ERR(opp)) {
-		rcu_read_unlock();
+	if (IS_ERR(opp))
		return PTR_ERR(opp);
-	}
 	target_rate = dev_pm_opp_get_freq(opp);
 	target_volt = dev_pm_opp_get_voltage(opp);
-	rcu_read_unlock();
+	dev_pm_opp_put(opp);
 	if (dmcfreq->rate == target_rate)
		return 0;
@@ -422,15 +418,13 @@ static int rk3399_dmcfreq_probe(struct platform_device *pdev)
 	data->rate = clk_get_rate(data->dmc_clk);
-	rcu_read_lock();
 	opp = devfreq_recommended_opp(dev, &data->rate, 0);
-	if (IS_ERR(opp)) {
-		rcu_read_unlock();
+	if (IS_ERR(opp))
		return PTR_ERR(opp);
-	}
 	data->rate = dev_pm_opp_get_freq(opp);
 	data->volt = dev_pm_opp_get_voltage(opp);
-	rcu_read_unlock();
+	dev_pm_opp_put(opp);
 	rk3399_devfreq_dmc_profile.initial_freq = data->rate;
...
@@ -487,15 +487,13 @@ static int tegra_devfreq_target(struct device *dev, unsigned long *freq,
 	struct dev_pm_opp *opp;
 	unsigned long rate = *freq * KHZ;
-	rcu_read_lock();
 	opp = devfreq_recommended_opp(dev, &rate, flags);
 	if (IS_ERR(opp)) {
-		rcu_read_unlock();
		dev_err(dev, "Failed to find opp for %lu KHz\n", *freq);
		return PTR_ERR(opp);
 	}
 	rate = dev_pm_opp_get_freq(opp);
-	rcu_read_unlock();
+	dev_pm_opp_put(opp);
 	clk_set_min_rate(tegra->emc_clock, rate);
 	clk_set_rate(tegra->emc_clock, 0);
...
@@ -297,8 +297,6 @@ static int build_dyn_power_table(struct cpufreq_cooling_device *cpufreq_device,
 	if (!power_table)
		return -ENOMEM;
-	rcu_read_lock();
-
 	for (freq = 0, i = 0;
	     opp = dev_pm_opp_find_freq_ceil(dev, &freq), !IS_ERR(opp);
	     freq++, i++) {
@@ -306,13 +304,13 @@ static int build_dyn_power_table(struct cpufreq_cooling_device *cpufreq_device,
		u64 power;
		if (i >= num_opps) {
-			rcu_read_unlock();
			ret = -EAGAIN;
			goto free_power_table;
		}
		freq_mhz = freq / 1000000;
		voltage_mv = dev_pm_opp_get_voltage(opp) / 1000;
+		dev_pm_opp_put(opp);
		/*
		 * Do the multiplication with MHz and millivolt so as
@@ -328,8 +326,6 @@ static int build_dyn_power_table(struct cpufreq_cooling_device *cpufreq_device,
		power_table[i].power = power;
 	}
-	rcu_read_unlock();
-
 	if (i != num_opps) {
		ret = PTR_ERR(opp);
		goto free_power_table;
@@ -433,13 +429,10 @@ static int get_static_power(struct cpufreq_cooling_device *cpufreq_device,
		return 0;
 	}
-	rcu_read_lock();
-
 	opp = dev_pm_opp_find_freq_exact(cpufreq_device->cpu_dev, freq_hz,
					 true);
 	voltage = dev_pm_opp_get_voltage(opp);
-
-	rcu_read_unlock();
+	dev_pm_opp_put(opp);
 	if (voltage == 0) {
		dev_warn_ratelimited(cpufreq_device->cpu_dev,
...
@@ -113,15 +113,15 @@ static int partition_enable_opps(struct devfreq_cooling_device *dfc,
		unsigned int freq = dfc->freq_table[i];
		bool want_enable = i >= cdev_state ? true : false;
-		rcu_read_lock();
		opp = dev_pm_opp_find_freq_exact(dev, freq, !want_enable);
-		rcu_read_unlock();
		if (PTR_ERR(opp) == -ERANGE)
			continue;
		else if (IS_ERR(opp))
			return PTR_ERR(opp);
+		dev_pm_opp_put(opp);
+
		if (want_enable)
			ret = dev_pm_opp_enable(dev, freq);
		else
@@ -221,15 +221,12 @@ get_static_power(struct devfreq_cooling_device *dfc, unsigned long freq)
	if (!dfc->power_ops->get_static_power)
		return 0;
-	rcu_read_lock();
-
	opp = dev_pm_opp_find_freq_exact(dev, freq, true);
	if (IS_ERR(opp) && (PTR_ERR(opp) == -ERANGE))
		opp = dev_pm_opp_find_freq_exact(dev, freq, false);
	voltage = dev_pm_opp_get_voltage(opp) / 1000; /* mV */
-
-	rcu_read_unlock();
+	dev_pm_opp_put(opp);
	if (voltage == 0) {
		dev_warn_ratelimited(dev,
@@ -412,18 +409,14 @@ static int devfreq_cooling_gen_tables(struct devfreq_cooling_device *dfc)
		unsigned long power_dyn, voltage;
		struct dev_pm_opp *opp;
-		rcu_read_lock();
-
		opp = dev_pm_opp_find_freq_floor(dev, &freq);
		if (IS_ERR(opp)) {
-			rcu_read_unlock();
			ret = PTR_ERR(opp);
			goto free_tables;
		}
		voltage = dev_pm_opp_get_voltage(opp) / 1000; /* mV */
-
-		rcu_read_unlock();
+		dev_pm_opp_put(opp);
		if (dfc->power_ops) {
			power_dyn = get_dynamic_power(dfc, freq, voltage);
...
@@ -31,7 +31,7 @@
 #define CPUFREQ_ETERNAL			(-1)
 #define CPUFREQ_NAME_LEN		16
-/* Print length for names. Extra 1 space for accomodating '\n' in prints */
+/* Print length for names. Extra 1 space for accommodating '\n' in prints */
 #define CPUFREQ_NAME_PLEN		(CPUFREQ_NAME_LEN + 1)
 struct cpufreq_governor;
@@ -115,7 +115,7 @@ struct cpufreq_policy {
	 * guarantee that frequency can be changed on any CPU sharing the
	 * policy and that the change will affect all of the policy CPUs then.
	 * - fast_switch_enabled is to be set by governors that support fast
-	 *   freqnency switching with the help of cpufreq_enable_fast_switch().
+	 *   frequency switching with the help of cpufreq_enable_fast_switch().
	 */
	bool			fast_switch_possible;
	bool			fast_switch_enabled;
@@ -415,9 +415,6 @@ static inline void cpufreq_resume(void) {}
 /* Policy Notifiers  */
 #define CPUFREQ_ADJUST			(0)
 #define CPUFREQ_NOTIFY			(1)
-#define CPUFREQ_START			(2)
-#define CPUFREQ_CREATE_POLICY		(3)
-#define CPUFREQ_REMOVE_POLICY		(4)
 #ifdef CONFIG_CPU_FREQ
 int cpufreq_register_notifier(struct notifier_block *nb, unsigned int list);
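With CPUFREQ_START, CPUFREQ_CREATE_POLICY and CPUFREQ_REMOVE_POLICY gone, CPUFREQ_ADJUST and CPUFREQ_NOTIFY are the only policy events left. A minimal sketch of a policy notifier that clamps the maximum frequency; my_policy_cb, my_policy_nb and MY_MAX_KHZ are hypothetical names, the rest are existing cpufreq APIs:

	#include <linux/cpufreq.h>

	#define MY_MAX_KHZ	1200000		/* hypothetical cap */

	static int my_policy_cb(struct notifier_block *nb, unsigned long event,
				void *data)
	{
		struct cpufreq_policy *policy = data;

		if (event == CPUFREQ_ADJUST)	/* one of the two remaining events */
			cpufreq_verify_within_limits(policy, 0, MY_MAX_KHZ);
		return NOTIFY_OK;
	}

	static struct notifier_block my_policy_nb = {
		.notifier_call = my_policy_cb,
	};

	/* cpufreq_register_notifier(&my_policy_nb, CPUFREQ_POLICY_NOTIFIER); */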
...
@@ -104,6 +104,8 @@ struct devfreq_dev_profile {
  * struct devfreq_governor - Devfreq policy governor
  * @node:		list node - contains registered devfreq governors
  * @name:		Governor's name
+ * @immutable:		Immutable flag for governor. If the value is 1,
+ *			this governor cannot be changed to another governor.
  * @get_target_freq:	Returns desired operating frequency for the device.
  *			Basically, get_target_freq will run
  *			devfreq_dev_profile.get_dev_status() to get the
@@ -121,6 +123,7 @@ struct devfreq_governor {
 	struct list_head node;
 	const char name[DEVFREQ_NAME_LEN];
+	const unsigned int immutable;
 	int (*get_target_freq)(struct devfreq *this, unsigned long *freq);
 	int (*event_handler)(struct devfreq *devfreq,
				unsigned int event, void *data);
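The passive governor above is the in-tree user of this new flag. A minimal sketch of declaring an immutable governor; my_fixed, my_get_target_freq and my_event_handler are hypothetical:

	/* Sketch: a governor that opts out of sysfs governor switching. */
	static struct devfreq_governor my_fixed_governor = {
		.name = "my_fixed",
		.immutable = 1,	/* writes to /sys/.../governor now get -EINVAL */
		.get_target_freq = my_get_target_freq,
		.event_handler = my_event_handler,
	};

	/* devfreq_add_governor(&my_fixed_governor); */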
...
@@ -182,6 +182,9 @@ static inline int pm_genpd_remove(struct generic_pm_domain *genpd)
 {
	return -ENOTSUPP;
 }
+
+#define simple_qos_governor		(*(struct dev_power_governor *)(NULL))
+#define pm_domain_always_on_gov		(*(struct dev_power_governor *)(NULL))
 #endif
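The NULL-cast macros look alarming but are never dereferenced when CONFIG_PM_GENERIC_DOMAINS is off; they only keep callers that pass the governors by address compiling against the stubbed genpd functions. A sketch of the call they keep building, assuming a driver-owned struct generic_pm_domain my_pd:

	/* With CONFIG_PM_GENERIC_DOMAINS=n, &simple_qos_governor expands to
	 * (struct dev_power_governor *)(NULL) and the stubbed pm_genpd_init()
	 * never touches it. "my_pd" is hypothetical. */
	pm_genpd_init(&my_pd, &simple_qos_governor, false);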
 static inline int pm_genpd_add_device(struct generic_pm_domain *genpd,
...
@@ -78,6 +78,9 @@ struct dev_pm_set_opp_data {
 #if defined(CONFIG_PM_OPP)
+struct opp_table *dev_pm_opp_get_opp_table(struct device *dev);
+void dev_pm_opp_put_opp_table(struct opp_table *opp_table);
+
 unsigned long dev_pm_opp_get_voltage(struct dev_pm_opp *opp);
 unsigned long dev_pm_opp_get_freq(struct dev_pm_opp *opp);
@@ -88,7 +91,7 @@ int dev_pm_opp_get_opp_count(struct device *dev);
 unsigned long dev_pm_opp_get_max_clock_latency(struct device *dev);
 unsigned long dev_pm_opp_get_max_volt_latency(struct device *dev);
 unsigned long dev_pm_opp_get_max_transition_latency(struct device *dev);
-struct dev_pm_opp *dev_pm_opp_get_suspend_opp(struct device *dev);
+unsigned long dev_pm_opp_get_suspend_opp_freq(struct device *dev);
 struct dev_pm_opp *dev_pm_opp_find_freq_exact(struct device *dev,
					      unsigned long freq,
@@ -99,6 +102,7 @@ struct dev_pm_opp *dev_pm_opp_find_freq_floor(struct device *dev,
 struct dev_pm_opp *dev_pm_opp_find_freq_ceil(struct device *dev,
					     unsigned long *freq);
+void dev_pm_opp_put(struct dev_pm_opp *opp);
 int dev_pm_opp_add(struct device *dev, unsigned long freq,
		   unsigned long u_volt);
@@ -108,22 +112,30 @@ int dev_pm_opp_enable(struct device *dev, unsigned long freq);
 int dev_pm_opp_disable(struct device *dev, unsigned long freq);
-struct srcu_notifier_head *dev_pm_opp_get_notifier(struct device *dev);
-int dev_pm_opp_set_supported_hw(struct device *dev, const u32 *versions,
-				unsigned int count);
-void dev_pm_opp_put_supported_hw(struct device *dev);
-int dev_pm_opp_set_prop_name(struct device *dev, const char *name);
-void dev_pm_opp_put_prop_name(struct device *dev);
+int dev_pm_opp_register_notifier(struct device *dev, struct notifier_block *nb);
+int dev_pm_opp_unregister_notifier(struct device *dev, struct notifier_block *nb);
+
+struct opp_table *dev_pm_opp_set_supported_hw(struct device *dev, const u32 *versions, unsigned int count);
+void dev_pm_opp_put_supported_hw(struct opp_table *opp_table);
+struct opp_table *dev_pm_opp_set_prop_name(struct device *dev, const char *name);
+void dev_pm_opp_put_prop_name(struct opp_table *opp_table);
 struct opp_table *dev_pm_opp_set_regulators(struct device *dev, const char * const names[], unsigned int count);
 void dev_pm_opp_put_regulators(struct opp_table *opp_table);
-int dev_pm_opp_register_set_opp_helper(struct device *dev, int (*set_opp)(struct dev_pm_set_opp_data *data));
-void dev_pm_opp_register_put_opp_helper(struct device *dev);
+struct opp_table *dev_pm_opp_register_set_opp_helper(struct device *dev, int (*set_opp)(struct dev_pm_set_opp_data *data));
+void dev_pm_opp_register_put_opp_helper(struct opp_table *opp_table);
 int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq);
 int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev, const struct cpumask *cpumask);
 int dev_pm_opp_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask);
 void dev_pm_opp_remove_table(struct device *dev);
 void dev_pm_opp_cpumask_remove_table(const struct cpumask *cpumask);
 #else
+static inline struct opp_table *dev_pm_opp_get_opp_table(struct device *dev)
+{
+	return ERR_PTR(-ENOTSUPP);
+}
+
+static inline void dev_pm_opp_put_opp_table(struct opp_table *opp_table) {}
+
 static inline unsigned long dev_pm_opp_get_voltage(struct dev_pm_opp *opp)
 {
	return 0;
@@ -159,9 +171,9 @@ static inline unsigned long dev_pm_opp_get_max_transition_latency(struct device
	return 0;
 }
-static inline struct dev_pm_opp *dev_pm_opp_get_suspend_opp(struct device *dev)
+static inline unsigned long dev_pm_opp_get_suspend_opp_freq(struct device *dev)
 {
-	return NULL;
+	return 0;
 }
 static inline struct dev_pm_opp *dev_pm_opp_find_freq_exact(struct device *dev,
@@ -182,6 +194,8 @@ static inline struct dev_pm_opp *dev_pm_opp_find_freq_ceil(struct device *dev,
	return ERR_PTR(-ENOTSUPP);
 }
+static inline void dev_pm_opp_put(struct dev_pm_opp *opp) {}
+
 static inline int dev_pm_opp_add(struct device *dev, unsigned long freq,
					unsigned long u_volt)
 {
@@ -202,35 +216,39 @@ static inline int dev_pm_opp_disable(struct device *dev, unsigned long freq)
	return 0;
 }
-static inline struct srcu_notifier_head *dev_pm_opp_get_notifier(
-							struct device *dev)
+static inline int dev_pm_opp_register_notifier(struct device *dev, struct notifier_block *nb)
 {
-	return ERR_PTR(-ENOTSUPP);
+	return -ENOTSUPP;
+}
+
+static inline int dev_pm_opp_unregister_notifier(struct device *dev, struct notifier_block *nb)
+{
+	return -ENOTSUPP;
 }
-static inline int dev_pm_opp_set_supported_hw(struct device *dev,
-					      const u32 *versions,
-					      unsigned int count)
+static inline struct opp_table *dev_pm_opp_set_supported_hw(struct device *dev,
							     const u32 *versions,
							     unsigned int count)
 {
-	return -ENOTSUPP;
+	return ERR_PTR(-ENOTSUPP);
 }
-static inline void dev_pm_opp_put_supported_hw(struct device *dev) {}
+static inline void dev_pm_opp_put_supported_hw(struct opp_table *opp_table) {}
-static inline int dev_pm_opp_register_set_opp_helper(struct device *dev,
-			int (*set_opp)(struct dev_pm_set_opp_data *data))
+static inline struct opp_table *dev_pm_opp_register_set_opp_helper(struct device *dev,
			int (*set_opp)(struct dev_pm_set_opp_data *data))
 {
-	return -ENOTSUPP;
+	return ERR_PTR(-ENOTSUPP);
 }
-static inline void dev_pm_opp_register_put_opp_helper(struct device *dev) {}
+static inline void dev_pm_opp_register_put_opp_helper(struct opp_table *opp_table) {}
-static inline int dev_pm_opp_set_prop_name(struct device *dev, const char *name)
+static inline struct opp_table *dev_pm_opp_set_prop_name(struct device *dev, const char *name)
 {
-	return -ENOTSUPP;
+	return ERR_PTR(-ENOTSUPP);
 }
-static inline void dev_pm_opp_put_prop_name(struct device *dev) {}
+static inline void dev_pm_opp_put_prop_name(struct opp_table *opp_table) {}
 static inline struct opp_table *dev_pm_opp_set_regulators(struct device *dev, const char * const names[], unsigned int count)
 {
@@ -270,6 +288,7 @@ void dev_pm_opp_of_remove_table(struct device *dev);
 int dev_pm_opp_of_cpumask_add_table(const struct cpumask *cpumask);
 void dev_pm_opp_of_cpumask_remove_table(const struct cpumask *cpumask);
 int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask);
+struct device_node *dev_pm_opp_of_get_opp_desc_node(struct device *dev);
 #else
 static inline int dev_pm_opp_of_add_table(struct device *dev)
 {
@@ -293,6 +312,11 @@ static inline int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev, struct
 {
	return -ENOTSUPP;
 }
+
+static inline struct device_node *dev_pm_opp_of_get_opp_desc_node(struct device *dev)
+{
+	return NULL;
+}
 #endif
 #endif	/* __LINUX_OPP_H__ */
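The kref switch turns the helper setters from device-keyed into handle-keyed: each dev_pm_opp_set_*() now returns the struct opp_table it pinned, and the matching put takes that handle instead of the device. A minimal sketch of the new pairing, assuming a driver whose DT OPP entries carry a "fast" prop-name variant; the my_* names are hypothetical:

	#include <linux/pm_opp.h>

	static struct opp_table *my_opp_table;	/* hypothetical driver state */

	static int my_driver_init_opps(struct device *dev)
	{
		/* The setter returns the opp_table handle it took a reference on. */
		my_opp_table = dev_pm_opp_set_prop_name(dev, "fast");
		if (IS_ERR(my_opp_table))
			return PTR_ERR(my_opp_table);

		return dev_pm_opp_of_add_table(dev);
	}

	static void my_driver_exit_opps(struct device *dev)
	{
		dev_pm_opp_of_remove_table(dev);
		/* ...and the put now takes the handle, not the device. */
		dev_pm_opp_put_prop_name(my_opp_table);
	}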
@@ -6,7 +6,6 @@
  */
 #include <linux/plist.h>
 #include <linux/notifier.h>
-#include <linux/miscdevice.h>
 #include <linux/device.h>
 #include <linux/workqueue.h>
...
@@ -166,7 +166,7 @@ static int __init setup_test_suspend(char *value)
		return 0;
 	}
-	for (i = 0; pm_labels[i]; i++)
+	for (i = PM_SUSPEND_MIN; i < PM_SUSPEND_MAX; i++)
		if (!strcmp(pm_labels[i], suspend_type)) {
			test_state_label = pm_labels[i];
			return 0;
...
@@ -201,7 +201,7 @@ void free_all_swap_pages(int swap)
		struct swsusp_extent *ext;
		unsigned long offset;
-		ext = container_of(node, struct swsusp_extent, node);
+		ext = rb_entry(node, struct swsusp_extent, node);
		rb_erase(node, &swsusp_extents);
		for (offset = ext->start; offset <= ext->end; offset++)
			swap_free(swp_entry(swap, offset));
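The change is purely cosmetic: rb_entry() is the rbtree-specific spelling of container_of(), as defined in <linux/rbtree.h>, so the two forms are exactly equivalent and the new one simply documents intent:

	/* From include/linux/rbtree.h: */
	#define	rb_entry(ptr, type, member) container_of(ptr, type, member)

	/* Hence the two lines in the hunk above compile to the same thing:
	 *   ext = container_of(node, struct swsusp_extent, node);
	 *   ext = rb_entry(node, struct swsusp_extent, node);
	 */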
...
...@@ -24,11 +24,6 @@ ...@@ -24,11 +24,6 @@
# https://01.org/suspendresume # https://01.org/suspendresume
# Source repo # Source repo
# https://github.com/01org/suspendresume # https://github.com/01org/suspendresume
# Documentation
# Getting Started
# https://01.org/suspendresume/documentation/getting-started
# Command List:
# https://01.org/suspendresume/documentation/command-list
# #
# Description: # Description:
# This tool is designed to assist kernel and OS developers in optimizing # This tool is designed to assist kernel and OS developers in optimizing
@@ -66,6 +61,8 @@ import platform
 from datetime import datetime
 import struct
 import ConfigParser
+from threading import Thread
+from subprocess import call, Popen, PIPE

 # ----------------- CLASSES --------------------

@@ -75,11 +72,15 @@ import ConfigParser
 # store system values and test parameters
 class SystemValues:
     ansi = False
-    version = '4.2'
+    version = '4.5'
     verbose = False
     addlogs = False
-    mindevlen = 0.001
-    mincglen = 1.0
+    mindevlen = 0.0
+    mincglen = 0.0
+    cgphase = ''
+    cgtest = -1
+    callloopmaxgap = 0.0001
+    callloopmaxlen = 0.005
     srgap = 0
     cgexp = False
     outdir = ''

@@ -92,6 +93,7 @@ class SystemValues:
         'device_pm_callback_end',
         'device_pm_callback_start'
     ]
+    logmsg = ''
     testcommand = ''
     mempath = '/dev/mem'
     powerfile = '/sys/power/state'

@@ -117,19 +119,19 @@ class SystemValues:
     usetracemarkers = True
     usekprobes = True
     usedevsrc = False
+    useprocmon = False
     notestrun = False
+    mixedphaseheight = True
     devprops = dict()
-    postresumetime = 0
+    predelay = 0
+    postdelay = 0
+    procexecfmt = 'ps - (?P<ps>.*)$'
     devpropfmt = '# Device Properties: .*'
     tracertypefmt = '# tracer: (?P<t>.*)'
     firmwarefmt = '# fwsuspend (?P<s>[0-9]*) fwresume (?P<r>[0-9]*)$'
-    postresumefmt = '# post resume time (?P<t>[0-9]*)$'
     stampfmt = '# suspend-(?P<m>[0-9]{2})(?P<d>[0-9]{2})(?P<y>[0-9]{2})-'+\
                 '(?P<H>[0-9]{2})(?P<M>[0-9]{2})(?P<S>[0-9]{2})'+\
                 ' (?P<host>.*) (?P<mode>.*) (?P<kernel>.*)$'
-    kprobecolor = 'rgba(204,204,204,0.5)'
-    synccolor = 'rgba(204,204,204,0.5)'
-    debugfuncs = []
     tracefuncs = {
         'sys_sync': dict(),
         'pm_prepare_console': dict(),

@@ -152,44 +154,66 @@ class SystemValues:
         'CPU_OFF': {
             'func':'_cpu_down',
             'args_x86_64': {'cpu':'%di:s32'},
-            'format': 'CPU_OFF[{cpu}]',
-            'mask': 'CPU_.*_DOWN'
+            'format': 'CPU_OFF[{cpu}]'
         },
         'CPU_ON': {
             'func':'_cpu_up',
             'args_x86_64': {'cpu':'%di:s32'},
-            'format': 'CPU_ON[{cpu}]',
-            'mask': 'CPU_.*_UP'
+            'format': 'CPU_ON[{cpu}]'
         },
     }
     dev_tracefuncs = {
         # general wait/delay/sleep
-        'msleep': { 'args_x86_64': {'time':'%di:s32'} },
-        'udelay': { 'func':'__const_udelay', 'args_x86_64': {'loops':'%di:s32'} },
-        'acpi_os_stall': dict(),
+        'msleep': { 'args_x86_64': {'time':'%di:s32'}, 'ub': 1 },
+        'schedule_timeout_uninterruptible': { 'args_x86_64': {'timeout':'%di:s32'}, 'ub': 1 },
+        'schedule_timeout': { 'args_x86_64': {'timeout':'%di:s32'}, 'ub': 1 },
+        'udelay': { 'func':'__const_udelay', 'args_x86_64': {'loops':'%di:s32'}, 'ub': 1 },
+        'usleep_range': { 'args_x86_64': {'min':'%di:s32', 'max':'%si:s32'}, 'ub': 1 },
+        'mutex_lock_slowpath': { 'func':'__mutex_lock_slowpath', 'ub': 1 },
+        'acpi_os_stall': {'ub': 1},
         # ACPI
         'acpi_resume_power_resources': dict(),
         'acpi_ps_parse_aml': dict(),
         # filesystem
         'ext4_sync_fs': dict(),
+        # 80211
+        'iwlagn_mac_start': dict(),
+        'iwlagn_alloc_bcast_station': dict(),
+        'iwl_trans_pcie_start_hw': dict(),
+        'iwl_trans_pcie_start_fw': dict(),
+        'iwl_run_init_ucode': dict(),
+        'iwl_load_ucode_wait_alive': dict(),
+        'iwl_alive_start': dict(),
+        'iwlagn_mac_stop': dict(),
+        'iwlagn_mac_suspend': dict(),
+        'iwlagn_mac_resume': dict(),
+        'iwlagn_mac_add_interface': dict(),
+        'iwlagn_mac_remove_interface': dict(),
+        'iwlagn_mac_change_interface': dict(),
+        'iwlagn_mac_config': dict(),
+        'iwlagn_configure_filter': dict(),
+        'iwlagn_mac_hw_scan': dict(),
+        'iwlagn_bss_info_changed': dict(),
+        'iwlagn_mac_channel_switch': dict(),
+        'iwlagn_mac_flush': dict(),
         # ATA
         'ata_eh_recover': { 'args_x86_64': {'port':'+36(%di):s32'} },
         # i915
-        'i915_gem_restore_gtt_mappings': dict(),
+        'i915_gem_resume': dict(),
+        'i915_restore_state': dict(),
         'intel_opregion_setup': dict(),
+        'g4x_pre_enable_dp': dict(),
+        'vlv_pre_enable_dp': dict(),
+        'chv_pre_enable_dp': dict(),
+        'g4x_enable_dp': dict(),
+        'vlv_enable_dp': dict(),
+        'intel_hpd_init': dict(),
+        'intel_opregion_register': dict(),
         'intel_dp_detect': dict(),
         'intel_hdmi_detect': dict(),
         'intel_opregion_init': dict(),
         'intel_fbdev_set_suspend': dict(),
     }
-    kprobes_postresume = [
-        {
-            'name': 'ataportrst',
-            'func': 'ata_eh_recover',
-            'args': {'port':'+36(%di):s32'},
-            'format': 'ata{port}_port_reset',
-            'mask': 'ata.*_port_reset'
-        }
-    ]
     kprobes = dict()
     timeformat = '%.3f'
     def __init__(self):

@@ -198,6 +222,7 @@ class SystemValues:
             self.embedded = True
             self.addlogs = True
             self.htmlfile = os.environ['LOG_FILE']
+        self.archargs = 'args_'+platform.machine()
         self.hostname = platform.node()
         if(self.hostname == ''):
             self.hostname = 'localhost'

@@ -214,6 +239,13 @@ class SystemValues:
         if num < 0 or num > 6:
             return
         self.timeformat = '%.{0}f'.format(num)
+    def setOutputFolder(self, value):
+        args = dict()
+        n = datetime.now()
+        args['date'] = n.strftime('%y%m%d')
+        args['time'] = n.strftime('%H%M%S')
+        args['hostname'] = self.hostname
+        self.outdir = value.format(**args)
     def setOutputFile(self):
         if((self.htmlfile == '') and (self.dmesgfile != '')):
             m = re.match('(?P<name>.*)_dmesg\.txt$', self.dmesgfile)
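The new setOutputFolder() above expands {date}, {time}, and {hostname} tokens in a user-supplied output-folder template via str.format. A minimal standalone sketch of the same substitution (the template string here is only an illustrative value, not one the script mandates):

    from datetime import datetime
    import platform

    def format_outdir(value):
        # mirror SystemValues.setOutputFolder(): expand {date}/{time}/{hostname}
        args = dict()
        n = datetime.now()
        args['date'] = n.strftime('%y%m%d')      # e.g. 170220
        args['time'] = n.strftime('%H%M%S')      # e.g. 142301
        args['hostname'] = platform.node()
        return value.format(**args)

    print(format_outdir('suspend-{hostname}-{date}-{time}'))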
@@ -253,10 +285,14 @@ class SystemValues:
             self.testdir+'/'+self.prefix+'_'+self.suspendmode+'.html'
         if not os.path.isdir(self.testdir):
             os.mkdir(self.testdir)
-    def setDeviceFilter(self, devnames):
-        self.devicefilter = string.split(devnames)
+    def setDeviceFilter(self, value):
+        self.devicefilter = []
+        if value:
+            value = value.split(',')
+            for i in value:
+                self.devicefilter.append(i.strip())
     def rtcWakeAlarmOn(self):
-        os.system('echo 0 > '+self.rtcpath+'/wakealarm')
+        call('echo 0 > '+self.rtcpath+'/wakealarm', shell=True)
         outD = open(self.rtcpath+'/date', 'r').read().strip()
         outT = open(self.rtcpath+'/time', 'r').read().strip()
         mD = re.match('^(?P<y>[0-9]*)-(?P<m>[0-9]*)-(?P<d>[0-9]*)', outD)

@@ -272,12 +308,12 @@ class SystemValues:
             # if hardware time fails, use the software time
             nowtime = int(datetime.now().strftime('%s'))
         alarm = nowtime + self.rtcwaketime
-        os.system('echo %d > %s/wakealarm' % (alarm, self.rtcpath))
+        call('echo %d > %s/wakealarm' % (alarm, self.rtcpath), shell=True)
     def rtcWakeAlarmOff(self):
-        os.system('echo 0 > %s/wakealarm' % self.rtcpath)
+        call('echo 0 > %s/wakealarm' % self.rtcpath, shell=True)
     def initdmesg(self):
         # get the latest time stamp from the dmesg log
-        fp = os.popen('dmesg')
+        fp = Popen('dmesg', stdout=PIPE).stdout
         ktime = '0'
         for line in fp:
             line = line.replace('\r\n', '')

@@ -291,7 +327,7 @@ class SystemValues:
         self.dmesgstart = float(ktime)
     def getdmesg(self):
         # store all new dmesg lines since initdmesg was called
-        fp = os.popen('dmesg')
+        fp = Popen('dmesg', stdout=PIPE).stdout
         op = open(self.dmesgfile, 'a')
         for line in fp:
             line = line.replace('\r\n', '')
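initdmesg() and getdmesg() now read the kernel log through subprocess.Popen instead of os.popen. A small sketch of the pattern, reusing the same [ktime] stamp shape the script matches elsewhere (the regex is the one extractErrorInfo uses further down):

    import re
    from subprocess import Popen, PIPE

    fp = Popen('dmesg', stdout=PIPE).stdout
    for line in fp:
        m = re.match('[ \t]*(\[ *)(?P<ktime>[0-9\.]*)(\]) .*', line)
        if m:
            ktime = float(m.group('ktime'))  # seconds since boot
    fp.close()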
@@ -317,18 +353,11 @@ class SystemValues:
     def getFtraceFilterFunctions(self, current):
         rootCheck(True)
         if not current:
-            os.system('cat '+self.tpath+'available_filter_functions')
+            call('cat '+self.tpath+'available_filter_functions', shell=True)
             return
         fp = open(self.tpath+'available_filter_functions')
         master = fp.read().split('\n')
         fp.close()
-        if len(self.debugfuncs) > 0:
-            for i in self.debugfuncs:
-                if i in master:
-                    print i
-                else:
-                    print self.colorText(i)
-        else:
         for i in self.tracefuncs:
             if 'func' in self.tracefuncs[i]:
                 i = self.tracefuncs[i]['func']

@@ -351,22 +380,15 @@ class SystemValues:
         fp = open(self.tpath+'set_graph_function', 'w')
         fp.write(flist)
         fp.close()
-    def kprobeMatch(self, name, target):
-        if name not in self.kprobes:
-            return False
-        if re.match(self.kprobes[name]['mask'], target):
-            return True
-        return False
     def basicKprobe(self, name):
-        self.kprobes[name] = {'name': name,'func': name,'args': dict(),'format': name,'mask': name}
+        self.kprobes[name] = {'name': name,'func': name,'args': dict(),'format': name}
     def defaultKprobe(self, name, kdata):
         k = kdata
-        for field in ['name', 'format', 'mask', 'func']:
+        for field in ['name', 'format', 'func']:
             if field not in k:
                 k[field] = name
-        archargs = 'args_'+platform.machine()
-        if archargs in k:
-            k['args'] = k[archargs]
+        if self.archargs in k:
+            k['args'] = k[self.archargs]
         else:
             k['args'] = dict()
         k['format'] = name

@@ -403,49 +425,80 @@ class SystemValues:
         out = fmt.format(**arglist)
         out = out.replace(' ', '_').replace('"', '')
         return out
-    def kprobeText(self, kprobe):
-        name, fmt, func, args = kprobe['name'], kprobe['format'], kprobe['func'], kprobe['args']
+    def kprobeText(self, kname, kprobe):
+        name = fmt = func = kname
+        args = dict()
+        if 'name' in kprobe:
+            name = kprobe['name']
+        if 'format' in kprobe:
+            fmt = kprobe['format']
+        if 'func' in kprobe:
+            func = kprobe['func']
+        if self.archargs in kprobe:
+            args = kprobe[self.archargs]
+        if 'args' in kprobe:
+            args = kprobe['args']
         if re.findall('{(?P<n>[a-z,A-Z,0-9]*)}', func):
-            doError('Kprobe "%s" has format info in the function name "%s"' % (name, func), False)
+            doError('Kprobe "%s" has format info in the function name "%s"' % (name, func))
         for arg in re.findall('{(?P<n>[a-z,A-Z,0-9]*)}', fmt):
             if arg not in args:
-                doError('Kprobe "%s" is missing argument "%s"' % (name, arg), False)
+                doError('Kprobe "%s" is missing argument "%s"' % (name, arg))
         val = 'p:%s_cal %s' % (name, func)
         for i in sorted(args):
             val += ' %s=%s' % (i, args[i])
         val += '\nr:%s_ret %s $retval\n' % (name, func)
         return val
-    def addKprobes(self):
+    def addKprobes(self, output=False):
+        if len(sysvals.kprobes) < 1:
+            return
+        if output:
+            print('    kprobe functions in this kernel:')
         # first test each kprobe
-        print('INITIALIZING KPROBES...')
         rejects = []
+        # sort kprobes: trace, ub-dev, custom, dev
+        kpl = [[], [], [], []]
         for name in sorted(self.kprobes):
-            if not self.testKprobe(self.kprobes[name]):
+            res = self.colorText('YES', 32)
+            if not self.testKprobe(name, self.kprobes[name]):
+                res = self.colorText('NO')
                 rejects.append(name)
+            else:
+                if name in self.tracefuncs:
+                    kpl[0].append(name)
+                elif name in self.dev_tracefuncs:
+                    if 'ub' in self.dev_tracefuncs[name]:
+                        kpl[1].append(name)
+                    else:
+                        kpl[3].append(name)
+                else:
+                    kpl[2].append(name)
+            if output:
+                print('         %s: %s' % (name, res))
+        kplist = kpl[0] + kpl[1] + kpl[2] + kpl[3]
         # remove all failed ones from the list
         for name in rejects:
-            vprint('Skipping KPROBE: %s' % name)
             self.kprobes.pop(name)
+        # set the kprobes all at once
         self.fsetVal('', 'kprobe_events')
         kprobeevents = ''
-        # set the kprobes all at once
-        for kp in self.kprobes:
-            val = self.kprobeText(self.kprobes[kp])
-            vprint('Adding KPROBE: %s\n%s' % (kp, val.strip()))
-            kprobeevents += self.kprobeText(self.kprobes[kp])
+        for kp in kplist:
+            kprobeevents += self.kprobeText(kp, self.kprobes[kp])
         self.fsetVal(kprobeevents, 'kprobe_events')
         # verify that the kprobes were set as ordered
         check = self.fgetVal('kprobe_events')
-        linesout = len(kprobeevents.split('\n'))
-        linesack = len(check.split('\n'))
-        if linesack < linesout:
-            # if not, try appending the kprobes 1 by 1
-            for kp in self.kprobes:
-                kprobeevents = self.kprobeText(self.kprobes[kp])
-                self.fsetVal(kprobeevents, 'kprobe_events', 'a')
+        linesout = len(kprobeevents.split('\n')) - 1
+        linesack = len(check.split('\n')) - 1
+        if output:
+            res = '%d/%d' % (linesack, linesout)
+            if linesack < linesout:
+                res = self.colorText(res, 31)
+            else:
+                res = self.colorText(res, 32)
+            print('    working kprobe functions enabled: %s' % res)
         self.fsetVal('1', 'events/kprobes/enable')
-    def testKprobe(self, kprobe):
-        kprobeevents = self.kprobeText(kprobe)
+    def testKprobe(self, kname, kprobe):
+        self.fsetVal('0', 'events/kprobes/enable')
+        kprobeevents = self.kprobeText(kname, kprobe)
         if not kprobeevents:
             return False
         try:
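For reference, kprobeText() above emits one entry probe ('p:') and one return probe ('r:') per function in ftrace's kprobe_events syntax. Inlining its logic for the msleep entry of dev_tracefuncs shows what actually gets written (on x86_64, where the args_x86_64 field supplies the register expression):

    name = func = 'msleep'
    args = {'time': '%di:s32'}           # from 'args_x86_64' above
    val = 'p:%s_cal %s' % (name, func)
    for i in sorted(args):
        val += ' %s=%s' % (i, args[i])
    val += '\nr:%s_ret %s $retval\n' % (name, func)
    print(val)
    # p:msleep_cal msleep time=%di:s32
    # r:msleep_ret msleep $retval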
@@ -463,8 +516,9 @@ class SystemValues:
         if not os.path.exists(file):
             return False
         try:
-            fp = open(file, mode)
+            fp = open(file, mode, 0)
             fp.write(val)
+            fp.flush()
             fp.close()
         except:
             pass

@@ -491,21 +545,17 @@ class SystemValues:
         for name in self.dev_tracefuncs:
             self.defaultKprobe(name, self.dev_tracefuncs[name])
     def isCallgraphFunc(self, name):
-        if len(self.debugfuncs) < 1 and self.suspendmode == 'command':
-            return True
-        if name in self.debugfuncs:
+        if len(self.tracefuncs) < 1 and self.suspendmode == 'command':
             return True
-        funclist = []
         for i in self.tracefuncs:
             if 'func' in self.tracefuncs[i]:
-                funclist.append(self.tracefuncs[i]['func'])
+                f = self.tracefuncs[i]['func']
             else:
-                funclist.append(i)
-        if name in funclist:
-            return True
+                f = i
+            if name == f:
+                return True
         return False
     def initFtrace(self, testing=False):
-        tp = self.tpath
         print('INITIALIZING FTRACE...')
         # turn trace off
         self.fsetVal('0', 'tracing_on')

@@ -518,18 +568,7 @@ class SystemValues:
         # go no further if this is just a status check
         if testing:
             return
-        if self.usekprobes:
-            # add tracefunc kprobes so long as were not using full callgraph
-            if(not self.usecallgraph or len(self.debugfuncs) > 0):
-                for name in self.tracefuncs:
-                    self.defaultKprobe(name, self.tracefuncs[name])
-                if self.usedevsrc:
-                    for name in self.dev_tracefuncs:
-                        self.defaultKprobe(name, self.dev_tracefuncs[name])
-            else:
-                self.usedevsrc = False
-            self.addKprobes()
-        # initialize the callgraph trace, unless this is an x2 run
+        # initialize the callgraph trace
         if(self.usecallgraph):
             # set trace type
             self.fsetVal('function_graph', 'current_tracer')

@@ -545,11 +584,6 @@ class SystemValues:
             self.fsetVal('context-info', 'trace_options')
             self.fsetVal('graph-time', 'trace_options')
             self.fsetVal('0', 'max_graph_depth')
-            if len(self.debugfuncs) > 0:
-                self.setFtraceFilterFunctions(self.debugfuncs)
-            elif self.suspendmode == 'command':
-                self.fsetVal('', 'set_graph_function')
-            else:
             cf = ['dpm_run_callback']
             if(self.usetraceeventsonly):
                 cf += ['dpm_prepare', 'dpm_complete']

@@ -559,6 +593,15 @@ class SystemValues:
                 else:
                     cf.append(fn)
             self.setFtraceFilterFunctions(cf)
+        # initialize the kprobe trace
+        elif self.usekprobes:
+            for name in self.tracefuncs:
+                self.defaultKprobe(name, self.tracefuncs[name])
+            if self.usedevsrc:
+                for name in self.dev_tracefuncs:
+                    self.defaultKprobe(name, self.dev_tracefuncs[name])
+            print('INITIALIZING KPROBES...')
+            self.addKprobes(self.verbose)
         if(self.usetraceevents):
             # turn trace events on
             events = iter(self.traceevents)

@@ -590,10 +633,10 @@ class SystemValues:
             if(os.path.exists(tp+f) == False):
                 return False
         return True
-    def colorText(self, str):
+    def colorText(self, str, color=31):
         if not self.ansi:
             return str
-        return '\x1B[31;40m'+str+'\x1B[m'
+        return '\x1B[%d;40m%s\x1B[m' % (color, str)

 sysvals = SystemValues()
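colorText() now takes the ANSI color number as a parameter (31 red, 32 green) instead of hard-coding red; '\x1B[<n>;40m' selects the foreground color on a black (40) background and '\x1B[m' resets. A standalone sketch:

    def color_text(s, color=31, ansi=True):
        # 31 = red, 32 = green; 40 keeps the background black
        if not ansi:
            return s
        return '\x1B[%d;40m%s\x1B[m' % (color, s)

    print(color_text('NO'))       # red
    print(color_text('YES', 32))  # green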
@@ -625,8 +668,8 @@ class DevProps:
         if self.xtraclass:
             return ' '+self.xtraclass
         if self.async:
-            return ' async'
-        return ' sync'
+            return ' async_device'
+        return ' sync_device'

 # Class: DeviceNode
 # Description:

@@ -646,8 +689,6 @@ class DeviceNode:
 #    The primary container for suspend/resume test data. There is one for
 #    each test run. The data is organized into a cronological hierarchy:
 #    Data.dmesg {
-#        root structure, started as dmesg & ftrace, but now only ftrace
-#        contents: times for suspend start/end, resume start/end, fwdata
 #        phases {
 #            10 sequential, non-overlapping phases of S/R
 #            contents: times for phase start/end, order/color data for html

@@ -658,7 +699,7 @@ class DeviceNode:
 #                contents: start/stop times, pid/cpu/driver info
 #                    parents/children, html id for timeline/callgraph
 #                    optionally includes an ftrace callgraph
-#                    optionally includes intradev trace events
+#                    optionally includes dev/ps data
 #        }
 #    }
 # }

@@ -671,19 +712,24 @@ class Data:
     end = 0.0        # test end
     tSuspended = 0.0 # low-level suspend start
     tResumed = 0.0   # low-level resume start
+    tKernSus = 0.0   # kernel level suspend start
+    tKernRes = 0.0   # kernel level resume end
     tLow = 0.0       # time spent in low-level suspend (standby/freeze)
     fwValid = False  # is firmware data available
     fwSuspend = 0    # time spent in firmware suspend
     fwResume = 0     # time spent in firmware resume
     dmesgtext = []   # dmesg text file in memory
+    pstl = 0         # process timeline
     testnumber = 0
     idstr = ''
     html_device_id = 0
     stamp = 0
     outfile = ''
-    dev_ubiquitous = ['msleep', 'udelay']
+    devpids = []
+    kerror = False
     def __init__(self, num):
-        idchar = 'abcdefghijklmnopqrstuvwxyz'
+        idchar = 'abcdefghij'
+        self.pstl = dict()
         self.testnumber = num
         self.idstr = idchar[num]
         self.dmesgtext = []

@@ -714,16 +760,39 @@ class Data:
         self.devicegroups = []
         for phase in self.phases:
             self.devicegroups.append([phase])
-    def getStart(self):
-        return self.dmesg[self.phases[0]]['start']
+        self.errorinfo = {'suspend':[],'resume':[]}
+    def extractErrorInfo(self, dmesg):
+        error = ''
+        tm = 0.0
+        for i in range(len(dmesg)):
+            if 'Call Trace:' in dmesg[i]:
+                m = re.match('[ \t]*(\[ *)(?P<ktime>[0-9\.]*)(\]) .*', dmesg[i])
+                if not m:
+                    continue
+                tm = float(m.group('ktime'))
+                if tm < self.start or tm > self.end:
+                    continue
+                for j in range(i-10, i+1):
+                    error += dmesg[j]
+                continue
+            if error:
+                m = re.match('[ \t]*\[ *[0-9\.]*\] \[\<[0-9a-fA-F]*\>\] .*', dmesg[i])
+                if m:
+                    error += dmesg[i]
+                else:
+                    if tm < self.tSuspended:
+                        dir = 'suspend'
+                    else:
+                        dir = 'resume'
+                    error = error.replace('<', '&lt').replace('>', '&gt')
+                    vprint('kernel error found in %s at %f' % (dir, tm))
+                    self.errorinfo[dir].append((tm, error))
+                    self.kerror = True
+                    error = ''
     def setStart(self, time):
         self.start = time
-        self.dmesg[self.phases[0]]['start'] = time
-    def getEnd(self):
-        return self.dmesg[self.phases[-1]]['end']
     def setEnd(self, time):
         self.end = time
-        self.dmesg[self.phases[-1]]['end'] = time
     def isTraceEventOutsideDeviceCalls(self, pid, time):
         for phase in self.phases:
             list = self.dmesg[phase]['list']
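extractErrorInfo() keys off two regular expressions: one pulls the [ktime] stamp from the 'Call Trace:' line, the other recognizes '[<address>]' backtrace frames so the whole trace is captured; the result is filed under suspend or resume by comparing the stamp with tSuspended. A quick check of both patterns against fabricated dmesg lines:

    import re
    hdr = '[  100.250000] Call Trace:'          # fabricated example line
    m = re.match('[ \t]*(\[ *)(?P<ktime>[0-9\.]*)(\]) .*', hdr)
    print(float(m.group('ktime')))              # 100.25
    frm = '[  100.250100] [<ffffffff810ab1c2>] dpm_run_callback+0x52/0x170'
    print(bool(re.match('[ \t]*\[ *[0-9\.]*\] \[\<[0-9a-fA-F]*\>\] .*', frm)))  # True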
@@ -733,39 +802,67 @@ class Data:
                     time < d['end']):
                     return False
         return True
-    def targetDevice(self, phaselist, start, end, pid=-1):
+    def sourcePhase(self, start):
+        for phase in self.phases:
+            pend = self.dmesg[phase]['end']
+            if start <= pend:
+                return phase
+        return 'resume_complete'
+    def sourceDevice(self, phaselist, start, end, pid, type):
         tgtdev = ''
         for phase in phaselist:
             list = self.dmesg[phase]['list']
             for devname in list:
                 dev = list[devname]
-                if(pid >= 0 and dev['pid'] != pid):
+                # pid must match
+                if dev['pid'] != pid:
                     continue
                 devS = dev['start']
                 devE = dev['end']
-                if(start < devS or start >= devE or end <= devS or end > devE):
-                    continue
+                if type == 'device':
+                    # device target event is entirely inside the source boundary
+                    if(start < devS or start >= devE or end <= devS or end > devE):
+                        continue
+                elif type == 'thread':
+                    # thread target event will expand the source boundary
+                    if start < devS:
+                        dev['start'] = start
+                    if end > devE:
+                        dev['end'] = end
                 tgtdev = dev
                 break
         return tgtdev
     def addDeviceFunctionCall(self, displayname, kprobename, proc, pid, start, end, cdata, rdata):
-        machstart = self.dmesg['suspend_machine']['start']
-        machend = self.dmesg['resume_machine']['end']
-        tgtdev = self.targetDevice(self.phases, start, end, pid)
-        if not tgtdev and start >= machstart and end < machend:
-            # device calls in machine phases should be serial
-            tgtdev = self.targetDevice(['suspend_machine', 'resume_machine'], start, end)
+        # try to place the call in a device
+        tgtdev = self.sourceDevice(self.phases, start, end, pid, 'device')
+        # calls with device pids that occur outside device bounds are dropped
+        # TODO: include these somehow
+        if not tgtdev and pid in self.devpids:
+            return False
+        # try to place the call in a thread
+        if not tgtdev:
+            tgtdev = self.sourceDevice(self.phases, start, end, pid, 'thread')
+        # create new thread blocks, expand as new calls are found
         if not tgtdev:
-            if 'scsi_eh' in proc:
-                self.newActionGlobal(proc, start, end, pid)
-                self.addDeviceFunctionCall(displayname, kprobename, proc, pid, start, end, cdata, rdata)
+            if proc == '<...>':
+                threadname = 'kthread-%d' % (pid)
             else:
-                vprint('IGNORE: %s[%s](%d) [%f - %f] | %s | %s | %s' % (displayname, kprobename,
-                    pid, start, end, cdata, rdata, proc))
+                threadname = '%s-%d' % (proc, pid)
+            tgtphase = self.sourcePhase(start)
+            self.newAction(tgtphase, threadname, pid, '', start, end, '', ' kth', '')
+            return self.addDeviceFunctionCall(displayname, kprobename, proc, pid, start, end, cdata, rdata)
+        # this should not happen
+        if not tgtdev:
+            vprint('[%f - %f] %s-%d %s %s %s' % \
+                (start, end, proc, pid, kprobename, cdata, rdata))
             return False
-        # detail block fits within tgtdev
+        # place the call data inside the src element of the tgtdev
         if('src' not in tgtdev):
             tgtdev['src'] = []
+        dtf = sysvals.dev_tracefuncs
+        ubiquitous = False
+        if kprobename in dtf and 'ub' in dtf[kprobename]:
+            ubiquitous = True
         title = cdata+' '+rdata
         mstr = '\(.*\) *(?P<args>.*) *\((?P<caller>.*)\+.* arg1=(?P<ret>.*)'
         m = re.match(mstr, title)
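The placement logic above tries three homes for each kprobe hit: a device callback whose span fully contains it ('device' mode), then an existing thread row that gets stretched to fit ('thread' mode), and finally a brand-new kthread-<pid> row followed by a retry. A sketch of the strict containment test used in 'device' mode:

    def inside(start, end, devS, devE):
        # 'device' mode: the call must lie entirely within [devS, devE)
        return not (start < devS or start >= devE or end <= devS or end > devE)

    print(inside(1.2, 1.3, 1.0, 2.0))  # True: fully inside the callback
    print(inside(0.9, 1.3, 1.0, 2.0))  # False: starts before it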
@@ -777,14 +874,81 @@ class Data:
                 r = ''
             else:
                 r = 'ret=%s ' % r
-        l = '%0.3fms' % ((end - start) * 1000)
-        if kprobename in self.dev_ubiquitous:
-            title = '%s(%s) <- %s, %s(%s)' % (displayname, a, c, r, l)
-        else:
-            title = '%s(%s) %s(%s)' % (displayname, a, r, l)
-        e = TraceEvent(title, kprobename, start, end - start)
+        if ubiquitous and c in dtf and 'ub' in dtf[c]:
+            return False
+        color = sysvals.kprobeColor(kprobename)
+        e = DevFunction(displayname, a, c, r, start, end, ubiquitous, proc, pid, color)
         tgtdev['src'].append(e)
         return True
+    def overflowDevices(self):
+        # get a list of devices that extend beyond the end of this test run
+        devlist = []
+        for phase in self.phases:
+            list = self.dmesg[phase]['list']
+            for devname in list:
+                dev = list[devname]
+                if dev['end'] > self.end:
+                    devlist.append(dev)
+        return devlist
+    def mergeOverlapDevices(self, devlist):
+        # merge any devices that overlap devlist
+        for dev in devlist:
+            devname = dev['name']
+            for phase in self.phases:
+                list = self.dmesg[phase]['list']
+                if devname not in list:
+                    continue
+                tdev = list[devname]
+                o = min(dev['end'], tdev['end']) - max(dev['start'], tdev['start'])
+                if o <= 0:
+                    continue
+                dev['end'] = tdev['end']
+                if 'src' not in dev or 'src' not in tdev:
+                    continue
+                dev['src'] += tdev['src']
+                del list[devname]
+    def usurpTouchingThread(self, name, dev):
+        # the caller test has priority of this thread, give it to him
+        for phase in self.phases:
+            list = self.dmesg[phase]['list']
+            if name in list:
+                tdev = list[name]
+                if tdev['start'] - dev['end'] < 0.1:
+                    dev['end'] = tdev['end']
+                    if 'src' not in dev:
+                        dev['src'] = []
+                    if 'src' in tdev:
+                        dev['src'] += tdev['src']
+                    del list[name]
+                break
+    def stitchTouchingThreads(self, testlist):
+        # merge any threads between tests that touch
+        for phase in self.phases:
+            list = self.dmesg[phase]['list']
+            for devname in list:
+                dev = list[devname]
+                if 'htmlclass' not in dev or 'kth' not in dev['htmlclass']:
+                    continue
+                for data in testlist:
+                    data.usurpTouchingThread(devname, dev)
+    def optimizeDevSrc(self):
+        # merge any src call loops to reduce timeline size
+        for phase in self.phases:
+            list = self.dmesg[phase]['list']
+            for dev in list:
+                if 'src' not in list[dev]:
+                    continue
+                src = list[dev]['src']
+                p = 0
+                for e in sorted(src, key=lambda event: event.time):
+                    if not p or not e.repeat(p):
+                        p = e
+                        continue
+                    # e is another iteration of p, move it into p
+                    p.end = e.end
+                    p.length = p.end - p.time
+                    p.count += 1
+                    src.remove(e)
     def trimTimeVal(self, t, t0, dT, left):
         if left:
             if(t > t0):
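optimizeDevSrc() collapses back-to-back iterations of the same call into a single entry with a count, but only when DevFunction.repeat() (defined further down) agrees: every call attribute must match, the gap to the previous call may not exceed callloopmaxgap (0.1 ms), and the current call must stay under callloopmaxlen (5 ms). A numeric sketch of just the timing test:

    callloopmaxgap = 0.0001   # seconds, from SystemValues above
    callloopmaxlen = 0.005

    def timing_ok(prev_end, cur_start, cur_len):
        dt = cur_start - prev_end
        return 0 <= dt <= callloopmaxgap and cur_len < callloopmaxlen

    print(timing_ok(1.0, 1.00005, 0.001))  # True: 0.05 ms gap
    print(timing_ok(1.0, 1.001, 0.001))    # False: 1 ms gap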
@@ -804,6 +968,8 @@
         self.tSuspended = self.trimTimeVal(self.tSuspended, t0, dT, left)
         self.tResumed = self.trimTimeVal(self.tResumed, t0, dT, left)
         self.start = self.trimTimeVal(self.start, t0, dT, left)
+        self.tKernSus = self.trimTimeVal(self.tKernSus, t0, dT, left)
+        self.tKernRes = self.trimTimeVal(self.tKernRes, t0, dT, left)
         self.end = self.trimTimeVal(self.end, t0, dT, left)
         for phase in self.phases:
             p = self.dmesg[phase]

@@ -832,36 +998,6 @@
         else:
             self.trimTime(self.tSuspended, \
                 self.tResumed-self.tSuspended, False)
-    def newPhaseWithSingleAction(self, phasename, devname, start, end, color):
-        for phase in self.phases:
-            self.dmesg[phase]['order'] += 1
-        self.html_device_id += 1
-        devid = '%s%d' % (self.idstr, self.html_device_id)
-        list = dict()
-        list[devname] = \
-            {'start': start, 'end': end, 'pid': 0, 'par': '',
-            'length': (end-start), 'row': 0, 'id': devid, 'drv': '' };
-        self.dmesg[phasename] = \
-            {'list': list, 'start': start, 'end': end,
-            'row': 0, 'color': color, 'order': 0}
-        self.phases = self.sortedPhases()
-    def newPhase(self, phasename, start, end, color, order):
-        if(order < 0):
-            order = len(self.phases)
-        for phase in self.phases[order:]:
-            self.dmesg[phase]['order'] += 1
-        if(order > 0):
-            p = self.phases[order-1]
-            self.dmesg[p]['end'] = start
-        if(order < len(self.phases)):
-            p = self.phases[order]
-            self.dmesg[p]['start'] = end
-        list = dict()
-        self.dmesg[phasename] = \
-            {'list': list, 'start': start, 'end': end,
-            'row': 0, 'color': color, 'order': order}
-        self.phases = self.sortedPhases()
-        self.devicegroups.append([phasename])
     def setPhase(self, phase, ktime, isbegin):
         if(isbegin):
             self.dmesg[phase]['start'] = ktime

@@ -881,7 +1017,7 @@ class Data:
         for t in sorted(tmp):
             slist.append(tmp[t])
         return slist
-    def fixupInitcalls(self, phase, end):
+    def fixupInitcalls(self, phase):
         # if any calls never returned, clip them at system resume end
         phaselist = self.dmesg[phase]['list']
         for devname in phaselist:

@@ -893,37 +1029,23 @@ class Data:
                 break
             vprint('%s (%s): callback didnt return' % (devname, phase))
     def deviceFilter(self, devicefilter):
-        # remove all by the relatives of the filter devnames
-        filter = []
-        for phase in self.phases:
-            list = self.dmesg[phase]['list']
-            for name in devicefilter:
-                dev = name
-                while(dev in list):
-                    if(dev not in filter):
-                        filter.append(dev)
-                    dev = list[dev]['par']
-                children = self.deviceDescendants(name, phase)
-                for dev in children:
-                    if(dev not in filter):
-                        filter.append(dev)
         for phase in self.phases:
             list = self.dmesg[phase]['list']
             rmlist = []
             for name in list:
-                pid = list[name]['pid']
-                if(name not in filter and pid >= 0):
+                keep = False
+                for filter in devicefilter:
+                    if filter in name or \
+                        ('drv' in list[name] and filter in list[name]['drv']):
+                        keep = True
+                if not keep:
                     rmlist.append(name)
             for name in rmlist:
                 del list[name]
     def fixupInitcallsThatDidntReturn(self):
         # if any calls never returned, clip them at system resume end
         for phase in self.phases:
-            self.fixupInitcalls(phase, self.getEnd())
-    def isInsideTimeline(self, start, end):
-        if(self.start <= start and self.end > start):
-            return True
-        return False
+            self.fixupInitcalls(phase)
     def phaseOverlap(self, phases):
         rmgroups = []
         newgroup = []

@@ -940,30 +1062,35 @@ class Data:
             self.devicegroups.remove(group)
         self.devicegroups.append(newgroup)
     def newActionGlobal(self, name, start, end, pid=-1, color=''):
-        # if event starts before timeline start, expand timeline
-        if(start < self.start):
-            self.setStart(start)
-        # if event ends after timeline end, expand the timeline
-        if(end > self.end):
-            self.setEnd(end)
-        # which phase is this device callback or action "in"
-        targetphase = "none"
+        # which phase is this device callback or action in
+        targetphase = 'none'
         htmlclass = ''
         overlap = 0.0
         phases = []
         for phase in self.phases:
             pstart = self.dmesg[phase]['start']
             pend = self.dmesg[phase]['end']
+            # see if the action overlaps this phase
             o = max(0, min(end, pend) - max(start, pstart))
             if o > 0:
                 phases.append(phase)
+            # set the target phase to the one that overlaps most
             if o > overlap:
                 if overlap > 0 and phase == 'post_resume':
                     continue
                 targetphase = phase
                 overlap = o
+        # if no target phase was found, pin it to the edge
+        if targetphase == 'none':
+            p0start = self.dmesg[self.phases[0]]['start']
+            if start <= p0start:
+                targetphase = self.phases[0]
+            else:
+                targetphase = self.phases[-1]
         if pid == -2:
             htmlclass = ' bg'
+        elif pid == -3:
+            htmlclass = ' ps'
         if len(phases) > 1:
             htmlclass = ' bg'
             self.phaseOverlap(phases)

@@ -985,29 +1112,13 @@ class Data:
         while(name in list):
             name = '%s[%d]' % (origname, i)
             i += 1
-        list[name] = {'start': start, 'end': end, 'pid': pid, 'par': parent,
-            'length': length, 'row': 0, 'id': devid, 'drv': drv }
+        list[name] = {'name': name, 'start': start, 'end': end, 'pid': pid,
+            'par': parent, 'length': length, 'row': 0, 'id': devid, 'drv': drv }
         if htmlclass:
             list[name]['htmlclass'] = htmlclass
         if color:
             list[name]['color'] = color
         return name
-    def deviceIDs(self, devlist, phase):
-        idlist = []
-        list = self.dmesg[phase]['list']
-        for devname in list:
-            if devname in devlist:
-                idlist.append(list[devname]['id'])
-        return idlist
-    def deviceParentID(self, devname, phase):
-        pdev = ''
-        pdevid = ''
-        list = self.dmesg[phase]['list']
-        if devname in list:
-            pdev = list[devname]['par']
-        if pdev in list:
-            return list[pdev]['id']
-        return pdev
     def deviceChildren(self, devname, phase):
         devlist = []
         list = self.dmesg[phase]['list']

@@ -1015,21 +1126,15 @@ class Data:
             if(list[child]['par'] == devname):
                 devlist.append(child)
         return devlist
-    def deviceDescendants(self, devname, phase):
-        children = self.deviceChildren(devname, phase)
-        family = children
-        for child in children:
-            family += self.deviceDescendants(child, phase)
-        return family
-    def deviceChildrenIDs(self, devname, phase):
-        devlist = self.deviceChildren(devname, phase)
-        return self.deviceIDs(devlist, phase)
     def printDetails(self):
+        vprint('Timeline Details:')
         vprint('          test start: %f' % self.start)
+        vprint('kernel suspend start: %f' % self.tKernSus)
         for phase in self.phases:
             dc = len(self.dmesg[phase]['list'])
             vprint('    %16s: %f - %f (%d devices)' % (phase, \
                 self.dmesg[phase]['start'], self.dmesg[phase]['end'], dc))
+        vprint('   kernel resume end: %f' % self.tKernRes)
         vprint('            test end: %f' % self.end)
     def deviceChildrenAllPhases(self, devname):
         devlist = []

@@ -1108,21 +1213,134 @@ class Data:
                 if width != '0.000000' and length >= mindevlen:
                     devlist.append(dev)
             self.tdevlist[phase] = devlist
+    def addHorizontalDivider(self, devname, devend):
+        phase = 'suspend_prepare'
+        self.newAction(phase, devname, -2, '', \
+            self.start, devend, '', ' sec', '')
+        if phase not in self.tdevlist:
+            self.tdevlist[phase] = []
+        self.tdevlist[phase].append(devname)
+        d = DevItem(0, phase, self.dmesg[phase]['list'][devname])
+        return d
+    def addProcessUsageEvent(self, name, times):
+        # get the start and end times for this process
+        maxC = 0
+        tlast = 0
+        start = -1
+        end = -1
+        for t in sorted(times):
+            if tlast == 0:
+                tlast = t
+                continue
+            if name in self.pstl[t]:
+                if start == -1 or tlast < start:
+                    start = tlast
+                if end == -1 or t > end:
+                    end = t
+            tlast = t
+        if start == -1 or end == -1:
+            return 0
+        # add a new action for this process and get the object
+        out = self.newActionGlobal(name, start, end, -3)
+        if not out:
+            return 0
+        phase, devname = out
+        dev = self.dmesg[phase]['list'][devname]
+        # get the cpu exec data
+        tlast = 0
+        clast = 0
+        cpuexec = dict()
+        for t in sorted(times):
+            if tlast == 0 or t <= start or t > end:
+                tlast = t
+                continue
+            list = self.pstl[t]
+            c = 0
+            if name in list:
+                c = list[name]
+            if c > maxC:
+                maxC = c
+            if c != clast:
+                key = (tlast, t)
+                cpuexec[key] = c
+            tlast = t
+            clast = c
+        dev['cpuexec'] = cpuexec
+        return maxC
+    def createProcessUsageEvents(self):
+        # get an array of process names
+        proclist = []
+        for t in self.pstl:
+            pslist = self.pstl[t]
+            for ps in pslist:
+                if ps not in proclist:
+                    proclist.append(ps)
+        # get a list of data points for suspend and resume
+        tsus = []
+        tres = []
+        for t in sorted(self.pstl):
+            if t < self.tSuspended:
+                tsus.append(t)
+            else:
+                tres.append(t)
+        # process the events for suspend and resume
+        if len(proclist) > 0:
+            vprint('Process Execution:')
+        for ps in proclist:
+            c = self.addProcessUsageEvent(ps, tsus)
+            if c > 0:
+                vprint('%25s (sus): %d' % (ps, c))
+            c = self.addProcessUsageEvent(ps, tres)
+            if c > 0:
+                vprint('%25s (res): %d' % (ps, c))

-# Class: TraceEvent
+# Class: DevFunction
 # Description:
-#    A container for trace event data found in the ftrace file
-class TraceEvent:
-    text = ''
-    time = 0.0
-    length = 0.0
-    title = ''
+#    A container for kprobe function data we want in the dev timeline
+class DevFunction:
     row = 0
-    def __init__(self, a, n, t, l):
-        self.title = a
-        self.text = n
-        self.time = t
-        self.length = l
+    count = 1
+    def __init__(self, name, args, caller, ret, start, end, u, proc, pid, color):
+        self.name = name
+        self.args = args
+        self.caller = caller
+        self.ret = ret
+        self.time = start
+        self.length = end - start
+        self.end = end
+        self.ubiquitous = u
+        self.proc = proc
+        self.pid = pid
+        self.color = color
+    def title(self):
+        cnt = ''
+        if self.count > 1:
+            cnt = '(x%d)' % self.count
+        l = '%0.3fms' % (self.length * 1000)
+        if self.ubiquitous:
+            title = '%s(%s)%s <- %s, %s(%s)' % \
+                (self.name, self.args, cnt, self.caller, self.ret, l)
+        else:
+            title = '%s(%s) %s%s(%s)' % (self.name, self.args, self.ret, cnt, l)
+        return title.replace('"', '')
+    def text(self):
+        if self.count > 1:
+            text = '%s(x%d)' % (self.name, self.count)
+        else:
+            text = self.name
+        return text
+    def repeat(self, tgt):
+        # is the tgt call just a repeat of this call (e.g. are we in a loop)
+        dt = self.time - tgt.end
+        # only combine calls if -all- attributes are identical
+        if tgt.caller == self.caller and \
+            tgt.name == self.name and tgt.args == self.args and \
+            tgt.proc == self.proc and tgt.pid == self.pid and \
+            tgt.ret == self.ret and dt >= 0 and \
+            dt <= sysvals.callloopmaxgap and \
+            self.length < sysvals.callloopmaxlen:
+            return True
+        return False

 # Class: FTraceLine
 # Description:
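The process-usage code above hangs off Data.pstl, a dict mapping each 'ps' trace-marker timestamp to the processes that consumed CPU since the previous sample (jiffy deltas, produced by the ProcessMonitor class further down). A sketch of the structure with made-up values, showing how a row's span and peak height fall out of it:

    pstl = {   # fabricated sample data
        100.001: {'kworker/0:1-371': 2, 'rtcwake-1892': 1},
        100.011: {'kworker/0:1-371': 3},
        100.021: {'rtcwake-1892': 2},
    }
    name = 'kworker/0:1-371'
    times = sorted(t for t in pstl if name in pstl[t])
    peak = max(pstl[t][name] for t in times)
    print('%f %f %d' % (times[0], times[-1], peak))  # 100.001000 100.011000 3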
@@ -1226,7 +1444,6 @@ class FTraceLine:
             print('%s -- %f (%02d): %s() { (%.3f us)' % (dev, self.time, \
                 self.depth, self.name, self.length*1000000))
     def startMarker(self):
-        global sysvals
         # Is this the starting line of a suspend?
         if not self.fevent:
             return False

@@ -1506,6 +1723,16 @@ class FTraceCallGraph:
                 l.depth, l.name, l.length*1000000))
         print(' ')

+class DevItem:
+    def __init__(self, test, phase, dev):
+        self.test = test
+        self.phase = phase
+        self.dev = dev
+    def isa(self, cls):
+        if 'htmlclass' in self.dev and cls in self.dev['htmlclass']:
+            return True
+        return False
+
 # Class: Timeline
 # Description:
 #    A container for a device timeline which calculates

@@ -1517,12 +1744,11 @@ class Timeline:
     rowH = 30       # device row height
     bodyH = 0       # body height
     rows = 0        # total timeline rows
-    phases = []
-    rowmaxlines = dict()
-    rowcount = dict()
+    rowlines = dict()
     rowheight = dict()
-    def __init__(self, rowheight):
+    def __init__(self, rowheight, scaleheight):
         self.rowH = rowheight
+        self.scaleH = scaleheight
         self.html = {
             'header': '',
             'timeline': '',

@@ -1537,21 +1763,19 @@ class Timeline:
     #    The total number of rows needed to display this phase of the timeline
     def getDeviceRows(self, rawlist):
         # clear all rows and set them to undefined
-        lendict = dict()
+        sortdict = dict()
         for item in rawlist:
             item.row = -1
-            lendict[item] = item.length
-        list = []
-        for i in sorted(lendict, key=lendict.get, reverse=True):
-            list.append(i)
-        remaining = len(list)
+            sortdict[item] = item.length
+        sortlist = sorted(sortdict, key=sortdict.get, reverse=True)
+        remaining = len(sortlist)
         rowdata = dict()
         row = 1
         # try to pack each row with as many ranges as possible
         while(remaining > 0):
             if(row not in rowdata):
                 rowdata[row] = []
-            for i in list:
+            for i in sortlist:
                 if(i.row >= 0):
                     continue
                 s = i.time

@@ -1575,81 +1799,86 @@ class Timeline:
     #    Organize the timeline entries into the smallest
     #    number of rows possible, with no entry overlapping
     # Arguments:
-    #    list: the list of devices/actions for a single phase
-    #    devlist: string list of device names to use
+    #    devlist: the list of devices/actions in a group of contiguous phases
     # Output:
     #    The total number of rows needed to display this phase of the timeline
-    def getPhaseRows(self, dmesg, devlist):
+    def getPhaseRows(self, devlist, row=0):
         # clear all rows and set them to undefined
         remaining = len(devlist)
         rowdata = dict()
-        row = 0
-        lendict = dict()
+        sortdict = dict()
         myphases = []
+        # initialize all device rows to -1 and calculate devrows
         for item in devlist:
-            if item[0] not in self.phases:
-                self.phases.append(item[0])
-            if item[0] not in myphases:
-                myphases.append(item[0])
-                self.rowmaxlines[item[0]] = dict()
-                self.rowheight[item[0]] = dict()
-            dev = dmesg[item[0]]['list'][item[1]]
+            dev = item.dev
+            tp = (item.test, item.phase)
+            if tp not in myphases:
+                myphases.append(tp)
             dev['row'] = -1
-            lendict[item] = float(dev['end']) - float(dev['start'])
+            # sort by length 1st, then name 2nd
+            sortdict[item] = (float(dev['end']) - float(dev['start']), item.dev['name'])
             if 'src' in dev:
                 dev['devrows'] = self.getDeviceRows(dev['src'])
-        lenlist = []
-        for i in sorted(lendict, key=lendict.get, reverse=True):
-            lenlist.append(i)
+        # sort the devlist by length so that large items graph on top
+        sortlist = sorted(sortdict, key=sortdict.get, reverse=True)
         orderedlist = []
-        for item in lenlist:
-            dev = dmesg[item[0]]['list'][item[1]]
-            if dev['pid'] == -2:
+        for item in sortlist:
+            if item.dev['pid'] == -2:
                 orderedlist.append(item)
-        for item in lenlist:
+        for item in sortlist:
             if item not in orderedlist:
                 orderedlist.append(item)
-        # try to pack each row with as many ranges as possible
+        # try to pack each row with as many devices as possible
         while(remaining > 0):
             rowheight = 1
             if(row not in rowdata):
                 rowdata[row] = []
             for item in orderedlist:
-                dev = dmesg[item[0]]['list'][item[1]]
+                dev = item.dev
                 if(dev['row'] < 0):
                     s = dev['start']
                     e = dev['end']
                     valid = True
                     for ritem in rowdata[row]:
-                        rs = ritem['start']
-                        re = ritem['end']
+                        rs = ritem.dev['start']
+                        re = ritem.dev['end']
                         if(not (((s <= rs) and (e <= rs)) or
                             ((s >= re) and (e >= re)))):
                             valid = False
                             break
                     if(valid):
-                        rowdata[row].append(dev)
+                        rowdata[row].append(item)
                         dev['row'] = row
                         remaining -= 1
                         if 'devrows' in dev and dev['devrows'] > rowheight:
                             rowheight = dev['devrows']
-            for phase in myphases:
-                self.rowmaxlines[phase][row] = rowheight
-                self.rowheight[phase][row] = rowheight * self.rowH
+            for t, p in myphases:
+                if t not in self.rowlines or t not in self.rowheight:
+                    self.rowlines[t] = dict()
+                    self.rowheight[t] = dict()
+                if p not in self.rowlines[t] or p not in self.rowheight[t]:
+                    self.rowlines[t][p] = dict()
+                    self.rowheight[t][p] = dict()
+                rh = self.rowH
+                # section headers should use a different row height
+                if len(rowdata[row]) == 1 and \
+                    'htmlclass' in rowdata[row][0].dev and \
+                    'sec' in rowdata[row][0].dev['htmlclass']:
+                    rh = 15
+                self.rowlines[t][p][row] = rowheight
+                self.rowheight[t][p][row] = rowheight * rh
             row += 1
         if(row > self.rows):
             self.rows = int(row)
-        for phase in myphases:
-            self.rowcount[phase] = row
         return row
-    def phaseRowHeight(self, phase, row):
-        return self.rowheight[phase][row]
-    def phaseRowTop(self, phase, row):
+    def phaseRowHeight(self, test, phase, row):
+        return self.rowheight[test][phase][row]
+    def phaseRowTop(self, test, phase, row):
         top = 0
-        for i in sorted(self.rowheight[phase]):
+        for i in sorted(self.rowheight[test][phase]):
             if i >= row:
                 break
-            top += self.rowheight[phase][i]
+            top += self.rowheight[test][phase][i]
         return top
     # Function: calcTotalRows
     # Description:

@@ -1657,19 +1886,21 @@ class Timeline:
     def calcTotalRows(self):
         maxrows = 0
         standardphases = []
-        for phase in self.phases:
-            total = 0
-            for i in sorted(self.rowmaxlines[phase]):
-                total += self.rowmaxlines[phase][i]
-            if total > maxrows:
-                maxrows = total
-            if total == self.rowcount[phase]:
-                standardphases.append(phase)
+        for t in self.rowlines:
+            for p in self.rowlines[t]:
+                total = 0
+                for i in sorted(self.rowlines[t][p]):
+                    total += self.rowlines[t][p][i]
+                if total > maxrows:
+                    maxrows = total
+                if total == len(self.rowlines[t][p]):
+                    standardphases.append((t, p))
         self.height = self.scaleH + (maxrows*self.rowH)
         self.bodyH = self.height - self.scaleH
-        for phase in standardphases:
-            for i in sorted(self.rowheight[phase]):
-                self.rowheight[phase][i] = self.bodyH/self.rowcount[phase]
+        # if there is 1 line per row, draw them the standard way
+        for t, p in standardphases:
+            for i in sorted(self.rowheight[t][p]):
+                self.rowheight[t][p][i] = self.bodyH/len(self.rowlines[t][p])
     # Function: createTimeScale
     # Description:
     #    Create the timescale for a timeline block

@@ -1716,7 +1947,6 @@
 #    A list of values describing the properties of these test runs
 class TestProps:
     stamp = ''
-    tracertype = ''
     S0i3 = False
     fwdata = []
     ftrace_line_fmt_fg = \

@@ -1734,14 +1964,13 @@ class TestProps:
     def __init__(self):
         self.ktemp = dict()
     def setTracerType(self, tracer):
-        self.tracertype = tracer
         if(tracer == 'function_graph'):
             self.cgformat = True
             self.ftrace_line_fmt = self.ftrace_line_fmt_fg
         elif(tracer == 'nop'):
             self.ftrace_line_fmt = self.ftrace_line_fmt_nop
         else:
-            doError('Invalid tracer format: [%s]' % tracer, False)
+            doError('Invalid tracer format: [%s]' % tracer)

 # Class: TestRun
 # Description:

@@ -1756,6 +1985,51 @@ class TestRun:
         self.ftemp = dict()
         self.ttemp = dict()

+class ProcessMonitor:
+    proclist = dict()
+    running = False
+    def procstat(self):
+        c = ['cat /proc/[1-9]*/stat 2>/dev/null']
+        process = Popen(c, shell=True, stdout=PIPE)
+        running = dict()
+        for line in process.stdout:
+            data = line.split()
+            pid = data[0]
+            name = re.sub('[()]', '', data[1])
+            user = int(data[13])
+            kern = int(data[14])
+            kjiff = ujiff = 0
+            if pid not in self.proclist:
+                self.proclist[pid] = {'name' : name, 'user' : user, 'kern' : kern}
+            else:
+                val = self.proclist[pid]
+                ujiff = user - val['user']
+                kjiff = kern - val['kern']
+                val['user'] = user
+                val['kern'] = kern
+            if ujiff > 0 or kjiff > 0:
+                running[pid] = ujiff + kjiff
+        result = process.wait()
+        out = ''
+        for pid in running:
+            jiffies = running[pid]
+            val = self.proclist[pid]
+            if out:
+                out += ','
+            out += '%s-%s %d' % (val['name'], pid, jiffies)
+        return 'ps - '+out
+    def processMonitor(self, tid):
+        while self.running:
+            out = self.procstat()
+            if out:
+                sysvals.fsetVal(out, 'trace_marker')
+    def start(self):
+        self.thread = Thread(target=self.processMonitor, args=(0,))
+        self.running = True
+        self.thread.start()
+    def stop(self):
+        self.running = False
+
 # ----------------- FUNCTIONS --------------------
 # Function: vprint
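ProcessMonitor.procstat() above samples every /proc/<pid>/stat once per loop; after split(), index 1 is the parenthesized command name and indices 13 and 14 are utime/stime in clock ticks, so only the deltas between samples get reported. A sketch of parsing one such line (the line itself is fabricated, and like the original this breaks on command names containing spaces):

    import re
    line = '371 (kworker/0:1) S 2 0 0 0 -1 69238880 0 0 0 0 5 11 0 0 20 0 1 0 52'
    data = line.split()
    pid = data[0]
    name = re.sub('[()]', '', data[1])         # kworker/0:1
    user, kern = int(data[13]), int(data[14])  # utime/stime in jiffies
    print('%s %s %d' % (pid, name, user + kern))  # 371 kworker/0:1 16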
@@ -1764,7 +2038,7 @@ class TestRun:
 # Arguments:
 #    msg: the debug/log message to print
 def vprint(msg):
-    global sysvals
+    sysvals.logmsg += msg+'\n'
     if(sysvals.verbose):
         print(msg)

@@ -1775,8 +2049,6 @@ def vprint(msg):
 # Arguments:
 #    m: the valid re.match output for the stamp line
 def parseStamp(line, data):
-    global sysvals
     m = re.match(sysvals.stampfmt, line)
     data.stamp = {'time': '', 'host': '', 'mode': ''}
     dt = datetime(int(m.group('y'))+2000, int(m.group('m')),

@@ -1788,6 +2060,14 @@ def parseStamp(line, data):
     data.stamp['kernel'] = m.group('kernel')
     sysvals.hostname = data.stamp['host']
     sysvals.suspendmode = data.stamp['mode']
+    if sysvals.suspendmode == 'command' and sysvals.ftracefile != '':
+        modes = ['on', 'freeze', 'standby', 'mem']
+        out = Popen(['grep', 'suspend_enter', sysvals.ftracefile],
+            stderr=PIPE, stdout=PIPE).stdout.read()
+        m = re.match('.* suspend_enter\[(?P<mode>.*)\]', out)
+        if m and m.group('mode') in ['1', '2', '3']:
+            sysvals.suspendmode = modes[int(m.group('mode'))]
+            data.stamp['mode'] = sysvals.suspendmode
     if not sysvals.stamp:
         sysvals.stamp = data.stamp
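The block above recovers the real suspend mode of a 'command' run by grepping the ftrace log for the suspend_enter event: the bracketed state number indexes the modes list, so suspend_enter[3] resolves to 'mem'. A sketch against a fabricated trace line:

    import re
    modes = ['on', 'freeze', 'standby', 'mem']  # indexed by kernel suspend state
    out = ' rtcwake-1892  [002] ....   100.001: suspend_resume: suspend_enter[3] begin'
    m = re.match('.* suspend_enter\[(?P<mode>.*)\]', out)
    if m and m.group('mode') in ['1', '2', '3']:
        print(modes[int(m.group('mode'))])      # mem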
@@ -1817,18 +2097,17 @@ def diffStamp(stamp1, stamp2):
 # required for primary parsing. Set the usetraceevents and/or
 # usetraceeventsonly flags in the global sysvals object
 def doesTraceLogHaveTraceEvents():
-    global sysvals
     # check for kprobes
     sysvals.usekprobes = False
-    out = os.system('grep -q "_cal: (" '+sysvals.ftracefile)
+    out = call('grep -q "_cal: (" '+sysvals.ftracefile, shell=True)
     if(out == 0):
         sysvals.usekprobes = True
     # check for callgraph data on trace event blocks
-    out = os.system('grep -q "_cpu_down()" '+sysvals.ftracefile)
+    out = call('grep -q "_cpu_down()" '+sysvals.ftracefile, shell=True)
     if(out == 0):
         sysvals.usekprobes = True
-    out = os.popen('head -1 '+sysvals.ftracefile).read().replace('\n', '')
+    out = Popen(['head', '-1', sysvals.ftracefile],
+        stderr=PIPE, stdout=PIPE).stdout.read().replace('\n', '')
     m = re.match(sysvals.stampfmt, out)
     if m and m.group('mode') == 'command':
         sysvals.usetraceeventsonly = True

@@ -1838,14 +2117,14 @@ def doesTraceLogHaveTraceEvents():
         sysvals.usetraceeventsonly = True
     sysvals.usetraceevents = False
     for e in sysvals.traceevents:
-        out = os.system('grep -q "'+e+': " '+sysvals.ftracefile)
+        out = call('grep -q "'+e+': " '+sysvals.ftracefile, shell=True)
         if(out != 0):
             sysvals.usetraceeventsonly = False
         if(e == 'suspend_resume' and out == 0):
             sysvals.usetraceevents = True
     # determine is this log is properly formatted
     for e in ['SUSPEND START', 'RESUME COMPLETE']:
-        out = os.system('grep -q "'+e+'" '+sysvals.ftracefile)
+        out = call('grep -q "'+e+'" '+sysvals.ftracefile, shell=True)
         if(out != 0):
             sysvals.usetracemarkers = False
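All of these checks lean on grep -q: subprocess.call() returns the command's exit status, which is 0 exactly when the pattern occurs in the file, so the log never has to be read into Python. A standalone sketch (the filename is hypothetical):

    from subprocess import call

    def log_has(pattern, path='suspend_ftrace.txt'):
        # grep -q prints nothing; the exit status is the answer
        return call('grep -q "%s" %s' % (pattern, path), shell=True) == 0

    print(log_has('SUSPEND START'))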
@@ -1860,8 +2139,6 @@ def doesTraceLogHaveTraceEvents():
 # Arguments:
 #	 testruns: the array of Data objects obtained from parseKernelLog
 def appendIncompleteTraceLog(testruns):
-	global sysvals
 	# create TestRun vessels for ftrace parsing
 	testcnt = len(testruns)
 	testidx = 0
@@ -2052,7 +2329,6 @@ def appendIncompleteTraceLog(testruns):
 				dev['ftrace'] = cg
 			break
-	if(sysvals.verbose):
-		test.data.printDetails()
+	test.data.printDetails()
 # Function: parseTraceLog
@@ -2064,14 +2340,12 @@ def appendIncompleteTraceLog(testruns):
 # Output:
 #	 An array of Data objects
 def parseTraceLog():
-	global sysvals
 	vprint('Analyzing the ftrace data...')
 	if(os.path.exists(sysvals.ftracefile) == False):
-		doError('%s does not exist' % sysvals.ftracefile, False)
+		doError('%s does not exist' % sysvals.ftracefile)
 	sysvals.setupAllKprobes()
-	tracewatch = ['suspend_enter']
+	tracewatch = []
 	if sysvals.usekprobes:
 		tracewatch += ['sync_filesystems', 'freeze_processes', 'syscore_suspend',
 			'syscore_resume', 'resume_console', 'thaw_processes', 'CPU_ON', 'CPU_OFF']
@@ -2102,17 +2376,13 @@ def parseTraceLog():
 		if(m):
 			tp.setTracerType(m.group('t'))
 			continue
-		# post resume time line: did this test run include post-resume data
-		m = re.match(sysvals.postresumefmt, line)
-		if(m):
-			t = int(m.group('t'))
-			if(t > 0):
-				sysvals.postresumetime = t
-			continue
 		# device properties line
 		if(re.match(sysvals.devpropfmt, line)):
 			devProps(line)
 			continue
+		# ignore all other commented lines
+		if line[0] == '#':
+			continue
 		# ftrace line: parse only valid lines
 		m = re.match(tp.ftrace_line_fmt, line)
 		if(not m):
@@ -2142,20 +2412,36 @@ def parseTraceLog():
 				testrun = TestRun(data)
 				testruns.append(testrun)
 				parseStamp(tp.stamp, data)
-				if len(tp.fwdata) > data.testnumber:
-					data.fwSuspend, data.fwResume = tp.fwdata[data.testnumber]
-					if(data.fwSuspend > 0 or data.fwResume > 0):
-						data.fwValid = True
 				data.setStart(t.time)
+				data.tKernSus = t.time
 				continue
 			if(not data):
 				continue
+			# process cpu exec line
+			if t.type == 'tracing_mark_write':
+				m = re.match(sysvals.procexecfmt, t.name)
+				if(m):
+					proclist = dict()
+					for ps in m.group('ps').split(','):
+						val = ps.split()
+						if not val:
+							continue
+						name = val[0].replace('--', '-')
+						proclist[name] = int(val[1])
+					data.pstl[t.time] = proclist
+					continue
 			# find the end of resume
 			if(t.endMarker()):
-				if(sysvals.usetracemarkers and sysvals.postresumetime > 0):
-					phase = 'post_resume'
-					data.newPhase(phase, t.time, t.time, '#F0F0F0', -1)
 				data.setEnd(t.time)
+				if data.tKernRes == 0.0:
+					data.tKernRes = t.time
+				if data.dmesg['resume_complete']['end'] < 0:
+					data.dmesg['resume_complete']['end'] = t.time
+				if sysvals.suspendmode == 'mem' and len(tp.fwdata) > data.testnumber:
+					data.fwSuspend, data.fwResume = tp.fwdata[data.testnumber]
+					if(data.tSuspended != 0 and data.tResumed != 0 and \
+						(data.fwSuspend > 0 or data.fwResume > 0)):
+						data.fwValid = True
 				if(not sysvals.usetracemarkers):
 					# no trace markers? then quit and be sure to finish recording
 					# the event we used to trigger resume end
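The new cpu-exec branch above parses a tracing_mark_write payload into a per-timestamp process list. A standalone sketch of that parse; the pattern stands in for sysvals.procexecfmt (not shown in this diff) and assumes comma-separated "name count" pairs:

    import re

    procexecfmt = 'ps - (?P<ps>.*)$'  # hypothetical stand-in pattern

    def parse_proclist(markline):
    	m = re.match(procexecfmt, markline)
    	if not m:
    		return dict()
    	proclist = dict()
    	for ps in m.group('ps').split(','):
    		val = ps.split()
    		if not val:
    			continue
    		# same normalization as the diff: collapse double dashes
    		name = val[0].replace('--', '-')
    		proclist[name] = int(val[1])
    	return proclist

For instance, parse_proclist('ps - bash-2234 4,kworker-573 1') would yield {'bash-2234': 4, 'kworker-573': 1}.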
@@ -2190,8 +2476,14 @@ def parseTraceLog():
 				if(name.split('[')[0] in tracewatch):
 					continue
 				# -- phase changes --
+				# start of kernel suspend
+				if(re.match('suspend_enter\[.*', t.name)):
+					if(isbegin):
+						data.dmesg[phase]['start'] = t.time
+						data.tKernSus = t.time
+					continue
 				# suspend_prepare start
-				if(re.match('dpm_prepare\[.*', t.name)):
+				elif(re.match('dpm_prepare\[.*', t.name)):
 					phase = 'suspend_prepare'
 					if(not isbegin):
 						data.dmesg[phase]['end'] = t.time
@@ -2291,6 +2583,8 @@ def parseTraceLog():
 					p = m.group('p')
 					if(n and p):
 						data.newAction(phase, n, pid, p, t.time, -1, drv)
+						if pid not in data.devpids:
+							data.devpids.append(pid)
 				# device callback finish
 				elif(t.type == 'device_pm_callback_end'):
 					m = re.match('(?P<drv>.*) (?P<d>.*), err.*', t.name);
@@ -2332,6 +2626,12 @@ def parseTraceLog():
 					else:
 						e['end'] = t.time
 						e['rdata'] = kprobedata
+				# end of kernel resume
+				if(kprobename == 'pm_notifier_call_chain' or \
+					kprobename == 'pm_restore_console'):
+					data.dmesg[phase]['end'] = t.time
+					data.tKernRes = t.time
 		# callgraph processing
 		elif sysvals.usecallgraph:
 			# create a callgraph object for the data
@@ -2348,24 +2648,37 @@ def parseTraceLog():
 	if sysvals.suspendmode == 'command':
 		for test in testruns:
 			for p in test.data.phases:
-				if p == 'resume_complete':
+				if p == 'suspend_prepare':
 					test.data.dmesg[p]['start'] = test.data.start
 					test.data.dmesg[p]['end'] = test.data.end
 				else:
-					test.data.dmesg[p]['start'] = test.data.start
-					test.data.dmesg[p]['end'] = test.data.start
-			test.data.tSuspended = test.data.start
-			test.data.tResumed = test.data.start
+					test.data.dmesg[p]['start'] = test.data.end
+					test.data.dmesg[p]['end'] = test.data.end
+			test.data.tSuspended = test.data.end
+			test.data.tResumed = test.data.end
 			test.data.tLow = 0
 			test.data.fwValid = False
-	for test in testruns:
+	# dev source and procmon events can be unreadable with mixed phase height
+	if sysvals.usedevsrc or sysvals.useprocmon:
+		sysvals.mixedphaseheight = False
+	for i in range(len(testruns)):
+		test = testruns[i]
+		data = test.data
+		# find the total time range for this test (begin, end)
+		tlb, tle = data.start, data.end
+		if i < len(testruns) - 1:
+			tle = testruns[i+1].data.start
+		# add the process usage data to the timeline
+		if sysvals.useprocmon:
+			data.createProcessUsageEvents()
 		# add the traceevent data to the device hierarchy
 		if(sysvals.usetraceevents):
 			# add actual trace funcs
 			for name in test.ttemp:
 				for event in test.ttemp[name]:
-					test.data.newActionGlobal(name, event['begin'], event['end'], event['pid'])
+					data.newActionGlobal(name, event['begin'], event['end'], event['pid'])
 		# add the kprobe based virtual tracefuncs as actual devices
 		for key in tp.ktemp:
 			name, pid = key
@@ -2373,24 +2686,20 @@ def parseTraceLog():
 				continue
 			for e in tp.ktemp[key]:
 				kb, ke = e['begin'], e['end']
-				if kb == ke or not test.data.isInsideTimeline(kb, ke):
+				if kb == ke or tlb > kb or tle <= kb:
 					continue
-				test.data.newActionGlobal(e['name'], kb, ke, pid)
+				color = sysvals.kprobeColor(name)
+				data.newActionGlobal(e['name'], kb, ke, pid, color)
 		# add config base kprobes and dev kprobes
+		if sysvals.usedevsrc:
 			for key in tp.ktemp:
 				name, pid = key
-				if name in sysvals.tracefuncs:
+				if name in sysvals.tracefuncs or name not in sysvals.dev_tracefuncs:
 					continue
 				for e in tp.ktemp[key]:
 					kb, ke = e['begin'], e['end']
-					if kb == ke or not test.data.isInsideTimeline(kb, ke):
+					if kb == ke or tlb > kb or tle <= kb:
 						continue
-					color = sysvals.kprobeColor(e['name'])
-					if name not in sysvals.dev_tracefuncs:
-						# config base kprobe
-						test.data.newActionGlobal(e['name'], kb, ke, -2, color)
-					elif sysvals.usedevsrc:
-						# dev kprobe
 					data.addDeviceFunctionCall(e['name'], name, e['proc'], pid, kb,
 						ke, e['cdata'], e['rdata'])
 	if sysvals.usecallgraph:
@@ -2407,7 +2716,7 @@ def parseTraceLog():
 					id+', ignoring this callback')
 				continue
 			# match cg data to devices
-			if sysvals.suspendmode == 'command' or not cg.deviceMatch(pid, test.data):
+			if sysvals.suspendmode == 'command' or not cg.deviceMatch(pid, data):
 				sortkey = '%f%f%d' % (cg.start, cg.end, pid)
 				sortlist[sortkey] = cg
 		# create blocks for orphan cg data
@@ -2416,10 +2725,9 @@ def parseTraceLog():
 			name = cg.list[0].name
 			if sysvals.isCallgraphFunc(name):
 				vprint('Callgraph found for task %d: %.3fms, %s' % (cg.pid, (cg.end - cg.start)*1000, name))
-				cg.newActionFromFunction(test.data)
+				cg.newActionFromFunction(data)
 	if sysvals.suspendmode == 'command':
-		if(sysvals.verbose):
-			for data in testdata:
-				data.printDetails()
+		for data in testdata:
+			data.printDetails()
 		return testdata
@@ -2429,7 +2737,7 @@ def parseTraceLog():
 		lp = data.phases[0]
 		for p in data.phases:
 			if(data.dmesg[p]['start'] < 0 and data.dmesg[p]['end'] < 0):
-				print('WARNING: phase "%s" is missing!' % p)
+				vprint('WARNING: phase "%s" is missing!' % p)
 			if(data.dmesg[p]['start'] < 0):
 				data.dmesg[p]['start'] = data.dmesg[lp]['end']
 				if(p == 'resume_machine'):
@@ -2438,60 +2746,27 @@ def parseTraceLog():
 					data.tLow = 0
 			if(data.dmesg[p]['end'] < 0):
 				data.dmesg[p]['end'] = data.dmesg[p]['start']
+			if(p != lp and not ('machine' in p and 'machine' in lp)):
+				data.dmesg[lp]['end'] = data.dmesg[p]['start']
 			lp = p
 		if(len(sysvals.devicefilter) > 0):
 			data.deviceFilter(sysvals.devicefilter)
 		data.fixupInitcallsThatDidntReturn()
-		if(sysvals.verbose):
-			data.printDetails()
+		if sysvals.usedevsrc:
+			data.optimizeDevSrc()
+		data.printDetails()
+	# x2: merge any overlapping devices between test runs
+	if sysvals.usedevsrc and len(testdata) > 1:
+		tc = len(testdata)
+		for i in range(tc - 1):
+			devlist = testdata[i].overflowDevices()
+			for j in range(i + 1, tc):
+				testdata[j].mergeOverlapDevices(devlist)
+		testdata[0].stitchTouchingThreads(testdata[1:])
 	return testdata
-# Function: loadRawKernelLog
-# Description:
-#	 Load a raw kernel log that wasn't created by this tool, it might be
-#	 possible to extract a valid suspend/resume log
-def loadRawKernelLog(dmesgfile):
-	global sysvals
-	stamp = {'time': '', 'host': '', 'mode': 'mem', 'kernel': ''}
-	stamp['time'] = datetime.now().strftime('%B %d %Y, %I:%M:%S %p')
-	stamp['host'] = sysvals.hostname
-	testruns = []
-	data = 0
-	lf = open(dmesgfile, 'r')
-	for line in lf:
-		line = line.replace('\r\n', '')
-		idx = line.find('[')
-		if idx > 1:
-			line = line[idx:]
-		m = re.match('[ \t]*(\[ *)(?P<ktime>[0-9\.]*)(\]) (?P<msg>.*)', line)
-		if(not m):
-			continue
-		msg = m.group("msg")
-		m = re.match('PM: Syncing filesystems.*', msg)
-		if(m):
-			if(data):
-				testruns.append(data)
-			data = Data(len(testruns))
-			data.stamp = stamp
-		if(data):
-			m = re.match('.* *(?P<k>[0-9]\.[0-9]{2}\.[0-9]-.*) .*', msg)
-			if(m):
-				stamp['kernel'] = m.group('k')
-			m = re.match('PM: Preparing system for (?P<m>.*) sleep', msg)
-			if(m):
-				stamp['mode'] = m.group('m')
-			data.dmesgtext.append(line)
-	if(data):
-		testruns.append(data)
-	sysvals.stamp = stamp
-	sysvals.suspendmode = stamp['mode']
-	lf.close()
-	return testruns
 # Function: loadKernelLog
 # Description:
 #	 [deprecated for kernel 3.15.0 or newer]
@@ -2499,15 +2774,16 @@ def loadKernelLog():
 #	 The dmesg filename is taken from sysvals
 # Output:
 #	 An array of empty Data objects with only their dmesgtext attributes set
-def loadKernelLog():
-	global sysvals
+def loadKernelLog(justtext=False):
 	vprint('Analyzing the dmesg data...')
 	if(os.path.exists(sysvals.dmesgfile) == False):
-		doError('%s does not exist' % sysvals.dmesgfile, False)
+		doError('%s does not exist' % sysvals.dmesgfile)
+	if justtext:
+		dmesgtext = []
 	# there can be multiple test runs in a single file
 	tp = TestProps()
+	tp.stamp = datetime.now().strftime('# suspend-%m%d%y-%H%M%S localhost mem unknown')
 	testruns = []
 	data = 0
 	lf = open(sysvals.dmesgfile, 'r')
@@ -2528,6 +2804,9 @@ def loadKernelLog():
 		if(not m):
 			continue
 		msg = m.group("msg")
+		if justtext:
+			dmesgtext.append(line)
+			continue
 		if(re.match('PM: Syncing filesystems.*', msg)):
 			if(data):
 				testruns.append(data)
@@ -2537,24 +2816,24 @@ def loadKernelLog():
 				data.fwSuspend, data.fwResume = tp.fwdata[data.testnumber]
 				if(data.fwSuspend > 0 or data.fwResume > 0):
 					data.fwValid = True
+		if(re.match('ACPI: resume from mwait', msg)):
+			print('NOTE: This suspend appears to be freeze rather than'+\
+				' %s, it will be treated as such' % sysvals.suspendmode)
+			sysvals.suspendmode = 'freeze'
 		if(not data):
 			continue
+		m = re.match('.* *(?P<k>[0-9]\.[0-9]{2}\.[0-9]-.*) .*', msg)
+		if(m):
+			sysvals.stamp['kernel'] = m.group('k')
+		m = re.match('PM: Preparing system for (?P<m>.*) sleep', msg)
+		if(m):
+			sysvals.stamp['mode'] = sysvals.suspendmode = m.group('m')
 		data.dmesgtext.append(line)
-	if(data):
-		testruns.append(data)
 	lf.close()
-	if(len(testruns) < 1):
-		# bad log, but see if you can extract something meaningful anyway
-		testruns = loadRawKernelLog(sysvals.dmesgfile)
-	if(len(testruns) < 1):
-		doError(' dmesg log is completely unreadable: %s' \
-			% sysvals.dmesgfile, False)
+	if justtext:
+		return dmesgtext
+	if data:
+		testruns.append(data)
+	if len(testruns) < 1:
+		doError(' dmesg log has no suspend/resume data: %s' \
+			% sysvals.dmesgfile)
 	# fix lines with same timestamp/function with the call and return swapped
 	for data in testruns:
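The new justtext path short-circuits the parser and just collects the timestamped dmesg lines. A self-contained sketch of that path (the function name is illustrative):

    import re

    def load_dmesg_text(dmesgfile):
    	# keep only '[ time] message' lines, skipping everything else,
    	# mirroring the justtext early-return added in this hunk
    	dmesgtext = []
    	lf = open(dmesgfile, 'r')
    	for line in lf:
    		line = line.replace('\r\n', '')
    		if re.match('[ \t]*(\[ *)(?P<ktime>[0-9\.]*)(\]) (?P<msg>.*)', line):
    			dmesgtext.append(line)
    	lf.close()
    	return dmesgtext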
@@ -2586,8 +2865,6 @@ def loadKernelLog():
 # Output:
 #	 The filled Data object
 def parseKernelLog(data):
-	global sysvals
 	phase = 'suspend_runtime'
 	if(data.fwValid):
@@ -2645,7 +2922,6 @@ def parseKernelLog(data):
 	prevktime = -1.0
 	actions = dict()
 	for line in data.dmesgtext:
-		# -- preprocessing --
 		# parse each dmesg line into the time and message
 		m = re.match('[ \t]*(\[ *)(?P<ktime>[0-9\.]*)(\]) (?P<msg>.*)', line)
 		if(m):
@@ -2653,8 +2929,6 @@ def parseKernelLog(data):
 			try:
 				ktime = float(val)
 			except:
-				doWarning('INVALID DMESG LINE: '+\
-					line.replace('\n', ''), 'dmesg')
 				continue
 		msg = m.group('msg')
 		# initialize data start to first line time
@@ -2672,12 +2946,12 @@ def parseKernelLog(data):
 				phase = 'resume_noirq'
 				data.dmesg[phase]['start'] = ktime
-		# -- phase changes --
 		# suspend start
 		if(re.match(dm['suspend_prepare'], msg)):
 			phase = 'suspend_prepare'
 			data.dmesg[phase]['start'] = ktime
 			data.setStart(ktime)
+			data.tKernSus = ktime
 		# suspend start
 		elif(re.match(dm['suspend'], msg)):
 			data.dmesg['suspend_prepare']['end'] = ktime
@@ -2734,7 +3008,7 @@ def parseKernelLog(data):
 		elif(re.match(dm['post_resume'], msg)):
 			data.dmesg['resume_complete']['end'] = ktime
 			data.setEnd(ktime)
-			phase = 'post_resume'
+			data.tKernRes = ktime
 			break
 		# -- device callbacks --
@@ -2761,7 +3035,6 @@ def parseKernelLog(data):
 				dev['length'] = int(t)
 				dev['end'] = ktime
-		# -- non-devicecallback actions --
 		# if trace events are not available, these are better than nothing
 		if(not sysvals.usetraceevents):
 			# look for known actions
@@ -2821,7 +3094,6 @@ def parseKernelLog(data):
 		for event in actions[name]:
 			data.newActionGlobal(name, event['begin'], event['end'])
-	if(sysvals.verbose):
-		data.printDetails()
+	data.printDetails()
 	if(len(sysvals.devicefilter) > 0):
 		data.deviceFilter(sysvals.devicefilter)
@@ -2834,8 +3106,6 @@ def parseKernelLog(data):
 # Arguments:
 #	 testruns: array of Data objects from parseTraceLog
 def createHTMLSummarySimple(testruns, htmlfile):
-	global sysvals
 	# print out the basic summary of all the tests
 	hf = open(htmlfile, 'w')
@@ -2960,7 +3230,6 @@ def createHTMLSummarySimple(testruns, htmlfile):
 	hf.close()
 def htmlTitle():
-	global sysvals
 	modename = {
 		'freeze': 'Freeze (S0)',
 		'standby': 'Standby (S1)',
@@ -2993,13 +3262,14 @@ def ordinal(value):
 # Output:
 #	 True if the html file was created, false if it failed
 def createHTML(testruns):
-	global sysvals
 	if len(testruns) < 1:
 		print('ERROR: Not enough test data to build a timeline')
 		return
+	kerror = False
 	for data in testruns:
+		if data.kerror:
+			kerror = True
 		data.normalizeTime(testruns[-1].tSuspended)
 	x2changes = ['', 'absolute']
@@ -3009,53 +3279,59 @@ def createHTML(testruns):
 	headline_version = '<div class="version"><a href="https://01.org/suspendresume">AnalyzeSuspend v%s</a></div>' % sysvals.version
 	headline_stamp = '<div class="stamp">{0} {1} {2} {3}</div>\n'
 	html_devlist1 = '<button id="devlist1" class="devlist" style="float:left;">Device Detail%s</button>' % x2changes[0]
-	html_zoombox = '<center><button id="zoomin">ZOOM IN</button><button id="zoomout">ZOOM OUT</button><button id="zoomdef">ZOOM 1:1</button></center>\n'
+	html_zoombox = '<center><button id="zoomin">ZOOM IN +</button><button id="zoomout">ZOOM OUT -</button><button id="zoomdef">ZOOM 1:1</button></center>\n'
 	html_devlist2 = '<button id="devlist2" class="devlist" style="float:right;">Device Detail2</button>\n'
 	html_timeline = '<div id="dmesgzoombox" class="zoombox">\n<div id="{0}" class="timeline" style="height:{1}px">\n'
-	html_tblock = '<div id="block{0}" class="tblock" style="left:{1}%;width:{2}%;">\n'
+	html_tblock = '<div id="block{0}" class="tblock" style="left:{1}%;width:{2}%;"><div class="tback" style="height:{3}px"></div>\n'
 	html_device = '<div id="{0}" title="{1}" class="thread{7}" style="left:{2}%;top:{3}px;height:{4}px;width:{5}%;{8}">{6}</div>\n'
-	html_traceevent = '<div title="{0}" class="traceevent" style="left:{1}%;top:{2}px;height:{3}px;width:{4}%;line-height:{3}px;">{5}</div>\n'
+	html_error = '<div id="{1}" title="kernel error/warning" class="err" style="right:{0}%">ERROR&rarr;</div>\n'
+	html_traceevent = '<div title="{0}" class="traceevent{6}" style="left:{1}%;top:{2}px;height:{3}px;width:{4}%;line-height:{3}px;{7}">{5}</div>\n'
+	html_cpuexec = '<div class="jiffie" style="left:{0}%;top:{1}px;height:{2}px;width:{3}%;background:{4};"></div>\n'
 	html_phase = '<div class="phase" style="left:{0}%;width:{1}%;top:{2}px;height:{3}px;background-color:{4}">{5}</div>\n'
-	html_phaselet = '<div id="{0}" class="phaselet" style="left:{1}%;width:{2}%;background-color:{3}"></div>\n'
+	html_phaselet = '<div id="{0}" class="phaselet" style="left:{1}%;width:{2}%;background:{3}"></div>\n'
 	html_legend = '<div id="p{3}" class="square" style="left:{0}%;background-color:{1}">&nbsp;{2}</div>\n'
 	html_timetotal = '<table class="time1">\n<tr>'\
-		'<td class="green">{2} Suspend Time: <b>{0} ms</b></td>'\
-		'<td class="yellow">{2} Resume Time: <b>{1} ms</b></td>'\
+		'<td class="green" title="{3}">{2} Suspend Time: <b>{0} ms</b></td>'\
+		'<td class="yellow" title="{4}">{2} Resume Time: <b>{1} ms</b></td>'\
 		'</tr>\n</table>\n'
 	html_timetotal2 = '<table class="time1">\n<tr>'\
-		'<td class="green">{3} Suspend Time: <b>{0} ms</b></td>'\
-		'<td class="gray">'+sysvals.suspendmode+' time: <b>{1} ms</b></td>'\
-		'<td class="yellow">{3} Resume Time: <b>{2} ms</b></td>'\
+		'<td class="green" title="{4}">{3} Suspend Time: <b>{0} ms</b></td>'\
+		'<td class="gray" title="time spent in low-power mode with clock running">'+sysvals.suspendmode+' time: <b>{1} ms</b></td>'\
+		'<td class="yellow" title="{5}">{3} Resume Time: <b>{2} ms</b></td>'\
 		'</tr>\n</table>\n'
 	html_timetotal3 = '<table class="time1">\n<tr>'\
 		'<td class="green">Execution Time: <b>{0} ms</b></td>'\
 		'<td class="yellow">Command: <b>{1}</b></td>'\
 		'</tr>\n</table>\n'
 	html_timegroups = '<table class="time2">\n<tr>'\
-		'<td class="green">{4}Kernel Suspend: {0} ms</td>'\
+		'<td class="green" title="time from kernel enter_state({5}) to firmware mode [kernel time only]">{4}Kernel Suspend: {0} ms</td>'\
 		'<td class="purple">{4}Firmware Suspend: {1} ms</td>'\
 		'<td class="purple">{4}Firmware Resume: {2} ms</td>'\
-		'<td class="yellow">{4}Kernel Resume: {3} ms</td>'\
+		'<td class="yellow" title="time from firmware mode to return from kernel enter_state({5}) [kernel time only]">{4}Kernel Resume: {3} ms</td>'\
 		'</tr>\n</table>\n'
 	# html format variables
-	rowheight = 30
-	devtextS = '14px'
-	devtextH = '30px'
-	hoverZ = 'z-index:10;'
+	hoverZ = 'z-index:8;'
 	if sysvals.usedevsrc:
 		hoverZ = ''
+	scaleH = 20
+	scaleTH = 20
+	if kerror:
+		scaleH = 40
+		scaleTH = 60
 	# device timeline
 	vprint('Creating Device Timeline...')
-	devtl = Timeline(rowheight)
+	devtl = Timeline(30, scaleH)
 	# Generate the header for this timeline
 	for data in testruns:
 		tTotal = data.end - data.start
-		tEnd = data.dmesg['resume_complete']['end']
+		sktime = (data.dmesg['suspend_machine']['end'] - \
+			data.tKernSus) * 1000
+		rktime = (data.dmesg['resume_complete']['end'] - \
+			data.dmesg['resume_machine']['start']) * 1000
 		if(tTotal == 0):
 			print('ERROR: No timeline data')
 			sys.exit()
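The new sktime/rktime variables above redefine the header's kernel times: suspend now runs from the first suspend event (tKernSus) to the end of suspend_machine, and resume from the start of resume_machine to the end of resume_complete, both in milliseconds. A minimal sketch of that math, extracted from the lines above:

    def kernel_times(data):
    	# kernel suspend: first suspend event -> firmware handoff, in ms
    	sktime = (data.dmesg['suspend_machine']['end'] - data.tKernSus) * 1000
    	# kernel resume: firmware return -> resume_complete end, in ms
    	rktime = (data.dmesg['resume_complete']['end'] -
    		data.dmesg['resume_machine']['start']) * 1000
    	return sktime, rktime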
@@ -3072,59 +3348,85 @@ def createHTML(testruns):
 			thtml = html_timetotal3.format(run_time, testdesc)
 			devtl.html['header'] += thtml
 		elif data.fwValid:
-			suspend_time = '%.0f'%((data.tSuspended-data.start)*1000 + \
-				(data.fwSuspend/1000000.0))
-			resume_time = '%.0f'%((tEnd-data.tSuspended)*1000 + \
-				(data.fwResume/1000000.0))
+			suspend_time = '%.0f'%(sktime + (data.fwSuspend/1000000.0))
+			resume_time = '%.0f'%(rktime + (data.fwResume/1000000.0))
 			testdesc1 = 'Total'
 			testdesc2 = ''
+			stitle = 'time from kernel enter_state(%s) to low-power mode [kernel & firmware time]' % sysvals.suspendmode
+			rtitle = 'time from low-power mode to return from kernel enter_state(%s) [firmware & kernel time]' % sysvals.suspendmode
 			if(len(testruns) > 1):
 				testdesc1 = testdesc2 = ordinal(data.testnumber+1)
 				testdesc2 += ' '
 			if(data.tLow == 0):
 				thtml = html_timetotal.format(suspend_time, \
-					resume_time, testdesc1)
+					resume_time, testdesc1, stitle, rtitle)
 			else:
 				thtml = html_timetotal2.format(suspend_time, low_time, \
-					resume_time, testdesc1)
+					resume_time, testdesc1, stitle, rtitle)
 			devtl.html['header'] += thtml
-			sktime = '%.3f'%((data.dmesg['suspend_machine']['end'] - \
-				data.getStart())*1000)
 			sftime = '%.3f'%(data.fwSuspend / 1000000.0)
 			rftime = '%.3f'%(data.fwResume / 1000000.0)
-			rktime = '%.3f'%((data.dmesg['resume_complete']['end'] - \
-				data.dmesg['resume_machine']['start'])*1000)
-			devtl.html['header'] += html_timegroups.format(sktime, \
-				sftime, rftime, rktime, testdesc2)
+			devtl.html['header'] += html_timegroups.format('%.3f'%sktime, \
+				sftime, rftime, '%.3f'%rktime, testdesc2, sysvals.suspendmode)
 		else:
-			suspend_time = '%.0f'%((data.tSuspended-data.start)*1000)
-			resume_time = '%.0f'%((tEnd-data.tSuspended)*1000)
+			suspend_time = '%.3f' % sktime
+			resume_time = '%.3f' % rktime
 			testdesc = 'Kernel'
+			stitle = 'time from kernel enter_state(%s) to firmware mode [kernel time only]' % sysvals.suspendmode
+			rtitle = 'time from firmware mode to return from kernel enter_state(%s) [kernel time only]' % sysvals.suspendmode
 			if(len(testruns) > 1):
 				testdesc = ordinal(data.testnumber+1)+' '+testdesc
 			if(data.tLow == 0):
 				thtml = html_timetotal.format(suspend_time, \
-					resume_time, testdesc)
+					resume_time, testdesc, stitle, rtitle)
 			else:
 				thtml = html_timetotal2.format(suspend_time, low_time, \
-					resume_time, testdesc)
+					resume_time, testdesc, stitle, rtitle)
 			devtl.html['header'] += thtml
 	# time scale for potentially multiple datasets
 	t0 = testruns[0].start
 	tMax = testruns[-1].end
-	tSuspended = testruns[-1].tSuspended
 	tTotal = tMax - t0
 	# determine the maximum number of rows we need to draw
+	fulllist = []
+	threadlist = []
+	pscnt = 0
+	devcnt = 0
 	for data in testruns:
 		data.selectTimelineDevices('%f', tTotal, sysvals.mindevlen)
 		for group in data.devicegroups:
 			devlist = []
 			for phase in group:
 				for devname in data.tdevlist[phase]:
-					devlist.append((phase,devname))
-			devtl.getPhaseRows(data.dmesg, devlist)
+					d = DevItem(data.testnumber, phase, data.dmesg[phase]['list'][devname])
+					devlist.append(d)
+					if d.isa('kth'):
+						threadlist.append(d)
+					else:
+						if d.isa('ps'):
+							pscnt += 1
+						else:
+							devcnt += 1
+						fulllist.append(d)
+			if sysvals.mixedphaseheight:
+				devtl.getPhaseRows(devlist)
+	if not sysvals.mixedphaseheight:
+		if len(threadlist) > 0 and len(fulllist) > 0:
+			if pscnt > 0 and devcnt > 0:
+				msg = 'user processes & device pm callbacks'
+			elif pscnt > 0:
+				msg = 'user processes'
+			else:
+				msg = 'device pm callbacks'
+			d = testruns[0].addHorizontalDivider(msg, testruns[-1].end)
+			fulllist.insert(0, d)
+		devtl.getPhaseRows(fulllist)
+		if len(threadlist) > 0:
+			d = testruns[0].addHorizontalDivider('asynchronous kernel threads', testruns[-1].end)
+			threadlist.insert(0, d)
+			devtl.getPhaseRows(threadlist, devtl.rows)
 	devtl.calcTotalRows()
 	# create bounding box, add buttons
@@ -3145,18 +3447,6 @@ def createHTML(testruns):
 	# draw each test run chronologically
 	for data in testruns:
-		# if nore than one test, draw a block to represent user mode
-		if(data.testnumber > 0):
-			m0 = testruns[data.testnumber-1].end
-			mMax = testruns[data.testnumber].start
-			mTotal = mMax - m0
-			name = 'usermode%d' % data.testnumber
-			top = '%d' % devtl.scaleH
-			left = '%f' % (((m0-t0)*100.0)/tTotal)
-			width = '%f' % ((mTotal*100.0)/tTotal)
-			title = 'user mode (%0.3f ms) ' % (mTotal*1000)
-			devtl.html['timeline'] += html_device.format(name, \
-				title, left, top, '%d'%devtl.bodyH, width, '', '', '')
 		# now draw the actual timeline blocks
 		for dir in phases:
 			# draw suspend and resume blocks separately
@@ -3169,13 +3459,16 @@ def createHTML(testruns):
 			else:
 				m0 = testruns[data.testnumber].tSuspended
 				mMax = testruns[data.testnumber].end
+				# in an x2 run, remove any gap between blocks
+				if len(testruns) > 1 and data.testnumber == 0:
+					mMax = testruns[1].start
 			mTotal = mMax - m0
 			left = '%f' % ((((m0-t0)*100.0)+sysvals.srgap/2)/tTotal)
 			# if a timeline block is 0 length, skip altogether
 			if mTotal == 0:
 				continue
 			width = '%f' % (((mTotal*100.0)-sysvals.srgap/2)/tTotal)
-			devtl.html['timeline'] += html_tblock.format(bname, left, width)
+			devtl.html['timeline'] += html_tblock.format(bname, left, width, devtl.scaleH)
 			for b in sorted(phases[dir]):
 				# draw the phase color background
 				phase = data.dmesg[b]
@@ -3185,6 +3478,12 @@ def createHTML(testruns):
 				devtl.html['timeline'] += html_phase.format(left, width, \
 					'%.3f'%devtl.scaleH, '%.3f'%devtl.bodyH, \
 					data.dmesg[b]['color'], '')
+			for e in data.errorinfo[dir]:
+				# draw red lines for any kernel errors found
+				t, err = e
+				right = '%f' % (((mMax-t)*100.0)/mTotal)
+				devtl.html['timeline'] += html_error.format(right, err)
+			for b in sorted(phases[dir]):
 				# draw the devices for this phase
 				phaselist = data.dmesg[b]['list']
 				for d in data.tdevlist[b]:
@@ -3196,46 +3495,62 @@ def createHTML(testruns):
 					xtrastyle = ''
 					if 'htmlclass' in dev:
 						xtraclass = dev['htmlclass']
+						xtrainfo = dev['htmlclass']
 					if 'color' in dev:
 						xtrastyle = 'background-color:%s;' % dev['color']
 					if(d in sysvals.devprops):
 						name = sysvals.devprops[d].altName(d)
 						xtraclass = sysvals.devprops[d].xtraClass()
 						xtrainfo = sysvals.devprops[d].xtraInfo()
+					elif xtraclass == ' kth':
+						xtrainfo = ' kernel_thread'
 					if('drv' in dev and dev['drv']):
 						drv = ' {%s}' % dev['drv']
-					rowheight = devtl.phaseRowHeight(b, dev['row'])
-					rowtop = devtl.phaseRowTop(b, dev['row'])
+					rowheight = devtl.phaseRowHeight(data.testnumber, b, dev['row'])
+					rowtop = devtl.phaseRowTop(data.testnumber, b, dev['row'])
 					top = '%.3f' % (rowtop + devtl.scaleH)
 					left = '%f' % (((dev['start']-m0)*100)/mTotal)
 					width = '%f' % (((dev['end']-dev['start'])*100)/mTotal)
 					length = ' (%0.3f ms) ' % ((dev['end']-dev['start'])*1000)
+					title = name+drv+xtrainfo+length
 					if sysvals.suspendmode == 'command':
-						title = name+drv+xtrainfo+length+'cmdexec'
+						title += sysvals.testcommand
+					elif xtraclass == ' ps':
+						if 'suspend' in b:
+							title += 'pre_suspend_process'
+						else:
+							title += 'post_resume_process'
 					else:
-						title = name+drv+xtrainfo+length+b
+						title += b
 					devtl.html['timeline'] += html_device.format(dev['id'], \
 						title, left, top, '%.3f'%rowheight, width, \
 						d+drv, xtraclass, xtrastyle)
+					if('cpuexec' in dev):
+						for t in sorted(dev['cpuexec']):
+							start, end = t
+							j = float(dev['cpuexec'][t]) / 5
+							if j > 1.0:
+								j = 1.0
+							height = '%.3f' % (rowheight/3)
+							top = '%.3f' % (rowtop + devtl.scaleH + 2*rowheight/3)
+							left = '%f' % (((start-m0)*100)/mTotal)
+							width = '%f' % ((end-start)*100/mTotal)
+							color = 'rgba(255, 0, 0, %f)' % j
+							devtl.html['timeline'] += \
+								html_cpuexec.format(left, top, height, width, color)
 					if('src' not in dev):
 						continue
 					# draw any trace events for this device
-					vprint('Debug trace events found for device %s' % d)
-					vprint('%20s %20s %10s %8s' % ('title', \
-						'name', 'time(ms)', 'length(ms)'))
 					for e in dev['src']:
-						vprint('%20s %20s %10.3f %8.3f' % (e.title, \
-							e.text, e.time*1000, e.length*1000))
-						height = devtl.rowH
+						height = '%.3f' % devtl.rowH
 						top = '%.3f' % (rowtop + devtl.scaleH + (e.row*devtl.rowH))
 						left = '%f' % (((e.time-m0)*100)/mTotal)
 						width = '%f' % (e.length*100/mTotal)
-						color = 'rgba(204,204,204,0.5)'
+						xtrastyle = ''
+						if e.color:
+							xtrastyle = 'background:%s;' % e.color
 						devtl.html['timeline'] += \
-							html_traceevent.format(e.title, \
-								left, top, '%.3f'%height, \
-								width, e.text)
+							html_traceevent.format(e.title(), \
+								left, top, height, width, e.text(), '', xtrastyle)
 			# draw the time scale, try to make the number of labels readable
 			devtl.html['timeline'] += devtl.createTimeScale(m0, mMax, tTotal, dir)
 			devtl.html['timeline'] += '</div>\n'
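The cpu-exec overlay added above shades a band inside each device row, with opacity proportional to how busy the interval was. A sketch of that mapping, pulled out of the loop (the full-scale constant of 5 jiffies comes from the diff; the function name is illustrative):

    def cpuexec_color(jiffies, full_scale=5.0):
    	# clamp the per-interval jiffie count to a 0..1 alpha so heavier
    	# CPU use draws a deeper red band in the timeline
    	j = float(jiffies) / full_scale
    	if j > 1.0:
    		j = 1.0
    	return 'rgba(255, 0, 0, %f)' % j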
@@ -3284,8 +3599,7 @@ def createHTML(testruns):
 		t2 {color:black;font:25px Times;}\n\
 		t3 {color:black;font:20px Times;white-space:nowrap;}\n\
 		t4 {color:black;font:bold 30px Times;line-height:60px;white-space:nowrap;}\n\
-		cS {color:blue;font:bold 11px Times;}\n\
-		cR {color:red;font:bold 11px Times;}\n\
+		cS {font:bold 13px Times;}\n\
 		table {width:100%;}\n\
 		.gray {background-color:rgba(80,80,80,0.1);}\n\
 		.green {background-color:rgba(204,255,204,0.4);}\n\
@@ -3302,20 +3616,22 @@ def createHTML(testruns):
 		.pf:'+cgchk+' + label {background:url(\'data:image/svg+xml;utf,<?xml version="1.0" standalone="no"?><svg xmlns="http://www.w3.org/2000/svg" height="18" width="18" version="1.1"><circle cx="9" cy="9" r="8" stroke="black" stroke-width="1" fill="white"/><rect x="4" y="8" width="10" height="2" style="fill:black;stroke-width:0"/><rect x="8" y="4" width="2" height="10" style="fill:black;stroke-width:0"/></svg>\') no-repeat left center;}\n\
 		.pf:'+cgnchk+' ~ label {background:url(\'data:image/svg+xml;utf,<?xml version="1.0" standalone="no"?><svg xmlns="http://www.w3.org/2000/svg" height="18" width="18" version="1.1"><circle cx="9" cy="9" r="8" stroke="black" stroke-width="1" fill="white"/><rect x="4" y="8" width="10" height="2" style="fill:black;stroke-width:0"/></svg>\') no-repeat left center;}\n\
 		.pf:'+cgchk+' ~ *:not(:nth-child(2)) {display:none;}\n\
-		.zoombox {position:relative;width:100%;overflow-x:scroll;}\n\
+		.zoombox {position:relative;width:100%;overflow-x:scroll;-webkit-user-select:none;-moz-user-select:none;user-select:none;}\n\
 		.timeline {position:relative;font-size:14px;cursor:pointer;width:100%; overflow:hidden;background:linear-gradient(#cccccc, white);}\n\
-		.thread {position:absolute;height:0%;overflow:hidden;line-height:'+devtextH+';font-size:'+devtextS+';border:1px solid;text-align:center;white-space:nowrap;background-color:rgba(204,204,204,0.5);}\n\
-		.thread.sync {background-color:'+sysvals.synccolor+';}\n\
-		.thread.bg {background-color:'+sysvals.kprobecolor+';}\n\
+		.thread {position:absolute;height:0%;overflow:hidden;z-index:7;line-height:30px;font-size:14px;border:1px solid;text-align:center;white-space:nowrap;}\n\
+		.thread.ps {border-radius:3px;background:linear-gradient(to top, #ccc, #eee);}\n\
 		.thread:hover {background-color:white;border:1px solid red;'+hoverZ+'}\n\
+		.thread.sec,.thread.sec:hover {background-color:black;border:0;color:white;line-height:15px;font-size:10px;}\n\
 		.hover {background-color:white;border:1px solid red;'+hoverZ+'}\n\
 		.hover.sync {background-color:white;}\n\
-		.hover.bg {background-color:white;}\n\
-		.traceevent {position:absolute;font-size:10px;overflow:hidden;color:black;text-align:center;white-space:nowrap;border-radius:5px;border:1px solid black;background:linear-gradient(to bottom right,rgba(204,204,204,1),rgba(150,150,150,1));}\n\
-		.traceevent:hover {background:white;}\n\
+		.hover.bg,.hover.kth,.hover.sync,.hover.ps {background-color:white;}\n\
+		.jiffie {position:absolute;pointer-events: none;z-index:8;}\n\
+		.traceevent {position:absolute;font-size:10px;z-index:7;overflow:hidden;color:black;text-align:center;white-space:nowrap;border-radius:5px;border:1px solid black;background:linear-gradient(to bottom right,#CCC,#969696);}\n\
+		.traceevent:hover {color:white;font-weight:bold;border:1px solid white;}\n\
 		.phase {position:absolute;overflow:hidden;border:0px;text-align:center;}\n\
 		.phaselet {position:absolute;overflow:hidden;border:0px;text-align:center;height:100px;font-size:24px;}\n\
-		.t {z-index:2;position:absolute;pointer-events:none;top:0%;height:100%;border-right:1px solid black;}\n\
+		.t {position:absolute;line-height:'+('%d'%scaleTH)+'px;pointer-events:none;top:0;height:100%;border-right:1px solid black;z-index:6;}\n\
+		.err {position:absolute;top:0%;height:100%;border-right:3px solid red;color:red;font:bold 14px Times;line-height:18px;}\n\
 		.legend {position:relative; width:100%; height:40px; text-align:center;margin-bottom:20px}\n\
 		.legend .square {position:absolute;cursor:pointer;top:10px; width:0px;height:20px;border:1px solid;padding-left:20px;}\n\
 		button {height:40px;width:200px;margin-bottom:20px;margin-top:20px;font-size:24px;}\n\
@@ -3327,7 +3643,8 @@ def createHTML(testruns):
 		a:active {color:white;}\n\
 		.version {position:relative;float:left;color:white;font-size:10px;line-height:30px;margin-left:10px;}\n\
 		#devicedetail {height:100px;box-shadow:5px 5px 20px black;}\n\
-		.tblock {position:absolute;height:100%;}\n\
+		.tblock {position:absolute;height:100%;background-color:#ddd;}\n\
+		.tback {position:absolute;width:100%;background:linear-gradient(#ccc, #ddd);}\n\
 		.bg {z-index:1;}\n\
 		</style>\n</head>\n<body>\n'
@@ -3342,6 +3659,8 @@ def createHTML(testruns):
 	# write the test title and general info header
 	if(sysvals.stamp['time'] != ""):
 		hf.write(headline_version)
+		if sysvals.logmsg:
+			hf.write('<button id="showtest" class="logbtn">log</button>')
 		if sysvals.addlogs and sysvals.dmesgfile:
 			hf.write('<button id="showdmesg" class="logbtn">dmesg</button>')
 		if sysvals.addlogs and sysvals.ftracefile:
@@ -3359,6 +3678,9 @@ def createHTML(testruns):
 	# draw the colored boxes for the device detail section
 	for data in testruns:
 		hf.write('<div id="devicedetail%d">\n' % data.testnumber)
+		pscolor = 'linear-gradient(to top left, #ccc, #eee)'
+		hf.write(html_phaselet.format('pre_suspend_process', \
+			'0', '0', pscolor))
 		for b in data.phases:
 			phase = data.dmesg[b]
 			length = phase['end']-phase['start']
@@ -3366,13 +3688,17 @@ def createHTML(testruns):
 			width = '%.3f' % ((length*100.0)/tTotal)
 			hf.write(html_phaselet.format(b, left, width, \
 				data.dmesg[b]['color']))
+		hf.write(html_phaselet.format('post_resume_process', \
+			'0', '0', pscolor))
 		if sysvals.suspendmode == 'command':
-			hf.write(html_phaselet.format('cmdexec', '0', '0', \
-				data.dmesg['resume_complete']['color']))
+			hf.write(html_phaselet.format('cmdexec', '0', '0', pscolor))
 		hf.write('</div>\n')
 	hf.write('</div>\n')
 	# write the ftrace data (callgraph)
-	data = testruns[-1]
+	if sysvals.cgtest >= 0 and len(testruns) > sysvals.cgtest:
+		data = testruns[sysvals.cgtest]
+	else:
+		data = testruns[-1]
 	if(sysvals.usecallgraph and not sysvals.embedded):
 		hf.write('<section id="callgraphs" class="callgraph">\n')
@@ -3383,6 +3709,8 @@ def createHTML(testruns):
 		html_func_leaf = '<article>{0} {1}</article>\n'
 		num = 0
 		for p in data.phases:
+			if sysvals.cgphase and p != sysvals.cgphase:
+				continue
 			list = data.dmesg[p]['list']
 			for devname in data.sortedDevices(p):
 				if('ftrace' not in list[devname]):
@@ -3420,11 +3748,15 @@ def createHTML(testruns):
 			hf.write(html_func_end)
 		hf.write('\n\n    </section>\n')
+	# add the test log as a hidden div
+	if sysvals.logmsg:
+		hf.write('<div id="testlog" style="display:none;">\n'+sysvals.logmsg+'</div>\n')
 	# add the dmesg log as a hidden div
 	if sysvals.addlogs and sysvals.dmesgfile:
 		hf.write('<div id="dmesglog" style="display:none;">\n')
 		lf = open(sysvals.dmesgfile, 'r')
 		for line in lf:
+			line = line.replace('<', '&lt').replace('>', '&gt')
 			hf.write(line)
 		lf.close()
 		hf.write('</div>\n')
@@ -3475,8 +3807,9 @@ def addScriptCode(hf, testruns):
 	script_code = \
 	'<script type="text/javascript">\n'+detail+\
 	'	var resolution = -1;\n'\
+	'	var dragval = [0, 0];\n'\
 	'	function redrawTimescale(t0, tMax, tS) {\n'\
-	'		var rline = \'<div class="t" style="left:0;border-left:1px solid black;border-right:0;"><cR><-R</cR></div>\';\n'\
+	'		var rline = \'<div class="t" style="left:0;border-left:1px solid black;border-right:0;"><cS>&larr;R</cS></div>\';\n'\
 	'		var tTotal = tMax - t0;\n'\
 	'		var list = document.getElementsByClassName("tblock");\n'\
 	'		for (var i = 0; i < list.length; i++) {\n'\
@@ -3501,7 +3834,7 @@ def addScriptCode(hf, testruns):
 	'				pos = 100 - (((j)*tS*100)/mTotal) - divEdge;\n'\
 	'				val = (j-divTotal+1)*tS;\n'\
 	'				if(j == divTotal - 1)\n'\
-	'					htmlline = \'<div class="t" style="right:\'+pos+\'%"><cS>S-></cS></div>\';\n'\
+	'					htmlline = \'<div class="t" style="right:\'+pos+\'%"><cS>S&rarr;</cS></div>\';\n'\
 	'				else\n'\
 	'					htmlline = \'<div class="t" style="right:\'+pos+\'%">\'+val+\'ms</div>\';\n'\
 	'			}\n'\
@@ -3513,6 +3846,7 @@ def addScriptCode(hf, testruns):
 	'	function zoomTimeline() {\n'\
 	'		var dmesg = document.getElementById("dmesg");\n'\
 	'		var zoombox = document.getElementById("dmesgzoombox");\n'\
+	'		var left = zoombox.scrollLeft;\n'\
 	'		var val = parseFloat(dmesg.style.width);\n'\
 	'		var newval = 100;\n'\
 	'		var sh = window.outerWidth / 2;\n'\
@@ -3520,12 +3854,12 @@ def addScriptCode(hf, testruns):
 	'			newval = val * 1.2;\n'\
 	'			if(newval > 910034) newval = 910034;\n'\
 	'			dmesg.style.width = newval+"%";\n'\
-	'			zoombox.scrollLeft = ((zoombox.scrollLeft + sh) * newval / val) - sh;\n'\
+	'			zoombox.scrollLeft = ((left + sh) * newval / val) - sh;\n'\
 	'		} else if (this.id == "zoomout") {\n'\
 	'			newval = val / 1.2;\n'\
 	'			if(newval < 100) newval = 100;\n'\
 	'			dmesg.style.width = newval+"%";\n'\
-	'			zoombox.scrollLeft = ((zoombox.scrollLeft + sh) * newval / val) - sh;\n'\
+	'			zoombox.scrollLeft = ((left + sh) * newval / val) - sh;\n'\
 	'		} else {\n'\
 	'			zoombox.scrollLeft = 0;\n'\
 	'			dmesg.style.width = "100%";\n'\
@@ -3542,8 +3876,12 @@ def addScriptCode(hf, testruns):
 	'		resolution = tS[i];\n'\
 	'		redrawTimescale(t0, tMax, tS[i]);\n'\
 	'	}\n'\
+	'	function deviceName(title) {\n'\
+	'		var name = title.slice(0, title.indexOf(" ("));\n'\
+	'		return name;\n'\
+	'	}\n'\
 	'	function deviceHover() {\n'\
-	'		var name = this.title.slice(0, this.title.indexOf(" ("));\n'\
+	'		var name = deviceName(this.title);\n'\
 	'		var dmesg = document.getElementById("dmesg");\n'\
 	'		var dev = dmesg.getElementsByClassName("thread");\n'\
 	'		var cpu = -1;\n'\
@@ -3552,7 +3890,7 @@ def addScriptCode(hf, testruns):
 	'		else if(name.match("CPU_OFF\[[0-9]*\]"))\n'\
 	'			cpu = parseInt(name.slice(8));\n'\
 	'		for (var i = 0; i < dev.length; i++) {\n'\
-	'			dname = dev[i].title.slice(0, dev[i].title.indexOf(" ("));\n'\
+	'			dname = deviceName(dev[i].title);\n'\
 	'			var cname = dev[i].className.slice(dev[i].className.indexOf("thread"));\n'\
 	'			if((cpu >= 0 && dname.match("CPU_O[NF]*\\\[*"+cpu+"\\\]")) ||\n'\
 	'				(name == dname))\n'\
@@ -3578,7 +3916,7 @@ def addScriptCode(hf, testruns):
 	'			total[2] = (total[2]+total[4])/2;\n'\
 	'		}\n'\
 	'		var devtitle = document.getElementById("devicedetailtitle");\n'\
-	'		var name = title.slice(0, title.indexOf(" ("));\n'\
+	'		var name = deviceName(title);\n'\
 	'		if(cpu >= 0) name = "CPU"+cpu;\n'\
 	'		var driver = "";\n'\
 	'		var tS = "<t2>(</t2>";\n'\
@@ -3600,7 +3938,7 @@ def addScriptCode(hf, testruns):
 	'	function deviceDetail() {\n'\
 	'		var devinfo = document.getElementById("devicedetail");\n'\
 	'		devinfo.style.display = "block";\n'\
-	'		var name = this.title.slice(0, this.title.indexOf(" ("));\n'\
+	'		var name = deviceName(this.title);\n'\
 	'		var cpu = -1;\n'\
 	'		if(name.match("CPU_ON\[[0-9]*\]"))\n'\
 	'			cpu = parseInt(name.slice(7));\n'\
@@ -3615,7 +3953,7 @@ def addScriptCode(hf, testruns):
 	'		var pd = pdata[0];\n'\
 	'		var total = [0.0, 0.0, 0.0];\n'\
 	'		for (var i = 0; i < dev.length; i++) {\n'\
-	'			dname = dev[i].title.slice(0, dev[i].title.indexOf(" ("));\n'\
+	'			dname = deviceName(dev[i].title);\n'\
 	'			if((cpu >= 0 && dname.match("CPU_O[NF]*\\\[*"+cpu+"\\\]")) ||\n'\
 	'				(name == dname))\n'\
 	'			{\n'\
@@ -3656,7 +3994,7 @@ def addScriptCode(hf, testruns):
 	'				phases[i].title = phases[i].id+" "+pd[phases[i].id]+" ms";\n'\
 	'				left += w;\n'\
 	'				var time = "<t4 style=\\"font-size:"+fs+"px\\">"+pd[phases[i].id]+" ms<br></t4>";\n'\
-	'				var pname = "<t3 style=\\"font-size:"+fs2+"px\\">"+phases[i].id.replace("_", " ")+"</t3>";\n'\
+	'				var pname = "<t3 style=\\"font-size:"+fs2+"px\\">"+phases[i].id.replace(new RegExp("_", "g"), " ")+"</t3>";\n'\
 	'				phases[i].innerHTML = time+pname;\n'\
 	'			} else {\n'\
 	'				phases[i].style.width = "0%";\n'\
@@ -3677,12 +4015,7 @@ def addScriptCode(hf, testruns):
 	'		}\n'\
 	'	}\n'\
 	'	function devListWindow(e) {\n'\
-	'		var sx = e.clientX;\n'\
-	'		if(sx > window.innerWidth - 440)\n'\
-	'			sx = window.innerWidth - 440;\n'\
-	'		var cfg="top="+e.screenY+", left="+sx+", width=440, height=720, scrollbars=yes";\n'\
-	'		var win = window.open("", "_blank", cfg);\n'\
-	'		if(window.chrome) win.moveBy(sx, 0);\n'\
+	'		var win = window.open();\n'\
 	'		var html = "<title>"+e.target.innerHTML+"</title>"+\n'\
 	'			"<style type=\\"text/css\\">"+\n'\
 	'			"   ul {list-style-type:circle;padding-left:10px;margin-left:10px;}"+\n'\
@@ -3692,6 +4025,12 @@ def addScriptCode(hf, testruns):
 	'		dt = devtable[1];\n'\
 	'		win.document.write(html+dt);\n'\
 	'	}\n'\
+	'	function errWindow() {\n'\
+	'		var text = this.id;\n'\
+	'		var win = window.open();\n'\
+	'		win.document.write("<pre>"+text+"</pre>");\n'\
+	'		win.document.close();\n'\
+	'	}\n'\
 	'	function logWindow(e) {\n'\
 	'		var name = e.target.id.slice(4);\n'\
 	'		var win = window.open();\n'\
@@ -3702,16 +4041,46 @@ def addScriptCode(hf, testruns):
 	'	}\n'\
 	'	function onClickPhase(e) {\n'\
 	'	}\n'\
+	'	function onMouseDown(e) {\n'\
+	'		dragval[0] = e.clientX;\n'\
+	'		dragval[1] = document.getElementById("dmesgzoombox").scrollLeft;\n'\
+	'		document.onmousemove = onMouseMove;\n'\
+	'	}\n'\
+	'	function onMouseMove(e) {\n'\
+	'		var zoombox = document.getElementById("dmesgzoombox");\n'\
+	'		zoombox.scrollLeft = dragval[1] + dragval[0] - e.clientX;\n'\
+	'	}\n'\
+	'	function onMouseUp(e) {\n'\
+	'		document.onmousemove = null;\n'\
+	'	}\n'\
+	'	function onKeyPress(e) {\n'\
+	'		var c = e.charCode;\n'\
+	'		if(c != 42 && c != 43 && c != 45) return;\n'\
+	'		var click = document.createEvent("Events");\n'\
+	'		click.initEvent("click", true, false);\n'\
+	'		if(c == 43)\n'\
+	'			document.getElementById("zoomin").dispatchEvent(click);\n'\
+	'		else if(c == 45)\n'\
+	'			document.getElementById("zoomout").dispatchEvent(click);\n'\
+	'		else if(c == 42)\n'\
+	'			document.getElementById("zoomdef").dispatchEvent(click);\n'\
+	'	}\n'\
 	'	window.addEventListener("resize", function () {zoomTimeline();});\n'\
' window.addEventListener("load", function () {\n'\ ' window.addEventListener("load", function () {\n'\
' var dmesg = document.getElementById("dmesg");\n'\ ' var dmesg = document.getElementById("dmesg");\n'\
' dmesg.style.width = "100%"\n'\ ' dmesg.style.width = "100%"\n'\
' dmesg.onmousedown = onMouseDown;\n'\
' document.onmouseup = onMouseUp;\n'\
' document.onkeypress = onKeyPress;\n'\
' document.getElementById("zoomin").onclick = zoomTimeline;\n'\ ' document.getElementById("zoomin").onclick = zoomTimeline;\n'\
' document.getElementById("zoomout").onclick = zoomTimeline;\n'\ ' document.getElementById("zoomout").onclick = zoomTimeline;\n'\
' document.getElementById("zoomdef").onclick = zoomTimeline;\n'\ ' document.getElementById("zoomdef").onclick = zoomTimeline;\n'\
' var list = document.getElementsByClassName("square");\n'\ ' var list = document.getElementsByClassName("square");\n'\
' for (var i = 0; i < list.length; i++)\n'\ ' for (var i = 0; i < list.length; i++)\n'\
' list[i].onclick = onClickPhase;\n'\ ' list[i].onclick = onClickPhase;\n'\
' var list = document.getElementsByClassName("err");\n'\
' for (var i = 0; i < list.length; i++)\n'\
' list[i].onclick = errWindow;\n'\
' var list = document.getElementsByClassName("logbtn");\n'\ ' var list = document.getElementsByClassName("logbtn");\n'\
' for (var i = 0; i < list.length; i++)\n'\ ' for (var i = 0; i < list.length; i++)\n'\
' list[i].onclick = logWindow;\n'\ ' list[i].onclick = logWindow;\n'\
...@@ -3734,9 +4103,7 @@ def addScriptCode(hf, testruns): ...@@ -3734,9 +4103,7 @@ def addScriptCode(hf, testruns):
 # Execute system suspend through the sysfs interface, then copy the output
 # dmesg and ftrace files to the test output directory.
 def executeSuspend():
-	global sysvals
-	t0 = time.time()*1000
+	pm = ProcessMonitor()
 	tp = sysvals.tpath
 	fwdata = []
 	# mark the start point in the kernel ring buffer just as we start
@@ -3745,30 +4112,39 @@ def executeSuspend():
 	if(sysvals.usecallgraph or sysvals.usetraceevents):
 		print('START TRACING')
 		sysvals.fsetVal('1', 'tracing_on')
+	if sysvals.useprocmon:
+		pm.start()
 	# execute however many s/r runs requested
 	for count in range(1,sysvals.execcount+1):
-		# if this is test2 and there's a delay, start here
+		# x2delay in between test runs
 		if(count > 1 and sysvals.x2delay > 0):
-			tN = time.time()*1000
-			while (tN - t0) < sysvals.x2delay:
-				tN = time.time()*1000
-				time.sleep(0.001)
-		# initiate suspend
-		if(sysvals.usecallgraph or sysvals.usetraceevents):
-			sysvals.fsetVal('SUSPEND START', 'trace_marker')
-		if sysvals.suspendmode == 'command':
+			sysvals.fsetVal('WAIT %d' % sysvals.x2delay, 'trace_marker')
+			time.sleep(sysvals.x2delay/1000.0)
+			sysvals.fsetVal('WAIT END', 'trace_marker')
+		# start message
+		if sysvals.testcommand != '':
 			print('COMMAND START')
-			if(sysvals.rtcwake):
-				print('will issue an rtcwake in %d seconds' % sysvals.rtcwaketime)
-				sysvals.rtcWakeAlarmOn()
-			os.system(sysvals.testcommand)
 		else:
 			if(sysvals.rtcwake):
 				print('SUSPEND START')
-				print('will autoresume in %d seconds' % sysvals.rtcwaketime)
-				sysvals.rtcWakeAlarmOn()
 			else:
 				print('SUSPEND START (press a key to resume)')
+		# set rtcwake
+		if(sysvals.rtcwake):
+			print('will issue an rtcwake in %d seconds' % sysvals.rtcwaketime)
+			sysvals.rtcWakeAlarmOn()
+		# start of suspend trace marker
+		if(sysvals.usecallgraph or sysvals.usetraceevents):
+			sysvals.fsetVal('SUSPEND START', 'trace_marker')
+		# predelay delay
+		if(count == 1 and sysvals.predelay > 0):
+			sysvals.fsetVal('WAIT %d' % sysvals.predelay, 'trace_marker')
+			time.sleep(sysvals.predelay/1000.0)
+			sysvals.fsetVal('WAIT END', 'trace_marker')
+		# initiate suspend or command
+		if sysvals.testcommand != '':
+			call(sysvals.testcommand+' 2>&1', shell=True);
+		else:
 			pf = open(sysvals.powerfile, 'w')
 			pf.write(sysvals.suspendmode)
 			# execution will pause here
@@ -3776,26 +4152,27 @@ def executeSuspend():
 			try:
 				pf.close()
 			except:
 				pass
-		t0 = time.time()*1000
 		if(sysvals.rtcwake):
 			sysvals.rtcWakeAlarmOff()
+		# postdelay delay
+		if(count == sysvals.execcount and sysvals.postdelay > 0):
+			sysvals.fsetVal('WAIT %d' % sysvals.postdelay, 'trace_marker')
+			time.sleep(sysvals.postdelay/1000.0)
+			sysvals.fsetVal('WAIT END', 'trace_marker')
 		# return from suspend
 		print('RESUME COMPLETE')
 		if(sysvals.usecallgraph or sysvals.usetraceevents):
 			sysvals.fsetVal('RESUME COMPLETE', 'trace_marker')
-		if(sysvals.suspendmode == 'mem'):
+		if(sysvals.suspendmode == 'mem' or sysvals.suspendmode == 'command'):
 			fwdata.append(getFPDT(False))
-	# look for post resume events after the last test run
-	t = sysvals.postresumetime
-	if(t > 0):
-		print('Waiting %d seconds for POST-RESUME trace events...' % t)
-		time.sleep(t)
 	# stop ftrace
 	if(sysvals.usecallgraph or sysvals.usetraceevents):
+		if sysvals.useprocmon:
+			pm.stop()
 		sysvals.fsetVal('0', 'tracing_on')
 		print('CAPTURING TRACE')
 		writeDatafileHeader(sysvals.ftracefile, fwdata)
-		os.system('cat '+tp+'trace >> '+sysvals.ftracefile)
+		call('cat '+tp+'trace >> '+sysvals.ftracefile, shell=True)
 		sysvals.fsetVal('', 'trace')
 		devProps()
 	# grab a copy of the dmesg output
@@ -3804,17 +4181,12 @@ def executeSuspend():
 	sysvals.getdmesg()
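
Note on the WAIT markers introduced above: the new pre/post/x2 delays are logged into ftrace itself via trace_marker writes ('WAIT <ms>' paired with 'WAIT END'), which is how the report can later draw them as explicit blocks in the timeline. A minimal sketch of how such markers could be recovered from a captured trace (illustrative only; the file name and helper are not part of the patch):

import re

def find_wait_markers(tracefile='trace.txt'):
    # writes to trace_marker show up as tracing_mark_write events in ftrace
    waits = []
    for line in open(tracefile):
        m = re.search(r'(\d+\.\d+): tracing_mark_write: WAIT (\d+|END)', line)
        if m:
            waits.append((float(m.group(1)), m.group(2)))
    return waits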
 def writeDatafileHeader(filename, fwdata):
-	global sysvals
-	prt = sysvals.postresumetime
 	fp = open(filename, 'a')
 	fp.write(sysvals.teststamp+'\n')
-	if(sysvals.suspendmode == 'mem'):
+	if(sysvals.suspendmode == 'mem' or sysvals.suspendmode == 'command'):
 		for fw in fwdata:
 			if(fw):
 				fp.write('# fwsuspend %u fwresume %u\n' % (fw[0], fw[1]))
-	if(prt > 0):
-		fp.write('# post resume time %u\n' % prt)
 	fp.close()
 # Function: setUSBDevicesAuto
@@ -3824,18 +4196,16 @@ def writeDatafileHeader(filename, fwdata):
 # to always-on since the kernel can't determine if the device can
 # properly autosuspend
 def setUSBDevicesAuto():
-	global sysvals
 	rootCheck(True)
 	for dirname, dirnames, filenames in os.walk('/sys/devices'):
 		if(re.match('.*/usb[0-9]*.*', dirname) and
 			'idVendor' in filenames and 'idProduct' in filenames):
-			os.system('echo auto > %s/power/control' % dirname)
+			call('echo auto > %s/power/control' % dirname, shell=True)
 			name = dirname.split('/')[-1]
-			desc = os.popen('cat %s/product 2>/dev/null' % \
-				dirname).read().replace('\n', '')
-			ctrl = os.popen('cat %s/power/control 2>/dev/null' % \
-				dirname).read().replace('\n', '')
+			desc = Popen(['cat', '%s/product' % dirname],
+				stderr=PIPE, stdout=PIPE).stdout.read().replace('\n', '')
+			ctrl = Popen(['cat', '%s/power/control' % dirname],
+				stderr=PIPE, stdout=PIPE).stdout.read().replace('\n', '')
 			print('control is %s for %6s: %s' % (ctrl, name, desc))
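
A design note on the change above: the os.popen pipelines are swapped for Popen(['cat', ...]), which avoids the shell. For illustration, the same sysfs attributes could also be read directly, saving a fork/exec of cat per file while walking /sys/devices (a minimal sketch; the helper name is made up, not part of the patch):

def sysfs_read(path):
    # read one sysfs attribute; missing or unreadable files yield ''
    try:
        with open(path, 'r') as fp:
            return fp.read().replace('\n', '')
    except IOError:
        return ''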
 # Function: yesno
@@ -3872,8 +4242,6 @@ def ms2nice(val):
 # Detect all the USB hosts and devices currently connected and add
 # a list of USB device names to sysvals for better timeline readability
 def detectUSB():
-	global sysvals
-
 	field = {'idVendor':'', 'idProduct':'', 'product':'', 'speed':''}
 	power = {'async':'', 'autosuspend':'', 'autosuspend_delay_ms':'',
 		'control':'', 'persist':'', 'runtime_enabled':'',
@@ -3899,12 +4267,12 @@ def detectUSB():
 		if(re.match('.*/usb[0-9]*.*', dirname) and
 			'idVendor' in filenames and 'idProduct' in filenames):
 			for i in field:
-				field[i] = os.popen('cat %s/%s 2>/dev/null' % \
-					(dirname, i)).read().replace('\n', '')
+				field[i] = Popen(['cat', '%s/%s' % (dirname, i)],
+					stderr=PIPE, stdout=PIPE).stdout.read().replace('\n', '')
 			name = dirname.split('/')[-1]
 			for i in power:
-				power[i] = os.popen('cat %s/power/%s 2>/dev/null' % \
-					(dirname, i)).read().replace('\n', '')
+				power[i] = Popen(['cat', '%s/power/%s' % (dirname, i)],
+					stderr=PIPE, stdout=PIPE).stdout.read().replace('\n', '')
 			if(re.match('usb[0-9]*', name)):
 				first = '%-8s' % name
 			else:
 # Description:
 #	 Retrieve a list of properties for all devices in the trace log
 def devProps(data=0):
-	global sysvals
 	props = dict()
 	if data:
@@ -3953,7 +4320,7 @@ def devProps(data=0):
 		return
 	if(os.path.exists(sysvals.ftracefile) == False):
-		doError('%s does not exist' % sysvals.ftracefile, False)
+		doError('%s does not exist' % sysvals.ftracefile)
 	# first get the list of devices we need properties for
 	msghead = 'Additional data added by AnalyzeSuspend'
@@ -3976,7 +4343,7 @@ def devProps(data=0):
 			m = re.match('.*: (?P<drv>.*) (?P<d>.*), parent: *(?P<p>.*), .*', m.group('msg'));
 			if(not m):
 				continue
-			drv, dev, par = m.group('drv'), m.group('d'), m.group('p')
+			dev = m.group('d')
 			if dev not in props:
 				props[dev] = DevProps()
 	tf.close()
@@ -4052,7 +4419,6 @@ def devProps(data=0):
 # Output:
 #	 A string list of the available modes
 def getModes():
-	global sysvals
 	modes = ''
 	if(os.path.exists(sysvals.powerfile)):
 		fp = open(sysvals.powerfile, 'r')
@@ -4066,8 +4432,6 @@ def getModes():
 # Arguments:
 #	 output: True to output the info to stdout, False otherwise
 def getFPDT(output):
-	global sysvals
 	rectype = {}
 	rectype[0] = 'Firmware Basic Boot Performance Record'
 	rectype[1] = 'S3 Performance Table Record'
@@ -4078,19 +4442,19 @@ def getFPDT(output):
 	rootCheck(True)
 	if(not os.path.exists(sysvals.fpdtpath)):
 		if(output):
-			doError('file does not exist: %s' % sysvals.fpdtpath, False)
+			doError('file does not exist: %s' % sysvals.fpdtpath)
 		return False
 	if(not os.access(sysvals.fpdtpath, os.R_OK)):
 		if(output):
-			doError('file is not readable: %s' % sysvals.fpdtpath, False)
+			doError('file is not readable: %s' % sysvals.fpdtpath)
 		return False
 	if(not os.path.exists(sysvals.mempath)):
 		if(output):
-			doError('file does not exist: %s' % sysvals.mempath, False)
+			doError('file does not exist: %s' % sysvals.mempath)
 		return False
 	if(not os.access(sysvals.mempath, os.R_OK)):
 		if(output):
-			doError('file is not readable: %s' % sysvals.mempath, False)
+			doError('file is not readable: %s' % sysvals.mempath)
 		return False
 	fp = open(sysvals.fpdtpath, 'rb')
@@ -4100,7 +4464,7 @@ def getFPDT(output):
 	if(len(buf) < 36):
 		if(output):
 			doError('Invalid FPDT table data, should '+\
-				'be at least 36 bytes', False)
+				'be at least 36 bytes')
 		return False
 	table = struct.unpack('4sIBB6s8sI4sI', buf[0:36])
@@ -4199,7 +4563,6 @@ def getFPDT(output):
 # Output:
 #	 True if the test will work, False if not
 def statusCheck(probecheck=False):
-	global sysvals
 	status = True
 	print('Checking this system (%s)...' % platform.node())
@@ -4282,37 +4645,14 @@ def statusCheck(probecheck=False):
 	if not probecheck:
 		return status
-	if (sysvals.usecallgraph and len(sysvals.debugfuncs) > 0) or len(sysvals.kprobes) > 0:
-		sysvals.initFtrace(True)
-	# verify callgraph debugfuncs
-	if sysvals.usecallgraph and len(sysvals.debugfuncs) > 0:
-		print(' verifying these ftrace callgraph functions work:')
-		sysvals.setFtraceFilterFunctions(sysvals.debugfuncs)
-		fp = open(sysvals.tpath+'set_graph_function', 'r')
-		flist = fp.read().split('\n')
-		fp.close()
-		for func in sysvals.debugfuncs:
-			res = sysvals.colorText('NO')
-			if func in flist:
-				res = 'YES'
-			else:
-				for i in flist:
-					if ' [' in i and func == i.split(' ')[0]:
-						res = 'YES'
-						break
-			print(' %s: %s' % (func, res))
 	# verify kprobes
-	if len(sysvals.kprobes) > 0:
-		print(' verifying these kprobes work:')
-		for name in sorted(sysvals.kprobes):
-			if name in sysvals.tracefuncs:
-				continue
-			res = sysvals.colorText('NO')
-			if sysvals.testKprobe(sysvals.kprobes[name]):
-				res = 'YES'
-			print(' %s: %s' % (name, res))
+	if sysvals.usekprobes:
+		for name in sysvals.tracefuncs:
+			sysvals.defaultKprobe(name, sysvals.tracefuncs[name])
+		if sysvals.usedevsrc:
+			for name in sysvals.dev_tracefuncs:
+				sysvals.defaultKprobe(name, sysvals.dev_tracefuncs[name])
+		sysvals.addKprobes(True)
 	return status
@@ -4322,33 +4662,20 @@ def statusCheck(probecheck=False):
 # Arguments:
 #	 msg: the error message to print
 #	 help: True if printHelp should be called after, False otherwise
-def doError(msg, help):
+def doError(msg, help=False):
 	if(help == True):
 		printHelp()
 	print('ERROR: %s\n') % msg
 	sys.exit()
-# Function: doWarning
-# Description:
-#	 generic warning function for non-catastrophic anomalies
-# Arguments:
-#	 msg: the warning message to print
-#	 file: If not empty, a filename to request be sent to the owner for debug
-def doWarning(msg, file=''):
-	print('/* %s */') % msg
-	if(file):
-		print('/* For a fix, please send this'+\
-			' %s file to <todd.e.brandt@intel.com> */' % file)
 # Function: rootCheck
 # Description:
 #	 quick check to see if we have root access
 def rootCheck(fatal):
-	global sysvals
 	if(os.access(sysvals.powerfile, os.W_OK)):
 		return True
 	if fatal:
-		doError('This command must be run as root', False)
+		doError('This command must be run as root')
 	return False
 # Function: getArgInt
@@ -4389,71 +4716,61 @@ def getArgFloat(name, args, min, max, main=True):
 		doError(name+': value should be between %f and %f' % (min, max), True)
 	return val
-# Function: rerunTest
-# Description:
-#	 generate an output from an existing set of ftrace/dmesg logs
-def rerunTest():
-	global sysvals
-	if(sysvals.ftracefile != ''):
-		doesTraceLogHaveTraceEvents()
-	if(sysvals.dmesgfile == '' and not sysvals.usetraceeventsonly):
-		doError('recreating this html output '+\
-			'requires a dmesg file', False)
-	sysvals.setOutputFile()
-	vprint('Output file: %s' % sysvals.htmlfile)
+def processData():
 	print('PROCESSING DATA')
 	if(sysvals.usetraceeventsonly):
 		testruns = parseTraceLog()
+		if sysvals.dmesgfile:
+			dmesgtext = loadKernelLog(True)
+			for data in testruns:
+				data.extractErrorInfo(dmesgtext)
 	else:
 		testruns = loadKernelLog()
 		for data in testruns:
 			parseKernelLog(data)
-		if(sysvals.ftracefile != ''):
+		if(sysvals.ftracefile and (sysvals.usecallgraph or sysvals.usetraceevents)):
 			appendIncompleteTraceLog(testruns)
 	createHTML(testruns)
+
+# Function: rerunTest
+# Description:
+#	 generate an output from an existing set of ftrace/dmesg logs
+def rerunTest():
+	if sysvals.ftracefile:
+		doesTraceLogHaveTraceEvents()
+	if not sysvals.dmesgfile and not sysvals.usetraceeventsonly:
+		doError('recreating this html output requires a dmesg file')
+	sysvals.setOutputFile()
+	vprint('Output file: %s' % sysvals.htmlfile)
+	if(os.path.exists(sysvals.htmlfile) and not os.access(sysvals.htmlfile, os.W_OK)):
+		doError('missing permission to write to %s' % sysvals.htmlfile)
+	processData()
 # Function: runTest
 # Description:
 #	 execute a suspend/resume, gather the logs, and generate the output
 def runTest(subdir, testpath=''):
-	global sysvals
 	# prepare for the test
 	sysvals.initFtrace()
 	sysvals.initTestOutput(subdir, testpath)
-	vprint('Output files:\n %s' % sysvals.dmesgfile)
-	if(sysvals.usecallgraph or
-		sysvals.usetraceevents or
-		sysvals.usetraceeventsonly):
-		vprint(' %s' % sysvals.ftracefile)
-	vprint(' %s' % sysvals.htmlfile)
+	vprint('Output files:\n\t%s\n\t%s\n\t%s' % \
+		(sysvals.dmesgfile, sysvals.ftracefile, sysvals.htmlfile))
 	# execute the test
 	executeSuspend()
 	sysvals.cleanupFtrace()
-	# analyze the data and create the html output
-	print('PROCESSING DATA')
-	if(sysvals.usetraceeventsonly):
-		# data for kernels 3.15 or newer is entirely in ftrace
-		testruns = parseTraceLog()
-	else:
-		# data for kernels older than 3.15 is primarily in dmesg
-		testruns = loadKernelLog()
-		for data in testruns:
-			parseKernelLog(data)
-		if(sysvals.usecallgraph or sysvals.usetraceevents):
-			appendIncompleteTraceLog(testruns)
-	createHTML(testruns)
+	processData()
+
+	# if running as root, change output dir owner to sudo_user
+	if os.path.isdir(sysvals.testdir) and os.getuid() == 0 and \
+		'SUDO_USER' in os.environ:
+		cmd = 'chown -R {0}:{0} {1} > /dev/null 2>&1'
+		call(cmd.format(os.environ['SUDO_USER'], sysvals.testdir), shell=True)
 # Function: runSummary
 # Description:
 #	 create a summary of tests in a sub-directory
 def runSummary(subdir, output):
-	global sysvals
 	# get a list of ftrace output files
 	files = []
 	for dirname, dirnames, filenames in os.walk(subdir):
@@ -4509,12 +4826,12 @@ def checkArgBool(value):
 # Description:
 #	 Configure the script via the info in a config file
 def configFromFile(file):
-	global sysvals
 	Config = ConfigParser.ConfigParser()
-	ignorekprobes = False
 	Config.read(file)
 	sections = Config.sections()
+	overridekprobes = False
+	overridedevkprobes = False
 	if 'Settings' in sections:
 		for opt in Config.options('Settings'):
 			value = Config.get('Settings', opt).lower()
@@ -4524,19 +4841,19 @@ def configFromFile(file):
 				sysvals.addlogs = checkArgBool(value)
 			elif(opt.lower() == 'dev'):
 				sysvals.usedevsrc = checkArgBool(value)
-			elif(opt.lower() == 'ignorekprobes'):
-				ignorekprobes = checkArgBool(value)
+			elif(opt.lower() == 'proc'):
+				sysvals.useprocmon = checkArgBool(value)
 			elif(opt.lower() == 'x2'):
 				if checkArgBool(value):
 					sysvals.execcount = 2
 			elif(opt.lower() == 'callgraph'):
 				sysvals.usecallgraph = checkArgBool(value)
-			elif(opt.lower() == 'callgraphfunc'):
-				sysvals.debugfuncs = []
-				if value:
-					value = value.split(',')
-				for i in value:
-					sysvals.debugfuncs.append(i.strip())
+			elif(opt.lower() == 'override-timeline-functions'):
+				overridekprobes = checkArgBool(value)
+			elif(opt.lower() == 'override-dev-timeline-functions'):
+				overridedevkprobes = checkArgBool(value)
+			elif(opt.lower() == 'devicefilter'):
+				sysvals.setDeviceFilter(value)
 			elif(opt.lower() == 'expandcg'):
 				sysvals.cgexp = checkArgBool(value)
 			elif(opt.lower() == 'srgap'):
@@ -4548,8 +4865,10 @@ def configFromFile(file):
 				sysvals.testcommand = value
 			elif(opt.lower() == 'x2delay'):
 				sysvals.x2delay = getArgInt('-x2delay', value, 0, 60000, False)
-			elif(opt.lower() == 'postres'):
-				sysvals.postresumetime = getArgInt('-postres', value, 0, 3600, False)
+			elif(opt.lower() == 'predelay'):
+				sysvals.predelay = getArgInt('-predelay', value, 0, 60000, False)
+			elif(opt.lower() == 'postdelay'):
+				sysvals.postdelay = getArgInt('-postdelay', value, 0, 60000, False)
 			elif(opt.lower() == 'rtcwake'):
 				sysvals.rtcwake = True
 				sysvals.rtcwaketime = getArgInt('-rtcwake', value, 0, 3600, False)
@@ -4557,53 +4876,50 @@ def configFromFile(file):
 				sysvals.setPrecision(getArgInt('-timeprec', value, 0, 6, False))
 			elif(opt.lower() == 'mindev'):
 				sysvals.mindevlen = getArgFloat('-mindev', value, 0.0, 10000.0, False)
+			elif(opt.lower() == 'callloop-maxgap'):
+				sysvals.callloopmaxgap = getArgFloat('-callloop-maxgap', value, 0.0, 1.0, False)
+			elif(opt.lower() == 'callloop-maxlen'):
+				sysvals.callloopmaxlen = getArgFloat('-callloop-maxlen', value, 0.0, 1.0, False)
 			elif(opt.lower() == 'mincg'):
 				sysvals.mincglen = getArgFloat('-mincg', value, 0.0, 10000.0, False)
-			elif(opt.lower() == 'kprobecolor'):
-				try:
-					val = int(value, 16)
-					sysvals.kprobecolor = '#'+value
-				except:
-					sysvals.kprobecolor = value
-			elif(opt.lower() == 'synccolor'):
-				try:
-					val = int(value, 16)
-					sysvals.synccolor = '#'+value
-				except:
-					sysvals.synccolor = value
 			elif(opt.lower() == 'output-dir'):
-				args = dict()
-				n = datetime.now()
-				args['date'] = n.strftime('%y%m%d')
-				args['time'] = n.strftime('%H%M%S')
-				args['hostname'] = sysvals.hostname
-				sysvals.outdir = value.format(**args)
+				sysvals.setOutputFolder(value)
 
 	if sysvals.suspendmode == 'command' and not sysvals.testcommand:
-		doError('No command supplied for mode "command"', False)
+		doError('No command supplied for mode "command"')
+
+	# compatibility errors
 	if sysvals.usedevsrc and sysvals.usecallgraph:
-		doError('dev and callgraph cannot both be true', False)
-	if sysvals.usecallgraph and sysvals.execcount > 1:
-		doError('-x2 is not compatible with -f', False)
+		doError('-dev is not compatible with -f')
+	if sysvals.usecallgraph and sysvals.useprocmon:
+		doError('-proc is not compatible with -f')
 
-	if ignorekprobes:
-		return
+	if overridekprobes:
+		sysvals.tracefuncs = dict()
+	if overridedevkprobes:
+		sysvals.dev_tracefuncs = dict()
 
 	kprobes = dict()
-	archkprobe = 'Kprobe_'+platform.machine()
-	if archkprobe in sections:
-		for name in Config.options(archkprobe):
-			kprobes[name] = Config.get(archkprobe, name)
-	if 'Kprobe' in sections:
-		for name in Config.options('Kprobe'):
-			kprobes[name] = Config.get('Kprobe', name)
+	kprobesec = 'dev_timeline_functions_'+platform.machine()
+	if kprobesec in sections:
+		for name in Config.options(kprobesec):
+			text = Config.get(kprobesec, name)
+			kprobes[name] = (text, True)
+	kprobesec = 'timeline_functions_'+platform.machine()
+	if kprobesec in sections:
+		for name in Config.options(kprobesec):
+			if name in kprobes:
+				doError('Duplicate timeline function found "%s"' % (name))
+			text = Config.get(kprobesec, name)
+			kprobes[name] = (text, False)

 	for name in kprobes:
 		function = name
 		format = name
 		color = ''
 		args = dict()
-		data = kprobes[name].split()
+		text, dev = kprobes[name]
+		data = text.split()
 		i = 0
 		for val in data:
 			# bracketed strings are special formatting, read them separately
@@ -4626,28 +4942,30 @@ def configFromFile(file):
 				args[d[0]] = d[1]
 			i += 1
 		if not function or not format:
-			doError('Invalid kprobe: %s' % name, False)
+			doError('Invalid kprobe: %s' % name)
 		for arg in re.findall('{(?P<n>[a-z,A-Z,0-9]*)}', format):
 			if arg not in args:
-				doError('Kprobe "%s" is missing argument "%s"' % (name, arg), False)
-		if name in sysvals.kprobes:
-			doError('Duplicate kprobe found "%s"' % (name), False)
-		vprint('Adding KPROBE: %s %s %s %s' % (name, function, format, args))
-		sysvals.kprobes[name] = {
+				doError('Kprobe "%s" is missing argument "%s"' % (name, arg))
+		if (dev and name in sysvals.dev_tracefuncs) or (not dev and name in sysvals.tracefuncs):
+			doError('Duplicate timeline function found "%s"' % (name))
+
+		kp = {
 			'name': name,
 			'func': function,
 			'format': format,
-			'args': args,
-			'mask': re.sub('{(?P<n>[a-z,A-Z,0-9]*)}', '.*', format)
+			sysvals.archargs: args
 		}
 		if color:
-			sysvals.kprobes[name]['color'] = color
+			kp['color'] = color
+		if dev:
+			sysvals.dev_tracefuncs[name] = kp
+		else:
+			sysvals.tracefuncs[name] = kp
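
For reference, the loop above pulls timeline functions from config sections named 'timeline_functions_<arch>' and 'dev_timeline_functions_<arch>'. A purely hypothetical fragment of such a config, inferred from the parsing code above (the kprobe argument syntax shown is illustrative, not taken from the patch):

[timeline_functions_x86_64]
# entry form: name: function format-with-{args} arg=<value>
msleep: msleep time={time} time=%di:s32

[dev_timeline_functions_x86_64]
usleep_range: usleep_range min={min} max={max} min=%di:s32 max=%si:s32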
 # Function: printHelp
 # Description:
 #	 print out the help text
 def printHelp():
-	global sysvals
 	modes = getModes()
 	print('')
@@ -4672,34 +4990,37 @@ def printHelp():
 	print(' [general]')
 	print(' -h Print this help text')
 	print(' -v Print the current tool version')
-	print(' -config file Pull arguments and config options from a file')
+	print(' -config fn Pull arguments and config options from file fn')
 	print(' -verbose Print extra information during execution and analysis')
 	print(' -status Test to see if the system is enabled to run this tool')
 	print(' -modes List available suspend modes')
 	print(' -m mode Mode to initiate for suspend %s (default: %s)') % (modes, sysvals.suspendmode)
 	print(' -o subdir Override the output subdirectory')
-	print(' [advanced]')
 	print(' -rtcwake t Use rtcwake to autoresume after <t> seconds (default: disabled)')
 	print(' -addlogs Add the dmesg and ftrace logs to the html output')
-	print(' -multi n d Execute <n> consecutive tests at <d> seconds intervals. The outputs will')
-	print(' be created in a new subdirectory with a summary page.')
 	print(' -srgap Add a visible gap in the timeline between sus/res (default: disabled)')
-	print(' -cmd {s} Instead of suspend/resume, run a command, e.g. "sync -d"')
+	print(' [advanced]')
+	print(' -cmd {s} Run the timeline over a custom command, e.g. "sync -d"')
+	print(' -proc Add usermode process info into the timeline (default: disabled)')
+	print(' -dev Add kernel function calls and threads to the timeline (default: disabled)')
+	print(' -x2 Run two suspend/resumes back to back (default: disabled)')
+	print(' -x2delay t Include t ms delay between multiple test runs (default: 0 ms)')
+	print(' -predelay t Include t ms delay before 1st suspend (default: 0 ms)')
+	print(' -postdelay t Include t ms delay after last resume (default: 0 ms)')
 	print(' -mindev ms Discard all device blocks shorter than ms milliseconds (e.g. 0.001 for us)')
-	print(' -mincg ms Discard all callgraphs shorter than ms milliseconds (e.g. 0.001 for us)')
-	print(' -timeprec N Number of significant digits in timestamps (0:S, [3:ms], 6:us)')
+	print(' -multi n d Execute <n> consecutive tests at <d> seconds intervals. The outputs will')
+	print(' be created in a new subdirectory with a summary page.')
 	print(' [debug]')
 	print(' -f Use ftrace to create device callgraphs (default: disabled)')
 	print(' -expandcg pre-expand the callgraph data in the html output (default: disabled)')
 	print(' -flist Print the list of functions currently being captured in ftrace')
 	print(' -flistall Print all functions capable of being captured in ftrace')
 	print(' -fadd file Add functions to be graphed in the timeline from a list in a text file')
-	print(' -filter "d1 d2 ..." Filter out all but this list of device names')
-	print(' -dev Display common low level functions in the timeline')
-	print(' [post-resume task analysis]')
-	print(' -x2 Run two suspend/resumes back to back (default: disabled)')
-	print(' -x2delay t Minimum millisecond delay <t> between the two test runs (default: 0 ms)')
-	print(' -postres t Time after resume completion to wait for post-resume events (default: 0 S)')
+	print(' -filter "d1,d2,..." Filter out all but this comma-delimited list of device names')
+	print(' -mincg ms Discard all callgraphs shorter than ms milliseconds (e.g. 0.001 for us)')
+	print(' -cgphase P Only show callgraph data for phase P (e.g. suspend_late)')
+	print(' -cgtest N Only show callgraph data for test N (e.g. 0 or 1 in an x2 run)')
+	print(' -timeprec N Number of significant digits in timestamps (0:S, [3:ms], 6:us)')
 	print(' [utilities]')
 	print(' -fpdt Print out the contents of the ACPI Firmware Performance Data Table')
 	print(' -usbtopo Print out the current USB topology with power info')
@@ -4739,26 +5060,22 @@ if __name__ == '__main__':
 			sys.exit()
 		elif(arg == '-x2'):
 			sysvals.execcount = 2
-			if(sysvals.usecallgraph):
-				doError('-x2 is not compatible with -f', False)
 		elif(arg == '-x2delay'):
 			sysvals.x2delay = getArgInt('-x2delay', args, 0, 60000)
-		elif(arg == '-postres'):
-			sysvals.postresumetime = getArgInt('-postres', args, 0, 3600)
+		elif(arg == '-predelay'):
+			sysvals.predelay = getArgInt('-predelay', args, 0, 60000)
+		elif(arg == '-postdelay'):
+			sysvals.postdelay = getArgInt('-postdelay', args, 0, 60000)
 		elif(arg == '-f'):
 			sysvals.usecallgraph = True
-			if(sysvals.execcount > 1):
-				doError('-x2 is not compatible with -f', False)
-			if(sysvals.usedevsrc):
-				doError('-dev is not compatible with -f', False)
 		elif(arg == '-addlogs'):
 			sysvals.addlogs = True
 		elif(arg == '-verbose'):
 			sysvals.verbose = True
+		elif(arg == '-proc'):
+			sysvals.useprocmon = True
 		elif(arg == '-dev'):
 			sysvals.usedevsrc = True
-			if(sysvals.usecallgraph):
-				doError('-dev is not compatible with -f', False)
 		elif(arg == '-rtcwake'):
 			sysvals.rtcwake = True
 			sysvals.rtcwaketime = getArgInt('-rtcwake', args, 0, 3600)
@@ -4768,6 +5085,21 @@ if __name__ == '__main__':
 			sysvals.mindevlen = getArgFloat('-mindev', args, 0.0, 10000.0)
 		elif(arg == '-mincg'):
 			sysvals.mincglen = getArgFloat('-mincg', args, 0.0, 10000.0)
+		elif(arg == '-cgtest'):
+			sysvals.cgtest = getArgInt('-cgtest', args, 0, 1)
+		elif(arg == '-cgphase'):
+			try:
+				val = args.next()
+			except:
+				doError('No phase name supplied', True)
+			d = Data(0)
+			if val not in d.phases:
+				doError('Invalid phase, valid phases are %s' % d.phases, True)
+			sysvals.cgphase = val
+		elif(arg == '-callloop-maxgap'):
+			sysvals.callloopmaxgap = getArgFloat('-callloop-maxgap', args, 0.0, 1.0)
+		elif(arg == '-callloop-maxlen'):
+			sysvals.callloopmaxlen = getArgFloat('-callloop-maxlen', args, 0.0, 1.0)
 		elif(arg == '-cmd'):
 			try:
 				val = args.next()
@@ -4788,14 +5120,14 @@ if __name__ == '__main__':
 				val = args.next()
 			except:
 				doError('No subdirectory name supplied', True)
-			sysvals.outdir = val
+			sysvals.setOutputFolder(val)
 		elif(arg == '-config'):
 			try:
 				val = args.next()
 			except:
 				doError('No text file supplied', True)
 			if(os.path.exists(val) == False):
-				doError('%s does not exist' % val, False)
+				doError('%s does not exist' % val)
 			configFromFile(val)
 		elif(arg == '-fadd'):
 			try:
@@ -4803,7 +5135,7 @@ if __name__ == '__main__':
 			except:
 				doError('No text file supplied', True)
 			if(os.path.exists(val) == False):
-				doError('%s does not exist' % val, False)
+				doError('%s does not exist' % val)
 			sysvals.addFtraceFilterFunctions(val)
 		elif(arg == '-dmesg'):
 			try:
@@ -4813,7 +5145,7 @@ if __name__ == '__main__':
 			sysvals.notestrun = True
 			sysvals.dmesgfile = val
 			if(os.path.exists(sysvals.dmesgfile) == False):
-				doError('%s does not exist' % sysvals.dmesgfile, False)
+				doError('%s does not exist' % sysvals.dmesgfile)
 		elif(arg == '-ftrace'):
 			try:
 				val = args.next()
@@ -4822,7 +5154,7 @@ if __name__ == '__main__':
 			sysvals.notestrun = True
 			sysvals.ftracefile = val
 			if(os.path.exists(sysvals.ftracefile) == False):
-				doError('%s does not exist' % sysvals.ftracefile, False)
+				doError('%s does not exist' % sysvals.ftracefile)
 		elif(arg == '-summary'):
 			try:
 				val = args.next()
@@ -4832,7 +5164,7 @@ if __name__ == '__main__':
 			cmdarg = val
 			sysvals.notestrun = True
 			if(os.path.isdir(val) == False):
-				doError('%s is not accessible' % val, False)
+				doError('%s is not accessible' % val)
 		elif(arg == '-filter'):
 			try:
 				val = args.next()
@@ -4842,6 +5174,12 @@ if __name__ == '__main__':
 		else:
 			doError('Invalid argument: '+arg, True)
+	# compatibility errors
+	if(sysvals.usecallgraph and sysvals.usedevsrc):
+		doError('-dev is not compatible with -f')
+	if(sysvals.usecallgraph and sysvals.useprocmon):
+		doError('-proc is not compatible with -f')
 	# callgraph size cannot exceed device size
 	if sysvals.mincglen < sysvals.mindevlen:
 		sysvals.mincglen = sysvals.mindevlen
@@ -4855,8 +5193,7 @@ if __name__ == '__main__':
 	elif(cmd == 'usbtopo'):
 		detectUSB()
 	elif(cmd == 'modes'):
-		modes = getModes()
-		print modes
+		print getModes()
 	elif(cmd == 'flist'):
 		sysvals.getFtraceFilterFunctions(True)
 	elif(cmd == 'flistall'):
......
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
""" This utility can be used to debug and tune the performance of the
intel_pstate driver. This utility can be used in two ways:
- If there is Linux trace file with pstate_sample events enabled, then
this utility can parse the trace file and generate performance plots.
- If user has not specified a trace file as input via command line parameters,
then this utility enables and collects trace data for a user specified interval
and generates performance plots.
Prerequisites:
Python version 2.7.x
gnuplot 5.0 or higher
gnuplot-py 1.8
(Most of the distributions have these required packages. They may be called
gnuplot-py, phython-gnuplot. )
HWP (Hardware P-States are disabled)
Kernel config for Linux trace is enabled
see print_help(): for Usage and Output details
"""
from __future__ import print_function
from datetime import datetime
import subprocess
import os
import time
import re
import sys
import getopt
import Gnuplot
from numpy import *
from decimal import *
__author__ = "Srinivas Pandruvada"
__copyright__ = " Copyright (c) 2017, Intel Corporation. "
__license__ = "GPL version 2"
MAX_CPUS = 256
# Define the csv file columns
C_COMM = 18
C_GHZ = 17
C_ELAPSED = 16
C_SAMPLE = 15
C_DURATION = 14
C_LOAD = 13
C_BOOST = 12
C_FREQ = 11
C_TSC = 10
C_APERF = 9
C_MPERF = 8
C_TO = 7
C_FROM = 6
C_SCALED = 5
C_CORE = 4
C_USEC = 3
C_SEC = 2
C_CPU = 1
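# Illustrative sketch, not part of the tool: the constants above are 1-based
# column numbers matching both the cpu.csv header written by
# cleanup_data_files() and gnuplot's 1-based "using" columns; from Python,
# list indexes need C_x - 1.
import csv

def read_back_sketch():
    with open('cpu.csv') as f:
        for row in csv.reader(f):
            if row and row[C_CPU - 1].startswith('CPU_'):
                print(row[C_FREQ - 1].strip(), row[C_LOAD - 1].strip())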
global sample_num, last_sec_cpu, last_usec_cpu, start_time, testname
# 11 digits covers uptime to 115 days
getcontext().prec = 11
sample_num = 0
last_sec_cpu = [0] * MAX_CPUS
last_usec_cpu = [0] * MAX_CPUS
def print_help():
print('intel_pstate_tracer.py:')
print(' Usage:')
print(' If the trace file is available, then to simply parse and plot, use (sudo not required):')
print(' ./intel_pstate_tracer.py [-c cpus] -t <trace_file> -n <test_name>')
print(' Or')
print(' ./intel_pstate_tracer.py [--cpu cpus] --trace_file <trace_file> --name <test_name>')
print(' To generate trace file, parse and plot, use (sudo required):')
print(' sudo ./intel_pstate_tracer.py [-c cpus] -i <interval> -n <test_name>')
print(' Or')
print(' sudo ./intel_pstate_tracer.py [--cpu cpus] --interval <interval> --name <test_name>')
print(' Optional argument:')
print(' cpus: comma separated list of CPUs')
print(' Output:')
print(' If not already present, creates a "results/test_name" folder in the current working directory with:')
print(' cpu.csv - comma separated values file with trace contents and some additional calculations.')
print(' cpu???.csv - comma separated values file for CPU number ???.')
print(' *.png - a variety of PNG format plot files created from the trace contents and the additional calculations.')
print(' Notes:')
print(' Avoid the use of _ (underscore) in test names, because in gnuplot it is a subscript directive.')
print(' Maximum number of CPUs is {0:d}. If there are more, the script will abort with an error.'.format(MAX_CPUS))
print(' Off-line CPUs cause the script to list some warnings, and create some empty files. Use the CPU mask feature for a clean run.')
print(' Empty y range warnings for autoscaled plots can occur and can be ignored.')
def plot_perf_busy_with_sample(cpu_index):
""" Plot method to per cpu information """
file_name = 'cpu{:0>3}.csv'.format(cpu_index)
if os.path.exists(file_name):
output_png = "cpu%03d_perf_busy_vs_samples.png" % cpu_index
g_plot = common_all_gnuplot_settings(output_png)
g_plot('set yrange [0:40]')
g_plot('set y2range [0:200]')
g_plot('set y2tics 0, 10')
g_plot('set title "{} : cpu perf busy vs. sample : CPU {:0>3} : {:%F %H:%M}"'.format(testname, cpu_index, datetime.now()))
# Override common
g_plot('set xlabel "Samples"')
g_plot('set ylabel "P-State"')
g_plot('set y2label "Scaled Busy/performance/io-busy(%)"')
set_4_plot_linestyles(g_plot)
g_plot('plot "' + file_name + '" using {:d}:{:d} with linespoints linestyle 1 axis x1y2 title "performance",\\'.format(C_SAMPLE, C_CORE))
g_plot('"' + file_name + '" using {:d}:{:d} with linespoints linestyle 2 axis x1y2 title "scaled-busy",\\'.format(C_SAMPLE, C_SCALED))
g_plot('"' + file_name + '" using {:d}:{:d} with linespoints linestyle 3 axis x1y2 title "io-boost",\\'.format(C_SAMPLE, C_BOOST))
g_plot('"' + file_name + '" using {:d}:{:d} with linespoints linestyle 4 axis x1y1 title "P-State"'.format(C_SAMPLE, C_TO))
def plot_perf_busy(cpu_index):
""" Plot some per cpu information """
file_name = 'cpu{:0>3}.csv'.format(cpu_index)
if os.path.exists(file_name):
output_png = "cpu%03d_perf_busy.png" % cpu_index
g_plot = common_all_gnuplot_settings(output_png)
g_plot('set yrange [0:40]')
g_plot('set y2range [0:200]')
g_plot('set y2tics 0, 10')
g_plot('set title "{} : perf busy : CPU {:0>3} : {:%F %H:%M}"'.format(testname, cpu_index, datetime.now()))
g_plot('set ylabel "P-State"')
g_plot('set y2label "Scaled Busy/performance/io-busy(%)"')
set_4_plot_linestyles(g_plot)
g_plot('plot "' + file_name + '" using {:d}:{:d} with linespoints linestyle 1 axis x1y2 title "performance",\\'.format(C_ELAPSED, C_CORE))
g_plot('"' + file_name + '" using {:d}:{:d} with linespoints linestyle 2 axis x1y2 title "scaled-busy",\\'.format(C_ELAPSED, C_SCALED))
g_plot('"' + file_name + '" using {:d}:{:d} with linespoints linestyle 3 axis x1y2 title "io-boost",\\'.format(C_ELAPSED, C_BOOST))
g_plot('"' + file_name + '" using {:d}:{:d} with linespoints linestyle 4 axis x1y1 title "P-State"'.format(C_ELAPSED, C_TO))
def plot_durations(cpu_index):
""" Plot per cpu durations """
file_name = 'cpu{:0>3}.csv'.format(cpu_index)
if os.path.exists(file_name):
output_png = "cpu%03d_durations.png" % cpu_index
g_plot = common_all_gnuplot_settings(output_png)
# Should autoscale be used here? Should seconds be used here?
g_plot('set yrange [0:5000]')
g_plot('set ytics 0, 500')
g_plot('set title "{} : durations : CPU {:0>3} : {:%F %H:%M}"'.format(testname, cpu_index, datetime.now()))
g_plot('set ylabel "Timer Duration (MilliSeconds)"')
# override common
g_plot('set key off')
set_4_plot_linestyles(g_plot)
g_plot('plot "' + file_name + '" using {:d}:{:d} with linespoints linestyle 1 axis x1y1'.format(C_ELAPSED, C_DURATION))
def plot_loads(cpu_index):
""" Plot per cpu loads """
file_name = 'cpu{:0>3}.csv'.format(cpu_index)
if os.path.exists(file_name):
output_png = "cpu%03d_loads.png" % cpu_index
g_plot = common_all_gnuplot_settings(output_png)
g_plot('set yrange [0:100]')
g_plot('set ytics 0, 10')
g_plot('set title "{} : loads : CPU {:0>3} : {:%F %H:%M}"'.format(testname, cpu_index, datetime.now()))
g_plot('set ylabel "CPU load (percent)"')
# override common
g_plot('set key off')
set_4_plot_linestyles(g_plot)
g_plot('plot "' + file_name + '" using {:d}:{:d} with linespoints linestyle 1 axis x1y1'.format(C_ELAPSED, C_LOAD))
def plot_pstate_cpu_with_sample():
""" Plot all cpu information """
if os.path.exists('cpu.csv'):
output_png = 'all_cpu_pstates_vs_samples.png'
g_plot = common_all_gnuplot_settings(output_png)
g_plot('set yrange [0:40]')
# override common
g_plot('set xlabel "Samples"')
g_plot('set ylabel "P-State"')
g_plot('set title "{} : cpu pstate vs. sample : {:%F %H:%M}"'.format(testname, datetime.now()))
title_list = subprocess.check_output('ls cpu???.csv | sed -e \'s/.csv//\'',shell=True).replace('\n', ' ')
plot_str = "plot for [i in title_list] i.'.csv' using {:d}:{:d} pt 7 ps 1 title i".format(C_SAMPLE, C_TO)
g_plot('title_list = "{}"'.format(title_list))
g_plot(plot_str)
def plot_pstate_cpu():
""" Plot all cpu information from csv files """
output_png = 'all_cpu_pstates.png'
g_plot = common_all_gnuplot_settings(output_png)
g_plot('set yrange [0:40]')
g_plot('set ylabel "P-State"')
g_plot('set title "{} : cpu pstates : {:%F %H:%M}"'.format(testname, datetime.now()))
# the following command is really cool, but doesn't work with the CPU masking option because it aborts on the first missing file.
# plot_str = 'plot for [i=0:*] file=sprintf("cpu%03d.csv",i) title_s=sprintf("cpu%03d",i) file using 16:7 pt 7 ps 1 title title_s'
#
title_list = subprocess.check_output('ls cpu???.csv | sed -e \'s/.csv//\'',shell=True).replace('\n', ' ')
plot_str = "plot for [i in title_list] i.'.csv' using {:d}:{:d} pt 7 ps 1 title i".format(C_ELAPSED, C_TO)
g_plot('title_list = "{}"'.format(title_list))
g_plot(plot_str)
def plot_load_cpu():
""" Plot all cpu loads """
output_png = 'all_cpu_loads.png'
g_plot = common_all_gnuplot_settings(output_png)
g_plot('set yrange [0:100]')
g_plot('set ylabel "CPU load (percent)"')
g_plot('set title "{} : cpu loads : {:%F %H:%M}"'.format(testname, datetime.now()))
title_list = subprocess.check_output('ls cpu???.csv | sed -e \'s/.csv//\'',shell=True).replace('\n', ' ')
plot_str = "plot for [i in title_list] i.'.csv' using {:d}:{:d} pt 7 ps 1 title i".format(C_ELAPSED, C_LOAD)
g_plot('title_list = "{}"'.format(title_list))
g_plot(plot_str)
def plot_frequency_cpu():
""" Plot all cpu frequencies """
output_png = 'all_cpu_frequencies.png'
g_plot = common_all_gnuplot_settings(output_png)
g_plot('set yrange [0:4]')
g_plot('set ylabel "CPU Frequency (GHz)"')
g_plot('set title "{} : cpu frequencies : {:%F %H:%M}"'.format(testname, datetime.now()))
title_list = subprocess.check_output('ls cpu???.csv | sed -e \'s/.csv//\'',shell=True).replace('\n', ' ')
plot_str = "plot for [i in title_list] i.'.csv' using {:d}:{:d} pt 7 ps 1 title i".format(C_ELAPSED, C_FREQ)
g_plot('title_list = "{}"'.format(title_list))
g_plot(plot_str)
def plot_duration_cpu():
""" Plot all cpu durations """
output_png = 'all_cpu_durations.png'
g_plot = common_all_gnuplot_settings(output_png)
g_plot('set yrange [0:5000]')
g_plot('set ytics 0, 500')
g_plot('set ylabel "Timer Duration (MilliSeconds)"')
g_plot('set title "{} : cpu durations : {:%F %H:%M}"'.format(testname, datetime.now()))
title_list = subprocess.check_output('ls cpu???.csv | sed -e \'s/.csv//\'',shell=True).replace('\n', ' ')
plot_str = "plot for [i in title_list] i.'.csv' using {:d}:{:d} pt 7 ps 1 title i".format(C_ELAPSED, C_DURATION)
g_plot('title_list = "{}"'.format(title_list))
g_plot(plot_str)
def plot_scaled_cpu():
""" Plot all cpu scaled busy """
output_png = 'all_cpu_scaled.png'
g_plot = common_all_gnuplot_settings(output_png)
# autoscale this one, no set y range
g_plot('set ylabel "Scaled Busy (Unitless)"')
g_plot('set title "{} : cpu scaled busy : {:%F %H:%M}"'.format(testname, datetime.now()))
title_list = subprocess.check_output('ls cpu???.csv | sed -e \'s/.csv//\'',shell=True).replace('\n', ' ')
plot_str = "plot for [i in title_list] i.'.csv' using {:d}:{:d} pt 7 ps 1 title i".format(C_ELAPSED, C_SCALED)
g_plot('title_list = "{}"'.format(title_list))
g_plot(plot_str)
def plot_boost_cpu():
""" Plot all cpu IO Boosts """
output_png = 'all_cpu_boost.png'
g_plot = common_all_gnuplot_settings(output_png)
g_plot('set yrange [0:100]')
g_plot('set ylabel "CPU IO Boost (percent)"')
g_plot('set title "{} : cpu io boost : {:%F %H:%M}"'.format(testname, datetime.now()))
title_list = subprocess.check_output('ls cpu???.csv | sed -e \'s/.csv//\'',shell=True).replace('\n', ' ')
plot_str = "plot for [i in title_list] i.'.csv' using {:d}:{:d} pt 7 ps 1 title i".format(C_ELAPSED, C_BOOST)
g_plot('title_list = "{}"'.format(title_list))
g_plot(plot_str)
def plot_ghz_cpu():
""" Plot all cpu tsc ghz """
output_png = 'all_cpu_ghz.png'
g_plot = common_all_gnuplot_settings(output_png)
# autoscale this one, no set y range
g_plot('set ylabel "TSC Frequency (GHz)"')
g_plot('set title "{} : cpu TSC Frequencies (Sanity check calculation) : {:%F %H:%M}"'.format(testname, datetime.now()))
title_list = subprocess.check_output('ls cpu???.csv | sed -e \'s/.csv//\'',shell=True).replace('\n', ' ')
plot_str = "plot for [i in title_list] i.'.csv' using {:d}:{:d} pt 7 ps 1 title i".format(C_ELAPSED, C_GHZ)
g_plot('title_list = "{}"'.format(title_list))
g_plot(plot_str)
def common_all_gnuplot_settings(output_png):
""" common gnuplot settings for multiple CPUs one one graph. """
g_plot = common_gnuplot_settings()
g_plot('set output "' + output_png + '"')
return(g_plot)
def common_gnuplot_settings():
""" common gnuplot settings. """
g_plot = Gnuplot.Gnuplot(persist=1)
# The following line is for rigor only. It seems to be assumed for .csv files
g_plot('set datafile separator \",\"')
g_plot('set ytics nomirror')
g_plot('set xtics nomirror')
g_plot('set xtics font ", 10"')
g_plot('set ytics font ", 10"')
g_plot('set tics out scale 1.0')
g_plot('set grid')
g_plot('set key out horiz')
g_plot('set key bot center')
g_plot('set key samplen 2 spacing .8 font ", 9"')
g_plot('set term png size 1200, 600')
g_plot('set title font ", 11"')
g_plot('set ylabel font ", 10"')
g_plot('set xlabel font ", 10"')
g_plot('set xlabel offset 0, 0.5')
g_plot('set xlabel "Elapsed Time (Seconds)"')
return(g_plot)
def set_4_plot_linestyles(g_plot):
""" set the linestyles used for 4 plots in 1 graphs. """
g_plot('set style line 1 linetype 1 linecolor rgb "green" pointtype -1')
g_plot('set style line 2 linetype 1 linecolor rgb "red" pointtype -1')
g_plot('set style line 3 linetype 1 linecolor rgb "purple" pointtype -1')
g_plot('set style line 4 linetype 1 linecolor rgb "blue" pointtype -1')
def store_csv(cpu_int, time_pre_dec, time_post_dec, core_busy, scaled, _from, _to, mperf, aperf, tsc, freq_ghz, io_boost, common_comm, load, duration_ms, sample_num, elapsed_time, tsc_ghz):
""" Store master csv file information """
global graph_data_present
if cpu_mask[cpu_int] == 0:
return
try:
f_handle = open('cpu.csv', 'a')
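# One row per sample; the column order here must match the header row written by cleanup_data_files().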
string_buffer = "CPU_%03u, %05u, %06u, %u, %u, %u, %u, %u, %u, %u, %.4f, %u, %.2f, %.3f, %u, %.3f, %.3f, %s\n" % (cpu_int, int(time_pre_dec), int(time_post_dec), int(core_busy), int(scaled), int(_from), int(_to), int(mperf), int(aperf), int(tsc), freq_ghz, int(io_boost), load, duration_ms, sample_num, elapsed_time, tsc_ghz, common_comm)
f_handle.write(string_buffer)
f_handle.close()
except IOError:
print('IO error cpu.csv')
return
graph_data_present = True
def split_csv():
""" seperate the all csv file into per CPU csv files. """
global current_max_cpu
if os.path.exists('cpu.csv'):
for index in range(0, current_max_cpu + 1):
if cpu_mask[int(index)] != 0:
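# The first grep copies the header row (the only line containing 'common_cpu') into the per-CPU file; the second appends that CPU's sample rows.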
os.system('grep -m 1 common_cpu cpu.csv > cpu{:0>3}.csv'.format(index))
os.system('grep CPU_{:0>3} cpu.csv >> cpu{:0>3}.csv'.format(index, index))
def cleanup_data_files():
""" clean up existing data files """
if os.path.exists('cpu.csv'):
os.remove('cpu.csv')
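# Re-create cpu.csv with a fresh header row; store_csv() appends sample rows in this column order.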
f_handle = open('cpu.csv', 'a')
f_handle.write('common_cpu, common_secs, common_usecs, core_busy, scaled_busy, from, to, mperf, aperf, tsc, freq, boost, load, duration_ms, sample_num, elapsed_time, tsc_ghz, common_comm')
f_handle.write('\n')
f_handle.close()
def clear_trace_file():
""" Clear trace file """
try:
f_handle = open('/sys/kernel/debug/tracing/trace', 'w')
f_handle.close()
except IOError:
print('IO error clearing trace file')
quit()
def enable_trace():
""" Enable trace """
try:
open('/sys/kernel/debug/tracing/events/power/pstate_sample/enable', 'w').write("1")
except IOError:
print('IO error enabling trace')
quit()
def disable_trace():
""" Disable trace """
try:
open('/sys/kernel/debug/tracing/events/power/pstate_sample/enable', 'w').write("0")
except IOError:
print('IO error disabling trace')
quit()
def set_trace_buffer_size():
""" Set trace buffer size """
try:
open('/sys/kernel/debug/tracing/buffer_size_kb', 'w').write("10240")
except IOError:
print('IO error setting trace buffer size')
quit()
def read_trace_data(filename):
""" Read and parse trace data """
global current_max_cpu
global sample_num, last_sec_cpu, last_usec_cpu, start_time
try:
data = open(filename, 'r').read()
except IOError:
print('Error opening ', filename)
quit()
for line in data.splitlines():
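# Match the fixed fields of a pstate_sample trace line: the task/comm prefix, the CPU number, the secs.usecs timestamp, and the core_busy, scaled, from, to, mperf, aperf, tsc and freq values.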
search_obj = \
re.search(r'(^(.*?)\[)((\d+)[^\]])(.*?)(\d+)([.])(\d+)(.*?core_busy=)(\d+)(.*?scaled=)(\d+)(.*?from=)(\d+)(.*?to=)(\d+)(.*?mperf=)(\d+)(.*?aperf=)(\d+)(.*?tsc=)(\d+)(.*?freq=)(\d+)'
, line)
if search_obj:
cpu = search_obj.group(3)
cpu_int = int(cpu)
cpu = str(cpu_int)
time_pre_dec = search_obj.group(6)
time_post_dec = search_obj.group(8)
core_busy = search_obj.group(10)
scaled = search_obj.group(12)
_from = search_obj.group(14)
_to = search_obj.group(16)
mperf = search_obj.group(18)
aperf = search_obj.group(20)
tsc = search_obj.group(22)
freq = search_obj.group(24)
common_comm = search_obj.group(2).replace(' ', '')
# Not all kernel versions have io_boost field
io_boost = '0'
search_obj = re.search(r'.*?io_boost=(\d+)', line)
if search_obj:
io_boost = search_obj.group(1)
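# The first sample's timestamp becomes the zero point for elapsed time.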
if sample_num == 0:
start_time = Decimal(time_pre_dec) + Decimal(time_post_dec) / Decimal(1000000)
sample_num += 1
if last_sec_cpu[cpu_int] == 0:
last_sec_cpu[cpu_int] = time_pre_dec
last_usec_cpu[cpu_int] = time_post_dec
else:
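# Elapsed time since the previous sample on this CPU, computed in microseconds and converted to milliseconds.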
duration_us = (int(time_pre_dec) - int(last_sec_cpu[cpu_int])) * 1000000 + (int(time_post_dec) - int(last_usec_cpu[cpu_int]))
duration_ms = Decimal(duration_us) / Decimal(1000)
last_sec_cpu[cpu_int] = time_pre_dec
last_usec_cpu[cpu_int] = time_post_dec
elapsed_time = Decimal(time_pre_dec) + Decimal(time_post_dec) / Decimal(1000000) - start_time
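# Load is mperf cycles expressed as a percentage of tsc cycles over the sample interval.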
load = Decimal(int(mperf) * 100) / Decimal(tsc)
freq_ghz = Decimal(freq) / Decimal(1000000)
# Sanity check calculation; anomalies typically indicate missed samples.
# Guard against a zero duration, which should never occur.
tsc_ghz = Decimal(0)
if duration_ms != Decimal(0):
tsc_ghz = Decimal(tsc) / duration_ms / Decimal(1000000)
store_csv(cpu_int, time_pre_dec, time_post_dec, core_busy, scaled, _from, _to, mperf, aperf, tsc, freq_ghz, io_boost, common_comm, load, duration_ms, sample_num, elapsed_time, tsc_ghz)
if cpu_int > current_max_cpu:
current_max_cpu = cpu_int
# End of for each trace line loop
# Now separate the main overall csv file into per CPU csv files.
split_csv()
interval = ""
filename = ""
cpu_list = ""
testname = ""
graph_data_present = False
valid1 = False
valid2 = False
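# cpu_mask[i] == 1 selects CPU i for CSV output and plotting; the default (no -c option) selects all CPUs.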
cpu_mask = zeros((MAX_CPUS,), dtype=int)
try:
opts, args = getopt.getopt(sys.argv[1:], "ht:i:c:n:", ["help", "trace_file=", "interval=", "cpu=", "name="])
except getopt.GetoptError:
print_help()
sys.exit(2)
for opt, arg in opts:
if opt == '-h':
print_help()
sys.exit()
elif opt in ("-t", "--trace_file"):
valid1 = True
location = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))
filename = os.path.join(location, arg)
elif opt in ("-i", "--interval"):
valid1 = True
interval = arg
elif opt in ("-c", "--cpu"):
cpu_list = arg
elif opt in ("-n", "--name"):
valid2 = True
testname = arg
if not (valid1 and valid2):
print_help()
sys.exit()
if cpu_list:
for p in re.split("[,]", cpu_list):
if int(p) < MAX_CPUS:
cpu_mask[int(p)] = 1
else:
for i in range(0, MAX_CPUS):
cpu_mask[i] = 1
if not os.path.exists('results'):
os.mkdir('results')
os.chdir('results')
if os.path.exists(testname):
print('A directory for this test name already exists. Please provide a unique test name; re-running a test is not supported yet.')
sys.exit()
os.mkdir(testname)
os.chdir(testname)
# Temporary (or perhaps not): report the Python version as a sanity check.
cur_version = sys.version_info
print('Python version (should be >= 2.7):')
print(cur_version)
# Left as "cleanup" for potential future re-run ability.
cleanup_data_files()
if interval:
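# Live capture mode: clear the trace buffer, enlarge it, enable the pstate_sample tracepoint, sleep for the requested interval, then disable tracing.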
filename = "/sys/kernel/debug/tracing/trace"
clear_trace_file()
set_trace_buffer_size()
enable_trace()
print('Sleeping for ', interval, 'seconds')
time.sleep(int(interval))
disable_trace()
current_max_cpu = 0
read_trace_data(filename)
if not graph_data_present:
print('No valid data to plot')
sys.exit(2)
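# Generate the per-CPU graphs first, then the combined all-CPU graphs.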
for cpu_no in range(0, current_max_cpu + 1):
plot_perf_busy_with_sample(cpu_no)
plot_perf_busy(cpu_no)
plot_durations(cpu_no)
plot_loads(cpu_no)
plot_pstate_cpu_with_sample()
plot_pstate_cpu()
plot_load_cpu()
plot_frequency_cpu()
plot_duration_cpu()
plot_scaled_cpu()
plot_boost_cpu()
plot_ghz_cpu()
os.chdir('../../')