- 17 September 2020, 1 commit
-
-
Submitted by Peter Zijlstra
Some drivers have to do significant work, some of which relies on RCU still being active. Instead of using RCU_NONIDLE in the drivers and flipping RCU back on, allow drivers to take over RCU-idle duty.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Tested-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
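For illustration, a minimal sketch of what taking over RCU-idle duty can look like in a driver's state table. The flag name CPUIDLE_FLAG_RCU_IDLE and the convention that the flagged state's ->enter() callback then brackets its RCU-dependent work with rcu_idle_enter()/rcu_idle_exit() itself are assumptions based on this series, not a verbatim excerpt; the driver and callbacks are hypothetical.

    static int my_wfi_enter(struct cpuidle_device *dev,
                            struct cpuidle_driver *drv, int index);
    static int my_deep_enter(struct cpuidle_device *dev,
                             struct cpuidle_driver *drv, int index);

    /*
     * Hypothetical driver: the deep state does extra work that needs RCU,
     * so it declares that it handles the RCU-idle transition itself.
     */
    static struct cpuidle_state my_states[] = {
        {
            .name  = "WFI",
            .enter = my_wfi_enter,
        },
        {
            .name  = "DEEP",
            .flags = CPUIDLE_FLAG_RCU_IDLE,  /* core skips rcu_idle_enter()/exit() */
            .enter = my_deep_enter,          /* driver does them itself, around the RCU users */
        },
    };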
-
- 26 August 2020, 1 commit
-
-
Submitted by Peter Zijlstra
This allows moving the leave_mm() call into generic code before rcu_idle_enter(). Gets rid of more trace_*_rcuidle() users.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Tested-by: Marco Elver <elver@google.com>
Link: https://lkml.kernel.org/r/20200821085348.369441600@infradead.org
-
- 30 July 2020, 1 commit
-
-
Submitted by Neal Liu
Control Flow Integrity (CFI) is a security mechanism that disallows changes to the original control flow graph of a compiled binary, making control-flow hijacking attacks significantly harder to perform.

init_state_node() assigns the same callback to two function pointers with different prototypes:

    static int init_state_node(struct cpuidle_state *idle_state,
                               const struct of_device_id *matches,
                               struct device_node *state_node)
    {
        ...
        idle_state->enter = match_id->data;
        ...
        idle_state->enter_s2idle = match_id->data;
    }

Function pointer declarations:

    struct cpuidle_state {
        ...
        int (*enter) (struct cpuidle_device *dev,
                      struct cpuidle_driver *drv,
                      int index);
        void (*enter_s2idle) (struct cpuidle_device *dev,
                              struct cpuidle_driver *drv,
                              int index);
    };

Because both pointers share the same callee, calling it through either enter() or enter_s2idle() makes the CFI check fail. Align the prototypes on that of enter(), which needs its return value for some use cases; the return value of enter_s2idle() is not needed currently.

Signed-off-by: Neal Liu <neal.liu@mediatek.com>
Reviewed-by: Sami Tolvanen <samitolvanen@google.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
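For reference, a hedged sketch of the declarations after the change, assuming enter_s2idle() is switched to the int-returning prototype of enter(); the exact layout in include/linux/cpuidle.h may differ.

    /* Both callbacks now share one prototype, so CFI sees a matching callee
     * regardless of which pointer the call goes through. */
    int (*enter) (struct cpuidle_device *dev,
                  struct cpuidle_driver *drv,
                  int index);
    int (*enter_s2idle) (struct cpuidle_device *dev,
                         struct cpuidle_driver *drv,
                         int index);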
-
- 09 January 2020, 1 commit
-
-
Submitted by Rafael J. Wysocki
The cpuidle_driver_ref() and cpuidle_driver_unref() functions are not used, and the refcnt field in struct cpuidle_driver operated by them is not updated anywhere else (so it is permanently equal to 0), so drop both of them along with refcnt.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
-
- 27 December 2019, 1 commit
-
-
Submitted by Rafael J. Wysocki
In certain situations it may be useful to prevent some idle states from being used by default while allowing user space to enable them later on.

For this purpose, introduce a new state flag, CPUIDLE_FLAG_OFF, to mark idle states that should be disabled by default, and make the core set CPUIDLE_STATE_DISABLED_BY_USER for those states at initialization time. Also add a new state attribute in sysfs, "default_status", to inform user space of the initial status of the given idle state ("disabled" if CPUIDLE_FLAG_OFF is set for it, "enabled" otherwise).

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
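A rough sketch of the driver side, assuming only the flag named above; the state names and values are made up for illustration, and the ->enter callbacks are omitted for brevity.

    /* The deep state starts out disabled; "default_status" for it will read
     * "disabled", and user space may enable it later via the existing
     * per-state "disable" attribute. */
    static struct cpuidle_state my_states[] = {
        { .name = "C1", .exit_latency = 2,   .target_residency = 5 },
        { .name = "C6", .exit_latency = 100, .target_residency = 300,
          .flags = CPUIDLE_FLAG_OFF },
    };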
-
- 29 November 2019, 1 commit
-
-
Submitted by Rafael J. Wysocki
After recent cpuidle updates the "disabled" field in struct cpuidle_state is only used by two drivers (intel_idle and shmobile cpuidle) for marking unusable idle states, but that may as well be achieved with the help of a state flag.

Therefore define an "unusable" idle state flag, CPUIDLE_FLAG_UNUSABLE, make the drivers in question use it instead of the "disabled" field, and make the core set CPUIDLE_STATE_DISABLED_BY_DRIVER for the idle states with that flag set.

After the above changes, the "disabled" field in struct cpuidle_state is not used any more, so drop it.

No intentional functional impact.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 20 November 2019, 2 commits
-
-
Submitted by Daniel Lezcano
Modify cpuidle_use_deepest_state() to take an additional exit latency limit argument to be passed to find_deepest_idle_state(), and make cpuidle_idle_call() pass dev->forced_idle_latency_limit_ns to it for forced idle.

Suggested-by: Rafael J. Wysocki <rafael@kernel.org>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
[ rjw: Rebase and rearrange code, subject & changelog ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
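A minimal usage sketch, assuming the post-change signature takes the limit in nanoseconds and that passing 0 restores governor-based selection (as the companion commit below describes); the caller is hypothetical.

    /* Hypothetical idle-injection caller: force the deepest state whose exit
     * latency fits within 500 us, then hand control back to the governors. */
    static void forced_idle_window(void)
    {
        cpuidle_use_deepest_state(500 * NSEC_PER_USEC);
        /* ... run the forced-idle period ... */
        cpuidle_use_deepest_state(0);
    }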
-
Submitted by Daniel Lezcano
In some cases it may be useful to specify an exit latency limit for the idle state to be used during CPU idle time injection.

Instead of duplicating the information in struct cpuidle_device or propagating the latency limit in the call stack, replace the use_deepest_state field with forced_latency_limit_ns to represent that limit, so that the deepest idle state with exit latency within that limit is forced (i.e. no governors) when it is set. A zero exit latency limit for forced idle means to use governors in the usual way (analogous to use_deepest_state equal to "false" before this change).

Additionally, add play_idle_precise() taking two arguments, the duration of forced idle and the idle state exit latency limit, both in nanoseconds, and redefine play_idle() as a wrapper around that new function.

This change is preparatory; no functional impact is expected.

Suggested-by: Rafael J. Wysocki <rafael@kernel.org>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
[ rjw: Subject, changelog, cpuidle_use_deepest_state() kerneldoc, whitespace ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
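A hedged sketch of the resulting interface; the wrapper body is a reconstruction from the description above (an "unlimited" exit latency standing in for "no latency constraint"), and the example caller is hypothetical.

    /* play_idle() stays available as a thin wrapper around the new call. */
    static inline void play_idle(unsigned long duration_us)
    {
        play_idle_precise(duration_us * NSEC_PER_USEC, U64_MAX);
    }

    /* Example: inject 1 ms of forced idle, allowing only states whose exit
     * latency is at most 100 us. */
    static void inject_idle_example(void)
    {
        play_idle_precise(1 * NSEC_PER_MSEC, 100 * NSEC_PER_USEC);
    }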
-
- 19 November 2019, 1 commit
-
-
Submitted by Rafael J. Wysocki
Commit 99e98d3f ("cpuidle: Consolidate disabled state checks") overlooked the fact that the imx6q and tegra20 cpuidle drivers use the "disabled" field in struct cpuidle_state for quirks which trigger after the initialization of cpuidle, so reading the initial value of that field is not sufficient for those drivers.

In order to allow them to implement the quirks without using the "disabled" field in struct cpuidle_state, introduce a new helper function and modify them to use it.

Fixes: 99e98d3f ("cpuidle: Consolidate disabled state checks")
Reported-by: Len Brown <lenb@kernel.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
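The helper in question is cpuidle_driver_state_disabled(); a minimal sketch of the kind of quirk it enables (the surrounding function, condition and state index are hypothetical):

    static void apply_idle_quirk(struct cpuidle_driver *drv, bool platform_has_errata)
    {
        /* Keep state 1 off on affected hardware: the core sets the
         * "disabled by driver" status for every CPU using this driver. */
        if (platform_has_errata)
            cpuidle_driver_state_disabled(drv, 1, true);
    }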
-
- 12 November 2019, 1 commit
-
-
Submitted by Rafael J. Wysocki
Currently, the cpuidle subsystem uses microseconds as the unit of time which (among other things) causes the idle loop to incur some integer division overhead for no clear benefit.

In order to allow cpuidle to measure time in nanoseconds, add two new fields, exit_latency_ns and target_residency_ns, to represent the exit latency and target residency of an idle state in nanoseconds, respectively, to struct cpuidle_state and initialize them with the help of the corresponding values in microseconds provided by drivers. Additionally, change cpuidle_governor_latency_req() to return the idle state exit latency constraint in nanoseconds.

Also measure idle state residency (last_residency_ns in struct cpuidle_device and time_ns in struct cpuidle_driver) in nanoseconds and update the cpuidle core and governors accordingly. However, the menu governor still computes typical intervals in microseconds to avoid integer overflows.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Doug Smythies <dsmythies@telus.net>
Tested-by: Doug Smythies <dsmythies@telus.net>
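A simplified sketch of the initialization described above, i.e. deriving the new nanosecond fields from the microsecond values drivers keep providing; the helper is illustrative and the actual core code may differ in detail.

    static void set_state_time_ns(struct cpuidle_state *s)
    {
        /* Drivers still fill in exit_latency / target_residency in us;
         * the core derives the ns variants once at registration time. */
        s->exit_latency_ns     = (u64)s->exit_latency * NSEC_PER_USEC;
        s->target_residency_ns = (u64)s->target_residency * NSEC_PER_USEC;
    }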
-
- 06 November 2019, 1 commit
-
-
Submitted by Rafael J. Wysocki
There are two reasons why CPU idle states may be disabled: either because the driver has disabled them or because they have been disabled by user space via sysfs.

In the former case, the state's "disabled" flag is set once during the initialization of the driver and it is never cleared later (it is effectively read-only). In the latter case, the "disable" field of the given state's cpuidle_state_usage struct is set and it may be changed via sysfs. Thus checking whether or not an idle state has been disabled involves reading these two flags every time.

In order to avoid the additional check of the state's "disabled" flag (which is effectively read-only anyway), use its value at init time to set a (new) flag in the "disable" field of that state's cpuidle_state_usage structure, and use the sysfs interface to manipulate another (new) flag in it. This way the state is disabled whenever the "disable" field of its cpuidle_state_usage structure is nonzero, whatever the reason, and it is the only place to look into to check whether or not the state has been disabled.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
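A sketch of the single check this consolidation leaves behind; the field names follow the description above, while the helper itself is illustrative.

    static bool state_is_disabled(struct cpuidle_device *dev, int index)
    {
        /* Any nonzero bit in ->disable means the state is off, whether the
         * driver or user space set it. */
        return dev->states_usage[index].disable != 0;
    }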
-
- 11 September 2019, 1 commit
-
-
Submitted by Joao Martins
The recently introduced haltpoll driver is largely only useful with the haltpoll governor.

To allow drivers to associate with a particular idle behaviour, add a @governor property to 'struct cpuidle_driver' and thus allow a cpuidle driver to switch to a *preferred* governor on idle driver registration. We save the previous governor, and when an idle driver is unregistered we switch back to that.

The @governor can be overridden by the cpuidle.governor= boot parameter, or alternatively be ignored if the governor doesn't exist.

Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
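A minimal sketch of a driver declaring a preferred governor through the new field; the driver itself is hypothetical and its states table is omitted for brevity.

    /* At registration the core switches to "haltpoll" (unless overridden by
     * cpuidle.governor= or the governor is absent) and switches back to the
     * previous governor when this driver is unregistered. */
    static struct cpuidle_driver my_idle_driver = {
        .name     = "my_idle_driver",
        .owner    = THIS_MODULE,
        .governor = "haltpoll",
        /* .states[] and .state_count omitted for brevity */
    };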
-
- 10 August 2019, 1 commit
-
-
Submitted by Lorenzo Pieralisi
The current PSCI code handles idle state entry through the psci_cpu_suspend_enter() API, which takes an idle state index as a parameter and converts the index into a previously initialized power_state parameter before calling PSCI.CPU_SUSPEND() with it.

This is unwieldy, since it forces the PSCI firmware layer to keep track of the power_state parameter for every idle state, so that the index->power_state conversion can be made in the PSCI firmware layer instead of the CPUidle driver implementations.

Move the power_state handling out of drivers/firmware/psci into the respective ACPI/DT PSCI CPUidle backends and convert the psci_cpu_suspend_enter() API to take the power_state parameter as input, which makes it closer to its firmware interface, the PSCI.CPU_SUSPEND() API.

A notable side effect is that the PSCI ACPI/DT CPUidle backends now can directly handle (and if needed update) power_state parameters before handing them over to the PSCI firmware interface to trigger PSCI.CPU_SUSPEND() calls.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Ulf Hansson <ulf.hansson@linaro.org>
Cc: Sudeep Holla <sudeep.holla@arm.com>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Signed-off-by: Will Deacon <will@kernel.org>
-
- 31 July 2019, 1 commit
-
-
Submitted by Stephen Rothwell
An x86_64 allmodconfig build produces these errors:

    x86_64-linux-gnu-ld: kernel/sched/core.o: in function `cpuidle_poll_time':
    core.c:(.text+0x230): multiple definition of `cpuidle_poll_time';
    arch/x86/kernel/process.o:process.c:(.text+0xc0): first defined here

(and more)

Fixes: 259231a0 ("cpuidle: add poll_limit_ns to cpuidle_device structure")
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 30 July 2019, 2 commits
-
-
Submitted by Marcelo Tosatti
Since this field is shared by all governors, move it to the cpuidle device structure.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Submitted by Marcelo Tosatti
Add a poll_limit_ns variable to the cpuidle_device structure. Calculate and configure it in the new cpuidle_poll_time() function in case it is zero. Individual governors are allowed to override this value.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 10 April 2019, 1 commit
-
-
Submitted by Ulf Hansson
To be able to predict the sleep duration for a CPU entering idle, it is essential to know the expiration time of the next timer. Both the teo and the menu cpuidle governors already use this information for CPU idle state selection.

Moving forward, a similar prediction needs to be made for a group of idle CPUs rather than for a single one, and the following changes implement a new genpd governor for that purpose.

In order to support that feature, add a new function called tick_nohz_get_next_hrtimer() that will return the next hrtimer expiration time of a given CPU, to be invoked after deciding whether or not to stop the scheduler tick on that CPU.

Make the cpuidle core call tick_nohz_get_next_hrtimer() right before invoking the ->enter() callback provided by the cpuidle driver for the given state and store its return value in the per-CPU struct cpuidle_device, so as to make it available to code outside of cpuidle.

Note that at the point when cpuidle calls tick_nohz_get_next_hrtimer(), the governor's ->select() callback has already returned and indicated whether or not the tick should be stopped, so in fact the value returned by tick_nohz_get_next_hrtimer() always is the next hrtimer expiration time for the given CPU, possibly including the tick (if it hasn't been stopped).

Co-developed-by: Lina Iyer <lina.iyer@linaro.org>
Co-developed-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
[ rjw: Subject & changelog ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 18 January 2019, 1 commit
-
-
Submitted by Yangtao Li
Use the BIT() macro to do a small tidy-up. CPUIDLE_DRIVER_FLAGS_MASK is not used, so remove it.

Signed-off-by: Yangtao Li <tiny.windzz@gmail.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
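For illustration, the kind of cleanup this refers to, i.e. expressing the cpuidle state flags via BIT(); the concrete flag names and bit positions shown here are an assumption, as the list in cpuidle.h has changed over time.

    #define CPUIDLE_FLAG_NONE       (0)
    #define CPUIDLE_FLAG_COUPLED    BIT(1)  /* state applies to multiple CPUs */
    #define CPUIDLE_FLAG_TIMER_STOP BIT(2)  /* timer is stopped on this state */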
-
- 13 December 2018, 1 commit
-
-
Submitted by Rafael J. Wysocki
Add two new metrics for CPU idle states, "above" and "below", to count the number of times the given state had been asked for (or entered from the kernel's perspective), but the observed idle duration turned out to be too short or too long for it (respectively).

These metrics help to estimate the quality of the CPU idle governor in use.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 04 October 2018, 1 commit
-
-
Submitted by Rafael J. Wysocki
If the CPU exits the "polling" state due to the time limit in the loop in poll_idle(), this is not a real wakeup and it just means that the "polling" state selection was not adequate. The governor mispredicted a short idle duration, but had a more suitable state been selected, the CPU might have spent more time in it. In fact, there is no reason to expect that there would have been a wakeup event earlier than the next timer in that case.

Handling such cases as regular wakeups in menu_update() may cause the menu governor to make suboptimal decisions going forward, but ignoring them altogether would not be correct either, because every time menu_select() is invoked, it makes a separate new attempt to predict the idle duration, taking the distinct time to the closest timer event as input, and the outcomes of all those attempts should be recorded.

For this reason, make menu_update() always assume that if the "polling" state was exited due to the time limit, the next proper wakeup event for the CPU would be the next timer event (not including the tick).

Fixes: a37b969a ("cpuidle: poll_state: Add time limit to poll_idle()")
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Daniel Lezcano <daniel.lezcano@linaro.org>
-
- 18 September 2018, 1 commit
-
-
Submitted by Fieah Lim
cpuidle_get_last_residency() is just a wrapper for retrieving the last_residency member of struct cpuidle_device. It is also, oddly, the only wrapper function for accessing a cpuidle_* struct member (my best guess is that it is a leftover from v2.x).

Anyhow, since the only two users (the ladder and menu governors) can access dev->last_residency directly, and it is more intuitive to do it that way, let's just get rid of the wrapper.

This patch tidies up the CPU idle code a bit without functional changes.

Signed-off-by: Fieah Lim <kw@fieahl.im>
[ rjw: Changelog cleanup ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 31 May 2018, 1 commit
-
-
Submitted by Rafael J. Wysocki
There is some code duplication related to the PM QoS handling between the existing cpuidle governors, so move that code to a common helper function and call it from the governors.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 06 April 2018, 1 commit
-
-
Submitted by Rafael J. Wysocki
Add a new pointer argument to cpuidle_select() and to the ->select cpuidle governor callback to allow a boolean value, indicating whether or not the tick should be stopped before entering the selected state, to be returned from there.

Make the ladder governor ignore that pointer (to preserve its current behavior) and make the menu governor return 'false' through it if: (1) the idle exit latency is constrained at 0, or (2) the selected state is a polling one, or (3) the expected idle period duration is within the tick period range.

In addition to that, the correction factor computations in the menu governor need to take the possibility that the tick may not be stopped into account to avoid artificially small correction factor values. To that end, add a mechanism to record tick wakeups, as suggested by Peter Zijlstra, and use it to modify the menu_update() behavior when a tick wakeup occurs. Namely, if the CPU is woken up by the tick and the return value of tick_nohz_get_sleep_length() is not within the tick boundary, the predicted idle duration is likely too short, so make menu_update() try to compensate for that by updating the governor statistics as though the CPU was idle for a long time.

Since the value returned through the new argument pointer of cpuidle_select() is not used by its caller yet, this change by itself is not expected to alter the functionality of the code.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
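A sketch of how the idle loop can consume the new out-parameter once it starts using it; tick_nohz_idle_stop_tick() is how later kernels act on the hint, so treat the body as an assumption rather than the actual caller.

    static void idle_select_sketch(struct cpuidle_driver *drv,
                                   struct cpuidle_device *dev)
    {
        bool stop_tick = true;
        int index = cpuidle_select(drv, dev, &stop_tick);

        /* The governor's hint decides whether the tick is stopped before
         * the selected state is entered. */
        if (stop_tick)
            tick_nohz_idle_stop_tick();

        cpuidle_enter(drv, dev, index);
    }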
-
- 29 March 2018, 1 commit
-
-
Submitted by Rafael J. Wysocki
Add a new attribute group called "s2idle" under the sysfs directory of each cpuidle state that supports the ->enter_s2idle callback, and put two new attributes, "usage" and "time", into that group to represent the number of times the given state was requested for suspend-to-idle and the total time spent in suspend-to-idle after requesting that state, respectively.

That will allow diagnostic information related to suspend-to-idle to be collected without enabling advanced debug features and analyzing dmesg output.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 12 February 2018, 1 commit
-
-
Submitted by Rafael J. Wysocki
Commit f8594220 (x86: PM: Make APM idle driver initialize polling state) made apm_init() call cpuidle_poll_state_init(), but that is only defined when CONFIG_CPU_IDLE is set, so make an empty stub of it available when CONFIG_CPU_IDLE is unset too, to fix the resulting build issue.

Fixes: f8594220 (x86: PM: Make APM idle driver initialize polling state)
Cc: 4.14+ <stable@vger.kernel.org> # 4.14+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
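A sketch of the usual header arrangement this describes; the exact declaration in include/linux/cpuidle.h may differ slightly.

    #ifdef CONFIG_CPU_IDLE
    extern void cpuidle_poll_state_init(struct cpuidle_driver *drv);
    #else
    /* Empty stub so callers such as apm_init() build with CPU_IDLE unset. */
    static inline void cpuidle_poll_state_init(struct cpuidle_driver *drv) {}
    #endif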
-
- 02 January 2018, 1 commit
-
-
Submitted by Prashanth Prakash
If a CPU is entering a low power idle state where it doesn't lose any context, then there is no need to call cpu_pm_enter()/cpu_pm_exit(). Add a new macro (CPU_PM_CPU_IDLE_ENTER_RETENTION) to be used by cpuidle drivers when they are entering a retention state. By not calling cpu_pm_enter() and cpu_pm_exit() we reduce the latency involved in entering and exiting the retention idle states.

CPU_PM_CPU_IDLE_ENTER_RETENTION assumes that no state is lost and hence CPU PM notifiers will not be called. We may need a broader change if we need to support partial retention states efficiently.

On an ARM64-based Qualcomm server platform we measured the following overhead for calling cpu_pm_enter and cpu_pm_exit for retention states.

    workload: stress --hdd #CPUs --hdd-bytes 32M -t 30
    Average overhead of cpu_pm_enter - 1.2us
    Average overhead of cpu_pm_exit  - 3.1us

Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Prashanth Prakash <pprakash@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
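A minimal sketch of how an ARM-style driver callback might use the new macro; the driver function and low-level suspend routine are hypothetical.

    /* Retention state: context is preserved, so the macro skips the CPU PM
     * notifier calls that CPU_PM_CPU_IDLE_ENTER would make. */
    static int my_enter_retention(struct cpuidle_device *dev,
                                  struct cpuidle_driver *drv, int index)
    {
        return CPU_PM_CPU_IDLE_ENTER_RETENTION(my_low_level_suspend, index);
    }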
-
- 30 August 2017, 3 commits
-
-
Submitted by Rafael J. Wysocki
Make the drivers that want to include the polling state into their states table initialize it explicitly, and drop the initialization of it (which in fact is conditional, but that is not obvious from the code) from the core.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Tested-by: Sudeep Holla <sudeep.holla@arm.com>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
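A sketch of the driver-side pattern this implies, using the hypothetical my_idle_driver structure; cpuidle_poll_state_init() fills in the polling entry (state 0) that the core used to add implicitly.

    static int my_driver_probe(void)
    {
        /* states[0] becomes the polling state; states[1..] stay as the
         * hardware idle states the driver already describes. */
        cpuidle_poll_state_init(&my_idle_driver);
        return cpuidle_register(&my_idle_driver, NULL);
    }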
-
Submitted by Rafael J. Wysocki
Move the polling state initialization code to a separate file built conditionally on CONFIG_ARCH_HAS_CPU_RELAX to get rid of the #ifdef in driver.c.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Tested-by: Sudeep Holla <sudeep.holla@arm.com>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
-
Submitted by Rafael J. Wysocki
On some architectures the first (index 0) idle state is a polling one and it doesn't really save energy, so there is the CPUIDLE_DRIVER_STATE_START symbol allowing some pieces of cpuidle code to avoid using that state. However, this makes the code rather hard to follow.

It is better to explicitly avoid the polling state, so add a new cpuidle state flag, CPUIDLE_FLAG_POLLING, to mark it and make the relevant code check that flag for the first state instead of using the CPUIDLE_DRIVER_STATE_START symbol.

In the ACPI processor driver, which cannot always rely on the state flags (for example, before the states table has been set up), define a new internal symbol ACPI_IDLE_STATE_START equivalent to the CPUIDLE_DRIVER_STATE_START one and drop the latter.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Tested-by: Sudeep Holla <sudeep.holla@arm.com>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
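A small sketch of the new-style check, testing the flag on state 0 instead of comparing indices against CPUIDLE_DRIVER_STATE_START; the helper name is illustrative.

    static bool first_state_is_polling(struct cpuidle_driver *drv)
    {
        return drv->states[0].flags & CPUIDLE_FLAG_POLLING;
    }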
-
- 11 August 2017, 1 commit
-
-
Submitted by Rafael J. Wysocki
Rename the ->enter_freeze cpuidle driver callback to ->enter_s2idle to make it clear that it is used for entering suspend-to-idle, and rename the related functions, variables and so on accordingly.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 31 January 2017, 1 commit
-
-
Submitted by Gautham R. Shenoy
In the current code for powernv_add_idle_states, there is a lot of code duplication while initializing an idle state in the powernv_states table.

Add an inline helper function to populate the powernv_states[] table for a given idle state. Invoke this for populating the "Nap", "Fastsleep" and the stop states in powernv_add_idle_states.

Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Acked-by: Balbir Singh <bsingharora@gmail.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 29 November 2016, 1 commit
-
-
Submitted by Jacob Pan
When idle injection is used to cap power, we need to override the governor's choice of idle states. For this reason, make it possible to enforce the selection of the deepest idle state by setting a flag on a given CPU, to achieve the maximum potential power draw reduction.

Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
[ rjw: Subject & changelog ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 21 October 2016, 1 commit
-
-
Submitted by Daniel Lezcano
The governor code uses try_module_get() and module_put() to refcount the governor's module, but the governors are not built as modules. Because of that, the refcounting neither prevents switching governors nor prevents unloading anything. The code is pointless, so remove it.

Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 22 July 2016, 1 commit
-
-
Submitted by Sudeep Holla
The function arm_enter_idle_state is exactly the same in both the generic ARM{32,64} CPUidle drivers, and will be the same even in the ARM64 backend for the ACPI processor idle driver. So we can unify it and move it to a common place by introducing a CPU_PM_CPU_IDLE_ENTER macro that can be used in all those places, avoiding duplication.

This is in preparation for reusing the generic cpuidle entry function for ACPI LPI support on ARM64.

Suggested-by: Rafael J. Wysocki <rjw@rjwysocki.net>
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 03 June 2016, 1 commit
-
-
Submitted by Catalin Marinas
The cpuidle_devices per-CPU variable is only defined when CPU_IDLE is enabled. Commit c8cc7d4d ("sched/idle: Reorganize the idle loop") removed the #ifdef CONFIG_CPU_IDLE around cpuidle_idle_call(), with the compiler optimising away __this_cpu_read(cpuidle_devices). However, with CONFIG_UBSAN && !CONFIG_CPU_IDLE, this optimisation no longer happens and the kernel fails to link since cpuidle_devices is not defined.

This patch introduces an accessor function for the current CPU's cpuidle device (returning NULL when !CONFIG_CPU_IDLE) and uses it in cpuidle_idle_call().

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: 4.5+ <stable@vger.kernel.org> # 4.5+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
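A hedged sketch of such an accessor pair in the header, along the lines described above; the exact form in include/linux/cpuidle.h may differ.

    #ifdef CONFIG_CPU_IDLE
    static inline struct cpuidle_device *cpuidle_get_device(void)
    {
        return __this_cpu_read(cpuidle_devices);
    }
    #else
    static inline struct cpuidle_device *cpuidle_get_device(void)
    {
        return NULL;
    }
    #endif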
-
- 28 August 2015, 1 commit
-
-
Submitted by Xunlei Pang
cpuidle_device::safe_state_index needs to be initialized before use and it should be the same as cpuidle_driver::safe_state_index. Tackle this by removing safe_state_index from the cpuidle_device structure and using the one in the cpuidle_driver structure instead.

Suggested-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Xunlei Pang <pang.xunlei@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 19 May 2015, 1 commit
-
-
Submitted by Rafael J. Wysocki
Since idle_should_freeze() is defined to always return 'false' when CONFIG_SUSPEND is unset, all of the code depending on it in cpuidle_idle_call() is unnecessary in that case. Make that code depend on CONFIG_SUSPEND too, to avoid building it when it is not going to be used.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
-
- 15 May 2015, 2 commits
-
-
Submitted by Rafael J. Wysocki
The check of the cpuidle_enter() return value against -EBUSY made in call_cpuidle() will not be necessary any more if cpuidle_enter_state() calls default_idle_call() directly when it is about to return -EBUSY, so make that happen and eliminate the check.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Tested-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Tested-by: Sudeep Holla <sudeep.holla@arm.com>
Acked-by: Kevin Hilman <khilman@linaro.org>
-
Submitted by Rafael J. Wysocki
Introduce a wrapper function around idle_set_state() called sched_idle_set_state() that will pass this_rq() to it as the first argument, and make cpuidle_enter_state() call the new function before and after entering the target state. At the same time, remove the direct invocations of idle_set_state() from call_cpuidle().

This will allow the invocation of default_idle_call() to be moved from call_cpuidle() to cpuidle_enter_state() safely, and call_cpuidle() to be simplified a bit as a result.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Tested-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Tested-by: Sudeep Holla <sudeep.holla@arm.com>
Acked-by: Kevin Hilman <khilman@linaro.org>
-
- 03 April 2015, 1 commit
-
-
Thomas Schlichter reports the following issue on his Samsung NC20:

"[The BIOS provides] the C-states C1 and C2 to the OS when connected to AC, and additionally provides the C3 C-state when disconnected from AC. However, the number of C-states shown in sysfs is fixed to the number of C-states present at boot. If I boot with AC connected, I always only see the C-states up to C2 even if I disconnect AC.

The reason is commit 130a5f69 (ACPI / cpuidle: remove dev->state_count setting). It removes the update of dev->state_count, but sysfs uses exactly this variable to show the C-states. The fix is to use drv->state_count in sysfs. As this is currently the last user of dev->state_count, this variable can be completely removed."

Remove dev->state_count as per the above.

Reported-by: Thomas Schlichter <thomas.schlichter@web.de>
Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: 3.14+ <stable@vger.kernel.org> # 3.14+
[ rjw: Changelog ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-