- 30 May, 2015 1 commit
-
-
Submitted by Rafael J. Wysocki
The CPUIDLE_DRIVER_STATE_START symbol is defined as 1 only if CONFIG_ARCH_HAS_CPU_RELAX is set, otherwise it is defined as 0. However, if CONFIG_ARCH_HAS_CPU_RELAX is set, the first (index 0) entry in the cpuidle driver's table of states is overwritten with the default "poll" entry by the core. The "state" defined by the "poll" entry doesn't provide ->enter_dead and ->enter_freeze callbacks and its exit_latency is 0. For this reason, it is not necessary to use CPUIDLE_DRIVER_STATE_START in cpuidle_play_dead() (->enter_dead is NULL, so the "poll state" will be skipped by the loop). It also is arguably not useful to return states with exit_latency equal to 0 from find_deepest_state(), so the function can be modified to start the loop from index 0 and the "poll state" will be skipped by it as a result of the check against latency_req. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
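For reference, a minimal sketch of the reworked loop with simplified stand-in types (the real find_deepest_state() in drivers/cpuidle/cpuidle.c also consults per-device usage data): latency_req starts at 0, so the poll state's exit_latency of 0 fails the comparison and the state is skipped.

    #include <stdbool.h>

    struct cpuidle_state {
            unsigned int exit_latency;  /* worst-case exit latency, in us */
            bool disabled;
    };

    /* Sketch: the loop now starts at index 0; the "poll" entry
     * (exit_latency == 0) never passes the latency_req check, so no
     * CPUIDLE_DRIVER_STATE_START special-casing is needed. */
    static int find_deepest_state(const struct cpuidle_state *states, int count)
    {
            unsigned int latency_req = 0;
            int i, deepest = -1;

            for (i = 0; i < count; i++) {
                    if (states[i].disabled ||
                        states[i].exit_latency <= latency_req)
                            continue;
                    latency_req = states[i].exit_latency;
                    deepest = i;
            }
            return deepest;
    }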
-
- 19 May, 2015 1 commit
-
-
Submitted by Rafael J. Wysocki
Since idle_should_freeze() is defined to always return 'false' for CONFIG_SUSPEND unset, all of the code depending on it in cpuidle_idle_call() is not necessary in that case. Make that code depend on CONFIG_SUSPEND too to avoid building it when it is not going to be used. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Thomas Gleixner <tglx@linutronix.de>
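A compile-time sketch of the pattern, using userspace stand-ins for the kernel helpers; with CONFIG_SUSPEND undefined, the whole branch vanishes from the build:

    #include <stdio.h>
    #include <stdbool.h>

    /* #define CONFIG_SUSPEND 1 -- set by Kconfig in the kernel */

    #ifdef CONFIG_SUSPEND
    static bool idle_should_freeze(void) { return false; } /* stand-in */
    static void cpuidle_enter_freeze(void) { }             /* stand-in */
    #endif

    static void cpuidle_idle_call_sketch(void)
    {
    #ifdef CONFIG_SUSPEND
            /* Compiled only when suspend-to-idle can actually happen. */
            if (idle_should_freeze()) {
                    cpuidle_enter_freeze();
                    return;
            }
    #endif
            puts("normal cpuidle path");
    }

    int main(void) { cpuidle_idle_call_sketch(); return 0; }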
-
- 15 May, 2015 3 commits
-
-
Submitted by Rafael J. Wysocki
If tick_broadcast_enter() fails in cpuidle_enter_state(), try to find another idle state to enter instead of invoking default_idle_call() immediately and returning -EBUSY, which should increase the chances of saving some energy in those cases. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com> Tested-by: Preeti U Murthy <preeti@linux.vnet.ibm.com> Tested-by: Sudeep Holla <sudeep.holla@arm.com> Acked-by: Kevin Hilman <khilman@linaro.org>
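A simplified sketch of the retry; find_deepest_state_without() is a hypothetical stand-in for the extra filtering arguments the real find_deepest_state() grew:

    #define CPUIDLE_FLAG_TIMER_STOP 0x01
    #define EBUSY 16

    struct cpuidle_state { unsigned int flags; };
    struct cpuidle_driver { struct cpuidle_state states[8]; int state_count; };

    /* Userspace stand-ins for the kernel helpers used below. */
    static int tick_broadcast_enter(void) { return -EBUSY; }
    static void default_idle_call(void) { }

    /* Hypothetical helper: deepest state without the given flags. */
    static int find_deepest_state_without(struct cpuidle_driver *drv,
                                          unsigned int forbidden)
    {
            int i, deepest = -1;

            for (i = 0; i < drv->state_count; i++)
                    if (!(drv->states[i].flags & forbidden))
                            deepest = i;  /* states ordered shallow->deep */
            return deepest;
    }

    /* Sketch: if the broadcast handoff fails, fall back to the deepest
     * state that keeps the local timer running, instead of going
     * straight to default_idle_call(). */
    static int enter_state_sketch(struct cpuidle_driver *drv, int index)
    {
            struct cpuidle_state *target = &drv->states[index];

            if ((target->flags & CPUIDLE_FLAG_TIMER_STOP) &&
                tick_broadcast_enter()) {
                    index = find_deepest_state_without(drv,
                                    CPUIDLE_FLAG_TIMER_STOP);
                    if (index < 0) {
                            default_idle_call();
                            return -EBUSY;
                    }
                    target = &drv->states[index];
            }
            /* ... enter *target here ... */
            (void)target;
            return index;
    }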
-
Submitted by Rafael J. Wysocki
The check of the cpuidle_enter() return value against -EBUSY made in call_cpuidle() will not be necessary any more if cpuidle_enter_state() calls default_idle_call() directly when it is about to return -EBUSY, so make that happen and eliminate the check. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com> Tested-by: Preeti U Murthy <preeti@linux.vnet.ibm.com> Tested-by: Sudeep Holla <sudeep.holla@arm.com> Acked-by: Kevin Hilman <khilman@linaro.org>
-
Submitted by Rafael J. Wysocki
Introduce a wrapper function around idle_set_state() called sched_idle_set_state() that will pass this_rq() to it as the first argument, and make cpuidle_enter_state() call the new function before and after entering the target state. At the same time, remove direct invocations of idle_set_state() from call_cpuidle(). This will allow the invocation of default_idle_call() to be moved from call_cpuidle() to cpuidle_enter_state() safely and call_cpuidle() to be simplified a bit as a result. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com> Tested-by: Preeti U Murthy <preeti@linux.vnet.ibm.com> Tested-by: Sudeep Holla <sudeep.holla@arm.com> Acked-by: Kevin Hilman <khilman@linaro.org>
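A sketch of the wrapper and of how cpuidle_enter_state() brackets the state entry with it; the single static rq here is a stand-in for the kernel's per-CPU runqueues:

    struct cpuidle_state;
    struct rq { struct cpuidle_state *idle_state; };

    static struct rq runqueue;                 /* stand-in for this CPU's rq */
    static struct rq *this_rq(void) { return &runqueue; }

    static void idle_set_state(struct rq *rq, struct cpuidle_state *s)
    {
            rq->idle_state = s;        /* consulted by the load balancer */
    }

    /* The scheduler-side wrapper: cpuidle never sees this_rq(). */
    void sched_idle_set_state(struct cpuidle_state *idle_state)
    {
            idle_set_state(this_rq(), idle_state);
    }

    /* cpuidle_enter_state() then does, in outline:
     *         sched_idle_set_state(target_state);
     *         ... enter the hardware idle state ...
     *         sched_idle_set_state(NULL);
     */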
-
- 10 May, 2015 1 commit
-
-
Submitted by Rafael J. Wysocki
The kerneldoc comment for cpuidle_enter_state() doesn't match the function's header any more, so fix it. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 05 May, 2015 1 commit
-
-
Submitted by Rafael J. Wysocki
Avoid calling the governor's ->reflect method if the state index passed to cpuidle_reflect() is negative. This allows the analogous check to be dropped from menu_reflect(), so do that too; it also ensures that arbitrary error codes can be passed to cpuidle_reflect() as the index with no adverse consequences. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Reviewed-by: Daniel Lezcano <daniel.lezcano@linaro.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
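The resulting guard is small; a sketch with stand-in types, closely following the shape of the core's cpuidle_reflect():

    struct cpuidle_device;
    struct cpuidle_governor {
            void (*reflect)(struct cpuidle_device *dev, int index);
    };

    static struct cpuidle_governor *cpuidle_curr_governor;

    /* Negative indices are error codes, never real states, so the
     * governor's ->reflect hook is simply skipped for them. */
    void cpuidle_reflect(struct cpuidle_device *dev, int index)
    {
            if (cpuidle_curr_governor->reflect && index >= 0)
                    cpuidle_curr_governor->reflect(dev, index);
    }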
-
- 29 Apr, 2015 1 commit
-
-
Submitted by Rafael J. Wysocki
Commit 335f4919 (sched/idle: Use explicit broadcast oneshot control function) replaced clockevents_notify() invocations in cpuidle_idle_call() with direct calls to tick_broadcast_enter() and tick_broadcast_exit(), but it overlooked the fact that interrupts were already enabled before calling the latter, which led to functional breakage on systems using idle states with the CPUIDLE_FLAG_TIMER_STOP flag set. Fix that by moving the invocations of tick_broadcast_enter() and tick_broadcast_exit() down into cpuidle_enter_state(), where interrupts are still disabled when tick_broadcast_exit() is called. Also ensure that interrupts will be disabled before running tick_broadcast_exit() even if they have been enabled by the idle state's ->enter callback. Trigger a WARN_ON_ONCE() in that case, as we generally don't want that to happen for states with CPUIDLE_FLAG_TIMER_STOP set. Fixes: 335f4919 (sched/idle: Use explicit broadcast oneshot control function) Reported-and-tested-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org> Reported-and-tested-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
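In outline, a userspace sketch of the fixed ordering inside cpuidle_enter_state(); the primitives are stand-ins, and WARN_ON_ONCE is approximated with fprintf:

    #include <stdio.h>
    #include <stdbool.h>

    static bool irqs_off = true;
    static void local_irq_disable(void) { irqs_off = true; }
    static bool irqs_disabled(void) { return irqs_off; }
    static int tick_broadcast_enter(void) { return 0; }
    static void tick_broadcast_exit(void) { }

    #define WARN_ON_ONCE(cond) \
            ((cond) ? (fprintf(stderr, "WARN: %s\n", #cond), 1) : 0)

    /* Sketch: the broadcast handoff happens here, where IRQs are still
     * disabled; if the state's ->enter callback returned with IRQs on,
     * warn once and mask them again before tick_broadcast_exit(). */
    static int enter_state_sketch(bool broadcast, int (*state_enter)(void))
    {
            int ret;

            if (broadcast && tick_broadcast_enter())
                    return -1;              /* -EBUSY in the kernel */

            ret = state_enter();            /* may re-enable interrupts */

            if (broadcast) {
                    if (WARN_ON_ONCE(!irqs_disabled()))
                            local_irq_disable();
                    tick_broadcast_exit();
            }
            return ret;
    }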
-
- 03 Apr, 2015 1 commit
-
-
Submitted by Bartlomiej Zolnierkiewicz
Thomas Schlichter reports the following issue on his Samsung NC20: "[The BIOS provides] the C-states C1 and C2 to the OS when connected to AC, and additionally provides the C3 C-state when disconnected from AC. However, the number of C-states shown in sysfs is fixed to the number of C-states present at boot. If I boot with AC connected, I always only see the C-states up to C2 even if I disconnect AC. The reason is commit 130a5f69 (ACPI / cpuidle: remove dev->state_count setting). It removes the update of dev->state_count, but sysfs uses exactly this variable to show the C-states. The fix is to use drv->state_count in sysfs. As this is currently the last user of dev->state_count, this variable can be completely removed." Remove dev->state_count as per the above. Reported-by: Thomas Schlichter <thomas.schlichter@web.de> Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org> Cc: 3.14+ <stable@vger.kernel.org> # 3.14+ [ rjw: Changelog ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 06 Mar, 2015 1 commit
-
-
Submitted by Rafael J. Wysocki
Commit 38106313 (PM / sleep: Re-implement suspend-to-idle handling) overlooked the fact that entering some sufficiently deep idle states by CPUs may cause their local timers to stop, and in those cases it is necessary to switch over to a broadcast timer prior to entering the idle state. If the cpuidle driver in use does not provide the new ->enter_freeze callback for any of the idle states, that problem affects suspend-to-idle too, but it is not taken into account after the changes made by commit 38106313. Fix that by changing the definition of cpuidle_enter_freeze() and re-arranging the code in cpuidle_idle_call(), so the former does not call cpuidle_enter() any more and the fallback case is handled by cpuidle_idle_call() directly. Fixes: 38106313 (PM / sleep: Re-implement suspend-to-idle handling) Reported-and-tested-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
-
- 01 Mar, 2015 2 commits
-
-
Submitted by Rafael J. Wysocki
Modify cpuidle_enter_freeze() to do the sanity checks done by cpuidle_select() to avoid crashing the suspend-to-idle code path in case something is missing. Fixes: 38106313 (PM / sleep: Re-implement suspend-to-idle handling) Original-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
-
Submitted by Rafael J. Wysocki
Disabling interrupts at the end of cpuidle_enter_freeze() is not useful, because its caller, cpuidle_idle_call(), re-enables them right away after invoking it. To avoid that unnecessary back and forth dance with interrupts, make cpuidle_enter_freeze() enable interrupts after calling enter_freeze_proper() and drop the local_irq_disable() at its end, so that all of the code paths in it end up with interrupts enabled. Then, cpuidle_idle_call() will not need to re-enable interrupts after calling cpuidle_enter_freeze() any more, because the latter will return with interrupts enabled, in analogy with cpuidle_enter(). Reported-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
-
- 16 Feb, 2015 1 commit
-
-
Submitted by Rafael J. Wysocki
The efficiency of suspend-to-idle depends on being able to keep CPUs in the deepest available idle states for as much time as possible. Ideally, they should only be brought out of idle by system wakeup interrupts. However, timer interrupts occurring periodically prevent that from happening, and it is not practical to chase all of the "misbehaving" timers in a whack-a-mole fashion. A much more effective approach is to suspend the local ticks for all CPUs and the entire timekeeping along the lines of what is done during full suspend, which also helps to keep suspend-to-idle and full suspend reasonably similar. The idea is to suspend the local tick on each CPU executing cpuidle_enter_freeze() and to make the last of them suspend the entire timekeeping. That should prevent timer interrupts from triggering until an IO interrupt wakes up one of the CPUs. It needs to be done with interrupts disabled on all of the CPUs, though, because otherwise the suspended clocksource might be accessed by an interrupt handler, which might lead to fatal consequences. Unfortunately, the existing ->enter callbacks provided by cpuidle drivers generally cannot be used for implementing that, because some of them re-enable interrupts temporarily and some idle entry methods cause interrupts to be re-enabled automatically on exit. Also some of these callbacks manipulate local clock event devices of the CPUs, which really shouldn't be done after suspending their ticks. To overcome that difficulty, introduce a new cpuidle state callback, ->enter_freeze, that will be guaranteed (1) to keep interrupts disabled all the time (and return with interrupts disabled) and (2) not to touch the CPU timer devices. Modify cpuidle_enter_freeze() to look for the deepest available idle state with ->enter_freeze present and to make the CPU execute that callback with suspended tick (and the last of the online CPUs to execute it with suspended timekeeping). Suggested-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
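A sketch of the selection side under simplified types: only states that provide ->enter_freeze qualify, and the callback's contract (interrupts stay disabled, timer hardware untouched) is what makes suspending the tick and timekeeping safe.

    struct cpuidle_device;
    struct cpuidle_driver;

    struct cpuidle_state {
            void (*enter_freeze)(struct cpuidle_device *dev,
                                 struct cpuidle_driver *drv, int index);
    };

    /* Sketch: pick the deepest state offering ->enter_freeze; the
     * states table is ordered from shallowest to deepest. */
    static int find_deepest_freeze_state(const struct cpuidle_state *states,
                                         int count)
    {
            int i, deepest = -1;

            for (i = 0; i < count; i++)
                    if (states[i].enter_freeze)
                            deepest = i;
            return deepest;
    }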
-
- 14 Feb, 2015 1 commit
-
-
Submitted by Rafael J. Wysocki
In preparation for adding support for quiescing timers in the final stage of suspend-to-idle transitions, rework the freeze_enter() function making the system wait on a wakeup event, the freeze_wake() function terminating the suspend-to-idle loop, and the mechanism by which deep idle states are entered during suspend-to-idle. First of all, introduce a simple state machine for suspend-to-idle and make the code in question use it. Second, prevent freeze_enter() from losing wakeup events due to race conditions and ensure that the number of online CPUs won't change while it is being executed. In addition to that, make it force all of the CPUs re-enter the idle loop in case they are in idle states already (so they can enter deeper idle states if possible). Next, drop cpuidle_use_deepest_state() and replace use_deepest_state checks in cpuidle_select() and cpuidle_reflect() with a single suspend-to-idle state check in cpuidle_idle_call(). Finally, introduce cpuidle_enter_freeze() that will simply find the deepest idle state available to the given CPU and enter it using cpuidle_enter(). Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
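A sketch of that state machine; the enum and function names mirror the ones this change adds to kernel/power/suspend.c, while the locking and wait-queue details are elided:

    #include <stdbool.h>

    enum freeze_state {
            FREEZE_STATE_NONE,      /* not in suspend-to-idle */
            FREEZE_STATE_ENTER,     /* CPUs should sit in deep idle */
            FREEZE_STATE_WAKE,      /* a wakeup event has fired */
    };

    static enum freeze_state suspend_freeze_state;

    /* Checked from the idle loop while suspend-to-idle is in progress. */
    static bool idle_should_freeze(void)
    {
            return suspend_freeze_state == FREEZE_STATE_ENTER;
    }

    /* Terminates the suspend-to-idle loop; a wakeup arriving after the
     * transition has started is latched here rather than lost. */
    static void freeze_wake(void)
    {
            if (suspend_freeze_state > FREEZE_STATE_NONE)
                    suspend_freeze_state = FREEZE_STATE_WAKE;
    }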
-
- 24 Sep, 2014 1 commit
-
-
Submitted by Daniel Lezcano
When the cpu enters idle, it stores the cpuidle state pointer in its struct rq instance, which in turn could be used to make a better decision when balancing tasks. As soon as the cpu exits its idle state, the struct rq reference is cleared. There are a couple of situations where the idle state pointer could be changed while it is being consulted: 1. For x86/acpi with dynamic c-states, when a laptop switches from battery to AC, that could result in removing the deeper idle state. The acpi driver triggers: acpi_processor_cst_has_changed() -> cpuidle_pause_and_lock() -> cpuidle_uninstall_idle_handler() -> kick_all_cpus_sync(). All cpus will exit their idle state and the pointed object will be set to NULL. 2. The cpuidle driver is unloaded. Logically that could happen, but not in practice, because the drivers are always compiled in and 95% of them are not coded to unregister themselves. In any case, the unloading code must call cpuidle_unregister_device(), which calls cpuidle_pause_and_lock(), leading to kick_all_cpus_sync() as mentioned above. A race can happen if we use the pointer and then one of these two scenarios occurs at the same moment. In order to be safe, the idle state pointer stored in the rq must be used inside an rcu_read_lock section, where we are protected by the rcu_barrier() in the cpuidle_uninstall_idle_handler() function. The idle_get_state() and idle_put_state() accessors should be used to that effect. Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net> Cc: linux-pm@vger.kernel.org Cc: linaro-kernel@lists.linaro.org Cc: Daniel Lezcano <daniel.lezcano@linaro.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: http://lkml.kernel.org/n/tip-@git.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
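A sketch of the consumer side, with userspace stand-ins for the RCU primitives; the point is that the rq's idle state pointer is only dereferenced between rcu_read_lock() and rcu_read_unlock():

    struct cpuidle_state { unsigned int exit_latency; };
    struct rq { struct cpuidle_state *idle_state; };

    /* Stand-ins; in the kernel these are the real RCU primitives. */
    static void rcu_read_lock(void) { }
    static void rcu_read_unlock(void) { }
    static struct cpuidle_state *idle_get_state(struct rq *rq)
    {
            return rq->idle_state;
    }

    /* Sketch: a load-balancing path consulting the idle state must hold
     * the RCU read lock so the pointer cannot go away under it. */
    static unsigned int peek_exit_latency(struct rq *rq)
    {
            struct cpuidle_state *s;
            unsigned int latency = 0;

            rcu_read_lock();
            s = idle_get_state(rq);
            if (s)
                    latency = s->exit_latency;
            rcu_read_unlock();

            return latency;
    }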
-
- 19 Sep, 2014 1 commit
-
-
Submitted by Chuansheng Liu
Currently, kick_all_cpus_sync() and smp_call_function() cannot break a polling idle cpu immediately. Using wake_up_all_idle_cpus() instead, which can wake up a polling idle cpu quickly, is much more helpful for power. Signed-off-by: Chuansheng Liu <chuansheng.liu@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: linux-pm@vger.kernel.org Cc: changcheng.liu@intel.com Cc: xiaoming.wang@intel.com Cc: souvik.k.chakravarty@intel.com Cc: luto@amacapital.net Cc: Daniel Lezcano <daniel.lezcano@linaro.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Rafael J. Wysocki <rjw@rjwysocki.net> Link: http://lkml.kernel.org/r/1409815075-4180-3-git-send-email-chuansheng.liu@intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 09 Jul, 2014 1 commit
-
-
Submitted by Sandeep Tripathy
The idle_exit event is the first event after a core exits an idle state, so it should be traced before the local irq is enabled. Likewise, idle_entry is the last event before a core enters an idle state. This will ease visualising the cpu idle state from kernel traces. Signed-off-by: Sandeep Tripathy <sandeep.tripathy@linaro.org> Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org> [rjw: Subject, rebase] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 07 May, 2014 1 commit
-
-
Submitted by Rafael J. Wysocki
If freeze_enter() is called, we want to bypass the current cpuidle governor and always use the deepest available (that is, not disabled) C-state, because we want to save as much energy as reasonably possible then, and runtime latency constraints don't matter at that point, since the system is in a sleep state anyway. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Tested-by: Aubrey Li <aubrey.li@linux.intel.com>
-
- 01 May, 2014 1 commit
-
-
Submitted by Rafael J. Wysocki
Since both cpuidle_enabled() and cpuidle_select() are only called by cpuidle_idle_call(), it is not really useful to keep them separate, and combining them will help to avoid complicating cpuidle_idle_call() even further if governors are changed to return error codes sometimes. This code modification shouldn't lead to any functional changes. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 12 Mar, 2014 1 commit
-
-
Submitted by Paul Burton
As described by a comment at the end of cpuidle_enter_state_coupled(), it can be inefficient for coupled idle states to return with IRQs enabled, since they may proceed to service an interrupt instead of clearing the coupled idle state. Until they have finished and cleared the idle state, all CPUs coupled with them will spin rather than being able to enter a safe idle state. Commits e1689795 "cpuidle: Add common time keeping and irq enabling" and 554c06ba "cpuidle: remove en_core_tk_irqen flag" led to cpuidle_enter_state() enabling interrupts for all idle states, including coupled ones, making this inefficiency unavoidable by drivers and the local_irq_enable() near the end of cpuidle_enter_state_coupled() redundant. This patch avoids enabling interrupts in cpuidle_enter_state() after a coupled state has been entered, allowing them to remain disabled until all coupled CPUs have exited the idle state and cpuidle_enter_state_coupled() re-enables them. Cc: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Paul Burton <paul.burton@imgtec.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
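A fragment-sized sketch of the change under stand-in types; the kernel tests coupled-ness via its coupled-state helpers, approximated here by a flag check:

    #include <stdbool.h>

    #define CPUIDLE_FLAG_COUPLED 0x02

    struct cpuidle_state { unsigned int flags; };

    static bool irqs_on;
    static void local_irq_enable(void) { irqs_on = true; }

    /* Sketch: keep IRQs masked after a coupled state so the CPU clears
     * the coupled bookkeeping first instead of servicing an interrupt
     * while its partner CPUs spin. */
    static void after_enter_sketch(const struct cpuidle_state *entered)
    {
            if (!(entered->flags & CPUIDLE_FLAG_COUPLED))
                    local_irq_enable();
            /* coupled case: cpuidle_enter_state_coupled() re-enables
             * interrupts once every coupled CPU has exited the state */
    }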
-
- 11 Mar, 2014 2 commits
-
-
Submitted by Daniel Lezcano
cpuidle_idle_call() does nothing more than call the three individual functions and is no longer used by any arch-specific code, only by the cpuidle framework code. We can move this function into the idle task code to ensure better proximity to the scheduler code. Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Cc: rjw@rjwysocki.net Cc: preeti@linux.vnet.ibm.com Link: http://lkml.kernel.org/r/1393832934-11625-2-git-send-email-daniel.lezcano@linaro.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Submitted by Daniel Lezcano
In order to allow better integration between the cpuidle framework and the scheduler, reduce the distance between these two sub-components by moving part of the cpuidle code into the idle task file; because idle.c is in the sched directory, we have access to the scheduler's private structures there. This patch splits the cpuidle_idle_call() main entry function into 3 calls to a newly added API: 1. select the idle state, 2. enter the idle state, 3. reflect the idle state. cpuidle_idle_call() calls these three functions to implement the main idle entry function. Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Cc: rjw@rjwysocki.net Cc: preeti@linux.vnet.ibm.com Link: http://lkml.kernel.org/r/1393832934-11625-1-git-send-email-daniel.lezcano@linaro.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
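A sketch of the resulting three-step flow driven from the idle task, with stub implementations standing in for the real cpuidle entry points:

    struct cpuidle_driver;
    struct cpuidle_device;

    /* Stand-ins for the three new cpuidle API calls. */
    static int cpuidle_select(struct cpuidle_driver *drv,
                              struct cpuidle_device *dev) { return 0; }
    static int cpuidle_enter(struct cpuidle_driver *drv,
                             struct cpuidle_device *dev, int index)
    { return index; }
    static void cpuidle_reflect(struct cpuidle_device *dev, int index) { }
    static void arch_cpu_idle(void) { }

    /* Sketch of the flow the idle task now drives itself. */
    static void idle_loop_body(struct cpuidle_driver *drv,
                               struct cpuidle_device *dev)
    {
            int next_state, entered_state;

            next_state = cpuidle_select(drv, dev);          /* 1. select */
            if (next_state < 0) {
                    arch_cpu_idle();        /* governor gave no state */
                    return;
            }
            entered_state = cpuidle_enter(drv, dev, next_state); /* 2. enter */
            cpuidle_reflect(dev, entered_state);            /* 3. reflect */
    }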
-
- 07 Feb, 2014 1 commit
-
-
Submitted by Preeti U Murthy
Some archs set the CPUIDLE_FLAG_TIMER_STOP flag for idle states in which the local timers stop. cpuidle_idle_call() currently handles such idle states by calling into the broadcast framework so as to wake up CPUs at their next wakeup event. With the hrtimer mode of broadcast, the BROADCAST_ENTER call into the broadcast framework can fail for archs that do not have an external clock device to handle wakeups, and the CPU in question thus has to be made the standby CPU. This patch handles such cases by failing the call into cpuidle so that the arch can take some default action. The arch will certainly not enter a similar idle state, because a failed cpuidle call also implicitly indicates that the broadcast framework has not registered this CPU to be woken up. Hence we are safe if we fail the cpuidle call. In the process, move the functions that trace idle statistics to just before and after the entry into and exit from idle states, respectively. In other scenarios where the call to cpuidle fails, we end up not tracing idle entry and exit, since a decision on an idle state could not be taken. Similarly, when the call to the broadcast framework fails, we skip tracing idle statistics because we are in no position to take a decision on an alternative idle state to enter. Signed-off-by: Preeti U Murthy <preeti@linux.vnet.ibm.com> Cc: deepthi@linux.vnet.ibm.com Cc: paulmck@linux.vnet.ibm.com Cc: fweisbec@gmail.com Cc: paulus@samba.org Cc: srivatsa.bhat@linux.vnet.ibm.com Cc: svaidy@linux.vnet.ibm.com Cc: peterz@infradead.org Cc: benh@kernel.crashing.org Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Cc: linuxppc-dev@lists.ozlabs.org Link: http://lkml.kernel.org/r/20140207080652.17187.66344.stgit@preeti.in.ibm.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 04 Dec, 2013 1 commit
-
-
Submitted by Konrad Rzeszutek Wilk
If not, we could end up in the unfortunate situation where we dereference a NULL pointer because we have cpuidle disabled. This is the case when booting under Xen (which uses the ACPI P/C states but disables the CPU idle driver) - and can be easily reproduced when booting with cpuidle.off=1. BUG: unable to handle kernel NULL pointer dereference at (null) IP: [<ffffffff8156db4a>] cpuidle_unregister_device+0x2a/0x90 .. snip.. Call Trace: [<ffffffff813b15b4>] acpi_processor_power_exit+0x3c/0x5c [<ffffffff813af0a9>] acpi_processor_stop+0x61/0xb6 [<ffffffff814215bf>] __device_release_driver+... [<ffffffff81421653>] device_release_driver+0x23/0x30 [<ffffffff81420ed8>] bus_remove_device+0x108/0x180 [<ffffffff8141d9d9>] device_del+0x129/0x1c0 [<ffffffff813cb4b0>] ? unregister_xenbus_watch+0x1f0/0x1f0 [<ffffffff8141da8e>] device_unregister+0x1e/0x60 [<ffffffff814243e9>] unregister_cpu+0x39/0x60 [<ffffffff81019e03>] arch_unregister_cpu+0x23/0x30 [<ffffffff813c3c51>] handle_vcpu_hotplug_event+0xc1/0xe0 [<ffffffff813cb4f5>] xenwatch_thread+0x45/0x120 [<ffffffff810af010>] ? abort_exclusive_wait+0xb0/0xb0 [<ffffffff8108ec42>] kthread+0xd2/0xf0 [<ffffffff8108eb70>] ? kthread_create_on_node+0x180/0x180 [<ffffffff816ce17c>] ret_from_fork+0x7c/0xb0 [<ffffffff8108eb70>] ? kthread_create_on_node+0x180/0x180 This problem also appears in 3.12 and could be a candidate for backport. Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: All applicable <stable@vger.kernel.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
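A sketch of the guard; the 'registered' field is modeled after the per-device flag the cpuidle core keeps, and the actual teardown body is elided:

    struct cpuidle_device { int registered; };

    /* Sketch: with cpuidle disabled (Xen, or cpuidle.off=1) the device
     * was never registered, so teardown must be a no-op instead of
     * dereferencing uninitialized state. */
    void cpuidle_unregister_device(struct cpuidle_device *dev)
    {
            if (!dev || dev->registered == 0)
                    return;

            /* ... real teardown: disable the device, remove sysfs ... */
    }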
-
- 30 Oct, 2013 7 commits
-
-
Submitted by Viresh Kumar
poll_idle_init() just initializes drv->states[0], so it only needs to be done once for each driver. Currently, it is called from cpuidle_enable_device(), which is called for every CPU that the driver supports. That is not required, so move it to a better place and call it from __cpuidle_register_driver() so that the initialization is carried out only once. Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Submitted by Viresh Kumar
A few statements in cpuidle_idle_call() are broken into multiple lines, although that isn't really necessary. Convert them to single lines. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Submitted by Viresh Kumar
We are doing this twice in the cpuidle_idle_call() routine: drv->states[next_state].flags & CPUIDLE_FLAG_TIMER_STOP. It would be better to store this in a local variable and use that. This reduces code duplication and likely makes this piece of code run faster (in case the compiler wasn't able to optimize it earlier). [rjw: Cast the result of bitwise AND to bool explicitly using !!] Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Submitted by Viresh Kumar
Two checks in cpuidle_idle_call() cause the same error code to be returned if they fail, so merge them for clarity. [rjw: Changelog] Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Submitted by Viresh Kumar
This patch rearranges __cpuidle_register_device() a bit in order to reduce the number of exit points in that function. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Submitted by Viresh Kumar
The only value returned by __cpuidle_device_init() is 0, so it may very well be a void function. Make that happen. [rjw: Changelog] Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Submitted by Viresh Kumar
Some comments in the cpuidle core files contain trivial mistakes. This patch fixes them. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 15 Jul, 2013 4 commits
-
-
Submitted by Daniel Lezcano
Make __cpuidle_register_device() check whether or not the device has been registered already and return -EBUSY immediately if that's the case. [rjw: Changelog] Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Submitted by Daniel Lezcano
Add __cpuidle_device_init() for initializing the cpuidle_device structure. [rjw: Changelog] Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Submitted by Daniel Lezcano
To reduce code duplication related to the unregistration of cpuidle devices, introduce __cpuidle_unregister_device() and move all of the unregistration code to that function. [rjw: Changelog] Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Submitted by Daniel Lezcano
We previously changed the ordering of the cpuidle framework initialization so that the governors are registered before the drivers, which can register their devices right from the start. Now, we can safely remove the __cpuidle_register_device() call hack in cpuidle_enable_device() and check if the driver has been registered before enabling it. Then, cpuidle_register_device() can consistently check the cpuidle_enable_device() return value when enabling the device. [rjw: Changelog] Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 11 Jun, 2013 1 commit
-
-
Submitted by Daniel Lezcano
Commit bf4d1b5d (cpuidle: support multiple drivers) introduced support for using multiple cpuidle drivers at the same time. It added a couple of new APIs to register the driver per CPU, but that led to some unnecessary code complexity related to the kernel config options deciding whether or not the multiple driver support is enabled. The code has to work as it did before when the multiple driver support is not enabled, and the multiple driver support has to be compatible with the previously existing API. Remove the new API, not used by any driver in the tree yet (but needed for the HMP cpuidle drivers that will be submitted soon), and add a new cpumask pointer to the cpuidle driver structure that will point to the mask of CPUs handled by the given driver. That will allow the cpuidle_[un]register_driver() API to be used for the multiple driver support along with the cpuidle_[un]register() functions added recently. [rjw: Changelog] Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 24 Apr, 2013 1 commit
-
-
Submitted by Daniel Lezcano
Fix the comment format for the kernel doc script. Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 23 Apr, 2013 2 commits
-
-
Submitted by Daniel Lezcano
The usual scheme to initialize a cpuidle driver on an SMP system is:

    cpuidle_register_driver(drv);

    for_each_possible_cpu(cpu) {
            device = &per_cpu(cpuidle_dev, cpu);
            cpuidle_register_device(device);
    }

This code is duplicated in each cpuidle driver. On UP systems, it is done this way:

    cpuidle_register_driver(drv);
    device = &per_cpu(cpuidle_dev, cpu);
    cpuidle_register_device(device);

On UP, the macro 'for_each_cpu' does one iteration:

    #define for_each_cpu(cpu, mask) \
            for ((cpu) = 0; (cpu) < 1; (cpu)++, (void)mask)

Hence, the initialization loop is the same for UP as for SMP. Besides, we saw various bugs, mis-initializations, and unchecked return codes in the different drivers; the code is duplicated, bugs included. After fixing all of those, it appears the initialization pattern is the same for everyone. Please note, some drivers are doing dev->state_count = drv->state_count. This is not necessary, because it is done by the cpuidle_enable_device() function in the cpuidle framework. That holds as long as all the devices use the same set of states; otherwise, the 'low level' API should be used instead, with driver-specific initialization. Let's add a wrapper function doing this initialization, with a cpumask parameter for the coupled idle states, and use it in all the drivers. That will save a lot of LOC, consolidate the code, and allow future modifications to be done in a single place. Another benefit is the consolidation of the cpuidle_device variable, which now lives in the cpuidle framework and is no longer spread across the different arch-specific drivers. Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Submitted by Daniel Lezcano
The en_core_tk_irqen flag is set in all the cpuidle drivers, which means it is not necessary to specify this flag. Remove the flag and the code related to it. Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Acked-by: Kevin Hilman <khilman@linaro.org> # for mach-omap2/* Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 01 Apr, 2013 1 commit
-
-
Submitted by Daniel Lezcano
When a cpu enters a deep idle state, the local timers are stopped and the time framework falls back to the timer device used as a broadcast timer. The different cpuidle drivers are calling clockevents_notify(ENTER/EXIT) when the idle state stops the local timer. Add a new flag, CPUIDLE_FLAG_TIMER_STOP, which can be set by the cpuidle drivers. If the flag is set, the cpuidle core code takes care of the notification on behalf of the driver, to avoid pointless code duplication. Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
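A sketch of what the core now does on every driver's behalf; the CLOCK_EVT_NOTIFY_* names reflect the clockevents API of that era (later superseded by tick_broadcast_enter()/tick_broadcast_exit(), as the 2015 commits above show), and the stubs are userspace stand-ins:

    #include <stdbool.h>

    #define CPUIDLE_FLAG_TIMER_STOP 0x01  /* local timer stops in state */

    struct cpuidle_state { unsigned int flags; };

    enum { CLOCK_EVT_NOTIFY_BROADCAST_ENTER,
           CLOCK_EVT_NOTIFY_BROADCAST_EXIT };
    static void clockevents_notify(unsigned long reason, void *arg) { }

    /* Sketch: the core brackets the state entry with the broadcast
     * notifications whenever the driver set CPUIDLE_FLAG_TIMER_STOP,
     * so individual drivers no longer carry this boilerplate. */
    static void enter_with_broadcast(const struct cpuidle_state *s, int cpu,
                                     void (*do_enter)(void))
    {
            bool broadcast = !!(s->flags & CPUIDLE_FLAG_TIMER_STOP);

            if (broadcast)
                    clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_ENTER,
                                       &cpu);
            do_enter();

            if (broadcast)
                    clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_EXIT,
                                       &cpu);
    }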
-