- 22 January 2016, 1 commit
-
-
Committed by Sudeep Holla
Commit 51164251 ("sched / idle: Drop default_idle_call() fallback from call_cpuidle()") made find_deepest_state() return a non-negative value and check all the states with index > 0. As a result, find_deepest_state() returns 0 even when the enter_freeze callbacks are not implemented, and enter_freeze_proper() is then called, which ends up crashing the kernel. This patch updates the check for index > 0 in cpuidle_enter_freeze() and cpuidle_idle_call() (when idle_should_freeze() is true) to restore the suspend-to-idle functionality in the absence of an enter_freeze callback.

Fixes: 51164251 ("sched / idle: Drop default_idle_call() fallback from call_cpuidle()")
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 19 January 2016, 2 commits
-
-
Committed by Rafael J. Wysocki
If menu_select() cannot find a suitable state to return, it will return the state index stored in data->last_state_idx. This means that it is pointless to look at the states whose indices are less than or equal to data->last_state_idx in the main loop, so don't do that. Given that those checks are done on every idle state selection, this change can save quite a bit of completely unnecessary overhead.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Sudeep Holla <sudeep.holla@arm.com>
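For illustration, a minimal sketch of the idea behind this change, with the real menu-governor structures and bookkeeping replaced by simplified stand-ins: anything at or below last_state_idx would only reproduce the fallback result, so the scan can begin one entry above it.

```c
/* Illustrative only: simplified stand-ins for the real menu governor data. */
struct sketch_state {
	unsigned int	exit_latency;		/* worst-case exit latency (us) */
	unsigned int	target_residency;	/* break-even residency (us) */
	int		disabled;
};

static int pick_state(const struct sketch_state *states, int nstates,
		      int last_state_idx, unsigned int predicted_us,
		      unsigned int latency_req)
{
	int i, idx = last_state_idx;	/* fallback: the previously chosen state */

	/*
	 * States at or below last_state_idx could only reproduce the
	 * fallback value, so start the scan right above it.
	 */
	for (i = last_state_idx + 1; i < nstates; i++) {
		if (states[i].disabled)
			continue;
		if (states[i].target_residency > predicted_us)
			break;
		if (states[i].exit_latency > latency_req)
			break;
		idx = i;
	}

	return idx;
}
```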
-
Committed by Rafael J. Wysocki
After commit 9c4b2867 (cpuidle: menu: Fix menu_select() for CPUIDLE_DRIVER_STATE_START == 0) it is clear that menu_select() cannot return negative values. Moreover, ladder_select_state() will never return a negative value either, so make find_deepest_state() return non-negative values too and drop the default_idle_call() fallback from call_cpuidle(). This eliminates one branch from the idle loop and makes the governors and find_deepest_state() handle the case when all states have been disabled from sysfs consistently.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Sudeep Holla <sudeep.holla@arm.com>
-
- 16 January 2016, 2 commits
-
-
Committed by Jean Delvare
Since commit d6f346f2 (cpuidle: improve governor Kconfig options) the best cpuidle governor is selected automatically. There is little point in additionally selecting the other one as it will not be used, so don't select both governors by default. Being able to select more than one governor is still good for developers booting with cpuidle_sysfs_switch, though. This fixes the second half of kernel BZ #65531.

Link: https://bugzilla.kernel.org/show_bug.cgi?id=65531
Signed-off-by: Jean Delvare <jdelvare@suse.de>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Jean Delvare
The menu governor is currently the default on all systems. However, the documentation claims that the ladder governor is preferred on ticking systems. So bump the rating of the ladder governor when NO_HZ is disabled, or when booting with nohz=off. This fixes the first half of kernel BZ #65531.

Link: https://bugzilla.kernel.org/show_bug.cgi?id=65531
Signed-off-by: Jean Delvare <jdelvare@suse.de>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 15 January 2016, 1 commit
-
-
Committed by Rafael J. Wysocki
Commit a9ceb78b (cpuidle,menu: use interactivity_req to disable polling) exposed a bug in menu_select() causing it to return -1 on systems with CPUIDLE_DRIVER_STATE_START equal to zero, although it should have returned 0. As a result, idle states are not entered by CPUs on those systems. Namely, on the systems in question data->last_state_idx is initially equal to -1, and the above commit modified the condition that would have caused it to be changed to 0 to be less likely to trigger, which exposed the problem. However, setting data->last_state_idx initially to -1 doesn't make sense at all, and on the affected systems it should always be set to CPUIDLE_DRIVER_STATE_START (i.e. 0) unconditionally, so make that happen.

Fixes: a9ceb78b (cpuidle,menu: use interactivity_req to disable polling)
Reported-and-tested-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 15 December 2015, 3 commits
-
-
Committed by Paul Gortmaker
The Kconfig currently controlling compilation of this code is:

cpuidle/Kconfig.arm:config ARM_EXYNOS_CPUIDLE
cpuidle/Kconfig.arm:    bool "Cpu Idle Driver for the Exynos processors"

...meaning that it currently is not being built as a module by anyone. Let's remove the couple of traces of modularity so that when reading the driver there is no doubt it is builtin-only. Since module_platform_driver() uses the same init level priority as builtin_platform_driver(), the init ordering remains unchanged with this commit.

Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Paul Gortmaker
The Kconfig currently controlling compilation of this code is:

cpuidle/Kconfig.arm:config ARM_U8500_CPUIDLE
cpuidle/Kconfig.arm:    bool "Cpu Idle Driver for the ST-E u8500 processors"

...meaning that it currently is not being built as a module by anyone. Let's remove the couple of traces of modularity so that when reading the driver there is no doubt it is builtin-only. Since module_platform_driver() uses the same init level priority as builtin_platform_driver(), the init ordering remains unchanged with this commit.

Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Paul Gortmaker
The Kconfig currently controlling compilation of this code is:

drivers/cpuidle/Kconfig.arm:config ARM_CLPS711X_CPUIDLE
drivers/cpuidle/Kconfig.arm:    bool "CPU Idle Driver for CLPS711X processors"

...meaning that it currently is not being built as a module by anyone. Let's remove the modular code that is essentially orphaned, so that when reading the driver there is no doubt it is builtin-only. Since module_platform_driver() uses the same init level priority as builtin_platform_driver(), the init ordering remains unchanged with this commit. Also note that MODULE_DEVICE_TABLE is a no-op for non-modular code. We also delete the MODULE_LICENSE tag etc., since all that information is already contained at the top of the file in the comments.

Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
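The general shape of such a conversion, sketched with a placeholder driver (the name and probe body below are invented for illustration; the actual Exynos/U8500/CLPS711X drivers differ):

```c
#include <linux/platform_device.h>

/* Placeholder probe: the real drivers register their cpuidle driver here. */
static int example_cpuidle_probe(struct platform_device *pdev)
{
	return 0;
}

static struct platform_driver example_cpuidle_driver = {
	.probe = example_cpuidle_probe,
	.driver = {
		.name = "example-cpuidle",	/* placeholder name */
	},
};

/*
 * Was: module_platform_driver(example_cpuidle_driver);
 * The builtin-only helper registers at the same init level, so ordering is
 * unchanged, and the MODULE_* boilerplate can simply be dropped.
 */
builtin_platform_driver(example_cpuidle_driver);
```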
-
- 17 November 2015, 3 commits
-
-
Committed by Rik van Riel
The cpuidle state tables contain the maximum exit latency for each cpuidle state. On x86, that is the exit latency for when the entire package goes into that same idle state. However, a lot of the time we only go into the core idle state, not the package idle state, which means we see a much smaller exit latency. We have no way to detect whether we went into the core or the package idle state while idle, and that is OK. However, the current menu_update() logic does have the potential to trip up the repeating-pattern detection in get_typical_interval(). If the observed sleep intervals are near the idle state's exit latency, some of the samples will have exit_us subtracted while others will not, which turns a repeating pattern into mush and can break get_typical_interval(). Furthermore, for smaller sleep intervals, we know the chance that all the cores in the package went into the same idle state is fairly small. Dividing measured_us by two, instead of subtracting the full exit latency when hitting a small measured_us, will reduce the error.

Signed-off-by: Rik van Riel <riel@redhat.com>
Acked-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
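A sketch of the corrected accounting described above (simplified and standalone; the surrounding menu_update() logic and unit conversions are omitted):

```c
/* Simplified stand-in for the post-wakeup correction of the measured time. */
static unsigned int correct_measured_us(unsigned int measured_us,
					unsigned int exit_latency_us)
{
	/*
	 * When the measured interval is clearly longer than the exit latency,
	 * subtracting it is safe.  For short intervals it is unclear whether
	 * the full (package-level) exit latency was actually paid, so halving
	 * the measurement introduces less error than subtracting it outright.
	 */
	if (measured_us > 2 * exit_latency_us)
		measured_us -= exit_latency_us;
	else
		measured_us /= 2;

	return measured_us;
}
```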
-
Committed by Rik van Riel
The menu governor carefully figures out how much time we typically sleep for an estimated sleep interval, or whether there is a repeating pattern going on, and corrects that estimate for the CPU load. Then it proceeds to ignore that information when determining whether or not to consider polling. This is not a big deal on most x86 CPUs, which have very low C1 latencies, and the patch should not have any effect on those CPUs. However, certain CPUs (e.g. Atom) have much higher C1 latencies, and it would be good not to waste performance and power on those CPUs if we are expecting a very low wakeup latency. Disable polling based on the estimated interactivity requirement, not on the time to the next timer interrupt.

Signed-off-by: Rik van Riel <riel@redhat.com>
Acked-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Rik van Riel
The cpuidle menu governor has a forced cut-off for polling at 5 us, in order to deal with firmware that gives the OS bad information on cpuidle states, which can lead to the system spending way too much time polling. However, at least one x86 CPU family (Atom) has chips with a 20 us break-even point for C1. Forcing the polling cut-off below that wastes performance and power. Increase the polling cut-off to 20 us. Systems with a lower C1 latency will be found in the states table by the menu governor, which will pick those states as appropriate.

Signed-off-by: Rik van Riel <riel@redhat.com>
Acked-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
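The effect can be pictured with a small sketch (illustrative only; the constant name and helper below are not the actual menu governor code, where this decision is folded into menu_select()):

```c
#define POLLING_CUTOFF_US	20	/* raised from 5 by this change */

/*
 * Poll (busy-wait in C0) only for expected idle periods below the cut-off;
 * anything longer should come from the driver's C-state table instead.
 */
static int polling_worthwhile(unsigned int expected_interval_us)
{
	return expected_interval_us < POLLING_CUTOFF_US;
}
```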
-
- 23 October 2015, 2 commits
-
-
Committed by Russell King
As the driver supports neither unbinding nor arbitrary binding of devices, disable the bind/unbind attributes for this driver. Also, as the driver has no remove function, it can never be modular, so use builtin_platform_driver() to avoid the module exit boilerplate.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
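Roughly what that looks like for a platform driver (a sketch with placeholder names; the actual driver's fields are more involved):

```c
#include <linux/platform_device.h>

static int example_probe(struct platform_device *pdev)
{
	return 0;	/* placeholder: the real probe sets up the cpuidle driver */
}

static struct platform_driver example_driver = {
	.probe = example_probe,
	/* no .remove: the device is never unbound */
	.driver = {
		.name = "example-cpuidle",	/* placeholder name */
		.suppress_bind_attrs = true,	/* hide bind/unbind in sysfs */
	},
};
builtin_platform_driver(example_driver);	/* builtin-only, no module exit code */
```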
-
Committed by Russell King
There's no need to use multiple platform drivers, especially when we want to do something different in the probe but still use a common probe function. We can use the platform ID system to register only one platform driver, have it match several devices, and get the CPU idle driver via the ID's driver_data.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
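A sketch of that pattern with hypothetical device names and per-ID data (not the actual driver's tables):

```c
#include <linux/platform_device.h>

/* Hypothetical per-SoC idle descriptions selected through driver_data. */
struct soc_idle_desc { const char *name; };

static const struct soc_idle_desc soc_a_idle = { .name = "soc-a-idle" };
static const struct soc_idle_desc soc_b_idle = { .name = "soc-b-idle" };

static const struct platform_device_id example_idle_ids[] = {
	{ .name = "soc-a-cpuidle", .driver_data = (kernel_ulong_t)&soc_a_idle },
	{ .name = "soc-b-cpuidle", .driver_data = (kernel_ulong_t)&soc_b_idle },
	{ /* sentinel */ }
};

static int example_idle_probe(struct platform_device *pdev)
{
	const struct soc_idle_desc *desc =
		(const void *)platform_get_device_id(pdev)->driver_data;

	/* One driver, one probe function; behaviour is picked per matched ID. */
	dev_info(&pdev->dev, "registering cpuidle driver %s\n", desc->name);
	return 0;
}

static struct platform_driver example_idle_driver = {
	.probe = example_idle_probe,
	.id_table = example_idle_ids,
	.driver = {
		.name = "example-cpuidle",
	},
};
builtin_platform_driver(example_idle_driver);
```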
-
- 03 September 2015, 1 commit
-
-
Committed by Xunlei Pang
Since we are using cpuidle_driver::safe_state_index directly as the target state index, it is better to add the sanity check at the point of registering the driver.

Signed-off-by: Xunlei Pang <pang.xunlei@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 28 August 2015, 2 commits
-
-
Committed by Xunlei Pang
For cpuidle_state_is_coupled(), 'dev' is not used, so remove it.

Signed-off-by: Xunlei Pang <pang.xunlei@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Xunlei Pang
cpuidle_device::safe_state_index needs to be initialized before use and should be the same as cpuidle_driver::safe_state_index. Tackle this by removing safe_state_index from the cpuidle_device structure and using the one in the cpuidle_driver structure instead.

Suggested-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Xunlei Pang <pang.xunlei@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 03 August 2015, 1 commit
-
-
Committed by Mark Rutland
Now that the common PSCI client code has been factored out to drivers/firmware and made safe for 32-bit use, move the 32-bit ARM code over to it. This results in a moderate reduction of duplicated lines and will prevent further duplication as the PSCI client code is updated for PSCI 1.0 and beyond. The two legacy platform users of the PSCI invocation code are updated to account for interface changes. In both cases the power state parameter (which is constant) is now generated using macros, so that the pack/unpack logic can be killed in preparation for PSCI 1.0 power state changes.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Rob Herring <robh@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Ashwin Chaugule <ashwin.chaugule@linaro.org>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 21 July 2015, 1 commit
-
-
Committed by Lucas Stach
Make sure to stop tracing only once we are past the point where all latency tracing events have been processed (IRQs are not enabled again). This has the slight advantage of capturing more latency-related events in the idle path, but most importantly it makes sure that latency tracing doesn't get re-enabled inadvertently when new events come in. This makes the irqsoff latency tracer useful again, as we stop capturing CPU sleep time as IRQ latency.

Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kernel@pengutronix.de
Cc: patchwork-lst@pengutronix.de
Link: http://lkml.kernel.org/r/1437410090-3747-1-git-send-email-l.stach@pengutronix.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 10 July 2015, 1 commit
-
-
Committed by Rafael J. Wysocki
Put tick_freeze() under RCU_NONIDLE() to prevent RCU from complaining about suspicious RCU usage in idle caused by trace_suspend_resume() called from there. While at it, fix a comment related to another usage of RCU_NONIDLE() in enter_freeze_proper().

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
- 26 June 2015, 1 commit
-
-
Committed by preeti
On some architectures, the local clockevent device stops in deep cpuidle states. The broadcast framework is used to wake up CPUs in these idle states: either an external clockevent device is used to send wakeup IPIs, or the hrtimer broadcast framework kicks in in the absence of such a device. One CPU is nominated as the broadcast CPU and sends wakeup IPIs to sleeping CPUs at the appropriate time. This is the implementation in the oneshot mode of broadcast.

In the periodic mode of broadcast, however, the presence of such cpuidle states results in the cpuidle driver calling tick_broadcast_enable(), which shuts down the local clockevent devices of all the CPUs and appoints the tick broadcast device as the clockevent device for each of them. This works on those architectures where the tick broadcast device is a real clockevent device, but on architectures which depend on the hrtimer mode of broadcast the tick broadcast device happens to be a pseudo device. The consequence is that the local clockevent devices of all CPUs are shut down and the kernel hangs at boot time in periodic mode.

Let us thus not register the cpuidle states which have the CPUIDLE_FLAG_TIMER_STOP flag set on architectures which depend on the hrtimer mode of broadcast in periodic mode. This patch takes care of doing this on powerpc. The CPUs would not have entered such deep cpuidle states in periodic mode on powerpc anyway, so there is no loss here.

Signed-off-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Cc: 3.19+ <stable@vger.kernel.org> # 3.19+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 22 June 2015, 1 commit
-
-
Committed by Shilpasri G Bhat
Idle CPUs which stay in snooze for a long period can degrade the performance of their sibling CPUs. If a CPU stays in snooze for longer than the target residency of the next available idle state, exit snooze. This gives the cpuidle governor a chance to re-evaluate the last idle state of the CPU and promote it to a deeper idle state.

Signed-off-by: Shilpasri G Bhat <shilpa.bhat@linux.vnet.ibm.com>
Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
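The idea, sketched as standalone C with userspace stand-ins for the powerpc-specific primitives (timebase reads, need_resched(), thread-priority hints) that the real driver uses:

```c
#include <stdbool.h>
#include <stdint.h>
#include <time.h>

static uint64_t now_us(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000u + (uint64_t)ts.tv_nsec / 1000u;
}

static bool work_pending(void) { return false; }	/* placeholder for need_resched() */
static void relax(void) { }				/* placeholder low-priority spin hint */

static void snooze_loop(uint64_t next_state_target_residency_us)
{
	uint64_t snooze_exit = now_us() + next_state_target_residency_us;

	while (!work_pending()) {
		relax();
		/*
		 * Snoozing past the next state's target residency only hurts
		 * the sibling threads; exit so the governor can re-evaluate
		 * and promote this CPU to a deeper idle state.
		 */
		if (now_us() > snooze_exit)
			break;
	}
}
```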
-
- 17 June 2015, 1 commit
-
-
Committed by Paul Gortmaker
All these drivers are configured with Kconfig options that are declared as bool, so it is not possible for the code to be built as modular. However, the code is currently using the module_platform_driver() macro for driver registration. While this currently works, we really don't want to be including the module.h header in non-modular code, which we'll be forced to do, pending some upcoming code relocation from init.h into module.h. So we fix it now by using the non-modular equivalent.

Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Michal Simek <michal.simek@xilinx.com>
Cc: linux-pm@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
-
- 30 May 2015, 1 commit
-
-
Committed by Rafael J. Wysocki
The CPUIDLE_DRIVER_STATE_START symbol is defined as 1 only if CONFIG_ARCH_HAS_CPU_RELAX is set; otherwise it is defined as 0. However, if CONFIG_ARCH_HAS_CPU_RELAX is set, the first (index 0) entry in the cpuidle driver's table of states is overwritten with the default "poll" entry by the core. The "state" defined by the "poll" entry doesn't provide ->enter_dead and ->enter_freeze callbacks and its exit_latency is 0. For this reason, it is not necessary to use CPUIDLE_DRIVER_STATE_START in cpuidle_play_dead() (->enter_dead is NULL, so the "poll state" will be skipped by the loop). It also is arguably pointless to return states with exit_latency equal to 0 from find_deepest_state(), so the function can be modified to start its loop from index 0, and the "poll state" will then be skipped as a result of the check against latency_req.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
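A simplified rendering of the resulting selection loop (per-device disabling and the suspend-to-idle variant handled by the real find_deepest_state() are left out):

```c
struct idle_state_sketch {
	unsigned int	exit_latency;
	int		disabled;
};

static int find_deepest(const struct idle_state_sketch *states, int nstates)
{
	unsigned int latency_req = 0;
	int i, ret = -1;	/* the real code returns an error code here */

	/*
	 * Starting at index 0 is fine: the "poll" entry has exit_latency == 0,
	 * so the "exit_latency <= latency_req" check skips it automatically
	 * and no CPUIDLE_DRIVER_STATE_START offset is needed.
	 */
	for (i = 0; i < nstates; i++) {
		if (states[i].disabled || states[i].exit_latency <= latency_req)
			continue;
		latency_req = states[i].exit_latency;
		ret = i;
	}

	return ret;
}
```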
-
- 19 May 2015, 1 commit
-
-
Committed by Rafael J. Wysocki
Since idle_should_freeze() is defined to always return 'false' when CONFIG_SUSPEND is unset, all of the code depending on it in cpuidle_idle_call() is unnecessary in that case. Make that code depend on CONFIG_SUSPEND too, to avoid building it when it is not going to be used.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
-
- 15 May 2015, 3 commits
-
-
Committed by Rafael J. Wysocki
If tick_broadcast_enter() fails in cpuidle_enter_state(), try to find another idle state to enter instead of invoking default_idle_call() immediately and returning -EBUSY. This should increase the chances of saving some energy in those cases.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Tested-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Tested-by: Sudeep Holla <sudeep.holla@arm.com>
Acked-by: Kevin Hilman <khilman@linaro.org>
-
Committed by Rafael J. Wysocki
The check of the cpuidle_enter() return value against -EBUSY made in call_cpuidle() will no longer be necessary if cpuidle_enter_state() calls default_idle_call() directly when it is about to return -EBUSY, so make that happen and eliminate the check.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Tested-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Tested-by: Sudeep Holla <sudeep.holla@arm.com>
Acked-by: Kevin Hilman <khilman@linaro.org>
-
Committed by Rafael J. Wysocki
Introduce a wrapper function around idle_set_state() called sched_idle_set_state() that passes this_rq() to it as the first argument, and make cpuidle_enter_state() call the new function before and after entering the target state. At the same time, remove the direct invocations of idle_set_state() from call_cpuidle(). This will allow the invocation of default_idle_call() to be moved from call_cpuidle() to cpuidle_enter_state() safely, and call_cpuidle() to be simplified a bit as a result.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Tested-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Tested-by: Sudeep Holla <sudeep.holla@arm.com>
Acked-by: Kevin Hilman <khilman@linaro.org>
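Condensed, this looks roughly like the following (a sketch only: error handling, time accounting, and the rest of cpuidle_enter_state() are omitted; idle_set_state() and this_rq() are scheduler-internal helpers):

```c
/* kernel/sched/idle.c side: the scheduler owns this_rq(). */
void sched_idle_set_state(struct cpuidle_state *idle_state)
{
	idle_set_state(this_rq(), idle_state);
}

/* drivers/cpuidle/cpuidle.c side: bracket the actual state entry. */
static int cpuidle_enter_state_sketch(struct cpuidle_device *dev,
				      struct cpuidle_driver *drv, int index)
{
	struct cpuidle_state *target_state = &drv->states[index];
	int entered_state;

	sched_idle_set_state(target_state);	/* tell the scheduler what we enter */
	entered_state = target_state->enter(dev, drv, index);
	sched_idle_set_state(NULL);		/* the idle state has been left */

	return entered_state;
}
```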
-
- 10 May 2015, 1 commit
-
-
Committed by Rafael J. Wysocki
The kerneldoc comment for cpuidle_enter_state() doesn't match the function's header any more, so fix it.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 06 May 2015, 1 commit
-
-
Committed by Nicolas Pitre
This is currently unused. If a suspend must be limited to CPU level only, by preventing the last man from triggering a cluster-level suspend, then this should be determined according to many other criteria the MCPM layer is currently not aware of. It is unlikely that mcpm_cpu_suspend() would be the proper conduit for that information anyway.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Dave Martin <Dave.Martin@arm.com>
-
- 05 May 2015, 1 commit
-
-
Committed by Rafael J. Wysocki
Avoid calling the governor's ->reflect method if the state index passed to cpuidle_reflect() is negative. This allows the analogous check to be dropped from menu_reflect(), so do that too, and ensures that arbitrary error codes can be passed to cpuidle_reflect() as the index with no adverse consequences.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
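The resulting guard is small; a sketch of its shape (the governor pointer is passed explicitly here, whereas the real code uses the module-internal current-governor variable):

```c
#include <linux/cpuidle.h>

static void reflect_sketch(struct cpuidle_governor *gov,
			   struct cpuidle_device *dev, int index)
{
	/*
	 * A negative index is an error code from the idle path, not a state
	 * index, so never hand it to the governor.
	 */
	if (gov->reflect && index >= 0)
		gov->reflect(dev, index);
}
```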
-
- 29 April 2015, 1 commit
-
-
Committed by Rafael J. Wysocki
Commit 335f4919 (sched/idle: Use explicit broadcast oneshot control function) replaced the clockevents_notify() invocations in cpuidle_idle_call() with direct calls to tick_broadcast_enter() and tick_broadcast_exit(), but it overlooked the fact that interrupts were already enabled before calling the latter, which led to functional breakage on systems using idle states with the CPUIDLE_FLAG_TIMER_STOP flag set.

Fix that by moving the invocations of tick_broadcast_enter() and tick_broadcast_exit() down into cpuidle_enter_state(), where interrupts are still disabled when tick_broadcast_exit() is called. Also ensure that interrupts will be disabled before running tick_broadcast_exit() even if they have been enabled by the idle state's ->enter callback. Trigger a WARN_ON_ONCE() in that case, as we generally don't want that to happen for states with CPUIDLE_FLAG_TIMER_STOP set.

Fixes: 335f4919 (sched/idle: Use explicit broadcast oneshot control function)
Reported-and-tested-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Reported-and-tested-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
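A condensed sketch of the resulting flow in cpuidle_enter_state() (bookkeeping, statistics, and the other error paths of the real function are omitted):

```c
#include <linux/cpuidle.h>
#include <linux/tick.h>
#include <linux/bug.h>
#include <linux/irqflags.h>
#include <linux/errno.h>

static int enter_state_broadcast_sketch(struct cpuidle_driver *drv,
					struct cpuidle_device *dev, int index)
{
	struct cpuidle_state *target = &drv->states[index];
	bool broadcast = !!(target->flags & CPUIDLE_FLAG_TIMER_STOP);
	int entered;

	if (broadcast && tick_broadcast_enter())
		return -EBUSY;			/* broadcast wakeup not possible */

	entered = target->enter(dev, drv, index);

	if (broadcast) {
		if (WARN_ON_ONCE(!irqs_disabled()))
			local_irq_disable();	/* ->enter() should not have enabled IRQs */
		tick_broadcast_exit();		/* must run with IRQs disabled */
	}

	return entered;
}
```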
-
- 17 April 2015, 1 commit
-
-
Committed by Javi Merino
Now that the kernel provides DIV_ROUND_CLOSEST_ULL(), drop the internal implementation and use the kernel one instead.

Signed-off-by: Javi Merino <javi.merino@arm.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Stephen Hemminger <shemminger@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
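DIV_ROUND_CLOSEST_ULL() rounds an unsigned 64-bit division to the nearest integer rather than truncating; a small usage example (the function and values below are made up for illustration):

```c
#include <linux/kernel.h>

static u64 average_residency_us(u64 total_time_us, u64 usage_count)
{
	if (!usage_count)
		return 0;

	/* e.g. 10 / 4 gives 3 here rather than the truncated 2 */
	return DIV_ROUND_CLOSEST_ULL(total_time_us, usage_count);
}
```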
-
- 03 April 2015, 2 commits
-
-
Committed by Bartlomiej Zolnierkiewicz
Thomas Schlichter reports the following issue on his Samsung NC20: "The C-states C1 and C2 are provided to the OS when connected to AC, and the C3 C-state is additionally provided when disconnected from AC. However, the number of C-states shown in sysfs is fixed to the number of C-states present at boot. If I boot with AC connected, I always only see the C-states up to C2 even if I disconnect AC. The reason is commit 130a5f69 (ACPI / cpuidle: remove dev->state_count setting). It removes the update of dev->state_count, but sysfs uses exactly this variable to show the C-states. The fix is to use drv->state_count in sysfs. As this is currently the last user of dev->state_count, this variable can be completely removed." Remove dev->state_count as per the above.

Reported-by: Thomas Schlichter <thomas.schlichter@web.de>
Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: 3.14+ <stable@vger.kernel.org> # 3.14+
[ rjw: Changelog ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Thomas Gleixner
Replace the clockevents_notify() call with an explicit function call.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/2106401.cYdJzzA6Ic@vostro.rjw.lan
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 24 March 2015, 5 commits
-
-
Committed by Daniel Lezcano
If the cpuidle init CPU operation returns -ENXIO, thereby reporting a HW failure or misconfiguration, the cpuidle driver skips the respective cpuidle device initialization because the associated platform back-end HW is not operational. That prevents the system from crashing and allows the error to be handled gracefully. For example, on Qcom's platform, each core has an SPM. The device associated with this SPM is initialized before the cpuidle framework. If there is an error in the initialization (e.g. an error in the DT), the system continues to boot, but in degraded mode as some SPMs may not be correctly initialized.

Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
-
Committed by Daniel Lezcano
ARM32 and ARM64 have the same DT definitions and the same approach, so the generic ARM cpuidle driver can be made common to those two architectures.

Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Kevin Hilman <khilman@linaro.org>
Acked-by: Rob Herring <robherring2@gmail.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
-
Committed by Daniel Lezcano
In the next patch, this driver will be common across ARM/ARM64. Remove all references to ARM64, as the driver will be shared with ARM32.

Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Kevin Hilman <khilman@linaro.org>
Acked-by: Rob Herring <robherring2@gmail.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
-
Committed by Daniel Lezcano
With this change, the cpuidle-arm64.c file calls the same function name for both ARM and ARM64.

Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Kevin Hilman <khilman@linaro.org>
Acked-by: Rob Herring <robherring2@gmail.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
-
Committed by Daniel Lezcano
Call the common ARM/ARM64 arm_cpuidle_suspend() instead of the cpu_suspend() function, which is specific to ARM64.

Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Kevin Hilman <khilman@linaro.org>
Acked-by: Rob Herring <robherring2@gmail.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
-