- 01 April 2013, 1 commit
-
-
Committed by Daniel Lezcano
When a cpu enters a deep idle state, the local timers are stopped and the time framework falls back to the timer device used as a broadcast timer. The different cpuidle drivers call clockevents_notify ENTER/EXIT when the idle state stops the local timer. Add a new flag, CPUIDLE_FLAG_TIMER_STOP, which can be set by the cpuidle drivers. If the flag is set, the cpuidle core code takes care of the notification on behalf of the driver, to avoid pointless code duplication.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
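As a rough sketch of how a driver would use the new flag and what the core then does on its behalf — the state values and the enter callback below are made up, and the wrapper is a simplification of the real core code:

    /* Hedged sketch: a state flagged with CPUIDLE_FLAG_TIMER_STOP; the core,
     * not the driver, issues the broadcast ENTER/EXIT notifications. */
    #include <linux/cpuidle.h>
    #include <linux/clockchips.h>

    static int my_enter_c2(struct cpuidle_device *dev,
                           struct cpuidle_driver *drv, int index)
    {
            /* platform-specific deep idle entry goes here */
            return index;
    }

    static struct cpuidle_driver my_driver = {
            .name = "my_cpuidle",                    /* made up */
            .states[1] = {
                    .name             = "C2",
                    .exit_latency     = 100,         /* us, made up */
                    .target_residency = 300,         /* us, made up */
                    .flags            = CPUIDLE_FLAG_TIMER_STOP,
                    .enter            = my_enter_c2,
            },
    };

    /* Conceptually, the core wraps ->enter() like this: */
    static int core_enter_sketch(struct cpuidle_device *dev,
                                 struct cpuidle_driver *drv, int index)
    {
            int ret;

            if (drv->states[index].flags & CPUIDLE_FLAG_TIMER_STOP)
                    clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_ENTER, &dev->cpu);

            ret = drv->states[index].enter(dev, drv, index);

            if (drv->states[index].flags & CPUIDLE_FLAG_TIMER_STOP)
                    clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_EXIT, &dev->cpu);

            return ret;
    }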
-
- 12 February 2013, 1 commit
-
-
Committed by Len Brown
This field is no longer used.
Signed-off-by: Len Brown <len.brown@intel.com>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
-
- 15 January 2013, 1 commit
-
-
Committed by Daniel Lezcano
We realized that the power usage field is never filled, and when it is filled for tegra, the power_specified flag is not set, causing all of these values to be reset when the driver is initialized with set_power_state(). However, the power_specified flag can simply be removed under the assumption that the states are always backward sorted, which is the case with the current code. This change allows the menu governor select function and cpuidle_play_dead() to be simplified. Moreover, the set_power_states() function can be removed as it no longer makes sense. Drop the power_specified flag from struct cpuidle_driver and make the related changes as described above. As a consequence, this also fixes the bug where, on systems with dynamic C-states, the power fields are not initialized. [rjw: Changelog]
References: https://bugzilla.kernel.org/show_bug.cgi?id=42870
References: https://bugzilla.kernel.org/show_bug.cgi?id=43349
References: https://lkml.org/lkml/2012/10/16/518
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
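A minimal sketch of why the ordering assumption is enough for cpuidle_play_dead(): if the states array runs from shallowest to deepest, the deepest usable state is simply the highest index that provides enter_dead(), so no power_usage comparison is needed (simplified; error handling and locking omitted):

    #include <linux/cpuidle.h>
    #include <linux/errno.h>

    static int play_dead_pick_deepest(struct cpuidle_driver *drv,
                                      struct cpuidle_device *dev)
    {
            int i;

            /* walk from the deepest state down, take the first usable one */
            for (i = drv->state_count - 1; i >= 0; i--)
                    if (drv->states[i].enter_dead)
                            return drv->states[i].enter_dead(dev, i);

            return -ENODEV;
    }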
-
- 15 November 2012, 3 commits
-
-
Committed by Daniel Lezcano
With the new tegra3 and big.LITTLE [1] architectures, several cpus with different characteristics (latencies and states) can co-exist in the same system. The cpuidle framework has the limitation of handling only identical cpus. This patch removes that limitation by introducing multiple driver support for cpuidle. The option is configurable at compile time and should be enabled for the architectures mentioned above, so there is no impact on other platforms if the option is disabled. The option defaults to 'n'. Note that multiple driver support is also compatible with the existing drivers: even if just one driver is needed, all the cpus will be tied to this driver using an extra small chunk of processor memory. The multiple driver support uses a per-cpu driver pointer instead of a global variable, and the accessors to this variable are used from a cpu context. In order to keep compatibility with the existing drivers, the functions 'cpuidle_register_driver' and 'cpuidle_unregister_driver' will register the specified driver for all the cpus. The semantics of the output of /sys/devices/system/cpu/cpuidle/current_driver remain the same, except the driver name will be related to the current cpu. The /sys/devices/system/cpu/cpu[0-9]/cpuidle/driver/name files are added, allowing the per-cpu driver name to be read.
[1] http://lwn.net/Articles/481055/
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Peter De Schrijver <pdeschrijver@nvidia.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
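A hedged sketch of the per-cpu driver plumbing this describes; the names loosely follow the real code but the details are simplified, and the config symbol is an assumption based on the changelog's compile-time option:

    #include <linux/cpuidle.h>
    #include <linux/percpu.h>

    #ifdef CONFIG_CPU_IDLE_MULTIPLE_DRIVERS          /* assumed option name */
    static DEFINE_PER_CPU(struct cpuidle_driver *, cpuidle_drv);

    static struct cpuidle_driver *__get_cpu_driver(int cpu)
    {
            return per_cpu(cpuidle_drv, cpu);        /* may differ per cpu */
    }
    #else
    static struct cpuidle_driver *cpuidle_curr_driver;

    static struct cpuidle_driver *__get_cpu_driver(int cpu)
    {
            return cpuidle_curr_driver;              /* one driver for all cpus */
    }
    #endif

    /* cpuidle_register_driver() keeps its old semantics by simply installing
     * the same driver pointer for every possible cpu when the option is on. */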
-
Committed by Daniel Lezcano
We want to support different cpuidle drivers co-existing together. In that case we should move the refcount to the cpuidle_driver structure so that several drivers can be handled at a time.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Peter De Schrijver <pdeschrijver@nvidia.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Daniel Lezcano
The structure cpuidle_state_kobj is not used anywhere except in the sysfs.c file. The definition of this structure is not needed in the cpuidle header file. This patch moves it to the sysfs.c file in order to encapsulate the code a bit more.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 23 August 2012, 1 commit
-
-
Committed by Arnd Bergmann
The new omap4 cpuidle implementation currently requires ARCH_NEEDS_CPU_IDLE_COUPLED, which only works on SMP. This patch makes it possible to build a non-SMP kernel for that platform. This is not normally desired for end users but can be useful for testing. Without this patch, building rand-0y2jSKT results in:

    drivers/cpuidle/coupled.c: In function 'cpuidle_coupled_poke':
    drivers/cpuidle/coupled.c:317:3: error: implicit declaration of function '__smp_call_function_single' [-Werror=implicit-function-declaration]

It's not clear if this patch is the best solution for the problem at hand. I have made sure that we can now build the kernel in all configurations, but that does not mean it will actually work on an OMAP44xx.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Cc: Kevin Hilman <khilman@ti.com>
Cc: Tony Lindgren <tony@atomide.com>
-
- 11 July 2012, 1 commit
-
-
Committed by Preeti U Murthy
On certain BIOSes, resume hangs if cpus are allowed to enter idle states during suspend [1]. This was fixed in the acpi idle driver [2], but the intel_idle driver does not have this fix. Thus, instead of replicating the fix in both idle drivers, or in more platform-specific idle drivers if needed, the more general cpuidle infrastructure could handle this.

A suspend callback in cpuidle_driver could handle this fix. But a cpuidle_driver provides only basic functionality like platform idle state detection and the mechanisms to support entry into and exit from CPU idle states. All other cpuidle functions live in the generic cpuidle infrastructure, for the good reason that all cpuidle drivers, irrespective of their platforms, support these functions. One option therefore would be to register a suspend callback in cpuidle which handles this fix. This could be called through a PM_SUSPEND_PREPARE notifier. But that is too generic a notifier for a driver to handle. Also, ideally the job of cpuidle is not to handle side effects of suspend. It should expose interfaces which "handle cpuidle 'during' suspend" or any other operation, which the subsystems call during that respective operation.

The fix demands that during suspend, no cpus should be allowed to enter deep C-states. The interface cpuidle_uninstall_idle_handler() in cpuidle ensures that. Not just that, it also kicks all the cpus which are already in idle out of their idle states, which was being done during cpu hotplug through the CPU_DYING_FROZEN callbacks. Now the question arises of when during suspend cpuidle_uninstall_idle_handler() should be called. Since we are dealing with drivers, it seems best to call this function during dpm_suspend(). Delaying the call till dpm_suspend_noirq() does no harm, as long as it is before cpu_hotplug_begin(), to avoid race conditions with cpu hotplug operations. In dpm_suspend_noirq(), it would be wise to place this call before suspend_device_irqs() to avoid ugly interactions with the same. Analogously, during resume.

References:
[1] https://bugs.launchpad.net/ubuntu/+source/linux/+bug/674075
[2] http://marc.info/?l=linux-pm&m=133958534231884&w=2
Reported-and-tested-by: Dave Hansen <dave@linux.vnet.ibm.com>
Signed-off-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Reviewed-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
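A hedged sketch of where this lands in the suspend path. The cpuidle_pause()/cpuidle_resume() wrapper names are assumptions on my part; the point is simply that the idle handler is uninstalled early in the noirq phase, before suspend_device_irqs(), and reinstalled on the way back:

    #include <linux/cpuidle.h>

    /* drivers/base/power/main.c, heavily abbreviated */
    static int dpm_suspend_noirq_sketch(void)
    {
            cpuidle_pause();         /* uninstall idle handler, kick idling cpus */
            /* suspend_device_irqs(); */
            /* ... suspend devices (noirq phase) ... */
            return 0;
    }

    static void dpm_resume_noirq_sketch(void)
    {
            /* ... resume devices (noirq phase) ... */
            /* resume_device_irqs(); */
            cpuidle_resume();        /* deep C-states allowed again */
    }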
-
- 06 July 2012, 1 commit
-
-
Committed by Daniel Lezcano
When the system is booted with some cpus offline, the idle driver is not initialized. When a cpu is set online, the acpi code calls the intel_idle init function. Unfortunately this introduces a dependency between intel_idle and acpi. This patch is intended to remove that dependency by using intel_idle's own notifier. It has the benefit of encapsulating the intel_idle driver and removing some exported functions.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
- 04 July 2012, 3 commits
-
-
Committed by Rafael J. Wysocki
On some systems there are CPU cores located in the same power domains as I/O devices. Then, power can only be removed from the domain if all I/O devices in it are not in use and the CPU core is idle. Add preliminary support for that to the generic PM domains framework.

First, the platform is expected to provide a cpuidle driver with one extra state designated for use with the generic PM domains code. This state should be initially disabled and its exit_latency value should be set to whatever time is needed to bring up the CPU core itself after restoring power to it, not including the domain's power-on latency. Its .enter() callback should point to a procedure that will remove power from the domain containing the CPU core at the end of the CPU power transition. The remaining characteristics of the extra cpuidle state, referred to as the "domain" cpuidle state below (e.g. power usage, target residency), should be populated in accordance with the properties of the hardware.

Next, the platform should execute genpd_attach_cpuidle() on the PM domain containing the CPU core. That will cause the generic PM domains framework to treat that domain in a special way such that:

* When all devices in the domain have been suspended and it is about to be turned off, the states of the devices will be saved, but power will not be removed from the domain. Instead, the "domain" cpuidle state will be enabled so that power can be removed from the domain when the CPU core is idle and the state has been chosen as the target by the cpuidle governor.

* When the first I/O device in the domain is resumed and __pm_genpd_poweron() is called for the first time after power has been removed from the domain, the "domain" cpuidle state will be disabled to avoid subsequent surprise power removals via cpuidle.

The effective exit_latency value of the "domain" cpuidle state depends on the time needed to bring up the CPU core itself after restoring power to it, as well as on the power-on latency of the domain containing the CPU core. Thus the "domain" cpuidle state's exit_latency has to be recomputed every time the domain's power-on latency is updated, which may happen every time power is restored to the domain, if the measured power-on latency is greater than the latency stored in the corresponding generic_pm_domain structure.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Reviewed-by: Kevin Hilman <khilman@ti.com>
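A hedged illustration of what the extra "domain" cpuidle state described above might look like on the platform side; the state name, latency numbers and the power-off hook are all invented, and the mechanism by which genpd later enables or disables the state is left out:

    #include <linux/cpuidle.h>

    /* .enter() for the "domain" state: cut power to the domain containing
     * this CPU core at the end of the CPU power transition. */
    static int my_domain_power_off(struct cpuidle_device *dev,
                                   struct cpuidle_driver *drv, int index)
    {
            /* platform hook to remove power from the CPU's domain */
            return index;
    }

    static struct cpuidle_state my_domain_state = {
            .name             = "D-OFF",   /* made up */
            .exit_latency     = 500,       /* us: CPU core bring-up only,
                                              excluding domain power-on latency */
            .target_residency = 2000,      /* us: made up */
            .enter            = my_domain_power_off,
            /* registered initially disabled; genpd enables it only while the
             * domain is otherwise ready to be powered off */
    };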
-
Committed by Rafael J. Wysocki
Add a reference counter for the cpuidle driver, so that it can't be unregistered while it is in use.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
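A minimal sketch of the idea, assuming the counter lives next to the (then still global) current-driver pointer and is taken whenever the driver is handed out; the function names are illustrative:

    #include <linux/cpuidle.h>
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(cpuidle_driver_lock);
    static struct cpuidle_driver *cpuidle_curr_driver;
    static int cpuidle_driver_refcount;

    static struct cpuidle_driver *cpuidle_driver_ref_sketch(void)
    {
            struct cpuidle_driver *drv;

            spin_lock(&cpuidle_driver_lock);
            drv = cpuidle_curr_driver;
            cpuidle_driver_refcount++;
            spin_unlock(&cpuidle_driver_lock);

            return drv;
    }

    static void cpuidle_driver_unref_sketch(void)
    {
            spin_lock(&cpuidle_driver_lock);
            if (cpuidle_driver_refcount > 0)
                    cpuidle_driver_refcount--;
            spin_unlock(&cpuidle_driver_lock);
    }

    /* cpuidle_unregister_driver() then warns/refuses while the count is
     * non-zero instead of pulling the driver out from under its users. */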
-
Committed by ShuoX Liu
Andrew J. Schorr raised a question: when he changes the disable setting on a single CPU, it affects all the other CPUs. Basically, the disable field is currently per-driver instead of per-cpu, so all the C-states of the same driver are shared by all CPUs in the same machine. The patch changes the `disable' field to per-cpu, so we can set it separately for each cpu.
Signed-off-by: ShuoX Liu <shuox.liu@intel.com>
Reported-by: Andrew J. Schorr <aschorr@telemetry-investments.com>
Reviewed-by: Yanmin Zhang <yanmin_zhang@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
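A rough sketch of the data-structure move this implies (illustrative struct names, not the kernel's): the knob leaves the state definition shared by the whole driver and joins the bookkeeping that already exists once per cpu and per state, so /sys/devices/system/cpu/cpuN/cpuidle/stateM/disable only affects cpu N:

    /* shared by every cpu that uses this driver */
    struct idle_state_sketch {
            unsigned int exit_latency;
            unsigned int target_residency;
            /* int disable;   <-- removed: one setting for all cpus */
    };

    /* one instance per cpu, per state */
    struct idle_state_usage_sketch {
            unsigned long long usage;
            unsigned long long time;
            int disable;        /* new home: per-cpu, per-state */
    };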
-
- 02 June 2012, 2 commits
-
-
Committed by Colin Cross
Adds cpuidle_coupled_parallel_barrier, which can be used by coupled cpuidle state enter functions to handle resynchronization after determining if any cpu needs to abort. The normal use case will be:

    static bool abort_flag;
    static atomic_t abort_barrier;

    int arch_cpuidle_enter(struct cpuidle_device *dev, ...)
    {
            if (arch_turn_off_irq_controller()) {
                    /* returns an error if an irq is pending and would be lost
                       if idle continued and turned off power */
                    abort_flag = true;
            }

            cpuidle_coupled_parallel_barrier(dev, &abort_barrier);

            if (abort_flag) {
                    /* One of the cpus didn't turn off its irq controller */
                    arch_turn_on_irq_controller();
                    return -EINTR;
            }

            /* continue with idle */
            ...
    }

This will cause all cpus to abort idle together if one of them needs to abort.
Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Reviewed-by: Kevin Hilman <khilman@ti.com>
Tested-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Colin Cross <ccross@android.com>
Signed-off-by: Len Brown <len.brown@intel.com>
-
Committed by Colin Cross
On some ARM SMP SoCs (OMAP4460, Tegra 2, and probably more), the cpus cannot be independently powered down, either due to sequencing restrictions (on Tegra 2, cpu 0 must be the last to power down), or due to HW bugs (on OMAP4460, a cpu powering up will corrupt the gic state unless the other cpu runs a work around). Each cpu has a power state that it can enter without coordinating with the other cpu (usually Wait For Interrupt, or WFI), and one or more "coupled" power states that affect blocks shared between the cpus (L2 cache, interrupt controller, and sometimes the whole SoC). Entering a coupled power state must be tightly controlled on both cpus.

The easiest solution to implementing coupled cpu power states is to hotplug all but one cpu whenever possible, usually using a cpufreq governor that looks at cpu load to determine when to enable the secondary cpus. This causes problems, as hotplug is an expensive operation, so the number of hotplug transitions must be minimized, leading to very slow response to loads, often on the order of seconds.

This file implements an alternative solution, where each cpu will wait in the WFI state until all cpus are ready to enter a coupled state, at which point the coupled state function will be called on all cpus at approximately the same time. Once all cpus are ready to enter idle, they are woken by an smp cross call. At this point, there is a chance that one of the cpus will find work to do, and choose not to enter idle. A final pass is needed to guarantee that all cpus will call the power state enter function at the same time. During this pass, each cpu will increment the ready counter, and continue once the ready counter matches the number of online coupled cpus. If any cpu exits idle, the other cpus will decrement their counter and retry.

To use coupled cpuidle states, a cpuidle driver must:

* Set struct cpuidle_device.coupled_cpus to the mask of all coupled cpus, usually the same as cpu_possible_mask if all cpus are part of the same cluster. The coupled_cpus mask must be set in the struct cpuidle_device for each cpu.

* Set struct cpuidle_device.safe_state to a state that is not a coupled state. This is usually WFI.

* Set CPUIDLE_FLAG_COUPLED in struct cpuidle_state.flags for each state that affects multiple cpus.

* Provide a struct cpuidle_state.enter function for each state that affects multiple cpus. This function is guaranteed to be called on all cpus at approximately the same time. The driver should ensure that the cpus all abort together if any cpu tries to abort once the function is called.

update1: cpuidle: coupled: fix count of online cpus. online_count was never incremented on boot, and was also counting cpus that were not part of the coupled set. Fix both issues by introducing a new function that counts online coupled cpus, and call it from register as well as the hotplug notifier.

update2: cpuidle: coupled: fix decrementing ready count. cpuidle_coupled_set_not_ready sometimes refuses to decrement the ready count in order to prevent a race condition. This makes it unsuitable for use when finished with idle. Add a new function cpuidle_coupled_set_done that decrements both the ready count and waiting count, and call it after idle is complete.
Cc: Amit Kucheria <amit.kucheria@linaro.org>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Trinabh Gupta <g.trinabh@gmail.com>
Cc: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Reviewed-by: Kevin Hilman <khilman@ti.com>
Tested-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Colin Cross <ccross@android.com>
Acked-by: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: Len Brown <len.brown@intel.com>
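Putting the driver-side requirements above together, here is a hedged, ARM-flavoured sketch of a coupled driver setup; the platform callbacks, state names and latency numbers are invented, and only the pieces named in the changelog (the coupled_cpus mask, the safe state, CPUIDLE_FLAG_COUPLED and the shared enter function) are meant to be taken literally:

    #include <linux/cpuidle.h>
    #include <linux/cpumask.h>

    static int my_wfi_enter(struct cpuidle_device *dev,
                            struct cpuidle_driver *drv, int index)
    {
            /* architecture WFI here: the per-cpu safe state */
            return index;
    }

    static int my_cluster_enter(struct cpuidle_device *dev,
                                struct cpuidle_driver *drv, int index)
    {
            /* Called on all coupled cpus at roughly the same time; the
             * driver must make every cpu abort if any one cpu aborts
             * (e.g. via cpuidle_coupled_parallel_barrier above). */
            /* my_platform_power_down_cluster(); */
            return index;
    }

    static struct cpuidle_driver my_coupled_driver = {
            .name        = "my_coupled_idle",
            .state_count = 2,
            .states = {
                    [0] = {
                            .name             = "WFI",
                            .exit_latency     = 1,
                            .target_residency = 1,
                            .enter            = my_wfi_enter,
                    },
                    [1] = {
                            .name             = "C-CLUSTER",      /* made up */
                            .exit_latency     = 1000,             /* us, made up */
                            .target_residency = 5000,             /* us, made up */
                            .flags            = CPUIDLE_FLAG_COUPLED,
                            .enter            = my_cluster_enter,
                    },
            },
    };

    static int my_init_cpu_device(int cpu, struct cpuidle_device *dev)
    {
            dev->cpu = cpu;
            /* every cpu in the cluster must enter/exit state 1 together */
            cpumask_copy(&dev->coupled_cpus, cpu_possible_mask);
            dev->safe_state_index = 0;      /* WFI: not a coupled state */
            return cpuidle_register_device(dev);
    }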
-
- 30 March 2012, 5 commits
-
-
Committed by Boris Ostrovsky
power_usage is always assigned a negative value and should be declared a signed integer.
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@amd.com>
Signed-off-by: Len Brown <len.brown@intel.com>
-
Committed by Boris Ostrovsky
Currently, when a CPU is off-lined it enters either MWAIT-based idle or, if MWAIT is not desired or supported, HLT-based idle (which places the processor in the C1 state). This patch allows processors without MWAIT support to stay in states deeper than C1.
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@amd.com>
Signed-off-by: Len Brown <len.brown@intel.com>
-
Committed by Daniel Lezcano
As far as I can see, this field is never used in the code.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Len Brown <len.brown@intel.com>
-
Committed by Daniel Lezcano
All the module names are ro-data; the name is never copied into the array. For example:

    static struct cpuidle_driver intel_idle_driver = {
            .name = "intel_idle",
            .owner = THIS_MODULE,
    };

It is safe to assign the pointer of this ro-data to a const char *. This way we save 12 bytes.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
Tested-by: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
Signed-off-by: Len Brown <len.brown@intel.com>
-
Committed by ShuoX Liu
Some C-states of a new CPU might not be good. One reason is that the BIOS might configure them incorrectly. To help developers root-cause it quickly, the patch adds a new sysfs entry, so developers can disable a specific C-state manually. In addition, C-states might have a big impact on performance tuning, as it takes much time to enter/exit C-states, which might delay interrupt processing. With the new debug option, developers can check whether a deep C-state impacts performance and how much impact it causes. Also add this option to Documentation/cpuidle/sysfs.txt. [akpm@linux-foundation.org: check kstrtol return value]
Signed-off-by: ShuoX Liu <shuox.liu@intel.com>
Reviewed-by: Yanmin Zhang <yanmin_zhang@intel.com>
Reviewed-and-Tested-by: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Len Brown <len.brown@intel.com>
-
- 21 March 2012, 1 commit
-
-
Committed by Robert Lee
Make the necessary changes to implement time keeping and irq enabling in the core cpuidle code. This allows the removal of these functionalities from the various platform cpuidle implementations whose timekeeping and irq enabling follow the form in this common code.
Signed-off-by: Robert Lee <rob.lee@linaro.org>
Tested-by: Jean Pihet <j-pihet@ti.com>
Tested-by: Amit Daniel <amit.kachhap@linaro.org>
Tested-by: Robert Lee <rob.lee@linaro.org>
Reviewed-by: Kevin Hilman <khilman@ti.com>
Reviewed-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Reviewed-by: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
Acked-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Len Brown <len.brown@intel.com>
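A hedged sketch of what "time keeping and irq enabling in the core" boils down to: a wrapper around the driver's ->enter() that measures residency with ktime and re-enables interrupts, so the per-platform copies of this boilerplate can go away (simplified; the function name is mine, not necessarily the kernel's):

    #include <linux/cpuidle.h>
    #include <linux/ktime.h>
    #include <linux/kernel.h>
    #include <linux/irqflags.h>

    static int enter_with_core_tk_irqen(struct cpuidle_device *dev,
                                        struct cpuidle_driver *drv, int index)
    {
            ktime_t start, end;
            s64 diff;
            int entered;

            start = ktime_get();
            entered = drv->states[index].enter(dev, drv, index);
            end = ktime_get();

            local_irq_enable();                      /* done once, here */

            diff = ktime_to_us(ktime_sub(end, start));
            if (diff > INT_MAX)
                    diff = INT_MAX;
            dev->last_residency = (int)diff;         /* for the governor */

            return entered;
    }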
-
- 18 January 2012, 1 commit
-
-
Committed by Thomas Renninger
Function split-up; there should be no functional change. This provides an entry point for physically hotplugged CPUs to initialize and activate cpuidle.
Signed-off-by: Thomas Renninger <trenn@suse.de>
CC: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
CC: Shaohua Li <shaohua.li@intel.com>
CC: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Len Brown <len.brown@intel.com>
-
- 08 December 2011, 1 commit
-
-
Committed by Deepthi Dharwar
This patch enables cpuidle for pSeries, and pSeries_idle is directly called from the idle loop. As a result of pSeries_idle, the cpuidle driver registered with the cpuidle subsystem comes into action. On failure to load the driver or the cpuidle framework, default idle is executed as part of the function. This patch also removes the routines pseries_shared_idle_sleep and pseries_dedicated_idle_sleep, as they are now implemented as part of the pseries_idle cpuidle driver.
Signed-off-by: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
Signed-off-by: Trinabh Gupta <g.trinabh@gmail.com>
Signed-off-by: Arun R Bharadwaj <arun.r.bharadwaj@gmail.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 07 November 2011, 4 commits
-
-
Committed by Deepthi Dharwar
This patch makes the cpuidle_states structure global (a single copy) instead of per-cpu. The statistics needed on a per-cpu basis by the governor are kept per-cpu. This simplifies the cpuidle subsystem, as state registration is done by a single cpu only. Having a single copy of cpuidle_states saves memory. The rare case of asymmetric C-states can be handled within the cpuidle driver, and architectures such as POWER do not have asymmetric C-states. With single/global registration of all the idle states, dynamic C-state transitions on x86 are handled by the boot cpu. Here, the boot cpu would disable all the devices, re-populate the states and later enable all the devices, irrespective of the cpu that receives the notification first.
Reference: https://lkml.org/lkml/2011/4/25/83
Signed-off-by: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
Signed-off-by: Trinabh Gupta <g.trinabh@gmail.com>
Tested-by: Jean Pihet <j-pihet@ti.com>
Reviewed-by: Kevin Hilman <khilman@ti.com>
Acked-by: Arjan van de Ven <arjan@linux.intel.com>
Acked-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Len Brown <len.brown@intel.com>
-
Committed by Deepthi Dharwar
This is the first step towards global registration of cpuidle states. The statistics used primarily by the governor are per-cpu and have to be split from the rest of the fields inside cpuidle_state, which will be made global, i.e. a single copy. The driver_data field is also per-cpu and is moved as well.
Signed-off-by: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
Signed-off-by: Trinabh Gupta <g.trinabh@gmail.com>
Tested-by: Jean Pihet <j-pihet@ti.com>
Reviewed-by: Kevin Hilman <khilman@ti.com>
Acked-by: Arjan van de Ven <arjan@linux.intel.com>
Acked-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Len Brown <len.brown@intel.com>
-
Committed by Deepthi Dharwar
The cpuidle_device->prepare() mechanism causes updates to the cpuidle_state[].flags, setting and clearing CPUIDLE_FLAG_IGNORE to tell the governor not to choose a state on a per-cpu basis at run time. State demotion is now handled by the driver, which returns the actual state entered. Hence, this mechanism is not required. This also removes per-cpu flags from cpuidle_state, enabling it to be made global.
Reference: https://lkml.org/lkml/2011/3/25/52
Signed-off-by: Deepthi Dharwar <deepthi@linux.vnet.ibm>
Signed-off-by: Trinabh Gupta <g.trinabh@gmail.com>
Tested-by: Jean Pihet <j-pihet@ti.com>
Acked-by: Arjan van de Ven <arjan@linux.intel.com>
Reviewed-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Len Brown <len.brown@intel.com>
-
Committed by Deepthi Dharwar
The cpuidle governor only suggests the state to enter via the governor->select() interface, but allows the low-level driver to override the recommended state. The actual entered state may be different because of software or hardware demotion. Software demotion is done by the back-end cpuidle driver and can be accounted for correctly. The current cpuidle code uses the last_state field to capture the actual state entered and, based on that, updates the statistics for the state entered. Ideally, the driver enter routine should update the counters, and it should return the state actually entered rather than the time spent there. The generic cpuidle code should simply handle where the counters live in the sysfs namespace, not update the counters.
Reference: https://lkml.org/lkml/2011/3/25/52
Signed-off-by: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
Signed-off-by: Trinabh Gupta <g.trinabh@gmail.com>
Tested-by: Jean Pihet <j-pihet@ti.com>
Reviewed-by: Kevin Hilman <khilman@ti.com>
Acked-by: Arjan van de Ven <arjan@linux.intel.com>
Acked-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Len Brown <len.brown@intel.com>
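A hedged sketch of the resulting contract between the core and the driver: ->enter() returns the index of the state it actually ended up in (after any demotion) and reports the residency, and the generic code only does the bookkeeping. The usage/time fields are shown in their eventual per-cpu home (states_usage), which this patch series introduces, so treat the snippet as an illustration rather than the exact upstream code:

    #include <linux/cpuidle.h>

    static void idle_call_sketch(struct cpuidle_device *dev,
                                 struct cpuidle_driver *drv, int index)
    {
            /* the driver may demote and must report what it really entered */
            int entered_state = drv->states[index].enter(dev, drv, index);

            if (entered_state >= 0) {
                    /* generic code: pure accounting for sysfs */
                    dev->states_usage[entered_state].usage++;
                    dev->states_usage[entered_state].time += dev->last_residency;
            }
    }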
-
- 01 November 2011, 1 commit
-
-
Committed by Paul Gortmaker
The <linux/module.h> header pretty much brings in the kitchen sink along with it, so it should be avoided wherever reasonably possible in terms of being included from other commonly used <linux/something.h> files, as it results in a measurable increase in compile times. The worst culprit was probably device.h, since it is used everywhere. This file also had an implicit dependency/usage of mutex.h which was masked by module.h, and is also fixed here at the same time. There are over a dozen other headers that simply declare the struct instead of pulling in the whole file, so follow their lead and simply make it a few more. Most of the implicit dependencies on module.h being present via these headers pulling it in have now been weeded out, so we can finally make this change with hopefully minimal breakage.
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
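The pattern being followed, as a minimal illustrative header (the names are made up):

    /* some_subsys.h -- illustrative only */
    #ifndef _SOME_SUBSYS_H
    #define _SOME_SUBSYS_H

    #include <linux/mutex.h>    /* was an implicit dependency hidden by module.h */

    struct module;              /* forward declaration instead of <linux/module.h> */

    struct some_subsys_driver {
            const char      *name;
            struct module   *owner;     /* a pointer only needs the declaration */
            struct mutex    lock;
    };

    #endif /* _SOME_SUBSYS_H */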
-
- 04 August 2011, 2 commits
-
-
Committed by Len Brown
cpuidle users should call cpuidle_call_idle() directly rather than via the (pm_idle)() function pointer. An architecture may choose to continue using (pm_idle)(), but cpuidle need not depend on it:

    my_arch_cpu_idle()
    {
            ...
            if (cpuidle_call_idle())
                    pm_idle();
    }

cc: Kevin Hilman <khilman@deeprootsystems.com>
cc: Paul Mundt <lethal@linux-sh.org>
cc: x86@kernel.org
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
-
Committed by Len Brown
When a Xen Dom0 kernel boots on a hypervisor, it gets access to the raw-hardware ACPI tables. While it parses the idle tables for the hypervisor's benefit, it uses HLT for its own idle. Rather than have xen scribble on pm_idle and access default_idle, have it simply disable_cpuidle() so acpi_idle will not load and the architecture default HLT will be used.
cc: xen-devel@lists.xensource.com
Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
-
- 13 January 2011, 4 commits
-
-
Committed by Len Brown
Signed-off-by: Len Brown <len.brown@intel.com>
-
Committed by Len Brown
Signed-off-by: Len Brown <len.brown@intel.com>
-
Committed by Len Brown
Signed-off-by: Len Brown <len.brown@intel.com>
-
Committed by Len Brown
It serves no purpose.
Signed-off-by: Len Brown <len.brown@intel.com>
-
- 01 October 2010, 1 commit
-
-
Committed by Suresh Siddha
Avoid TLB flush IPIs for cores in deeper C-states by doing a voluntary leave_mm() before entering that state. CPUs tend to flush the TLB in those C-states anyway. acpi_idle does this for C3-type states, but it was not carried over when intel_idle was introduced. intel_idle can apply it to C-states in addition to those that ACPI might export as C3...
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
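A hedged sketch of the pattern on x86: before entering a state deep enough that the hardware flushes the TLB anyway, lazily drop the current mm so the cpu no longer needs to receive TLB-shootdown IPIs while idle. The flag name follows what intel_idle uses for this, and the include location for leave_mm() is an assumption:

    #include <linux/cpuidle.h>
    #include <linux/smp.h>
    #include <asm/mmu_context.h>    /* leave_mm() on x86 (assumed location) */

    static void before_deep_idle(struct cpuidle_state *state)
    {
            int cpu = smp_processor_id();

            /* the TLB will be flushed by the C-state anyway, so leaving the mm
             * costs nothing extra and saves the flush IPI while we sleep */
            if (state->flags & CPUIDLE_FLAG_TLB_FLUSHED)
                    leave_mm(cpu);
    }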
-
- 10 August 2010, 1 commit
-
-
Committed by Ai Li
On some SoC chips, HW resources may be in use during any particular idle period. As a consequence, the cpuidle states that the SoC is safe to enter can change from idle period to idle period. In addition, the latency and threshold of each cpuidle state can vary, depending on the operating condition when the CPU becomes idle, e.g. the current cpu frequency, the current state of the HW blocks, etc.

The cpuidle core and the menu governor, in their current form, are geared towards cpuidle states that are static, i.e. the availability of the states, their latencies and their thresholds do not change at run time. cpuidle does not provide any hook that cpuidle drivers can use to adjust those values on the fly for the current idle period before the menu governor selects the target cpuidle state.

This patch extends the cpuidle core and the menu governor to handle states that are dynamic. There are three additions in the patch, and the patch maintains backwards compatibility with existing cpuidle drivers.

1) Add prepare() to struct cpuidle_device. A cpuidle driver can hook into the callback and cpuidle will call prepare() before calling the governor's select function. The callback gives the cpuidle driver a chance to update the dynamic information of the cpuidle states for the current idle period, e.g. state availability, latencies, thresholds, power values, etc.

2) Add CPUIDLE_FLAG_IGNORE as one of the state flags. In the prepare() function, a cpuidle driver can set/clear the flag to indicate to the menu governor whether a cpuidle state should be ignored, i.e. not available, during the current idle period.

3) Add a power_specified bit to struct cpuidle_device. The menu governor currently assumes that the cpuidle states are arranged in order of increasing latency, threshold, and power savings. This is true, or can be made true, for static states. Once the state parameters are dynamic, the latencies, thresholds, and power savings of the cpuidle states can increase or decrease by different amounts from idle period to idle period, so the assumption of increasing latency, threshold, and power savings from Cn to C(n+1) can no longer be guaranteed. It can be straightforward to calculate the power consumption of each available state and to specify it in power_usage for the idle period. Using the power_usage fields, the menu governor then selects the state that has the lowest power consumption and that still satisfies all other criteria. The power_specified bit defaults to 0. For existing cpuidle drivers, cpuidle detects that power_specified is 0 and fills in a dummy set of power_usage values.
Signed-off-by: Ai Li <aili@codeaurora.org>
Cc: Len Brown <len.brown@intel.com>
Acked-by: Arjan van de Ven <arjan@linux.intel.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Venkatesh Pallipadi <venki@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
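A hedged sketch of how a driver might use the new prepare() hook and CPUIDLE_FLAG_IGNORE described in points 1) and 2) above; the platform check is a made-up hook, and the exact prepare() signature is assumed from the description (states were still per-device at this point):

    #include <linux/cpuidle.h>

    /* hypothetical platform hook: is a shared HW block busy right now? */
    static bool my_soc_shared_block_busy(void)
    {
            return false;
    }

    static int my_prepare(struct cpuidle_device *dev)
    {
            struct cpuidle_state *deep = &dev->states[2];    /* example state */

            /* re-evaluate what is safe for the idle period about to start */
            if (my_soc_shared_block_busy())
                    deep->flags |= CPUIDLE_FLAG_IGNORE;      /* skip this time */
            else
                    deep->flags &= ~CPUIDLE_FLAG_IGNORE;

            /* latencies / power_usage for this idle period could be refreshed
             * here as well, before the menu governor runs its select() */
            return 0;
    }

    /* wired up before registration: dev->prepare = my_prepare; */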
-
- 28 May 2010, 1 commit
-
-
Committed by Len Brown
cpuidle_register_driver() sets cpuidle_curr_driver and cpuidle_unregister_driver() clears it. We shouldn't expose cpuidle_curr_driver to potential modification except via these interfaces. So make it static and create cpuidle_get_driver() to observe it.
Signed-off-by: Len Brown <len.brown@intel.com>
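A minimal sketch of the resulting encapsulation in the driver code, assuming the accessor is nothing more than a getter:

    /* drivers/cpuidle/driver.c (sketch) */
    static struct cpuidle_driver *cpuidle_curr_driver;   /* no longer exposed directly */

    struct cpuidle_driver *cpuidle_get_driver(void)
    {
            return cpuidle_curr_driver;
    }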
-
- 27 May 2010, 1 commit
-
-
Committed by Len Brown
Signed-off-by: Len Brown <len.brown@intel.com>
-
- 12 June 2008, 1 commit
-
-
Committed by Venkatesh Pallipadi
There is a cpuidle and acpi driver interaction bug in the way cpuidle_register_driver() is called. Due to this bug, there will be an oops on AC<->DC transitions on some systems which support C-states on DC but not on AC. The current code does:

ON BOOT: Look at CST and other C-state info to see whether more than C1 is supported. If it is, then acpi processor_idle does a cpuidle_register_driver() call, which internally enables the device.

ON CST change notification (AC<->DC) and on suspend-resume: the acpi driver temporarily disables the device, updates the device with any new C-states, and re-enables the device.

The problem is that on boot there are no C2/C3 states supported and we skip the register. Later, on AC<->DC, we may get a CST notification and we try to re-evaluate CST and enable the device without actually having registered it. This causes breakage, as we try to create a /sys fs sub-directory without the parent directory, which is created at register time. Thanks to Sanjeev for reporting the problem here: http://bugzilla.kernel.org/show_bug.cgi?id=10394
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
-
- 26 March 2008, 1 commit
-
-
Committed by Yi Yang
The cpuidle C-state sysfs nodes "time" and "usage" are very easy to overflow because they are both of unsigned int type; time will overflow within about two hours, usage will take longer to overflow, but both increase forever. This patch converts them to unsigned long long.
Signed-off-by: Yi Yang <yi.y.yang@intel.com>
Acked-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
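A quick back-of-the-envelope check of the overflow claim, assuming the sysfs time value counts microseconds (plain userspace C, just for the arithmetic):

    #include <stdio.h>

    int main(void)
    {
            unsigned long long max32 = 4294967295ULL;        /* UINT_MAX */

            /* 2^32 microseconds expressed in minutes: about 71.6 */
            printf("32-bit us counter wraps after %.1f minutes\n",
                   max32 / 1e6 / 60.0);
            return 0;
    }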
-
- 14 February 2008, 1 commit
-
-
Committed by Venkatesh Pallipadi
Add a new sysfs entry under cpuidle states: "desc" - can be used by the driver to communicate to userspace any specific information about the state. This helps in identifying the exact hardware C-states behind the ACPI C-state definition. The idea is to export this through powertop, which will help map the C-state reported by powertop to the actual hardware C-state.
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
-