- 08 Aug 2013, 1 commit
-
-
Committed by Viresh Kumar
This patch addresses the following issues in the header files in the cpufreq core:
- Include headers in ascending order, so that we don't add the same header twice by mistake.
- <asm/> must be included after <linux/>, so that they override whatever they need to.
- Remove unnecessary includes.
- Don't include files already included by cpufreq.h or cpufreq_governor.h.
[rjw: Changelog]
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 26 Jul 2013, 1 commit
-
-
Committed by Stratos Karafotis
The ondemand governor calculates load in terms of frequency and increases it only if load_freq is greater than up_threshold multiplied by the current or average frequency. This appears to produce oscillations of frequency between min and max because, for example, a relatively small load can easily saturate the minimum frequency and lead the CPU to the max; then it will decrease back to the min due to a small load_freq.

Change the calculation method of load and target frequency on the basis of the following two observations:
- Load computation should not depend on the current or average measured frequency. For example, an absolute load of 80% at 100 MHz is not necessarily equivalent to 8% at 1000 MHz in the next sampling interval.
- It should be possible to increase the target frequency to any value present in the frequency table proportional to the absolute load, rather than to the max only, so that:

    Target frequency = C * load

  where we take C = policy->cpuinfo.max_freq / 100.

Tested on an Intel i7-3770 CPU @ 3.40GHz and on a quad-core 1500 MHz Krait. The Phoronix Linux Kernel Compilation 3.1 benchmark shows a ~1.5% increase in performance. cpufreq_stats (time_in_state) shows that middle frequencies are used more with this patch, while the highest and lowest frequencies were used ~9% less.

[rjw: We have run multiple other tests on kernels with this change applied and in the vast majority of cases it turns out that the resulting performance improvement also leads to reduced consumption of energy. The change is additionally justified by the overall simplification of the code in question.]
Signed-off-by: Stratos Karafotis <stratosk@semaphore.gr>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
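As a rough illustration of the proportional selection described above, here is a minimal userspace C sketch; the frequency table, the sample loads, and the helper name are made up for illustration and are not the kernel's actual code.

```c
#include <stdio.h>

/* Hypothetical frequency table (kHz), lowest to highest. */
static const unsigned int freq_table[] = {
	800000, 1200000, 1600000, 2000000, 2400000, 3400000
};
#define NFREQS (sizeof(freq_table) / sizeof(freq_table[0]))

/* Pick the lowest table frequency at or above the proportional target. */
static unsigned int target_freq(unsigned int max_freq, unsigned int load)
{
	unsigned int target = max_freq / 100 * load;	/* C * load, C = max_freq / 100 */

	for (unsigned int i = 0; i < NFREQS; i++)
		if (freq_table[i] >= target)
			return freq_table[i];
	return freq_table[NFREQS - 1];
}

int main(void)
{
	/* A 40% absolute load maps to a mid-table frequency instead of jumping to max. */
	printf("load 40%% -> %u kHz\n", target_freq(3400000, 40));
	printf("load 95%% -> %u kHz\n", target_freq(3400000, 95));
	return 0;
}
```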
-
- 26 Jun 2013, 1 commit
-
-
Committed by Jacob Shin
When initializing the default powersave_bias value, we need to first make sure that this policy is running the ondemand governor.
Reported-and-tested-by: Tim Gardner <tim.gardner@canonical.com>
Signed-off-by: Jacob Shin <jacob.shin@amd.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 13 May 2013, 1 commit
-
-
Committed by Borislav Petkov
I don't see how the virtual address of the tuners pointer would be of any help to anyone, so remove it.
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 10 Apr 2013, 1 commit
-
-
Committed by Jacob Shin
This allows another [arch specific] driver to hook into the existing powersave bias function of the ondemand governor, i.e. it allows an AMD-specific powersave bias function (in a separate AMD-specific driver) to aid the ondemand governor's frequency transition decisions.
Signed-off-by: Jacob Shin <jacob.shin@amd.com>
Acked-by: Thomas Renninger <trenn@suse.de>
Acked-by: Borislav Petkov <bp@suse.de>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
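A hedged sketch of the hook mechanism this describes: the governor calls a registered driver function when one is present and falls back to its generic bias logic otherwise. All names, signatures, and the 10% bias are illustrative, not the kernel's API.

```c
#include <stdio.h>

/* Generic powersave-bias target used when no driver hook is registered. */
static unsigned int generic_bias_target(unsigned int requested_khz)
{
	return requested_khz;			/* identity: no bias applied */
}

/* Hook slot a platform driver may fill in (illustrative only). */
static unsigned int (*powersave_bias_target)(unsigned int) = NULL;

static void register_powersave_bias_handler(unsigned int (*fn)(unsigned int))
{
	powersave_bias_target = fn;
}

/* Governor-side call site: prefer the driver hook when present. */
static unsigned int resolve_target(unsigned int requested_khz)
{
	return powersave_bias_target ? powersave_bias_target(requested_khz)
				     : generic_bias_target(requested_khz);
}

/* Example vendor hook that shaves 10% off the requested frequency. */
static unsigned int vendor_bias_target(unsigned int requested_khz)
{
	return requested_khz - requested_khz / 10;
}

int main(void)
{
	printf("without hook: %u kHz\n", resolve_target(2000000));
	register_powersave_bias_handler(vendor_bias_target);
	printf("with hook:    %u kHz\n", resolve_target(2000000));
	return 0;
}
```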
-
- 01 Apr 2013, 4 commits
-
-
Committed by Stratos Karafotis
Currently we always calculate the CPU iowait time and add it to idle time. If we are in ondemand and we use io_is_busy, we re-calculate iowait time and subtract it from idle time. With this patch, iowait time is calculated only when necessary, avoiding the double call to get_cpu_iowait_time_us. We use a parameter in get_cpu_idle_time to distinguish whether the iowait time should be added to idle time or not, without the need to keep prev_io_wait.
Signed-off-by: Stratos Karafotis <stratosk@semaphore.gr>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Viresh Kumar
The following patch introduced per-CPU timers/works for the ondemand and conservative governors:

commit 2abfa876
Author: Rickard Andersson <rickard.andersson@stericsson.com>
Date:   Thu Dec 27 14:55:38 2012 +0000

    cpufreq: handle SW coordinated CPUs

This causes additional unnecessary interrupts on all CPUs when the load has recently been evaluated by any other CPU, i.e. when the load has recently been evaluated by CPU x, we don't really need any other CPU to evaluate it again for the next sampling_rate interval. Some code is present to avoid that, but we still get timer interrupts on all CPUs. A good way of avoiding this is to modify the delays for all CPUs (policy->cpus) whenever any CPU has evaluated the load. This patch makes that change, along with some related code cleanup.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Viresh Kumar
Because we have per-CPU timers now, we check whether the load needs to be evaluated again (in case it was evaluated recently). Here the second CPU, which got the timer interrupt, updates core_dbs_info->sample_type regardless of whether load evaluation is required, which is wrong because the first CPU depends on this variable being set to the older value. Moreover, it would be best in this case to schedule the second CPU's timer to sampling_rate instead of freq_lo or freq_hi, as those must be managed by the other CPU. Even if the other CPU idles in between, we wouldn't lose much power.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Viresh Kumar
Currently there can't be multiple instances of a single governor type. If we have a multi-package system, where we have multiple instances of struct policy (one per package), we can't have multiple instances of the same governor, i.e. we can't have multiple instances of the ondemand governor for multiple packages. The governors directory in sysfs is created at /sys/devices/system/cpu/cpufreq/governor-name/, which again reflects that there can be only one instance of a governor type in the system. This is a bottleneck for multi-cluster systems, where we want different packages to use the same governor type but with different tunables. This patch uses the infrastructure provided by the earlier patch and implements init/exit routines for the ondemand and conservative governors.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
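The per-policy idea can be sketched in plain C as follows: each policy instance allocates and frees its own tunables in the governor's init/exit hooks, so two packages can run the same governor with different settings. The struct layout and default values here are illustrative, not the kernel's.

```c
#include <stdio.h>
#include <stdlib.h>

/* Illustrative per-policy tunables instead of one global set. */
struct od_tuners {
	unsigned int sampling_rate;
	unsigned int up_threshold;
};

struct policy {
	struct od_tuners *tuners;
};

/* Governor init: one tunables instance per policy. */
static int od_init(struct policy *p)
{
	p->tuners = calloc(1, sizeof(*p->tuners));
	if (!p->tuners)
		return -1;
	p->tuners->sampling_rate = 10000;	/* hypothetical defaults */
	p->tuners->up_threshold = 95;
	return 0;
}

/* Governor exit: release this policy's instance. */
static void od_exit(struct policy *p)
{
	free(p->tuners);
	p->tuners = NULL;
}

int main(void)
{
	struct policy pkg0 = { 0 }, pkg1 = { 0 };

	od_init(&pkg0);
	od_init(&pkg1);
	pkg1.tuners->up_threshold = 80;		/* different tunables per package */
	printf("pkg0 up_threshold=%u, pkg1 up_threshold=%u\n",
	       pkg0.tuners->up_threshold, pkg1.tuners->up_threshold);
	od_exit(&pkg0);
	od_exit(&pkg1);
	return 0;
}
```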
-
- 09 Feb 2013, 2 commits
-
-
Committed by Stratos Karafotis
Fix some typos in comments.
Signed-off-by: Stratos Karafotis <stratosk@semaphore.gr>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Stratos Karafotis
In order to avoid calculating up_threshold - down_differential every time the frequency must be decreased, replace the down_differential tuner with adj_up_threshold, which keeps the difference across multiple checks. Update adj_up_threshold only when up_threshold is also updated.
Signed-off-by: Stratos Karafotis <stratosk@semaphore.gr>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
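A minimal sketch of the caching described above: the adjusted threshold is recomputed only on the store path, so the sampling path just compares against the cached value. The struct and function names are illustrative.

```c
#include <stdio.h>

/* Illustrative tunables; adj_up_threshold caches up_threshold - down_differential. */
struct od_tuners {
	unsigned int up_threshold;
	unsigned int down_differential;
	unsigned int adj_up_threshold;
};

/* Called only when the tunable changes, not from the sampling path. */
static void store_up_threshold(struct od_tuners *t, unsigned int val)
{
	t->up_threshold = val;
	t->adj_up_threshold = val - t->down_differential;
}

/* Sampling path: compare against the cached value, no subtraction per sample. */
static int should_scale_down(const struct od_tuners *t, unsigned int load)
{
	return load < t->adj_up_threshold;
}

int main(void)
{
	struct od_tuners t = { .down_differential = 10 };

	store_up_threshold(&t, 95);		/* adj_up_threshold becomes 85 */
	printf("load 80 scales down: %d\n", should_scale_down(&t, 80));
	printf("load 90 scales down: %d\n", should_scale_down(&t, 90));
	return 0;
}
```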
-
- 02 Feb 2013, 6 commits
-
-
Committed by Viresh Kumar
With the inclusion of the following patches:

9f4eb10 cpufreq: conservative: call dbs_check_cpu only when necessary
772b4b1 cpufreq: ondemand: call dbs_check_cpu only when necessary

code redundancy between the conservative and ondemand governors is introduced again, so get rid of it.
[rjw: Changelog]
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Tested-by: Fabio Baltieri <fabio.baltieri@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Fabio Baltieri
Fix the governors' code to set every CPU's cdbs->cpu to the actual CPU id, and use cur_policy->cpu instead of cdbs->cpu to track the current governor's leader CPU.
Reported-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Fabio Baltieri
Implement a generic helper function policy_is_shared() to replace the current dbs_sw_coordinated_cpus() at the cpufreq level, so that it can be used by code other than the cpufreq governors.
Suggested-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
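The intent of such a helper can be sketched like this, with the policy's cpumask replaced by a plain CPU count purely for illustration; this is not the kernel's definition.

```c
#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in for struct cpufreq_policy: just the CPUs it spans. */
struct policy {
	unsigned int ncpus;
};

/* True when more than one CPU shares this policy (software-coordinated cores). */
static inline bool policy_is_shared(const struct policy *p)
{
	return p->ncpus > 1;
}

int main(void)
{
	struct policy solo = { .ncpus = 1 };
	struct policy quad = { .ncpus = 4 };

	printf("solo shared: %d, quad shared: %d\n",
	       policy_is_shared(&solo), policy_is_shared(&quad));
	return 0;
}
```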
-
Committed by Fabio Baltieri
Modify update_sampling_rate() to check, and eventually immediately schedule, every CPU's do_dbs_timer delayed work. This is required in the case of software-coordinated CPUs, as we now have a separate delayed work for each CPU.
Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Fabio Baltieri
Modify the ondemand timer to not resample CPU utilization if it was recently sampled from another SW-coordinated core.
Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Rickard Andersson
This patch fixes a bug that occurred when we had load on a secondary CPU while the primary CPU was sleeping. Only one sampling timer was spawned, and it was spawned as a deferred timer on the primary CPU, so when a secondary CPU had a change in load this was not detected by the cpufreq governor (both ondemand and conservative). This patch makes sure that deferred timers are run on all CPUs in the case of software-controlled CPUs that run at the same frequency.
Signed-off-by: Rickard Andersson <rickard.andersson@stericsson.com>
Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 27 Nov 2012, 1 commit
-
-
Committed by Fabio Baltieri
Fix cpufreq_gov_ondemand to skip CPUs where another governor is used. The bug presents itself as a NULL pointer access on the mutex_lock() call, and can be reproduced on an SMP machine by setting the default governor to anything other than ondemand, setting a single CPU's governor to ondemand, then changing the sample rate by writing to:

/sys/devices/system/cpu/cpufreq/ondemand/sampling_rate

Backtrace:
Nov 26 17:36:54 balto kernel: [ 839.585241] BUG: unable to handle kernel NULL pointer dereference at (null)
Nov 26 17:36:54 balto kernel: [ 839.585311] IP: [<ffffffff8174e082>] __mutex_lock_slowpath+0xb2/0x170
[snip]
Nov 26 17:36:54 balto kernel: [ 839.587005] Call Trace:
Nov 26 17:36:54 balto kernel: [ 839.587030] [<ffffffff8174da82>] mutex_lock+0x22/0x40
Nov 26 17:36:54 balto kernel: [ 839.587067] [<ffffffff81610b8f>] store_sampling_rate+0xbf/0x150
Nov 26 17:36:54 balto kernel: [ 839.587110] [<ffffffff81031e9c>] ? __do_page_fault+0x1cc/0x4c0
Nov 26 17:36:54 balto kernel: [ 839.587153] [<ffffffff813309bf>] kobj_attr_store+0xf/0x20
Nov 26 17:36:54 balto kernel: [ 839.587192] [<ffffffff811bb62d>] sysfs_write_file+0xcd/0x140
Nov 26 17:36:54 balto kernel: [ 839.587234] [<ffffffff8114c12c>] vfs_write+0xac/0x180
Nov 26 17:36:54 balto kernel: [ 839.587271] [<ffffffff8114c472>] sys_write+0x52/0xa0
Nov 26 17:36:54 balto kernel: [ 839.587306] [<ffffffff810321ce>] ? do_page_fault+0xe/0x10
Nov 26 17:36:54 balto kernel: [ 839.587345] [<ffffffff81751202>] system_call_fastpath+0x16/0x1b

Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
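The shape of the required guard can be sketched in userspace terms: the sampling-rate update loop skips any CPU whose policy is governed by something other than ondemand, so its uninitialized per-CPU state (the NULL mutex in the trace) is never touched. All structures here are stand-ins, not the kernel's.

```c
#include <stdio.h>

enum governor { GOV_ONDEMAND, GOV_PERFORMANCE };

/* Stand-ins for the per-CPU governor data and its owning policy. */
struct cpu_dbs_info { int initialized; };
struct cpu_policy   { enum governor gov; struct cpu_dbs_info *dbs; };

static void update_sampling_rate(struct cpu_policy *cpus, int ncpus, unsigned int rate)
{
	for (int cpu = 0; cpu < ncpus; cpu++) {
		/* Skip CPUs run by another governor: their ondemand state is unset. */
		if (cpus[cpu].gov != GOV_ONDEMAND || !cpus[cpu].dbs)
			continue;
		printf("cpu%d: applying new sampling rate %u\n", cpu, rate);
	}
}

int main(void)
{
	struct cpu_dbs_info dbs0 = { 1 };
	struct cpu_policy cpus[2] = {
		{ .gov = GOV_ONDEMAND,    .dbs = &dbs0 },
		{ .gov = GOV_PERFORMANCE, .dbs = NULL },	/* would oops without the check */
	};

	update_sampling_rate(cpus, 2, 10000);
	return 0;
}
```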
-
- 24 Nov 2012, 1 commit
-
-
Committed by Fabio Baltieri
Restore the correct delay value for ondemand's od_dbs_timer, as it was changed erroneously in commit 83f0e55 (cpufreq: governors: remove redundant code).
Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
Reviewed-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 15 Nov 2012, 2 commits
-
-
Committed by Viresh Kumar
The ondemand governor was written first, and the conservative governor was then written based on its code. The conservative governor used a lot of code from ondemand, but as a copy rather than shared routines, which increased code redundancy that is difficult to manage. This patch is an attempt to move the common parts of both governors to cpufreq_governor.c to overcome the issues mentioned above. It shouldn't change anything from the functionality point of view.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by viresh kumar
Multiple cpufreq governors have defined similar get_cpu_idle_time_***() routines. These routines should be moved to some common place so that all governors can use them, so move them to cpufreq_governor.c, which seems to be a better place for keeping them.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 15 Sep 2012, 1 commit
-
-
Committed by Michal Pecio
Reevaluate CPU load and update the frequency immediately whenever limits are changed. Currently ondemand doesn't do that when limits are relaxed, wasting power on systems with a relatively low sampling rate.
Signed-off-by: Michal Pecio <mpecio@nvidia.com>
Reviewed-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
- 22 Aug 2012, 1 commit
-
-
Committed by Tejun Heo
Initializers for deferrable delayed_work are confusingly named:
* __DEFERRED_WORK_INITIALIZER()
* DECLARE_DEFERRED_WORK()
* INIT_DELAYED_WORK_DEFERRABLE()

Rename them to:
* __DEFERRABLE_WORK_INITIALIZER()
* DECLARE_DEFERRABLE_WORK()
* INIT_DEFERRABLE_WORK()

This patch doesn't cause any functional changes.
Signed-off-by: Tejun Heo <tj@kernel.org>
-
- 01 Mar 2012, 1 commit
-
-
Committed by MyungJoo Ham
When a new sampling rate is shorter than the current one (e.g., 1 s --> 10 ms), regardless of how short the new one is, the current ondemand mechanism waits for the previously set timer to expire. For example, if the user has just expressed that the sampling rate should be 10 ms from now and the previous rate was 1000 ms, the new rate may become effective 999 ms later, which may not be acceptable if the user intended to speed up sampling because the system is expected to react to CPU load fluctuation quickly from __now__. In order to address this issue, we need to cancel the previously set timer (schedule_delayed_work) and reset it if doing so is expected to trigger the delayed_work earlier.
Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Dave Jones <davej@redhat.com>
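In userspace terms, the rescheduling decision reduces to a comparison like the one below: re-arm only if the pending expiry lies further away than one new sampling period. Names and units are simplified for illustration.

```c
#include <stdio.h>

typedef unsigned long long u64;

/* Re-arm only if the pending expiry is later than now + the new rate. */
static int needs_reschedule(u64 now_us, u64 pending_expiry_us, u64 new_rate_us)
{
	return pending_expiry_us > now_us + new_rate_us;
}

int main(void)
{
	u64 now = 0;
	u64 pending = 999000;	/* old 1 s timer: 999 ms still to go */
	u64 new_rate = 10000;	/* user just asked for 10 ms sampling */

	if (needs_reschedule(now, pending, new_rate))
		printf("cancel pending work, re-arm %llu us from now\n", new_rate);
	else
		printf("pending work already expires soon enough\n");
	return 0;
}
```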
-
- 15 Dec 2011, 1 commit
-
-
Committed by Martin Schwidefsky
Make cputime_t and cputime64_t nocast to enable sparse checking to detect incorrect use of cputime. Drop the cputime macros for simple scalar operations. The conversion macros are still needed.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 09 Dec 2011, 1 commit
-
-
Committed by Kamalesh Babulal
CPUFREQ: Remove the wall variable from cpufreq_gov_dbs_init()

Remove the wall variable from cpufreq_gov_dbs_init(), as get_cpu_idle_time_us() no longer updates last_update_time unconditionally. Passing a non-NULL last_update_time address would result in accounting additional idle time with update_ts_time_stats() before returning idle_sleeptime.
Signed-off-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
Signed-off-by: Dave Jones <davej@redhat.com>
--
 drivers/cpufreq/cpufreq_ondemand.c | 3 +--
 1 files changed, 1 insertions(+), 2 deletions(-)
-
- 06 Dec 2011, 1 commit
-
-
Committed by Glauber Costa
This patch changes the fields in cpustat from a structure to a u64 array. Math gets easier, and the code is more flexible.
Signed-off-by: Glauber Costa <glommer@parallels.com>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Paul Tuner <pjt@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1322498719-2255-2-git-send-email-glommer@parallels.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
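To illustrate why an array layout makes the math easier, here is a small sketch; the enum values follow the kernel's CPUTIME_* naming style, but the struct and numbers are simplified and hypothetical.

```c
#include <stdio.h>

typedef unsigned long long u64;

/* Indexable layout: sums and loops need no per-field code. */
enum cpu_usage_stat { CPUTIME_USER, CPUTIME_SYSTEM, CPUTIME_IDLE, CPUTIME_IOWAIT, NR_STATS };

struct kernel_cpustat {
	u64 cpustat[NR_STATS];
};

static u64 total_time(const struct kernel_cpustat *kcs)
{
	u64 sum = 0;

	for (int i = 0; i < NR_STATS; i++)	/* awkward to do with named struct fields */
		sum += kcs->cpustat[i];
	return sum;
}

int main(void)
{
	struct kernel_cpustat kcs = { .cpustat = { 100, 50, 800, 50 } };

	printf("total accounted time: %llu\n", total_time(&kcs));
	return 0;
}
```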
-
- 08 Sep 2011, 1 commit
-
-
Committed by Michal Hocko
update_ts_time_stat currently updates idle time even if we are in the iowait loop at the moment. The only real users of the idle counter (via get_cpu_idle_time_us) are CPU governors, and they expect to get cumulative time for both idle and iowait. The value (idle_sleeptime) is also printed to userspace by print_cpu, but it prints both idle and iowait times, so the idle part is misleading. Let's clean this up and fix update_ts_time_stat to account both counters properly, and update the consumers of idle time to consider iowait time as well. If we do this, we can use get_cpu_{idle,iowait}_time_us from other contexts as well and get the expected values.
Signed-off-by: Michal Hocko <mhocko@suse.cz>
Cc: Dave Jones <davej@redhat.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Link: http://lkml.kernel.org/r/e9c909c221a8da402c4da07e4cd968c3218f8eb1.1314172057.git.mhocko@suse.cz
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 09 Aug 2011, 1 commit
-
-
Committed by Paul Bolle
Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Acked-by: Len Brown <len.brown@intel.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
- 17 Mar 2011, 4 commits
-
-
Committed by Thomas Renninger
There cannot be any concurrent access to these through different CPU sysfs files anymore, because these tunables are now all global (not per-CPU). I still have some doubts whether some of these locks were needed at all. Anyway, let's get rid of them.
Signed-off-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: Dave Jones <davej@redhat.com>
CC: cpufreq@vger.kernel.org
-
Committed by Thomas Renninger
Marked deprecated for quite a while now...
Signed-off-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: Dave Jones <davej@redhat.com>
CC: cpufreq@vger.kernel.org
-
Committed by Thomas Renninger
Marked deprecated for quite a while now...
Signed-off-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: Dave Jones <davej@redhat.com>
CC: cpufreq@vger.kernel.org
-
Committed by Vincent Guittot
Calculate the ondemand delay after the dbs_check_cpu call, because that call can modify the rate_mult value. Use the freq_lo_jiffies value for the sub-sample period of powersave_bias mode.
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Dave Jones <davej@redhat.com>
-
- 26 Jan 2011, 1 commit
-
-
Committed by Tejun Heo
With cmwq, there's no reason for cpufreq drivers to use separate workqueues. Remove the dedicated workqueues from cpufreq_conservative and cpufreq_ondemand and use system_wq instead. The work items are already sync canceled on stop, so it's already guaranteed that no work is running on module exit.
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Dave Jones <davej@redhat.com>
Cc: cpufreq@vger.kernel.org
-
- 22 Oct 2010, 1 commit
-
-
Committed by David C Niemi
Adds a new global tunable, sampling_down_factor. Set to 1, it makes no change to existing behavior; set to greater than 1 (e.g. 100), it acts as a multiplier for the scheduling interval used to reevaluate load when the CPU is at its top speed due to high load. This improves performance by reducing the overhead of load evaluation and helping the CPU stay at its top speed when truly busy, rather than shifting back and forth in speed. The tunable has no effect on behavior at lower speeds/lower CPU loads. This patch is against 2.6.36-rc6 and should help solve kernel bug 19672, "ondemand is slow".
Signed-off-by: David Niemi <dniemi@verisign.com>
Acked-by: Venkatesh Pallipadi <venki@google.com>
CC: Daniel Hollocher <danielhollocher@gmail.com>
CC: <cpufreq-list@vger.kernel.org>
CC: <linux-kernel@vger.kernel.org>
Signed-off-by: Dave Jones <davej@redhat.com>
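A sketch of the multiplier logic in plain C: while the CPU is at its maximum frequency under high load, the next sample is pushed out by sampling_down_factor; otherwise the plain sampling rate is used. The function and values are illustrative, not the kernel's implementation.

```c
#include <stdio.h>

/* Next sampling delay (us), given the tunables and current state. */
static unsigned int next_delay_us(unsigned int sampling_rate_us,
				  unsigned int sampling_down_factor,
				  int at_max_freq_under_load)
{
	unsigned int rate_mult = at_max_freq_under_load ? sampling_down_factor : 1;

	return sampling_rate_us * rate_mult;
}

int main(void)
{
	/* factor 100: re-evaluate every 1 s instead of every 10 ms while flat out. */
	printf("busy at max speed: %u us\n", next_delay_us(10000, 100, 1));
	printf("otherwise:         %u us\n", next_delay_us(10000, 100, 0));
	return 0;
}
```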
-
- 04 Aug 2010, 2 commits
-
-
Committed by Jocelyn Falempe
For UP systems this is not required, and it results in a more consistent sample interval.
[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Jocelyn Falempe <jocelyn.falempe@motorola.com>
Signed-off-by: Mike Chan <mike@android.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dave Jones <davej@redhat.com>
-
Committed by Mike Chan
Make it simpler to read and call.
*** v3 - Always call when powersave_bias is enabled.
Acked-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Mike Chan <mike@android.com>
Signed-off-by: Dave Jones <davej@redhat.com>
-
- 10 May 2010, 2 commits
-
-
Committed by Arjan van de Ven
Pavel Machek pointed out that not all CPUs have an efficient idle at high frequency. Specifically, older Intel and various AMD CPUs would get higher power usage when copying files from USB. Mike Chan pointed out that the same is true for various ARM chips as well. Thomas Renninger suggested making this a sysfs tunable with a reasonable default. This patch adds a sysfs tunable for the new behavior, and uses a very simple function to determine a reasonable default, depending on the CPU vendor/type.
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Pavel Machek <pavel@ucw.cz>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: davej@redhat.com
LKML-Reference: <20100509082651.46914d04@infradead.org>
[ minor tidyup ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Arjan van de Ven
The ondemand cpufreq governor uses CPU busy time (e.g. not-idle time) as a measure for scaling the CPU frequency up or down. If the CPU is busy, the CPU frequency scales up; if it's idle, the CPU frequency scales down. Effectively, it uses the CPU busy time as a proxy variable for the more nebulous "how critical is performance right now" question. This algorithm falls flat on its face in the light of workloads where you're alternately disk and CPU bound, such as the ever popular "git grep", but also things like startup of programs and maildir-using email clients... much to the chagrin of Andrew Morton. This patch changes the ondemand algorithm to count iowait time as busy, not idle, time. As shown in the breakdown cases above, iowait is often performance critical, and by counting iowait, the proxy variable becomes a more accurate representation of the "how critical is performance" question. The problem and fix are both verified with the "perf timechart" tool.
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dave Jones <davej@redhat.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20100509082606.3d9f00d0@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
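The effect on the computed load over one sampling window can be sketched as below; the counters and the 10 ms example are made up for illustration and simplify the governor's actual bookkeeping.

```c
#include <stdio.h>

typedef unsigned long long u64;

/* Load (%) over a window, optionally counting iowait as busy time. */
static unsigned int window_load(u64 wall_us, u64 idle_us, u64 iowait_us, int io_is_busy)
{
	u64 effective_idle = io_is_busy ? idle_us : idle_us + iowait_us;

	return (unsigned int)(100 * (wall_us - effective_idle) / wall_us);
}

int main(void)
{
	/* 10 ms window of a disk-bound workload: 1 ms idle, 8 ms waiting on I/O. */
	printf("iowait counted as idle: load = %u%%\n", window_load(10000, 1000, 8000, 0));
	printf("iowait counted as busy: load = %u%%\n", window_load(10000, 1000, 8000, 1));
	return 0;
}
```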
-
- 10 Apr 2010, 1 commit
-
-
Committed by Borislav Petkov
Multiple modules used to define these routines with identical functionality, needlessly replicated among the different cpufreq drivers. Push them into the header and remove the duplication.
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
LKML-Reference: <1270065406-1814-7-git-send-email-bp@amd64.org>
Reviewed-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-