- 08 Aug 2013, 15 commits
-
-
Submitted by Viresh Kumar
Chapter 14 of Documentation/CodingStyle says: The preferred form for passing the size of a struct is the following: p = kmalloc(sizeof(*p), ...); The alternative form, where the struct name is spelled out, hurts readability and introduces an opportunity for a bug when the pointer variable type is changed but the corresponding sizeof that is passed to a memory allocator is not. This wasn't followed consistently in drivers/cpufreq; let's make it more consistent by always following this rule. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
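A minimal illustration of the two forms (the struct type and GFP flag here are chosen only for the example, they are not taken from the patch):

    struct cpufreq_policy *p;

    /* Discouraged: repeats the type name; silently becomes wrong if the
     * declared type of 'p' is ever changed. */
    p = kmalloc(sizeof(struct cpufreq_policy), GFP_KERNEL);

    /* Preferred: sizeof(*p) always matches whatever 'p' points to. */
    p = kmalloc(sizeof(*p), GFP_KERNEL);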
-
Submitted by Viresh Kumar
The cpufreq_policy pointers are variously called policy, cur_policy, new_policy, data, etc. Just call them policy wherever possible. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Submitted by Viresh Kumar
This patch addresses the following issues in the header files in the cpufreq core:
- Include headers in ascending order, so that we don't add the same header multiple times by mistake.
- <asm/> must be included after <linux/>, so that they override whatever they need to.
- Remove unnecessary includes.
- Don't include files already included by cpufreq.h or cpufreq_governor.h.
[rjw: Changelog] Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
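For illustration only (this particular selection of headers is hypothetical, not from the patch), the ordering convention described above looks like:

    #include <linux/cpufreq.h>
    #include <linux/module.h>
    #include <linux/mutex.h>
    #include <linux/slab.h>

    #include <asm/cputime.h>    /* <asm/> headers come after <linux/> */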
-
Submitted by Viresh Kumar
The caller of cpufreq_add_policy_cpu() already has a pointer to the policy structure and there is no need to look it up again in cpufreq_add_policy_cpu(). Let's pass it directly. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Reviewed-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Submitted by Rafael J. Wysocki
The only case triggering a jump to the err_out_unregister label in __cpufreq_add_dev() is when cpufreq_add_dev_interface() fails. However, if cpufreq_add_dev_interface() fails, it calls kobject_put() for the policy kobject in its error code path and since that causes the kobject's refcount to become 0, the additional kobject_put() for the same kobject under err_out_unregister and the wait_for_completion() following it are pointless, so drop them. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Reviewed-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
-
Submitted by Rafael J. Wysocki
The cpufreq core is a little inconsistent in the way it uses the driver module refcount. Namely, if __cpufreq_add_dev() is called for a CPU that doesn't share the policy object with any other CPUs, the driver module refcount it grabs to start with will be dropped by it before returning and will be equal to whatever it had been before that function was invoked. However, if the given CPU does share the policy object with other CPUs, either cpufreq_add_policy_cpu() is called to link the new CPU to the existing policy, or cpufreq_add_dev_symlink() is used to link the other CPUs sharing the policy with it to the just created policy object. In that case, because both cpufreq_add_policy_cpu() and cpufreq_add_dev_symlink() call cpufreq_cpu_get() for the given policy (the latter possibly many times) without the balancing cpufreq_cpu_put() (unless there is an error), the driver module refcount will be left by __cpufreq_add_dev() with a nonzero value (different from the initial one). To remove that inconsistency make cpufreq_add_policy_cpu() execute cpufreq_cpu_put() for the given policy before returning, which decrements the driver module refcount so that it will be equal to its initial value after __cpufreq_add_dev() returns. Also remove the cpufreq_cpu_get() call from cpufreq_add_dev_symlink(), since both the policy refcount and the driver module refcount are nonzero when it is called and they don't need to be bumped up by it. Accordingly, drop the cpufreq_cpu_put() from __cpufreq_remove_dev(), since it is only necessary to balance the cpufreq_cpu_get() called by cpufreq_add_policy_cpu() or cpufreq_add_dev_symlink(). Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Reviewed-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
-
Submitted by Viresh Kumar
A pointer to struct cpufreq_policy is already passed to these routines and we don't need to send policy->cpu to them as well. So, get rid of this extra argument and use policy->cpu everywhere. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Submitted by Viresh Kumar
We call cpufreq_cpu_get() in cpufreq_add_dev_symlink() to increase the usage refcount of the policy, not to get a policy for the given CPU. So, we don't really need to capture the return value of this routine. We can simply use the policy passed as an argument to cpufreq_add_dev_symlink(). Moreover, the debug print is rewritten to make it clearer. [rjw: Changelog] Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Submitted by Srivatsa S. Bhat
Now that we have the infrastructure to perform a light-weight init/tear-down, use that in the cpufreq CPU hotplug notifier when invoked from the suspend/resume path. This also ensures that the file permissions of the cpufreq sysfs files are preserved across suspend/resume, something which commit a66b2e (cpufreq: Preserve sysfs files across suspend/resume) originally intended to do, but had to be reverted due to other problems. Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Submitted by Srivatsa S. Bhat
To perform light-weight cpu-init and teardown in the cpufreq subsystem during suspend/resume, we need to separate out the 2 main functionalities of the cpufreq CPU hotplug callbacks, as outlined below:
1. Init/tear-down of core cpufreq and CPU-specific components, which are critical to the correct functioning of the cpufreq subsystem.
2. Init/tear-down of cpufreq sysfs files during suspend/resume.
The first part requires accurate updates to the policy structure such as its ->cpus and ->related_cpus masks, whereas the second part requires that the policy->kobj structure is not released or re-initialized during suspend/resume. To handle both these requirements, we need to allow updates to the policy structure throughout suspend/resume, but prevent the structure from getting freed up. Also, we must have a mechanism by which the cpu-up callbacks can restore the policy structure, without allocating things afresh. (That also helps avoid memory leaks). To achieve this, we use 2 schemes:
a. Use a fallback per-cpu storage area for preserving the policy structures during suspend, so that they can be restored during resume appropriately.
b. Use the 'frozen' flag to determine when to free or allocate the policy structure vs when to restore the policy from the saved fallback storage.
Thus we can successfully preserve the structure across suspend/resume. Effectively, this helps us complete the separation of the 'light-weight' and the 'full' init/tear-down sequences in the cpufreq subsystem, so that this can be made use of in the suspend/resume scenario. Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
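A rough sketch of scheme (a) above; the variable name and exact shape are illustrative only, not necessarily what the patch uses:

    /* Fallback per-cpu storage for policies of CPUs taken down while the
     * system is suspending (the "frozen" path). */
    static DEFINE_PER_CPU(struct cpufreq_policy *, cpufreq_policy_save);

    /* Teardown, frozen == true: park the policy instead of freeing it. */
    per_cpu(cpufreq_policy_save, cpu) = policy;

    /* Init, frozen == true: restore the parked policy instead of
     * allocating and initializing a new one. */
    policy = per_cpu(cpufreq_policy_save, cpu);
    per_cpu(cpufreq_policy_save, cpu) = NULL;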
-
Submitted by Srivatsa S. Bhat
During suspend/resume we would like to do a light-weight init/teardown of CPUs in the cpufreq subsystem and preserve certain things such as sysfs files etc across suspend/resume transitions. Add a flag called 'frozen' to help distinguish the full init/teardown sequence from the light-weight one. Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Submitted by Srivatsa S. Bhat
During cpu offline, when the policy->cpu is going down, some other CPU present in the policy->cpus mask is nominated as the new policy->cpu. Extract this functionality from __cpufreq_remove_dev() and implement it in a helper function. This helps in upcoming code reorganization. Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
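A minimal sketch of what such a helper can look like (the function name is assumed for illustration; the real helper also has to deal with moving the policy's sysfs directory to the new CPU):

    static int nominate_new_policy_cpu(struct cpufreq_policy *policy,
                                       unsigned int old_cpu)
    {
            unsigned int new_cpu;

            /* Pick any other CPU that shares this policy. */
            new_cpu = cpumask_any_but(policy->cpus, old_cpu);
            if (new_cpu >= nr_cpu_ids)
                    return -ENODEV;  /* old_cpu was the last CPU in the policy */

            policy->cpu = new_cpu;
            return new_cpu;
    }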
-
Submitted by Srivatsa S. Bhat
cpufreq_add_dev_interface() includes the work of exposing the interface to the device, as well as a lot of unrelated stuff. Move the latter to cpufreq_add_dev(), where it is more appropriate. Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Submitted by Srivatsa S. Bhat
Separate out the allocation of the cpufreq policy structure (along with its error handling) to a helper function. This makes the code easier to read and also helps with some upcoming code reorganization. Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Submitted by Srivatsa S. Bhat
The call to cpufreq_update_policy() is placed in the CPU hotplug callback of cpufreq_stats, which has a higher priority than the CPU hotplug callback of cpufreq-core. As a result, during CPU_ONLINE/CPU_ONLINE_FROZEN, we end up calling cpufreq_update_policy() *before* calling cpufreq_add_dev()! And for uninitialized CPUs, it just returns silently, not doing anything. To add to that, cpufreq_stats is not even the right place to call cpufreq_update_policy() to begin with. The cpufreq core ought to handle this in its own callback, from an elegance/relevance perspective. So move the invocation of cpufreq_update_policy() to cpufreq_cpu_callback, and place it *after* cpufreq_add_dev(). Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 30 Jul 2013, 1 commit
-
-
Submitted by Rafael J. Wysocki
Since cpufreq_cpu_put() called by __cpufreq_remove_dev() drops the driver module refcount, __cpufreq_remove_dev() causes that refcount to become negative for the cpufreq driver after a suspend/resume cycle. This is not the only bad thing that happens there, however, because kobject_put() should only be called for the policy kobject at this point if the CPU is not the last one for that policy. Namely, if the given CPU is the last one for that policy, the policy kobject's refcount should be 1 at this point, as set by cpufreq_add_dev_interface(), and only needs to be dropped once for the kobject to go away. This actually happens under the cpu == 1 check, so it need not be done before by cpufreq_cpu_put(). On the other hand, if the given CPU is not the last one for that policy, this means that cpufreq_add_policy_cpu() has been called at least once for that policy and cpufreq_cpu_get() has been called for it too. To balance that cpufreq_cpu_get(), we need to call cpufreq_cpu_put() in that case. Thus, to fix the described problem and keep the reference counters balanced in both cases, move the cpufreq_cpu_put() call in __cpufreq_remove_dev() to the code path executed only for CPUs that share the policy with other CPUs. Reported-and-tested-by: Toralf Förster <toralf.foerster@gmx.de> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Reviewed-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> Cc: 3.10+ <stable@vger.kernel.org>
-
- 26 Jul 2013, 3 commits
-
-
Submitted by Stratos Karafotis
The target frequency calculation method in the ondemand governor has changed and it is now independent of the measured average frequency. Consequently, the __cpufreq_driver_getavg() function and the getavg member of struct cpufreq_driver are not used any more, so drop them. [rjw: Changelog] Signed-off-by: Stratos Karafotis <stratosk@semaphore.gr> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Submitted by Stratos Karafotis
The target frequency calculation method in the ondemand governor has changed and it is now independent of the measured average frequency. Consequently, the APERF/MPERF support in cpufreq is not used any more, so drop it. [rjw: Changelog] Signed-off-by: Stratos Karafotis <stratosk@semaphore.gr> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Submitted by Stratos Karafotis
The ondemand governor calculates load in terms of frequency and increases it only if load_freq is greater than up_threshold multiplied by the current or average frequency. This appears to produce oscillations of frequency between min and max because, for example, a relatively small load can easily saturate minimum frequency and lead the CPU to the max. Then, it will decrease back to the min due to small load_freq. Change the calculation method of load and target frequency on the basis of the following two observations:
- Load computation should not depend on the current or average measured frequency. For example, absolute load of 80% at 100MHz is not necessarily equivalent to 8% at 1000MHz in the next sampling interval.
- It should be possible to increase the target frequency to any value present in the frequency table proportional to the absolute load, rather than to the max only, so that: Target frequency = C * load, where we take C = policy->cpuinfo.max_freq / 100.
Tested on Intel i7-3770 CPU @ 3.40GHz and on Quad core 1500MHz Krait. Phoronix benchmark of Linux Kernel Compilation 3.1 test shows an increase ~1.5% in performance. cpufreq_stats (time_in_state) shows that middle frequencies are used more, with this patch. Highest and lowest frequencies were used less by ~9%. [rjw: We have run multiple other tests on kernels with this change applied and in the vast majority of cases it turns out that the resulting performance improvement also leads to reduced consumption of energy. The change is additionally justified by the overall simplification of the code in question.] Signed-off-by: Stratos Karafotis <stratosk@semaphore.gr> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
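In code form, the new policy roughly amounts to the following (a sketch, not the exact ondemand implementation; the load variable is assumed to already hold the absolute load in percent):

    unsigned int load;        /* absolute load, 0..100 percent */
    unsigned int freq_next;

    /* Target frequency = C * load, with C = policy->cpuinfo.max_freq / 100 */
    freq_next = load * policy->cpuinfo.max_freq / 100;

    /* Let cpufreq pick the lowest table frequency at or above the target,
     * instead of jumping straight to policy->max. */
    __cpufreq_driver_target(policy, freq_next, CPUFREQ_RELATION_L);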
-
- 23 Jul 2013, 1 commit
-
-
Submitted by Dirk Brandewie
Change to using max P-state instead of max turbo P-state. This change resolves two issues. On a quiet system intel_pstate can fail to respond to a load change. On CPU SKUs that have a limited number of P-states and no turbo range intel_pstate fails to select the highest available P-state. This change is suitable for stable v3.9+. References: https://bugzilla.kernel.org/show_bug.cgi?id=59481 Reported-and-tested-by: Arjan van de Ven <arjan@linux.intel.com> Reported-and-tested-by: dsmythies@telus.net Signed-off-by: Dirk Brandewie <dirk.j.brandewie@intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Cc: 3.9+ <stable@vger.kernel.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 17 Jul 2013, 1 commit
-
-
Submitted by Srivatsa S. Bhat
commit 2f7021a8 "cpufreq: protect 'policy->cpus' from offlining during __gov_queue_work()" caused a regression in CPU hotplug, because it lead to a deadlock between cpufreq governor worker thread and the CPU hotplug writer task. Lockdep splat corresponding to this deadlock is shown below: [ 60.277396] ====================================================== [ 60.277400] [ INFO: possible circular locking dependency detected ] [ 60.277407] 3.10.0-rc7-dbg-01385-g241fd04-dirty #1744 Not tainted [ 60.277411] ------------------------------------------------------- [ 60.277417] bash/2225 is trying to acquire lock: [ 60.277422] ((&(&j_cdbs->work)->work)){+.+...}, at: [<ffffffff810621b5>] flush_work+0x5/0x280 [ 60.277444] but task is already holding lock: [ 60.277449] (cpu_hotplug.lock){+.+.+.}, at: [<ffffffff81042d8b>] cpu_hotplug_begin+0x2b/0x60 [ 60.277465] which lock already depends on the new lock. [ 60.277472] the existing dependency chain (in reverse order) is: [ 60.277477] -> #2 (cpu_hotplug.lock){+.+.+.}: [ 60.277490] [<ffffffff810ac6d4>] lock_acquire+0xa4/0x200 [ 60.277503] [<ffffffff815b6157>] mutex_lock_nested+0x67/0x410 [ 60.277514] [<ffffffff81042cbc>] get_online_cpus+0x3c/0x60 [ 60.277522] [<ffffffff814b842a>] gov_queue_work+0x2a/0xb0 [ 60.277532] [<ffffffff814b7891>] cs_dbs_timer+0xc1/0xe0 [ 60.277543] [<ffffffff8106302d>] process_one_work+0x1cd/0x6a0 [ 60.277552] [<ffffffff81063d31>] worker_thread+0x121/0x3a0 [ 60.277560] [<ffffffff8106ae2b>] kthread+0xdb/0xe0 [ 60.277569] [<ffffffff815bb96c>] ret_from_fork+0x7c/0xb0 [ 60.277580] -> #1 (&j_cdbs->timer_mutex){+.+...}: [ 60.277592] [<ffffffff810ac6d4>] lock_acquire+0xa4/0x200 [ 60.277600] [<ffffffff815b6157>] mutex_lock_nested+0x67/0x410 [ 60.277608] [<ffffffff814b785d>] cs_dbs_timer+0x8d/0xe0 [ 60.277616] [<ffffffff8106302d>] process_one_work+0x1cd/0x6a0 [ 60.277624] [<ffffffff81063d31>] worker_thread+0x121/0x3a0 [ 60.277633] [<ffffffff8106ae2b>] kthread+0xdb/0xe0 [ 60.277640] [<ffffffff815bb96c>] ret_from_fork+0x7c/0xb0 [ 60.277649] -> #0 ((&(&j_cdbs->work)->work)){+.+...}: [ 60.277661] [<ffffffff810ab826>] __lock_acquire+0x1766/0x1d30 [ 60.277669] [<ffffffff810ac6d4>] lock_acquire+0xa4/0x200 [ 60.277677] [<ffffffff810621ed>] flush_work+0x3d/0x280 [ 60.277685] [<ffffffff81062d8a>] __cancel_work_timer+0x8a/0x120 [ 60.277693] [<ffffffff81062e53>] cancel_delayed_work_sync+0x13/0x20 [ 60.277701] [<ffffffff814b89d9>] cpufreq_governor_dbs+0x529/0x6f0 [ 60.277709] [<ffffffff814b76a7>] cs_cpufreq_governor_dbs+0x17/0x20 [ 60.277719] [<ffffffff814b5df8>] __cpufreq_governor+0x48/0x100 [ 60.277728] [<ffffffff814b6b80>] __cpufreq_remove_dev.isra.14+0x80/0x3c0 [ 60.277737] [<ffffffff815adc0d>] cpufreq_cpu_callback+0x38/0x4c [ 60.277747] [<ffffffff81071a4d>] notifier_call_chain+0x5d/0x110 [ 60.277759] [<ffffffff81071b0e>] __raw_notifier_call_chain+0xe/0x10 [ 60.277768] [<ffffffff815a0a68>] _cpu_down+0x88/0x330 [ 60.277779] [<ffffffff815a0d46>] cpu_down+0x36/0x50 [ 60.277788] [<ffffffff815a2748>] store_online+0x98/0xd0 [ 60.277796] [<ffffffff81452a28>] dev_attr_store+0x18/0x30 [ 60.277806] [<ffffffff811d9edb>] sysfs_write_file+0xdb/0x150 [ 60.277818] [<ffffffff8116806d>] vfs_write+0xbd/0x1f0 [ 60.277826] [<ffffffff811686fc>] SyS_write+0x4c/0xa0 [ 60.277834] [<ffffffff815bbbbe>] tracesys+0xd0/0xd5 [ 60.277842] other info that might help us debug this: [ 60.277848] Chain exists of: (&(&j_cdbs->work)->work) --> &j_cdbs->timer_mutex --> cpu_hotplug.lock [ 60.277864] Possible unsafe locking scenario: [ 60.277869] CPU0 CPU1 [ 60.277873] ---- ---- [ 
60.277877] lock(cpu_hotplug.lock); [ 60.277885] lock(&j_cdbs->timer_mutex); [ 60.277892] lock(cpu_hotplug.lock); [ 60.277900] lock((&(&j_cdbs->work)->work)); [ 60.277907] *** DEADLOCK *** [ 60.277915] 6 locks held by bash/2225: [ 60.277919] #0: (sb_writers#6){.+.+.+}, at: [<ffffffff81168173>] vfs_write+0x1c3/0x1f0 [ 60.277937] #1: (&buffer->mutex){+.+.+.}, at: [<ffffffff811d9e3c>] sysfs_write_file+0x3c/0x150 [ 60.277954] #2: (s_active#61){.+.+.+}, at: [<ffffffff811d9ec3>] sysfs_write_file+0xc3/0x150 [ 60.277972] #3: (x86_cpu_hotplug_driver_mutex){+.+...}, at: [<ffffffff81024cf7>] cpu_hotplug_driver_lock+0x17/0x20 [ 60.277990] #4: (cpu_add_remove_lock){+.+.+.}, at: [<ffffffff815a0d32>] cpu_down+0x22/0x50 [ 60.278007] #5: (cpu_hotplug.lock){+.+.+.}, at: [<ffffffff81042d8b>] cpu_hotplug_begin+0x2b/0x60 [ 60.278023] stack backtrace: [ 60.278031] CPU: 3 PID: 2225 Comm: bash Not tainted 3.10.0-rc7-dbg-01385-g241fd04-dirty #1744 [ 60.278037] Hardware name: Acer Aspire 5741G /Aspire 5741G , BIOS V1.20 02/08/2011 [ 60.278042] ffffffff8204e110 ffff88014df6b9f8 ffffffff815b3d90 ffff88014df6ba38 [ 60.278055] ffffffff815b0a8d ffff880150ed3f60 ffff880150ed4770 3871c4002c8980b2 [ 60.278068] ffff880150ed4748 ffff880150ed4770 ffff880150ed3f60 ffff88014df6bb00 [ 60.278081] Call Trace: [ 60.278091] [<ffffffff815b3d90>] dump_stack+0x19/0x1b [ 60.278101] [<ffffffff815b0a8d>] print_circular_bug+0x2b6/0x2c5 [ 60.278111] [<ffffffff810ab826>] __lock_acquire+0x1766/0x1d30 [ 60.278123] [<ffffffff81067e08>] ? __kernel_text_address+0x58/0x80 [ 60.278134] [<ffffffff810ac6d4>] lock_acquire+0xa4/0x200 [ 60.278142] [<ffffffff810621b5>] ? flush_work+0x5/0x280 [ 60.278151] [<ffffffff810621ed>] flush_work+0x3d/0x280 [ 60.278159] [<ffffffff810621b5>] ? flush_work+0x5/0x280 [ 60.278169] [<ffffffff810a9b14>] ? mark_held_locks+0x94/0x140 [ 60.278178] [<ffffffff81062d77>] ? __cancel_work_timer+0x77/0x120 [ 60.278188] [<ffffffff810a9cbd>] ? trace_hardirqs_on_caller+0xfd/0x1c0 [ 60.278196] [<ffffffff81062d8a>] __cancel_work_timer+0x8a/0x120 [ 60.278206] [<ffffffff81062e53>] cancel_delayed_work_sync+0x13/0x20 [ 60.278214] [<ffffffff814b89d9>] cpufreq_governor_dbs+0x529/0x6f0 [ 60.278225] [<ffffffff814b76a7>] cs_cpufreq_governor_dbs+0x17/0x20 [ 60.278234] [<ffffffff814b5df8>] __cpufreq_governor+0x48/0x100 [ 60.278244] [<ffffffff814b6b80>] __cpufreq_remove_dev.isra.14+0x80/0x3c0 [ 60.278255] [<ffffffff815adc0d>] cpufreq_cpu_callback+0x38/0x4c [ 60.278265] [<ffffffff81071a4d>] notifier_call_chain+0x5d/0x110 [ 60.278275] [<ffffffff81071b0e>] __raw_notifier_call_chain+0xe/0x10 [ 60.278284] [<ffffffff815a0a68>] _cpu_down+0x88/0x330 [ 60.278292] [<ffffffff81024cf7>] ? cpu_hotplug_driver_lock+0x17/0x20 [ 60.278302] [<ffffffff815a0d46>] cpu_down+0x36/0x50 [ 60.278311] [<ffffffff815a2748>] store_online+0x98/0xd0 [ 60.278320] [<ffffffff81452a28>] dev_attr_store+0x18/0x30 [ 60.278329] [<ffffffff811d9edb>] sysfs_write_file+0xdb/0x150 [ 60.278337] [<ffffffff8116806d>] vfs_write+0xbd/0x1f0 [ 60.278347] [<ffffffff81185950>] ? fget_light+0x320/0x4b0 [ 60.278355] [<ffffffff811686fc>] SyS_write+0x4c/0xa0 [ 60.278364] [<ffffffff815bbbbe>] tracesys+0xd0/0xd5 [ 60.280582] smpboot: CPU 1 is now offline The intention of that commit was to avoid warnings during CPU hotplug, which indicated that offline CPUs were getting IPIs from the cpufreq governor's work items. 
But the real root-cause of that problem was commit a66b2e50 (cpufreq: Preserve sysfs files across suspend/resume) because it totally skipped all the cpufreq callbacks during CPU hotplug in the suspend/resume path, and hence it never actually shut down the cpufreq governor's worker threads during CPU offline in the suspend/resume path. Reflecting back, the reason why we never suspected that commit as the root-cause earlier, was that the original issue was reported with just the halt command and nobody had brought in suspend/resume to the equation. The reason for _that_ in turn, as it turns out, is that earlier halt/shutdown was being done by disabling non-boot CPUs while tasks were frozen, just like suspend/resume.... but commit cf7df378 (reboot: migrate shutdown/reboot to boot cpu) which came somewhere along that very same time changed that logic: shutdown/halt no longer takes CPUs offline. Thus, the test-cases for reproducing the bug were vastly different and thus we went totally off the trail. Overall, it was one hell of a confusion with so many commits affecting each other and also affecting the symptoms of the problems in subtle ways. Finally, now since the original problematic commit (a66b2e50) has been completely reverted, revert this intermediate fix too (2f7021a8), to fix the CPU hotplug deadlock. Phew! Reported-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Reported-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> Tested-by: Peter Wu <lekensteyn@gmail.com> Cc: 3.10+ <stable@vger.kernel.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 15 Jul 2013, 3 commits
-
-
Submitted by Paul Gortmaker
The __cpuinit type of throwaway sections might have made sense some time ago when RAM was more constrained, but now the savings do not offset the cost and complications. For example, the fix in commit 5e427ec2 ("x86: Fix bit corruption at CPU resume time") is a good example of the nasty type of bugs that can be created with improper use of the various __init prefixes. After a discussion on LKML[1] it was decided that cpuinit should go the way of devinit and be phased out. Once all the users are gone, we can then finally remove the macros themselves from linux/init.h. This removes all the drivers/cpufreq uses of the __cpuinit macros from all C files. [1] https://lkml.org/lkml/2013/5/20/589 [v2: leave 2nd lines of args misaligned as requested by Viresh] Cc: "Rafael J. Wysocki" <rjw@sisk.pl> Cc: Viresh Kumar <viresh.kumar@linaro.org> Cc: cpufreq@vger.kernel.org Cc: linux-pm@vger.kernel.org Acked-by: Dirk Brandewie <dirk.j.brandewie@intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
-
Submitted by Paul Bolle
The Kconfig symbol CPU_FREQ_S3C24XX_DEBUGFS was renamed to ARM_S3C24XX_CPUFREQ_DEBUGFS in commit f023f8dd ("cpufreq: s3c24xx: move cpufreq driver to drivers/cpufreq"). But that commit missed one instance of its macro CONFIG_CPU_FREQ_S3C24XX_DEBUGFS. Rename it too. Signed-off-by: Paul Bolle <pebolle@tiscali.nl> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Submitted by Srivatsa S. Bhat
commit a66b2e (cpufreq: Preserve sysfs files across suspend/resume) has unfortunately caused several things in the cpufreq subsystem to break subtly after a suspend/resume cycle. The intention of that patch was to retain the file permissions of the cpufreq related sysfs files across suspend/resume. To achieve that, the commit completely removed the calls to cpufreq_add_dev() and __cpufreq_remove_dev() during suspend/resume transitions. But the problem is that those functions do 2 kinds of things:
1. Low-level initialization/tear-down that is critical to the correct functioning of cpufreq-core.
2. Kobject and sysfs related initialization/teardown.
Ideally we should have reorganized the code to cleanly separate these two responsibilities, and skipped only the sysfs related parts during suspend/resume. Since we skipped the entire callbacks instead (which also included some CPU and cpufreq-specific critical components), the cpufreq subsystem started behaving erratically after suspend/resume. So revert the commit to fix the regression. We'll revisit and address the original goal of that commit separately, since it involves quite a bit of careful code reorganization and appears to be non-trivial. (While reverting the commit, note that another commit f51e1eb6 (cpufreq: Fix cpufreq regression after suspend/resume) already reverted part of the original set of changes. So revert only the remaining ones). Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Tested-by: Paul Bolle <pebolle@tiscali.nl> Cc: 3.10+ <stable@vger.kernel.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 04 Jul 2013, 1 commit
-
-
Submitted by Viresh Kumar
Commit 7c30ed ("cpufreq: make sure frequency transitions are serialized") interacts poorly with systems that have a single core frequency for all cores. On such systems we have a single policy shared by several CPUs. When we do a frequency transition, the governor calls the pre and post change notifiers, which causes cpufreq_notify_transition() to run once per CPU. Since the policy is the same for all of those CPUs and the warnings added by that commit are generated by checking a per-policy flag, the warnings will be triggered for all cores after the first. Fix this by allowing the notifier to be called n times, where n is the number of cpus in policy->cpus. Reported-and-tested-by: Mark Brown <broonie@linaro.org> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
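A hedged sketch of the idea (the field name transition_ongoing is illustrative, not necessarily the real member): turn the per-policy flag into a counter and only warn once more PRECHANGE notifications are outstanding than there are CPUs in policy->cpus:

    /* Called for each CPUFREQ_PRECHANGE notification on this policy. */
    static void note_prechange(struct cpufreq_policy *policy)
    {
            if (policy->transition_ongoing >= cpumask_weight(policy->cpus))
                    pr_warn("unbalanced frequency transition on policy of CPU%u\n",
                            policy->cpu);
            else
                    policy->transition_ongoing++;
    }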
-
- 01 Jul 2013, 1 commit
-
-
Submitted by Srivatsa S. Bhat
Toralf Förster reported that the cpufreq ondemand governor behaves erratically (doesn't scale well) after a suspend/resume cycle. The problem was that the cpufreq subsystem's idea of the cpu frequencies differed from the actual frequencies set in the hardware after a suspend/resume cycle. Toralf bisected the problem to commit a66b2e50 (cpufreq: Preserve sysfs files across suspend/resume). Among other (harmless) things, that commit skipped the call to cpufreq_update_policy() in the resume path. But cpufreq_update_policy() plays an important role during resume, because it is responsible for checking whether the BIOS changed the cpu frequencies behind our back, resynchronizing the cpufreq subsystem's knowledge of the cpu frequencies, and updating them accordingly. So, restore the call to cpufreq_update_policy() in the resume path to fix the cpufreq regression. Reported-and-tested-by: Toralf Förster <toralf.foerster@gmx.de> Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> Cc: 3.10+ <stable@vger.kernel.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 28 Jun 2013, 3 commits
-
-
Submitted by Jacob Shin
Clear ->cur_policy when stopping a governor, or the ->cur_policy pointer may be stale on systems with have_governor_per_policy when a new policy is allocated due to CPU hotplug offline/online. [rjw: Changelog] Suggested-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Jacob Shin <jacob.shin@amd.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Submitted by Lan Tianyu
Commits fcf80582 (cpufreq: Simplify cpufreq_add_dev()) and aa77a527 (cpufreq: acpi-cpufreq: Don't set policy->related_cpus from .init()) changed the contents of the "related_cpus" sysfs attribute on systems where acpi-cpufreq is used, so user space can no longer get the list of CPUs which are in the same hardware coordination CPU domain (provided by the ACPI AML method _PSD) via "related_cpus". To make up for that loss, add a new sysfs attribute "freqdomain_cpus" for the acpi-cpufreq driver which exposes the list of CPUs in the same domain regardless of whether it is coordinated by hardware or software. [rjw: Changelog, documentation] References: https://bugzilla.kernel.org/show_bug.cgi?id=58761 Reported-by: Jean-Philippe Halimi <jean-philippe.halimi@exascale-computing.eu> Signed-off-by: Lan Tianyu <tianyu.lan@intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Submitted by Viresh Kumar
Whenever we are changing the frequency of a CPU, we are calling PRECHANGE and POSTCHANGE notifiers. They must be serialized, i.e. PRECHANGE or POSTCHANGE shouldn't be called twice contiguously. This can happen due to bugs in users of __cpufreq_driver_target() or in the actual cpufreq drivers that send these notifiers. This patch adds some protection against this: we now keep track of the last transition and check whether something went wrong. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
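Roughly, the check added here amounts to something like the following (a sketch with an assumed per-policy flag name, not the literal patch):

    /* Sketch: one flag per policy (field name assumed, not the real one). */
    static void check_prechange(struct cpufreq_policy *policy)
    {
            /* A second PRECHANGE without an intervening POSTCHANGE means
             * some caller skipped a notification. */
            WARN_ON(policy->transition_ongoing);
            policy->transition_ongoing = true;
    }

    static void check_postchange(struct cpufreq_policy *policy)
    {
            WARN_ON(!policy->transition_ongoing);
            policy->transition_ongoing = false;
    }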
-
- 26 Jun 2013, 1 commit
-
-
Submitted by Jacob Shin
When initializing the default powersave_bias value, we need to first make sure that this policy is running the ondemand governor. Reported-and-tested-by: Tim Gardner <tim.gardner@canonical.com> Signed-off-by: Jacob Shin <jacob.shin@amd.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 24 Jun 2013, 10 commits
-
-
Submitted by Viresh Kumar
PRECHANGE and POSTCHANGE notifiers must be called in groups, i.e. either both should be called or neither should be. In case we have started the PRECHANGE notifier and found an error, we must call the POSTCHANGE notifier with freqs.new = freqs.old to guarantee that the sequence of calling notifiers is complete. This patch fixes it. Acked-by: Stephen Warren <swarren@nvidia.com> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
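The pattern applied throughout this series of driver fixes is roughly the following (a sketch; set_hardware_frequency() is an assumed placeholder for the driver-specific frequency change, and the local names are illustrative):

    struct cpufreq_freqs freqs;
    int ret;

    freqs.old = policy->cur;
    freqs.new = target_freq;

    cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);

    ret = set_hardware_frequency(target_freq);  /* assumed driver helper */
    if (ret) {
            /* Hardware change failed: complete the notifier sequence anyway,
             * reporting that the frequency did not actually change. */
            freqs.new = freqs.old;
            cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
            return ret;
    }

    cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
    return 0;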
-
Submitted by Viresh Kumar
PRECHANGE and POSTCHANGE notifiers must be called in groups, i.e. either both should be called or neither should be. In case we have started the PRECHANGE notifier and found an error, we must call the POSTCHANGE notifier with freqs.new = freqs.old to guarantee that the sequence of calling notifiers is complete. This patch fixes it. Cc: Mark Brown <broonie@linaro.org> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
-
Submitted by Viresh Kumar
PRECHANGE and POSTCHANGE notifiers must be called in groups, i.e. either both should be called or neither should be. In case we have started the PRECHANGE notifier and found an error, we must call the POSTCHANGE notifier with freqs.new = freqs.old to guarantee that the sequence of calling notifiers is complete. The OMAP driver was taking care of it well, but wasn't restoring freqs.new to freqs.old in some cases. No extra code was required for that, as moving the PRECHANGE notifier down was a better option, so that we call it just before starting the frequency transition. Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
-
Submitted by Viresh Kumar
PRECHANGE and POSTCHANGE notifiers must be called in groups, i.e. either both should be called or neither should be. In case we have started the PRECHANGE notifier and found an error, we must call the POSTCHANGE notifier with freqs.new = freqs.old to guarantee that the sequence of calling notifiers is complete. This patch fixes it. This also moves the PRECHANGE notifier down so that we call it just before starting the frequency transition. Acked-by: Shawn Guo <shawn.guo@linaro.org> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
-
Submitted by Viresh Kumar
PRECHANGE and POSTCHANGE notifiers must be called in groups, i.e. either both should be called or neither should be. In case we have started the PRECHANGE notifier and found an error, we must call the POSTCHANGE notifier with freqs.new = freqs.old to guarantee that the sequence of calling notifiers is complete. This patch fixes it. Cc: Kukjin Kim <kgene.kim@samsung.com> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
-
Submitted by Viresh Kumar
PRECHANGE and POSTCHANGE notifiers must be called in groups, i.e. either both should be called or neither should be. In case we have started the PRECHANGE notifier and found an error, we must call the POSTCHANGE notifier with freqs.new = freqs.old to guarantee that the sequence of calling notifiers is complete. This patch fixes it. Acked-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
-
Submitted by Viresh Kumar
PRECHANGE and POSTCHANGE notifiers must be called in groups, i.e. either both should be called or neither should be. In case we have started the PRECHANGE notifier and found an error, we must call the POSTCHANGE notifier with freqs.new = freqs.old to guarantee that the sequence of calling notifiers is complete. The DaVinci driver was taking care of it, but the frequency wasn't restored to freqs.old. This patch fixes it. Acked-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
-
Submitted by Viresh Kumar
PRECHANGE and POSTCHANGE notifiers must be called in groups, i.e. either both should be called or neither should be. In case we have started the PRECHANGE notifier and found an error, we must call the POSTCHANGE notifier with freqs.new = freqs.old to guarantee that the sequence of calling notifiers is complete. This patch fixes it. This also removes the code setting policy->cur, as this is also done by the POSTCHANGE notifier. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
-
Submitted by Viresh Kumar
PRECHANGE and POSTCHANGE notifiers must be called in groups, i.e. either both should be called or neither should be. In case we have started the PRECHANGE notifier and found an error, we must call the POSTCHANGE notifier with freqs.new = freqs.old to guarantee that the sequence of calling notifiers is complete. This patch fixes it. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Submitted by Viresh Kumar
PRECHANGE and POSTCHANGE notifiers must be called in groups, i.e. either both should be called or neither should be. In case we have started the PRECHANGE notifier and found an error, we must call the POSTCHANGE notifier with freqs.new = freqs.old to guarantee that the sequence of calling notifiers is complete. This patch fixes it. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-