1. 02 Feb 2013 (6 commits)
  2. 27 Nov 2012 (1 commit)
      cpufreq: ondemand: update sampling rate only on right CPUs · 3e33ee9e
      Authored by Fabio Baltieri
      Fix cpufreq_gov_ondemand to skip CPUs where another governor is used.
      
      The bug presents itself as a NULL pointer access on the mutex_lock() call,
      and can be reproduced on an SMP machine by setting the default governor
      to anything other than ondemand, setting a single CPU's governor to
      ondemand, then changing the sampling rate by writing to:
      
      > /sys/devices/system/cpu/cpufreq/ondemand/sampling_rate
      
      Backtrace:
      
      Nov 26 17:36:54 balto kernel: [  839.585241] BUG: unable to handle kernel NULL pointer dereference at           (null)
      Nov 26 17:36:54 balto kernel: [  839.585311] IP: [<ffffffff8174e082>] __mutex_lock_slowpath+0xb2/0x170
      [snip]
      Nov 26 17:36:54 balto kernel: [  839.587005] Call Trace:
      Nov 26 17:36:54 balto kernel: [  839.587030]  [<ffffffff8174da82>] mutex_lock+0x22/0x40
      Nov 26 17:36:54 balto kernel: [  839.587067]  [<ffffffff81610b8f>] store_sampling_rate+0xbf/0x150
      Nov 26 17:36:54 balto kernel: [  839.587110]  [<ffffffff81031e9c>] ?  __do_page_fault+0x1cc/0x4c0
      Nov 26 17:36:54 balto kernel: [  839.587153]  [<ffffffff813309bf>] kobj_attr_store+0xf/0x20
      Nov 26 17:36:54 balto kernel: [  839.587192]  [<ffffffff811bb62d>] sysfs_write_file+0xcd/0x140
      Nov 26 17:36:54 balto kernel: [  839.587234]  [<ffffffff8114c12c>] vfs_write+0xac/0x180
      Nov 26 17:36:54 balto kernel: [  839.587271]  [<ffffffff8114c472>] sys_write+0x52/0xa0
      Nov 26 17:36:54 balto kernel: [  839.587306]  [<ffffffff810321ce>] ?  do_page_fault+0xe/0x10
      Nov 26 17:36:54 balto kernel: [  839.587345]  [<ffffffff81751202>] system_call_fastpath+0x16/0x1b
      Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  3. 24 Nov 2012 (1 commit)
  4. 15 Nov 2012 (2 commits)
  5. 15 Sep 2012 (1 commit)
  6. 22 Aug 2012 (1 commit)
  7. 01 Mar 2012 (1 commit)
      [CPUFREQ] CPUfreq ondemand: update sampling rate without waiting for next sampling · fd0ef7a0
      Authored by MyungJoo Ham
      When a new sampling rate is shorter than the current one (e.g., 1 sec
      --> 10 ms), regardless of how short the new one is, the current ondemand
      mechanism waits for the previously set timer to expire.
      
      For example, if the user has just requested that the sampling rate be
      10 ms from now and the previous rate was 1000 ms, the new rate may only
      become effective 999 ms later, which may not be acceptable if the user
      intended to speed up sampling because the system is expected to react
      to CPU load fluctuations quickly from __now__.
      
      In order to address this issue, we need to cancel the previously set
      timer (schedule_delayed_work) and reset it if doing so is expected to
      trigger the delayed_work earlier.
      Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
      Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
      Signed-off-by: Dave Jones <davej@redhat.com>
  8. 15 Dec 2011 (1 commit)
  9. 09 Dec 2011 (1 commit)
  10. 06 Dec 2011 (1 commit)
  11. 08 Sep 2011 (1 commit)
      nohz: Fix update_ts_time_stat idle accounting · 6beea0cd
      Authored by Michal Hocko
      update_ts_time_stat currently updates idle time even if we are in an
      iowait loop at the moment. The only real users of the idle counter
      (via get_cpu_idle_time_us) are CPU governors, and they expect to get
      cumulative time for both the idle and iowait states.
      The value (idle_sleeptime) is also printed to userspace by print_cpu,
      but it prints both idle and iowait times, so the idle part is misleading.
      
      Let's clean this up and fix update_ts_time_stat to account both counters
      properly and update consumers of idle to consider iowait time as well.
      If we do this we might use get_cpu_{idle,iowait}_time_us from other
      contexts as well and we will get expected values.
      Signed-off-by: Michal Hocko <mhocko@suse.cz>
      Cc: Dave Jones <davej@redhat.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Link: http://lkml.kernel.org/r/e9c909c221a8da402c4da07e4cd968c3218f8eb1.1314172057.git.mhocko@suse.cz
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  12. 09 Aug 2011 (1 commit)
  13. 17 Mar 2011 (4 commits)
  14. 26 Jan 2011 (1 commit)
  15. 22 Oct 2010 (1 commit)
      [CPUFREQ] add sampling_down_factor tunable to improve ondemand performance · 3f78a9f7
      Authored by David C Niemi
      Adds a new global tunable, sampling_down_factor.  Set to 1, it makes no
      change to existing behavior; set to greater than 1 (e.g. 100), it acts
      as a multiplier for the sampling interval used to reevaluate load when
      the CPU is at its top speed due to high load.  This improves
      performance by reducing the overhead of load evaluation and helping
      the CPU stay at its top speed when truly busy, rather than shifting
      back and forth in speed.  This tunable has no effect on behavior at
      lower speeds/lower CPU loads.
      
      This patch is against 2.6.36-rc6.
      
      This patch should help solve kernel bug 19672 "ondemand is slow".
      Signed-off-by: David Niemi <dniemi@verisign.com>
      Acked-by: Venkatesh Pallipadi <venki@google.com>
      CC: Daniel Hollocher <danielhollocher@gmail.com>
      CC: <cpufreq-list@vger.kernel.org>
      CC: <linux-kernel@vger.kernel.org>
      Signed-off-by: Dave Jones <davej@redhat.com>
  16. 04 Aug 2010 (2 commits)
  17. 10 May 2010 (2 commits)
  18. 10 Apr 2010 (1 commit)
  19. 13 Jan 2010 (1 commit)
  20. 18 Nov 2009 (1 commit)
  21. 02 Sep 2009 (1 commit)
      [CPUFREQ] ondemand - Use global sysfs dir for tuning settings · 0e625ac1
      Authored by Thomas Renninger
      Ondemand has only global variables for its userspace tunables via sysfs,
      but they were exposed per CPU, which wrongly implies to the user that
      the settings are applied per CPU. Locking sysfs against concurrent
      access also won't be necessary anymore after the deprecation period.
      
      This means the ondemand config dir is moved:
      /sys/devices/system/cpu/cpu*/cpufreq/ondemand ->
           /sys/devices/system/cpu/cpufreq/ondemand
      
      The old files will still exist, but reading or writing to them will
      result in one deprecation message (printk_once) in syslog per file.
      Signed-off-by: Thomas Renninger <trenn@suse.de>
      Signed-off-by: Dave Jones <davej@redhat.com>
  22. 07 Jul 2009 (2 commits)
  23. 24 Jun 2009 (1 commit)
      percpu: clean up percpu variable definitions · 245b2e70
      Authored by Tejun Heo
      Percpu variable definition is about to be updated such that all percpu
      symbols including the static ones must be unique.  Update percpu
      variable definitions accordingly.
      
      * as,cfq: rename ioc_count uniquely
      
      * cpufreq: rename cpu_dbs_info uniquely
      
      * xen: move nesting_count out of xen_evtchn_do_upcall() and rename it
      
      * mm: move ratelimits out of balance_dirty_pages_ratelimited_nr() and
        rename it
      
      * ipv4,6: rename cookie_scratch uniquely
      
      * x86 perf_counter: rename prev_left to pmc_prev_left, irq_entry to
        pmc_irq_entry and nmi_entry to pmc_nmi_entry
      
      * perf_counter: rename disable_count to perf_disable_count
      
      * ftrace: rename test_event_disable to ftrace_test_event_disable
      
      * kmemleak: rename test_pointer to kmemleak_test_pointer
      
      * mce: rename next_interval to mce_next_interval
      
      [ Impact: percpu usage cleanups, no duplicate static percpu var names ]
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: Jens Axboe <jens.axboe@oracle.com>
      Cc: Dave Jones <davej@redhat.com>
      Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
      Cc: linux-mm <linux-mm@kvack.org>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Steven Rostedt <srostedt@redhat.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Andi Kleen <andi@firstfloor.org>
  24. 15 Jun 2009 (2 commits)
  25. 27 May 2009 (1 commit)
      [CPUFREQ] fix timer teardown in ondemand governor · b14893a6
      Authored by Mathieu Desnoyers
      * Rafael J. Wysocki (rjw@sisk.pl) wrote:
      > This message has been generated automatically as a part of a report
      > of regressions introduced between 2.6.28 and 2.6.29.
      >
      > The following bug entry is on the current list of known regressions
      > introduced between 2.6.28 and 2.6.29.  Please verify if it still should
      > be listed and let me know (either way).
      >
      >
      > Bug-Entry	: http://bugzilla.kernel.org/show_bug.cgi?id=13186
      > Subject		: cpufreq timer teardown problem
      > Submitter	: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      > Date		: 2009-04-23 14:00 (24 days old)
      > References	: http://marc.info/?l=linux-kernel&m=124049523515036&w=4
      > Handled-By	: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      > Patch		: http://patchwork.kernel.org/patch/19754/
      > 		  http://patchwork.kernel.org/patch/19753/
      >
      
      (updated changelog)
      
      cpufreq fix timer teardown in ondemand governor
      
      The problem is that dbs_timer_exit() uses cancel_delayed_work() when it should
      use cancel_delayed_work_sync(). cancel_delayed_work() does not wait for the
      workqueue handler to exit.
      
      The ondemand governor does not seem to be affected because the
      "if (!dbs_info->enable)" check at the beginning of the workqueue handler
      returns immediately without rescheduling the work. The conservative
      governor in 2.6.30-rc has the same check as the ondemand governor, which
      usually makes things run smoothly. However, if the governor is quickly
      stopped and then started, this could lead to the following race:
      
      dbs_enable could be reenabled and multiple do_dbs_timer handlers would run.
      This is why a synchronized teardown is required.
      
      The following patch applies to, at least, 2.6.28.x, 2.6.29.1, 2.6.30-rc2.
      
      Depends on patch
      cpufreq: remove rwsem lock from CPUFREQ_GOV_STOP call
      Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      CC: Andrew Morton <akpm@linux-foundation.org>
      CC: gregkh@suse.de
      CC: stable@kernel.org
      CC: cpufreq@vger.kernel.org
      CC: Ingo Molnar <mingo@elte.hu>
      CC: rjw@sisk.pl
      CC: Ben Slusky <sluskyb@paranoiacs.org>
      Signed-off-by: Dave Jones <davej@redhat.com>
  26. 25 Feb 2009 (2 commits)