1. 08 Aug 2013, 1 commit
  2. 26 Jul 2013, 1 commit
    • cpufreq: ondemand: Change the calculation of target frequency · dfa5bb62
      Committed by Stratos Karafotis
      The ondemand governor calculates load in terms of frequency and
      increases it only if load_freq is greater than up_threshold
      multiplied by the current or average frequency.  This appears to
      produce oscillations of frequency between min and max because,
      for example, a relatively small load can easily saturate the
      minimum frequency and drive the CPU to the max.  Then, it will
      decrease back to the min due to a small load_freq.
      
      Change the calculation method of load and target frequency on the
      basis of the following two observations:
      
       - Load computation should not depend on the current or average
         measured frequency.  For example, absolute load of 80% at 100MHz
         is not necessarily equivalent to 8% at 1000MHz in the next
         sampling interval.
      
       - It should be possible to increase the target frequency to any
         value present in the frequency table proportional to the absolute
         load, rather than to the max only, so that:
      
         Target frequency = C * load
      
         where we take C = policy->cpuinfo.max_freq / 100.
      
      Tested on an Intel i7-3770 CPU @ 3.40GHz and on a quad-core 1500MHz
      Krait.  The Phoronix Linux Kernel Compilation 3.1 benchmark shows a
      ~1.5% increase in performance.  cpufreq_stats (time_in_state) shows
      that middle frequencies are used more with this patch; the highest
      and lowest frequencies were used ~9% less.
      
      [rjw: We have run multiple other tests on kernels with this
       change applied and in the vast majority of cases it turns out
       that the resulting performance improvement also leads to reduced
       consumption of energy.  The change is additionally justified by
       the overall simplification of the code in question.]
      Signed-off-by: Stratos Karafotis <stratosk@semaphore.gr>
      Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      dfa5bb62
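      A minimal user-space sketch of the proportional selection described above
      (target frequency = C * load, with C = policy->cpuinfo.max_freq / 100); the
      frequency table and the rounding to the next table entry are illustrative
      stand-ins, not the kernel's actual code:

          #include <stdio.h>

          /* Pick the lowest table frequency at or above the proportional target,
           * target = (cpuinfo_max_freq / 100) * load, as described in the commit. */
          static unsigned int pick_target_freq(const unsigned int *table, int n,
                                               unsigned int cpuinfo_max_freq,
                                               unsigned int load /* 0..100 */)
          {
              unsigned int target = (cpuinfo_max_freq / 100) * load;
              unsigned int best = table[n - 1];       /* cap at the highest entry */

              for (int i = 0; i < n; i++) {           /* table sorted ascending */
                  if (table[i] >= target) {
                      best = table[i];
                      break;
                  }
              }
              return best;
          }

          int main(void)
          {
              unsigned int table[] = { 800000, 1200000, 1800000, 2400000, 3400000 };

              /* 45% absolute load on a 3.4 GHz part: target 1530000 kHz, which
               * rounds up to the 1800000 kHz table entry. */
              printf("%u kHz\n", pick_target_freq(table, 5, 3400000, 45));
              return 0;
          }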
  3. 26 Jun 2013, 1 commit
  4. 13 May 2013, 1 commit
  5. 10 Apr 2013, 1 commit
  6. 01 Apr 2013, 4 commits
    • cpufreq: governors: Calculate iowait time only when necessary · 9366d840
      Committed by Stratos Karafotis
      Currently we always calculate the CPU iowait time and add it to idle time.
      If we are in ondemand and we use io_is_busy, we re-calculate the iowait
      time and subtract it from idle time.
      
      With this patch, iowait time is calculated only when necessary, avoiding
      the double call to get_cpu_iowait_time_us.  A parameter in
      get_cpu_idle_time distinguishes whether the iowait time should be added
      to idle time or not, without the need to keep prev_io_wait.
      Signed-off-by: Stratos Karafotis <stratosk@semaphore.gr>
      Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      9366d840
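      A user-space model of the accounting decision this commit describes;
      get_idle_us()/get_iowait_us() are made-up stand-ins for the kernel's
      get_cpu_idle_time_us()/get_cpu_iowait_time_us(), and the values are invented:

          #include <stdint.h>
          #include <stdio.h>

          /* Stand-ins for the kernel's per-CPU counters; values are made up. */
          static uint64_t get_idle_us(int cpu)   { (void)cpu; return 900000; }
          static uint64_t get_iowait_us(int cpu) { (void)cpu; return 100000; }

          /* Model of the accounting choice: with io_busy set (the io_is_busy
           * tunable), iowait counts as busy time, so it is neither added to idle
           * nor read at all, avoiding the second counter lookup. */
          static uint64_t cpu_idle_time(int cpu, int io_busy)
          {
              uint64_t idle = get_idle_us(cpu);

              if (!io_busy)
                  idle += get_iowait_us(cpu);   /* iowait treated as idle */
              return idle;
          }

          int main(void)
          {
              printf("io_is_busy=0: idle=%llu us\n",
                     (unsigned long long)cpu_idle_time(0, 0));
              printf("io_is_busy=1: idle=%llu us\n",
                     (unsigned long long)cpu_idle_time(0, 1));
              return 0;
          }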
    • cpufreq: governors: Avoid unnecessary per cpu timer interrupts · 031299b3
      Committed by Viresh Kumar
      The following patch introduced per-CPU timers or works for the ondemand
      and conservative governors.
      
      	commit 2abfa876
      	Author: Rickard Andersson <rickard.andersson@stericsson.com>
      	Date:   Thu Dec 27 14:55:38 2012 +0000
      
      	    cpufreq: handle SW coordinated CPUs
      
      This causes additional unnecessary interrupts on all CPUs when the load
      has recently been evaluated by another CPU; i.e. when CPU x has just
      evaluated the load, no other CPU needs to evaluate it again for the next
      sampling_rate period.
      
      Some code is present to avoid that, but we still get timer interrupts on
      all CPUs.  A good way to avoid this is to modify the delays for all CPUs
      (policy->cpus) whenever any CPU has evaluated the load.
      
      This patch makes this change along with some related code cleanup.
      Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      031299b3
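      A sketch of the idea under simplified assumptions (a fixed four-CPU policy
      and millisecond timestamps); in the kernel the "deadlines" below correspond
      to the per-CPU delayed works queued by the governor:

          #include <stdio.h>

          #define NR_POLICY_CPUS 4

          /* Hypothetical per-CPU "next sample" times, in ms since boot. */
          static unsigned long next_sample_ms[NR_POLICY_CPUS];

          /* Model of the commit's idea: once any CPU of the policy has evaluated
           * the load, push every sibling's next sample out by a full sampling
           * period, so the other CPUs do not take a timer interrupt just to
           * discover that the load was already evaluated. */
          static void load_evaluated(unsigned long now_ms,
                                     unsigned long sampling_rate_ms)
          {
              for (int cpu = 0; cpu < NR_POLICY_CPUS; cpu++)
                  next_sample_ms[cpu] = now_ms + sampling_rate_ms;
          }

          int main(void)
          {
              load_evaluated(1000, 10);   /* CPU x sampled at t=1000ms, rate 10ms */
              for (int cpu = 0; cpu < NR_POLICY_CPUS; cpu++)
                  printf("cpu%d next sample at %lu ms\n", cpu, next_sample_ms[cpu]);
              return 0;
          }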
    • cpufreq: ondemand: Don't update sample_type if we don't evaluate load again · 9d445920
      Committed by Viresh Kumar
      Because we now have per-CPU timers, we check whether the load needs to be
      evaluated again (in case it was evaluated recently).  Currently, the
      second CPU that gets the timer interrupt updates
      core_dbs_info->sample_type regardless of whether load evaluation is
      required, which is wrong because the first CPU depends on this variable
      keeping its older value.
      
      Moreover, in this case it is best to schedule the second CPU's timer for
      sampling_rate instead of freq_lo or freq_hi, as those must be managed by
      the other CPU.  Even if the other CPU idles in between, we wouldn't lose
      much power.
      Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      9d445920
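      A toy model of the two paths with made-up names; it only illustrates the
      ordering rule (the sibling leaves the flag alone and requeues at the full
      period), not the governor's real data structures:

          #include <stdio.h>

          enum sample_type { NORMAL_SAMPLE, SUB_SAMPLE };  /* stand-in flag */

          /* Model of the fix: only the CPU that actually evaluates the load may
           * update sample_type and use the powersave-bias sub-period
           * (freq_lo/freq_hi); a sibling whose timer fired while no evaluation
           * was needed leaves the flag untouched and is requeued one full
           * sampling period later. */
          static unsigned int timer_fired(int evaluation_needed,
                                          enum sample_type *flag,
                                          unsigned int sampling_rate,
                                          unsigned int sub_period)
          {
              if (!evaluation_needed)
                  return sampling_rate;      /* do not touch *flag */

              *flag = SUB_SAMPLE;            /* evaluating CPU owns the sub-sample */
              return sub_period;
          }

          int main(void)
          {
              enum sample_type flag = NORMAL_SAMPLE;
              unsigned int delay = timer_fired(0, &flag, 10000, 2500);

              printf("sibling: flag=%d (unchanged), requeue in %u us\n", flag, delay);
              return 0;
          }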
    • cpufreq: governor: Implement per policy instances of governors · 4d5dcc42
      Committed by Viresh Kumar
      Currently, there can't be multiple instances of a single governor type.
      If we have a multi-package system, with one instance of struct policy
      per package, we can't have multiple instances of the same governor;
      i.e. we can't have multiple instances of the ondemand governor for
      multiple packages.
      
      The governor's directory in sysfs is created at
      /sys/devices/system/cpu/cpufreq/governor-name/, which again reflects
      that there can be only one instance of a governor type in the system.
      
      This is a bottleneck for multi-cluster systems, where we want different
      packages to use the same governor type but with different tunables.
      
      This patch uses the infrastructure provided by the earlier patch and
      implements init/exit routines for the ondemand and conservative
      governors.
      Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      4d5dcc42
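      A user-space sketch of per-policy tunables with init/exit routines; the
      struct names and defaults here are illustrative, not the kernel's:

          #include <stdio.h>
          #include <stdlib.h>

          /* Hypothetical per-policy tunables; in the kernel these would live in
           * a governor-private struct attached to each policy. */
          struct od_tunables {
              unsigned int sampling_rate;
              unsigned int up_threshold;
          };

          struct policy {
              const char *name;             /* e.g. one entry per package/cluster */
              struct od_tunables *tuners;
          };

          /* Model of the init/exit routines the commit adds: each policy gets its
           * own tunables instead of all policies sharing one global set. */
          static int gov_init(struct policy *p)
          {
              p->tuners = malloc(sizeof(*p->tuners));
              if (!p->tuners)
                  return -1;
              p->tuners->sampling_rate = 10000;   /* per-instance defaults */
              p->tuners->up_threshold = 95;
              return 0;
          }

          static void gov_exit(struct policy *p)
          {
              free(p->tuners);
              p->tuners = NULL;
          }

          int main(void)
          {
              struct policy big = { "cluster0", NULL }, little = { "cluster1", NULL };

              gov_init(&big);
              gov_init(&little);
              little.tuners->up_threshold = 80;   /* tune one cluster independently */
              printf("%s up_threshold=%u, %s up_threshold=%u\n",
                     big.name, big.tuners->up_threshold,
                     little.name, little.tuners->up_threshold);
              gov_exit(&big);
              gov_exit(&little);
              return 0;
          }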
  7. 09 Feb 2013, 2 commits
  8. 02 Feb 2013, 6 commits
  9. 27 Nov 2012, 1 commit
    • cpufreq: ondemand: update sampling rate only on right CPUs · 3e33ee9e
      Committed by Fabio Baltieri
      Fix cpufreq_gov_ondemand to skip CPUs where another governor is used.
      
      The bug presents itself as a NULL pointer access on the mutex_lock()
      call, and can be reproduced on an SMP machine by setting the default
      governor to anything other than ondemand, setting a single CPU's
      governor to ondemand, then changing the sampling rate by writing to:
      
      > /sys/devices/system/cpu/cpufreq/ondemand/sampling_rate
      
      Backtrace:
      
      Nov 26 17:36:54 balto kernel: [  839.585241] BUG: unable to handle kernel NULL pointer dereference at           (null)
      Nov 26 17:36:54 balto kernel: [  839.585311] IP: [<ffffffff8174e082>] __mutex_lock_slowpath+0xb2/0x170
      [snip]
      Nov 26 17:36:54 balto kernel: [  839.587005] Call Trace:
      Nov 26 17:36:54 balto kernel: [  839.587030]  [<ffffffff8174da82>] mutex_lock+0x22/0x40
      Nov 26 17:36:54 balto kernel: [  839.587067]  [<ffffffff81610b8f>] store_sampling_rate+0xbf/0x150
      Nov 26 17:36:54 balto kernel: [  839.587110]  [<ffffffff81031e9c>] ?  __do_page_fault+0x1cc/0x4c0
      Nov 26 17:36:54 balto kernel: [  839.587153]  [<ffffffff813309bf>] kobj_attr_store+0xf/0x20
      Nov 26 17:36:54 balto kernel: [  839.587192]  [<ffffffff811bb62d>] sysfs_write_file+0xcd/0x140
      Nov 26 17:36:54 balto kernel: [  839.587234]  [<ffffffff8114c12c>] vfs_write+0xac/0x180
      Nov 26 17:36:54 balto kernel: [  839.587271]  [<ffffffff8114c472>] sys_write+0x52/0xa0
      Nov 26 17:36:54 balto kernel: [  839.587306]  [<ffffffff810321ce>] ?  do_page_fault+0xe/0x10
      Nov 26 17:36:54 balto kernel: [  839.587345]  [<ffffffff81751202>] system_call_fastpath+0x16/0x1b
      Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      3e33ee9e
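      A user-space model of the fix's intent, assuming a made-up per-CPU
      governor table; it only shows the "skip CPUs that don't run ondemand"
      rule, not the actual sysfs store path:

          #include <stdio.h>
          #include <string.h>

          #define NR_CPUS 4

          /* Governor name currently attached to each CPU's policy (illustrative). */
          static const char *governor_of[NR_CPUS] = {
              "performance", "ondemand", "performance", "performance",
          };

          /* Model of the fix: apply the new sampling rate only to CPUs whose
           * policy is governed by ondemand; touching the others would
           * dereference per-CPU ondemand state that was never initialised
           * (the NULL mutex in the backtrace above). */
          static void store_sampling_rate(unsigned int rate)
          {
              for (int cpu = 0; cpu < NR_CPUS; cpu++) {
                  if (strcmp(governor_of[cpu], "ondemand") != 0)
                      continue;                   /* another governor: skip */
                  printf("cpu%d: sampling_rate -> %u\n", cpu, rate);
              }
          }

          int main(void)
          {
              store_sampling_rate(20000);
              return 0;
          }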
  10. 24 Nov 2012, 1 commit
  11. 15 Nov 2012, 2 commits
  12. 15 Sep 2012, 1 commit
  13. 22 Aug 2012, 1 commit
  14. 01 Mar 2012, 1 commit
    • [CPUFREQ] CPUfreq ondemand: update sampling rate without waiting for next sampling · fd0ef7a0
      Committed by MyungJoo Ham
      When a new sampling rate is shorter than the current one (e.g., 1 sec
      --> 10 ms), regardless of how short the new one is, the current ondemand
      mechanism waits for the previously set timer to expire.
      
      For example, if the user has just asked for a sampling rate of 10 ms and
      the previous one was 1000 ms, the new rate may only become effective
      999 ms later, which may be unacceptable if the user intended to speed up
      sampling because the system is expected to react to CPU load
      fluctuations quickly from __now__.
      
      To address this issue, we need to cancel the previously set timer
      (schedule_delayed_work) and reset it if doing so is expected to trigger
      the delayed_work earlier.
      Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
      Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
      Signed-off-by: Dave Jones <davej@redhat.com>
      fd0ef7a0
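      A small model of the rescheduling rule, with millisecond timestamps as a
      simplifying assumption; the function name is invented for illustration:

          #include <stdio.h>

          /* Model of the commit's logic: when the user shrinks sampling_rate,
           * requeue the pending work only if the new period would expire before
           * the timer that is already scheduled; otherwise leave it alone. */
          static unsigned long maybe_reschedule(unsigned long now,
                                                unsigned long scheduled_expiry,
                                                unsigned long new_rate)
          {
              if (now + new_rate < scheduled_expiry)
                  return now + new_rate;    /* cancel old timer, arm the earlier one */
              return scheduled_expiry;      /* keep the existing timer */
          }

          int main(void)
          {
              /* Old rate 1000 ms armed at t=0, user switches to 10 ms at t=1. */
              printf("next sample at %lu ms\n", maybe_reschedule(1, 1000, 10));  /* 11 */
              /* Growing the rate does not delay a timer that is about to fire. */
              printf("next sample at %lu ms\n", maybe_reschedule(1, 5, 2000));   /* 5 */
              return 0;
          }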
  15. 15 Dec 2011, 1 commit
  16. 09 Dec 2011, 1 commit
  17. 06 Dec 2011, 1 commit
  18. 08 Sep 2011, 1 commit
    • nohz: Fix update_ts_time_stat idle accounting · 6beea0cd
      Committed by Michal Hocko
      update_ts_time_stat currently updates idle time even if we are in the
      iowait loop at the moment.  The only real users of the idle counter
      (via get_cpu_idle_time_us) are CPU governors, and they expect to get the
      cumulative time for both idle and iowait.  The value (idle_sleeptime) is
      also printed to userspace by print_cpu, but it prints both idle and
      iowait times, so the idle part is misleading.
      
      Let's clean this up: fix update_ts_time_stat to account both counters
      properly and update consumers of idle to consider iowait time as well.
      Once we do this we can use get_cpu_{idle,iowait}_time_us from other
      contexts as well and get the expected values.
      Signed-off-by: Michal Hocko <mhocko@suse.cz>
      Cc: Dave Jones <davej@redhat.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Link: http://lkml.kernel.org/r/e9c909c221a8da402c4da07e4cd968c3218f8eb1.1314172057.git.mhocko@suse.cz
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      6beea0cd
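      A user-space model of the corrected accounting; the struct and counters
      are stand-ins for the tick-sched fields, not the kernel code:

          #include <stdint.h>
          #include <stdio.h>

          /* Stand-ins for ts->idle_sleeptime / ts->iowait_sleeptime. */
          struct ts_stats {
              uint64_t idle_us;
              uint64_t iowait_us;
          };

          /* Model of the fixed accounting: a sleep interval goes to exactly one
           * counter, iowait if tasks were blocked on I/O, idle otherwise, so
           * callers can sum the two to recover the old "idle includes iowait"
           * behaviour. */
          static void update_time_stats(struct ts_stats *ts, uint64_t delta_us,
                                        int nr_iowait)
          {
              if (nr_iowait > 0)
                  ts->iowait_us += delta_us;
              else
                  ts->idle_us += delta_us;
          }

          int main(void)
          {
              struct ts_stats ts = { 0, 0 };

              update_time_stats(&ts, 3000, 0);   /* pure idle */
              update_time_stats(&ts, 2000, 1);   /* waiting on I/O */
              printf("idle=%llu us iowait=%llu us\n",
                     (unsigned long long)ts.idle_us,
                     (unsigned long long)ts.iowait_us);
              return 0;
          }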
  19. 09 Aug 2011, 1 commit
  20. 17 Mar 2011, 4 commits
  21. 26 Jan 2011, 1 commit
  22. 22 Oct 2010, 1 commit
    • [CPUFREQ] add sampling_down_factor tunable to improve ondemand performance · 3f78a9f7
      Committed by David C Niemi
      Adds a new global tunable, sampling_down_factor.  Set to 1, it makes no
      change to existing behavior, but set to greater than 1 (e.g. 100) it
      acts as a multiplier for the scheduling interval used to re-evaluate
      load when the CPU is at its top speed due to high load.  This improves
      performance by reducing the overhead of load evaluation and by helping
      the CPU stay at its top speed when truly busy, rather than shifting
      back and forth in speed.  This tunable has no effect on behavior at
      lower speeds/lower CPU loads.
      
      This patch is against 2.6.36-rc6.
      
      This patch should help solve kernel bug 19672 "ondemand is slow".
      Signed-off-by: David Niemi <dniemi@verisign.com>
      Acked-by: Venkatesh Pallipadi <venki@google.com>
      CC: Daniel Hollocher <danielhollocher@gmail.com>
      CC: <cpufreq-list@vger.kernel.org>
      CC: <linux-kernel@vger.kernel.org>
      Signed-off-by: Dave Jones <davej@redhat.com>
      3f78a9f7
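      A sketch of how the multiplier is meant to behave, using made-up helper
      names; delays are in microseconds like the sampling_rate tunable:

          #include <stdio.h>

          /* Model of the tunable: while the CPU sits at its top speed under high
           * load, the next sample is pushed out by sampling_rate *
           * sampling_down_factor, so load is re-evaluated less often; any other
           * state uses the plain rate. */
          static unsigned int next_sample_delay(unsigned int sampling_rate,
                                                unsigned int sampling_down_factor,
                                                int at_max_freq_and_busy)
          {
              if (at_max_freq_and_busy)
                  return sampling_rate * sampling_down_factor;
              return sampling_rate;
          }

          int main(void)
          {
              printf("busy at max: %u us\n", next_sample_delay(10000, 100, 1)); /* 1000000 */
              printf("otherwise:   %u us\n", next_sample_delay(10000, 100, 0)); /* 10000 */
              return 0;
          }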
  23. 04 Aug 2010, 2 commits
  24. 10 May 2010, 2 commits
  25. 10 Apr 2010, 1 commit