1. 09 March 2016 (26 commits)
  2. 05 February 2016 (1 commit)
  3. 10 December 2015 (2 commits)
    • cpufreq: ondemand: update update_sampling_rate() to make it more efficient · f08f638b
      Committed by Viresh Kumar
      Currently, update_sampling_rate() runs over every online CPU and
      cancels/requeues the timers of all policy->cpus each time, so each
      policy ends up being processed once per CPU it contains. The work only
      needs to be done once per policy.
      
      Create a cpumask of the CPUs still to be visited and clear a policy's
      CPUs from it as soon as that policy has been processed, so we don't
      traverse the remaining CPUs of the same policy again (see the sketch
      after this entry).
      Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      f08f638b
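      A minimal sketch of the idea above, not the actual patch; the cpumask
      and cpufreq helpers are real kernel APIs, while
      update_timers_for_policy() is a hypothetical placeholder for the
      per-policy timer update:

      	#include <linux/cpufreq.h>
      	#include <linux/cpumask.h>

      	static void update_timers_for_policy(struct cpufreq_policy *policy); /* hypothetical */

      	static void update_sampling_rate_sketch(void)
      	{
      		struct cpumask todo;
      		int cpu;

      		/* Start with all online CPUs still to be visited. */
      		cpumask_copy(&todo, cpu_online_mask);

      		for_each_cpu(cpu, &todo) {
      			struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);

      			if (!policy)
      				continue;

      			/* Clear every CPU of this policy so the policy is
      			 * processed exactly once. */
      			cpumask_andnot(&todo, &todo, policy->cpus);

      			update_timers_for_policy(policy);
      			cpufreq_cpu_put(policy);
      		}
      	}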
    • cpufreq: governor: replace per-CPU delayed work with timers · 70f43e5e
      Committed by Viresh Kumar
      cpufreq governors evaluate the load once per sampling period and, based
      on that, update the frequency for the group of CPUs belonging to the
      same cpufreq policy.
      
      This must be done in a single thread on behalf of all policy->cpus, but
      because we don't want to wake up idle CPUs just for that, deferrable
      work is used. If a single delayed deferrable work item were used for
      the entire policy, the CPU it was queued on could be idle and we might
      end up never changing the frequency for the whole group despite load
      variations on the other CPUs.
      
      So we were forced to keep per-CPU work items: only the one that expires
      first needs to do the real work, and the others are simply rescheduled
      for the next sampling time.
      
      Until now the heavier solution was used: a delayed deferrable work item
      per CPU, which is a combination of a timer and a work item.
      
      This can be made lighter by keeping per-CPU deferrable timers with a
      single work item, which is queued by the first timer that expires.
      
      This patch does just that; the important changes are (a sketch follows
      this entry):
      - The timer handler runs in irq context, so a spinlock must be used
        instead of timer_mutex. A separate timer_lock is therefore
        introduced. This also makes it clear what exactly the mutex and the
        spinlock protect.
      - A new field, 'skip_work', is added to track whether the timer
        handlers are allowed to queue the work item. More comments are
        present in the code.
      Suggested-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
      Reviewed-by: Ashwin Chaugule <ashwin.chaugule@linaro.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      70f43e5e
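      A minimal sketch of the timer/work split described above, assuming
      simplified stand-in structures; 'skip_work' and 'timer_lock' follow
      the commit text, everything else is illustrative rather than the exact
      kernel code:

      	#include <linux/spinlock.h>
      	#include <linux/timer.h>
      	#include <linux/workqueue.h>

      	struct policy_dbs_sketch {
      		spinlock_t timer_lock;   /* irq-safe, protects skip_work */
      		unsigned int skip_work;  /* non-zero: timers must not queue work */
      		struct work_struct work; /* single work item for the whole policy */
      	};

      	/* One timer is armed per policy CPU; this handler runs in irq
      	 * context, hence the spinlock instead of timer_mutex. */
      	static void dbs_timer_fn(unsigned long data)
      	{
      		struct policy_dbs_sketch *p = (struct policy_dbs_sketch *)data;
      		unsigned long flags;

      		spin_lock_irqsave(&p->timer_lock, flags);
      		if (!p->skip_work) {
      			/* Only the first timer to expire queues the work. */
      			p->skip_work++;
      			schedule_work(&p->work);
      		}
      		spin_unlock_irqrestore(&p->timer_lock, flags);
      	}

      	/* The work handler runs in process context: it evaluates the load,
      	 * updates the frequency and re-arms the per-CPU timers (omitted). */
      	static void dbs_work_fn(struct work_struct *work)
      	{
      		struct policy_dbs_sketch *p =
      			container_of(work, struct policy_dbs_sketch, work);

      		/* ... load evaluation and frequency update ... */

      		spin_lock_irq(&p->timer_lock);
      		p->skip_work = 0;        /* allow the next sampling period */
      		spin_unlock_irq(&p->timer_lock);
      	}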
  4. 07 December 2015 (3 commits)
  5. 28 October 2015 (1 commit)
  6. 21 July 2015 (2 commits)
  7. 18 July 2015 (2 commits)
  8. 15 June 2015 (2 commits)
    • cpufreq: governor: Serialize governor callbacks · 732b6d61
      Committed by Viresh Kumar
      There are several races in the cpufreq core around governors (only
      ondemand and conservative), reported by different people.
      
      There are at least two race scenarios present in the governor code:
       (a) Concurrent access/updates of governor internal structures.
      
       Fields such as 'dbs_data->usage_count' can be accessed simultaneously
       for different policies that use the same governor structure (i.e. with
       the CPUFREQ_HAVE_GOVERNOR_PER_POLICY flag unset), and because of this
       we can end up dereferencing bad pointers.
      
       For example, consider a system with two CPUs and separate 'struct
       cpufreq_policy' instances, with the ondemand governor on CPU0 and
       powersave on CPU1. CPU0 switches to powersave while CPU1 switches to
       ondemand:
      	CPU0				CPU1
      
      	store*				store*
      
      	cpufreq_governor_exit()		cpufreq_governor_init()
      					dbs_data = cdata->gdbs_data;
      
      	if (!--dbs_data->usage_count)
      		kfree(dbs_data);
      
      					dbs_data->usage_count++;
      					*Bad pointer dereference*
      
       There are other races possible between EXIT and START/STOP/LIMIT as
       well. It's really complicated.
      
       (b) Switching governor state in a bad sequence:
      
       For example, trying to switch a governor to the START state while it
       is in the EXIT state. There are some checks in __cpufreq_governor(),
       but they aren't sufficient, as they compare events against
       'policy->governor_enabled', whereas we need to take into account the
       governor's own state, which can be shared by multiple policies.
      
      These two issues need to be solved separately, and the responsibility
      should be properly divided between the cpufreq core and the governor
      core.
      
      The first problem is more about the governor core, as it needs to
      protect its structures properly. The second problem should be fixed in
      the cpufreq core rather than in the governors, as it is all about the
      sequence of events.
      
      This patch addresses only the first problem.
      
      There are two types of data we need to protect:
      - 'struct common_dbs_data': no matter what, there is a single copy of
        this per governor.
      - 'struct dbs_data': with the CPUFREQ_HAVE_GOVERNOR_PER_POLICY flag
        set, there is a per-policy copy of this data, otherwise a single
        copy.
      
      Because of these complexities, the mutex in 'struct dbs_data' is
      insufficient to solve the problem. For example, we need to protect the
      fetching of 'dbs_data' from the different structures at the beginning
      of cpufreq_governor_dbs(), to make sure it isn't being updated
      concurrently.
      
      This can be fixed by guaranteeing that the event-handling code of an
      individual governor is serialized. That is best done with a mutex per
      governor, and the natural placeholder for it is 'struct
      common_dbs_data'.
      
      So this patch moves the mutex from 'struct dbs_data' to 'struct
      common_dbs_data' and takes it at the beginning and drops it at the end
      of cpufreq_governor_dbs() (see the sketch after this entry).
      
      Tested with and without the following configuration options:
      
      CONFIG_LOCKDEP_SUPPORT=y
      CONFIG_DEBUG_RT_MUTEXES=y
      CONFIG_DEBUG_PI_LIST=y
      CONFIG_DEBUG_SPINLOCK=y
      CONFIG_DEBUG_MUTEXES=y
      CONFIG_DEBUG_LOCK_ALLOC=y
      CONFIG_PROVE_LOCKING=y
      CONFIG_LOCKDEP=y
      CONFIG_DEBUG_ATOMIC_SLEEP=y
      Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
      Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      732b6d61
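      A minimal sketch of the per-governor serialization described above,
      assuming simplified stand-ins for the structures named in the commit;
      it is not the exact kernel code:

      	#include <linux/cpufreq.h>
      	#include <linux/mutex.h>

      	struct common_dbs_data_sketch {
      		struct mutex mutex;  /* one per governor: serializes all events */
      		/* ... governor-wide callbacks and data ... */
      	};

      	static int cpufreq_governor_dbs_sketch(struct cpufreq_policy *policy,
      					       struct common_dbs_data_sketch *cdata,
      					       unsigned int event)
      	{
      		int ret = 0;

      		/* INIT/EXIT/START/STOP/LIMIT for every policy using this
      		 * governor are fully serialized, so 'dbs_data' cannot be
      		 * freed while another CPU is still dereferencing it. */
      		mutex_lock(&cdata->mutex);

      		switch (event) {
      		case CPUFREQ_GOV_POLICY_INIT:
      			/* safe to read and bump dbs_data->usage_count here */
      			break;
      		/* ... other events ... */
      		default:
      			ret = -EINVAL;
      			break;
      		}

      		mutex_unlock(&cdata->mutex);
      		return ret;
      	}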
    • cpufreq: governor: register notifier from cs_init() · 8e0484d2
      Committed by Viresh Kumar
      Notifiers are required only by the conservative governor, and the
      common governor code is unnecessarily polluted with them. Register and
      unregister the notifier from cs_init()/cs_exit() instead of from
      cpufreq_governor_dbs() (a sketch follows this entry).
      Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
      Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      8e0484d2
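      A minimal sketch of moving the notifier into the conservative
      governor's own init/exit path; the notifier API calls are real kernel
      interfaces, while the callback body and the *_sketch names are
      illustrative assumptions:

      	#include <linux/cpufreq.h>
      	#include <linux/notifier.h>

      	/* The real callback tracks frequency transitions for the
      	 * conservative governor; here it is just a placeholder. */
      	static int cs_cpufreq_notifier_sketch(struct notifier_block *nb,
      					      unsigned long event, void *data)
      	{
      		return NOTIFY_OK;
      	}

      	static struct notifier_block cs_notifier_sketch = {
      		.notifier_call = cs_cpufreq_notifier_sketch,
      	};

      	/* Only the conservative governor registers the notifier, so the
      	 * common governor code no longer has to know about it. */
      	static int cs_init_sketch(void)
      	{
      		return cpufreq_register_notifier(&cs_notifier_sketch,
      						 CPUFREQ_TRANSITION_NOTIFIER);
      	}

      	static void cs_exit_sketch(void)
      	{
      		cpufreq_unregister_notifier(&cs_notifier_sketch,
      					    CPUFREQ_TRANSITION_NOTIFIER);
      	}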
  9. 21 July 2014 (1 commit)
    • cpufreq: ondemand: Eliminate the deadband effect · 6393d6a1
      Committed by Stratos Karafotis
      Currently, ondemand calculates the target frequency proportionally to
      the load using the formula:
      	Target frequency = C * load
      	where C = policy->cpuinfo.max_freq / 100
      
      However, in many cases the minimum available frequency is fairly high,
      and the above calculation introduces a dead band from load 0 up to
      100 * policy->cpuinfo.min_freq / policy->cpuinfo.max_freq, in which the
      target frequency always comes out below policy->cpuinfo.min_freq and
      the minimum frequency is selected.
      
      For example, on an Intel i7-3770 @ 3.4GHz, policy->cpuinfo.min_freq = 1600000
      and policy->cpuinfo.max_freq = 3400000 (without turbo), so the CPU only
      starts to scale up at loads above 47.
      On a quad-core 1500MHz Krait, policy->cpuinfo.min_freq = 384000 and
      policy->cpuinfo.max_freq = 1512000, so the CPU only starts to scale up
      at loads above 25.
      
      Change the calculation of the target frequency to eliminate this
      effect, using the formula:
      
      	Target frequency = A + B * load
      	where A = policy->cpuinfo.min_freq and
      	      B = (policy->cpuinfo.max_freq - policy->cpuinfo.min_freq) / 100
      
      This maps load values from 0 to 100 linearly onto the range from
      cpuinfo.min_freq to cpuinfo.max_freq (a sketch follows this entry).
      
      Also, use CPUFREQ_RELATION_C in __cpufreq_driver_target() to select the
      closest frequency in the frequency table. This is necessary so that the
      minimum frequency is not limited to the load == 0 case only: with the
      closest-match relation, low but non-zero loads can still map to the
      minimum frequency. It also helps select frequencies using a 'fairer'
      criterion.
      
      The tables below show the difference in selected frequency for specific
      load values, without and with this patch. On an Intel i7-3770 @ 3.40GHz:
      	Without			With
      Load	Target	Selected	Target	Selected
      0	0	1600000		1600000	1600000
      5	170050	1600000		1690050	1700000
      10	340100	1600000		1780100	1700000
      15	510150	1600000		1870150	1900000
      20	680200	1600000		1960200	2000000
      25	850250	1600000		2050250	2100000
      30	1020300	1600000		2140300	2100000
      35	1190350	1600000		2230350	2200000
      40	1360400	1600000		2320400	2400000
      45	1530450	1600000		2410450	2400000
      50	1700500	1900000		2500500	2500000
      55	1870550	1900000		2590550	2600000
      60	2040600	2100000		2680600	2600000
      65	2210650	2400000		2770650	2800000
      70	2380700	2400000		2860700	2800000
      75	2550750	2600000		2950750	3000000
      80	2720800	2800000		3040800	3000000
      85	2890850	2900000		3130850	3100000
      90	3060900	3100000		3220900	3300000
      95	3230950	3300000		3310950	3300000
      100	3401000	3401000		3401000	3401000
      
      On ARM quad core 1500MHz Krait:
      	Without			With
      Load	Target	Selected	Target	Selected
      0	0	384000		384000	384000
      5	75600	384000		440400	486000
      10	151200	384000		496800	486000
      15	226800	384000		553200	594000
      20	302400	384000		609600	594000
      25	378000	384000		666000	702000
      30	453600	486000		722400	702000
      35	529200	594000		778800	810000
      40	604800	702000		835200	810000
      45	680400	702000		891600	918000
      50	756000	810000		948000	918000
      55	831600	918000		1004400	1026000
      60	907200	918000		1060800	1026000
      65	982800	1026000		1117200	1134000
      70	1058400	1134000		1173600	1134000
      75	1134000	1134000		1230000	1242000
      80	1209600	1242000		1286400	1242000
      85	1285200	1350000		1342800	1350000
      90	1360800	1458000		1399200	1350000
      95	1436400	1458000		1455600	1458000
      100	1512000	1512000		1512000	1512000
      
      Tested on an Intel i7-3770 CPU @ 3.40GHz and on an ARM quad-core
      1500MHz Krait (Android smartphone).
      Benchmarks on the Intel i7 show a performance improvement on low and
      medium workloads with lower power consumption. Specifics:
      
      Phoronix Linux Kernel Compilation 3.1:
      Time: -0.40%, energy: -0.07%
      Phoronix Apache:
      Time: -4.98%, energy: -2.35%
      Phoronix FFMPEG:
      Time: -6.29%, energy: -4.02%
      
      Also, running mp3 decoding (a very low load) shows no difference with
      or without this patch.
      Signed-off-by: Stratos Karafotis <stratosk@semaphore.gr>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      6393d6a1
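      A minimal sketch of the new target-frequency calculation described
      above; od_set_freq_sketch() is a hypothetical helper, while
      __cpufreq_driver_target() and CPUFREQ_RELATION_C are the real
      interfaces named in the commit:

      	#include <linux/cpufreq.h>

      	/* Map load (0..100) linearly onto [min_freq, max_freq] and let the
      	 * core pick the closest supported frequency. */
      	static void od_set_freq_sketch(struct cpufreq_policy *policy,
      				       unsigned int load)
      	{
      		unsigned int min = policy->cpuinfo.min_freq;
      		unsigned int max = policy->cpuinfo.max_freq;
      		unsigned int target = min + load * (max - min) / 100;

      		/* CPUFREQ_RELATION_C: closest table frequency to 'target',
      		 * not the lowest one at or above it. */
      		__cpufreq_driver_target(policy, target, CPUFREQ_RELATION_C);
      	}

      With the i7 values from the table above (min_freq 1600000, max_freq
      3401000), a load of 50 gives 1600000 + 50 * 18010 = 2500500, matching
      the 'With' column.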