1. 09 Mar 2016, 29 commits
  2. 28 Jan 2016, 1 commit
    • cpufreq: Fix NULL reference crash while accessing policy->governor_data · e4b133cc
      Committed by Viresh Kumar
      There is a race, discovered by Juri, where we are able to:
      - create and read a sysfs file before policy->governor_data is set
        to a non-NULL value,
        OR
      - set policy->governor_data to NULL and read a sysfs file before
        that file is destroyed.
      
      And so a crash like the following is reported:
      
      Unable to handle kernel NULL pointer dereference at virtual address 0000000c
      pgd = edfc8000
      [0000000c] *pgd=bfc8c835
      Internal error: Oops: 17 [#1] SMP ARM
      Modules linked in:
      CPU: 4 PID: 1730 Comm: cat Not tainted 4.5.0-rc1+ #463
      Hardware name: ARM-Versatile Express
      task: ee8e8480 ti: ee930000 task.ti: ee930000
      PC is at show_ignore_nice_load_gov_pol+0x24/0x34
      LR is at show+0x4c/0x60
      pc : [<c058f1bc>]    lr : [<c058ae88>]    psr: a0070013
      sp : ee931dd0  ip : ee931de0  fp : ee931ddc
      r10: ee4bc290  r9 : 00001000  r8 : ef2cb000
      r7 : ee4bc200  r6 : ef2cb000  r5 : c0af57b0  r4 : ee4bc2e0
      r3 : 00000000  r2 : 00000000  r1 : c0928df4  r0 : ef2cb000
      Flags: NzCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment none
      Control: 10c5387d  Table: adfc806a  DAC: 00000051
      Process cat (pid: 1730, stack limit = 0xee930210)
      Stack: (0xee931dd0 to 0xee932000)
      1dc0:                                     ee931dfc ee931de0 c058ae88 c058f1a4
      1de0: edce3bc0 c07bfca4 edce3ac0 00001000 ee931e24 ee931e00 c01fcb90 c058ae48
      1e00: 00000001 edce3bc0 00000000 00000001 ee931e50 ee8ff480 ee931e34 ee931e28
      1e20: c01fb33c c01fcb0c ee931e8c ee931e38 c01a5210 c01fb314 ee931e9c ee931e48
      1e40: 00000000 edce3bf0 befe4a00 ee931f78 00000000 00000000 000001e4 00000000
      1e60: c00545a8 edce3ac0 00001000 00001000 befe4a00 ee931f78 00000000 00001000
      1e80: ee931ed4 ee931e90 c01fbed8 c01a5038 ed085a58 00020000 00000000 00000000
      1ea0: c0ad72e4 ee931f78 ee8ff488 ee8ff480 c077f3fc 00001000 befe4a00 ee931f78
      1ec0: 00000000 00001000 ee931f44 ee931ed8 c017c328 c01fbdc4 00001000 00000000
      1ee0: ee8ff480 00001000 ee931f44 ee931ef8 c017c65c c03deb10 ee931fac ee931f08
      1f00: c0009270 c001f290 c0a8d968 ef2cb000 ef2cb000 ee8ff480 00000020 ee8ff480
      1f20: ee8ff480 befe4a00 00001000 ee931f78 00000000 00000000 ee931f74 ee931f48
      1f40: c017d1ec c017c2f8 c019c724 c019c684 ee8ff480 ee8ff480 00001000 befe4a00
      1f60: 00000000 00000000 ee931fa4 ee931f78 c017d2a8 c017d160 00000000 00000000
      1f80: 000a9f20 00001000 befe4a00 00000003 c000ffe4 ee930000 00000000 ee931fa8
      1fa0: c000fe40 c017d264 000a9f20 00001000 00000003 befe4a00 00001000 00000000
      Unable to handle kernel NULL pointer dereference at virtual address 0000000c
      1fc0: 000a9f20 00001000 befe4a00 00000003 00000000 00000000 00000003 00000001
      pgd = edfc4000
      [0000000c] *pgd=bfcac835
      1fe0: 00000000 befe49dc 000197f8 b6e35dfc 60070010 00000003 3065b49d 134ac2c9
      
      [<c058f1bc>] (show_ignore_nice_load_gov_pol) from [<c058ae88>] (show+0x4c/0x60)
      [<c058ae88>] (show) from [<c01fcb90>] (sysfs_kf_seq_show+0x90/0xfc)
      [<c01fcb90>] (sysfs_kf_seq_show) from [<c01fb33c>] (kernfs_seq_show+0x34/0x38)
      [<c01fb33c>] (kernfs_seq_show) from [<c01a5210>] (seq_read+0x1e4/0x4e4)
      [<c01a5210>] (seq_read) from [<c01fbed8>] (kernfs_fop_read+0x120/0x1a0)
      [<c01fbed8>] (kernfs_fop_read) from [<c017c328>] (__vfs_read+0x3c/0xe0)
      [<c017c328>] (__vfs_read) from [<c017d1ec>] (vfs_read+0x98/0x104)
      [<c017d1ec>] (vfs_read) from [<c017d2a8>] (SyS_read+0x50/0x90)
      [<c017d2a8>] (SyS_read) from [<c000fe40>] (ret_fast_syscall+0x0/0x1c)
      Code: e5903044 e1a00001 e3081df4 e34c1092 (e593300c)
      ---[ end trace 5994b9a5111f35ee ]---
      
      Fix that by making sure policy->governor_data is updated only at the
      right places (a userspace sketch of the intended ordering follows this
      entry).
      
      Cc: 4.2+ <stable@vger.kernel.org> # 4.2+
      Reported-and-tested-by: Juri Lelli <juri.lelli@arm.com>
      Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      e4b133cc
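      A minimal userspace sketch of the ordering the commit relies on is shown
      below. It is a model, not the kernel code: struct dbs_data, governor_init(),
      governor_exit() and show_ignore_nice_load() are illustrative stand-ins for
      the cpufreq symbols. The point is only that governor_data is published
      after the data behind it is fully set up, and hidden again before that
      data is freed, so a concurrent sysfs read never sees a half-valid pointer.

      #include <stdio.h>
      #include <stdlib.h>

      struct dbs_data { unsigned int ignore_nice_load; };
      struct policy   { struct dbs_data *governor_data; };

      /* What a sysfs read boils down to: dereference policy->governor_data. */
      static void show_ignore_nice_load(struct policy *p)
      {
              struct dbs_data *d = p->governor_data;

              printf("ignore_nice_load = %u\n", d ? d->ignore_nice_load : 0);
      }

      /* Publish governor_data only after the data it points to is complete. */
      static int governor_init(struct policy *p)
      {
              struct dbs_data *d = calloc(1, sizeof(*d));

              if (!d)
                      return -1;
              d->ignore_nice_load = 1;   /* fully initialise first ...         */
              p->governor_data = d;      /* ... then make it visible to show() */
              return 0;
      }

      /* Hide governor_data (in the kernel, after the sysfs files are gone)
       * before freeing what it points to. */
      static void governor_exit(struct policy *p)
      {
              struct dbs_data *d = p->governor_data;

              p->governor_data = NULL;
              free(d);
      }

      int main(void)
      {
              struct policy pol = { .governor_data = NULL };

              if (governor_init(&pol))
                      return 1;
              show_ignore_nice_load(&pol);   /* safe: published after init */
              governor_exit(&pol);
              return 0;
      }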
  3. 05 Jan 2016, 1 commit
    • cpufreq: governor: Fix negative idle_time when configured with CONFIG_HZ_PERIODIC · 0df35026
      Committed by Chen Yu
      It is reported that, with CONFIG_HZ_PERIODIC=y, the CPU stays at the
      lowest frequency even if the usage goes to 100%; neither the ondemand
      nor the conservative governor works, although the performance and
      userspace governors behave as expected. With CONFIG_NO_HZ_FULL=y,
      everything goes well.
      
      This problem is caused by an improper calculation of the idle_time
      when the load is extremely high (near 100%). cpufreq_governor uses
      get_cpu_idle_time() to get the total idle time for a specific CPU, then:

      1. If the system is configured with CONFIG_NO_HZ_FULL, the idle time is
         returned by ktime_get(), which is always increasing, so it is OK.
      2. However, if the system is configured with CONFIG_HZ_PERIODIC,
         get_cpu_idle_time() is not guaranteed to be always increasing,
         because it leverages get_cpu_idle_time_jiffy() to calculate the
         idle_time. Consider the following scenario:
      
      At T1:
      idle_tick_1 = total_tick_1 - user_tick_1
      
      sample period (80 ms) ...

      At T2 (T2 = T1 + 80 ms):
      idle_tick_2 = total_tick_2 - user_tick_2
      
      Currently the algorithm uses (idle_tick_2 - idle_tick_1) to get the
      delta idle_time over the past sample period; however, it CANNOT be
      guaranteed that idle_tick_2 >= idle_tick_1, especially when the CPU
      load is high. (Yes, total_tick_2 >= total_tick_1 and
      user_tick_2 >= user_tick_1, but there is no such guarantee for
      idle_tick_2 versus idle_tick_1.) So the governor might get a negative
      idle_time for the past sample period, which, once converted to an
      unsigned int, looks like a very big idle time and a busy time of
      nearly zero. That makes the governor always choose the lowest
      cpufreq, which causes this problem.
      
      In theory there are two solutions:

      1. The logic should not rely on the idle tick during every sample
         period, but should be based on the busy tick directly, as this is
         how 'top' is implemented.

      2. Or the logic must make sure that the idle_time is monotonically
         increasing during each sample period, so that a negative idle_time
         can no longer appear. This solution requires a minimal modification
         to the current code, and this patch uses method 2 (a small
         demonstration of the underlying arithmetic follows this entry).
      
      Link: https://bugzilla.kernel.org/show_bug.cgi?id=69821
      Reported-by: Jan Fikar <j.fikar@gmail.com>
      Signed-off-by: Chen Yu <yu.c.chen@intel.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      0df35026
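      A small, self-contained demonstration of the arithmetic follows. The
      variable names (prev_cpu_idle, cur_idle_time) mirror the commit text
      rather than the exact kernel fields; the point is how an unsigned delta
      wraps when the idle counter goes backwards, and how the clamp of
      method 2 avoids that.

      #include <stdio.h>

      int main(void)
      {
              unsigned long long prev_cpu_idle = 1000; /* idle_tick_1 at T1 */
              unsigned long long cur_idle_time = 990;  /* idle_tick_2 < idle_tick_1
                                                          under very high load */

              /* Broken: the delta underflows and looks like an enormous
               * idle time, so the computed load is close to zero. */
              unsigned int idle_time =
                      (unsigned int)(cur_idle_time - prev_cpu_idle);
              printf("naive idle_time   = %u (bogus)\n", idle_time);

              /* Method 2 from the commit: never let the idle time go
               * backwards across sample periods. */
              if (cur_idle_time < prev_cpu_idle)
                      cur_idle_time = prev_cpu_idle;
              idle_time = (unsigned int)(cur_idle_time - prev_cpu_idle);
              printf("clamped idle_time = %u\n", idle_time);

              return 0;
      }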
  4. 10 Dec 2015, 2 commits
    • cpufreq: governor: Use lockless timer function · 2dd3e724
      Committed by Rafael J. Wysocki
      It is possible to get rid of the timer_lock spinlock used by the
      governor timer function for synchronization, but a couple of races
      need to be avoided.
      
      The first race is between multiple dbs_timer_handler() instances
      that may be running in parallel with each other on different
      CPUs.  Namely, one of them has to queue up the work item, but it
      cannot be queued up more than once.  To achieve that,
      atomic_inc_return() can be used on the skip_work field of
      struct cpu_common_dbs_info.
      
      The second race is between an already running dbs_timer_handler()
      and gov_cancel_work().  In that case the dbs_timer_handler() might
      not notice the skip_work incrementation in gov_cancel_work() and
      it might queue up its work item after gov_cancel_work() had
      returned (and that work item would corrupt skip_work going
      forward).  To prevent that from happening, gov_cancel_work()
      can be made to wait for the timer function to complete (on all CPUs)
      right after skip_work has been incremented (a compact model of both
      rules follows this entry).
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
      2dd3e724
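      The model below captures the two rules described above. It is a sketch
      under the assumption that C11 atomics can stand in for
      atomic_inc_return(); queue_work(), dbs_work_handler() and
      gov_cancel_work() are stand-ins for the kernel functions of the same
      role, not their real signatures. Only the caller that moves skip_work
      from 0 to 1 queues the work item, and the cancel path raises skip_work
      first so that nothing new can be queued.

      #include <stdatomic.h>
      #include <stdio.h>

      static atomic_int skip_work;     /* 0: idle, >0: queued or cancelling */

      static void queue_work(void) { puts("work item queued"); }

      /* Per-CPU timer callback; several may run concurrently on different
       * CPUs, but only one of them may queue the shared work item. */
      static void dbs_timer_handler(void)
      {
              if (atomic_fetch_add(&skip_work, 1) == 0)
                      queue_work();
              else
                      atomic_fetch_sub(&skip_work, 1); /* already queued */
      }

      /* The queued work resets the counter once the sample is processed. */
      static void dbs_work_handler(void)
      {
              atomic_store(&skip_work, 0);
      }

      /* Stop path: raise skip_work so no timer callback queues new work,
       * then (in the kernel) wait for the timers on all CPUs to finish
       * before the final reset. */
      static void gov_cancel_work(void)
      {
              atomic_fetch_add(&skip_work, 1);
              /* ... wait for timers / cancel the pending work here ... */
              atomic_store(&skip_work, 0);
      }

      int main(void)
      {
              dbs_timer_handler();  /* queues the work                */
              dbs_timer_handler();  /* backs off, already queued      */
              dbs_work_handler();   /* sample done, counter back to 0 */
              gov_cancel_work();    /* governor being stopped         */
              return 0;
      }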
    • cpufreq: governor: replace per-CPU delayed work with timers · 70f43e5e
      Committed by Viresh Kumar
      cpufreq governors evaluate the load at the sampling rate and, based on
      that, update the frequency for the group of CPUs belonging to the same
      cpufreq policy.
      
      This is required to be done in a single thread for all policy->cpus,
      but because we don't want to wake up idle CPUs just for that, we use
      deferrable work for this. If we had used a single delayed deferrable
      work item for the entire policy, there was a chance that the CPU
      required to run its handler would be idle, and we might end up not
      changing the frequency for the entire group despite load variations.

      So we were forced to keep per-CPU work items, where only the one that
      expires first needs to do the real work and the others are rescheduled
      for the next sampling time.
      
      We have been using the more complex solution until now: a delayed
      deferrable work item, which is a combination of a timer and a work
      item.

      This can be made lighter-weight by keeping per-CPU deferrable timers
      with a single work item, which is scheduled by the first timer that
      expires.
      
      This patch does just that; the important changes are:
      - The timer handler runs in irq context, so we need to use a spinlock
        instead of the timer_mutex, and a separate timer_lock is created for
        that. This also makes the use of the mutex and the lock quite clear,
        as we know exactly what each of them protects.
      - A new field, 'skip_work', is added to track when the timer handlers
        may queue the work. More comments are present in the code. (A rough
        sketch of the resulting per-policy layout follows this entry.)
      Suggested-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
      Reviewed-by: Ashwin Chaugule <ashwin.chaugule@linaro.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      70f43e5e
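      The sketch below shows the rough shape of the per-policy bookkeeping
      this change describes. The field names skip_work and timer_lock follow
      the commit text; the mock_* types and the exact layout are illustrative
      assumptions, not the kernel's definitions.

      #include <stdio.h>

      /* Stand-ins for kernel types, only to make the layout concrete. */
      struct mock_timer    { unsigned long expires; };  /* ~ struct timer_list  */
      struct mock_work     { int pending; };            /* ~ struct work_struct */
      struct mock_spinlock { int locked; };             /* ~ spinlock_t         */

      /* Shared by all CPUs of one policy. */
      struct cpu_common_dbs_info {
              struct mock_spinlock timer_lock; /* taken by irq-context timer
                                                  handlers instead of a mutex */
              unsigned int skip_work;          /* may a timer queue the work? */
              struct mock_work work;           /* the single work item, queued
                                                  by the first timer to expire */
      };

      /* One instance per CPU in the policy. */
      struct cpu_dbs_info {
              struct mock_timer timer;            /* deferrable per-CPU timer */
              struct cpu_common_dbs_info *shared; /* policy-wide state above  */
      };

      int main(void)
      {
              struct cpu_common_dbs_info shared = { {0}, 0, {0} };
              struct cpu_dbs_info cpu0 = { {0}, &shared };
              struct cpu_dbs_info cpu1 = { {0}, &shared };

              printf("cpu0 and cpu1 share one work item: %p == %p\n",
                     (void *)&cpu0.shared->work, (void *)&cpu1.shared->work);
              return 0;
      }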
  5. 07 Dec 2015, 2 commits
  6. 02 Nov 2015, 1 commit
    • cpufreq: governor: Quit work-handlers early if governor is stopped · 3a91b069
      Committed by Viresh Kumar
      gov_queue_work() acquires cpufreq_governor_lock to allow
      cpufreq_governor_stop() to drain delayed work items possibly scheduled
      on CPUs that share the policy with a CPU being taken offline.
      
      However, the same goal may be achieved in a more straightforward way if
      the policy pointer in the struct cpu_dbs_info matching the policy CPU is
      reset upfront by cpufreq_governor_stop() under the timer_mutex belonging
      to it and checked against NULL, under the same lock, at the beginning of
      dbs_timer().
      
      In that case every instance of dbs_timer() run for a struct cpu_dbs_info
      sharing the policy pointer in question after cpufreq_governor_stop() has
      started will notice that that pointer is NULL and bail out immediately
      without queuing up any new work items.  In turn, gov_cancel_work()
      called by cpufreq_governor_stop() before destroying timer_mutex will
      wait for all of the delayed work items currently running on the CPUs
      sharing the policy to drop the mutex, so it may be destroyed safely.
      
      Make cpufreq_governor_stop() and dbs_timer() work as described and
      modify gov_queue_work() so that it does not acquire
      cpufreq_governor_lock anymore (a minimal model of the early-bail check
      follows this entry).
      Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      3a91b069
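      The model below illustrates the early-bail check described above. It is
      a userspace sketch, not the kernel code: a pthread mutex stands in for
      timer_mutex, and cpufreq_policy, cpu_dbs_info, dbs_timer() and
      cpufreq_governor_stop() are simplified stand-ins with made-up layouts.
      The stop path clears the policy pointer under the mutex, and every
      later dbs_timer() invocation re-checks it under the same mutex before
      doing any work.

      #include <pthread.h>
      #include <stdio.h>

      struct cpufreq_policy { int cpu; };

      struct cpu_dbs_info {
              struct cpufreq_policy *policy;  /* NULL once the governor stops */
              pthread_mutex_t timer_mutex;
      };

      static void dbs_timer(struct cpu_dbs_info *cdbs)
      {
              pthread_mutex_lock(&cdbs->timer_mutex);
              if (!cdbs->policy) {            /* governor stopped: bail out */
                      pthread_mutex_unlock(&cdbs->timer_mutex);
                      return;
              }
              printf("sampling CPU%d and possibly queuing more work\n",
                     cdbs->policy->cpu);
              pthread_mutex_unlock(&cdbs->timer_mutex);
      }

      static void cpufreq_governor_stop(struct cpu_dbs_info *cdbs)
      {
              pthread_mutex_lock(&cdbs->timer_mutex);
              cdbs->policy = NULL;            /* later dbs_timer() calls bail */
              pthread_mutex_unlock(&cdbs->timer_mutex);
              /* then wait for / cancel the remaining delayed work items ... */
      }

      int main(void)
      {
              struct cpufreq_policy pol = { .cpu = 0 };
              struct cpu_dbs_info cdbs = {
                      .policy = &pol,
                      .timer_mutex = PTHREAD_MUTEX_INITIALIZER,
              };

              dbs_timer(&cdbs);             /* runs normally                */
              cpufreq_governor_stop(&cdbs);
              dbs_timer(&cdbs);             /* sees policy == NULL, returns */
              return 0;
      }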
  7. 28 Oct 2015, 1 commit
  8. 26 Sep 2015, 1 commit
  9. 21 Jul 2015, 2 commits