1. 30 September 2017, 11 commits
  2. 12 September 2017, 2 commits
  3. 11 September 2017, 1 commit
  4. 09 September 2017, 1 commit
  5. 07 September 2017, 1 commit
  6. 10 August 2017, 8 commits
  7. 01 August 2017, 1 commit
    • sched: cpufreq: Allow remote cpufreq callbacks · 674e7541
      Committed by Viresh Kumar
      With Android UI and benchmarks the latency of cpufreq response to
      certain scheduling events can become very critical. Currently, callbacks
      into cpufreq governors are only made from the scheduler if the target
      CPU of the event is the same as the current CPU. This means there are
      certain situations where a target CPU may not run the cpufreq governor
      for some time.
      
      One testcase to show this behavior is where a task starts running on
      CPU0, then a new task is also spawned on CPU0 by a task on CPU1. If the
      system is configured such that the new tasks should receive maximum
      demand initially, this should result in CPU0 increasing frequency
      immediately. But because of the above-mentioned limitation, this
      does not occur.
      
      This patch updates the scheduler core to call the cpufreq callbacks for
      remote CPUs as well.
      
      The schedutil, ondemand and conservative governors are updated to
      process cpufreq utilization update hooks called for remote CPUs where
      the remote CPU is managed by the cpufreq policy of the local CPU.
      
      The intel_pstate driver is updated to always reject remote callbacks.
      
      This was tested with a couple of use cases (Android: hackbench,
      recentfling, galleryfling, vellamo; Ubuntu: hackbench) on an ARM
      HiKey board (64-bit octa-core, single policy). Only galleryfling
      showed minor improvements, while the others didn't show much
      deviation.
      
      The reason is that this patch only targets a corner case, where
      all of the following must be true to improve performance, and
      that doesn't happen often with these tests:
      
      - Task is migrated to another CPU.
      - The task has high demand, and should take the target CPU to higher
        OPPs.
      - And the target CPU doesn't call into the cpufreq governor until the
        next tick.
      
      Based on initial work from Steve Muckle.
      Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
      Acked-by: Saravana Kannan <skannan@codeaurora.org>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      674e7541
  8. 05 July 2017, 1 commit
    • sched/fair: Fix load_balance() affinity redo path · 65a4433a
      Committed by Jeffrey Hugo
      If load_balance() fails to migrate any tasks because all tasks were
      affined, load_balance() removes the source CPU from consideration and
      attempts to redo and balance among the new subset of CPUs.
      
      There is a bug in this code path where the algorithm considers all active
      CPUs in the system (minus the source that was just masked out).  This is
      not valid for two reasons: some active CPUs may not be in the current
      scheduling domain and one of the active CPUs is dst_cpu. These CPUs should
      not be considered, as we cannot pull load from them.
      
      Instead of failing out of load_balance(), we may end up redoing the search
      with no valid CPUs and incorrectly concluding the domain is balanced.
      Additionally, if the group_imbalance flag was just set, it may also be
      incorrectly unset, thus the flag will not be seen by other CPUs in future
      load_balance() runs as that algorithm intends.
      
      Fix the check by removing CPUs not in the current domain and the
      dst_cpu from consideration, thus limiting the evaluation to valid
      remaining CPUs from which load might be migrated.
      Co-authored-by: Austin Christ <austinwc@codeaurora.org>
      Co-authored-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
      Tested-by: Tyler Baicar <tbaicar@codeaurora.org>
      Signed-off-by: Jeffrey Hugo <jhugo@codeaurora.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Austin Christ <austinwc@codeaurora.org>
      Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Timur Tabi <timur@codeaurora.org>
      Link: http://lkml.kernel.org/r/1496863138-11322-2-git-send-email-jhugo@codeaurora.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      65a4433a
  9. 29 June 2017, 1 commit
    • sched/numa: Hide numa_wake_affine() from UP build · ff801b71
      Committed by Thomas Gleixner
      Stephen reported the following build warning in UP:
      
      kernel/sched/fair.c:2657:9: warning: 'struct sched_domain' declared inside
      parameter list
               ^
      /home/sfr/next/next/kernel/sched/fair.c:2657:9: warning: its scope is only this
      definition or declaration, which is probably not what you want
      
      Hide the numa_wake_affine() inline stub on UP builds to get rid of it.
      
      Fixes: 3fed382b ("sched/numa: Implement NUMA node level wake_affine()")
      Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      ff801b71
  10. 24 June 2017, 4 commits
  11. 22 June 2017, 1 commit
  12. 20 June 2017, 1 commit
  13. 11 June 2017, 1 commit
  14. 08 June 2017, 1 commit
    • sched/core: Implement new approach to scale select_idle_cpu() · 1ad3aaf3
      Committed by Peter Zijlstra
      Hackbench recently suffered a bunch of pain, first by commit:
      
        4c77b18c ("sched/fair: Make select_idle_cpu() more aggressive")
      
      and then by commit:
      
        c743f0a5 ("sched/fair, cpumask: Export for_each_cpu_wrap()")
      
      which fixed a bug in the initial for_each_cpu_wrap() implementation
      that made select_idle_cpu() even more expensive. The bug was that it
      would skip over CPUs when bits were consecutive in the bitmask.
      
      This however gave me an idea to fix select_idle_cpu(); where the old
      scheme was a cliff-edge throttle on idle scanning, this introduces a
      more gradual approach. Instead of stopping to scan entirely, we limit
      how many CPUs we scan.
      
      Initial benchmarks show that it mostly recovers hackbench without
      hurting anything else, except Mason's schbench, though not as badly
      as under the old scheme.
      
      It also appears to recover the tbench high-end, which also suffered like
      hackbench.
      Tested-by: Matt Fleming <matt@codeblueprint.co.uk>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Chris Mason <clm@fb.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: hpa@zytor.com
      Cc: kitsunyan <kitsunyan@inbox.ru>
      Cc: linux-kernel@vger.kernel.org
      Cc: lvenanci@redhat.com
      Cc: riel@redhat.com
      Cc: xiaolong.ye@intel.com
      Link: http://lkml.kernel.org/r/20170517105350.hk5m4h4jb6dfr65a@hirez.programming.kicks-ass.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      1ad3aaf3
  15. 23 May 2017, 1 commit
    • sched/numa: Use down_read_trylock() for the mmap_sem · 8655d549
      Committed by Vlastimil Babka
      A customer has reported a soft lockup when running an intensive
      memory stress test, where the trace on multiple CPUs looks like this:
      
       RIP: 0010:[<ffffffff810c53fe>]
        [<ffffffff810c53fe>] native_queued_spin_lock_slowpath+0x10e/0x190
      ...
       Call Trace:
        [<ffffffff81182d07>] queued_spin_lock_slowpath+0x7/0xa
        [<ffffffff811bc331>] change_protection_range+0x3b1/0x930
        [<ffffffff811d4be8>] change_prot_numa+0x18/0x30
        [<ffffffff810adefe>] task_numa_work+0x1fe/0x310
        [<ffffffff81098322>] task_work_run+0x72/0x90
      
      Further investigation showed that the lock contention here is pmd_lock().
      
      The task_numa_work() function makes sure that only one thread at a
      time performs the work in a single scan period (via cmpxchg), but if
      a thread holds mmap_sem for writing across several periods, multiple
      threads in task_numa_work() can build up a convoy waiting for
      mmap_sem for read and then all get unblocked at once.
      
      This patch changes the down_read() to the trylock version, which
      prevents the build-up. For a workload experiencing mmap_sem
      contention, it's probably better to postpone the NUMA balancing work
      anyway. This seems to have fixed the soft lockups involving
      pmd_lock(), which is in line with the convoy theory.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Mel Gorman <mgorman@techsingularity.net>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20170515131316.21909-1-vbabka@suse.cz
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      8655d549
  16. 15 May 2017, 4 commits