1. 30 Sep 2017 (9 commits)
    • sched/fair: More accurate reweight_entity() · 840c5abc
      Committed by Peter Zijlstra
      When a (group) entity changes its weight, we should instantly change
      its load_avg and propagate that change into the sums it is part of,
      because we use these values to predict future behaviour and are not
      interested in their historical value.
      
      Without this change, the change in load would need to propagate
      through the average, by which time it could again have changed etc..
      always chasing itself.
      
      With this change, the cfs_rq load_avg sum will more accurately reflect
      the current runnable and expected return of blocked load.
      Reported-by: Paul Turner <pjt@google.com>
      [josef: compile fix !SMP || !FAIR_GROUP]
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
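The instant-propagation idea above can be sketched outside the kernel like this; the struct layout, the divider constant, and the helper name are illustrative stand-ins of ours, not the kernel's PELT code:

```c
#include <assert.h>

/* Toy model: load_avg is derived from a weight-free running sum, so on
 * a weight change we can recompute it immediately instead of letting
 * the new weight bleed through the average over many periods. */

#define DIVIDER 47742UL  /* LOAD_AVG_MAX-like constant, value illustrative */

struct entity {
    unsigned long weight;
    unsigned long load_sum;  /* weight-free running sum */
    unsigned long load_avg;  /* weight * load_sum / DIVIDER */
};

static void reweight(struct entity *se, unsigned long new_weight)
{
    se->weight = new_weight;
    /* Instant propagation: load_avg reflects the new weight right now,
     * rather than converging toward it while "chasing itself". */
    se->load_avg = se->weight * se->load_sum / DIVIDER;
}
```

A caller that adds `se->load_avg` into a cfs_rq sum would subtract the old value before `reweight()` and add the new one after, so the sums it is part of see the change at the same instant.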
    • sched/fair: Introduce {en,de}queue_load_avg() · 8d5b9025
      Committed by Peter Zijlstra
      Analogous to the existing {en,de}queue_runnable_load_avg() add helpers
      for {en,de}queue_load_avg(). More users will follow.
      
      Includes some code movement to avoid fwd declarations.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • sched/fair: Rename {en,de}queue_entity_load_avg() · b5b3e35f
      Committed by Peter Zijlstra
      Since they're now purely about runnable_load, rename them.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • sched/fair: Move enqueue migrate handling · b382a531
      Committed by Peter Zijlstra
      Move the entity migrate handling from enqueue_entity_load_avg() to
      update_load_avg(). This has two benefits:
      
       - {en,de}queue_entity_load_avg() will become purely about managing
         runnable_load
      
       - we can avoid a double update_tg_load_avg() and reduce pressure on
         the global tg->shares cacheline
      
      The reason we do this is so that we can change update_cfs_shares() to
      change both weight and (future) runnable_weight. For this to work we
      need to have the cfs_rq averages up-to-date (which means having done
      the attach), but we need the cfs_rq->avg.runnable_avg to not yet
      include the se's contribution (since se->on_rq == 0).
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • sched/fair: Change update_load_avg() arguments · 88c0616e
      Committed by Peter Zijlstra
      Most call sites of update_load_avg() already have cfs_rq_of(se)
      available, pass it down instead of recomputing it.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • sched/fair: Remove se->load.weight from se->avg.load_sum · c7b50216
      Committed by Peter Zijlstra
      Remove the load from the load_sum for sched_entities, basically
      turning load_sum into runnable_sum.  This prepares for better
      reweighting of group entities.
      
      Since we now have different rules for computing load_avg, split
      ___update_load_avg() into two parts, ___update_load_sum() and
      ___update_load_avg().
      
      So for se:
      
        ___update_load_sum(.weight = 1)
        ___update_load_avg(.weight = se->load.weight)
      
      and for cfs_rq:
      
        ___update_load_sum(.weight = cfs_rq->load.weight)
        ___update_load_avg(.weight = 1)
      
      Since the primary consumable is load_avg, most things will not be
      affected. Only those few sites that initialize/modify load_sum need
      attention.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
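The two-stage split described above can be illustrated with a toy version; the function names and the divider constant here are ours, not the kernel's:

```c
#include <assert.h>

/* Toy model of the split: stage 1 accumulates a running sum under one
 * weight, stage 2 turns that sum into an average under another weight.
 * Which stage gets the real weight differs between se and cfs_rq. */

#define DIVIDER 47742UL  /* LOAD_AVG_MAX-like constant, value illustrative */

/* Stage 1: accumulate; for a se, weight is 1 (sum stays weight-free). */
static unsigned long update_sum(unsigned long sum, unsigned long weight,
                                unsigned long runnable_delta)
{
    return sum + weight * runnable_delta;
}

/* Stage 2: average; for a se, this is where se->load.weight applies. */
static unsigned long update_avg(unsigned long sum, unsigned long weight)
{
    return weight * sum / DIVIDER;
}
```

Per the commit text, a sched_entity would call `update_sum(..., 1, ...)` then `update_avg(..., se->load.weight)`, while a cfs_rq swaps the weights between the two stages.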
    • sched/fair: Cure calc_cfs_shares() vs. reweight_entity() · 3d4b60d3
      Committed by Peter Zijlstra
      Vincent reported that when running in a cgroup, his root
      cfs_rq->avg.load_avg dropped to 0 on task idle.
      
      This is because reweight_entity() will now immediately propagate the
      weight change of the group entity to its cfs_rq, and as it happens,
      our approximation (5) for calc_cfs_shares() results in 0 when the group
      is idle.
      
      Avoid this by using the correct (3) as a lower bound on (5). This way
      the empty cgroup will slowly decay instead of instantly drop to 0.
      Reported-by: Vincent Guittot <vincent.guittot@linaro.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
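The lower-bound idea can be sketched as follows; this is a simplification under our own names, not the actual calc_cfs_shares(), and the equation numbers (3) and (5) refer to the commit's derivation:

```c
#include <assert.h>

/* Toy model: when the group goes idle its instantaneous runnable
 * weight drops to 0, but the blocked load_avg is still decaying.
 * Using max() of the two as the load term means the computed shares
 * decay gradually with load_avg instead of snapping to 0. */

static unsigned long group_load(unsigned long runnable_weight,
                                unsigned long load_avg)
{
    return runnable_weight > load_avg ? runnable_weight : load_avg;
}

static unsigned long calc_shares(unsigned long tg_shares,
                                 unsigned long load,
                                 unsigned long tg_weight)
{
    /* tg_weight already includes this group's load contribution. */
    return tg_weight ? tg_shares * load / tg_weight : tg_shares;
}
```

With `runnable_weight == 0` on idle, `group_load()` still returns the decaying `load_avg`, so the root cfs_rq no longer sees an instant drop to 0.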
    • sched/fair: Add comment to calc_cfs_shares() · cef27403
      Committed by Peter Zijlstra
      Explain the magic equation in calc_cfs_shares() a bit better.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • sched/fair: Clean up calc_cfs_shares() · 7c80cfc9
      Committed by Peter Zijlstra
      For consistency's sake, we should have only a single reading of tg->shares.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
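The single-read pattern can be illustrated in plain C; in the kernel the snapshot would be taken with READ_ONCE(), and the function name and the MIN_SHARES-like floor below are illustrative:

```c
#include <assert.h>

/* Read a concurrently-updated value exactly once into a local, then
 * make every decision against that one snapshot. Reading it twice
 * could clamp against two different values if it changes in between. */

static long clamp_shares(volatile long *tg_shares_p, long computed)
{
    long tg_shares = *tg_shares_p;  /* single read of the shared value */

    if (computed > tg_shares)
        computed = tg_shares;       /* upper bound from the same snapshot */
    if (computed < 2)
        computed = 2;               /* MIN_SHARES-like floor, illustrative */
    return computed;
}
```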
  2. 12 Sep 2017 (2 commits)
  3. 11 Sep 2017 (1 commit)
  4. 09 Sep 2017 (1 commit)
  5. 07 Sep 2017 (1 commit)
  6. 10 Aug 2017 (8 commits)
  7. 01 Aug 2017 (1 commit)
    • sched: cpufreq: Allow remote cpufreq callbacks · 674e7541
      Committed by Viresh Kumar
      With Android UI and benchmarks the latency of cpufreq response to
      certain scheduling events can become very critical. Currently, callbacks
      into cpufreq governors are only made from the scheduler if the target
      CPU of the event is the same as the current CPU. This means there are
      certain situations where a target CPU may not run the cpufreq governor
      for some time.
      
      One testcase to show this behavior is where a task starts running on
      CPU0, then a new task is also spawned on CPU0 by a task on CPU1. If the
      system is configured such that the new tasks should receive maximum
      demand initially, this should result in CPU0 increasing frequency
      immediately. But because of the above-mentioned limitation, this does
      not occur.
      
      This patch updates the scheduler core to call the cpufreq callbacks for
      remote CPUs as well.
      
      The schedutil, ondemand and conservative governors are updated to
      process cpufreq utilization update hooks called for remote CPUs where
      the remote CPU is managed by the cpufreq policy of the local CPU.
      
      The intel_pstate driver is updated to always reject remote callbacks.
      
      This was tested with a couple of use cases (Android: hackbench,
      recentfling, galleryfling, vellamo; Ubuntu: hackbench) on an ARM Hikey
      board (64-bit octa-core, single policy). Only galleryfling showed minor
      improvements, while the others didn't show much deviation.
      
      The reason is that this patch only targets a corner case, where all of
      the following must be true to improve performance, and that doesn't
      happen too often with these tests:
      
      - Task is migrated to another CPU.
      - The task has high demand, and should take the target CPU to higher
        OPPs.
      - And the target CPU doesn't call into the cpufreq governor until the
        next tick.
      
      Based on initial work from Steve Muckle.
      Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
      Acked-by: Saravana Kannan <skannan@codeaurora.org>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
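The gating described above can be sketched with toy types; the struct and helper names are ours (real cpufreq policies track their CPUs in a cpumask), so this is only a shape of the decision, not the driver code:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy policy: the set of CPUs governed together. A governor accepts a
 * utilization update for a remote CPU only if that CPU belongs to the
 * local CPU's policy; a driver like intel_pstate would instead reject
 * every remote update. */

struct policy {
    unsigned int cpus[8];
    int nr_cpus;
};

static bool policy_has_cpu(const struct policy *p, unsigned int cpu)
{
    for (int i = 0; i < p->nr_cpus; i++)
        if (p->cpus[i] == cpu)
            return true;
    return false;
}

static bool accept_update(const struct policy *local, unsigned int target,
                          unsigned int this_cpu, bool reject_remote)
{
    if (target == this_cpu)
        return true;                      /* local update, always fine */
    if (reject_remote)
        return false;                     /* intel_pstate-style behaviour */
    return policy_has_cpu(local, target); /* remote: same policy only */
}
```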
  8. 05 Jul 2017 (1 commit)
    • sched/fair: Fix load_balance() affinity redo path · 65a4433a
      Committed by Jeffrey Hugo
      If load_balance() fails to migrate any tasks because all tasks were
      affined, load_balance() removes the source CPU from consideration and
      attempts to redo and balance among the new subset of CPUs.
      
      There is a bug in this code path where the algorithm considers all active
      CPUs in the system (minus the source that was just masked out).  This is
      not valid for two reasons: some active CPUs may not be in the current
      scheduling domain and one of the active CPUs is dst_cpu. These CPUs should
      not be considered, as we cannot pull load from them.
      
      Instead of failing out of load_balance(), we may end up redoing the search
      with no valid CPUs and incorrectly concluding the domain is balanced.
      Additionally, if the group_imbalance flag was just set, it may also be
      incorrectly unset, thus the flag will not be seen by other CPUs in future
      load_balance() runs as that algorithm intends.
      
      Fix the check by removing CPUs not in the current domain and the dst_cpu
      from consideration, thus limiting the evaluation to valid remaining CPUs
      from which load might be migrated.
      Co-authored-by: Austin Christ <austinwc@codeaurora.org>
      Co-authored-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
      Tested-by: Tyler Baicar <tbaicar@codeaurora.org>
      Signed-off-by: Jeffrey Hugo <jhugo@codeaurora.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Austin Christ <austinwc@codeaurora.org>
      Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Timur Tabi <timur@codeaurora.org>
      Link: http://lkml.kernel.org/r/1496863138-11322-2-git-send-email-jhugo@codeaurora.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
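The corrected mask can be sketched with a plain bitmask standing in for a cpumask; the function name and parameters are illustrative, not the load_balance() code:

```c
#include <assert.h>

/* Toy version of the redo restriction: only CPUs inside the current
 * sched domain may be considered, minus the CPUs already masked out
 * (the failed sources) and the destination CPU itself, since we can
 * never pull load from the CPU we are pulling to. */

static unsigned long redo_mask(unsigned long domain_span,
                               unsigned long already_masked,
                               int dst_cpu)
{
    unsigned long m = domain_span & ~already_masked;

    m &= ~(1UL << dst_cpu);  /* cannot pull load from ourselves */
    return m;
}
```

An empty result here means there genuinely are no valid source CPUs left, which is the condition the original code got wrong by consulting all active CPUs in the system.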
  9. 29 Jun 2017 (1 commit)
    • sched/numa: Hide numa_wake_affine() from UP build · ff801b71
      Committed by Thomas Gleixner
      Stephen reported the following build warning in UP:
      
      kernel/sched/fair.c:2657:9: warning: 'struct sched_domain' declared inside
      parameter list
               ^
      /home/sfr/next/next/kernel/sched/fair.c:2657:9: warning: its scope is only this
      definition or declaration, which is probably not what you want
      
      Hide the numa_wake_affine() inline stub on UP builds to get rid of it.
      
      Fixes: 3fed382b ("sched/numa: Implement NUMA node level wake_affine()")
      Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
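The fix pattern, keeping any declaration that mentions struct sched_domain under the same conditional as its SMP-only users, can be sketched as follows; all names here are illustrative, and compiling with -DCONFIG_SMP selects the SMP variant:

```c
#include <assert.h>

/* On SMP builds, struct sched_domain exists and the helper may use it.
 * On UP builds the struct is never defined, so a stub whose parameter
 * list mentions it triggers the "declared inside parameter list"
 * warning seen above; the UP stub must avoid the type entirely. */

#ifdef CONFIG_SMP
struct sched_domain { int span; };  /* illustrative stand-in */

static inline int wake_affine_choice(struct sched_domain *sd,
                                     int prev, int target)
{
    return sd->span > 1 ? target : prev;
}
#else
static inline int wake_affine_choice(void *sd, int prev, int target)
{
    (void)sd;
    (void)target;
    return prev;  /* UP: there is only one CPU to choose */
}
#endif
```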
  10. 24 Jun 2017 (4 commits)
  11. 22 Jun 2017 (1 commit)
  12. 20 Jun 2017 (1 commit)
  13. 11 Jun 2017 (1 commit)
  14. 08 Jun 2017 (1 commit)
    • sched/core: Implement new approach to scale select_idle_cpu() · 1ad3aaf3
      Committed by Peter Zijlstra
      Hackbench recently suffered a bunch of pain, first by commit:
      
        4c77b18c ("sched/fair: Make select_idle_cpu() more aggressive")
      
      and then by commit:
      
        c743f0a5 ("sched/fair, cpumask: Export for_each_cpu_wrap()")
      
      which fixed a bug in the initial for_each_cpu_wrap() implementation
      that made select_idle_cpu() even more expensive. The bug was that it
      would skip over CPUs when bits were consecutive in the bitmask.
      
      This however gave me an idea to fix select_idle_cpu(); where the old
      scheme was a cliff-edge throttle on idle scanning, this introduces a
      more gradual approach. Instead of stopping to scan entirely, we limit
      how many CPUs we scan.
      
      Initial benchmarks show that it mostly recovers hackbench while not
      hurting anything else, except Mason's schbench, but not as bad as the
      old thing.
      
      It also appears to recover the tbench high-end, which also suffered like
      hackbench.
      Tested-by: Matt Fleming <matt@codeblueprint.co.uk>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Chris Mason <clm@fb.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: hpa@zytor.com
      Cc: kitsunyan <kitsunyan@inbox.ru>
      Cc: linux-kernel@vger.kernel.org
      Cc: lvenanci@redhat.com
      Cc: riel@redhat.com
      Cc: xiaolong.ye@intel.com
      Link: http://lkml.kernel.org/r/20170517105350.hk5m4h4jb6dfr65a@hirez.programming.kicks-ass.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
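The gradual throttle can be sketched like this; the formula is simplified from the description above (the kernel derives its budget from the domain's average scan cost differently), and all names are ours:

```c
#include <assert.h>

/* Toy version of the idea: instead of a cliff-edge choice between
 * scanning every CPU or scanning none, bound how many CPUs one pass
 * of the idle search may examine, scaled by how much idle time we
 * expect relative to the cost of scanning one CPU. */

static int scan_budget(unsigned long avg_idle, unsigned long avg_scan_cost,
                       int nr_cpus)
{
    long nr;

    if (avg_scan_cost == 0)
        return nr_cpus;            /* no cost data: no throttling */

    nr = (long)(avg_idle / avg_scan_cost);
    if (nr < 1)
        nr = 1;                    /* always look at least a little */
    if (nr > nr_cpus)
        nr = nr_cpus;              /* cannot scan more than exist */
    return (int)nr;
}
```

The scan loop would then visit at most `scan_budget(...)` CPUs per invocation, giving the gradual degradation the commit describes instead of the old all-or-nothing cutoff.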
  15. 23 May 2017 (1 commit)
    • sched/numa: Use down_read_trylock() for the mmap_sem · 8655d549
      Committed by Vlastimil Babka
      A customer has reported a soft-lockup when running an intensive
      memory stress test, where the trace on multiple CPU's looks like this:
      
       RIP: 0010:[<ffffffff810c53fe>]
        [<ffffffff810c53fe>] native_queued_spin_lock_slowpath+0x10e/0x190
      ...
       Call Trace:
        [<ffffffff81182d07>] queued_spin_lock_slowpath+0x7/0xa
        [<ffffffff811bc331>] change_protection_range+0x3b1/0x930
        [<ffffffff811d4be8>] change_prot_numa+0x18/0x30
        [<ffffffff810adefe>] task_numa_work+0x1fe/0x310
        [<ffffffff81098322>] task_work_run+0x72/0x90
      
      Further investigation showed that the lock contention here is pmd_lock().
      
      The task_numa_work() function makes sure that only one thread is allowed to perform
      the work in a single scan period (via cmpxchg), but if there's a thread with
      mmap_sem locked for writing for several periods, multiple threads in
      task_numa_work() can build up a convoy waiting for mmap_sem for read and then
      all get unblocked at once.
      
      This patch changes the down_read() to the trylock version, which prevents the
      build-up. For a workload experiencing mmap_sem contention, it's probably better
      to postpone the NUMA balancing work anyway. This seems to have fixed the soft
      lockups involving pmd_lock(), which is in line with the convoy theory.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Mel Gorman <mgorman@techsingularity.net>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20170515131316.21909-1-vbabka@suse.cz
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  16. 15 May 2017 (6 commits)