1. 08 June 2017: 2 commits
    • sched/deadline: Track the active utilization · e36d8677
      Authored by Luca Abeni
      Active utilization is defined as the total utilization of active
      (TASK_RUNNING) tasks queued on a runqueue. Hence, it is increased
      when a task wakes up and is decreased when a task blocks.
      
      When a task is migrated from CPUi to CPUj, immediately subtract the
      task's utilization from CPUi and add it to CPUj. This mechanism is
      implemented by modifying the pull and push functions.
      Note: this is not fully correct from a theoretical point of view
      (the utilization should be removed from CPUi only at the 0-lag
      time); a more theoretically sound solution is presented in the
      next patches. A minimal sketch of this accounting follows this
      entry.
      Tested-by: Daniel Bristot de Oliveira <bristot@redhat.com>
      Signed-off-by: Luca Abeni <luca.abeni@unitn.it>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Juri Lelli <juri.lelli@arm.com>
      Cc: Claudio Scordino <claudio@evidence.eu.com>
      Cc: Joel Fernandes <joelaf@google.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tommaso Cucinotta <tommaso.cucinotta@sssup.it>
      Link: http://lkml.kernel.org/r/1495138417-6203-2-git-send-email-luca.abeni@santannapisa.it
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
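      The accounting above can be illustrated with a short, self-contained C
      sketch. The field and helper names (running_bw, add_running_bw(),
      sub_running_bw()) follow the kernel's naming convention, but the code
      below is an illustration of the idea under assumed types and values,
      not the patch itself:

      #include <stdint.h>
      #include <stdio.h>

      /*
       * Per-runqueue deadline state: running_bw is the "active utilization",
       * i.e. the summed bandwidth of all TASK_RUNNING deadline tasks queued
       * on this runqueue.
       */
      struct dl_rq {
          uint64_t running_bw;
      };

      struct dl_task {
          uint64_t dl_bw;   /* this task's utilization (runtime/period) */
      };

      /* Task becomes active (wakes up) on this runqueue: add its bandwidth. */
      static void add_running_bw(struct dl_task *t, struct dl_rq *rq)
      {
          rq->running_bw += t->dl_bw;
      }

      /* Task blocks (leaves TASK_RUNNING): subtract its bandwidth. */
      static void sub_running_bw(struct dl_task *t, struct dl_rq *rq)
      {
          rq->running_bw -= t->dl_bw;
      }

      /*
       * Push/pull migration: immediately move the bandwidth from the source
       * runqueue to the destination one (the "not fully correct" shortcut the
       * note above refers to; a 0-lag-time removal would be more accurate).
       */
      static void migrate_running_bw(struct dl_task *t, struct dl_rq *src,
                                     struct dl_rq *dst)
      {
          sub_running_bw(t, src);
          add_running_bw(t, dst);
      }

      int main(void)
      {
          struct dl_rq cpu0 = { 0 }, cpu1 = { 0 };
          struct dl_task t = { .dl_bw = 100 };      /* arbitrary units */

          add_running_bw(&t, &cpu0);                /* task wakes up on CPU0 */
          migrate_running_bw(&t, &cpu0, &cpu1);     /* pushed to CPU1 */
          sub_running_bw(&t, &cpu1);                /* task blocks on CPU1 */

          printf("cpu0=%llu cpu1=%llu\n",
                 (unsigned long long)cpu0.running_bw,
                 (unsigned long long)cpu1.running_bw);
          return 0;
      }

      Both counters end at zero, matching the invariant that a blocked or
      migrated-away task contributes nothing to a runqueue's active
      utilization.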
    • sched/core: Implement new approach to scale select_idle_cpu() · 1ad3aaf3
      Authored by Peter Zijlstra
      Hackbench recently suffered a bunch of pain, first by commit:
      
        4c77b18c ("sched/fair: Make select_idle_cpu() more aggressive")
      
      and then by commit:
      
        c743f0a5 ("sched/fair, cpumask: Export for_each_cpu_wrap()")
      
      which fixed a bug in the initial for_each_cpu_wrap() implementation
      that made select_idle_cpu() even more expensive. The bug was that it
      would skip over CPUs when bits were consecutive in the bitmask.
      
      This, however, gave me an idea to fix select_idle_cpu(): where the old
      scheme was a cliff-edge throttle on idle scanning, this introduces a
      more gradual approach. Instead of stopping the scan entirely, we limit
      how many CPUs we scan (see the sketch after this entry).
      
      Initial benchmarks show that it mostly recovers hackbench while not
      hurting anything else, except Mason's schbench, which it still hurts,
      though not as badly as the old scheme did.
      
      It also appears to recover the tbench high-end, which also suffered like
      hackbench.
      Tested-by: Matt Fleming <matt@codeblueprint.co.uk>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Chris Mason <clm@fb.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: hpa@zytor.com
      Cc: kitsunyan <kitsunyan@inbox.ru>
      Cc: linux-kernel@vger.kernel.org
      Cc: lvenanci@redhat.com
      Cc: riel@redhat.com
      Cc: xiaolong.ye@intel.com
      Link: http://lkml.kernel.org/r/20170517105350.hk5m4h4jb6dfr65a@hirez.programming.kicks-ass.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
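      A compact user-space C sketch of the proportional throttle described
      above. The quantities avg_idle, avg_scan_cost and the minimum budget
      of 4 follow the commit's idea, but the code (including the
      cpu_is_idle() stand-in and the numbers in main()) is an illustrative
      assumption, not the kernel implementation:

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      #define NR_CPUS 16

      /* Stand-in for the kernel's idle_cpu(): pretend CPUs 12..15 are idle. */
      static bool cpu_is_idle(int cpu)
      {
          return cpu >= 12;
      }

      /*
       * Proportional scan limit: instead of either scanning the whole LLC
       * domain or not scanning at all (the old cliff edge), scan at most
       *
       *     nr = span_weight * avg_idle / avg_scan_cost   (but at least 4)
       *
       * CPUs, so the effort spent looking for an idle CPU scales with how
       * much idle time this CPU has been seeing relative to the cost of a
       * scan.
       */
      static int select_idle_cpu_sketch(int target, int span_weight,
                                        uint64_t avg_idle, uint64_t avg_scan_cost)
      {
          uint64_t span_avg = (uint64_t)span_weight * avg_idle;
          uint64_t nr = 4;
          int i;

          if (avg_scan_cost && span_avg > 4 * avg_scan_cost)
              nr = span_avg / avg_scan_cost;

          /* Walk the domain starting at 'target', like for_each_cpu_wrap(). */
          for (i = 0; i < span_weight; i++) {
              int cpu = (target + i) % NR_CPUS;

              if (nr-- == 0)
                  return -1;        /* scan budget exhausted */
              if (cpu_is_idle(cpu))
                  return cpu;
          }
          return -1;
      }

      int main(void)
      {
          /* Little recent idle time: small budget, gives up before CPU 12. */
          printf("busy machine: %d\n",
                 select_idle_cpu_sketch(0, NR_CPUS, 200, 1000));
          /* Plenty of idle time: budget large enough to reach idle CPU 12. */
          printf("idle machine: %d\n",
                 select_idle_cpu_sketch(0, NR_CPUS, 100000, 1000));
          return 0;
      }

      Where the old scheme made an all-or-nothing decision about scanning,
      here the scan simply gets shorter as the expected gain shrinks.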
  2. 05 June 2017: 1 commit
  3. 24 May 2017: 1 commit
  4. 23 May 2017: 36 commits