1. 05 May, 2016 10 commits
  2. 28 Apr, 2016 8 commits
  3. 25 Apr, 2016 1 commit
  4. 24 Apr, 2016 11 commits
  5. 23 Apr, 2016 10 commits
    • sched/deadline: Fix a bug in dl_overflow() · fec148c0
      Committed by Xunlei Pang
      I got a huge negative dl_b->total_bw value during my deadline tests.
      
          # grep dl /proc/sched_debug
          dl_rq[0]:
          .dl_nr_running                 : 0
          .dl_bw->bw                     : 996147
          .dl_bw->total_bw               : -222297900
      
      Something unusual must have happened.
      
      After some digging, I finally noticed that when a deadline task is
      changed to a normal (CFS) task and then immediately changed back to
      deadline, dl_bw->total_bw ends up wrong after the task dies.
      
      The root cause is in dl_overflow(), which has:
          if (new_bw == p->dl.dl_bw)
                  return 0;
      
      1) When a deadline task is changed to a !deadline task, switched_from_dl()
         starts the dl timer, and the task retains its previous deadline
         parameters until the timer expires.
      
      2) If we change it back to deadline with the same bandwidth parameters
         before the timer expires, dl_overflow() sees the retained old bandwidth
         even though the task is currently not a deadline task, simply returns
         success without updating the accounting, and dl_bw->total_bw ends up
         wrong.
      
      The solution is simple: if @p is not currently a deadline task, don't take
      the early return.
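
      A minimal sketch of the intended check (assuming the task_has_dl_policy()
      helper from kernel/sched/sched.h is usable at this point in dl_overflow()):

          /* Only take the fast path when @p is currently a deadline task;
           * otherwise fall through so dl_b->total_bw gets updated properly. */
          if (new_bw == p->dl.dl_bw && task_has_dl_policy(p))
                  return 0;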
      Signed-off-by: Xunlei Pang <xlpang@redhat.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Juri Lelli <juri.lelli@arm.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1460636368-1993-1-git-send-email-xlpang@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      fec148c0
    • sched/fair: Optimize !CONFIG_NO_HZ_COMMON CPU load updates · 9fd81dd5
      Committed by Frederic Weisbecker
      Some of the CPU load update code only concerns NO_HZ configurations, but it
      is built unconditionally. When NO_HZ isn't built, that code is harmless but
      needlessly consumes CPU and memory:
      
      1) one useless field in struct rq
      2) a jiffies record on every tick that is never used
         (cpu_load_update_periodic())
      3) decay_load_missed() is called twice on every tick only to return
         immediately with no action taken; in this configuration it is dead
         code.
      
      For pure optimization purposes, let's build the NO_HZ related code
      conditionally.
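
      For illustration, a sketch of the kind of conditional build this implies
      (only CONFIG_NO_HZ_COMMON and cpu_load_update_periodic() are named above;
      the other field and function names here are assumptions for the example):

          static void cpu_load_update_periodic(struct rq *this_rq, unsigned long load)
          {
          #ifdef CONFIG_NO_HZ_COMMON
                  /* Only NO_HZ needs this jiffies stamp, to decay the load
                   * missed while the tick was stopped. */
                  this_rq->last_load_update_tick = READ_ONCE(jiffies);
          #endif
                  cpu_load_update(this_rq, load, load, 1);
          }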
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Byungchul Park <byungchul.park@lge.com>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Luiz Capitulino <lcapitulino@redhat.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1461080211-16271-1-git-send-email-fweisbec@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      9fd81dd5
    • sched/fair: Correctly handle nohz ticks CPU load accounting · 1f41906a
      Committed by Frederic Weisbecker
      Ticks can happen while the CPU is in dynticks-idle or dynticks-singletask
      mode. In fact "nohz" or "dynticks" only mean that we exit periodic mode and
      try to minimize the number of ticks as much as possible. The nohz subsystem
      uses confusing terminology with the internal state "ts->tick_stopped",
      which is also exposed through its public interface
      tick_nohz_tick_stopped(). This is a misnomer, as the tick is reduced on a
      best-effort basis rather than stopped. In the best case the tick can indeed
      be stopped entirely, but there is no guarantee of that. If a timer needs to
      fire one second later, a tick will fire while the CPU is in nohz mode, and
      this is a very common scenario.
      
      This confusion happens to be a problem for CPU load updates:
      cpu_load_update_active() doesn't handle nohz ticks correctly because it
      assumes that ticks are completely stopped in nohz mode and that it can't be
      called in dynticks mode. When that happens, the whole previous tickless
      load is ignored and the function just records the load for the current
      tick, ignoring the potentially long idle period that preceded it.
      
      In order to solve this, we could account the current load for the previous
      nohz time, but there is a risk of accounting the load of a task that got
      freshly enqueued for the whole nohz period.

      So instead, let's record the dynticks load on nohz frame entry so we know
      what to record in case of nohz ticks, then use this record to account the
      tickless load on nohz ticks and on nohz frame end.
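
      A rough sketch of that scheme (the function name and the use of
      cpu_load[0] as the snapshot slot are assumptions for illustration, not
      necessarily the final interface):

          /* Called on nohz frame entry: snapshot the current load so later
           * nohz ticks and the frame exit know what load the tickless span
           * should be charged with. */
          void cpu_load_update_nohz_start(void)
          {
                  struct rq *this_rq = this_rq();

                  this_rq->cpu_load[0] = weighted_cpuload(cpu_of(this_rq));
          }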
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Byungchul Park <byungchul.park@lge.com>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Luiz Capitulino <lcapitulino@redhat.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1460555812-25375-3-git-send-email-fweisbec@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      1f41906a
    • sched/fair: Gather CPU load functions under a more conventional namespace · cee1afce
      Committed by Frederic Weisbecker
      The CPU load update functions currently have a weak naming convention: they
      start with update_cpu_load_*(), which isn't ideal because "update" is a
      very generic concept.

      Since two of these functions are public already (and a third is to come),
      that's enough to justify a more conventional naming scheme. So let's do the
      following rename instead:
      
      	update_cpu_load_*() -> cpu_load_update_*()
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Byungchul Park <byungchul.park@lge.com>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Luiz Capitulino <lcapitulino@redhat.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1460555812-25375-2-git-send-email-fweisbec@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      cee1afce
    • sched/fair: Call cpufreq hook in additional paths · a2c6c91f
      Committed by Steve Muckle
      The cpufreq hook should be called any time the root CFS rq utilization
      changes. This can occur when a task is switched to or from the fair
      class, or a task moves between groups or CPUs, but these paths
      currently do not call the cpufreq hook.
      
      Fix this by adding the hook to attach_entity_load_avg() and
      detach_entity_load_avg().
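
      A hedged sketch of the shape of the change (only attach_entity_load_avg()
      and detach_entity_load_avg() are named above; the cfs_rq_util_change()
      wrapper name is an assumption, and an analogous call would go in the
      detach path):

          static void attach_entity_load_avg(struct cfs_rq *cfs_rq,
                                             struct sched_entity *se)
          {
                  /* ... fold se's load and utilization into cfs_rq->avg ... */

                  /* Root cfs_rq utilization just changed: poke cpufreq. */
                  cfs_rq_util_change(cfs_rq);
          }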
      Suggested-by: Vincent Guittot <vincent.guittot@linaro.org>
      Signed-off-by: Steve Muckle <smuckle@linaro.org>
      [ Added the .update_freq argument to update_cfs_rq_load_avg() to avoid a double cpufreq call. ]
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Byungchul Park <byungchul.park@lge.com>
      Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
      Cc: Juri Lelli <Juri.Lelli@arm.com>
      Cc: Michael Turquette <mturquette@baylibre.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Morten Rasmussen <morten.rasmussen@arm.com>
      Cc: Patrick Bellasi <patrick.bellasi@arm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rafael J. Wysocki <rafael@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1458858367-2831-1-git-send-email-smuckle@linaro.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      a2c6c91f
    • sched/fair: Do not call cpufreq hook unless util changed · 41e0d37f
      Committed by Steve Muckle
      There's no reason to call the cpufreq hook if the root cfs_rq
      utilization has not been modified.
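
      A minimal sketch of the guard (the helper and flag names here are
      illustrative assumptions, not the exact interface):

          /* Illustrative only: invoke the cpufreq hook at most when the root
           * cfs_rq utilization actually moved during this update. */
          static void maybe_notify_cpufreq(struct cfs_rq *cfs_rq, bool util_changed)
          {
                  if (!util_changed)
                          return;                 /* nothing changed: skip the hook */
                  cfs_rq_util_change(cfs_rq);     /* assumed cpufreq notification wrapper */
          }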
      Signed-off-by: Steve Muckle <smuckle@linaro.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
      Cc: Juri Lelli <Juri.Lelli@arm.com>
      Cc: Michael Turquette <mturquette@baylibre.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Morten Rasmussen <morten.rasmussen@arm.com>
      Cc: Patrick Bellasi <patrick.bellasi@arm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rafael J. Wysocki <rafael@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vincent Guittot <vincent.guittot@linaro.org>
      Link: http://lkml.kernel.org/r/1458606068-7476-2-git-send-email-smuckle@linaro.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      41e0d37f
    • sched/fair: Move cpufreq hook to update_cfs_rq_load_avg() · 21e96f88
      Committed by Steve Muckle
      The cpufreq hook should be called whenever the root cfs_rq utilization
      changes, so update_cfs_rq_load_avg() is a better place for it. The current
      location is not invoked in the enqueue_entity() or
      update_blocked_averages() paths.
      Suggested-by: Vincent Guittot <vincent.guittot@linaro.org>
      Signed-off-by: Steve Muckle <smuckle@linaro.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
      Cc: Juri Lelli <Juri.Lelli@arm.com>
      Cc: Michael Turquette <mturquette@baylibre.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Morten Rasmussen <morten.rasmussen@arm.com>
      Cc: Patrick Bellasi <patrick.bellasi@arm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rafael J. Wysocki <rafael@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1458606068-7476-1-git-send-email-smuckle@linaro.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      21e96f88
    • sched/fair: Fix asym packing to select correct CPU · 1f621e02
      Committed by Srikar Dronamraju
      When asymmetric packing is set in the sched_domain and the target CPU is
      busy, update_sd_pick_busiest() may not select the busiest runqueue. When
      the target CPU is busy, find_busiest_group() ignores the asym packing
      checks and may continue to load balance using the currently selected,
      not-the-busiest runqueue as the source. Selecting the busiest runqueue as
      the source when the target CPU is busy should result in much better load
      balance.
      
      Also, when the target CPU is not busy and asymmetric packing is set in the
      sched_domain, select the higher-numbered CPU as the source CPU for load
      balancing.
      
      While doing this change, move the check for whether the target CPU is busy
      into check_asym_packing(); a sketch of the resulting check follows.
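
      Roughly, the shape of the moved check (a sketch only; the exact imbalance
      computation and surrounding conditions in check_asym_packing() may differ):

          static int check_asym_packing(struct lb_env *env, struct sd_lb_stats *sds)
          {
                  if (!(env->sd->flags & SD_ASYM_PACKING))
                          return 0;

                  /* Moved here: a busy destination CPU must not use asym packing
                   * to override the normal busiest-group selection. */
                  if (env->idle == CPU_NOT_IDLE)
                          return 0;

                  if (!sds->busiest)
                          return 0;

                  /* Lower-numbered CPUs are the preferred packing targets, so
                   * only pull when the busiest group sits above dst_cpu. */
                  if (env->dst_cpu > group_first_cpu(sds->busiest))
                          return 0;

                  env->imbalance = sds->busiest_stat.group_load;  /* simplified */
                  return 1;
          }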
      
      The performance benefit from this change decreases with increasing load.
      However, there is a benefit in undercommit as well as overcommit
      conditions.
      
      1. ebizzy records per second (32 threads) on a 64-CPU POWER7 box (5 iterations):
      4.6.0-rc2
      	Testcase:         Min         Max         Avg      StdDev
      	  ebizzy:  5223767.00 10368236.00  7946971.00  1753094.76
      
      4.6.0-rc2+asym-changes
      	Testcase:         Min         Max         Avg      StdDev     %Change
      	  ebizzy:  8617191.00 13872356.00 11383980.00  1783400.89     +24.78%
      
      2. ebizzy records per second (64 threads) on a 64-CPU POWER7 box (5 iterations):
      4.6.0-rc2
      	Testcase:         Min         Max         Avg      StdDev
      	  ebizzy:  6497666.00 18399783.00 10818093.20  4051452.08
      
      4.6.0-rc2+asym-changes
      	Testcase:         Min         Max         Avg      StdDev     %Change
      	  ebizzy:  7567365.00 19456937.00 11674063.60  4295407.48      +4.40%
      
      3. ebizzy records per second (128 threads) on a 64-CPU POWER7 box (5 iterations):
      4.6.0-rc2
      	Testcase:         Min         Max         Avg      StdDev
      	  ebizzy: 37073983.00 40341911.00 38776241.80  1259766.82
      
      4.6.0-rc2+asym-changes
      	Testcase:         Min         Max         Avg      StdDev     %Change
      	  ebizzy: 38030399.00 41333378.00 39827404.40  1255001.86      +2.54%
      Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Gautham R Shenoy <ego@linux.vnet.ibm.com>
      Cc: Michael Neuling <mikey@neuling.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
      Link: http://lkml.kernel.org/r/1459948660-16073-1-git-send-email-srikar@linux.vnet.ibm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      1f621e02
    • 84eaae15
    • Merge tag 'pinctrl-v4.6-3' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl · 09502d9f
      Committed by Linus Torvalds
      Pull pin control fixes from Linus Walleij:
       "Some pin control driver fixes came in.  One headed for stable and the
        other two are just ordinary merge window fixes.
      
         - Make the i.MX driver select REGMAP as a dependency
         - Fix up the Mediatek debounce time unit
         - Fix a real hairy ffs vs __ffs issue in the Single pinctrl driver"
      
      * tag 'pinctrl-v4.6-3' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl:
        pinctrl: single: Fix pcs_parse_bits_in_pinctrl_entry to use __ffs than ffs
        pinctrl: mediatek: correct debounce time unit in mtk_gpio_set_debounce
        pinctrl: imx: Kconfig: PINCTRL_IMX select REGMAP
      09502d9f
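
      The last item in the pull above refers to the classic ffs() vs __ffs()
      pitfall: ffs() returns a 1-based bit position (with 0 meaning "no bit
      set"), while __ffs() returns the 0-based index of the first set bit and
      must not be called on zero. A small illustration (mask_to_shift() is a
      made-up helper for the example):

          #include <linux/bitops.h>

          /* For mask = 0x30 (bits 4 and 5 set): ffs(mask) == 5, __ffs(mask) == 4.
           * Using ffs() where a shift amount is needed gives an off-by-one. */
          static unsigned long mask_to_shift(unsigned long mask)
          {
                  return __ffs(mask);     /* caller must guarantee mask != 0 */
          }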