1. 14 May 2012, 2 commits
    • sched/nohz: Fix rq->cpu_load[] calculations · 556061b0
      Authored by Peter Zijlstra
      While investigating why the load-balancer behaved oddly, I found that the
      rq->cpu_load[] tables were completely wrong. A bit more digging
      revealed that the updates that got through were missing ticks followed
      by a catch-up of 2 ticks.
      
      The catch-up assumes the cpu was idle during that time (since only nohz
      can cause missed ticks, and then only while the machine is idle), which
      means that especially the higher indices were significantly lower than
      they ought to be.
      
      The reason for this is that it is not correct to compare against jiffies
      on every jiffy on any cpu other than the cpu that updates jiffies.
      
      This patch kludges around it by only doing the catch-up stuff from
      nohz_idle_balance() and doing the regular stuff unconditionally from
      the tick.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: pjt@google.com
      Cc: Venkatesh Pallipadi <venki@google.com>
      Link: http://lkml.kernel.org/n/tip-tp4kj18xdd5aj4vvj0qg55s2@git.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
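      The update the commit describes can be sketched in plain C. This is an
      illustrative model, not the kernel code: the table size, the decay rule
      cpu_load[i] = (cpu_load[i]*(2^i-1) + cur_load) / 2^i, and the
      replay-missed-ticks-as-idle catch-up are simplified stand-ins for the
      scheduler's per-tick update, with assumed names and constants.

      ```c
      #include <stdio.h>

      #define LOAD_IDX_MAX 5

      /* Illustrative model (not kernel code): index i is a decayed average,
       *   cpu_load[i] = (cpu_load[i] * (2^i - 1) + cur_load) / 2^i
       * so higher indices react more slowly to load changes. */
      static unsigned long cpu_load[LOAD_IDX_MAX];

      static void update_cpu_load(unsigned long cur_load,
                                  unsigned int missed_ticks)
      {
          unsigned int t;
          int i;

          /* Catch-up: replay each missed tick as if the load had been 0,
           * i.e. assume the cpu was idle while ticks were missed.  If the
           * ticks were not really missed (the jiffies comparison happened
           * on the wrong cpu), this wrongly deflates the higher indices. */
          for (t = 0; t < missed_ticks; t++)
              for (i = 1; i < LOAD_IDX_MAX; i++)
                  cpu_load[i] = (cpu_load[i] * ((1UL << i) - 1)) >> i;

          cpu_load[0] = cur_load;
          for (i = 1; i < LOAD_IDX_MAX; i++)
              cpu_load[i] = (cpu_load[i] * ((1UL << i) - 1) + cur_load) >> i;
      }

      int main(void)
      {
          int i;

          update_cpu_load(1024, 0);   /* normal busy tick */
          update_cpu_load(1024, 0);   /* normal busy tick */
          update_cpu_load(1024, 2);   /* busy tick after 2 spurious
                                       * "missed" ticks */
          for (i = 0; i < LOAD_IDX_MAX; i++)
              printf("cpu_load[%d] = %lu\n", i, cpu_load[i]);
          return 0;
      }
      ```

      In this model, the two spurious catch-up ticks leave cpu_load[1] at 608
      after the third call instead of the 896 a plain update would give,
      which is the kind of deflation the patch avoids by doing catch-up only
      from nohz_idle_balance().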
    • sched/fair: Revert sched-domain iteration breakage · 04f733b4
      Authored by Peter Zijlstra
      Patches c22402a2 ("sched/fair: Let minimally loaded cpu balance the
      group") and 0ce90475 ("sched/fair: Add some serialization to the
      sched_domain load-balance walk") are horribly broken so revert them.
      
      The problem is that while it sounds good to have the minimally loaded
      cpu do the pulling of more load, given the way we walk the domains
      there is absolutely no guarantee this cpu will actually get to that
      domain; in fact it is very likely it won't. Therefore the higher up
      the tree we get, the less likely it is we'll balance at all.
      
      The first-of-mask approach always walks up; while it is suboptimal in
      that it accumulates load on the first cpu and needs extra passes to
      spread it out, it at least guarantees a cpu gets that far up and that
      load-balancing happens at all.
      
      Since it is now always the first cpu, and idle cpus should always be
      able to balance (so they get a task as fast as possible), we can also
      do away with the added serialization.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/n/tip-rpuhs5s56aiv1aw7khv9zkw6@git.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  2. 09 May 2012, 4 commits
  3. 26 April 2012, 1 commit
  4. 23 March 2012, 1 commit
  5. 13 March 2012, 2 commits
  6. 01 March 2012, 3 commits
  7. 24 February 2012, 1 commit
    • static keys: Introduce 'struct static_key', static_key_true()/false() and static_key_slow_[inc|dec]() · c5905afb
      Authored by Ingo Molnar
      
      So here's a boot-tested patch on top of Jason's series that does
      all the cleanups I talked about and turns jump labels into a
      more intuitive facility. It should also address the various
      misconceptions and confusions that surround jump labels.
      
      Typical usage scenarios:
      
              #include <linux/static_key.h>
      
              struct static_key key = STATIC_KEY_INIT_TRUE;
      
              if (static_key_false(&key))
                      do unlikely code
              else
                      do likely code
      
      Or:
      
              if (static_key_true(&key))
                      do likely code
              else
                      do unlikely code
      
      The static key is modified via:
      
              static_key_slow_inc(&key);
              ...
              static_key_slow_dec(&key);
      
      The 'slow' prefix makes it abundantly clear that this is an
      expensive operation.
      
      I've updated all in-kernel code to use this everywhere. Note
      that I have (intentionally) not pushed the rename blindly
      through to the lowest levels: the actual jump-label patching
      arch facility should be named like that, so we want to
      decouple jump labels from the static-key facility a bit.
      
      On non-jump-label enabled architectures static keys default to
      likely()/unlikely() branches.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Acked-by: Jason Baron <jbaron@redhat.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: a.p.zijlstra@chello.nl
      Cc: mathieu.desnoyers@efficios.com
      Cc: davem@davemloft.net
      Cc: ddaney.cavm@gmail.com
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Link: http://lkml.kernel.org/r/20120222085809.GA26397@elte.hu
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
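      The real static keys patch branch instructions at runtime, which a
      self-contained program cannot show, but the API semantics from the
      usage scenarios above can be sketched with plain-C stand-ins. Only the
      names mirror the kernel API; the struct layout and function bodies
      below are illustrative stubs, and the real static_key_true()/false()
      differ only in which branch is compiled as the likely one.

      ```c
      #include <stdio.h>
      #include <stdbool.h>

      /* Illustrative userspace stand-ins for the kernel API; the real
       * implementation patches jump instructions instead of reading a
       * counter, and static_key_true()/false() differ only in which
       * branch is hinted likely. */
      struct static_key { int enabled; };

      #define STATIC_KEY_INIT_TRUE  { .enabled = 1 }
      #define STATIC_KEY_INIT_FALSE { .enabled = 0 }

      static bool static_key_false(struct static_key *key) /* default-off key */
      {
          return key->enabled > 0;
      }

      static bool static_key_true(struct static_key *key)  /* default-on key */
      {
          return key->enabled > 0;
      }

      /* 'slow': enabling/disabling is the rare, expensive operation. */
      static void static_key_slow_inc(struct static_key *key) { key->enabled++; }
      static void static_key_slow_dec(struct static_key *key) { key->enabled--; }

      static struct static_key tracing_on = STATIC_KEY_INIT_FALSE;
      static int events;

      static void trace_event(void)
      {
          /* Off by default: in the kernel this branch costs (almost)
           * nothing until the key is switched on. */
          if (static_key_false(&tracing_on))
              events++;
      }

      int main(void)
      {
          trace_event();                    /* key off: nothing recorded */
          static_key_slow_inc(&tracing_on); /* rare, expensive switch */
          trace_event();                    /* key on: event recorded */
          static_key_slow_dec(&tracing_on);
          printf("events = %d\n", events);
          return 0;
      }
      ```

      The reference-counted inc/dec pairing is why the operations are named
      like a refcount: several users can enable the same key, and it only
      switches off again when the last one drops it.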
  8. 22 February 2012, 2 commits
  9. 31 January 2012, 1 commit
  10. 27 January 2012, 2 commits
  11. 12 January 2012, 1 commit
  12. 24 December 2011, 1 commit
  13. 21 December 2011, 6 commits
  14. 08 December 2011, 1 commit
  15. 07 December 2011, 3 commits
  16. 06 December 2011, 8 commits
  17. 17 November 2011, 1 commit