Commit beed33a8 authored by Nick Piggin, committed by Linus Torvalds

[PATCH] sched: likely profiling

This likely profiling is pretty fun. I found a few possible problems
in sched.c.

This patch by itself may not be measurable, but when I did measure long
ago, no-oping (un)likely cost a couple of percent on scheduler-heavy
benchmarks, so it all adds up.
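
The "no-oping" measurement mentioned above amounts to compiling the hints
down to the bare condition and benchmarking both builds. A minimal sketch,
assuming a hypothetical HINTS switch for illustration (the kernel's real
macros live in include/linux/compiler.h):

/* Sketch: no-oping the branch hints for an A/B cost measurement.
 * With HINTS undefined the compiler gets no expectation and lays
 * out the branches on its own; the delta between the two builds on
 * a scheduler-heavy benchmark estimates what the hints are worth. */
#ifdef HINTS
# define likely(x)	__builtin_expect(!!(x), 1)
# define unlikely(x)	__builtin_expect(!!(x), 0)
#else
# define likely(x)	(x)	/* hint compiled out */
# define unlikely(x)	(x)	/* hint compiled out */
#endif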

Tweak some branch hints:

- the second 64-bit word of the bitmask is likely to be populated, because
  it contains the first 28 bits (nearly 3/4) of the normal priorities
  (ratio of 669669:691 ~= 1000:1).

- it isn't unlikely that a context switch goes to another process; it might
  be switching to and from the idle process very rapidly (ratios of
  475815:419004 and 471330:423544). Let the branch predictor decide.

- preempt_enable seems to be called very often inside a nested
  preempt_disable or with interrupts disabled (ratio of 3567760:87965
  ~= 40:1); a sketch of how such hit/miss ratios can be collected follows
  this list.
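
The hit/miss ratios quoted above come from likely-profiling instrumentation
that is not part of this patch. A rough user-space analogue, with purely
illustrative names, that tallies how often an annotation matches reality:

#include <stdio.h>

static unsigned long hit, miss;

/* Evaluate the condition once, record whether the guess was right,
 * then still hand the expectation to the compiler as usual. */
static inline long profile_hint(long val, long expect)
{
	if (val == expect)
		hit++;
	else
		miss++;
	return __builtin_expect(val, expect);
}

#define likely_check(x)		profile_hint(!!(x), 1)
#define unlikely_check(x)	profile_hint(!!(x), 0)

int main(void)
{
	unsigned long rare = 0;

	for (int i = 0; i < 1000; i++)
		if (unlikely_check(i % 100 == 0))	/* true 1% of the time */
			rare++;
	printf("rare=%lu, hit:miss = %lu:%lu\n", rare, hit, miss);	/* 10, 990:10 */
	return 0;
}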

Signed-off-by: Nick Piggin <npiggin@suse.de>
Acked-by: Ingo Molnar <mingo@elte.hu>
Cc: Daniel Walker <dwalker@mvista.com>
Cc: Hua Zhong <hzhong@gmail.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Parent f33d9bd5
@@ -15,7 +15,7 @@ static inline int sched_find_first_bit(const unsigned long *b)
 #if BITS_PER_LONG == 64
 	if (unlikely(b[0]))
 		return __ffs(b[0]);
-	if (unlikely(b[1]))
+	if (likely(b[1]))
 		return __ffs(b[1]) + 64;
 	return __ffs(b[2]) + 128;
 #elif BITS_PER_LONG == 32
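
This hunk is the first bullet above: with MAX_RT_PRIO == 100 and
MAX_PRIO == 140 in kernels of this era, word b[0] covers the 64 lowest
realtime priorities, while b[1] covers bits 64-127 and therefore
priorities 100-127, the first 28 of the 40 normal priorities. A
user-space model of the 64-bit case, using __builtin_ctzl in place of
__ffs():

#include <stdio.h>

/* Sketch: the 140-bit priority bitmap spans three 64-bit words.
 * Most runnable tasks sit in the normal-priority range, so b[1]
 * is the word that is likely to be populated. */
static inline int find_first(const unsigned long b[3])
{
	if (__builtin_expect(b[0] != 0, 0))	/* realtime: rare */
		return __builtin_ctzl(b[0]);
	if (__builtin_expect(b[1] != 0, 1))	/* normal: common */
		return __builtin_ctzl(b[1]) + 64;
	return __builtin_ctzl(b[2]) + 128;
}

int main(void)
{
	unsigned long b[3] = { 0, 0, 0 };

	b[1] |= 1UL << (120 - 64);	/* one task at priority 120 */
	printf("first set bit: %d\n", find_first(b));	/* prints 120 */
	return 0;
}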
@@ -1822,14 +1822,14 @@ context_switch(struct rq *rq, struct task_struct *prev,
 	struct mm_struct *mm = next->mm;
 	struct mm_struct *oldmm = prev->active_mm;
 
-	if (unlikely(!mm)) {
+	if (!mm) {
 		next->active_mm = oldmm;
 		atomic_inc(&oldmm->mm_count);
 		enter_lazy_tlb(oldmm, next);
 	} else
 		switch_mm(oldmm, mm, next);
 
-	if (unlikely(!prev->mm)) {
+	if (!prev->mm) {
 		prev->active_mm = NULL;
 		WARN_ON(rq->prev_mm);
 		rq->prev_mm = oldmm;
@@ -3491,7 +3491,7 @@ asmlinkage void __sched preempt_schedule(void)
 	 * If there is a non-zero preempt_count or interrupts are disabled,
 	 * we do not want to preempt the current task. Just return..
 	 */
-	if (unlikely(ti->preempt_count || irqs_disabled()))
+	if (likely(ti->preempt_count || irqs_disabled()))
 		return;
 
 need_resched:
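
To see what this last hint changes, one can compile a model of the branch
both ways and compare the assembly (gcc -O2 -S). A minimal sketch;
slow_path() and the parameters are stand-ins, not kernel code:

extern void slow_path(void);

/* Mirrors: if (likely(ti->preempt_count || irqs_disabled())) return;
 * With the expectation set to 1, gcc typically keeps the early return
 * as the straight-line fall-through and moves the call to slow_path()
 * out of the hot path. */
void maybe_preempt(int preempt_count, int irqs_off)
{
	if (__builtin_expect(preempt_count || irqs_off, 1))
		return;
	slow_path();
}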