    [PATCH] sched: fix smt nice lock contention and optimization · c96d145e
    Committed by Chen, Kenneth W
    Initial report and lock contention fix from Chris Mason:
    
    Recent benchmarks showed some performance regressions between 2.6.16 and
    2.6.5.  We tracked down one of the regressions to lock contention in
    schedule-heavy workloads (~70,000 context switches per second).
    
    kernel/sched.c:dependent_sleeper() was responsible for most of the lock
    contention, hammering on the run queue locks.  The patch below is more of a
    discussion point than a suggested fix (although it does reduce lock
    contention significantly).  The dependent_sleeper code looks very expensive
    to me, especially for using a spinlock to bounce control between two
    different siblings in the same cpu.
    
    The code is further optimized:
    
    * perform dependent_sleeper check after next task is determined
    * convert wake_sleeping_dependent to use trylock
    * skip smt runqueue check if trylock fails
    * optimize double_rq_lock now that smt nice is converted to trylock
    * early exit in searching first SD_SHARE_CPUPOWER domain
    * speedup fast path of dependent_sleeper
    
    [akpm@osdl.org: cleanup]
    Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
    Acked-by: Ingo Molnar <mingo@elte.hu>
    Acked-by: Con Kolivas <kernel@kolivas.org>
    Signed-off-by: Nick Piggin <npiggin@suse.de>
    Acked-by: Chris Mason <mason@suse.com>
    Signed-off-by: Andrew Morton <akpm@osdl.org>
    Signed-off-by: Linus Torvalds <torvalds@osdl.org>