    [POWERPC] spufs: fix scheduler starvation by idle contexts · 4ef11014
    Committed by Jeremy Kerr
    2.6.25 has a regression where we can starve the scheduler by creating
    (N_SPES+1) contexts, then running them one at a time.
    
    The final context will never be run, as the other contexts are loaded on
    the SPEs, none of which are reported as free (i.e., spu->alloc_state !=
    SPU_FREE), so spu_get_idle() doesn't give us a spu to run on. Because
    all of the contexts are stopped, none are descheduled by the scheduler
    tick, as spusched_tick returns if spu_stopped(ctx).
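    The starvation can be sketched in a small, self-contained model. This is a hypothetical simplification, not the kernel code: the struct fields, N_SPES value, and function bodies are stand-ins that mirror only the behaviour described above (spu_get_idle() hands out only SPU_FREE SPUs; the buggy spusched_tick() returns early for stopped contexts, so their SPUs are never freed).

    ```c
    #include <assert.h>
    #include <stddef.h>

    /* Hypothetical, simplified model of the 2.6.25 behaviour; names loosely
     * follow spufs but none of this is the real kernel code. */
    #define N_SPES 2
    enum { SPU_FREE, SPU_USED };

    struct spu { int alloc_state; };
    struct spu_context { int stopped; struct spu *spu; };

    static struct spu spus[N_SPES];

    /* spu_get_idle(): only hands out SPUs whose alloc_state is SPU_FREE. */
    static struct spu *spu_get_idle(void)
    {
        for (int i = 0; i < N_SPES; i++)
            if (spus[i].alloc_state == SPU_FREE) {
                spus[i].alloc_state = SPU_USED;
                return &spus[i];
            }
        return NULL;
    }

    /* spusched_tick(): the buggy early return -- a stopped context is never
     * descheduled, so its SPU is never returned to SPU_FREE. */
    static void spusched_tick(struct spu_context *ctx)
    {
        if (ctx->stopped)
            return;                     /* bug: bails out here */
        if (ctx->spu) {                 /* would otherwise unbind the SPU */
            ctx->spu->alloc_state = SPU_FREE;
            ctx->spu = NULL;
        }
    }

    int main(void)
    {
        struct spu_context ctxs[N_SPES + 1] = { 0 };

        /* Run N_SPES contexts one at a time; each stops but stays loaded. */
        for (int i = 0; i < N_SPES; i++) {
            ctxs[i].spu = spu_get_idle();
            assert(ctxs[i].spu);
            ctxs[i].stopped = 1;        /* context stops on its own */
            spusched_tick(&ctxs[i]);    /* tick returns early, no unbind */
        }

        /* The (N_SPES+1)th context can never get an SPU: starvation. */
        assert(spu_get_idle() == NULL);
        return 0;
    }
    ```
    
    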
    
    This change replaces the spu_stopped() check with checking for SCHED_IDLE
    in ctx->policy. We set a context's policy to SCHED_IDLE when we're not
    in spu_run(). We also favour SCHED_IDLE contexts when looking for contexts
    to unbind, but leave their timeslice intact for later resumption.
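    The fix above can be illustrated with another hedged sketch. Again this is a simplified stand-in, not the actual patch: SCHED_IDLE mirrors the Linux scheduling-policy constant, SCHED_RR is an arbitrary stand-in for the caller's normal policy, and the struct and helpers are invented for illustration. The point it shows: the tick keys off ctx->policy rather than spu_stopped(), so an idle context can be unbound (curing the starvation) while its timeslice is left untouched for later resumption.

    ```c
    #include <assert.h>

    /* Hypothetical, simplified sketch of the fix; not the real spufs code. */
    #define SCHED_IDLE 5   /* matches the Linux scheduling-policy constant */
    #define SCHED_RR   2   /* stand-in for the caller's normal policy */

    struct spu_context {
        int policy;        /* SCHED_IDLE whenever the context is outside spu_run() */
        int time_slice;
    };

    /* Leaving spu_run() marks the context idle; re-entering restores the
     * caller's policy. */
    static void spu_run_exit(struct spu_context *ctx)  { ctx->policy = SCHED_IDLE; }
    static void spu_run_enter(struct spu_context *ctx) { ctx->policy = SCHED_RR; }

    /* The tick checks ctx->policy instead of spu_stopped(): an idle context
     * may be descheduled (returns 1), but its timeslice is not charged, so
     * it resumes later with what it had. */
    static int spusched_tick(struct spu_context *ctx)
    {
        if (ctx->policy == SCHED_IDLE)
            return 1;                    /* unbind, keep time_slice intact */
        return --ctx->time_slice == 0;   /* normal timeslice accounting */
    }

    int main(void)
    {
        struct spu_context ctx = { SCHED_RR, 3 };

        spu_run_exit(&ctx);
        assert(spusched_tick(&ctx) == 1);  /* idle context gets unbound */
        assert(ctx.time_slice == 3);       /* timeslice left for resumption */

        spu_run_enter(&ctx);
        assert(spusched_tick(&ctx) == 0);  /* running context just ticks down */
        assert(ctx.time_slice == 2);
        return 0;
    }
    ```
    
    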
    
    This patch fixes the following test in the spufs-testsuite:
      tests/20-scheduler/02-yield-starvation
    Signed-off-by: Jeremy Kerr <jk@ozlabs.org>