    sched: rework of "prioritize non-migratable tasks over migratable ones" · 20b6331b
    Committed by Dmitry Adamushko
    regarding this commit: 45c01e82
    
    I think we can do it simpler. Please take a look at the patch below.
    
    Instead of having 2 separate arrays (which adds ~800 bytes on x86_32
    and twice that on x86_64), let's add "exclusive" tasks (the ones that
    are bound to this CPU) to the head of the queue and "shared" ones to
    the end.
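
    A minimal sketch of that placement in the rt enqueue path (the
    function and field names are my recollection of the scheduler code of
    that era; group-scheduling and throttling details are omitted, so
    treat this as illustrative rather than the verbatim patch):

    static void __enqueue_rt_entity(struct sched_rt_entity *rt_se)
    {
    	struct rt_rq *rt_rq = rt_rq_of_se(rt_se);
    	struct rt_prio_array *array = &rt_rq->active;
    	struct list_head *queue = array->queue + rt_se_prio(rt_se);

    	/*
    	 * One queue per priority: tasks bound to this CPU
    	 * ("exclusive", nr_cpus_allowed == 1) go to the head so they
    	 * are picked first; migratable ("shared") tasks go to the
    	 * tail, where the push/pull logic can still reach them.
    	 */
    	if (rt_se->nr_cpus_allowed == 1)
    		list_add(&rt_se->run_list, queue);
    	else
    		list_add_tail(&rt_se->run_list, queue);

    	__set_bit(rt_se_prio(rt_se), array->bitmap);
    }

    Note that list_add() at the head is exactly what produces the LIFO
    'stacking' described next.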
    
    When a few "exclusive" tasks wake up in a row, they get 'stacked' (not
    queued, as now), meaning that task {i+1} is placed in front of the
    previously woken task {i} (e.g. bound tasks woken in the order A, B, C
    will run as C, B, A). But I don't think that this behavior can cause
    any realistic problems.
    
    There are a couple of changes on top of this one.
    
    (1) in check_preempt_curr_rt()
    
    I don't think there is a need for the "pick_next_rt_entity(rq, &rq->rt)
    != &rq->curr->rt" check.
    
    enqueue_task_rt(p) and check_preempt_curr_rt() are always called one
    after another with rq->lock being held so the following check
    "p->rt.nr_cpus_allowed == 1 && rq->curr->rt.nr_cpus_allowed != 1" should
    be enough (well, just its left part) to guarantee that 'p' has been
    queued in front of the 'curr'.
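
    So the SMP part of check_preempt_curr_rt() could shrink to something
    like the sketch below (the shape implied by the reasoning above, not a
    verbatim diff):

    static void check_preempt_curr_rt(struct rq *rq, struct task_struct *p)
    {
    	if (p->prio < rq->curr->prio)
    		resched_task(rq->curr);

    #ifdef CONFIG_SMP
    	/*
    	 * 'p' was enqueued under rq->lock just before this call.  If
    	 * 'p' is bound to this CPU, the single-queue rule has already
    	 * placed it in front of an equal-priority, still-migratable
    	 * curr; reschedule so that the push logic can try to move
    	 * curr to another CPU.
    	 */
    	if (p->prio == rq->curr->prio &&
    	    p->rt.nr_cpus_allowed == 1 &&
    	    rq->curr->rt.nr_cpus_allowed != 1)
    		resched_task(rq->curr);
    #endif
    }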
    
    (2) in set_cpus_allowed_rt()
    
    I don't think there is a need for requeue_task_rt() here.
    
    Perhaps the only case when a 'requeue' (+ reschedule) might be useful
    is as follows:

    i) weight == 1 && cpu_isset(task_cpu(p), *new_mask)
       (i.e. the task is being bound to this CPU);

    ii) 'p' != rq->curr
    
    But here, 'p' has already been on this CPU for a while without being
    migrated, i.e. 'rq->curr' would likely not have a high chance of being
    migrated away right at this particular moment (although it might have
    one a bit later), should we allow it to be preempted. A sketch of this
    hypothetical handling follows below.
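
    Purely for illustration, that handling would look roughly like the
    fragment below. It is hypothetical (the whole point above is that it
    is probably not worth adding), and it assumes requeue_task_rt() puts a
    bound task at the head of its queue per the single-queue rule:

    static void set_cpus_allowed_rt(struct task_struct *p,
    				const cpumask_t *new_mask)
    {
    	struct rq *rq = task_rq(p);
    	int weight = cpus_weight(*new_mask);

    	/* ... the existing nr_cpus_allowed bookkeeping ... */

    	/*
    	 * Hypothetical corner case: 'p' has just become bound to
    	 * this CPU while someone else is running.  Requeue 'p' to
    	 * the head of its queue and reschedule, so the push logic
    	 * gets a chance to move curr away while it still can.
    	 */
    	if (weight == 1 && cpu_isset(task_cpu(p), *new_mask) &&
    	    p != rq->curr) {
    		requeue_task_rt(rq, p);
    		resched_task(rq->curr);
    	}
    }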
    
    Anyway, I think we should not make it more complex by trying to
    address some rare corner cases; that is also why the single-queue
    approach is preferable. Unless I'm missing something obvious, this
    approach gives us similar functionality at lower cost.
    
    Verified only compilation-wise.
    
    (Almost)-Signed-off-by: Dmitry Adamushko <dmitry.adamushko@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>