    sched: Add NEED_RESCHED to the preempt_count · f27dde8d
    Committed by Peter Zijlstra
    In order to combine the preemption and need_resched test we need to
    fold the need_resched information into the preempt_count value.
    
    Since the NEED_RESCHED flag is set across CPUs this needs to be an
    atomic operation, however we very much want to avoid making
    preempt_count atomic, therefore we keep the existing TIF_NEED_RESCHED
    infrastructure in place but at 3 sites test it and fold its value into
    preempt_count; namely:
    
     - resched_task() when setting TIF_NEED_RESCHED on the current task
     - scheduler_ipi() when resched_task() sets TIF_NEED_RESCHED on a
                       remote task it follows it up with a reschedule IPI
                       and we can modify the cpu local preempt_count from
                       there.
     - cpu_idle_loop() for when resched_task() found tsk_is_polling().
    
    We use an inverted bitmask to indicate need_resched so that a 0 means
    both need_resched and !atomic.
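
    The inverted encoding can be sketched in userspace C. This is an
    illustrative model, not the kernel's actual implementation: the
    helper names mirror the kernel's (set_preempt_need_resched(),
    preempt_count()), but the bodies and the plain global variable are
    simplified assumptions for demonstration.

    ```c
    #include <assert.h>
    #include <stdio.h>

    /* The NEED_RESCHED bit is stored *inverted* in preempt_count, so a
     * single "count == 0" test means "preempt depth is zero AND a
     * reschedule is needed". */
    #define PREEMPT_NEED_RESCHED 0x80000000u

    /* Bit set = no resched needed; start with no resched pending. */
    static unsigned int preempt_count_raw = PREEMPT_NEED_RESCHED;

    /* Folding TIF_NEED_RESCHED in: clear the inverted bit. */
    static void set_preempt_need_resched(void)
    {
        preempt_count_raw &= ~PREEMPT_NEED_RESCHED;
    }

    /* The preemption depth alone, masking out the inverted flag. */
    static unsigned int preempt_count(void)
    {
        return preempt_count_raw & ~PREEMPT_NEED_RESCHED;
    }

    /* The combined test: zero means depth == 0 and resched needed. */
    static int should_resched(void)
    {
        return preempt_count_raw == 0;
    }

    int main(void)
    {
        /* Depth 0, no resched pending: don't preempt. */
        assert(!should_resched());

        /* Mark a reschedule needed, as resched_task() would. */
        set_preempt_need_resched();
        assert(should_resched());

        /* Inside a preempt-disabled section the single test fails. */
        preempt_count_raw += 1;        /* preempt_disable()            */
        assert(!should_resched());
        assert(preempt_count() == 1);
        preempt_count_raw -= 1;        /* preempt_enable_no_resched()  */
        assert(should_resched());

        printf("ok\n");
        return 0;
    }
    ```

    The payoff is that the hot preemption check collapses to a single
    compare against zero instead of two separate tests.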
    
    Also remove the barrier() in preempt_enable() between
    preempt_enable_no_resched() and preempt_check_resched() to avoid
    having to reload the preemption value and allow the compiler to use
    the flags of the previous decrement. I couldn't come up with any sane
    reason for this barrier() to be there as preempt_enable_no_resched()
    already has a barrier() before doing the decrement.
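
    A hypothetical reconstruction of the macro shapes involved; the
    names match the kernel's but the bodies are simplified stand-ins,
    using a counter to observe that the check still runs.

    ```c
    #include <stdio.h>

    static volatile int preempt_count_var = 1; /* as if preempt_disable()d */
    static int resched_calls;

    /* Compiler barrier (GCC/Clang extended asm). */
    #define barrier() __asm__ __volatile__("" ::: "memory")

    /* Already contains a barrier() *before* the decrement. */
    #define preempt_enable_no_resched() \
        do { barrier(); preempt_count_var--; } while (0)

    #define preempt_check_resched() \
        do { if (preempt_count_var == 0) resched_calls++; } while (0)

    /* After the patch: no barrier() between the decrement and the
     * check, so the compiler may test the condition flags left by the
     * decrement instead of reloading the count. */
    #define preempt_enable() \
        do { preempt_enable_no_resched(); preempt_check_resched(); } while (0)

    int main(void)
    {
        preempt_enable();
        printf("resched_calls=%d\n", resched_calls);
        return 0;
    }
    ```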
    Suggested-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: Peter Zijlstra <peterz@infradead.org>
    Link: http://lkml.kernel.org/n/tip-7a7m5qqbn5pmwnd4wko9u6da@git.kernel.org
    Signed-off-by: Ingo Molnar <mingo@kernel.org>