Commit 8592e648 authored by Tejun Heo, committed by Ingo Molnar

sched: Revert 498657a4

498657a4 incorrectly assumed
that preempt wasn't disabled around context_switch() and thus
was fixing an imaginary problem.  It also broke KVM, because KVM
depends on ->sched_in() being called with irqs enabled so that
it can make SMP calls from there.

Revert the incorrect commit and add a comment describing the
different contexts under which the two callbacks are invoked.

Avi: spotted the transposed in/out in the added comment.
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Avi Kivity <avi@redhat.com>
Cc: peterz@infradead.org
Cc: efault@gmx.de
Cc: rusty@rustcorp.com.au
LKML-Reference: <1259726212-30259-2-git-send-email-tj@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Parent: b7b20df9
@@ -105,6 +105,11 @@ struct preempt_notifier;
  * @sched_out: we've just been preempted
  *    notifier: struct preempt_notifier for the task being preempted
  *    next: the task that's kicking us out
+ *
+ * Please note that sched_in and out are called under different
+ * contexts.  sched_out is called with rq lock held and irq disabled
+ * while sched_in is called without rq lock and irq enabled.  This
+ * difference is intentional and depended upon by its users.
  */
 struct preempt_ops {
 	void (*sched_in)(struct preempt_notifier *notifier, int cpu);
...
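For context, a minimal sketch of a hypothetical preempt_ops user (the names my_sched_in, my_sched_out, my_notifier, my_remote_func and my_register are illustrative, not from this commit; CONFIG_PREEMPT_NOTIFIERS is assumed). It shows why the added comment matters: ->sched_out() runs with the rq lock held and irqs disabled and must stay short and non-blocking, while ->sched_in() runs unlocked with irqs enabled and may therefore make SMP cross-calls, which is what KVM relies on.

/* Hypothetical preempt notifier user (requires CONFIG_PREEMPT_NOTIFIERS). */
#include <linux/preempt.h>
#include <linux/sched.h>
#include <linux/smp.h>

static void my_remote_func(void *info)
{
	/* Runs on other CPUs via an SMP cross-call. */
}

/* ->sched_in(): called without the rq lock and with irqs enabled,
 * so cross-CPU calls such as smp_call_function() are allowed here. */
static void my_sched_in(struct preempt_notifier *notifier, int cpu)
{
	smp_call_function(my_remote_func, NULL, 1);
}

/* ->sched_out(): called with the rq lock held and irqs disabled,
 * so it must not sleep, block, or re-enable interrupts. */
static void my_sched_out(struct preempt_notifier *notifier,
			 struct task_struct *next)
{
	/* Save per-task state quickly and return. */
}

static struct preempt_ops my_preempt_ops = {
	.sched_in  = my_sched_in,
	.sched_out = my_sched_out,
};

static struct preempt_notifier my_notifier;

/* Register the notifier for the current task. */
static void my_register(void)
{
	preempt_notifier_init(&my_notifier, &my_preempt_ops);
	preempt_notifier_register(&my_notifier);
}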
@@ -2768,9 +2768,9 @@ static void finish_task_switch(struct rq *rq, struct task_struct *prev)
 	prev_state = prev->state;
 	finish_arch_switch(prev);
 	perf_event_task_sched_in(current, cpu_of(rq));
-	fire_sched_in_preempt_notifiers(current);
 	finish_lock_switch(rq, prev);
 
+	fire_sched_in_preempt_notifiers(current);
 	if (mm)
 		mmdrop(mm);
 	if (unlikely(prev_state == TASK_DEAD)) {
...
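To make the effect of the restored ordering explicit, a simplified, annotated sketch of finish_task_switch() as it looks after this revert (heavily trimmed; the real function also handles mm refcounting and TASK_DEAD cleanup). The key point is that finish_lock_switch() drops rq->lock and re-enables interrupts, so firing the sched_in notifiers after it gives ->sched_in() the unlocked, irqs-on context documented in the comment above.

/* Simplified sketch of the restored ordering, not the full function. */
static void finish_task_switch_sketch(struct rq *rq, struct task_struct *prev)
{
	finish_arch_switch(prev);
	perf_event_task_sched_in(current, cpu_of(rq));

	/* Drops rq->lock and re-enables interrupts. */
	finish_lock_switch(rq, prev);

	/* From here on we run without rq->lock and with irqs enabled,
	 * so a ->sched_in() handler may safely make SMP calls. */
	fire_sched_in_preempt_notifiers(current);
}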