Commit a76b9fb2 authored by Reinette Chatre, committed by sanglipeng

x86/resctrl: Use task_curr() instead of task_struct->on_cpu to prevent unnecessary IPI

stable inclusion
from stable-v5.10.164
commit 446c7251f007282b5f9dd9853be8d6737cb3c14d
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I7T7G4

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=446c7251f007282b5f9dd9853be8d6737cb3c14d

--------------------------------

[ Upstream commit e0ad6dc8 ]

James reported in [1] that there can briefly be two tasks on the same CPU
with task_struct->on_cpu set. Using task_struct->on_cpu as a test for
whether a task is running on a CPU may thus match the old task for a CPU
while the scheduler is switching tasks, and IPI it unnecessarily.

task_curr() is the correct helper to use. While doing so, move the #ifdef
check of the CONFIG_SMP symbol into a C conditional (IS_ENABLED(CONFIG_SMP))
used to decide whether this helper should be called, so that the code is
always checked for correctness by the compiler rather than being discarded
by the preprocessor.

[1] https://lore.kernel.org/lkml/a782d2f3-d2f6-795f-f4b1-9462205fd581@arm.com

Reported-by: James Morse <james.morse@arm.com>
Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/e9e68ce1441a73401e08b641cc3b9a3cf13fe6d4.1608243147.git.reinette.chatre@intel.com
Stable-dep-of: fe1f0714 ("x86/resctrl: Fix task CLOSID/RMID update race")
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: sanglipeng <sanglipeng1@jd.com>
Parent 6b042ddd
...
@@ -2313,19 +2313,15 @@ static void rdt_move_group_tasks(struct rdtgroup *from, struct rdtgroup *to,
 			t->closid = to->closid;
 			t->rmid = to->mon.rmid;
-#ifdef CONFIG_SMP
 			/*
-			 * This is safe on x86 w/o barriers as the ordering
-			 * of writing to task_cpu() and t->on_cpu is
-			 * reverse to the reading here. The detection is
-			 * inaccurate as tasks might move or schedule
-			 * before the smp function call takes place. In
-			 * such a case the function call is pointless, but
+			 * If the task is on a CPU, set the CPU in the mask.
+			 * The detection is inaccurate as tasks might move or
+			 * schedule before the smp function call takes place.
+			 * In such a case the function call is pointless, but
 			 * there is no other side effect.
 			 */
-			if (mask && t->on_cpu)
+			if (IS_ENABLED(CONFIG_SMP) && mask && task_curr(t))
 				cpumask_set_cpu(task_cpu(t), mask);
-#endif
 		}
 	}
 	read_unlock(&tasklist_lock);
...