Commit 554b0d1c authored by Li Zefan, committed by Tejun Heo

cpuset: inherit ancestor's masks if effective_{cpus, mems} becomes empty

We're going to have separate user-configured masks and effective ones.

Eventually, the configured masks can only be changed by writing cpuset.cpus
and cpuset.mems, and they won't be restricted by the parent cpuset. The
effective masks, on the other hand, reflect cpu/memory hotplug and
hierarchical restriction, and they are the real masks that apply to the
tasks in the cpuset.
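
For reference, the two pairs of masks sit side by side in struct cpuset
(an abridged sketch; the real struct carries many more members):

	struct cpuset {
		...
		/* user-configured masks, changed only by writing cpuset.cpus/mems */
		cpumask_var_t	cpus_allowed;
		nodemask_t	mems_allowed;

		/* effective masks, the ones actually applied to the tasks */
		cpumask_var_t	effective_cpus;
		nodemask_t	effective_mems;
		...
	};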

We calculate the effective masks this way (see the snippet after this list):
  - the top cpuset's effective_mask == online_mask; otherwise
  - a cpuset's effective_mask == configured_mask & parent's effective_mask,
    and if the result is empty, it inherits the parent's effective mask.
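
In code, the non-top-cpuset rule is exactly the pattern the hunks below add
to update_cpumasks_hier() (and, with nodemask operations, to
update_nodemasks_hier()):

	cpumask_and(new_cpus, cp->cpus_allowed, parent->effective_cpus);
	if (cpumask_empty(new_cpus))
		cpumask_copy(new_cpus, parent->effective_cpus);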

These behavior changes are for the default hierarchy only. In the legacy
hierarchy, effective_mask and configured_mask are the same, so we won't
break old interfaces.

To make cs->effective_{cpus,mems} the effective masks, we need to:
  - update the effective masks at hotplug,
  - update the effective masks at config change,
  - take on the ancestor's mask when the effective mask is empty.

The last item is done here.
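
As a concrete illustration (hypothetical numbers): on the default hierarchy,
if a parent's effective_cpus is 0-3 and a child's configured cpuset.cpus is
4-7, the intersection is empty, so the child's effective_cpus becomes 0-3
instead of empty and its tasks keep running on the parent's CPUs.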

This won't introduce any behavior change.
Signed-off-by: Li Zefan <lizefan@huawei.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Parent 734d4513
@@ -877,6 +877,13 @@ static void update_cpumasks_hier(struct cpuset *cs, struct cpumask *new_cpus)
 		cpumask_and(new_cpus, cp->cpus_allowed, parent->effective_cpus);
+		/*
+		 * If it becomes empty, inherit the effective mask of the
+		 * parent, which is guaranteed to have some CPUs.
+		 */
+		if (cpumask_empty(new_cpus))
+			cpumask_copy(new_cpus, parent->effective_cpus);
 		/* Skip the whole subtree if the cpumask remains the same. */
 		if (cpumask_equal(new_cpus, cp->effective_cpus)) {
 			pos_css = css_rightmost_descendant(pos_css);
@@ -1123,6 +1130,13 @@ static void update_nodemasks_hier(struct cpuset *cs, nodemask_t *new_mems)
 		nodes_and(*new_mems, cp->mems_allowed, parent->effective_mems);
+		/*
+		 * If it becomes empty, inherit the effective mask of the
+		 * parent, which is guaranteed to have some MEMs.
+		 */
+		if (nodes_empty(*new_mems))
+			*new_mems = parent->effective_mems;
 		/* Skip the whole subtree if the nodemask remains the same. */
 		if (nodes_equal(*new_mems, cp->effective_mems)) {
 			pos_css = css_rightmost_descendant(pos_css);
@@ -2102,7 +2116,11 @@ static void cpuset_hotplug_update_tasks(struct cpuset *cs)
 	mutex_lock(&callback_mutex);
 	cpumask_andnot(cs->cpus_allowed, cs->cpus_allowed, &off_cpus);
+	/* Inherit the effective mask of the parent, if it becomes empty. */
 	cpumask_andnot(cs->effective_cpus, cs->effective_cpus, &off_cpus);
+	if (on_dfl && cpumask_empty(cs->effective_cpus))
+		cpumask_copy(cs->effective_cpus, parent_cs(cs)->effective_cpus);
 	mutex_unlock(&callback_mutex);
 	/*
@@ -2117,7 +2135,11 @@ static void cpuset_hotplug_update_tasks(struct cpuset *cs)
 	mutex_lock(&callback_mutex);
 	nodes_andnot(cs->mems_allowed, cs->mems_allowed, off_mems);
+	/* Inherit the effective mask of the parent, if it becomes empty */
 	nodes_andnot(cs->effective_mems, cs->effective_mems, off_mems);
+	if (on_dfl && nodes_empty(cs->effective_mems))
+		cs->effective_mems = parent_cs(cs)->effective_mems;
 	mutex_unlock(&callback_mutex);
 	/*