Commit 22259a6e authored by Aneesh Kumar K.V, committed by Michael Ellerman

powerpc/mm/cxl: Add barrier when setting mm cpumask

We need to add a memory barrier so that the page table walk doesn't happen
before the cpumask update is made visible to the other cpus. We need
to use a sync here instead of an lwsync because lwsync is not sufficient for
store/load ordering.

We also need to add an if (mm) check so that we do the right thing when called
with a kernel context, where mm is NULL. For kernel addresses we can skip
setting the mm cpumask.

Fixes: 0f4bc093 ("powerpc/mm/cxl: Add the fault handling cpu to mm cpumask")
Cc: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Reported-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Acked-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Parent 2392c8c8
@@ -141,9 +141,19 @@ int cxl_handle_mm_fault(struct mm_struct *mm, u64 dsisr, u64 dar)
 	/*
 	 * Add the fault handling cpu to task mm cpumask so that we
 	 * can do a safe lockless page table walk when inserting the
-	 * hash page table entry.
+	 * hash page table entry. This function get called with a
+	 * valid mm for user space addresses. Hence using the if (mm)
+	 * check is sufficient here.
 	 */
-	cpumask_set_cpu(smp_processor_id(), mm_cpumask(mm));
+	if (mm && !cpumask_test_cpu(smp_processor_id(), mm_cpumask(mm))) {
+		cpumask_set_cpu(smp_processor_id(), mm_cpumask(mm));
+		/*
+		 * We need to make sure we walk the table only after
+		 * we update the cpumask. The other side of the barrier
+		 * is explained in serialize_against_pte_lookup()
+		 */
+		smp_mb();
+	}
 	if ((result = copro_handle_mm_fault(mm, dar, dsisr, &flt))) {
 		pr_devel("copro_handle_mm_fault failed: %#x\n", result);
 		return result;