Commit d7df2443 authored by Benjamin Herrenschmidt, committed by Michael Ellerman

powerpc/mm: Fix spurious segfaults on radix with autonuma

When autonuma (Automatic NUMA balancing) marks a PTE inaccessible, it
clears all the protection bits but leaves the PTE valid.
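
As an aside, here is a minimal self-contained sketch of what that marking
leaves behind; the PTE_* names and bit positions below are hypothetical
stand-ins for illustration, not the real radix PTE layout:

#include <stdint.h>

/* Hypothetical PTE bits, for illustration only. */
#define PTE_VALID (1u << 0)	/* entry is valid/present */
#define PTE_READ  (1u << 1)	/* protection bits */
#define PTE_WRITE (1u << 2)
#define PTE_EXEC  (1u << 3)

static uint32_t numa_hint_mark(uint32_t pte)
{
	/* Clear all of the protection bits ... */
	pte &= ~(PTE_READ | PTE_WRITE | PTE_EXEC);
	/* ... but leave the valid bit untouched, so the next access takes
	 * a permission-style fault rather than a translation miss. */
	return pte;
}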

With the Radix MMU, an attempt to execute from such a PTE takes a fault
with bit 35 of SRR1 (SRR1_ISI_N_OR_G) set.
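
(For reference: IBM bit numbering counts from the most-significant bit, so
bit 35 of the 64-bit SRR1 is the mask 1UL << (63 - 35) == 0x10000000. The
line below assumes the value introduced by the Fixes: commit referenced
further down; treat it as illustrative, not as the authoritative header.)

/* Illustrative; the kernel's actual definition comes from the Fixes: commit. */
#define SRR1_ISI_N_OR_G	0x10000000UL	/* exec fault: no-execute or guarded */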

It is thus incorrect to treat all such faults as errors. We should
pass them to handle_mm_fault() for autonuma to deal with. The case
of pages that are really not executable is handled by the existing
test for VM_EXEC further down.
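
A hedged sketch of the intended flow after this change (this is not the
actual do_page_fault() body; SRR1_ISI_N_OR_G, VM_EXEC, handle_mm_fault()
and the bad_area label are the real kernel names, but the structure around
them is simplified):

	if (is_exec) {
		/*
		 * Do not reject the fault just because the hardware said
		 * "no-execute or guarded" (SRR1_ISI_N_OR_G): on radix an
		 * autonuma-invalidated PTE produces exactly that, and
		 * handle_mm_fault() must see it so the NUMA hinting fault
		 * can restore access.
		 */
		if (!(vma->vm_flags & VM_EXEC))
			goto bad_area;	/* genuinely not executable */
	}

	/* Valid VMA and permissions: let the generic fault path (and
	 * autonuma) resolve the PTE. */
	fault = handle_mm_fault(vma, address, flags);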

That leaves us with catching the kernel attempts at executing user
pages. We can catch that earlier, even before we do find_vma.

It is never valid on powerpc for the kernel to take an exec fault
to begin with. So fold that test with the existing test for the
kernel faulting on kernel addresses to bail out early.

Fixes: 1d18ad02 ("powerpc/mm: Detect instruction fetch denied and report")
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Acked-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Parent a0615a16
@@ -253,8 +253,11 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
 	if (unlikely(debugger_fault_handler(regs)))
 		goto bail;
 
-	/* On a kernel SLB miss we can only check for a valid exception entry */
-	if (!user_mode(regs) && (address >= TASK_SIZE)) {
+	/*
+	 * The kernel should never take an execute fault nor should it
+	 * take a page fault to a kernel address.
+	 */
+	if (!user_mode(regs) && (is_exec || (address >= TASK_SIZE))) {
 		rc = SIGSEGV;
 		goto bail;
 	}
@@ -390,20 +393,6 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
 #endif /* CONFIG_8xx */
 
 	if (is_exec) {
-		/*
-		 * An execution fault + no execute ?
-		 *
-		 * On CPUs that don't have CPU_FTR_COHERENT_ICACHE we
-		 * deliberately create NX mappings, and use the fault to do the
-		 * cache flush. This is usually handled in hash_page_do_lazy_icache()
-		 * but we could end up here if that races with a concurrent PTE
-		 * update. In that case we need to fall through here to the VMA
-		 * check below.
-		 */
-		if (cpu_has_feature(CPU_FTR_COHERENT_ICACHE) &&
-		    (regs->msr & SRR1_ISI_N_OR_G))
-			goto bad_area;
-
 		/*
 		 * Allow execution from readable areas if the MMU does not
 		 * provide separate controls over reading and executing.