- 21 April 2022, 1 commit
-
-
Committed by Chuanhua Han
Since all processes share the kernel address space, we only need to copy the pgd in case of a vmalloc page fault exception; the other levels of page tables are shared, so copying the pmd is unnecessary. Signed-off-by: Chuanhua Han <hanchuanhua@oppo.com> Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
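A minimal sketch of the simplified handler this describes, assuming the usual riscv fault.c helpers (no_context(), the SATP_PPN mask, pfn_to_virt()); only the top-level pgd entry is synchronized from init_mm:

    static inline void vmalloc_fault(struct pt_regs *regs, int code, unsigned long addr)
    {
        pgd_t *pgd, *pgd_k;
        unsigned long pfn;
        int index;

        /* User-mode accesses to the vmalloc area are always bad. */
        if (user_mode(regs))
            return do_trap(regs, SIGSEGV, code, addr);

        /*
         * Copy this task's pgd entry from the kernel reference table;
         * the lower levels (p4d/pud/pmd/pte) are shared once the pgd
         * entries match, so nothing else needs to be copied.
         */
        index = pgd_index(addr);
        pfn = csr_read(CSR_SATP) & SATP_PPN;
        pgd = (pgd_t *)pfn_to_virt(pfn) + index;
        pgd_k = init_mm.pgd + index;

        if (!pgd_present(*pgd_k)) {
            no_context(regs, addr);
            return;
        }
        set_pgd(pgd, *pgd_k);
    }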
-
- 15 January 2022, 1 commit
-
-
Committed by Qi Zheng
Since commit 4064b982 ("mm: allow VM_FAULT_RETRY for multiple times") allowed VM_FAULT_RETRY for multiple times, the FAULT_FLAG_ALLOW_RETRY bit of fault_flag will not be changed in the page fault path, so the following check is no longer needed: flags & FAULT_FLAG_ALLOW_RETRY. So just remove it. [akpm@linux-foundation.org: coding style fixes] Link: https://lkml.kernel.org/r/20211110123358.36511-1-zhengqi.arch@bytedance.com Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: David Hildenbrand <david@redhat.com> Cc: Kirill Shutemov <kirill@shutemov.name> Cc: Peter Xu <peterx@redhat.com> Cc: Muchun Song <songmuchun@bytedance.com> Cc: Chengming Zhou <zhouchengming@bytedance.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
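In riscv's do_page_fault() this amounts to dropping a now-always-true guard around the retry path; a hedged before/after sketch:

    if (fault & VM_FAULT_RETRY) {
        /*
         * Before this patch the block below was wrapped in
         * 'if (flags & FAULT_FLAG_ALLOW_RETRY)'; since commit
         * 4064b982 that bit is never cleared on retry, so the
         * guard was dead code.
         */
        flags |= FAULT_FLAG_TRIED;
        goto retry;
    }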
-
- 06 January 2022, 1 commit
-
-
Committed by Alexandre Ghiti
We used to define VMALLOC_END as the start of the next region *minus one*, which is inconsistent with the use of this define in the core code (for example, see the definitions of VMALLOC_TOTAL and is_vmalloc_addr). Also make the definition of VMEMMAP_END consistent with VMALLOC_END and all the other regions. Signed-off-by: Alexandre Ghiti <alexandre.ghiti@canonical.com> Reviewed-by: Jisheng Zhang <jszhang@kernel.org> Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
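Illustratively, the change drops the off-by-one so VMALLOC_END is exclusive, matching what the core code assumes; a hedged sketch (the exact neighboring region may differ):

    /* Before: pointed at the last byte of the region. */
    #define VMALLOC_END    (PAGE_OFFSET - 1)

    /* After: the exclusive end, consistent with core code such as
     *   #define VMALLOC_TOTAL (VMALLOC_END - VMALLOC_START)
     */
    #define VMALLOC_END    PAGE_OFFSET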
-
- 14 December 2021, 1 commit
-
-
Committed by Eric W. Biederman
There are two big uses of do_exit. The first is its designed use, to be the guts of the exit(2) system call. The second use is to terminate a task after something catastrophic has happened, like a NULL pointer dereference in kernel code. Add a function make_task_dead that is initially exactly the same as do_exit, to cover the cases where do_exit is called to handle catastrophic failure. In time this can probably be reduced to just a light wrapper around do_task_dead. For now keep it exactly the same so that there will be no behavioral differences introduced along with this new concept. Replace all of the uses of do_exit that use it for catastrophic task cleanup with make_task_dead, to make it clear what the code is doing. As part of this, rename rewind_stack_do_exit to rewind_stack_and_make_dead. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
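A sketch of the new entry point at introduction, deliberately nothing more than a rename-visible wrapper:

    /*
     * Used for catastrophic failures (e.g. a bad page fault taken in
     * kernel mode) instead of the exit(2) path; starts out with
     * exactly do_exit()'s behavior so call sites only change in name.
     */
    void __noreturn make_task_dead(int signr)
    {
        do_exit(signr);
    }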
-
- 01 July 2021, 1 commit
-
-
Committed by Liu Shixin
Add architecture-specific implementation details for KFENCE and enable KFENCE for the riscv64 architecture. In particular, this implements the required interface in <asm/kfence.h>. KFENCE requires that attributes for pages from its memory pool can be set individually, so force the kfence pool to be mapped at page granularity. Tested this patch using the testcases in kfence_test.c; all passed. Signed-off-by: Liu Shixin <liushixin2@huawei.com> Acked-by: Marco Elver <elver@google.com> Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
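The required interface essentially toggles the present bit on a single pool page; a hedged sketch of the riscv <asm/kfence.h> hook (the page-granular pool mapping is what makes the per-page toggle safe):

    static inline bool kfence_protect_page(unsigned long addr, bool protect)
    {
        pte_t *pte = virt_to_kpte(addr);

        if (protect)
            set_pte(pte, __pte(pte_val(*pte) & ~_PAGE_PRESENT));
        else
            set_pte(pte, __pte(pte_val(*pte) | _PAGE_PRESENT));

        /* Flush the stale translation for just this page. */
        local_flush_tlb_page(addr);
        return true;
    }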
-
- 26 April 2021, 1 commit
-
-
Committed by Alexandre Ghiti
This is a preparatory patch for relocatable kernel and sv48 support. The kernel used to be linked at the PAGE_OFFSET address, so we could use the linear mapping for the kernel mapping. But the relocated kernel base address will differ from PAGE_OFFSET, and since two different virtual addresses in the linear mapping cannot point to the same physical address, the kernel mapping needs to lie outside the linear mapping so that we don't have to copy it at the same physical offset. The kernel mapping is moved to the last 2GB of the address space; BPF is now always after the kernel, and modules use the 2GB memory range right before the kernel, so the BPF and module regions do not overlap. The KASLR implementation will simply have to move the kernel within the last 2GB range and take care of leaving enough space for BPF. In addition, by moving the kernel to the end of the address space, both sv39 and sv48 kernels will be exactly the same without needing to be relocated at runtime. Suggested-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Alexandre Ghiti <alex@ghiti.fr> [Palmer: Squash the STRICT_RWX fix, and a !MMU fix] Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
- 16 April 2021, 1 commit
-
-
Committed by Jisheng Zhang
These two functions are used to implement the kprobes feature, so they can't themselves be kprobed. Fixes: c22b0bcb ("riscv: Add kprobes supported") Cc: stable@vger.kernel.org Signed-off-by: Jisheng Zhang <jszhang@kernel.org> Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
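The standard way to exclude such functions is the NOKPROBE_SYMBOL() annotation; a generic sketch (the function name here is hypothetical, since the log above does not quote the two functions):

    #include <linux/kprobes.h>

    /* Runs inside the kprobe breakpoint path itself, so planting a
     * kprobe on it would recurse; keep it off-limits to kprobes. */
    static int kprobe_path_helper(struct pt_regs *regs)
    {
        /* ... */
        return 0;
    }
    NOKPROBE_SYMBOL(kprobe_path_helper);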
-
- 15 January 2021, 2 commits
-
-
Committed by Guo Ren
This patch adds support for uprobes on the riscv architecture. Just like kprobes, it supports single-stepping and instruction simulation. Signed-off-by: Guo Ren <guoren@linux.alibaba.com> Reviewed-by: Pekka Enberg <penberg@kernel.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Palmer Dabbelt <palmerdabbelt@google.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Committed by Guo Ren
This patch enables "kprobe & kretprobe" to work with the ftrace interface. It uses a software breakpoint as the single-step mechanism. Some instructions which can't be single-step executed must be simulated in the kernel execution slot, such as: branch, jal, auipc, la, etc. Some instructions should be rejected for probing, and we use a blacklist to filter them, such as: ecall, ebreak, etc. We use ebreak & c.ebreak to replace the original instruction, and the kprobe handler prepares an executable memory slot for out-of-line execution with a copy of the original instruction being probed. In the execution slot we add an ebreak behind the original instruction to simulate a single-step mechanism. The patch is based on packi's work [1] and csky's work [2]. - The kprobes_trampoline.S is all from packi's patch - The single-step mechanism is newly designed for riscv, which has no hardware single-step trap - The simulation code is from csky - Frankly, all the code refers to other archs' implementations [1] https://lore.kernel.org/linux-riscv/20181113195804.22825-1-me@packi.ch/ [2] https://lore.kernel.org/linux-csky/20200403044150.20562-9-guoren@kernel.org/ Signed-off-by: Guo Ren <guoren@linux.alibaba.com> Co-developed-by: Patrick Stählin <me@packi.ch> Signed-off-by: Patrick Stählin <me@packi.ch> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Tested-by: Zong Li <zong.li@sifive.com> Reviewed-by: Pekka Enberg <penberg@kernel.org> Cc: Patrick Stählin <me@packi.ch> Cc: Palmer Dabbelt <palmerdabbelt@google.com> Cc: Björn Töpel <bjorn.topel@gmail.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
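A sketch of the execution-slot layout this describes:

    /*
     * probed address:  ebreak (or c.ebreak)          <- triggers the probe
     *
     * out-of-line slot:
     *   slot[0] = copy of the original instruction   <- single-stepped here
     *   slot[1] = ebreak                             <- traps back so the
     *                                                   post-handler runs
     *
     * Instructions that can't be stepped out of line (branch, jal,
     * auipc, ...) are simulated in the handler instead.
     */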
-
- 08 January 2021, 2 commits
-
-
Committed by Eric Lin
We found this issue in a legacy out-of-tree kernel module which didn't properly access user-space pointers via get/put_user(). Such an illegal access loops in the page fault handler. To resolve this, let it die here. Signed-off-by: Eric Lin <tesheng@andestech.com> Reviewed-by: Pekka Enberg <penberg@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Committed by Eric Lin
Like arm64, this patch adds a die_kernel_fault() helper to ensure the same semantics for the different kernel faults. Signed-off-by: Eric Lin <tesheng@andestech.com> Reviewed-by: Pekka Enberg <penberg@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
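A hedged sketch of the helper, modeled on the arm64 version it mirrors:

    static void die_kernel_fault(const char *msg, unsigned long addr,
                                 struct pt_regs *regs)
    {
        bust_spinlocks(1);
        pr_alert("Unable to handle kernel %s at virtual address " REG_FMT "\n",
                 msg, addr);
        bust_spinlocks(0);
        die(regs, "Oops");
        do_exit(SIGKILL);
    }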
-
- 06 November 2020, 1 commit
-
-
Committed by Liu Shaohua
The argument to pfn_to_virt() should be a pfn, not the raw value of CSR_SATP. Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com> Reviewed-by: Anup Patel <anup@brainfault.org> Signed-off-by: liush <liush@allwinnertech.com> Reviewed-by: Pekka Enberg <penberg@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
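A before/after sketch of the fix, assuming the SATP_PPN field mask from <asm/csr.h>:

    /* Before: handed the raw CSR value (mode/ASID bits included) to pfn_to_virt(). */
    pgd = (pgd_t *)pfn_to_virt(csr_read(CSR_SATP)) + index;

    /* After: extract the root page table's pfn first. */
    pfn = csr_read(CSR_SATP) & SATP_PPN;
    pgd = (pgd_t *)pfn_to_virt(pfn) + index;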
-
- 16 September 2020, 11 commits
-
-
Committed by Pekka Enberg
If the page fault "cause" is EXC_INST_PAGE_FAULT, set the FAULT_FLAG_INSTRUCTION flag to let handle_mm_fault() and friends know about it. This has no functional changes because RISC-V uses the default arch_vma_access_permitted() implementation, which always returns true. However, dax_pmd_fault(), for example, has a tracepoint that uses FAULT_FLAG_INSTRUCTION, so we might as well set it. Signed-off-by: Pekka Enberg <penberg@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
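The change itself is roughly a one-liner in do_page_fault():

    if (cause == EXC_INST_PAGE_FAULT)
        flags |= FAULT_FLAG_INSTRUCTION;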
-
Committed by Pekka Enberg
The "inline" keyword is in the wrong place in the vmalloc_fault() declaration: >> arch/riscv/mm/fault.c:56:1: warning: 'inline' is not at beginning of declaration [-Wold-style-declaration] 56 | static void inline vmalloc_fault(struct pt_regs *regs, int code, unsigned long addr) | ^~~~~~ Fix that up. Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Pekka Enberg <penberg@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
Committed by Pekka Enberg
Move the access error check into an access_error() function to simplify the control flow in do_page_fault(). Signed-off-by: Pekka Enberg <penberg@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
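A sketch of the extracted helper, assuming the riscv exception cause codes from <asm/csr.h>:

    static inline bool access_error(unsigned long cause, struct vm_area_struct *vma)
    {
        switch (cause) {
        case EXC_INST_PAGE_FAULT:
            return !(vma->vm_flags & VM_EXEC);
        case EXC_LOAD_PAGE_FAULT:
            return !(vma->vm_flags & VM_READ);
        case EXC_STORE_PAGE_FAULT:
            return !(vma->vm_flags & VM_WRITE);
        default:
            panic("%s: unhandled cause %lu", __func__, cause);
        }
        return false;
    }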
-
Committed by Pekka Enberg
Let's handle the translation of EXC_STORE_PAGE_FAULT to FAULT_FLAG_WRITE once, before looking up the VMA. This makes it easier to extract the access error logic in the next patch. Signed-off-by: Pekka Enberg <penberg@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
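Roughly:

    /* Decide the write flag once, before find_vma(); later checks only
     * need to look at 'flags'. */
    if (cause == EXC_STORE_PAGE_FAULT)
        flags |= FAULT_FLAG_WRITE;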
-
Committed by Pekka Enberg
Simplify the mm_fault_error() handling function by eliminating the unnecessary gotos. Signed-off-by: Pekka Enberg <penberg@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Committed by Pekka Enberg
This patch moves the fault error handling to the mm_fault_error() function and converts gotos to calls to the new function. Signed-off-by: Pekka Enberg <penberg@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Committed by Pekka Enberg
Move fault error handling after the retry logic. This simplifies the code flow and makes it easier to move fault error handling to its own function. Signed-off-by: Pekka Enberg <penberg@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Committed by Pekka Enberg
This patch moves the vmalloc fault handling in do_page_fault() to the vmalloc_fault() function and converts gotos to calls to the new function. Signed-off-by: Pekka Enberg <penberg@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Committed by Pekka Enberg
This patch moves the bad area handling in do_page_fault() to the bad_area() function and converts gotos to calls to the new function. Signed-off-by: Pekka Enberg <penberg@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Committed by Pekka Enberg
This patch moves the no-context handling in do_page_fault() to the no_context() function and converts gotos to calls to the new function. Signed-off-by: Pekka Enberg <penberg@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
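A hedged sketch of the extracted helper: try an exception-table fixup first, and oops only for a genuine kernel bug:

    static noinline void no_context(struct pt_regs *regs, unsigned long addr)
    {
        /* Are we prepared to handle this kernel fault? */
        if (fixup_exception(regs))
            return;

        /* Oops: the kernel accessed a bad page. */
        bust_spinlocks(1);
        pr_alert("Unable to handle kernel %s at virtual address " REG_FMT "\n",
                 (addr < PAGE_SIZE) ? "NULL pointer dereference" :
                                      "paging request", addr);
        die(regs, "Oops");
        do_exit(SIGKILL);
    }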
-
Committed by Pekka Enberg
Let's combine the two retry-logic if statements in do_page_fault() to simplify the code. Signed-off-by: Pekka Enberg <penberg@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
- 13 August 2020, 2 commits
-
-
Committed by Peter Xu
Use the general page fault accounting by passing regs into handle_mm_fault(). It naturally solves the issue of multiple page fault accounting when a page fault is retried. Signed-off-by: Peter Xu <peterx@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Pekka Enberg <penberg@kernel.org> Acked-by: Palmer Dabbelt <palmerdabbelt@google.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Albert Ou <aou@eecs.berkeley.edu> Link: http://lkml.kernel.org/r/20200707225021.200906-18-peterx@redhat.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
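After the series, the arch handler's call looks like this:

    /* Passing regs lets the core account the fault (perf events plus
     * the task's maj_flt/min_flt counters) exactly once, on the
     * attempt that completes; retries are no longer double-counted. */
    fault = handle_mm_fault(vma, addr, flags, regs);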
-
Committed by Peter Xu
Patch series "mm: Page fault accounting cleanups", v5.

This is v5 of the page fault accounting cleanup series. It originates from Gerald Schaefer's report a week ago of an issue regarding incorrect page fault accounting for retried page faults after commit 4064b982 ("mm: allow VM_FAULT_RETRY for multiple times"): https://lore.kernel.org/lkml/20200610174811.44b94525@thinkpad/

What this series does:

- Correct page fault accounting: we do accounting for a page fault (no matter whether it's from #PF handling, or gup, or anything else) only on the attempt that completed the fault. For example, page fault retries should not be counted in the page fault counters. The same goes for the perf events.

- Unify the definition of PERF_COUNT_SW_PAGE_FAULTS: currently this perf event is used in an ad-hoc way across different archs. Case (1): for many archs it's done at the entry of a page fault handler, so that it will also cover e.g. erroneous faults. Case (2): for some other archs, it is only accounted when the page fault is resolved successfully. Case (3): there are still quite a few archs that have not enabled this perf event. Since this series touches nearly all the archs, we unify this perf event to always follow case (1), which is the one that makes the most sense. And since we moved the accounting into handle_mm_fault, the other two MAJ/MIN perf events are well taken care of naturally.

- Unify the definition of "major faults": the definition of "major fault" is slightly changed when used in accounting (not VM_FAULT_MAJOR). More information in patch 1.

- Always account the page fault onto the task that triggered it. This does not matter much for #PF handling, but it does for gup. More information on this in patch 25.

Patchset layout: Patch 1: Introduce the accounting in handle_mm_fault(), not enabled. Patches 2-23: Enable the new accounting for arch #PF handlers one by one. Patch 24: Enable the new accounting for the rest of the outliers (gup, iommu, etc.). Patch 25: Clean up the GUP task_struct pointer since it's not needed any more.

This patch (of 25): This is a preparation patch to move page fault accounting into the general code in handle_mm_fault(). This includes both the per-task flt_maj/flt_min counters and the major/minor page fault perf events. To do this, the pt_regs pointer is passed into handle_mm_fault(). PERF_COUNT_SW_PAGE_FAULTS should still be kept in the per-arch page fault handlers. So far, every pt_regs pointer passed into handle_mm_fault() is NULL, which means this patch should have no intended functional change. Suggested-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Peter Xu <peterx@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Albert Ou <aou@eecs.berkeley.edu> Cc: Alexander Gordeev <agordeev@linux.ibm.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Cain <bcain@codeaurora.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Chris Zankel <chris@zankel.net> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: David S. Miller <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com> Cc: Greentime Hu <green.hu@gmail.com> Cc: Guo Ren <guoren@kernel.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru> Cc: James E.J. Bottomley <James.Bottomley@HansenPartnership.com> Cc: John Hubbard <jhubbard@nvidia.com> Cc: Jonas Bonn <jonas@southpole.se> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: "Luck, Tony" <tony.luck@intel.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Simek <monstr@monstr.eu> Cc: Nick Hu <nickhu@andestech.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Pekka Enberg <penberg@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Richard Henderson <rth@twiddle.net> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Vincent Chen <deanbo422@gmail.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Will Deacon <will@kernel.org> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Link: http://lkml.kernel.org/r/20200707225021.200906-1-peterx@redhat.com Link: http://lkml.kernel.org/r/20200707225021.200906-2-peterx@redhat.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 08 August 2020, 1 commit
-
-
Committed by Mike Rapoport
Patch series "mm: cleanup usage of <asm/pgalloc.h>". Most architectures have very similar versions of pXd_alloc_one() and pXd_free_one() for intermediate levels of page table. These patches add generic versions of these functions in <asm-generic/pgalloc.h> and enable use of the generic functions where appropriate. In addition, functions declared and defined in <asm/pgalloc.h> headers are used mostly by core mm and early mm initialization in arch code, and there is no actual reason to have <asm/pgalloc.h> included all over the place. The first patch in this series removes unneeded includes of <asm/pgalloc.h>. In the end it didn't work out as neatly as I hoped and moving pXd_alloc_track() definitions to <asm-generic/pgalloc.h> would require unnecessary changes to arches that have custom page table allocations, so I've decided to move lib/ioremap.c to mm/ and make pgalloc-track.h local to mm/. This patch (of 8): In most cases the <asm/pgalloc.h> header is required only for allocations of page table memory. Most of the .c files that include that header do not use symbols declared in <asm/pgalloc.h> and do not require that header. As for the other header files that used to include <asm/pgalloc.h>, it is possible to move that include into the .c file that actually uses symbols from <asm/pgalloc.h> and drop the include from the header file. The process was somewhat automated using sed -i -E '/[<"]asm\/pgalloc\.h/d' \ $(grep -L -w -f /tmp/xx \ $(git grep -E -l '[<"]asm/pgalloc\.h')) where /tmp/xx contains all the symbols defined in arch/*/include/asm/pgalloc.h. [rppt@linux.ibm.com: fix powerpc warning] Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Pekka Enberg <penberg@kernel.org> Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> [m68k] Cc: Abdul Haleem <abdhalee@linux.vnet.ibm.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Christophe Leroy <christophe.leroy@csgroup.eu> Cc: Joerg Roedel <joro@8bytes.org> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Satheesh Rajendran <sathnaga@linux.vnet.ibm.com> Cc: Stafford Horne <shorne@gmail.com> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Joerg Roedel <jroedel@suse.de> Cc: Matthew Wilcox <willy@infradead.org> Link: http://lkml.kernel.org/r/20200627143453.31835-1-rppt@kernel.org Link: http://lkml.kernel.org/r/20200627143453.31835-2-rppt@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 10 June 2020, 3 commits
-
-
Committed by Michel Lespinasse
Convert comments that reference mmap_sem to reference mmap_lock instead. [akpm@linux-foundation.org: fix up linux-next leftovers] [akpm@linux-foundation.org: s/lockaphore/lock/, per Vlastimil] [akpm@linux-foundation.org: more linux-next fixups, per Michel] Signed-off-by: Michel Lespinasse <walken@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Vlastimil Babka <vbabka@suse.cz> Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com> Cc: Davidlohr Bueso <dbueso@suse.de> Cc: David Rientjes <rientjes@google.com> Cc: Hugh Dickins <hughd@google.com> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: Jerome Glisse <jglisse@redhat.com> Cc: John Hubbard <jhubbard@nvidia.com> Cc: Laurent Dufour <ldufour@linux.ibm.com> Cc: Liam Howlett <Liam.Howlett@oracle.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ying Han <yinghan@google.com> Link: http://lkml.kernel.org/r/20200520052908.204642-13-walken@google.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Michel Lespinasse
Convert comments that reference old mmap_sem APIs to reference the corresponding new mmap locking APIs instead. Signed-off-by: Michel Lespinasse <walken@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Vlastimil Babka <vbabka@suse.cz> Reviewed-by: Davidlohr Bueso <dbueso@suse.de> Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com> Cc: David Rientjes <rientjes@google.com> Cc: Hugh Dickins <hughd@google.com> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: Jerome Glisse <jglisse@redhat.com> Cc: John Hubbard <jhubbard@nvidia.com> Cc: Laurent Dufour <ldufour@linux.ibm.com> Cc: Liam Howlett <Liam.Howlett@oracle.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ying Han <yinghan@google.com> Link: http://lkml.kernel.org/r/20200520052908.204642-12-walken@google.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Michel Lespinasse
This change converts the existing mmap_sem rwsem calls to use the new mmap locking API instead. The change is generated using coccinelle with the following rule: // spatch --sp-file mmap_lock_api.cocci --in-place --include-headers --dir . @@ expression mm; @@ ( -init_rwsem +mmap_init_lock | -down_write +mmap_write_lock | -down_write_killable +mmap_write_lock_killable | -down_write_trylock +mmap_write_trylock | -up_write +mmap_write_unlock | -downgrade_write +mmap_write_downgrade | -down_read +mmap_read_lock | -down_read_killable +mmap_read_lock_killable | -down_read_trylock +mmap_read_trylock | -up_read +mmap_read_unlock ) -(&mm->mmap_sem) +(mm) Signed-off-by: Michel Lespinasse <walken@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com> Reviewed-by: Laurent Dufour <ldufour@linux.ibm.com> Reviewed-by: Vlastimil Babka <vbabka@suse.cz> Cc: Davidlohr Bueso <dbueso@suse.de> Cc: David Rientjes <rientjes@google.com> Cc: Hugh Dickins <hughd@google.com> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: Jerome Glisse <jglisse@redhat.com> Cc: John Hubbard <jhubbard@nvidia.com> Cc: Liam Howlett <Liam.Howlett@oracle.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ying Han <yinghan@google.com> Link: http://lkml.kernel.org/r/20200520052908.204642-5-walken@google.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
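The effect on a typical fault handler, as a before/after sketch:

    /* Before */
    down_read(&mm->mmap_sem);
    vma = find_vma(mm, addr);
    /* ... */
    up_read(&mm->mmap_sem);

    /* After */
    mmap_read_lock(mm);
    vma = find_vma(mm, addr);
    /* ... */
    mmap_read_unlock(mm);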
-
- 03 April 2020, 3 commits
-
-
Committed by Peter Xu
The idea comes from a discussion between Linus and Andrea [1]. Before this patch we only allowed a page fault to retry once. We achieved this by clearing the FAULT_FLAG_ALLOW_RETRY flag when doing handle_mm_fault() the second time. This was mainly used to avoid unexpected starvation of the system by looping forever to handle the page fault on a single page. However that should hardly happen, and after all, for each code path to return a VM_FAULT_RETRY, we'll first wait for a condition (during which time we should possibly yield the cpu) to happen before VM_FAULT_RETRY is really returned.

This patch removes the restriction by keeping the FAULT_FLAG_ALLOW_RETRY flag when we receive VM_FAULT_RETRY. It means that the page fault handler can now retry the page fault multiple times if necessary without the need to generate another page fault event. Meanwhile we still keep the FAULT_FLAG_TRIED flag so the page fault handler can still identify whether a page fault is the first attempt or not. Then we'll have these combinations of fault flags (only considering the ALLOW_RETRY flag and the TRIED flag):

- ALLOW_RETRY and !TRIED: the page fault allows retry, and this is the first try
- ALLOW_RETRY and TRIED: the page fault allows retry, and this is not the first try
- !ALLOW_RETRY and !TRIED: the page fault does not allow retry at all
- !ALLOW_RETRY and TRIED: this is forbidden and should never be used

In existing code we have multiple places that take special care of the first condition above by checking against (fault_flags & FAULT_FLAG_ALLOW_RETRY). This patch introduces a simple helper to detect the first retry of a page fault by checking against both (fault_flags & FAULT_FLAG_ALLOW_RETRY) and !(fault_flags & FAULT_FLAG_TRIED), because now even the 2nd try will have ALLOW_RETRY set, then uses that helper in all existing special paths. One example is in __lock_page_or_retry(): now we'll drop the mmap_sem only on the first attempt of a page fault and keep it in follow-up retries, so the old locking behavior is retained. This will be a nice enhancement for the current code [2] and at the same time supporting material for the future userfaultfd-writeprotect work, since in that work there will always be an explicit userfault writeprotect retry for protected pages, and if that cannot resolve the page fault (e.g., when userfaultfd-writeprotect is used in conjunction with swapped pages) then we'll possibly need a 3rd retry of the page fault. It might also benefit other potential users with a similar requirement, like userfault write-protection. GUP code is not touched yet and will be covered in a follow-up patch. Please read the thread below for more information. [1] https://lore.kernel.org/lkml/20171102193644.GB22686@redhat.com/ [2] https://lore.kernel.org/lkml/20181230154648.GB9832@redhat.com/ Suggested-by: Linus Torvalds <torvalds@linux-foundation.org> Suggested-by: Andrea Arcangeli <aarcange@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Tested-by: Brian Geffon <bgeffon@google.com> Cc: Bobby Powers <bobbypowers@gmail.com> Cc: David Hildenbrand <david@redhat.com> Cc: Denis Plotnikov <dplotnikov@virtuozzo.com> Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com> Cc: Hugh Dickins <hughd@google.com> Cc: Jerome Glisse <jglisse@redhat.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: "Kirill A. Shutemov" <kirill@shutemov.name> Cc: Martin Cracauer <cracauer@cons.org> Cc: Marty McFadden <mcfadden8@llnl.gov> Cc: Matthew Wilcox <willy@infradead.org> Cc: Maya Gokhale <gokhale2@llnl.gov> Cc: Mel Gorman <mgorman@suse.de> Cc: Mike Kravetz <mike.kravetz@oracle.com> Cc: Mike Rapoport <rppt@linux.vnet.ibm.com> Cc: Pavel Emelyanov <xemul@openvz.org> Link: http://lkml.kernel.org/r/20200220160246.9790-1-peterx@redhat.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Committed by Peter Xu
Although there are tons of arch-specific page fault handlers, most of them share the same initial value of the page fault flags. Say, nearly all of the page fault handlers allow the fault to be retried, and they also allow the fault to respond to SIGKILL. Let's define a default value for the fault flags to replace those initial page fault flags that were copied over. With this, it'll be far easier to introduce a new fault flag that can be used by all the architectures, instead of touching all the archs. Signed-off-by: Peter Xu <peterx@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Tested-by: Brian Geffon <bgeffon@google.com> Reviewed-by: David Hildenbrand <david@redhat.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Bobby Powers <bobbypowers@gmail.com> Cc: Denis Plotnikov <dplotnikov@virtuozzo.com> Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com> Cc: Hugh Dickins <hughd@google.com> Cc: Jerome Glisse <jglisse@redhat.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: "Kirill A. Shutemov" <kirill@shutemov.name> Cc: Martin Cracauer <cracauer@cons.org> Cc: Marty McFadden <mcfadden8@llnl.gov> Cc: Matthew Wilcox <willy@infradead.org> Cc: Maya Gokhale <gokhale2@llnl.gov> Cc: Mel Gorman <mgorman@suse.de> Cc: Mike Kravetz <mike.kravetz@oracle.com> Cc: Mike Rapoport <rppt@linux.vnet.ibm.com> Cc: Pavel Emelyanov <xemul@openvz.org> Link: http://lkml.kernel.org/r/20200220160238.9694-1-peterx@redhat.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
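The resulting default, roughly as defined in <linux/mm.h>; arch handlers start from this instead of open-coding the same bits:

    #define FAULT_FLAG_DEFAULT    (FAULT_FLAG_ALLOW_RETRY | \
                                   FAULT_FLAG_KILLABLE | \
                                   FAULT_FLAG_INTERRUPTIBLE)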
-
Committed by Peter Xu
For most architectures, we've got a quick path to detect a fatal signal after a handle_mm_fault(). Introduce a helper for that quick path. It cleans up the current code a bit so we don't need to duplicate the same check across archs. More importantly, this will be a unified place where we handle the signal immediately right after an interrupted page fault, so it'll be much easier for us if we want to change the behavior of handling signals later on for all the archs. Note that currently only part of the archs use this new helper, because some archs have their own way to handle signals. In the follow-up patches, we'll try to apply this helper to all the rest of the archs. Another note is that the "regs" parameter in the new helper is not used yet. It'll be used very soon. For now we keep it in this patch only to avoid touching all the archs again in the follow-up patches. [peterx@redhat.com: fix sparse warnings] Link: http://lkml.kernel.org/r/20200311145921.GD479302@xz-x1 Signed-off-by: Peter Xu <peterx@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Tested-by: Brian Geffon <bgeffon@google.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Bobby Powers <bobbypowers@gmail.com> Cc: David Hildenbrand <david@redhat.com> Cc: Denis Plotnikov <dplotnikov@virtuozzo.com> Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com> Cc: Hugh Dickins <hughd@google.com> Cc: Jerome Glisse <jglisse@redhat.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: "Kirill A. Shutemov" <kirill@shutemov.name> Cc: Martin Cracauer <cracauer@cons.org> Cc: Marty McFadden <mcfadden8@llnl.gov> Cc: Matthew Wilcox <willy@infradead.org> Cc: Maya Gokhale <gokhale2@llnl.gov> Cc: Mel Gorman <mgorman@suse.de> Cc: Mike Kravetz <mike.kravetz@oracle.com> Cc: Mike Rapoport <rppt@linux.vnet.ibm.com> Cc: Pavel Emelyanov <xemul@openvz.org> Link: http://lkml.kernel.org/r/20200220155353.8676-4-peterx@redhat.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
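A sketch of the helper at this point in the series; per the note above, "regs" is threaded through but not yet consulted:

    static inline bool fault_signal_pending(vm_fault_t fault_flags,
                                            struct pt_regs *regs)
    {
        /* 'regs' is deliberately unused for now; it is kept to avoid
         * re-touching every arch in the follow-up patches. */
        return unlikely((fault_flags & VM_FAULT_RETRY) &&
                        fatal_signal_pending(current));
    }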
-
- 06 November 2019, 1 commit
-
-
Committed by Christoph Hellwig
Many of the privileged CSRs exist in supervisor and machine versions that are used very similarly. Provide versions of the CSR names and fields that map to either the S-mode or M-mode variant depending on a new CONFIG_RISCV_M_MODE kconfig symbol. Contains contributions from Damien Le Moal <Damien.LeMoal@wdc.com> and Paul Walmsley <paul.walmsley@sifive.com>. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Thomas Gleixner <tglx@linutronix.de> # for drivers/clocksource, drivers/irqchip [paul.walmsley@sifive.com: updated to apply] Signed-off-by: Paul Walmsley <paul.walmsley@sifive.com>
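A sketch of the resulting aliases (cf. arch/riscv/include/asm/csr.h):

    #ifdef CONFIG_RISCV_M_MODE
    # define CSR_STATUS    CSR_MSTATUS
    # define CSR_CAUSE     CSR_MCAUSE
    # define CSR_TVAL      CSR_MTVAL
    #else
    # define CSR_STATUS    CSR_SSTATUS
    # define CSR_CAUSE     CSR_SCAUSE
    # define CSR_TVAL      CSR_STVAL
    #endif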
-
- 28 October 2019, 1 commit
-
-
Committed by Paul Walmsley
Add prototypes for assembly-language functions defined in head.S, and include these prototypes in the C source files that call those functions. This patch resolves the following warnings from sparse: arch/riscv/kernel/setup.c:39:10: warning: symbol 'hart_lottery' was not declared. Should it be static? arch/riscv/kernel/setup.c:42:13: warning: symbol 'parse_dtb' was not declared. Should it be static? arch/riscv/kernel/smpboot.c:33:6: warning: symbol '__cpu_up_stack_pointer' was not declared. Should it be static? arch/riscv/kernel/smpboot.c:34:6: warning: symbol '__cpu_up_task_pointer' was not declared. Should it be static? arch/riscv/mm/fault.c:25:17: warning: symbol 'do_page_fault' was not declared. Should it be static? This change should have no functional impact. Signed-off-by: Paul Walmsley <paul.walmsley@sifive.com>
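The shape of the added declarations, hedged from the sparse warnings quoted above (exact types illustrative):

    /* arch/riscv/kernel/head.h (sketch) */
    extern atomic_t hart_lottery;
    void __init parse_dtb(void);
    extern void *__cpu_up_stack_pointer[NR_CPUS];
    extern void *__cpu_up_task_pointer[NR_CPUS];

    /* and, visible to arch/riscv/mm/fault.c: */
    asmlinkage void do_page_fault(struct pt_regs *regs);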
-
- 27 June 2019, 1 commit
-
-
Committed by ShihPo Hung
Fix the comment, since vmalloc_fault doesn't reach flush_tlb_fix_spurious_fault. Signed-off-by: ShihPo Hung <shihpo.hung@sifive.com> Cc: Palmer Dabbelt <palmer@sifive.com> Cc: Albert Ou <aou@eecs.berkeley.edu> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: linux-riscv@lists.infradead.org Reviewed-by: Palmer Dabbelt <palmer@sifive.com> Signed-off-by: Paul Walmsley <paul.walmsley@sifive.com>
-
- 17 June 2019, 1 commit
-
-
Committed by ShihPo Hung
Because RISC-V compliant implementations can cache invalid entries in the TLB, an SFENCE.VMA is necessary after changes to the page table. This patch adds an SFENCE.VMA for the vmalloc_fault path. Signed-off-by: ShihPo Hung <shihpo.hung@sifive.com> [paul.walmsley@sifive.com: reversed tab->whitespace conversion, wrapped comment lines] Signed-off-by: Paul Walmsley <paul.walmsley@sifive.com> Cc: Palmer Dabbelt <palmer@sifive.com> Cc: Albert Ou <aou@eecs.berkeley.edu> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: linux-riscv@lists.infradead.org Cc: stable@vger.kernel.org
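The added flush at the end of the vmalloc fault path, roughly:

    /*
     * The kernel assumes that TLBs don't cache invalid entries, but on
     * RISC-V, SFENCE.VMA specifies an ordering constraint rather than a
     * cache flush, so it is needed even after installing a previously
     * invalid entry.
     */
    local_flush_tlb_page(addr);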
-
- 29 May 2019, 1 commit
-
-
Committed by Eric W. Biederman
The do_trap function is always called with tsk == current. Make that obvious by removing the tsk parameter. This also makes it clear that do_trap calls force_sig_fault on the current task. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
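The signature change, sketched:

    /* Before */
    void do_trap(struct pt_regs *regs, int signo, int code,
                 unsigned long addr, struct task_struct *tsk);

    /* After: always acts on 'current' */
    void do_trap(struct pt_regs *regs, int signo, int code, unsigned long addr);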
-
- 24 May 2019, 1 commit
-
-
Committed by Thomas Gleixner
Based on 1 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license as published by the free software foundation either version 2 of the license or at your option any later version this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not see the file copying or write to the free software foundation inc. Extracted by the scancode license scanner, the SPDX license identifier GPL-2.0-or-later has been chosen to replace the boilerplate/reference in 12 file(s). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Richard Fontana <rfontana@redhat.com> Reviewed-by: Allison Randal <allison@lohutok.net> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190523091651.231300438@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 17 May 2019, 2 commits
-
-
Committed by Andreas Schwab
When a user-mode process accesses an address in the vmalloc area, do_page_fault tries to unlock the mmap semaphore when it isn't locked. Signed-off-by: Andreas Schwab <schwab@suse.de> [Palmer: Duplicated code instead of a goto] Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
-
Committed by Anup Patel
We should prefer accessing CSRs using their CSR numbers because: 1. It compiles fine with older toolchains. 2. We can use the latest CSR names in #define macro names of CSR numbers, as per the RISC-V spec. 3. We can access newly added CSRs even if the toolchain does not recognize them by name. Signed-off-by: Anup Patel <anup.patel@wdc.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
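Illustrative CSR numbers (values per the privileged spec) and an access by number; csr_read() accepts either form since it stringifies its argument:

    #define CSR_SSTATUS    0x100
    #define CSR_STVEC      0x105
    #define CSR_SCAUSE     0x142

    unsigned long cause = csr_read(CSR_SCAUSE);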
-