- 04 May 2020, 3 commits
-
-
By Mark Brown
Now that we have converted arm64 over to the new-style SYM_ assembler annotations, select ARCH_USE_SYM_ANNOTATIONS so the old macros aren't available and we don't regress. Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20200501115430.37315-4-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
-
By Mark Brown
In an effort to clarify and simplify the annotation of assembly functions in the kernel, new macros have been introduced. These replace ENTRY and ENDPROC and also add a new annotation for static functions, which previously had no ENTRY equivalent. Update the annotations in the core kernel code to the new macros. Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20200501115430.37315-3-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
-
By Mark Brown
As part of an effort to clarify and clean up the assembler annotations, new macros have been introduced which annotate the start and end of blocks of code in assembler files. Currently ret_to_user has an out-of-line slow path, work_pending, placed above the main function, which makes annotating the start and end of these blocks of code awkward. Since work_pending is only referenced from within ret_to_user, try to make things a bit clearer by moving it after the current ret_to_user and then marking both ret_to_user and work_pending as part of a single ret_to_user code block. Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20200501115430.37315-2-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
-
- 23 April 2020, 1 commit
-
-
By Masahiro Yamada
As the bug report [1] pointed out, <linux/vermagic.h> must be included after <linux/module.h>. I believe we should not impose any include order restriction. We often sort include directives alphabetically, but it is just coding style convention. Technically, we can include header files in any order by making every header self-contained. Currently, arch-specific MODULE_ARCH_VERMAGIC is defined in <asm/module.h>, which is not included from <linux/vermagic.h>. Hence, the straight-forward fix-up would be as follows: |--- a/include/linux/vermagic.h |+++ b/include/linux/vermagic.h |@@ -1,5 +1,6 @@ | /* SPDX-License-Identifier: GPL-2.0 */ | #include <generated/utsrelease.h> |+#include <linux/module.h> | | /* Simply sanity version stamp for modules. */ | #ifdef CONFIG_SMP This works enough, but for further cleanups, I split MODULE_ARCH_VERMAGIC definitions into <asm/vermagic.h>. With this, <linux/module.h> and <linux/vermagic.h> will be orthogonal, and the location of MODULE_ARCH_VERMAGIC definitions will be consistent. For arc and ia64, MODULE_PROC_FAMILY is only used for defining MODULE_ARCH_VERMAGIC. I squashed it. For hexagon, nds32, and xtensa, I removed <asm/modules.h> entirely because they contained nothing but MODULE_ARCH_VERMAGIC definition. Kbuild will automatically generate <asm/modules.h> at build-time, wrapping <asm-generic/module.h>. [1] https://lore.kernel.org/lkml/20200411155623.GA22175@zn.tnicReported-by: NBorislav Petkov <bp@suse.de> Signed-off-by: NMasahiro Yamada <masahiroy@kernel.org> Acked-by: NJessica Yu <jeyu@kernel.org>
-
- 21 April 2020, 1 commit
-
-
By Mark Rutland
A direct write to an APxxKey_EL1 register requires a context synchronization event to ensure that indirect reads made by subsequent instructions (e.g. AUTIASP, PACIASP) observe the new value. When we initialize the boot task's APIAKey in boot_init_stack_canary() via ptrauth_keys_switch_kernel() we miss the necessary ISB, and so there is a window where instructions are not guaranteed to use the new APIAKey value. This has been observed to result in boot-time crashes where PACIASP and AUTIASP within a function used a mixture of the old and new key values. Fix this by having ptrauth_keys_switch_kernel() synchronize the new key value with an ISB. At the same time, __ptrauth_key_install() is renamed to __ptrauth_key_install_nosync() so that it is obvious that this performs no synchronization itself. Fixes: 28321582 ("arm64: initialize ptrauth keys for kernel booting task") Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reported-by: Will Deacon <will@kernel.org> Cc: Amit Daniel Kachhap <amit.kachhap@arm.com> Cc: Marc Zyngier <maz@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Will Deacon <will@kernel.org>
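A hedged sketch of the fix described above (simplified from the description; macro and register names follow the arm64 conventions, but this is not the verbatim patch):

    /* Install a key with no synchronization; the caller must issue an ISB. */
    #define __ptrauth_key_install_nosync(k, v)                          \
    do {                                                                \
            struct ptrauth_key __pki_v = (v);                           \
            write_sysreg_s(__pki_v.lo, SYS_ ## k ## KEYLO_EL1);         \
            write_sysreg_s(__pki_v.hi, SYS_ ## k ## KEYHI_EL1);         \
    } while (0)

    static __always_inline void ptrauth_keys_switch_kernel(struct ptrauth_keys_kernel *keys)
    {
            if (IS_ENABLED(CONFIG_ARM64_PTR_AUTH))
                    __ptrauth_key_install_nosync(APIA, keys->apia);
            isb();  /* subsequent PACIASP/AUTIASP are now guaranteed to see the new APIAKey */
    }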
-
- 15 April 2020, 2 commits
-
-
By Fangrui Song
In assembly, many instances of __emit_inst(x) expand to a directive. In a few places __emit_inst(x) is used as an assembler macro argument. For example, in arch/arm64/kvm/hyp/entry.S ALTERNATIVE(nop, SET_PSTATE_PAN(1), ARM64_HAS_PAN, CONFIG_ARM64_PAN) expands to the following by the C preprocessor: alternative_insn nop, .inst (0xd500401f | ((0) << 16 | (4) << 5) | ((!!1) << 8)), 4, 1 Both comma and space are separators, with an exception that content inside a pair of parentheses/quotes is not split, so the clang integrated assembler splits the arguments to: nop, .inst, (0xd500401f | ((0) << 16 | (4) << 5) | ((!!1) << 8)), 4, 1 GNU as preprocesses the input with do_scrub_chars(). Its arm64 backend (along with many other non-x86 backends) sees: alternative_insn nop,.inst(0xd500401f|((0)<<16|(4)<<5)|((!!1)<<8)),4,1 # .inst(...) is parsed as one argument while its x86 backend sees: alternative_insn nop,.inst (0xd500401f|((0)<<16|(4)<<5)|((!!1)<<8)),4,1 # The extra space before '(' makes the whole .inst (...) parsed as two arguments The non-x86 backend's behavior is considered unintentional (https://sourceware.org/bugzilla/show_bug.cgi?id=25750). So drop the space separator inside `.inst (...)` to make the clang integrated assembler work. Suggested-by: NIlie Halip <ilie.halip@gmail.com> Signed-off-by: NFangrui Song <maskray@google.com> Reviewed-by: NMark Rutland <mark.rutland@arm.com> Link: https://github.com/ClangBuiltLinux/linux/issues/939Signed-off-by: NCatalin Marinas <catalin.marinas@arm.com>
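Illustratively, the fix boils down to dropping the space in the macro expansion (a sketch in the '|' diff style used elsewhere in this log; the macro is assumed to live in the arm64 sysreg header):

    |-#define __emit_inst(x)    .inst (x)
    |+#define __emit_inst(x)    .inst(x)

Without the space, both GNU as and the clang integrated assembler treat the whole .inst(...) expression as a single macro argument.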
-
By Mark Rutland
The aarch32_vdso_pages[] array never has entries allocated in the C_VVAR or C_VDSO slots, and as the array is zero initialized these contain NULL. However in __aarch32_alloc_vdso_pages() when aarch32_alloc_kuser_vdso_page() fails we attempt to free the page whose struct page is at NULL, which is obviously nonsensical. This patch removes the erroneous page freeing. Fixes: 7c1deeeb ("arm64: compat: VDSO setup for compat layer") Cc: <stable@vger.kernel.org> # 5.3.x- Cc: Vincenzo Frascino <vincenzo.frascino@arm.com> Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 11 April 2020, 5 commits
-
-
By Logan Gunthorpe
devm_memremap_pages() is currently used by the PCI P2PDMA code to create struct page mappings for IO memory. At present, these mappings are created with PAGE_KERNEL which implies setting the PAT bits to be WB. However, on x86, an mtrr register will typically override this and force the cache type to be UC-. In the case firmware doesn't set this register it is effectively WB and will typically result in a machine check exception when it's accessed. Other arches are not currently likely to function correctly seeing they don't have any MTRR registers to fall back on. To solve this, provide a way to specify the pgprot value explicitly to arch_add_memory(). Of the arches that support MEMORY_HOTPLUG: x86_64, and arm64 need a simple change to pass the pgprot_t down to their respective functions which set up the page tables. For x86_32, set the page tables explicitly using _set_memory_prot() (seeing they are already mapped). For ia64, s390 and sh, reject anything but PAGE_KERNEL settings -- this should be fine, for now, seeing these architectures don't support ZONE_DEVICE. A check in __add_pages() is also added to ensure the pgprot parameter was set for all arches. Signed-off-by: NLogan Gunthorpe <logang@deltatee.com> Signed-off-by: NAndrew Morton <akpm@linux-foundation.org> Acked-by: NDavid Hildenbrand <david@redhat.com> Acked-by: NMichal Hocko <mhocko@suse.com> Acked-by: NDan Williams <dan.j.williams@intel.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Eric Badger <ebadger@gigaio.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will@kernel.org> Link: http://lkml.kernel.org/r/20200306170846.9333-7-logang@deltatee.comSigned-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
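A hedged sketch of the plumbing this describes, using the mhp_params structure from the next entry below (the field layout and the arm64 call are assumptions based on the description, not the exact patch):

    struct mhp_params {
            struct vmem_altmap *altmap;
            pgprot_t pgprot;        /* protection for the new linear mapping */
    };

    /* arm64: honour the caller-supplied pgprot instead of assuming PAGE_KERNEL */
    int arch_add_memory(int nid, u64 start, u64 size, struct mhp_params *params)
    {
            __create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
                                 size, params->pgprot, __pgd_pgtable_alloc,
                                 0 /* mapping flags omitted for brevity */);
            return __add_pages(nid, start >> PAGE_SHIFT, size >> PAGE_SHIFT, params);
    }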
-
By Logan Gunthorpe
The mhp_restrictions struct really doesn't specify anything resembling a restriction anymore so rename it to be mhp_params as it is a list of extended parameters. Signed-off-by: NLogan Gunthorpe <logang@deltatee.com> Signed-off-by: NAndrew Morton <akpm@linux-foundation.org> Reviewed-by: NDavid Hildenbrand <david@redhat.com> Reviewed-by: NDan Williams <dan.j.williams@intel.com> Acked-by: NMichal Hocko <mhocko@suse.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Eric Badger <ebadger@gigaio.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will@kernel.org> Link: http://lkml.kernel.org/r/20200306170846.9333-3-logang@deltatee.comSigned-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
-
By Anshuman Khandual
There are many places where all basic VMA access flags (read, write, exec) are initialized or checked against as a group. One such example is during page fault. Existing vma_is_accessible() wrapper already creates the notion of VMA accessibility as a group access permissions. Hence lets just create VM_ACCESS_FLAGS (VM_READ|VM_WRITE|VM_EXEC) which will not only reduce code duplication but also extend the VMA accessibility concept in general. Signed-off-by: NAnshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: NAndrew Morton <akpm@linux-foundation.org> Reviewed-by: NVlastimil Babka <vbabka@suse.cz> Cc: Russell King <linux@armlinux.org.uk> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Mark Salter <msalter@redhat.com> Cc: Nick Hu <nickhu@andestech.com> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Rob Springer <rspringer@google.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Link: http://lkml.kernel.org/r/1583391014-8170-3-git-send-email-anshuman.khandual@arm.comSigned-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
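The new macro is spelled out in the text above; a minimal sketch of the definition together with an illustrative fault-path check:

    #define VM_ACCESS_FLAGS (VM_READ | VM_WRITE | VM_EXEC)

    /* e.g. in an arch page-fault handler: */
    if (!(vma->vm_flags & VM_ACCESS_FLAGS))
            goto bad_area;          /* the VMA is not accessible at all */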
-
By Anshuman Khandual
There are many platforms with exact same value for VM_DATA_DEFAULT_FLAGS This creates a default value for VM_DATA_DEFAULT_FLAGS in line with the existing VM_STACK_DEFAULT_FLAGS. While here, also define some more macros with standard VMA access flag combinations that are used frequently across many platforms. Apart from simplification, this reduces code duplication as well. Signed-off-by: NAnshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: NAndrew Morton <akpm@linux-foundation.org> Reviewed-by: NVlastimil Babka <vbabka@suse.cz> Acked-by: NGeert Uytterhoeven <geert@linux-m68k.org> Cc: Richard Henderson <rth@twiddle.net> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Mark Salter <msalter@redhat.com> Cc: Guo Ren <guoren@kernel.org> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Cc: Brian Cain <bcain@codeaurora.org> Cc: Tony Luck <tony.luck@intel.com> Cc: Michal Simek <monstr@monstr.eu> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Paul Burton <paulburton@kernel.org> Cc: Nick Hu <nickhu@andestech.com> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: Jonas Bonn <jonas@southpole.se> Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Rich Felker <dalias@libc.org> Cc: "David S. Miller" <davem@davemloft.net> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Jeff Dike <jdike@addtoit.com> Cc: Chris Zankel <chris@zankel.net> Link: http://lkml.kernel.org/r/1583391014-8170-2-git-send-email-anshuman.khandual@arm.comSigned-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
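A hedged sketch of the idea; the names of the shared combinations are assumptions, the point being one common default that architectures opt into instead of duplicating the same flag soup:

    /* include/linux/mm.h (sketch) */
    #define VM_DATA_FLAGS_TSK_EXEC  (VM_READ | VM_WRITE | TASK_EXEC | \
                                     VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)

    /* an architecture then simply picks the combination it needs */
    #define VM_DATA_DEFAULT_FLAGS   VM_DATA_FLAGS_TSK_EXEC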
-
By Roman Gushchin
Commit 944d9fec ("hugetlb: add support for gigantic page allocation at runtime") has added the run-time allocation of gigantic pages. However it actually works only at early stages of the system loading, when the majority of memory is free. After some time the memory gets fragmented by non-movable pages, so the chances to find a contiguous 1GB block are getting close to zero. Even dropping caches manually doesn't help a lot. At large scale rebooting servers in order to allocate gigantic hugepages is quite expensive and complex. At the same time keeping some constant percentage of memory in reserved hugepages even if the workload isn't using it is a big waste: not all workloads can benefit from using 1 GB pages. The following solution can solve the problem: 1) On boot time a dedicated cma area* is reserved. The size is passed as a kernel argument. 2) Run-time allocations of gigantic hugepages are performed using the cma allocator and the dedicated cma area In this case gigantic hugepages can be allocated successfully with a high probability, however the memory isn't completely wasted if nobody is using 1GB hugepages: it can be used for pagecache, anon memory, THPs, etc. * On a multi-node machine a per-node cma area is allocated on each node. Following gigantic hugetlb allocation are using the first available numa node if the mask isn't specified by a user. Usage: 1) configure the kernel to allocate a cma area for hugetlb allocations: pass hugetlb_cma=10G as a kernel argument 2) allocate hugetlb pages as usual, e.g. echo 10 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages If the option isn't enabled or the allocation of the cma area failed, the current behavior of the system is preserved. x86 and arm-64 are covered by this patch, other architectures can be trivially added later. The patch contains clean-ups and fixes proposed and implemented by Aslan Bakirov and Randy Dunlap. It also contains ideas and suggestions proposed by Rik van Riel, Michal Hocko and Mike Kravetz. Thanks! Signed-off-by: NRoman Gushchin <guro@fb.com> Signed-off-by: NAndrew Morton <akpm@linux-foundation.org> Tested-by: NAndreas Schaufler <andreas.schaufler@gmx.de> Acked-by: NMike Kravetz <mike.kravetz@oracle.com> Acked-by: NMichal Hocko <mhocko@kernel.org> Cc: Aslan Bakirov <aslan@fb.com> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: Rik van Riel <riel@surriel.com> Cc: Joonsoo Kim <js1304@gmail.com> Link: http://lkml.kernel.org/r/20200407163840.92263-3-guro@fb.comSigned-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
-
- 09 April 2020, 1 commit
-
-
By Fredrik Strupe
For thumb instructions, call_undef_hook() in traps.c first reads a u16, and if the u16 indicates a T32 instruction (u16 >= 0xe800), a second u16 is read, which then makes up the lower half-word of a T32 instruction. For T16 instructions, the second u16 is not read, which makes the resulting u32 opcode always have the upper half set to 0. However, having the upper half of instr_mask in the undef_hook set to 0 masks out the upper half of all thumb instructions - both T16 and T32. This results in trapped T32 instructions with the lower half-word equal to the T16 encoding of setend (b650) being matched, even though the upper half-word is not 0000 and thus indicates a T32 opcode. An example of such a T32 instruction is eaa0b650, which should raise a SIGILL since T32 instructions with an eaa prefix are unallocated as per the Arm ARM, but instead works as a SETEND because the second half-word is set to b650. This patch fixes the issue by extending instr_mask to include the upper u32 half, which will still match T16 instructions where the upper half is 0, but not T32 instructions. Fixes: 2d888f48 ("arm64: Emulate SETEND for AArch32 tasks") Cc: <stable@vger.kernel.org> # 4.0.x- Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Fredrik Strupe <fredrik@strupe.net> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
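A hedged sketch of what the fix looks like in the SETEND undef_hook; only the two fields discussed above are shown, and the values are illustrative rather than copied from the patch:

    static struct undef_hook setend_hooks[] = {
            {
                    .instr_mask     = 0xffffffff,   /* compare the full 32-bit opcode */
                    .instr_val      = 0x0000b650,   /* T16 SETEND; T32 opcodes with a non-zero upper half no longer match */
                    /* .pstate_mask, .pstate_val and .fn unchanged */
            },
            { }
    };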
-
- 03 April 2020, 4 commits
-
-
By Peter Xu
The idea comes from a discussion between Linus and Andrea [1]. Before this patch we only allow a page fault to retry once. We achieved this by clearing the FAULT_FLAG_ALLOW_RETRY flag when doing handle_mm_fault() the second time. This was majorly used to avoid unexpected starvation of the system by looping over forever to handle the page fault on a single page. However that should hardly happen, and after all for each code path to return a VM_FAULT_RETRY we'll first wait for a condition (during which time we should possibly yield the cpu) to happen before VM_FAULT_RETRY is really returned. This patch removes the restriction by keeping the FAULT_FLAG_ALLOW_RETRY flag when we receive VM_FAULT_RETRY. It means that the page fault handler now can retry the page fault for multiple times if necessary without the need to generate another page fault event. Meanwhile we still keep the FAULT_FLAG_TRIED flag so page fault handler can still identify whether a page fault is the first attempt or not. Then we'll have these combinations of fault flags (only considering ALLOW_RETRY flag and TRIED flag): - ALLOW_RETRY and !TRIED: this means the page fault allows to retry, and this is the first try - ALLOW_RETRY and TRIED: this means the page fault allows to retry, and this is not the first try - !ALLOW_RETRY and !TRIED: this means the page fault does not allow to retry at all - !ALLOW_RETRY and TRIED: this is forbidden and should never be used In existing code we have multiple places that has taken special care of the first condition above by checking against (fault_flags & FAULT_FLAG_ALLOW_RETRY). This patch introduces a simple helper to detect the first retry of a page fault by checking against both (fault_flags & FAULT_FLAG_ALLOW_RETRY) and !(fault_flag & FAULT_FLAG_TRIED) because now even the 2nd try will have the ALLOW_RETRY set, then use that helper in all existing special paths. One example is in __lock_page_or_retry(), now we'll drop the mmap_sem only in the first attempt of page fault and we'll keep it in follow up retries, so old locking behavior will be retained. This will be a nice enhancement for current code [2] at the same time a supporting material for the future userfaultfd-writeprotect work, since in that work there will always be an explicit userfault writeprotect retry for protected pages, and if that cannot resolve the page fault (e.g., when userfaultfd-writeprotect is used in conjunction with swapped pages) then we'll possibly need a 3rd retry of the page fault. It might also benefit other potential users who will have similar requirement like userfault write-protection. GUP code is not touched yet and will be covered in follow up patch. Please read the thread below for more information. [1] https://lore.kernel.org/lkml/20171102193644.GB22686@redhat.com/ [2] https://lore.kernel.org/lkml/20181230154648.GB9832@redhat.com/Suggested-by: NLinus Torvalds <torvalds@linux-foundation.org> Suggested-by: NAndrea Arcangeli <aarcange@redhat.com> Signed-off-by: NPeter Xu <peterx@redhat.com> Signed-off-by: NAndrew Morton <akpm@linux-foundation.org> Tested-by: NBrian Geffon <bgeffon@google.com> Cc: Bobby Powers <bobbypowers@gmail.com> Cc: David Hildenbrand <david@redhat.com> Cc: Denis Plotnikov <dplotnikov@virtuozzo.com> Cc: "Dr . David Alan Gilbert" <dgilbert@redhat.com> Cc: Hugh Dickins <hughd@google.com> Cc: Jerome Glisse <jglisse@redhat.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: "Kirill A . 
Shutemov" <kirill@shutemov.name> Cc: Martin Cracauer <cracauer@cons.org> Cc: Marty McFadden <mcfadden8@llnl.gov> Cc: Matthew Wilcox <willy@infradead.org> Cc: Maya Gokhale <gokhale2@llnl.gov> Cc: Mel Gorman <mgorman@suse.de> Cc: Mike Kravetz <mike.kravetz@oracle.com> Cc: Mike Rapoport <rppt@linux.vnet.ibm.com> Cc: Pavel Emelyanov <xemul@openvz.org> Link: http://lkml.kernel.org/r/20200220160246.9790-1-peterx@redhat.comSigned-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
-
By Peter Xu
Although there're tons of arch-specific page fault handlers, most of them are still sharing the same initial value of the page fault flags. Say, merely all of the page fault handlers would allow the fault to be retried, and they also allow the fault to respond to SIGKILL. Let's define a default value for the fault flags to replace those initial page fault flags that were copied over. With this, it'll be far easier to introduce new fault flag that can be used by all the architectures instead of touching all the archs. Signed-off-by: NPeter Xu <peterx@redhat.com> Signed-off-by: NAndrew Morton <akpm@linux-foundation.org> Tested-by: NBrian Geffon <bgeffon@google.com> Reviewed-by: NDavid Hildenbrand <david@redhat.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Bobby Powers <bobbypowers@gmail.com> Cc: Denis Plotnikov <dplotnikov@virtuozzo.com> Cc: "Dr . David Alan Gilbert" <dgilbert@redhat.com> Cc: Hugh Dickins <hughd@google.com> Cc: Jerome Glisse <jglisse@redhat.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: "Kirill A . Shutemov" <kirill@shutemov.name> Cc: Martin Cracauer <cracauer@cons.org> Cc: Marty McFadden <mcfadden8@llnl.gov> Cc: Matthew Wilcox <willy@infradead.org> Cc: Maya Gokhale <gokhale2@llnl.gov> Cc: Mel Gorman <mgorman@suse.de> Cc: Mike Kravetz <mike.kravetz@oracle.com> Cc: Mike Rapoport <rppt@linux.vnet.ibm.com> Cc: Pavel Emelyanov <xemul@openvz.org> Link: http://lkml.kernel.org/r/20200220160238.9694-1-peterx@redhat.comSigned-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
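A minimal sketch of such a default; the exact set of flags in the real definition is an assumption beyond the two properties named above (retry allowed, killable):

    #define FAULT_FLAG_DEFAULT  (FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE)

    /* arch fault handlers then start from the common default and add to it: */
    unsigned int mm_flags = FAULT_FLAG_DEFAULT;
    if (user_mode(regs))
            mm_flags |= FAULT_FLAG_USER;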
-
By Peter Xu
Let the arm64 fault handling to use the new fault_signal_pending() helper, by moving the signal handling out of the retry logic. Signed-off-by: NPeter Xu <peterx@redhat.com> Signed-off-by: NAndrew Morton <akpm@linux-foundation.org> Tested-by: NBrian Geffon <bgeffon@google.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Bobby Powers <bobbypowers@gmail.com> Cc: David Hildenbrand <david@redhat.com> Cc: Denis Plotnikov <dplotnikov@virtuozzo.com> Cc: "Dr . David Alan Gilbert" <dgilbert@redhat.com> Cc: Hugh Dickins <hughd@google.com> Cc: Jerome Glisse <jglisse@redhat.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: "Kirill A . Shutemov" <kirill@shutemov.name> Cc: Martin Cracauer <cracauer@cons.org> Cc: Marty McFadden <mcfadden8@llnl.gov> Cc: Matthew Wilcox <willy@infradead.org> Cc: Maya Gokhale <gokhale2@llnl.gov> Cc: Mel Gorman <mgorman@suse.de> Cc: Mike Kravetz <mike.kravetz@oracle.com> Cc: Mike Rapoport <rppt@linux.vnet.ibm.com> Cc: Pavel Emelyanov <xemul@openvz.org> Link: http://lkml.kernel.org/r/20200220155927.9264-1-peterx@redhat.comSigned-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
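A hedged sketch of the reshuffled arm64 flow: the signal check sits right after the fault is handled instead of being buried in the retry logic (simplified, not the verbatim patch):

    fault = __do_page_fault(mm, addr, mm_flags, vm_flags, regs);

    /* Bail out early if a fatal signal arrived while handling the fault. */
    if (fault_signal_pending(fault, regs)) {
            if (!user_mode(regs))
                    goto no_context;
            return 0;
    }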
-
By Masahiro Yamada
Change a header to mandatory-y if both of the following are met: [1] At least one architecture (except um) specifies it as generic-y in arch/*/include/asm/Kbuild [2] Every architecture (except um) either has its own implementation (arch/*/include/asm/*.h) or specifies it as generic-y in arch/*/include/asm/Kbuild This commit was generated by the following shell script. ----------------------------------->8----------------------------------- arches=$(cd arch; ls -1 | sed -e '/Kconfig/d' -e '/um/d') tmpfile=$(mktemp) grep "^mandatory-y +=" include/asm-generic/Kbuild > $tmpfile find arch -path 'arch/*/include/asm/Kbuild' | xargs sed -n 's/^generic-y += \(.*\)/\1/p' | sort -u | while read header do mandatory=yes for arch in $arches do if ! grep -q "generic-y += $header" arch/$arch/include/asm/Kbuild && ! [ -f arch/$arch/include/asm/$header ]; then mandatory=no break fi done if [ "$mandatory" = yes ]; then echo "mandatory-y += $header" >> $tmpfile for arch in $arches do sed -i "/generic-y += $header/d" arch/$arch/include/asm/Kbuild done fi done sed -i '/^mandatory-y +=/d' include/asm-generic/Kbuild LANG=C sort $tmpfile >> include/asm-generic/Kbuild ----------------------------------->8----------------------------------- One obvious benefit is the diff stat: 25 files changed, 52 insertions(+), 557 deletions(-) It is tedious to list generic-y for each arch that needs it. So, mandatory-y works like a fallback default (by just wrapping asm-generic one) when arch does not have a specific header implementation. See the following commits: def3f7ce a1b39bae It is tedious to convert headers one by one, so I processed by a shell script. Signed-off-by: NMasahiro Yamada <masahiroy@kernel.org> Signed-off-by: NAndrew Morton <akpm@linux-foundation.org> Cc: Michal Simek <michal.simek@xilinx.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Arnd Bergmann <arnd@arndb.de> Link: http://lkml.kernel.org/r/20200210175452.5030-1-masahiroy@kernel.orgSigned-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
-
- 02 April 2020, 3 commits
-
-
By Ard Biesheuvel
When CONFIG_DEBUG_ALIGN_RODATA is enabled, kernel segments mapped with different permissions (r-x for .text, r-- for .rodata, rw- for .data, etc) are rounded up to 2 MiB so they can be mapped more efficiently. In particular, it permits the segments to be mapped using level 2 block entries when using 4k pages, which is expected to result in less TLB pressure. However, the mappings for the bulk of the kernel will use level 2 entries anyway, and the misaligned fringes are organized such that they can take advantage of the contiguous bit, and use far fewer level 3 entries than would be needed otherwise. This makes the value of this feature dubious at best, and since it is not enabled in defconfig or in the distro configs, it does not appear to be in wide use either. So let's just remove it. Signed-off-by: NArd Biesheuvel <ardb@kernel.org> Acked-by: NMark Rutland <mark.rutland@arm.com> Acked-by: NWill Deacon <will@kernel.org> Acked-by: NLaura Abbott <labbott@kernel.org> Signed-off-by: NCatalin Marinas <catalin.marinas@arm.com>
-
By Mark Brown
Compilers with branch protection support can be configured to enable it by default, it is likely that distributions will do this as part of deploying branch protection system wide. As well as the slight overhead from having some extra NOPs for unused branch protection features this can cause more serious problems when the kernel is providing pointer authentication to userspace but not built for pointer authentication itself. In that case our switching of keys for userspace can affect the kernel unexpectedly, causing pointer authentication instructions in the kernel to corrupt addresses. To ensure that we get consistent and reliable behaviour always explicitly initialise the branch protection mode, ensuring that the kernel is built the same way regardless of the compiler defaults. Fixes: 75031975 (arm64: add basic pointer authentication support) Reported-by: NSzabolcs Nagy <szabolcs.nagy@arm.com> Signed-off-by: NMark Brown <broonie@kernel.org> Cc: stable@vger.kernel.org [catalin.marinas@arm.com: remove Kconfig option in favour of Makefile check] Signed-off-by: NCatalin Marinas <catalin.marinas@arm.com>
-
By Jason Wang
Currently, CONFIG_VHOST depends on CONFIG_VIRTUALIZATION. But vhost is not necessarily for VMs, since it's a generic userspace and kernel communication protocol. Such a dependency may prevent archs without virtualization support from using vhost. To solve this, a dedicated vhost menu is created under drivers so CONFIG_VHOST can be decoupled from CONFIG_VIRTUALIZATION. While at it, also squash Kconfig.vringh into the vhost Kconfig file. This avoids the trick of conditional inclusion from VOP or CAIF. It will then be easier to introduce new vringh users and common dependencies for both vringh and vhost. Signed-off-by: Jason Wang <jasowang@redhat.com> Link: https://lore.kernel.org/r/20200326140125.19794-2-jasowang@redhat.com Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
- 01 April 2020, 1 commit
-
-
By Amit Daniel Kachhap
The recent addition of ARM64_PTR_AUTH exposed a mismatch issue with binutils. gcc 9.1+ inserts a section note .note.gnu.property, but this can only be used properly by binutils versions greater than 2.33.1. If older binutils are used, the following warnings are generated: aarch64-linux-ld: warning: arch/arm64/kernel/vdso/vgettimeofday.o: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0000000 aarch64-linux-objdump: warning: arch/arm64/lib/csum.o: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0000000 aarch64-linux-nm: warning: .tmp_vmlinux1: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0000000 This patch enables ARM64_PTR_AUTH only when the gcc and binutils versions are compatible with each other. Older gcc versions which do not insert such a section continue to work as before. This scenario may not occur with clang, as a recent commit 3b446c7d ("arm64: Kconfig: verify binutils support for ARM64_PTR_AUTH") masks out binutils versions less than 2.34. Reported-by: kbuild test robot <lkp@intel.com> Suggested-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> [catalin.marinas@arm.com: slight adjustment to the comment] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 28 March 2020, 1 commit
-
-
By Al Viro
Move access_ok() in and pagefault_enable()/pagefault_disable() out. Mechanical conversion only - some instances don't really need a separate access_ok() at all (e.g. the ones only using get_user()/put_user(), or architectures where access_ok() is always true); we'll deal with that in followups. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
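A hedged sketch of what the mechanical conversion means for one arch helper (arm64's futex op, heavily abbreviated; the atomic body is elided):

    static int arch_futex_atomic_op_inuser(int op, int oparg, int *oval, u32 __user *uaddr)
    {
            int oldval = 0, ret = 0;

            if (!access_ok(uaddr, sizeof(u32)))     /* moved in from the generic caller */
                    return -EFAULT;

            /* ... LL/SC or LSE atomic operation on *uaddr, as before ... */

            if (!ret)
                    *oval = oldval;
            return ret;
    }

pagefault_disable()/pagefault_enable() now happens once in the generic futex code around this call instead of in every architecture.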
-
- 27 March 2020, 5 commits
-
-
By Grygorii Strashko
Enable TI K3 AM654x/J721E DMA and networking options. Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Tested-by: Murali Karicheri <m-karicheri2@ti.com> Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Grygorii Strashko
The TI J721E EVM base board has a TI DP83867 PHY connected to external CPSW NUSS Port 1 in rgmii-rxid mode. Hence, add pinmux and Ethernet PHY configuration for the TI J721E SoC MCU Gigabit Ethernet two-port switch subsystem (CPSW NUSS). Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Tested-by: Murali Karicheri <m-karicheri2@ti.com> Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Grygorii Strashko
Add a DT node for the TI J721E MCU SoC Gigabit Ethernet subsystem (MCU CPSW NUSS). Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Tested-by: Murali Karicheri <m-karicheri2@ti.com> Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Grygorii Strashko
The AM654 EVM base board has a TI DP83867 PHY connected to external CPSW NUSS Port 1 in rgmii-rxid mode. Hence, add pinmux and Ethernet PHY configuration for the TI AM654 SoC Gigabit Ethernet two-port switch subsystem (CPSW NUSS). Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Tested-by: Murali Karicheri <m-karicheri2@ti.com> Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Grygorii Strashko
Add a DT node for the TI AM65x SoC Gigabit Ethernet two-port switch subsystem (CPSW NUSS). Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Tested-by: Murali Karicheri <m-karicheri2@ti.com> Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 26 March 2020, 2 commits
-
-
By Arnd Bergmann
I accidentally merged this after requesting a different solution, reverting now. Link: https://patchwork.kernel.org/patch/11431397/ Fixes: 2cedfe12 ("arm64: dts: specify console via command line") Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
By Chunyan Zhang
The SPRD serial driver needs to know which serial port will be used as the console early during initialization; otherwise console init would fail since we added this feature [1]. So this patch adds the console to the command line via devicetree. [1] https://lore.kernel.org/lkml/20190826072929.7696-4-zhang.lyra@gmail.com/ Link: https://lore.kernel.org/r/20200311112120.30890-1-zhang.lyra@gmail.com Signed-off-by: Chunyan Zhang <chunyan.zhang@unisoc.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
- 25 March 2020, 9 commits
-
-
By Linus Walleij
These two device trees were either missed or added after the commit correcting the "entry-method" from "arm,psci" to just "psci" as per the binding. Link: https://lore.kernel.org/r/20200322115846.16265-1-linus.walleij@linaro.org Cc: Sudeep Holla <sudeep.holla@arm.com> Cc: Fabio Estevam <festevam@gmail.com> Cc: Shawn Guo <shawnguo@kernel.org> Cc: Chunyan Zhang <chunyan.zhang@unisoc.com> Reviewed-by: Amit Kucheria <amit.kucheria@linaro.org> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
By Qais Yousef
Use bringup_hibernate_cpu() instead of open coding it. [ tglx: Split out the core change ] Signed-off-by: Qais Yousef <qais.yousef@arm.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lkml.kernel.org/r/20200323135110.30522-9-qais.yousef@arm.com
-
By Qais Yousef
Use the `reboot_cpu` variable instead of hardcoding 0 as the reboot CPU in machine_shutdown(). Signed-off-by: Qais Yousef <qais.yousef@arm.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lkml.kernel.org/r/20200323135110.30522-8-qais.yousef@arm.com
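Illustratively, assuming the arm64 shutdown path funnels through the generic nonboot-CPU shutdown helper introduced by this series:

    void machine_shutdown(void)
    {
            /* previously hardcoded to CPU 0; now honours a "reboot=" cpu selection */
            smp_shutdown_nonboot_cpus(reboot_cpu);
    }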
-
By Qais Yousef
disable_nonboot_cpus() is not safe to use when doing machine_down(), because it relies on freeze_secondary_cpus(), which in turn is a suspend/resume related freeze and could abort if the logic detects any pending activities that can prevent finishing the offlining process. Besides, disable_nonboot_cpus() depends on CONFIG_PM_SLEEP_SMP, which is an orthogonal config to rely on to ensure this function works correctly. Signed-off-by: Qais Yousef <qais.yousef@arm.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lkml.kernel.org/r/20200323135110.30522-7-qais.yousef@arm.com
-
By Mark Brown
New assembly annotations have recently been introduced which aim to make the way we describe symbols in assembly more consistent. Recently the arm64 assembly code was converted to use these, but install_el2_stub was missed. Signed-off-by: Mark Brown <broonie@kernel.org> [catalin.marinas@arm.com: changed to SYM_L_LOCAL] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Masahiro Yamada
Add SPDX License Identifier to all .gitignore files. Signed-off-by: Masahiro Yamada <masahiroy@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
By Gavin Shan
This introduces get_cpu_ops() to return the CPU operations for a given CPU index. For now, it simply returns @cpu_ops[cpu] as before. Also, the helper function __cpu_try_die() is introduced so it can be shared by cpu_die() and ipi_cpu_crash_stop(). This shouldn't introduce any functional changes. Signed-off-by: Gavin Shan <gshan@redhat.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com>
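As described, the accessor is a thin wrapper for now (sketch):

    static const struct cpu_operations *cpu_ops[NR_CPUS] __ro_after_init;

    const struct cpu_operations *get_cpu_ops(int cpu)
    {
            return cpu_ops[cpu];
    }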
-
By Gavin Shan
This renames cpu_read_ops() to init_cpu_ops(), as the function is only called in the initialization phase. Also, we will introduce get_cpu_ops() in subsequent patches to retrieve the CPU operations for a given CPU index; cpu_read_ops() and get_cpu_ops() would be hard to tell apart from their names. Signed-off-by: Gavin Shan <gshan@redhat.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Gavin Shan
It's obvious we needn't declare the corresponding CPU operation when CONFIG_ARM64_ACPI_PARKING_PROTOCOL is disabled, even though it doesn't cause any compile warnings. Signed-off-by: Gavin Shan <gshan@redhat.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 24 March 2020, 2 commits
-
-
By Marc Zyngier
Just like for VLPIs, it is beneficial to avoid trapping on WFI when the vcpu is using the GICv4.1 SGIs. Add such a check to vcpu_clear_wfx_traps(). Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Zenghui Yu <yuzenghui@huawei.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Link: https://lore.kernel.org/r/20200304203330.4967-23-maz@kernel.org
-
By Marc Zyngier
Each time a Group-enable bit gets flipped, the state of these bits needs to be forwarded to the hardware. This is a pretty heavy handed operation, requiring all vcpus to reload their GICv4 configuration. It is thus implemented as a new request type. These enable bits are programmed into the HW by setting the VGrp{0,1}En fields of GICR_VPENDBASER when the vPEs are made resident again. Of course, we only support Group-1 for now... Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Zenghui Yu <yuzenghui@huawei.com> Link: https://lore.kernel.org/r/20200304203330.4967-22-maz@kernel.org
-