- 02 November 2017, 1 commit
-
-
Submitted by Greg Kroah-Hartman
Many source files in the tree are missing licensing information, which makes it harder for compliance tools to determine the correct license. By default all files without license information are under the default license of the kernel, which is GPL version 2. Update the files which contain no license information with the 'GPL-2.0' SPDX license identifier. The SPDX identifier is a legally binding shorthand, which can be used instead of the full boiler plate text. This patch is based on work done by Thomas Gleixner and Kate Stewart and Philippe Ombredanne.

How this work was done: Patches were generated and checked against linux-4.14-rc6 for a subset of the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.

Further patches will be generated in subsequent months to fix up cases where non-standard license headers were used, and references to license had to be inferred by heuristics based on keywords.

The analysis to determine which SPDX License Identifier to be applied to a file was done in a spreadsheet of side-by-side results from the output of two independent scanners (ScanCode & Windriver) producing SPDX tag:value files created by Philippe Ombredanne. Philippe prepared the base worksheet, and did an initial spot review of a few thousand files. The 4.13 kernel was the starting point of the analysis with 60,537 files assessed. Kate Stewart did a file-by-file comparison of the scanner results in the spreadsheet to determine which SPDX license identifier(s) to be applied to the file. She confirmed any determination that was not immediately clear with lawyers working with the Linux Foundation.

Criteria used to select files for SPDX license identifier tagging was:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5 lines of source.
- File already had some variant of a license header in it (even if <5 lines).

All documentation files were explicitly excluded.

The following heuristics were used to determine which SPDX license identifiers to apply.

- When both scanners couldn't find any license traces, the file was considered to have no license information in it, and the top level COPYING file license applied. For non */uapi/* files that summary was:

      SPDX license identifier                              # files
      ---------------------------------------------------|--------
      GPL-2.0                                                11139

  and resulted in the first patch in this series. If that file was a */uapi/* path one, it was "GPL-2.0 WITH Linux-syscall-note" otherwise it was "GPL-2.0". Results of that was:

      SPDX license identifier                              # files
      ---------------------------------------------------|--------
      GPL-2.0 WITH Linux-syscall-note                          930

  and resulted in the second patch in this series.

- If a file had some form of licensing information in it, and was one of the */uapi/* ones, it was denoted with the Linux-syscall-note if any GPL family license was found in the file or had no licensing in it (per prior point).
Results summary:

      SPDX license identifier                               # files
      ----------------------------------------------------|-------
      GPL-2.0 WITH Linux-syscall-note                           270
      GPL-2.0+ WITH Linux-syscall-note                          169
      ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)        21
      ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)        17
      LGPL-2.1+ WITH Linux-syscall-note                          15
      GPL-1.0+ WITH Linux-syscall-note                           14
      ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)        5
      LGPL-2.0+ WITH Linux-syscall-note                           4
      LGPL-2.1 WITH Linux-syscall-note                            3
      ((GPL-2.0 WITH Linux-syscall-note) OR MIT)                  3
      ((GPL-2.0 WITH Linux-syscall-note) AND MIT)                 1

and that resulted in the third patch in this series.

- When the two scanners agreed on the detected license(s), that became the concluded license(s).
- When there was disagreement between the two scanners (one detected a license but the other didn't, or they both detected different licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file resulted in a clear resolution of the license that should apply (and which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier, the file was flagged for further research and to be revisited later in time.

In total, over 70 hours of logged manual review was done on the spreadsheet to determine the SPDX license identifiers to apply to the source files by Kate, Philippe, Thomas and, in some cases, confirmation by lawyers working with the Linux Foundation. Kate also obtained a third independent scan of the 4.13 code base from FOSSology, and compared selected files where the other two scanners disagreed against that SPDX file, to see if there were new insights. The Windriver scanner is based on an older version of FOSSology in part, so they are related. Thomas did random spot checks in about 500 files from the spreadsheets for the uapi headers and agreed with the SPDX license identifier in the files he inspected. For the non-uapi files Thomas did random spot checks in about 15000 files. In the initial set of patches against 4.14-rc6, 3 files were found to have copy/paste license identifier errors, and have been fixed to reflect the correct identifier.

Additionally Philippe spent 10 hours this week doing a detailed manual inspection and review of the 12,461 patched files from the initial patch version early this week with:
- a full scancode scan run, collecting the matched texts, detected license ids and scores,
- reviewing anything where there was a license detected (about 500+ files) to ensure that the applied SPDX license was correct,
- reviewing anything where there was no detection but the patch license was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied SPDX license was correct.

This produced a worksheet with 20 files needing minor correction. This worksheet was then exported into 3 different .csv files for the different types of files to be modified. These .csv files were then reviewed by Greg. Thomas wrote a script to parse the csv files and add the proper SPDX tag to the file, in the format that the file expected. This script was further refined by Greg based on the output to detect more types of files automatically and to distinguish between header and source .c files (which need different comment types). Finally Greg ran the script using the .csv files to generate the patches.
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org> Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 24 October 2017, 1 commit
-
-
Submitted by Arnd Bergmann
The asm-generic/unaligned.h header provides two different implementations for accessing unaligned variables: the access_ok.h version used when CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS is set pretends that all pointers are in fact aligned, while the le_struct.h version convinces gcc that the alignment of a pointer is '1', to make it issue the correct load/store instructions depending on the architecture flags. On ARMv5 and older, we always use the second version, to let the compiler use byte accesses. On ARMv6 and newer, we currently use the access_ok.h version, so the compiler can use any instruction including stm/ldm and ldrd/strd that will cause an alignment trap. This trap can significantly impact performance when we have to do a lot of fixups and, worse, has led to crashes in the LZ4 decompressor code that does not have a trap handler. This adds an ARM specific version of asm/unaligned.h that uses the le_struct.h/be_struct.h implementation unconditionally. This should lead to essentially the same code on ARMv6+ as before, with the exception of using regular load/store instructions instead of the trapping multi-register instruction variants. The crash in the LZ4 decompressor code was probably introduced by the patch replacing the LZ4 implementation, commit 4e1a33b1 ("lib: update LZ4 compressor module"), so linux-4.11 and higher would be affected most. However, we probably want to have this backported to all older stable kernels as well, to help with the performance issues. There are two follow-ups that I think we should also work on, but not backport to stable kernels: first, to change the asm-generic version of the header to remove the ARM special case, and second, to review all other uses of CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS to see if they might be affected by the same problem on ARM. Cc: stable@vger.kernel.org Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
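The le_struct.h trick the patch switches to amounts to accessing memory through a packed struct, so the compiler assumes alignment 1 and never emits the trapping multi-register instructions. A minimal sketch of the idea; the helper names here are illustrative, not the actual kernel ones:

    /* Packed struct forces the compiler to assume alignment 1, so it emits
     * whatever byte/halfword load sequence is safe for the target instead of
     * a potentially trapping ldm/ldrd.
     */
    struct __una_u32 { unsigned int x; } __attribute__((packed));

    static inline unsigned int get_unaligned_u32(const void *p)
    {
            const struct __una_u32 *ptr = p;
            return ptr->x;
    }

    static inline void put_unaligned_u32(unsigned int val, void *p)
    {
            struct __una_u32 *ptr = p;
            ptr->x = val;
    }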
-
- 18 September 2017, 1 commit
-
-
Submitted by Thomas Garnier
This reverts commit 73ac5d6a. The work pending loop can call set_fs after addr_limit_user_check removed the _TIF_FSCHECK flag. This may happen at any time based on how ARM handles alignment exceptions. It leads to an infinite loop condition. After discussion, it has been agreed that the generic approach is not tailored to the ARM architecture and any fix might not be complete. This patch will be replaced by an architecture specific implementation. The work flag approach will be kept for other architectures. Reported-by: Leonard Crestez <leonard.crestez@nxp.com> Signed-off-by: Thomas Garnier <thgarnie@google.com> Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Pratyush Anand <panand@redhat.com> Cc: Dave Martin <Dave.Martin@arm.com> Cc: Will Drewry <wad@chromium.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Andy Lutomirski <luto@amacapital.net> Cc: David Howells <dhowells@redhat.com> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: linux-api@vger.kernel.org Cc: Yonghong Song <yhs@fb.com> Cc: linux-arm-kernel@lists.infradead.org Link: http://lkml.kernel.org/r/1504798247-48833-3-git-send-email-keescook@chromium.org
-
- 09 September 2017, 1 commit
-
-
Submitted by Matthew Wilcox
Reuse the existing optimised memset implementation to implement an optimised memset32 and memset64. Link: http://lkml.kernel.org/r/20170720184539.31609-5-willy@infradead.org Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com> Reviewed-by: Russell King <rmk+kernel@armlinux.org.uk> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: "James E.J. Bottomley" <jejb@linux.vnet.ibm.com> Cc: "Martin K. Petersen" <martin.petersen@oracle.com> Cc: David Miller <davem@davemloft.net> Cc: Ingo Molnar <mingo@elte.hu> Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru> Cc: Matt Turner <mattst88@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Minchan Kim <minchan@kernel.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Richard Henderson <rth@twiddle.net> Cc: Sam Ravnborg <sam@ravnborg.org> Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
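The helpers behave like memset() but take the fill value as a 32-bit or 64-bit pattern and a count in elements; a hedged reference loop showing the intended semantics (the ARM patch itself implements this in the optimised memset assembly):

    #include <stdint.h>
    #include <stddef.h>

    /* Reference behaviour only: 'n' counts elements, not bytes. */
    static void *memset32_ref(uint32_t *s, uint32_t v, size_t n)
    {
            uint32_t *p = s;

            while (n--)
                    *p++ = v;
            return s;
    }

    /* Typical caller: fill a table with a repeating pattern instead of an
     * open-coded loop (in the kernel this would just be memset32()).
     */
    static void fill_entries(uint32_t *entries, size_t count)
    {
            memset32_ref(entries, 0xdeadbeef, count);
    }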
-
- 05 September 2017, 1 commit
-
-
Submitted by James Morse
The ARM ARM has two bits in the ESR/HSR relevant to external aborts: a range of {I,D}FSC values (of which bit 5 is always set) and bit 9 'EA' which provides: > an IMPLEMENTATION DEFINED classification of External Aborts. This bit is in addition to the {I,D}FSC range, and has an implementation defined meaning. KVM should always ignore this bit when handling external aborts from a guest. Remove the ESR_ELx_EA definition and rewrite its helper kvm_vcpu_dabt_isextabt() to check the {I,D}FSC range. This merges kvm_vcpu_dabt_isextabt() and the recently added is_abort_sea() helper. CC: Tyler Baicar <tbaicar@codeaurora.org> Reported-by: gengdongjiu <gengdj.1984@gmail.com> Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <cdall@linaro.org>
-
- 01 September 2017, 1 commit
-
-
Submitted by Jérôme Glisse
Calls to mmu_notifier_invalidate_page() were replaced by calls to mmu_notifier_invalidate_range() and are now bracketed by calls to mmu_notifier_invalidate_range_start()/end(). Remove the now-useless invalidate_page callback. Changed since v1 (Linus Torvalds): remove the now-useless kvm_arch_mmu_notifier_invalidate_page(). Signed-off-by: Jérôme Glisse <jglisse@redhat.com> Tested-by: Mike Galbraith <efault@gmx.de> Tested-by: Adam Borowski <kilobyte@angband.pl> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Radim Krčmář <rkrcmar@redhat.com> Cc: kvm@vger.kernel.org Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Andrea Arcangeli <aarcange@redhat.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 31 August 2017, 2 commits
-
-
Submitted by Marc Zyngier
When masking/unmasking a doorbell interrupt, it is necessary to issue an invalidation to the corresponding redistributor. We use the DirectLPI feature by writing directly to the corresponding redistributor. Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Marc Zyngier
V{PEND,PROP}BASER being 64-bit registers, they need some ad-hoc accessors on 32-bit, especially given that VPENDBASER contains a Valid bit, making the access a bit convoluted. Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
- 26 August 2017, 1 commit
-
-
Submitted by Jiri Slaby
There is code duplicated across all architectures' headers for futex_atomic_op_inuser, namely op decoding, the access_ok check for uaddr, and comparison of the result. Remove this duplication and leave up to the arches only the needed assembly, which is now in arch_futex_atomic_op_inuser. This effectively distributes Will Deacon's arm64 fix for undefined behaviour reported by UBSAN to all architectures. The fix was done in commit 5f16a046 (arm64: futex: Fix undefined behaviour with FUTEX_OP_OPARG_SHIFT usage). Look there for an example dump. And as suggested by Thomas, check for negative oparg too, because it was also reported to cause an undefined behaviour report. Note that s390 removed the access_ok check in d12a2970 ("s390/uaccess: remove pointless access_ok() checks") as access_ok there returns true. We introduce it back to the helper for the sake of simplicity (it gets optimized away anyway). Signed-off-by: Jiri Slaby <jslaby@suse.cz> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Russell King <rmk+kernel@armlinux.org.uk> Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc) Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com> [s390] Acked-by: Chris Metcalf <cmetcalf@mellanox.com> [for tile] Reviewed-by: Darren Hart (VMware) <dvhart@infradead.org> Reviewed-by: Will Deacon <will.deacon@arm.com> [core/arm64] Cc: linux-mips@linux-mips.org Cc: Rich Felker <dalias@libc.org> Cc: linux-ia64@vger.kernel.org Cc: linux-sh@vger.kernel.org Cc: peterz@infradead.org Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Paul Mackerras <paulus@samba.org> Cc: sparclinux@vger.kernel.org Cc: Jonas Bonn <jonas@southpole.se> Cc: linux-s390@vger.kernel.org Cc: linux-arch@vger.kernel.org Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Cc: linux-hexagon@vger.kernel.org Cc: Helge Deller <deller@gmx.de> Cc: "James E.J. Bottomley" <jejb@parisc-linux.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Matt Turner <mattst88@gmail.com> Cc: linux-snps-arc@lists.infradead.org Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: linux-xtensa@linux-xtensa.org Cc: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi> Cc: openrisc@lists.librecores.org Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru> Cc: Stafford Horne <shorne@gmail.com> Cc: linux-arm-kernel@lists.infradead.org Cc: Richard Henderson <rth@twiddle.net> Cc: Chris Zankel <chris@zankel.net> Cc: Michal Simek <monstr@monstr.eu> Cc: Tony Luck <tony.luck@intel.com> Cc: linux-parisc@vger.kernel.org Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Richard Kuo <rkuo@codeaurora.org> Cc: linux-alpha@vger.kernel.org Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: linuxppc-dev@lists.ozlabs.org Cc: "David S. Miller" <davem@davemloft.net> Link: http://lkml.kernel.org/r/20170824073105.3901-1-jslaby@suse.cz
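The shape of the refactoring: the generic futex_atomic_op_inuser() keeps the op decoding, the access_ok()/negative-oparg checks and the result comparison, and only the atomic read-modify-write is delegated to the per-arch helper. A simplified sketch (kernel types and constants assumed, details trimmed; not the exact upstream code):

    static int futex_atomic_op_inuser(unsigned int encoded_op, u32 __user *uaddr)
    {
            unsigned int op  = (encoded_op & 0x70000000) >> 28;
            unsigned int cmp = (encoded_op & 0x0f000000) >> 24;
            int oparg  = (int)(encoded_op << 8) >> 20;
            int cmparg = (int)(encoded_op << 20) >> 20;
            int oldval, ret;

            if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28)) {
                    if (oparg < 0 || oparg > 31)    /* avoid UB in 1 << oparg */
                            return -EINVAL;
                    oparg = 1 << oparg;
            }

            if (!access_ok(VERIFY_WRITE, uaddr, sizeof(u32)))
                    return -EFAULT;

            /* Only this call remains architecture specific. */
            ret = arch_futex_atomic_op_inuser(op, oparg, &oldval, uaddr);
            if (ret)
                    return ret;

            switch (cmp) {
            case FUTEX_OP_CMP_EQ: return oldval == cmparg;
            case FUTEX_OP_CMP_NE: return oldval != cmparg;
            case FUTEX_OP_CMP_LT: return oldval <  cmparg;
            case FUTEX_OP_CMP_GE: return oldval >= cmparg;
            case FUTEX_OP_CMP_LE: return oldval <= cmparg;
            case FUTEX_OP_CMP_GT: return oldval >  cmparg;
            default:              return -ENOSYS;
            }
    }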
-
- 17 August 2017, 1 commit
-
-
Submitted by Paul E. McKenney
There is no agreed-upon definition of spin_unlock_wait()'s semantics, and it appears that all callers could do just as well with a lock/unlock pair. This commit therefore removes the underlying arch-specific arch_spin_unlock_wait() for all architectures providing them. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: <linux-arch@vger.kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Alan Stern <stern@rowland.harvard.edu> Cc: Andrea Parri <parri.andrea@gmail.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Acked-by: Will Deacon <will.deacon@arm.com> Acked-by: Boqun Feng <boqun.feng@gmail.com>
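The resulting caller-side conversion is mechanical; a hedged before/after sketch (the lock name is just an example):

    /* Before: wait for any current holder of the lock to release it. */
    spin_unlock_wait(&p->pi_lock);

    /* After: acquire and immediately release; this provides the same (or
     * stronger) ordering without relying on the ill-defined semantics of
     * spin_unlock_wait().
     */
    spin_lock(&p->pi_lock);
    spin_unlock(&p->pi_lock);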
-
- 11 August 2017, 2 commits
-
-
Submitted by Minchan Kim
Nadav reported that parallel MADV_DONTNEED on the same range has a stale TLB problem and Mel fixed it [1] and found the same problem in MADV_FREE [2]. Quote from Mel Gorman: "The race in question is CPU 0 running madv_free and updating some PTEs while CPU 1 is also running madv_free and looking at the same PTEs. CPU 1 may have writable TLB entries for a page but fail the pte_dirty check (because CPU 0 has updated it already) and potentially fail to flush. Hence, when madv_free on CPU 1 returns, there are still potentially writable TLB entries and the underlying PTE is still present so that a subsequent write does not necessarily propagate the dirty bit to the underlying PTE any more. Reclaim at some unknown time in the future may then see that the PTE is still clean and discard the page even though a write has happened in the meantime. I think this is possible but I could have missed some protection in madv_free that prevents it happening." This patch aims to solve both problems at once and is ready for the other problems with KSM, MADV_FREE and the soft-dirty story [3]. The TLB batch API (tlb_[gather|finish]_mmu) uses [inc|dec]_tlb_flush_pending and mmu_tlb_flush_pending so that when tlb_finish_mmu is called, we can catch that there are parallel threads going on. In that case, forcefully flush the TLB to prevent the user from accessing memory via a stale TLB entry even though we failed to gather the page table entry. I confirmed this patch works with the test program [4] Nadav gave, so this patch supersedes "mm: Always flush VMA ranges affected by zap_page_range v2" in current mmotm. NOTE: This patch modifies the arch-specific TLB gathering interface (x86, ia64, s390, sh, um). Most architectures seem straightforward, but s390 needs to be careful because tlb_flush_mmu works only if mm->context.flush_mm is set to non-zero, which happens only when a pte entry really is cleared by ptep_get_and_clear and friends. However, this problem never changes the pte entries but still needs to flush to prevent memory access via a stale tlb. [1] http://lkml.kernel.org/r/20170725101230.5v7gvnjmcnkzzql3@techsingularity.net [2] http://lkml.kernel.org/r/20170725100722.2dxnmgypmwnrfawp@suse.de [3] http://lkml.kernel.org/r/BD3A0EBE-ECF4-41D4-87FA-C755EA9AB6BD@gmail.com [4] https://patchwork.kernel.org/patch/9861621/ [minchan@kernel.org: decrease tlb flush pending count in tlb_finish_mmu] Link: http://lkml.kernel.org/r/20170808080821.GA31730@bbox Link: http://lkml.kernel.org/r/20170802000818.4760-7-namit@vmware.com Signed-off-by: Minchan Kim <minchan@kernel.org> Signed-off-by: Nadav Amit <namit@vmware.com> Reported-by: Nadav Amit <namit@vmware.com> Reported-by: Mel Gorman <mgorman@techsingularity.net> Acked-by: Mel Gorman <mgorman@techsingularity.net> Cc: Ingo Molnar <mingo@redhat.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Tony Luck <tony.luck@intel.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Cc: Jeff Dike <jdike@addtoit.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Hugh Dickins <hughd@google.com> Cc: Mel Gorman <mgorman@suse.de> Cc: Nadav Amit <nadav.amit@gmail.com> Cc: Rik van Riel <riel@redhat.com> Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
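The core of the fix, conceptually: tlb_finish_mmu() flushes even when nothing was gathered if another thread raced on the same mm. A rough sketch of the generic side (helper names follow the series, but treat this as an approximation rather than the exact mm/memory.c code):

    void tlb_finish_mmu(struct mmu_gather *tlb,
                        unsigned long start, unsigned long end)
    {
            /* If another thread is batching TLB flushes on the same mm (its
             * inc_tlb_flush_pending() is still outstanding), our PTE walk may
             * have seen already-cleared/clean entries and skipped the flush
             * while the other CPU still holds stale writable TLB entries, so
             * flush forcefully in that case.
             */
            bool force = mm_tlb_flush_nested(tlb->mm);

            arch_tlb_finish_mmu(tlb, start, end, force);
            dec_tlb_flush_pending(tlb->mm);
    }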
-
Submitted by Minchan Kim
This patch is a preparatory patch for solving race problems caused by TLB batching. For that, we will increase/decrease the TLB flush pending count of mm_struct whenever tlb_[gather|finish]_mmu is called. Before making it simple, this patch separates the architecture-specific part, renames it to arch_tlb_[gather|finish]_mmu, and has the generic part just call it. It shouldn't change any behavior. Link: http://lkml.kernel.org/r/20170802000818.4760-5-namit@vmware.com Signed-off-by: Minchan Kim <minchan@kernel.org> Signed-off-by: Nadav Amit <namit@vmware.com> Acked-by: Mel Gorman <mgorman@techsingularity.net> Cc: Ingo Molnar <mingo@redhat.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Tony Luck <tony.luck@intel.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Cc: Jeff Dike <jdike@addtoit.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Hugh Dickins <hughd@google.com> Cc: Mel Gorman <mgorman@suse.de> Cc: Nadav Amit <nadav.amit@gmail.com> Cc: Rik van Riel <riel@redhat.com> Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 10 August 2017, 1 commit
-
-
Submitted by Masami Hiramatsu
Generate irqentry and softirqentry text sections without any Kconfig dependencies. This will add extra sections, but there should be no performance impact. Suggested-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com> Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Cc: Chris Zankel <chris@zankel.net> Cc: David S . Miller <davem@davemloft.net> Cc: Francis Deslauriers <francis.deslauriers@efficios.com> Cc: Jesper Nilsson <jesper.nilsson@axis.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Mikael Starvik <starvik@axis.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Cc: linux-arch@vger.kernel.org Cc: linux-cris-kernel@axis.com Cc: mathieu.desnoyers@efficios.com Link: http://lkml.kernel.org/r/150172789110.27216.3955739126693102122.stgit@devbox Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 02 August 2017, 2 commits
-
-
Submitted by Johan Hovold
Add the missing errno include to make the header self-contained and avoid compilation breakage when compiling shared code without CONFIG_HAVE_ARM_SCU. Signed-off-by: Johan Hovold <johan@kernel.org> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
-
Submitted by Johan Hovold
Add the missing types.h include to make the suspend header self-contained and avoid compilation breakage due to include-directive ordering. Acked-by: Pavel Machek <pavel@ucw.cz> Signed-off-by: Johan Hovold <johan@kernel.org> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
-
- 24 July 2017, 1 commit
-
-
Submitted by Dave Martin
In kernels with CONFIG_IWMMXT=y running on non-iWMMXt hardware, the signal frame can be left partially uninitialised in such a way that userspace cannot parse uc_regspace[] safely. In particular, this means that the VFP registers cannot be located reliably in the signal frame when a multi_v7_defconfig kernel is run on the majority of platforms. The cause is that uc_regspace[] is laid out statically based on the kernel config, but the decision of whether to save/restore the iWMMXt registers must be a runtime decision. To minimise breakage of software that may assume a fixed layout, this patch emits a dummy block of the same size as iwmmxt_sigframe for non-iWMMXt threads. However, the magic and size of this block are now filled in to help parsers skip over it. A new DUMMY_MAGIC is defined for this purpose. It is probably legitimate (if non-portable) for userspace to manufacture its own sigframe for sigreturn, and there is no obvious reason why userspace should be required to insert a DUMMY_MAGIC block when running on non-iWMMXt hardware, when omitting it has worked just fine forever in other configurations. So in this case, sigreturn does not require this block to be present. Reported-by: Edmund Grimley-Evans <Edmund.Grimley-Evans@arm.com> Signed-off-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
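For userspace the practical consequence is that uc_regspace[] should be walked as a sequence of (magic, size) blocks instead of assuming a fixed layout; a hedged sketch of such a walker (structure and names are illustrative, the real magic values come from the uapi headers):

    #include <stdint.h>
    #include <stddef.h>

    struct block_header { uint32_t magic; uint32_t size; };  /* size in bytes, incl. header */

    static const struct block_header *find_block(const void *regspace,
                                                 size_t space, uint32_t magic)
    {
            const char *p = regspace;
            const char *end = p + space;

            while ((size_t)(end - p) >= sizeof(struct block_header)) {
                    const struct block_header *h = (const struct block_header *)p;

                    if (h->size < sizeof(*h) || h->size > (size_t)(end - p))
                            break;                  /* terminator or malformed block */
                    if (h->magic == magic)
                            return h;               /* e.g. the VFP block */
                    p += h->size;                   /* skips unknown blocks, incl. the new dummy one */
            }
            return NULL;
    }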
-
- 20 July 2017, 2 commits
-
-
Submitted by Russell King
When kexec was converted to DTB, the dtb address was passed between machine_kexec_prepare() and machine_kexec() using a static variable. This is bad news if you load a crash kernel followed by a normal kernel or vice versa - the last loaded kernel overwrites the dtb address. This can result in kexec failures, as (eg) we try to boot the crash kernel with the last loaded dtb: the crash kernel fails to find the dtb. Avoid this by defining a kimage architecture structure, and store the address to be passed in r2 there, which will either be the ATAGs or the dtb blob. Fixes: 4cabd1d9 ("ARM: 7539/1: kexec: scan for dtb magic in segments") Fixes: 42d720d1 ("ARM: kexec: Make .text R/W in machine_kexec") Reported-by: Keerthy <j-keerthy@ti.com> Tested-by: Keerthy <j-keerthy@ti.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
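The gist of the fix is to hang the per-image boot data off struct kimage instead of a file-scope static, so the crash kernel and the normal kernel each keep their own pointer; a hedged sketch of the shape (the field name may differ from the actual header):

    /* arch/arm/include/asm/kexec.h (sketch) */
    struct kimage_arch {
            unsigned long kernel_r2;   /* physical address handed to the new kernel in r2:
                                        * either the ATAGs or the dtb blob */
    };

    /* machine_kexec_prepare() stores image->arch.kernel_r2 for the image it
     * is preparing, and machine_kexec() reads it back from the image that is
     * actually being booted, instead of sharing one static variable.
     */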
-
Submitted by Josh Poimboeuf
Mike Galbraith reported a situation where a WARN_ON_ONCE() call in DRM code turned into an oops. As it turns out, WARN_ON_ONCE() seems to be completely broken when called from a module. The bug was introduced with the following commit: 19d43626 ("debug: Add _ONCE() logic to report_bug()"). That commit changed WARN_ON_ONCE() to move its 'once' logic into the bug trap handler. It requires a writable bug table so that the BUGFLAG_DONE bit can be written to the flags to indicate the first warning has occurred. The bug table was made writable for vmlinux, which relies on vmlinux.lds.S and vmlinux.lds.h for laying out the sections. However, it wasn't made writable for modules, which rely on the ELF section header flags. Reported-by: Mike Galbraith <efault@gmx.de> Tested-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Fixes: 19d43626 ("debug: Add _ONCE() logic to report_bug()") Link: http://lkml.kernel.org/r/a53b04235a65478dd9afc51f5b329fdc65c84364.1500095401.git.jpoimboe@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 13 July 2017, 1 commit
-
-
Submitted by Joe Perches
asmlinkage is either 'extern "C"' or blank. Move the uses of asmlinkage before the return types to be similar to the rest of the kernel. Link: http://lkml.kernel.org/r/005b8e120650c6a13b541e420f4e3605603fe9e6.1499284835.git.joe@perches.com Signed-off-by: Joe Perches <joe@perches.com> Cc: Christoffer Dall <christoffer.dall@linaro.org> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Radim Krcmar <rkrcmar@redhat.com> Cc: Russell King <linux@armlinux.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
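The change is purely about token ordering in the declarations; an illustrative (made-up) example of the before/after:

    /* Before: asmlinkage placed after the return type. */
    void asmlinkage example_hyp_entry(struct kvm_vcpu *vcpu);

    /* After: asmlinkage first, matching the rest of the kernel. */
    asmlinkage void example_hyp_entry(struct kvm_vcpu *vcpu);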
-
- 11 July 2017, 1 commit
-
-
Submitted by Kees Cook
Now that explicitly executed loaders are loaded in the mmap region, we have more freedom to decide where we position PIE binaries in the address space to avoid possible collisions with mmap or stack regions. 4MB is chosen here mainly to have parity with x86, where this is the traditional minimum load location, likely to avoid historically requiring a 4MB page table entry when only a portion of the first 4MB would be used (since the NULL address is avoided). For ARM the position could be 0x8000, the standard ET_EXEC load address, but that is needlessly close to the NULL address, and anyone running PIE on 32-bit ARM will have an MMU, so the tight mapping is not needed. Link: http://lkml.kernel.org/r/1498154792-49952-2-git-send-email-keescook@chromium.org Signed-off-by: Kees Cook <keescook@chromium.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: James Hogan <james.hogan@imgtec.com> Cc: Pratyush Anand <panand@redhat.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Daniel Micay <danielmicay@gmail.com> Cc: Dmitry Safonov <dsafonov@virtuozzo.com> Cc: Grzegorz Andrejczuk <grzegorz.andrejczuk@intel.com> Cc: Kees Cook <keescook@chromium.org> Cc: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Qualys Security Advisory <qsa@qualys.com> Cc: Rik van Riel <riel@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
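On ARM this boils down to redefining one constant in asm/elf.h; a hedged sketch of the resulting definition:

    /* arch/arm/include/asm/elf.h (sketch): load PIE binaries at 4 MiB, well
     * above NULL but far below the mmap and stack regions, mirroring the
     * traditional x86 minimum load address.
     */
    #define ELF_ET_DYN_BASE    0x400000UL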
-
- 10 July 2017, 1 commit
-
-
Submitted by Masahiro Yamada
Since commit fcc8487d ("uapi: export all headers under uapi directories"), all (and only) headers under uapi directories are exported, but asm-generic wrappers are still exceptions. To complete de-coupling the uapi from kernel headers, move generic-y of exported headers to uapi/asm/Kbuild. With this change, "make headers_install" will just need to parse uapi/asm/Kbuild to build up exported headers. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
-
- 08 July 2017, 1 commit
-
-
Submitted by Thomas Garnier
Ensure the address limit is a user-mode segment before returning to user-mode. Otherwise a process can corrupt kernel-mode memory and elevate privileges [1]. The set_fs function sets the TIF_SETFS flag to force a slow path on return. In the slow path, the address limit is checked to be USER_DS if needed. The TIF_SETFS flag is added to _TIF_WORK_MASK shifting _TIF_SYSCALL_WORK for arm instruction immediate support. The global work mask is too big to be used in a single instruction, so adapt ret_fast_syscall. [1] https://bugs.chromium.org/p/project-zero/issues/detail?id=990 Signed-off-by: Thomas Garnier <thgarnie@google.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Mark Rutland <mark.rutland@arm.com> Cc: kernel-hardening@lists.openwall.com Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: David Howells <dhowells@redhat.com> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Miroslav Benes <mbenes@suse.cz> Cc: Chris Metcalf <cmetcalf@mellanox.com> Cc: Pratyush Anand <panand@redhat.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Petr Mladek <pmladek@suse.com> Cc: Rik van Riel <riel@redhat.com> Cc: Kees Cook <keescook@chromium.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Andy Lutomirski <luto@kernel.org> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: linux-arm-kernel@lists.infradead.org Cc: Will Drewry <wad@chromium.org> Cc: linux-api@vger.kernel.org Cc: Oleg Nesterov <oleg@redhat.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Paolo Bonzini <pbonzini@redhat.com> Link: http://lkml.kernel.org/r/20170615011203.144108-2-thgarnie@google.com
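The slow-path check itself is small; a hedged sketch of the generic helper that the work flag gates (names as in the generic series, details trimmed):

    /* Run on the return-to-user slow path when the set_fs() work flag is
     * pending: if a syscall leaked a kernel address limit, report the
     * corruption and kill the task instead of returning to user space with
     * KERNEL_DS still in effect.
     */
    static inline void addr_limit_user_check(void)
    {
            if (!test_thread_flag(TIF_FSCHECK))
                    return;

            if (CHECK_DATA_CORRUPTION(!segment_eq(get_fs(), USER_DS),
                                      "Invalid address limit on user-mode return"))
                    force_sig(SIGKILL, current);

            clear_thread_flag(TIF_FSCHECK);
    }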
-
- 04 July 2017, 2 commits
-
-
Submitted by Al Viro
No users left. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Submitted by Al Viro
On MMU targets EFAULT is possible here. Make both return 0 or an error, passing what used to be the return value of flat_get_addr_from_rp() by reference. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 01 July 2017, 3 commits
-
-
Submitted by Kees Cook
Some function pointer structures are used externally to the kernel, like the paravirt structures. These should never be randomized, so mark them as such, in preparation for enabling randstruct's automatic selection of all-function-pointer structures. These markings are verbatim from Brad Spengler/PaX Team's code in the last public patch of grsecurity/PaX based on my understanding of the code. Changes or omissions from the original code are mine and don't reflect the original grsecurity/PaX code. Signed-off-by: Kees Cook <keescook@chromium.org>
-
Submitted by Arnd Bergmann
With the new task struct randomization, we can run into a build failure for certain random seeds, which will place fields beyond the allowed immediate size in the assembly:

    arch/arm/kernel/entry-armv.S: Assembler messages:
    arch/arm/kernel/entry-armv.S:803: Error: bad immediate value for offset (4096)

Only two constants in asm-offsets.h are affected, and I'm changing both of them here to work correctly in all configurations. One more macro has the problem, but is currently unused, so this removes it instead of adding complexity. Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Arnd Bergmann <arnd@arndb.de> [kees: Adjust commit log slightly] Signed-off-by: Kees Cook <keescook@chromium.org>
-
Submitted by Vladimir Murzin
R/M classes of cpus can have memory covered by an MPU which in turn might configure RAM as Normal, i.e. bufferable and cacheable. This breaks dma_alloc_coherent() and friends, since data can now get stuck in caches or be buffered. This patch factors out DMA support for the NOMMU configuration into a separate entity which provides dedicated dma_ops. We have to handle several cases there:
- configurations with MMU/MPU setup
- configurations without MMU/MPU setup
- a special case for M-class, since caches and the MPU there are optional

In general we rely on the default DMA area for coherent allocations or/and per-device memory reserves suitable for coherent DMA, so if such regions are set, coherent allocations go from there. In case MMU/MPU was not set up we fall back to the normal page allocator for DMA memory allocation. In case we run M-class cpus, for configurations without cache support (like Cortex-M3/M4) dma operations are forced to be coherent and wired with dma-noop (such a decision is made based on the cacheid global variable); however, if caches are detected there and no DMA coherent region is given (either default or per-device), dma is disallowed even if the MPU is not set - this is because M-class implements a system memory map which defines part of the address space as Normal memory. Reported-by: Alexandre Torgue <alexandre.torgue@st.com> Reported-by: Andras Szemzo <sza@esh.hu> Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org> Tested-by: Andras Szemzo <sza@esh.hu> Tested-by: Alexandre TORGUE <alexandre.torgue@st.com> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Russell King <rmk+kernel@armlinux.org.uk> [hch: removed the dma_supported() implementation that isn't required anymore] Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 29 June 2017, 1 commit
-
-
Submitted by Lorenzo Pieralisi
The introduction of pci_scan_root_bus_bridge() provides a PCI core API to scan a PCI root bus backed by an already initialized struct pci_host_bridge object, which simplifies the bus scan interface and makes the PCI scan root bus interface easier to generalize as members are added to the struct pci_host_bridge. Convert ARM bios32 code to pci_scan_root_bus_bridge() to improve the PCI root bus scanning interface. Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> [bhelgaas: fold in warning fix from Arnd Bergmann <arnd@arndb.de>: http://lkml.kernel.org/r/20170621215323.3921382-1-arnd@arndb.de] [bhelgaas: set bridge->ops for mv78xx0] [bhelgaas: fold in fixes from Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>: http://lkml.kernel.org/r/20170701135457.GB8977@red-moon] Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Cc: Jason Cooper <jason@lakedaemon.net> Cc: Russell King <linux@armlinux.org.uk> Cc: Andrew Lunn <andrew@lunn.ch>
-
- 28 June 2017, 2 commits
-
-
Submitted by Christoph Hellwig
And instead wire it up as a method for all the dma_map_ops instances. Note that the code seems a little fishy for dmabounce and iommu, but for now I'd like to preserve the existing behavior 1:1. Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Submitted by Christoph Hellwig
DMA_ERROR_CODE is going to go away, so don't rely on it. Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 27 June 2017, 1 commit
-
-
Submitted by Jérémy Lefaure
I didn't find any use of this macro in the current kernel tree (with git grep). KTHREAD_SIZE has not been used for a very, very long time, so let's remove this definition. Signed-off-by: Jérémy Lefaure <jeremy.lefaure@lse.epita.fr> Reviewed-by: Vladimir Murzin <vladimir.murzin@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
-
- 23 June 2017, 1 commit
-
-
Submitted by Tyler Baicar
Currently external aborts are unsupported by the guest abort handling. Add handling for SEAs so that the host kernel reports SEAs which occur in the guest kernel. When an SEA occurs in the guest kernel, the guest exits and is routed to kvm_handle_guest_abort(). Prior to this patch, a message about an unsupported FSC would be printed and nothing else would happen. With this patch, the code gets routed to the APEI handling of SEAs in the host kernel to report the SEA information. Signed-off-by: Tyler Baicar <tbaicar@codeaurora.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Christoffer Dall <cdall@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 19 June 2017, 2 commits
-
-
Submitted by Marc Gonzalez
include/asm-generic/bitops/find.h declares:

    extern unsigned long find_first_zero_bit(const unsigned long *addr, unsigned long size);

while arch/arm/include/asm/bitops.h declares:

    #define find_first_zero_bit(p,sz) _find_first_zero_bit_le(p,sz)
    extern int _find_first_zero_bit_le(const void * p, unsigned size);

Align the arm prototypes to the generic API, to have gcc report inadequate arguments, such as a pointer to a u32. Signed-off-by: Marc Gonzalez <marc_gonzalez@sigmadesigns.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
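One plausible shape of the aligned ARM declaration after the change (the exact types in the real patch may differ slightly):

    /* arch/arm/include/asm/bitops.h (sketch): same pointer and size types as
     * the asm-generic prototype, so passing e.g. a u32 * now gets flagged.
     */
    #define find_first_zero_bit(p, sz) _find_first_zero_bit_le(p, sz)
    extern unsigned long _find_first_zero_bit_le(const unsigned long *p,
                                                 unsigned long size);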
-
Submitted by Abel Vesa
The DYNAMIC_FTRACE_WITH_REGS configuration makes it possible for an ftrace operation to specify if registers need to be saved/restored by the ftrace handler. This is needed by kgraft and possibly other ftrace-based tools, and the ARM architecture is currently lacking this feature. It would also be the first step to support the "Kprobes-on-ftrace" optimization on ARM. This patch introduces a new ftrace handler that stores the registers on the stack before calling the next stage. The registers are restored from the stack before going back to the instrumented function. A side-effect of this patch is to activate the support for ftrace_modify_call() as it defines ARCH_SUPPORTS_FTRACE_OPS for the ARM architecture. Signed-off-by: Abel Vesa <abelvesa@linux.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
-
- 09 June 2017, 1 commit
-
-
Submitted by Arnd Bergmann
The improved type-checking version of container_of() triggers a warning for xchg_xen_ulong, pointing out that 'xen_ulong_t' is unsigned, but atomic64_t contains a signed value:

    drivers/xen/events/events_2l.c: In function 'evtchn_2l_handle_events':
    drivers/xen/events/events_2l.c:187:1020: error: call to '__compiletime_assert_187' declared with attribute error: pointer type mismatch in container_of()

This adds a cast to work around the warning. Cc: Ian Abbott <abbotti@mev.co.uk> Fixes: 85323a99 ("xen: arm: mandate EABI and use generic atomic operations.") Fixes: daa2ac80834d ("kernel.h: handle pointers to arrays better in container_of()") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Stefano Stabellini <sstabellini@kernel.org> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org> Acked-by: Ian Abbott <abbotti@mev.co.uk>
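The workaround is a cast at the macro boundary so container_of() compares matching pointer types; a sketch of what the macro ends up looking like (modulo exact line breaks):

    /* arch/arm/include/asm/xen/events.h: xen_ulong_t is unsigned while
     * atomic64_t.counter is signed, so cast through (long *) before handing
     * the pointer to container_of().
     */
    #define xchg_xen_ulong(ptr, val) \
            atomic64_xchg(container_of((long *)(ptr), atomic64_t, counter), (val))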
-
- 08 June 2017, 1 commit
-
-
Submitted by Christoffer Dall
As we are about to support VCPU attributes to set the timer IRQ numbers in guest.c, move the static inlines for the VCPU attributes handlers from the header file to guest.c. Signed-off-by: Christoffer Dall <cdall@linaro.org> Acked-by: Marc Zyngier <marc.zyngier@arm.com>
-
- 05 June 2017, 2 commits
-
-
Submitted by Ard Biesheuvel
Wire up the existing arm64 support for SMBIOS tables (aka DMI) for ARM as well, by moving the arm64 init code to drivers/firmware/efi/arm-runtime.c (which is shared between ARM and arm64), and adding an asm/dmi.h header to ARM that defines the mapping routines for the firmware tables. This allows userspace to access these tables to discover system information exposed by the firmware. It also sets the hardware name used in crash dumps, e.g.:

    Unable to handle kernel NULL pointer dereference at virtual address 00000000
    pgd = ed3c0000
    [00000000] *pgd=bf1f3835
    Internal error: Oops: 817 [#1] SMP THUMB2
    Modules linked in:
    CPU: 0 PID: 759 Comm: bash Not tainted 4.10.0-09601-g0e8f38792120-dirty #112
    Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
    ^^^

NOTE: This does *NOT* enable or encourage the use of DMI quirks, i.e., the practice of identifying the platform via DMI to decide whether certain workarounds for buggy hardware and/or firmware need to be enabled. This would require the DMI subsystem to be enabled much earlier than we do on ARM, which is non-trivial. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Russell King <rmk+kernel@armlinux.org.uk> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Matt Fleming <matt@codeblueprint.co.uk> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-efi@vger.kernel.org Link: http://lkml.kernel.org/r/20170602135207.21708-14-ard.biesheuvel@linaro.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Submitted by Vladimir Murzin
A NOMMU build leads to the following error:

    CC      drivers/pci/mmap.o
    drivers/pci/mmap.c: In function 'pci_mmap_resource_range':
    drivers/pci/mmap.c:60:3: error: implicit declaration of function 'pgprot_device' [-Werror=implicit-function-declaration]
       vma->vm_page_prot = pgprot_device(vma->vm_page_prot);
       ^
    cc1: some warnings being treated as errors
    scripts/Makefile.build:302: recipe for target 'drivers/pci/mmap.o' failed
    make[2]: *** [drivers/pci/mmap.o] Error 1
    scripts/Makefile.build:561: recipe for target 'drivers/pci' failed
    make[1]: *** [drivers/pci] Error 2
    Makefile:1016: recipe for target 'drivers' failed
    make: *** [drivers] Error 2

Fix it by adding support for the pgprot_device() macro on NOMMU. Fixes: 00d2904f ("ARM/PCI: Use generic pci_mmap_resource_range()") Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
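On !MMU there are no page protection attributes to encode, so the fix can make pgprot_device() a pass-through next to the existing no-op helpers; a hedged sketch:

    /* arch/arm/include/asm/pgtable-nommu.h (sketch): with no MMU the prot
     * value carries no device/cacheability attributes, so pass it through
     * unchanged, as pgprot_noncached()/pgprot_writecombine() already do.
     */
    #define pgprot_device(prot)     (prot)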
-
- 04 June 2017, 2 commits
-
-
Submitted by Andrew Jones
Don't use request-less VCPU kicks when injecting IRQs, as a VCPU kick meant to trigger the interrupt injection could be sent while the VCPU is outside guest mode, which means no IPI is sent, and after it has called kvm_vgic_flush_hwstate(), meaning it won't see the updated GIC state until its next exit some time later for some other reason. The receiving VCPU only needs to check this request in VCPU RUN to handle it. By checking it, if it's pending, a memory barrier will be issued that ensures all state is visible. See "Ensuring Requests Are Seen" of Documentation/virtual/kvm/vcpu-requests.rst. Signed-off-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Christoffer Dall <cdall@linaro.org> Signed-off-by: Christoffer Dall <cdall@linaro.org>
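The request-based pattern the series switches to, in a hedged sketch (the request name follows the arm/arm64 KVM series):

    /* Instead of a bare kvm_vcpu_kick(vcpu), which the target can effectively
     * miss if it is between guest entries, raise a request first: the kick
     * forces an exit if needed, and checking the request in VCPU RUN (with
     * its implied memory barrier) guarantees the updated GIC state is seen
     * before the next guest entry.
     */
    kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);
    kvm_vcpu_kick(vcpu);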
-
Submitted by Andrew Jones
A request called EXIT is too generic. All requests are meant to cause exits, but different requests have different flags. Let's not make it difficult to decide if the EXIT request is correct for some case by just always providing unique requests for each case. This patch changes EXIT to SLEEP, because that's what the request is asking the VCPU to do. Signed-off-by: Andrew Jones <drjones@redhat.com> Acked-by: Christoffer Dall <cdall@linaro.org> Signed-off-by: Christoffer Dall <cdall@linaro.org>
-