- 21 Nov 2017, 2 commits
-
-
By Philip Derrin

Currently, for ARM kernels with CONFIG_ARM_LPAE and CONFIG_STRICT_KERNEL_RWX enabled, the 2MiB pages mapping the kernel code and rodata are writable. They are marked read-only in a software bit (L_PMD_SECT_RDONLY), but the hardware read-only bit (PMD_SECT_AP2) is not set. For user mappings, the logic that propagates the software bit to the hardware bit lives in set_pmd_at(); but for the kernel, section_update() writes the PMDs directly, skipping this logic. The fix is to set PMD_SECT_AP2 for read-only sections in section_update(), at the same time as L_PMD_SECT_RDONLY.

Fixes: 1e347922 ("ARM: 8275/1: mm: fix PMD_SECT_RDONLY undeclared compile error")
Signed-off-by: Philip Derrin <philip@cog.systems>
Reported-by: Neil Dick <neil@cog.systems>
Tested-by: Neil Dick <neil@cog.systems>
Tested-by: Laura Abbott <labbott@redhat.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
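A minimal sketch of the idea, assuming a section_update()-style direct PMD write (the helper name and structure here are illustrative, not the actual diff):

    /* When marking a kernel section read-only under LPAE, set the hardware
     * AP2 bit together with the software bit: section_update() bypasses
     * set_pmd_at(), which would otherwise propagate L_PMD_SECT_RDONLY
     * into PMD_SECT_AP2. */
    static void section_mark_readonly(pmd_t *pmdp)
    {
            pmd_t pmd = __pmd(pmd_val(*pmdp) | L_PMD_SECT_RDONLY | PMD_SECT_AP2);

            *pmdp = pmd;
            flush_pmd_entry(pmdp);
    }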
-
By Philip Derrin

When CONFIG_ARM_LPAE is set, the PMD dump relies on the software read-only bit to determine whether a page is writable. This concealed a bug which left the kernel text section writable (AP2=0) while marked read-only in the software bit. In a kernel with the AP2 bug, the dump looks like this:

    ---[ Kernel Mapping ]---
    0xc0000000-0xc0200000   2M  RW NX SHD
    0xc0200000-0xc0600000   4M  ro x  SHD
    0xc0600000-0xc0800000   2M  ro NX SHD
    0xc0800000-0xc4800000  64M  RW NX SHD

The fix is to check that the software and hardware bits are both set before displaying "ro". The dump then shows the true permissions:

    ---[ Kernel Mapping ]---
    0xc0000000-0xc0200000   2M  RW NX SHD
    0xc0200000-0xc0600000   4M  RW x  SHD
    0xc0600000-0xc0800000   2M  RW NX SHD
    0xc0800000-0xc4800000  64M  RW NX SHD

Fixes: ded94779 ("ARM: 8109/1: mm: Modify pte_write and pmd_write logic for LPAE")
Signed-off-by: Philip Derrin <philip@cog.systems>
Tested-by: Neil Dick <neil@cog.systems>
Reviewed-by: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
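The check itself reduces to a two-bit test; a sketch using the LPAE bit names from these two commits (not the literal ptdump code):

    /* Report "ro" only when the software and hardware read-only bits
     * agree; a software-only "ro" is exactly the AP2 bug shown above. */
    static const char *pmd_ro_str(pmd_t pmd)
    {
            bool sw_ro = pmd_val(pmd) & L_PMD_SECT_RDONLY;
            bool hw_ro = pmd_val(pmd) & PMD_SECT_AP2;

            return (sw_ro && hw_ro) ? "ro" : "RW";
    }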
-
- 16 Nov 2017, 1 commit
-
-
By Kirill A. Shutemov

Let's add wrappers for ->nr_ptes with the same interface as for nr_pmd and nr_pud. The patch also makes nr_ptes accounting dependent on CONFIG_MMU: page table accounting doesn't make sense if you don't have page tables. It's preparation for consolidating the page-table counters in mm_struct.

Link: http://lkml.kernel.org/r/20171006100651.44742-1-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
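The wrappers follow the shape of the existing mm_inc_nr_pmds()/mm_inc_nr_puds() helpers; a sketch (assumed names, with the CONFIG_MMU stubs described above):

    #ifdef CONFIG_MMU
    static inline void mm_inc_nr_ptes(struct mm_struct *mm)
    {
            atomic_long_inc(&mm->nr_ptes);
    }

    static inline void mm_dec_nr_ptes(struct mm_struct *mm)
    {
            atomic_long_dec(&mm->nr_ptes);
    }
    #else
    /* No page tables, no page-table accounting. */
    static inline void mm_inc_nr_ptes(struct mm_struct *mm) {}
    static inline void mm_dec_nr_ptes(struct mm_struct *mm) {}
    #endif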
-
- 06 Nov 2017, 1 commit
-
-
By Arnd Bergmann

The reworked MPU code produces a new warning in some configurations, presumably starting with the code move, after which the compiler makes different inlining decisions:

    arch/arm/mm/pmsa-v7.c: In function 'adjust_lowmem_bounds_mpu':
    arch/arm/mm/pmsa-v7.c:310:5: error: 'specified_mem_size' may be used
        uninitialized in this function [-Werror=maybe-uninitialized]

This appears to be harmless, as we know that there is always at least one memblock, and the only way this could get triggered is if the for_each_memblock() loop was never entered. I could not come up with a better workaround than initializing specified_mem_size to zero, but at least that is the value the variable would have in the hypothetical case of no memblocks.

Fixes: 877ec119 ("ARM: 8706/1: NOMMU: Move out MPU setup in separate module")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
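The workaround is a one-line initialization; a sketch of the pattern (surrounding code abbreviated):

    static void __init adjust_lowmem_bounds_mpu(void)
    {
            /* Zero is what the variable would hold if, hypothetically,
             * there were no memblocks and the loop never ran. */
            phys_addr_t specified_mem_size = 0;
            struct memblock_region *reg;

            for_each_memblock(memory, reg) {
                    /* ... derive specified_mem_size from the region ... */
            }

            /* ... later uses that triggered -Wmaybe-uninitialized ... */
    }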
-
- 02 Nov 2017, 1 commit
-
-
By Greg Kroah-Hartman

Many source files in the tree are missing licensing information, which makes it harder for compliance tools to determine the correct license. By default all files without license information are under the default license of the kernel, which is GPL version 2. Update the files which contain no license information with the 'GPL-2.0' SPDX license identifier. The SPDX identifier is a legally binding shorthand, which can be used instead of the full boilerplate text.

This patch is based on work done by Thomas Gleixner, Kate Stewart and Philippe Ombredanne.

How this work was done:

Patches were generated and checked against linux-4.14-rc6 for a subset of the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.

Further patches will be generated in subsequent months to fix up cases where non-standard license headers were used, and references to licenses had to be inferred by heuristics based on keywords.

The analysis to determine which SPDX License Identifier to apply to a file was done in a spreadsheet of side-by-side results from the output of two independent scanners (ScanCode & Windriver) producing SPDX tag:value files, created by Philippe Ombredanne. Philippe prepared the base worksheet and did an initial spot review of a few thousand files.

The 4.13 kernel was the starting point of the analysis, with 60,537 files assessed. Kate Stewart did a file-by-file comparison of the scanner results in the spreadsheet to determine which SPDX license identifier(s) to apply to each file. She confirmed any determination that was not immediately clear with lawyers working with the Linux Foundation.

Criteria used to select files for SPDX license identifier tagging:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5 lines of source.
- The file already had some variant of a license header in it (even if <5 lines).

All documentation files were explicitly excluded.

The following heuristics were used to determine which SPDX license identifiers to apply.

- When both scanners couldn't find any license traces, the file was considered to have no license information in it, and the top-level COPYING file license applied. For non-*/uapi/* files that summary was:

      SPDX license identifier                              # files
      ----------------------------------------------------|-------
      GPL-2.0                                               11139

  and resulted in the first patch in this series. If the file was a */uapi/* path one, it was tagged "GPL-2.0 WITH Linux-syscall-note", otherwise "GPL-2.0". The result:

      SPDX license identifier                              # files
      ----------------------------------------------------|-------
      GPL-2.0 WITH Linux-syscall-note                         930

  and resulted in the second patch in this series.

- If a file had some form of licensing information in it, and was one of the */uapi/* ones, it was denoted with the Linux-syscall-note if any GPL family license was found in the file or it had no licensing in it (per the prior point). Results summary:

      SPDX license identifier                              # files
      ----------------------------------------------------|------
      GPL-2.0 WITH Linux-syscall-note                         270
      GPL-2.0+ WITH Linux-syscall-note                        169
      ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)      21
      ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)      17
      LGPL-2.1+ WITH Linux-syscall-note                        15
      GPL-1.0+ WITH Linux-syscall-note                         14
      ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)      5
      LGPL-2.0+ WITH Linux-syscall-note                         4
      LGPL-2.1 WITH Linux-syscall-note                          3
      ((GPL-2.0 WITH Linux-syscall-note) OR MIT)                3
      ((GPL-2.0 WITH Linux-syscall-note) AND MIT)               1

  and that resulted in the third patch in this series.

- When the two scanners agreed on the detected license(s), that became the concluded license(s).
- When there was disagreement between the two scanners (one detected a license but the other didn't, or they both detected different licenses), a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file resulted in a clear resolution of the license that should apply (and which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier, the file was flagged for further research and to be revisited later in time.

In total, over 70 hours of logged manual review was done on the spreadsheet to determine the SPDX license identifiers to apply to the source files by Kate, Philippe, Thomas and, in some cases, confirmation by lawyers working with the Linux Foundation. Kate also obtained a third independent scan of the 4.13 code base from FOSSology and compared selected files where the other two scanners disagreed against that SPDX file, to see if there were new insights. The Windriver scanner is based in part on an older version of FOSSology, so they are related.

Thomas did random spot checks in about 500 files from the spreadsheets for the uapi headers and agreed with the SPDX license identifier in the files he inspected. For the non-uapi files Thomas did random spot checks in about 15000 files.

In the initial set of patches against 4.14-rc6, 3 files were found to have copy/paste license identifier errors, and have been fixed to reflect the correct identifier.

Additionally, Philippe spent 10 hours this week doing a detailed manual inspection and review of the 12,461 patched files from the initial patch version, with:
- a full scancode scan run, collecting the matched texts, detected license ids and scores
- a review of anything where a license was detected (about 500+ files) to ensure that the applied SPDX license was correct
- a review of anything where there was no detection but the patch license was not GPL-2.0 WITH Linux-syscall-note, to ensure that the applied SPDX license was correct

This produced a worksheet with 20 files needing minor correction. The worksheet was then exported into 3 different .csv files for the different types of files to be modified. These .csv files were then reviewed by Greg. Thomas wrote a script to parse the csv files and add the proper SPDX tag to each file, in the format that the file expected. This script was further refined by Greg based on the output, to detect more types of files automatically and to distinguish between header and source .c files (which need different comment types). Finally Greg ran the script using the .csv files to generate the patches.

Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
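In practice the added tag is a single line at the top of each file. Per the comment-type distinction mentioned above, a .c source file gets the C++-style form while a header gets a classic C comment:

    // SPDX-License-Identifier: GPL-2.0

    /* SPDX-License-Identifier: GPL-2.0 */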
-
- 23 Oct 2017, 6 commits
-
-
By Vladimir Murzin

Currently, there is an assumption in the early MPU setup code that the kernel image is located in RAM, which is obviously not true for XIP. To run code from ROM we need to make sure that it is covered by the MPU. However, because we allocate regions (semi-)dynamically, we can run into the issue of trimming the region we are running from, in case the ROM spawns several MPU regions. To help deal with that, we enforce minimum alignments for the start and end of the XIP address space of 1MB and 128KB respectively.

Tested-by: Alexandre TORGUE <alexandre.torgue@st.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
-
By Vladimir Murzin

PMSAv7 defines curious alignment requirements for its regions:
- the size must be a power of 2, and
- the region start must be aligned to the region size

Because of that, we currently adjust the lowmem bounds and assign only one MPU region to cover memory; together these can waste a significant amount of memory. As an example, consider 64MB of memory at 0x70000000 - it fits the alignment requirements nicely. Now imagine that 2MB of memory is reserved for coherent DMA allocation, so Linux is expected to see 62MB of memory... and here the annoying thing happens: memory gets truncated to 32MB (we've lost 30MB!), i.e. the MPU layout looks like:

    0: base 0x70000000, size 0x2000000

This patch tries to allocate as many MPU slots as possible to minimise the amount of truncated memory. Moreover, with this patch MPU subregions start to get used; they allow us to reduce the number of MPU slots needed. For the example given above, the MPU layout becomes:

    0: base 0x70000000, size 0x2000000
    1: base 0x72000000, size 0x1000000
    2: base 0x73000000, size 0x1000000, disable subreg 7 (0x73e00000 - 0x73ffffff)

whereas without subregions we'd get:

    0: base 0x70000000, size 0x2000000
    1: base 0x72000000, size 0x1000000
    2: base 0x73000000, size 0x800000
    3: base 0x73800000, size 0x400000
    4: base 0x73c00000, size 0x200000

To achieve a better layout, we first try to cover the specified memory as-is (maybe with the help of subregions) and, if that fails, we truncate the memory to fit the alignment requirements (so it occupies one MPU slot) and make one more attempt with the remainder, and so on, until we either cover all memory or run out of MPU slots.

Tested-by: Szemző András <sza@esh.hu>
Tested-by: Alexandre TORGUE <alexandre.torgue@st.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
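The constraint driving all of this fits in a couple of lines; a sketch with hypothetical helper names (not from the patch):

    #include <linux/log2.h>

    /* PMSAv7: a region is programmable only if its size is a power of
     * two and its base is aligned to that size. */
    static bool pmsav7_region_ok(unsigned long base, unsigned long size)
    {
            return is_power_of_2(size) && IS_ALIGNED(base, size);
    }

    /* Largest single region that may start at @base within @size bytes. */
    static unsigned long pmsav7_max_region(unsigned long base, unsigned long size)
    {
            unsigned long sz = 1UL << __fls(size);

            while (sz > PAGE_SIZE && !pmsav7_region_ok(base, sz))
                    sz >>= 1;
            return sz;
    }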
-
By Vladimir Murzin

This patch makes it possible to use the MPU with v7M cores.

Tested-by: Szemző András <sza@esh.hu>
Tested-by: Alexandre TORGUE <alexandre.torgue@st.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
-
By Vladimir Murzin

Currently, there are several issues with how the MPU is set up:

1. We won't boot if the MPU is missing.
2. We won't boot if XIP is in use.
3. Further extension of the MPU setup requires asm skills.

The 1st point can be relaxed: we can continue booting the boot CPU even if the MPU is missing, and fail the boot for secondaries only. To address the 2nd point, we could create a region covering CONFIG_XIP_PHYS_ADDR - _end; that might work for the first stage of MPU enablement, but due to the MPU's alignment requirements we could cover too much. IOW, we need more flexibility in how we partition memory regions... and that would be hardly possible to achieve because of the 3rd point.

This patch addresses the 1st and 3rd issues and paves the way for the 2nd and further improvements. The most visible change introduced with this patch is that we start using the mpu_rgn_info array (as it was supposed to be used?), so changes in the MPU setup done by the boot CPU are recorded there and fed to the secondaries. This allows us to keep a minimal region setup for the boot CPU and do the rest in C. Since we start programming MPU regions in C, evaluation of the MPU constraints (number of regions supported and minimal region order) can be done once, which in turn opens the possibility to free up the "probe" region early.

Tested-by: Szemző András <sza@esh.hu>
Tested-by: Alexandre TORGUE <alexandre.torgue@st.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
-
By Vladimir Murzin

Currently, the inline assembly for accessing the MPU's cp15 registers lacks the volatile keyword, which opens up the possibility for the compiler to optimise such accesses away as soon as we start using them more intensively. Rather than fixing the inline asm, let's move the MPU accessors over to the cp15 helpers, which do the right thing.

Tested-by: Szemző András <sza@esh.hu>
Tested-by: Alexandre TORGUE <alexandre.torgue@st.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
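To illustrate the failure mode, here is a hypothetical accessor of the problematic kind; without "volatile" the compiler may assume the asm has no side effects and hoist, merge, or drop repeated accesses (the cp15 encoding shown, the PMSAv7 DRBAR, is quoted from memory and for illustration only):

    static inline u32 drbar_read(void)
    {
            u32 v;

            /* "volatile" forces the access to actually happen, every time */
            asm volatile("mrc p15, 0, %0, c6, c1, 0" : "=r" (v));
            return v;
    }

    static inline void drbar_write(u32 v)
    {
            asm volatile("mcr p15, 0, %0, c6, c1, 0" : : "r" (v));
    }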
-
By Vladimir Murzin

Having the MPU handling code in a dedicated module makes it easier to enhance and maintain it.

Tested-by: Szemző András <sza@esh.hu>
Tested-by: Alexandre TORGUE <alexandre.torgue@st.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
-
- 12 Oct 2017, 1 commit
-
-
By Nicolas Pitre

Some nommu systems have RAM at address 0. When the vectors are not located there, the very beginning of memory remains available for dynamic allocations. The memblock allocator explicitly skips the first page, but the standard page allocator does not: while it correctly returns a non-null struct page pointer for that page, page_address() gives 0, which gets confused with NULL (out of memory) by callers, despite there being plenty of free memory left.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
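One way to keep that first page out of the page allocator is to reserve it during early init; this is a sketch of the idea only, and the actual commit may fix it differently:

    #include <linux/memblock.h>

    /* Make sure the page at PA 0 is never handed out, so that a valid
     * page_address() of 0 can't be mistaken for an allocation failure. */
    static void __init reserve_page_zero(void)
    {
            if (memblock_start_of_DRAM() == 0)
                    memblock_reserve(0, PAGE_SIZE);
    }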
-
- 28 Sep 2017, 4 commits
-
-
By Vladimir Murzin

There are no users of init_dma_coherent_pool_size() left after 387870f2 ("mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls"), so remove it.

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
-
By Vladimir Murzin

atomic_pool is set up once during the init stage and never changed after that, so it is a good candidate for __ro_after_init. While we are here, mark atomic_pool_size with __initdata.

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
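The resulting annotations look roughly like this (a sketch; the pool-size initializer name is assumed from the surrounding dma-mapping code):

    static struct gen_pool *atomic_pool __ro_after_init;

    static size_t atomic_pool_size __initdata = DEFAULT_DMA_COHERENT_POOL_SIZE;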
-
By Vladimir Murzin

gen_pool_first_fit_order_align() does not make use of the additional data, so pass plain NULL there.

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
-
By Vladimir Murzin

The code in question checks memory constraints to set the default overcommit policy; however, we only support a page size of 4K, so the condition always evaluates to false. Remove that dead code.

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
-
- 29 Aug 2017, 2 commits
-
-
By Mark Rutland

When there's a fatal signal pending, arm's do_page_fault() implementation returns 0. The intent is that we'll return to the faulting userspace instruction, delivering the signal on the way. However, if we take a fatal signal while fixing up a uaccess, this results in a return to the faulting kernel instruction, which will be instantly retried, resulting in the same fault being taken forever. As the task never reaches userspace, the signal is not delivered, and the task is left unkillable. While the task is stuck in this state, it can inhibit the forward progress of the system. To avoid this, we must ensure that when a fatal signal is pending, we apply any necessary fixup for a faulting kernel instruction. Thus we will return to an error path, and it is up to that code to make forward progress towards delivering the fatal signal.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
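The shape of the fix in do_page_fault() is roughly the following (a sketch based on the description above, not the literal diff):

    fault = __do_page_fault(mm, addr, fsr, flags, tsk);

    /* A fatal signal may have interrupted the fault.  For a user-mode
     * fault, returning is fine: the signal is delivered on the way back
     * to userspace.  For a kernel-mode fault (a uaccess), take the
     * exception-fixup path instead of retrying the instruction forever. */
    if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(current)) {
            if (!user_mode(regs))
                    goto no_context;    /* runs the uaccess fixup */
            return 0;
    }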
-
By Hoeun Ryu

Reading TTBCR in the early boot stage might return the value of the previous kernel's configuration, especially in the case of kexec. For example, if the normal kernel (first kernel) ran with a configuration of PHYS_OFFSET <= PAGE_OFFSET and the crash kernel (second kernel) is running with PHYS_OFFSET > PAGE_OFFSET (which can happen, because it depends on the area reserved for the crash kernel), then reading TTBCR and using the value to OR in other bit fields is risky, because TTBCR does not have a defined reset value.

Suggested-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Hoeun Ryu <hoeun.ryu@gmail.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
-
- 14 Aug 2017, 1 commit
-
-
By Russell King

Robert Jarzmik reports that his PXA25x system fails to boot with 4.12, failing at __flush_whole_cache in arch/arm/mm/proc-xscale.S:215:

    0xc0019e20 <+0>:  ldr  r1, [pc, #788]
    0xc0019e24 <+4>:  ldr  r0, [r1]        <== here

with r1 containing 0xc06f82cd, which is the address of "clean_addr". Examination of the System.map shows:

    c06f22c8 D user_pmd_table
    c06f22cc d __warned.19178
    c06f22cd d clean_addr

indicating that a .data.unlikely section has appeared just before the .data section from proc-xscale.S. According to objdump -h, it appears that our assembly files default their .data alignment to 2**0, which is bad news if the preceding .data section size is not power-of-2 aligned at link time. Add the appropriate .align directives to all assembly files in arch/arm that are missing them where we require an appropriate alignment.

Reported-by: Robert Jarzmik <robert.jarzmik@free.fr>
Tested-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
-
- 20 Jul 2017, 2 commits
-
-
By Vladimir Murzin

The way the default DMA pool is exposed has changed, and now we need to use a dedicated interface to work with it. This patch makes the alloc/release operations use that interface. Since the default DMA pool is no longer handled by generic code, we have to implement our own mmap operation.

Tested-by: Andras Szemzo <sza@esh.hu>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
By Vladimir Murzin

Christoph noticed [1] that the default DMA pool, in its current form, overloads the DMA coherent infrastructure. In reply, Robin suggested [2] splitting the per-device and global pool interfaces, so that allocation/release from the default DMA pool is driven by the dma ops implementation. This patch implements Robin's idea and provides an interface to allocate/release/mmap the default (aka global) DMA pool. To make it clear that the existing *_from_coherent routines work on the per-device pool, rename them to *_from_dev_coherent.

[1] https://lkml.org/lkml/2017/7/7/370
[2] https://lkml.org/lkml/2017/7/7/431

Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Suggested-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Andras Szemzo <sza@esh.hu>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
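A sketch of the resulting split (signatures paraphrased from the description, not guaranteed verbatim):

    /* Per-device pool: the old *_from_coherent behaviour, renamed. */
    int dma_alloc_from_dev_coherent(struct device *dev, ssize_t size,
                                    dma_addr_t *dma_handle, void **ret);
    int dma_release_from_dev_coherent(struct device *dev, int order, void *vaddr);

    /* Global (default) pool: called explicitly by the arch dma_map_ops. */
    void *dma_alloc_from_global_coherent(ssize_t size, dma_addr_t *dma_handle);
    int dma_release_from_global_coherent(int order, void *vaddr);
    int dma_mmap_from_global_coherent(struct vm_area_struct *vma, void *cpu_addr,
                                      size_t size, int *ret);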
-
- 01 Jul 2017, 4 commits
-
-
By Arnd Bergmann

With the new task struct randomization, we can run into a build failure for certain random seeds, which place fields beyond the allowed immediate size in the assembly:

    arch/arm/kernel/entry-armv.S: Assembler messages:
    arch/arm/kernel/entry-armv.S:803: Error: bad immediate value for offset (4096)

Only two constants in asm-offsets.h are affected, and I'm changing both of them here to work correctly in all configurations. One more macro has the problem but is currently unused, so this removes it instead of adding complexity.

Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
[kees: Adjust commit log slightly]
Signed-off-by: Kees Cook <keescook@chromium.org>
-
By Vladimir Murzin

DMA operations for the NOMMU case have just been factored out into a separate compilation unit, so don't keep the dead code.

Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Andras Szemzo <sza@esh.hu>
Tested-by: Alexandre TORGUE <alexandre.torgue@st.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
By Vladimir Murzin

Now we have a dedicated non-cacheable region for consistent DMA operations. However, that region can still be marked as bufferable by the MPU, so it'd be safer to have barriers by default. M-class machines that didn't need them until now also likely won't need them in the future; therefore, we offer this as an option.

Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Andras Szemzo <sza@esh.hu>
Tested-by: Alexandre TORGUE <alexandre.torgue@st.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
By Vladimir Murzin

The R/M classes of CPUs can have memory covered by an MPU which, in turn, might configure RAM as Normal, i.e. bufferable and cacheable. That breaks dma_alloc_coherent() and friends, since data can now get stuck in caches or buffers. This patch factors out DMA support for the NOMMU configuration into a separate entity which provides dedicated dma_ops. We have to handle several cases there:
- configurations with an MMU/MPU setup
- configurations without an MMU/MPU setup
- a special case for M-class, since caches and the MPU there are optional

In general we rely on the default DMA area for coherent allocations, and/or on per-device memory reserves suitable for coherent DMA, so if such regions are set, coherent allocations come from there. In case the MMU/MPU was not set up, we fall back to the normal page allocator for DMA memory allocation. In case we run on M-class CPUs, for configurations without cache support (like Cortex-M3/M4) DMA operations are forced to be coherent and wired to dma-noop (this decision is made based on the cacheid global variable); however, if caches are detected there and no DMA coherent region is given (either default or per-device), DMA is disallowed even if the MPU is not set, because the M-class implements a system memory map which defines part of the address space as Normal memory.

Reported-by: Alexandre Torgue <alexandre.torgue@st.com>
Reported-by: Andras Szemzo <sza@esh.hu>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Andras Szemzo <sza@esh.hu>
Tested-by: Alexandre TORGUE <alexandre.torgue@st.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
[hch: removed the dma_supported() implementation that isn't required anymore]
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 30 Jun 2017, 1 commit
-
-
By Doug Berger

The pmd containing memblock_limit is cleared by prepare_page_table(), which creates the opportunity for early_alloc() to allocate unmapped memory if memblock_limit is not pmd-aligned, causing a boot-time hang. Commit 965278dc ("ARM: 8356/1: mm: handle non-pmd-aligned end of RAM") attempted to resolve this problem, but there is a path through the adjust_lowmem_bounds() routine where, if all memory regions start and end on pmd-aligned addresses, memblock_limit is set to arm_lowmem_limit. Since arm_lowmem_limit can be affected by the vmalloc early parameter, its value may not be pmd-aligned. This commit corrects this oversight such that memblock_limit is always rounded down to pmd alignment.

Fixes: 965278dc ("ARM: 8356/1: mm: handle non-pmd-aligned end of RAM")
Signed-off-by: Doug Berger <opendmb@gmail.com>
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
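The essence of the fix is a single rounding applied after all the bounds logic has run, consistent with the description above:

    /* However memblock_limit was derived, never let the early allocation
     * limit straddle a partially-mapped PMD. */
    memblock_limit = round_down(memblock_limit, PMD_SIZE);
    memblock_set_current_limit(memblock_limit);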
-
- 28 Jun 2017, 2 commits
-
-
By Christoph Hellwig

And instead wire it up as a method for all the dma_map_ops instances. Note that the code seems a little fishy for dmabounce and iommu, but for now I'd like to preserve the existing behavior 1:1.

Signed-off-by: Christoph Hellwig <hch@lst.de>
-
By Christoph Hellwig

DMA_ERROR_CODE is going to go away, so don't rely on it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 19 Jun 2017, 1 commit
-
-
By Hugh Dickins

A stack guard page is a useful feature to reduce the risk of the stack smashing into a different mapping. We have been using a single-page gap, which is sufficient to prevent having the stack adjacent to a different mapping. But this seems to be insufficient in light of stack usage in userspace: e.g. glibc uses alloca() allocations as large as 64kB in many commonly used functions, and others use constructs like gid_t buffer[NGROUPS_MAX], which is 256kB, or stack strings with MAX_ARG_STRLEN.

This will become especially dangerous for suid binaries with the default unlimited stack size, because those applications can be tricked into consuming a large portion of the stack, and a single glibc call could jump over the guard page. These attacks are not theoretical, unfortunately.

Make those attacks less probable by increasing the stack guard gap to 1MB (on systems with 4k pages; but make it depend on the page size, because systems with larger base pages might cap stack allocations in PAGE_SIZE units), which should cover larger alloca() and VLA stack allocations. It is obviously not a full fix, because the problem is somewhat inherent, but it should reduce the attack space a lot.

One could argue that the gap size should be configurable from userspace, but that can be done later, when somebody finds that the new 1MB is wrong for some special-case applications. For now, add a kernel command line option (stack_guard_gap) to specify the stack gap size (in page units).

Implementation-wise, first delete all the old code for the stack guard page: although we could get away with accounting one extra page in a stack vma, accounting a larger gap can break userspace. Case in point: a program run with "ulimit -S -v 20000" failed when the 1MB gap was counted towards RLIMIT_AS; similar problems could come with RLIMIT_MLOCK and strict non-overcommit mode.

Instead of keeping the gap inside the stack vma, maintain the stack guard gap as a gap between vmas: use vm_start_gap() in place of vm_start (or vm_end_gap() in place of vm_end if VM_GROWSUP) in just those few places which need to respect the gap, mainly arch_get_unmapped_area(), and the vma tree's subtree_gap support for that.

Original-patch-by: Oleg Nesterov <oleg@redhat.com>
Original-patch-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Tested-by: Helge Deller <deller@gmx.de> # parisc
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
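For reference, the gap-aware helper described above looks roughly like this (a sketch of vm_start_gap(); the VM_GROWSUP case has a symmetric vm_end_gap()):

    static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
    {
            unsigned long vm_start = vma->vm_start;

            if (vma->vm_flags & VM_GROWSDOWN) {
                    vm_start -= stack_guard_gap;
                    if (vm_start > vma->vm_start)   /* underflow wrapped */
                            vm_start = 0;
            }
            return vm_start;
    }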
-
- 30 May 2017, 3 commits
-
-
By Russell King

David Mosberger reports random segfaults and other problems when running his buildroot userspace. It turns out that his kernel did not have support for Thumb userspace, nor did his application, but glibc made use of Thumb instructions. The kernel Thumb support option already recommends being enabled, and is biased that way, but clearly this is not enough of a recommendation. So, hide this behind CONFIG_EXPERT as well, and include a note to indicate the potential issues if it's turned off while userspace makes use of Thumb mode.

Reported-by: David Mosberger <davidm@egauge.net>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
-
By Sricharan R

arch_teardown_dma_ops() being the inverse of arch_setup_dma_ops(), dma_ops should be cleared in the teardown path. Currently, only the device's IOMMU mapping structures are cleared in arch_teardown_dma_ops(), but not the dma_ops. So on the next reprobe, the dma_ops left in place are stale from the first IOMMU setup, but the IOMMU mappings have been disposed of. This is a problem when the probe of the device is deferred and recalled with IOMMU probe deferral. To fix this, slightly refactor by moving the code from __arm_iommu_detach_device into arm_iommu_detach_device and clean up the former. This takes care of resetting the dma_ops in the teardown path.

Fixes: 09515ef5 ("of/acpi: Configure dma operations at probe time for platform/amba/pci bus devices")
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Sricharan R <sricharan@codeaurora.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
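The end state can be sketched as follows (paraphrasing the refactor; the real diff folds __arm_iommu_detach_device into this function):

    void arm_iommu_detach_device(struct device *dev)
    {
            struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);

            if (!mapping) {
                    dev_warn(dev, "Not attached\n");
                    return;
            }

            iommu_detach_device(mapping->domain, dev);
            kref_put(&mapping->kref, release_iommu_mapping);
            to_dma_iommu_mapping(dev) = NULL;
            /* drop the stale iommu dma_ops so the next probe starts clean */
            set_dma_ops(dev, NULL);
    }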
-
By Laurent Pinchart

arch_setup_dma_ops() is used in device probe code paths to create an IOMMU mapping and attach it to the device. The function assumes that the device is attached to a device-specific IOMMU instance (or at least a device-specific TLB in a shared IOMMU instance) and thus creates a separate mapping for every device. On several systems (Renesas R-Car Gen2 being one of them), that assumption is not true, and IOMMU mappings must be shared between multiple devices. In those cases the IOMMU driver knows better than the generic ARM dma-mapping layer and attaches mappings to devices manually with arm_iommu_attach_device(), which sets the DMA ops for the device.

The arch_setup_dma_ops() function takes this into account and bails out immediately if the device already has DMA ops assigned. However, the corresponding arch_teardown_dma_ops() function, called from driver unbind code paths (including probe deferral), tears the mapping down regardless of who created it. When the device is reprobed, arch_setup_dma_ops() is called again but won't perform any operation, as the DMA ops are still set.

We need to reset the DMA ops in arch_teardown_dma_ops() to fix this. However, we can't do so unconditionally, as then a new mapping would be created by arch_setup_dma_ops() when the device is reprobed, regardless of whether the device needs to share a mapping or not. We must thus keep track of whether arch_setup_dma_ops() created the mapping, and only in that case tear it down in arch_teardown_dma_ops().

Keep track of that information in the dev_archdata structure. As the structure is embedded in all instances of struct device, let's not grow it, but turn the existing dma_coherent bool field into a bitfield that can be used for other purposes.

Fixes: 09515ef5 ("of/acpi: Configure dma operations at probe time for platform/amba/pci bus devices")
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
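The bookkeeping lives in dev_archdata, reusing the bit freed by shrinking dma_coherent; a sketch (the real structure carries additional fields):

    struct dev_archdata {
            unsigned int dma_coherent:1;    /* was a bool */
            unsigned int dma_ops_setup:1;   /* mapping created by
                                               arch_setup_dma_ops() itself */
    };

    void arch_teardown_dma_ops(struct device *dev)
    {
            /* Leave third-party mappings (arm_iommu_attach_device()) alone. */
            if (!dev->archdata.dma_ops_setup)
                    return;

            arm_teardown_iommu_dma_ops(dev);
    }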
-
- 09 May 2017, 1 commit
-
-
By Laura Abbott

The set_memory_* functions have moved to set_memory.h. Switch to including this header explicitly.

Link: http://lkml.kernel.org/r/1488920133-27229-3-git-send-email-labbott@redhat.com
Signed-off-by: Laura Abbott <labbott@redhat.com>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 02 May 2017, 1 commit
-
-
By Stefano Stabellini

The following commit:

    commit 815dd187
    Author: Bart Van Assche <bart.vanassche@sandisk.com>
    Date:   Fri Jan 20 13:04:04 2017 -0800

        treewide: Consolidate get_dma_ops() implementations

rearranges get_dma_ops in a way that xen_dma_ops are no longer returned when running on Xen; dev->dma_ops is returned instead (see arch/arm/include/asm/dma-mapping.h:get_arch_dma_ops and include/linux/dma-mapping.h:get_dma_ops). Fix the problem by storing dev->dma_ops in dev_archdata and setting dev->dma_ops to xen_dma_ops. This way, xen_dma_ops is returned naturally by get_dma_ops, and the Xen code can retrieve the original dev->dma_ops from dev_archdata when needed. It also allows us to remove __generic_dma_ops from the common headers.

Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Julien Grall <julien.grall@arm.com>
Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: <stable@vger.kernel.org> [4.11+]
CC: linux@armlinux.org.uk
CC: catalin.marinas@arm.com
CC: will.deacon@arm.com
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
CC: Julien Grall <julien.grall@arm.com>
-
- 29 Apr 2017, 1 commit
-
-
By Laurent Pinchart

The arch_setup_dma_ops() function is in charge of setting dma_ops with a call to set_dma_ops(). set_dma_ops() is also called from:
- the highbank and mvebu bus notifiers
- dmabounce (to be replaced with swiotlb)
- arm_iommu_attach_device (itself called from IOMMU and bus master device drivers)

To allow the arch_setup_dma_ops() call to be moved from device-add time to device-probe time, we must ensure that dma_ops already set up by any of the above callers will not be overridden; see the guard sketched below. After replacing dmabounce with swiotlb, converting IOMMU drivers to of_xlate, and taking care of highbank and mvebu, the workaround should be removed.

Signed-off-by: Sricharan R <sricharan@codeaurora.org>
Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
Tested-by: Ralph Sennhauser <ralph.sennhauser@gmail.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
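A sketch of the guard (the signature matches the era's ARM arch_setup_dma_ops; the body is abbreviated):

    void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
                            const struct iommu_ops *iommu, bool coherent)
    {
            /* Don't override dma_ops installed earlier by a bus notifier,
             * dmabounce, or arm_iommu_attach_device(). */
            if (dev->dma_ops)
                    return;

            /* ... normal per-device dma_ops / IOMMU mapping setup ... */
    }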
-
- 26 Apr 2017, 3 commits
-
-
By Grygorii Strashko

The backtrace below can be observed on an -rt kernel with the CONFIG_DEBUG_MODULE_RONX (in 4.9 kernels, CONFIG_DEBUG_RODATA) option enabled:

    BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:993
    in_atomic(): 1, irqs_disabled(): 128, pid: 14, name: migration/0
    1 lock held by migration/0/14:
    #0: (tasklist_lock){+.+...}, at: [<c01183e8>] update_sections_early+0x24/0xdc
    irq event stamp: 38
    hardirqs last enabled at (37): [<c08f6f7c>] _raw_spin_unlock_irq+0x24/0x68
    hardirqs last disabled at (38): [<c01fdfe8>] multi_cpu_stop+0xd8/0x138
    softirqs last enabled at (0): [<c01303ec>] copy_process.part.5+0x238/0x1b64
    softirqs last disabled at (0): [< (null)>] (null)
    Preemption disabled at: [<c01fe244>] cpu_stopper_thread+0x80/0x10c
    CPU: 0 PID: 14 Comm: migration/0 Not tainted 4.9.21-rt16-02220-g49e319c #15
    Hardware name: Generic DRA74X (Flattened Device Tree)
    [<c0112014>] (unwind_backtrace) from [<c010d370>] (show_stack+0x10/0x14)
    [<c010d370>] (show_stack) from [<c049beb8>] (dump_stack+0xa8/0xd4)
    [<c049beb8>] (dump_stack) from [<c01631a0>] (___might_sleep+0x1bc/0x2ac)
    [<c01631a0>] (___might_sleep) from [<c08f7244>] (__rt_spin_lock+0x1c/0x30)
    [<c08f7244>] (__rt_spin_lock) from [<c08f77a4>] (rt_read_lock+0x54/0x68)
    [<c08f77a4>] (rt_read_lock) from [<c01183e8>] (update_sections_early+0x24/0xdc)
    [<c01183e8>] (update_sections_early) from [<c01184b0>] (__fix_kernmem_perms+0x10/0x1c)
    [<c01184b0>] (__fix_kernmem_perms) from [<c01fe010>] (multi_cpu_stop+0x100/0x138)
    [<c01fe010>] (multi_cpu_stop) from [<c01fe24c>] (cpu_stopper_thread+0x88/0x10c)
    [<c01fe24c>] (cpu_stopper_thread) from [<c015edc4>] (smpboot_thread_fn+0x174/0x31c)
    [<c015edc4>] (smpboot_thread_fn) from [<c015a988>] (kthread+0xf0/0x108)
    [<c015a988>] (kthread) from [<c0108818>] (ret_from_fork+0x14/0x3c)
    Freeing unused kernel memory: 1024K (c0d00000 - c0e00000)

stop_machine() is called with cpus = NULL from fix_kernmem_perms() and mark_rodata_ro(), which means only one CPU executes update_sections_early() while all other CPUs spin and wait. Hence, it's safe to remove the tasklist locking from update_sections_early(). As part of this change, also mark functions which are local to this module as static.

Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Acked-by: Laura Abbott <labbott@redhat.com>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
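With the locking gone, the walk relies entirely on stop_machine() exclusion; a sketch of the resulting function (names taken from the backtrace above, quoted from memory rather than from the diff):

    /* Runs via stop_machine(): no other CPU can mutate the task list, so
     * tasklist_lock (an rt-mutex-backed rwlock on -rt) is not needed. */
    static void update_sections_early(struct section_perm perms[], int n)
    {
            struct task_struct *t, *s;

            for_each_process(t) {
                    if (t->flags & PF_KTHREAD)
                            continue;
                    for_each_thread(t, s)
                            set_section_perms(perms, n, true, s->mm);
            }
            set_section_perms(perms, n, true, current->active_mm);
            set_section_perms(perms, n, true, &init_mm);
    }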
-
By Vladimir Murzin

According to the ARMv7 ARM, when an exception is taken the contents of r0-r3 and r12 are unknown (see the ExceptionTaken() pseudocode). Even though existing implementations keep these registers unchanged, preserve them to be in line with the architecture.

Reported-by: Dobromir Stefanov <dobromir.stefanov@arm.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
-
By Vladimir Murzin

We save/restore registers around v7m_invalidate_l1 at the address pointed to by r12, which is the vector table, so the first eight entries get overwritten with garbage. We already have the stack set up at that stage, so use it to save/restore the registers instead.

Fixes: 6a8146f4 ("ARM: 8609/1: V7M: Add support for the Cortex-M7 processor")
Cc: <stable@vger.kernel.org>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
-
- 25 Apr 2017, 1 commit
-
-
By Lorenzo Pieralisi

The PCI bus specification (rev 3.0, 3.2.5 "Transaction Ordering and Posting") defines rules for PCI configuration space transaction ordering and posting, which state that configuration writes have to be non-posted transactions. The current ioremap interface on ARM provides mapping functions that produce "bufferable" write transactions (ie ioremap uses the MT_DEVICE memory type), aka posted writes, so PCI host controller drivers have no arch interface to remap PCI configuration space with memory attributes that comply with the PCI specification for configuration space. Implement an ARM-specific pci_remap_cfgspace() interface that maps PCI config memory regions with the MT_UNCACHED memory type (ie strongly ordered, non-posted writes), providing a remap function that complies with the PCI specification for config space transactions.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Russell King <linux@armlinux.org.uk>
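A sketch of the interface, modeled on the other ARM ioremap variants (the exact implementation may differ):

    void __iomem *pci_remap_cfgspace(resource_size_t res_cookie, size_t size)
    {
            /* MT_UNCACHED => strongly ordered => config writes are non-posted */
            return arch_ioremap_caller(res_cookie, size, MT_UNCACHED,
                                       __builtin_return_address(0));
    }
    EXPORT_SYMBOL_GPL(pci_remap_cfgspace);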
-
- 20 Apr 2017, 1 commit
-
-
By Jon Medhurst

To cope with the variety of ARM architectures and configurations, the page table attributes for kernel memory are generated at runtime to match the system the kernel finds itself on. This calculated value is stored in pgprot_kernel. However, when early fixmap support was added for ARM (commit a5f4c561), the attributes used for mappings were hard-coded because pgprot_kernel is not set up early enough. Unfortunately, when fixmap is used after early boot this means the memory being mapped can have different attributes from existing mappings, potentially leading to unpredictable behaviour. A specific problem also exists because the hard-coded values do not include the 'shareable' attribute, which means on systems where this matters (e.g. those with multiple CPU clusters) the cache contents for a memory location can become inconsistent between CPUs.

To resolve these issues we change fixmap to use the same memory attributes (from pgprot_kernel) that the rest of the kernel uses. To enable this we need to refactor the initialisation code so that build_mem_type_table() is called early enough. Note that this relies on early param parsing for memory type overrides passed via the kernel command line, so we need to make sure this call is still after parse_early_param().

[ardb: keep early_fixmap_init() before param parsing, for earlycon]

Fixes: a5f4c561 ("ARM: 8415/1: early fixmap support for earlycon")
Cc: <stable@vger.kernel.org> # v4.3+
Tested-by: afzal mohammed <afzal.mohd.ma@gmail.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
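Once build_mem_type_table() runs early enough, the fixmap attributes can simply be derived from the runtime-computed pgprot_kernel rather than a hard-coded constant; a sketch of the definition (assumed form, not verbatim):

    /* Fixmap mappings now share the kernel's runtime-computed attributes
     * (including shareability), with execute-never added on top. */
    #define FIXMAP_PAGE_NORMAL      (pgprot_kernel | L_PTE_XN)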
-