- 25 July 2015 (1 commit)
-
-
Committed by Russell King
The existing memory barrier macro causes a significant amount of code to be inserted inline at every call site. For example, in gpio_set_irq_type(), we have this for mb():

    c0344c08:  f57ff04e  dsb  st
    c0344c0c:  e59f8190  ldr  r8, [pc, #400]  ; c0344da4 <gpio_set_irq_type+0x230>
    c0344c10:  e3590004  cmp  r9, #4
    c0344c14:  e5983014  ldr  r3, [r8, #20]
    c0344c18:  0a000054  beq  c0344d70 <gpio_set_irq_type+0x1fc>
    c0344c1c:  e3530000  cmp  r3, #0
    c0344c20:  0a000004  beq  c0344c38 <gpio_set_irq_type+0xc4>
    c0344c24:  e50b2030  str  r2, [fp, #-48]  ; 0xffffffd0
    c0344c28:  e50bc034  str  ip, [fp, #-52]  ; 0xffffffcc
    c0344c2c:  e12fff33  blx  r3
    c0344c30:  e51bc034  ldr  ip, [fp, #-52]  ; 0xffffffcc
    c0344c34:  e51b2030  ldr  r2, [fp, #-48]  ; 0xffffffd0
    c0344c38:  e5963004  ldr  r3, [r6, #4]

Moving the outer_cache_sync() call out of line reduces the impact of the barrier:

    c0344968:  f57ff04e  dsb  st
    c034496c:  e35a0004  cmp  sl, #4
    c0344970:  e50b2030  str  r2, [fp, #-48]  ; 0xffffffd0
    c0344974:  0a000044  beq  c0344a8c <gpio_set_irq_type+0x1b8>
    c0344978:  ebf363dd  bl   c001d8f4 <arm_heavy_mb>
    c034497c:  e5953004  ldr  r3, [r5, #4]

This should reduce the cache footprint of this code. Overall, this results in a reduction of around 20K in the kernel size:

        text    data      bss      dec     hex filename
    10773970  667392 10369656 21811018 14ccf4a ../build/imx6/vmlinux-old
    10754219  667392 10369656 21791267 14c8223 ../build/imx6/vmlinux-new

Another advantage to this approach is that we can finally resolve the issue of SoCs which have their own memory barrier requirements within multiplatform kernels (such as OMAP.) Here, the bus interconnects need additional handling to ensure that writes become visible in the correct order (eg, between dma_map() operations, writes to DMA coherent memory, and MMIO accesses.)

Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Richard Woodruff <r-woodruff2@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
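As an illustration, a minimal C sketch of the out-of-line pattern this change adopts (arm_heavy_mb() and the outer cache sync appear in the disassembly above; the soc_mb hook and the exact macro shape are assumptions):

    /* One shared copy of the slow barrier path, instead of an inlined
     * L2-sync sequence at every mb() call site.
     */
    void (*soc_mb)(void);   /* optional SoC-specific barrier hook (eg, OMAP) */

    void arm_heavy_mb(void)
    {
    #ifdef CONFIG_OUTER_CACHE_SYNC
            if (outer_cache.sync)
                    outer_cache.sync();
    #endif
            if (soc_mb)
                    soc_mb();
    }

    /* Call sites then emit only a dsb plus one branch: */
    #define mb()    do { dsb(st); arm_heavy_mb(); } while (0)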
-
- 25 June 2015 (1 commit)
-
-
Committed by Zhang Zhen
Currently we have many duplicates in definitions of huge_pmd_unshare. In all architectures this function just returns 0 when CONFIG_ARCH_WANT_HUGE_PMD_SHARE is N.

This patch puts the default implementation in mm/hugetlb.c and lets these architectures use the common code.

Signed-off-by: Zhang Zhen <zhenzhang.zhang@huawei.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Chris Metcalf <cmetcalf@ezchip.com>
Cc: David Rientjes <rientjes@google.com>
Cc: James Yang <James.Yang@freescale.com>
Cc: Aneesh Kumar <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
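As a sketch, the common fallback that every architecture used to duplicate reduces to this (placed in mm/hugetlb.c per the text above; the signature follows kernels of that era and may differ):

    #ifndef CONFIG_ARCH_WANT_HUGE_PMD_SHARE
    /* No PMD sharing configured: there is never anything to unshare. */
    int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr, pte_t *ptep)
    {
            return 0;
    }
    #endif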
-
- 11 June 2015 (1 commit)
-
-
Committed by Hauke Mehrtens
These options make it possible to override the data and instruction prefetching behavior of the ARM PL310 cache controller.

Signed-off-by: Hauke Mehrtens <hauke@hauke-m.de>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 06 June 2015 (1 commit)
-
-
Committed by Mike Looijmans
When dma-coherent transfers are enabled, the mmap call must not change the pg_prot flags in the vma struct. Split arm_dma_mmap into common and specific parts, and add an arm_coherent_dma_mmap implementation that does not alter the page protection flags.

Tested on a topic-miami board (Zynq) using the ACP port to transfer data between FPGA and CPU using the Dyplo framework. Without this patch, byte-wise access to mmapped coherent DMA memory was about 20x slower because of the memory being marked as non-cacheable, and transfer speeds would not exceed 240MB/s. After this patch, the mapped memory is cacheable and the transfer speed is again 600MB/s (limited by the FPGA) when the data is in the L2 cache, while data integrity is maintained.

The patch has no effect on non-coherent DMA.

Signed-off-by: Mike Looijmans <mike.looijmans@topic.nl>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
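A hedged sketch of the split (the __arm_dma_mmap helper name is an assumption; arm_coherent_dma_mmap is from the text): only the non-coherent entry point rewrites vm_page_prot, so coherent mappings stay cacheable.

    /* Common worker: map the buffer with whatever protection the vma
     * already carries.
     */
    static int __arm_dma_mmap(struct device *dev, struct vm_area_struct *vma,
                              void *cpu_addr, dma_addr_t dma_addr, size_t size)
    {
            unsigned long pfn = dma_to_pfn(dev, dma_addr);

            return remap_pfn_range(vma, vma->vm_start, pfn + vma->vm_pgoff,
                                   vma->vm_end - vma->vm_start,
                                   vma->vm_page_prot);
    }

    /* Coherent path: leave vm_page_prot untouched. */
    int arm_coherent_dma_mmap(struct device *dev, struct vm_area_struct *vma,
                              void *cpu_addr, dma_addr_t dma_addr, size_t size,
                              struct dma_attrs *attrs)
    {
            return __arm_dma_mmap(dev, vma, cpu_addr, dma_addr, size);
    }

    /* Non-coherent path: mark the user mapping non-cacheable first. */
    int arm_dma_mmap(struct device *dev, struct vm_area_struct *vma,
                     void *cpu_addr, dma_addr_t dma_addr, size_t size,
                     struct dma_attrs *attrs)
    {
            vma->vm_page_prot = __get_dma_pgprot(attrs, vma->vm_page_prot);
            return __arm_dma_mmap(dev, vma, cpu_addr, dma_addr, size);
    }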
-
- 02 June 2015 (11 commits)
-
-
Committed by Ard Biesheuvel
This splits off the reservation of the memory occupied by the FDT binary itself from the processing of the memory reservations it contains. This is necessary because the physical address of the FDT, which is needed to perform the reservation, may not be known to the FDT driver core, i.e., it may be mapped outside the linear direct mapping, in which case __pa() returns a bogus value.

Cc: Russell King <linux@arm.linux.org.uk>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Acked-by: Rob Herring <robh@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
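A sketch of the resulting split (modelled on drivers/of/fdt.c; the details are assumptions): reserving the blob itself becomes a separate step that the architecture invokes only when __pa() is valid for the blob.

    /* Reserve the memory occupied by the FDT binary itself. Callers that
     * map the FDT outside the linear mapping skip this and reserve the
     * blob by its known physical address instead.
     */
    void __init early_init_fdt_reserve_self(void)
    {
            if (!initial_boot_params)
                    return;

            memblock_reserve(__pa(initial_boot_params),
                             fdt_totalsize(initial_boot_params));
    }

Processing of the /memreserve/ entries and reserved-memory nodes contained in the blob stays in the existing scan path.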
-
Committed by Stefan Agner
Vybrid has 112 peripheral interrupts which can be routed to the Cortex-M4's NVIC interrupt controller.

Signed-off-by: Stefan Agner <stefan@agner.ch>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Document that r13 is not a stack in the initialisation function, in case anyone gets other ideas. Document the registers available for the errata workarounds, and specifically which registers contain parts of the MIDR register, as well as which registers must be preserved.

Lastly, use the lowest numbered available register (r0) rather than r10 for temporary storage.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
We already have the main ID register available in r9, there's no need to refetch it. Use the saved value.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Rather than having a long sprawling __v7_setup function, which is hard to maintain properly, move the CPU errata out of line.

While doing this, it was discovered that the Cortex-A15 errata had been incorrectly added:

        ldr     r10, =0x00000c08        @ Cortex-A8 primary part number
        teq     r0, r10
        bne     2f
        /* Cortex-A8 errata */
        b       3f
    2:  ldr     r10, =0x00000c09        @ Cortex-A9 primary part number
        teq     r0, r10
        bne     3f
        /* Cortex-A9 errata */
    3:  ldr     r10, =0x00000c0f        @ Cortex-A15 primary part number
        teq     r0, r10
        bne     4f
        /* Cortex-A15 errata */
    4:

This results in the Cortex-A15 test always being executed after the Cortex-A8 and Cortex-A9 errata, which is obviously not what is intended. The 'b 3f' labels should have been updated to 'b 4f'. The new structure of:

        /* Cortex-A8 Errata */
        ldr     r10, =0x00000c08        @ Cortex-A8 primary part number
        teq     r0, r10
        beq     __ca8_errata

        /* Cortex-A9 Errata */
        ldr     r10, =0x00000c09        @ Cortex-A9 primary part number
        teq     r0, r10
        beq     __ca9_errata

        /* Cortex-A15 Errata */
        ldr     r10, =0x00000c0f        @ Cortex-A15 primary part number
        teq     r0, r10
        beq     __ca15_errata

    __errata_finish:

is much cleaner and easier to see that this kind of thing doesn't happen.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Re-engineer the LPAE TTBR setup code. Rather than passing some shifted address in order to fit in a CPU register, pass either a full physical address (in the case of r4, r5 for TTBR0) or a PFN (for TTBR1).

This removes the ARCH_PGD_SHIFT hack, and the last dangerous user of cpu_set_ttbr() in the secondary CPU startup code path (which was there to re-set TTBR1 to the appropriate high physical address space on Keystone2.)

Tested-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Eliminate the needless nommu version of this function, and get rid of the proc_info_list structure argument - we no longer need this in order to fix up the page table entries.

Acked-by: Santosh Shilimkar <ssantosh@kernel.org>
Tested-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Re-implement the physical address space switching to be architecturally compliant. This involves flushing the caches, disabling the MMU, and only then updating the page tables. Once that is complete, the system can be brought back up again.

Since we disable the MMU, we need to do the update in assembly code. Luckily, the entries which need updating are fairly trivial, and are all set up by the early assembly code. We can merely adjust each entry by the delta required.

Not only does this fix the code to be architecturally compliant, but it fixes a couple of bugs too:

1. The original code would only ever update the first L2 entry covering a fraction of the kernel; the remainder were left untouched.
2. The L2 entries covering the DTB blob were likewise untouched.

This solution fixes up all entries.

Tested-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
The init_meminfo() method is not about initialising meminfo - it's about fixing up the physical to virtual translation so that we use a different physical address space, possibly above the 4GB physical address space. Therefore, the name "init_meminfo()" is confusing.

Rename it to pv_fixup() instead.

Acked-by: Santosh Shilimkar <ssantosh@kernel.org>
Tested-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
There is no point in platform code doing this; let's move it into the generic code so it doesn't get duplicated.

Acked-by: Santosh Shilimkar <ssantosh@kernel.org>
Tested-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Make the init_meminfo function return the offset to be applied to the phys-to-virt translation constants. This allows us to move the update into generic code, along with the requirements for this update.

This avoids platforms having to know the details of the phys-to-virt translation support.

Acked-by: Santosh Shilimkar <ssantosh@kernel.org>
Tested-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
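As a hedged sketch of the new contract (the Keystone constant names are assumptions based on the platform this series targets): the platform hook only computes the delta, and generic code applies it to the translation constants.

    /* Platform side: report how far the phys-to-virt constants must move;
     * returning 0 means no fixup is wanted.
     */
    static long long __init keystone_init_meminfo(void)
    {
            /* Switch to the high physical address alias. */
            return KEYSTONE_HIGH_PHYS_START - KEYSTONE_LOW_PHYS_START;
    }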
-
- 01 June 2015 (1 commit)
-
-
Committed by Russell King
All ARMv5 and older CPUs invalidate their caches in the early assembly setup function, prior to enabling the MMU. This is because the L1 cache should not contain any data relevant to the execution of the kernel at this point; all data should have been flushed out to memory.

This requirement should also be true for ARMv6 and ARMv7 CPUs - indeed, these typically do not search their caches when caching is disabled (as it needs to be when the MMU is disabled) so this change should be safe.

ARMv7 allows there to be CPUs which search their caches while caching is disabled, and it's permitted that the cache is uninitialised at boot; for these, the architecture reference manual requires that an implementation specific code sequence is used immediately after reset to ensure that the cache is placed into a sane state. Such functionality is definitely outside the remit of the Linux kernel, and must be done by the SoC's firmware before _any_ CPU gets to the Linux kernel.

Changing the data cache clean+invalidate to a mere invalidate allows us to get rid of a lot of platform specific hacks around this issue for their secondary CPU bringup paths - some of which were buggy.

Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Tested-by: Florian Fainelli <f.fainelli@gmail.com>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Tested-by: Dinh Nguyen <dinguyen@opensource.altera.com>
Acked-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Tested-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Acked-by: Shawn Guo <shawn.guo@linaro.org>
Tested-by: Thierry Reding <treding@nvidia.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
Tested-by: Michal Simek <michal.simek@xilinx.com>
Tested-by: Wei Xu <xuwei5@hisilicon.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 28 May 2015 (2 commits)
-
-
Committed by Arnd Bergmann
The feroceon copypage implementation cannot be built when targeting an ARMv4 CPU, so we need to pass the -march=armv5te flag manually to gcc when building this file. This is obviously safe since that code will not be executed on ARMv4.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Arnd Bergmann
Atmel at91x40 is gone, so we no longer have any platform using either of these two, and we get randconfig failures on NOMMU kernels if they accidentally get enabled on something that conflicts with ARMv4T.

This stops short of removing the entire CPU support for now, but as nothing selects these, it is basically dead code.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 19 May 2015 (2 commits)
-
-
Committed by David Hildenbrand
Introduce faulthandler_disabled() and use it to check for irq context and disabled pagefaults (via pagefault_disable()) in the pagefault handlers.

Please note that we keep the in_atomic() checks in place - to detect whether in irq context (in which case preemption is always properly disabled).

In contrast, preempt_disable() should never be used to disable pagefaults. With !CONFIG_PREEMPT_COUNT, preempt_disable() doesn't modify the preempt counter, and therefore the result of in_atomic() differs. We validate that condition by using might_fault() checks when calling might_sleep(). Therefore, add a comment to faulthandler_disabled(), describing why this is needed.

faulthandler_disabled() and pagefault_disable() are defined in linux/uaccess.h, so let's properly add that include to all relevant files.

This patch is based on a patch from Thomas Gleixner.

Reviewed-and-tested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: David.Laight@ACULAB.COM
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: airlied@linux.ie
Cc: akpm@linux-foundation.org
Cc: benh@kernel.crashing.org
Cc: bigeasy@linutronix.de
Cc: borntraeger@de.ibm.com
Cc: daniel.vetter@intel.com
Cc: heiko.carstens@de.ibm.com
Cc: herbert@gondor.apana.org.au
Cc: hocko@suse.cz
Cc: hughd@google.com
Cc: mst@redhat.com
Cc: paulus@samba.org
Cc: ralf@linux-mips.org
Cc: schwidefsky@de.ibm.com
Cc: yang.shi@windriver.com
Link: http://lkml.kernel.org/r/1431359540-32227-7-git-send-email-dahi@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
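Based on the description, a minimal sketch of the helper (modelled on <linux/uaccess.h>; the exact comment is an assumption) and its typical use in an architecture fault handler:

    /*
     * Is the pagefault handler disabled? Either explicitly, via
     * pagefault_disable(), or implicitly, because we are in irq/atomic
     * context (where preemption is properly disabled).
     */
    #define faulthandler_disabled() (pagefault_disabled() || in_atomic())

    /* In a fault handler: take the kernel fixup path instead of handling
     * the fault when faults must not be serviced here.
     */
    if (faulthandler_disabled() || !mm)
            goto no_context;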
-
Committed by David Hildenbrand
The existing code relies on pagefault_disable() implicitly disabling preemption, so that no schedule will happen between kmap_atomic() and kunmap_atomic().

Let's make this explicit, to prepare for pagefault_disable() not touching preemption anymore.

Reviewed-and-tested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: David.Laight@ACULAB.COM
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: airlied@linux.ie
Cc: akpm@linux-foundation.org
Cc: benh@kernel.crashing.org
Cc: bigeasy@linutronix.de
Cc: borntraeger@de.ibm.com
Cc: daniel.vetter@intel.com
Cc: heiko.carstens@de.ibm.com
Cc: herbert@gondor.apana.org.au
Cc: hocko@suse.cz
Cc: hughd@google.com
Cc: mst@redhat.com
Cc: paulus@samba.org
Cc: ralf@linux-mips.org
Cc: schwidefsky@de.ibm.com
Cc: yang.shi@windriver.com
Link: http://lkml.kernel.org/r/1431359540-32227-5-git-send-email-dahi@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
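A sketch of the resulting pairing (the highmem helpers here are hypothetical placeholders; the real mapping code differs per architecture):

    void *kmap_atomic(struct page *page)
    {
            preempt_disable();      /* explicit, no longer implied by the next line */
            pagefault_disable();
            if (!PageHighMem(page))
                    return page_address(page);

            return __kmap_atomic_high(page);        /* hypothetical fixmap helper */
    }

    void __kunmap_atomic(void *kvaddr)
    {
            __kunmap_atomic_high(kvaddr);           /* hypothetical unmap helper */
            pagefault_enable();
            preempt_enable();                       /* paired with kmap_atomic() */
    }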
-
- 16 May 2015 (2 commits)
-
-
Committed by Russell King
Avoid passing the auxiliary control register value through the enable method. In the resume path, we have to read the value stored in l2x0_saved_regs.aux_ctrl, only to have it immediately written back by l2c_enable(). We can avoid this if we have __l2c_init() save the value directly to l2x0_saved_regs.aux_ctrl before calling the specific enable method.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Some L2C caches have a bit which allows non-secure software to control the cache lockdown. Some platforms are unable to set this bit. To avoid receiving an abort while trying to unlock the cache lines, check the state of this bit before unlocking. We do this by providing a new method in the l2c_init_data to perform the unlocking.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 15 May 2015 (3 commits)
-
-
Committed by Russell King
l2c_configure() does not follow the pattern of other l2c_* functions. Fix this so that it does to avoid future confusion.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Before calling the controller specific configuration function, write the auxiliary control register first, so that bits shared with other registers (such as the prefetch control register) are not overwritten by the later write to the auxctrl register.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
l2c_enable() is documented that it must not be called if the cache has already been enabled. Unfortunately, commit 6b49241a ("ARM: 8259/1: l2c: Refactor the driver to use commit-like interface") changed this without updating the comment, for very little reason. Revert this change and restore the expected behaviour.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 14 May 2015 (1 commit)
-
-
Committed by Mark Rutland
At boot time we round the memblock limit down to section size in an attempt to ensure that we will have mapped this RAM with section mappings prior to allocating from it. When mapping RAM we iterate over PMD-sized chunks, creating these section mappings.

Section mappings are only created when the end of a chunk is aligned to section size. Unfortunately, with classic page tables (where PMD_SIZE is 2 * SECTION_SIZE) this means that if a chunk is between 1M and 2M in size the first 1M will not be mapped despite having been accounted for in the memblock limit. This has been observed to result in page tables being allocated from unmapped memory, causing boot-time hangs.

This patch modifies the memblock limit rounding to always round down to PMD_SIZE instead of SECTION_SIZE. For classic MMU this means that we will round the memblock limit down to a 2M boundary, matching the limits on section mappings, and preventing allocations from unmapped memory. For LPAE there should be no change as PMD_SIZE == SECTION_SIZE.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Stefan Agner <stefan@agner.ch>
Tested-by: Stefan Agner <stefan@agner.ch>
Acked-by: Laura Abbott <labbott@redhat.com>
Tested-by: Hans de Goede <hdegoede@redhat.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Steve Capper <steve.capper@linaro.org>
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
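The resulting rounding, as a sketch (modelled on arch/arm/mm/mmu.c; the surrounding code is elided):

    /* Round the allocation limit down to a full PMD so that, on classic
     * 2-level tables (PMD_SIZE == 2 * SECTION_SIZE == 2M), we never allow
     * allocations from a final 1M that was counted but never mapped.
     */
    if (memblock_limit)
            memblock_limit = round_down(memblock_limit, PMD_SIZE);
    memblock_set_current_limit(memblock_limit);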
-
- 09 May 2015 (1 commit)
-
-
Committed by Russell King
BSYM() was invented to allow us to work around a problem with the assembler, where local symbols resolved by the assembler for the 'adr' instruction did not take account of their ISA.

Since we don't want BSYM() used elsewhere, replace BSYM() with a new macro 'badr', which is like the 'adr' pseudo-op, but with the BSYM() mechanics integrated into it. This ensures that the BSYM()-ification is only used in conjunction with 'adr'.

Acked-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 08 May 2015 (2 commits)
-
-
Committed by Tony Lindgren
Looks like apps can be made to segfault easily on armhf distros just by running cpuburn-a8 in the background, then starting apt-get update, unless the erratum 430973 workaround is enabled. This happens on r3p2 also, which has 430973 fixed in hardware.

Turns out the reason for this is some bootloaders incorrectly setting the auxiliary control register IBE bit, which probably causes us to hit erratum 687067 on Cortex-A8 later than r1p2.

If the bootloader incorrectly sets the IBE bit in the auxiliary control register for Cortex-A8 revisions with 430973 fixed in hardware, we need to flush BTAC/BTB to avoid segfaults probably caused by erratum 687067. So let's flush BTAC/BTB unconditionally for Cortex-A8. It won't do anything unless the IBE bit is set.

Note that we keep the erratum 430973 Kconfig option still around and disabled for multiarch as it may be unsafe to enable for some secure SoC. It is known safe to be enabled for n900, but won't do anything on n900 as the IBE bit needs to be set with SMC.

Also note that SoCs probably should add checks and print warnings for the misconfigured IBE bit depending on the Cortex-A8 revision, so that bootloaders can be fixed not to set the IBE bit on Cortex-A8 revisions later than r1p2.

Tested-by: Sebastian Reichel <sre@kernel.org>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Maxime Coquelin stm32
From the Cortex-M reference manuals, the NVIC supports up to 240 interrupts, so the number of entries in the vectors table is up to 256.

This patch adds a new config flag to specify the number of external interrupts. Some ifdefery is added in order to respect the natural alignment without wasting too much space on smaller systems.

Acked-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Acked-by: Stefan Agner <stefan@agner.ch>
Tested-by: Chanwoo Choi <cw00.choi@samsung.com>
Signed-off-by: Maxime Coquelin <mcoquelin.stm32@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 04 May 2015 (4 commits)
-
-
Committed by Andrew Lunn
bf35706f ("ARM: 8314/1: replace PROCINFO embedded branch with relative offset") broke booting for Kirkwood. The kernel would say:

    Starting kernel ...

    Uncompressing Linux... done, booting the kernel.

    Error: unrecognized/unsupported processor variant (0x56251311).

Fix it by removing the extraneous .long __feroceon_setup from the feroceon_proc_info macro.

Fixes: bf35706f ("ARM: 8314/1: replace PROCINFO embedded branch with relative offset")
Reported-by: Florian Fainelli <f.fainelli@gmail.com>
Suggested-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Tested-by: Florian Fainelli <f.fainelli@gmail.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Aaro Koskinen <aaro.koskinen@iki.fi>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Valentin Rothberg
The block could never be compiled; CPU_ICACHE_STREAMING_DISABLE has not been defined in Kconfig since the very first Git commit. Hence, we can safely remove the entire block.

Signed-off-by: Valentin Rothberg <valentinrothberg@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Valentin Rothberg
CPU_ARM1020_CPU_IDLE is not defined in Kconfig. The last reference on LKML dates back to 2001, so we can safely remove the comments to make static analysis tools happy.

Signed-off-by: Valentin Rothberg <valentinrothberg@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Marek Szyprowski
Patch 22b3c181 ("arm: dma-mapping: limit IOMMU mapping size") added a check for the IO address space size. However, this patch broke IOMMU initialization for typical platforms initialized from device tree, which get the default IO address space size of 4GiB. This value doesn't fit into size_t and fails the check introduced by that commit, resulting in failed dma-mapping/iommu initialization. This patch fixes the issue by adding proper support for the full 4GiB address space size.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 18 April 2015 (1 commit)
-
-
Committed by Nathan Lynch
When targeting ARMv3 (e.g. rpc) and enabling CONFIG_VDSO we get:

    arch/arm/vdso/datapage.S:13: Error: selected processor does not support ARM mode `bx lr'

One fix considered was to use 'ldr pc,lr' for such configurations, but since the VDSO is unlikely to be useful for pre-v7 hardware, just make it depend on CONFIG_CPU_V7.

Reported-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 15 April 2015 (6 commits)
-
-
Committed by Vladimir Murzin
Add support for the memtest command line option.

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
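For reference, a hedged sketch of the hook this adds (the exact call site in ARM's bootmem_init() is an assumption): the generic early_memtest() scans the free lowmem range with the number of patterns requested via memtest=N on the kernel command line, e.g. booting with memtest=4.

    /* After the memory limits are known, let the generic memtest run
     * over free lowmem before the page allocator takes over.
     */
    early_memtest((phys_addr_t)min << PAGE_SHIFT,
                  (phys_addr_t)max_low << PAGE_SHIFT);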
-
Committed by Kees Cook
When an architecture fully supports randomizing the ELF load location, a per-arch mmap_rnd() function is used to find a randomized mmap base. In preparation for randomizing the location of ET_DYN binaries separately from mmap, this renames and exports these functions as arch_mmap_rnd(). Additionally introduces CONFIG_ARCH_HAS_ELF_RANDOMIZE for describing this feature on architectures that support it (which is a superset of ARCH_BINFMT_ELF_RANDOMIZE_PIE, since s390 already supports a separated ET_DYN ASLR from mmap ASLR without the ARCH_BINFMT_ELF_RANDOMIZE_PIE logic).

Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Hector Marco-Gisbert <hecmargi@upv.es>
Cc: Russell King <linux@arm.linux.org.uk>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: "David A. Long" <dave.long@linaro.org>
Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
Cc: Arun Chandran <achandran@mvista.com>
Cc: Yann Droneaud <ydroneaud@opteya.com>
Cc: Min-Hua Chen <orca.chen@gmail.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Alex Smith <alex@alex-smith.me.uk>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: Vineeth Vijayan <vvijayan@mvista.com>
Cc: Jeff Bailey <jeffbailey@google.com>
Cc: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Cc: Ben Hutchings <ben@decadent.org.uk>
Cc: Behan Webster <behanw@converseincode.com>
Cc: Ismael Ripoll <iripoll@upv.es>
Cc: Jan-Simon Möller <dl9pf@gmx.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
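ARM's version, as described, reduces to a small sketch (hedged; 8 bits was the ARM mmap randomness of that era):

    unsigned long arch_mmap_rnd(void)
    {
            unsigned long rnd;

            /* 8 bits of randomness, applied at page granularity */
            rnd = (unsigned long)get_random_int() % (1 << 8);

            return rnd << PAGE_SHIFT;
    }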
-
Committed by Kees Cook
To address the "offset2lib" ASLR weakness[1], this separates ET_DYN ASLR from mmap ASLR, as already done on s390. The architectures that are already randomizing mmap (arm, arm64, mips, powerpc, s390, and x86), have their various forms of arch_mmap_rnd() made available via the new CONFIG_ARCH_HAS_ELF_RANDOMIZE. For these architectures, arch_randomize_brk() is collapsed as well.

This is an alternative to the solutions in: https://lkml.org/lkml/2015/2/23/442

I've been able to test x86 and arm, and the buildbot (so far) seems happy with building the rest.

[1] http://cybersecurity.upv.es/attacks/offset2lib/offset2lib.html

This patch (of 10):

In preparation for splitting out ET_DYN ASLR, this moves the ASLR calculations for mmap on ARM into a separate routine, similar to x86. This also removes the redundant check of personality (PF_RANDOMIZE is already set before calling arch_pick_mmap_layout).

Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Hector Marco-Gisbert <hecmargi@upv.es>
Cc: Russell King <linux@arm.linux.org.uk>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: "David A. Long" <dave.long@linaro.org>
Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
Cc: Arun Chandran <achandran@mvista.com>
Cc: Yann Droneaud <ydroneaud@opteya.com>
Cc: Min-Hua Chen <orca.chen@gmail.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Alex Smith <alex@alex-smith.me.uk>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: Vineeth Vijayan <vvijayan@mvista.com>
Cc: Jeff Bailey <jeffbailey@google.com>
Cc: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Cc: Ben Hutchings <ben@decadent.org.uk>
Cc: Behan Webster <behanw@converseincode.com>
Cc: Ismael Ripoll <iripoll@upv.es>
Cc: Jan-Simon Möller <dl9pf@gmx.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
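A sketch of the ARM caller after this split (simplified from arch/arm/mm/mmap.c as an assumption): the personality check disappears because PF_RANDOMIZE is tested here once.

    void arch_pick_mmap_layout(struct mm_struct *mm)
    {
            unsigned long random_factor = 0UL;

            if (current->flags & PF_RANDOMIZE)
                    random_factor = arch_mmap_rnd();

            if (mmap_is_legacy()) {
                    mm->mmap_base = TASK_UNMAPPED_BASE + random_factor;
                    mm->get_unmapped_area = arch_get_unmapped_area;
            } else {
                    mm->mmap_base = mmap_base(random_factor);
                    mm->get_unmapped_area = arch_get_unmapped_area_topdown;
            }
    }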
-
Committed by Russell King
Switch ARM to use the generic show_mem() implementation, which displays the statistics from the mm zone rather than walking the page arrays.

Acked-by: Mel Gorman <mgorman@suse.de>
Tested-by: Gregory Fong <gregory.0xf0@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Avoid the errata 430973 workaround for non-Cortex A8 CPUs. Having this workaround enabled introduces an additional branch target buffer flush into the context switching path, something we wish to avoid. To allow this errata to be enabled in multiplatform kernels while reducing its impact, rearrange the Cortex-A8 CPU support to avoid impacting on other Version 7 CPUs.

Tested-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Eliminate one unnecessary instruction from this test by pre-shifting the Cortex A9 ID - we can shift the actual ID in the teq instruction thereby losing the pX bit of the ID at no cost.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-