- 10 April 2015, 1 commit
-
-
Committed by Russell King

Both the ARM946 and ARM940 setup functions were corrupting r1 and r2, which is not permissible - these are used to carry the machine ID and boot data into the kernel, and must be preserved. The code responsible for this was the same in both files: they were using the registers to generate a protection region register value. Fix this by turning this process into a macro, and using that macro in both these files with an alternative register allocation. r0, r3 and r7 can be used for temporary values here.

Reported-by: Alex Dumitrache <broscutamaker@gmail.com>
Tested-by: Georg Hofstetter <g3gg0.de@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 02 April 2015, 2 commits
-
-
Committed by Florian Fainelli

Enabling CPU_DCACHE_DISABLE on an SMP-capable system will prevent the kernel from booting because of the following ldrex instruction in arch_spin_lock:

    (gdb) x/10i $pc
    => 0xc053cfa8 <_raw_spin_lock+4>:   ldrex   r3, [r0]
       0xc053cfac <_raw_spin_lock+8>:   add     r2, r3, #65536  ; 0x10000

which is taken by the very first printk call:

    at /home/fainelli/work/linux/arch/arm/include/asm/spinlock.h:65
    (fmt=0xc0637650 "\0016Booting Linux on physical CPU 0x%x\n", args=<incomplete type>)
    at kernel/printk/printk.c:1525
    (fmt=0xc05370f4 <printk+52> " 24320215342 04340235344 20320215342 36377/341 17")
    at kernel/printk/printk.c:1688

ldrex requires exclusive monitors (local or global), which no longer work once the data cache is disabled in CP15, so the CPU just hangs there.

Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
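For reference, the exclusive-monitor pattern behind arch_spin_lock looks roughly like this (a minimal ldrex/strex sketch, not the kernel's actual ticket-lock implementation):

    static inline void spin_lock_sketch(volatile unsigned int *lock)
    {
        unsigned int tmp;

        __asm__ __volatile__(
        "1:     ldrex   %0, [%1]\n"      /* exclusive load: arms the monitor */
        "       teq     %0, #0\n"
        "       strexeq %0, %2, [%1]\n"  /* store succeeds only if monitor is intact */
        "       teqeq   %0, #0\n"
        "       bne     1b"              /* retry on contention */
        : "=&r" (tmp)
        : "r" (lock), "r" (1)
        : "cc", "memory");
    }

With the data cache (and with it the exclusive monitor) disabled, the strexeq never reports success, so a loop like the one above never makes progress, matching the hang described.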
-
Committed by Tomasz Figa

The IOMMU should be able to use single pages as well as bigger blocks, so if higher-order allocations fail, we should not affect the state of the system with events such as the OOM killer, but rather fall back to order-0 allocations. This patch changes the behavior of the ARM IOMMU DMA allocator to use __GFP_NORETRY, which bypasses OOM invocation, for orders higher than zero and, only if that fails, fall back to a normal order-0 allocation which might invoke the OOM killer.

Signed-off-by: Tomasz Figa <tfiga@chromium.org>
Reviewed-by: Doug Anderson <dianders@chromium.org>
Acked-by: David Rientjes <rientjes@google.com>
Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
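A minimal sketch of the allocation strategy described above (illustrative, not the actual __iommu_alloc_buffer() code):

    static struct page *alloc_order_or_fallback(unsigned int order)
    {
        if (order > 0) {
            /* opportunistic: don't let a big chunk trigger the OOM killer */
            struct page *page = alloc_pages(GFP_KERNEL | __GFP_NORETRY, order);

            if (page)
                return page;
            order = 0;  /* fall back to single pages */
        }

        /* order-0 allocation, which may invoke the OOM killer */
        return alloc_pages(GFP_KERNEL, order);
    }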
-
- 18 March 2015, 2 commits
-
-
Committed by Laura Abbott

The set_memory_* functions currently only support module addresses. The addresses are validated using is_module_addr. That function is special, though, and relies on internal state in the module subsystem to work properly. At the time of module initialization and calling set_memory_*, it's too early for is_module_addr to work properly, so it always returns false. Rather than be subject to the whims of the module state, just bounds check against the module virtual address range.

Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
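The implied check is a plain range comparison (a sketch; the helper name is illustrative, while MODULES_VADDR and MODULES_END are ARM's real module area bounds):

    static bool addr_in_module_range(unsigned long addr, unsigned long size)
    {
        /* independent of module subsystem state, unlike is_module_addr() */
        return addr >= MODULES_VADDR && addr + size < MODULES_END;
    }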
-
Committed by Fabrice Gasnier

Allow the prefetch settings to be overridden by device tree: when l2x0_cache_size_of_parse() fails, the prefetch tuning properties (e.g. arm,double-linefill* and arm,prefetch*) are silently ignored. This happens, for example, when the "cache-size" or "cache-sets" properties haven't been filled in the l2c DT node.

Comments from Fabrice Gasnier:

    Allow device tree to override the L2C prefetch settings, even when
    l2x0_cache_size_of_parse() fails to parse the cache geometry due to (eg)
    missing "cache-size" or "cache-sets" properties.

Signed-off-by: Fabrice Gasnier <fabrice.gasnier@st.com>
Reviewed-by: Tomasz Figa <tomasz.figa@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 11 March 2015, 2 commits
-
-
Committed by Russell King

It can be useful to dump the page table entries when an unhandled data abort fault occurs. This can aid debugging of these situations, for example, a STREX instruction causing an external abort on non-linefetch fault, as has been reported recently.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King

When validating the mask against the amount of memory we have available (so that we can trap 32-bit DMA addresses with >32-bit memory), we had not taken account of the fact that max_pfn is one greater than the highest PFN present in the system. Several references in the code bear this out:

    mm/page_owner.c:
        for (; pfn < max_pfn; pfn++) { }

    arch/x86/kernel/setup.c:
        high_memory = (void *)__va(max_pfn * PAGE_SIZE - 1);

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
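In other words, the highest valid physical address is one byte below max_pfn << PAGE_SHIFT, so a mask validation along these lines (an illustrative sketch, not the exact dma-mapping code) must subtract one:

    extern unsigned long max_pfn;   /* one past the last PFN in the system */

    static bool mask_covers_all_ram(u64 mask)
    {
        u64 highest_ram_addr = ((u64)max_pfn << PAGE_SHIFT) - 1;

        return mask >= highest_ram_addr;
    }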
-
- 20 February 2015, 1 commit
-
-
Committed by Alexandre Courbot

There doesn't seem to be any valid reason to allocate the pages array with the same flags as the buffer itself. Doing so can eventually lead to the following safeguard in mm/slab.c's cache_grow() to be hit:

    if (unlikely(flags & GFP_SLAB_BUG_MASK)) {
        pr_emerg("gfp: %u\n", flags & GFP_SLAB_BUG_MASK);
        BUG();
    }

This happens when buffers are allocated with __GFP_DMA32 or __GFP_HIGHMEM. Fix this by allocating the pages array with GFP_KERNEL to follow what is done elsewhere in this file. Using GFP_KERNEL in __iommu_alloc_buffer() is safe because atomic allocations are handled by __iommu_alloc_atomic().

Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
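A sketch of the resulting allocation (illustrative; the real change is in __iommu_alloc_buffer()):

    static struct page **iommu_alloc_pages_array(size_t count)
    {
        /* always GFP_KERNEL: passing __GFP_DMA32/__GFP_HIGHMEM here
         * would trip GFP_SLAB_BUG_MASK in the slab allocator, as above */
        return kcalloc(count, sizeof(struct page *), GFP_KERNEL);
    }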
-
- 18 February 2015, 1 commit
-
-
Committed by Paul Bolle

Commit 20e783e3 ("ARM: 8296/1: cache-l2x0: clean up aurora cache handling") removed the only user of the Kconfig symbol CACHE_PL310. Setting CACHE_PL310 is now pointless. Remove its Kconfig entry, and one select of this symbol.

Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
- 12 February 2015, 2 commits
-
-
Committed by Kirill A. Shutemov

The problem is that we check nr_ptes/nr_pmds in exit_mmap(), which happens *before* pgd_free(). And if an arch does pte/pmd allocation in pgd_alloc() and frees them in pgd_free(), we see an offset in the counters by the time of the checks. We tried to work around this by offsetting the expected counter value according to FIRST_USER_ADDRESS for both nr_pte and nr_pmd in exit_mmap(). But it doesn't work in some cases:

1. ARM with LPAE enabled also has a non-zero USER_PGTABLES_CEILING, but the upper addresses are occupied with huge pmd entries, so the trick with offsetting the expected counter value gets really ugly: we would have to apply it to nr_pmds, but not to nr_ptes.

2. Metag has a non-zero FIRST_USER_ADDRESS, but doesn't do pte/pmd page table allocation in pgd_alloc(); it just sets up a pgd entry which is allocated at boot and shared across all processes.

The proposal is to move the check to check_mm(), which happens *after* pgd_free(), and do proper accounting during pgd_alloc() and pgd_free(), which would bring the counters to zero if nothing leaked.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reported-by: Tyler Baker <tyler.baker@linaro.org>
Tested-by: Tyler Baker <tyler.baker@linaro.org>
Tested-by: Nishanth Menon <nm@ti.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Naoya Horiguchi

Currently we have many duplicates in the definitions around follow_huge_addr(), follow_huge_pmd(), and follow_huge_pud(), so this patch tries to remove them. The basic idea is to put the default implementation for these functions in mm/hugetlb.c as weak symbols (regardless of CONFIG_ARCH_WANT_GENERAL_HUGETLB), and to implement arch-specific code only when the arch needs it.

For follow_huge_addr(), only powerpc and ia64 have their own implementation; in all other architectures this function just returns ERR_PTR(-EINVAL). So this patch sets returning ERR_PTR(-EINVAL) as the default.

As for follow_huge_(pmd|pud)(), if (pmd|pud)_huge() is implemented to always return 0 in your architecture (like in ia64 or sparc), it's never called (the callsite is optimized away) no matter how it is implemented. So in such architectures, we don't need an arch-specific implementation.

In some architectures (like mips, s390 and tile), their current arch-specific follow_huge_(pmd|pud)() are effectively identical with the common code, so this patch lets these architectures use the common code.

One exception is metag, where pmd_huge() could return non-zero but it expects follow_huge_pmd() to always return NULL. This means that we need an arch-specific implementation which returns NULL. This behavior looks strange to me (because non-zero pmd_huge() implies that the architecture supports PMD-based hugepages, so follow_huge_pmd() can/should return some relevant value), but that's beyond this cleanup patch, so let's keep it.

Justification of non-trivial changes:

- In s390, follow_huge_pmd() checks !MACHINE_HAS_HPAGE at first, and this patch removes the check. This is OK because we can assume MACHINE_HAS_HPAGE is true when follow_huge_pmd() can be called (note that pmd_huge() has the same check and always returns 0 for !MACHINE_HAS_HPAGE).

- In s390 and mips, we use HPAGE_MASK instead of PMD_MASK as done in the common code. This patch forces these archs to use PMD_MASK, but it's OK because they are identical in both archs. In s390, both HPAGE_SHIFT and PMD_SHIFT are 20. In mips, HPAGE_SHIFT is defined as (PAGE_SHIFT + PAGE_SHIFT - 3) and PMD_SHIFT is defined as (PAGE_SHIFT + PAGE_SHIFT + PTE_ORDER - 3), but PTE_ORDER is always 0, so these are identical.

Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Acked-by: Hugh Dickins <hughd@google.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Luiz Capitulino <lcapitulino@redhat.com>
Cc: Nishanth Aravamudan <nacc@linux.vnet.ibm.com>
Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
Cc: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
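The weak-symbol default follows the usual pattern, e.g. for follow_huge_addr() (a sketch; exact signatures vary by kernel version):

    /* default in mm/hugetlb.c: any architecture may override this definition */
    struct page * __weak
    follow_huge_addr(struct mm_struct *mm, unsigned long address, int write)
    {
        return ERR_PTR(-EINVAL);
    }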
-
- 11 February 2015, 1 commit
-
-
Committed by Kirill A. Shutemov

We've replaced the remap_file_pages(2) implementation with an emulation. Nobody creates non-linear mappings anymore. This patch also adjusts __SWP_TYPE_SHIFT, effectively increasing the maximum size of a swap file to 128G.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
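The 128G figure follows from the swap-entry layout (assuming ARM's 5 swap-type bits, the 2-bit low shift after this change, and 4 KiB pages):

    #include <stdio.h>

    int main(void)
    {
        unsigned offset_bits = 32 - 5 - 2;  /* PTE bits minus type and shift */
        unsigned long long max_swap = (1ULL << offset_bits) << 12;  /* offsets x 4 KiB */

        printf("%llu GiB\n", max_swap >> 30);  /* prints: 128 */
        return 0;
    }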
-
- 07 February 2015, 2 commits
-
-
Committed by Arnd Bergmann

The aurora_inv_range(), aurora_clean_range() and aurora_flush_range() functions are highly redundant, both in source and in object code, and they are harder to understand than necessary. By moving the range loop into the aurora_pa_range() function, they become trivial wrappers, and the object code starts looking like what one would expect for an optimal implementation. Further optimization may be possible by using the per-CPU "virtual" registers to avoid the spinlocks in most cases.

(Boot tested on Armada 370 RD and Armada XP GP, plus a little bit of DMA traffic by reading data from an SD card.)

Reviewed-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Tested-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
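The shape of the refactoring (a sketch: the register offsets come from cache-aurora-l2.h, while the chunking and locking details here are illustrative):

    static void aurora_pa_range_sketch(unsigned long start, unsigned long end,
                                       unsigned long offset)
    {
        while (start < end) {
            unsigned long range_end = min(end, start + MAX_RANGE_SIZE);
            unsigned long flags;

            raw_spin_lock_irqsave(&l2x0_lock, flags);
            writel_relaxed(start, l2x0_base + AURORA_RANGE_BASE_ADDR_REG);
            writel_relaxed(range_end, l2x0_base + offset);
            raw_spin_unlock_irqrestore(&l2x0_lock, flags);

            start = range_end;
        }
    }

    /* the three range operations then become trivial wrappers */
    static void aurora_clean_range(unsigned long start, unsigned long end)
    {
        aurora_pa_range_sketch(start, end, AURORA_CLEAN_RANGE_REG);
    }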
-
Committed by Arnd Bergmann

The aurora cache controller is the only remaining user of a couple of functions in this file, which are completely unused when it is disabled, leading to build warnings:

    arch/arm/mm/cache-l2x0.c:167:13: warning: 'l2x0_cache_sync' defined but not used [-Wunused-function]
    arch/arm/mm/cache-l2x0.c:184:13: warning: 'l2x0_flush_all' defined but not used [-Wunused-function]
    arch/arm/mm/cache-l2x0.c:194:13: warning: 'l2x0_disable' defined but not used [-Wunused-function]

With the knowledge that the code is now aurora-specific, we can simplify it noticeably:

- The pl310 errata workarounds are not needed on aurora and can be removed.
- As confirmed by Thomas Petazzoni from the data sheet, the cache_wait() macro is never needed.
- No need to hold the lock across the atomic cache sync.
- We can load l2x0_base into a local variable across operations.

There should be no functional change in this patch, but readability and the generated object code improve, along with avoiding the warnings.

(Boot tested on Armada 370 RD and Armada XP GP, plus a little bit of DMA traffic by reading data from an SD card.)

Acked-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Tested-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 03 February 2015, 1 commit
-
-
Committed by Will Deacon

Commit e1a5848e ("ARM: 7924/1: mm: don't bother with reserved ttbr0 when running with LPAE") removed the use of the reserved TTBR0 value for LPAE systems, since the ASID is held in the TTBR and can be updated atomically with the pgd of the next mm.

Unfortunately, this patch forgot to update flush_context, which deliberately avoids marking the local active ASID as allocated, since we used to switch via ASID zero and didn't need to allocate the ASID of the previous mm. The side effect of this is that we can allocate the same ASID to the next mm and, between flushing the local TLB and updating TTBR0, we can perform speculative TLB fills for userspace nG mappings using the page table of the previous mm. The consequence of this is that the next mm can erroneously hit some mappings of the previous mm.

Note that this was made significantly harder to hit by a391263c ("ARM: 8203/1: mm: try to re-use old ASID assignments following a rollover") but is still theoretically possible.

This patch fixes the problem by removing the code from flush_context that forces the allocated ASID to zero for the local CPU. Many thanks to the Broadcom guys for tracking this one down.

Fixes: e1a5848e ("ARM: 7924/1: mm: don't bother with reserved ttbr0 when running with LPAE")
Cc: <stable@vger.kernel.org> # v3.14+
Reported-by: Raymond Ngun <rngun@broadcom.com>
Tested-by: Raymond Ngun <rngun@broadcom.com>
Reviewed-by: Gregory Fong <gregory.0xf0@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 30 January 2015, 1 commit
-
-
Committed by Laurent Pinchart

Commit 4bb25789 ("arm: dma-mapping: plumb our iommu mapping ops into arch_setup_dma_ops") moved the setting of the DMA operations from arm_iommu_attach_device() to arch_setup_dma_ops(), where the DMA operations to be used are selected based on whether the device is connected to an IOMMU. However, the IOMMU detection scheme requires the IOMMU driver to be ported to the new IOMMU of_xlate API. As no driver has been ported yet, this effectively breaks all IOMMU ARM users that depend on the IOMMU being handled transparently by the DMA mapping API.

Fix this by restoring the setting of DMA IOMMU ops in arm_iommu_attach_device() and splitting the rest of the function into a new internal __arm_iommu_attach_device() function, called by arch_setup_dma_ops().

Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Signed-off-by: Olof Johansson <olof@lixom.net>
-
- 29 January 2015, 2 commits
-
-
Committed by Arnd Bergmann

The recently added ARM_KERNMEM_PERMS feature works by manipulating the kernel page tables, which obviously requires an MMU. Trying to enable this feature when the MMU is disabled results in a lot of compile errors in mm/init.c, so let's add a Kconfig dependency to avoid that case.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Will Deacon

When tearing down the DMA ops for a device via of_dma_deconfigure, we unconditionally detach the device from its IOMMU domain. For devices that aren't actually behind an IOMMU, this produces a "Not attached" warning message on the console. This patch changes the teardown code so that we don't detach from the IOMMU domain when there isn't an IOMMU dma mapping to start with.

Reported-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
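A sketch of the guard (to_dma_iommu_mapping() and arm_iommu_detach_device() are real ARM interfaces; the wrapper shown is illustrative):

    static void teardown_dma_ops_sketch(struct device *dev)
    {
        /* no IOMMU dma mapping was ever set up, so nothing to detach */
        if (!to_dma_iommu_mapping(dev))
            return;

        arm_iommu_detach_device(dev);
    }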
-
- 21 January 2015, 1 commit
-
-
Committed by George G. Davis

DMA contiguous allocations have been able to use highmem since commit 95b0e655 ("ARM: mm: don't limit default CMA region only to low memory"), but a comment still notes the earlier "low memory" limitation. Update the comment to remove the low memory limitation and fix the s/contigouos/contiguous/ typo while we're at it.

Signed-off-by: George G. Davis <george_davis@mentor.com>
Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 20 January 2015, 2 commits
-
-
Committed by Pavel Machek

It is not clear from the filename, and the comment at the beginning adds to the confusion by not listing the L310. Fix it.

Signed-off-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
Committed by Geert Uytterhoeven

Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
- 16 January 2015, 4 commits
-
-
Committed by Tomasz Figa

Firmware on certain boards (e.g. ODROID-U3) can leave incorrect L2C prefetch settings configured in registers, leading to crashes if the L2C is enabled without overriding them. This patch introduces bindings enabling the prefetch settings to be specified from DT, and the necessary support in the driver.

[mszyprow: rebased onto v3.18-rc1, added an error message when a prefetch-related DT property has been provided without any value]

Signed-off-by: Tomasz Figa <t.figa@samsung.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Nishanth Menon <nm@ti.com>
Acked-by: Nishanth Menon <nm@ti.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
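Parsing one of the new properties might look like this (a sketch following the described binding; the register bit is the L2C-310 data-prefetch enable):

    static void parse_prefetch_data(const struct device_node *np, u32 *prefetch)
    {
        u32 val;
        int ret = of_property_read_u32(np, "arm,prefetch-data", &val);

        if (ret == 0) {
            if (val)
                *prefetch |= L310_PREFETCH_CTRL_DATA_PREFETCH;
            else
                *prefetch &= ~L310_PREFETCH_CTRL_DATA_PREFETCH;
        } else if (ret != -EINVAL) {
            /* property present but given without any value */
            pr_err("L2C-310 OF arm,prefetch-data property value is missing\n");
        }
    }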
-
Committed by Tomasz Figa

Because certain secure hypervisors do not allow writes to individual L2C registers, but rather expect a set of parameters to be passed as arguments to secure monitor calls, there is a need to provide an interface for the L2C driver to ask the firmware to configure the hardware according to specified parameters. This patch adds such an interface.

Signed-off-by: Tomasz Figa <t.figa@samsung.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Nishanth Menon <nm@ti.com>
Acked-by: Nishanth Menon <nm@ti.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Tomasz Figa

Certain implementations of secure hypervisors (namely the one found on Samsung Exynos-based boards) do not provide access to individual L2C registers. This makes the .write_sec()-based interface insufficient and provokes ugly hacks. This patch is the first step towards making the driver not rely on the availability of writes to individual registers. This is achieved by refactoring the driver to use a commit-like operation scheme: all register values are prepared first and stored in an instance of the l2x0_regs struct, and then a single callback is responsible for flushing those values to the hardware.

[mszyprow: rebased onto the 'ARM: l2c: use l2c_write_sec() for restoring latency and filter regs' patch]

Signed-off-by: Tomasz Figa <t.figa@samsung.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Nishanth Menon <nm@ti.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
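Conceptually the commit-like scheme splits configuration into two steps (l2x0_regs is the real struct named above; the MMIO writer shown is illustrative, and a secure-firmware backend would replace it with a single monitor call):

    static struct l2x0_regs l2x0_saved_regs;    /* step 1: prepare all values */

    static void l2c310_configure_sketch(void __iomem *base) /* step 2: flush them */
    {
        writel_relaxed(l2x0_saved_regs.aux_ctrl, base + L2X0_AUX_CTRL);
        writel_relaxed(l2x0_saved_regs.prefetch_ctrl, base + L310_PREFETCH_CTRL);
        /* ... latency, filter and power registers follow the same pattern ... */
    }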
-
Committed by Marek Szyprowski

All four registers for the latency and filter settings cannot be written in non-secure mode and should go through l2c_write_sec(). More on this can be found in the CoreLink Level 2 Cache Controller L2C-310 Technical Reference Manual, 3.2 "Register summary", table 3.1. This has been checked against the TRM for r3p3, but it should be uniform across all revisions.

Reported-by: Nishanth Menon <nm@ti.com>
Suggested-by: Tomasz Figa <tomasz.figa@gmail.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Nishanth Menon <nm@ti.com>
Acked-by: Nishanth Menon <nm@ti.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 10 January 2015, 1 commit
-
-
Committed by Victor Kamensky

In the v3.19-rc3 tree, when CONFIG_ARM_LPAE and CONFIG_DEBUG_RODATA are enabled the image fails to compile with the following error:

    arch/arm/mm/init.c:661:14: error: 'PMD_SECT_RDONLY' undeclared here (not in a function)

It seems that 80d6b0c2 ("ARM: mm: allow text and rodata sections to be read-only") and ded94779 ("ARM: 8109/1: mm: Modify pte_write and pmd_write logic for LPAE") crossed. 80d6b0c2 uses the PMD_SECT_RDONLY macro, but ded94779 renames it and uses the software bit L_PMD_SECT_RDONLY instead. The fix is to use L_PMD_SECT_RDONLY instead of PMD_SECT_RDONLY, as ded94779 does in other places.

Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 08 January 2015, 2 commits
-
-
Committed by Grygorii Strashko

Currently the local variables kernel_x_start and kernel_x_end are defined using the 'unsigned long' type, which is wrong because they represent a physical memory range and will be calculated incorrectly if LPAE is enabled. As a result, all following code in map_lowmem() will not work correctly. For example, Keystone 2 boot is broken because

    kernel_x_start == 0x0000 0000
    kernel_x_end   == 0x0080 0000

instead of

    kernel_x_start == 0x0000 0008 0000 0000
    kernel_x_end   == 0x0000 0008 0080 0000

and as a result the whole of low memory will be mapped with MT_MEMORY_RW permissions by this code (start > kernel_x_end):

    } else if (start >= kernel_x_end) {
        map.pfn = __phys_to_pfn(start);
        map.virtual = __phys_to_virt(start);
        map.length = end - start;
        map.type = MT_MEMORY_RW;
        create_mapping(&map);
    }

Hence, fix it by using the phys_addr_t type for the variables kernel_x_start and kernel_x_end.

Tested-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Grygorii Strashko <grygorii.strashko@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
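The fix boils down to the declared type; the corrected declarations mirror what map_lowmem() computes:

    /* phys_addr_t is 64 bits wide with LPAE, so Keystone 2's
     * 0x8_0000_0000 range is no longer truncated */
    phys_addr_t kernel_x_start = round_down(__pa(_stext), SECTION_SIZE);
    phys_addr_t kernel_x_end = round_up(__pa(__init_end), SECTION_SIZE);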
-
Committed by Mark Rutland

Currently the ARM page table dumping code starts dumping page tables from USER_PGTABLES_CEILING. This is unnecessary for skipping any entries related to userspace, as swapper_pg_dir does not contain such entries, and it results in a couple of unfortunate side effects.

Firstly, any kernel mappings which might exist below USER_PGTABLES_CEILING will not be accounted for in the dump output. This masks any entries erroneously created below this address.

Secondly, if the final page table entry walked is part of a valid mapping, the page table dumping code will not log the region this entry is part of, as the final note_page call in walk_pgd will trigger an early return when 0 < USER_PGTABLES_CEILING. Luckily this isn't seen on contemporary systems as they typically don't have enough RAM to extend the linear mapping right to the end of the address space.

Due to the way addr is constructed in the walk_* functions, it can never be less than USER_PGTABLES_CEILING when walking the page tables, so it is not necessary to avoid dereferencing invalid table addresses. The existing checks for st->current_prot and st->marker[1].start_address are sufficient to ensure we will not print and/or dereference garbage when trying to log information.

This patch removes both problematic uses of USER_PGTABLES_CEILING from the ARM page table dumping code, preventing both of these issues. We will now report any low mappings, and the final note_page call will not return early, ensuring all regions are logged.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Steve Capper <steve.capper@linaro.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 04 December 2014, 3 commits
-
-
Committed by Jungseung Lee

The set_memory_* functions have the same implementation except for the memory attribute they change. This patch makes them use a common function and pulls them out into arch/arm/mm/pageattr.c, as arm64 did. This reduces code size and enhances readability.

Signed-off-by: Jungseung Lee <js07.lee@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
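A condensed sketch of the factored-out helper (the pattern as described; details differ from the actual arch/arm/mm/pageattr.c):

    struct page_change_data {
        pgprot_t set_mask;
        pgprot_t clear_mask;
    };

    static int change_page_range(pte_t *ptep, pgtable_t token,
                                 unsigned long addr, void *data)
    {
        struct page_change_data *cdata = data;
        pte_t pte = *ptep;

        /* apply the attribute masks to one PTE */
        pte = __pte((pte_val(pte) & ~pgprot_val(cdata->clear_mask)) |
                    pgprot_val(cdata->set_mask));
        set_pte_ext(ptep, pte, 0);
        return 0;
    }

    static int change_memory_common(unsigned long addr, int numpages,
                                    pgprot_t set_mask, pgprot_t clear_mask)
    {
        unsigned long size = (unsigned long)numpages << PAGE_SHIFT;
        struct page_change_data data = { set_mask, clear_mask };
        int ret;

        ret = apply_to_page_range(&init_mm, addr, size,
                                  change_page_range, &data);
        flush_tlb_kernel_range(addr, addr + size);
        return ret;
    }

    /* each set_memory_* is then a one-line wrapper */
    int set_memory_ro(unsigned long addr, int numpages)
    {
        return change_memory_common(addr, numpages,
                                    __pgprot(L_PTE_RDONLY), __pgprot(0));
    }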
-
Committed by Jungseung Lee

L1_CACHE_BYTES can be larger than the real L1 cache line size. In that case, flush_pfn_alias() would fail to flush the last bytes of the page - up to L1_CACHE_BYTES minus the real cache line size. So fix the end address to "to + PAGE_SIZE - 1"; the bottom bits of the address are the line length, which is ignored by mcrr.

Signed-off-by: Jungseung Lee <js07.lee@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Jungseung Lee

L1_CACHE_BYTES can be a larger value than the real L1 cache line size. In that case, discard_old_kernel_data() would fail to invalidate the last bytes of the page - up to L1_CACHE_BYTES minus the real cache line size. So fix the end address to "to + PAGE_SIZE - 1"; the bottom bits of the address are the line length, which is ignored by mcrr.

Signed-off-by: Jungseung Lee <js07.lee@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
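Both fixes amount to the same change of the range operation's end address (a sketch; the mcrr shown is the ARMv6 clean-and-invalidate D-cache range form these paths use):

    static void flush_alias_range_sketch(unsigned long to)
    {
        unsigned long end = to + PAGE_SIZE - 1; /* was: - L1_CACHE_BYTES */

        /* mcrr ignores the low line-length bits of both addresses, so
         * pointing 'end' at the last byte of the page can never skip a
         * line, even if L1_CACHE_BYTES overstates the real line size */
        asm volatile("mcrr p15, 0, %1, %0, c14"
                     : : "r" (to), "r" (end) : "cc");
    }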
-
- 03 December 2014, 1 commit
-
-
Committed by Jungseung Lee

Modern ARMv7-A/R cores optionally implement the following new hardware feature:

- PXN: Privileged execute-never (PXN) is a security feature. The PXN bit determines whether the processor can execute software from the region. This is an effective mitigation against ret2usr attacks. On an implementation that does not include LPAE, PXN support is optional.

This patch sets the PXN bit in user page tables, preventing user code from being executed in privileged mode.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Jungseung Lee <js07.lee@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 02 December 2014, 1 commit
-
-
Committed by Will Deacon

This patch plumbs the existing ARM IOMMU DMA infrastructure (which isn't actually called outside of a few drivers) into arch_setup_dma_ops, so that we can use IOMMUs for DMA transfers in a more generic fashion. Since this significantly complicates the arch_setup_dma_ops function, it is moved out of line into dma-mapping.c. If CONFIG_ARM_DMA_USE_IOMMU is not set, the iommu parameter is ignored and the normal ops are used instead.

Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 27 November 2014, 1 commit
-
-
Committed by Thomas Petazzoni

Under extremely rare conditions, in an MPCore node consisting of at least 3 CPUs, two CPUs trying to perform a STREX to data on the same shared cache line can enter a livelock situation. This patch enables the HW mechanism that overcomes the bug, fixing the incorrect setup of the STREX backoff delay bit caused by a wrong description in the specification: enabling the STREX backoff delay mechanism is done by leaving the bit *cleared*, while the bit was previously being set by the proc-v7.S code.

[Thomas: adapt to latest mainline, slightly reword the commit log, add stable markers.]

Fixes: de490193 ("arm: mm: Add support for PJ4B cpu and init routines")
Cc: <stable@vger.kernel.org> # v3.8+
Signed-off-by: Nadav Haklai <nadavh@marvell.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Acked-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Acked-by: Jason Cooper <jason@lakedaemon.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 21 November 2014, 4 commits
-
-
Committed by Dmitry Eremin-Solenikov

According to the manuals I have, the XScale auxiliary register should be reached with opc_2 = 1 instead of crn = 1. cpu_xscale_proc_init correctly uses the c1, c0, 1 arguments, but cpu_xscale_do_suspend and cpu_xscale_do_resume use c1, c1, 0. Correct the suspend/resume functions to also use c1, c0, 1.

The issue was primarily noticed thanks to qemu reporting "unsupported instruction" on the pxa suspend path. Confirmed in the PXA210/250 and PXA255 XScale Core manuals and in the PXA270 and PXA320 Developers Guides.

Hardware tested by me on tosa (pxa255). Robert confirmed on a pxa270 board.

Tested-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com>
Acked-by: Robert Jarzmik <robert.jarzmik@free.fr>
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King

Convert many (but not all) printk(KERN_* to pr_* to simplify the code. We take the opportunity to join some printk lines together so we don't split the message across several lines, and we also add a few levels to some messages which were previously missing them.

Tested-by: Andrew Lunn <andrew@lunn.ch>
Tested-by: Felipe Balbi <balbi@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
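A typical conversion (message text illustrative):

    static void report_map_failure(const char *name)
    {
        /* before:
         *   printk(KERN_WARNING "%s: unable to map "
         *          "device\n", name);
         * after, joined into one greppable line: */
        pr_warn("%s: unable to map device\n", name);
    }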
-
Committed by Will Deacon

Rather than unconditionally allocating a fresh ASID to an mm from an older generation, attempt to re-use the old assignment where possible. This can bring performance benefits on systems where the ASID is used to tag things other than the TLB (e.g. branch prediction resources).

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
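A sketch of the re-use attempt (ASID_MASK, NUM_USER_ASIDS and asid_map exist in arch/arm/mm/context.c; the function below is simplified and elides rollover handling):

    static u64 new_context_sketch(struct mm_struct *mm, u64 generation)
    {
        u64 asid = atomic64_read(&mm->context.id) & ~ASID_MASK;

        /* is the old assignment still free in this generation's bitmap? */
        if (asid && !__test_and_set_bit(asid, asid_map))
            return generation | asid;   /* re-used: old TLB/BP tags stay valid */

        /* otherwise allocate a fresh ASID (0 is reserved) */
        asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, 1);
        __set_bit(asid, asid_map);
        return generation | asid;
    }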
-
Committed by Stephen Boyd

Certain versions of the Krait processor don't report that they support the fused multiply-accumulate instruction via the MVFR1 register despite the fact that they actually do. Unfortunately we use this register to identify support for VFPv4. Override the hwcap on all Krait processors to indicate support for VFPv4 to work around this.

Tested-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 14 November 2014, 1 commit
-
-
Committed by Nathan Lynch

The kuser helpers page is not set up on non-MMU systems, so it does not make sense to allow CONFIG_KUSER_HELPERS to be enabled when CONFIG_MMU=n. Allowing it to be set on !MMU results in an oops in set_tls (used in execve and the arm_syscall trap handler):

    Unhandled exception: IPSR = 00000005 LR = fffffff1
    CPU: 0 PID: 1 Comm: swapper Not tainted 3.18.0-rc1-00041-ga30465a #216
    task: 8b838000 ti: 8b82a000 task.ti: 8b82a000
    PC is at flush_thread+0x32/0x40
    LR is at flush_thread+0x21/0x40
    pc : [<8f00157a>]    lr : [<8f001569>]    psr: 4100000b
    sp : 8b82be20  ip : 00000000  fp : 8b83c000
    r10: 00000001  r9 : 88018c84  r8 : 8bb85000
    r7 : 8b838000  r6 : 00000000  r5 : 8bb77400  r4 : 8b82a000
    r3 : ffff0ff0  r2 : 8b82a000  r1 : 00000000  r0 : 88020354
    xPSR: 4100000b
    CPU: 0 PID: 1 Comm: swapper Not tainted 3.18.0-rc1-00041-ga30465a #216
    [<8f002bc1>] (unwind_backtrace) from [<8f002033>] (show_stack+0xb/0xc)
    [<8f002033>] (show_stack) from [<8f00265b>] (__invalid_entry+0x4b/0x4c)

As best I can tell, this issue existed for the set_tls ARM syscall before commit fbfb872f ("ARM: 8148/1: flush TLS and thumbee register state during exec") consolidated the TLS manipulation code into the set_tls helper function, but now that we're using it to flush register state during execve, !MMU users encounter the oops at the first exec.

Prevent CONFIG_MMU=n configurations from enabling CONFIG_KUSER_HELPERS.

Fixes: fbfb872f ("ARM: 8148/1: flush TLS and thumbee register state during exec")
Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
Reported-by: Stefan Agner <stefan@agner.ch>
Acked-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 12 November 2014, 1 commit
-
-
Committed by Linus Walleij

Commit 68f3b875 ("ARM: integrator: make the Integrator multiplatform") broke allmodconfig like this:

    >> arch/arm/include/asm/cmpxchg.h:114:2: error: #error "SMP is not supported on this platform"
    (etc)

This is due to the fact that as we turned on multiplatform for the Integrator, a lot of non-applicable CPUs became selectable for its multiplatform images, due to a lot of "depends on ARCH_INTEGRATOR" restrictions in arch/arm/mm/Kconfig for the different ARM CPU types.

Fix this by restricting the CPU selections to the respective multiplatform config, which now becomes a subset of the possible Integrator configurations, or alternatively the non-multiplatform config plus ARCH_INTEGRATOR, i.e.:

    if (!ARCH_MULTIPLATFORM || ARCH_MULTI_Vx) && (ARCH_INTEGRATOR || ARCH_FOO ...)

Since the Integrator has been converted to multiplatform, this will often take the short form:

    if (ARCH_MULTI_Vx && ARCH_INTEGRATOR)

if no other non-multiplatform platforms are eligible.

Reported-by: Build bot for Mark Brown <broonie@kernel.org>
Reported-by: Kbuild test robot <fengguang.wu@intel.com>
Suggested-by: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-