- 17 June 2013, 4 commits
-
-
Committed by Simon Baatz

Commit f8b63c18 made flush_kernel_dcache_page a no-op, assuming that the pages it needs to handle are kernel mapped only. However, pages with user space mappings may occur, for example when doing direct I/O. Thus, continue to do lazy flushing if there are no user space mappings; otherwise, flush the kernel cache lines directly.

Signed-off-by: Simon Baatz <gmbnomis@gmail.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: <stable@vger.kernel.org> # 3.2+
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
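A minimal sketch of the resulting logic, paraphrased from the description above and assuming the usual ARM mm helpers (page_mapping(), mapping_mapped(), __cpuc_flush_dcache_area()); this is not the verbatim patch:

```c
/* Sketch: flush eagerly only when user space mappings may exist. */
void flush_kernel_dcache_page(struct page *page)
{
	struct address_space *mapping = page_mapping(page);

	if (!mapping || mapping_mapped(mapping)) {
		/* Anonymous or user-mapped page: flush the kernel
		 * alias now so all mappings see coherent data. */
		__cpuc_flush_dcache_area(page_address(page), PAGE_SIZE);
	}
	/* Otherwise the page is kernel mapped only: stay lazy. */
}
```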
-
Committed by Gregory CLEMENT

This commit fixes the ID and mask for the PJ4B, which were too restrictive and didn't match the CPU of the Armada 370 SoC.

Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Po-Yu Chuang

This bug was introduced in commit e651eab0. Some v4/v5 platforms failed to boot because of it.

Signed-off-by: Po-Yu Chuang <ratbert.chuang@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Jon Medhurst

On Cortex-A9 before version r1p0, the LoUIS bit field of the CLIDR register returns zero when it should return one. This leads cache maintenance operations which rely on this value to not function as intended, causing data corruption. The workaround for this erratum is to detect affected CPUs and correct the LoUIS value read.

Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Cc: stable@vger.kernel.org
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
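A hedged sketch of the workaround's shape; the CLIDR encoding and the LoUIS field position (bits [23:21]) are per the ARM ARM, while the detection helper here is hypothetical:

```c
/* Sketch: read CLIDR and fix up LoUIS on affected Cortex-A9 parts. */
static unsigned int read_clidr_louis(void)
{
	unsigned int clidr, louis;

	asm volatile("mrc p15, 1, %0, c0, c0, 1" : "=r" (clidr));
	louis = (clidr >> 21) & 0x7;	/* LoUIS is CLIDR[23:21] */

	/* Pre-r1p0 Cortex-A9 reports 0 where it should report 1. */
	if (louis == 0 && cpu_is_affected_cortex_a9())	/* hypothetical check */
		louis = 1;

	return louis;
}
```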
-
- 30 April 2013, 3 commits
-
-
Committed by Jiang Liu

Use the helper function free_highmem_page() to free highmem pages into the buddy system.

Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Linus Walleij <linus.walleij@linaro.org>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
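For context, free_highmem_page() clears the reserved bit, hands the page to the buddy allocator, and updates the page counters. A sketch of typical arch code after the conversion (the pfn range variables are placeholders):

```c
/* Sketch: release reserved highmem pages to the buddy allocator. */
static void __init free_highpages(void)
{
	unsigned long pfn;

	/* placeholder bounds standing in for the real bank walk */
	for (pfn = highmem_start_pfn; pfn < highmem_end_pfn; pfn++)
		free_highmem_page(pfn_to_page(pfn));
}
```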
-
Committed by Jiang Liu

Use the common helper functions to free reserved pages.

Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by David Rientjes

On large systems with a lot of memory, walking all RAM to determine page types may take a half second or even more. In non-blockable contexts, the page allocator will emit a page allocation failure warning unless __GFP_NOWARN is specified. In such contexts, irqs are typically disabled, and such a lengthy delay may even result in NMI watchdog timeouts. To fix this, suppress the page walk in such contexts when printing the page allocation failure warning.

Signed-off-by: David Rientjes <rientjes@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Acked-by: Michal Hocko <mhocko@suse.cz>
Cc: Dave Hansen <dave@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
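The suppression amounts to a filter check at the top of the walk; a sketch assuming the flag name used by this patch series (SHOW_MEM_FILTER_PAGE_COUNT is an assumption here):

```c
void show_mem(unsigned int filter)
{
	/* Sketch: callers that cannot block ask us to skip the
	 * expensive walk over every page in every memory bank. */
	if (filter & SHOW_MEM_FILTER_PAGE_COUNT)
		return;

	/* ... walk memory banks and print per-type page counts ... */
}
```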
-
- 29 April 2013, 1 commit
-
-
Committed by Marc Zyngier

After the HYP page table rework, it is pretty easy to let the KVM code provide its own idmap, rather than expecting the kernel to provide it. It actually takes less code to do so.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
-
- 17 April 2013, 4 commits
-
-
Committed by Gregory CLEMENT

PJ4B CPUs are LPAE capable, so enable them in LPAE compilations.

Signed-off-by: Lior Amsalem <alior@marvell.com>
Tested-by: Franklin <flin@marvell.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Joonsoo Kim

In kmap_atomic(), kmap_high_get() is invoked to check for an already-mapped area. In __flush_dcache_page() and dma_cache_maint_page(), we explicitly call kmap_high_get() before kmap_atomic() when cache_is_vipt(), so kmap_high_get() can be invoked twice. This is a useless operation, so remove one of the calls.

v2: change cache_is_vipt() to cache_is_vipt_nonaliasing() in order to be self-documenting

Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Illia Ragozin

On Feroceon the L2 cache becomes non-coherent with the CPU when the L1 caches are disabled. Thus the L2 needs to be invalidated after both L1 caches are disabled. On kexec, before starting the code that relocates the kernel, the L1 caches are disabled in cpu_proc_fin (cpu_v7_proc_fin for Feroceon), but the L2 cache is never invalidated afterwards, because inv_all is not set in cache-feroceon-l2.c. So kernel relocation and decompression may have (and usually have) errors. Setting the function pointer enables L2 invalidation and fixes the issue.

Cc: <stable@vger.kernel.org>
Signed-off-by: Illia Ragozin <illia.ragozin@grapecom.com>
Acked-by: Jason Cooper <jason@lakedaemon.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
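The fix is essentially a one-line hookup in the driver init; a loose sketch, where the surrounding assignments mirror how outer_cache ops are normally wired up and the invalidate helper's name is an assumption:

```c
/* Sketch: make sure an invalidate-all callback is registered. */
void __init feroceon_l2_init(int l2_wt_override)
{
	outer_cache.inv_range = feroceon_l2_inv_range;
	outer_cache.clean_range = feroceon_l2_clean_range;
	outer_cache.flush_range = feroceon_l2_flush_range;
	outer_cache.inv_all = feroceon_l2_inv_all;	/* assumed name; previously left unset */
}
```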
-
Committed by Joonsoo Kim

tcm_init() calls iotable_init(), which uses the early_alloc variants that do memblock allocation. Using memblock allocation directly after initializing bootmem should not be permitted, because bootmem can't know what is additionally reserved. So move tcm_init() to a safe place before initializing bootmem.

Tested-by: Linus Walleij <linus.walleij@linaro.org> (on the U300)
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
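The point is purely one of ordering; an abridged sketch of where the call lands relative to bootmem_init() (surrounding calls per mmu.c, details simplified):

```c
/* Sketch (abridged): tcm_init() runs inside paging_init(), before
 * bootmem_init(), so its iotable_init() allocations still go through
 * memblock and are visible as reservations. */
void __init paging_init(struct machine_desc *mdesc)
{
	build_mem_type_table();
	prepare_page_table();
	map_lowmem();
	devicemaps_init(mdesc);
	kmap_init();
	tcm_init();		/* moved here, ahead of bootmem setup */
	bootmem_init();
}
```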
-
- 08 April 2013, 1 commit
-
-
Committed by Russell King

Let's do the changes properly and fix the same problem everywhere, not just for one case.

Cc: <stable@vger.kernel.org> # kernels containing 15e0d9e3 or equivalent
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 04 April 2013, 1 commit
-
-
Committed by Will Deacon

Many ARMv7 cores have hardware page table walkers that can read the L1 cache. This is discoverable from the ID_MMFR3 register, although this can be expensive to access from the low-level set_pte functions and is a pain to cache, particularly with multi-cluster systems. A useful observation is that the multi-processing extensions for ARMv7 require coherent table walks, meaning that we can make use of ALT_SMP patching in proc-v7-* to patch away the cache flush safely for these cores.

Reported-by: Albin Tonnerre <Albin.Tonnerre@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
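The discoverable bit is the "coherent walk" field of ID_MMFR3; a hedged sketch of the runtime check that the patch avoids having to do on every PTE write (field position per the ARM ARM):

```c
/* Sketch: does the hardware table walker snoop the L1 data cache? */
static bool tlb_can_read_from_l1_cache(void)
{
	unsigned int mmfr3;

	asm("mrc p15, 0, %0, c0, c1, 7" : "=r" (mmfr3));	/* ID_MMFR3 */
	return ((mmfr3 >> 20) & 0xf) != 0;	/* coherent walk, bits [23:20] */
}
```

Instead of paying for this read (or a cached copy of it) in set_pte, the patch relies on SMP-capable v7 cores being required to have coherent walkers, and lets ALT_SMP() patch the D-cache clean into a nop at boot.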
-
- 03 April 2013, 2 commits
-
-
Committed by Catalin Marinas

On Cortex-A15 (r0p0..r3p2) the TLBI/DSB are not adequately shooting down all uses of the old entries. This patch implements the erratum workaround, which consists of:

1. Dummy TLBIMVAIS and DSB on the CPU doing the TLBI operation.
2. Send an IPI to the CPUs that are running the same mm (and ASID) as the one being invalidated (or all the online CPUs for global pages).
3. The CPU receiving the IPI executes a DMB and CLREX (part of the exception return code already).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
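A hedged sketch of the workaround's software side (function names follow the patch's style; where exactly the dummy TLBIMVAIS happens is simplified):

```c
/* Sketch: IPI handler for the erratum workaround. */
static void ipi_flush_tlb_a15_erratum(void *arg)
{
	dmb();	/* CLREX is already part of the exception return path */
}

static void broadcast_tlb_mm_a15_erratum(struct mm_struct *mm)
{
	dummy_flush_tlb_a15_erratum();	/* local TLBIMVAIS + DSB */

	/* Nudge every other CPU running this mm (and ASID). */
	smp_call_function_many(mm_cpumask(mm),
			       ipi_flush_tlb_a15_erratum, NULL, 1);
}
```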
-
Committed by Rob Herring

Commit b8db6b88 (ARM: 7547/4: cache-l2x0: add support for Aurora L2 cache ctrl) moved the masking of the part ID, which caused the RTL version to be lost. Commit 6248d060 (ARM: 7545/1: cache-l2x0: make outer_cache_fns a field of l2x0_of_data) changed how .set_debug is initialized. Both commits break commit 74ddcdb8 (ARM: 7608/1: l2x0: Only set .set_debug on PL310 r3p0 and earlier), which uses the RTL version to conditionally set the .set_debug function pointer. Commit b8db6b88 also caused the printed cache ID to be missing the version information. Fix this by reverting how the part number is masked so the RTL version info is maintained. The cache-id-part DT property does not set the RTL bits, so masking them should have no effect. Also, re-arrange the order of the function pointer init so the .set_debug function can be overridden.

Reported-by: Paolo Pisati <paolo.pisati@canonical.com>
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Gregory CLEMENT <gregory.clement@free-electrons.com>
Cc: Yehuda Yitschak <yehuday@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
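A sketch of the restored masking (constant names approximate the l2x0 driver's):

```c
/* Sketch: keep the RTL revision bits when matching the part number. */
u32 cache_id = readl_relaxed(l2x0_base + L2X0_CACHE_ID);

switch (cache_id & L2X0_CACHE_ID_PART_MASK) {	/* part bits only */
case L2X0_CACHE_ID_PART_L310:
	/* The RTL bits survived the mask above, so this works again. */
	if ((cache_id & L2X0_CACHE_ID_RTL_MASK) <= L2X0_CACHE_ID_RTL_R3P0)
		outer_cache.set_debug = pl310_set_debug;
	break;
}
```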
-
- 26 March 2013, 4 commits
-
-
Committed by Will Deacon

cpu_set_pte_ext is only guaranteed to be defined when CONFIG_MMU is enabled, so don't export it to modules otherwise.

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Will Deacon

There's no point having a conditional cache flush if we don't know the state of the condition beforehand. This patch makes the cache flush in v4_flush_user_cache_range unconditional.

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Will Deacon

The setup code in proc-arm740.S is completely broken and, as far as I can tell, always has been. I was >this< close to ripping it out, when a 740t core-tile materialised in the office, so I've had a crack at fixing things up:

- Fix the ram/flash area calculations so that we actually set the condition flags before testing them...
- Fix the proc_info structure so that __cpu_io_mmu_flags are defined as 0, placing the __cpu_flush pointer at the correct offset
- Re-number the registers used during __arm740_setup so that we don't clobber the machine ID et al
- Advertise Thumb support via the hwcaps, since 740T is the only 740 implementation.

Acked-by: Hyok S. Choi <hyok.choi@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Will Deacon

This is only used by 740t, which is a v4 core and (by my reading of the datasheet for the CPU) ignores CRm for the cp15 cache flush operation, making the v4 cache implementation in cache-v4.S sufficient for this CPU. Tested with a 740T core-tile on an Integrator/AP baseboard.

Acked-by: Hyok S. Choi <hyok.choi@samsung.com>
Acked-by: Greg Ungerer <gerg@uclinux.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 23 March 2013, 3 commits
-
-
Committed by Stepan Moskovchenko

Some early versions of the Krait CPU design incorrectly indicate that they only support the UDIV and SDIV instructions in Thumb mode, when they actually support them in both ARM and Thumb mode. It seems that these CPUs follow the DDI0406B ARM ARM, which has two possible values for the divide instructions field, instead of the DDI0406C document, which has three possible values. Work around this problem by checking the MIDR against Krait CPUs with this faulty ISAR0 register and forcing the hwcaps to indicate support in both modes.

[sboyd: Rewrote commit text to reflect real reasoning now that we autodetect udiv/sdiv]
Signed-off-by: Stepan Moskovchenko <stepanm@codeaurora.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Stephen Boyd

The ISAR0 register indicates support for the SDIV and UDIV instructions in both the Thumb and ARM instruction sets. Read the register to detect the supported instructions and update the elf_hwcap mask as appropriate. This is better than adding more and more cpuid checks in proc-v7.S for each new CPU variant that supports these instructions.

Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Stepan Moskovchenko <stepanm@codeaurora.org>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
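A sketch of the detection, based on the architected ID_ISAR0 layout (divide field in bits [27:24]: 1 means Thumb only, 2 means ARM and Thumb):

```c
/* Sketch: derive HWCAP_IDIVA/IDIVT from ID_ISAR0 instead of MIDR lists. */
unsigned int divide = (read_cpuid_ext(CPUID_EXT_ISAR0) >> 24) & 0xf;

if (divide >= 1)
	elf_hwcap |= HWCAP_IDIVT;	/* SDIV/UDIV in Thumb state */
if (divide >= 2)
	elf_hwcap |= HWCAP_IDIVA;	/* ...and in ARM state too */
```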
-
Committed by Sricharan R

With LPAE enabled, alloc_init_section() does not map the entire address space for unaligned addresses. The issue is also reproduced with CMA + LPAE: CMA tries to map 16MB with page-granularity mappings during boot; alloc_init_pte() is called and, out of the 16MB, only 2MB gets mapped while the rest remains inaccessible. Because of this, OMAP5 boot is broken with CMA + LPAE enabled. Fix the issue by ensuring that the entire range is mapped.

Signed-off-by: R Sricharan <r.sricharan@ti.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoffer Dall <chris@cloudcar.com>
Cc: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Christoffer Dall <chris@cloudcar.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
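A sketch of the reworked loop, close in spirit to the patch (map_init_section() and alloc_init_pte() follow the mmu.c naming; treat it as an illustration, not the exact diff):

```c
/* Sketch: consume [addr, end) pmd by pmd so no sub-range is skipped. */
static void __init alloc_init_pmd(pud_t *pud, unsigned long addr,
				  unsigned long end, phys_addr_t phys,
				  const struct mem_type *type)
{
	pmd_t *pmd = pmd_offset(pud, addr);
	unsigned long next;

	do {
		next = pmd_addr_end(addr, end);
		if (type->prot_sect &&
		    ((addr | next | phys) & ~SECTION_MASK) == 0)
			map_init_section(pmd, addr, next, phys, type);
		else
			alloc_init_pte(pmd, addr, next,
				       __phys_to_pfn(phys), type);
		phys += next - addr;
	} while (pmd++, addr = next, addr != end);
}
```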
-
- 14 March 2013, 1 commit
-
-
Committed by Marek Szyprowski

The atomic pool should always be allocated from the DMA zone if such a zone is available in the system, to avoid issues caused by the limited DMA mask of any of the devices used for making an atomic allocation.

Reported-by: Krzysztof Halasa <khc@pm.waw.pl>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Stable <stable@vger.kernel.org> [v3.6+]
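A sketch of the flag change, with helper and variable names approximate rather than taken from the tree:

```c
/* Sketch: pull the atomic pool from ZONE_DMA when one is configured. */
static int __init atomic_pool_init(void)
{
	gfp_t gfp = GFP_KERNEL | GFP_DMA;	/* was plain GFP_KERNEL */
	struct page *page;

	page = alloc_pages(gfp, get_order(atomic_pool_size));
	if (!page)
		return -ENOMEM;

	/* ... remap the pages and seed the pool allocator ... */
	return 0;
}
```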
-
- 04 March 2013, 4 commits
-
-
Committed by Will Deacon

The ARM ARM requires branch predictor maintenance if, for a given ASID, the instructions at a specific virtual address appear to change. From the kernel's point of view, that means:

- Changing the kernel's view of memory (e.g. switching to the identity map)
- ASID rollover (since ASIDs will be re-allocated to new tasks)

This patch adds explicit branch predictor maintenance when either of the two conditions above is met.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
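The maintenance operation itself is BPIALL; a sketch of the kind of helper such a patch adds (encoding per the ARM ARM):

```c
/* Sketch: invalidate all branch predictor entries on this CPU. */
static inline void local_flush_bp_all(void)
{
	const int zero = 0;

	asm volatile("mcr p15, 0, %0, c7, c5, 6" : : "r" (zero));	/* BPIALL */
	dsb();
	isb();	/* make the invalidation visible to instruction fetch */
}
```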
-
Committed by Will Deacon

mm->context.id is updated under asid_lock when a new ASID is allocated to an mm_struct. However, it is also read without the lock when a task is being scheduled and checking whether or not the current ASID generation is up-to-date. If two threads of the same process are being scheduled in parallel and the bottom bits of the generation in their mm->context.id match the current generation (that is, the mm_struct has not been used for ~2^24 rollovers), then the non-atomic, lockless access to mm->context.id may yield the incorrect ASID. This patch fixes this issue by making mm->context.id an atomic64_t, ensuring that the generation is always read consistently. For code that only requires access to the ASID bits (e.g. TLB flushing by mm), the value is accessed directly, which GCC converts to an ldrb.

Cc: <stable@vger.kernel.org> # 3.8
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
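A sketch of the layout and the stale-generation test after the conversion (symbol names follow the ASID allocator's style but are hedged):

```c
/* Sketch: one atomic64 holds <generation | ASID>, read in one go. */
#define ASID_BITS	8

static inline bool asid_is_stale(struct mm_struct *mm)
{
	u64 id  = atomic64_read(&mm->context.id);
	u64 gen = atomic64_read(&asid_generation);	/* allocator-global */

	return ((id ^ gen) >> ASID_BITS) != 0;	/* generations differ */
}
```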
-
Committed by Will Deacon

If a thread triggers an ASID rollover, other threads of the same process must be made to wait until the mm->context.id for the shared mm_struct has been updated to the new generation and associated book-keeping (e.g. TLB invalidation) has been performed. However, there is a *tiny* window where both mm->context.id and the relevant active_asids entry are updated to the new generation, but the TLB flush has not been performed, which could allow another thread to return to userspace with a dirty TLB, potentially leading to data corruption. In reality this will never occur, because one CPU would need to perform a context switch in the time it takes another to do a couple of atomic test/set operations, but we should plug the race anyway. This patch moves the active_asids update until after the potential TLB flush on context switch.

Cc: <stable@vger.kernel.org> # 3.8
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
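A sketch of the reordering on the context-switch path (names follow the ASID allocator; the surrounding function is simplified):

```c
/* Sketch: flush first, then publish the ASID as active on this CPU. */
static void finish_switch(struct mm_struct *mm, unsigned int cpu)
{
	u64 asid = atomic64_read(&mm->context.id);

	if (cpumask_test_and_clear_cpu(cpu, &tlb_flush_pending))
		local_flush_tlb_all();			/* the fix: do this... */

	atomic64_set(&per_cpu(active_asids, cpu), asid);	/* ...before this */
	cpu_switch_mm(mm->pgd, mm);
}
```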
-
Committed by Ben Dooks

Fix the missing use of the asid macro when getting the ASID from the mm->context.id field.

Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 26 February 2013, 1 commit
-
-
Committed by Russell King

Paolo Pisati reports that IPv6 triggers this warning:

BUG: scheduling while atomic: swapper/0/0/0x40000100
Modules linked in:
[<c001b1c4>] (unwind_backtrace+0x0/0xf0) from [<c0503c5c>] (__schedule_bug+0x48/0x5c)
[<c0503c5c>] (__schedule_bug+0x48/0x5c) from [<c0508608>] (__schedule+0x700/0x740)
[<c0508608>] (__schedule+0x700/0x740) from [<c007007c>] (__cond_resched+0x24/0x34)
[<c007007c>] (__cond_resched+0x24/0x34) from [<c05086dc>] (_cond_resched+0x3c/0x44)
[<c05086dc>] (_cond_resched+0x3c/0x44) from [<c0021f6c>] (do_alignment+0x178/0x78c)
[<c0021f6c>] (do_alignment+0x178/0x78c) from [<c00083e0>] (do_DataAbort+0x34/0x98)
[<c00083e0>] (do_DataAbort+0x34/0x98) from [<c0509a60>] (__dabt_svc+0x40/0x60)
Exception stack(0xc0763d70 to 0xc0763db8)
3d60: e97e805e e97e806e 2c000000 11000000
3d80: ea86bb00 0000002c 00000011 e97e807e c076d2a8 e97e805e e97e806e 0000002c
3da0: 3d000000 c0763dbc c04b98fc c02a8490 00000113 ffffffff
[<c0509a60>] (__dabt_svc+0x40/0x60) from [<c02a8490>] (__csum_ipv6_magic+0x8/0xc8)

Fix this by using probe_kernel_address() instead of __get_user().

Cc: <stable@vger.kernel.org>
Reported-by: Paolo Pisati <p.pisati@gmail.com>
Tested-by: Paolo Pisati <p.pisati@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
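A sketch of the substitution in do_alignment() (variable and label names here are illustrative):

```c
/* Sketch: probe_kernel_address() copies the instruction word without
 * going through the user-access path, so it never sleeps. */
unsigned long instr;
int fault;

fault = probe_kernel_address((void *)instrptr, instr);	/* 0 on success */
if (fault)
	goto bad_or_fault;	/* illustrative label */
```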
-
- 25 February 2013, 7 commits
-
-
Committed by Marek Szyprowski

This patch removes page_address() usage in the IOMMU-aware dma-mapping implementation and replaces it with direct use of the CPU virtual address provided by the caller. page_address() returned an incorrect address for pages remapped in the atomic pool, which caused a memory leak.

Reported-by: Hiroshi Doyu <hdoyu@nvidia.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Hiroshi Doyu <hdoyu@nvidia.com>
-
Committed by Seung-Woo Kim

The alignment order for a dma iommu buffer is set by the buffer size. For a large buffer, this is a waste of IOMMU address space, so a configurable parameter to limit the maximum alignment order can reduce the waste.

Signed-off-by: Seung-Woo Kim <sw0312.kim@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
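A sketch of the cap in the IOVA allocator, assuming the Kconfig knob this patch introduces (CONFIG_ARM_DMA_IOMMU_ALIGNMENT); the mapping fields are approximate:

```c
/* Sketch: bound the alignment order used for the bitmap search. */
unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;
unsigned int align = 0;
int order = get_order(size);

if (order > CONFIG_ARM_DMA_IOMMU_ALIGNMENT)
	order = CONFIG_ARM_DMA_IOMMU_ALIGNMENT;
align = (1 << order) - 1;	/* alignment mask for the IOVA bitmap */

start = bitmap_find_next_zero_area(mapping->bitmap, mapping->bits,
				   0, count, align);
```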
-
Committed by Marek Szyprowski

The IOMMU can provide access to any memory page, so there is no point in limiting the allocated pages to lowmem once the other parts of the dma-mapping subsystem correctly support highmem pages.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
Committed by Marek Szyprowski

This patch adds the missing pieces to correctly support memory pages served from CMA regions placed in high memory zones. Please note that the default global CMA area is still put into lowmem and is limited by the optional architecture-specific DMA zone. One can, however, put device-specific CMA regions in a high memory zone to reduce lowmem usage.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Michal Nazarewicz <mina86@mina86.com>
-
Committed by Prathyush K

This patch adds EXPORT_SYMBOL_GPL calls to the three arm iommu functions: arm_iommu_create_mapping, arm_iommu_free_mapping and arm_iommu_attach_device. These three functions are ARM-specific wrapper functions for creating/freeing/using an IOMMU mapping, and they are called by various drivers. If any of these drivers needs to be built as a dynamic module, these functions need to be exported.

Changelog v2: using EXPORT_SYMBOL_GPL as suggested by Marek.

Signed-off-by: Prathyush K <prathyush.k@samsung.com>
[m.szyprowski: extended with recently introduced EXPORT_SYMBOL_GPL(arm_iommu_detach_device)]
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
Committed by Hiroshi Doyu

A counterpart of arm_iommu_attach_device().

Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
Committed by Hiroshi Doyu

struct dma_map_ops iommu_ops doesn't have ->set_dma_mask, which causes a crash when dma_set_mask() is called from some drivers.

Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
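A sketch of the ops-table fix (abridged; the other callbacks are shown only for orientation and the list is not complete):

```c
/* Sketch: wire a set_dma_mask hook into the IOMMU dma_map_ops. */
struct dma_map_ops iommu_ops = {
	.alloc		= arm_iommu_alloc_attrs,
	.free		= arm_iommu_free_attrs,
	.map_page	= arm_iommu_map_page,
	/* ... */
	.set_dma_mask	= arm_dma_set_mask,	/* the missing hook */
};
```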
-
- 17 February 2013, 4 commits
-
-
Committed by Ben Dooks

The mmid macro is meant to be used to get the mm->context.id data from the mm structure, but it seems to have been missed in a couple of files.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Ben Dooks

Since the new ASID code in b5466f87 ("ARM: mm: remove IPI broadcasting on ASID rollover") was changed to use 64-bit operations, it has broken big-endian (BE) operation due to an issue with the MM code accessing sub-fields of mm->context.id. When running in BE mode, the words of this 64-bit field are stored with the most significant word first, so the LDR in arch/arm/mm/proc-macros.S reads the wrong part of the field. To resolve this, change the LDR in the mmid macro to load from +4.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King

869486d5f51 (ARM: 7646/1: mm: use static_vm for managing static mapped areas) introduced new warnings:

arch/arm/mm/mmu.c: In function 'pci_reserve_io':
arch/arm/mm/mmu.c:888:16: warning: unused variable 'addr'
arch/arm/mm/mmu.c:887:20: warning: unused variable 'vm'

because it failed to delete the two local variables it no longer used. Fix this.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Joonsoo Kim

A static mapped area is ARM-specific, so it is better not to use the generic vmalloc data structures, that is, vmlist and vmlist_lock, for managing static mapped areas. Doing so causes some needless overhead, and reducing this overhead is a good idea. Now we have the newly introduced static_vm infrastructure. With it, we don't need to iterate over all mapped areas; instead, we just iterate over the static mapped areas, which helps reduce the overhead of finding a matched area. And the architecture dependency on the vmalloc layer is removed, which will help the maintainability of the vmalloc layer.

Reviewed-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Rob Herring <rob.herring@calxeda.com>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
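A sketch of the infrastructure this commit builds on (the struct follows the patch; the lookup is simplified):

```c
/* Sketch: track static mappings on their own list. */
struct static_vm {
	struct vm_struct vm;
	struct list_head list;
};

static LIST_HEAD(static_vmlist);

/* Find the static mapping covering a physical range, if any. */
struct static_vm *find_static_vm_paddr(phys_addr_t paddr, size_t size)
{
	struct static_vm *svm;

	list_for_each_entry(svm, &static_vmlist, list) {
		struct vm_struct *vm = &svm->vm;

		if (vm->phys_addr <= paddr &&
		    paddr + size <= vm->phys_addr + vm->size)
			return svm;
	}

	return NULL;
}
```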
-