- 27 Jul 2010, 3 commits
-
-
Committed by Russell King
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Provide a common function for allocating early PTE tables.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 16 Jul 2010, 5 commits
-
-
Committed by Russell King
Add a common early allocator function, in preparation for switching over to LMB. When we do, this function will need to do a little more than just allocate memory; we also need the memory zero-initialized.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
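A minimal sketch of the kind of helper described here, assuming a bootmem-backed allocator; the function name and the allocator call are assumptions for illustration, not the exact patch:

```c
#include <linux/init.h>
#include <linux/bootmem.h>
#include <linux/string.h>

/* One place to get zero-initialised boot-time memory for early page tables. */
static void * __init early_alloc(unsigned long sz)
{
	void *ptr = alloc_bootmem_low_pages(sz);	/* boot-time allocation */

	memset(ptr, 0, sz);	/* callers rely on the memory being zeroed */
	return ptr;
}
```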
-
Committed by Russell King
Move the platform-specific bootmem memory reservations out of arch/arm/mm/mmu.c into their respective platform files.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Since we no longer support discontigmem, node is always zero, so remove this argument.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Everything should now be using sparsemem rather than discontigmem, so remove the code supporting discontigmem from ARM.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Rather than storing the minimum size of the vmalloc area, store the maximum permitted address of the vmalloc area instead.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 09 Jul 2010, 1 commit
-
-
Committed by Sascha Hauer
On i.MX35 the L2X0_AUX_CTRL register does not have sensible reset default values. Allow them to be overridden with the aux_val/aux_mask arguments passed to l2x0_init().
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
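A hedged sketch of how the aux_val/aux_mask pair is applied inside l2x0_init(); the helper name is made up and the register handling is simplified:

```c
#include <linux/io.h>
#include <linux/types.h>
#include <asm/hardware/cache-l2x0.h>

/* Combine the hardware reset value with the platform-supplied overrides. */
static void l2x0_apply_aux(void __iomem *base, u32 aux_val, u32 aux_mask)
{
	u32 aux = readl(base + L2X0_AUX_CTRL);

	aux &= aux_mask;	/* drop the bits the platform wants to control */
	aux |= aux_val;		/* and force its own settings */
	writel(aux, base + L2X0_AUX_CTRL);
}
```

A platform such as i.MX35 can then pass whatever aux_val/aux_mask it needs to l2x0_init() instead of relying on the hardware reset value.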
-
- 02 Jul 2010, 1 commit
-
-
Committed by Catalin Marinas
RealView boards with certain revisions of the L210/L220 cache controller may deadlock in hardware on the mandatory barriers (a DSB followed by an L2 cache sync) when ARM_DMA_MEM_BUFFERABLE is enabled. The patch disables ARM_DMA_MEM_BUFFERABLE for these boards.
Tested-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 01 Jul 2010, 3 commits
-
-
Committed by Catalin Marinas
Commit f4d6477f introduced a workaround for the lack of hardware broadcasting of cache maintenance operations on ARM11MPCore. However, the workaround is only valid on CPUs that do not do speculative loads into the D-cache. This patch adds a Kconfig option, with corresponding help text, to make the above clear. When the DMA_CACHE_RWFO option is disabled, the kernel behaviour reverts to that prior to commit f4d6477f. This also allows ARMv6 UP processors with speculative loads to work correctly. For other processors, a different workaround may be needed.
Cc: Ronen Shitrit <rshitrit@marvell.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Catalin Marinas
A recent patch for DMA cache maintenance on ARM11MPCore added a write-for-ownership trick to the v6_dma_inv_range() function. Such an operation destroys data already present in the buffer. However, this function is used by dma_sync_single_for_device(), which is supposed to preserve the data already transferred into the buffer. This patch uses a combination of read and write for ownership to preserve the original data.
Reported-by: Ronen Shitrit <rshitrit@marvell.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Catalin Marinas
This macro is not defined when !CONFIG_MMU, so this patch moves the CONSISTENT_* definitions into the CONFIG_MMU section.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 09 Jun 2010, 3 commits
-
-
Committed by Khem Raj
When a function's incoming parameters are not in the inline assembly's input operand list, gcc 4.5 does not load the parameters into registers before calling the function, yet the inline assembly assumes those addresses are valid. This breaks the code because r0 and r1 are invalid when execution enters v4wb_copy_user_page(). The constant also needs to be passed as a third input operand, so account for that as well. Tested on qemu-arm.
Cc: <stable@kernel.org>
Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
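An illustrative sketch of the pattern the fix relies on, not the actual copypage-v4wb.c assembly: in a __naked function the compiler emits no prologue, so every value the asm uses must appear in the operand list.

```c
#include <linux/compiler.h>
#include <asm/page.h>

/*
 * Illustrative only: copy one page with the parameters and the constant
 * passed as explicit operands, so gcc keeps them live in registers.
 */
static void __naked example_copy_page(void *kto, const void *kfrom)
{
	asm volatile(
	"	stmfd	sp!, {r4, lr}		\n"
	"	mov	lr, %2			\n"	/* loop count from the "I" operand */
	"1:	ldmia	%1!, {r2, r3, r4, ip}	\n"	/* read 16 bytes from the source */
	"	stmia	%0!, {r2, r3, r4, ip}	\n"	/* write them to the destination */
	"	subs	lr, lr, #1		\n"
	"	bne	1b			\n"
	"	ldmfd	sp!, {r4, pc}		\n"
	:
	: "r" (kto), "r" (kfrom), "I" (PAGE_SIZE / 16));
}
```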
-
Committed by Anfei
Instruction faults on pre-ARMv6 CPUs are interpreted as a 'translation fault', but do_translation_fault() does not handle the case where user mode tries to execute an instruction above TASK_SIZE, resulting in an infinite retry of that instruction.
Cc: <stable@kernel.org>
Signed-off-by: Anfei Zhou <anfei.zhou@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Nicolas Pitre
When CONFIG_DEBUG_HIGHMEM is used, the fixmap entry used for a highmem page by kmap_atomic() is always cleared by kunmap_atomic(). This helps find bad usages such as dereferences after the unmap, or overflow into the adjacent fixmap areas. But this debugging aid is completely bypassed when a kmap for the same page already exists, as that kmap is reused instead. On VIVT systems we have no choice but to reuse the existing kmap due to cache coherency issues, but on non-VIVT systems we should always force fixmap usage when debugging is active.
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
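A hedged sketch of the decision being described, simplified from the kmap_atomic() path; the helper name is made up and the fixmap setup itself is elided:

```c
#include <linux/mm.h>
#include <linux/highmem.h>
#include <asm/cachetype.h>

/* Simplified: decide whether an existing kmap may be reused for this page. */
static void *kmap_atomic_pick_mapping(struct page *page)
{
	void *kmap;

	if (!PageHighMem(page))
		return page_address(page);	/* lowmem needs no kmap at all */

#ifdef CONFIG_DEBUG_HIGHMEM
	/*
	 * Only a VIVT cache *requires* reusing an existing kmap for
	 * coherency; otherwise force the dedicated fixmap so that
	 * kunmap_atomic() clears it and catches use-after-unmap bugs.
	 */
	if (!cache_is_vivt())
		kmap = NULL;
	else
#endif
		kmap = kmap_high_get(page);

	return kmap;	/* NULL means: fall through to the fixmap setup */
}
```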
-
- 27 May 2010, 1 commit
-
-
Committed by Linus Walleij
This fixes a bug in mm/init.c when freeing the TCM compile memory: the symbol was being referred to as a char *, which is incorrect, since that dereferences the pointer and feeds in the value stored at that location instead of its address. Change it to a plain char and use &(char) to reference it.
Signed-off-by: Linus Walleij <linus.walleij@stericsson.com>
Cc: <stable@kernel.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
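A hedged illustration of this bug class with made-up symbol names: a linker-provided symbol is an address, so it must be declared as an object and referenced with &, not declared as a pointer.

```c
/* Correct: the symbols themselves mark the start/end addresses. */
extern char __tcm_start_example;
extern char __tcm_end_example;

/*
 * Wrong: "extern char *__tcm_start_example;" would load whatever bytes
 * happen to be stored at that address and treat them as a pointer.
 */

static unsigned long tcm_size_example(void)
{
	return &__tcm_end_example - &__tcm_start_example;	/* take the symbols' addresses */
}
```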
-
- 21 May 2010, 1 commit
-
-
Committed by Santosh Shilimkar
This patch fixes flush_cache_all() for ARMv7 SMP; it was missing from commit b8349b56.
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: <stable@kernel.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 17 May 2010, 1 commit
-
-
Committed by Russell King
Provide a configuration option to allow ARMv6 to use normal bufferable memory for coherent DMA. The option is forced to 'y' for ARMv7 and offered as a configuration option on ARMv6. Enabling it requires drivers to have the necessary barriers to ensure that data in DMA-coherent memory is visible prior to the DMA operation commencing.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
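A hedged sketch of what "the necessary barriers" means for a driver; the device, register offset and descriptor layout here are made up:

```c
#include <linux/io.h>
#include <linux/types.h>
#include <linux/dma-mapping.h>

#define EXAMPLE_DMA_GO	0x00	/* made-up doorbell register offset */

struct example_desc {
	u32 addr;
	u32 len;
};

struct example_dev {
	void __iomem *regs;
	struct example_desc *desc;	/* lives in dma_alloc_coherent() memory */
};

static void example_start_dma(struct example_dev *dev, dma_addr_t buf, size_t len)
{
	dev->desc->addr = buf;
	dev->desc->len  = len;

	wmb();	/* descriptor writes must reach memory before the doorbell */

	writel(1, dev->regs + EXAMPLE_DMA_GO);	/* tell the device to fetch it */
}
```

With bufferable coherent memory the CPU's writes can sit in a write buffer, so the explicit wmb() before kicking the device is what keeps the descriptor contents visible in the right order.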
-
- 15 May 2010, 5 commits
-
-
Committed by Kirill A. Shutemov
Between the "clean D line..." and "invalidate I line" operations in v7_coherent_user_range(), the memory page may get swapped out, and the fault taken on "invalidate I line" cannot be handled properly, causing an oops. In ARMv6 the "external abort on linefetch" fault was replaced by the "instruction cache maintenance fault"; handle it as a translation fault, which fixes the issue. I'm not sure whether it is reasonable to check the architecture version at run time, so for now the check is done at compile time.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Siarhei Siamashka <siarhei.siamashka@nokia.com>
Signed-off-by: Kirill A. Shutemov <kirill@shutemov.name>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Jason McMullan
The L310 cache controller's interface is almost identical to the L210's. One major difference is that the PL310 can have up to 16 ways. This change uses the cache's part ID and the Associativity bits in the AUX_CTRL register to determine the number of ways. This version also prints out the CACHE_ID and AUX_CTRL registers.
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Jason S. McMullan <jason.mcmullan@netronome.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
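A hedged sketch of that detection logic; the field positions and part number are as I recall them from the L2x0 documentation and should be treated as assumptions:

```c
#include <linux/io.h>
#include <linux/types.h>
#include <asm/hardware/cache-l2x0.h>

/* Work out the number of ways from CACHE_ID (part number) and AUX_CTRL. */
static int l2x0_ways_example(void __iomem *base)
{
	u32 cache_id = readl(base + L2X0_CACHE_ID);
	u32 aux = readl(base + L2X0_AUX_CTRL);

	if (((cache_id >> 6) & 0xf) == 3)		/* assumed PL310 part number */
		return (aux & (1 << 16)) ? 16 : 8;	/* associativity select bit */

	return (aux >> 13) & 0xf;			/* L210/L220 associativity field */
}
```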
-
Committed by Russell King
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Alexey Dobriyan
Convert code away from the ->read_proc/->write_proc interfaces. Switch to proc_create()/proc_create_data(), which makes the addition of proc entries reliable with respect to NULL ->proc_fops, NULL ->data and so on. The problem with ->read_proc et al. is described in commit 786d7e16 ("Fix rmmod/read/write races in /proc entries"). This patch is part of an effort to remove the old simple procfs PAGE_SIZE buffer interface.
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
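A hedged sketch of the conversion pattern with made-up names: replace ->read_proc with a seq_file-backed file_operations registered through proc_create().

```c
#include <linux/fs.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>

static int example_proc_show(struct seq_file *m, void *v)
{
	seq_printf(m, "example: %d\n", 42);	/* body of the old read_proc hook */
	return 0;
}

static int example_proc_open(struct inode *inode, struct file *file)
{
	return single_open(file, example_proc_show, NULL);
}

static const struct file_operations example_proc_fops = {
	.owner		= THIS_MODULE,
	.open		= example_proc_open,
	.read		= seq_read,
	.llseek		= seq_lseek,
	.release	= single_release,
};

static int __init example_proc_init(void)
{
	/* ->proc_fops is set before the entry becomes visible, unlike read_proc */
	return proc_create("example", 0, NULL, &example_proc_fops) ? 0 : -ENOMEM;
}
```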
-
- 12 May 2010, 1 commit
-
-
Committed by Vasily Khoruzhick
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
Signed-off-by: Ben Dooks <ben-linux@fluff.org>
-
- 11 May 2010, 1 commit
-
-
Committed by Haojian Zhuang
Enable the Tauros2 L2 cache in mmp2. Tauros2 L2 is shared across Marvell ARM cores.
Signed-off-by: Haojian Zhuang <haojian.zhuang@marvell.com>
Signed-off-by: Eric Miao <eric.y.miao@gmail.com>
-
- 08 May 2010, 4 commits
-
-
Committed by Catalin Marinas
The standard I-cache Invalidate All (ICIALLU) and Branch Predictor Invalidate All (BPIALL) operations are not automatically broadcast to the other CPUs in an ARMv7 MP system. The patch adds the Inner Shareable variants, ICIALLUIS and BPIALLIS, when building for ARMv7 SMP.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
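A hedged sketch of the inner-shareable variants; the CP15 encodings follow the ARMv7 ARM, and the helper name is made up:

```c
/* Invalidate the I-cache and branch predictor across the inner-shareable domain. */
static inline void example_flush_icache_all_is(void)
{
	const unsigned long zero = 0;

	asm volatile(
	"	mcr	p15, 0, %0, c7, c1, 0\n"	/* ICIALLUIS */
	"	mcr	p15, 0, %0, c7, c1, 6\n"	/* BPIALLIS  */
	: : "r" (zero) : "memory");
}
```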
-
Committed by Catalin Marinas
The Snoop Control Unit on the ARM11MPCore hardware does not detect the cache maintenance operations, and the dma_cache_maint*() functions may leave stale cache entries on other CPUs. The solution implemented in this patch performs a Read or Write For Ownership in the ARMv6 DMA cache maintenance functions. These LDR/STR instructions change the cache line state to shared or exclusive so that the cache maintenance operation has the desired effect.
Tested-by: George G. Davis <gdavis@mvista.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
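A conceptual sketch of the trick; the real code is ARM assembly in cache-v6.S, so this inline-asm form is illustrative only: touching the line first makes the local CPU its owner, so the non-broadcast maintenance operation then acts on the right copy.

```c
/* Read for ownership, then clean the line locally (DCCMVAC: c7, c10, 1). */
static inline void example_clean_line_rwfo(const void *addr)
{
	unsigned long tmp;

	asm volatile("ldr %0, [%1]" : "=r" (tmp) : "r" (addr));	/* pull the line into this CPU */
	asm volatile("mcr p15, 0, %0, c7, c10, 1" : : "r" (addr) : "memory");	/* clean to PoC */
	(void)tmp;
}
```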
-
Committed by Catalin Marinas
Commit 7959722b introduced calls to copy_(to|from)_user_page() from access_process_vm() in mm/nommu.c, but copy_to_user_page() was not implemented on noMMU ARM.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
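A hedged sketch of what a noMMU copy_to_user_page() boils down to; treat it as illustrative rather than the exact patch: without an MMU the "user" address is directly accessible, so the copy is a memcpy plus I-cache maintenance for executable mappings.

```c
#include <linux/mm.h>
#include <linux/string.h>
#include <asm/cacheflush.h>

static void example_copy_to_user_page(struct vm_area_struct *vma, struct page *page,
				      unsigned long uaddr, void *dst,
				      const void *src, unsigned long len)
{
	memcpy(dst, src, len);
	if (vma->vm_flags & VM_EXEC)	/* keep the I-cache coherent for code pages */
		__cpuc_coherent_user_range(uaddr, uaddr + len);
}
```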
-
Committed by Catalin Marinas
Commit 31aa8fd6 introduced the __arm_ioremap_caller() function, but the nommu.c version did not have the _caller suffix.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 05 May 2010, 1 commit
-
-
Committed by Catalin Marinas
The show_mem() and mem_init() functions assume that the page map is contiguous and calculate the start and end pages of a bank using (map + pfn). This fails with SPARSEMEM, where pfn_to_page() must be used.
Tested-by: Will Deacon <Will.Deacon@arm.com>
Tested-by: Marek Vasut <marek.vasut@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
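A hedged, illustrative sketch of the difference: pfn_to_page() works for every memory model, while pointer arithmetic from a bank's first struct page only works when mem_map is one contiguous array.

```c
#include <linux/mm.h>

/* Walk a bank of memory page by page, safely under SPARSEMEM. */
static void example_walk_bank(unsigned long start_pfn, unsigned long end_pfn)
{
	unsigned long pfn;

	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
		/* not "map + pfn": the page array may be split into sections */
		struct page *page = pfn_to_page(pfn);

		if (PageReserved(page))
			continue;
		/* ... account the page here ... */
	}
}
```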
-
- 04 May 2010, 1 commit
-
-
Committed by Dave Estes
Handle incorrectly reported permission faults on qsd8650: on a permission fault, retry the MVA-to-PA conversion, and if the retry detects a translation fault, report it as a translation fault.
Cc: Jamie Lokier <jamie@shareable.org>
Signed-off-by: Dave Estes <cestes@quicinc.com>
-
- 02 May 2010, 1 commit
-
-
Committed by Russell King
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 27 Apr 2010, 1 commit
-
-
Committed by Hans Ulli Kroll
Fix a compiler error in copypage-fa.c: the struct vm_area_struct *vma parameter was missing from fa_copy_user_highpage().
Signed-off-by: Hans Ulli Kroll <ulli.kroll@googlemail.com>
-
- 21 Apr 2010, 1 commit
-
-
Committed by Russell King
/tmp/ccJ3ssZW.s: Assembler messages:
/tmp/ccJ3ssZW.s:1952: Error: can't resolve `.text' {.text section} - `.LFB1077'
This is caused because:
.section .data
.section .text
.section .text
.previous
does not return us to the .text section, but the .data section; this makes use of .previous dangerous if the ordering of previous sections is not known. Fix up the other users of .previous; .pushsection and .popsection are a safer pairing to use than .section and .previous.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
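A hedged illustration of the difference, simplified and not the kernel's actual macros:

```c
/* Emit a word into .data from inline asm without disturbing the caller's section. */
static inline void example_emit_marker(void)
{
	asm volatile(
	/*
	 * Fragile form:  .section .data / .long 0 / .previous
	 * ".previous" only swaps with the most recently used section, so
	 * nested ".section" directives can leave the assembler in the wrong one.
	 */
	"	.pushsection .data		\n"
	"	.long	0			\n"
	"	.popsection			\n"	/* always restores the enclosing section */
	: : : "memory");
}
```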
-
- 14 Apr 2010, 4 commits
-
-
Committed by Srinidhi Kasagar
This enables l2x0 support and ensures that the secondary CPU can see the page table and secondary data at this point.
Signed-off-by: Srinidhi Kasagar <srinidhi.kasagar@stericsson.com>
Acked-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
This cache flush occurs when we first insert a page into the page tables, where no page existed previously. There can be no cache lines associated with this virtual mapping, so the cache flush is redundant.
Tested-by: Mike Rapoport <mike@compulab.co.il>
Tested-by: Mikael Pettersson <mikpe at it.uu.se>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Mika Westerberg
When a crash happens in interrupt context there is no userspace context, so always use current->active_mm in those cases.
Signed-off-by: Mika Westerberg <ext-mika.1.westerberg@nokia.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Nicolas Pitre
The VIVT cache of a highmem page is always flushed before the page is unmapped. This cache flush is explicit, through flush_cache_kmaps() in flush_all_zero_pkmaps() or through __cpuc_flush_dcache_area() in kunmap_atomic(). There is also an implicit flush of those highmem pages that were part of a process that just terminated, making those pages free, as the whole VIVT cache has to be flushed on every task switch. Hence unmapped highmem pages need no cache maintenance in that case. However, unmapped pages may still be cached with a VIPT cache because the cache is tagged with physical addresses. There is no need for a whole cache flush during task switching for that reason, and despite the explicit cache flushes in flush_all_zero_pkmaps() and kunmap_atomic(), some highmem pages that were mapped in user space end up still cached even when they become unmapped. So we do have to perform cache maintenance on those unmapped highmem pages in the context of DMA when using a VIPT cache. Unfortunately, it is not possible to perform that cache maintenance using physical addresses, as all the L1 cache maintenance coprocessor functions accept virtual addresses only. Therefore we have no choice but to set up a temporary virtual mapping for that purpose. And of course the explicit cache flushing when unmapping a highmem page on a system with a VIPT cache can now go, which should increase performance. While at it, because the code in __flush_dcache_page() has to be modified anyway, let's also make sure the mapped highmem pages are pinned with kmap_high_get() for the duration of the cache maintenance operation. Because kunmap() unmaps highmem pages lazily, it was reported by Gary King <GKing@nvidia.com> that those pages ended up being unmapped during cache maintenance on SMP, causing segmentation faults.
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
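A hedged sketch of the DMA-side handling described above, heavily simplified; the kmap_atomic slot used for the temporary mapping is an illustrative stand-in for the dedicated helper the patch introduces:

```c
#include <linux/mm.h>
#include <linux/highmem.h>
#include <asm/cachetype.h>

/* Apply a cache maintenance op to one page, even if it is unmapped highmem. */
static void example_dma_cache_maint_page(struct page *page, size_t len, int dir,
					 void (*op)(const void *, size_t, int))
{
	void *vaddr;

	if (!PageHighMem(page)) {
		op(page_address(page), len, dir);	/* lowmem is always mapped */
		return;
	}

	vaddr = kmap_high_get(page);	/* pin an existing kmap so it cannot vanish */
	if (vaddr) {
		op(vaddr, len, dir);
		kunmap_high(page);
	} else if (cache_is_vipt()) {
		/*
		 * Unmapped highmem page, VIPT cache: stale lines may remain,
		 * and L1 maintenance takes virtual addresses only, so create
		 * a temporary mapping just for the duration of the operation.
		 */
		vaddr = kmap_atomic(page, KM_L1_CACHE);	/* illustrative stand-in */
		op(vaddr, len, dir);
		kunmap_atomic(vaddr, KM_L1_CACHE);
	}
	/* With a VIVT cache an unmapped highmem page cannot be cached at all. */
}
```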
-
- 09 Apr 2010, 1 commit
-
-
Committed by Russell King
Write-combining/cached device mappings are not setting the shared bit, which could potentially cause problems on SMP systems since the cache lines won't participate in the cache coherency protocol.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
-