- 21 Feb 2010, 1 commit
By Russell King
Some glibc versions intentionally create lots of alignment faults in their gconv code, which, if not fixed up, result in segfaults during boot. This can prevent systems from booting properly. There is no clear hard-configurable default for this; the desired default depends on the nature of the userspace which is going to be booted. So, provide a way for the alignment fault handler to be configured via the kernel command line.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
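As an illustration of the mechanism, a boot-time switch like this can be wired up with early_param(). This is a minimal sketch, not necessarily the exact mainline code; the parameter name "alignment", the mode bits and the ai_usermode variable are assumptions here.

```c
#include <linux/init.h>
#include <linux/kernel.h>

#define UM_WARN		(1 << 0)	/* warn about user alignment faults */
#define UM_FIXUP	(1 << 1)	/* fix up the faulting access       */
#define UM_SIGNAL	(1 << 2)	/* send SIGBUS instead of fixing up */

static int ai_usermode;			/* policy chosen for userspace faults */

/* e.g. "alignment=3" on the kernel command line -> warn + fixup */
static int __init alignment_setup(char *str)
{
	if (get_option(&str, &ai_usermode) != 1)
		return -EINVAL;
	return 0;
}
early_param("alignment", alignment_setup);
```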
-
- 03 Feb 2010, 1 commit
By Russell King
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 20 Jan 2010, 3 commits
By Tony Lindgren
The comments in cacheflush.h should follow what's in struct cpu_cache_fns. The comments for V6 and V7 are unnecessary.
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Tony Lindgren
The comments in arm_machine_restart() suggest that cpu_proc_fin() will clean and disable the cache and turn off interrupts. This does not seem to be implemented for proc-v7.S; implement it the same way as for proc-v6.S. This also makes kexec work for v7. Note that a related TLB and branch target flush patch is also needed to avoid a kexec "crc error". Note that there are still some issues that seem to be related to the L2 cache being on and causing an occasional uncompress "crc error" with kexec. Anyway, this gets kexec mostly working on V7 for now.
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
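A hedged sketch of what this amounts to, written as C with inline assembly rather than the actual proc-v7.S assembly; the function name is made up, and the bit positions are the architectural I-cache (bit 12) and D-cache (bit 2) enables in the CP15 control register.

```c
/* Sketch only: mirrors the proc-v6.S behaviour described above. */
static inline void v7_proc_fin_sketch(void)
{
	unsigned long sctlr;

	asm volatile("cpsid if");			/* turn off IRQ and FIQ     */
	asm volatile("mrc p15, 0, %0, c1, c0, 0"	/* read control register    */
		     : "=r" (sctlr));
	sctlr &= ~(1 << 12);				/* clear I-cache enable bit */
	sctlr &= ~(1 << 2);				/* clear D-cache enable bit */
	asm volatile("mcr p15, 0, %0, c1, c0, 0"	/* write back: caches off   */
		     : : "r" (sctlr) : "memory");
}
```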
-
By Tony Lindgren
We need to do that if we tinker with the MMU entries. This fixes the occasional bug with kexec where the new kernel fails to uncompress with a "crc error". Most likely at least kexec on v6 and v7 needs this fix.
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
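For illustration, the kind of CP15 maintenance meant by "TLB and branch target flush" looks roughly like the sketch below. This is a hedged sketch, not the actual patch, and the helper name is invented.

```c
/* Sketch: invalidate the unified TLB and the branch predictor array
 * after rewriting MMU entries, so stale translations and branch
 * targets cannot be used afterwards. */
static inline void flush_tlb_and_btb_sketch(void)
{
	unsigned long zero = 0;

	asm volatile("mcr p15, 0, %0, c8, c7, 0" : : "r" (zero));  /* invalidate entire TLB */
	asm volatile("mcr p15, 0, %0, c7, c5, 6" : : "r" (zero));  /* invalidate BTB        */
	asm volatile("mcr p15, 0, %0, c7, c10, 4" : : "r" (zero)); /* data sync barrier     */
	asm volatile("mcr p15, 0, %0, c7, c5, 4" : : "r" (zero));  /* flush prefetch buffer */
}
```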
-
- 13 Jan 2010, 1 commit
By Russell King
A kernel with both ARMv6 and ARMv7 selected results in build errors. Fix this by specifying the proper architectures for these assembly files.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 12 Jan 2010, 1 commit
By Andreas Fenkart
Makes it consistent with the extern declaration used when CONFIG_HIGHMEM is set. Removes redundant casts in printout messages.
Signed-off-by: Andreas Fenkart <andreas.fenkart@streamunlimited.com>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Chen Liqin <liqin.chen@sunplusct.com>
Cc: Lennox Wu <lennox.wu@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 06 Jan 2010, 1 commit
By Bahadir Balban
Signed-off-by: Bahadir Balban <bbalban@b-labs.co.uk>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 01 Jan 2010, 2 commits
By Haojian Zhuang
Check whether L2 is present or not on XSC3. If it is present, enable L2 immediately. Disabling L2 after it has been enabled would result in unpredictable behavior of the XSC3 processor.
Signed-off-by: Haojian Zhuang <haojian.zhuang@marvell.com>
Signed-off-by: Eric Miao <eric.y.miao@gmail.com>
-
By Haojian Zhuang
The outer cache code checked whether L2 was enabled, and if L2 wasn't enabled on XSC3, it would enable it. That is dangerous and would make the system hang. The XSC3 core documentation states: "Following reset, the L2 Unified Cache Enable bit is cleared. To enable the L2 Cache, software may set the bit to a '1' before or at the same time as enabling the MMU. Enabling the L2 Cache after the MMU has been enabled or disabling the L2 Cache after the L2 Cache has been enabled, may result in unpredictable behavior of the processor." By the time the outer cache is initialized, the MMU is already enabled, so we cannot enable L2 at that point.
Signed-off-by: Haojian Zhuang <haojian.zhuang@marvell.com>
Signed-off-by: Eric Miao <eric.y.miao@gmail.com>
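A hedged sketch of the rule that follows for the outer-cache init path; CR_L2 is the control-register L2 enable bit used for XScale3, and the initcall shape is an assumption, not the actual patch.

```c
#include <linux/init.h>
#include <asm/system.h>		/* get_cr(), CR_L2 (XScale3 L2 enable bit) */

static int __init xsc3_l2_init_sketch(void)
{
	/* L2 must be enabled before or together with the MMU; by the time
	 * this initcall runs the MMU is on, so if L2 is still off we must
	 * not touch it -- just skip registering the outer-cache hooks. */
	if (!(get_cr() & CR_L2))
		return 0;

	/* ...set up the XSC3 L2 outer-cache operations here... */
	return 0;
}
core_initcall(xsc3_l2_init_sketch);
```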
-
- 24 Dec 2009, 2 commits
By Russell King
PAGE_KERNEL should not be executable; any area marked executable can be prefetched into the instruction cache. We don't want vmalloc areas to be read in this way.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King
It is unpredictable to have the same memory mapped using different shared bit settings on ARMv6 and ARMv7 CPUs. Fix this for the CPU write buffer bug test.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 19 Dec 2009, 1 commit
By Russell King
26-bit ARM support was removed a long time ago, and this symbol has been defined to be 'y' ever since. As it is never disabled anymore, we can kill it without any side effects.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 18 Dec 2009, 1 commit
By Anand Gadiyar
Commit 2c9b9c84 added an argument to __cpuc_flush_dcache_page and renamed it. Update a caller of the old function to fix this build error:

  CC      arch/arm/mm/copypage-v6.o
  arch/arm/mm/copypage-v6.c: In function 'v6_copy_user_highpage_nonaliasing':
  arch/arm/mm/copypage-v6.c:51: error: implicit declaration of function '__cpuc_flush_dcache_page'
  make[1]: *** [arch/arm/mm/copypage-v6.o] Error 1
  make: *** [arch/arm/mm] Error 2

Reported-by: Jinsung Yang <jsgood.yang@samsung.com>
Signed-off-by: Anand Gadiyar <gadiyar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
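A hedged sketch of the caller fix-up being described (kto is the kernel mapping of the destination page, as in copypage-v6.c; the wrapper function here is invented for illustration).

```c
#include <asm/cacheflush.h>
#include <asm/page.h>

static void flush_copied_page_sketch(void *kto)
{
	/* old, removed by 2c9b9c84:  __cpuc_flush_dcache_page(kto);    */
	/* the renamed interface takes an explicit size, so pass it in: */
	__cpuc_flush_dcache_area(kto, PAGE_SIZE);
}
```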
-
- 14 Dec 2009, 4 commits
By Russell King
... and rename the function since it no longer operates on just pages.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Nicolas Pitre
There are not enough users to warrant its existence, and it is actually an obstacle to progress with the new DMA API which cannot cover this case properly. To keep backward compatibility, let's perform the necessary custom cache maintenance locally in the only driver affected.
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King
There's no point having the hardware support background operations if we issue a cache operation and then wait for it to complete before calculating the address of the next operation. We gain no advantage in the cache controller stalling the bus until completion. What we should be doing is using the 'wait' time productively by calculating the address of the next operation, and only then waiting for the previous operation to complete. This means that cache operations can occur in parallel with the CPU calculating the next address.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
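A hedged sketch of the resulting loop shape: issue the operation, do the address arithmetic for the next line while the controller is busy, and only then poll for completion. The register offset comes from the l2x0 header; the 32-byte line size and the function name are assumptions.

```c
#include <linux/io.h>
#include <asm/processor.h>		/* cpu_relax() */
#include <asm/hardware/cache-l2x0.h>	/* L2X0_CLEAN_LINE_PA */

#define CACHE_LINE_SIZE		32	/* assumption: L2x0 line size */

static void l2x0_clean_range_sketch(void __iomem *base, unsigned long start,
				    unsigned long end)
{
	void __iomem *reg = base + L2X0_CLEAN_LINE_PA;

	start &= ~(CACHE_LINE_SIZE - 1);
	while (start < end) {
		writel(start, reg);		/* kick off this line's clean */
		start += CACHE_LINE_SIZE;	/* useful work while it runs  */
		while (readl(reg) & 1)		/* then wait for completion   */
			cpu_relax();
	}
}
```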
-
By Russell King
Taking the spinlock for every iteration is very expensive; instead, batch iterations up into 4K blocks, releasing and reacquiring the spinlock between each block.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
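A hedged sketch of the batching pattern described; the lock name mirrors the driver's, but the per-line operation is left as a comment and the helper name is invented.

```c
#include <linux/spinlock.h>
#include <linux/kernel.h>		/* min() */

#define CACHE_LINE_SIZE		32	/* assumption: L2x0 line size */

static DEFINE_SPINLOCK(l2x0_lock);

static void l2x0_op_range_sketch(unsigned long start, unsigned long end)
{
	unsigned long flags;

	spin_lock_irqsave(&l2x0_lock, flags);
	while (start < end) {
		unsigned long blk_end = start + min(end - start, 4096UL);

		while (start < blk_end) {
			/* issue one cache-line operation for 'start' here */
			start += CACHE_LINE_SIZE;
		}

		if (blk_end < end) {
			/* let other CPUs in between 4K blocks */
			spin_unlock_irqrestore(&l2x0_lock, flags);
			spin_lock_irqsave(&l2x0_lock, flags);
		}
	}
	spin_unlock_irqrestore(&l2x0_lock, flags);
}
```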
-
- 11 Dec 2009, 1 commit
By Al Viro
We want addr - (pgoff << PAGE_SHIFT) consistently coloured...
Acked-by: Paul Mundt <lethal@linux-sh.org>
Acked-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
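For illustration, the colouring rule referred to can be expressed as the sketch below (mirroring the COLOUR_ALIGN-style alignment used by ARM's mmap code; the function name is invented): choose an address whose low bits within SHMLBA match those of the file offset, so addr - (pgoff << PAGE_SHIFT) is constant modulo SHMLBA.

```c
#include <linux/mm.h>
#include <asm/shmparam.h>	/* SHMLBA: cache-colour granularity */

/* Sketch: round 'addr' up to an SHMLBA boundary, then add the colour
 * bits of the file offset, so every mapping of the same object lands
 * in the same cache colour. */
static unsigned long colour_align_sketch(unsigned long addr,
					 unsigned long pgoff)
{
	unsigned long base = (addr + SHMLBA - 1) & ~(SHMLBA - 1);

	return base + ((pgoff << PAGE_SHIFT) & (SHMLBA - 1));
}
```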
-
- 08 Dec 2009, 1 commit
By Saeed Bishara
... to be the same as proc-v6.
Signed-off-by: Saeed Bishara <saeed@marvell.com>
Signed-off-by: Nicolas Pitre <nico@marvell.com>
-
- 04 Dec 2009, 8 commits
By Russell King
Dirk Behme reported instability on ARM11 SMP (VIPT non-aliasing cache) caused by the dynamic linker changing protection on text pages to write GOT entries. The problem is due to an interaction between the write faulting code providing new anonymous pages which are incoherent with the I-cache due to write buffering, and the I-cache not having been invalidated. Commit a4db94d plugs the hole with the data cache coherency. This patch provides the other half of the fix by flushing the I-cache in flush_cache_range() for VM_EXEC VMAs (which is what we have when the region is being made executable again). This ensures that the I-cache will be up to date with the newly COW'd pages. Note: if users are writing instructions, then they still need to use the ARM sys_cacheflush API to ensure that the caches are correctly synchronized.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
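A hedged sketch of the VM_EXEC handling described, for the VIPT non-aliasing case; the rest of flush_cache_range() is elided and the function name is invented.

```c
#include <linux/mm.h>
#include <asm/cacheflush.h>

static void flush_exec_range_sketch(struct vm_area_struct *vma)
{
	/* ...any D-cache range maintenance would go here... */

	if (vma->vm_flags & VM_EXEC)
		__flush_icache_all();	/* bring I-cache up to date with the COW'd pages */
}
```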
-
By Russell King
flush_cache_mm() is called in two cases:
1. when a process exits, just before the page tables are torn down. We can allow the stale lines to evict themselves over time without causing any harm.
2. when a process forks, and we've allocated a new ASID. The instruction cache issues are dealt with as pages are brought into the new process address space. Flushing the I-cache here is therefore unnecessary.
However, we must keep the VIPT aliasing D-cache flush to ensure that any dirty cache lines are not written back after the pages have been reallocated for some other use - which would result in corruption.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Catalin Marinas
The I and D caches for copy-on-write pages on processors with write-allocate caches become incoherent, causing problems for applications that rely on CoW for text pages (e.g. the dynamic linker relocating symbols in a text page). This patch flushes the D-cache for such pages.
Cc: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King
Both call sites for __flush_dcache_page() end up calling __flush_icache_all() themselves, so having __flush_dcache_page() do this as well is wasteful. Remove the duplicated icache flushing.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Srinidhi Kasagar
If running in non-secure mode, accessing some l2x0 registers will fault. So check whether the l2x0 is already enabled; if so, do not access those secure registers.
Signed-off-by: srinidhi kasagar <srinidhi.kasagar@stericsson.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
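A hedged sketch of the resulting init-time check; the register offsets are the standard l2x0 ones, while the function shape and the elided invalidate step are assumptions.

```c
#include <linux/io.h>
#include <linux/types.h>
#include <asm/hardware/cache-l2x0.h>	/* L2X0_CTRL, L2X0_AUX_CTRL */

static void l2x0_init_sketch(void __iomem *base, u32 aux_val)
{
	/* If the L2 is already enabled (e.g. by secure-side firmware),
	 * the aux-control, invalidate and enable writes below would
	 * fault in non-secure mode, so skip them entirely. */
	if (!(readl(base + L2X0_CTRL) & 1)) {
		writel(aux_val, base + L2X0_AUX_CTRL);
		/* ...invalidate all ways here before enabling... */
		writel(1, base + L2X0_CTRL);
	}
}
```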
-
- 02 Dec 2009, 3 commits
By Russell King
The zero page is read-only, and has its cache state cleared during boot. No further maintenance for this page is required.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
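A hedged sketch of the early-out this permits in the D-cache flushing path (not the full function; the name is invented).

```c
#include <linux/mm.h>
#include <linux/highmem.h>

static void flush_dcache_page_sketch(struct page *page)
{
	/* The zero page is read-only and its cache state was cleared at
	 * boot, so there is never anything to clean or invalidate. */
	if (page == ZERO_PAGE(0))
		return;

	/* ...normal D-cache maintenance would follow here... */
}
```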
-
By Russell King
page_address() is a function call rather than a macro, and so:
  if (page_address(page))
      do_something(page_address(page));
results in two calls to this function. This is unnecessary; remove the duplication.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
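For illustration, the pattern the change moves to simply caches the result in a local variable (do_something() is a placeholder, not a real kernel function).

```c
#include <linux/mm.h>

static void do_something(void *addr)
{
	/* placeholder for whatever the caller does with the mapping */
}

static void handle_page_sketch(struct page *page)
{
	void *addr = page_address(page);	/* call it once */

	if (addr)
		do_something(addr);		/* reuse the cached pointer */
}
```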
-
By Russell King
We had two copies of the wrapper code for VIVT cache flushing - one in asm/cacheflush.h and one in arch/arm/mm/flush.c. Reduce this down to one common copy.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 01 Dec 2009, 1 commit
By Tomáš Čech
Signed-off-by: Tomáš Čech <sleep_walker@suse.cz>
Signed-off-by: Eric Miao <eric.y.miao@gmail.com>
-
- 28 Nov 2009, 2 commits
By Lennert Buytenhek
Support for the Tauros2 L2 cache controller as used with the PJ1 and PJ4 CPUs.
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: Saeed Bishara <saeed@marvell.com>
Signed-off-by: Nicolas Pitre <nico@marvell.com>
-
By Saeed Bishara
The Marvell Dove (88AP510) is a high-performance, highly integrated, low-power SoC with a high-end ARM-compatible processor (known as PJ4), a graphics processing unit, high-definition video decoding acceleration hardware, and a broad range of peripherals.
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: Saeed Bishara <saeed@marvell.com>
Signed-off-by: Nicolas Pitre <nico@marvell.com>
-
- 25 Nov 2009, 6 commits
By Russell King
On ARMv7, it is invalid to map the same physical address multiple times with different memory types. Since system RAM is already mapped as 'memory', subsequent remapping of it must retain this attribute. However, DMA memory maps it as "strongly ordered". Fix this by introducing 'pgprot_dmacoherent()', which provides the necessary page table bits for DMA mappings.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
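A hedged sketch of how the new helper is meant to be used when deriving the protection for the DMA remapping (pgprot_kernel is ARM's base kernel memory protection; the wrapper function is invented for illustration).

```c
#include <asm/pgtable.h>	/* pgprot_dmacoherent(), pgprot_kernel */

/* Sketch: derive the DMA mapping's protection from the normal kernel
 * 'memory' attributes, so RAM is never remapped with a conflicting
 * strongly-ordered memory type. */
static pgprot_t dma_prot_sketch(void)
{
	return pgprot_dmacoherent(pgprot_kernel);
}
```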
-
By Russell King
It's unnecessary; x86 doesn't do it, and ALSA doesn't require it anymore.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
-
By Russell King
This entirely separates the DMA coherent buffer remapping code from the allocation code, and gets rid of the duplicate copy in the !MMU section.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
-
By Russell King
IXP23xx added support for dma_alloc_coherent() for DMA arches with an exception in dma_alloc_coherent(). This is a subset of what goes on in __dma_alloc(), and there is no reason why dma_alloc_writecombine() should not be given the same treatment (except, maybe, that IXP23xx doesn't use it). We can better deal with this by moving the arch_is_coherent() test inside __dma_alloc() and killing the code duplication.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
-
By Russell King
There is no point wrapping the contents of this function with #ifdef CONFIG_MMU when we can place it and the core_initcall() entirely within the existing conditional block.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
-
By Russell King
We effectively have three implementations of dma_free_coherent() mixed up in the code: the incoherent MMU, coherent MMU and noMMU versions. The coherent MMU and noMMU versions are actually functionally identical. The incoherent MMU version is almost the same, but with the additional step of unmapping the secondary mapping. Separate out this additional step into __dma_free_remap() and simplify the resulting dma_free_coherent() code.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
-