- 03 Feb 2011, 7 commits
-
-
Committed by Russell King

Limit DMA_CACHE_RWFO to only V6K SMP CPUs - V6 CPUs aren't SMP capable, so the read/write for ownership work-around doesn't apply to them.

Acked-by: Will Deacon <will.deacon@arm.com>
Tested-by: Sourav Poddar <sourav.poddar@ti.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King

Now that we build a v6+v6k+v7 kernel with -march=armv6k for everything, we don't need to disable SWP emulation to work around the build problem with OMAP.

Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Sourav Poddar <sourav.poddar@ti.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King

CPU_32v6K controls whether we use the ARMv6K extension instructions in the kernel, and in some places whether we use SMP-safe code sequences (e.g., bitops).

MX3 prevents the selection of this option to ensure that it is not enabled for their CPU, which is ARMv6 only. Now that we've split the CPU_V6 option, V6K support won't be offered for MX3 anymore.

OMAP prevents the selection of this option in an attempt to produce a kernel which runs on architectures from ARMv6 to ARMv7 MPCore. We now achieve this in a different way (see the previous patches).

As such, we no longer need to offer this as a configuration option to the user.

Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Sourav Poddar <sourav.poddar@ti.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King

Rather than turning off CPU domain switching when the build architecture includes ARMv6K, thereby causing problems for ARMv6-supporting kernels, turn it on when it's required to support a CPU architecture.

Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Acked-by: Tony Lindgren <tony@atomide.com>
Tested-by: Sourav Poddar <sourav.poddar@ti.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King

If CONFIG_CPU_V6 is enabled, then the kernel must support ARMv6 CPUs which don't have the V6K extensions implemented. Always use the dummy store-exclusive method to ensure that the exclusive monitors are cleared. If CONFIG_CPU_V6 is not set, but CONFIG_CPU_32v6K is enabled, then we have the K extensions available on all CPUs we're building support for, so we can use the new clear-exclusive instruction.

Acked-by: Tony Lindgren <tony@atomide.com>
Tested-by: Sourav Poddar <sourav.poddar@ti.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
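A minimal sketch of the resulting split, with a hypothetical helper name (the kernel does this in assembly on the exception-return path):

    static inline void clear_exclusive_monitor(void)	/* name illustrative */
    {
    #ifdef CONFIG_CPU_V6
    	unsigned long tmp, scratch;

    	/* Plain ARMv6 has no CLREX: a dummy STREX clears the local
    	 * monitor whether or not the store succeeds. */
    	asm volatile("strex	%0, %1, [%2]"
    		     : "=&r" (tmp)
    		     : "r" (0), "r" (&scratch)
    		     : "memory");
    #else
    	/* CONFIG_CPU_32v6K: every supported CPU has the K extensions. */
    	asm volatile("clrex" ::: "memory");
    #endif
    }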
-
Committed by Russell King

Make Dove platforms select the new V6K CPU option.

Tested-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King

Introduce a CPU_V6K configuration option for platforms to select if they have a V6K CPU core. This allows us to identify whether we need to support ARMv6 CPUs without the V6K SMP extensions at build time. Currently CPU_V6K is just an alias for CPU_V6, and all places which reference CPU_V6 are replaced by (CPU_V6 || CPU_V6K). Select CPU_V6K from platforms which are known to be V6K-only.

Acked-by: Tony Lindgren <tony@atomide.com>
Tested-by: Sourav Poddar <sourav.poddar@ti.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
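Illustrative only - the way the split surfaces in C conditionals; code that previously tested CONFIG_CPU_V6 alone now has to cover both symbols:

    #if defined(CONFIG_CPU_V6) || defined(CONFIG_CPU_V6K)
    	/* paths common to all ARMv6-class cores (V6 and V6K) */
    #endif

    #if defined(CONFIG_CPU_V6)
    	/* work-arounds needed only when plain V6 cores (without the
    	 * K extensions) may be present in the build */
    #endif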
-
- 14 Jan 2011, 2 commits
-
-
Committed by Dave Martin

Commit d30e45ee (ARM: pgtable: switch order of Linux vs hardware page tables) introduced a pre-increment addressing offset which is out of range for Thumb-2: Thumb-2 only permits offsets below 256. So split the instruction in two for Thumb-2.

Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
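The constraint can be sketched in C with inline assembly; the 2048-byte offset and the function name are made up for illustration:

    static inline void store_with_writeback(unsigned long *p, unsigned long val)
    {
    #ifdef CONFIG_THUMB2_KERNEL
    	/* Thumb-2 pre-indexed immediates are limited to under 256,
    	 * so split the address update from the store. */
    	asm volatile("add	%0, %0, #2048\n\t"
    		     "str	%1, [%0]"
    		     : "+r" (p) : "r" (val) : "memory");
    #else
    	/* The ARM encoding allows a 12-bit immediate, so one
    	 * pre-indexed store with writeback suffices. */
    	asm volatile("str	%1, [%0, #2048]!"
    		     : "+r" (p) : "r" (val) : "memory");
    #endif
    }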
-
Committed by Andrea Arcangeli

The pte alloc routines must wait for split_huge_page if the pmd is not present and not null (i.e. pmd_trans_splitting). The additional branches are optimized away at compile time when the config option is off, since pmd_trans_splitting() is then constant-false. However, we must pass the vma down in order to know the anon_vma lock to wait on.

[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
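In pattern form, each pte alloc path gains a check like this (condensed sketch using the THP helpers of that era):

    	pmd_t *pmd = pmd_offset(pud, address);

    	if (unlikely(pmd_trans_splitting(*pmd))) {
    		/* split_huge_page() is running on this pmd: sleep on the
    		 * anon_vma lock until it finishes - which is why the vma
    		 * must now be passed down to the pte alloc routines. */
    		wait_split_huge_page(vma->anon_vma, pmd);
    		/* ...the caller then re-walks the page tables... */
    	}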
-
- 13 Jan 2011, 1 commit
-
-
Committed by Linus Walleij

The kerneldoc for this function is at odds with the DMA-API document, which is the authoritative reference, so bring the kerneldoc in line with it.

Signed-off-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 07 Jan 2011, 2 commits
-
-
Committed by Catalin Marinas

This option uses LDREXB/STREXB to emulate SWPB, but these instructions are not supported on all ARMv6 processors.

Reported-by: Anand Gadiyar <gadiyar@ti.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Leif Lindholm <Leif.Lindholm@arm.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
-
Committed by Russell King

Add ARM support for the DMA debug infrastructure, which allows DMA API usage to be debugged.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
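The arch-side wiring is essentially one init call into the generic debug core (sketch; the preallocation count is illustrative):

    #include <linux/dma-debug.h>

    #define PREALLOC_DMA_DEBUG_ENTRIES	4096

    static int __init dma_debug_do_init(void)
    {
    	/* Preallocate tracking entries; the core then cross-checks
    	 * every map/unmap and sync call made through the DMA API. */
    	dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
    	return 0;
    }
    fs_initcall(dma_debug_do_init);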
-
- 03 Jan 2011, 1 commit
-
-
Committed by Russell King

Replace the page_to_dma() and dma_to_page() macros with their PFN equivalents. This allows us to map parts of memory which do not have a struct page allocated to them to bus addresses. This will be used internally by dma_alloc_coherent()/dma_alloc_writecombine().

Build tested on Versatile, OMAP1, IOP13xx and KS8695.

Tested-by: Janusz Krzysztofik <jkrzyszt@tis.icnet.pl>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
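The shape of the change, sketched (here __pfn_to_bus()/__bus_to_pfn() stand for the platform's bus translation hooks):

    static inline dma_addr_t pfn_to_dma(struct device *dev, unsigned long pfn)
    {
    	return (dma_addr_t)__pfn_to_bus(pfn);
    }

    static inline unsigned long dma_to_pfn(struct device *dev, dma_addr_t addr)
    {
    	return __bus_to_pfn(addr);
    }

    /* The page-based forms become thin wrappers, usable only where a
     * struct page actually exists for the memory: */
    #define page_to_dma(dev, page)	pfn_to_dma(dev, page_to_pfn(page))
    #define dma_to_page(dev, addr)	pfn_to_page(dma_to_pfn(dev, addr))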
-
- 24 Dec 2010, 1 commit
-
-
Committed by Russell King

This reverts commit 06c10884, as promised in the warning message.
-
- 22 Dec 2010, 6 commits
-
-
Committed by Russell King

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King

The hardware page tables use an XN ('execute never') bit. Historically, we've had a Linux 'execute allow' bit, in the positive sense. Get rid of this artifact, as future hardware will continue to use the XN sense.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King

FIRST_USER_PGD_NR is now unnecessary, as it has been replaced by FIRST_USER_ADDRESS except in the architecture code. Fix up the last usage of FIRST_USER_PGD_NR, and remove the definition.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King

Remove some knowledge of our 2-level page table layout from the identity mapping code - we assume that a step size of PGDIR_SIZE will allow us to step over all entries. While this is true today, it won't be true in the near future.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
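A hedged sketch of the assumption being flagged (identity_mapping_add()'s real signature may differ):

    	unsigned long addr;

    	/* Stepping by PGDIR_SIZE visits every entry only while each
    	 * pgd entry spans exactly PGDIR_SIZE of address space - true
    	 * for today's 2-level layout, but not in general. */
    	for (addr = start & PGDIR_MASK; addr < end; addr += PGDIR_SIZE)
    		identity_mapping_add(pgd, addr, addr + PGDIR_SIZE);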
-
Committed by Russell King

We have two places where we create identity mappings: one when we bring secondary CPUs online, and one where we set up some mappings for soft-reboot. Combine these two into a single implementation. Also collect up the identity mapping deletion function.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King

This switches the ordering of the Linux vs hardware page tables in each page, thereby eliminating some of the arithmetic in the page table walks. As we now place the Linux page table at the beginning of the page, we can deal with the offset in the pgt by simply masking it away, along with the other control bits. This also makes all the arithmetic positive, rather than a mixture.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
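The resulting layout of each page-table page, sketched with constants along the lines of the 2-level headers (the helper at the end is hypothetical):

    /* One 4K page now holds, in this order:
     *
     *   offset 0x000: Linux pte tables (2 x 256 entries, software view)
     *   offset 0x800: h/w pte tables   (2 x 256 entries, what the MMU walks)
     */
    #define PTRS_PER_PTE		512
    #define PTE_HWTABLE_PTRS	(PTRS_PER_PTE)
    #define PTE_HWTABLE_OFF		(PTE_HWTABLE_PTRS * sizeof(pte_t))

    /* With the Linux tables first, the h/w table address held in a pmd
     * entry converts to the Linux view by masking alone: */
    #define pmd_to_linux_pte(pmdval)	((pte_t *)((pmdval) & PAGE_MASK))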
-
- 20 Dec 2010, 3 commits
-
-
Committed by Nicolas Pitre

Since commit 3e4d3af5 "mm: stack based kmap_atomic()", it is actively wrong to rely on fixed kmap type indices (namely KM_L2_CACHE), as kmap_atomic() now ignores them entirely and a concurrent instance may happily reuse any slot for any purpose. Because kmap_atomic() is now able to deal with reentrancy, we can get rid of the ad hoc mapping here.

While the code is made much simpler, the use of __kunmap_atomic() introduces a needless cache flush. It is not clear whether removing that flush would be worth the maintenance cost (I don't think there are that many highmem users on this platform anyway), but that should be reconsidered when/if someone cares enough to do some measurements.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
-
Committed by Nicolas Pitre

Since commit 3e4d3af5 "mm: stack based kmap_atomic()", it is actively wrong to rely on fixed kmap type indices (namely KM_L2_CACHE), as kmap_atomic() now ignores them entirely and a concurrent instance may happily reuse any slot for any purpose. Because kmap_atomic() is now able to deal with reentrancy, we can get rid of the ad hoc mapping here, and we no longer even have to disable IRQs (in the highmem case).

While the code is made much simpler, the use of __kunmap_atomic() introduces a needless cache flush. It is not clear whether removing that flush would be worth the maintenance cost (I don't think there are many highmem users on this platform, if any).

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
-
Committed by Nicolas Pitre

Since commit 3e4d3af5 "mm: stack based kmap_atomic()", it is no longer necessary to carry an ad hoc version of kmap_atomic() - added in commit 7e5a69e8 "ARM: 6007/1: fix highmem with VIPT cache and DMA" - to cope with reentrancy. In fact, it is now actively wrong to rely on fixed kmap type indices (namely KM_L1_CACHE), as kmap_atomic() now ignores them entirely and a concurrent instance may reuse any slot for any purpose.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
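The calling-convention change driving all three patches, in sketch form (single-argument style shown as it later became standard):

    void *vaddr;

    /* Before: a fixed slot, with exclusivity enforced by the caller. */
    vaddr = kmap_atomic(page, KM_L2_CACHE);
    /* ... operate on the mapping ... */
    kunmap_atomic(vaddr, KM_L2_CACHE);

    /* After "mm: stack based kmap_atomic()": slots come from a per-CPU
     * stack, the type argument is ignored, and nesting is safe. */
    vaddr = kmap_atomic(page);
    /* ... operate on the mapping ... */
    kunmap_atomic(vaddr);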
-
- 18 Dec 2010, 2 commits
-
-
Committed by Haojian Zhuang

Since CPU_PJ4 is shared between PXA95x and MMP2, select CPU_PJ4 in the MMP2 configuration.

Signed-off-by: Haojian Zhuang <haojian.zhuang@marvell.com>
Signed-off-by: Eric Miao <eric.y.miao@gmail.com>
-
Committed by Haojian Zhuang

The core of PXA955 is PJ4. Add support for the new PJ4 core, and add the new CONFIG_PXA95x macro.

Signed-off-by: Haojian Zhuang <haojian.zhuang@marvell.com>
Signed-off-by: Eric Miao <eric.y.miao@gmail.com>
-
- 15 Dec 2010, 1 commit
-
-
Committed by Valentine Barshak

Cache ownership must be acquired by reading/writing data from the cache line to make cache operations have the desired effect on an SMP MPCore CPU. However, ownership is never acquired in the v6_dma_inv_range function when cleaning the first line and flushing the last one, in case the address is not aligned to a D_CACHE_LINE_SIZE boundary. Fix this by reading/writing data if needed, before performing the cache operations. While at it, fix v6_dma_flush_range to prevent RWFO outside the buffer.

Cc: stable@kernel.org
Signed-off-by: Valentine Barshak <vbarshak@mvista.com>
Signed-off-by: George G. Davis <gdavis@mvista.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
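A hedged C rendition of the read/write-for-ownership trick itself (the real kernel does this with a LDR/STR pair in the assembly cache functions):

    static inline void rwfo_own_line(void *p)	/* name illustrative */
    {
    	unsigned long tmp;

    	/* Reading pulls the line into this CPU's cache; writing the
    	 * same value back takes it in owned state, so a subsequent
    	 * clean/invalidate by MVA acts on the local copy. */
    	asm volatile("ldr	%0, [%1]\n\t"
    		     "str	%0, [%1]"
    		     : "=&r" (tmp) : "r" (p) : "memory");
    }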
-
- 13 Dec 2010, 2 commits
-
-
Committed by Catalin Marinas

The current implementation of the v7_coherent_*_range functions assumes that the D and I cache lines have the same size, which is incorrect architecturally. This patch adds the icache_line_size macro, which reads the CTR register. The main loop in v7_coherent_*_range is split into two independent loops for the D and I caches. This also has the performance advantage that the DSB is moved outside the main loop.

Reported-by: Kevin Sapp <ksapp@quicinc.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Catalin Marinas

The current implementation of the dcache_line_size macro reads the L1 cache size from the CCSIDR register. This, however, is not guaranteed to be the smallest cache line in the cache hierarchy. The patch changes the macro to use the more architecturally correct CTR register.

Reported-by: Kevin Sapp <ksapp@quicinc.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
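In C terms, the two patches above amount to deriving both minimum line sizes from CTR (IminLine in bits [3:0], DminLine in bits [19:16], each the log2 of the line length in words); the real macros are assembly:

    static inline unsigned int read_ctr(void)
    {
    	unsigned int ctr;

    	asm("mrc p15, 0, %0, c0, c0, 1" : "=r" (ctr));	/* Cache Type Register */
    	return ctr;
    }

    /* Smallest I-cache line in the hierarchy, in bytes. */
    static inline unsigned int icache_line_size(void)
    {
    	return 4 << (read_ctr() & 0xf);
    }

    /* Smallest D-cache/unified line in the hierarchy, in bytes. */
    static inline unsigned int dcache_line_size(void)
    {
    	return 4 << ((read_ctr() >> 16) & 0xf);
    }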
-
- 30 Nov 2010, 1 commit
-
-
Committed by Dave Martin

Directives such as .long and .word do not magically cause the assembler location counter to become aligned in gas. As a result, using these directives in code sections can result in misaligned data words when building a Thumb-2 kernel (CONFIG_THUMB2_KERNEL). This is a Bad Thing, since the ABI permits the compiler to assume that fundamental types of word size or above are word-aligned when accessing them from C. If the data is not really word-aligned, this can cause impaired performance and stray alignment faults in some circumstances.

In general, the following rules should be applied when using data word declaration directives inside code sections:

* .quad and .double: .align 3
* .long, .word, .single, .float: .align (or .align 2)
* .short: no explicit alignment required, since Thumb-2 instructions are always 2 or 4 bytes in size.

Here, the problematic data word sits immediately after an instruction. In this specific case, we can achieve the desired alignment by forcing a 32-bit branch instruction using the W() macro, since the assembler location counter is already 32-bit aligned in this case.

Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 27 Nov 2010, 4 commits
-
-
Committed by Russell King

This makes all code dealing with pte values use the same type.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King

Ensure that physical addresses are typed as phys_addr_t.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King

Remove knowledge of the 2-level wrapping in pgd_free(), and use the pXd_none_or_clear_bad() macros when checking the entries.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King

These old names are just aliases for pgd_alloc/pgd_free. Just use the new names.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 24 Nov 2010, 1 commit
-
-
Committed by Russell King

Adding KERN_WARNING in the middle of strings now produces those tokens in the output, rather than accepting the level as was once the case. Fix this in the one reported case. There might be more...

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
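For reference, the broken and fixed patterns (the message text is invented for illustration):

    /* Broken: the level token lands mid-string and is printed literally. */
    printk("CPU%u: " KERN_WARNING "unexpected state\n", cpu);

    /* Fixed: the level must prefix the whole message. */
    printk(KERN_WARNING "CPU%u: unexpected state\n", cpu);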
-
- 18 Nov 2010, 1 commit
-
-
Committed by Magnus Damm

This patch adds initial support for the Renesas SH-Mobile AG5. At this point the AG5 CPU support is limited to the ARM core, SCIF serial and a CMT timer, together with the L2 cache and the GIC. The AG5EVM board also supports Ethernet.

Future patches will add support for GPIO, INTCS, CPGA and platform data / driver updates for devices such as IIC, LCDC, FSI, KEYSC, CEU and SDHI, among others. The code in entry-macro.S will be cleaned up when the ARM IRQ demux code improvements have been merged.

Depends on the AG5EVM mach-type recently registered but not yet present in arch/arm/tools/mach-types. As the AG5EVM board comes with 512MiB memory, it is recommended to turn on HIGHMEM.

Many thanks to Yoshii-san for the initial bring-up.

Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
- 15 Nov 2010, 1 commit
-
-
Committed by Jesper Juhl

It's enough to include asm/smp_plat.h once in arch/arm/mm/flush.c.

Signed-off-by: Jesper Juhl <jj@chaosbits.net>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
- 08 Nov 2010, 1 commit
-
-
Committed by Russell King

An off-by-one bug meant that the DMA coherent allocator was aligning to one more bit than it should, causing it to run out of available memory more quickly. Fix this.

Reported-by: Petr Štetiar <ynezz@true.cz>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 04 Nov 2010, 2 commits
-
-
Committed by Leif Lindholm

The SWP instruction was deprecated in the ARMv6 architecture, superseded by the LDREX/STREX family of instructions for load-linked/store-conditional operations. The ARMv7 multiprocessing extensions mandate that SWP/SWPB instructions are treated as undefined from reset, with the ability to enable them through the System Control Register SW bit.

This patch adds the alternative solution to emulate the SWP and SWPB instructions using LDREX/STREX sequences, and logs statistics to /proc/cpu/swp_emulation. To correctly deal with copy-on-write, it also modifies cpu_v7_set_pte_ext to change the mappings to privileged RO when user RO.

Signed-off-by: Leif Lindholm <leif.lindholm@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Kirill A. Shutemov <kirill@shutemov.name>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
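The emulation core is the classic retry loop; a hedged sketch of the word-sized case (the real handler also performs access checks, fault handling and the /proc accounting, plus a byte variant using LDREXB/STREXB):

    static unsigned long emulate_swp(unsigned long *addr, unsigned long newval)
    {
    	unsigned long oldval, failed;

    	do {
    		asm volatile("ldrex	%0, [%3]\n\t"
    			     "strex	%1, %2, [%3]"
    			     : "=&r" (oldval), "=&r" (failed)
    			     : "r" (newval), "r" (addr)
    			     : "memory");
    	} while (failed);	/* STREX yields 0 on success */

    	return oldval;		/* SWP returns the previous contents */
    }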
-
Committed by Catalin Marinas

This patch removes the domain switching functionality via the set_fs and __switch_to functions on cores that have a TLS register.

Currently, the ioremap and vmalloc areas share the same level 1 page tables and therefore have the same domain (DOMAIN_KERNEL). When the kernel domain is modified from Client to Manager (via __set_fs or in the __switch_to function), the XN (eXecute Never) bit is overridden and newer CPUs can speculatively prefetch the ioremap'ed memory.

Linux performs the kernel domain switching to allow user-specific functions (copy_to/from_user, get/put_user etc.) to access kernel memory. In order for these functions to work with the kernel domain set to Client, the patch modifies the LDRT/STRT and related instructions to the LDR/STR ones.

The user pages' access rights are also modified for kernel read-only access rather than read/write so that the copy-on-write mechanism still works. CPU_USE_DOMAINS gets disabled only if the hardware has a TLS register (CPU_32v6K is defined), since writing the TLS value to the high vectors page isn't possible otherwise.

The user addresses passed to the kernel are checked by the access_ok() function so that they do not point to the kernel space.

Tested-by: Anton Vorontsov <cbouatmailru@gmail.com>
Cc: Tony Lindgren <tony@atomide.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
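The instruction-selection half of the change boils down to one conditional macro; a sketch, shown with the TUSER() spelling that later mainline uses for C code:

    /*
     * With CPU_USE_DOMAINS, the kernel domain is Manager and the user
     * access helpers need the unprivileged LDRT/STRT forms to preserve
     * user permission checking.  Without domain switching, the kernel
     * accesses user pages as a Client, so plain LDR/STR already honour
     * the page permissions - with user-RO pages mapped kernel-RO so
     * copy-on-write still works.
     */
    #ifdef CONFIG_CPU_USE_DOMAINS
    #define TUSER(instr)	#instr "t"
    #else
    #define TUSER(instr)	#instr
    #endif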
-
- 28 Oct 2010, 1 commit
-
-
Committed by Russell King

Use memblock information to set up the lowmem mappings rather than the membank array. This allows platforms to manipulate the memblock information during initialization to reserve (and remove) memory from the kernel's view of memory - and thus allows platforms to set up their own private mappings for this memory without causing problems with multiple aliasing mappings:

	size = min(size, SZ_2M);
	base = memblock_alloc(size, min(align, SZ_2M));
	memblock_free(base, size);
	memblock_remove(base, size);

This is needed because multiple mappings of regions with differing attributes (shareability, type, cache) are not permitted with ARMv6 and above.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-