- 25 September 2012, 1 commit

Committed by Lorenzo Pieralisi:
The ARMv7 architecture introduced the concept of cache levels and the related control registers. New processors like the A7 and A15 embed an L2 unified cache controller that becomes part of the cache level hierarchy. Some operations in the kernel, like cpu_suspend and __cpu_disable, do not require a flush of the entire cache hierarchy to DRAM, but only of the cache levels belonging to the Level of Unification Inner Shareable (LoUIS), which on most ARMv7 systems corresponds to L1.

The current cache flushing API used in cpu_suspend and __cpu_disable, flush_cache_all(), ends up flushing the whole cache hierarchy, since on v7 it cleans and invalidates all cache levels up to the Level of Coherency (LoC), which cripples system performance when used in hot paths like hotplug and cpuidle. Therefore a new kernel cache maintenance API must be added to cope with the latest ARM system requirements.

This patch adds flush_cache_louis() to the ARM kernel cache maintenance API. This function cleans and invalidates all data cache levels up to the Level of Unification Inner Shareable (LoUIS) and invalidates the instruction cache for processors that support it (> v7). The patch also creates an alias of the cache LoUIS function to flush_kern_all for all processor versions prior to v7, so that the current cache flushing behaviour is unchanged for those processors.

The v7 cache maintenance code implements a cache LoUIS function that cleans and invalidates the D-cache up to LoUIS and invalidates the I-cache, according to the new API.

Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Shawn Guo <shawn.guo@linaro.org>
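As a rough illustration of the LoUIS/LoC distinction (this is not the kernel's implementation, and the helper names are invented for the example), privileged ARMv7 code can read both fields from CLIDR; flush_cache_louis() only needs to walk the cache levels below LoUIS, whereas flush_cache_all() walks every level below LoC:

```c
/* Illustrative sketch only: read CLIDR on an ARMv7 CPU and extract the
 * LoUIS and LoC fields.  Must run in a privileged mode; the helper names
 * are invented for this example. */
static inline unsigned int read_clidr(void)
{
        unsigned int clidr;

        /* CLIDR: CP15 c0, opc1 = 1, CRm = c0, opc2 = 1 */
        asm volatile("mrc p15, 1, %0, c0, c0, 1" : "=r" (clidr));
        return clidr;
}

static inline unsigned int cache_louis(void)
{
        return (read_clidr() >> 21) & 0x7;      /* LoUIS: CLIDR[23:21] */
}

static inline unsigned int cache_loc(void)
{
        return (read_clidr() >> 24) & 0x7;      /* LoC: CLIDR[26:24] */
}
```

A set/way clean-and-invalidate loop bounded by cache_louis() rather than cache_loc() is what keeps hot paths such as cpuidle and hotplug from touching the large outer caches.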
- 2 May 2012, 1 commit

Committed by Will Deacon:
The cacheflush syscall can fail for two reasons:

(1) the arguments are invalid (nonsensical address range or no VMA);
(2) the region generates a translation fault on a VIPT or PIPT cache.

This patch allows do_cache_op to return an error code to userspace in either of these cases. The various coherent_user_range implementations are modified to return 0 in the case of VIVT caches, or -EFAULT in the case of an abort on v6/v7 cores.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
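For illustration only, a userspace caller might now observe the propagated error like this; the sketch assumes an ARM EABI system where <asm/unistd.h> provides __ARM_NR_cacheflush, and the wrapper function itself is hypothetical:

```c
#include <asm/unistd.h>         /* __ARM_NR_cacheflush (ARM-specific) */
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical helper: flush the caches for a code range and report the
 * error codes that this change propagates to userspace. */
static int sync_code_range(void *start, void *end)
{
        /* The third argument is a flags word and should be zero. */
        if (syscall(__ARM_NR_cacheflush, start, end, 0) == 0)
                return 0;

        /* An abort during the flush now surfaces as EFAULT instead of
         * being silently swallowed. */
        if (errno == EFAULT)
                fprintf(stderr, "cacheflush: range not mapped\n");
        return -1;
}
```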
- 7 July 2011, 1 commit

Committed by Dave Martin:
Signed-off-by: Dave Martin <dave.martin@linaro.org>
- 26 May 2011, 1 commit

Committed by Will Deacon:
The v6 and v7 implementations of flush_kern_dcache_area do not align the passed MVA to the size of a cacheline in the data cache. If a misaligned address is used, only a subset of the requested area will be flushed. This has been observed to cause failures in SMP boot where the secondary_data initialised by the primary CPU is not cacheline aligned, causing the secondary CPUs to read incorrect values for their pgd and stack pointers.

This patch ensures that the base address is cacheline aligned before flushing the D-cache.

Cc: <stable@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
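The essence of the fix, rendered as a hedged C sketch rather than the actual assembly (the 32-byte line size and the helper name are assumptions made for the example): round the base address down to a cache-line boundary before walking the range.

```c
/* Hedged C sketch of the fix; the real routine is ARM assembly and derives
 * the line size from CTR.  The 32-byte line and dccimvac() helper are
 * assumptions for this example (ARMv7 target, GCC inline asm). */
#define CACHE_LINE_SIZE 32UL

static inline void dccimvac(unsigned long mva)
{
        /* Clean and invalidate one D-cache line by MVA to PoC. */
        asm volatile("mcr p15, 0, %0, c7, c14, 1" : : "r" (mva) : "memory");
}

static void flush_dcache_area(unsigned long addr, unsigned long size)
{
        unsigned long end = addr + size;

        /* The fix: align the base down so a misaligned start does not
         * leave the first partial cache line untouched. */
        addr &= ~(CACHE_LINE_SIZE - 1);

        for (; addr < end; addr += CACHE_LINE_SIZE)
                dccimvac(addr);

        asm volatile("dsb" : : : "memory");     /* complete before returning */
}
```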
- 15 December 2010, 1 commit

Committed by Valentine Barshak:
Cache ownership must be acquired by reading/writing data from the cache line to make cache operations have the desired effect on the SMP MPCore CPU. However, ownership is never acquired in the v6_dma_inv_range function when cleaning the first line and flushing the last one, in the case where the address is not aligned to a D_CACHE_LINE_SIZE boundary.

Fix this by reading/writing data, if needed, before performing the cache operations. While at it, also fix v6_dma_flush_range to prevent RWFO accesses outside the buffer.

Cc: stable@kernel.org
Signed-off-by: Valentine Barshak <vbarshak@mvista.com>
Signed-off-by: George G. Davis <gdavis@mvista.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
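A rough C rendering of the corrected flow (the real function is ARM assembly; the 32-byte line size and the rwfo()/dccimvac()/dcimvac() helpers are stand-ins invented for this sketch):

```c
#define LINE    32UL    /* illustrative D-cache line size */

/* Read then write a word of the line so it migrates into this CPU's cache
 * in an owned state -- the "read/write for ownership" (RWFO) trick. */
static inline void rwfo(unsigned long addr)
{
        unsigned long tmp;
        asm volatile("ldr %0, [%1]\n\tstr %0, [%1]"
                     : "=&r" (tmp) : "r" (addr) : "memory");
}

static inline void dccimvac(unsigned long mva)  /* clean+invalidate line */
{
        asm volatile("mcr p15, 0, %0, c7, c14, 1" : : "r" (mva) : "memory");
}

static inline void dcimvac(unsigned long mva)   /* invalidate line */
{
        asm volatile("mcr p15, 0, %0, c7, c6, 1" : : "r" (mva) : "memory");
}

static void dma_inv_range_sketch(unsigned long start, unsigned long end)
{
        /* Partial first line: data outside the buffer must survive, so take
         * ownership (the step this patch adds) before touching the line. */
        if (start & (LINE - 1)) {
                start &= ~(LINE - 1);
                rwfo(start);
                dccimvac(start);
                start += LINE;
        }

        /* Partial last line: same reasoning at the tail of the buffer. */
        if (end & (LINE - 1)) {
                end &= ~(LINE - 1);
                rwfo(end);
                dccimvac(end);
        }

        /* Whole lines in between can simply be invalidated, but ownership
         * must still be acquired first on ARM11MPCore. */
        for (; start < end; start += LINE) {
                rwfo(start);
                dcimvac(start);
        }
}
```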
- 5 October 2010, 1 commit

Committed by Tony Lindgren:
This is done by adding flush_icache_all to cache_fns for ARMv6 and 7. As flush_icache_all may need to be called from flush_kern_cache_all, add it as the first entry in cache_fns.

Note that we can now remove the ARM_ERRATA_411920 dependency on !SMP, so it can be selected on UP ARMv6 processors such as omap2.

Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Anand Gadiyar <gadiyar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
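For orientation, a simplified sketch of such a per-CPU cache-function table is shown below; only a few representative entries appear, the names beyond flush_icache_all and flush_kern_all follow common ARM naming but are assumptions here, and the real definition lives in the kernel headers.

```c
/* Simplified, assumed sketch of an ARM-style CPU cache-function table;
 * only a handful of representative entries are shown. */
struct cpu_cache_fns {
        /* New: placed first, since flush_kern_cache_all may need to call
         * flush_icache_all (per the commit message). */
        void (*flush_icache_all)(void);

        void (*flush_kern_all)(void);
        void (*flush_user_all)(void);
        void (*flush_user_range)(unsigned long start, unsigned long end,
                                 unsigned int flags);
        void (*coherent_kern_range)(unsigned long start, unsigned long end);
        /* ... remaining entries unchanged ... */
};
```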
- 1 July 2010, 2 commits

Committed by Catalin Marinas:
Commit f4d6477f introduced a workaround for the lack of hardware broadcasting of cache maintenance operations on ARM11MPCore. However, the workaround is only valid on CPUs that do not do speculative loads into the D-cache.

This patch adds a Kconfig option, with corresponding help text, to make the above clear. When the DMA_CACHE_RWFO option is disabled, the kernel behaviour reverts to what it was prior to the f4d6477f commit. This also allows ARMv6 UP processors with speculative loads to work correctly. For other processors, a different workaround may be needed.

Cc: Ronen Shitrit <rshitrit@marvell.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Committed by Catalin Marinas:
A recent patch for DMA cache maintenance on ARM11MPCore added a write-for-ownership trick to the v6_dma_inv_range() function. Such an operation destroys data already present in the buffer. However, this function is used with dma_sync_single_for_device(), which is supposed to preserve the existing data transferred into the buffer. This patch adds a combination of read/write for ownership to preserve the original data.

Reported-by: Ronen Shitrit <rshitrit@marvell.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 8 May 2010, 1 commit

Committed by Catalin Marinas:
The Snoop Control Unit on the ARM11MPCore hardware does not detect the cache operations and the dma_cache_maint*() functions may leave stale cache entries on other CPUs. The solution implemented in this patch performs a Read or Write For Ownership in the ARMv6 DMA cache maintenance functions. These LDR/STR instructions change the cache line state to shared or exclusive so that the cache maintenance operation has the desired effect.

Tested-by: George G. Davis <gdavis@mvista.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 15 February 2010, 3 commits

Committed by Russell King:
ARMv6 and ARMv7 CPUs can perform speculative prefetching, which makes DMA cache coherency handling slightly more interesting. Rather than being able to rely upon the CPU not accessing the DMA buffer until DMA has completed, we must now expect that the cache could be loaded with possibly stale data from the DMA buffer.

Where DMA involves data being transferred to the device, we clean the cache before handing it over for DMA; otherwise we invalidate the buffer to get rid of potential writebacks. On DMA completion, if data was transferred from the device, we invalidate the buffer to get rid of any stale speculative prefetches.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
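The ownership policy described above can be summarised in a hedged C sketch. The names echo the kernel's DMA-mapping vocabulary but are invented for this example, the helper bodies are placeholders for the architecture's clean/invalidate-by-range loops, and bidirectional transfers are omitted for brevity:

```c
#include <stddef.h>

/* Hedged sketch of the direction-based policy described above. */
enum dma_dir { DIR_TO_DEVICE, DIR_FROM_DEVICE };

/* Placeholder stand-ins for the real clean/invalidate-by-range routines. */
static void clean_dcache_range(void *buf, size_t len) { (void)buf; (void)len; }
static void inv_dcache_range(void *buf, size_t len)   { (void)buf; (void)len; }

/* Before handing the buffer to the device. */
static void dma_map_area_sketch(void *buf, size_t len, enum dma_dir dir)
{
        if (dir == DIR_TO_DEVICE)
                clean_dcache_range(buf, len);   /* push CPU writes to DRAM */
        else
                inv_dcache_range(buf, len);     /* drop dirty lines so no
                                                 * writeback can land on top
                                                 * of incoming DMA data */
}

/* After the device signals completion. */
static void dma_unmap_area_sketch(void *buf, size_t len, enum dma_dir dir)
{
        if (dir == DIR_FROM_DEVICE)
                inv_dcache_range(buf, len);     /* discard lines speculatively
                                                 * prefetched while the DMA
                                                 * was in flight */
}
```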
Committed by Russell King:
These are now unused, and so can be removed.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Committed by Russell King:
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
- 14 December 2009, 1 commit

Committed by Russell King:
... and rename the function since it no longer operates on just pages.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 7 October 2009, 1 commit

Committed by Catalin Marinas:
This is needed because applications using the sys_cacheflush system call can pass a memory range which isn't mapped yet even though the corresponding vma is valid. The patch also adds unwinding annotations for correct backtraces from the coherent_user_range() functions.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 1 May 2009, 1 commit

Committed by Catalin Marinas:
This patch implements the recommended workaround for erratum 411920 (ARM1136, ARM1156, ARM1176).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 11 March 2006, 1 commit

Committed by Catalin Marinas:
Patch from Catalin Marinas

ARM1136 erratum 371025 (category 2) specifies that, under rare conditions, an invalidate I-cache by MVA (line or range) operation can fail to invalidate a cache line. The recommended workaround is to either invalidate the entire I-cache or invalidate the range by set/way rather than MVA. Note that for a 16K cache size, invalidating a 4K page by set/way is equivalent to invalidating the entire I-cache.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 2 February 2006, 1 commit

Committed by Nicolas Pitre:
Patch from Nicolas Pitre

Doing so adds a much larger cost to the loop than the cost implied by simply invalidating the whole BTB at once.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 30 September 2005, 1 commit

Committed by Gen FUKATSU:
Patch from Gen FUKATSU

The invalidate-BTB-entry instruction flushes two instructions at a time. Therefore this instruction should be issued four times after invalidating an instruction cache line (presumably because a 32-byte I-cache line holds eight 4-byte ARM instructions, so four BTB invalidations of two instructions each cover the whole line).

Signed-off-by: Gen Fukatsu
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 17 April 2005, 1 commit

Committed by Linus Torvalds:
Initial git repository build. I'm not bothering with the full history, even though we have it. We can create a separate "historical" git archive of that later if we want to, and in the meantime it's about 3.2GB when imported into git - space that would just make the early git days unnecessarily complicated, when we don't have a lot of good infrastructure for it. Let it rip!