- 27 August 2014, 1 commit
-
-
Committed by Mark Rutland

On revisions of Cortex-A15 prior to r3p3, a CLREX instruction at PL1 may falsely trigger a watchpoint exception, leading to potential data aborts during exception return and/or livelock. This patch resolves the issue in the following ways:

- Replacing our uses of CLREX with a dummy STREX sequence instead (as we did for v6 CPUs).
- Removing the clrex code from v7_exit_coherency_flush and derivatives, since this only exists as a minor performance improvement when non-cached exclusives are in use (Linux doesn't use these).

Benchmarking on a variety of ARM cores revealed no measurable performance difference with this change applied, so the change is performed unconditionally and no new Kconfig entry is added.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
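A minimal sketch of the workaround idea, assuming a 32-bit ARM toolchain: a dummy store-exclusive to a scratch location clears the local exclusive monitor just as CLREX would, without tripping the erratum. The helper name and operand choices are illustrative, not the kernel's actual exception-return assembly.

```c
/*
 * Illustrative only: clear the local exclusive monitor with a dummy STREX
 * instead of CLREX (the Cortex-A15 erratum workaround described above).
 */
static inline void clear_exclusive_monitor(void)
{
	unsigned long tmp;
	unsigned long scratch;

	asm volatile(
	"	strex	%0, %1, [%2]\n"	/* store-exclusive; success flag is ignored */
	: "=&r" (tmp)
	: "r" (0UL), "r" (&scratch)
	: "memory");
	(void)tmp;
}
```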
-
- 26 May 2014, 2 commits
-
-
Committed by Victor Kamensky

After an instruction is written into the xol area, ARMv7 code needs to flush the dcache and icache to sync them up for the given set of addresses. A bare 'flush_dcache_page(page)' call is not enough - a stale instruction may still sit in the icache for the xol area slot address.

Introduce an arch_uprobe_copy_ixol weak function that by default calls the uprobes copy_to_page function followed by flush_dcache_page, and on ARM define a new one that handles the xol slot copy in an ARM-specific way. The flush_uprobe_xol_access function shares/reuses the implementation of flush_ptrace_access and takes care of writing the instruction to a user-land address across the variety of cache types found on ARM CPUs. Because flush_uprobe_xol_access does not have a vma around, flush_ptrace_access was split into two parts: a first part that retrieves the set of conditions from the vma, and a common part that receives those conditions as flags.

Note that the ARM cache flush functions need the kernel address through which the instruction write happened, so instead of using the uprobes copy_to_page function the code now explicitly maps the page and does the memcpy. Note also that arch_uprobe_copy_ixol, in a similar way to copy_to_user_page, uses preempt_disable/preempt_enable.

Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Acked-by: Oleg Nesterov <oleg@redhat.com>
Reviewed-by: David A. Long <dave.long@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
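The ARM override described above boils down to mapping the slot page, copying, and calling the new flush helper under preempt-disable. A condensed sketch based on that description (details may differ from the exact committed code):

```c
#include <linux/highmem.h>
#include <linux/string.h>
#include <linux/uprobes.h>
#include <asm/cacheflush.h>

void arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr,
			   void *src, unsigned long len)
{
	void *kaddr = kmap_atomic(page);
	void *dst = kaddr + (vaddr & ~PAGE_MASK);

	preempt_disable();

	/* write the probed instruction into the xol slot ... */
	memcpy(dst, src, len);

	/* ... and sync D-cache/I-cache for the user-visible slot address */
	flush_uprobe_xol_access(page, vaddr, dst, len);

	preempt_enable();

	kunmap_atomic(kaddr);
}
```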
-
Committed by Will Deacon

dsb st can be used to ensure completion of pending cache maintenance operations, so use it for the v7 cache maintenance operations.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
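As a rough illustration of the barrier being relaxed here (written as raw instructions rather than the kernel's dsb() macro): a full `dsb sy` waits for all outstanding accesses, while the `st` option only waits for stores, which the commit states is sufficient after cache maintenance.

```c
/* Illustrative helpers only; the kernel expresses these via its dsb() macro. */
static inline void barrier_full(void)
{
	asm volatile("dsb sy" : : : "memory");	/* wait for all loads and stores */
}

static inline void barrier_stores_only(void)
{
	asm volatile("dsb st" : : : "memory");	/* wait for stores, incl. pending cache maintenance */
}
```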
-
- 18 February 2014, 1 commit
-
-
Committed by Vinayak Kale

Add DSB after icache flush to complete the cache maintenance operation.

Signed-off-by: Vinayak Kale <vkale@apm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 11 December 2013, 1 commit
-
-
Committed by Laura Abbott

Other architectures define various set_memory functions to allow attributes to be changed (e.g. set_memory_x, set_memory_rw, etc.). Currently, these functions are missing on ARM. Define these in an appropriate manner for ARM.

Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
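The generic API these ARM implementations follow takes a page-aligned kernel virtual address and a page count. Below is a hedged usage sketch; text_addr/text_pages are hypothetical, and at the time of this commit the ARM declarations lived in <asm/cacheflush.h>.

```c
int set_memory_ro(unsigned long addr, int numpages);
int set_memory_rw(unsigned long addr, int numpages);
int set_memory_x(unsigned long addr, int numpages);
int set_memory_nx(unsigned long addr, int numpages);

/* e.g. lock down a region of module text after relocation */
static void protect_module_text(void *text_addr, int text_pages)
{
	set_memory_ro((unsigned long)text_addr, text_pages);
	set_memory_x((unsigned long)text_addr, text_pages);
}
```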
-
- 29 October 2013, 1 commit
-
-
Committed by Nicolas Pitre

This code is becoming duplicated in many places. So let's consolidate it into a handy macro that is known to be right and available for reuse.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 20 August 2013, 1 commit
-
-
Committed by Will Deacon

The flush_cache_user_range macro takes a pair of addresses describing the start and end of the virtual address range to flush. Due to an accidental oversight when flush_cache_range_user was introduced, the address range was rounded up so that the start and end addresses were page-aligned.

For historical reference, the interesting commits in history.git are:
10eacf1775e1 ("[ARM] Clean up ARM cache handling interfaces (part 1)")
71432e79b76b ("[ARM] Add flush_cache_user_page() for sys_cacheflush()")

This patch removes the alignment code, reducing the amount of flushing required for ranges that are not an exact multiple of PAGE_SIZE.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Jonathan Austin <jonathan.austin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
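A hedged before/after sketch of the change; the _old/_new suffixes are purely illustrative, and the real macro wraps __cpuc_coherent_user_range (PAGE_MASK/PAGE_ALIGN come from <asm/page.h>).

```c
/* before: the user range was rounded out to whole pages */
#define flush_cache_user_range_old(start, end) \
	__cpuc_coherent_user_range((start) & PAGE_MASK, PAGE_ALIGN(end))

/* after: only the exact [start, end) range requested is flushed */
#define flush_cache_user_range_new(start, end) \
	__cpuc_coherent_user_range(start, end)
```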
-
- 12 August 2013, 1 commit
-
-
Committed by Will Deacon

flush_cache_vmap contains a dsb to ensure that any cacheflushing operations to flush out newly written ptes have completed. This patch adds the -ishst option to the dsb, since that is all that is required for completing cacheflushing in the inner-shareable domain.

Signed-off-by: Will Deacon <will.deacon@arm.com>
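A hedged sketch of the resulting helper in <asm/cacheflush.h>, reconstructed from the description (the committed code may differ in detail):

```c
static inline void flush_cache_vmap(unsigned long start, unsigned long end)
{
	if (!cache_is_vipt_nonaliasing())
		flush_cache_all();
	else
		/*
		 * set_pte_at() called from vmap_pte_range() does not have a
		 * DSB after cleaning the cache line; inner-shareable store
		 * completion is all that is needed to publish the new PTEs.
		 */
		dsb(ishst);
}
```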
-
- 17 June 2013, 1 commit
-
-
Committed by Simon Baatz

Commit f8b63c18 made flush_kernel_dcache_page a no-op, assuming that the pages it needs to handle are kernel mapped only. However, for example when doing direct I/O, pages with user space mappings may occur. Thus, continue to do lazy flushing if there are no user space mappings. Otherwise, flush the kernel cache lines directly.

Signed-off-by: Simon Baatz <gmbnomis@gmail.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: <stable@vger.kernel.org> # 3.2+
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
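A condensed sketch of the approach described, with the conditions simplified (the committed code also special-cases highmem pages):

```c
void flush_kernel_dcache_page(struct page *page)
{
	/* only aliasing caches can have a divergent kernel alias */
	if (cache_is_vivt() || cache_is_vipt_aliasing()) {
		struct address_space *mapping = page_mapping(page);

		/* defer (lazy flush) only for a page-cache page with no user
		 * space mappings; otherwise flush the kernel lines now */
		if (!mapping || mapping_mapped(mapping))
			__cpuc_flush_dcache_area(page_address(page), PAGE_SIZE);
	}
}
```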
-
- 24 April 2013, 1 commit
-
-
Committed by Nicolas Pitre

Algorithms used by the MCPM layer rely on state variables which are accessed while the cache is either active or inactive, depending on the code path and the active state. This patch introduces generic cache maintenance helpers to provide the necessary cache synchronization for such state variables to always hit main memory in an ordered way.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Dave Martin <dave.martin@linaro.org>
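A hedged usage sketch, assuming the helpers are the sync_cache_w()/sync_cache_r() macros that ended up in <asm/cacheflush.h>; the struct and field below are purely illustrative.

```c
struct power_state {
	int cpu_is_up;
};

static struct power_state pstate;

static void publish_cpu_up(void)
{
	pstate.cpu_is_up = 1;
	sync_cache_w(&pstate.cpu_is_up);	/* clean to RAM so a cache-off observer sees it */
}

static int cpu_seen_up(void)
{
	sync_cache_r(&pstate.cpu_is_up);	/* invalidate so we read RAM, not a stale line */
	return pstate.cpu_is_up;
}
```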
-
- 25 September 2012, 1 commit
-
-
Committed by Lorenzo Pieralisi

The ARM v7 architecture introduced the concept of cache levels and related control registers. New processors like the A7 and A15 embed an L2 unified cache controller that becomes part of the cache level hierarchy. Some operations in the kernel, like cpu_suspend and __cpu_disable, do not require a flush of the entire cache hierarchy to DRAM but just the cache levels belonging to the Level of Unification Inner Shareable (LoUIS), which in most ARM v7 systems corresponds to L1.

The current cache flushing API used in cpu_suspend and __cpu_disable, flush_cache_all(), ends up flushing the whole cache hierarchy, since for v7 it cleans and invalidates all cache levels up to the Level of Coherency (LoC), which cripples system performance when used in hot paths like hotplug and cpuidle. Therefore a new kernel cache maintenance API must be added to cope with the latest ARM system requirements.

This patch adds flush_cache_louis() to the ARM kernel cache maintenance API. This function cleans and invalidates all data cache levels up to the Level of Unification Inner Shareable (LoUIS) and invalidates the instruction cache for processors that support it (> v7). This patch also creates an alias of the cache LoUIS function to flush_kern_all for all processor versions prior to v7, so that the current cache flushing behaviour is unchanged for those processors.

The v7 cache maintenance code implements a cache LoUIS function that cleans and invalidates the D-cache up to LoUIS and invalidates the I-cache, according to the new API.

Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Shawn Guo <shawn.guo@linaro.org>
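A minimal sketch of where the new call pays off, assuming a CPU power-down path (the function name below is illustrative):

```c
#include <asm/cacheflush.h>

/* Per-CPU teardown before power-down: only levels up to the LoUIS
 * (typically just L1 on A7/A15 systems) need cleaning, because the outer
 * levels stay powered and coherent. flush_cache_all() would go all the way
 * to the LoC and is far slower in hotplug/cpuidle hot paths. */
static void cpu_cache_teardown(void)
{
	flush_cache_louis();	/* clean+invalidate D-cache to LoUIS, invalidate I-cache */
}
```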
-
- 31 July 2012, 1 commit
-
-
Committed by Will Deacon

The vivt_flush_cache_{range,page} functions check that the mm_struct of the VMA being flushed has been active on the current CPU before performing the cache maintenance. The gate_vma has a NULL mm_struct pointer and, as such, will cause a kernel fault if we try to flush it with the above operations. This happens during ELF core dumps, which include the gate_vma as it may be useful for debugging purposes.

This patch adds checks to the VIVT cache flushing functions so that VMAs with a NULL mm_struct are flushed unconditionally (the vectors page may be dirty if we use it to store the current TLS pointer).

Cc: <stable@vger.kernel.org> # 3.4+
Reported-by: Gilles Chanteperdrix <gilles.chanteperdrix@xenomai.org>
Tested-by: Uros Bizjak <ubizjak@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
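A hedged sketch of the guarded check, written as a function for readability; the real code is a macro in <asm/cacheflush.h> and may differ in detail.

```c
static void vivt_flush_range(struct vm_area_struct *vma,
			     unsigned long start, unsigned long end)
{
	struct mm_struct *mm = vma->vm_mm;

	/* No mm (e.g. the gate_vma dumped during an ELF core dump): flush
	 * unconditionally instead of dereferencing a NULL pointer. */
	if (!mm || cpumask_test_cpu(smp_processor_id(), mm_cpumask(mm)))
		__cpuc_flush_user_range(start & PAGE_MASK, PAGE_ALIGN(end),
					vma->vm_flags);
}
```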
-
- 02 May 2012, 1 commit
-
-
Committed by Will Deacon

The cacheflush syscall can fail for two reasons:

(1) The arguments are invalid (nonsensical address range or no VMA)
(2) The region generates a translation fault on a VIPT or PIPT cache

This patch allows do_cache_op to return an error code to userspace in the case of the above. The various coherent_user_range implementations are modified to return 0 in the case of VIVT caches or -EFAULT in the case of an abort on v6/v7 cores.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 20 April 2012, 1 commit
-
-
Committed by Dima Zavin

The vma argument isn't used, and flush_cache_user_range isn't a standard macro that is used on several archs with the same prototype. In fact only unicore32 has a macro with the same name (with an identical implementation and no in-tree users).

This is a part of a patch proposed by Dima Zavin (with Message-id: 1272439931-12795-1-git-send-email-dima@android.com) that didn't get accepted.

Cc: Dima Zavin <dima@android.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 12 February 2011, 1 commit
-
-
Committed by Russell King

This allows the cache/processor/fault glue to be more easily used from assembler code. Tested on Assabet and Tegra 2.

Tested-by: Colin Cross <ccross@android.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 03 February 2011, 2 commits
-
-
Committed by Russell King

The v6 cache call optimization was disabled to allow the optional block cache operations to be substituted on CPUs which supported those operations. However, as that functionality was removed, we no longer need to prevent this optimization from being taken advantage of. The v7 cache call optimization was just a copy of the v6 one, so fix that too.

Tested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King

Introduce a CPU_V6K configuration option for platforms to select if they have a V6K CPU core. This allows us to identify whether we need to support ARMv6 CPUs without the V6K SMP extensions at build time.

Currently CPU_V6K is just an alias for CPU_V6, and all places which reference CPU_V6 are replaced by (CPU_V6 || CPU_V6K). Select CPU_V6K from platforms which are known to be V6K-only.

Acked-by: Tony Lindgren <tony@atomide.com>
Tested-by: Sourav Poddar <sourav.poddar@ti.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
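In C terms, the mechanical part of the change is the pattern below; the gated content is an illustrative placeholder.

```c
/* before */
#if defined(CONFIG_CPU_V6)
/* ... ARMv6 cache/TLB/processor glue ... */
#endif

/* after: accept either the plain V6 or the V6K variant */
#if defined(CONFIG_CPU_V6) || defined(CONFIG_CPU_V6K)
/* ... ARMv6 cache/TLB/processor glue ... */
#endif
```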
-
- 05 October 2010, 1 commit
-
-
Committed by Tony Lindgren

Do this by adding flush_icache_all to cache_fns for ARMv6 and v7. As flush_icache_all may need to be called from flush_kern_cache_all, add it as the first entry in the cache_fns.

Note that now we can remove the ARM_ERRATA_411920 dependency on !SMP, so it can be selected on UP ARMv6 processors such as omap2.

Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Anand Gadiyar <gadiyar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
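A hedged sketch of the method-table change; the remaining members are elided, and the real struct in <asm/cacheflush.h> has more entries.

```c
struct cpu_cache_fns {
	void (*flush_icache_all)(void);	/* new: placed first so flush_kern_cache_all can reach it */
	void (*flush_kern_all)(void);
	/* ... flush_user_all, flush_user_range, coherent_*_range, dma ops ... */
};
```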
-
- 19 September 2010, 2 commits
-
-
Committed by Catalin Marinas

Since page cache pages are now considered 'dirty' by default, the cache flushing is handled via __flush_dcache_page() when a page gets mapped to user space. Highmem pages on VIVT systems are flushed during kunmap() and flush_kernel_dcache_page() was already a no-op in this case.

ARCH_HAS_FLUSH_KERNEL_DCACHE_PAGE is still defined since ARM needs specific implementations for flush_kernel_vmap_range() and invalidate_kernel_vmap_range().

Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Catalin Marinas

There are places in Linux where writes to newly allocated page cache pages happen without a subsequent call to flush_dcache_page() (several PIO drivers, including the USB HCD). This patch changes the meaning of PG_arch_1 to be PG_dcache_clean and always flushes the D-cache for a newly mapped page in update_mmu_cache().

The patch also sets the PG_arch_1 bit in the DMA cache maintenance function to avoid additional cache flushing in update_mmu_cache().

Tested-by: Rabin Vincent <rabin.vincent@stericsson.com>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
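The inverted flag roughly amounts to the pattern below when a page is first mapped into user space. This is a condensed, hedged sketch: the wrapper name is illustrative and __flush_dcache_page() is ARM's internal helper.

```c
static void sync_newly_mapped_page(struct address_space *mapping,
				   struct page *page)
{
	/* PG_dcache_clean (the re-purposed PG_arch_1 bit): flush unless an
	 * earlier path, e.g. DMA cache maintenance, already marked the page
	 * clean for us. */
	if (!test_and_set_bit(PG_dcache_clean, &page->flags))
		__flush_dcache_page(mapping, page);
}
```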
-
- 08 May 2010, 1 commit
-
-
Committed by Catalin Marinas

The standard I-cache Invalidate All (ICIALLU) and Branch Predictor Invalidate All (BPIALL) operations are not automatically broadcast to the other CPUs in an ARMv7 MP system. The patch adds the Inner Shareable variants, ICIALLUIS and BPIALLIS, if ARMv7 and SMP.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
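A hedged sketch of what the Inner Shareable variants look like at the CP15 level (encodings per the ARMv7 ARM; barriers written as raw instructions rather than the kernel's macros, and only meaningful on an SMP v7 system):

```c
static inline void icache_and_bp_inval_all_is(void)
{
	asm volatile("mcr p15, 0, %0, c7, c1, 0" : : "r" (0));	/* ICIALLUIS */
	asm volatile("mcr p15, 0, %0, c7, c1, 6" : : "r" (0));	/* BPIALLIS */
	asm volatile("dsb" : : : "memory");
	asm volatile("isb" : : : "memory");
}
```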
-
- 26 March 2010, 1 commit
-
-
Committed by Catalin Marinas

To avoid #include collisions with subsequent patches in the series, this patch moves the outer_cache definitions to a separate asm/outercache.h file.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 20 February 2010, 1 commit
-
-
Committed by Abdoulaye Walsimou Gaye

This patch fixes the build error below for the arm1026ej-s processor (IntegratorCP/arm1026ej-s board):

  CC      init/main.o
In file included from include/linux/highmem.h:8,
                 from include/linux/pagemap.h:10,
                 from include/linux/mempolicy.h:62,
                 from init/main.c:52:
arch/arm/include/asm/cacheflush.h:134:2: error: #error Unknown cache maintainence model
make[1]: *** [init/main.o] Erreur 1
make: *** [init] Erreur 2

Signed-off-by: Abdoulaye Walsimou Gaye <walsimou@walsimou.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 15 February 2010, 2 commits
-
-
Committed by Russell King
These are now unused, and so can be removed.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
-
Committed by Russell King
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
-
- 06 February 2010, 1 commit
-
-
Committed by James Bottomley

ARM cannot prevent cache movein, so this patch implements both the flush and invalidate pieces of the API.

Signed-off-by: James Bottomley <James.Bottomley@suse.de>
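The API being filled in here is the kernel-vmap flush pair. A hedged usage sketch for a driver doing I/O against pages that are also mapped through vmap(); buf/len and the wrapper names are illustrative.

```c
#include <asm/cacheflush.h>

static void before_io_reads_pages(void *buf, int len)
{
	/* push dirty lines of the vmap alias out so the I/O sees them */
	flush_kernel_vmap_range(buf, len);
}

static void before_cpu_reads_buffer(void *buf, int len)
{
	/* drop potentially stale lines before reading data the I/O wrote
	 * through the underlying physical pages */
	invalidate_kernel_vmap_range(buf, len);
}
```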
-
- 20 January 2010, 1 commit
-
-
Committed by Tony Lindgren

The comments in cacheflush.h should follow what's in struct cpu_cache_fns. The comments for V6 and V7 are unnecessary.

Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 14 December 2009, 3 commits
-
-
Committed by Russell King
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
... and rename the function since it no longer operates on just pages.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Nicolas Pitre

There are not enough users to warrant its existence, and it is actually an obstacle to progress with the new DMA API, which cannot cover this case properly. To keep backward compatibility, let's perform the necessary custom cache maintenance locally in the only driver affected.

Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 04 December 2009, 1 commit
-
-
Committed by Russell King
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 02 December 2009, 1 commit
-
-
Committed by Russell King

We had two copies of the wrapper code for VIVT cache flushing - one in asm/cacheflush.h and one in arch/arm/mm/flush.c. Reduce this down to one common copy.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 26 November 2009, 1 commit
-
-
Committed by Ilya Loginov

The mtdblock driver doesn't call flush_dcache_page for pages in a request, which causes problems on architectures where the icache doesn't fill from the dcache or with dcache aliases. The patch fixes this.

The ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE symbol was introduced to avoid pointless empty cache-thrashing loops on architectures for which flush_dcache_page() is a no-op. Every architecture now provides this symbol: pages are flushed on architectures where ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE is 1, and nothing is done otherwise. See the "fix mtd_blkdevs problem with caches on some architectures" discussion on LKML for more information.

Signed-off-by: Ilya Loginov <isloginov@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Peter Horton <phorton@bitbox.co.uk>
Cc: "Ed L. Cashin" <ecashin@coraid.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
- 30 October 2009, 1 commit
-
-
Committed by Russell King

Errata 411920 indicates that any "invalidate entire instruction cache" operation can fail if the right conditions are present. This is not limited just to those operations in flush.c, but applies elsewhere as well. Place the workaround in the already existing __flush_icache_all() function instead.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
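A hedged sketch of what centralising the workaround looks like in <asm/cacheflush.h>; the shape is reconstructed from the description, and the erratum-safe sequence itself lives in a dedicated assembly routine.

```c
static inline void __flush_icache_all(void)
{
#ifdef CONFIG_ARM_ERRATA_411920
	extern void v6_icache_inval_all(void);
	v6_icache_inval_all();		/* erratum-safe full I-cache invalidate */
#else
	asm volatile("mcr p15, 0, %0, c7, c5, 0"	/* ICIALLU */
		     : : "r" (0));
#endif
}
```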
-
- 24 September 2009, 1 commit
-
-
Committed by Rusty Russell

Makes code futureproof against the impending change to mm->cpu_vm_mask. It's also a chance to use the new cpumask_ ops which take a pointer (the older ones are deprecated, but there's no hurry for arch code).

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
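For illustration, the pointer-taking accessors look like this in a typical cache-flush guard (a hedged sketch, not the exact diff):

```c
static bool mm_active_on_this_cpu(struct mm_struct *mm)
{
	/* mm_cpumask() hides the layout of mm->cpu_vm_mask behind a pointer,
	 * and cpumask_test_cpu() replaces the older cpu_isset() style. */
	return cpumask_test_cpu(smp_processor_id(), mm_cpumask(mm));
}
```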
-
- 14 June 2009, 1 commit
-
-
Committed by Nicolas Pitre

Without this, the default implementation is a no-op, which is completely wrong with a VIVT cache, and usage of sg_copy_buffer() produces unpredictable results.

Tested-by: Sebastian Andrzej Siewior <bigeasy@breakpoint.cc>
Cc: stable@kernel.org
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 25 March 2009, 1 commit
-
-
Committed by Paulius Zaleckas

Adds support for the Faraday FA526 core. This core is used at least by:

- Cortina Systems Gemini and Centroid family
- Cavium Networks ECONA family
- Grain Media GM8120
- Pixelplus ImageARM
- Prolific PL-1029
- Faraday IP evaluation boards

v2:
- move TLB_BTB to separate patch
- update copyrights

Signed-off-by: Paulius Zaleckas <paulius.zaleckas@teltonika.lt>
-
- 23 March 2009, 1 commit
-
-
Committed by Eric Miao

"The Marvell® PXA168 processor is the first in a family of application processors targeted at mass market opportunities in computing and consumer devices. It balances high computing and multimedia performance with low power consumption to support extended battery life, and includes a wealth of integrated peripherals to reduce overall BOM cost ...."

See http://www.marvell.com/featured/pxa168.jsp for more information.

1. The Marvell Mohawk core is a hybrid of xscale3 and Marvell's own ARM core; there are many enhancements, such as instructions for flushing the whole D-cache, and so on.

2. Clock code reuses Russell's common clkdev, and adds basic support for UART1/2.

3. Devices are handled a bit differently from the 'mach-pxa' way: the platform devices are now dynamically allocated only when necessary (i.e. when pxa_register_device() is called). Descriptions for each device are stored in an array of 'struct pxa_device_desc'. Now that:
   a. this array of device descriptions is marked with __initdata and can be freed up once the system is fully up
   b. which means board code has to add all needed devices early in its initializing function
   c. platform specific data can now be marked as __initdata since they are allocated and copied by platform_device_add_data()

4. Only the basic UART1/2/3 are added; more devices will come later.

Signed-off-by: Jason Chagas <chagas@marvell.com>
Signed-off-by: Eric Miao <eric.miao@marvell.com>
-
- 30 November 2008, 1 commit
-
-
Committed by Russell King

... and fix those drivers that were incorrectly relying upon that include.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 06 November 2008, 1 commit
-
-
Committed by Catalin Marinas

In the case of non-aliasing VIPT caches, there is no need to flush the whole cache when a new mapping is created. The patch introduces this condition check. In the non-aliasing VIPT case flush_cache_vmap() needs a DSB, since the set_pte_at() function called from vmap_pte_range() does not have such a barrier (it is usually done via the TLB flushing functions).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-