- 25 November 2014, 1 commit
Committed by Andre Przywara
The ARM errata 819472, 826319, 827319 and 824069 define the same workaround for these hardware issues in certain Cortex-A53 parts. Use the new alternatives framework and the CPU MIDR detection to patch "cache clean" into "cache clean and invalidate" instructions if an affected CPU is detected at runtime.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
[will: add __maybe_unused to squash gcc warning]
Signed-off-by: Will Deacon <will.deacon@arm.com>
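A minimal C sketch of the idea, not the kernel's implementation: the kernel patches the instruction in place at boot via its alternatives framework rather than branching at runtime, and a real workaround also checks the MIDR variant/revision fields for the specific errata. The helper names and the Cortex-A53 part-number constant below are assumptions for illustration.

```c
#include <stdbool.h>
#include <stdint.h>

#define MIDR_PARTNUM_MASK	0x0000fff0u
#define PARTNUM_CORTEX_A53	0xd03u		/* assumed Cortex-A53 part number */

/* Read MIDR_EL1 to identify the CPU implementation. */
static inline uint32_t read_midr(void)
{
	uint64_t midr;

	asm volatile("mrs %0, midr_el1" : "=r" (midr));
	return (uint32_t)midr;
}

static inline bool affected_cortex_a53(void)
{
	return ((read_midr() & MIDR_PARTNUM_MASK) >> 4) == PARTNUM_CORTEX_A53;
}

/*
 * Clean one D-cache line to the point of coherency. On affected parts the
 * workaround substitutes "clean and invalidate" (dc civac) for the plain
 * "clean" (dc cvac) operation.
 */
static inline void clean_dcache_line(const void *addr)
{
	if (affected_cortex_a53())
		asm volatile("dc civac, %0" : : "r" (addr) : "memory");
	else
		asm volatile("dc cvac, %0" : : "r" (addr) : "memory");
}
```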
- 10 May 2014, 1 commit
Committed by Will Deacon
In order to ensure ordering and completion of inner-shareable maintenance instructions (cache and TLB) on AArch64, we can use the -ish suffix to the dmb and dsb instructions respectively. This patch updates our low-level cache and tlb maintenance routines to use the inner-shareable barrier variants where appropriate.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
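For illustration, a hedged sketch of a broadcast TLB invalidation sequence using the inner-shareable barrier variants; the function name is made up and this is the general pattern rather than a copy of the kernel's helpers.

```c
/*
 * Invalidate all EL1 TLB entries across the inner-shareable domain.
 * The -is maintenance operation is broadcast in hardware, and the
 * "dsb ish" only waits for completion within that domain instead of
 * the whole system ("dsb sy").
 */
static inline void flush_tlb_all_is_sketch(void)
{
	asm volatile(
		"dsb	ishst\n"	/* make prior page-table writes visible to walkers */
		"tlbi	vmalle1is\n"	/* broadcast TLB invalidate, EL1, inner-shareable */
		"dsb	ish\n"		/* wait for completion in the inner-shareable domain */
		"isb"			/* resynchronize the instruction stream */
		: : : "memory");
}
```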
- 08 April 2014, 1 commit
Committed by Catalin Marinas
If the buffer needing cache invalidation for inbound DMA does not start or end on a cache line aligned address, we need to use the non-destructive clean&invalidate operation. This issue was introduced by commit 7363590d (arm64: Implement coherent DMA API based on swiotlb).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Jon Medhurst (Tixy) <tixy@linaro.org>
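A hedged C sketch of the resulting invalidation logic (the kernel does this in assembly; the function and parameter names here are illustrative): partial cache lines at either end are cleaned and invalidated so that neighbouring data sharing the line is not destroyed, while lines wholly covered by the buffer can simply be invalidated.

```c
#include <stddef.h>
#include <stdint.h>

static void dma_inv_range_sketch(uintptr_t start, uintptr_t end, size_t linesize)
{
	uintptr_t mask = linesize - 1;
	uintptr_t addr;

	/* Unaligned end: clean+invalidate so the rest of that line survives. */
	if (end & mask)
		asm volatile("dc civac, %0" : : "r" (end & ~mask) : "memory");

	/* Unaligned start: same treatment, then skip past that line. */
	if (start & mask) {
		asm volatile("dc civac, %0" : : "r" (start & ~mask) : "memory");
		start = (start & ~mask) + linesize;
	}

	/* Interior lines are owned entirely by the buffer: plain invalidate. */
	for (addr = start & ~mask; addr < (end & ~mask); addr += linesize)
		asm volatile("dc ivac, %0" : : "r" (addr) : "memory");

	asm volatile("dsb sy" : : : "memory");
}
```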
- 05 April 2014, 1 commit
Committed by Catalin Marinas
With system caches for the host OS or architected caches for guest OS we cannot easily guarantee that there are no dirty or stale cache lines for the areas of memory written by the kernel during boot with the MMU off (therefore non-cacheable accesses). This patch adds the necessary cache maintenance during boot and relaxes the booting requirements.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
- 04 March 2014, 1 commit
Committed by Mark Rutland
Currently we flush the entire dcache at boot within __cpu_setup, but this is unnecessary as the booting protocol demands that the dcache is invalid and off upon entering the kernel. The presence of the cache flush only serves to hide bugs in bootloaders, and is not safe in the presence of SMP.

In an SMP boot scenario the CPUs enter coherency outside of the kernel, and the primary CPU enables its caches before bringing up secondary CPUs. Therefore if any secondary CPU has an entry in its cache (in violation of the boot protocol), the primary CPU might snoop it even if the secondary CPU's cache is disabled. The boot-time cache flush only serves to hide a firmware bug, and slows down a cpu boot unnecessarily.

This patch removes the unnecessary boot-time cache flush.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
[catalin.marinas@arm.com: make __flush_dcache_all local only]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
- 28 February 2014, 1 commit
Committed by Catalin Marinas
This patch adds support for DMA API cache maintenance on SoCs without hardware device cache coherency.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
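As a rough illustration of what such maintenance involves on a non-coherent SoC (the helper names below are assumptions, not the kernel's API): the CPU's dirty lines must be cleaned to memory before the device reads a buffer, and stale lines must be invalidated before the CPU reads data the device has written.

```c
#include <stddef.h>

enum dma_direction { TO_DEVICE, FROM_DEVICE, BIDIRECTIONAL };

/* Assumed helpers performing clean / invalidate by virtual address range. */
void dcache_clean_range(const void *addr, size_t size);
void dcache_inval_range(const void *addr, size_t size);

/* Before handing a buffer to the device. */
static void sync_for_device(const void *addr, size_t size, enum dma_direction dir)
{
	if (dir == FROM_DEVICE)
		dcache_inval_range(addr, size);	/* drop stale CPU lines */
	else
		dcache_clean_range(addr, size);	/* push CPU writes out to memory */
}

/* After the device has finished with the buffer. */
static void sync_for_cpu(const void *addr, size_t size, enum dma_direction dir)
{
	if (dir != TO_DEVICE)
		dcache_inval_range(addr, size);	/* cached lines may mask the device's writes */
}
```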
- 23 January 2014, 1 commit
Committed by Jingoo Han
Fix the function name in the comment above __flush_dcache_area, since __flush_dcache_area is the correct name. Also add the missing 'size' argument to the comment.

Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
- 14 May 2013, 1 commit
Committed by Sukanto Ghosh
The format of the lower 32 bits of the 64-bit operand to 'dc cisw' is unchanged from the ARMv7 architecture, and the upper bits are RES0. This implies that the 'way' field of the operand of 'dc cisw' occupies the bit positions [31 .. (32-A)]. Due to the use of 64-bit extended operands to 'clz', the existing implementation of __flush_dcache_all is incorrectly placing the 'way' field in the bit positions [63 .. (64-A)].

Signed-off-by: Sukanto Ghosh <sghosh@apm.com>
Tested-by: Anup Patel <anup.patel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: stable@vger.kernel.org
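A hedged sketch of how a set/way operand is assembled (the kernel builds this in assembly; the function here is illustrative and assumes more than one way per set, with A being the associativity rounded up to a power of two's exponent). Using a 32-bit clz places the way index in bits [31 .. 32-A]; a 64-bit clz would shift it into the upper word, which 'dc cisw' ignores.

```c
#include <stdint.h>

/*
 * Build the operand for 'dc cisw' (clean & invalidate by set/way).
 * 'ways' is the associativity (> 1 assumed), 'line_shift' is log2 of the
 * cache line size in bytes, 'level' is the cache level starting at 0.
 */
static inline uint64_t dc_cisw_operand(uint32_t level, uint32_t way,
					uint32_t set, uint32_t ways,
					uint32_t line_shift)
{
	/* 32-bit clz: the way field lands in bits [31 .. 32-A]. */
	uint32_t way_shift = __builtin_clz(ways - 1);

	return ((uint64_t)way << way_shift) |
	       ((uint64_t)set << line_shift) |
	       ((uint64_t)level << 1);
}
```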
- 17 September 2012, 1 commit
Committed by Catalin Marinas
The patch adds functionality required for cache maintenance. The AArch64 architecture mandates non-aliasing VIPT or PIPT D-cache and VIPT (may have aliases) or ASID-tagged VIVT I-cache. Cache maintenance operations are automatically broadcast in hardware between CPUs.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
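As one example of the kind of maintenance this covers, a hedged sketch of synchronising the I-cache with the D-cache after writing instructions; the function name and fixed line sizes are assumptions (real code derives the line sizes from CTR_EL0), and since maintenance is broadcast in hardware no cross-CPU IPIs are needed.

```c
#include <stdint.h>

#define DCACHE_LINE	64	/* assumed D-cache line size in bytes */
#define ICACHE_LINE	64	/* assumed I-cache line size in bytes */

/* Make newly written instructions in [start, end) visible to instruction fetch. */
static void sync_icache_range_sketch(uintptr_t start, uintptr_t end)
{
	uintptr_t addr;

	/* Clean the D-cache to the point of unification so fetches see the new code. */
	for (addr = start & ~(uintptr_t)(DCACHE_LINE - 1); addr < end; addr += DCACHE_LINE)
		asm volatile("dc cvau, %0" : : "r" (addr) : "memory");
	asm volatile("dsb ish" : : : "memory");

	/* Invalidate the I-cache for the same range, then resynchronize. */
	for (addr = start & ~(uintptr_t)(ICACHE_LINE - 1); addr < end; addr += ICACHE_LINE)
		asm volatile("ic ivau, %0" : : "r" (addr) : "memory");
	asm volatile("dsb ish" : : : "memory");
	asm volatile("isb" : : : "memory");
}
```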