commit a248b13b authored by Will Deacon, committed by Russell King

ARM: 6941/1: cache: ensure MVA is cacheline aligned in flush_kern_dcache_area

The v6 and v7 implementations of flush_kern_dcache_area do not align
the passed MVA to the size of a cacheline in the data cache. If a
misaligned address is used, only a subset of the requested area will
be flushed. This has been observed to cause failures in SMP boot where
the secondary_data initialised by the primary CPU is not cacheline
aligned, causing the secondary CPUs to read incorrect values for their
pgd and stack pointers.

This patch ensures that the base address is cacheline aligned before
flushing the d-cache.

Cc: <stable@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
parent a85fab1c
@@ -176,6 +176,7 @@ ENDPROC(v6_coherent_kern_range)
  */
 ENTRY(v6_flush_kern_dcache_area)
 	add	r1, r0, r1
+	bic	r0, r0, #D_CACHE_LINE_SIZE - 1
 1:
 #ifdef HARVARD_CACHE
 	mcr	p15, 0, r0, c7, c14, 1		@ clean & invalidate D line
...
@@ -221,6 +221,8 @@ ENDPROC(v7_coherent_user_range)
 ENTRY(v7_flush_kern_dcache_area)
 	dcache_line_size r2, r3
 	add	r1, r0, r1
+	sub	r3, r2, #1
+	bic	r0, r0, r3
 1:
 	mcr	p15, 0, r0, c7, c14, 1		@ clean & invalidate D line / unified line
 	add	r0, r0, r2
...