Commit ebf81a93 authored by Catalin Marinas

arm64: Fix DMA range invalidation for cache line unaligned buffers

If the buffer needing cache invalidation for inbound DMA does not start
or end on a cache line aligned address, we need to use the non-destructive
clean&invalidate operation. This issue was introduced by commit
7363590d (arm64: Implement coherent DMA API based on swiotlb).
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Jon Medhurst (Tixy) <tixy@linaro.org>
Parent d253b440
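For readers less fluent in AArch64 assembly, the following is a conceptual C sketch of the fixed __dma_inv_range logic. It is an illustration only, not part of the commit: dcache_line_size(), dc_ivac(), dc_civac() and dsb_sy() are hypothetical inline-assembly wrappers around the CTR_EL0 read and the DC IVAC, DC CIVAC and DSB SY instructions, and since the cache maintenance instructions are privileged this could only run in kernel context.

#include <stdint.h>

/* Smallest D-cache line size from CTR_EL0 (DminLine, in 4-byte words),
 * mirroring the kernel's dcache_line_size macro. */
static inline uintptr_t dcache_line_size(void)
{
	uint64_t ctr;

	asm volatile("mrs %0, ctr_el0" : "=r" (ctr));
	return 4UL << ((ctr >> 16) & 0xf);
}

/* Hypothetical wrappers for the maintenance instructions. */
static inline void dc_ivac(uintptr_t addr)	/* invalidate line */
{
	asm volatile("dc ivac, %0" : : "r" (addr) : "memory");
}

static inline void dc_civac(uintptr_t addr)	/* clean & invalidate line */
{
	asm volatile("dc civac, %0" : : "r" (addr) : "memory");
}

static inline void dsb_sy(void)			/* full-system barrier */
{
	asm volatile("dsb sy" : : : "memory");
}

/* Invalidate [start, end) for inbound DMA.  Cache lines that the buffer
 * only partially covers are cleaned & invalidated so that unrelated data
 * sharing those lines is written back rather than destroyed. */
static void dma_inv_range(uintptr_t start, uintptr_t end)
{
	uintptr_t line = dcache_line_size();
	uintptr_t mask = line - 1;

	if (end & mask)				/* buffer ends mid-line */
		dc_civac(end & ~mask);		/* preserve the tail of that line */
	end &= ~mask;

	if (start & mask)			/* buffer starts mid-line */
		dc_civac(start & ~mask);	/* preserve the head of that line */
	else
		dc_ivac(start);			/* whole line: plain invalidate */
	start &= ~mask;

	/* Interior lines belong entirely to the buffer and can be
	 * discarded outright. */
	for (start += line; start < end; start += line)
		dc_ivac(start);

	dsb_sy();				/* wait for completion */
}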
@@ -183,12 +183,19 @@ ENTRY(__inval_cache_range)
 __dma_inv_range:
 	dcache_line_size x2, x3
 	sub	x3, x2, #1
-	bic	x0, x0, x3
+	tst	x1, x3				// end cache line aligned?
 	bic	x1, x1, x3
-1:	dc	ivac, x0			// invalidate D / U line
-	add	x0, x0, x2
+	b.eq	1f
+	dc	civac, x1			// clean & invalidate D / U line
+1:	tst	x0, x3				// start cache line aligned?
+	bic	x0, x0, x3
+	b.eq	2f
+	dc	civac, x0			// clean & invalidate D / U line
+	b	3f
+2:	dc	ivac, x0			// invalidate D / U line
+3:	add	x0, x0, x2
 	cmp	x0, x1
-	b.lo	1b
+	b.lo	2b
 	dsb	sy
 	ret
 ENDPROC(__inval_cache_range)
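The asymmetry in the new code is the point of the fix: only the first and last cache lines of the range can be shared with data outside the buffer, so only those are handled with the non-destructive DC CIVAC (clean & invalidate); every interior line is wholly owned by the buffer and is still discarded with DC IVAC. The loop accordingly branches back to the plain invalidate at label 2 (b.lo 2b instead of b.lo 1b), and the trailing DSB SY ensures all maintenance operations have completed before the routine returns.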