Commit bdd7a0dc authored by Ivan Malov, committed by Zheng Zengkai

xsk: Clear page contiguity bit when unmapping pool

stable inclusion
from stable-v5.10.130
commit 5561bddd0599aa94a20036db7d7c495556771757
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I5YRJO

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=5561bddd0599aa94a20036db7d7c495556771757

--------------------------------

[ Upstream commit 512d1999 ]

When a XSK pool gets mapped, xp_check_dma_contiguity() adds bit 0x1
to pages' DMA addresses that go in ascending order and at 4K stride.

The problem is that the bit does not get cleared before doing unmap.
As a result, a lot of warnings from iommu_dma_unmap_page() are seen
in dmesg, which indicates that lookups by iommu_iova_to_phys() fail.

Fixes: 2b43470a ("xsk: Introduce AF_XDP buffer allocation API")
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>
Link: https://lore.kernel.org/bpf/20220628091848.534803-1-ivan.malov@oktetlabs.ru
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
Parent 893e89a4
@@ -318,6 +318,7 @@ static void __xp_dma_unmap(struct xsk_dma_map *dma_map, unsigned long attrs)
 	for (i = 0; i < dma_map->dma_pages_cnt; i++) {
 		dma = &dma_map->dma_pages[i];
 		if (*dma) {
+			*dma &= ~XSK_NEXT_PG_CONTIG_MASK;
 			dma_unmap_page_attrs(dma_map->dev, *dma, PAGE_SIZE,
 					     DMA_BIDIRECTIONAL, attrs);
 			*dma = 0;