Commit 135e9232 authored by Hongbo Yao, committed by Xie XiuQi

ktask: change the chunk size for some ktask thread functions

hulk inclusion
category: feature
bugzilla: 13228
CVE: NA

---------------------------

This patch fixes two issues in the original series:
1) PMD_SIZE chunks have made thread finishing times too spread out
in some cases, so KTASK_MEM_CHUNK (128M) seems to be a reasonable compromise.
2) If hugepagesz=1G, then pages_per_huge_page = 1G / 4K = 262144. Since
clear_huge_page() passes its range in pages, using the byte-sized
KTASK_MEM_CHUNK as the minimum chunk size leaves only a single ktask
thread, which does not improve the performance of clearing gigantic pages.
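The chunk arithmetic behind point 2) can be sketched as follows. This is an illustrative model only: `ktask_nr_chunks` is a hypothetical helper approximating how ktask splits a range by its minimum chunk size, and defining KTASK_PTE_MINCHUNK as KTASK_MEM_CHUNK expressed in pages is an assumption, not the kernel's definition.

```python
# Illustrative sketch only: approximate how ktask derives its chunk count
# by dividing the work range by the minimum chunk size (never below one).

PAGE_SIZE = 4096
KTASK_MEM_CHUNK = 128 << 20                        # 128M, in bytes
KTASK_PTE_MINCHUNK = KTASK_MEM_CHUNK // PAGE_SIZE  # assumed: 128M in pages

def ktask_nr_chunks(range_size, min_chunk):
    """At most range_size // min_chunk chunks, but never fewer than one."""
    return max(1, range_size // min_chunk)

# clear_huge_page() passes its range in pages (pages_per_huge_page), so a
# byte-sized minimum chunk swallows the whole range into a single chunk:
pages_per_huge_page = (1 << 30) // PAGE_SIZE       # 262144 pages for a 1G page
print(ktask_nr_chunks(pages_per_huge_page, KTASK_MEM_CHUNK))     # -> 1
print(ktask_nr_chunks(pages_per_huge_page, KTASK_PTE_MINCHUNK))  # -> 8
```

With the page-sized minimum chunk the 1G page splits into several chunks, so multiple ktask threads can clear it in parallel.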
Signed-off-by: Hongbo Yao <yaohongbo@huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com>
Tested-by: Hongbo Yao <yaohongbo@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Parent 4733c59f
@@ -1120,7 +1120,7 @@ static int vfio_pin_map_dma(struct vfio_iommu *iommu, struct vfio_dma *dma,
int ret = 0;
struct vfio_pin_args args = { iommu, dma, limit, current->mm };
/* Stay on PMD boundary in case THP is being used. */
-	DEFINE_KTASK_CTL(ctl, vfio_pin_map_dma_chunk, &args, PMD_SIZE);
+	DEFINE_KTASK_CTL(ctl, vfio_pin_map_dma_chunk, &args, KTASK_MEM_CHUNK);
ktask_ctl_set_undo_func(&ctl, vfio_pin_map_dma_undo);
ret = ktask_run((void *)dma->vaddr, map_size, &ctl);
@@ -4489,7 +4489,7 @@ void clear_huge_page(struct page *page,
struct ktask_node node = {0, pages_per_huge_page,
page_to_nid(page)};
DEFINE_KTASK_CTL(ctl, clear_gigantic_page_chunk, &args,
-			KTASK_MEM_CHUNK);
+			KTASK_PTE_MINCHUNK);
ktask_run_numa(&node, 1, &ctl);
return;