Commit e36df347 authored by Zhou Guanghui, committed by Yang Yingliang

ascend: mm/hugetlb: Enable charge migrate hugepages

ascend inclusion
category: feature
Bugzilla: N/A
CVE: N/A

-------------------------------------------------------------

When a driver obtains huge pages through alloc_huge_page_node(), it
falls back to allocating migrate hugepages once the reserved hugepages
are used up. We want these migrate hugepages to be charged to a memcg
so that their memory usage can be limited.

Add the __GFP_ACCOUNT flag to the gfp mask before allocating migrate
hugepages. Then, if a memcg has been set via memalloc_use_memcg(), the
allocated migrate hugepages are charged to that memcg.
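As a non-runnable sketch of the intended usage (kernel code; `driver_get_hugepage` is a hypothetical helper, and memalloc_use_memcg()/memalloc_unuse_memcg() are assumed to be the scoped-accounting helpers available in this kernel generation), a driver could scope the charge like this:

```c
/*
 * Sketch only: charge fallback migrate hugepages to a given memcg.
 * Assumes the pre-5.10 scoped-accounting API memalloc_use_memcg();
 * driver_get_hugepage() is a hypothetical wrapper, not part of the patch.
 */
struct page *driver_get_hugepage(struct hstate *h, int nid,
				 struct mem_cgroup *memcg)
{
	struct page *page;

	memalloc_use_memcg(memcg);	/* __GFP_ACCOUNT allocs charge here */
	page = alloc_huge_page_node(h, nid);
	memalloc_unuse_memcg();

	return page;
}
```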
Signed-off-by: Zhou Guanghui <zhouguanghui1@huawei.com>
Reviewed-by: Ding Tianhong <dingtianhong@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Parent bbf2c43b
@@ -1359,6 +1359,17 @@ config ASCEND_IOPF_HIPRI
 	  CPU which processes IOPF work is the same as that which processes IOPF
 	  event interrupts.
 
+config ASCEND_CHARGE_MIGRATE_HUGEPAGES
+	bool "Enable support for migrate hugepages"
+	default y
+	help
+	  When reserved hugepages are used up, we attempt to allocate
+	  migrate hugepages. We expect that the migrate hugepages that are
+	  allocated can be charged in memcg to limit the memory usage.
+
+	  This option enables charging migrate hugepages to a memory
+	  cgroup.
+
 endif
 endmenu
@@ -54,6 +54,7 @@ static struct hstate * __initdata parsed_hstate;
 static unsigned long __initdata default_hstate_max_huge_pages;
 static unsigned long __initdata default_hstate_size;
 static bool __initdata parsed_valid_hugepagesz = true;
+static int enable_charge_mighp __read_mostly;
 
 /*
  * Protects updates to hugepage_freelists, hugepage_activelist, nr_huge_pages,
@@ -1710,8 +1711,12 @@ struct page *alloc_huge_page_node(struct hstate *h, int nid)
 	page = dequeue_huge_page_nodemask(h, gfp_mask, nid, NULL, NULL);
 	spin_unlock(&hugetlb_lock);
 
-	if (!page)
+	if (!page) {
+		if (enable_charge_mighp)
+			gfp_mask |= __GFP_ACCOUNT;
+
 		page = alloc_migrate_huge_page(h, gfp_mask, nid, NULL);
+	}
 
 	return page;
 }
@@ -5225,4 +5230,18 @@ int hugetlb_insert_hugepage_pte_by_pa(struct mm_struct *mm,
 	return 0;
 }
 EXPORT_SYMBOL_GPL(hugetlb_insert_hugepage_pte_by_pa);
+
+#ifdef CONFIG_ASCEND_CHARGE_MIGRATE_HUGEPAGES
+
+static int __init ascend_enable_charge_migrate_hugepages(char *s)
+{
+	enable_charge_mighp = 1;
+
+	pr_info("Ascend enable charge migrate hugepage\n");
+
+	return 1;
+}
+__setup("enable_charge_mighp", ascend_enable_charge_migrate_hugepages);
+
+#endif
 #endif
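With CONFIG_ASCEND_CHARGE_MIGRATE_HUGEPAGES=y, the `__setup` handler above means the charging behavior stays off by default and is switched on by adding the bare parameter to the kernel command line, e.g.:

```
enable_charge_mighp
```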