Commit bb784b81 authored by Zhang Jian, committed by Yang Yingliang

mm: export collect_procs()

ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4OXH9
CVE: NA

-------------------------------------------------

collect_procs() collects the processes that have the given page mapped. Export it so that other kernel code can use it.

@page: if the page is part of a hugepage/compound page, the caller must
use compound_head() to get its head page (otherwise the kernel may panic),
and the page must be locked.

@to_kill: the function fills in a linked list; once the caller is done with
the list, every entry on it must be freed with kfree().

@force_early: set it to true to find all processes; if it is false, the
function only returns processes that have the PF_MCE_PROCESS or PF_MCE_EARLY
flag set.

Limits: if force_early is true, sysctl_memory_failure_early_kill has no effect.
If force_early is false, no process has the PF_MCE_PROCESS or PF_MCE_EARLY flag
set, and sysctl_memory_failure_early_kill is enabled, the function returns all
tasks regardless of whether they have the PF_MCE_PROCESS or PF_MCE_EARLY flags.
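
For illustration only, a minimal sketch of how a caller might use the exported
collect_procs() under the rules above (head page, page locked, caller frees the
list). It assumes the struct to_kill definition from mm/memory-failure.c (its
nd list node and tsk task pointer) is visible to the caller; report_mappers()
is a hypothetical helper, not part of this patch:

#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/sched.h>
#include <linux/printk.h>

/* Hypothetical caller; assumes struct to_kill (nd, tsk) is visible here. */
static void report_mappers(struct page *page)
{
	struct page *head = compound_head(page);  /* always work on the head page */
	struct to_kill *tk, *next;
	LIST_HEAD(tokill);

	lock_page(head);                          /* the page must be locked */
	collect_procs(head, &tokill, 1);          /* force_early=1: collect every mapper */
	unlock_page(head);

	list_for_each_entry_safe(tk, next, &tokill, nd) {
		pr_info("page mapped by %s (pid %d)\n",
			tk->tsk->comm, tk->tsk->pid);
		list_del(&tk->nd);
		kfree(tk);                        /* caller must free every entry */
	}
}
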
Signed-off-by: Zhang Jian <zhangjian210@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
parent 5dd1df36
@@ -2866,6 +2866,8 @@ extern int sysctl_memory_failure_recovery;
 extern void shake_page(struct page *p, int access);
 extern atomic_long_t num_poisoned_pages __read_mostly;
 extern int soft_offline_page(struct page *page, int flags);
+extern void collect_procs(struct page *page, struct list_head *tokill,
+		int force_early);
 /*
......
@@ -512,7 +512,7 @@ static void collect_procs_file(struct page *page, struct list_head *to_kill,
  * First preallocate one tokill structure outside the spin locks,
  * so that we can kill at least one process reasonably reliable.
  */
-static void collect_procs(struct page *page, struct list_head *tokill,
+void collect_procs(struct page *page, struct list_head *tokill,
 		int force_early)
 {
 	struct to_kill *tk;
@@ -529,6 +529,7 @@ static void collect_procs(struct page *page, struct list_head *tokill,
 	collect_procs_file(page, tokill, &tk, force_early);
 	kfree(tk);
 }
+EXPORT_SYMBOL_GPL(collect_procs);
 static const char *action_name[] = {
 	[MF_IGNORED] = "Ignored",
......