Unverified · Commit 194196de authored by openeuler-ci-bot, committed by Gitee

!1231 [sync] PR-1191: fix memory reliable related issues

Merge Pull Request from: @openeuler-sync-bot 
 

Origin pull request: 
https://gitee.com/openeuler/kernel/pulls/1191 
 
PR sync from: Wupeng Ma <mawupeng1@huawei.com>
https://mailweb.openeuler.org/hyperkitty/list/kernel@openeuler.org/message/DBYQ7YX2NZXNZFVXLHOUZRNTPCMRY75Q/ 
From: Ma Wupeng <mawupeng1@huawei.com>

Fix memory reliable related issues.

Ma Wupeng (3):
  mm: mem_reliable: Fix reliable page counter mismatch problem
  mm: mem_reliable: Update reliable page counter to zero if underflows
  efi: Disable mirror feature during crashkernel


-- 
2.25.1
 
 
Link: https://gitee.com/openeuler/kernel/pulls/1231

Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com> 
Signed-off-by: Jialin Zhang <zhangjialin11@huawei.com> 
@@ -31,6 +31,7 @@
 #include <linux/ucs2_string.h>
 #include <linux/memblock.h>
 #include <linux/security.h>
+#include <linux/crash_dump.h>
 #include <asm/early_ioremap.h>
@@ -446,6 +447,11 @@ void __init efi_find_mirror(void)
 	if (!mirrored_kernelcore)
 		return;
 
+	if (is_kdump_kernel()) {
+		mirrored_kernelcore = false;
+		return;
+	}
+
 	for_each_efi_memory_desc(md) {
 		unsigned long long start = md->phys_addr;
 		unsigned long long size = md->num_pages << EFI_PAGE_SHIFT;
......
@@ -132,8 +132,19 @@ static inline bool reliable_allow_fb_enabled(void)
 static inline void reliable_page_counter(struct page *page,
 					 struct mm_struct *mm, int val)
 {
-	if (page_reliable(page))
-		atomic_long_add(val, &mm->reliable_nr_page);
+	if (!page_reliable(page))
+		return;
+
+	atomic_long_add(val, &mm->reliable_nr_page);
+
+	/*
+	 * Update reliable page counter to zero if underflows.
+	 *
+	 * Since reliable page counter is used for debug purpose only,
+	 * there is no real function problem by doing this.
+	 */
+	if (unlikely(atomic_long_read(&mm->reliable_nr_page) < 0))
+		atomic_long_set(&mm->reliable_nr_page, 0);
 }
 
 static inline void reliable_clear_page_counter(struct mm_struct *mm)
......
@@ -887,6 +887,7 @@ copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 		get_page(page);
 		page_dup_rmap(page, false);
 		rss[mm_counter(page)]++;
+		reliable_page_counter(page, dst_vma->vm_mm, 1);
 	}
 
 	/*
......