Commit 6a46079c authored by Andi Kleen, committed by Andi Kleen

HWPOISON: The high level memory error handler in the VM v7

Add the high level memory handler that poisons pages
that got corrupted by hardware (typically by a two-bit flip in a DIMM
or a cache) at the Linux level. The goal is to prevent everyone
from accessing these pages in the future.

This is done at the VM level by marking a page hwpoisoned
and taking the appropriate action based on the type of page
it is.

The code that does this is portable and lives in mm/memory-failure.c

To quote the overview comment:

High level machine check handler. Handles pages reported by the
hardware as being corrupted, usually due to a 2-bit ECC memory or cache
failure.

This focuses on pages detected as corrupted in the background.
When the current CPU tries to consume the corruption, the currently
running process can just be killed directly instead. This implies
that if the error cannot be handled for some reason, it's safe to
just ignore it, because no corruption has been consumed yet; when
that happens, another machine check will be raised instead.

Handles page cache pages in various states. The tricky part
here is that we can access any page asynchronously with respect to
other VM users, because memory failures could happen anytime and
anywhere, possibly violating some of their assumptions. This is why
this code has to be extremely careful. Generally it tries to use
normal locking rules, i.e. take the standard locks, even if that means
the error handling potentially takes a long time.

Some of the operations here are somewhat inefficient and have
non-linear algorithmic complexity, because the data structures have not
been optimized for this case. This is in particular the case
for the mapping from a VMA to a process. Since this case is expected
to be rare, we hope we can get away with it.

There are in principle two strategies for killing processes on poison:
- just unmap the data and wait for an actual reference before
  killing
- kill as soon as the corruption is detected.
Both have advantages and disadvantages and should be used
in different situations. Right now both are implemented and can
be switched with the new sysctl vm.memory_failure_early_kill.
The default is early kill.
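For illustration only (not part of this patch): a minimal userspace
sketch that flips the policy at runtime by writing to the new sysctl
file; the /proc path follows from the documentation hunk below.

/* Hypothetical helper, not from this patch: select the kill policy
 * by writing to the sysctl file this patch introduces. */
#include <stdio.h>
#include <stdlib.h>

static void set_early_kill(int enable)
{
	FILE *f = fopen("/proc/sys/vm/memory_failure_early_kill", "w");

	if (!f) {
		perror("memory_failure_early_kill");
		exit(EXIT_FAILURE);
	}
	/* 1 = kill mappers as soon as poison is found, 0 = late kill */
	fprintf(f, "%d\n", enable);
	fclose(f);
}

int main(void)
{
	set_early_kill(0);	/* switch to the "late kill" strategy */
	return 0;
}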

The patch does some rmap data structure walking on its own to collect the
processes to kill. This is unusual because normally all rmap data structure
knowledge is in rmap.c only. I put it here for now to keep
everything together, and rmap knowledge has been seeping out anyway.

Includes contributions from Johannes Weiner, Chris Mason, Fengguang Wu,
Nick Piggin (who did a lot of great work) and others.

Cc: npiggin@suse.de
Cc: riel@redhat.com
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Rik van Riel <riel@redhat.com>
Reviewed-by: Hidehiro Kawai <hidehiro.kawai.ez@hitachi.com>
Parent 4db96cf0
@@ -32,6 +32,8 @@ Currently, these files are in /proc/sys/vm:
 - legacy_va_layout
 - lowmem_reserve_ratio
 - max_map_count
+- memory_failure_early_kill
+- memory_failure_recovery
 - min_free_kbytes
 - min_slab_ratio
 - min_unmapped_ratio
@@ -53,7 +55,6 @@ Currently, these files are in /proc/sys/vm:
 - vfs_cache_pressure
 - zone_reclaim_mode
 
-
 ==============================================================
 
 block_dump
@@ -275,6 +276,44 @@ e.g., up to one or two maps per allocation.
 The default value is 65536.
 
+=============================================================
+
+memory_failure_early_kill:
+
+Control how to kill processes when uncorrected memory error (typically
+a 2bit error in a memory module) is detected in the background by hardware
+that cannot be handled by the kernel. In some cases (like the page
+still having a valid copy on disk) the kernel will handle the failure
+transparently without affecting any applications. But if there is
+no other uptodate copy of the data it will kill to prevent any data
+corruptions from propagating.
+
+1: Kill all processes that have the corrupted and not reloadable page mapped
+as soon as the corruption is detected. Note this is not supported
+for a few types of pages, like kernel internally allocated data or
+the swap cache, but works for the majority of user pages.
+
+0: Only unmap the corrupted page from all processes and only kill a process
+who tries to access it.
+
+The kill is done using a catchable SIGBUS with BUS_MCEERR_AO, so processes can
+handle this if they want to.
+
+This is only active on architectures/platforms with advanced machine
+check handling and depends on the hardware capabilities.
+
+Applications can override this setting individually with the PR_MCE_KILL prctl
+
+==============================================================
+
+memory_failure_recovery
+
+Enable memory failure recovery (when supported by the platform)
+
+1: Attempt recovery.
+
+0: Always panic on a memory failure.
+
 ==============================================================
 
 min_free_kbytes:
......
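To illustrate the userspace side documented above, here is a hedged
sketch of a process that opts in to early kill for itself and catches
the advisory SIGBUS; the numeric PR_MCE_KILL argument values are
assumptions about the prctl interface referenced in the text.

/* Sketch: handle the catchable SIGBUS (BUS_MCEERR_AO) described above
 * and override the global policy for this process via PR_MCE_KILL.
 * The numeric fallbacks below are assumptions for older headers. */
#include <signal.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/prctl.h>

#ifndef PR_MCE_KILL
#define PR_MCE_KILL	33
#endif
#ifndef BUS_MCEERR_AO
#define BUS_MCEERR_AO	5
#endif

static void mce_handler(int sig, siginfo_t *si, void *ctx)
{
	static const char msg[] = "page poisoned, attempting recovery\n";

	if (si->si_code == BUS_MCEERR_AO) {
		/* si->si_addr is the poisoned page: the application
		 * could drop or rebuild the data that lived there. */
		write(STDERR_FILENO, msg, sizeof(msg) - 1);
	} else {
		_exit(EXIT_FAILURE);	/* corruption already consumed */
	}
}

int main(void)
{
	struct sigaction sa;

	memset(&sa, 0, sizeof(sa));
	sa.sa_sigaction = mce_handler;
	sa.sa_flags = SA_SIGINFO;
	sigaction(SIGBUS, &sa, NULL);
	/* Ask for early kill for this process only:
	 * arg2 = set (1), arg3 = early (1) -- assumed values. */
	prctl(PR_MCE_KILL, 1, 1, 0, 0);
	/* ... application work ... */
	return 0;
}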
@@ -95,7 +95,11 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
 		"Committed_AS:   %8lu kB\n"
 		"VmallocTotal:   %8lu kB\n"
 		"VmallocUsed:    %8lu kB\n"
-		"VmallocChunk:   %8lu kB\n",
+		"VmallocChunk:   %8lu kB\n"
+#ifdef CONFIG_MEMORY_FAILURE
+		"HardwareCorrupted: %8lu kB\n"
+#endif
+		,
 		K(i.totalram),
 		K(i.freeram),
 		K(i.bufferram),
@@ -140,6 +144,9 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
 		(unsigned long)VMALLOC_TOTAL >> 10,
 		vmi.used >> 10,
 		vmi.largest_chunk >> 10
+#ifdef CONFIG_MEMORY_FAILURE
+		,atomic_long_read(&mce_bad_pages) << (PAGE_SHIFT - 10)
+#endif
 		);
 
 	hugetlb_report_meminfo(m);
......
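The new counter is exported like any other meminfo field; a small
reader, for illustration (the line only exists on kernels built with
CONFIG_MEMORY_FAILURE):

/* Print the HardwareCorrupted counter added above; absent on kernels
 * built without CONFIG_MEMORY_FAILURE. */
#include <stdio.h>

int main(void)
{
	char line[256];
	unsigned long kb;
	FILE *f = fopen("/proc/meminfo", "r");

	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f)) {
		if (sscanf(line, "HardwareCorrupted: %lu kB", &kb) == 1)
			printf("%lu kB poisoned (mce_bad_pages)\n", kb);
	}
	fclose(f);
	return 0;
}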
@@ -1309,5 +1309,12 @@ void vmemmap_populate_print_last(void);
 extern int account_locked_memory(struct mm_struct *mm, struct rlimit *rlim,
 				size_t size);
 extern void refund_locked_memory(struct mm_struct *mm, size_t size);
+
+extern void memory_failure(unsigned long pfn, int trapno);
+extern int __memory_failure(unsigned long pfn, int trapno, int ref);
+extern int sysctl_memory_failure_early_kill;
+extern int sysctl_memory_failure_recovery;
+extern atomic_long_t mce_bad_pages;
+
 #endif /* __KERNEL__ */
 #endif /* _LINUX_MM_H */
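These declarations are the interface the machine check code is
expected to call into; a hedged sketch of a caller follows (the actual
architecture wiring is not part of this patch, so this is illustrative
only):

/* Illustrative only: how arch machine check code might hand a
 * corrupted physical address to the new handler. */
static void report_corrupted_page(unsigned long paddr, int trapno)
{
	/* memory_failure() unmaps the page, kills mappers according to
	 * vm.memory_failure_early_kill, and bumps mce_bad_pages. */
	memory_failure(paddr >> PAGE_SHIFT, trapno);
}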
@@ -129,6 +129,7 @@ int try_to_munlock(struct page *);
  */
 struct anon_vma *page_lock_anon_vma(struct page *page);
 void page_unlock_anon_vma(struct anon_vma *anon_vma);
+int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma);
 
 #else	/* !CONFIG_MMU */
......
@@ -1372,6 +1372,31 @@ static struct ctl_table vm_table[] = {
 		.mode		= 0644,
 		.proc_handler	= &scan_unevictable_handler,
 	},
+#ifdef CONFIG_MEMORY_FAILURE
+	{
+		.ctl_name	= CTL_UNNUMBERED,
+		.procname	= "memory_failure_early_kill",
+		.data		= &sysctl_memory_failure_early_kill,
+		.maxlen		= sizeof(sysctl_memory_failure_early_kill),
+		.mode		= 0644,
+		.proc_handler	= &proc_dointvec_minmax,
+		.strategy	= &sysctl_intvec,
+		.extra1		= &zero,
+		.extra2		= &one,
+	},
+	{
+		.ctl_name	= CTL_UNNUMBERED,
+		.procname	= "memory_failure_recovery",
+		.data		= &sysctl_memory_failure_recovery,
+		.maxlen		= sizeof(sysctl_memory_failure_recovery),
+		.mode		= 0644,
+		.proc_handler	= &proc_dointvec_minmax,
+		.strategy	= &sysctl_intvec,
+		.extra1		= &zero,
+		.extra2		= &one,
+	},
+#endif
 	/*
 	 * NOTE: do not add new entries to this table unless you have read
 	 * Documentation/sysctl/ctl_unnumbered.txt
......
@@ -233,6 +233,16 @@ config DEFAULT_MMAP_MIN_ADDR
 	  /proc/sys/vm/mmap_min_addr tunable.
 
+config MEMORY_FAILURE
+	depends on MMU
+	depends on X86_MCE
+	bool "Enable recovery from hardware memory errors"
+	help
+	  Enables code to recover from some memory failures on systems
+	  with MCA recovery. This allows a system to continue running
+	  even when some of its memory has uncorrected errors. This requires
+	  special hardware support and typically ECC memory.
+
 config NOMMU_INITIAL_TRIM_EXCESS
 	int "Turn on mmap() excess space trimming before booting"
 	depends on !MMU
......
@@ -40,5 +40,6 @@ obj-$(CONFIG_SMP) += allocpercpu.o
 endif
 obj-$(CONFIG_QUICKLIST) += quicklist.o
 obj-$(CONFIG_CGROUP_MEM_RES_CTLR) += memcontrol.o page_cgroup.o
+obj-$(CONFIG_MEMORY_FAILURE) += memory-failure.o
 obj-$(CONFIG_DEBUG_KMEMLEAK) += kmemleak.o
 obj-$(CONFIG_DEBUG_KMEMLEAK_TEST) += kmemleak-test.o
@@ -104,6 +104,10 @@
  *
  *  ->task->proc_lock
  *    ->dcache_lock		(proc_pid_lookup)
+ *
+ *  (code doesn't rely on that order, so you could switch it around)
+ *  ->tasklist_lock		(memory_failure, collect_procs_ao)
+ *    ->i_mmap_lock
  */
 
 /*
......
This diff is collapsed.
@@ -36,6 +36,11 @@
  *               mapping->tree_lock (widely used, in set_page_dirty,
  *                         in arch-dependent flush_dcache_mmap_lock,
  *                         within inode_lock in __sync_single_inode)
+ *
+ * (code doesn't rely on that order so it could be switched around)
+ * ->tasklist_lock
+ *   anon_vma->lock (memory_failure, collect_procs_anon)
+ *     pte map lock
  */
 
 #include <linux/mm.h>
@@ -311,7 +316,7 @@ pte_t *page_check_address(struct page *page, struct mm_struct *mm,
  * if the page is not mapped into the page tables of this VMA.  Only
  * valid for normal file or anonymous VMAs.
  */
-static int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
+int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
 {
 	unsigned long address;
 	pte_t *pte;
......
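With the static dropped, memory-failure.c can use page_mapped_in_vma()
while doing its own rmap walk; a much-simplified sketch in the spirit
of the collect_procs_anon named in the lock comments, assuming the
2.6.31-era anon_vma layout (head list / anon_vma_node links). The real
walker also takes anon_vma->lock and tasklist_lock and honors the
early-kill setting.

/* Simplified sketch of the rmap walk, not the actual collect_procs_anon:
 * visit each VMA on one anon_vma and test whether it maps the bad page. */
static void for_each_mapping_vma(struct page *page, struct anon_vma *av)
{
	struct vm_area_struct *vma;

	list_for_each_entry(vma, &av->head, anon_vma_node) {
		if (page_mapped_in_vma(page, vma)) {
			/* vma->vm_mm identifies an address space whose
			 * owner task is a candidate for the kill list */
		}
	}
}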