Commit b50858ce authored by Kirill A. Shutemov, committed by Ingo Molnar

x86/mm/vmalloc: Add 5-level paging support

Modify vmalloc_fault() to handle the additional page table level.

With 4-level paging, copying happens at the p4d level, since pgd_none() is
always false when p4d_t is folded.
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-arch@vger.kernel.org
Cc: linux-mm@kvack.org
Link: http://lkml.kernel.org/r/20170313143309.16020-6-kirill.shutemov@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Parent ea3b5e60
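The "pgd_none() always false" behaviour the commit message relies on comes from the folded-p4d helpers (in the kernel these live in include/asm-generic/pgtable-nop4d.h, where pgd_none() is hard-wired to return 0 and p4d_offset() simply casts the pgd slot). Below is a minimal stand-alone userspace sketch of that idea; the toy types and helpers only borrow the kernel's names for illustration and are not the real implementation.

/*
 * Toy userspace model (not kernel code) of the folded-p4d case: the p4d
 * level does not really exist, so a "p4d entry" is just the pgd entry
 * viewed through a different type.
 */
#include <stdio.h>

typedef struct { unsigned long pgd; } pgd_t;
typedef struct { pgd_t pgd; } p4d_t;    /* folded: p4d_t wraps pgd_t */

/* With a folded p4d, the pgd level never reports an empty entry... */
static int pgd_none(pgd_t pgd) { (void)pgd; return 0; }

/* ...and p4d_offset() just reinterprets the pgd slot as a p4d slot. */
static p4d_t *p4d_offset(pgd_t *pgd, unsigned long address)
{
        (void)address;
        return (p4d_t *)pgd;
}

/* The real emptiness check therefore has to happen at the p4d level. */
static int p4d_none(p4d_t p4d) { return p4d.pgd.pgd == 0; }

int main(void)
{
        pgd_t empty = { 0 };

        /* Reports 0 ("not empty") even though the entry is empty. */
        printf("pgd_none(empty) = %d\n", pgd_none(empty));

        /* The p4d-level check sees the truth. */
        printf("p4d_none(empty) = %d\n", p4d_none(*p4d_offset(&empty, 0)));
        return 0;
}

Because p4d_offset() on a folded configuration returns the pgd slot itself, the new set_p4d() copy in the hunk below is what actually populates the top-level entry on 4-level kernels, which is why the commit message says the copying happens at the p4d level.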
@@ -435,6 +435,7 @@ void vmalloc_sync_all(void)
 static noinline int vmalloc_fault(unsigned long address)
 {
        pgd_t *pgd, *pgd_ref;
+       p4d_t *p4d, *p4d_ref;
        pud_t *pud, *pud_ref;
        pmd_t *pmd, *pmd_ref;
        pte_t *pte, *pte_ref;
@@ -458,17 +459,37 @@ static noinline int vmalloc_fault(unsigned long address)
        if (pgd_none(*pgd)) {
                set_pgd(pgd, *pgd_ref);
                arch_flush_lazy_mmu_mode();
-       } else {
+       } else if (CONFIG_PGTABLE_LEVELS > 4) {
+               /*
+                * With folded p4d, pgd_none() is always false, so the pgd may
+                * point to an empty page table entry and pgd_page_vaddr()
+                * will return garbage.
+                *
+                * We will do the correct sanity check on the p4d level.
+                */
                BUG_ON(pgd_page_vaddr(*pgd) != pgd_page_vaddr(*pgd_ref));
        }
 
+       /* With 4-level paging, copying happens on the p4d level. */
+       p4d = p4d_offset(pgd, address);
+       p4d_ref = p4d_offset(pgd_ref, address);
+       if (p4d_none(*p4d_ref))
+               return -1;
+
+       if (p4d_none(*p4d)) {
+               set_p4d(p4d, *p4d_ref);
+               arch_flush_lazy_mmu_mode();
+       } else {
+               BUG_ON(p4d_pfn(*p4d) != p4d_pfn(*p4d_ref));
+       }
+
        /*
         * Below here mismatches are bugs because these lower tables
         * are shared:
         */
-       pud = pud_offset(pgd, address);
-       pud_ref = pud_offset(pgd_ref, address);
+       pud = pud_offset(p4d, address);
+       pud_ref = pud_offset(p4d_ref, address);
        if (pud_none(*pud_ref))
                return -1;
...