Commit d6f81a09 authored by Mike Kravetz, committed by Zheng Zengkai

hugetlb: fix copy_huge_page_from_user contig page struct assumption

stable inclusion
from stable-5.10.20
commit 32e970488f49bf7cd4dc175125fda59f4649c966
bugzilla: 50608

--------------------------------

commit 3272cfc2 upstream.

page structs are not guaranteed to be contiguous for gigantic pages.  The
routine copy_huge_page_from_user can encounter gigantic pages, yet it
assumes page structs are contiguous when copying pages from user space.

Since page structs for the target gigantic page are not contiguous, the
data copied from user space could overwrite other pages not associated
with the gigantic page and cause data corruption.

Non-contiguous page structs are generally not an issue.  However, they can
exist with a specific kernel configuration and hotplug operations.  For
example: Configure the kernel with CONFIG_SPARSEMEM and
!CONFIG_SPARSEMEM_VMEMMAP.  Then, hotplug add memory for the area where
the gigantic page will be allocated.

Link: https://lkml.kernel.org/r/20210217184926.33567-2-mike.kravetz@oracle.com
Fixes: 8fb5debc ("userfaultfd: hugetlbfs: add hugetlb_mcopy_atomic_pte for userfaultfd support")
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Davidlohr Bueso <dbueso@suse.de>
Cc: "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Joao Martins <joao.m.martins@oracle.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Parent 704ecbc8
@@ -5203,17 +5203,19 @@ long copy_huge_page_from_user(struct page *dst_page,
 	void *page_kaddr;
 	unsigned long i, rc = 0;
 	unsigned long ret_val = pages_per_huge_page * PAGE_SIZE;
+	struct page *subpage = dst_page;
 
-	for (i = 0; i < pages_per_huge_page; i++) {
+	for (i = 0; i < pages_per_huge_page;
+	     i++, subpage = mem_map_next(subpage, dst_page, i)) {
 		if (allow_pagefault)
-			page_kaddr = kmap(dst_page + i);
+			page_kaddr = kmap(subpage);
 		else
-			page_kaddr = kmap_atomic(dst_page + i);
+			page_kaddr = kmap_atomic(subpage);
 		rc = copy_from_user(page_kaddr,
 				(const void __user *)(src + i * PAGE_SIZE),
 				PAGE_SIZE);
 		if (allow_pagefault)
-			kunmap(dst_page + i);
+			kunmap(subpage);
 		else
 			kunmap_atomic(page_kaddr);
...