Commit 612fe2d2 authored by Mike Rapoport, committed by Zheng Zengkai

mm: fix initialization of struct page for holes in memory layout

stable inclusion
from stable-5.10.11
commit f2a79851c776a5345643e0234957f98528ada168
bugzilla: 47621

--------------------------------

commit d3921cb8 upstream.

There could be struct pages that are not backed by actual physical
memory.  This can happen when the actual memory bank is not a multiple
of SECTION_SIZE or when an architecture does not register memory holes
reserved by the firmware as memblock.memory.

Such pages are currently initialized using init_unavailable_mem()
function that iterates through PFNs in holes in memblock.memory and if
there is a struct page corresponding to a PFN, the fields of this page
are set to default values and the page is marked as Reserved.

init_unavailable_mem() does not take into account zone and node the page
belongs to and sets both zone and node links in struct page to zero.

On a system that has firmware reserved holes in a zone above ZONE_DMA,
for instance in a configuration below:

	# grep -A1 E820 /proc/iomem
	7a17b000-7a216fff : Unknown E820 type
	7a217000-7bffffff : System RAM

the unset zone link in struct page will trigger

	VM_BUG_ON_PAGE(!zone_spans_pfn(page_zone(page), pfn), page);

because there are pages in both ZONE_DMA32 and ZONE_DMA (unset zone link
in struct page) in the same pageblock.

Update init_unavailable_mem() to use zone constraints defined by an
architecture to properly setup the zone link and use node ID of the
adjacent range in memblock.memory to set the node link.

Link: https://lkml.kernel.org/r/20210111194017.22696-3-rppt@kernel.org
Fixes: 73a6e474 ("mm: memmap_init: iterate over memblock regions rather that check each PFN")
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Reported-by: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: David Hildenbrand <david@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Qian Cai <cai@lca.pw>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
Parent d534b825
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7001,23 +7001,26 @@ void __init free_area_init_memoryless_node(int nid)
  * Initialize all valid struct pages in the range [spfn, epfn) and mark them
  * PageReserved(). Return the number of struct pages that were initialized.
  */
-static u64 __init init_unavailable_range(unsigned long spfn, unsigned long epfn)
+static u64 __init init_unavailable_range(unsigned long spfn, unsigned long epfn,
+					 int zone, int nid)
 {
-	unsigned long pfn;
+	unsigned long pfn, zone_spfn, zone_epfn;
 	u64 pgcnt = 0;
 
+	zone_spfn = arch_zone_lowest_possible_pfn[zone];
+	zone_epfn = arch_zone_highest_possible_pfn[zone];
+
+	spfn = clamp(spfn, zone_spfn, zone_epfn);
+	epfn = clamp(epfn, zone_spfn, zone_epfn);
+
 	for (pfn = spfn; pfn < epfn; pfn++) {
 		if (!pfn_valid(ALIGN_DOWN(pfn, pageblock_nr_pages))) {
 			pfn = ALIGN_DOWN(pfn, pageblock_nr_pages)
 				+ pageblock_nr_pages - 1;
 			continue;
 		}
-		/*
-		 * Use a fake node/zone (0) for now. Some of these pages
-		 * (in memblock.reserved but not in memblock.memory) will
-		 * get re-initialized via reserve_bootmem_region() later.
-		 */
-		__init_single_page(pfn_to_page(pfn), pfn, 0, 0);
+
+		__init_single_page(pfn_to_page(pfn), pfn, zone, nid);
 		__SetPageReserved(pfn_to_page(pfn));
 		pgcnt++;
 	}
@@ -7026,51 +7029,64 @@ static u64 __init init_unavailable_range(unsigned long spfn, unsigned long epfn)
 }
 
 /*
- * Only struct pages that are backed by physical memory are zeroed and
- * initialized by going through __init_single_page(). But, there are some
- * struct pages which are reserved in memblock allocator and their fields
- * may be accessed (for example page_to_pfn() on some configuration accesses
- * flags). We must explicitly initialize those struct pages.
+ * Only struct pages that correspond to ranges defined by memblock.memory
+ * are zeroed and initialized by going through __init_single_page() during
+ * memmap_init().
  *
- * This function also addresses a similar issue where struct pages are left
- * uninitialized because the physical address range is not covered by
- * memblock.memory or memblock.reserved. That could happen when memblock
- * layout is manually configured via memmap=, or when the highest physical
- * address (max_pfn) does not end on a section boundary.
+ * But, there could be struct pages that correspond to holes in
+ * memblock.memory. This can happen because of the following reasons:
+ * - physical memory bank size is not necessarily the exact multiple of the
+ *   arbitrary section size
+ * - early reserved memory may not be listed in memblock.memory
+ * - memory layouts defined with memmap= kernel parameter may not align
+ *   nicely with memmap sections
+ *
+ * Explicitly initialize those struct pages so that:
+ * - PG_Reserved is set
+ * - zone link is set according to the architecture constraints
+ * - node is set to node id of the next populated region except for the
+ *   trailing hole where last node id is used
  */
-static void __init init_unavailable_mem(void)
+static void __init init_zone_unavailable_mem(int zone)
 {
-	phys_addr_t start, end;
-	u64 i, pgcnt;
-	phys_addr_t next = 0;
+	unsigned long start, end;
+	int i, nid;
+	u64 pgcnt;
+	unsigned long next = 0;
 
 	/*
-	 * Loop through unavailable ranges not covered by memblock.memory.
+	 * Loop through holes in memblock.memory and initialize struct
+	 * pages corresponding to these holes
 	 */
 	pgcnt = 0;
-	for_each_mem_range(i, &start, &end) {
+	for_each_mem_pfn_range(i, MAX_NUMNODES, &start, &end, &nid) {
 		if (next < start)
-			pgcnt += init_unavailable_range(PFN_DOWN(next),
-							PFN_UP(start));
+			pgcnt += init_unavailable_range(next, start, zone, nid);
 		next = end;
 	}
 
 	/*
-	 * Early sections always have a fully populated memmap for the whole
-	 * section - see pfn_valid(). If the last section has holes at the
-	 * end and that section is marked "online", the memmap will be
-	 * considered initialized. Make sure that memmap has a well defined
-	 * state.
+	 * Last section may surpass the actual end of memory (e.g. we can
+	 * have 1Gb section and 512Mb of RAM populated).
+	 * Make sure that memmap has a well defined state in this case.
 	 */
-	pgcnt += init_unavailable_range(PFN_DOWN(next),
-					round_up(max_pfn, PAGES_PER_SECTION));
+	end = round_up(max_pfn, PAGES_PER_SECTION);
+	pgcnt += init_unavailable_range(next, end, zone, nid);
 
 	/*
 	 * Struct pages that do not have backing memory. This could be because
 	 * firmware is using some of this memory, or for some other reasons.
 	 */
 	if (pgcnt)
-		pr_info("Zeroed struct page in unavailable ranges: %lld pages", pgcnt);
+		pr_info("Zone %s: zeroed struct page in unavailable ranges: %lld pages",
+			zone_names[zone], pgcnt);
+}
+
+static void __init init_unavailable_mem(void)
+{
+	int zone;
+
+	for (zone = 0; zone < ZONE_MOVABLE; zone++)
+		init_zone_unavailable_mem(zone);
 }
 #else
 static inline void __init init_unavailable_mem(void)
...