Commit 57dd28fb authored by Lee Schermerhorn, committed by Linus Torvalds

hugetlb: restore interleaving of bootmem huge pages

I noticed that alloc_bootmem_huge_page() will only advance to the next
node on failure to allocate a huge page, potentially filling nodes with
huge pages.  I asked about this on linux-mm and linux-numa, cc'ing the
usual huge page suspects.

Mel Gorman responded:

	I strongly suspect that the same node being used until allocation
	failure instead of round-robin is an oversight and not deliberate
	at all. It appears to be a side-effect of a fix made way back in
	commit 63b4613c ["hugetlb: fix
	hugepage allocation with memoryless nodes"]. Prior to that patch
	it looked like allocations would always round-robin even when
	allocation was successful.
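
To make the pre-patch behaviour concrete, here is a minimal userspace
simulation of that allocation policy.  It is not kernel code: the node
count, the per-node capacities, and the alloc_from_node() helper are
invented for illustration; only the shape of the loop (advance to the
next node on allocation failure only) mirrors alloc_bootmem_huge_page()
before this patch.

        #include <stdio.h>

        #define NR_NODES 4

        /* Hypothetical per-node capacity, for the simulation only. */
        static int capacity[NR_NODES] = { 8, 8, 8, 8 };

        /* Pretend bootmem allocation: succeeds while the node has capacity. */
        static int alloc_from_node(int nid)
        {
                if (capacity[nid] > 0) {
                        capacity[nid]--;
                        return 1;
                }
                return 0;
        }

        int main(void)
        {
                int next_nid = 0;

                /*
                 * Pre-patch policy: advance to the next node only when an
                 * allocation fails, so node 0 absorbs every page until it
                 * runs out.  All eight requests below land on node 0.
                 */
                for (int i = 0; i < 8; i++) {
                        while (!alloc_from_node(next_nid))
                                next_nid = (next_nid + 1) % NR_NODES;
                        printf("page %d placed on node %d\n", i, next_nid);
                }
                return 0;
        }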

This patch--factored out of my "hugetlb mempolicy" series--moves the
advance of the hstate's next node to allocate from so that it happens
before the test for success of the attempted allocation.
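
For comparison, here is the placement loop from the simulation above
with the advance hoisted before the success check, mirroring the hunk
in the diff below; with this shape the eight requests interleave as
node 0, 1, 2, 3, 0, 1, 2, 3.  Again, this is an illustrative sketch
reusing next_nid, NR_NODES and alloc_from_node() from the simulation
above, not the kernel implementation:

        for (int i = 0; i < 8; i++) {
                int nid;

                do {
                        nid = next_nid;
                        /* Advance unconditionally, before testing success. */
                        next_nid = (next_nid + 1) % NR_NODES;
                } while (!alloc_from_node(nid));
                printf("page %d placed on node %d\n", i, nid);
        }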

Note that alloc_bootmem_huge_page() is only used for order > MAX_ORDER
huge pages.

I'll post a separate patch for mainline/stable, as the above-mentioned
"balance freeing" series renamed the next-node-to-alloc function.

Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Reviewed-by: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: Andy Whitcroft <apw@canonical.com>
Reviewed-by: Andi Kleen <andi@firstfloor.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Parent: 41a25e7e
@@ -1031,6 +1031,7 @@ int __weak alloc_bootmem_huge_page(struct hstate *h)
                                 NODE_DATA(h->next_nid_to_alloc),
                                 huge_page_size(h), huge_page_size(h), 0);
 
+                hstate_next_node_to_alloc(h);
                 if (addr) {
                         /*
                          * Use the beginning of the huge page to store the
@@ -1040,7 +1041,6 @@ int __weak alloc_bootmem_huge_page(struct hstate *h)
                         m = addr;
                         goto found;
                 }
-                hstate_next_node_to_alloc(h);
                 nr_nodes--;
         }
         return 0;