Commit 19fc3f0a authored by Adam Litke, committed by Linus Torvalds

hugetlb: decrease hugetlb_lock cycling in gather_surplus_huge_pages

To reduce hugetlb_lock acquisitions and releases when freeing excess surplus
pages, scan the page list in two parts.  First, transfer the needed pages to
the hugetlb pool.  Then drop the lock and free the remaining pages back to the
buddy allocator.

In the common case there are zero excess pages and no lock operations are
required.

Thanks to Mel Gorman for this improvement.
Signed-off-by: Adam Litke <agl@us.ibm.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Dave Hansen <haveblue@us.ibm.com>
Cc: William Lee Irwin III <wli@holomorphy.com>
Cc: Andy Whitcroft <apw@shadowen.org>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Parent 797df574
@@ -372,11 +372,19 @@ static int gather_surplus_pages(int delta)
 	resv_huge_pages += delta;
 	ret = 0;
 free:
+	/* Free the needed pages to the hugetlb pool */
 	list_for_each_entry_safe(page, tmp, &surplus_list, lru) {
+		if ((--needed) < 0)
+			break;
 		list_del(&page->lru);
-		if ((--needed) >= 0)
-			enqueue_huge_page(page);
-		else {
+		enqueue_huge_page(page);
+	}
+
+	/* Free unnecessary surplus pages to the buddy allocator */
+	if (!list_empty(&surplus_list)) {
+		spin_unlock(&hugetlb_lock);
+		list_for_each_entry_safe(page, tmp, &surplus_list, lru) {
+			list_del(&page->lru);
 			/*
 			 * The page has a reference count of zero already, so
 			 * call free_huge_page directly instead of using
@@ -384,10 +392,9 @@ static int gather_surplus_pages(int delta)
 			 * unlocked which is safe because free_huge_page takes
 			 * hugetlb_lock before deciding how to free the page.
 			 */
-			spin_unlock(&hugetlb_lock);
 			free_huge_page(page);
-			spin_lock(&hugetlb_lock);
 		}
+		spin_lock(&hugetlb_lock);
 	}
 	return ret;
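
For illustration only (not part of the commit): below is a minimal, self-contained userspace
sketch of the two-pass pattern the commit message describes, with a pthread mutex standing in
for hugetlb_lock and a caller-private singly linked list standing in for surplus_list. All
names here (gather_surplus, pool_lock, enqueue_to_pool, struct node) are hypothetical
stand-ins, not kernel APIs.

/* Sketch of the "two-pass scan, cycle the lock once" pattern. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct node {
	int id;
	struct node *next;
};

static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;
static struct node *pool;		/* stand-in for the hugetlb pool */

/* Caller must hold pool_lock. */
static void enqueue_to_pool(struct node *n)
{
	n->next = pool;
	pool = n;
}

/*
 * Called with pool_lock held; returns with pool_lock held.
 * 'surplus' is private to the caller, so walking it without the lock
 * in the second pass is safe (mirroring the kernel comment about
 * free_huge_page and the function-local surplus_list).
 */
static void gather_surplus(struct node **surplus, int needed)
{
	struct node *n, *next;

	/* Pass 1: move the needed nodes into the pool under the lock. */
	while (needed-- > 0 && *surplus) {
		n = *surplus;
		*surplus = n->next;
		enqueue_to_pool(n);
	}

	/*
	 * Pass 2: only if anything is left over, drop the lock once, free
	 * the leftovers, then retake the lock.  In the common case the
	 * list is already empty and no extra lock traffic occurs.
	 */
	if (*surplus) {
		pthread_mutex_unlock(&pool_lock);
		for (n = *surplus; n; n = next) {
			next = n->next;
			free(n);
		}
		*surplus = NULL;
		pthread_mutex_lock(&pool_lock);
	}
}

int main(void)
{
	struct node *surplus = NULL, *n;
	int i, count = 0;

	/* Build a surplus list of 5 nodes, then keep only 3 of them. */
	for (i = 0; i < 5; i++) {
		n = malloc(sizeof(*n));
		n->id = i;
		n->next = surplus;
		surplus = n;
	}

	pthread_mutex_lock(&pool_lock);
	gather_surplus(&surplus, 3);
	pthread_mutex_unlock(&pool_lock);

	for (n = pool; n; n = n->next)
		count++;
	printf("pool now holds %d nodes\n", count);	/* prints 3 */
	return 0;
}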