Commit aaa9705b authored by Miaohe Lin, committed by Linus Torvalds

mm/huge_memory.c: make get_huge_zero_page() return bool

It's guaranteed that huge_zero_page will not be NULL if
huge_zero_refcount is increased successfully.

When READ_ONCE(huge_zero_page) is returned, there must be a
huge_zero_page, so the call can be replaced with returning 'true':
the callers do not care about the value of huge_zero_page.

We can thus make it return bool and save the READ_ONCE cpu cycles, as
the return value is only used to check whether huge_zero_page exists.

Link: https://lkml.kernel.org/r/20210318122722.13135-3-linmiaohe@huawei.com
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michel Lespinasse <walken@google.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Thomas Hellström (Intel) <thomas_os@shipmail.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
Cc: William Kucharski <william.kucharski@oracle.com>
Cc: Yang Shi <yang.shi@linux.alibaba.com>
Cc: yuleixzhang <yulei.kernel@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Parent 71f9e58e
...
@@ -77,18 +77,18 @@ bool transparent_hugepage_enabled(struct vm_area_struct *vma)
 	return false;
 }
 
-static struct page *get_huge_zero_page(void)
+static bool get_huge_zero_page(void)
 {
 	struct page *zero_page;
 retry:
 	if (likely(atomic_inc_not_zero(&huge_zero_refcount)))
-		return READ_ONCE(huge_zero_page);
+		return true;
 
 	zero_page = alloc_pages((GFP_TRANSHUGE | __GFP_ZERO) & ~__GFP_MOVABLE,
 			HPAGE_PMD_ORDER);
 	if (!zero_page) {
 		count_vm_event(THP_ZERO_PAGE_ALLOC_FAILED);
-		return NULL;
+		return false;
 	}
 	count_vm_event(THP_ZERO_PAGE_ALLOC);
 	preempt_disable();
@@ -101,7 +101,7 @@ static struct page *get_huge_zero_page(void)
 	/* We take additional reference here. It will be put back by shrinker */
 	atomic_set(&huge_zero_refcount, 2);
 	preempt_enable();
-	return READ_ONCE(huge_zero_page);
+	return true;
 }
 
 static void put_huge_zero_page(void)
...