Commit 78c94366 authored by Pavel Tatashin, committed by Linus Torvalds

sparc64: optimize struct page zeroing

Add an optimized mm_zero_struct_page(), so struct pages are zeroed
without calling memset().  We do eight to ten regular stores, based on
the size of struct page.  The compiler optimizes out the conditions of
the switch() statement.
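
For context, architectures that do not provide their own
mm_zero_struct_page() fall back to a plain memset().  A minimal sketch of
that generic fallback, assuming the include/linux/mm.h definition added
earlier in this series:

    /* Generic fallback: zero struct page with memset() unless the
     * architecture provides a cheaper mm_zero_struct_page().
     */
    #ifndef mm_zero_struct_page
    #define mm_zero_struct_page(pp)  ((void)memset((pp), 0, sizeof(struct page)))
    #endif

The sparc64 definition in this patch overrides that fallback.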

SPARC-M6 with 15T of memory, single thread performance:

                                BASE            FIX  OPTIMIZED_FIX
bootmem_init           28.440467985s   2.305674818s   2.305161615s
free_area_init_nodes  202.845901673s 225.343084508s 172.556506560s
                     ---------------------------------------------
Total                 231.286369658s 227.648759326s 174.861668175s

BASE:          current Linux
FIX:           this patch series without "optimized struct page zeroing"
OPTIMIZED_FIX: this patch series including the current patch

bootmem_init() is where memory for struct pages is zeroed during
allocation.  Note that about two seconds of this function's time is
fixed overhead: it does not increase as the amount of memory increases.

Link: http://lkml.kernel.org/r/20171013173214.27300-11-pasha.tatashin@oracle.com
Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Reviewed-by: Steven Sistare <steven.sistare@oracle.com>
Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Reviewed-by: Bob Picco <bob.picco@oracle.com>
Acked-by: David S. Miller <davem@davemloft.net>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Parent f7f99100
@@ -231,6 +231,36 @@ extern unsigned long _PAGE_ALL_SZ_BITS;
extern struct page *mem_map_zero;
#define ZERO_PAGE(vaddr) (mem_map_zero)

/* This macro must be updated when the size of struct page grows above 80
 * or reduces below 64.
 * The idea is that the compiler optimizes out the switch() statement and
 * only leaves clrx instructions.
 */
#define mm_zero_struct_page(pp) do {                                    \
        unsigned long *_pp = (void *)(pp);                              \
                                                                        \
        /* Check that struct page is either 64, 72, or 80 bytes */     \
        BUILD_BUG_ON(sizeof(struct page) & 7);                          \
        BUILD_BUG_ON(sizeof(struct page) < 64);                         \
        BUILD_BUG_ON(sizeof(struct page) > 80);                         \
                                                                        \
        switch (sizeof(struct page)) {                                  \
        case 80:                                                        \
                _pp[9] = 0;     /* fallthrough */                       \
        case 72:                                                        \
                _pp[8] = 0;     /* fallthrough */                       \
        default:                                                        \
                _pp[7] = 0;                                             \
                _pp[6] = 0;                                             \
                _pp[5] = 0;                                             \
                _pp[4] = 0;                                             \
                _pp[3] = 0;                                             \
                _pp[2] = 0;                                             \
                _pp[1] = 0;                                             \
                _pp[0] = 0;                                             \
        }                                                               \
} while (0)
/* PFNs are real physical page numbers. However, mem_map only begins to record
* per-page information starting at pfn_base. This is to handle systems where
* the first physical page in the machine is at some huge physical address,
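
To illustrate why this compiles to straight stores, here is a standalone
userspace sketch of the same switch-on-sizeof pattern.  struct fake_page
and zero_struct() are hypothetical stand-ins for struct page and
mm_zero_struct_page(), and an LP64 target (8-byte unsigned long) is
assumed:

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical 80-byte stand-in for struct page (ten 8-byte words
     * on an LP64 target).
     */
    struct fake_page {
            unsigned long words[10];
    };

    /* Same pattern as the sparc64 macro: sizeof() is a compile-time
     * constant, so the compiler keeps exactly one switch arm and emits
     * a plain run of stores instead of a memset() call.
     */
    #define zero_struct(pp) do {                                    \
            unsigned long *_pp = (void *)(pp);                      \
                                                                    \
            switch (sizeof(*(pp))) {                                \
            case 80:                                                \
                    _pp[9] = 0;     /* fallthrough */               \
            case 72:                                                \
                    _pp[8] = 0;     /* fallthrough */               \
            default:                                                \
                    _pp[7] = 0;                                     \
                    _pp[6] = 0;                                     \
                    _pp[5] = 0;                                     \
                    _pp[4] = 0;                                     \
                    _pp[3] = 0;                                     \
                    _pp[2] = 0;                                     \
                    _pp[1] = 0;                                     \
                    _pp[0] = 0;                                     \
            }                                                       \
    } while (0)

    int main(void)
    {
            struct fake_page p;

            memset(&p, 0xff, sizeof(p));    /* dirty the memory first */
            zero_struct(&p);

            for (int i = 0; i < 10; i++)    /* all ten words now zero */
                    printf("words[%d] = %lu\n", i, p.words[i]);
            return 0;
    }

Compiling with -O2 and inspecting the assembly (e.g. gcc -O2 -S) should
show the switch collapsing into an unconditional sequence of stores; on
sparc64 these become clrx instructions.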