    page-allocator: split per-cpu list into one-list-per-migrate-type · 5f8dcc21
    Committed by Mel Gorman
    The following two patches remove searching in the page allocator fast-path
    by maintaining multiple free-lists in the per-cpu structure.  At the time
    the search was introduced, increasing the per-cpu structures would waste a
    lot of memory as per-cpu structures were statically allocated at
    compile-time.  This is no longer the case.
    
    The patches are as follows. They are based on mmotm-2009-08-27.
    
    Patch 1 adds multiple lists to struct per_cpu_pages, one per
    	migratetype that can be stored on the PCP lists (a sketch of the
    	resulting structure follows this list).
    
    Patch 2 notes that the pcpu drain path checks empty lists multiple times.  The
    	patch reduces the number of checks by maintaining a count of free
    	lists encountered.  Lists containing pages will then free multiple
    	pages in batch.
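
    For illustration, the shape of the patch 1 change to struct per_cpu_pages
    is roughly as follows.  This is an indicative sketch rather than a
    verbatim diff; the field comments and the MIGRATE_PCPTYPES bound are
    approximations of the real structure.

    	struct per_cpu_pages {
    		int count;		/* number of pages in the lists */
    		int high;		/* high watermark, emptying needed */
    		int batch;		/* chunk size for buddy add/remove */

    		/*
    		 * Before the patch: a single "struct list_head list"
    		 * mixing pages of all migrate types.  After the patch:
    		 * one list per migratetype that can sit on the PCP lists.
    		 */
    		struct list_head lists[MIGRATE_PCPTYPES];
    	};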
    
    The patches were tested with kernbench, netperf udp/tcp, hackbench and
    sysbench.  The netperf tests were not bound to any CPU in particular and
    were run such that, with 99% confidence, the reported results are within
    1% of the estimated mean.  sysbench was run with a postgres background
    and read-only tests.  Similar to netperf, it was run multiple times so
    that, with 99% confidence, its results are within 1% of the estimated
    mean.  The patches were tested on x86, x86-64 and ppc64 as follows:
    
    x86:	Intel Pentium D 3GHz with 8G RAM (no-brand machine)
    	kernbench	- No significant difference, variance well within noise
    	netperf-udp	- 1.34% to 2.28% gain
    	netperf-tcp	- 0.45% to 1.22% gain
    	hackbench	- Small variances, very close to noise
    	sysbench	- Very small gains
    
    x86-64:	AMD Phenom 9950 1.3GHz with 8G RAM (no-brand machine)
    	kernbench	- No significant difference, variance well within noise
    	netperf-udp	- 1.83% to 10.42% gains
    	netperf-tcp	- Not conclusive until buffer >= PAGE_SIZE
    				4096	+15.83%
    				8192	+ 0.34% (not significant)
    				16384	+ 1%
    	hackbench	- Small gains, very close to noise
    	sysbench	- 0.79% to 1.6% gain
    
    ppc64:	PPC970MP 2.5GHz with 10GB RAM (it's a terrasoft powerstation)
    	kernbench	- No significant difference, variance well within noise
    	netperf-udp	- 2-3% gain for almost all buffer sizes tested
    	netperf-tcp	- losses on small buffers, gains on larger buffers,
    			  possibly indicating some bad caching effect.
    	hackbench	- No significant difference
    	sysbench	- 2-4% gain
    
    This patch:
    
    Currently the per-cpu page allocator searches the PCP list for pages of
    the correct migrate-type to reduce the possibility of pages being
    inappropriately placed from a fragmentation perspective.  This search is
    potentially expensive in a fast-path and undesirable.  Splitting the
    per-cpu list into multiple lists increases the size of a per-cpu structure
    and this was potentially a major problem at the time the search was
    introduced.  This problem has since been mitigated as only the necessary
    number of structures is allocated for the running system.
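
    To make the removed cost concrete, the fast-path search is roughly of the
    following form (a simplified sketch, not a verbatim excerpt; hot/cold list
    direction handling is omitted):

    	/* Before: walk the shared PCP list for a page of the wanted type */
    	list_for_each_entry(page, &pcp->list, lru)
    		if (page_private(page) == migratetype)
    			break;

    	/* After: index straight into the per-migratetype list */
    	struct list_head *list = &pcp->lists[migratetype];
    	page = list_entry(list->next, struct page, lru);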
    
    This patch replaces a list search in the per-cpu allocator with one list
    per migrate type.  The potential snag with this approach is when bulk
    freeing pages: pages are freed round-robin by migrate type, which has
    little bearing on the cache hotness of the page and potentially checks
    empty lists repeatedly in the event the majority of PCP pages are of one
    type.
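
    The round-robin free described above then behaves roughly as below (an
    illustrative sketch, not a verbatim excerpt; patch 2 reworks this loop so
    that empty lists are rechecked less often):

    	int migratetype = 0;

    	while (count--) {
    		struct list_head *list;

    		/*
    		 * Cycle over the per-migratetype lists; an empty list is
    		 * re-checked on every iteration, which is the behaviour
    		 * patch 2 improves.  'count' must not exceed the number
    		 * of pages actually held on the PCP lists.
    		 */
    		do {
    			migratetype++;
    			if (migratetype == MIGRATE_PCPTYPES)
    				migratetype = 0;
    			list = &pcp->lists[migratetype];
    		} while (list_empty(list));

    		page = list_entry(list->prev, struct page, lru);
    		list_del(&page->lru);
    		__free_one_page(page, zone, 0, migratetype);
    	}
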
    Signed-off-by: Mel Gorman <mel@csn.ul.ie>
    Acked-by: Nick Piggin <npiggin@suse.de>
    Cc: Christoph Lameter <cl@linux-foundation.org>
    Cc: Minchan Kim <minchan.kim@gmail.com>
    Cc: Pekka Enberg <penberg@cs.helsinki.fi>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>