    mm: rearrange zone fields into read-only, page alloc, statistics and page reclaim lines · 3484b2de
    Committed by Mel Gorman
    The arrangement of struct zone has changed over time and now it has
    reached the point where there is some inappropriate sharing going on.
    On x86-64 for example
    
    o The zone->node field is shared with the zone lock and zone->node is
      accessed frequently from the page allocator due to the fair zone
      allocation policy.
    
    o span_seqlock is almost never used but shares a cache line with free_area
    
    o Some zone statistics share a cache line with the LRU lock so
      reclaim-intensive and allocator-intensive workloads can bounce the cache
      line on a stat update
    
    This patch rearranges struct zone to put read-only and read-mostly
    fields together and then splits the page allocator intensive fields, the
    zone statistics and the page reclaim intensive fields into their own
    cache lines.  Note that the type of lowmem_reserve changes due to the
    watermark calculations being signed and avoiding a signed/unsigned
    conversion there.
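    The grouping described above can be sketched in plain C11: a hypothetical,
    simplified struct (not the real struct zone; field names like free_pages
    and vm_stat here are illustrative stand-ins) where each write-intensive
    group is forced onto its own cache line, assuming a 64-byte line as on
    x86-64.  The kernel itself uses padding/alignment macros for this; alignas
    is used here only to keep the sketch self-contained.

    ```c
    #include <stdalign.h>
    #include <stddef.h>
    #include <stdio.h>

    #define CACHE_LINE 64  /* assumed cache line size, as on x86-64 */

    struct zone_sketch {
            /* Read-mostly: consulted by the allocator fast path, rarely written. */
            unsigned long watermark[3];
            long lowmem_reserve[4];   /* signed, matching the type change noted above */
            int node;

            /* Page-allocator-intensive fields start on their own cache line. */
            alignas(CACHE_LINE) int lock;          /* stand-in for spinlock_t */
            unsigned long free_pages;

            /* Zone statistics on their own cache line. */
            alignas(CACHE_LINE) long vm_stat[8];

            /* Page-reclaim-intensive fields on their own cache line. */
            alignas(CACHE_LINE) int lru_lock;      /* stand-in for spinlock_t */
            unsigned long nr_scanned;
    };

    int main(void)
    {
            /* Each hot group begins at a cache line boundary, so a stat
             * update no longer dirties the line holding the LRU lock. */
            printf("lock     offset: %zu\n", offsetof(struct zone_sketch, lock));
            printf("vm_stat  offset: %zu\n", offsetof(struct zone_sketch, vm_stat));
            printf("lru_lock offset: %zu\n", offsetof(struct zone_sketch, lru_lock));
            return 0;
    }
    ```

    Separating the groups this way trades a little padding space for the
    elimination of false sharing between the allocator, statistics, and
    reclaim paths.
    
    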
    
    On the test configuration I used, the overall size of struct zone shrank
    by one cache line.  On smaller machines, this is not likely to be
    noticeable.  However, on a 4-node NUMA machine running tiobench the
    system CPU overhead is reduced by this patch.
    
              3.16.0-rc3      3.16.0-rc3
                 vanilla  rearrange-v5r9
    User          746.94          759.78
    System      65336.22        58350.98
    Elapsed     27553.52        27282.02
    Signed-off-by: Mel Gorman <mgorman@suse.de>
    Acked-by: Johannes Weiner <hannes@cmpxchg.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>