    mm: memcg: get number of pages on the LRU list in memcgroup based on lru_zone_size · b64e646f
    Committed by Honglei Wang
    mainline inclusion
    from mainline-v5.4-rc4
    commit b11edebb
    category: bugfix
    bugzilla: 34611
    CVE: NA
    
    -------------------------------------------------
    
    Commit 1a61ab80 ("mm: memcontrol: replace zone summing with
    lruvec_page_state()") made the LRU size calculation use per-cpu
    counters via lruvec_page_state() instead of reading lru_zone_size
    directly, on the assumption that this would be more efficient.
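
    For context, lruvec_page_state_local() walks every possible CPU on
    each call; that per-CPU walk is what shows up as cpumask_next() /
    find_next_bit() in the profile below.  A simplified sketch of the
    5.3-era helper (not the verbatim source):

        static inline unsigned long
        lruvec_page_state_local(struct lruvec *lruvec, enum node_stat_item idx)
        {
                struct mem_cgroup_per_node *pn;
                long x = 0;
                int cpu;

                if (mem_cgroup_disabled())
                        return node_page_state(lruvec_pgdat(lruvec), idx);

                pn = container_of(lruvec, struct mem_cgroup_per_node, lruvec);
                for_each_possible_cpu(cpu)      /* one pass over all CPUs */
                        x += per_cpu(pn->lruvec_stat_local->count[idx], cpu);

                return x < 0 ? 0 : x;
        }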
    
    Tim has reported that this is not really the case for their database
    benchmark, which shows the opposite result: lruvec_page_state is
    taking up a huge chunk of CPU cycles (about 25% of the system time,
    which is roughly 7% of total CPU cycles) on 5.3 kernels.  The
    workload runs on a large machine (96 CPUs), has many cgroups (500)
    and is heavily direct-reclaim bound.
    
    Tim Chen said:
    
    : The problem can also be reproduced by running simple multi-threaded
    : pmbench benchmark with a fast Optane SSD swap (see profile below).
    :
    :
    : 6.15%     3.08%  pmbench          [kernel.vmlinux]            [k] lruvec_lru_size
    :             |
    :             |--3.07%--lruvec_lru_size
    :             |          |
    :             |          |--2.11%--cpumask_next
    :             |          |          |
    :             |          |           --1.66%--find_next_bit
    :             |          |
    :             |           --0.57%--call_function_interrupt
    :             |                     |
    :             |                      --0.55%--smp_call_function_interrupt
    :             |
    :             |--1.59%--0x441f0fc3d009
    :             |          _ops_rdtsc_init_base_freq
    :             |          access_histogram
    :             |          page_fault
    :             |          __do_page_fault
    :             |          handle_mm_fault
    :             |          __handle_mm_fault
    :             |          |
    :             |           --1.54%--do_swap_page
    :             |                     swapin_readahead
    :             |                     swap_cluster_readahead
    :             |                     |
    :             |                      --1.53%--read_swap_cache_async
    :             |                                __read_swap_cache_async
    :             |                                alloc_pages_vma
    :             |                                __alloc_pages_nodemask
    :             |                                __alloc_pages_slowpath
    :             |                                try_to_free_pages
    :             |                                do_try_to_free_pages
    :             |                                shrink_node
    :             |                                shrink_node_memcg
    :             |                                |
    :             |                                |--0.77%--lruvec_lru_size
    :             |                                |
    :             |                                 --0.76%--inactive_list_is_low
    :             |                                           |
    :             |                                            --0.76%--lruvec_lru_size
    :             |
    :              --1.50%--measure_read
    :                        page_fault
    :                        __do_page_fault
    :                        handle_mm_fault
    :                        __handle_mm_fault
    :                        do_swap_page
    :                        swapin_readahead
    :                        swap_cluster_readahead
    :                        |
    :                         --1.48%--read_swap_cache_async
    :                                   __read_swap_cache_async
    :                                   alloc_pages_vma
    :                                   __alloc_pages_nodemask
    :                                   __alloc_pages_slowpath
    :                                   try_to_free_pages
    :                                   do_try_to_free_pages
    :                                   shrink_node
    :                                   shrink_node_memcg
    :                                   |
    :                                   |--0.75%--inactive_list_is_low
    :                                   |          |
    :                                   |           --0.75%--lruvec_lru_size
    :                                   |
    :                                    --0.73%--lruvec_lru_size
    
    The likely culprit is the cache traffic that lruvec_page_state_local()
    generates.  Dave Hansen says:
    
    : I was thinking purely of the cache footprint.  Reading
    : pn->lruvec_stat_local->count[idx] touches three separate cachelines,
    : so 192 bytes of cache * 96 CPUs = 18k of data, mostly read-only.  One
    : cgroup would be 18k of data for the whole system and the caching
    : would be pretty efficient, and all 18k would probably survive a tight
    : page fault loop in the L1.  500 cgroups would be ~90k of data per CPU
    : thread, which doesn't fit in the L1 and probably wouldn't survive a
    : tight page fault loop if both logical threads were banging on
    : different cgroups.
    :
    : It's just a theory, but it's why I noted the number of cgroups when I
    : initially saw this show up in profiles.
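
    A back-of-the-envelope check of those numbers (a hypothetical
    userspace sketch, not part of the patch; it assumes 64-byte
    cachelines and the three-cacheline access pattern described above):

        #include <stdio.h>

        int main(void)
        {
                const long cacheline = 64;   /* bytes, assumed */
                const long lines = 3;        /* cachelines per counter read */
                const long cpus = 96, cgroups = 500;
                const long per_cgroup = lines * cacheline;   /* 192 bytes */

                /* one cgroup touched by all CPUs: 18432 bytes (~18k) */
                printf("1 cgroup x %ld CPUs: %ld bytes\n",
                       cpus, per_cgroup * cpus);

                /* 500 cgroups per CPU thread: 96000 bytes (Dave's ~90k),
                 * well beyond a typical 32k L1 data cache */
                printf("%ld cgroups per CPU: %ld bytes\n",
                       cgroups, per_cgroup * cgroups);
                return 0;
        }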
    
    Fix the regression by partially reverting the said commit and
    calculating the LRU size explicitly from lru_zone_size.
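
    With the revert, lruvec_lru_size() sums the per-zone lru_zone_size
    counters for the eligible zones instead of walking per-cpu counters.
    A simplified sketch of the post-fix code (not the verbatim diff):

        unsigned long lruvec_lru_size(struct lruvec *lruvec,
                                      enum lru_list lru, int zone_idx)
        {
                unsigned long size = 0;
                int zid;

                for (zid = 0; zid <= zone_idx; zid++) {
                        struct zone *zone =
                                &lruvec_pgdat(lruvec)->node_zones[zid];

                        if (!managed_zone(zone))
                                continue;

                        if (!mem_cgroup_disabled())
                                /* reads mz->lru_zone_size[zid][lru] */
                                size += mem_cgroup_get_zone_lru_size(lruvec,
                                                                lru, zid);
                        else
                                size += zone_page_state(zone,
                                                NR_ZONE_LRU_BASE + lru);
                }

                return size;
        }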
    
    Link: http://lkml.kernel.org/r/20190905071034.16822-1-honglei.wang@oracle.com
    Fixes: 1a61ab80 ("mm: memcontrol: replace zone summing with lruvec_page_state()")
    Signed-off-by: Honglei Wang <honglei.wang@oracle.com>
    Reported-by: Tim Chen <tim.c.chen@linux.intel.com>
    Acked-by: Tim Chen <tim.c.chen@linux.intel.com>
    Tested-by: Tim Chen <tim.c.chen@linux.intel.com>
    Acked-by: Michal Hocko <mhocko@suse.com>
    Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
    Cc: Johannes Weiner <hannes@cmpxchg.org>
    Cc: Roman Gushchin <guro@fb.com>
    Cc: Tejun Heo <tj@kernel.org>
    Cc: Dave Hansen <dave.hansen@intel.com>
    Cc: <stable@vger.kernel.org>	[5.2+]
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Chen Zhou <chenzhou10@huawei.com>
    Signed-off-by: Liu Shixin <liushixin2@huawei.com>
    Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
    Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>