1. 06 May, 2021 · 2 commits
  2. 01 May, 2021 · 8 commits
  3. 14 Mar, 2021 · 1 commit
  4. 25 Feb, 2021 · 4 commits
  5. 13 Jan, 2021 · 1 commit
  6. 20 Dec, 2020 · 2 commits
  7. 16 Dec, 2020 · 7 commits
  8. 03 Dec, 2020 · 4 commits
  9. 27 Nov, 2020 · 1 commit
  10. 15 Nov, 2020 · 1 commit
  11. 19 Oct, 2020 · 1 commit
  12. 14 Oct, 2020 · 1 commit
  13. 15 Aug, 2020 · 1 commit
    • mm/memcontrol: fix a data race in scan count · e0e3f42f
      Qian Cai committed
      struct mem_cgroup_per_node mz.lru_zone_size[zone_idx][lru] could be
      accessed concurrently as noticed by KCSAN,
      
       BUG: KCSAN: data-race in lruvec_lru_size / mem_cgroup_update_lru_size
      
       write to 0xffff9c804ca285f8 of 8 bytes by task 50951 on cpu 12:
        mem_cgroup_update_lru_size+0x11c/0x1d0
        mem_cgroup_update_lru_size at mm/memcontrol.c:1266
        isolate_lru_pages+0x6a9/0xf30
        shrink_active_list+0x123/0xcc0
        shrink_lruvec+0x8fd/0x1380
        shrink_node+0x317/0xd80
        do_try_to_free_pages+0x1f7/0xa10
        try_to_free_pages+0x26c/0x5e0
        __alloc_pages_slowpath+0x458/0x1290
        __alloc_pages_nodemask+0x3bb/0x450
        alloc_pages_vma+0x8a/0x2c0
        do_anonymous_page+0x170/0x700
        __handle_mm_fault+0xc9f/0xd00
        handle_mm_fault+0xfc/0x2f0
        do_page_fault+0x263/0x6f9
        page_fault+0x34/0x40
      
       read to 0xffff9c804ca285f8 of 8 bytes by task 50964 on cpu 95:
        lruvec_lru_size+0xbb/0x270
        mem_cgroup_get_zone_lru_size at include/linux/memcontrol.h:536
        (inlined by) lruvec_lru_size at mm/vmscan.c:326
        shrink_lruvec+0x1d0/0x1380
        shrink_node+0x317/0xd80
        do_try_to_free_pages+0x1f7/0xa10
        try_to_free_pages+0x26c/0x5e0
        __alloc_pages_slowpath+0x458/0x1290
        __alloc_pages_nodemask+0x3bb/0x450
        alloc_pages_current+0xa6/0x120
        alloc_slab_page+0x3b1/0x540
        allocate_slab+0x70/0x660
        new_slab+0x46/0x70
        ___slab_alloc+0x4ad/0x7d0
        __slab_alloc+0x43/0x70
        kmem_cache_alloc+0x2c3/0x420
        getname_flags+0x4c/0x230
        getname+0x22/0x30
        do_sys_openat2+0x205/0x3b0
        do_sys_open+0x9a/0xf0
        __x64_sys_openat+0x62/0x80
        do_syscall_64+0x91/0xb47
        entry_SYSCALL_64_after_hwframe+0x49/0xbe
      
       Reported by Kernel Concurrency Sanitizer on:
       CPU: 95 PID: 50964 Comm: cc1 Tainted: G        W  O L    5.5.0-next-20200204+ #6
       Hardware name: HPE ProLiant DL385 Gen10/ProLiant DL385 Gen10, BIOS A40 07/10/2019
      
      The write is under lru_lock, but the read is done locklessly.  The scan
      count is used to determine how aggressively the anon and file LRU lists
      should be scanned.  Load tearing could generate an inefficient heuristic,
      so fix it by adding READ_ONCE() for the read (a minimal sketch of this
      pattern follows the commit entry below).
      Signed-off-by: Qian Cai <cai@lca.pw>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
      Link: http://lkml.kernel.org/r/20200206034945.2481-1-cai@lca.pw
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
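      To make the fix concrete, here is a minimal, self-contained userspace
      sketch of the same pattern; it is not the kernel code.  The lru_zone_size
      variable, the writer/reader threads, and the READ_ONCE()/WRITE_ONCE()
      macros (modeled on the kernel's volatile-access helpers) are illustrative
      stand-ins for mem_cgroup_update_lru_size() and lruvec_lru_size().

      #include <pthread.h>
      #include <stdio.h>

      /* Mimic the kernel's volatile-access helpers (GCC typeof extension). */
      #define READ_ONCE(x)     (*(const volatile typeof(x) *)&(x))
      #define WRITE_ONCE(x, v) (*(volatile typeof(x) *)&(x) = (v))

      static pthread_mutex_t lru_lock = PTHREAD_MUTEX_INITIALIZER;
      static long lru_zone_size;  /* stand-in for mz->lru_zone_size[zone_idx][lru] */
      static int done;

      static void *writer(void *arg)
      {
              /* Write side: always under the lock, as in mem_cgroup_update_lru_size(). */
              for (int i = 0; i < 1000000; i++) {
                      pthread_mutex_lock(&lru_lock);
                      lru_zone_size += 1;
                      pthread_mutex_unlock(&lru_lock);
              }
              WRITE_ONCE(done, 1);
              return NULL;
      }

      static void *reader(void *arg)
      {
              long sample = 0;

              /* Read side: lockless; READ_ONCE() forces one untorn 8-byte load. */
              while (!READ_ONCE(done))
                      sample = READ_ONCE(lru_zone_size);
              printf("last sampled size: %ld\n", sample);
              return NULL;
      }

      int main(void)
      {
              pthread_t w, r;

              pthread_create(&w, NULL, writer, NULL);
              pthread_create(&r, NULL, reader, NULL);
              pthread_join(w, NULL);
              pthread_join(r, NULL);
              return 0;
      }

      Build with gcc -pthread.  As the message above notes, the upstream change
      likewise wraps only the lockless read in READ_ONCE(); the writer already
      serializes on lru_lock, so no change is needed on that side.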
  14. 13 Aug, 2020 · 2 commits
  15. 08 Aug, 2020 · 4 commits