1. 27 Sep 2022, 13 commits
    • mm: MADV_COLLAPSE: refetch vm_end after reacquiring mmap_lock · 4d24de94
      Committed by Yang Shi
      The syzbot reported the below problem:
      
      BUG: Bad page map in process syz-executor198  pte:8000000071c00227 pmd:74b30067
      addr:0000000020563000 vm_flags:08100077 anon_vma:ffff8880547d2200 mapping:0000000000000000 index:20563
      file:(null) fault:0x0 mmap:0x0 read_folio:0x0
      CPU: 1 PID: 3614 Comm: syz-executor198 Not tainted 6.0.0-rc3-next-20220901-syzkaller #0
      Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/26/2022
      Call Trace:
       <TASK>
       __dump_stack lib/dump_stack.c:88 [inline]
       dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
       print_bad_pte.cold+0x2a7/0x2d0 mm/memory.c:565
       vm_normal_page+0x10c/0x2a0 mm/memory.c:636
       hpage_collapse_scan_pmd+0x729/0x1da0 mm/khugepaged.c:1199
       madvise_collapse+0x481/0x910 mm/khugepaged.c:2433
       madvise_vma_behavior+0xd0a/0x1cc0 mm/madvise.c:1062
       madvise_walk_vmas+0x1c7/0x2b0 mm/madvise.c:1236
       do_madvise.part.0+0x24a/0x340 mm/madvise.c:1415
       do_madvise mm/madvise.c:1428 [inline]
       __do_sys_madvise mm/madvise.c:1428 [inline]
       __se_sys_madvise mm/madvise.c:1426 [inline]
       __x64_sys_madvise+0x113/0x150 mm/madvise.c:1426
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x63/0xcd
      RIP: 0033:0x7f770ba87929
      Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 11 15 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
      RSP: 002b:00007f770ba18308 EFLAGS: 00000246 ORIG_RAX: 000000000000001c
      RAX: ffffffffffffffda RBX: 00007f770bb0f3f8 RCX: 00007f770ba87929
      RDX: 0000000000000019 RSI: 0000000000600003 RDI: 0000000020000000
      RBP: 00007f770bb0f3f0 R08: 00007f770ba18700 R09: 0000000000000000
      R10: 00007f770ba18700 R11: 0000000000000246 R12: 00007f770bb0f3fc
      R13: 00007ffc2d8b62ef R14: 00007f770ba18400 R15: 0000000000022000
      
      Conceptually, the test program does the following:
      1. mmap 0x20000000 - 0x21000000 as an anonymous region
      2. mmap io_uring SQ stuff at 0x20563000 with MAP_FIXED; io_uring_mmap()
         actually remaps the pages with special PTEs
      3. call MADV_COLLAPSE for 0x20000000 - 0x21000000
      
      It actually triggered the below race:
      
                   CPU A                                          CPU B
      mmap 0x20000000 - 0x21000000 as anon
                                                 madvise_collapse is called on this area
                                                   Retrieve start and end address from the vma (NEVER updated later!)
                                                   Collapsed the first 2M area and dropped mmap_lock
      Acquire mmap_lock
      mmap io_uring file at 0x20563000
      Release mmap_lock
                                                   Reacquire mmap_lock
                                                   revalidate vma pass since 0x20200000 + 0x200000 > 0x20563000
                                                   scan the next 2M (0x20200000 - 0x20400000), but due to whatever reason it didn't release mmap_lock
                                                   scan the 3rd 2M area (start from 0x20400000)
                                                     get into the vma created by io_uring
      
      The hend value should be updated after MADV_COLLAPSE reacquires
      mmap_lock, since the vma may have been shrunk.  We don't have to worry
      about a shrink from the other direction, since it would be caught by
      hugepage_vma_revalidate(): either no valid vma is found or the vma no
      longer fits.
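
      A minimal sketch of the fix, assuming the madvise_collapse() loop in
      mm/khugepaged.c and its locals (mm, addr, vma, cc, hend, mmap_locked);
      names follow v6.0 but are illustrative, not the exact diff:

      if (!mmap_locked) {
              mmap_read_lock(mm);
              mmap_locked = true;
              result = hugepage_vma_revalidate(mm, addr, false, &vma, cc);
              if (result != SCAN_SUCCEED)
                      break;
              /* The vma may have shrunk while mmap_lock was dropped. */
              hend = vma->vm_end & HPAGE_PMD_MASK;
      }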
      
      Link: https://lkml.kernel.org/r/20220914162220.787703-1-shy828301@gmail.com
      Fixes: 7d8faaf1 ("mm/madvise: introduce MADV_COLLAPSE sync hugepage collapse")
      Reported-by: syzbot+915f3e317adb0e85835f@syzkaller.appspotmail.com
      Signed-off-by: Yang Shi <shy828301@gmail.com>
      Reviewed-by: Zach O'Keefe <zokeefe@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • Merge branch 'mm-hotfixes-stable' into mm-stable · 6d751329
      Committed by Andrew Morton
    • x86/uaccess: avoid check_object_size() in copy_from_user_nmi() · 59298997
      Committed by Kees Cook
      The check_object_size() helper under CONFIG_HARDENED_USERCOPY is designed
      to skip any checks where the length is known at compile time as a
      reasonable heuristic to avoid "likely known-good" cases.  However, it can
      only do this when the copy_*_user() helpers are, themselves, inline too.
      
      Using find_vmap_area() requires taking a spinlock.  The
      check_object_size() helper can call find_vmap_area() when the destination
      is in vmap memory.  If show_regs() is called in interrupt context, it will
      attempt a call to copy_from_user_nmi(), which may call check_object_size()
      and then find_vmap_area().  If something in normal context happens to be
      in the middle of calling find_vmap_area() (with the spinlock held), the
      interrupt handler will hang forever.
      
      The copy_from_user_nmi() call is actually being called with a fixed-size
      length, so check_object_size() should never have been called in the first
      place.  Given the narrow constraints, just replace the
      __copy_from_user_inatomic() call with an open-coded version that calls
      only into the sanitizers and not check_object_size(), followed by a call
      to raw_copy_from_user().
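
      A sketch of the open-coded replacement; the instrumentation hook names
      are assumptions (per the bracketed note that follows, they may not
      exist in every tree):

      /*
       * Like __copy_from_user_inatomic(), but without check_object_size(),
       * so find_vmap_area() (and its spinlock) is never reached from NMI.
       */
      pagefault_disable();
      instrument_copy_from_user_before(to, from, n);
      ret = raw_copy_from_user(to, from, n);
      instrument_copy_from_user_after(to, from, n, ret);
      pagefault_enable();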
      
      [akpm@linux-foundation.org: no instrument_copy_from_user() in my tree...]
      Link: https://lkml.kernel.org/r/20220919201648.2250764-1-keescook@chromium.org
      Link: https://lore.kernel.org/all/CAOUHufaPshtKrTWOz7T7QFYUNVGFm0JBjvM700Nhf9qEL9b3EQ@mail.gmail.com
      Fixes: 0aef499f ("mm/usercopy: Detect vmalloc overruns")
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Reported-by: Yu Zhao <yuzhao@google.com>
      Reported-by: Florian Lehner <dev@der-flo.net>
      Suggested-by: Andrew Morton <akpm@linux-foundation.org>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Tested-by: Florian Lehner <dev@der-flo.net>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Josh Poimboeuf <jpoimboe@kernel.org>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • mm/page_isolation: fix isolate_single_pageblock() isolation behavior · 80e2b584
      Committed by Zi Yan
      set_migratetype_isolate() does not allow isolating MIGRATE_CMA
      pageblocks unless it is used for CMA allocation.
      isolate_single_pageblock() did not have the same behavior when used
      together with set_migratetype_isolate() in start_isolate_page_range().
      This allowed alloc_contig_range() with a migratetype other than
      MIGRATE_CMA, like MIGRATE_MOVABLE (used by alloc_contig_pages()), to
      isolate the first and last pageblock but fail on the rest.  The
      failure leads to the migratetype of the first and last pageblock being
      changed from MIGRATE_CMA to MIGRATE_MOVABLE, corrupting the CMA
      region.  This can happen during gigantic page allocations.
      
      As Doug said here:
      https://lore.kernel.org/linux-mm/a3363a52-883b-dcd1-b77f-f2bb378d6f2d@gmail.com/T/#u,
      for gigantic page allocations the user would notice no difference,
      since allocation from the CMA region fails just as it did before.  But
      it might hurt the performance of device drivers that use CMA, since
      the usable CMA region size decreases.
      
      Fix it by passing migratetype into isolate_single_pageblock(), so that
      the set_migratetype_isolate() call made by isolate_single_pageblock()
      will prevent the isolation from happening.
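
      A sketch of the resulting call site in start_isolate_page_range(); the
      argument list is illustrative, the point being that the caller's
      migratetype now reaches set_migratetype_isolate():

      /* Forward the caller's migratetype so that MIGRATE_CMA pageblocks
       * are refused for non-CMA allocations. */
      ret = isolate_single_pageblock(isolate_start, flags, gfp_flags,
                                     false, migratetype);
      if (ret)
              return ret;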
      
      Link: https://lkml.kernel.org/r/20220914023913.1855924-1-zi.yan@sent.com
      Fixes: b2c9e2fb ("mm: make alloc_contig_range work at pageblock granularity")
      Signed-off-by: Zi Yan <ziy@nvidia.com>
      Reported-by: Doug Berger <opendmb@gmail.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Doug Berger <opendmb@gmail.com>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • mm,hwpoison: check mm when killing accessing process · 77677cdb
      Committed by Shuai Xue
      The GHES code calls memory_failure_queue() from IRQ context to queue
      work into a workqueue and schedule it on the current CPU.  The work is
      then processed by a kworker in memory_failure_work_func(), which calls
      memory_failure().
      
      When a page is already poisoned, commit a3f5d80e ("mm,hwpoison: send
      SIGBUS with error virutal address") made memory_failure() call
      kill_accessing_process(), which:
      
          - takes the mmap lock of current->mm
          - walks the page tables to find the error virtual address
          - and sends SIGBUS to the current process with the error info.
      
      However, the mm of a kworker is not valid, resulting in a NULL-pointer
      dereference.  So check mm when killing the accessing process.
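
      A sketch of the check, assuming the already-poisoned branch of
      memory_failure() is where kill_accessing_process() is reached:

      /* A kworker has no user mm; do not walk its page tables. */
      if (flags & MF_ACTION_REQUIRED && current->mm)
              res = kill_accessing_process(current, pfn, flags);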
      
      [akpm@linux-foundation.org: remove unrelated whitespace alteration]
      Link: https://lkml.kernel.org/r/20220914064935.7851-1-xueshuai@linux.alibaba.com
      Fixes: a3f5d80e ("mm,hwpoison: send SIGBUS with error virutal address")
      Signed-off-by: Shuai Xue <xueshuai@linux.alibaba.com>
      Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
      Acked-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
      Cc: Huang Ying <ying.huang@intel.com>
      Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
      Cc: Bixuan Cui <cuibixuan@linux.alibaba.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • mm/hugetlb: correct demote page offset logic · 31731452
      Committed by Doug Berger
      With gigantic pages it may not be true that struct page structures are
      contiguous across the entire gigantic page.  The nth_page macro is used
      here in place of direct pointer arithmetic to correct for this.
      
      Mike said:
      
      : This error could cause addressing exceptions.  However, this is only
      : possible in configurations where CONFIG_SPARSEMEM &&
      : !CONFIG_SPARSEMEM_VMEMMAP.  Such a configuration option is rare and
      : unknown to be the default anywhere.
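
      A sketch of the nth_page() pattern described above, with illustrative
      loop bounds from the hugetlb demote path:

      struct page *subpage;
      int i;

      for (i = 0; i < pages_per_huge_page(h); i++) {
              /* nth_page() handles non-contiguous struct pages, which
               * 'page + i' does not under SPARSEMEM && !SPARSEMEM_VMEMMAP. */
              subpage = nth_page(page, i);
              /* demote processing on subpage */
      }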
      
      Link: https://lkml.kernel.org/r/20220914190917.3517663-1-opendmb@gmail.com
      Fixes: 8531fc6f ("hugetlb: add hugetlb demote page support")
      Signed-off-by: Doug Berger <opendmb@gmail.com>
      Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
      Reviewed-by: Oscar Salvador <osalvador@suse.de>
      Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
      Cc: Muchun Song <songmuchun@bytedance.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • mm: prevent page_frag_alloc() from corrupting the memory · dac22531
      Committed by Maurizio Lombardi
      A number of drivers call page_frag_alloc() with a fragment's size >
      PAGE_SIZE.
      
      In low-memory conditions, __page_frag_cache_refill() may fail the
      order-3 cache allocation and fall back to order 0; in this case, the
      cache will be smaller than the fragment, causing memory corruption.
      
      Prevent this from happening by checking if the newly allocated cache is
      large enough for the fragment; if not, the allocation will fail and
      page_frag_alloc() will return NULL.
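
      A sketch of the added check on the refill path of page_frag_alloc(),
      assuming the cache fell back to a single order-0 page:

      if (unlikely(fragsz > PAGE_SIZE)) {
              /*
               * The refill fell back to order 0, so a fragment larger than
               * PAGE_SIZE cannot fit; fail rather than overrun the cache.
               * Keep the page, since releasing it could worsen pressure.
               */
              return NULL;
      }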
      
      Link: https://lkml.kernel.org/r/20220715125013.247085-1-mlombard@redhat.com
      Fixes: b63ae8ca ("mm/net: Rename and move page fragment handling from net/ to mm/")
      Signed-off-by: Maurizio Lombardi <mlombard@redhat.com>
      Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
      Cc: Chen Lin <chen45464546@163.com>
      Cc: Jakub Kicinski <kuba@kernel.org>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • mm: bring back update_mmu_cache() to finish_fault() · 70427f6e
      Committed by Sergei Antonov
      Running this test program on ARMv4 a few times (sometimes just once)
      reproduces the bug.
      
      #include <stdio.h>
      #include <string.h>
      #include <sys/mman.h>
      
      #define SIZE 0x4000 /* assumed; the definition was elided in the original */
      
      int main()
      {
              unsigned i;
              char paragon[SIZE];
              void* ptr;
      
              memset(paragon, 0xAA, SIZE);
              ptr = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
                         MAP_ANON | MAP_SHARED, -1, 0);
              if (ptr == MAP_FAILED) return 1;
              printf("ptr = %p\n", ptr);
              for (i=0;i<10000;i++){
                      memset(ptr, 0xAA, SIZE);
                      if (memcmp(ptr, paragon, SIZE)) {
                              printf("Unexpected bytes on iteration %u!!!\n", i);
                              break;
                      }
              }
              munmap(ptr, SIZE);
              return 0;
      }
      
      In the "ptr" buffer there appear runs of zero bytes which are aligned
      by 16 and their lengths are multiple of 16.
      
      Linux v5.11 does not have the bug, "git bisect" finds the first bad commit:
      f9ce0be7 ("mm: Cleanup faultaround and finish_fault() codepaths")
      
      Before that commit, update_mmu_cache() was called both from
      filemap_map_pages() and from finish_fault(); after it, finish_fault()
      lacks the call.
      
      Bring back update_mmu_cache() to finish_fault() to fix the bug.  Also
      call update_mmu_tlb() only when returning VM_FAULT_NOPAGE, to more
      closely reproduce the code of the alloc_set_pte() function that
      existed before the commit.
      
      On many platforms update_mmu_cache() is a no-op:
       x86, see arch/x86/include/asm/pgtable.h
       ARMv6+, see arch/arm/include/asm/tlbflush.h
      So, it seems, few users ran into this bug.
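
      A sketch of the fixed tail of finish_fault(); helper names are as in
      v6.0 mm/memory.c but should be read as illustrative:

      if (likely(!vmf_pte_changed(vmf))) {
              do_set_pte(vmf, page, vmf->address);

              /* No need to invalidate: a not-present page won't be cached. */
              update_mmu_cache(vma, vmf->address, vmf->pte);

              ret = 0;
      } else {
              /* Flush the stale TLB entry only when backing off. */
              update_mmu_tlb(vma, vmf->address, vmf->pte);
              ret = VM_FAULT_NOPAGE;
      }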
      
      Link: https://lkml.kernel.org/r/20220908204809.2012451-1-saproj@gmail.com
      Fixes: f9ce0be7 ("mm: Cleanup faultaround and finish_fault() codepaths")
      Signed-off-by: Sergei Antonov <saproj@gmail.com>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • frontswap: don't call ->init if no ops are registered · 37dcc673
      Committed by Christoph Hellwig
      If no frontswap module (i.e. zswap) was registered, frontswap_ops will
      be NULL.  In such a situation, swapon crashes with the following stack
      trace:
      
        Unable to handle kernel access to user memory outside uaccess routines at virtual address 0000000000000000
        Mem abort info:
          ESR = 0x0000000096000004
          EC = 0x25: DABT (current EL), IL = 32 bits
          SET = 0, FnV = 0
          EA = 0, S1PTW = 0
          FSC = 0x04: level 0 translation fault
        Data abort info:
          ISV = 0, ISS = 0x00000004
          CM = 0, WnR = 0
        user pgtable: 4k pages, 48-bit VAs, pgdp=00000020a4fab000
        [0000000000000000] pgd=0000000000000000, p4d=0000000000000000
        Internal error: Oops: 96000004 [#1] SMP
        Modules linked in: zram fsl_dpaa2_eth pcs_lynx phylink ahci_qoriq crct10dif_ce ghash_ce sbsa_gwdt fsl_mc_dpio nvme lm90 nvme_core at803x xhci_plat_hcd rtc_fsl_ftm_alarm xgmac_mdio ahci_platform i2c_imx ip6_tables ip_tables fuse
        Unloaded tainted modules: cppc_cpufreq():1
        CPU: 10 PID: 761 Comm: swapon Not tainted 6.0.0-rc2-00454-g22100432cf14 #1
        Hardware name: SolidRun Ltd. SolidRun CEX7 Platform, BIOS EDK II Jun 21 2022
        pstate: 00400005 (nzcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
        pc : frontswap_init+0x38/0x60
        lr : __do_sys_swapon+0x8a8/0x9f4
        sp : ffff80000969bcf0
        x29: ffff80000969bcf0 x28: ffff37bee0d8fc00 x27: ffff80000a7f5000
        x26: fffffcdefb971e80 x25: ffffaba797453b90 x24: 0000000000000064
        x23: ffff37c1f209d1a8 x22: ffff37bee880e000 x21: ffffaba797748560
        x20: ffff37bee0d8fce4 x19: ffffaba797748488 x18: 0000000000000014
        x17: 0000000030ec029a x16: ffffaba795a479b0 x15: 0000000000000000
        x14: 0000000000000000 x13: 0000000000000030 x12: 0000000000000001
        x11: ffff37c63c0aba18 x10: 0000000000000000 x9 : ffffaba7956b8c88
        x8 : ffff80000969bcd0 x7 : 0000000000000000 x6 : 0000000000000000
        x5 : 0000000000000001 x4 : 0000000000000000 x3 : ffffaba79730f000
        x2 : ffff37bee0d8fc00 x1 : 0000000000000000 x0 : 0000000000000000
        Call trace:
        frontswap_init+0x38/0x60
        __do_sys_swapon+0x8a8/0x9f4
        __arm64_sys_swapon+0x28/0x3c
        invoke_syscall+0x78/0x100
        el0_svc_common.constprop.0+0xd4/0xf4
        do_el0_svc+0x38/0x4c
        el0_svc+0x34/0x10c
        el0t_64_sync_handler+0x11c/0x150
        el0t_64_sync+0x190/0x194
        Code: d000e283 910003fd f9006c41 f946d461 (f9400021)
        ---[ end trace 0000000000000000 ]---
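
      A sketch of the guard in frontswap_init(), assuming frontswap_enabled()
      reports whether an ops structure was ever registered:

      /* Bail out before the ->init() call when no ops were registered. */
      if (!frontswap_enabled())
              return;
      frontswap_ops->init(type);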
      
      Link: https://lkml.kernel.org/r/20220909130829.3262926-1-hch@lst.de
      Fixes: 1da0d94a ("frontswap: remove support for multiple ops")
      Reported-by: Nathan Chancellor <nathan@kernel.org>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Liu Shixin <liushixin2@huawei.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • mm/huge_memory: use pfn_to_online_page() in split_huge_pages_all() · 2b7aa91b
      Committed by Naoya Horiguchi
      A NULL pointer dereference is triggered when thp split is invoked via
      debugfs on a system with offlined memory blocks.  With the debug
      option enabled, the following kernel messages are printed:
      
        page:00000000467f4890 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x121c000
        flags: 0x17fffc00000000(node=0|zone=2|lastcpupid=0x1ffff)
        raw: 0017fffc00000000 0000000000000000 dead000000000122 0000000000000000
        raw: 0000000000000000 0000000000000000 00000001ffffffff 0000000000000000
        page dumped because: unmovable page
        page:000000007d7ab72e is uninitialized and poisoned
        page dumped because: VM_BUG_ON_PAGE(PagePoisoned(p))
        ------------[ cut here ]------------
        kernel BUG at include/linux/mm.h:1248!
        invalid opcode: 0000 [#1] PREEMPT SMP PTI
        CPU: 16 PID: 20964 Comm: bash Tainted: G          I        6.0.0-rc3-foll-numa+ #41
        ...
        RIP: 0010:split_huge_pages_write+0xcf4/0xe30
      
      This shows that page_to_nid() in page_zone() is unexpectedly called
      for an offlined memmap.
      
      Use pfn_to_online_page() to get the struct page in the PFN walker.
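
      A sketch of the walker change, with illustrative PFN bounds:

      for (pfn = start_pfn; pfn < end_pfn; pfn++) {
              /* Returns NULL for offline sections, so an uninitialized
               * memmap (and its bogus page_zone()) is never touched. */
              struct page *page = pfn_to_online_page(pfn);

              if (!page)
                      continue;
              /* existing per-page split logic */
      }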
      
      Link: https://lkml.kernel.org/r/20220908041150.3430269-1-naoya.horiguchi@linux.dev
      Fixes: f1dd2cd1 ("mm, memory_hotplug: do not associate hotadded memory to zones until online")      [visible after d0dc12e8]
      Signed-off-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
      Co-developed-by: David Hildenbrand <david@redhat.com>
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Reviewed-by: Yang Shi <shy828301@gmail.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
      Reviewed-by: Oscar Salvador <osalvador@suse.de>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Muchun Song <songmuchun@bytedance.com>
      Cc: <stable@vger.kernel.org>	[5.10+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • mm: fix madvise_pageout mishandling on non-LRU page · 58d426a7
      Committed by Minchan Kim
      MADV_PAGEOUT tries to isolate non-LRU pages and triggers the warning
      in isolate_lru_page() shown below.
      
      Fix it by checking PageLRU in advance.
      
      ------------[ cut here ]------------
      trying to isolate tail page
      WARNING: CPU: 0 PID: 6175 at mm/folio-compat.c:158 isolate_lru_page+0x130/0x140
      Modules linked in:
      CPU: 0 PID: 6175 Comm: syz-executor.0 Not tainted 5.18.12 #1
      Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1.1 04/01/2014
      RIP: 0010:isolate_lru_page+0x130/0x140
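
      A sketch of the check in the MADV_PAGEOUT page walk; the mapcount test
      already existed, the PageLRU() test is the fix:

      /* Do not interfere with other mappings of this page, and skip
       * non-LRU pages so isolate_lru_page() is never handed one. */
      if (!PageLRU(page) || page_mapcount(page) != 1)
              continue;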
      
      Link: https://lore.kernel.org/linux-mm/485f8c33.2471b.182d5726afb.Coremail.hantianshuo@iie.ac.cn/
      Link: https://lkml.kernel.org/r/20220908151204.762596-1-minchan@kernel.org
      Fixes: 1a4e58cc ("mm: introduce MADV_PAGEOUT")
      Signed-off-by: Minchan Kim <minchan@kernel.org>
      Reported-by: 韩天硕 <hantianshuo@iie.ac.cn>
      Suggested-by: Yang Shi <shy828301@gmail.com>
      Acked-by: Yang Shi <shy828301@gmail.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • powerpc/64s/radix: don't need to broadcast IPI for radix pmd collapse flush · bedf0341
      Committed by Yang Shi
      The IPI broadcast is used to serialize against fast-GUP, but fast-GUP
      will move to using RCU instead of disabling local interrupts.  Using
      an IPI is the old-style way of serializing against fast-GUP, although
      it still works as expected now.
      
      Fast-GUP now also fixes the potential race with THP collapse by
      checking whether the PMD has changed.  So the IPI broadcast in the
      radix pmd collapse flush is no longer necessary.  It is still needed
      for the hash TLB, though.
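
      A sketch of radix__pmdp_collapse_flush() after the change, where
      serialize_against_pte_lookup() is the IPI broadcast being dropped:

      pmd = *pmdp;
      pmd_clear(pmdp);

      /*
       * serialize_against_pte_lookup(vma->vm_mm) was called here to IPI
       * all CPUs and wait out fast-GUP; no longer needed on radix now
       * that fast-GUP rechecks the PMD after pinning.
       */
      radix__flush_tlb_collapsed_pmd(vma->vm_mm, address);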
      
      Link: https://lkml.kernel.org/r/20220907180144.555485-2-shy828301@gmail.com
      Suggested-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Signed-off-by: Yang Shi <shy828301@gmail.com>
      Acked-by: David Hildenbrand <david@redhat.com>
      Acked-by: Peter Xu <peterx@redhat.com>
      Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Jason Gunthorpe <jgg@nvidia.com>
      Cc: John Hubbard <jhubbard@nvidia.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • mm: gup: fix the fast GUP race against THP collapse · 70cbc3cc
      Committed by Yang Shi
      Since general RCU GUP-fast was introduced in commit 2667f50e ("mm:
      introduce a general RCU get_user_pages_fast()"), a TLB flush is no
      longer sufficient to handle concurrent GUP-fast in all cases; it only
      handles traditional IPI-based GUP-fast correctly.  On architectures
      that send an IPI broadcast on TLB flush, it works as expected.  But on
      architectures that do not use an IPI to broadcast the TLB flush, the
      race below is possible:
      
         CPU A                                          CPU B
      THP collapse                                     fast GUP
                                                    gup_pmd_range() <-- see valid pmd
                                                        gup_pte_range() <-- work on pte
      pmdp_collapse_flush() <-- clear pmd and flush
      __collapse_huge_page_isolate()
          check page pinned <-- before GUP bump refcount
                                                            pin the page
                                                            check PTE <-- no change
      __collapse_huge_page_copy()
          copy data to huge page
          ptep_clear()
      install huge pmd for the huge page
                                                            return the stale page
      discard the stale page
      
      The race can be fixed by checking whether the PMD has changed after
      taking the page pin in fast GUP, just as is already done for the PTE.
      If the PMD has changed, a parallel THP collapse may be in progress, so
      GUP should back off.
      
      Also update the stale comment about serializing against fast GUP in
      khugepaged.
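
      A sketch of the recheck in gup_pte_range() once the pin is taken,
      following the diagram above:

      /*
       * If either the PMD or the PTE changed while the pin was being
       * taken, a parallel THP collapse may have replaced the page: drop
       * the pin and back off, mirroring the existing PTE-only check.
       */
      if (unlikely(pmd_val(pmd) != pmd_val(*pmdp)) ||
          unlikely(pte_val(pte) != pte_val(*ptep))) {
              gup_put_folio(folio, 1, flags);
              goto pte_unmap;
      }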
      
      Link: https://lkml.kernel.org/r/20220907180144.555485-1-shy828301@gmail.com
      Fixes: 2667f50e ("mm: introduce a general RCU get_user_pages_fast()")
      Acked-by: David Hildenbrand <david@redhat.com>
      Acked-by: Peter Xu <peterx@redhat.com>
      Signed-off-by: Yang Shi <shy828301@gmail.com>
      Reviewed-by: John Hubbard <jhubbard@nvidia.com>
      Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Jason Gunthorpe <jgg@nvidia.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  2. 12 Sep 2022, 27 commits