1. 29 May 2020 (1 commit)
    • mm/z3fold: silence kmemleak false positives of slots · af4798a5
      Authored by Qian Cai
      Kmemleak reported many leaks while under memory pressure in,
      
          slots = alloc_slots(pool, gfp);
      
      which is referenced by "zhdr" in init_z3fold_page(),
      
          zhdr->slots = slots;
      
      However, "zhdr" could be gone without freeing slots, as the latter
      will be freed separately when the last "handle" off the "handles"
      array is freed.  That handle will be within "slots", which is always
      aligned.  (The fix silences the false positive with a kmemleak
      annotation; a hedged sketch follows this entry.)
      
        unreferenced object 0xc000000fdadc1040 (size 104):
        comm "oom04", pid 140476, jiffies 4295359280 (age 3454.970s)
        hex dump (first 32 bytes):
          00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
          00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
        backtrace:
          z3fold_zpool_malloc+0x7b0/0xe10
          alloc_slots at mm/z3fold.c:214
          (inlined by) init_z3fold_page at mm/z3fold.c:412
          (inlined by) z3fold_alloc at mm/z3fold.c:1161
          (inlined by) z3fold_zpool_malloc at mm/z3fold.c:1735
          zpool_malloc+0x34/0x50
          zswap_frontswap_store+0x60c/0xda0
          zswap_frontswap_store at mm/zswap.c:1093
          __frontswap_store+0x128/0x330
          swap_writepage+0x58/0x110
          pageout+0x16c/0xa40
          shrink_page_list+0x1ac8/0x25c0
          shrink_inactive_list+0x270/0x730
          shrink_lruvec+0x444/0xf30
          shrink_node+0x2a4/0x9c0
          do_try_to_free_pages+0x158/0x640
          try_to_free_pages+0x1bc/0x5f0
          __alloc_pages_slowpath.constprop.60+0x4dc/0x15a0
          __alloc_pages_nodemask+0x520/0x650
          alloc_pages_vma+0xc0/0x420
          handle_mm_fault+0x1174/0x1bf0
      Signed-off-by: Qian Cai <cai@lca.pw>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Acked-by: Vitaly Wool <vitaly.wool@konsulko.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Link: http://lkml.kernel.org/r/20200522220052.2225-1-cai@lca.pw
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      af4798a5
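
      A hedged sketch of the kind of annotation such a fix applies,
      assuming the alloc_slots() shape quoted above (the exact placement
      in the merged diff may differ).  kmemleak_not_leak() is the stock
      kernel API for objects that are freed through a path kmemleak
      cannot track:

          #include <linux/kmemleak.h>
          #include <linux/slab.h>

          static inline struct z3fold_buddy_slots *alloc_slots(
                          struct z3fold_pool *pool, gfp_t gfp)
          {
                  struct z3fold_buddy_slots *slots =
                          kmem_cache_zalloc(pool->c_handle, gfp);

                  if (slots) {
                          /*
                           * "slots" outlives "zhdr" and is freed separately
                           * when the last handle stored in it is freed, so
                           * tell kmemleak this is not a leak.
                           */
                          kmemleak_not_leak(slots);
                          /* ... remaining slot initialization elided ... */
                  }
                  return slots;
          }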
  2. 24 May 2020 (2 commits)
  3. 15 May 2020 (4 commits)
  4. 10 May 2020 (1 commit)
  5. 08 May 2020 (6 commits)
    • mm: limit boost_watermark on small zones · 14f69140
      Authored by Henry Willard
      Commit 1c30844d ("mm: reclaim small amounts of memory when an
      external fragmentation event occurs") adds a boost_watermark() function
      which increases the min watermark in a zone by at least
      pageblock_nr_pages, i.e. the number of pages in a pageblock.
      
      On Arm64, with 64K pages and 512M huge pages, this is 8192 pages or
      512M.  It does this regardless of the number of pages managed in the
      zone or the likelihood of success.
      
      This can put the zone immediately under water in terms of allocating
      pages from the zone, and can cause a small machine to fail immediately
      due to OOM.  Unlike set_recommended_min_free_kbytes(), which
      substantially increases min_free_kbytes and is tied to THP,
      boost_watermark() can be called even if THP is not active.
      
      The problem is most likely to appear on architectures such as Arm64
      where pageblock_nr_pages is very large.
      
      It is desirable to run the kdump capture kernel in as small a space as
      possible to avoid wasting memory.  In some architectures, such as Arm64,
      there are restrictions on where the capture kernel can run, and
      therefore, the space available.  A capture kernel running in 768M can
      fail due to OOM immediately after boost_watermark() sets the min in zone
      DMA32, where most of the memory is, to 512M.  It fails even though there
      is over 500M of free memory.  With boost_watermark() suppressed, the
      capture kernel can run successfully in 448M.
      
      This patch limits boost_watermark() to boosting a zone's min watermark
      only when there are enough pages that the boost will produce positive
      results.  In this case that is estimated to be four times as many pages
      as pageblock_nr_pages.  (A hedged sketch of the guard follows this
      entry.)
      
      Mel said:
      
      : There is no harm in marking it stable.  Clearly it does not happen very
      : often but it's not impossible.  32-bit x86 is a lot less common now
      : which would previously have been vulnerable to triggering this easily.
      : ppc64 has a larger base page size but typically only has one zone.
      : arm64 is likely the most vulnerable, particularly when CMA is
      : configured with a small movable zone.
      
      Fixes: 1c30844d ("mm: reclaim small amounts of memory when an external fragmentation event occurs")
      Signed-off-by: Henry Willard <henry.willard@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Reviewed-by: David Hildenbrand <david@redhat.com>
      Acked-by: Mel Gorman <mgorman@techsingularity.net>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: <stable@vger.kernel.org>
      Link: http://lkml.kernel.org/r/1588294148-6586-1-git-send-email-henry.willard@oracle.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      14f69140
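
      A hedged sketch of the guard described above, assuming the upstream
      helper names (watermark_boost_factor, pageblock_nr_pages,
      zone_managed_pages()); the existing boosting logic is elided:

          /* mm/page_alloc.c (sketch) */
          static inline void boost_watermark(struct zone *zone)
          {
                  unsigned long max_boost;

                  if (!watermark_boost_factor)
                          return;
                  /*
                   * Don't boost zones that are too small for the boost to
                   * produce positive results; on a small machine (e.g. a
                   * kdump capture kernel) it would put the zone under
                   * water immediately.  "Enough pages" is estimated as
                   * four times pageblock_nr_pages.
                   */
                  if ((pageblock_nr_pages * 4) > zone_managed_pages(zone))
                          return;

                  max_boost = mult_frac(zone->_watermark[WMARK_HIGH],
                                        watermark_boost_factor, 10000);
                  /* ... existing boosting logic continues ... */
          }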
    • mm/vmscan: remove unnecessary argument description of isolate_lru_pages() · 17e34526
      Authored by Qiwu Chen
      Since commit a9e7c39f ("mm/vmscan.c: remove 7th argument of
      isolate_lru_pages()"), the explanation of the 'mode' argument has been
      unnecessary.  Let's remove it.
      Signed-off-by: Qiwu Chen <chenqiwu@xiaomi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
      Link: http://lkml.kernel.org/r/20200501090346.2894-1-chenqiwu@xiaomi.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      17e34526
    • percpu: make pcpu_alloc() aware of current gfp context · 28307d93
      Authored by Filipe Manana
      Since 5.7-rc1, on btrfs we have a percpu counter initialization for
      which we always pass a GFP_KERNEL gfp_t argument (this happens since
      commit 2992df73 ("btrfs: Implement DREW lock")).
      
      That is safe in some contexts but not in others where allowing fs
      reclaim could lead to a deadlock because we are either holding some
      btrfs lock needed for a transaction commit or holding a btrfs
      transaction handle open.  Because of that we surround the call to the
      function that initializes the percpu counter with a NOFS context using
      memalloc_nofs_save() (this is done at btrfs_init_fs_root()).
      
      However it turns out that this is not enough to prevent a possible
      deadlock because pcpu_alloc() determines if it is in an atomic context
      by looking exclusively at the gfp flags passed to it (GFP_KERNEL in this
      case) and it is not aware that a NOFS context is set.
      
      Because pcpu_alloc() thinks it is in a non-atomic context, it locks the
      pcpu_alloc_mutex.  This can result in a btrfs deadlock when
      pcpu_balance_workfn() is running, has acquired that mutex and is waiting
      for reclaim, while the btrfs task that called percpu_counter_init() (and
      therefore pcpu_alloc()) is holding either the btrfs commit_root
      semaphore or a transaction handle (done at fs/btrfs/backref.c:
      iterate_extent_inodes()), which prevents reclaim from finishing as an
      attempt to commit the current btrfs transaction will deadlock.
      
      Lockdep reports this issue with the following trace:
      
        ======================================================
        WARNING: possible circular locking dependency detected
        5.6.0-rc7-btrfs-next-77 #1 Not tainted
        ------------------------------------------------------
        kswapd0/91 is trying to acquire lock:
        ffff8938a3b3fdc8 (&delayed_node->mutex){+.+.}, at: __btrfs_release_delayed_node.part.0+0x3f/0x320 [btrfs]
      
        but task is already holding lock:
        ffffffffb4f0dbc0 (fs_reclaim){+.+.}, at: __fs_reclaim_acquire+0x5/0x30
      
        which lock already depends on the new lock.
      
        the existing dependency chain (in reverse order) is:
      
        -> #4 (fs_reclaim){+.+.}:
               fs_reclaim_acquire.part.0+0x25/0x30
               __kmalloc+0x5f/0x3a0
               pcpu_create_chunk+0x19/0x230
               pcpu_balance_workfn+0x56a/0x680
               process_one_work+0x235/0x5f0
               worker_thread+0x50/0x3b0
               kthread+0x120/0x140
               ret_from_fork+0x3a/0x50
      
        -> #3 (pcpu_alloc_mutex){+.+.}:
               __mutex_lock+0xa9/0xaf0
               pcpu_alloc+0x480/0x7c0
               __percpu_counter_init+0x50/0xd0
               btrfs_drew_lock_init+0x22/0x70 [btrfs]
               btrfs_get_fs_root+0x29c/0x5c0 [btrfs]
               resolve_indirect_refs+0x120/0xa30 [btrfs]
               find_parent_nodes+0x50b/0xf30 [btrfs]
               btrfs_find_all_leafs+0x60/0xb0 [btrfs]
               iterate_extent_inodes+0x139/0x2f0 [btrfs]
               iterate_inodes_from_logical+0xa1/0xe0 [btrfs]
               btrfs_ioctl_logical_to_ino+0xb4/0x190 [btrfs]
               btrfs_ioctl+0x165a/0x3130 [btrfs]
               ksys_ioctl+0x87/0xc0
               __x64_sys_ioctl+0x16/0x20
               do_syscall_64+0x5c/0x260
               entry_SYSCALL_64_after_hwframe+0x49/0xbe
      
        -> #2 (&fs_info->commit_root_sem){++++}:
               down_write+0x38/0x70
               btrfs_cache_block_group+0x2ec/0x500 [btrfs]
               find_free_extent+0xc6a/0x1600 [btrfs]
               btrfs_reserve_extent+0x9b/0x180 [btrfs]
               btrfs_alloc_tree_block+0xc1/0x350 [btrfs]
               alloc_tree_block_no_bg_flush+0x4a/0x60 [btrfs]
               __btrfs_cow_block+0x122/0x5a0 [btrfs]
               btrfs_cow_block+0x106/0x240 [btrfs]
               commit_cowonly_roots+0x55/0x310 [btrfs]
               btrfs_commit_transaction+0x509/0xb20 [btrfs]
               sync_filesystem+0x74/0x90
               generic_shutdown_super+0x22/0x100
               kill_anon_super+0x14/0x30
               btrfs_kill_super+0x12/0x20 [btrfs]
               deactivate_locked_super+0x31/0x70
               cleanup_mnt+0x100/0x160
               task_work_run+0x93/0xc0
               exit_to_usermode_loop+0xf9/0x100
               do_syscall_64+0x20d/0x260
               entry_SYSCALL_64_after_hwframe+0x49/0xbe
      
        -> #1 (&space_info->groups_sem){++++}:
               down_read+0x3c/0x140
               find_free_extent+0xef6/0x1600 [btrfs]
               btrfs_reserve_extent+0x9b/0x180 [btrfs]
               btrfs_alloc_tree_block+0xc1/0x350 [btrfs]
               alloc_tree_block_no_bg_flush+0x4a/0x60 [btrfs]
               __btrfs_cow_block+0x122/0x5a0 [btrfs]
               btrfs_cow_block+0x106/0x240 [btrfs]
               btrfs_search_slot+0x50c/0xd60 [btrfs]
               btrfs_lookup_inode+0x3a/0xc0 [btrfs]
               __btrfs_update_delayed_inode+0x90/0x280 [btrfs]
               __btrfs_commit_inode_delayed_items+0x81f/0x870 [btrfs]
               __btrfs_run_delayed_items+0x8e/0x180 [btrfs]
               btrfs_commit_transaction+0x31b/0xb20 [btrfs]
               iterate_supers+0x87/0xf0
               ksys_sync+0x60/0xb0
               __ia32_sys_sync+0xa/0x10
               do_syscall_64+0x5c/0x260
               entry_SYSCALL_64_after_hwframe+0x49/0xbe
      
        -> #0 (&delayed_node->mutex){+.+.}:
               __lock_acquire+0xef0/0x1c80
               lock_acquire+0xa2/0x1d0
               __mutex_lock+0xa9/0xaf0
               __btrfs_release_delayed_node.part.0+0x3f/0x320 [btrfs]
               btrfs_evict_inode+0x40d/0x560 [btrfs]
               evict+0xd9/0x1c0
               dispose_list+0x48/0x70
               prune_icache_sb+0x54/0x80
               super_cache_scan+0x124/0x1a0
               do_shrink_slab+0x176/0x440
               shrink_slab+0x23a/0x2c0
               shrink_node+0x188/0x6e0
               balance_pgdat+0x31d/0x7f0
               kswapd+0x238/0x550
               kthread+0x120/0x140
               ret_from_fork+0x3a/0x50
      
        other info that might help us debug this:
      
        Chain exists of:
          &delayed_node->mutex --> pcpu_alloc_mutex --> fs_reclaim
      
         Possible unsafe locking scenario:
      
               CPU0                    CPU1
               ----                    ----
          lock(fs_reclaim);
                                       lock(pcpu_alloc_mutex);
                                       lock(fs_reclaim);
          lock(&delayed_node->mutex);
      
         *** DEADLOCK ***
      
        3 locks held by kswapd0/91:
         #0: (fs_reclaim){+.+.}, at: __fs_reclaim_acquire+0x5/0x30
         #1: (shrinker_rwsem){++++}, at: shrink_slab+0x12f/0x2c0
         #2: (&type->s_umount_key#43){++++}, at: trylock_super+0x16/0x50
      
        stack backtrace:
        CPU: 1 PID: 91 Comm: kswapd0 Not tainted 5.6.0-rc7-btrfs-next-77 #1
        Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.12.0-0-ga698c8995f-prebuilt.qemu.org 04/01/2014
        Call Trace:
         dump_stack+0x8f/0xd0
         check_noncircular+0x170/0x190
         __lock_acquire+0xef0/0x1c80
         lock_acquire+0xa2/0x1d0
         __mutex_lock+0xa9/0xaf0
         __btrfs_release_delayed_node.part.0+0x3f/0x320 [btrfs]
         btrfs_evict_inode+0x40d/0x560 [btrfs]
         evict+0xd9/0x1c0
         dispose_list+0x48/0x70
         prune_icache_sb+0x54/0x80
         super_cache_scan+0x124/0x1a0
         do_shrink_slab+0x176/0x440
         shrink_slab+0x23a/0x2c0
         shrink_node+0x188/0x6e0
         balance_pgdat+0x31d/0x7f0
         kswapd+0x238/0x550
         kthread+0x120/0x140
         ret_from_fork+0x3a/0x50
      
      This could be fixed by making btrfs pass GFP_NOFS instead of GFP_KERNEL
      to percpu_counter_init() in contexts where it is not reclaim safe,
      however that type of approach is discouraged since
      memalloc_[nofs|noio]_save() were introduced.  Therefore this change
      makes pcpu_alloc() check for an existing NOFS/NOIO context before
      deciding whether it is in an atomic context or not.  (A hedged sketch
      follows this entry.)
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
      Acked-by: Tejun Heo <tj@kernel.org>
      Acked-by: Dennis Zhou <dennis@kernel.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Christoph Lameter <cl@linux.com>
      Link: http://lkml.kernel.org/r/20200430164356.15543-1-fdmanana@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      28307d93
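
      A hedged sketch of the idea: filter the caller's flags through
      current_gfp_context(), which applies any scoped
      memalloc_nofs_save()/memalloc_noio_save() restriction, before
      classifying the allocation as atomic.  current_gfp_context() is the
      existing helper from <linux/sched/mm.h>; the rest of pcpu_alloc()
      is elided:

          #include <linux/sched/mm.h>   /* current_gfp_context() */

          static void __percpu *pcpu_alloc(size_t size, size_t align,
                                           bool reserved, gfp_t gfp)
          {
                  bool is_atomic;

                  /*
                   * Honour a scoped NOFS/NOIO section: with the restricted
                   * bits masked off, a GFP_KERNEL caller inside
                   * memalloc_nofs_save() no longer satisfies the GFP_KERNEL
                   * test below, so pcpu_alloc_mutex (which can be held
                   * while waiting for reclaim) is not taken.
                   */
                  gfp = current_gfp_context(gfp);
                  is_atomic = (gfp & GFP_KERNEL) != GFP_KERNEL;

                  /* ... allocation path unchanged ... */
                  return NULL;  /* elided in this sketch */
          }

      The advantage over passing GFP_NOFS at each btrfs call site is that
      every pcpu_alloc() caller inside a scoped NOFS/NOIO section is
      covered automatically.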
    • mm/slub: fix incorrect interpretation of s->offset · cbfc35a4
      Authored by Waiman Long
      In a couple of places in the slub memory allocator, the code uses
      "s->offset" as a check to see if the free pointer is put right after the
      object.  That check is no longer true with commit 3202fa62 ("slub:
      relocate freelist pointer to middle of object").
      
      As a result, echoing "1" into the validate sysfs file, e.g.  of dentry,
      may cause a bunch of "Freepointer corrupt" error reports like the
      following to appear, with the system panicking afterwards.
      
        =============================================================================
        BUG dentry(666:pmcd.service) (Tainted: G    B): Freepointer corrupt
        -----------------------------------------------------------------------------
      
      To fix it, use the check "s->offset == s->inuse" in the new helper
      function freeptr_outside_object() instead.  Also add another helper
      function get_info_end() to return the end of the info block (inuse +
      free pointer if not overlapping with object).  (A hedged sketch of the
      helpers follows this entry.)
      
      Fixes: 3202fa62 ("slub: relocate freelist pointer to middle of object")
      Signed-off-by: Waiman Long <longman@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Acked-by: Rafael Aquini <aquini@redhat.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Vitaly Nikolenko <vnik@duasynt.com>
      Cc: Silvio Cesare <silvio.cesare@gmail.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Markus Elfring <Markus.Elfring@web.de>
      Cc: Changbin Du <changbin.du@gmail.com>
      Link: http://lkml.kernel.org/r/20200429135328.26976-1-longman@redhat.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cbfc35a4
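
      A hedged sketch of the two helpers as the text above describes them
      (the merged version may compare with ">=" rather than "=="):

          /* mm/slub.c (sketch) */

          /* Is the free pointer stored outside of the object itself? */
          static inline bool freeptr_outside_object(struct kmem_cache *s)
          {
                  /*
                   * Since commit 3202fa62 the free pointer may sit in the
                   * middle of the object (s->offset < s->inuse); only when
                   * it sits right past the object is it outside of it.
                   */
                  return s->offset == s->inuse;
          }

          /*
           * Return the end of the info block: inuse plus the free pointer,
           * if the pointer does not overlap with the object.
           */
          static inline unsigned int get_info_end(struct kmem_cache *s)
          {
                  if (freeptr_outside_object(s))
                          return s->inuse + sizeof(void *);
                  return s->inuse;
          }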
    • mm/page_alloc: fix watchdog soft lockups during set_zone_contiguous() · e84fe99b
      Authored by David Hildenbrand
      Without CONFIG_PREEMPT, it can happen that we get soft lockups detected,
      e.g., while booting up.
      
        watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [swapper/0:1]
        CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.6.0-next-20200331+ #4
        Hardware name: Red Hat KVM, BIOS 1.11.1-4.module+el8.1.0+4066+0f1aadab 04/01/2014
        RIP: __pageblock_pfn_to_page+0x134/0x1c0
        Call Trace:
         set_zone_contiguous+0x56/0x70
         page_alloc_init_late+0x166/0x176
         kernel_init_freeable+0xfa/0x255
         kernel_init+0xa/0x106
         ret_from_fork+0x35/0x40
      
      The issue becomes visible when a lot of memory (e.g., 4TB) is assigned
      to a single NUMA node - a system that can easily be created using QEMU.
      Inside VMs on a hypervisor with quite some memory overcommit, this is
      fairly easy to trigger.  (A hedged sketch of the fix follows this
      entry.)
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Reviewed-by: Pavel Tatashin <pasha.tatashin@soleen.com>
      Reviewed-by: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
      Reviewed-by: Baoquan He <bhe@redhat.com>
      Reviewed-by: Shile Zhang <shile.zhang@linux.alibaba.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Kirill Tkhai <ktkhai@virtuozzo.com>
      Cc: Shile Zhang <shile.zhang@linux.alibaba.com>
      Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
      Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Alexander Duyck <alexander.duyck@gmail.com>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Oscar Salvador <osalvador@suse.de>
      Cc: <stable@vger.kernel.org>
      Link: http://lkml.kernel.org/r/20200416073417.5003-1-david@redhat.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e84fe99b
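
      The commit text stops at the symptom; to the best of my knowledge the
      fix inserts a cond_resched() into the pageblock loop of
      set_zone_contiguous().  Treat the sketch below as an assumption
      rather than a quote of the merged diff:

          /* mm/page_alloc.c (sketch; exact upstream diff assumed) */
          void set_zone_contiguous(struct zone *zone)
          {
                  unsigned long block_start_pfn = zone->zone_start_pfn;
                  unsigned long block_end_pfn;

                  block_end_pfn = ALIGN(block_start_pfn + 1,
                                        pageblock_nr_pages);
                  for (; block_start_pfn < zone_end_pfn(zone);
                          block_start_pfn = block_end_pfn,
                          block_end_pfn += pageblock_nr_pages) {

                          block_end_pfn = min(block_end_pfn,
                                              zone_end_pfn(zone));
                          if (!__pageblock_pfn_to_page(block_start_pfn,
                                                       block_end_pfn, zone))
                                  return;
                          /*
                           * Terabytes in a single zone make this loop run
                           * for a long time; without CONFIG_PREEMPT that
                           * trips the soft-lockup watchdog, so yield.
                           */
                          cond_resched();
                  }

                  zone->contiguous = true;
          }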
    • mm, memcg: fix error return value of mem_cgroup_css_alloc() · 11d67612
      Authored by Yafang Shao
      When I ran my memcg testcase, which creates lots of memcgs, I found
      unexpected out-of-memory logs while there was still plenty of free
      memory available.  The error log is
      
        mkdir: cannot create directory 'foo.65533': Cannot allocate memory
      
      The reason is when we try to create more than MEM_CGROUP_ID_MAX memcgs,
      an -ENOMEM errno will be set by mem_cgroup_css_alloc(), but the right
      errno should be -ENOSPC "No space left on device", which is an
      appropriate errno for userspace's failed mkdir.
      
      As the errno really misled me, we should make it right (a hedged
      sketch of the errno propagation follows this entry).  After this
      patch, the error log will be
      
        mkdir: cannot create directory 'foo.65533': No space left on device
      
      [akpm@linux-foundation.org: s/EBUSY/ENOSPC/, per Michal]
      Fixes: 73f576c0 ("mm: memcontrol: fix cgroup creation failure after many small jobs")
      Suggested-by: Matthew Wilcox <willy@infradead.org>
      Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Acked-by: Michal Hocko <mhocko@kernel.org>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
      Link: http://lkml.kernel.org/r/20200407063621.GA18914@dhcp22.suse.cz
      Link: http://lkml.kernel.org/r/1586192163-20099-1-git-send-email-laoar.shao@gmail.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      11d67612
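
      A hedged sketch of the errno propagation (not the verbatim upstream
      diff): keep the -ENOSPC that idr_alloc() returns once the
      MEM_CGROUP_ID_MAX range is exhausted instead of flattening every
      failure to -ENOMEM:

          /* mm/memcontrol.c (sketch) */
          static struct mem_cgroup *mem_cgroup_alloc(void)
          {
                  struct mem_cgroup *memcg;
                  long error = -ENOMEM;

                  memcg = kzalloc(sizeof(*memcg), GFP_KERNEL);
                  if (!memcg)
                          return ERR_PTR(error);

                  /*
                   * idr_alloc() fails with -ENOSPC once the id space is
                   * exhausted; hand that errno up so userspace sees "No
                   * space left on device" rather than "Cannot allocate
                   * memory" for its mkdir.
                   */
                  memcg->id.id = idr_alloc(&mem_cgroup_idr, NULL, 1,
                                           MEM_CGROUP_ID_MAX, GFP_KERNEL);
                  if (memcg->id.id < 0) {
                          error = memcg->id.id;
                          goto fail;
                  }

                  /* ... remaining initialization elided ... */
                  return memcg;
          fail:
                  kfree(memcg);
                  return ERR_PTR(error);
          }

      mem_cgroup_css_alloc() would then pass this error pointer up
      unchanged instead of substituting its own -ENOMEM, which is what
      turns the mkdir failure into ENOSPC.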
  6. 07 May 2020 (1 commit)
  7. 25 April 2020 (1 commit)
    • mm: check that mm is still valid in madvise() · bc0c4d1e
      Authored by Linus Torvalds
      IORING_OP_MADVISE can end up basically doing mprotect() on the VM of
      another process, which means that it can race with our crazy core dump
      handling which accesses the VM state without holding the mmap_sem
      (because it incorrectly thinks that it is the final user).
      
      This is clearly a core dumping problem, but we've never fixed it the
      right way, and instead have the notion of "check that the mm is still
      ok" using mmget_still_valid() after getting the mmap_sem for writing in
      any situation where we're not the original VM thread.
      
      See commit 04f5866e ("coredump: fix race condition between
      mmget_not_zero()/get_task_mm() and core dumping") for more background on
      this whole mmget_still_valid() thing.  You might want to have a barf bag
      handy when you do.
      
      We're discussing just fixing this properly in the only remaining core
      dumping routines.  But even if we do that, let's make do_madvise() do
      the right thing, and then when we fix core dumping, we can remove all
      these mmget_still_valid() checks.  (A hedged sketch of the check
      follows this entry.)
      Reported-and-tested-by: Jann Horn <jannh@google.com>
      Fixes: c1ca757b ("io_uring: add IORING_OP_MADVISE")
      Acked-by: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      bc0c4d1e
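
      A hedged sketch of the check in do_madvise() (the shape of it, not
      the verbatim diff); mmget_still_valid() is the existing helper the
      text refers to:

          /* mm/madvise.c, inside do_madvise() (sketch) */
          if (write) {
                  if (down_write_killable(&current->mm->mmap_sem))
                          return -EINTR;
                  /*
                   * The mm may have been "stolen" from another process
                   * (io_uring) that is core dumping; the dumper walks VM
                   * state without holding mmap_sem, so bail out instead
                   * of mutating the address space under it.
                   */
                  if (!mmget_still_valid(current->mm)) {
                          up_write(&current->mm->mmap_sem);
                          return -EINTR;
                  }
          }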
  8. 22 April 2020 (8 commits)
  9. 20 April 2020 (1 commit)
    • mm: Fix MREMAP_DONTUNMAP accounting on VMA merge · dadbd85f
      Authored by Brian Geffon
      When remapping a mapping where a portion of a VMA is remapped
      into another portion of the VMA, the VMA can become split.
      During the copy_vma operation the VMA can actually
      be remerged if it's an anonymous VMA whose pages have not yet
      been faulted.  This isn't normally a problem because at the end
      of the remap the original portion is unmapped, causing it to
      become split again.
      
      However, MREMAP_DONTUNMAP leaves that original portion in place, which
      means that the VMA which was split and then remerged is not actually
      split at the end of the mremap.  This patch fixes a bug where
      we don't detect that the VMAs got remerged and we end up
      putting back VM_ACCOUNT on the next mapping, which is completely
      unrelated.  When that next mapping is unmapped, it results in
      incorrectly unaccounting for memory which was never accounted,
      and eventually we will underflow on the memory commitment.
      
      There is also another, similar issue: we're currently
      accounting for the number of pages in the new_vma, but that's wrong.
      We need to account for the length of the remap operation, as that's
      all that is being added.  If there was a mapping already at that
      location, its commitment would have been adjusted as part of
      the munmap at the start of the mremap.  (A hedged sketch follows
      this entry.)
      
      A really simple repro can be seen in:
      https://gist.github.com/bgaff/e101ce99da7d9a8c60acc641d07f312c
      
      Fixes: e346b381 ("mm/mremap: add MREMAP_DONTUNMAP to mremap()")
      Reported-by: syzbot <syzkaller@googlegroups.com>
      Signed-off-by: Brian Geffon <bgeffon@google.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      dadbd85f
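
      A hedged, conceptual fragment of the accounting change in move_vma()
      (names from mm/mremap.c; not the verbatim upstream diff): charge the
      commitment for new_len up front and stop inferring it from new_vma,
      which copy_vma() may have remerged:

          /* mm/mremap.c, inside move_vma() (conceptual sketch) */
          if (unlikely(flags & MREMAP_DONTUNMAP && vm_flags & VM_ACCOUNT)) {
                  /*
                   * Account for the length being added by this remap, not
                   * vma_pages(new_vma): copy_vma() may have remerged the
                   * split VMA, making new_vma cover far more than what
                   * this operation maps in.
                   */
                  if (security_vm_enough_memory_mm(mm,
                                                   new_len >> PAGE_SHIFT))
                          return -ENOMEM;
          }

          new_vma = copy_vma(&vma, new_addr, new_len, new_pgoff,
                             &need_rmap_locks);
          if (!new_vma) {
                  if (unlikely(flags & MREMAP_DONTUNMAP &&
                               vm_flags & VM_ACCOUNT))
                          vm_unacct_memory(new_len >> PAGE_SHIFT);
                  return -ENOMEM;
          }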
  10. 11 April 2020 (13 commits)
  11. 09 April 2020 (1 commit)
  12. 08 April 2020 (1 commit)