1. 12 Oct 2021 (1 commit)
  2. 29 Sep 2021 (2 commits)
  3. 14 Jul 2021 (16 commits)
  4. 15 Jun 2021 (1 commit)
      mm, hugetlb: fix simple resv_huge_pages underflow on UFFDIO_COPY · 82ac4173
      Mina Almasry authored
      stable inclusion
      from stable-5.10.43
      commit 2eb4ec9c2c3535b9755c484183cc5c4d90fd37ff
      bugzilla: 109284
      CVE: NA
      
      --------------------------------
      
      [ Upstream commit d84cf06e ]
      
      The userfaultfd hugetlb tests cause a resv_huge_pages underflow.  This
      happens when hugetlb_mcopy_atomic_pte() is called with !is_continue on
      an index for which we already have a page in the cache.  When this
      happens, we allocate a second page, double consuming the reservation,
      and then fail to insert the page into the cache and return -EEXIST.
      
      To fix this, we first check if there is a page in the cache which
      already consumed the reservation, and return -EEXIST immediately if so.
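
      A minimal sketch of the shape of that check, simplified from the
      upstream patch (hugetlbfs_pagecache_present() and alloc_huge_page()
      are the existing mm/hugetlb.c helpers; most error handling is
      elided):

        if (!*pagep) {
                /*
                 * If a page already sits in the cache for this index,
                 * the reservation was consumed when it was inserted;
                 * bail out before allocating (and double-charging)
                 * a second page.
                 */
                if (vm_shared &&
                    hugetlbfs_pagecache_present(h, dst_vma, dst_addr)) {
                        ret = -EEXIST;
                        goto out;
                }

                page = alloc_huge_page(dst_vma, dst_addr, 0);
                if (IS_ERR(page))
                        goto out;

                /* ... copy the user data into the new page ... */
        }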
      
      There is still a rare condition where we fail to copy the page
      contents AND race with a call to hugetlb_no_page() for this index,
      in which case we will again underflow resv_huge_pages.  That is
      fixed in a more complicated patch not targeted for -stable.
      
      Test:
      
        Hacked the code locally such that resv_huge_pages underflows produce a
        warning, then:
      
        ./tools/testing/selftests/vm/userfaultfd hugetlb_shared 10 2 /tmp/kokonut_test/huge/userfaultfd_test && echo test success
        ./tools/testing/selftests/vm/userfaultfd hugetlb 10 2 /tmp/kokonut_test/huge/userfaultfd_test && echo test success
      
      Both tests succeed and produce no warnings.  After the tests run, the
      number of free/resv hugepages is correct.
      
      [mike.kravetz@oracle.com: changelog fixes]
      
      Link: https://lkml.kernel.org/r/20210528004649.85298-1-almasrymina@google.com
      Fixes: 8fb5debc ("userfaultfd: hugetlbfs: add hugetlb_mcopy_atomic_pte for userfaultfd support")
      Signed-off-by: Mina Almasry <almasrymina@google.com>
      Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Axel Rasmussen <axelrasmussen@google.com>
      Cc: Peter Xu <peterx@redhat.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      Signed-off-by: Chen Jun <chenjun102@huawei.com>
      Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
  5. 03 Jun 2021 (1 commit)
  6. 19 Apr 2021 (1 commit)
      hugetlb_cgroup: fix imbalanced css_get and css_put pair for shared mappings · c329e216
      Miaohe Lin authored
      stable inclusion
      from stable-5.10.27
      commit fe03ccc3ce906a31005637263fb82dd84d5d1dac
      bugzilla: 51493
      
      --------------------------------
      
      commit d85aecf2 upstream.
      
      The current implementation of hugetlb_cgroup for shared mappings can
      leave css reference counts inconsistent with the number of
      file_regions.  Consider the following two scenarios:
      
       1. Assume initial css reference count of hugetlb_cgroup is 1:
        1.1 Call hugetlb_reserve_pages with from = 1, to = 2. So css reference
            count is 2 associated with 1 file_region.
        1.2 Call hugetlb_reserve_pages with from = 2, to = 3. So css reference
            count is 3 associated with 2 file_region.
        1.3 coalesce_file_region will coalesce these two file_regions into
            one. So css reference count is 3 associated with 1 file_region
            now.
      
       2. Assume initial css reference count of hugetlb_cgroup is 1 again:
        2.1 Call hugetlb_reserve_pages with from = 1, to = 3. So css reference
            count is 2 associated with 1 file_region.
      
      Therefore, we might have one file_region while holding one or more
      css references.  This inconsistency can lead to an imbalanced
      css_get() and css_put() pair.  If we css_put one by one (e.g. the
      hole punch case), scenario 2 would put one css reference too many.
      If we css_put all together (e.g. the truncate case), scenario 1
      would leak one css reference.
      
      The imbalanced css_get() and css_put() pair leaves a non-zero
      reference count when we try to destroy the hugetlb cgroup: the
      cgroup directory is removed, but the associated resources are not
      freed.  This can ultimately lead to OOM, or to being unable to
      create a new hugetlb cgroup in a busy workload.
      
      To fix this, we have to make sure that each file_region holds
      exactly one css reference.  So in the coalesce_file_region case, we
      should release one css reference before coalescing, as sketched
      below.  Also, a css reference should only be put when the entire
      file_region is removed.
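
      A simplified sketch of the coalesce_file_region() idea (names follow
      mm/hugetlb.c; put_uncharge_info() here stands for dropping the css
      reference of the region being merged away, and the symmetric merge
      with the following region is elided):

        static void coalesce_file_region(struct resv_map *resv,
                                         struct file_region *rg)
        {
                struct file_region *prg;

                prg = list_prev_entry(rg, link);
                if (&prg->link != &resv->regions && prg->to == rg->from &&
                    has_same_uncharge_info(prg, rg)) {
                        prg->to = rg->to;

                        list_del(&rg->link);
                        put_uncharge_info(rg);  /* release rg's css ref */
                        kfree(rg);
                }
        }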
      
      The last thing to note is that the caller of region_add() only holds
      one reference to h_cg->css for the whole contiguous reservation
      region.  But this area might be scattered when some file_regions
      already reside in it.  As a result, many file_regions may share a
      single h_cg->css reference.  To ensure that each file_region holds
      exactly one css reference, we do a css_get() for each file_region
      and release the reference held by the caller once done.
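
      Sketched from the upstream patch, the per-region get happens where
      the uncharge info is recorded; the caller's single reference is put
      separately once region_add() has finished (simplified):

        static void record_hugetlb_cgroup_uncharge_info(
                struct hugetlb_cgroup *h_cg, struct hstate *h,
                struct resv_map *resv, struct file_region *nrg)
        {
                if (h_cg) {
                        nrg->reservation_counter =
                                &h_cg->rsvd_hugepage[hstate_index(h)];
                        nrg->css = &h_cg->css;
                        /* One reference per file_region. */
                        css_get(&h_cg->css);
                }
        }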
      
      [linmiaohe@huawei.com: fix imbalanced css_get and css_put pair for shared mappings]
        Link: https://lkml.kernel.org/r/20210316023002.53921-1-linmiaohe@huawei.com
      
      Link: https://lkml.kernel.org/r/20210301120540.37076-1-linmiaohe@huawei.com
      Fixes: 075a61d0 ("hugetlb_cgroup: add accounting for shared mappings")
      Reported-by: kernel test robot <lkp@intel.com> (auto build test ERROR)
      Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
      Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Wanpeng Li <liwp.linux@gmail.com>
      Cc: Mina Almasry <almasrymina@google.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Chen Jun <chenjun102@huawei.com>
      Acked-by: Weilong Chen <chenweilong@huawei.com>
      Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
  7. 09 Apr 2021 (4 commits)
  8. 09 Mar 2021 (4 commits)
  9. 28 Jan 2021 (1 commit)
  10. 18 Jan 2021 (1 commit)
      mm/hugetlb: fix deadlock in hugetlb_cow error path · 69d4df11
      Mike Kravetz authored
      stable inclusion
      from stable-5.10.5
      commit df73c80338ef397d5fb2fe2631d24e2256bed9bd
      bugzilla: 46931
      
      --------------------------------
      
      commit e7dd91c4 upstream.
      
      syzbot reported the deadlock here [1].  The issue is in hugetlb cow
      error handling when there are not enough huge pages for the faulting
      task which took the original reservation.  It is possible that other
      (child) tasks could have consumed pages associated with the reservation.
      In this case, we want the task which took the original reservation to
      succeed.  So, we unmap any associated pages in children so that they can
      be used by the faulting task that owns the reservation.
      
      The unmapping code needs to hold i_mmap_rwsem in write mode.  However,
      due to commit c0d0381a ("hugetlbfs: use i_mmap_rwsem for more pmd
      sharing synchronization") we are already holding i_mmap_rwsem in read
      mode when hugetlb_cow is called.
      
      Technically, i_mmap_rwsem does not need to be held in read mode for
      COW mappings as they cannot share pmds.  Modifying the fault code to
      not take i_mmap_rwsem in read mode for COW (and other non-sharable)
      mappings is too involved for a stable fix.
      
      Instead, we simply drop the hugetlb_fault_mutex and i_mmap_rwsem before
      unmapping.  This is OK as it is technically not needed.  They are
      reacquired after unmapping as expected by calling code.  Since this is
      done in an uncommon error path, the overhead of dropping and reacquiring
      mutexes is acceptable.
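
      In outline, the error path now does the following (a simplified
      sketch using the lock helpers from mm/hugetlb.c):

        /* Drop the locks that conflict with the unmap ... */
        idx = vma_hugecache_offset(h, vma, haddr);
        hash = hugetlb_fault_mutex_hash(mapping, idx);
        mutex_unlock(&hugetlb_fault_mutex_table[hash]);
        i_mmap_unlock_read(mapping);

        unmap_ref_private(mm, vma, old_page, haddr);

        /* ... and reacquire them in the order callers expect. */
        i_mmap_lock_read(mapping);
        mutex_lock(&hugetlb_fault_mutex_table[hash]);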
      
      While making these changes, remove the redundant BUG_ON() after
      unmap_ref_private().
      
      [1] https://lkml.kernel.org/r/000000000000b73ccc05b5cf8558@google.com
      
      Link: https://lkml.kernel.org/r/4c5781b8-3b00-761e-c0c7-c5edebb6ec1a@oracle.com
      Fixes: c0d0381a ("hugetlbfs: use i_mmap_rwsem for more pmd sharing synchronization")
      Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
      Reported-by: syzbot+5eee4145df3c15e96625@syzkaller.appspotmail.com
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: "Aneesh Kumar K . V" <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Chen Jun <chenjun102@huawei.com>
      Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
  11. 12 Jan 2021 (1 commit)
  12. 12 Dec 2020 (1 commit)
      mm/hugetlb: clear compound_nr before freeing gigantic pages · ba9c1201
      Gerald Schaefer authored
      Commit 1378a5ee ("mm: store compound_nr as well as compound_order")
      added a compound_nr counter to the first tail struct page, overlaying
      page->mapping.  The overlay itself is fine, but while freeing
      gigantic hugepages via free_contig_range(), a "bad page" check
      triggers for a non-NULL page->mapping on the first tail page:
      
        BUG: Bad page state in process bash  pfn:380001
        page:00000000c35f0856 refcount:0 mapcount:0 mapping:00000000126b68aa index:0x0 pfn:0x380001
        aops:0x0
        flags: 0x3ffff00000000000()
        raw: 3ffff00000000000 0000000000000100 0000000000000122 0000000100000000
        raw: 0000000000000000 0000000000000000 ffffffff00000000 0000000000000000
        page dumped because: non-NULL mapping
        Modules linked in:
        CPU: 6 PID: 616 Comm: bash Not tainted 5.10.0-rc7-next-20201208 #1
        Hardware name: IBM 3906 M03 703 (LPAR)
        Call Trace:
          show_stack+0x6e/0xe8
          dump_stack+0x90/0xc8
          bad_page+0xd6/0x130
          free_pcppages_bulk+0x26a/0x800
          free_unref_page+0x6e/0x90
          free_contig_range+0x94/0xe8
          update_and_free_page+0x1c4/0x2c8
          free_pool_huge_page+0x11e/0x138
          set_max_huge_pages+0x228/0x300
          nr_hugepages_store_common+0xb8/0x130
          kernfs_fop_write+0xd2/0x218
          vfs_write+0xb0/0x2b8
          ksys_write+0xac/0xe0
          system_call+0xe6/0x288
        Disabling lock debugging due to kernel taint
      
      This is because destroy_compound_gigantic_page() clears only the
      compound_order, while set_compound_order(page, 0) sets compound_nr
      to 1U << 0 == 1, leaving a non-NULL value overlaid on page->mapping.

      Fix this by explicitly clearing compound_nr for the first tail page
      after calling set_compound_order(page, 0).
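
      The fix amounts to one extra store in destroy_compound_gigantic_page()
      (sketch; page[1] is the first tail page, where compound_nr overlays
      page->mapping):

        set_compound_order(page, 0);    /* leaves compound_nr == 1U << 0 */
        page[1].compound_nr = 0;        /* clear the value overlaying ->mapping */
        __ClearPageHead(page);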
      
      Link: https://lkml.kernel.org/r/20201208182813.66391-2-gerald.schaefer@linux.ibm.com
      Fixes: 1378a5ee ("mm: store compound_nr as well as compound_order")
      Signed-off-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
      Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Cc: Heiko Carstens <hca@linux.ibm.com>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: <stable@vger.kernel.org>	[5.9+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  13. 15 Nov 2020 (1 commit)
      hugetlbfs: fix anon huge page migration race · 336bf30e
      Mike Kravetz authored
      Qian Cai reported the following BUG in [1]:
      
        LTP: starting move_pages12
        BUG: unable to handle page fault for address: ffffffffffffffe0
        ...
        RIP: 0010:anon_vma_interval_tree_iter_first+0xa2/0x170 avc_start_pgoff at mm/interval_tree.c:63
        Call Trace:
          rmap_walk_anon+0x141/0xa30 rmap_walk_anon at mm/rmap.c:1864
          try_to_unmap+0x209/0x2d0 try_to_unmap at mm/rmap.c:1763
          migrate_pages+0x1005/0x1fb0
          move_pages_and_store_status.isra.47+0xd7/0x1a0
          __x64_sys_move_pages+0xa5c/0x1100
          do_syscall_64+0x5f/0x310
          entry_SYSCALL_64_after_hwframe+0x44/0xa9
      
      Hugh Dickins diagnosed this as a migration bug caused by code introduced
      to use i_mmap_rwsem for pmd sharing synchronization.  Specifically, the
      routine unmap_and_move_huge_page() is always passing the TTU_RMAP_LOCKED
      flag to try_to_unmap() while holding i_mmap_rwsem.  This is wrong for
      anon pages as the anon_vma lock should be held in this case.  Further
      analysis suggested that i_mmap_rwsem was not required to be held at
      all when calling try_to_unmap for anon pages, as an anon page can
      never be part of a shared pmd mapping.
      
      Discussion also revealed that the hack in hugetlb_page_mapping_lock_write
      to drop page lock and acquire i_mmap_rwsem is wrong.  There is no way to
      keep mapping valid while dropping page lock.
      
      This patch does the following:
      
       - Do not take i_mmap_rwsem and set TTU_RMAP_LOCKED for anon pages when
         calling try_to_unmap.
      
       - Remove the hacky code in hugetlb_page_mapping_lock_write. The
         routine will now simply do a 'trylock' while still holding the
         page lock (see the sketch after this list). If the trylock fails,
         it will return NULL. This could impact the callers:
      
          - migration calling code will receive -EAGAIN and retry up to the
            hard coded limit (10).
      
          - memory error code will treat the page as BUSY. This will force
            killing (SIGKILL) of any mapping tasks instead of sending them
            SIGBUS.
      
         Do note that this change in behavior only happens when there is a
         race. None of the standard kernel testing suites actually hit this
         race, but it is possible.
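
      A sketch of the simplified hugetlb_page_mapping_lock_write() once the
      hack is removed (the caller holds the page lock throughout, so the
      mapping cannot go away while we trylock):

        struct address_space *hugetlb_page_mapping_lock_write(struct page *hpage)
        {
                struct address_space *mapping = page_mapping(hpage);

                if (!mapping)
                        return mapping;

                /*
                 * Only trylock: dropping the page lock to wait for the
                 * rwsem would let the mapping become invalid.
                 */
                if (i_mmap_trylock_write(mapping))
                        return mapping;

                return NULL;
        }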
      
      [1] https://lore.kernel.org/lkml/20200708012044.GC992@lca.pw/
      [2] https://lore.kernel.org/linux-mm/alpine.LSU.2.11.2010071833100.2214@eggly.anvils/
      
      Fixes: c0d0381a ("hugetlbfs: use i_mmap_rwsem for more pmd sharing synchronization")
      Reported-by: Qian Cai <cai@lca.pw>
      Suggested-by: Hugh Dickins <hughd@google.com>
      Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Acked-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
      Cc: <stable@vger.kernel.org>
      Link: https://lkml.kernel.org/r/20201105195058.78401-1-mike.kravetz@oracle.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  14. 03 Nov 2020 (1 commit)
  15. 14 Oct 2020 (4 commits)