1. 06 Mar 2019 (9 commits)
  2. 02 Mar 2019 (1 commit)
    • M
      hugetlbfs: fix races and page leaks during migration · cb6acd01
      Committed by Mike Kravetz
      hugetlb pages should only be migrated if they are 'active'.  The
      routines set/clear_page_huge_active() modify the active state of hugetlb
      pages.
      
      When a new hugetlb page is allocated at fault time, set_page_huge_active
      is called before the page is locked.  Therefore, another thread could
      race and migrate the page while it is being added to the page table
      by the fault code.  This race is somewhat hard to trigger, but can be
      seen by strategically adding udelay() calls to simulate worst-case
      scheduling behavior.
      Depending on 'how' the code races, various BUG()s could be triggered.
      
      To address this issue, simply delay the set_page_huge_active call until
      after the page is successfully added to the page table.
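      
      As a condensed sketch of that reordering (hugetlb_no_page() heavily
      abridged, error paths omitted; not the literal diff), the fix looks
      roughly like this:
      
        /* mm/hugetlb.c, hugetlb_no_page(), abridged sketch */
        page = alloc_huge_page(vma, haddr, 0);
        ...
        new_pte = make_huge_pte(vma, page, ...);
        set_huge_pte_at(mm, haddr, ptep, new_pte);
        ...
        /*
         * Only mark the page active (visible to migration) once it is
         * wired into the page table; doing it earlier let another
         * thread migrate the page in the middle of the fault.
         */
        if (new_page)
                set_page_huge_active(page);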
      
      Hugetlb pages can also be leaked at migration time if the pages are
      associated with a file in an explicitly mounted hugetlbfs filesystem.
      For example, consider a two node system with 4GB worth of huge pages
      available.  A program mmaps a 2G file in a hugetlbfs filesystem.  It
      then migrates the pages associated with the file from one node to
      another.  When the program exits, huge page counts are as follows:
      
        node0
        1024    free_hugepages
        1024    nr_hugepages
      
        node1
        0       free_hugepages
        1024    nr_hugepages
      
        Filesystem                         Size  Used Avail Use% Mounted on
        nodev                              4.0G  2.0G  2.0G  50% /var/opt/hugepool
      
      That is as expected.  2G of huge pages are taken from the free_hugepages
      counts, and 2G is the size of the file in the explicitly mounted
      filesystem.  If the file is then removed, the counts become:
      
        node0
        1024    free_hugepages
        1024    nr_hugepages
      
        node1
        1024    free_hugepages
        1024    nr_hugepages
      
        Filesystem                         Size  Used Avail Use% Mounted on
        nodev                              4.0G  2.0G  2.0G  50% /var/opt/hugepool
      
      Note that the filesystem still shows 2G of pages used, while there
      are actually no huge pages in use.  The only way to 'fix' the
      filesystem accounting is to unmount the filesystem.
      
      If a hugetlb page is associated with an explicitly mounted filesystem,
      this information is contained in the page_private field.  At migration
      time, this information is not preserved.  To fix this, simply transfer
      page_private from the old page to the new page at migration time if
      necessary.
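      
      A minimal sketch of that transfer (assuming it sits in the hugetlbfs
      page migration callback; the surrounding mapping move and locking are
      omitted):
      
        /* fs/hugetlbfs/inode.c, page migration callback, abridged sketch */
        if (page_private(page)) {
                /* carry the subpool/reserve info over to the new page */
                set_page_private(newpage, page_private(page));
                set_page_private(page, 0);
        }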
      
      There is a related race with removing a huge page from a file and
      migration.  When a huge page is removed from the pagecache, the
      page_mapping() field is cleared, yet page_private remains set until the
      page is actually freed by free_huge_page().  A page could be migrated
      while in this state.  However, since page_mapping() is not set, the
      hugetlbfs-specific routine to transfer page_private is not called,
      and we leak the page count in the filesystem.
      
      To fix that, check for this condition before migrating a huge page.
      If the condition is detected, return -EBUSY for the page.
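      
      The check itself is small; a sketch of the idea (its exact placement
      in the huge page migration path is abridged):
      
        /* mm/migrate.c, unmap_and_move_huge_page(), abridged sketch */
        if (page_private(hpage) && !page_mapping(hpage)) {
                /*
                 * The page was removed from the file but not yet freed:
                 * migrating it now would leak the filesystem accounting
                 * held in page_private, so refuse and let callers retry.
                 */
                rc = -EBUSY;
                goto out;
        }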
      
      Link: http://lkml.kernel.org/r/74510272-7319-7372-9ea6-ec914734c179@oracle.com
      Link: http://lkml.kernel.org/r/20190212221400.3512-1-mike.kravetz@oracle.com
      Fixes: bcc54222 ("mm: hugetlb: introduce page_huge_active")
      Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
      Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: <stable@vger.kernel.org>
      [mike.kravetz@oracle.com: v2]
        Link: http://lkml.kernel.org/r/7534d322-d782-8ac6-1c8d-a8dc380eb3ab@oracle.com
      [mike.kravetz@oracle.com: update comment and changelog]
        Link: http://lkml.kernel.org/r/420bcfd6-158b-38e4-98da-26d0cd85bd01@oracle.com
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cb6acd01
  3. 28 Feb 2019 (1 commit)
  4. 26 Feb 2019 (2 commits)
    • D
      tmpfs: fix uninitialized return value in shmem_link · 29b00e60
      Committed by Darrick J. Wong
      When we made the shmem_reserve_inode call in shmem_link conditional, we
      forgot to update the declaration for ret so that it always has a known
      value.  Dan Carpenter pointed out this deficiency in the original patch.
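      
      The fix is a one-liner; sketched against the surrounding code from
      1062af92 (abridged):
      
        /* mm/shmem.c, shmem_link(), abridged sketch */
        static int shmem_link(struct dentry *old_dentry, struct inode *dir,
                              struct dentry *dentry)
        {
                struct inode *inode = d_inode(old_dentry);
                int ret = 0;    /* was uninitialized before this fix */
      
                /*
                 * tmpfs only reserves an inode when i_nlink is nonzero
                 * (an O_TMPFILE being linked in must skip the reserve),
                 * so on the other path ret was returned unassigned.
                 */
                if (inode->i_nlink) {
                        ret = shmem_reserve_inode(inode->i_sb);
                        if (ret)
                                goto out;
                }
                /* ... bump link count, d_instantiate(), etc. ... */
        out:
                return ret;
        }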
      
      Fixes: 1062af92 ("tmpfs: fix link accounting when a tmpfile is linked in")
      Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: Matej Kupljen <matej.kupljen@gmail.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      29b00e60
    • L
      Revert "x86/fault: BUG() when uaccess helpers fault on kernel addresses" · 53a41cb7
      Committed by Linus Torvalds
      This reverts commit 9da3f2b7.
      
      It was well-intentioned, but wrong.  Overriding the exception tables for
      instructions for random reasons is just wrong, and that is what the new
      code did.
      
      It caused problems for tracing, and it caused problems for
      strncpy_from_user(), because the new checks broke perfectly valid use
      cases rather than catching things that actually did bad things.
      
      Unchecked user space accesses are a problem, but that's not a reason
      to add invalid checks that people then have to work around with silly
      flags (in this case, that 'kernel_uaccess_faults_ok' flag, which is
      just an odd way to say "this commit was wrong" and was sprinkled into
      random places to hide the wrongness).
      
      The real fix to unchecked user space accesses is to get rid of the
      special "let's not check __get_user() and __put_user() at all" logic.
      Make __{get|put}_user() be just aliases to the regular {get|put}_user()
      functions, and make it impossible to access user space without having
      the proper checks in places.
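      
      In its simplest form that future cleanup would amount to something
      like the following (a sketch of the direction, not part of this
      revert):
      
        /* make the unchecked variants indistinguishable from the
         * checked ones, so no user access can skip the range check */
        #define __get_user(x, ptr)      get_user(x, ptr)
        #define __put_user(x, ptr)      put_user(x, ptr)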
      
      The raison d'être of the special double-underscore versions used to be
      that the range check was expensive, and if you did multiple user
      accesses, you'd do the range check up front (like the signal frame
      handling code, for example).  But SMAP (on x86) and PAN (on ARM) have
      made that optimization pointless, because the _real_ expense is the "set
      CPU flag to allow user space access".
      
      Do let's not break the valid cases to catch invalid cases that shouldn't
      even exist.
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Tobin C. Harding <tobin@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Jann Horn <jannh@google.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      53a41cb7
  5. 22 Feb 2019 (20 commits)
  6. 18 Feb 2019 (1 commit)
  7. 16 Feb 2019 (1 commit)
    • A
      arm64, mm, efi: Account for GICv3 LPI tables in static memblock reserve table · 8a5b403d
      Committed by Ard Biesheuvel
      In the irqchip and EFI code, we have what basically amounts to a quirk
      to work around a peculiarity in the GICv3 architecture, which permits
      the system memory address of LPI tables to be programmable only once
      after a CPU reset. This means kexec kernels must use the same memory
      as the first kernel, and thus ensure that this memory has not been
      given out for other purposes by the time the ITS init code runs, which
      is not very early for secondary CPUs.
      
      On systems with many CPUs, these reservations could overflow the
      memblock reservation table, and this was addressed in commit:
      
        eff89628 ("efi/arm: Defer persistent reservations until after paging_init()")
      
      However, this turns out to have made things worse, since the allocation
      of page tables and heap space for the resized memblock reservation table
      itself may overwrite the regions we are attempting to reserve, which may
      cause all kinds of corruption, also considering that the ITS will still
      be poking bits into that memory in response to incoming MSIs.
      
      So instead, let's grow the static memblock reservation table on such
      systems so it can accommodate these reservations at an earlier time.
      This will permit us to revert the above commit in a subsequent patch.
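      
      Schematically, the change looks like this (the memblock identifiers
      are real; the arm64 override value follows the patch description):
      
        /* mm/memblock.c: let architectures size the reserved table */
        #ifndef INIT_MEMBLOCK_RESERVED_REGIONS
        #define INIT_MEMBLOCK_RESERVED_REGIONS  INIT_MEMBLOCK_REGIONS
        #endif
      
        static struct memblock_region
          memblock_reserved_init_regions[INIT_MEMBLOCK_RESERVED_REGIONS];
      
        /* arm64: room for one persistent LPI table reservation per CPU */
        #define INIT_MEMBLOCK_RESERVED_REGIONS  (INIT_MEMBLOCK_REGIONS + NR_CPUS + 1)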
      
      [ mingo: Minor cleanups. ]
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Acked-by: Mike Rapoport <rppt@linux.ibm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: linux-efi@vger.kernel.org
      Link: http://lkml.kernel.org/r/20190215123333.21209-2-ard.biesheuvel@linaro.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      8a5b403d
  8. 15 Feb 2019 (1 commit)
    • J
      mm: page_alloc: fix ref bias in page_frag_alloc() for 1-byte allocs · 2c2ade81
      Committed by Jann Horn
      The basic idea behind ->pagecnt_bias is: If we pre-allocate the maximum
      number of references that we might need to create in the fastpath later,
      the bump-allocation fastpath only has to modify the non-atomic bias value
      that tracks the number of extra references we hold instead of the atomic
      refcount. The maximum number of allocations we can serve (under the
      assumption that no allocation is made with size 0) is nc->size, so that's
      the bias used.
      
      However, even when all memory in the allocation has been given away, a
      reference to the page is still held; and in the `offset < 0` slowpath, the
      page may be reused if everyone else has dropped their references.
      This means that the necessary number of references is actually
      `nc->size+1`.
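      
      Schematically, the accounting change in the refill path looks like
      this (abridged; see the commit for the exact hunks):
      
        /* mm/page_alloc.c, page_frag_alloc() refill, abridged sketch */
        /* before: one reference per potential allocation */
        page_ref_add(page, size - 1);   /* plus the base ref = size */
        nc->pagecnt_bias = size;
      
        /* after: also cover the reference the cache itself keeps */
        page_ref_add(page, size);       /* plus the base ref = size + 1 */
        nc->pagecnt_bias = size + 1;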
      
      Luckily, from a quick grep, it looks like the only path that can call
      page_frag_alloc(fragsz=1) is TAP with the IFF_NAPI_FRAGS flag, which
      requires CAP_NET_ADMIN in the init namespace and is only intended to be
      used for kernel testing and fuzzing.
      
      To test for this issue, put a `WARN_ON(page_ref_count(page) == 0)` in the
      `offset < 0` path, below the virt_to_page() call, and then repeatedly call
      writev() on a TAP device with IFF_TAP|IFF_NO_PI|IFF_NAPI_FRAGS|IFF_NAPI,
      with a vector consisting of 15 elements containing 1 byte each.
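      
      A hypothetical user-space reproducer along those lines (the device
      name "frag0" is an arbitrary choice; needs CAP_NET_ADMIN):
      
        #include <fcntl.h>
        #include <string.h>
        #include <sys/ioctl.h>
        #include <sys/uio.h>
        #include <linux/if.h>
        #include <linux/if_tun.h>
        #include <unistd.h>
      
        int main(void)
        {
                struct ifreq ifr = { 0 };
                struct iovec iov[15];
                static char byte;
                int i, fd = open("/dev/net/tun", O_RDWR);
      
                strcpy(ifr.ifr_name, "frag0");
                ifr.ifr_flags = IFF_TAP | IFF_NO_PI | IFF_NAPI | IFF_NAPI_FRAGS;
                if (fd < 0 || ioctl(fd, TUNSETIFF, &ifr) < 0)
                        return 1;
                for (i = 0; i < 15; i++) {
                        iov[i].iov_base = &byte;
                        iov[i].iov_len = 1;
                }
                for (;;)        /* hammer the 1-byte page_frag_alloc() path */
                        writev(fd, iov, 15);
        }
      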
      Signed-off-by: Jann Horn <jannh@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      2c2ade81
  9. 13 Feb 2019 (3 commits)
    • Q
      Revert "mm: use early_pfn_to_nid in page_ext_init" · 2f1ee091
      Committed by Qian Cai
      This reverts commit fe53ca54 ("mm: use early_pfn_to_nid in
      page_ext_init").
      
      When booting a system with "page_owner=on",
      
      start_kernel
        page_ext_init
          invoke_init_callbacks
            init_section_page_ext
              init_page_owner
                init_early_allocated_pages
                  init_zones_in_node
                    init_pages_in_zone
                      lookup_page_ext
                        page_to_nid
      
      The issue here is that page_to_nid() will not work, since with
      DEFERRED_STRUCT_PAGE_INIT some page flags carry no node information
      until page_alloc_init_late() runs.  Hence, it can trigger an
      out-of-bounds access with an invalid nid.
      
        UBSAN: Undefined behaviour in ./include/linux/mm.h:1104:50
        index 7 is out of range for type 'zone [5]'
      
      Also, the kernel will panic since the flags were poisoned earlier,
      with:
      
      CONFIG_DEBUG_VM_PGFLAGS=y
      CONFIG_NODE_NOT_IN_PAGE_FLAGS=n
      
      start_kernel
        setup_arch
          pagetable_init
            paging_init
              sparse_init
                sparse_init_nid
                  memblock_alloc_try_nid_raw
      
      init_pages_in_zone() does not handle this well, as it ends up calling
      page_to_nid().
      
        page:ffffea0004200000 is uninitialized and poisoned
        raw: ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
        raw: ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
        page dumped because: VM_BUG_ON_PAGE(PagePoisoned(p))
        page_owner info is not active (free page?)
        kernel BUG at include/linux/mm.h:990!
        RIP: 0010:init_page_owner+0x486/0x520
      
      This means that the assumptions behind commit fe53ca54 ("mm: use
      early_pfn_to_nid in page_ext_init") are incomplete.  Therefore, revert
      the commit for now.  The proper way to move the page_owner
      initialization earlier is to hook into memmap initialization.
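      
      The ordering constraint that the revert restores, roughly (per the
      init sequence in init/main.c, abridged):
      
        /* kernel_init_freeable(), abridged */
        page_alloc_init_late();  /* completes DEFERRED_STRUCT_PAGE_INIT */
        /* page ext needs all struct pages initialized */
        page_ext_init();         /* page_to_nid() now valid everywhere */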
      
      Link: http://lkml.kernel.org/r/20190115202812.75820-1-cai@lca.pw
      Signed-off-by: Qian Cai <cai@lca.pw>
      Acked-by: Michal Hocko <mhocko@kernel.org>
      Cc: Pasha Tatashin <Pavel.Tatashin@microsoft.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Yang Shi <yang.shi@linaro.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2f1ee091
    • Y
      mm/gup: fix gup_pmd_range() for dax · 414fd080
      Committed by Yu Zhao
      For dax pmd, pmd_trans_huge() returns false but pmd_huge() returns true
      on x86.  So the function works as long as hugetlb is configured.
      However, dax doesn't depend on hugetlb.
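      
      A sketch of the shape of the fix (condition abridged; the exact hunk
      is in the commit):
      
        /* mm/gup.c, gup_pmd_range(), abridged sketch */
        if (unlikely(pmd_trans_huge(pmd) || pmd_huge(pmd) ||
                     pmd_devmap(pmd))) {
                /*
                 * A dax pmd is huge but !pmd_trans_huge() on x86, so
                 * checking pmd_devmap() avoids relying on pmd_huge(),
                 * and thus on hugetlb, to catch it.
                 */
                /* ... huge-entry handling ... */
        }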
      
      Link: http://lkml.kernel.org/r/20190111034033.601-1-yuzhao@google.com
      Signed-off-by: Yu Zhao <yuzhao@google.com>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Huang Ying <ying.huang@intel.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: "Michael S . Tsirkin" <mst@redhat.com>
      Cc: John Hubbard <jhubbard@nvidia.com>
      Cc: Wei Yang <richard.weiyang@gmail.com>
      Cc: Mike Rapoport <rppt@linux.ibm.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      414fd080
    • D
      Revert "mm: slowly shrink slabs with a relatively small number of objects" · a9a238e8
      Committed by Dave Chinner
      This reverts commit 172b06c3 ("mm: slowly shrink slabs with a
      relatively small number of objects").
      
      This change altered the aggressiveness of shrinker reclaim, causing
      small-cache and low-priority reclaim to greatly increase scanning
      pressure on small caches.  As a result, light memory pressure has a
      disproportionate effect on small caches, and causes large caches to
      be reclaimed much faster than previously.
      
      As a result, it greatly perturbs the delicate balance of the VFS caches
      (dentry/inode vs file page cache) such that the inode/dentry caches are
      reclaimed much, much faster than the page cache and this drives us into
      several other caching imbalance related problems.
      
      As such, this is a bad change and needs to be reverted.
      
      [ Needs some massaging to retain the later seekless shrinker
        modifications.]
      
      Link: http://lkml.kernel.org/r/20190130041707.27750-3-david@fromorbit.com
      Fixes: 172b06c3 ("mm: slowly shrink slabs with a relatively small number of objects")
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Cc: Wolfgang Walter <linux@stwm.de>
      Cc: Roman Gushchin <guro@fb.com>
      Cc: Spock <dairinin@gmail.com>
      Cc: Rik van Riel <riel@surriel.com>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a9a238e8
  10. 02 Feb 2019 (1 commit)
    • D
      mm: migrate: don't rely on __PageMovable() of newpage after unlocking it · e0a352fa
      Committed by David Hildenbrand
      We had a race in the old balloon compaction code, before b1123ea6
      ("mm: balloon: use general non-lru movable page feature") refactored
      it, that became visible after backporting 195a8c43 ("virtio-balloon:
      deflate via a page list") without the refactoring.
      
      The bug existed from commit d6d86c0a ("mm/balloon_compaction:
      redesign ballooned pages management") till b1123ea6 ("mm: balloon:
      use general non-lru movable page feature").  d6d86c0a
      ("mm/balloon_compaction: redesign ballooned pages management") was
      backported to 3.12, so the broken kernels are stable kernels [3.12 -
      4.7].
      
      There was a subtle race between dropping the page lock of the newpage in
      __unmap_and_move() and checking for __is_movable_balloon_page(newpage).
      
      Just after dropping this page lock, virtio-balloon could go ahead and
      deflate the newpage, effectively dequeueing it and clearing PageBalloon,
      in turn making __is_movable_balloon_page(newpage) fail.
      
      This resulted in dropping the reference of the newpage via
      putback_lru_page(newpage) instead of put_page(newpage), leading to
      page->lru getting modified and a !LRU page ending up in the LRU lists.
      With 195a8c43 ("virtio-balloon: deflate via a page list")
      backported, one would suddenly get corrupted lists in
      release_pages_balloon():
      
      - WARNING: CPU: 13 PID: 6586 at lib/list_debug.c:59 __list_del_entry+0xa1/0xd0
      - list_del corruption. prev->next should be ffffe253961090a0, but was dead000000000100
      
      Nowadays this race is no longer possible, but it is hidden behind very
      ugly handling of __ClearPageMovable() and __PageMovable().
      
      __ClearPageMovable() will not make __PageMovable() fail, only
      PageMovable().  So the new check (__PageMovable(newpage)) will still
      hold even after newpage was dequeued by virtio-balloon.
      
      If anybody would ever change that special handling, the BUG would be
      introduced again.  So instead, make it explicit and use the information
      of the original isolated page before migration.
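      
      A sketch of the idea (abridged from __unmap_and_move(); the variable
      name follows the patch description):
      
        /* mm/migrate.c, __unmap_and_move(), abridged sketch */
        bool is_lru = !__PageMovable(page);     /* sample BEFORE migrating */
        ...
        /*
         * By the time newpage is unlocked, its movable state may already
         * have changed (e.g. virtio-balloon deflated it), so decide from
         * what the original isolated page was, not from newpage.
         */
        if (unlikely(!is_lru))
                put_page(newpage);
        else
                putback_lru_page(newpage);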
      
      This patch can be backported fairly easily to stable kernels (in
      contrast to the refactoring).
      
      Link: http://lkml.kernel.org/r/20190129233217.10747-1-david@redhat.com
      Fixes: d6d86c0a ("mm/balloon_compaction: redesign ballooned pages management")
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Reported-by: Vratislav Bendel <vbendel@redhat.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Rafael Aquini <aquini@redhat.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Dominik Brodowski <linux@dominikbrodowski.net>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Vratislav Bendel <vbendel@redhat.com>
      Cc: Rafael Aquini <aquini@redhat.com>
      Cc: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: <stable@vger.kernel.org>	[3.12 - 4.7]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e0a352fa