1. 09 Mar, 2021 · 2 commits
  2. 23 Jan, 2021 · 3 commits
    • mm: Make mem_dump_obj() handle vmalloc() memory · 98f18083
      Authored by Paul E. McKenney
      This commit adds vmalloc() support to mem_dump_obj().  Note that the
      vmalloc_dump_obj() function combines the checking and dumping, in
      contrast with the split between kmem_valid_obj() and kmem_dump_obj().
      The reason for the difference is that the checking in the vmalloc()
      case involves acquiring a global lock, and redundant acquisitions of
      global locks should be avoided, even on not-so-fast paths.
      
      Note that this change causes on-stack variables to be reported as
      vmalloc() storage from kernel_clone() or similar, depending on the degree
      of inlining that your compiler does.  This is likely more helpful than
      the earlier "non-paged (local) memory".
      
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: <linux-mm@kvack.org>
      Reported-by: Andrii Nakryiko <andrii@kernel.org>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Tested-by: Naresh Kamboju <naresh.kamboju@linaro.org>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
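
      A minimal sketch of the combined check-and-dump, assuming the helper
      is built on find_vm_area() as the commit describes; the exact code
      and print format may differ in your tree:

      bool vmalloc_dump_obj(void *object)
      {
              struct vm_struct *vm;
              void *objp = (void *)PAGE_ALIGN((unsigned long)object);

              /* One lookup, one acquisition of the global vmap lock: the
               * result is both the validity check and the data to print. */
              vm = find_vm_area(objp);
              if (!vm)
                      return false;   /* not vmalloc() memory */
              pr_cont(" %u-page vmalloc region starting at %#lx allocated at %pS\n",
                      vm->nr_pages, (unsigned long)vm->addr, vm->caller);
              return true;
      }
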
    • mm: Make mem_dump_obj() handle NULL and zero-sized pointers · b70fa3b1
      Authored by Paul E. McKenney
      This commit makes mem_dump_obj() call out NULL and zero-sized pointers
      specially instead of classifying them as non-paged memory.
      
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: <linux-mm@kvack.org>
      Reported-by: Andrii Nakryiko <andrii@kernel.org>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Tested-by: Naresh Kamboju <naresh.kamboju@linaro.org>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
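
      A sketch of the resulting dispatch in mem_dump_obj(), hedged since
      the surrounding checks may differ in your tree (ZERO_SIZE_PTR is the
      token pointer that kmalloc(0) returns):

      void mem_dump_obj(void *object)
      {
              if (kmem_valid_obj(object)) {
                      kmem_dump_obj(object);
                      return;
              }
              if (object == NULL)
                      pr_cont(" NULL pointer.\n");
              else if (object == ZERO_SIZE_PTR)
                      pr_cont(" zero-size pointer.\n");
              else
                      pr_cont(" non-paged (local) memory.\n");
      }
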
    • mm: Add mem_dump_obj() to print source of memory block · 8e7f37f2
      Authored by Paul E. McKenney
      There are kernel facilities such as per-CPU reference counts that emit
      error messages from generic handlers or callbacks, and those messages
      are unenlightening.  In the case of per-CPU reference-count underflow, this
      is not a problem when creating a new use of this facility because in that
      case the bug is almost certainly in the code implementing that new use.
      However, trouble arises when deploying across many systems, which might
      exercise corner cases that were not seen during development and testing.
      Here, it would be really nice to get some kind of hint as to which of
      several uses the underflow was caused by.
      
      This commit therefore exposes a mem_dump_obj() function that takes
      a pointer to memory (which must still be allocated if it has been
      dynamically allocated) and prints available information on where that
      memory came from.  This pointer can reference the middle of the block as
      well as the beginning of the block, as needed by things like RCU callback
      functions and timer handlers that might not know where the beginning of
      the memory block is.  These functions and handlers can use mem_dump_obj()
      to print out better hints as to where the problem might lie.
      
      The information printed can depend on kernel configuration.  For example,
      the allocation return address can be printed only for slab and slub,
      and even then only when the necessary debug has been enabled.  For slab,
      build with CONFIG_DEBUG_SLAB=y, and either use sizes with ample space
      to the next power of two or pass SLAB_STORE_USER when creating the
      kmem_cache structure.  For slub, build with CONFIG_SLUB_DEBUG=y and
      boot with slub_debug=U, or pass SLAB_STORE_USER to kmem_cache_create()
      if more focused use is desired.  Also for slub, use CONFIG_STACKTRACE
      to enable printing of the allocation-time stack trace.
      
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: <linux-mm@kvack.org>
      Reported-by: Andrii Nakryiko <andrii@kernel.org>
      [ paulmck: Convert to printing and change names per Joonsoo Kim. ]
      [ paulmck: Move slab definition per Stephen Rothwell and kbuild test robot. ]
      [ paulmck: Handle CONFIG_MMU=n case where vmalloc() is kmalloc(). ]
      [ paulmck: Apply Vlastimil Babka feedback on slab.c kmem_provenance(). ]
      [ paulmck: Extract more info from !SLUB_DEBUG per Joonsoo Kim. ]
      [ paulmck: Explicitly check for small pointers per Naresh Kamboju. ]
      Acked-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Tested-by: Naresh Kamboju <naresh.kamboju@linaro.org>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
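
      To make the intended use concrete, a hedged example:
      report_suspicious_callback(), my_cache, and struct my_obj are
      hypothetical names for illustration, not part of the commit.

      /* Hypothetical diagnostic path.  "rhp" may point into the middle of
       * a larger dynamically allocated structure; mem_dump_obj() still
       * finds and describes the enclosing block. */
      static void report_suspicious_callback(struct rcu_head *rhp)
      {
              pr_err("Suspicious rcu_head %px:", rhp);
              mem_dump_obj(rhp);      /* appends the provenance to the line */
      }

      /* For slub allocation return addresses, build with
       * CONFIG_SLUB_DEBUG=y and boot with slub_debug=U, or enable
       * tracking for a single cache: */
      my_cache = kmem_cache_create("my_cache", sizeof(struct my_obj), 0,
                                   SLAB_STORE_USER, NULL);
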
  3. 19 Nov, 2020 · 1 commit
  4. 17 Oct, 2020 · 1 commit
  5. 04 Sep, 2020 · 1 commit
  6. 08 Aug, 2020 · 3 commits
  7. 10 Jun, 2020 · 3 commits
  8. 05 Jun, 2020 · 2 commits
  9. 03 Jun, 2020 · 1 commit
    • mm: remove __vmalloc_node_flags_caller · 2b905948
      Authored by Christoph Hellwig
      Just use __vmalloc_node instead, which takes an extra argument.  To be
      able to use __vmalloc_node in all callers, make it available outside of
      vmalloc.c and implement it in nommu.c as well.
      
      [akpm@linux-foundation.org: fix nommu build]
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Christophe Leroy <christophe.leroy@c-s.fr>
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: David Airlie <airlied@linux.ie>
      Cc: Gao Xiang <xiang@kernel.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Haiyang Zhang <haiyangz@microsoft.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: "K. Y. Srinivasan" <kys@microsoft.com>
      Cc: Laura Abbott <labbott@redhat.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Michael Kelley <mikelley@microsoft.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Sakari Ailus <sakari.ailus@linux.intel.com>
      Cc: Stephen Hemminger <sthemmin@microsoft.com>
      Cc: Sumit Semwal <sumit.semwal@linaro.org>
      Cc: Wei Liu <wei.liu@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Paul Mackerras <paulus@ozlabs.org>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Link: http://lkml.kernel.org/r/20200414131348.444715-25-hch@lst.de
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
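
      The shape of the conversion at a call site, assuming the
      five-argument __vmalloc_node(size, align, gfp_mask, node, caller)
      signature this series establishes:

      /* before: the dedicated helper carried node, flags, and caller */
      p = __vmalloc_node_flags_caller(size, NUMA_NO_NODE, GFP_KERNEL,
                                      __builtin_return_address(0));

      /* after: plain __vmalloc_node; the extra argument is the alignment */
      p = __vmalloc_node(size, 1, GFP_KERNEL, NUMA_NO_NODE,
                         __builtin_return_address(0));
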
  10. 27 Apr, 2020 · 1 commit
  11. 01 Dec, 2019 · 2 commits
  12. 25 Sep, 2019 · 5 commits
  13. 17 Jul, 2019 · 1 commit
  14. 13 Jul, 2019 · 1 commit
    • mm: consolidate the get_user_pages* implementations · 050a9adc
      Authored by Christoph Hellwig
      Always build mm/gup.c so that we don't have to provide separate nommu
      stubs.  Also merge the get_user_pages_fast and __get_user_pages_fast
      stubs used when HAVE_FAST_GUP is not set into the main implementations,
      which will never call the fast path if HAVE_FAST_GUP is not set.
      
      This also ensures the new put_user_pages* helpers are available for
      nommu, as those are currently missing, which would become a problem as
      soon as we actually grow users of them.
      
      Link: http://lkml.kernel.org/r/20190625143715.1689-13-hch@lst.de
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Cc: Andrey Konovalov <andreyknvl@google.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: David Miller <davem@davemloft.net>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: Jason Gunthorpe <jgg@mellanox.com>
      Cc: Khalid Aziz <khalid.aziz@oracle.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
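
      A simplified sketch of the consolidated shape; lockless_gup_range()
      and gup_slow_remainder() are placeholder names for this illustration,
      not the kernel's actual internals, and error handling is omitted:

      int get_user_pages_fast(unsigned long start, int nr_pages,
                              unsigned int gup_flags, struct page **pages)
      {
              int nr_pinned = 0;

              /* Compiles away without CONFIG_HAVE_FAST_GUP (including
               * nommu), so everything goes through the slow path below. */
              if (IS_ENABLED(CONFIG_HAVE_FAST_GUP))
                      nr_pinned = lockless_gup_range(start, nr_pages,
                                                     gup_flags, pages);

              if (nr_pinned < nr_pages)       /* pin the remainder the slow way */
                      nr_pinned += gup_slow_remainder(start, nr_pages, nr_pinned,
                                                      gup_flags, pages);
              return nr_pinned;
      }
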
  15. 02 Jun, 2019 · 1 commit
  16. 21 May, 2019 · 1 commit
  17. 15 May, 2019 · 2 commits
    • mm: fix false-positive OVERCOMMIT_GUESS failures · 8c7829b0
      Authored by Johannes Weiner
      With the default overcommit==guess we occasionally run into mmap
      rejections despite plenty of memory that would get dropped under
      pressure but just isn't accounted reclaimable. One example of this is
      dying cgroups pinned by some page cache. A previous case was auxiliary
      path name memory associated with dentries; we have since annotated
      those allocations to avoid overcommit failures (see d79f7aa4 ("mm:
      treat indirectly reclaimable memory as free in overcommit logic")).
      
      But trying to classify all allocated memory reliably as reclaimable
      and unreclaimable is a bit of a fool's errand. There could be a myriad
      of dependencies that constantly change with kernel versions.
      
      The effort becomes even more questionable when one considers how
      this estimate of available memory is used: it's not compared to the
      system-wide allocated virtual memory in any way. It's not even
      compared to the allocating process's address space. It's compared to
      the single allocation request at hand!
      
      So we have an elaborate left-hand side of the equation that tries to
      assess the exact breathing room the system has available down to a
      page - and then compare it to an isolated allocation request with no
      additional context. We could fail an allocation of N bytes, but for
      two allocations of N/2 bytes we'd do this elaborate dance twice in a
      row and then still let N bytes of virtual memory through. This doesn't
      make a whole lot of sense.
      
      Let's take a step back and look at the actual goal of the
      heuristic. From the documentation:
      
         Heuristic overcommit handling. Obvious overcommits of address
         space are refused. Used for a typical system. It ensures a
         seriously wild allocation fails while allowing overcommit to
         reduce swap usage.  root is allowed to allocate slightly more
         memory in this mode. This is the default.
      
      If all we want to do is catch clearly bogus allocation requests
      irrespective of the general virtual memory situation, the physical
      memory counter-part doesn't need to be that complicated, either.
      
      When in GUESS mode, catch wild allocations by comparing their request
      size to the total amount of RAM and swap in the system.
      
      Link: http://lkml.kernel.org/r/20190412191418.26333-1-hannes@cmpxchg.org
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Roman Gushchin <guro@fb.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
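
      The resulting check, roughly as it lands in __vm_enough_memory()
      (a sketch; "pages" is the size of the single request, in pages):

      if (sysctl_overcommit_memory == OVERCOMMIT_GUESS) {
              /* Refuse only a request that exceeds all RAM plus all swap;
               * anything smaller is allowed to overcommit. */
              if (pages > totalram_pages() + total_swap_pages)
                      goto error;
              return 0;
      }
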
    • mm/gup: change GUP fast to use flags rather than a write 'bool' · 73b0140b
      Authored by Ira Weiny
      To facilitate additional options to get_user_pages_fast() change the
      singular write parameter to be gup_flags.
      
      This patch does not change any functionality.  New functionality will
      follow in subsequent patches.
      
      Some of the get_user_pages_fast() call sites were unchanged because they
      already passed FOLL_WRITE or 0 for the write parameter.
      
      NOTE: It was suggested to change the ordering of the get_user_pages_fast()
      arguments to ensure that callers were converted.  This breaks the current
      GUP call site convention of having the returned pages be the final
      parameter.  So the suggestion was rejected.
      
      Link: http://lkml.kernel.org/r/20190328084422.29911-4-ira.weiny@intel.com
      Link: http://lkml.kernel.org/r/20190317183438.2057-4-ira.weiny@intel.com
      Signed-off-by: Ira Weiny <ira.weiny@intel.com>
      Reviewed-by: Mike Marshall <hubcap@omnibond.com>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: Jason Gunthorpe <jgg@ziepe.ca>
      Cc: John Hubbard <jhubbard@nvidia.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
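
      The conversion at a typical call site; behaviour is unchanged and
      pages remains the final parameter:

      /* before: the third argument was a write boolean */
      ret = get_user_pages_fast(start, nr_pages, 1, pages);

      /* after: an explicit flags word */
      ret = get_user_pages_fast(start, nr_pages, FOLL_WRITE, pages);

      /* read-only callers pass 0 and need no change in behaviour */
      ret = get_user_pages_fast(start, nr_pages, 0, pages);
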
  18. 06 Apr, 2019 · 1 commit
  19. 06 Mar, 2019 · 1 commit
  20. 22 Feb, 2019 · 1 commit
  21. 09 Jan, 2019 · 1 commit
  22. 29 Dec, 2018 · 1 commit
  23. 27 Oct, 2018 · 2 commits
  24. 16 Oct, 2018 · 1 commit
  25. 05 Sep, 2018 · 1 commit