1. 24 Aug 2022, 4 commits
  2. 20 Jul 2022, 1 commit
  3. 04 Jul 2022, 3 commits
  4. 17 Jun 2022, 1 commit
  5. 13 May 2022, 1 commit
  6. 16 Apr 2022, 1 commit
  7. 07 Apr 2022, 1 commit
  8. 05 Apr 2022, 1 commit
  9. 28 Mar 2022, 1 commit
  10. 23 Mar 2022, 1 commit
    • mm: introduce kmem_cache_alloc_lru · 88f2ef73
      Committed by Muchun Song
      We currently allocate scope for every memcg to be able to be tracked on
      every superblock instantiated in the system, regardless of whether that
      superblock is even accessible to that memcg.
      
      These huge memcg counts come from container hosts where memcgs are
      confined to just a small subset of the total number of superblocks
      instantiated at any given point in time.
      
      For these systems with huge container counts, list_lru does not need the
      capability of tracking every memcg on every superblock.  What it comes
      down to is adding the memcg to the list_lru only at the first insert.
      So introduce kmem_cache_alloc_lru to allocate objects and set up their
      list_lru.  In a later patch, we will convert all inode and dentry
      allocations from kmem_cache_alloc to kmem_cache_alloc_lru, as sketched
      below.
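      
      As a rough sketch of the new call (hedged: the signature is from this
      series, but the inode example is illustrative; s_inode_lru is the
      superblock's existing list_lru):
      
      /* before: no list_lru association at allocation time */
      inode = kmem_cache_alloc(cache, GFP_KERNEL);
      
      /* after: the lru argument lets list_lru track this memcg lazily,
       * starting at the first insert */
      inode = kmem_cache_alloc_lru(cache, &sb->s_inode_lru, GFP_KERNEL);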
      
      Link: https://lkml.kernel.org/r/20220228122126.37293-3-songmuchun@bytedance.com
      Signed-off-by: Muchun Song <songmuchun@bytedance.com>
      Cc: Alex Shi <alexs@kernel.org>
      Cc: Anna Schumaker <Anna.Schumaker@Netapp.com>
      Cc: Chao Yu <chao@kernel.org>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Fam Zheng <fam.zheng@bytedance.com>
      Cc: Jaegeuk Kim <jaegeuk@kernel.org>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Kari Argillander <kari.argillander@gmail.com>
      Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Qi Zheng <zhengqi.arch@bytedance.com>
      Cc: Roman Gushchin <roman.gushchin@linux.dev>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: Theodore Ts'o <tytso@mit.edu>
      Cc: Trond Myklebust <trond.myklebust@hammerspace.com>
      Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Wei Yang <richard.weiyang@gmail.com>
      Cc: Xiongchun Duan <duanxiongchun@bytedance.com>
      Cc: Yang Shi <shy828301@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      88f2ef73
  11. 06 Jan 2022, 9 commits
    • mm/kasan: Convert to struct folio and struct slab · 6e48a966
      Committed by Matthew Wilcox (Oracle)
      KASAN accesses some slab-related struct page fields, so we need to
      convert this code to struct slab. Some places are a bit simplified
      thanks to kasan_addr_to_slab() encapsulating the PageSlab flag check
      through virt_to_slab().  When resolving an object address to either a
      real slab or a large kmalloc allocation, use struct folio as the
      intermediate type for testing the slab flag, to avoid an unnecessary
      implicit compound_head().
      
      [ vbabka@suse.cz: use struct folio, adjust to differences in previous
        patches ]
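      
      A hedged sketch of the intermediate-folio pattern, essentially what the
      virt_to_slab() helper in mm/slab.h of this series boils down to:
      
      static inline struct slab *virt_to_slab(const void *addr)
      {
              struct folio *folio = virt_to_folio(addr);
      
              /* test the slab flag on the folio: no implicit compound_head() */
              if (!folio_test_slab(folio))
                      return NULL;    /* e.g. a large kmalloc page */
      
              return folio_slab(folio);
      }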
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrey Konovalov <andreyknvl@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: <kasan-dev@googlegroups.com>
      6e48a966
    • mm: Convert struct page to struct slab in functions used by other subsystems · 40f3bf0c
      Committed by Vlastimil Babka
      KASAN, KFENCE and memcg interact with SLAB or SLUB internals through the
      functions nearest_obj(), obj_to_index() and objs_per_slab(), which take
      struct page as a parameter. This patch converts them to struct slab,
      including all callers, through the coccinelle semantic patch below; a
      sketch of the resulting declarations follows the patch.
      
      // Options: --include-headers --no-includes --smpl-spacing include/linux/slab_def.h include/linux/slub_def.h mm/slab.h mm/kasan/*.c mm/kfence/kfence_test.c mm/memcontrol.c mm/slab.c mm/slub.c
      // Note: needs coccinelle 1.1.1 to avoid breaking whitespace
      
      @@
      @@
      
      -objs_per_slab_page(
      +objs_per_slab(
       ...
       )
       { ... }
      
      @@
      @@
      
      -objs_per_slab_page(
      +objs_per_slab(
       ...
       )
      
      @@
      identifier fn =~ "obj_to_index|objs_per_slab";
      @@
      
       fn(...,
      -   const struct page *page
      +   const struct slab *slab
          ,...)
       {
      <...
      (
      - page_address(page)
      + slab_address(slab)
      |
      - page
      + slab
      )
      ...>
       }
      
      @@
      identifier fn =~ "nearest_obj";
      @@
      
       fn(...,
      -   struct page *page
      +   const struct slab *slab
          ,...)
       {
      <...
      (
      - page_address(page)
      + slab_address(slab)
      |
      - page
      + slab
      )
      ...>
       }
      
      @@
      identifier fn =~ "nearest_obj|obj_to_index|objs_per_slab";
      expression E;
      @@
      
       fn(...,
      (
      - slab_page(E)
      + E
      |
      - virt_to_page(E)
      + virt_to_slab(E)
      |
      - virt_to_head_page(E)
      + virt_to_slab(E)
      |
      - page
      + page_slab(page)
      )
        ,...)
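      
      For reference, a hedged sketch of the converted SLUB-side declarations
      after applying the patch (prototypes per slub_def.h of this series;
      treat them as illustrative):
      
      static inline void *nearest_obj(struct kmem_cache *cache,
                                      const struct slab *slab, void *x);
      static inline unsigned int obj_to_index(const struct kmem_cache *cache,
                                              const struct slab *slab, void *obj);
      static inline int objs_per_slab(const struct kmem_cache *cache,
                                      const struct slab *slab);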
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Julia Lawall <julia.lawall@inria.fr>
      Cc: Luis Chamberlain <mcgrof@kernel.org>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrey Konovalov <andreyknvl@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Marco Elver <elver@google.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
      Cc: <kasan-dev@googlegroups.com>
      Cc: <cgroups@vger.kernel.org>
      40f3bf0c
    • mm/slab: Finish struct page to struct slab conversion · dd35f71a
      Committed by Vlastimil Babka
      Change cache_free_alien() to use slab_nid(virt_to_slab()), as sketched
      below.  Otherwise this is just an update of comments and some remaining
      variable names.
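      
      A one-line hedged sketch of the change in cache_free_alien() (objp is
      the object pointer from mm/slab.c):
      
      /* was: int page_node = page_to_nid(virt_to_page(objp)); */
      int slab_node = slab_nid(virt_to_slab(objp));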
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      dd35f71a
    • mm/slab: Convert most struct page to struct slab by spatch · 7981e67e
      Committed by Vlastimil Babka
      The majority of the conversion from struct page to struct slab in SLAB
      internals can be delegated to the coccinelle semantic patch below. This
      includes renaming variables with 'page' in the name to 'slab', and
      similar changes; a short before/after illustration follows the patch.
      
      Big thanks to Julia Lawall and Luis Chamberlain for help with
      coccinelle.
      
      // Options: --include-headers --no-includes --smpl-spacing mm/slab.c
      // Note: needs coccinelle 1.1.1 to avoid breaking whitespace, and ocaml for the
      // embedded script
      
      // build list of functions for applying the next rule
      @initialize:ocaml@
      @@
      
      let ok_function p =
        not (List.mem (List.hd p).current_element ["kmem_getpages";"kmem_freepages"])
      
      // convert the type in selected functions
      @@
      position p : script:ocaml() { ok_function p };
      @@
      
      - struct page@p
      + struct slab
      
      @@
      @@
      
      -PageSlabPfmemalloc(page)
      +slab_test_pfmemalloc(slab)
      
      @@
      @@
      
      -ClearPageSlabPfmemalloc(page)
      +slab_clear_pfmemalloc(slab)
      
      @@
      @@
      
      obj_to_index(
       ...,
      - page
      + slab_page(slab)
      ,...)
      
      // for all functions, change any "struct slab *page" parameter to "struct slab
      // *slab" in the signature, and generally all occurrences of "page" to "slab" in
      // the body - with some special cases.
      @@
      identifier fn;
      expression E;
      @@
      
       fn(...,
      -   struct slab *page
      +   struct slab *slab
          ,...)
       {
      <...
      (
      - int page_node;
      + int slab_node;
      |
      - page_node
      + slab_node
      |
      - page_slab(page)
      + slab
      |
      - page_address(page)
      + slab_address(slab)
      |
      - page_size(page)
      + slab_size(slab)
      |
      - page_to_nid(page)
      + slab_nid(slab)
      |
      - virt_to_head_page(E)
      + virt_to_slab(E)
      |
      - page
      + slab
      )
      ...>
       }
      
      // rename a function parameter
      @@
      identifier fn;
      expression E;
      @@
      
       fn(...,
      -   int page_node
      +   int slab_node
          ,...)
       {
      <...
      - page_node
      + slab_node
      ...>
       }
      
      // functions converted by previous rules that were temporarily called using
      // slab_page(E) so we want to remove the wrapper now that they accept struct
      // slab ptr directly
      @@
      identifier fn =~ "index_to_obj";
      expression E;
      @@
      
       fn(...,
      - slab_page(E)
      + E
       ,...)
      
      // functions that were returning struct page ptr and now will return struct
      // slab ptr, including slab_page() wrapper removal
      @@
      identifier fn =~ "cache_grow_begin|get_valid_first_slab|get_first_slab";
      expression E;
      @@
      
       fn(...)
       {
      <...
      - slab_page(E)
      + E
      ...>
       }
      
      // rename any former struct page * declarations
      @@
      @@
      
      struct slab *
      -page
      +slab
      ;
      
      // all functions (with exceptions) with a local "struct slab *page" variable
      // that will be renamed to "struct slab *slab"
      @@
      identifier fn !~ "kmem_getpages|kmem_freepages";
      expression E;
      @@
      
       fn(...)
       {
      <...
      (
      - page_slab(page)
      + slab
      |
      - page_to_nid(page)
      + slab_nid(slab)
      |
      - kasan_poison_slab(page)
      + kasan_poison_slab(slab_page(slab))
      |
      - page_address(page)
      + slab_address(slab)
      |
      - page_size(page)
      + slab_size(slab)
      |
      - page->pages
      + slab->slabs
      |
      - page = virt_to_head_page(E)
      + slab = virt_to_slab(E)
      |
      - virt_to_head_page(E)
      + virt_to_slab(E)
      |
      - page
      + slab
      )
      ...>
       }
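      
      A hedged before/after illustration of what these rules produce on a
      typical SLAB-internal snippet (objp stands in for any object pointer):
      
      /* before the conversion */
      struct page *page = virt_to_head_page(objp);
      int node = page_to_nid(page);
      
      /* after the conversion */
      struct slab *slab = virt_to_slab(objp);
      int node = slab_nid(slab);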
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      Cc: Julia Lawall <julia.lawall@inria.fr>
      Cc: Luis Chamberlain <mcgrof@kernel.org>
      7981e67e
    • mm/slab: Convert kmem_getpages() and kmem_freepages() to struct slab · 42c0faac
      Committed by Vlastimil Babka
      These functions sit at the boundary to the page allocator. Also use
      folio internally to avoid an extra compound_head() when dealing with
      page flags.
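      
      A minimal hedged sketch of the folio use at this boundary (accounting,
      flag setup and error paths of the real kmem_getpages() are elided):
      
      static struct slab *kmem_getpages(struct kmem_cache *cachep, gfp_t flags,
                                        int nodeid)
      {
              struct folio *folio;
              struct slab *slab;
      
              folio = (struct folio *)__alloc_pages_node(nodeid, flags,
                                                         cachep->gfporder);
              if (!folio)
                      return NULL;
      
              slab = folio_slab(folio);
              __folio_set_slab(folio);        /* page flag set via folio */
              return slab;
      }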
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      42c0faac
    • mm: Convert check_heap_object() to use struct slab · 0b3eb091
      Committed by Matthew Wilcox (Oracle)
      Ensure that we're not seeing a tail page inside __check_heap_object() by
      converting to a slab instead of a page.  Take the opportunity to mark
      the slab as const, since we're not modifying it.  Also move the
      declaration of __check_heap_object() to mm/slab.h so it's not available
      to the wider kernel.
      
      [ vbabka@suse.cz: in check_heap_object() only convert to struct slab for
        actual PageSlab pages; use folio as intermediate step instead of page ]
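      
      A hedged sketch of the caller-side logic described above (names per the
      usercopy code of this series):
      
      struct folio *folio = virt_to_folio(ptr);
      
      if (folio_test_slab(folio))
              /* only actual slab pages are converted to struct slab */
              __check_heap_object(ptr, n, folio_slab(folio), to_user);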
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      0b3eb091
    • mm: Use struct slab in kmem_obj_info() · 7213230a
      Committed by Matthew Wilcox (Oracle)
      All three implementations of slab support kmem_obj_info(), which reports
      details of an object allocated from the slab allocator.  By using the
      slab type instead of the page type, we make it obvious that this can
      only be called for slabs.
      
      [ vbabka@suse.cz: also convert the related kmem_valid_obj() to folios ]
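      
      A hedged sketch of the converted helper's prototype (per mm/slab.h of
      this series):
      
      void kmem_obj_info(struct kmem_obj_info *kpp, void *object,
                         struct slab *slab);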
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      7213230a
    • mm: Convert [un]account_slab_page() to struct slab · b918653b
      Committed by Matthew Wilcox (Oracle)
      Convert the parameter of these functions to struct slab instead of
      struct page, and drop _page from the names. For now their callers just
      convert page to slab.
      
      [ vbabka@suse.cz: replace existing functions instead of calling them ]
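      
      A hedged sketch of the renamed helpers (prototypes per mm/slab.h of this
      series; bodies elided):
      
      static __always_inline void account_slab(struct slab *slab, int order,
                                               struct kmem_cache *s, gfp_t gfp);
      static __always_inline void unaccount_slab(struct slab *slab, int order,
                                                 struct kmem_cache *s);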
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      b918653b
    • mm/slab: Dissolve slab_map_pages() in its caller · c7981543
      Committed by Vlastimil Babka
      The function no longer does what its name and comment suggest; it just
      sets two struct page fields, which can be done directly in its sole
      caller.
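      
      A hedged sketch of the inlined result, i.e. the two field assignments
      formerly wrapped by slab_map_pages() (per mm/slab.c of this era):
      
      /* formerly: slab_map_pages(cache, page, freelist); */
      page->slab_cache = cache;
      page->freelist = freelist;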
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      c7981543
  12. 21 Nov 2021, 1 commit
  13. 07 Nov 2021, 2 commits
  14. 19 Oct 2021, 1 commit
  15. 07 May 2021, 2 commits
  16. 01 May 2021, 2 commits
    • kasan, mm: integrate slab init_on_free with HW_TAGS · d57a964e
      Committed by Andrey Konovalov
      This change uses the previously added memory initialization feature of
      the HW_TAGS KASAN routines for slab memory when init_on_free is enabled.
      
      With this change, the memory initialization memset() is no longer called
      when both HW_TAGS KASAN and init_on_free are enabled.  Instead, memory
      is initialized in the KASAN runtime.
      
      For SLUB, the memory initialization memset() is moved into
      slab_free_hook() that currently directly follows the initialization loop.
      A new argument is added to slab_free_hook() that indicates whether to
      initialize the memory or not.
      
      To avoid discrepancies in which memory gets initialized that could be
      caused by future changes, the KASAN hook and the initialization memset()
      are put together, and a warning comment is added.
      
      Combining setting allocation tags with memory initialization improves
      HW_TAGS KASAN performance when init_on_free is enabled.
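      
      A hedged sketch of the SLUB-side hook (prototype per mm/slub.c of this
      series; debugging and poisoning details are elided):
      
      static __always_inline bool slab_free_hook(struct kmem_cache *s,
                                                 void *x, bool init)
      {
              /* KASAN poisoning and, when init is true, zeroing of the
               * object are deliberately kept together in one place */
              return kasan_slab_free(s, x, init);
      }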
      
      Link: https://lkml.kernel.org/r/190fd15c1886654afdec0d19ebebd5ade665b601.1615296150.git.andreyknvl@google.com
      Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
      Reviewed-by: Marco Elver <elver@google.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Branislav Rankov <Branislav.Rankov@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Evgenii Stepanov <eugenis@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Kevin Brodsky <kevin.brodsky@arm.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Peter Collingbourne <pcc@google.com>
      Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d57a964e
    • kasan, mm: integrate slab init_on_alloc with HW_TAGS · da844b78
      Committed by Andrey Konovalov
      This change uses the previously added memory initialization feature of
      the HW_TAGS KASAN routines for slab memory when init_on_alloc is
      enabled.
      
      With this change, the memory initialization memset() is no longer called
      when both HW_TAGS KASAN and init_on_alloc are enabled.  Instead, memory
      is initialized in the KASAN runtime.
      
      The memory initialization memset() is moved into slab_post_alloc_hook()
      that currently directly follows the initialization loop.  A new argument
      is added to slab_post_alloc_hook() that indicates whether to initialize
      the memory or not.
      
      To avoid discrepancies in which memory gets initialized that could be
      caused by future changes, the KASAN hook and the initialization memset()
      are put together, and a warning comment is added.
      
      Combining setting allocation tags with memory initialization improves
      HW_TAGS KASAN performance when init_on_alloc is enabled.
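      
      A hedged sketch of the widened hook (parameters per mm/slab.h around
      this release; the memset() fallback for non-HW_TAGS configurations is
      elided):
      
      static inline void slab_post_alloc_hook(struct kmem_cache *s,
                                              struct obj_cgroup *objcg,
                                              gfp_t flags, size_t size,
                                              void **p, bool init)
      {
              size_t i;
      
              for (i = 0; i < size; i++) {
                      /* KASAN unpoisoning and, when init is true, zeroing
                       * of the object are deliberately kept together */
                      p[i] = kasan_slab_alloc(s, p[i], flags, init);
              }
      }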
      
      Link: https://lkml.kernel.org/r/c1292aeb5d519da221ec74a0684a949b027d7720.1615296150.git.andreyknvl@google.com
      Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
      Reviewed-by: Marco Elver <elver@google.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Branislav Rankov <Branislav.Rankov@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Evgenii Stepanov <eugenis@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Kevin Brodsky <kevin.brodsky@arm.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Peter Collingbourne <pcc@google.com>
      Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      da844b78
  17. 14 Mar 2021, 1 commit
  18. 09 Mar 2021, 1 commit
    • mm: Don't build mm_dump_obj() on CONFIG_PRINTK=n kernels · 5bb1bb35
      Committed by Paul E. McKenney
      The mem_dump_obj() functionality adds a few hundred bytes, which is a
      small price to pay, except on kernels built with CONFIG_PRINTK=n, where
      mem_dump_obj() messages would be suppressed anyway.  This commit
      therefore makes mem_dump_obj() a static inline empty function on kernels
      built with CONFIG_PRINTK=n and excludes all of its support functions as
      well.  This avoids kernel bloat on systems that cannot use
      mem_dump_obj().
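      
      A hedged sketch of the stub arrangement (per include/linux/mm.h of this
      era):
      
      #ifdef CONFIG_PRINTK
      void mem_dump_obj(void *object);
      #else
      static inline void mem_dump_obj(void *object) {}   /* no-op, no bloat */
      #endif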
      
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: <linux-mm@kvack.org>
      Suggested-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
      5bb1bb35
  19. 27 Feb 2021, 1 commit
    • mm, kfence: insert KFENCE hooks for SLAB · d3fb45f3
      Committed by Alexander Potapenko
      Inserts KFENCE hooks into the SLAB allocator.
      
      To pass the originally requested size to KFENCE, add an argument
      'orig_size' to slab_alloc*(). The additional argument is required to
      preserve the originally requested size for kmalloc() allocations, which
      use size classes (e.g. an allocation of 272 bytes will return an object
      of size 512). Therefore, kmem_cache::size does not represent the
      kmalloc caller's requested size, and we must introduce the argument
      'orig_size' to propagate the originally requested size to KFENCE.
      
      Without the originally requested size, we would not be able to detect
      out-of-bounds accesses for objects placed at the end of a KFENCE object
      page if that object is not equal to the kmalloc-size class it was
      bucketed into.
      
      When KFENCE is disabled, there is no additional overhead, since
      slab_alloc*() functions are __always_inline.
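      
      A hedged sketch of the hook placement (kfence_alloc() is the real KFENCE
      entry point; the rest of slab_alloc() is simplified to a single call):
      
      static __always_inline void *slab_alloc(struct kmem_cache *cachep,
                                              gfp_t flags, size_t orig_size,
                                              unsigned long caller)
      {
              /* KFENCE sees the caller's original size, not the size class:
               * a 272-byte kmalloc() may be bucketed into kmalloc-512 */
              void *objp = kfence_alloc(cachep, orig_size, flags);
      
              if (unlikely(objp))
                      return objp;
      
              return __do_cache_alloc(cachep, flags); /* regular SLAB path */
      }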
      
      Link: https://lkml.kernel.org/r/20201103175841.3495947-5-elver@google.com
      Signed-off-by: Marco Elver <elver@google.com>
      Signed-off-by: Alexander Potapenko <glider@google.com>
      Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
      Co-developed-by: Marco Elver <elver@google.com>
      
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Andrey Konovalov <andreyknvl@google.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Eric Dumazet <edumazet@google.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Hillf Danton <hdanton@sina.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jann Horn <jannh@google.com>
      Cc: Joern Engel <joern@purestorage.com>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Paul E. McKenney <paulmck@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: SeongJae Park <sjpark@amazon.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d3fb45f3
  20. 25 Feb 2021, 5 commits