1. 06 Jan 2022, 26 commits
    • mm/sl*b: Differentiate struct slab fields by sl*b implementations · 401fb12c
      Committed by Vlastimil Babka
      With a struct slab definition separate from struct page, we can go
      further and define only fields that the chosen sl*b implementation uses.
      This means everything between __page_flags and __page_refcount
      placeholders now depends on the chosen CONFIG_SL*B. Some fields exist in
      all implementations (slab_list) but can be part of a union in some, so
      it's simpler to repeat them than complicate the definition with ifdefs
      even more.
      
      The patch doesn't change physical offsets of the fields, although it
      could be done later - for example it's now clear that tighter packing in
      SLOB could be possible.
      
      This should also prevent accidental use of fields that don't exist in
      given implementation. Before this patch virt_to_cache() and
      cache_from_obj() were visible for SLOB (albeit not used), although they
      rely on the slab_cache field that isn't set by SLOB. With this patch
      it's now a compile error, so these functions are now hidden behind
      an #ifndef CONFIG_SLOB.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      Tested-by: Marco Elver <elver@google.com> # kfence
      Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Marco Elver <elver@google.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: <kasan-dev@googlegroups.com>
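      The layout constraint this commit relies on can be illustrated with a small userspace sketch. Everything here is a mock: the field set is a simplified SLUB-flavoured example, and the sizes are illustrative, not the kernel's. The point is that the per-implementation overlay must fit within struct page and keep the shared placeholders aliased to the page fields.

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for struct page: flags first, _refcount last, opaque middle. */
struct mock_page {
	unsigned long flags;
	unsigned long payload[5];
	unsigned int _refcount;
};

/* Hypothetical SLUB-style overlay, mirroring the shape described above:
 * shared placeholders bracket the implementation-specific fields. */
struct slab {
	unsigned long __page_flags;
	struct slab *slab_list;
	void *freelist;
	unsigned inuse:16, objects:15, frozen:1;
	unsigned int __page_refcount;
};

static int layout_ok(void)
{
	/* the overlay may not outgrow the page memory it reuses, and the
	 * shared flags placeholder must sit at the same offset as
	 * page->flags */
	return sizeof(struct slab) <= sizeof(struct mock_page) &&
	       offsetof(struct slab, __page_flags) ==
	       offsetof(struct mock_page, flags);
}
```

      Upstream enforces equivalent invariants with compile-time asserts in mm/slab.h, so a mismatched field would fail the build rather than corrupt struct page.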
    • mm/kfence: Convert kfence_guarded_alloc() to struct slab · 8dae0cfe
      Committed by Vlastimil Babka
      The function sets some fields that are being moved from struct page to
      struct slab so it needs to be converted.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Tested-by: Marco Elver <elver@google.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Marco Elver <elver@google.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: <kasan-dev@googlegroups.com>
    • mm/kasan: Convert to struct folio and struct slab · 6e48a966
      Committed by Matthew Wilcox (Oracle)
      KASAN accesses some slab related struct page fields so we need to
      convert it to struct slab. Some places are a bit simplified thanks to
      kasan_addr_to_slab() encapsulating the PageSlab flag check through
      virt_to_slab().  When resolving object address to either a real slab or
      a large kmalloc, use struct folio as the intermediate type for testing
      the slab flag to avoid unnecessary implicit compound_head().
      
      [ vbabka@suse.cz: use struct folio, adjust to differences in previous
        patches ]
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrey Konovalov <andreyknvl@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: <kasan-dev@googlegroups.com>
    • mm/slob: Convert SLOB to use struct slab and struct folio · 50757018
      Committed by Matthew Wilcox (Oracle)
      Use struct slab throughout the slob allocator. Where non-slab page can
      appear use struct folio instead of struct page.
      
      [ vbabka@suse.cz: don't introduce wrappers for PageSlobFree in mm/slab.h
        just for the single callers being wrappers in mm/slob.c ]
      
      [ Hyeonggon Yoo <42.hyeyoo@gmail.com>: fix NULL pointer dereference ]
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
    • mm/memcg: Convert slab objcgs from struct page to struct slab · 4b5f8d9a
      Committed by Vlastimil Babka
      page->memcg_data is used with MEMCG_DATA_OBJCGS flag only for slab pages
      so convert all the related infrastructure to struct slab. Also use
      struct folio instead of struct page when resolving object pointers.
      
      This is not just a mechanical change of types and names. In
      mem_cgroup_from_obj() we now use folio_test_slab() to decide whether to
      interpret the folio as a real slab or a large kmalloc, instead of
      relying on the MEMCG_DATA_OBJCGS bit that used to be checked in
      page_objcgs_check(). Similarly, in memcg_slab_free_hook() we can
      encounter kmalloc_large() pages (there the folio slab flag check is
      implied by virt_to_slab()). As a result, page_objcgs_check() can be
      dropped instead of converted.
      
      To avoid include cycles, move the inline definition of slab_objcgs()
      from memcontrol.h to mm/slab.h.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
      Cc: <cgroups@vger.kernel.org>
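      The decision flow this commit describes, branch on the folio's slab flag rather than on a memcg_data bit, can be sketched in userspace. All types and helpers below are mocks introduced for illustration; only the shape of the branch reflects the commit.

```c
#include <assert.h>
#include <stddef.h>

/* Mocked types: a folio either backs a slab (with a per-object objcg
 * vector) or a large kmalloc allocation (memcg data used directly). */
struct obj_cgroup { int id; };
struct folio {
	int is_slab;
	struct obj_cgroup *memcg_data;   /* large kmalloc case */
	struct obj_cgroup **objcgs;      /* slab case: one per object */
};

static int folio_test_slab(const struct folio *f) { return f->is_slab; }

/* mem_cgroup_from_obj()-style lookup after the conversion: one folio
 * flag test picks the slab path or the large-kmalloc path */
static struct obj_cgroup *objcg_from_obj(struct folio *f, unsigned idx)
{
	if (folio_test_slab(f))
		return f->objcgs ? f->objcgs[idx] : NULL;
	return f->memcg_data;
}
```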
    • mm: Convert struct page to struct slab in functions used by other subsystems · 40f3bf0c
      Committed by Vlastimil Babka
      KASAN, KFENCE and memcg interact with SLAB or SLUB internals through
      the functions nearest_obj(), obj_to_index() and objs_per_slab(), which
      take struct page as a parameter. This patch converts them, including
      all callers, to struct slab through a coccinelle semantic patch.
      
      // Options: --include-headers --no-includes --smpl-spacing include/linux/slab_def.h include/linux/slub_def.h mm/slab.h mm/kasan/*.c mm/kfence/kfence_test.c mm/memcontrol.c mm/slab.c mm/slub.c
      // Note: needs coccinelle 1.1.1 to avoid breaking whitespace
      
      @@
      @@
      
      -objs_per_slab_page(
      +objs_per_slab(
       ...
       )
       { ... }
      
      @@
      @@
      
      -objs_per_slab_page(
      +objs_per_slab(
       ...
       )
      
      @@
      identifier fn =~ "obj_to_index|objs_per_slab";
      @@
      
       fn(...,
      -   const struct page *page
      +   const struct slab *slab
          ,...)
       {
      <...
      (
      - page_address(page)
      + slab_address(slab)
      |
      - page
      + slab
      )
      ...>
       }
      
      @@
      identifier fn =~ "nearest_obj";
      @@
      
       fn(...,
      -   struct page *page
      +   const struct slab *slab
          ,...)
       {
      <...
      (
      - page_address(page)
      + slab_address(slab)
      |
      - page
      + slab
      )
      ...>
       }
      
      @@
      identifier fn =~ "nearest_obj|obj_to_index|objs_per_slab";
      expression E;
      @@
      
       fn(...,
      (
      - slab_page(E)
      + E
      |
      - virt_to_page(E)
      + virt_to_slab(E)
      |
      - virt_to_head_page(E)
      + virt_to_slab(E)
      |
      - page
      + page_slab(page)
      )
        ,...)
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Julia Lawall <julia.lawall@inria.fr>
      Cc: Luis Chamberlain <mcgrof@kernel.org>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrey Konovalov <andreyknvl@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Marco Elver <elver@google.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
      Cc: <kasan-dev@googlegroups.com>
      Cc: <cgroups@vger.kernel.org>
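      The net effect of the semantic patch on one of the affected helpers can be sketched as compilable mock code. The types, the plain division, and the buffer arithmetic below are illustrative stand-ins (the real obj_to_index() uses reciprocal division and the real struct definitions); only the signature change from const struct page * to const struct slab *, with page_address() becoming slab_address(), follows the commit.

```c
#include <assert.h>
#include <stddef.h>

struct kmem_cache { unsigned int size; };   /* mocked: object size only */
struct slab { void *addr; };                /* mocked */

static void *slab_address(const struct slab *slab) { return slab->addr; }

/* before: obj_to_index(struct kmem_cache *, const struct page *, void *)
 * after the spatch the helper takes const struct slab * and resolves the
 * base address via slab_address() instead of page_address() */
static unsigned int obj_to_index(const struct kmem_cache *cache,
				 const struct slab *slab, void *obj)
{
	return (unsigned int)(((char *)obj - (char *)slab_address(slab))
			      / cache->size);
}
```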
    • mm/slab: Finish struct page to struct slab conversion · dd35f71a
      Committed by Vlastimil Babka
      Change cache_free_alien() to use slab_nid(virt_to_slab()). Otherwise
      this is just an update of comments and some remaining variable names.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      Reviewed-by: Roman Gushchin <guro@fb.com>
    • mm/slab: Convert most struct page to struct slab by spatch · 7981e67e
      Committed by Vlastimil Babka
      The majority of conversion from struct page to struct slab in SLAB
      internals can be delegated to a coccinelle semantic patch. This includes
      renaming of variables with 'page' in name to 'slab', and similar.
      
      Big thanks to Julia Lawall and Luis Chamberlain for help with
      coccinelle.
      
      // Options: --include-headers --no-includes --smpl-spacing mm/slab.c
      // Note: needs coccinelle 1.1.1 to avoid breaking whitespace, and ocaml for the
      // embedded script
      
      // build list of functions for applying the next rule
      @initialize:ocaml@
      @@
      
      let ok_function p =
        not (List.mem (List.hd p).current_element ["kmem_getpages";"kmem_freepages"])
      
      // convert the type in selected functions
      @@
      position p : script:ocaml() { ok_function p };
      @@
      
      - struct page@p
      + struct slab
      
      @@
      @@
      
      -PageSlabPfmemalloc(page)
      +slab_test_pfmemalloc(slab)
      
      @@
      @@
      
      -ClearPageSlabPfmemalloc(page)
      +slab_clear_pfmemalloc(slab)
      
      @@
      @@
      
      obj_to_index(
       ...,
      - page
      + slab_page(slab)
      ,...)
      
      // for all functions, change any "struct slab *page" parameter to "struct slab
      // *slab" in the signature, and generally all occurrences of "page" to "slab" in
      // the body - with some special cases.
      @@
      identifier fn;
      expression E;
      @@
      
       fn(...,
      -   struct slab *page
      +   struct slab *slab
          ,...)
       {
      <...
      (
      - int page_node;
      + int slab_node;
      |
      - page_node
      + slab_node
      |
      - page_slab(page)
      + slab
      |
      - page_address(page)
      + slab_address(slab)
      |
      - page_size(page)
      + slab_size(slab)
      |
      - page_to_nid(page)
      + slab_nid(slab)
      |
      - virt_to_head_page(E)
      + virt_to_slab(E)
      |
      - page
      + slab
      )
      ...>
       }
      
      // rename a function parameter
      @@
      identifier fn;
      expression E;
      @@
      
       fn(...,
      -   int page_node
      +   int slab_node
          ,...)
       {
      <...
      - page_node
      + slab_node
      ...>
       }
      
      // functions converted by previous rules that were temporarily called using
      // slab_page(E) so we want to remove the wrapper now that they accept struct
      // slab ptr directly
      @@
      identifier fn =~ "index_to_obj";
      expression E;
      @@
      
       fn(...,
      - slab_page(E)
      + E
       ,...)
      
      // functions that were returning struct page ptr and now will return struct
      // slab ptr, including slab_page() wrapper removal
      @@
      identifier fn =~ "cache_grow_begin|get_valid_first_slab|get_first_slab";
      expression E;
      @@
      
       fn(...)
       {
      <...
      - slab_page(E)
      + E
      ...>
       }
      
      // rename any former struct page * declarations
      @@
      @@
      
      struct slab *
      -page
      +slab
      ;
      
      // all functions (with exceptions) with a local "struct slab *page" variable
      // that will be renamed to "struct slab *slab"
      @@
      identifier fn !~ "kmem_getpages|kmem_freepages";
      expression E;
      @@
      
       fn(...)
       {
      <...
      (
      - page_slab(page)
      + slab
      |
      - page_to_nid(page)
      + slab_nid(slab)
      |
      - kasan_poison_slab(page)
      + kasan_poison_slab(slab_page(slab))
      |
      - page_address(page)
      + slab_address(slab)
      |
      - page_size(page)
      + slab_size(slab)
      |
      - page->pages
      + slab->slabs
      |
      - page = virt_to_head_page(E)
      + slab = virt_to_slab(E)
      |
      - virt_to_head_page(E)
      + virt_to_slab(E)
      |
      - page
      + slab
      )
      ...>
       }
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      Cc: Julia Lawall <julia.lawall@inria.fr>
      Cc: Luis Chamberlain <mcgrof@kernel.org>
    • mm/slab: Convert kmem_getpages() and kmem_freepages() to struct slab · 42c0faac
      Committed by Vlastimil Babka
      These functions sit at the boundary to the page allocator. Also use
      folio internally to avoid an extra compound_head() when dealing with
      page flags.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
    • mm/slub: Finish struct page to struct slab conversion · c2092c12
      Committed by Vlastimil Babka
      Update comments mentioning pages to mention slabs where appropriate,
      and rename some goto labels accordingly.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
    • mm/slub: Convert most struct page to struct slab by spatch · bb192ed9
      Committed by Vlastimil Babka
      The majority of conversion from struct page to struct slab in SLUB
      internals can be delegated to a coccinelle semantic patch. This includes
      renaming of variables with 'page' in name to 'slab', and similar.
      
      Big thanks to Julia Lawall and Luis Chamberlain for help with
      coccinelle.
      
      // Options: --include-headers --no-includes --smpl-spacing include/linux/slub_def.h mm/slub.c
      // Note: needs coccinelle 1.1.1 to avoid breaking whitespace, and ocaml for the
      // embedded script
      
      // build list of functions to exclude from applying the next rule
      @initialize:ocaml@
      @@
      
      let ok_function p =
        not (List.mem (List.hd p).current_element ["nearest_obj";"obj_to_index";"objs_per_slab_page";"__slab_lock";"__slab_unlock";"free_nonslab_page";"kmalloc_large_node"])
      
      // convert the type from struct page to struct slab in all functions except the
      // list from the previous rule
      // this also affects struct kmem_cache_cpu, but that's ok
      @@
      position p : script:ocaml() { ok_function p };
      @@
      
      - struct page@p
      + struct slab
      
      // in struct kmem_cache_cpu, change the name from page to slab
      // the type was already converted by the previous rule
      @@
      @@
      
      struct kmem_cache_cpu {
      ...
      -struct slab *page;
      +struct slab *slab;
      ...
      }
      
      // there are many places that use c->page which is now c->slab after the
      // previous rule
      @@
      struct kmem_cache_cpu *c;
      @@
      
      -c->page
      +c->slab
      
      @@
      @@
      
      struct kmem_cache {
      ...
      - unsigned int cpu_partial_pages;
      + unsigned int cpu_partial_slabs;
      ...
      }
      
      @@
      struct kmem_cache *s;
      @@
      
      - s->cpu_partial_pages
      + s->cpu_partial_slabs
      
      @@
      @@
      
      static void
      - setup_page_debug(
      + setup_slab_debug(
       ...)
       {...}
      
      @@
      @@
      
      - setup_page_debug(
      + setup_slab_debug(
       ...);
      
      // for all functions (with exceptions), change any "struct slab *page"
      // parameter to "struct slab *slab" in the signature, and generally all
      // occurrences of "page" to "slab" in the body - with some special cases.
      
      @@
      identifier fn !~ "free_nonslab_page|obj_to_index|objs_per_slab_page|nearest_obj";
      @@
       fn(...,
      -   struct slab *page
      +   struct slab *slab
          ,...)
       {
      <...
      - page
      + slab
      ...>
       }
      
      // similar to previous but the param is called partial_page
      @@
      identifier fn;
      @@
      
       fn(...,
      -   struct slab *partial_page
      +   struct slab *partial_slab
          ,...)
       {
      <...
      - partial_page
      + partial_slab
      ...>
       }
      
      // similar to previous but for functions that take pointer to struct page ptr
      @@
      identifier fn;
      @@
      
       fn(...,
      -   struct slab **ret_page
      +   struct slab **ret_slab
          ,...)
       {
      <...
      - ret_page
      + ret_slab
      ...>
       }
      
      // functions converted by previous rules that were temporarily called using
      // slab_page(E) so we want to remove the wrapper now that they accept struct
      // slab ptr directly
      @@
      identifier fn =~ "slab_free|do_slab_free";
      expression E;
      @@
      
       fn(...,
      - slab_page(E)
      + E
        ,...)
      
      // similar to previous but for another pattern
      @@
      identifier fn =~ "slab_pad_check|check_object";
      @@
      
       fn(...,
      - folio_page(folio, 0)
      + slab
        ,...)
      
      // functions that were returning struct page ptr and now will return struct
      // slab ptr, including slab_page() wrapper removal
      @@
      identifier fn =~ "allocate_slab|new_slab";
      expression E;
      @@
      
       static
      -struct page *
      +struct slab *
       fn(...)
       {
      <...
      - slab_page(E)
      + E
      ...>
       }
      
      // rename any former struct page * declarations
      @@
      @@
      
      struct slab *
      (
      - page
      + slab
      |
      - partial_page
      + partial_slab
      |
      - oldpage
      + oldslab
      )
      ;
      
      // this has to be separate from previous rule as page and page2 appear at the
      // same line
      @@
      @@
      
      struct slab *
      -page2
      +slab2
      ;
      
      // similar but with initial assignment
      @@
      expression E;
      @@
      
      struct slab *
      (
      - page
      + slab
      |
      - flush_page
      + flush_slab
      |
      - discard_page
      + slab_to_discard
      |
      - page_to_unfreeze
      + slab_to_unfreeze
      )
      = E;
      
      // convert most of struct page to struct slab usage inside functions (with
      // exceptions), including specific variable renames
      @@
      identifier fn !~ "nearest_obj|obj_to_index|objs_per_slab_page|__slab_(un)*lock|__free_slab|free_nonslab_page|kmalloc_large_node";
      expression E;
      @@
      
       fn(...)
       {
      <...
      (
      - int pages;
      + int slabs;
      |
      - int pages = E;
      + int slabs = E;
      |
      - page
      + slab
      |
      - flush_page
      + flush_slab
      |
      - partial_page
      + partial_slab
      |
      - oldpage->pages
      + oldslab->slabs
      |
      - oldpage
      + oldslab
      |
      - unsigned int nr_pages;
      + unsigned int nr_slabs;
      |
      - nr_pages
      + nr_slabs
      |
      - unsigned int partial_pages = E;
      + unsigned int partial_slabs = E;
      |
      - partial_pages
      + partial_slabs
      )
      ...>
       }
      
      // this has to be split out from the previous rule so that lines containing
      // multiple matching changes will be fully converted
      @@
      identifier fn !~ "nearest_obj|obj_to_index|objs_per_slab_page|__slab_(un)*lock|__free_slab|free_nonslab_page|kmalloc_large_node";
      @@
      
       fn(...)
       {
      <...
      (
      - slab->pages
      + slab->slabs
      |
      - pages
      + slabs
      |
      - page2
      + slab2
      |
      - discard_page
      + slab_to_discard
      |
      - page_to_unfreeze
      + slab_to_unfreeze
      )
      ...>
       }
      
      // after we simply changed all occurrences of page to slab, some usages need
      // adjustment for slab-specific functions, or use slab_page() wrapper
      @@
      identifier fn !~ "nearest_obj|obj_to_index|objs_per_slab_page|__slab_(un)*lock|__free_slab|free_nonslab_page|kmalloc_large_node";
      @@
      
       fn(...)
       {
      <...
      (
      - page_slab(slab)
      + slab
      |
      - kasan_poison_slab(slab)
      + kasan_poison_slab(slab_page(slab))
      |
      - page_address(slab)
      + slab_address(slab)
      |
      - page_size(slab)
      + slab_size(slab)
      |
      - PageSlab(slab)
      + folio_test_slab(slab_folio(slab))
      |
      - page_to_nid(slab)
      + slab_nid(slab)
      |
      - compound_order(slab)
      + slab_order(slab)
      )
      ...>
       }
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      Cc: Julia Lawall <julia.lawall@inria.fr>
      Cc: Luis Chamberlain <mcgrof@kernel.org>
    • mm/slub: Convert pfmemalloc_match() to take a struct slab · 01b34d16
      Committed by Matthew Wilcox (Oracle)
      Preparatory for mass conversion. Use the new slab_test_pfmemalloc()
      helper.  As it doesn't do VM_BUG_ON(!PageSlab()) we no longer need the
      pfmemalloc_match_unsafe() variant.
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
    • mm/slub: Convert __free_slab() to use struct slab · 4020b4a2
      Committed by Vlastimil Babka
      __free_slab() is on the boundary between struct slab and struct page,
      so start with struct slab, but convert to folio for working with
      flags, and use folio_page() to call functions that require struct
      page.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
    • mm/slub: Convert alloc_slab_page() to return a struct slab · 45387b8c
      Committed by Vlastimil Babka
      Preparatory, callers convert back to struct page for now.
      
      Also move setting page flags to alloc_slab_page() where we still operate
      on a struct page. This means the page->slab_cache pointer is now set
      later than the PageSlab flag, which could theoretically confuse some pfn
      walker assuming PageSlab means there would be a valid cache pointer. But
      as the code had no barriers and used __set_bit() anyway, it could have
      happened already, so there shouldn't be such a walker.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
    • mm/slub: Convert print_page_info() to print_slab_info() · fb012e27
      Committed by Matthew Wilcox (Oracle)
      Improve the type safety and prepare for further conversion. For flags
      access, convert to folio internally.
      
      [ vbabka@suse.cz: access flags via folio_flags() ]
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
    • mm/slub: Convert __slab_lock() and __slab_unlock() to struct slab · 0393895b
      Committed by Vlastimil Babka
      These functions operate on the PG_locked page flag; make them accept
      struct slab to encapsulate this implementation detail.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
    • mm/slub: Convert kfree() to use a struct slab · d835eef4
      Committed by Matthew Wilcox (Oracle)
      Convert kfree(), kmem_cache_free() and ___cache_free() to resolve object
      addresses to struct slab, using folio as intermediate step where needed.
      Keep passing the result as struct page for now in preparation for mass
      conversion of internal functions.
      
      [ vbabka@suse.cz: Use folio as intermediate step when checking for
        large kmalloc pages, and when freeing them - rename
        free_nonslab_page() to free_large_kmalloc() that takes struct folio ]
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
    • mm/slub: Convert detached_freelist to use a struct slab · cc465c3b
      Committed by Matthew Wilcox (Oracle)
      This gives us a little bit of extra typesafety as we know that nobody
      called virt_to_page() instead of virt_to_head_page().
      
      [ vbabka@suse.cz: Use folio as intermediate step when filtering out
        large kmalloc pages ]
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
    • mm: Convert check_heap_object() to use struct slab · 0b3eb091
      Committed by Matthew Wilcox (Oracle)
      Ensure that we're not seeing a tail page inside __check_heap_object() by
      converting to a slab instead of a page.  Take the opportunity to mark
      the slab as const since we're not modifying it.  Also move the
      declaration of __check_heap_object() to mm/slab.h so it's not available
      to the wider kernel.
      
      [ vbabka@suse.cz: in check_heap_object() only convert to struct slab for
        actual PageSlab pages; use folio as intermediate step instead of page ]
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
    • mm: Use struct slab in kmem_obj_info() · 7213230a
      Committed by Matthew Wilcox (Oracle)
      All three implementations of slab support kmem_obj_info() which reports
      details of an object allocated from the slab allocator.  By using the
      slab type instead of the page type, we make it obvious that this can
      only be called for slabs.
      
      [ vbabka@suse.cz: also convert the related kmem_valid_obj() to folios ]
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
    • mm: Convert __ksize() to struct slab · 0c24811b
      Committed by Matthew Wilcox (Oracle)
      In SLUB, use folios, and struct slab to access the slab_cache field.
      In SLOB, use folios to properly resolve pointers beyond the PAGE_SIZE
      offset of the object.
      
      [ vbabka@suse.cz: use folios, and only convert folio_test_slab() == true
        folios to struct slab ]
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Roman Gushchin <guro@fb.com>
    • mm: Convert virt_to_cache() to use struct slab · 82c1775d
      Committed by Matthew Wilcox (Oracle)
      This function is entirely self-contained, so can be converted from page
      to slab.
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
    • mm: Convert [un]account_slab_page() to struct slab · b918653b
      Committed by Matthew Wilcox (Oracle)
      Convert the parameter of these functions to struct slab instead of
      struct page and drop _page from the names. For now their callers just
      convert page to slab.
      
      [ vbabka@suse.cz: replace existing functions instead of calling them ]
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Roman Gushchin <guro@fb.com>
    • mm: Split slab into its own type · d122019b
      Committed by Matthew Wilcox (Oracle)
      Make struct slab independent of struct page. It still uses the
      underlying memory in struct page for storing slab-specific data, but
      slab and slub can now be weaned off using struct page directly.  Some of
      the wrapper functions (slab_address() and slab_order()) still need to
      cast to struct folio, but this is a significant disentanglement.
      
      [ vbabka@suse.cz: Rebase on folios, use folio instead of page where
        possible.
      
        Do not duplicate flags field in struct slab, instead make the related
        accessors go through slab_folio(). For testing pfmemalloc use the
        folio_*_active flag accessors directly so the PageSlabPfmemalloc
        wrappers can be removed later.
      
        Make folio_slab() expect only folio_test_slab() == true folios and
        virt_to_slab() return NULL when folio_test_slab() == false.
      
        Move struct slab to mm/slab.h.
      
        Don't represent with struct slab pages that are not true slab pages,
        but just compound pages obtained directly from the page allocator
        (via large kmalloc() for SLUB and SLOB). ]
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Roman Gushchin <guro@fb.com>
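      The folio_slab()/virt_to_slab() contract described above can be sketched with mocked userspace types: only folio_test_slab() == true folios are viewed as struct slab, and anything else (such as a large kmalloc compound page) yields NULL. The types and helper bodies are illustrative stand-ins, not the kernel definitions.

```c
#include <assert.h>
#include <stddef.h>

struct folio { int is_slab; };
/* struct slab reuses the folio's memory, as the commit describes */
struct slab { struct folio folio; };

static int folio_test_slab(const struct folio *f) { return f->is_slab; }

/* folio_slab(): the caller must already know this folio backs a slab */
static struct slab *folio_slab(struct folio *f) { return (struct slab *)f; }

/* virt_to_slab()-style filter: non-slab folios are never represented
 * as struct slab at all */
static struct slab *to_slab_or_null(struct folio *f)
{
	return folio_test_slab(f) ? folio_slab(f) : NULL;
}
```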
    • mm/slub: Make object_err() static · ae16d059
      Committed by Vlastimil Babka
      There are no callers outside of mm/slub.c anymore.
      
      Move freelist_corrupted() that calls object_err() to avoid a need for
      forward declaration.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
    • mm/slab: Dissolve slab_map_pages() in its caller · c7981543
      Committed by Vlastimil Babka
      The function no longer does what its name and comment suggest; it just
      sets two struct page fields, which can be done directly in its sole
      caller.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
  2. 20 Dec 2021, 14 commits