1. 20 January 2022, 4 commits
  2. 19 January 2022, 5 commits
    • net/netfilter: Add unstable CT lookup helpers for XDP and TC-BPF · b4c2b959
      Committed by Kumar Kartikeya Dwivedi
      This change adds conntrack lookup helpers using the unstable kfunc call
      interface for the XDP and TC-BPF hooks. The primary use case is
      implementing a synproxy in XDP; see Maxim's patchset [0].
      
      Export get_net_ns_by_id as nf_conntrack_bpf.c needs to call it.
      
      This object is only built when CONFIG_DEBUG_INFO_BTF_MODULES is enabled.
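
      As a rough illustration (a hedged sketch, not taken from this commit's text): an
      XDP program might invoke such a lookup kfunc and release the returned reference as
      shown below. The bpf_xdp_ct_lookup/bpf_ct_release names follow this series, but
      the exact prototypes and the bpf_ct_opts layout should be treated as assumptions.

      #include "vmlinux.h"
      #include <bpf/bpf_helpers.h>

      /* Prototypes assumed from this series; resolved against kernel BTF. */
      extern struct nf_conn *bpf_xdp_ct_lookup(struct xdp_md *xdp_ctx,
                                               struct bpf_sock_tuple *bpf_tuple,
                                               u32 tuple__sz,
                                               struct bpf_ct_opts *opts,
                                               u32 opts__sz) __ksym;
      extern void bpf_ct_release(struct nf_conn *ct) __ksym;

      SEC("xdp")
      int ct_lookup_example(struct xdp_md *ctx)
      {
              struct bpf_sock_tuple tup = {};  /* fill from parsed packet headers */
              struct bpf_ct_opts opts = { .netns_id = -1, .l4proto = IPPROTO_TCP };
              struct nf_conn *ct;

              ct = bpf_xdp_ct_lookup(ctx, &tup, sizeof(tup.ipv4), &opts, sizeof(opts));
              if (ct)
                      bpf_ct_release(ct);  /* acquired reference must be released */
              return XDP_PASS;
      }

      char LICENSE[] SEC("license") = "GPL";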
      
        [0]: https://lore.kernel.org/bpf/20211019144655.3483197-1-maximmi@nvidia.com
      Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
      Link: https://lore.kernel.org/r/20220114163953.1455836-7-memxor@gmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      b4c2b959
    • bpf: Add reference tracking support to kfunc · 5c073f26
      Committed by Kumar Kartikeya Dwivedi
      This patch adds verifier support for PTR_TO_BTF_ID return type of kfunc
      to be a reference, by reusing acquire_reference_state/release_reference
      support for existing in-kernel bpf helpers.
      
      We make use of the three kfunc types:
      
      - BTF_KFUNC_TYPE_ACQUIRE
        Return true if kfunc_btf_id is an acquire kfunc.  This will call
        acquire_reference_state for the returned PTR_TO_BTF_ID (this is the
        only allowed return value). Note that an acquire kfunc must always
        return a PTR_TO_BTF_ID{_OR_NULL}, otherwise the program is rejected.
      
      - BTF_KFUNC_TYPE_RELEASE
        Return true if kfunc_btf_id is a release kfunc.  This will release the
        reference to the passed in PTR_TO_BTF_ID which has a reference state
        (from earlier acquire kfunc).
        The btf_check_func_arg_match returns the regno (of argument register,
        hence > 0) if the kfunc is a release kfunc, and a proper referenced
        PTR_TO_BTF_ID is being passed to it.
        This is similar to how helper call check uses bpf_call_arg_meta to
        store the ref_obj_id that is later used to release the reference.
        Similar to in-kernel helper, we only allow passing one referenced
        PTR_TO_BTF_ID as an argument. It can also be passed in to normal
        kfunc, but in case of release kfunc there must always be one
        PTR_TO_BTF_ID argument that is referenced.
      
      - BTF_KFUNC_TYPE_RET_NULL
        For a kfunc returning PTR_TO_BTF_ID, this tells whether it can be NULL,
        hence forcing the caller to check the pointer for NULL before
        accessing it. Note that taking into account the case fixed by commit
        93c230e3 ("bpf: Enforce id generation for all may-be-null register type")
        we assign a non-zero id for mark_ptr_or_null_reg logic. Later, if more
        return types are supported by kfunc, which have a _OR_NULL variant, it
        might be better to move this id generation under a common
        reg_type_may_be_null check, similar to the case in the commit.
      
      Referenced PTR_TO_BTF_ID is currently only limited to kfunc, but can be
      extended in the future to other BPF helpers as well.  For now, we can
      rely on the btf_struct_ids_match check to ensure we get the pointer to
      the expected struct type. In the future, care needs to be taken to avoid
      ambiguity for reference PTR_TO_BTF_ID passed to release function, in
      case multiple candidates can release same BTF ID.
      
      e.g. there might be two release kfuncs (or kfunc and helper):
      
      foo(struct abc *p);
      bar(struct abc *p);
      
      ... such that both release a PTR_TO_BTF_ID with btf_id of struct abc. In
      this case we would need to track the acquire function corresponding to
      the release function to avoid type confusion, and store this information
      in the register state so that an incorrect program can be rejected. This
      is not a problem right now, hence it is left as an exercise for the
      future patch introducing such a case in the kernel.
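
      To make the acquire/release pairing concrete, here is a hedged sketch of what a
      kernel-side pair could look like. struct foo and both function names are
      hypothetical; only the pattern of returning a referenced pointer and consuming
      it again is the point.

      #include <linux/refcount.h>
      #include <linux/slab.h>

      struct foo {
              refcount_t ref;
      };

      /* Acquire kfunc: returns a referenced PTR_TO_BTF_ID (may be NULL if the
       * kfunc is also flagged as BTF_KFUNC_TYPE_RET_NULL). The verifier records
       * a reference state for the returned pointer. */
      struct foo *bpf_foo_acquire(struct foo *f)
      {
              if (!f || !refcount_inc_not_zero(&f->ref))
                      return NULL;
              return f;
      }

      /* Release kfunc: consumes exactly one referenced PTR_TO_BTF_ID argument;
       * the verifier releases the matching reference state. */
      void bpf_foo_release(struct foo *f)
      {
              if (refcount_dec_and_test(&f->ref))
                      kfree(f);
      }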
      Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
      Link: https://lore.kernel.org/r/20220114163953.1455836-6-memxor@gmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      5c073f26
    • bpf: Introduce mem, size argument pair support for kfunc · d583691c
      Committed by Kumar Kartikeya Dwivedi
      BPF helpers can associate two adjacent arguments together to pass memory
      of certain size, using ARG_PTR_TO_MEM and ARG_CONST_SIZE arguments.
      Since we don't use bpf_func_proto for kfunc, we need to leverage BTF to
      implement similar support.
      
      The ARG_CONST_SIZE processing for helpers is refactored into a common
      check_mem_size_reg helper that is shared with kfunc as well. kfunc
      ptr_to_mem support follows logic similar to global functions, where
      verification is done as if pointer is not null, even when it may be
      null.
      
      This leads to a simple rule for writing a kfunc: always check the
      argument pointer for NULL, except when it is PTR_TO_CTX. The PTR_TO_CTX
      case is only safe when the kfunc expecting a pointer to the program ctx
      is not exposed to other program types for which the same struct is not
      the ctx type. In that case, the type check will fall through to other
      cases and would permit passing other types of pointers, possibly NULL
      at runtime.
      
      Currently, we require the size argument to be suffixed with "__sz" in
      the parameter name. This information is then recorded in kernel BTF and
      verified during function argument checking. In the future we can use BTF
      tagging instead, and modify the kernel function definitions. This will
      be a purely kernel-side change.
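
      As an illustration of the naming convention (a hedged sketch; the kfunc below and
      its semantics are hypothetical): the verifier pairs 'data' with 'data__sz' purely
      from the "__sz" suffix recorded in BTF, with no bpf_func_proto involved.

      #include <linux/errno.h>
      #include <linux/types.h>

      /* Hypothetical kfunc taking a memory region plus its size. The BPF program
       * must pass a pointer to at least data__sz accessible bytes. */
      int bpf_blob_process(void *data, u32 data__sz)
      {
              if (!data)              /* rule from above: always check for NULL */
                      return -EINVAL;
              /* ... may safely access data[0 .. data__sz - 1] here ... */
              return 0;
      }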
      
      This allows us to have some form of backwards compatibility for
      structures that are passed in to the kernel function with their size,
      and allow variable length structures to be passed in if they are
      accompanied by a size parameter.
      Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
      Link: https://lore.kernel.org/r/20220114163953.1455836-5-memxor@gmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      d583691c
    • bpf: Remove check_kfunc_call callback and old kfunc BTF ID API · b202d844
      Committed by Kumar Kartikeya Dwivedi
      Completely remove the old code for check_kfunc_call to help it work
      with modules, and also the callback itself.
      
      The previous commit adds infrastructure to register all sets and put
      them in vmlinux or module BTF, and concatenates all related sets
      organized by the hook and the type. Once populated, these sets remain
      immutable for the lifetime of the struct btf.
      
      Also, since we don't need the 'owner' module anywhere when doing
      check_kfunc_call, drop the 'btf_modp' module parameter from
      find_kfunc_desc_btf.
      Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
      Link: https://lore.kernel.org/r/20220114163953.1455836-4-memxor@gmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      b202d844
    • bpf: Populate kfunc BTF ID sets in struct btf · dee872e1
      Committed by Kumar Kartikeya Dwivedi
      This patch prepares the kernel to support putting all kinds of kfunc BTF
      ID sets in the struct btf itself. The various kernel subsystems will
      call register_btf_kfunc_id_set in their initcalls (for built-in code
      and modules).
      
      The 'hook' is one of the many program types, e.g. XDP, TC/SCHED_CLS, or
      STRUCT_OPS, and the 'types' are check (allowed or not), acquire, release,
      and ret_null (with a PTR_TO_BTF_ID_OR_NULL return type).
      
      A maximum of BTF_KFUNC_SET_MAX_CNT (32) kfunc BTF IDs are permitted in a
      set for a given hook and type for vmlinux sets, since they are allocated
      on demand and otherwise left NULL. Module sets can only be registered
      once per hook and type, hence they are directly assigned.
      
      A new btf_kfunc_id_set_contains function is exposed for use in the
      verifier; it is faster than the existing list-searching method and is
      also automatic. It also lets other code not care whether the set is
      allocated or not.
      
      Note that module code can only make a single register_btf_kfunc_id_set
      call per hook. This is why sorting is only done for in-kernel vmlinux
      sets: there might be multiple sets for the same hook and type that must
      be concatenated, hence sorting them is required to ensure that bsearch
      in btf_id_set_contains continues to work correctly.
      
      Next commit will update the kernel users to make use of this
      infrastructure.
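
      As a hedged sketch, such a registration might look roughly like the following.
      The struct btf_kfunc_id_set fields, the BTF_SET macros and the XDP hook shown
      here are assumptions based on this description; bpf_foo_lookup is hypothetical.

      #include <linux/bpf.h>
      #include <linux/btf.h>
      #include <linux/btf_ids.h>
      #include <linux/module.h>

      BTF_SET_START(foo_xdp_check_kfunc_ids)
      BTF_ID(func, bpf_foo_lookup)            /* hypothetical kfunc */
      BTF_SET_END(foo_xdp_check_kfunc_ids)

      static const struct btf_kfunc_id_set foo_xdp_kfunc_set = {
              .owner     = THIS_MODULE,
              .check_set = &foo_xdp_check_kfunc_ids, /* 'check' type: allowed or not */
              /* .acquire_set / .release_set / .ret_null_set as needed */
      };

      static int __init foo_bpf_init(void)
      {
              /* hook = program type; the set lands in vmlinux or module BTF */
              return register_btf_kfunc_id_set(BPF_PROG_TYPE_XDP, &foo_xdp_kfunc_set);
      }
      late_initcall(foo_bpf_init);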
      
      Finally, add __maybe_unused annotation for BTF ID macros for the
      !CONFIG_DEBUG_INFO_BTF case, so that they don't produce warnings during
      build time.
      
      The previous patch is also needed to provide synchronization against
      initialization for module BTF's kfunc_set_tab introduced here, as
      described below:
      
        The kfunc_set_tab pointer in struct btf is write-once (if we consider
        the registration phase (comprised of multiple register_btf_kfunc_id_set
        calls) as a single operation). In this sense, once it has been fully
        prepared, it isn't modified, only used for lookup (from the verifier
        context).
      
        For btf_vmlinux, it is initialized fully during the do_initcalls phase,
        which happens fairly early in the boot process, before any processes are
        present. This also eliminates the possibility of bpf_check being called
        at that point, thus relieving us of ensuring any synchronization between
        the registration and lookup function (btf_kfunc_id_set_contains).
      
        However, the case for module BTF is a bit tricky. The BTF is parsed,
        prepared, and published from the MODULE_STATE_COMING notifier callback.
        After this, the module initcalls are invoked, where our registration
        function will be called to populate the kfunc_set_tab for module BTF.
      
        At this point, BTF may be available to userspace while its corresponding
        module is still initializing. A BTF fd can then be passed to the verifier
        using the bpf syscall (e.g. for a kfunc call insn).
      
        Hence, there is a race window where the verifier may concurrently try to
        look up the kfunc_set_tab. To prevent this race, we must either ensure
        that the operations are serialized, or wait for the __init functions to
        complete.
      
        In the earlier registration API, this race was alleviated as verifier
        bpf_check_mod_kfunc_call didn't find the kfunc BTF ID until it was added
        by the registration function (called usually at the end of module __init
        function after all module resources have been initialized). If the
        verifier made the check_kfunc_call before kfunc BTF ID was added to the
        list, it would fail verification (saying call isn't allowed). The
        access to the list was protected using a mutex.
      
        Now, it would still fail verification, but for a different reason
        (returning ENXIO due to the failed btf_try_get_module call in
        add_kfunc_call), because if the __init call is in progress the module
        will be in the middle of MODULE_STATE_COMING -> MODULE_STATE_LIVE
        transition, and the BTF_MODULE_LIVE flag for btf_module instance will
        not be set, so the btf_try_get_module call will fail.
      Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
      Link: https://lore.kernel.org/r/20220114163953.1455836-3-memxor@gmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      dee872e1
  3. 10 January 2022, 14 commits
  4. 08 January 2022, 1 commit
  5. 07 January 2022, 5 commits
  6. 06 January 2022, 11 commits
    • Bluetooth: hci_event: Rework hci_inquiry_result_with_rssi_evt · 72279d17
      Committed by Luiz Augusto von Dentz
      This reworks the handling of hci_inquiry_result_with_rssi_evt so that it
      no longer uses a union to represent the different inquiry responses.
      Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
      Tested-by: Soenke Huster <soenke.huster@eknoes.de>
      Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
      72279d17
    • gro: add ability to control gro max packet size · eac1b93c
      Committed by Coco Li
      Eric Dumazet suggested to allow users to modify max GRO packet size.
      
      We have seen GRO being disabled by users of appliances (such as
      wifi access points) because of claimed bufferbloat issues,
      or some workarounds in sch_cake, to split GRO/GSO packets.

      Instead of disabling GRO completely, one can choose to limit
      the maximum packet size of GRO packets, depending on their
      latency constraints.
      
      This patch adds a per-device gro_max_size attribute
      that can be changed with the ip link command:
      
      ip link set dev eth0 gro_max_size 16000
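
      Conceptually (a sketch, not the exact upstream merge-path code; only the
      gro_max_size field name comes from this patch), the limit acts like this:

      #include <linux/netdevice.h>

      /* A segment is not merged into a held GRO packet if the merge would push
       * the packet past the device's configured limit. */
      static bool gro_merge_would_exceed(const struct net_device *dev,
                                         unsigned int held_len,
                                         unsigned int seg_len)
      {
              /* READ_ONCE: the limit can be changed at runtime via ip link */
              return held_len + seg_len > READ_ONCE(dev->gro_max_size);
      }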
      Suggested-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: Coco Li <lixiaoyan@google.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      eac1b93c
    • net: fix SOF_TIMESTAMPING_BIND_PHC to work with multiple sockets · 007747a9
      Committed by Miroslav Lichvar
      When multiple sockets using the SOF_TIMESTAMPING_BIND_PHC flag received
      a packet with a hardware timestamp (e.g. multiple PTP instances in
      different PTP domains using the UDPv4/v6 multicast or L2 transport),
      the timestamps received on some sockets were corrupted due to repeated
      conversion of the same timestamp (by the same or different vclocks).
      
      Fix ptp_convert_timestamp() to not modify the shared skb timestamp
      and return the converted timestamp as a ktime_t instead. If the
      conversion fails, return 0 to not confuse the application with
      timestamps corresponding to an unexpected PHC.
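
      A hedged sketch of the caller-side pattern this implies (the reworked
      ptp_convert_timestamp() signature is assumed from the description above; the
      wrapper function shown here is hypothetical):

      static void example_report_hw_timestamp(struct sock *sk,
                                              struct skb_shared_hwtstamps *shhwtstamps)
      {
              /* Each socket converts the shared hardware timestamp into its own
               * bound vclock's time; the skb's copy is left untouched. */
              ktime_t hwts = ptp_convert_timestamp(&shhwtstamps->hwtstamp,
                                                   sk->sk_bind_phc);

              if (!hwts)
                      return; /* conversion failed: report nothing rather than a
                               * timestamp from the wrong PHC */

              /* ... hand hwts to this socket's timestamp reporting ... */
      }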
      
      Fixes: d7c08826 ("net: socket: support hardware timestamp conversion to PHC bound")
      Signed-off-by: Miroslav Lichvar <mlichvar@redhat.com>
      Cc: Yangbo Lu <yangbo.lu@nxp.com>
      Cc: Richard Cochran <richardcochran@gmail.com>
      Acked-by: Richard Cochran <richardcochran@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      007747a9
    • net: dsa: warn about dsa_port and dsa_switch bit fields being non atomic · 1b26d364
      Committed by Vladimir Oltean
      As discussed during review here:
      https://patchwork.kernel.org/project/netdevbpf/patch/20220105132141.2648876-3-vladimir.oltean@nxp.com/
      
      we should inform developers about the pitfalls of concurrent access to
      the boolean properties of dsa_switch and dsa_port, now that they've been
      converted to bit fields. No measure other than a comment needs to be
      taken, since the code paths that update these bit fields are not
      concurrent with each other.
      Suggested-by: Florian Fainelli <f.fainelli@gmail.com>
      Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      1b26d364
    • net: dsa: don't enumerate dsa_switch and dsa_port bit fields using commas · 63cfc657
      Committed by Vladimir Oltean
      This is a cosmetic incremental fixup to commits
      7787ff77 ("net: dsa: merge all bools of struct dsa_switch into a single u32")
      bde82f38 ("net: dsa: merge all bools of struct dsa_port into a single u8")
      
      The desire to make this change was enunciated after posting these
      patches here:
      https://patchwork.kernel.org/project/netdevbpf/cover/20220105132141.2648876-1-vladimir.oltean@nxp.com/
      
      but due to a slight timing overlap (message posted at 2:28 p.m. UTC,
      merge commit is at 2:46 p.m. UTC), that comment was missed and the
      changes were applied as-is.
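
      For illustration only (the field names here are hypothetical), the style change
      amounts to:

      #include <linux/types.h>

      /* before: bit fields enumerated with commas in a single declaration */
      struct dsa_port_example_before {
              u8 setup:1,
                 vlan_filtering:1;
      };

      /* after: one declaration per bit field */
      struct dsa_port_example_after {
              u8 setup:1;
              u8 vlan_filtering:1;
      };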
      Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      63cfc657
    • bootmem: Use page->index instead of page->freelist · c5e97ed1
      Committed by Matthew Wilcox (Oracle)
      page->freelist is reserved for the use of slab.  page->index occupies the
      same set of bits as page->freelist, and by using an integer instead of a
      pointer, we can avoid casts.
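
      An illustrative before/after of the cast avoidance (the accessor names below are
      hypothetical; only the field switch reflects this commit):

      #include <linux/mm_types.h>

      /* Before: storing an integer 'type' in the freelist pointer required casts:
       *      page->freelist = (void *)type;
       *      type = (unsigned long)page->freelist;
       */
      static void example_set_bootmem_type(struct page *page, unsigned long type)
      {
              page->index = type;             /* integer field, no cast needed */
      }

      static unsigned long example_get_bootmem_type(struct page *page)
      {
              return page->index;
      }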
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: <x86@kernel.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      c5e97ed1
    • mm/kasan: Convert to struct folio and struct slab · 6e48a966
      Committed by Matthew Wilcox (Oracle)
      KASAN accesses some slab-related struct page fields, so we need to
      convert it to struct slab. Some places are a bit simplified thanks to
      kasan_addr_to_slab() encapsulating the PageSlab flag check through
      virt_to_slab().  When resolving an object address to either a real slab
      or a large kmalloc, use struct folio as the intermediate type for
      testing the slab flag to avoid an unnecessary implicit compound_head().
      
      [ vbabka@suse.cz: use struct folio, adjust to differences in previous
        patches ]
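
      A hedged sketch of the folio-based dispatch described above (assumes the
      virt_to_folio(), folio_test_slab() and folio_slab() helpers used by this series;
      the function itself is purely illustrative):

      /* Resolve an arbitrary kernel address to its slab, or NULL for a large
       * kmalloc, testing the slab flag exactly once on the folio. */
      static struct slab *example_addr_to_slab(const void *addr)
      {
              struct folio *folio = virt_to_folio(addr);

              if (!folio_test_slab(folio))
                      return NULL;    /* large kmalloc: caller handles via the folio */
              return folio_slab(folio);
      }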
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      Tested-by: Hyeongogn Yoo <42.hyeyoo@gmail.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrey Konovalov <andreyknvl@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: <kasan-dev@googlegroups.com>
      6e48a966
    • mm/memcg: Convert slab objcgs from struct page to struct slab · 4b5f8d9a
      Committed by Vlastimil Babka
      page->memcg_data is used with MEMCG_DATA_OBJCGS flag only for slab pages
      so convert all the related infrastructure to struct slab. Also use
      struct folio instead of struct page when resolving object pointers.
      
      This is not just a mechanistic change of types and names. Now in
      mem_cgroup_from_obj() we use folio_test_slab() to decide whether to
      interpret the folio as a real slab instead of a large kmalloc, instead
      of relying on the MEMCG_DATA_OBJCGS bit that used to be checked in
      page_objcgs_check().
      Similarly in memcg_slab_free_hook() where we can encounter
      kmalloc_large() pages (here the folio slab flag check is implied by
      virt_to_slab()). As a result, page_objcgs_check() can be dropped instead
      of converted.
      
      To avoid include cycles, move the inline definition of slab_objcgs()
      from memcontrol.h to mm/slab.h.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
      Cc: <cgroups@vger.kernel.org>
      4b5f8d9a
    • mm: Convert struct page to struct slab in functions used by other subsystems · 40f3bf0c
      Committed by Vlastimil Babka
      KASAN, KFENCE and memcg interact with SLAB or SLUB internals through the
      functions nearest_obj(), obj_to_index() and objs_per_slab(), which take
      struct page as a parameter. This patch converts them to struct slab,
      including all callers, through a coccinelle semantic patch.
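
      For example, after the conversion the SLUB flavour of one such accessor might
      look roughly like this (an illustrative sketch; the real definition lives in
      slub_def.h and may differ):

      static inline unsigned int objs_per_slab(const struct kmem_cache *cache,
                                               const struct slab *slab)
      {
              return slab->objects;   /* object count now read from struct slab */
      }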
      
      // Options: --include-headers --no-includes --smpl-spacing include/linux/slab_def.h include/linux/slub_def.h mm/slab.h mm/kasan/*.c mm/kfence/kfence_test.c mm/memcontrol.c mm/slab.c mm/slub.c
      // Note: needs coccinelle 1.1.1 to avoid breaking whitespace
      
      @@
      @@
      
      -objs_per_slab_page(
      +objs_per_slab(
       ...
       )
       { ... }
      
      @@
      @@
      
      -objs_per_slab_page(
      +objs_per_slab(
       ...
       )
      
      @@
      identifier fn =~ "obj_to_index|objs_per_slab";
      @@
      
       fn(...,
      -   const struct page *page
      +   const struct slab *slab
          ,...)
       {
      <...
      (
      - page_address(page)
      + slab_address(slab)
      |
      - page
      + slab
      )
      ...>
       }
      
      @@
      identifier fn =~ "nearest_obj";
      @@
      
       fn(...,
      -   struct page *page
      +   const struct slab *slab
          ,...)
       {
      <...
      (
      - page_address(page)
      + slab_address(slab)
      |
      - page
      + slab
      )
      ...>
       }
      
      @@
      identifier fn =~ "nearest_obj|obj_to_index|objs_per_slab";
      expression E;
      @@
      
       fn(...,
      (
      - slab_page(E)
      + E
      |
      - virt_to_page(E)
      + virt_to_slab(E)
      |
      - virt_to_head_page(E)
      + virt_to_slab(E)
      |
      - page
      + page_slab(page)
      )
        ,...)
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Julia Lawall <julia.lawall@inria.fr>
      Cc: Luis Chamberlain <mcgrof@kernel.org>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrey Konovalov <andreyknvl@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Marco Elver <elver@google.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
      Cc: <kasan-dev@googlegroups.com>
      Cc: <cgroups@vger.kernel.org>
      40f3bf0c
    • mm/slub: Finish struct page to struct slab conversion · c2092c12
      Committed by Vlastimil Babka
      Update comments mentioning pages to mention slabs where appropriate, and
      rename some goto labels accordingly.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      c2092c12
    • mm/slub: Convert most struct page to struct slab by spatch · bb192ed9
      Committed by Vlastimil Babka
      The majority of conversion from struct page to struct slab in SLUB
      internals can be delegated to a coccinelle semantic patch. This includes
      renaming of variables with 'page' in name to 'slab', and similar.
      
      Big thanks to Julia Lawall and Luis Chamberlain for help with
      coccinelle.
      
      // Options: --include-headers --no-includes --smpl-spacing include/linux/slub_def.h mm/slub.c
      // Note: needs coccinelle 1.1.1 to avoid breaking whitespace, and ocaml for the
      // embedded script
      
      // build list of functions to exclude from applying the next rule
      @initialize:ocaml@
      @@
      
      let ok_function p =
        not (List.mem (List.hd p).current_element ["nearest_obj";"obj_to_index";"objs_per_slab_page";"__slab_lock";"__slab_unlock";"free_nonslab_page";"kmalloc_large_node"])
      
      // convert the type from struct page to struct slab in all functions except the
      // list from previous rule
      // this also affects struct kmem_cache_cpu, but that's ok
      @@
      position p : script:ocaml() { ok_function p };
      @@
      
      - struct page@p
      + struct slab
      
      // in struct kmem_cache_cpu, change the name from page to slab
      // the type was already converted by the previous rule
      @@
      @@
      
      struct kmem_cache_cpu {
      ...
      -struct slab *page;
      +struct slab *slab;
      ...
      }
      
      // there are many places that use c->page which is now c->slab after the
      // previous rule
      @@
      struct kmem_cache_cpu *c;
      @@
      
      -c->page
      +c->slab
      
      @@
      @@
      
      struct kmem_cache {
      ...
      - unsigned int cpu_partial_pages;
      + unsigned int cpu_partial_slabs;
      ...
      }
      
      @@
      struct kmem_cache *s;
      @@
      
      - s->cpu_partial_pages
      + s->cpu_partial_slabs
      
      @@
      @@
      
      static void
      - setup_page_debug(
      + setup_slab_debug(
       ...)
       {...}
      
      @@
      @@
      
      - setup_page_debug(
      + setup_slab_debug(
       ...);
      
      // for all functions (with exceptions), change any "struct slab *page"
      // parameter to "struct slab *slab" in the signature, and generally all
      // occurrences of "page" to "slab" in the body - with some special cases.
      
      @@
      identifier fn !~ "free_nonslab_page|obj_to_index|objs_per_slab_page|nearest_obj";
      @@
       fn(...,
      -   struct slab *page
      +   struct slab *slab
          ,...)
       {
      <...
      - page
      + slab
      ...>
       }
      
      // similar to previous but the param is called partial_page
      @@
      identifier fn;
      @@
      
       fn(...,
      -   struct slab *partial_page
      +   struct slab *partial_slab
          ,...)
       {
      <...
      - partial_page
      + partial_slab
      ...>
       }
      
      // similar to previous but for functions that take pointer to struct page ptr
      @@
      identifier fn;
      @@
      
       fn(...,
      -   struct slab **ret_page
      +   struct slab **ret_slab
          ,...)
       {
      <...
      - ret_page
      + ret_slab
      ...>
       }
      
      // functions converted by previous rules that were temporarily called using
      // slab_page(E) so we want to remove the wrapper now that they accept struct
      // slab ptr directly
      @@
      identifier fn =~ "slab_free|do_slab_free";
      expression E;
      @@
      
       fn(...,
      - slab_page(E)
      + E
        ,...)
      
      // similar to previous but for another pattern
      @@
      identifier fn =~ "slab_pad_check|check_object";
      @@
      
       fn(...,
      - folio_page(folio, 0)
      + slab
        ,...)
      
      // functions that were returning struct page ptr and now will return struct
      // slab ptr, including slab_page() wrapper removal
      @@
      identifier fn =~ "allocate_slab|new_slab";
      expression E;
      @@
      
       static
      -struct slab *
      +struct slab *
       fn(...)
       {
      <...
      - slab_page(E)
      + E
      ...>
       }
      
      // rename any former struct page * declarations
      @@
      @@
      
      struct slab *
      (
      - page
      + slab
      |
      - partial_page
      + partial_slab
      |
      - oldpage
      + oldslab
      )
      ;
      
      // this has to be separate from previous rule as page and page2 appear at the
      // same line
      @@
      @@
      
      struct slab *
      -page2
      +slab2
      ;
      
      // similar but with initial assignment
      @@
      expression E;
      @@
      
      struct slab *
      (
      - page
      + slab
      |
      - flush_page
      + flush_slab
      |
      - discard_page
      + slab_to_discard
      |
      - page_to_unfreeze
      + slab_to_unfreeze
      )
      = E;
      
      // convert most of struct page to struct slab usage inside functions (with
      // exceptions), including specific variable renames
      @@
      identifier fn !~ "nearest_obj|obj_to_index|objs_per_slab_page|__slab_(un)*lock|__free_slab|free_nonslab_page|kmalloc_large_node";
      expression E;
      @@
      
       fn(...)
       {
      <...
      (
      - int pages;
      + int slabs;
      |
      - int pages = E;
      + int slabs = E;
      |
      - page
      + slab
      |
      - flush_page
      + flush_slab
      |
      - partial_page
      + partial_slab
      |
      - oldpage->pages
      + oldslab->slabs
      |
      - oldpage
      + oldslab
      |
      - unsigned int nr_pages;
      + unsigned int nr_slabs;
      |
      - nr_pages
      + nr_slabs
      |
      - unsigned int partial_pages = E;
      + unsigned int partial_slabs = E;
      |
      - partial_pages
      + partial_slabs
      )
      ...>
       }
      
      // this has to be split out from the previous rule so that lines containing
      // multiple matching changes will be fully converted
      @@
      identifier fn !~ "nearest_obj|obj_to_index|objs_per_slab_page|__slab_(un)*lock|__free_slab|free_nonslab_page|kmalloc_large_node";
      @@
      
       fn(...)
       {
      <...
      (
      - slab->pages
      + slab->slabs
      |
      - pages
      + slabs
      |
      - page2
      + slab2
      |
      - discard_page
      + slab_to_discard
      |
      - page_to_unfreeze
      + slab_to_unfreeze
      )
      ...>
       }
      
      // after we simply changed all occurrences of page to slab, some usages need
      // adjustment for slab-specific functions, or use slab_page() wrapper
      @@
      identifier fn !~ "nearest_obj|obj_to_index|objs_per_slab_page|__slab_(un)*lock|__free_slab|free_nonslab_page|kmalloc_large_node";
      @@
      
       fn(...)
       {
      <...
      (
      - page_slab(slab)
      + slab
      |
      - kasan_poison_slab(slab)
      + kasan_poison_slab(slab_page(slab))
      |
      - page_address(slab)
      + slab_address(slab)
      |
      - page_size(slab)
      + slab_size(slab)
      |
      - PageSlab(slab)
      + folio_test_slab(slab_folio(slab))
      |
      - page_to_nid(slab)
      + slab_nid(slab)
      |
      - compound_order(slab)
      + slab_order(slab)
      )
      ...>
       }
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      Cc: Julia Lawall <julia.lawall@inria.fr>
      Cc: Luis Chamberlain <mcgrof@kernel.org>
      bb192ed9