1. May 21, 2022 (1 commit)
  2. May 14, 2022 (1 commit)
  3. May 12, 2022 (2 commits)
  4. May 11, 2022 (4 commits)
  5. Apr 27, 2022 (1 commit)
  6. Apr 26, 2022 (7 commits)
    • bpf: Make BTF type match stricter for release arguments · 2ab3b380
      Committed by Kumar Kartikeya Dwivedi
      The current behavior of btf_struct_ids_match for release arguments is
      that when the type match fails, it retries with the first member's type
      (recursively). Since the offset is already 0, this is akin to just
      casting the pointer in normal C: if the type matches, it was simply
      embedded inside the parent struct as an object. However, we want to
      reject such cases for release function type matching, be it a kfunc or
      a BPF helper.
      
      An example is the following:
      
      struct foo {
      	struct bar b;
      };
      
      struct foo *v = acq_foo();
      rel_bar(&v->b); // btf_struct_ids_match fails btf_types_are_same, then
      		// retries with first member type and succeeds, while
      		// it should fail.
      
      Hence, don't walk the struct and only rely on btf_types_are_same for
      strict mode. All users of strict mode must be dealing with zero offset
      anyway, since otherwise they would want the struct to be walked.
      Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20220424214901.2743946-10-memxor@gmail.com
    • bpf: Wire up freeing of referenced kptr · 14a324f6
      Committed by Kumar Kartikeya Dwivedi
      A destructor kfunc can be defined as void func(type *), where type may
      be void or any other pointer type as per convenience.
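
      For illustration, such a destructor might look like the sketch below
      (the function name is hypothetical; only the signature convention comes
      from this patch, and it assumes a kernel context where put_task_struct()
      is available):

      void bpf_task_dtor(void *p)
      {
      	struct task_struct *t = p;

      	/* drop the reference the map value was holding */
      	put_task_struct(t);
      }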
      
      In this patch, we ensure that the type is sane and capture the function
      pointer into off_desc of ptr_off_tab for the specific pointer offset,
      with the invariant that the dtor pointer is always set when 'kptr_ref'
      tag is applied to the pointer's pointee type, which is indicated by the
      flag BPF_MAP_VALUE_OFF_F_REF.
      
      Note that only BTF IDs whose destructor kfunc is registered become
      allowed BTF IDs for embedding as a referenced kptr. Hence, the
      registration serves both to find the dtor kfunc BTF ID and to act as a
      check against the whitelist of BTF IDs allowed for this purpose.
      
      Finally, wire up the actual freeing of the referenced pointer, if any,
      at all available offsets, so that no references are leaked after the BPF
      map goes away when a BPF program previously moved ownership of a
      referenced pointer into it.
      
      The behavior is similar to BPF timers, where bpf_map_{update,delete}_elem
      will free any existing referenced kptr. The same applies to the LRU map's
      bpf_lru_push_free/htab_lru_push_free functions, which are extended to
      reset unreferenced kptrs and free referenced ones.
      
      Note that unlike BPF timers, kptr is not reset or freed when map uref
      drops to zero.
      Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20220424214901.2743946-8-memxor@gmail.com
    • bpf: Adapt copy_map_value for multiple offset case · 4d7d7f69
      Committed by Kumar Kartikeya Dwivedi
      Since there may now be up to 10 offsets that need handling in
      copy_map_value, the manual shuffling and special casing is no longer
      workable. Hence, generalise the copy_map_value function by using a
      sorted array of offsets to skip regions that must be avoided while
      copying into and out of a map value.
      
      When the map is created, we populate the offset array in the map struct.
      copy_map_value then uses this sorted offset array to memcpy while
      skipping the timer, spin lock, and kptr fields. The array is allocated
      separately, since in most cases none of these special fields are present
      in the map value, so we save space in the common case by not embedding
      the entire object inside the bpf_map struct.
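
      A minimal userspace-style sketch of the skipping copy (the names are
      hypothetical; this is not the in-tree helper):

      #include <stdint.h>
      #include <string.h>

      struct skip_field { uint32_t off, sz; };	/* sorted by off */

      static void copy_skipping(char *dst, const char *src, uint32_t size,
      			  const struct skip_field *skip, uint32_t n)
      {
      	uint32_t i, cur = 0;

      	for (i = 0; i < n; i++) {
      		/* copy up to the start of the special field ... */
      		memcpy(dst + cur, src + cur, skip[i].off - cur);
      		/* ... then jump over it (timer, spin lock, or kptr) */
      		cur = skip[i].off + skip[i].sz;
      	}
      	memcpy(dst + cur, src + cur, size - cur);
      }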
      Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20220424214901.2743946-6-memxor@gmail.com
    • bpf: Prevent escaping of kptr loaded from maps · 6efe152d
      Committed by Kumar Kartikeya Dwivedi
      While, even for an unreferenced kptr, the case of the pointed-to object
      being freed can be handled by the verifier's exception handling (normal
      loads are patched to PROBE_MEM loads), we still cannot allow the user to
      pass these pointers to BPF helpers and kfuncs, because the same exception
      handling will not be done for accesses inside the kernel. The same is
      true if a referenced pointer is loaded using a normal load instruction:
      since the reference is not guaranteed to be held while the pointer is
      used, it must be marked as untrusted.
      
      Hence introduce a new type flag, PTR_UNTRUSTED, which is used to mark
      all registers loading unreferenced and referenced kptr from BPF maps,
      and ensure they can never escape the BPF program and into the kernel by
      way of calling stable/unstable helpers.
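
      For illustration, a BPF-side sketch (the map layout and program fragment
      are hypothetical; the __kptr macro follows the example shown further
      below in this series):

      struct map_value {
      	struct task_struct __kptr *task;
      };

      /* inside a BPF program, with v = bpf_map_lookup_elem(...) */
      struct task_struct *t = v->task;	/* PTR_TO_BTF_ID | PTR_UNTRUSTED */

      if (t)
      	bpf_printk("pid %d", t->pid);	/* dereference is fine (PROBE_MEM) */
      /* but passing t to a helper or kfunc is rejected by the verifier */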
      
      In check_ptr_to_btf_access, the !type_may_be_null check to reject type
      flags is still correct, as apart from PTR_MAYBE_NULL, only MEM_USER,
      MEM_PERCPU, and PTR_UNTRUSTED may be set for PTR_TO_BTF_ID. The first two
      are checked inside the function and rejected with a proper error message,
      but we still want to allow dereferencing in the untrusted case.
      
      Also, we make sure to inherit PTR_UNTRUSTED when a chain of pointers is
      walked, so that this flag is never dropped once it has been set on a
      PTR_TO_BTF_ID (i.e. the trusted to untrusted transition can only go in
      one direction).
      
      In convert_ctx_accesses, extend the switch case to consider untrusted
      PTR_TO_BTF_ID in addition to normal PTR_TO_BTF_ID for PROBE_MEM
      conversion for BPF_LDX.
      Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20220424214901.2743946-5-memxor@gmail.com
    • bpf: Allow storing referenced kptr in map · c0a5a21c
      Committed by Kumar Kartikeya Dwivedi
      Extending the code in previous commits, introduce referenced kptr
      support, which must be tagged using the 'kptr_ref' tag instead. Unlike
      unreferenced kptrs, referenced kptrs have many more restrictions. In
      addition to the type matching, only the newly introduced bpf_kptr_xchg
      helper is allowed to modify the map value at that offset. It transfers
      the referenced pointer being stored into the map, releasing the reference
      state for the program, and returns the old value, creating new reference
      state for the returned pointer.
      
      Similar to the unreferenced pointer case, the return value will also be
      PTR_TO_BTF_ID_OR_NULL. The reference for the returned pointer must
      eventually either be released by calling the corresponding release
      function, or be transferred into another map.
      
      It is also allowed to call bpf_kptr_xchg with a NULL pointer, to clear
      the value, and obtain the old value if any.
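
      A BPF-side sketch of the intended usage (the map layout and the release
      kfunc name are hypothetical; 'new' is assumed to already hold a
      referenced task_struct pointer):

      #define __kptr_ref __attribute__((btf_type_tag("kptr_ref")))

      struct map_value {
      	struct task_struct __kptr_ref *task;
      };

      /* v = bpf_map_lookup_elem(...) */
      struct task_struct *old;

      old = bpf_kptr_xchg(&v->task, new);	/* move 'new' into the map */
      if (old)
      	bpf_task_release(old);	/* hypothetical release kfunc for old value */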
      
      BPF_LDX, BPF_STX, and BPF_ST cannot access a referenced kptr. A future
      commit will permit using BPF_LDX for such pointers, while attempting to
      make it safe, since the lifetime of the object won't be guaranteed.
      
      There are valid reasons to enforce the restriction of permitting only
      bpf_kptr_xchg to operate on a referenced kptr. The pointer value must be
      consistent in the face of concurrent modification, and any prior value
      contained in the map must also be released before a new one is moved
      into the map. To ensure proper transfer of this ownership, bpf_kptr_xchg
      returns the old value, which the verifier requires the user to either
      free or move into another map, and it releases the reference held for
      the pointer being moved in.
      
      In the future, direct BPF_XCHG instruction may also be permitted to work
      like bpf_kptr_xchg helper.
      
      Note that process_kptr_func doesn't have to call check_helper_mem_access,
      since we already disallow rdonly/wronly flags for the map, which is what
      check_map_access_type checks, and we already ensure the PTR_TO_MAP_VALUE
      refers to a kptr by obtaining its off_desc, so check_map_access is also
      not required.
      Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20220424214901.2743946-4-memxor@gmail.com
    • bpf: Tag argument to be released in bpf_func_proto · 8f14852e
      Committed by Kumar Kartikeya Dwivedi
      Add a new type flag for bpf_arg_type that, when set, tells the verifier
      that for a release function, that argument's register will be the one
      for which meta.ref_obj_id will be set, and which will then be released
      using release_reference. To capture the regno, introduce a new field
      release_regno in bpf_call_arg_meta.
      
      This will be required in the next patch, where we may pass either NULL
      or a refcounted pointer as an argument to the release function
      bpf_kptr_xchg. Releasing only when meta.ref_obj_id is set is not enough,
      as there is a case where the needed argument type matches but ref_obj_id
      is set to 0. Hence, we must enforce that whenever meta.ref_obj_id is
      zero, the register that is to be released can only be NULL for a release
      function.
      
      Since we now indicate in bpf_func_proto itself whether an argument is to
      be released, the is_release_function helper has lost its utility; hence,
      refactor the code to work without it and just rely on meta.release_regno
      to know when to release state for a ref_obj_id. Still, the restriction
      of one release argument and only one ref_obj_id passed to a BPF helper
      or kfunc remains. This may be lifted in the future.
      Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20220424214901.2743946-3-memxor@gmail.com
    • bpf: Allow storing unreferenced kptr in map · 61df10c7
      Committed by Kumar Kartikeya Dwivedi
      This commit introduces a new pointer type 'kptr' which can be embedded
      in a map value to hold a PTR_TO_BTF_ID stored by a BPF program during
      its invocation. When storing such a kptr, BPF program's PTR_TO_BTF_ID
      register must have the same type as in the map value's BTF, and loading
      a kptr marks the destination register as PTR_TO_BTF_ID with the correct
      kernel BTF and BTF ID.
      
      Such kptrs are unreferenced, i.e. by the time another invocation of the
      BPF program loads the pointer, the object the pointer points to may no
      longer exist. Since PTR_TO_BTF_ID loads (using BPF_LDX) are patched to
      PROBE_MEM loads by the verifier, it would be safe to allow the user to
      still access such an invalid pointer, but passing such pointers into BPF
      helpers and kfuncs should not be permitted. A future patch in this
      series will close this gap.
      
      The flexibility offered by allowing programs to dereference such invalid
      pointers while remaining safe at runtime frees the verifier from doing
      complex lifetime tracking. As long as the user can ensure that the
      object remains valid, it can ensure that the data it reads from the
      kernel object is valid.
      
      The user indicates that a certain pointer must be treated as a kptr
      capable of accepting stores of PTR_TO_BTF_ID of a certain type by using
      the BTF type tag 'kptr' on the pointed-to type of the pointer. This
      information is recorded in the object BTF, which is passed into the
      kernel by way of the map's BTF information. The name and kind from the
      map value BTF are used to look up the in-kernel type, and the actual BTF
      and BTF ID are recorded in the map struct in a new kptr_off_tab member.
      For now, only storing pointers to structs is permitted.
      
      An example of this specification is shown below:
      
      	#define __kptr __attribute__((btf_type_tag("kptr")))
      
      	struct map_value {
      		...
      		struct task_struct __kptr *task;
      		...
      	};
      
      Then, in a BPF program, the user may store a PTR_TO_BTF_ID with type
      task_struct into the map and load it back later.
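
      A sketch of such a program (the map definition, section name, and
      program are illustrative and assume libbpf's bpf_helpers.h):

      struct {
      	__uint(type, BPF_MAP_TYPE_ARRAY);
      	__uint(max_entries, 1);
      	__type(key, int);
      	__type(value, struct map_value);
      } kptr_map SEC(".maps");

      SEC("tp_btf/sys_enter")
      int store_task(void *ctx)
      {
      	int key = 0;
      	struct map_value *v = bpf_map_lookup_elem(&kptr_map, &key);

      	if (!v)
      		return 0;
      	/* BPF_STX of a PTR_TO_BTF_ID whose type matches the map BTF */
      	v->task = bpf_get_current_task_btf();
      	/* a later BPF_LDX of v->task yields PTR_TO_BTF_ID_OR_NULL */
      	return 0;
      }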
      
      Note that the destination register is marked PTR_TO_BTF_ID_OR_NULL: as
      the verifier cannot know statically whether the value is NULL or not, it
      must treat all potential loads at that map value offset as loading a
      possibly NULL pointer.
      
      Only BPF_LDX, BPF_STX, and BPF_ST (with insn->imm = 0 to denote NULL)
      are allowed instructions that can access such a pointer. On BPF_LDX, the
      destination register is updated to be a PTR_TO_BTF_ID, and on BPF_STX,
      it is checked whether the source register type is a PTR_TO_BTF_ID with
      same BTF type as specified in the map BTF. The access size must always
      be BPF_DW.
      
      For map in map support, the kptr_off_tab for the outer map is copied
      from the inner map's kptr_off_tab. A deep copy was chosen over
      introducing a refcount to kptr_off_tab, because the copy only needs to
      be done when parameterizing using inner_map_fd in the map in map case,
      and hence would be unnecessary for all other users.
      
      It is not permitted to use MAP_FREEZE command and mmap for BPF map
      having kptrs, similar to the bpf_timer case. A kptr also requires that
      BPF program has both read and write access to the map (hence both
      BPF_F_RDONLY_PROG and BPF_F_WRONLY_PROG are disallowed).
      
      Note that check_map_access must be called from both
      check_helper_mem_access and for the BPF instructions, hence the kptr
      check must distinguish between ACCESS_DIRECT and ACCESS_HELPER, and
      reject ACCESS_HELPER cases. We rename stack_access_src to bpf_access_src
      and reuse it for this purpose.
      Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20220424214901.2743946-2-memxor@gmail.com
  7. Apr 20, 2022 (1 commit)
  8. Mar 06, 2022 (1 commit)
    • bpf: Reject programs that try to load __percpu memory. · 5844101a
      Committed by Hao Luo
      With the introduction of the btf_type_tag "percpu", we can add a
      MEM_PERCPU flag to identify those pointers that point to percpu memory.
      The ability to differentiate percpu pointers from regular memory
      pointers has two benefits:
      
       1. It forbids unexpected use of percpu pointers, such as direct loads.
          In kernel, there are special functions used for accessing percpu
          memory. Directly loading percpu memory is meaningless. We already
          have BPF helpers like bpf_per_cpu_ptr() and bpf_this_cpu_ptr() that
          wrap the kernel percpu functions. So we can now convert percpu
          pointers into regular pointers in a safe way.
      
       2. Previously, bpf_per_cpu_ptr() and bpf_this_cpu_ptr() only worked on
          PTR_TO_PERCPU_BTF_ID, a special reg_type which describes static
          percpu variables in the kernel (we rely on pahole to encode them
          into vmlinux BTF). Now, since we can identify __percpu tagged
          pointers, we can also identify dynamically allocated percpu memory.
          It means we can use bpf_xxx_cpu_ptr() on dynamic percpu memory. This
          is very convenient when accessing fields like "cgroup->rstat_cpu"
          (see the sketch after this list).
      Signed-off-by: Hao Luo <haoluo@google.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Yonghong Song <yhs@fb.com>
      Link: https://lore.kernel.org/bpf/20220304191657.981240-4-haoluo@google.com
  9. Feb 21, 2022 (1 commit)
  10. Feb 12, 2022 (2 commits)
    • bpf: Fix a bpf_timer initialization issue · 5eaed6ee
      Committed by Yonghong Song
      The patch in [1] intends to fix a bpf_timer related issue, but the fix
      caused the existing 'timer' selftest to fail with a hang or random
      errors. After some debugging, I found an issue with
      check_and_init_map_value() in hashtab.c.
      More specifically, in hashtab.c, we have code
        l_new = bpf_map_kmalloc_node(&htab->map, ...)
        check_and_init_map_value(&htab->map, l_new...)
      Note that bpf_map_kmalloc_node() does not do any initialization, so
      l_new contains random values.
      
      The function check_and_init_map_value() intends to zero the
      bpf_spin_lock and bpf_timer if they exist in the map value. But I found
      that bpf_spin_lock is zero'ed while bpf_timer is not. With [1],
      copy_map_value() later skips copying of bpf_spin_lock and bpf_timer. The
      non-zero bpf_timer caused random failures for the 'timer' selftest.
      Without [1], in both the bpf_spin_lock and bpf_timer cases, bpf_timer
      will be zero'ed, so the 'timer' selftest is okay.
      
      Why, then, is bpf_spin_lock zero'ed properly in
      check_and_init_map_value() while bpf_timer is not? In the bpf uapi
      header, we have
        struct bpf_spin_lock {
              __u32   val;
        };
        struct bpf_timer {
              __u64 :64;
              __u64 :64;
        } __attribute__((aligned(8)));
      
      The initialization code:
        *(struct bpf_spin_lock *)(dst + map->spin_lock_off) =
            (struct bpf_spin_lock){};
        *(struct bpf_timer *)(dst + map->timer_off) =
            (struct bpf_timer){};
      It appears the compiler has no obligation to initialize anonymous fields.
      For example, let us use clang with bpf target as below:
        $ cat t.c
        struct bpf_timer {
              unsigned long long :64;
        };
        struct bpf_timer2 {
              unsigned long long a;
        };
      
        void test(struct bpf_timer *t) {
          *t = (struct bpf_timer){};
        }
        void test2(struct bpf_timer2 *t) {
          *t = (struct bpf_timer2){};
        }
        $ clang -target bpf -O2 -c -g t.c
        $ llvm-objdump -d t.o
         ...
         0000000000000000 <test>:
             0:       95 00 00 00 00 00 00 00 exit
         0000000000000008 <test2>:
             1:       b7 02 00 00 00 00 00 00 r2 = 0
             2:       7b 21 00 00 00 00 00 00 *(u64 *)(r1 + 0) = r2
             3:       95 00 00 00 00 00 00 00 exit
      
      gcc11.2 does not have the above issue. But from
        INTERNATIONAL STANDARD ©ISO/IEC ISO/IEC 9899:201x
        Programming languages — C
        http://www.open-std.org/Jtc1/sc22/wg14/www/docs/n1547.pdf
        page 157:
        Except where explicitly stated otherwise, for the purposes of
        this subclause unnamed members of objects of structure and union
        type do not participate in initialization. Unnamed members of
        structure objects have indeterminate value even after initialization.
      
      To fix the problem, let us use memset for the bpf_timer case in
      check_and_init_map_value(). For consistency, memset is also used for the
      bpf_spin_lock case.
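
      The gist of the fix, sketched (not necessarily the exact in-tree code):

        if (unlikely(map_value_has_spin_lock(map)))
                memset(dst + map->spin_lock_off, 0,
                       sizeof(struct bpf_spin_lock));
        if (unlikely(map_value_has_timer(map)))
                memset(dst + map->timer_off, 0, sizeof(struct bpf_timer));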
      
        [1] https://lore.kernel.org/bpf/20220209070324.1093182-2-memxor@gmail.com/
      
      Fixes: 68134668 ("bpf: Add map side support for bpf timers.")
      Signed-off-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20220211194953.3142152-1-yhs@fb.com
    • bpf: Fix crash due to incorrect copy_map_value · a8abb0c3
      Committed by Kumar Kartikeya Dwivedi
      When both bpf_spin_lock and bpf_timer are present in a BPF map value,
      copy_map_value needs to skirt both objects when copying a value into and
      out of the map. However, the current code does not set both s_off and
      t_off in copy_map_value, which leads to a crash when e.g. bpf_spin_lock
      is placed in a map value together with bpf_timer, as a
      bpf_map_update_elem call will be able to overwrite the timer object.
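
      The gist, sketched (simplified from copy_map_value, not the exact
      in-tree code): both offsets must be computed independently rather than
      as mutually exclusive cases.

      	u32 s_off = 0, s_sz = 0, t_off = 0, t_sz = 0;

      	if (unlikely(map_value_has_spin_lock(map))) {
      		s_off = map->spin_lock_off;
      		s_sz = sizeof(struct bpf_spin_lock);
      	}
      	if (unlikely(map_value_has_timer(map))) {	/* not an 'else if' */
      		t_off = map->timer_off;
      		t_sz = sizeof(struct bpf_timer);
      	}
      	/* then memcpy around both [s_off, s_off+s_sz) and [t_off, t_off+t_sz) */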
      
      Without the fix, such an overwrite can produce the following splat:
      
      [root@(none) bpf]# ./test_progs -t timer_crash
      [   15.930339] bpf_testmod: loading out-of-tree module taints kernel.
      [   16.037849] ==================================================================
      [   16.038458] BUG: KASAN: user-memory-access in __pv_queued_spin_lock_slowpath+0x32b/0x520
      [   16.038944] Write of size 8 at addr 0000000000043ec0 by task test_progs/325
      [   16.039399]
      [   16.039514] CPU: 0 PID: 325 Comm: test_progs Tainted: G           OE     5.16.0+ #278
      [   16.039983] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS ArchLinux 1.15.0-1 04/01/2014
      [   16.040485] Call Trace:
      [   16.040645]  <TASK>
      [   16.040805]  dump_stack_lvl+0x59/0x73
      [   16.041069]  ? __pv_queued_spin_lock_slowpath+0x32b/0x520
      [   16.041427]  kasan_report.cold+0x116/0x11b
      [   16.041673]  ? __pv_queued_spin_lock_slowpath+0x32b/0x520
      [   16.042040]  __pv_queued_spin_lock_slowpath+0x32b/0x520
      [   16.042328]  ? memcpy+0x39/0x60
      [   16.042552]  ? pv_hash+0xd0/0xd0
      [   16.042785]  ? lockdep_hardirqs_off+0x95/0xd0
      [   16.043079]  __bpf_spin_lock_irqsave+0xdf/0xf0
      [   16.043366]  ? bpf_get_current_comm+0x50/0x50
      [   16.043608]  ? jhash+0x11a/0x270
      [   16.043848]  bpf_timer_cancel+0x34/0xe0
      [   16.044119]  bpf_prog_c4ea1c0f7449940d_sys_enter+0x7c/0x81
      [   16.044500]  bpf_trampoline_6442477838_0+0x36/0x1000
      [   16.044836]  __x64_sys_nanosleep+0x5/0x140
      [   16.045119]  do_syscall_64+0x59/0x80
      [   16.045377]  ? lock_is_held_type+0xe4/0x140
      [   16.045670]  ? irqentry_exit_to_user_mode+0xa/0x40
      [   16.046001]  ? mark_held_locks+0x24/0x90
      [   16.046287]  ? asm_exc_page_fault+0x1e/0x30
      [   16.046569]  ? asm_exc_page_fault+0x8/0x30
      [   16.046851]  ? lockdep_hardirqs_on+0x7e/0x100
      [   16.047137]  entry_SYSCALL_64_after_hwframe+0x44/0xae
      [   16.047405] RIP: 0033:0x7f9e4831718d
      [   16.047602] Code: b4 0c 00 0f 05 eb a9 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d b3 6c 0c 00 f7 d8 64 89 01 48
      [   16.048764] RSP: 002b:00007fff488086b8 EFLAGS: 00000206 ORIG_RAX: 0000000000000023
      [   16.049275] RAX: ffffffffffffffda RBX: 00007f9e48683740 RCX: 00007f9e4831718d
      [   16.049747] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 00007fff488086d0
      [   16.050225] RBP: 00007fff488086f0 R08: 00007fff488085d7 R09: 00007f9e4cb594a0
      [   16.050648] R10: 0000000000000000 R11: 0000000000000206 R12: 00007f9e484cde30
      [   16.051124] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
      [   16.051608]  </TASK>
      [   16.051762] ==================================================================
      
      Fixes: 68134668 ("bpf: Add map side support for bpf timers.")
      Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20220209070324.1093182-2-memxor@gmail.com
  11. Feb 08, 2022 (3 commits)
  12. Jan 28, 2022 (2 commits)
  13. Jan 27, 2022 (1 commit)
  14. Jan 25, 2022 (1 commit)
  15. Jan 22, 2022 (2 commits)
  16. Jan 21, 2022 (1 commit)
  17. Jan 20, 2022 (3 commits)
  18. Jan 19, 2022 (2 commits)
    • bpf: Fix ringbuf memory type confusion when passing to helpers · a672b2e3
      Committed by Daniel Borkmann
      The bpf_ringbuf_submit() and bpf_ringbuf_discard() helpers have
      ARG_PTR_TO_ALLOC_MEM in their bpf_func_proto definition as their first
      argument, and thus both expect the result from a prior
      bpf_ringbuf_reserve() call, which has a return type of
      RET_PTR_TO_ALLOC_MEM_OR_NULL.
      
      While the non-NULL memory from bpf_ringbuf_reserve() can be passed to other
      helpers, the two sinks (bpf_ringbuf_submit(), bpf_ringbuf_discard()) right now
      only enforce a register type of PTR_TO_MEM.
      
      This can lead to potential type confusion since it would allow other PTR_TO_MEM
      memory to be passed into the two sinks which did not come from bpf_ringbuf_reserve().
      
      Add a new MEM_ALLOC composable type attribute for PTR_TO_MEM, and enforce that:
      
       - bpf_ringbuf_reserve() returns NULL or PTR_TO_MEM | MEM_ALLOC
       - bpf_ringbuf_submit() and bpf_ringbuf_discard() only take PTR_TO_MEM | MEM_ALLOC
         but not plain PTR_TO_MEM arguments via ARG_PTR_TO_ALLOC_MEM
       - however, other helpers might treat PTR_TO_MEM | MEM_ALLOC as plain PTR_TO_MEM
         to populate the memory area when they use ARG_PTR_TO_{UNINIT_,}MEM in their
         func proto description
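
      A BPF-side sketch of the expected flow (the map and program names are
      hypothetical, assuming libbpf's bpf_helpers.h):

      struct event { int pid; };

      struct {
      	__uint(type, BPF_MAP_TYPE_RINGBUF);
      	__uint(max_entries, 4096);
      } rb SEC(".maps");

      SEC("tracepoint/syscalls/sys_enter_nanosleep")
      int log_event(void *ctx)
      {
      	/* e is PTR_TO_MEM | MEM_ALLOC (or NULL) */
      	struct event *e = bpf_ringbuf_reserve(&rb, sizeof(*e), 0);

      	if (!e)
      		return 0;
      	e->pid = bpf_get_current_pid_tgid() >> 32;
      	/* only MEM_ALLOC memory may be submitted or discarded */
      	bpf_ringbuf_submit(e, 0);
      	return 0;
      }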
      
      Fixes: 457f4436 ("bpf: Implement BPF ring buffer and verifier support for it")
      Reported-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: Remove check_kfunc_call callback and old kfunc BTF ID API · b202d844
      Committed by Kumar Kartikeya Dwivedi
      Completely remove the old code for check_kfunc_call to help it work
      with modules, and also the callback itself.
      
      The previous commit adds infrastructure to register all sets and put
      them in vmlinux or module BTF, and concatenates all related sets
      organized by the hook and the type. Once populated, these sets remain
      immutable for the lifetime of the struct btf.
      
      Also, since we don't need the 'owner' module anywhere when doing
      check_kfunc_call, drop the 'btf_modp' module parameter from
      find_kfunc_desc_btf.
      Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
      Link: https://lore.kernel.org/r/20220114163953.1455836-4-memxor@gmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
  19. Jan 06, 2022 (1 commit)
  20. Dec 19, 2021 (3 commits)