1. 11 Oct 2022 · 4 commits
  2. 18 Aug 2022 · 2 commits
  3. 16 Aug 2022 · 1 commit
  4. 12 Aug 2022 · 1 commit
  5. 30 Jul 2022 · 1 commit
  6. 30 Jun 2022 · 1 commit
  7. 29 Jun 2022 · 2 commits
  8. 20 May 2022 · 1 commit
  9. 13 May 2022 · 1 commit
    • libbpf: Add safer high-level wrappers for map operations · 737d0646
      Authored by Andrii Nakryiko
      Add high-level API wrappers for the most common and typical BPF map
      operations that work directly on instances of struct bpf_map * (so
      you don't have to call bpf_map__fd()) and validate key/value size
      expectations.
      
      These helpers require users to specify key (and value, where
      appropriate) sizes when performing lookup/update/delete/etc. This
      forces users to actually think about and validate those sizes for
      themselves. That is a good thing, because the kernel expects the user
      to implicitly provide correctly sized key/value buffers and will just
      read/write the necessary amount of data. If the user doesn't set up
      the buffers correctly (which has bitten people with per-CPU maps
      especially), the kernel either randomly overwrites stack data or
      returns -EFAULT, depending on the user's luck and circumstances.
      These high-level APIs are meant to prevent such unpleasant and
      hard-to-debug bugs.
      
      This patch also adds bpf_map_delete_elem_flags() low-level API and
      requires passing flags to bpf_map__delete_elem() API for consistency
      across all similar APIs, even though currently kernel doesn't expect
      any extra flags for BPF_MAP_DELETE_ELEM operation.
      
      List of map operations that get these high-level APIs:
      
        - bpf_map_lookup_elem;
        - bpf_map_update_elem;
        - bpf_map_delete_elem;
        - bpf_map_lookup_and_delete_elem;
        - bpf_map_get_next_key.
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/20220512220713.2617964-1-andrii@kernel.org
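The size validation these wrappers perform can be sketched without a live kernel. The toy lookup below mirrors the check that bpf_map__lookup_elem() applies (caller-supplied key/value sizes are compared against the map definition before any buffer is touched); all names here are illustrative stand-ins, not libbpf's actual implementation.

```c
#include <errno.h>
#include <stddef.h>
#include <string.h>

/* Toy stand-in for a map definition; real code gets these sizes from
 * struct bpf_map. */
struct toy_map {
	size_t key_size;
	size_t value_size;
	const void *backing_key;
	const void *backing_value;
};

/* Mirror the wrapper's validation: reject mismatched sizes with -EINVAL
 * at the API boundary instead of letting the kernel read or write past
 * the caller's buffer. */
static int toy_map_lookup_elem(const struct toy_map *map,
			       const void *key, size_t key_sz,
			       void *value, size_t value_sz)
{
	if (key_sz != map->key_size || value_sz != map->value_size)
		return -EINVAL;		/* caught before any copying */
	if (memcmp(key, map->backing_key, key_sz) != 0)
		return -ENOENT;
	memcpy(value, map->backing_value, value_sz);
	return 0;
}
```

For per-CPU maps the expected value size scales with the number of possible CPUs, which is exactly the case where such a check catches the classic stack-corruption bug described above.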
  10. 11 May 2022 · 1 commit
  11. 23 Apr 2022 · 1 commit
    • libbpf: Teach bpf_link_create() to fallback to bpf_raw_tracepoint_open() · 8462e0b4
      Authored by Andrii Nakryiko
      Teach bpf_link_create() to fall back to bpf_raw_tracepoint_open() on
      older kernels for programs that are attachable through
      BPF_RAW_TRACEPOINT_OPEN. This makes bpf_link_create() a more unified
      and convenient interface for creating bpf_link-based attachments.
      
      With this approach end users can just use bpf_link_create() for
      tp_btf/fentry/fexit/fmod_ret/lsm program attachments without needing
      to care about kernel support, as libbpf will handle this
      transparently. On the other hand, as newer features (like BPF cookie)
      are added to the LINK_CREATE interface, they will be readily usable
      through the same bpf_link_create() API without any major refactoring
      from the user's standpoint.
      
      bpf_program__attach_btf_id() now uses bpf_link_create() internally
      as well and will take advantage of this unified interface when BPF
      cookie is added for fentry/fexit.
      
      Doing proactive feature detection of LINK_CREATE support for
      fentry/tp_btf/etc is quite involved. It requires parsing vmlinux BTF,
      determining a stable target BTF type guaranteed to exist in all
      kernel versions (either a raw tracepoint or an fentry target
      function), actually attaching such a program and thus potentially
      briefly affecting the performance of the host kernel, etc. So instead
      we take the much simpler "lazy" approach of falling back to the
      bpf_raw_tracepoint_open() call only if the initial LINK_CREATE
      command fails. For modern kernels this means zero added overhead,
      while older kernels incur the minimal overhead of a single
      fast-failing LINK_CREATE call.
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Reviewed-by: Kui-Feng Lee <kuifeng@fb.com>
      Link: https://lore.kernel.org/bpf/20220421033945.3602803-3-andrii@kernel.org
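The "lazy" fallback itself is a small, generic pattern; here is a plain-C sketch of it, with the two helpers standing in for the LINK_CREATE and BPF_RAW_TRACEPOINT_OPEN commands (names and return values are illustrative, not libbpf code):

```c
#include <errno.h>

/* Stand-in for the modern LINK_CREATE command: succeeds only when the
 * (simulated) kernel supports it. */
static int try_link_create(int kernel_has_link_create)
{
	return kernel_has_link_create ? 42 /* link FD */ : -EINVAL;
}

/* Stand-in for the legacy BPF_RAW_TRACEPOINT_OPEN command. */
static int raw_tracepoint_open(void)
{
	return 7; /* link FD via the legacy path */
}

/* The lazy strategy: no upfront feature probing; attempt the modern
 * command first and fall back only when it fails. */
static int attach_program(int kernel_has_link_create)
{
	int fd = try_link_create(kernel_has_link_create);

	if (fd >= 0)
		return fd;	/* modern kernel: zero extra overhead */
	/* old kernel: one fast-failing attempt, then the legacy command */
	return raw_tracepoint_open();
}
```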
  12. 18 Mar 2022 · 1 commit
  13. 10 Mar 2022 · 1 commit
  14. 08 Mar 2022 · 1 commit
  15. 13 Jan 2022 · 1 commit
  16. 07 Jan 2022 · 1 commit
  17. 15 Dec 2021 · 1 commit
    • libbpf: Auto-bump RLIMIT_MEMLOCK if kernel needs it for BPF · e542f2c4
      Authored by Andrii Nakryiko
      The need to increase RLIMIT_MEMLOCK to do anything useful with BPF is
      one of the first extremely frustrating gotchas that all new BPF users
      run into, and in some cases have to learn about the hard way.
      
      Luckily, starting with upstream Linux kernel version 5.11, the BPF
      subsystem dropped the dependency on memlock and uses memcg-based
      memory accounting instead. Unfortunately, detecting memcg-based BPF
      memory accounting is far from trivial (as can be evidenced by this
      patch), so in practice most BPF applications still increase
      RLIMIT_MEMLOCK unconditionally.
      
      As we move towards libbpf 1.0, it would be good to allow users to forget
      about RLIMIT_MEMLOCK vs memcg and let libbpf do the sensible adjustment
      automatically. This patch paves the way forward in this matter. Libbpf
      will do feature detection of memcg-based accounting, and if detected,
      will do nothing. But if the kernel is too old, then, just like BCC,
      libbpf will automatically increase RLIMIT_MEMLOCK on behalf of the
      user application ([0]).
      
      As this is technically a breaking change, during the transition period
      applications have to opt into libbpf 1.0 mode by setting
      LIBBPF_STRICT_AUTO_RLIMIT_MEMLOCK bit when calling
      libbpf_set_strict_mode().
      
      Libbpf allows controlling the exact RLIMIT_MEMLOCK value it sets via
      the libbpf_set_memlock_rlim_max() API. Passing 0 will make libbpf do
      nothing with RLIMIT_MEMLOCK. libbpf_set_memlock_rlim_max() has to be
      called before the first bpf_prog_load(), bpf_btf_load(), or
      bpf_object__load() call; otherwise it has no effect and will return
      -EBUSY.
      
        [0] Closes: https://github.com/libbpf/libbpf/issues/369
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/20211214195904.1785155-2-andrii@kernel.org
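On kernels without memcg-based accounting, the auto-bump amounts to a setrlimit(RLIMIT_MEMLOCK, ...) call. Below is a minimal plain-POSIX sketch, reusing the "0 means leave RLIMIT_MEMLOCK alone" convention of libbpf_set_memlock_rlim_max() described above (this is not libbpf's actual code):

```c
#include <sys/resource.h>

/* Roughly what the auto-bump does on pre-5.11 kernels: raise
 * RLIMIT_MEMLOCK so BPF allocations aren't charged against the small
 * default limit. Passing 0 means "don't touch the limit", matching the
 * libbpf_set_memlock_rlim_max(0) semantics described above. */
static int bump_memlock_rlimit(unsigned long long max_bytes)
{
	struct rlimit rlim;

	if (max_bytes == 0)
		return 0;	/* caller opted out of any adjustment */

	rlim.rlim_cur = max_bytes;
	rlim.rlim_max = max_bytes;
	return setrlimit(RLIMIT_MEMLOCK, &rlim);
}
```

Raising the hard limit requires CAP_SYS_RESOURCE, so unprivileged callers may still see setrlimit() fail with EPERM.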
  18. 14 Dec 2021 · 1 commit
  19. 11 Dec 2021 · 2 commits
  20. 27 Nov 2021 · 1 commit
    • bpf, mips: Fix build errors about __NR_bpf undeclared · e32cb12f
      Authored by Tiezhu Yang
      Add the __NR_bpf definitions to fix the following build errors for mips:
      
        $ cd tools/bpf/bpftool
        $ make
        [...]
        bpf.c:54:4: error: #error __NR_bpf not defined. libbpf does not support your arch.
         #  error __NR_bpf not defined. libbpf does not support your arch.
            ^~~~~
        bpf.c: In function ‘sys_bpf’:
        bpf.c:66:17: error: ‘__NR_bpf’ undeclared (first use in this function); did you mean ‘__NR_brk’?
          return syscall(__NR_bpf, cmd, attr, size);
                         ^~~~~~~~
                         __NR_brk
        [...]
        In file included from gen_loader.c:15:0:
        skel_internal.h: In function ‘skel_sys_bpf’:
        skel_internal.h:53:17: error: ‘__NR_bpf’ undeclared (first use in this function); did you mean ‘__NR_brk’?
          return syscall(__NR_bpf, cmd, attr, size);
                         ^~~~~~~~
                         __NR_brk
      
      We can see the following generated definitions:
      
        $ grep -r "#define __NR_bpf" arch/mips
        arch/mips/include/generated/uapi/asm/unistd_o32.h:#define __NR_bpf (__NR_Linux + 355)
        arch/mips/include/generated/uapi/asm/unistd_n64.h:#define __NR_bpf (__NR_Linux + 315)
        arch/mips/include/generated/uapi/asm/unistd_n32.h:#define __NR_bpf (__NR_Linux + 319)
      
      The __NR_Linux is defined in arch/mips/include/uapi/asm/unistd.h:
      
        $ grep -r "#define __NR_Linux" arch/mips
        arch/mips/include/uapi/asm/unistd.h:#define __NR_Linux	4000
        arch/mips/include/uapi/asm/unistd.h:#define __NR_Linux	5000
        arch/mips/include/uapi/asm/unistd.h:#define __NR_Linux	6000
      
      That is to say, __NR_bpf is:
      
        4000 + 355 = 4355 for mips o32,
        6000 + 319 = 6319 for mips n32,
        5000 + 315 = 5315 for mips n64.
      
      So use the GCC pre-defined macros _ABIO32, _ABIN32 and _ABI64 [1] to
      define the corresponding __NR_bpf values.
      
      This patch is similar to commit bad1926d ("bpf, s390: fix build for
      libbpf and selftest suite").
      
        [1] https://gcc.gnu.org/git/?p=gcc.git;a=blob;f=gcc/config/mips/mips.h#l549
      Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/1637804167-8323-1-git-send-email-yangtiezhu@loongson.cn
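Putting the arithmetic above into code, the added definitions look roughly like this, keyed off the GCC ABI macros (a sketch of the fix, not the verbatim patch):

```c
/* MIPS syscall numbers are __NR_Linux + offset, and __NR_Linux differs
 * per ABI, so pick the right __NR_bpf based on which ABI macro GCC
 * pre-defines. */
#if defined(__mips__) && defined(_ABIO32)
# define __NR_bpf 4355		/* o32: 4000 + 355 */
#elif defined(__mips__) && defined(_ABIN32)
# define __NR_bpf 6319		/* n32: 6000 + 319 */
#elif defined(__mips__) && defined(_ABI64)
# define __NR_bpf 5315		/* n64: 5000 + 315 */
#endif
```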
  21. 26 Nov 2021 · 1 commit
  22. 08 Nov 2021 · 3 commits
    • libbpf: Remove internal use of deprecated bpf_prog_load() variants · e32660ac
      Authored by Andrii Nakryiko
      Remove all the internal uses of bpf_load_program_xattr(), which is
      slated for deprecation in v0.7.
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20211103220845.2676888-5-andrii@kernel.org
    • libbpf: Unify low-level BPF_PROG_LOAD APIs into bpf_prog_load() · d10ef2b8
      Authored by Andrii Nakryiko
      Add a new unified OPTS-based low-level API for program loading,
      bpf_prog_load() ([0]). bpf_prog_load() accepts a few "mandatory"
      parameters as input arguments (program type, name, license,
      instructions), and all the other optional fields (as in not required
      to be specified for all types of BPF programs) go into struct
      bpf_prog_load_opts.
      
      This makes all the other non-extensible API variants for
      BPF_PROG_LOAD obsolete; they are slated for deprecation in libbpf
      v0.7:
        - bpf_load_program();
        - bpf_load_program_xattr();
        - bpf_verify_program().
      
      Implementation-wise, the internal helper libbpf__bpf_prog_load() is
      refactored to become the public bpf_prog_load() API. The struct
      bpf_prog_load_params used internally is replaced by the public struct
      bpf_prog_load_opts.
      
      Unfortunately, while conceptually all this is pretty straightforward,
      the biggest complication comes from the already existing bpf_prog_load()
      *high-level* API, which has nothing to do with BPF_PROG_LOAD command.
      
      We try really hard to have a new API named bpf_prog_load(), though,
      because it maps naturally to BPF_PROG_LOAD command.
      
      For that, we rename the old bpf_prog_load() into
      bpf_prog_load_deprecated() and mark it as COMPAT_VERSION() for
      shared-library users compiled against an old version of libbpf.
      Statically linked users and shared-lib users compiled against the new
      libbpf headers will get "rerouted" to bpf_prog_load_deprecated()
      through a macro helper that decides whether to use the new or the old
      bpf_prog_load() based on the number of input arguments (see
      ___libbpf_overload in libbpf_common.h).
      
      To test that existing bpf_prog_load()-using code compiles and works
      as expected, I compiled and ran the selftests as is. I had to remove
      (locally) the selftests/bpf/Makefile
      -Dbpf_prog_load=bpf_prog_test_load hack because it conflicted with
      the macro-based overload approach. I don't expect anyone else to do
      something like this in practice, though. This is a testing-specific
      way to replace bpf_prog_load() calls with a special testing variant
      of it that adds an extra prog_flags value. After testing I kept this
      selftests hack, but ensured that it uses the new
      bpf_prog_load_deprecated name.
      
      This patch also marks bpf_prog_load() and bpf_prog_load_xattr() as deprecated.
      bpf_object interface has to be used for working with struct bpf_program.
      Libbpf doesn't support loading just a bpf_program.
      
      The silver lining is that when we get to libbpf 1.0, all these
      complications will be gone and we'll have one clean bpf_prog_load()
      low-level API with no backwards-compatibility hackery surrounding it.
      
        [0] Closes: https://github.com/libbpf/libbpf/issues/284
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20211103220845.2676888-4-andrii@kernel.org
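The extensibility of struct bpf_prog_load_opts comes from the libbpf OPTS convention: the caller records the size of the struct it was compiled against, so the library can tell which fields the caller actually knows about. A toy version of that pattern (names are illustrative, not libbpf's):

```c
#include <stddef.h>

/* Toy OPTS struct: starts with its own size, like bpf_prog_load_opts. */
struct toy_load_opts {
	size_t sz;		/* caller sets this to sizeof(opts) */
	int log_level;
	int prog_flags;		/* field appended in a later version */
};

/* Read a field only if the caller's struct is new enough to contain it;
 * older callers implicitly get the default (0). */
static int opts_prog_flags(const struct toy_load_opts *opts)
{
	size_t need = offsetof(struct toy_load_opts, prog_flags) + sizeof(int);

	if (!opts || opts->sz < need)
		return 0;	/* old caller: field didn't exist yet */
	return opts->prog_flags;
}
```

This is what lets new fields land in an opts struct without breaking either old binaries or old source.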
    • libbpf: Pass number of prog load attempts explicitly · 45493cba
      Authored by Andrii Nakryiko
      Allow controlling the number of BPF_PROG_LOAD attempts from outside
      the sys_bpf_prog_load() helper.
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Dave Marchevsky <davemarchevsky@fb.com>
      Link: https://lore.kernel.org/bpf/20211103220845.2676888-3-andrii@kernel.org
  23. 05 Nov 2021 · 1 commit
  24. 29 Oct 2021 · 2 commits
  25. 06 Oct 2021 · 1 commit
    • libbpf: Support kernel module function calls · 9dbe6015
      Authored by Kumar Kartikeya Dwivedi
      This patch adds libbpf support for kernel module function calls.
      The fd_array parameter is used during BPF program load to pass the
      module BTFs referenced by the program. insn->off is set to an index
      into this array, starting from 1, because insn->off of 0 is reserved
      for btf_vmlinux.
      
      We try to use the existing insn->off for a module, since the kernel
      limits the maximum number of distinct module BTFs for kfuncs to 256,
      and also because the index must never exceed the maximum value that
      fits into insn->off (INT16_MAX). In the future, if the kernel
      interprets the signed offset as unsigned for kfunc calls, this limit
      can be increased to UINT16_MAX.
      
      Also introduce a btf__find_by_name_kind_own helper to start searching
      from module BTF's start id when we know that the BTF ID is not present
      in vmlinux BTF (in find_ksym_btf_id).
      Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20211002011757.311265-7-memxor@gmail.com
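The index-from-1 convention for fd_array can be sketched in plain C (names and values are illustrative, not libbpf's actual code):

```c
#include <errno.h>

/* insn->off == 0 is reserved for btf_vmlinux, so module BTF FDs occupy
 * fd_array slots starting at index 1. */
static int resolve_btf_fd(const int *fd_array, int nr_fds, short insn_off)
{
	if (insn_off == 0)
		return 0;		/* reserved: vmlinux BTF, no module FD */
	if (insn_off < 0 || insn_off >= nr_fds)
		return -ERANGE;
	return fd_array[insn_off];	/* module BTF FD for this kfunc call */
}
```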
  26. 17 Aug 2021 · 1 commit
  27. 26 May 2021 · 1 commit
  28. 25 May 2021 · 1 commit
  29. 04 Dec 2020 · 3 commits