1. 11 May 2022, 2 commits
  2. 29 Apr 2022, 1 commit
    • libbpf: Allow to opt-out from creating BPF maps · ec41817b
      Committed by Andrii Nakryiko
      Add a bpf_map__set_autocreate() API that allows the user to opt out of
      libbpf automatically creating a BPF map during BPF object load.
      
      This is useful when building a CO-RE-enabled BPF application that takes
      advantage of some newer BPF map type (e.g., socket-local storage) if the
      kernel supports it, but otherwise falls back to an alternative (e.g., an
      extra HASH map). In such a case, being able to disable the creation of
      a map the kernel doesn't support allows the BPF object file, with all
      its other maps and programs, to be created and loaded successfully.
      
      It's still up to the user to make sure that no "live" code in any of
      their BPF programs references such a map instance, which can be achieved
      by guarding such code with a CO-RE relocation check or with .rodata
      global variables.
      
      If the user fails to properly guard such code to turn it into "dead
      code", libbpf helpfully post-processes the BPF verifier log and provides
      a more meaningful error along with the name of the map that needs to be
      guarded. So, instead of:
      
        ; value = bpf_map_lookup_elem(&missing_map, &zero);
        4: (85) call unknown#2001000000
        invalid func unknown#2001000000
      
      ... user will see:
      
        ; value = bpf_map_lookup_elem(&missing_map, &zero);
        4: <invalid BPF map reference>
        BPF map 'missing_map' is referenced but wasn't created
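      
      A minimal hedged usage sketch (the skeleton name is hypothetical; the
      probe uses libbpf_probe_bpf_map_type(), introduced later in this log):
      
        #include <bpf/libbpf.h>
        #include "myapp.skel.h" /* hypothetical bpftool-generated skeleton */
        
        int main(void)
        {
                struct myapp_bpf *skel = myapp_bpf__open();
        
                if (!skel)
                        return 1;
                /* kernel lacks socket-local storage: skip creating the map;
                 * BPF-side references must be guarded into dead code */
                if (libbpf_probe_bpf_map_type(BPF_MAP_TYPE_SK_STORAGE, NULL) != 1)
                        bpf_map__set_autocreate(skel->maps.missing_map, false);
                if (myapp_bpf__load(skel)) {
                        myapp_bpf__destroy(skel);
                        return 1;
                }
                /* ... use skel ... */
                myapp_bpf__destroy(skel);
                return 0;
        }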
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20220428041523.4089853-4-andrii@kernel.org
  3. 06 Apr 2022, 1 commit
    • libbpf: Wire up USDT API and bpf_link integration · 2e4913e0
      Committed by Andrii Nakryiko
      Wire up libbpf USDT support APIs without yet implementing all the
      nitty-gritty details of USDT discovery, spec parsing, and BPF map
      initialization.
      
      The user-visible user-space API is simple and conceptually very similar
      to the uprobe API.
      
      The bpf_program__attach_usdt() API programmatically attaches a given BPF
      program to a USDT, specified through the binary path (executable or
      shared lib), USDT provider, and name. Also, just like in the uprobe
      case, a PID filter is specified (0 for self, -1 for any process, or
      a specific PID). Optionally, a USDT cookie value can be specified.
      A single such API invocation will try to discover the given USDT in the
      specified binary and will use (potentially many) BPF uprobes to attach
      the program in the correct locations.
      
      Just like other bpf_program__attach_xxx() APIs, a bpf_link representing
      this attachment is returned. It is a virtual BPF link that doesn't have
      a direct kernel object, as it can consist of multiple underlying BPF
      uprobe links. As such, attachment is not an atomic operation, and there
      can be a brief moment when some USDT call sites are attached while
      others are still in the process of attaching. This should be taken into
      consideration by the user. But bpf_program__attach_usdt() guarantees
      that in the case of success all USDT call sites are successfully
      attached, or that all the successful attachments are detached as soon as
      any USDT call site fails to attach. So, in theory, a failed
      bpf_program__attach_usdt() call could still trigger a few USDT program
      invocations. This is unavoidable due to the multi-uprobe nature of USDT
      and has to be handled by the user, if it's important to create an
      illusion of atomicity.
      
      USDT BPF programs themselves are marked in BPF source code either as
      plain SEC("usdt"), in which case they won't be auto-attached through the
      skeleton's <skel>__attach() method, or with a full definition, which
      follows the spirit of fully-specified uprobes:
      SEC("usdt/<path>:<provider>:<name>"). In the latter case the skeleton's
      attach method will attempt auto-attachment. Similarly, the generic
      bpf_program__attach() will have enough information to go on for
      parameterless attachment.
      
      USDT BPF programs are actually uprobes, and as such are marked for the
      kernel as BPF_PROG_TYPE_KPROBE.
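      
      A hedged user-space sketch (binary path, provider/name, and skeleton
      field are illustrative):
      
        LIBBPF_OPTS(bpf_usdt_opts, opts, .usdt_cookie = 0xcafe);
        struct bpf_link *link;
        
        /* attach to the libc:setjmp USDT in any process (-1) */
        link = bpf_program__attach_usdt(skel->progs.handle_setjmp, -1,
                                        "/usr/lib64/libc.so.6",
                                        "libc", "setjmp", &opts);
        if (!link)
                fprintf(stderr, "USDT attach failed: %d\n", -errno);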
      
      Another part of this patch is USDT-related feature probing:
        - BPF cookie support detection from user-space;
        - detection of kernel support for auto-refcounting of USDT semaphores.
      
      The latter is optional: if the kernel doesn't support this feature and
      the USDT doesn't rely on USDT semaphores, no error is returned. But if
      libbpf detects that the USDT requires setting semaphores and the kernel
      doesn't support this, libbpf errors out with an explicit pr_warn()
      message. Libbpf doesn't support poking a process's memory directly to
      increment the semaphore value, like BCC does on legacy kernels, due to
      the inherent raciness and danger of such process memory manipulation.
      Libbpf lets the kernel take care of this properly, or gives up.
      
      Logistically, all the extra USDT-related infrastructure in libbpf is put
      into a separate usdt.c file and abstracted behind struct usdt_manager.
      Each bpf_object has a lazily-initialized usdt_manager pointer, which is
      instantiated only if an attempt is made to attach USDT programs. Closing
      the BPF object frees up usdt_manager resources. usdt_manager keeps track
      of USDT spec ID assignment and a few other small things.
      
      Subsequent patches will fill out remaining missing pieces of USDT
      initialization and setup logic.
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Reviewed-by: Alan Maguire <alan.maguire@oracle.com>
      Link: https://lore.kernel.org/bpf/20220404234202.331384-3-andrii@kernel.org
  4. 18 Mar 2022, 2 commits
    • libbpf: Add subskeleton scaffolding · 430025e5
      Committed by Delyan Kratunov
      In symmetry with bpf_object__open_skeleton(),
      bpf_object__open_subskeleton() performs the actual walking and linking
      of maps, progs, and globals described by bpf_*_skeleton objects.
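      
      A hedged sketch of the intended use, assuming a bpftool-generated
      subskeleton header (all names here are hypothetical):
      
        /* attach a subskeleton to an already-open bpf_object we don't own */
        struct mylib_subskel *sub = mylib_subskel__open(obj);
        
        if (!sub)
                return -errno;
        /* read a global variable shared with the main object */
        long v = sub->data->shared_counter;
        mylib_subskel__destroy(sub);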
      Signed-off-by: Delyan Kratunov <delyank@fb.com>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/6942a46fbe20e7ebf970affcca307ba616985b15.1647473511.git.delyank@fb.com
    • libbpf: Add bpf_program__attach_kprobe_multi_opts function · ddc6b049
      Committed by Jiri Olsa
      Add a bpf_program__attach_kprobe_multi_opts() function for attaching
      a kprobe program to multiple functions:
      
        struct bpf_link *
        bpf_program__attach_kprobe_multi_opts(const struct bpf_program *prog,
                                              const char *pattern,
                                              const struct bpf_kprobe_multi_opts *opts);
      
      The user can specify the functions to attach to either with the
      'pattern' argument, which allows wildcards ('*' and '?' are supported),
      or by providing symbols or addresses directly through the opts argument.
      These three options are mutually exclusive.
      
      When using symbols or addresses, the user can also provide a cookie
      value for each symbol/address that can be retrieved later in the BPF
      program with the bpf_get_attach_cookie helper.
      
        struct bpf_kprobe_multi_opts {
                size_t sz;
                const char **syms;
                const unsigned long *addrs;
                const __u64 *cookies;
                size_t cnt;
                bool retprobe;
                size_t :0;
        };
      
      Symbols, addresses, and cookies are provided through the opts object
      (syms/addrs/cookies) as array pointers with a specified count (cnt).
      
      Each cookie value is paired with the function address or symbol at the
      same array index.
      
      The program can also be attached as a return probe if 'retprobe' is set.
      
      For quick usage with a NULL opts argument, like:
      
        bpf_program__attach_kprobe_multi_opts(prog, "ksys_*", NULL)
      
      'prog' will be attached as a kprobe to all 'ksys_*' functions.
      
      New program sections are also added for automatic attachment:
      
        kprobe.multi/<symbol_pattern>
        kretprobe.multi/<symbol_pattern>
      
      The symbol_pattern is used as the 'pattern' argument to the
      bpf_program__attach_kprobe_multi_opts() function.
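      
      A hedged sketch of the opts-based form (symbols and cookie values are
      illustrative):
      
        const char *syms[] = { "ksys_read", "ksys_write" };
        __u64 cookies[] = { 1, 2 };
        LIBBPF_OPTS(bpf_kprobe_multi_opts, opts,
                .syms = syms,
                .cookies = cookies,
                .cnt = 2,
        );
        /* pattern must be NULL when syms/addrs are used */
        struct bpf_link *link =
                bpf_program__attach_kprobe_multi_opts(prog, NULL, &opts);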
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20220316122419.933957-10-jolsa@kernel.org
  5. 06 Mar 2022, 1 commit
    • libbpf: Support custom SEC() handlers · 697f104d
      Committed by Andrii Nakryiko
      Allow registering and unregistering custom handlers for BPF program
      SEC() definitions. This lets user applications and libraries plug into
      libbpf's declarative SEC() handling logic, offloading complex and
      intricate custom logic into external libraries while still providing
      a great user experience.
      
      One such example is a USDT handling library, which has a lot of code and
      complexity that doesn't make sense to put into libbpf directly, but
      where it would be really great for users to be able to specify BPF
      programs with something like
      SEC("usdt/<path-to-binary>:<usdt_provider>:<usdt_name>") and have the
      correct BPF program type set (BPF_PROG_TYPE_KPROBE, as it is a uprobe),
      and even have BPF skeleton's auto-attach logic supported.
      
      In some cases, it might even be a good idea to override libbpf's default
      handling, as for SEC("perf_event") programs. With a custom library, it's
      possible to extend the logic to support specifying the perf event
      specification right there in the SEC() definition, without burdening
      libbpf with lots of custom logic or extra library dependencies (e.g.,
      libpfm4). With the current patch it's possible to override libbpf's
      SEC("perf_event") handling and specify completely custom handling.
      
      Further, it's possible to specify generic fallback handling for any
      SEC() that doesn't match any other custom or standard libbpf handler.
      This accommodates whatever legacy use cases there might be, if
      necessary.
      
      See doc comments for libbpf_register_prog_handler() and
      libbpf_unregister_prog_handler() for detailed semantics.
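      
      A hedged sketch of registering a custom handler (section name and
      callback are illustrative; see the doc comments above for the exact
      matching semantics):
      
        static int my_attach_fn(const struct bpf_program *prog, long cookie,
                                struct bpf_link **link)
        {
                *link = NULL; /* NULL link means "don't auto-attach" */
                return 0;
        }
        
        LIBBPF_OPTS(libbpf_prog_handler_opts, opts,
                .prog_attach_fn = my_attach_fn,
        );
        /* treat SEC("myusdt...") programs as kprobes */
        int id = libbpf_register_prog_handler("myusdt", BPF_PROG_TYPE_KPROBE,
                                              0, &opts);
        /* ... later ... */
        libbpf_unregister_prog_handler(id);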
      
      This patch also bumps the libbpf development version to v0.8 and adds
      the new APIs there.
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Tested-by: Alan Maguire <alan.maguire@oracle.com>
      Reviewed-by: Alan Maguire <alan.maguire@oracle.com>
      Link: https://lore.kernel.org/bpf/20220305010129.1549719-3-andrii@kernel.org
  6. 12 Feb 2022, 1 commit
  7. 26 Jan 2022, 1 commit
  8. 21 Jan 2022, 1 commit
    • libbpf: streamline low-level XDP APIs · c359821a
      Committed by Andrii Nakryiko
      Introduce 4 new netlink-based XDP APIs for attaching, detaching, and
      querying XDP programs:
        - bpf_xdp_attach;
        - bpf_xdp_detach;
        - bpf_xdp_query;
        - bpf_xdp_query_id.
      
      These APIs replace bpf_set_link_xdp_fd, bpf_set_link_xdp_fd_opts,
      bpf_get_link_xdp_id, and bpf_get_link_xdp_info APIs ([0]). The latter
      don't follow a consistent naming pattern and some of them use
      non-extensible approaches (e.g., struct xdp_link_info which can't be
      modified without breaking libbpf ABI).
      
      The approach I took with these low-level XDP APIs is similar to what we
      did with the low-level TC APIs: there is a nice duality of bpf_tc_attach
      vs bpf_xdp_attach, and so on. I left bpf_xdp_attach() able to detach
      when -1 is specified for prog_fd, for generality and convenience, but
      bpf_xdp_detach() is preferred due to its clearer naming and associated
      semantics. Both bpf_xdp_attach() and bpf_xdp_detach() accept the same
      opts struct, which allows specifying the expected old_prog_fd.
      
      While doing the refactoring, I noticed that the old APIs require users
      to specify opts with old_fd == -1 to declare the "don't care about the
      already attached XDP prog FD" condition. Otherwise, FD 0 is assumed,
      which is essentially never the intended behavior. So I made this
      behavior consistent with other kernel and libbpf APIs, in which a zero
      FD means "no FD". This seems more in line with the latest thinking in
      BPF land and should, hopefully, cause less user confusion.
      
      For querying, I left two APIs: the more generic bpf_xdp_query(), which
      allows querying multiple IDs and the attach mode, and a specialization
      of it, bpf_xdp_query_id(), which returns only the requested prog_id.
      Uses of the prog_id-returning bpf_get_link_xdp_id() were so prevalent
      across selftests and samples that it seemed a very common use case, and
      using bpf_xdp_query() for it felt very cumbersome, with a highly
      branched if/else chain based on flags and attach mode.
      
      Old APIs are scheduled for deprecation in libbpf 0.8 release.
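      
      A hedged sketch of the new calls (the interface name and flags are
      illustrative; prog_fd is assumed to be an FD of a loaded XDP program):
      
        #include <net/if.h>        /* if_nametoindex */
        #include <linux/if_link.h> /* XDP_FLAGS_* */
        
        int ifindex = if_nametoindex("eth0");
        __u32 prog_id = 0;
        
        bpf_xdp_attach(ifindex, prog_fd, XDP_FLAGS_SKB_MODE, NULL);
        bpf_xdp_query_id(ifindex, XDP_FLAGS_SKB_MODE, &prog_id);
        bpf_xdp_detach(ifindex, XDP_FLAGS_SKB_MODE, NULL);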
      
        [0] Closes: https://github.com/libbpf/libbpf/issues/309
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
      Link: https://lore.kernel.org/r/20220120061422.2710637-2-andrii@kernel.org
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
  9. 13 Jan 2022, 1 commit
  10. 18 Dec 2021, 1 commit
    • libbpf: Rework feature-probing APIs · 878d8def
      Committed by Andrii Nakryiko
      Create three extensible alternatives to inconsistently named
      feature-probing APIs:
      
        - libbpf_probe_bpf_prog_type() instead of bpf_probe_prog_type();
        - libbpf_probe_bpf_map_type() instead of bpf_probe_map_type();
        - libbpf_probe_bpf_helper() instead of bpf_probe_helper().
      
      Set up the return values such that libbpf can report errors (e.g., if
      some combination of input arguments can't be validated), in addition to
      whether the feature is supported (return value 1) or not supported
      (return value 0).
      
      Also schedule deprecation of those three old APIs, as well as of
      bpf_probe_large_insn_limit().
      
      Also fix all the existing detection logic for various program and map
      types that never worked:
      
        - BPF_PROG_TYPE_LIRC_MODE2;
        - BPF_PROG_TYPE_TRACING;
        - BPF_PROG_TYPE_LSM;
        - BPF_PROG_TYPE_EXT;
        - BPF_PROG_TYPE_SYSCALL;
        - BPF_PROG_TYPE_STRUCT_OPS;
        - BPF_MAP_TYPE_STRUCT_OPS;
        - BPF_MAP_TYPE_BLOOM_FILTER.
      
      The above prog/map types needed special setup and detection logic to
      work. A subsequent patch adds selftests that make sure the detection
      logic keeps working for all current and future program and map types,
      avoiding otherwise inevitable bit rot.
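      
      A hedged usage sketch; 1 means supported, 0 means not supported, and
      a negative value means the probe itself failed:
      
        int ret = libbpf_probe_bpf_map_type(BPF_MAP_TYPE_BLOOM_FILTER, NULL);
        
        if (ret < 0)
                fprintf(stderr, "probe failed: %d\n", ret);
        else
                printf("bloom filter maps %s supported\n",
                       ret ? "are" : "are not");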
      
        [0] Closes: https://github.com/libbpf/libbpf/issues/312
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Dave Marchevsky <davemarchevsky@fb.com>
      Cc: Julia Kartseva <hex@fb.com>
      Link: https://lore.kernel.org/bpf/20211217171202.3352835-2-andrii@kernel.org
  11. 15 Dec 2021, 1 commit
    • libbpf: Auto-bump RLIMIT_MEMLOCK if kernel needs it for BPF · e542f2c4
      Committed by Andrii Nakryiko
      The need to increase RLIMIT_MEMLOCK to do anything useful with BPF is
      one of the first extremely frustrating gotchas that all new BPF users go
      through, and one that some of them learn the hard way.
      
      Luckily, starting with upstream Linux kernel version 5.11, the BPF
      subsystem dropped the dependency on memlock and uses memcg-based memory
      accounting instead. Unfortunately, detecting memcg-based BPF memory
      accounting is far from trivial (as evidenced by this patch), so in
      practice most BPF applications still do an unconditional RLIMIT_MEMLOCK
      increase.
      
      As we move towards libbpf 1.0, it would be good to allow users to forget
      about RLIMIT_MEMLOCK vs memcg and let libbpf make the sensible
      adjustment automatically. This patch paves the way forward in this
      matter: libbpf will do feature detection of memcg-based accounting and,
      if it is detected, will do nothing. But if the kernel is too old, then,
      just like BCC, libbpf will automatically increase RLIMIT_MEMLOCK on
      behalf of the user application ([0]).
      
      As this is technically a breaking change, during the transition period
      applications have to opt into libbpf 1.0 mode by setting the
      LIBBPF_STRICT_AUTO_RLIMIT_MEMLOCK bit when calling
      libbpf_set_strict_mode().
      
      Libbpf allows controlling the exact RLIMIT_MEMLOCK limit it sets with
      the libbpf_set_memlock_rlim_max() API. Passing 0 makes libbpf leave
      RLIMIT_MEMLOCK alone. libbpf_set_memlock_rlim_max() has to be called
      before the first bpf_prog_load(), bpf_btf_load(), or bpf_object__load()
      call; otherwise it has no effect and returns -EBUSY.
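      
      A hedged sketch of the opt-in, using the names introduced by this patch:
      
        /* opt into libbpf 1.0 auto-bump behavior (transition period only) */
        libbpf_set_strict_mode(LIBBPF_STRICT_AUTO_RLIMIT_MEMLOCK);
        /* optionally cap the bump at 128 MiB; must precede the first load */
        libbpf_set_memlock_rlim_max(128UL * 1024 * 1024);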
      
        [0] Closes: https://github.com/libbpf/libbpf/issues/369
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/20211214195904.1785155-2-andrii@kernel.org
  12. 11 Dec 2021, 2 commits
  13. 03 Dec 2021, 1 commit
  14. 26 Nov 2021, 1 commit
  15. 20 Nov 2021, 1 commit
  16. 19 Nov 2021, 1 commit
  17. 12 Nov 2021, 5 commits
    • libbpf: Support BTF_KIND_TYPE_TAG · 2dc1e488
      Committed by Yonghong Song
      Add libbpf support for BTF_KIND_TYPE_TAG.
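      
      For reference, BTF_KIND_TYPE_TAG encodes btf_type_tag attributes on
      pointee types, such as the canonical "user" tag (macro name here is
      illustrative):
      
        #define __tag_user __attribute__((btf_type_tag("user")))
        
        int foo(int __tag_user *arg); /* arg points to user memory */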
      Signed-off-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20211112012614.1505315-1-yhs@fb.com
    • libbpf: Make perf_buffer__new() use OPTS-based interface · 41788934
      Committed by Andrii Nakryiko
      Add new variants of perf_buffer__new() and perf_buffer__new_raw() that
      use OPTS-based options for future extensibility ([0]). Given all the
      currently used API names are the best fits, re-use them, using the
      ___libbpf_override() approach and symbol versioning to preserve ABI and
      source code compatibility. struct perf_buffer_opts and struct
      perf_buffer_raw_opts are kept as well, but restructured so that they are
      OPTS-based when used with the new APIs. For struct perf_buffer_raw_opts
      we keep a few fields intact, so we also have to preserve their memory
      location both when used as OPTS and with the legacy API variants. This
      is achieved with anonymous padding in the OPTS "incarnation" of the
      struct. These pads can eventually be used for new options.
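      
      A hedged sketch of the new-style call (callback and map names are
      illustrative); callbacks and ctx are now direct arguments, with opts
      (NULL here) reserved for future extension:
      
        static void on_sample(void *ctx, int cpu, void *data, __u32 size)
        {
                /* consume one sample */
        }
        
        struct perf_buffer *pb;
        
        pb = perf_buffer__new(bpf_map__fd(skel->maps.events),
                              8 /* pages per CPU */, on_sample,
                              NULL /* lost_cb */, NULL /* ctx */,
                              NULL /* opts */);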
      
        [0] Closes: https://github.com/libbpf/libbpf/issues/311
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20211111053624.190580-6-andrii@kernel.org
    • libbpf: Ensure btf_dump__new() and btf_dump_opts are future-proof · 6084f5dc
      Committed by Andrii Nakryiko
      Change btf_dump__new() and the corresponding struct btf_dump_opts
      structure to be extensible, using the OPTS "framework" ([0]). Given we
      don't change the names, we use a similar approach as with
      bpf_prog_load(), but this time we ended up with two APIs with the same
      name and the same number of arguments, so overloading based on the
      number of arguments with ___libbpf_override() doesn't work.
      
      Instead, use "overloading" based on argument types. In this particular
      case, the print callback has to be specified, so we detect which
      argument is the callback. If it's the 4th (last) argument, the old
      implementation of the API is being used by user code; if not, it must be
      the 2nd, and thus the new implementation is selected. The rest is
      handled by the same symbol versioning approach.
      
      The btf_ext argument is dropped, as it was never used and isn't
      necessary. If we need btf_ext in the future, it will be added to the
      OPTS-based struct btf_dump_opts.
      
      struct btf_dump_opts is reused for both the old and the new APIs. The
      ctx field is marked deprecated in v0.7+ and is put at the same memory
      location as OPTS's sz field. Any user of the new-style btf_dump__new()
      has to set the sz field and shouldn't use ctx, as ctx is now passed to
      the callback as a mandatory input argument, consistent with the other
      libbpf APIs that accept callbacks.
      
      Again, this is quite ugly in implementation, but is done in the name of
      backwards compatibility and uniform and extensible future APIs (at the
      same time, sigh). And it will be gone in libbpf 1.0.
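      
      A hedged sketch of the new-style call (the printf callback is
      illustrative):
      
        static void my_printf(void *ctx, const char *fmt, va_list args)
        {
                vfprintf(stdout, fmt, args);
        }
        
        /* new style: btf, printf_fn, ctx, opts (opts may be NULL) */
        struct btf_dump *d = btf_dump__new(btf, my_printf, NULL, NULL);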
      
        [0] Closes: https://github.com/libbpf/libbpf/issues/283
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20211111053624.190580-5-andrii@kernel.org
    • libbpf: Turn btf_dedup_opts into OPTS-based struct · 957d350a
      Committed by Andrii Nakryiko
      btf__dedup() and struct btf_dedup_opts were added before we figured out
      the OPTS mechanism. As such, btf_dedup_opts is not extensible without
      breaking ABI and potentially crashing user applications.
      
      Unfortunately, btf__dedup() and btf_dedup_opts are short and succinct
      names that would be great to preserve and use going forward. So we use
      the ___libbpf_override() macro approach, used previously for the
      bpf_prog_load() API, to define a new btf__dedup() variant that accepts
      only struct btf * and struct btf_dedup_opts * arguments, and rename the
      old btf__dedup() implementation into btf__dedup_deprecated(). This keeps
      both source and binary compatibility with old and new applications.
      
      The biggest problem was struct btf_dedup_opts, which wasn't OPTS-based
      and as such didn't have `size_t sz;` as its first field. But
      btf__dedup() is a pretty rarely used API, and I believe the only
      currently known users (besides selftests) are libbpf's own bpf_linker
      and pahole. Neither use case actually uses options; both just pass NULL.
      So instead of doing extra hacks, just rewrite struct btf_dedup_opts into
      an OPTS-based one, move the btf_ext argument into those opts (only
      bpf_linker needs to dedup btf_ext, so it's not a typical thing to
      specify), and drop the never-used `dont_resolve_fwds` option (it was
      never used anywhere and, AFAIK, it makes BTF dedup much less useful and
      efficient).
      
      Just in case, the old implementation, btf__dedup_deprecated(), detects
      non-NULL options and errors out with a helpful message, to help migrate
      any users playing with btf__dedup().
      
      The last remaining piece is dedup_table_size, which is another
      anachronism from the very early days of BTF dedup. Since then it has
      been reduced to its only valid value, 1, used to request forced hash
      collisions. This is only used during testing, so instead introduce
      a bool flag to force collisions explicitly.
      
      This patch also adapts the selftests to the new btf__dedup() and
      btf_dedup_opts usage to avoid selftest breakage.
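      
      A hedged sketch of the new form (the btf_ext use mirrors the bpf_linker
      case mentioned above):
      
        /* typical case: no options */
        int err = btf__dedup(btf, NULL);
        
        /* bpf_linker-style case: also dedup .BTF.ext data */
        LIBBPF_OPTS(btf_dedup_opts, opts, .btf_ext = btf_ext);
        err = btf__dedup(btf, &opts);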
      
        [0] Closes: https://github.com/libbpf/libbpf/issues/281
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20211111053624.190580-4-andrii@kernel.org
    • libbpf: Add ability to get/set per-program load flags · a6ca7158
      Committed by Andrii Nakryiko
      Add a bpf_program__flags() API to retrieve the prog_flags that will be
      (or were) supplied to the BPF_PROG_LOAD command.
      
      Also add a bpf_program__set_extra_flags() API for setting *extra* flags,
      in addition to those determined by the program's SEC() definition. Such
      flags are logically OR'ed with the libbpf-derived flags.
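      
      A hedged sketch (the flag choice mirrors the selftests' use of extra
      prog_flags):
      
        /* set between open and load; OR'ed with SEC()-derived flags */
        bpf_program__set_extra_flags(prog, BPF_F_TEST_RND_HI32);
        __u32 flags = bpf_program__flags(prog);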
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20211111051758.92283-2-andrii@kernel.org
  18. 08 Nov 2021, 1 commit
    • libbpf: Unify low-level BPF_PROG_LOAD APIs into bpf_prog_load() · d10ef2b8
      Committed by Andrii Nakryiko
      Add a new unified OPTS-based low-level API for program loading,
      bpf_prog_load() ([0]). bpf_prog_load() accepts a few "mandatory"
      parameters as input arguments (program type, name, license,
      instructions) and moves all the other optional fields (as in, not
      required for all types of BPF programs) into struct bpf_prog_load_opts.
      
      This makes all the other non-extensible API variants for BPF_PROG_LOAD
      obsolete; they are slated for deprecation in libbpf v0.7:
        - bpf_load_program();
        - bpf_load_program_xattr();
        - bpf_verify_program().
      
      Implementation-wise, the internal helper libbpf__bpf_prog_load() is
      refactored into the public bpf_prog_load() API. The internally used
      struct bpf_prog_load_params is replaced by the public struct
      bpf_prog_load_opts.
      
      Unfortunately, while conceptually all this is pretty straightforward,
      the biggest complication comes from the already existing bpf_prog_load()
      *high-level* API, which has nothing to do with the BPF_PROG_LOAD
      command. We try really hard to have the new API named bpf_prog_load(),
      though, because it maps naturally to the BPF_PROG_LOAD command.
      
      For that, we rename the old bpf_prog_load() into
      bpf_prog_load_deprecated() and mark it as COMPAT_VERSION() for shared
      library users compiled against old versions of libbpf. Statically linked
      users and shared lib users compiled against the new libbpf headers will
      get "rerouted" to bpf_prog_load_deprecated() through a macro helper that
      decides whether to use the new or the old bpf_prog_load() based on the
      number of input arguments (see ___libbpf_override in libbpf_common.h).
      
      To test that existing bpf_prog_load()-using code compiles and works as
      expected, I compiled and ran the selftests as is. I had to remove
      (locally) the selftests/bpf/Makefile -Dbpf_prog_load=bpf_prog_test_load
      hack because it conflicted with the macro-based overload approach.
      I don't expect anyone else to do something like this in practice,
      though; it is a testing-specific way to replace bpf_prog_load() calls
      with a special testing variant that adds an extra prog_flags value.
      After testing I kept this selftests hack, but ensured that it uses the
      new bpf_prog_load_deprecated name.
      
      This patch also marks the high-level bpf_prog_load() and
      bpf_prog_load_xattr() as deprecated. The bpf_object interface has to be
      used for working with struct bpf_program. Libbpf doesn't support loading
      just a bpf_program.
      
      The silver lining is that when we get to libbpf 1.0 all these
      complications will be gone and we'll have one clean bpf_prog_load()
      low-level API with no backwards-compatibility hackery surrounding it.
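      
      A hedged sketch of the new low-level API (the two-instruction program is
      illustrative):
      
        /* r0 = 0; exit */
        const struct bpf_insn insns[] = {
                { .code = BPF_ALU64 | BPF_MOV | BPF_K, .dst_reg = BPF_REG_0 },
                { .code = BPF_JMP | BPF_EXIT },
        };
        LIBBPF_OPTS(bpf_prog_load_opts, opts, .log_level = 1);
        
        int fd = bpf_prog_load(BPF_PROG_TYPE_SOCKET_FILTER, "minimal_prog",
                               "GPL", insns, 2, &opts);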
      
        [0] Closes: https://github.com/libbpf/libbpf/issues/284
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20211103220845.2676888-4-andrii@kernel.org
  19. 29 Oct 2021, 1 commit
  20. 26 Oct 2021, 1 commit
    • libbpf: Add ability to fetch bpf_program's underlying instructions · 65a7fa2e
      Committed by Andrii Nakryiko
      Add APIs providing read-only access to bpf_program BPF instructions
      ([0]). This is useful for diagnostic purposes, but it also allows
      cleaner support for cloning BPF programs after libbpf has done all the
      FD resolution, CO-RE relocations, subprog instruction appending, etc.
      Currently, cloning a BPF program is possible only by hijacking the
      half-broken bpf_program__set_prep() API, which doesn't really work well
      for anything but the most primitive programs. For instance, the
      set_prep() API doesn't allow adjusting BPF program load parameters,
      which is necessary for loading fentry/fexit BPF programs (the case where
      BPF program cloning is a necessity when implementing some sort of
      mass-attachment functionality).
      
      Given the bpf_program__set_prep() API is set to be deprecated, having
      a cleaner alternative is a must. libbpf internally already keeps
      a linear array of struct bpf_insn, so it's not hard to expose it. The
      only gotcha is that libbpf previously freed the instructions array at
      bpf_object load time, which would make this API much less useful,
      because a lot of changes to the instructions are made by libbpf between
      bpf_object__open() and bpf_object__load().
      
      So this patch makes libbpf hold onto prog->insns array even after BPF
      program loading. I think this is a small price for added functionality
      and improved introspection of BPF program code.
      
      See the retsnoop PR ([1]) for how it can be used in practice, and for
      the code savings compared to relying on bpf_program__set_prep().
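      
      A hedged sketch of the read-only access (API names per this patch):
      
        /* a read-only view owned by libbpf; reflects libbpf's rewrites */
        const struct bpf_insn *insns = bpf_program__insns(prog);
        size_t cnt = bpf_program__insn_cnt(prog);
        
        for (size_t i = 0; i < cnt; i++)
                printf("insn %zu: code 0x%02x\n", i, insns[i].code);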
      
        [0] Closes: https://github.com/libbpf/libbpf/issues/298
        [1] https://github.com/anakryiko/retsnoop/pull/1
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20211025224531.1088894-3-andrii@kernel.org
  21. 23 Oct 2021, 1 commit
  22. 19 Oct 2021, 1 commit
  23. 07 Oct 2021, 1 commit
  24. 06 Oct 2021, 1 commit
  25. 15 Sep 2021, 1 commit
  26. 14 Sep 2021, 1 commit
    • libbpf: Make libbpf_version.h non-auto-generated · 2f383041
      Committed by Andrii Nakryiko
      Turn the previously auto-generated libbpf_version.h header into a normal
      header file. This prevents various tricky Makefile integration issues,
      simplifies the overall build process, and also allows further extending
      it with more versioning-related APIs in the future.
      
      To prevent the versions defined by libbpf.map and libbpf_version.h from
      accidentally going out of sync, the Makefile checks their consistency at
      build time.
      
      Simultaneously with this change, bump libbpf.map to v0.6.
      
      Also undo adding libbpf's output directory to the include path for
      kernel/bpf/preload, bpftool, and resolve_btfids, which is no longer
      necessary because libbpf_version.h is just a normal header like any
      other.
      
      Fixes: 0b46b755 ("libbpf: Add LIBBPF_DEPRECATED_SINCE macro for scheduling API deprecations")
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20210913222309.3220849-1-andrii@kernel.org
  27. 17 Aug 2021, 1 commit
  28. 31 Jul 2021, 1 commit
  29. 30 Jul 2021, 3 commits
  30. 24 Jul 2021, 1 commit
  31. 23 Jul 2021, 1 commit