1. 03 Dec 2021, 1 commit
  2. 26 Nov 2021, 1 commit
  3. 20 Nov 2021, 1 commit
  4. 19 Nov 2021, 1 commit
  5. 12 Nov 2021, 5 commits
    • libbpf: Support BTF_KIND_TYPE_TAG · 2dc1e488
      Yonghong Song authored
      Add libbpf support for BTF_KIND_TYPE_TAG.
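
      A minimal sketch of emitting such a tag with the btf__add_type_tag()
      API this patch adds (the tag value and tagged type are illustrative,
      error handling elided):

      	#include <bpf/btf.h>

      	struct btf *btf = btf__new_empty();
      	/* an "int" type to hang the tag off of */
      	int int_id = btf__add_int(btf, "int", 4, BTF_INT_SIGNED);
      	/* BTF_KIND_TYPE_TAG: apply tag "user" to the int type */
      	int tag_id = btf__add_type_tag(btf, "user", int_id);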
      Signed-off-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20211112012614.1505315-1-yhs@fb.com
    • libbpf: Make perf_buffer__new() use OPTS-based interface · 41788934
      Andrii Nakryiko authored
      Add new variants of perf_buffer__new() and perf_buffer__new_raw() that
      use OPTS-based options for future extensibility ([0]). Given all the
      currently used API names are best fits, re-use them and use the
      ___libbpf_override() approach and symbol versioning to preserve ABI
      and source code compatibility. struct perf_buffer_opts and struct
      perf_buffer_raw_opts are kept as well, but they are restructured such
      that they are OPTS-based when used with the new APIs. For struct
      perf_buffer_raw_opts we keep a few fields intact, so we also have to
      preserve their memory location both when the struct is used as OPTS
      and with the legacy API variants. This is achieved with anonymous
      padding for the OPTS "incarnation" of the struct. These pads can
      eventually be used for new options.
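
      A minimal usage sketch of the new OPTS-based variant (map_fd is
      assumed to be the FD of a BPF_MAP_TYPE_PERF_EVENT_ARRAY map; error
      handling follows the pre-1.0 libbpf_get_error() convention):

      	#include <bpf/libbpf.h>

      	static void on_sample(void *ctx, int cpu, void *data, __u32 size)
      	{
      		/* consume one sample */
      	}

      	struct perf_buffer *pb;

      	/* the 6-arg call resolves to the new OPTS-based implementation;
      	 * opts can be NULL when no extra options are needed
      	 */
      	pb = perf_buffer__new(map_fd, 64 /* pages per CPU */, on_sample,
      			      NULL /* lost_cb */, NULL /* ctx */,
      			      NULL /* opts */);
      	if (libbpf_get_error(pb))
      		return -1;
      	while (perf_buffer__poll(pb, 100 /* timeout, ms */) >= 0)
      		;
      	perf_buffer__free(pb);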
      
        [0] Closes: https://github.com/libbpf/libbpf/issues/311

      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20211111053624.190580-6-andrii@kernel.org
    • libbpf: Ensure btf_dump__new() and btf_dump_opts are future-proof · 6084f5dc
      Andrii Nakryiko authored
      Change btf_dump__new() and the corresponding struct btf_dump_opts
      structure to be extensible by using the OPTS "framework" ([0]). Given
      we don't change the names, we use a similar approach as with
      bpf_prog_load(), but this time we ended up with two APIs with the same
      name and the same number of arguments, so overloading based on the
      number of arguments with ___libbpf_override() doesn't work.
      
      Instead, use "overloading" based on types. In this particular case,
      a print callback has to be specified, so we detect which argument is
      a callback. If it's the 4th (last) argument, the old implementation of
      the API is used by user code. If not, it must be the 2nd, and thus the
      new implementation is selected. The rest is handled by the same symbol
      versioning approach.
      
      The btf_ext argument is dropped, as it was never used and isn't
      necessary either. If btf_ext is needed in the future, it will be added
      into the OPTS-based struct btf_dump_opts.
      
      struct btf_dump_opts is reused for both the old and the new APIs. The
      ctx field is marked deprecated in v0.7+ and is put at the same memory
      location as OPTS's sz field. Any user of the new-style btf_dump__new()
      has to set the sz field and shouldn't use ctx, as ctx is now passed to
      the callback as a mandatory input argument, consistent with the other
      libbpf APIs that accept callbacks.
      
      Again, this is quite ugly in implementation, but is done in the name of
      backwards compatibility and uniform and extensible future APIs (at the
      same time, sigh). And it will be gone in libbpf 1.0.
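
      A minimal sketch of the new-style call (the BTF source and error
      handling are illustrative); note how the callback in the 2nd position
      selects the new implementation, with ctx as the 3rd argument:

      	#include <stdio.h>
      	#include <stdarg.h>
      	#include <bpf/btf.h>

      	static void print_cb(void *ctx, const char *fmt, va_list args)
      	{
      		vfprintf((FILE *)ctx, fmt, args);
      	}

      	struct btf *btf = btf__load_vmlinux_btf();
      	/* new API: ctx (stdout here) is passed to print_cb on each call;
      	 * opts may be NULL
      	 */
      	struct btf_dump *d = btf_dump__new(btf, print_cb, stdout, NULL);

      	if (libbpf_get_error(d))
      		return -1;
      	btf_dump__free(d);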
      
        [0] Closes: https://github.com/libbpf/libbpf/issues/283

      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20211111053624.190580-5-andrii@kernel.org
    • libbpf: Turn btf_dedup_opts into OPTS-based struct · 957d350a
      Andrii Nakryiko authored
      btf__dedup() and struct btf_dedup_opts were added before we figured out
      the OPTS mechanism. As such, btf_dedup_opts is not extensible without
      breaking ABI and potentially crashing user applications.
      
      Unfortunately, btf__dedup() and btf_dedup_opts are short and succinct
      names that would be great to preserve and use going forward. So we use
      the ___libbpf_override() macro approach, used previously for the
      bpf_prog_load() API, to define a new btf__dedup() variant that accepts
      only struct btf * and struct btf_dedup_opts * arguments, and rename the
      old btf__dedup()
      implementation into btf__dedup_deprecated(). This keeps both source and
      binary compatibility with old and new applications.
      
      The biggest problem was struct btf_dedup_opts, which wasn't OPTS-based
      and as such didn't have `size_t sz;` as its first field. But btf__dedup()
      is a pretty rarely used API and I believe that the only currently known
      users (besides selftests) are libbpf's own bpf_linker and pahole.
      Neither use case actually uses options and just passes NULL. So instead
      of doing extra hacks, just rewrite struct btf_dedup_opts into an
      OPTS-based one, move the btf_ext argument into those opts (only
      bpf_linker needs to dedup btf_ext, so it's not a typical thing to
      specify), and drop the `dont_resolve_fwds` option, which was never used
      anywhere and, AFAIK, makes BTF dedup much less useful and efficient.
      
      Just in case, the old implementation, btf__dedup_deprecated(), detects
      non-NULL options and errors out with a helpful message, to help any
      remaining btf__dedup() users migrate.
      
      The last remaining piece is dedup_table_size, which is another
      anachronism from the very early days of BTF dedup. Since then it has
      been reduced to a single valid value, 1, used to request forced hash
      collisions. This is only used during testing, so instead introduce
      a bool flag to force collisions explicitly.
      
      This patch also adapts selftests to the new btf__dedup() and
      btf_dedup_opts usage to avoid selftests breakage.
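
      A minimal sketch of the resulting API (btf and btf_ext are assumed to
      exist; error handling elided):

      	#include <bpf/btf.h>

      	/* dedup with default options: opts can simply be NULL */
      	int err = btf__dedup(btf, NULL);

      	/* or pass OPTS-based options, e.g. dedup a btf_ext alongside */
      	DECLARE_LIBBPF_OPTS(btf_dedup_opts, opts, .btf_ext = btf_ext);
      	err = btf__dedup(btf, &opts);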
      
        [0] Closes: https://github.com/libbpf/libbpf/issues/281

      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20211111053624.190580-4-andrii@kernel.org
    • libbpf: Add ability to get/set per-program load flags · a6ca7158
      Andrii Nakryiko authored
      Add a bpf_program__flags() API to retrieve the prog_flags that will be
      (or were) supplied to the BPF_PROG_LOAD command.
      
      Also add a bpf_program__set_extra_flags() API to allow setting *extra*
      flags, in addition to those determined by the program's SEC()
      definition. Such flags are logically OR'ed with libbpf-derived flags.
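
      A minimal sketch (prog is assumed to be a struct bpf_program * from an
      opened, not yet loaded, bpf_object; the sleepable flag is just an
      illustration):

      	#include <linux/bpf.h>
      	#include <bpf/libbpf.h>

      	__u32 flags = bpf_program__flags(prog);
      	/* OR in an extra flag on top of what SEC() implies */
      	int err = bpf_program__set_extra_flags(prog, BPF_F_SLEEPABLE);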
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20211111051758.92283-2-andrii@kernel.org
  6. 08 Nov 2021, 1 commit
    • libbpf: Unify low-level BPF_PROG_LOAD APIs into bpf_prog_load() · d10ef2b8
      Andrii Nakryiko authored
      Add a new unified OPTS-based low-level API for program loading,
      bpf_prog_load() ([0]). bpf_prog_load() accepts a few "mandatory"
      parameters as input arguments (program type, name, license,
      instructions) and puts all the other optional fields (as in not
      required for all types of BPF programs) into struct bpf_prog_load_opts.
      
      This makes all the other non-extensible API variants for BPF_PROG_LOAD
      obsolete; they are slated for deprecation in libbpf v0.7:
        - bpf_load_program();
        - bpf_load_program_xattr();
        - bpf_verify_program().
      
      Implementation-wise, the internal helper libbpf__bpf_prog_load() is
      refactored to become the public bpf_prog_load() API. The internally
      used struct bpf_prog_load_params is replaced by the public struct
      bpf_prog_load_opts.
      
      Unfortunately, while conceptually all this is pretty straightforward,
      the biggest complication comes from the already existing bpf_prog_load()
      *high-level* API, which has nothing to do with the BPF_PROG_LOAD command.
      
      We try really hard to have a new API named bpf_prog_load(), though,
      because it maps naturally to the BPF_PROG_LOAD command.
      
      For that, we rename old bpf_prog_load() into bpf_prog_load_deprecated()
      and mark it as COMPAT_VERSION() for shared library users compiled
      against an old version of libbpf. Statically linked users and shared
      lib users compiled against new libbpf headers will get "rerouted" to
      bpf_prog_load_deprecated() through a macro helper that decides whether
      to use the new or old bpf_prog_load() based on the number of input
      arguments (see ___libbpf_overload in libbpf_common.h).
      
      To test that existing bpf_prog_load()-using code compiles and works as
      expected, I compiled and ran selftests as-is. I had to remove (locally)
      the selftest/bpf/Makefile -Dbpf_prog_load=bpf_prog_test_load hack
      because it was conflicting with the macro-based overload approach.
      I don't expect anyone else to do something like this in practice,
      though. This is a testing-specific way to replace bpf_prog_load() calls
      with a special testing variant that adds an extra prog_flags value.
      After testing I kept this selftests hack, but ensured that we use the
      new bpf_prog_load_deprecated name for it.
      
      This patch also marks bpf_prog_load() and bpf_prog_load_xattr() as
      deprecated. The bpf_object interface has to be used for working with
      struct bpf_program. Libbpf doesn't support loading just a bpf_program.
      
      The silver lining is that when we get to libbpf 1.0 all these
      complications will be gone and we'll have one clean bpf_prog_load()
      low-level API with no backwards compatibility hackery surrounding it.
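
      A minimal sketch of the new low-level API (a trivial "r0 = 0; exit"
      program; the prog name and log settings are illustrative):

      	#include <linux/bpf.h>
      	#include <bpf/bpf.h>

      	struct bpf_insn insns[] = {
      		{ .code = BPF_ALU64 | BPF_MOV | BPF_K, .dst_reg = BPF_REG_0 },
      		{ .code = BPF_JMP | BPF_EXIT },
      	};
      	char log[4096];
      	DECLARE_LIBBPF_OPTS(bpf_prog_load_opts, opts,
      		.log_buf = log,
      		.log_size = sizeof(log),
      		.log_level = 1,		/* all optional fields live in opts */
      	);
      	int fd = bpf_prog_load(BPF_PROG_TYPE_SOCKET_FILTER, "test_prog",
      			       "GPL", insns, 2, &opts);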
      
        [0] Closes: https://github.com/libbpf/libbpf/issues/284

      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20211103220845.2676888-4-andrii@kernel.org
  7. 29 Oct 2021, 1 commit
  8. 26 Oct 2021, 1 commit
    • libbpf: Add ability to fetch bpf_program's underlying instructions · 65a7fa2e
      Andrii Nakryiko authored
      Add APIs providing read-only access to bpf_program's BPF instructions
      ([0]). This is useful for diagnostic purposes, but it also allows
      cleaner support for cloning BPF programs after libbpf has done all the
      FD resolution, CO-RE relocations, subprog instruction appending, etc.
      Currently, cloning a BPF program is possible only by hijacking the
      half-broken bpf_program__set_prep() API, which doesn't really work well
      for anything but the most primitive programs. For instance, the
      set_prep() API doesn't allow adjusting BPF program load parameters,
      which is necessary for loading fentry/fexit BPF programs (the case
      where BPF program cloning is a necessity for some sort of
      mass-attachment functionality).
      
      Given the bpf_program__set_prep() API is set to be deprecated, having
      a cleaner alternative is a must. libbpf internally already keeps track
      of a linear array of struct bpf_insn, so it's not hard to expose it.
      The only gotcha is that libbpf previously freed the instructions array
      at bpf_object load time, which would have made this API much less
      useful, because a lot of changes to instructions are done by libbpf
      between bpf_object__open() and bpf_object__load().
      
      So this patch makes libbpf hold onto the prog->insns array even after
      BPF program loading. I think this is a small price for the added
      functionality and improved introspection of BPF program code.
      
      See the retsnoop PR ([1]) for how it can be used in practice and the
      code savings compared to relying on bpf_program__set_prep().
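
      A minimal sketch of the added read-only accessors (prog is assumed to
      come from an opened bpf_object):

      	#include <stdio.h>
      	#include <bpf/libbpf.h>

      	const struct bpf_insn *insns = bpf_program__insns(prog);
      	size_t cnt = bpf_program__insn_cnt(prog);

      	for (size_t i = 0; i < cnt; i++)
      		printf("insn %zu: code 0x%02x\n", i, insns[i].code);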
      
        [0] Closes: https://github.com/libbpf/libbpf/issues/298
        [1] https://github.com/anakryiko/retsnoop/pull/1

      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20211025224531.1088894-3-andrii@kernel.org
  9. 23 Oct 2021, 1 commit
  10. 19 Oct 2021, 1 commit
  11. 07 Oct 2021, 1 commit
  12. 06 Oct 2021, 1 commit
  13. 15 Sep 2021, 1 commit
  14. 14 Sep 2021, 1 commit
    • libbpf: Make libbpf_version.h non-auto-generated · 2f383041
      Andrii Nakryiko authored
      Turn the previously auto-generated libbpf_version.h header into a normal
      header file. This prevents various tricky Makefile integration issues
      and simplifies the overall build process, while also allowing further
      versioning-related APIs to be added in the future.
      
      To prevent the versions defined by libbpf.map and libbpf_version.h from
      accidentally going out of sync, the Makefile checks their consistency
      at build time.
      
      Simultaneously with this change, bump libbpf.map to v0.6.
      
      Also undo adding libbpf's output directory into the include path for
      kernel/bpf/preload, bpftool, and resolve_btfids, which is no longer
      necessary because libbpf_version.h is just a normal header like any
      other.
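
      With the header in place, a compile-time version check is a minimal
      sketch of what it enables:

      	#include <bpf/libbpf_version.h>

      	#if LIBBPF_MAJOR_VERSION == 0 && LIBBPF_MINOR_VERSION < 6
      	#error "libbpf >= 0.6 is required"
      	#endif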
      
      Fixes: 0b46b755 ("libbpf: Add LIBBPF_DEPRECATED_SINCE macro for scheduling API deprecations")
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20210913222309.3220849-1-andrii@kernel.org
  15. 17 Aug 2021, 1 commit
  16. 31 Jul 2021, 1 commit
  17. 30 Jul 2021, 3 commits
  18. 24 Jul 2021, 1 commit
  19. 23 Jul 2021, 1 commit
  20. 17 Jul 2021, 1 commit
    • libbpf: BTF dumper support for typed data · 920d16af
      Alan Maguire authored
      Add a BTF dumper for typed data, so that the user can dump a typed
      version of the data provided.
      
      The API is
      
      int btf_dump__dump_type_data(struct btf_dump *d, __u32 id,
                                   void *data, size_t data_sz,
                                   const struct btf_dump_type_data_opts *opts);
      
      ...where id is the BTF id of the data pointed to by the "void *"
      argument; for example, the BTF id of "struct sk_buff" for
      a "struct sk_buff *" data pointer.  Options supported are
      
       - a starting indent level (indent_lvl)
       - a user-specified indent string which will be printed once per
         indent level; if NULL, tab is chosen but any string <= 32 chars
         can be provided.
       - a set of boolean options to control dump display, similar to those
         used for BPF helper bpf_snprintf_btf().  Options are
              - compact : omit newlines and other indentation
              - skip_names: omit member names
              - emit_zeroes: show zero-value members
      
      The default output format is identical to that dumped by
      bpf_snprintf_btf(); for example, a "struct sk_buff" representation
      would look like this:
      (struct sk_buff){
      	(union){
      		(struct){
      			.next = (struct sk_buff *)0xffffffffffffffff,
      			.prev = (struct sk_buff *)0xffffffffffffffff,
      		(union){
      			.dev = (struct net_device *)0xffffffffffffffff,
      			.dev_scratch = (long unsigned int)18446744073709551615,
      		},
      	},
      ...
      
      If the data structure is larger than the *data_sz*
      number of bytes that are available in *data*, as much
      of the data as possible will be dumped and -E2BIG will
      be returned.  This is useful as tracers will sometimes
      not be able to capture all of the data associated with
      a type; for example a "struct task_struct" is ~16k.
      Being able to specify that only a subset is available is
      important for such cases.  On success, the amount of data
      dumped is returned.
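
      A minimal caller sketch (btf, d, data, and data_sz are assumed to
      exist; the option values are illustrative):

      	#include <errno.h>
      	#include <stdbool.h>
      	#include <bpf/btf.h>

      	DECLARE_LIBBPF_OPTS(btf_dump_type_data_opts, opts,
      		.compact = true,
      		.emit_zeroes = true,
      	);
      	__s32 id = btf__find_by_name_kind(btf, "sk_buff", BTF_KIND_STRUCT);
      	int ret = btf_dump__dump_type_data(d, id, data, data_sz, &opts);
      	if (ret == -E2BIG)
      		/* data_sz covered only part of the type; partial dump done */;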
      Signed-off-by: Alan Maguire <alan.maguire@oracle.com>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/1626362126-27775-2-git-send-email-alan.maguire@oracle.com
  21. 03 Jun 2021, 1 commit
  22. 26 May 2021, 1 commit
  23. 25 May 2021, 1 commit
  24. 19 May 2021, 2 commits
    • libbpf: Introduce bpf_map__initial_value(). · 7723256b
      Alexei Starovoitov authored
      Introduce bpf_map__initial_value() to read the initial contents
      of mmaped data/rodata/bss maps.
      Note that bpf_map__set_initial_value() doesn't allow modifying the
      kconfig map, while bpf_map__initial_value() allows reading its
      values.
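
      A minimal sketch (skel is assumed to be a generated skeleton with an
      .rodata map; the map handle name is illustrative):

      	#include <bpf/libbpf.h>

      	size_t sz;
      	void *ro = bpf_map__initial_value(skel->maps.rodata, &sz);
      	if (ro)
      		/* inspect sz bytes of initial .rodata contents */;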
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20210514003623.28033-17-alexei.starovoitov@gmail.com
    • libbpf: Generate loader program out of BPF ELF file. · 67234743
      Alexei Starovoitov authored
      The BPF program loading process performed by libbpf is quite complex
      and consists of the following steps:
      "open" phase:
      - parse elf file and remember relocations, sections
      - collect externs and ksyms including their btf_ids in prog's BTF
      - patch BTF datasec (since llvm couldn't do it)
      - init maps (old style map_def, BTF based, global data map, kconfig map)
      - collect relocations against progs and maps
      "load" phase:
      - probe kernel features
      - load vmlinux BTF
      - resolve externs (kconfig and ksym)
      - load program BTF
      - init struct_ops
      - create maps
      - apply CO-RE relocations
      - patch ld_imm64 insns with src_reg=PSEUDO_MAP, PSEUDO_MAP_VALUE, PSEUDO_BTF_ID
      - reposition subprograms and adjust call insns
      - sanitize and load progs
      
      During this process libbpf does sys_bpf() calls to load BTF, create maps,
      populate maps and finally load programs.
      Instead of actually doing the syscalls, generate a trace of what libbpf
      would have done and represent it as the "loader program".
      The "loader program" consists of a single map containing:
      - union bpf_attr(s)
      - BTF bytes
      - map value bytes
      - insns bytes
      and a single BPF program that passes bpf_attr(s) and data into the
      bpf_sys_bpf() helper.
      Executing such a "loader program" via the bpf_prog_test_run() command
      will replay the sequence of syscalls that libbpf would have done,
      resulting in the same maps being created and programs loaded as
      specified in the ELF file.
      The "loader program" removes libelf and the majority of the libbpf
      dependency from the program loading process.
      
      kconfig, typeless ksym, struct_ops and CO-RE are not supported yet.
      
      The order of relocate_data and relocate_calls had to change, so that
      bpf_gen__prog_load() can see all relocations for a given program with
      correct insn_idx-es.
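
      A minimal sketch of driving this generation (roughly what light
      skeleton generation built on top of it does; the object file name is
      illustrative, error handling elided):

      	#include <bpf/libbpf.h>

      	struct bpf_object *obj = bpf_object__open("prog.bpf.o");
      	DECLARE_LIBBPF_OPTS(gen_loader_opts, gen);

      	/* switch libbpf into "record" mode: load fills gen.data/gen.insns
      	 * with loader map contents and loader program instructions instead
      	 * of doing the actual sys_bpf() syscalls
      	 */
      	bpf_object__gen_loader(obj, &gen);
      	bpf_object__load(obj);
      	/* gen.data (gen.data_sz bytes) and gen.insns (gen.insns_sz bytes)
      	 * can now be embedded and replayed via bpf_prog_test_run()
      	 */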
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20210514003623.28033-15-alexei.starovoitov@gmail.com
  25. 17 May 2021, 1 commit
    • libbpf: Add low level TC-BPF management API · 715c5ce4
      Kumar Kartikeya Dwivedi authored
      This adds functions that wrap the netlink API used for adding, manipulating,
      and removing traffic control filters.
      
      The API summary:
      
      A bpf_tc_hook represents a location where a TC-BPF filter can be attached.
      This means that creating a hook leads to the creation of the backing
      qdisc, while destruction either removes all filters attached to a hook,
      or destroys the qdisc if requested explicitly (as discussed below).
      
      The TC-BPF API functions operate on this bpf_tc_hook to attach, replace,
      query, and detach tc filters. All functions return 0 on success, and a
      negative error code on failure.
      
      bpf_tc_hook_create - Create a hook
      Parameters:
      	@hook - Cannot be NULL, ifindex > 0, attach_point must be set to
      		proper enum constant. Note that parent must be unset when
      		attach_point is one of BPF_TC_INGRESS or BPF_TC_EGRESS. Note
      		that as an exception BPF_TC_INGRESS|BPF_TC_EGRESS is also a
      		valid value for attach_point.
      
      		Returns -EOPNOTSUPP when hook has attach_point as BPF_TC_CUSTOM.
      
      bpf_tc_hook_destroy - Destroy a hook
      Parameters:
      	@hook - Cannot be NULL. The behaviour depends on the value of
      		attach_point. If BPF_TC_INGRESS, all filters attached to
      		the ingress hook will be detached. If BPF_TC_EGRESS, all
      		filters attached to the egress hook will be detached. If
      		BPF_TC_INGRESS|BPF_TC_EGRESS, the clsact qdisc will be
      		deleted, also detaching all filters. As before, parent must
      		be unset for these attach_points, and set for BPF_TC_CUSTOM.
      
      		It is advised that if the qdisc is operated on by many programs,
      		then the program should at least check that there are no other
      		existing filters before deleting the clsact qdisc. An example is
      		shown below:
      
      		DECLARE_LIBBPF_OPTS(bpf_tc_hook, hook,
      				    .ifindex = if_nametoindex("lo"),
      				    .attach_point = BPF_TC_INGRESS);
      		/* set opts as NULL, as we're not really interested in
      		 * getting any info for a particular filter, but just
      		 * detecting its presence.
      		 */
      		r = bpf_tc_query(&hook, NULL);
      		if (r == -ENOENT) {
      			/* no filters */
      			hook.attach_point = BPF_TC_INGRESS|BPF_TC_EGRESS;
      			return bpf_tc_hook_destroy(&hook);
      		} else {
      			/* failed or r == 0, the latter means filters do exist */
      			return r;
      		}
      
      		Note that there is a small race between checking for no
      		filters and deleting the qdisc. This is currently unavoidable.
      
      		Returns -EOPNOTSUPP when hook has attach_point as BPF_TC_CUSTOM.
      
      bpf_tc_attach - Attach a filter to a hook
      Parameters:
      	@hook - Cannot be NULL. Represents the hook the filter will be
      		attached to. Requirements for ifindex and attach_point are
      		same as described in bpf_tc_hook_create, but BPF_TC_CUSTOM
      		is also supported.  In that case, parent must be set to the
      		handle where the filter will be attached (using BPF_TC_PARENT).
      		E.g. to set parent to 1:16 like in tc command line, the
      		equivalent would be BPF_TC_PARENT(1, 16).
      
      	@opts - Cannot be NULL. The following opts are optional:
      		* handle   - The handle of the filter
      		* priority - The priority of the filter
      			     Must be >= 0 and <= UINT16_MAX
      		Note that when left unset, they will be auto-allocated by
      		the kernel. The following opts must be set:
      		* prog_fd - The fd of the loaded SCHED_CLS prog
      		The following opts must be unset:
      		* prog_id - The ID of the BPF prog
      		The following opts are optional:
      		* flags - Currently only BPF_TC_F_REPLACE is allowed. It
      			  allows replacing an existing filter instead of
      			  failing with -EEXIST.
      		The following opts will be filled by bpf_tc_attach on a
      		successful attach operation if they are unset:
      		* handle   - The handle of the attached filter
      		* priority - The priority of the attached filter
      		* prog_id  - The ID of the attached SCHED_CLS prog
      		This way, the user can know what the auto allocated values
      		for optional opts like handle and priority are for the newly
      		attached filter, if they were unset.
      
      		Note that some other attributes are set to fixed default
      		values listed below (this holds for all bpf_tc_* APIs):
      		protocol as ETH_P_ALL, direct action mode, chain index of 0,
      		and class ID of 0 (this can be set by writing to the
      		skb->tc_classid field from the BPF program).
      
      bpf_tc_detach
      Parameters:
      	@hook - Cannot be NULL. Represents the hook the filter will be
      		detached from. Requirements are same as described above
      		in bpf_tc_attach.
      
      	@opts - Cannot be NULL. The following opts must be set:
      		* handle, priority
      		The following opts must be unset:
      		* prog_fd, prog_id, flags
      
      bpf_tc_query
      Parameters:
      	@hook - Cannot be NULL. Represents the hook where the filter lookup will
      		be performed. Requirements are same as described above in
      		bpf_tc_attach().
      
      	@opts - Cannot be NULL. The following opts must be set:
      		* handle, priority
      		The following opts must be unset:
      		* prog_fd, prog_id, flags
      		The following fields will be filled by bpf_tc_query upon a
      		successful lookup:
      		* prog_id
      
      Some usage examples (using BPF skeleton infrastructure):
      
      BPF program (test_tc_bpf.c):
      
      	#include <linux/bpf.h>
      	#include <bpf/bpf_helpers.h>
      
      	SEC("classifier")
      	int cls(struct __sk_buff *skb)
      	{
      		return 0;
      	}
      
      Userspace loader:
      
      	struct test_tc_bpf *skel = NULL;
      	int fd, r;
      
      	skel = test_tc_bpf__open_and_load();
      	if (!skel)
      		return -ENOMEM;
      
      	fd = bpf_program__fd(skel->progs.cls);
      
      	DECLARE_LIBBPF_OPTS(bpf_tc_hook, hook, .ifindex =
      			    if_nametoindex("lo"), .attach_point =
      			    BPF_TC_INGRESS);
      	/* Create clsact qdisc */
      	r = bpf_tc_hook_create(&hook);
      	if (r < 0)
      		goto end;
      
      	DECLARE_LIBBPF_OPTS(bpf_tc_opts, opts, .prog_fd = fd);
      	r = bpf_tc_attach(&hook, &opts);
      	if (r < 0)
      		goto end;
      	/* Print the auto allocated handle and priority */
      	printf("Handle=%u", opts.handle);
      	printf("Priority=%u", opts.priority);
      
      	opts.prog_fd = opts.prog_id = 0;
      	bpf_tc_detach(&hook, &opts);
      end:
      	test_tc_bpf__destroy(skel);
      
      This is equivalent to doing the following using tc command line:
        # tc qdisc add dev lo clsact
        # tc filter add dev lo ingress bpf obj foo.o sec classifier da
        # tc filter del dev lo ingress handle <h> prio <p> bpf
      ... where the handle and priority can be found using:
        # tc filter show dev lo ingress
      
      Another example replacing a filter (extending prior example):
      
      	/* We can also choose both (or one), let's try replacing an
      	 * existing filter.
      	 */
      	DECLARE_LIBBPF_OPTS(bpf_tc_opts, replace_opts, .handle =
      			    opts.handle, .priority = opts.priority,
      			    .prog_fd = fd);
      	r = bpf_tc_attach(&hook, &replace_opts);
      	if (r == -EEXIST) {
      		/* Expected, now use BPF_TC_F_REPLACE to replace it */
      		replace_opts.flags = BPF_TC_F_REPLACE;
      		return bpf_tc_attach(&hook, &replace_opts);
      	} else if (r < 0) {
      		return r;
      	}
      	/* There must be no existing filter with these
      	 * attributes, so cleanup and return an error.
      	 */
      	replace_opts.prog_fd = replace_opts.prog_id = 0;
      	bpf_tc_detach(&hook, &replace_opts);
      	return -1;
      
      To obtain info of a particular filter:
      
      	/* Find info for filter with handle 1 and priority 50 */
      	DECLARE_LIBBPF_OPTS(bpf_tc_opts, info_opts, .handle = 1,
      			    .priority = 50);
      	r = bpf_tc_query(&hook, &info_opts);
      	if (r == -ENOENT)
      		printf("Filter not found");
      	else if (r < 0)
      		return r;
      	printf("Prog ID: %u", info_opts.prog_id);
      	return 0;
      Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
      Co-developed-by: Daniel Borkmann <daniel@iogearbox.net> # libbpf API design
      [ Daniel: also did major patch cleanup ]
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
      Link: https://lore.kernel.org/bpf/20210512103451.989420-3-memxor@gmail.com
  26. 09 Apr 2021, 1 commit
  27. 26 Mar 2021, 1 commit
  28. 19 Mar 2021, 2 commits
    • libbpf: Add BPF static linker APIs · faf6ed32
      Andrii Nakryiko authored
      Introduce BPF static linker APIs to libbpf. The BPF static linker
      performs static linking of multiple BPF object files into a single
      combined resulting object file, preserving all the BPF programs, maps,
      global variables, etc.
      
      Data sections (.bss, .data, .rodata, .maps, maps, etc) with the same name are
      concatenated together. Similarly, code sections are also concatenated. All the
      symbols and ELF relocations are also concatenated in their respective ELF
      sections and are adjusted accordingly to the new object file layout.
      
      Static variables and functions are handled correctly as well, adjusting
      BPF instruction offsets to reflect the new variable/function offset
      within the combined ELF section. Such relocations reference STT_SECTION
      symbols, and that stays intact.
      
      Data sections in different files can have different alignment requirements, so
      that is taken care of as well, adjusting sizes and offsets as necessary to
      satisfy both old and new alignment requirements.
      
      DWARF data sections are currently stripped out, as is the LLVM_ADDRSIG
      section, which is ignored by libbpf in bpf_object__open() anyway. So,
      in a way, the BPF static linker is an analogue to `llvm-strip -g`,
      which is a pretty nice property, especially if the resulting .o file
      is then used to generate a BPF skeleton.
      
      Original string sections are ignored and instead we construct our own
      set of unique strings using the libbpf-internal `struct strset` API.
      
      To reduce the size of the patch, all the .BTF and .BTF.ext processing was
      moved into a separate patch.
      
      The high-level API consists of just 4 functions:
        - bpf_linker__new() creates an instance of BPF static linker. It accepts
          output filename and (currently empty) options struct;
        - bpf_linker__add_file() takes input filename and appends it to the already
          processed ELF data; it can be called multiple times, one for each BPF
          ELF object file that needs to be linked in;
        - bpf_linker__finalize() needs to be called to dump final ELF contents into
          the output file, specified when bpf_linker was created; after
          bpf_linker__finalize() is called, no more bpf_linker__add_file() and
          bpf_linker__finalize() calls are allowed; they will return an error;
        - regardless of whether bpf_linker__finalize() was called or not,
          bpf_linker__free() will free up all the used resources.
      
      Currently, the BPF static linker doesn't resolve cross-object file
      references (extern variables and/or functions). This will be added in
      a follow-up patch set.
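
      A minimal end-to-end sketch of the four-function API above (file names
      are illustrative; error handling elided; signatures as introduced in
      this patch):

      	#include <bpf/libbpf.h>

      	DECLARE_LIBBPF_OPTS(bpf_linker_opts, opts);
      	struct bpf_linker *linker = bpf_linker__new("combined.bpf.o", &opts);

      	bpf_linker__add_file(linker, "a.bpf.o");
      	bpf_linker__add_file(linker, "b.bpf.o");
      	bpf_linker__finalize(linker);	/* writes out combined.bpf.o */
      	bpf_linker__free(linker);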
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20210318194036.3521577-7-andrii@kernel.org
    • libbpf: Add generic BTF type shallow copy API · 9af44bc5
      Andrii Nakryiko authored
      Add a btf__add_type() API that performs a shallow copy of a given BTF
      type from the source BTF into the destination BTF. All the information
      and type IDs are preserved, but all the strings encountered are added
      into the destination BTF and the corresponding offsets are rewritten.
      BTF type IDs are assumed to be correct, or such that they will be
      (somehow) modified afterwards.
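
      A minimal sketch (src_btf, dst_btf, and src_id are assumed to exist):

      	#include <bpf/btf.h>

      	/* shallow-copy the type with ID src_id from src_btf into dst_btf */
      	const struct btf_type *t = btf__type_by_id(src_btf, src_id);
      	int new_id = btf__add_type(dst_btf, src_btf, t);
      	if (new_id < 0)
      		/* handle error */;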
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20210318194036.3521577-6-andrii@kernel.org
  29. 05 Mar 2021, 1 commit
  30. 15 Dec 2020, 1 commit
  31. 04 Dec 2020, 2 commits