1. 26 Nov 2021 (1 commit)
  2. 12 Nov 2021 (3 commits)
    • libbpf: Support BTF_KIND_TYPE_TAG · 2dc1e488
      Yonghong Song committed
      Add libbpf support for BTF_KIND_TYPE_TAG.
      Signed-off-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20211112012614.1505315-1-yhs@fb.com
    • libbpf: Ensure btf_dump__new() and btf_dump_opts are future-proof · 6084f5dc
      Andrii Nakryiko committed
      Change btf_dump__new() and the corresponding struct btf_dump_opts
      structure to be extensible by using the OPTS "framework" ([0]). Since we
      don't change the names, we use a similar approach as with
      bpf_prog_load(), but this time we ended up with two APIs with the same
      name and the same number of arguments, so overloading based on the number
      of arguments with ___libbpf_override() doesn't work.
      
      Instead, use "overloading" based on types. In this particular case, the
      print callback has to be specified, so we detect which argument is the
      callback. If it's the 4th (last) argument, the old implementation of the
      API is used by the user code. If not, it must be the 2nd, and thus the
      new implementation is selected. The rest is handled by the same symbol
      versioning approach.
      
      The btf_ext argument is dropped as it was never used and isn't necessary
      either. If we need btf_ext in the future, it will be added into the
      OPTS-based struct btf_dump_opts.
      
      struct btf_dump_opts is reused for both the old and the new APIs. The
      ctx field is marked deprecated in v0.7+ and is placed at the same memory
      location as OPTS's sz field. Any user of the new-style btf_dump__new()
      will have to set the sz field and shouldn't use ctx, as ctx is now
      passed to the callback as a mandatory input argument, consistent with
      the other libbpf APIs that accept callbacks.
      
      Again, this is quite ugly in implementation, but is done in the name of
      backwards compatibility and uniform and extensible future APIs (at the
      same time, sigh). And it will be gone in libbpf 1.0.
      
        [0] Closes: https://github.com/libbpf/libbpf/issues/283
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20211111053624.190580-5-andrii@kernel.org
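The type-based "overloading" this commit describes can be sketched in plain C. This is an illustrative stand-in, not libbpf's actual macro (libbpf uses compiler builtins plus symbol versioning); the sketch uses C11 `_Generic` to dispatch two same-name, same-arity APIs on the type of their argument, and the struct names are hypothetical:

```c
#include <stddef.h>

/* Hypothetical mini-example: two implementations behind one call site,
 * selected purely by the argument's type, as described above. */
struct old_opts { void *ctx; };   /* stands in for the legacy opts shape */
struct new_opts { size_t sz; };   /* stands in for the OPTS-based shape */

static int api_old(struct old_opts *o) { (void)o; return 1; }
static int api_new(struct new_opts *o) { (void)o; return 2; }

/* Callers keep writing api(...); the right implementation is picked
 * at compile time based on the pointer type passed in. */
#define api(o) _Generic((o),             \
        struct old_opts *: api_old,      \
        struct new_opts *: api_new)(o)
```

Old callers and new callers compile against the same name; only the argument type decides which function actually runs.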
    • libbpf: Turn btf_dedup_opts into OPTS-based struct · 957d350a
      Andrii Nakryiko committed
      btf__dedup() and struct btf_dedup_opts were added before we figured out
      the OPTS mechanism. As such, btf_dedup_opts is non-extensible without
      breaking the ABI and potentially crashing the user application.
      
      Unfortunately, btf__dedup() and btf_dedup_opts are short and succinct
      names that would be great to preserve and use going forward. So we use
      ___libbpf_override() macro approach, used previously for bpf_prog_load()
      API, to define a new btf__dedup() variant that accepts only struct btf *
      and struct btf_dedup_opts * arguments, and rename the old btf__dedup()
      implementation into btf__dedup_deprecated(). This keeps both source and
      binary compatibility with old and new applications.
      
      The biggest problem was struct btf_dedup_opts, which wasn't OPTS-based
      and as such doesn't have `size_t sz;` as its first field. But btf__dedup()
      is a pretty rarely used API, and I believe the only currently known
      users (besides selftests) are libbpf's own bpf_linker and pahole.
      Neither use case actually uses options; both just pass NULL. So instead
      of doing extra hacks, just rewrite struct btf_dedup_opts into an
      OPTS-based one, move the btf_ext argument into those opts (only
      bpf_linker needs to dedup btf_ext, so it's not a typical thing to
      specify), and drop the never-used `dont_resolve_fwds` option (it was
      never used anywhere; as far as I know, it makes BTF dedup much less
      useful and efficient).
      
      Just in case, the old implementation, btf__dedup_deprecated(), detects
      non-NULL options and errors out with a helpful message, to help any
      users still playing with btf__dedup() migrate.
      
      The last remaining piece is dedup_table_size, which is another
      anachronism from very early days of BTF dedup. Since then it has been
      reduced to the only valid value, 1, to request forced hash collisions.
      This is only used during testing. So instead introduce a bool flag to
      force collisions explicitly.
      
      This patch also adapts selftests to new btf__dedup() and btf_dedup_opts
      use to avoid selftests breakage.
      
        [0] Closes: https://github.com/libbpf/libbpf/issues/281
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
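The OPTS pattern this commit converts btf_dedup_opts to can be sketched in a few lines of plain C. The names below are illustrative, not libbpf's real definitions: the leading `sz` field records the struct size the caller was compiled against, so newer library code can tell which trailing fields actually exist:

```c
#include <stddef.h>

/* Illustrative OPTS-style struct (hypothetical names, not libbpf's). */
struct dedup_opts {
    size_t sz;             /* must be first; caller sets it to sizeof(*opts) */
    void *btf_ext;         /* moved into opts, as described above */
    int force_collisions;  /* bool flag replacing dedup_table_size */
};

/* True iff the caller's struct was large enough to contain `field`. */
#define OPTS_HAS_FIELD(opts, field)                                    \
    ((opts) && (opts)->sz >= offsetof(__typeof__(*(opts)), field) +    \
                             sizeof((opts)->field))

static int dedup(struct dedup_opts *opts)
{
    /* Fields beyond the caller's sz safely default (here: to 0/false),
     * so old binaries keep working when new fields are appended. */
    return OPTS_HAS_FIELD(opts, force_collisions) ? opts->force_collisions : 0;
}
```

New fields can only ever be appended at the end; the `sz` check is what makes that ABI-safe.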
  3. 23 Oct 2021 (1 commit)
  4. 22 Oct 2021 (1 commit)
    • libbpf: Deprecate btf__finalize_data() and move it into libbpf.c · b96c07f3
      Andrii Nakryiko committed
      There isn't a good use case where anyone but libbpf itself needs to call
      btf__finalize_data(). It was implemented for internal use and it's not
      clear why it was made into a public API in the first place. To function,
      it requires active ELF data, which is stored inside bpf_object for the
      duration of the opening phase only. But the only BTF that needs
      bpf_object's ELF is that bpf_object's own BTF, which libbpf fixes up
      automatically during the bpf_object__open() operation anyway. There is
      no need for any additional fix-up, and no reasonable scenario where it's
      useful and appropriate.
      
      Thus, btf__finalize_data() is just an API atavism and is better removed.
      So this patch marks it as deprecated immediately (v0.6+) and moves the
      code from btf.c into libbpf.c, where it's used in the context of the
      bpf_object opening phase. Such code co-location makes the code structure
      more straightforward and allows removing the bpf_object__section_size()
      and bpf_object__variable_offset() internal helpers from
      libbpf_internal.h, making them static. Their naming is also adjusted so
      as not to create the wrong illusion that they are some sort of method of
      bpf_object. They are internal helpers and are named appropriately.
      
      This is part of libbpf 1.0 effort ([0]).
      
        [0] Closes: https://github.com/libbpf/libbpf/issues/276
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Song Liu <songliubraving@fb.com>
      Link: https://lore.kernel.org/bpf/20211021014404.2635234-2-andrii@kernel.org
  5. 19 Oct 2021 (1 commit)
  6. 06 Oct 2021 (1 commit)
  7. 16 Sep 2021 (1 commit)
  8. 15 Sep 2021 (1 commit)
  9. 10 Sep 2021 (1 commit)
    • libbpf: Add LIBBPF_DEPRECATED_SINCE macro for scheduling API deprecations · 0b46b755
      Quentin Monnet committed
      Introduce a macro LIBBPF_DEPRECATED_SINCE(major, minor, message) to
      prepare the deprecation of two API functions. This macro marks functions
      as deprecated once libbpf's version reaches the values passed as
      arguments.
      
      As part of this change, a libbpf_version.h header is added, recording
      the major (LIBBPF_MAJOR_VERSION) and minor (LIBBPF_MINOR_VERSION) libbpf
      version macros. They are now part of the libbpf public API and can be
      relied upon by user code. libbpf_version.h is installed system-wide
      along with the other libbpf public headers.
      
      Due to this new build-time auto-generated header, in-kernel applications
      relying on libbpf (resolve_btfids, bpftool, bpf_preload) are updated to
      include libbpf's output directory in their list of include search paths.
      A better fix would be to use libbpf's make_install target to install the
      public API headers, but that cleanup is left as a future improvement.
      The build changes were tested by building the kernel (with KBUILD_OUTPUT
      and O= specified explicitly), bpftool, libbpf, selftests/bpf, and
      resolve_btfids. No problems were detected.
      
      Note that because of the constraints of the C preprocessor, we have to
      write a few lines of macro magic for each version used to prepare
      a deprecation (0.6 for now).
      
      Also, use LIBBPF_DEPRECATED_SINCE() to schedule deprecation of
      btf__get_from_id() and btf__load(), which are replaced by
      btf__load_from_kernel_by_id() and btf__load_into_kernel(), respectively,
      starting from future libbpf v0.6. This is part of libbpf 1.0 effort ([0]).
      
        [0] Closes: https://github.com/libbpf/libbpf/issues/278
      Co-developed-by: Quentin Monnet <quentin@isovalent.com>
      Co-developed-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Quentin Monnet <quentin@isovalent.com>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/20210908213226.1871016-1-andrii@kernel.org
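The version-gated deprecation idea can be sketched as follows, assuming a GCC/Clang-style `deprecated` attribute. The macro names and expansion here are simplified and illustrative (not libbpf's exact definitions); the per-version "macro magic" mentioned above is inlined for the single 0.6 case, since `#if` cannot appear inside a macro body:

```c
/* Hypothetical sketch of LIBBPF_DEPRECATED_SINCE-style version gating. */
#define LIB_MAJOR_VERSION 0
#define LIB_MINOR_VERSION 5   /* bump to 6 and the attribute kicks in */

#if defined(__GNUC__) || defined(__clang__)
#define MARK_DEPRECATED(msg) __attribute__((deprecated(msg)))
#else
#define MARK_DEPRECATED(msg)
#endif

/* One gate per version that schedules deprecations; here, only 0.6.
 * At 0.5 this expands to nothing, so calls compile cleanly; from 0.6
 * on, every call site would get a compile-time deprecation warning. */
#if LIB_MAJOR_VERSION > 0 || \
    (LIB_MAJOR_VERSION == 0 && LIB_MINOR_VERSION >= 6)
#define DEPRECATED_SINCE_0_6(msg) MARK_DEPRECATED(msg)
#else
#define DEPRECATED_SINCE_0_6(msg)
#endif

/* Hypothetical API scheduled for deprecation, still fully functional. */
DEPRECATED_SINCE_0_6("use btf__load_into_kernel() instead")
static int btf_load_example(void) { return 0; }
```

The key property is that deprecation is a warning, not a removal: the function keeps working, and the message points callers at the replacement.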
  10. 31 Jul 2021 (1 commit)
  11. 30 Jul 2021 (3 commits)
  12. 17 Jul 2021 (1 commit)
    • libbpf: BTF dumper support for typed data · 920d16af
      Alan Maguire committed
      Add a BTF dumper for typed data, so that the user can dump a typed
      version of the data provided.
      
      The API is
      
      int btf_dump__dump_type_data(struct btf_dump *d, __u32 id,
                                   void *data, size_t data_sz,
                                   const struct btf_dump_type_data_opts *opts);
      
      ...where the id is the BTF ID of the data pointed to by the "void *"
      argument; for example, the BTF ID of "struct sk_buff" for a
      "struct sk_buff *" data pointer.  Options supported are
      
       - a starting indent level (indent_lvl)
       - a user-specified indent string, printed once per indent level; if
         NULL, a tab is used, but any string of <= 32 chars can be provided
       - a set of boolean options to control dump display, similar to those
         used for BPF helper bpf_snprintf_btf().  Options are
              - compact : omit newlines and other indentation
              - skip_names: omit member names
              - emit_zeroes: show zero-value members
      
      Default output format is identical to that dumped by bpf_snprintf_btf(),
      for example a "struct sk_buff" representation would look like this:
      
      (struct sk_buff){
      	(union){
      		(struct){
      			.next = (struct sk_buff *)0xffffffffffffffff,
      			.prev = (struct sk_buff *)0xffffffffffffffff,
      		(union){
      			.dev = (struct net_device *)0xffffffffffffffff,
      			.dev_scratch = (long unsigned int)18446744073709551615,
      		},
      	},
      ...
      
      If the data structure is larger than the *data_sz*
      number of bytes that are available in *data*, as much
      of the data as possible will be dumped and -E2BIG will
      be returned.  This is useful as tracers will sometimes
      not be able to capture all of the data associated with
      a type; for example a "struct task_struct" is ~16k.
      Being able to specify that only a subset is available is
      important for such cases.  On success, the amount of data
      dumped is returned.
      Signed-off-by: Alan Maguire <alan.maguire@oracle.com>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/1626362126-27775-2-git-send-email-alan.maguire@oracle.com
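The size contract in the last paragraph can be sketched in a few lines of plain C (illustrative only, not libbpf code): dump whatever fits in the captured buffer, and report -E2BIG when the type is larger than the data that was captured:

```c
#include <errno.h>
#include <stddef.h>

/* Illustrative sketch of btf_dump__dump_type_data()'s return contract:
 * type_sz is the full size of the BTF type, data_sz is how many bytes the
 * tracer actually captured, *dumped is how much we ended up dumping. */
static int dump_sketch(size_t type_sz, size_t data_sz, size_t *dumped)
{
    *dumped = type_sz < data_sz ? type_sz : data_sz;  /* dump what we can */
    if (data_sz < type_sz)
        return -E2BIG;       /* truncated: partial dump, error reported */
    return (int)*dumped;     /* success: amount of data dumped */
}
```

A tracer that captured only 8 of a 16-byte struct still gets those 8 bytes rendered, plus -E2BIG to signal the truncation.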
  13. 20 Mar 2021 (1 commit)
  14. 19 Mar 2021 (1 commit)
  15. 05 Mar 2021 (1 commit)
  16. 04 Dec 2020 (1 commit)
  17. 06 Nov 2020 (1 commit)
    • libbpf: Implement basic split BTF support · ba451366
      Andrii Nakryiko committed
      Support split BTF operation, in which one BTF (base BTF) provides a
      basic set of types and strings, while another one (split BTF) builds on
      top of the base's types and strings and adds its own new types and
      strings. From an API standpoint, the fact that the split BTF is built on
      top of the base BTF is transparent.
      
      Type numbering is transparent. If the base BTF had last type ID #N, then
      all types in the split BTF start at type ID N+1. Any type in the split
      BTF can reference base BTF types, but not vice versa. Programmatic
      construction of a split BTF on top of a base BTF is supported: one can
      create an empty split BTF with btf__new_empty_split() and pass the base
      BTF as an input, or pass raw binary data to btf__new_split(), or use
      btf__parse_xxx_split() variants to get the initial set of split
      types/strings from an ELF file with a .BTF section.
      
      String offsets are similarly transparent and are a logical continuation
      of the base BTF's strings. When building BTF programmatically and adding
      a new string (explicitly with btf__add_str() or implicitly through
      appending new types/members), the string-to-be-added is first looked up
      in the base BTF's string section and re-used if it's there. If not, it
      is looked up and/or added to the split BTF string section. Similarly to
      type IDs, types in split BTF can refer to strings from the base BTF
      absolutely transparently (but not vice versa, of course, because the
      base BTF doesn't "know" about the existence of the split BTF).
      
      The internal type index is slightly adjusted to be zero-indexed,
      ignoring the fake [0] VOID type. This makes it possible to handle
      split/base BTF type lookups transparently by using the btf->start_id
      type ID offset, which is always 1 for base/non-split BTF and equals
      btf__get_nr_types(base_btf) + 1 for the split BTF.
      
      BTF deduplication is not yet supported for split BTF and support for it will
      be added in separate patch.
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Song Liu <songliubraving@fb.com>
      Link: https://lore.kernel.org/bpf/20201105043402.2530976-5-andrii@kernel.org
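The transparent type-ID scheme described above can be sketched with a toy structure (illustrative only; libbpf's struct btf is far richer): any ID below start_id is delegated to the base BTF, and this BTF's own types are indexed zero-based from start_id.

```c
#include <stddef.h>

/* Toy model of base/split BTF ID resolution (not libbpf's real layout). */
struct mini_btf {
    struct mini_btf *base;   /* NULL for a base BTF */
    int start_id;            /* 1 for base BTF, base_nr_types + 1 for split */
    const char **names;      /* this BTF's own type names; [0] has ID start_id */
    int nr_types;
};

static const char *name_by_id(const struct mini_btf *btf, int id)
{
    if (id < btf->start_id)  /* ID belongs to the base BTF (or fake VOID) */
        return btf->base ? name_by_id(btf->base, id) : "void";
    if (id >= btf->start_id + btf->nr_types)
        return NULL;                       /* out of range */
    return btf->names[id - btf->start_id]; /* zero-based internal index */
}
```

A lookup on the split BTF works for both base and split IDs, which is exactly the transparency the commit describes.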
  18. 30 Sep 2020 (1 commit)
    • libbpf: Support BTF loading and raw data output in both endianness · 3289959b
      Andrii Nakryiko committed
      Teach BTF to recognize wrong (non-native) endianness and transparently
      convert it internally to host endianness. The original endianness of the
      BTF will be preserved and used during btf__get_raw_data() to convert the
      resulting raw data to the same endianness as the source raw_data. This
      means that a little-endian host can parse big-endian BTF with no issues:
      all the type data will be presented to the client application in native
      endianness, but when it's time to emit BTF to persist it in a file
      (e.g., after BTF deduplication), the original non-native endianness will
      be preserved and stored.
      
      It's possible to query original endianness of BTF data with new
      btf__endianness() API. It's also possible to override desired output
      endianness with btf__set_endianness(), so that if application needs to load,
      say, big-endian BTF and store it as little-endian BTF, it's possible to
      manually override this. If btf__set_endianness() was used to change
      endianness, btf__endianness() will reflect overridden endianness.
      
      Given there are no known use cases for supporting cross-endianness for
      .BTF.ext, loading .BTF.ext in non-native endianness is not supported.
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20200929043046.1324350-3-andriin@fb.com
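Non-native BTF can be recognized from the 16-bit magic in the BTF header (0xeB9F in the UAPI definition): read on a host of the opposite endianness, it appears byte-swapped. A minimal detection sketch, assuming GCC/Clang builtins:

```c
#include <stdint.h>

#define BTF_MAGIC 0xeB9F  /* from include/uapi/linux/btf.h */

enum endianness { NATIVE, SWAPPED, BAD_MAGIC };

/* If the magic reads byte-swapped, the whole BTF blob was produced on a
 * host of the opposite endianness and needs conversion, as described above. */
static enum endianness detect_endianness(uint16_t magic)
{
    if (magic == BTF_MAGIC)
        return NATIVE;
    if (magic == (uint16_t)__builtin_bswap16(BTF_MAGIC))
        return SWAPPED;
    return BAD_MAGIC;
}
```

Everything else in the header and type section is byte-swapped the same way once non-native endianness is detected.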
  19. 29 Sep 2020 (4 commits)
    • libbpf: Add btf__str_by_offset() as a more generic variant of name_by_offset · f86ed050
      Andrii Nakryiko committed
      BTF strings are used not just for names; they can be arbitrary strings
      used for CO-RE relocations, line/func infos, etc. Thus the
      "name_by_offset" terminology is too specific and might be misleading.
      Instead, introduce the btf__str_by_offset() API, which uses generic
      string terminology.
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Link: https://lore.kernel.org/bpf/20200929020533.711288-3-andriin@fb.com
    • libbpf: Add BTF writing APIs · 4a3b33f8
      Andrii Nakryiko committed
      Add APIs for appending new BTF types at the end of BTF object.
      
      Each BTF kind has an API of the form btf__add_<kind>(). For types that
      have a variable number of additional items (struct/union, enum,
      func_proto, datasec), an additional API is provided to emit each such
      item. E.g., for emitting a struct, one would use the following sequence
      of API calls:
      
      btf__add_struct(...);
      btf__add_field(...);
      ...
      btf__add_field(...);
      
      Each btf__add_field() will ensure that the last BTF type is of STRUCT or
      UNION kind and will automatically increment that type's vlen field.
      
      All the strings are provided as C strings (const char *), not string
      offsets. This significantly improves the usability of the BTF writer
      APIs. All such strings will be automatically appended to the string
      section, or an existing string will be re-used if it was already added
      previously.
      
      Each API attempts to do all the reasonable validations, like enforcing
      non-empty names for entities with required names, proper value bounds, various
      bit offset restrictions, etc.
      
      Type ID validation is minimal because it's possible to emit a type that
      refers to a type that will be emitted later, so libbpf has no way to
      enforce such cases. The user must be careful to properly emit all the
      necessary types and specify type IDs that will be valid in the finally
      generated BTF.
      
      Each of the btf__add_<kind>() APIs returns the new type ID on success or
      a negative value on error. APIs like btf__add_field() that emit
      additional items return zero on success and a negative value on error.
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Link: https://lore.kernel.org/bpf/20200929020533.711288-2-andriin@fb.com
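The add_struct/add_field calling protocol above can be sketched with a toy builder (illustrative; not libbpf's implementation): item-emitting calls check that the last added type is a struct/union and bump its vlen, and type-emitting calls return 1-based type IDs.

```c
/* Toy model of the btf__add_struct()/btf__add_field() protocol. */
enum kind { KIND_INT, KIND_STRUCT };

struct mini_type { enum kind kind; int vlen; /* member count */ };

struct builder { struct mini_type types[16]; int nr; };

static int add_struct(struct builder *b)
{
    if (b->nr >= 16)
        return -1;
    b->types[b->nr].kind = KIND_STRUCT;
    b->types[b->nr].vlen = 0;
    return ++b->nr;   /* type IDs are returned, starting at 1 */
}

static int add_field(struct builder *b)
{
    /* Only valid right after a struct/union type, as described above. */
    if (b->nr == 0 || b->types[b->nr - 1].kind != KIND_STRUCT)
        return -1;
    b->types[b->nr - 1].vlen++;   /* vlen is incremented automatically */
    return 0;                     /* item APIs return 0 on success */
}
```

This mirrors the stated contract: type-adding APIs return the new type ID, item-adding APIs return 0, and an item call without a preceding struct/union fails.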
    • libbpf: Add btf__new_empty() to create an empty BTF object · a871b043
      Andrii Nakryiko committed
      Add the ability to create an empty BTF object from scratch. This is
      going to be used by pahole for BTF encoding, and also by selftests for
      convenient creation of BTF objects.
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Link: https://lore.kernel.org/bpf/20200926011357.2366158-7-andriin@fb.com
    • libbpf: Allow modification of BTF and add btf__add_str API · 919d2b1d
      Andrii Nakryiko committed
      Allow the internal BTF representation to switch from the default
      read-only mode, in which raw BTF data is a single non-modifiable block
      of memory with the BTF header, types, and strings laid out sequentially
      and contiguously in memory, into a writable representation with types
      and strings data split out into separate memory regions that can be
      dynamically expanded.
      
      Such a writable internal representation is transparent to users of
      libbpf APIs, but makes it possible to append new types and strings at
      the end of the BTF, which is a typical use case when generating BTF
      programmatically. All the basic guarantees of BTF type and string layout
      are preserved, i.e., a user can get a `struct btf_type *` pointer and
      read it directly. Such btf_type pointers might be invalidated if the BTF
      is modified, so some care is required in such mixed read/write
      scenarios.
      
      The switch from the read-only to the writable configuration happens
      automatically the first time the user attempts to modify the BTF by
      either adding a new type or a new string. It is still possible to get
      raw BTF data, which is a single piece of memory that can be persisted in
      an ELF section or into a file as raw BTF. Such raw data memory is also
      still owned by the BTF and will be freed either when the BTF object is
      freed or when another modification to the BTF happens, as any
      modification invalidates the BTF raw representation.
      
      This patch adds the first two BTF manipulation APIs: btf__add_str(),
      which adds arbitrary strings to the BTF string section, and
      btf__find_str(), which finds an existing string's offset, but doesn't
      add it if it's missing. All the added strings are automatically
      deduplicated. This is achieved by maintaining an additional string
      lookup index for all unique strings. Such an index is built when the BTF
      is switched to modifiable mode. If at that time the BTF strings section
      contained duplicate strings, they are not deduplicated. This is done
      specifically so as not to modify the existing content of the BTF (types,
      their string offsets, etc.), which can cause confusion and is an
      especially important property if there is a struct btf_ext associated
      with the struct btf. By following this "imperfect deduplication"
      process, btf_ext is kept consistent and correct. If deduplication of
      strings is necessary, it can be forced by running BTF deduplication, at
      which point all the strings will be eagerly deduplicated and all string
      offsets, both in struct btf and struct btf_ext, will be updated.
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Link: https://lore.kernel.org/bpf/20200926011357.2366158-6-andriin@fb.com
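The find/add split and the automatic reuse of existing strings can be sketched with a naive linear-scan string pool (illustrative; libbpf maintains a proper lookup index, and offset 0 holds the empty string, as in real BTF string sections):

```c
#include <string.h>

/* Toy string section: NUL-terminated strings appended back to back. */
struct str_sec { char buf[256]; size_t len; };

/* Look up only; mirrors btf__find_str(), which never adds. */
static long find_str(const struct str_sec *s, const char *str)
{
    size_t off = 0;
    while (off < s->len) {
        if (strcmp(s->buf + off, str) == 0)
            return (long)off;
        off += strlen(s->buf + off) + 1;   /* skip to the next string */
    }
    return -1;  /* not found */
}

/* Reuse an existing offset or append; mirrors btf__add_str()'s dedup. */
static long add_str(struct str_sec *s, const char *str)
{
    long off = find_str(s, str);
    size_t n = strlen(str) + 1;

    if (off >= 0)
        return off;                        /* deduplicated: reuse offset */
    if (s->len + n > sizeof(s->buf))
        return -1;                         /* out of space */
    memcpy(s->buf + s->len, str, n);
    off = (long)s->len;
    s->len += n;
    return off;
}
```

Adding the same string twice returns the same offset both times, which is the stable-offset property that keeps types (and btf_ext) consistent.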
  20. 04 Sep 2020 (1 commit)
  21. 19 Aug 2020 (1 commit)
  22. 14 Aug 2020 (1 commit)
    • libbpf: Handle BTF pointer sizes more carefully · 44ad23df
      Andrii Nakryiko committed
      With libbpf and BTF it is pretty common to have libbpf built for one
      architecture while the BTF information was generated for a different
      architecture (typically, but not always, BPF). In such a case, the size
      of a pointer might differ between architectures. libbpf previously
      always made the assumption that the pointer size for BTF is the same as
      the native architecture pointer size, but that breaks for cases where
      libbpf is built as a 32-bit library while the BTF is for a 64-bit
      architecture.
      
      To solve this, add a heuristic to determine the pointer size by
      searching for a `long` or `unsigned long` integer type and using its
      size as the pointer size. Also, allow overriding the pointer size with a
      new API, btf__set_pointer_size(), for cases where the application knows
      which pointer size should be used. A user application can check what
      libbpf "guessed" by looking at the result of btf__pointer_size(). If
      it's not 0, then libbpf successfully determined the pointer size;
      otherwise the native arch pointer size will be used.
      
      For cases where BTF is parsed from ELF file, use ELF's class (32-bit or
      64-bit) to determine pointer size.
      
      Fixes: 8a138aed ("bpf: btf: Add BTF support to libbpf")
      Fixes: 351131b5 ("libbpf: add btf_dump API for BTF-to-C conversion")
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20200813204945.1020225-5-andriin@fb.com
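The heuristic can be sketched over a toy type list (illustrative; libbpf walks real BTF integer types): find `long` or `unsigned long` and use its size, otherwise fall back to the native pointer size.

```c
#include <stddef.h>
#include <string.h>

struct int_type { const char *name; size_t size; };  /* toy BTF int type */

static size_t guess_ptr_size(const struct int_type *types, int n)
{
    for (int i = 0; i < n; i++) {
        /* On the target, `long` has the same width as a pointer. */
        if (strcmp(types[i].name, "long") == 0 ||
            strcmp(types[i].name, "unsigned long") == 0)
            return types[i].size;
    }
    return sizeof(void *);  /* no `long` found: native pointer size */
}
```

The heuristic works because on the architectures BTF targets, `long` and pointers share a width, so a 64-bit target's BTF yields 8 even inside a 32-bit libbpf build.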
  23. 03 Aug 2020 (1 commit)
  24. 14 Jul 2020 (2 commits)
  25. 10 Jul 2020 (1 commit)
  26. 09 Jul 2020 (1 commit)
  27. 23 Jun 2020 (1 commit)
    • libbpf: Add support for extracting kernel symbol addresses · 1c0c7074
      Andrii Nakryiko committed
      Add support for another (in addition to the existing Kconfig) special
      kind of extern in BPF code: kernel symbol externs. Such externs allow
      BPF code to "know" a kernel symbol's address and either use it for
      comparisons with kernel data structures (e.g., struct file's f_op
      pointer, to distinguish different kinds of files), or, with the help of
      bpf_probe_read_kernel(), to follow pointers and read data from global
      variables. Kernel symbol addresses are found through /proc/kallsyms,
      which should be present in the system.
      
      Currently, such kernel symbol variables are typeless: they have to be
      defined as `extern const void <symbol>`, and the only operation you can
      do with them (in C code) is to take their address. Such externs should
      reside in a special section, '.ksyms'. The bpf_helpers.h header provides
      the __ksym macro for this. Strong vs. weak semantics stay the same as
      with Kconfig externs. If a symbol is not found in /proc/kallsyms, this
      is a failure for a strong (non-weak) extern, but a weak extern defaults
      to 0.
      
      If the same symbol is defined multiple times in /proc/kallsyms, then it
      is an error if any of the associated addresses differ. In that case the
      address is ambiguous, so libbpf errs on the side of caution rather than
      confusing the user with a randomly chosen address.
      
      In the future, once the kernel is extended with BTF information for
      variables, such ksym externs will be supported in a typed version, which
      will allow BPF programs to read variable contents directly, similarly to
      how it's done for fentry/fexit input arguments.
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Reviewed-by: Hao Luo <haoluo@google.com>
      Link: https://lore.kernel.org/bpf/20200619231703.738941-3-andriin@fb.com
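The resolution and ambiguity rules above can be sketched as follows (illustrative; the real code streams /proc/kallsyms itself, here its contents are passed in as a string): each line is `<addr> <type> <name>`, duplicate names with differing addresses are rejected, and a missing symbol yields address 0, the weak-extern default.

```c
#include <stdio.h>
#include <string.h>

/* Returns 0 on success (*addr = 0 if the symbol is absent, matching the
 * weak-extern default), or -1 if the symbol is ambiguous. */
static int find_ksym(const char *kallsyms, const char *sym,
                     unsigned long long *addr)
{
    const char *line = kallsyms;
    unsigned long long a;
    char type, name[128];
    int nfound = 0;

    *addr = 0;
    while (line && *line) {
        if (sscanf(line, "%llx %c %127s", &a, &type, name) == 3 &&
            strcmp(name, sym) == 0) {
            if (nfound && a != *addr)
                return -1;     /* same symbol, different addresses */
            *addr = a;
            nfound++;
        }
        line = strchr(line, '\n');   /* advance to the next line */
        line = line ? line + 1 : NULL;
    }
    return 0;
}
```

Whether a zero address is then accepted (weak extern) or treated as an error (strong extern) is the caller's decision, per the semantics described above.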
  28. 16 Jan 2020 (1 commit)
  29. 16 Dec 2019 (3 commits)
  30. 16 Nov 2019 (1 commit)