1. 06 Apr 2022, 1 commit
• libbpf: Wire up USDT API and bpf_link integration · 2e4913e0
  Andrii Nakryiko committed
      Wire up libbpf USDT support APIs without yet implementing all the
      nitty-gritty details of USDT discovery, spec parsing, and BPF map
      initialization.
      
The user-visible user-space API is simple and conceptually very
similar to the uprobe API.
      
The bpf_program__attach_usdt() API allows attaching a given BPF
program to a USDT programmatically, specified through a binary path
(executable or shared library), USDT provider, and name. Also, just
like in the uprobe case, a PID filter is specified (0 - self, -1 - any
process, or a specific PID). Optionally, a USDT cookie value can be
specified. Such a single API invocation will try to discover the given
USDT in the specified binary and will use (potentially many) BPF
uprobes to attach the program in the correct locations.
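
For illustration, a minimal user-space sketch of such an attachment
(the skeleton handle skel->progs.handle_setjmp, the libc path, and the
"libc"/"setjmp" provider/name pair are assumptions for this sketch):

  #include <errno.h>
  #include <stdio.h>
  #include <bpf/libbpf.h>

  /* hypothetical skeleton program and example USDT target */
  LIBBPF_OPTS(bpf_usdt_opts, opts, .usdt_cookie = 0xC0FFEE);
  struct bpf_link *link;

  link = bpf_program__attach_usdt(skel->progs.handle_setjmp,
                                  -1 /* any process */,
                                  "/lib/x86_64-linux-gnu/libc.so.6",
                                  "libc", "setjmp", &opts);
  if (!link)
          fprintf(stderr, "USDT attach failed: %d\n", -errno);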
      
Just like other bpf_program__attach_xxx() APIs, a bpf_link is returned
that represents this attachment. It is a virtual BPF link that doesn't
have a direct kernel object, as it can consist of multiple underlying
BPF uprobe links. As such, attachment is not an atomic operation and
there can be a brief moment when some USDT call sites are attached
while others are still in the process of attaching. This should be
taken into consideration by the user. But bpf_program__attach_usdt()
guarantees that in the case of success all USDT call sites are
successfully attached, or that all the successful attachments will be
detached as soon as some USDT call sites fail to be attached. So, in
theory, there could be cases where a failed bpf_program__attach_usdt()
call did trigger a few USDT program invocations. This is unavoidable
due to the multi-uprobe nature of USDT and has to be handled by the
user, if it's important to create an illusion of atomicity.
      
USDT BPF programs themselves are marked in BPF source code either as
SEC("usdt"), in which case they won't be auto-attached through the
skeleton's <skel>__attach() method, or with a full definition, which
follows the spirit of fully-specified uprobes:
SEC("usdt/<path>:<provider>:<name>"). In the latter case the
skeleton's attach method will attempt auto-attachment. Similarly,
generic bpf_program__attach() will have enough information to go off
of for parameterless attachment.
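
A hedged BPF-side sketch of the two declaration styles (handler bodies
are placeholders; argument-access helpers arrive in follow-up patches):

  #include <linux/ptrace.h>
  #include <bpf/bpf_helpers.h>

  /* fully-specified target: auto-attached by <skel>__attach() */
  SEC("usdt/libc.so.6:libc:setjmp")
  int handle_setjmp(struct pt_regs *ctx)
  {
          return 0;
  }

  /* bare "usdt": skipped by auto-attach, needs an explicit
   * bpf_program__attach_usdt() call from user space */
  SEC("usdt")
  int handle_any(struct pt_regs *ctx)
  {
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";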
      
USDT BPF programs are actually uprobes and, as such, are marked for
the kernel as BPF_PROG_TYPE_KPROBE.
      
      Another part of this patch is USDT-related feature probing:
        - BPF cookie support detection from user-space;
  - detection of kernel support for auto-refcounting of USDT semaphores.
      
The latter is optional. If the kernel doesn't support such a feature
and the USDT doesn't rely on USDT semaphores, no error is returned.
But if libbpf detects that the USDT requires setting semaphores and
the kernel doesn't support this, libbpf errors out with an explicit
pr_warn() message. Libbpf doesn't support poking a process's memory
directly to increment the semaphore value, like BCC does on legacy
kernels, due to the inherent raciness and danger of such process
memory manipulation. Libbpf lets the kernel take care of this properly
or gives up.
      
Logistically, all the extra USDT-related infrastructure of libbpf is
put into a separate usdt.c file and abstracted behind struct
usdt_manager. Each bpf_object has a lazily-initialized usdt_manager
pointer, which is instantiated only if attaching USDT programs is
attempted. Closing the BPF object frees up usdt_manager resources.
usdt_manager keeps track of USDT spec ID assignment and a few other
small things.
      
      Subsequent patches will fill out remaining missing pieces of USDT
      initialization and setup logic.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Alan Maguire <alan.maguire@oracle.com>
Link: https://lore.kernel.org/bpf/20220404234202.331384-3-andrii@kernel.org
2. 18 Mar 2022, 1 commit
3. 17 Feb 2022, 1 commit
4. 26 Jan 2022, 1 commit
5. 29 Dec 2021, 1 commit
6. 15 Dec 2021, 1 commit
• libbpf: Auto-bump RLIMIT_MEMLOCK if kernel needs it for BPF · e542f2c4
  Andrii Nakryiko committed
The need to increase RLIMIT_MEMLOCK to do anything useful with BPF is
one of the first extremely frustrating gotchas that all new BPF users
go through, and some have to learn it the hard way.
      
Luckily, starting with upstream Linux kernel version 5.11, the BPF
subsystem dropped the dependency on memlock and uses memcg-based
memory accounting instead. Unfortunately, detecting memcg-based BPF
memory accounting is far from trivial (as can be evidenced by this
patch), so in practice most BPF applications still do an unconditional
RLIMIT_MEMLOCK increase.
      
As we move towards libbpf 1.0, it would be good to allow users to
forget about RLIMIT_MEMLOCK vs memcg and let libbpf do the sensible
adjustment automatically. This patch paves the way forward in this
matter. Libbpf will do feature detection of memcg-based accounting
and, if detected, will do nothing. But if the kernel is too old, then,
just like BCC, libbpf will automatically increase RLIMIT_MEMLOCK on
behalf of the user application ([0]).
      
As this is technically a breaking change, during the transition period
applications have to opt into libbpf 1.0 mode by setting the
LIBBPF_STRICT_AUTO_RLIMIT_MEMLOCK bit when calling
libbpf_set_strict_mode().
      
Libbpf allows controlling the exact RLIMIT_MEMLOCK limit that gets set
with the libbpf_set_memlock_rlim_max() API. Passing 0 will make libbpf
do nothing with RLIMIT_MEMLOCK. libbpf_set_memlock_rlim_max() has to
be called before the first bpf_prog_load(), bpf_btf_load(), or
bpf_object__load() call; otherwise it has no effect and will return
-EBUSY.
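
A minimal sketch of opting in, assuming the API names from this patch
(the 128 MB cap is an arbitrary example value):

  #include <bpf/bpf.h>
  #include <bpf/libbpf.h>

  /* must run before the first bpf_prog_load()/bpf_btf_load()/
   * bpf_object__load() call to take effect */
  libbpf_set_strict_mode(LIBBPF_STRICT_AUTO_RLIMIT_MEMLOCK);
  libbpf_set_memlock_rlim_max(128UL * 1024 * 1024); /* example cap */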
      
  [0] Closes: https://github.com/libbpf/libbpf/issues/369
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20211214195904.1785155-2-andrii@kernel.org
7. 14 Dec 2021, 1 commit
8. 11 Dec 2021, 1 commit
• libbpf: Allow passing preallocated log_buf when loading BTF into kernel · 1a190d1e
  Andrii Nakryiko committed
Add a libbpf-internal btf_load_into_kernel() that allows a
preallocated log_buf and a custom log_level to be passed to the kernel
during the BPF_BTF_LOAD call. When a custom log_buf is provided,
btf_load_into_kernel() won't attempt a retry with an automatically
allocated internal temporary buffer to capture the BTF validation log.
      
It's important to note the relation between log_buf and log_level,
which slightly deviates from the stricter kernel logic. From the
kernel's POV, if log_buf is specified, log_level has to be > 0, and
vice versa. While the kernel has good reasons to request such
"sanity", this, in practice, is a bit inconvenient and restrictive for
libbpf's high-level bpf_object APIs.
      
So libbpf will allow setting a non-NULL log_buf with log_level == 0.
This is fine and means attempting to load BTF without logging
requested, but, if it fails, retrying the load with the custom log_buf
and log_level 1. Similar logic will be implemented for program
loading. In practice this means that users can provide a custom log
buffer just in case an error happens, without requesting slower
verbose logging all the time. This is also consistent with libbpf's
behavior when a custom log_buf is not set: libbpf first tries to load
everything with log_level=0, and only if an error happens does it
allocate an internal log buffer and retry with log_level=1.
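
Conceptually, the same quiet-then-verbose retry pattern, sketched with
the OPTS-based bpf_btf_load() API added in this same patch series
(buffer size and the btf_data/btf_size inputs are illustrative):

  char log_buf[64 * 1024];
  LIBBPF_OPTS(bpf_btf_load_opts, opts);

  int fd = bpf_btf_load(btf_data, btf_size, &opts); /* quiet attempt */
  if (fd < 0) {
          /* retry with the preallocated buffer and log_level 1 */
          opts.log_buf = log_buf;
          opts.log_size = sizeof(log_buf);
          opts.log_level = 1;
          fd = bpf_btf_load(btf_data, btf_size, &opts);
          if (fd < 0)
                  fprintf(stderr, "BTF load log:\n%s\n", log_buf);
  }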
      
Also, while at it, make the BTF validation log more obvious and follow
the log pattern libbpf uses for dumping the BPF verifier log during
BPF_PROG_LOAD. BTF loading resulting in an error will look like this:
      
      libbpf: BTF loading error: -22
      libbpf: -- BEGIN BTF LOAD LOG ---
      magic: 0xeb9f
      version: 1
      flags: 0x0
      hdr_len: 24
      type_off: 0
      type_len: 1040
      str_off: 1040
      str_len: 2063598257
      btf_total_size: 1753
      Total section length too long
      -- END BTF LOAD LOG --
      libbpf: Error loading .BTF into kernel: -22. BTF is optional, ignoring.
      
      This makes it much easier to find relevant parts in libbpf log output.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211209193840.1248570-4-andrii@kernel.org
9. 03 Dec 2021, 1 commit
10. 26 Nov 2021, 1 commit
11. 12 Nov 2021, 1 commit
12. 08 Nov 2021, 1 commit
• libbpf: Unify low-level BPF_PROG_LOAD APIs into bpf_prog_load() · d10ef2b8
  Andrii Nakryiko committed
Add a new unified OPTS-based low-level API for program loading,
bpf_prog_load() ([0]). bpf_prog_load() accepts a few "mandatory"
parameters as input arguments (program type, name, license,
instructions), while all the other optional fields (as in, not
required for all types of BPF programs) go into struct
bpf_prog_load_opts.
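
A hedged sketch of the new low-level call, loading a trivial
"return 0" socket filter (program name and buffer sizes are examples):

  #include <linux/bpf.h>
  #include <bpf/bpf.h>

  /* r0 = 0; exit */
  const struct bpf_insn insns[] = {
          { .code = BPF_ALU64 | BPF_MOV | BPF_K, .dst_reg = BPF_REG_0 },
          { .code = BPF_JMP | BPF_EXIT },
  };
  char log_buf[4096];
  LIBBPF_OPTS(bpf_prog_load_opts, opts,
          .log_buf = log_buf,
          .log_size = sizeof(log_buf),
          .log_level = 1);

  int fd = bpf_prog_load(BPF_PROG_TYPE_SOCKET_FILTER, "test_prog",
                         "GPL", insns,
                         sizeof(insns) / sizeof(insns[0]), &opts);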
      
This makes all the other non-extensible API variants for BPF_PROG_LOAD
obsolete; they are slated for deprecation in libbpf v0.7:
  - bpf_load_program();
  - bpf_load_program_xattr();
  - bpf_verify_program().
      
Implementation-wise, the internal helper libbpf__bpf_prog_load() is
refactored to become the public bpf_prog_load() API. The internally
used struct bpf_prog_load_params is replaced by the public struct
bpf_prog_load_opts.
      
Unfortunately, while conceptually all of this is pretty
straightforward, the biggest complication comes from the already
existing *high-level* bpf_prog_load() API, which has nothing to do
with the BPF_PROG_LOAD command.

We try really hard to have the new API named bpf_prog_load(), though,
because it maps naturally to the BPF_PROG_LOAD command.
      
For that, we rename the old bpf_prog_load() into
bpf_prog_load_deprecated() and mark it as COMPAT_VERSION() for shared
library users compiled against an old version of libbpf. Statically
linked users and shared lib users compiled against the new version of
libbpf headers will get "rerouted" to bpf_prog_load_deprecated()
through a macro helper that decides whether to use the new or the old
bpf_prog_load() based on the number of input arguments (see
___libbpf_overload in libbpf_common.h).
      
To test that existing bpf_prog_load()-using code compiles and works as
expected, I've compiled and run the selftests as-is. I had to remove
(locally) the selftests/bpf/Makefile
-Dbpf_prog_load=bpf_prog_test_load hack because it was conflicting
with the macro-based overload approach. I don't expect anyone else to
do something like this in practice, though. It is a testing-specific
way to replace bpf_prog_load() calls with a special testing variant
that adds an extra prog_flags value. After testing I kept this
selftests hack, but ensured that it uses the new
bpf_prog_load_deprecated name.
      
This patch also marks bpf_prog_load() and bpf_prog_load_xattr() as
deprecated. The bpf_object interface has to be used for working with
struct bpf_program; libbpf doesn't support loading just a bpf_program.
      
The silver lining is that when we get to libbpf 1.0, all these
complications will be gone and we'll have one clean bpf_prog_load()
low-level API with no backwards-compatibility hackery surrounding it.
      
  [0] Closes: https://github.com/libbpf/libbpf/issues/284
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211103220845.2676888-4-andrii@kernel.org
13. 29 Oct 2021, 2 commits
14. 22 Oct 2021, 2 commits
• libbpf: Use Elf64-specific types explicitly for dealing with ELF · ad23b723
  Andrii Nakryiko committed
Minimize the usage of class-agnostic gelf_xxx() APIs from libelf.
These APIs require copying ELF data structures into local GElf_xxx
structs and have a more cumbersome API. The BPF ELF file is defined to
always be a 64-bit ELF object, even when intended to run on 32-bit
host architectures, so there is no need to do class-agnostic
conversions everywhere. The BPF static linker implementation within
libbpf has been using Elf64-specific types since its initial
implementation.
      
Add two simple helpers, elf_sym_by_idx() and elf_rel_by_idx(), for
more succinct direct access to ELF symbol and relocation records
within the ELF data itself, and switch all the GElf_xxx usage to
Elf64_xxx equivalents. The only remaining place within libbpf.c that
still uses the gelf API is gelf_getclass(), as there doesn't seem to
be a direct way to get the underlying ELF bitness.
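
A simplified sketch of what such a helper looks like (the real ones
take a struct bpf_object; this standalone variant is for illustration):

  #include <elf.h>
  #include <libelf.h>

  /* direct Elf64 access: no GElf_Sym copy, just index into the
   * symbol table's raw data */
  static Elf64_Sym *elf_sym_by_idx(Elf_Data *symbols, size_t idx)
  {
          if (idx >= symbols->d_size / sizeof(Elf64_Sym))
                  return NULL;
          return (Elf64_Sym *)symbols->d_buf + idx;
  }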
      
      No functional changes intended.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20211021014404.2635234-4-andrii@kernel.org
• libbpf: Deprecate btf__finalize_data() and move it into libbpf.c · b96c07f3
  Andrii Nakryiko committed
There isn't a good use case where anyone but libbpf itself needs to
call btf__finalize_data(). It was implemented for internal use, and
it's not clear why it was made a public API in the first place. To
function, it requires active ELF data, which is stored inside
bpf_object for the duration of the opening phase only. But the only
BTF that needs bpf_object's ELF is that bpf_object's own BTF, which
libbpf fixes up automatically during the bpf_object__open() operation
anyway. There is no need for any additional fix-up, and there is no
reasonable scenario where it's useful and appropriate.
      
Thus, btf__finalize_data() is just an API atavism and is better
removed. So this patch marks it as deprecated immediately (v0.6+) and
moves the code from btf.c into libbpf.c, where it's used in the
context of the bpf_object opening phase. Such code co-location makes
the code structure more straightforward and allows removing the
bpf_object__section_size() and bpf_object__variable_offset() internal
helpers from libbpf_internal.h, making them static. Their naming is
also adjusted to not create the wrong illusion that they are some sort
of method of bpf_object. They are internal helpers and are named
accordingly.
      
      This is part of libbpf 1.0 effort ([0]).
      
  [0] Closes: https://github.com/libbpf/libbpf/issues/276
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20211021014404.2635234-2-andrii@kernel.org
15. 19 Oct 2021, 1 commit
16. 06 Oct 2021, 1 commit
• libbpf: Support kernel module function calls · 9dbe6015
  Kumar Kartikeya Dwivedi committed
This patch adds libbpf support for kernel module function calls. The
fd_array parameter is used during BPF program load to pass module BTFs
referenced by the program. insn->off is set to an index into this
array, but starts from 1, because insn->off == 0 is reserved for
btf_vmlinux.
      
We try to use the existing insn->off for a module, since the kernel
limits the maximum number of distinct module BTFs for kfuncs to 256,
and also because the index must never exceed the maximum value that
fits in insn->off (INT16_MAX). In the future, if the kernel interprets
the signed offset as unsigned for kfunc calls, this limit can be
increased to UINT16_MAX.
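
A conceptual sketch of the wiring (BPF_PSEUDO_KFUNC_CALL is the UAPI
marker for kfunc calls; module_btf_fd and kfunc_btf_type_id are
assumed inputs):

  /* fd_array is passed via bpf_attr during BPF_PROG_LOAD;
   * index 0 is reserved for btf_vmlinux */
  int fd_array[2] = { 0, module_btf_fd };

  /* a call instruction referencing a kfunc in the module's BTF */
  insn->src_reg = BPF_PSEUDO_KFUNC_CALL;
  insn->imm = kfunc_btf_type_id; /* type ID within the module BTF */
  insn->off = 1;                 /* 1-based index into fd_array */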
      
Also introduce a btf__find_by_name_kind_own() helper to start
searching from a module BTF's start ID when we know that the BTF ID is
not present in vmlinux BTF (in find_ksym_btf_id()).
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211002011757.311265-7-memxor@gmail.com
17. 29 Sep 2021, 1 commit
• libbpf: Reduce reliance of attach_fns on sec_def internals · 13d35a0c
  Andrii Nakryiko committed
Move closer to not relying on bpf_sec_def internals that won't be part
of the public API once pluggable SEC() handlers are allowed. Drop the
pre-calculated prefix length, and in various helpers don't rely on
this prefix length's availability. Also minimize reliance on knowing
bpf_sec_def's prefix in the few places where section prefix shortcuts
are supported (e.g., tp vs tracepoint, raw_tp vs raw_tracepoint).
      
Given that checking a string for a given string-constant prefix is
such a common operation and so annoying to do in pure C code, add a
small macro helper, str_has_pfx(), and reuse it throughout libbpf.c
wherever prefix comparison is performed. With __builtin_constant_p()
it's possible to have a convenient helper that checks a string for a
given prefix, where the prefix is either a string literal (or a
compile-time known string due to compiler optimization) or just a
runtime string pointer, which is quite convenient and saves a lot of
typing and string literal duplication.
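
The helper's shape is essentially the following (a sketch based on the
description above; usage shown as a comment):

  /* sizeof(pfx) - 1 avoids a strlen() call when pfx is a string
   * literal; otherwise fall back to runtime strlen() */
  #define str_has_pfx(str, pfx) \
          (strncmp(str, pfx, __builtin_constant_p(pfx) ? \
                   sizeof(pfx) - 1 : strlen(pfx)) == 0)

  /* usage: if (str_has_pfx(sec_name, "raw_tp/")) ... */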
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Dave Marchevsky <davemarchevsky@fb.com>
Link: https://lore.kernel.org/bpf/20210928161946.2512801-7-andrii@kernel.org
18. 15 Sep 2021, 1 commit
19. 08 Sep 2021, 1 commit
20. 17 Aug 2021, 1 commit
21. 27 Jul 2021, 2 commits
22. 26 May 2021, 2 commits
23. 25 May 2021, 1 commit
• libbpf: Add support for new llvm bpf relocations · 9f0c317f
  Yonghong Song committed
LLVM patch https://reviews.llvm.org/D102712 narrowed the scope of the
existing R_BPF_64_64 and R_BPF_64_32 relocations and added three new
relocations: R_BPF_64_ABS64, R_BPF_64_ABS32, and R_BPF_64_NODYLD32.
The main motivation is to make relocations linker-friendly.
      
This change, unfortunately, breaks the libbpf build, and we will see
errors like below:
        libbpf: ELF relo #0 in section #6 has unexpected type 2 in
           /home/yhs/work/bpf-next/tools/testing/selftests/bpf/bpf_tcp_nogpl.o
        Error: failed to link
           '/home/yhs/work/bpf-next/tools/testing/selftests/bpf/bpf_tcp_nogpl.o':
           Unknown error -22 (-22)
The new relocation R_BPF_64_ABS64 is generated, and the libbpf linker
sanity check doesn't understand it:
      Relocation section '.rel.struct_ops' at offset 0x1410 contains 1 entries:
          Offset             Info             Type               Symbol's Value  Symbol's Name
      0000000000000018  0000000700000002 R_BPF_64_ABS64         0000000000000000 nogpltcp_init
      
Look at selftests/bpf/bpf_tcp_nogpl.c:
        void BPF_STRUCT_OPS(nogpltcp_init, struct sock *sk)
        {
        }
      
        SEC(".struct_ops")
        struct tcp_congestion_ops bpf_nogpltcp = {
                .init           = (void *)nogpltcp_init,
                .name           = "bpf_nogpltcp",
        };
The new LLVM relocation scheme categorizes the 'nogpltcp_init'
reference as R_BPF_64_ABS64 instead of R_BPF_64_64, which in the new
scheme is used to specify ld_imm64 relocations.
      
Let us fix the linker sanity check by also accepting R_BPF_64_ABS64
and R_BPF_64_ABS32. There is no need to check for R_BPF_64_NODYLD32,
which is used only for .BTF and .BTF.ext.
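
A hedged sketch of the adjusted check (relocation type constants come
from <elf.h>; the surrounding linker code is paraphrased):

  size_t type = ELF64_R_TYPE(rel->r_info);

  /* R_BPF_64_NODYLD32 is deliberately absent: it is used only for
   * .BTF/.BTF.ext, which this path doesn't process */
  if (type != R_BPF_64_64 && type != R_BPF_64_32 &&
      type != R_BPF_64_ABS64 && type != R_BPF_64_ABS32)
          return -EINVAL;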
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20210522162341.3687617-1-yhs@fb.com
24. 19 May 2021, 1 commit
• libbpf: Generate loader program out of BPF ELF file. · 67234743
  Alexei Starovoitov committed
      The BPF program loading process performed by libbpf is quite complex
      and consists of the following steps:
      "open" phase:
      - parse elf file and remember relocations, sections
      - collect externs and ksyms including their btf_ids in prog's BTF
      - patch BTF datasec (since llvm couldn't do it)
      - init maps (old style map_def, BTF based, global data map, kconfig map)
      - collect relocations against progs and maps
      "load" phase:
      - probe kernel features
      - load vmlinux BTF
      - resolve externs (kconfig and ksym)
      - load program BTF
      - init struct_ops
      - create maps
      - apply CO-RE relocations
      - patch ld_imm64 insns with src_reg=PSEUDO_MAP, PSEUDO_MAP_VALUE, PSEUDO_BTF_ID
      - reposition subprograms and adjust call insns
      - sanitize and load progs
      
During this process libbpf makes sys_bpf() calls to load BTF, create
maps, populate maps, and finally load programs.
Instead of actually doing the syscalls, generate a trace of what
libbpf would have done and represent it as the "loader program".
The "loader program" consists of a single map with:
- union bpf_attr(s)
- BTF bytes
- map value bytes
- insns bytes
and a single BPF program that passes the bpf_attr(s) and data into the
bpf_sys_bpf() helper.
Executing such a "loader program" via the bpf_prog_test_run() command
will replay the sequence of syscalls that libbpf would have done,
resulting in the same maps created and programs loaded as specified in
the ELF file.
The "loader program" removes libelf and the majority of the libbpf
dependency from the program loading process.
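
A sketch of driving this from user space with the
bpf_object__gen_loader() API added in this series (filename and error
handling elided):

  DECLARE_LIBBPF_OPTS(gen_loader_opts, gen);
  struct bpf_object *obj = bpf_object__open_file("prog.bpf.o", NULL);

  bpf_object__gen_loader(obj, &gen); /* record, don't execute, syscalls */
  bpf_object__load(obj);             /* fills gen.insns / gen.data */

  /* gen.insns/gen.insns_sz and gen.data/gen.data_sz now describe the
   * loader program and its data blob, ready to be embedded and later
   * replayed via bpf_prog_test_run() */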
      
      kconfig, typeless ksym, struct_ops and CO-RE are not supported yet.
      
The order of relocate_data and relocate_calls had to change so that
bpf_gen__prog_load() can see all relocations for a given program with
correct insn_idx values.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210514003623.28033-15-alexei.starovoitov@gmail.com
25. 12 May 2021, 1 commit
26. 24 Apr 2021, 3 commits
27. 19 Mar 2021, 4 commits
• libbpf: Add BPF static linker APIs · faf6ed32
  Andrii Nakryiko committed
Introduce BPF static linker APIs to libbpf. The BPF static linker
allows performing static linking of multiple BPF object files into a
single combined resulting object file, preserving all the BPF
programs, maps, global variables, etc.
      
Data sections (.bss, .data, .rodata, .maps, maps, etc.) with the same
name are concatenated together. Similarly, code sections are also
concatenated. All the symbols and ELF relocations are also
concatenated in their respective ELF sections and are adjusted
accordingly to the new object file layout.
      
Static variables and functions are handled correctly as well,
adjusting BPF instruction offsets to reflect the new variable/function
offsets within the combined ELF section. Such relocations reference
STT_SECTION symbols, and that stays intact.
      
      Data sections in different files can have different alignment requirements, so
      that is taken care of as well, adjusting sizes and offsets as necessary to
      satisfy both old and new alignment requirements.
      
DWARF data sections are stripped out currently, as is the LLVM_ADDRSIG
section, which is ignored by libbpf in bpf_object__open() anyway. So,
in a way, the BPF static linker is an analogue of `llvm-strip -g`,
which is a pretty nice property, especially if the resulting .o file
is then used to generate a BPF skeleton.
      
Original string sections are ignored; instead, we construct our own
set of unique strings using the libbpf-internal `struct strset` API.
      
      To reduce the size of the patch, all the .BTF and .BTF.ext processing was
      moved into a separate patch.
      
The high-level API consists of just 4 functions:
  - bpf_linker__new() creates an instance of the BPF static linker. It
    accepts an output filename and a (currently empty) options struct;
  - bpf_linker__add_file() takes an input filename and appends it to
    the already processed ELF data; it can be called multiple times,
    once for each BPF ELF object file that needs to be linked in;
  - bpf_linker__finalize() needs to be called to dump the final ELF
    contents into the output file specified when the bpf_linker was
    created; after bpf_linker__finalize() is called, no more
    bpf_linker__add_file() or bpf_linker__finalize() calls are
    allowed; they will return an error;
  - regardless of whether bpf_linker__finalize() was called or not,
    bpf_linker__free() will free up all the used resources.
      
Currently, the BPF static linker doesn't resolve cross-object file
references (extern variables and/or functions). This will be added in
a follow-up patch set.
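
A minimal usage sketch of the four calls as introduced here (filenames
are examples; error handling elided):

  #include <bpf/libbpf.h>

  struct bpf_linker *linker = bpf_linker__new("combined.bpf.o", NULL);

  bpf_linker__add_file(linker, "foo.bpf.o");
  bpf_linker__add_file(linker, "bar.bpf.o");
  bpf_linker__finalize(linker); /* writes out combined.bpf.o */
  bpf_linker__free(linker);     /* safe whether or not finalized */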
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210318194036.3521577-7-andrii@kernel.org
• libbpf: Rename internal memory-management helpers · 3b029e06
  Andrii Nakryiko committed
Rename the btf_add_mem() and btf_ensure_mem() helpers that abstract
away details of dynamically resizable memory to use the libbpf_
prefix, as they are not BTF-specific. No functional changes.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210318194036.3521577-4-andrii@kernel.org
• libbpf: Generalize BTF and BTF.ext type ID and strings iteration · f36e99a4
  Andrii Nakryiko committed
Extract and generalize the logic to iterate BTF type ID and string
offset fields within BTF types and .BTF.ext data. Expose this
internally in libbpf for re-use by bpf_linker.

Additionally, complete string deduplication handling for .BTF.ext
(e.g., CO-RE access strings), which was previously missing. There was
previously no case of deduplicating .BTF.ext data, but bpf_linker is
going to use it.
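
A hedged sketch of the visitor shape this exposes internally (the
callback signature follows the description above; the ID-remapping
context is an example use):

  /* invoked for every type-ID field of a BTF type; may rewrite it */
  static int remap_type_id(__u32 *type_id, void *ctx)
  {
          const __u32 *id_map = ctx;

          *type_id = id_map[*type_id]; /* e.g., bpf_linker ID remap */
          return 0;
  }

  /* err = btf_type_visit_type_ids(t, remap_type_id, id_map); */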
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210318194036.3521577-3-andrii@kernel.org
• libbpf: Expose btf_type_by_id() internally · e14ef4bf
  Andrii Nakryiko committed
btf_type_by_id() is an internal-only convenience API returning a
non-const pointer to struct btf_type. Expose it outside of btf.c for
re-use.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210318194036.3521577-2-andrii@kernel.org
28. 05 Mar 2021, 1 commit
29. 04 Dec 2020, 3 commits