1. 30 March 2020, 2 commits
  2. 29 March 2020, 1 commit
  3. 03 March 2020, 1 commit
    • libbpf: Add bpf_link pinning/unpinning · c016b68e
      Committed by Andrii Nakryiko
      With the bpf_link abstraction now explicitly supported by the kernel, add
      a pinning/unpinning API for links. Also allow creating (opening) a
      bpf_link from a BPF FS file.
      
      This API allows "ephemeral" FD-based BPF links (like raw tracepoint or
      fexit/freplace attachments) to survive user process exit by pinning them
      in a BPF FS, which is an important use case for long-running BPF programs.
      
      As part of this, expose the underlying FD for bpf_link. While legacy
      bpf_links might not have an FD associated with them (which will be
      expressed as a bpf_link with fd=-1), the kernel's abstraction is built
      around FD-based usage, so match it closely. This, in turn, allows a
      generic pinning/unpinning API for the generalized bpf_link. For some
      types of bpf_links the kernel might not support pinning, in which case
      bpf_link__pin() will return an error.
      
      With the FD being part of the generic bpf_link, also get rid of
      bpf_link_fd in favor of using vanilla bpf_link.
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20200303043159.323675-3-andriin@fb.com
      c016b68e
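
      A minimal usage sketch of the pinning API described above (a hedged
      illustration, not part of the original patch; the object file, section
      name, and pin path are hypothetical; error handling trimmed):

      #include <stdio.h>
      #include <bpf/libbpf.h>

      int pin_raw_tp_link(void)
      {
      	struct bpf_object *obj;
      	struct bpf_program *prog;
      	struct bpf_link *link;

      	obj = bpf_object__open_file("my_obj.o", NULL);
      	if (libbpf_get_error(obj) || bpf_object__load(obj))
      		return -1;

      	/* hypothetical program in SEC("raw_tp/sys_enter") */
      	prog = bpf_object__find_program_by_title(obj, "raw_tp/sys_enter");
      	if (!prog)
      		return -1;
      	link = bpf_program__attach_raw_tracepoint(prog, "sys_enter");
      	if (libbpf_get_error(link))
      		return -1;

      	/* pin the link in BPF FS so the attachment outlives this process */
      	if (bpf_link__pin(link, "/sys/fs/bpf/my_sys_enter_link"))
      		return -1;

      	/* bpf_link__fd() exposes the underlying FD (-1 for legacy links) */
      	fprintf(stderr, "link fd: %d\n", bpf_link__fd(link));

      	/* destroying the link closes our FD; the pinned file keeps it alive */
      	bpf_link__destroy(link);
      	return 0;
      }
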
  4. 21 February 2020, 2 commits
  5. 23 January 2020, 1 commit
  6. 16 January 2020, 2 commits
  7. 10 January 2020, 1 commit
    • bpf: libbpf: Add STRUCT_OPS support · 590a0088
      Committed by Martin KaFai Lau
      This patch adds BPF STRUCT_OPS support to libbpf.
      
      The only sec_name convention is SEC(".struct_ops"), which identifies the
      struct_ops implemented in BPF, e.g. to implement a tcp_congestion_ops:
      
      SEC(".struct_ops")
      struct tcp_congestion_ops dctcp = {
      	.init           = (void *)dctcp_init,  /* <-- a bpf_prog */
      	/* ... some more func ptrs ... */
      	.name           = "bpf_dctcp",
      };
      
      Each struct_ops is defined as a global variable under SEC(".struct_ops")
      as above.  libbpf creates a map for each variable, and the variable name
      is the map's name.  Multiple struct_ops are supported under
      SEC(".struct_ops").
      
      In the bpf_object__open phase, libbpf will look for the SEC(".struct_ops")
      section and find out which btf-type the struct_ops is implementing.  Note
      that the btf-type here refers to a type in the bpf_prog.o's btf.  A
      "struct bpf_map" is added by bpf_object__add_map() as for other maps.
      libbpf will then collect (through SHT_REL) the bpf progs that the func
      ptrs refer to.  No btf_vmlinux is needed in the open phase.
      
      In the bpf_object__load phase, the map-fields, which depend
      on the btf_vmlinux, are initialized (in bpf_map__init_kern_struct_ops()).
      It will also set the prog->type, prog->attach_btf_id, and
      prog->expected_attach_type.  Thus, the prog's properties do
      not rely on its section name.
      [ Currently, the bpf_prog's btf-type ==> btf_vmlinux's btf-type matching
        process is as simple as: member-name match + btf-kind match + size match.
        If these matching conditions fail, libbpf will reject it.
        The currently supported target is "struct tcp_congestion_ops", most of
        whose members are function pointers.
        The member ordering of the bpf_prog's btf-type can be different from
        the btf_vmlinux's btf-type. ]
      
      Then, all obj->maps are created as usual (in bpf_object__create_maps()).
      
      Once the maps are created and the progs' properties are all set,
      libbpf will proceed to load all the progs.
      
      bpf_map__attach_struct_ops() is added to register a struct_ops
      map to a kernel subsystem.
      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20200109003514.3856730-1-kafai@fb.com
      590a0088
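
      A hedged userspace-side sketch of the flow described above (the object
      file name "bpf_dctcp.o" mirrors the dctcp example; not part of the
      original patch; error handling trimmed):

      #include <bpf/libbpf.h>

      int register_dctcp(void)
      {
      	struct bpf_object *obj;
      	struct bpf_map *map;
      	struct bpf_link *link;

      	/* open + load: the SEC(".struct_ops") variable "dctcp" becomes
      	 * a map with the same name */
      	obj = bpf_object__open_file("bpf_dctcp.o", NULL);
      	if (libbpf_get_error(obj) || bpf_object__load(obj))
      		return -1;

      	map = bpf_object__find_map_by_name(obj, "dctcp");
      	if (!map)
      		return -1;

      	/* register the struct_ops map with the kernel subsystem (TCP CC) */
      	link = bpf_map__attach_struct_ops(map);
      	if (libbpf_get_error(link))
      		return -1;

      	return 0;
      }
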
  8. 09 January 2020, 1 commit
  9. 20 December 2019, 1 commit
    • libbpf: Introduce bpf_prog_attach_xattr · cdbee383
      Committed by Andrey Ignatov
      Introduce a new bpf_prog_attach_xattr function that, in addition to
      program fd, target fd and attach type, accepts an extendable struct
      bpf_prog_attach_opts.
      
      bpf_prog_attach_opts relies on the DECLARE_LIBBPF_OPTS macro to maintain
      backward and forward compatibility and has the following "optional"
      attach attributes:
      
      * the existing attach_flags, since it's not required when attaching in
        NONE mode. Even though it's quite often used in MULTI and OVERRIDE
        mode, it seems to be a good idea to reduce the number of arguments to
        bpf_prog_attach_xattr;
      
      * a newly introduced attribute of the BPF_PROG_ATTACH command:
        replace_prog_fd, which is the fd of a previously attached cgroup-bpf
        program to replace when the BPF_F_REPLACE flag is used.
      
      The new function is named to be consistent with other xattr-functions
      (bpf_prog_test_run_xattr, bpf_create_map_xattr, bpf_load_program_xattr).
      
      The struct bpf_prog_attach_opts is supposed to be used with
      DECLARE_LIBBPF_OPTS macro.
      Signed-off-by: Andrey Ignatov <rdna@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Andrii Nakryiko <andriin@fb.com>
      Link: https://lore.kernel.org/bpf/bd6e0732303eb14e4b79cb128268d9e9ad6db208.1576741281.git.rdna@fb.com
      cdbee383
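
      A hedged sketch of how the new function might be used to replace an
      already attached cgroup-bpf program (not from the original patch; the
      attach type and all FDs are illustrative):

      #include <bpf/bpf.h>
      #include <bpf/libbpf.h>

      /* cgroup_fd, new_prog_fd and old_prog_fd are assumed to be valid FDs
       * obtained elsewhere */
      int replace_egress_prog(int cgroup_fd, int new_prog_fd, int old_prog_fd)
      {
      	DECLARE_LIBBPF_OPTS(bpf_prog_attach_opts, opts,
      		.flags = BPF_F_ALLOW_MULTI | BPF_F_REPLACE,
      		.replace_prog_fd = old_prog_fd,
      	);

      	return bpf_prog_attach_xattr(new_prog_fd, cgroup_fd,
      				     BPF_CGROUP_INET_EGRESS, &opts);
      }
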
  10. 19 December 2019, 1 commit
    • libbpf: Add bpf_link__disconnect() API to preserve underlying BPF resource · d6958706
      Committed by Andrii Nakryiko
      There are cases in which a BPF resource (program, map, etc.) has to
      outlive the userspace program that "installed" it in the system in the
      first place. When a BPF program is attached, libbpf returns a bpf_link
      object, which is supposed to be destroyed through the bpf_link__destroy()
      API once it is no longer necessary. Currently, bpf_link destruction causes
      both automatic detachment and freeing of any resources allocated for the
      bpf_link's in-memory representation. This is inconvenient for the case
      described above because of the coupling of detachment and resource
      freeing.
      
      This patch introduces the bpf_link__disconnect() API call, which marks a
      bpf_link as disconnected from its underlying BPF resources. This means
      that when the bpf_link is destroyed later, all of its memory resources
      will be freed, but the BPF resource itself won't be detached.
      
      This design allows following a strict and resource-leak-free approach by
      default, while giving user code an easy and straightforward way to opt
      for keeping a BPF resource attached beyond the lifetime of a bpf_link.
      For some BPF programs (i.e., FS-based tracepoints, kprobes, raw
      tracepoints, etc.), the user has to make sure to pin the BPF program to
      prevent the kernel from automatically detaching it on process exit. This
      should typically be achieved by pinning the BPF program (or map in some
      cases) in a BPF FS.
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Martin KaFai Lau <kafai@fb.com>
      Link: https://lore.kernel.org/bpf/20191218225039.2668205-1-andriin@fb.com
      d6958706
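
      A short sketch of the disconnect pattern described above (hedged; the
      kprobe target and pin path are hypothetical, and "prog" is assumed to
      come from an already loaded object):

      #include <bpf/libbpf.h>

      int install_and_leave_attached(struct bpf_program *prog)
      {
      	struct bpf_link *link;

      	link = bpf_program__attach_kprobe(prog, false /* !retprobe */,
      					  "do_sys_open");
      	if (libbpf_get_error(link))
      		return -1;

      	/* pin the program (as suggested above) so it outlives this process */
      	if (bpf_program__pin(prog, "/sys/fs/bpf/my_kprobe_prog")) {
      		bpf_link__destroy(link);
      		return -1;
      	}

      	/* disconnected: the destroy below frees only the in-memory bpf_link
      	 * and does not detach the underlying BPF resource */
      	bpf_link__disconnect(link);
      	bpf_link__destroy(link);
      	return 0;
      }
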
  11. 16 December 2019, 5 commits
  12. 11 December 2019, 1 commit
  13. 16 November 2019, 2 commits
  14. 11 November 2019, 2 commits
  15. 03 November 2019, 1 commit
  16. 31 October 2019, 1 commit
  17. 21 October 2019, 1 commit
  18. 06 October 2019, 1 commit
    • libbpf: add bpf_object__open_{file, mem} w/ extensible opts · 2ce8450e
      Committed by Andrii Nakryiko
      Add a new set of bpf_object__open APIs using a new approach to
      optional-parameter extensibility that makes ABI compatibility simpler
      to maintain.
      
      This patch demonstrates an approach to implementing libbpf APIs that
      makes it easy to extend existing APIs with extra optional parameters in
      such a way that ABI compatibility is preserved without having to do
      symbol versioning and generate lots of boilerplate code to handle it.
      To facilitate succinct code for working with options, add OPTS_VALID,
      OPTS_HAS, and OPTS_GET macros that hide all the NULL, size, and zero
      checks.
      
      Additionally, newly added libbpf APIs are encouraged to follow a similar
      pattern of having all mandatory parameters as formal function parameters
      and always taking an optional (NULL-able) xxx_opts struct, which should
      always have the real struct size as its first field, with the rest being
      optional parameters added over time that tune the behavior of the
      existing API, if specified by the user.
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      2ce8450e
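
      A brief sketch of the opts-based open API (hedged; the path and the
      custom object name are illustrative, not from the original patch):

      #include <bpf/libbpf.h>

      struct bpf_object *open_with_opts(void)
      {
      	/* the macro fills in .sz; unset fields stay zeroed */
      	DECLARE_LIBBPF_OPTS(bpf_object_open_opts, opts,
      		.object_name = "my_custom_name",
      	);

      	/* passing NULL instead of &opts means "all defaults" */
      	return bpf_object__open_file("my_obj.o", &opts);
      }
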
  19. 02 October 2019, 1 commit
  20. 31 August 2019, 1 commit
  21. 21 August 2019, 1 commit
  22. 16 August 2019, 1 commit
    • libbpf: make libbpf.map source of truth for libbpf version · dadb81d0
      Committed by Andrii Nakryiko
      Currently the libbpf version is specified in 2 places: libbpf.map and the
      Makefile. They easily get out of sync, and it's very easy to update one
      but forget to update the other. In addition, the Github projection of
      libbpf has to maintain its own version, which has to be kept in sync
      manually, which is a very error-prone approach.
      
      This patch makes libbpf.map the source of truth for the libbpf version
      and uses a shell invocation to parse out the correct full and major
      libbpf version to use during the build. Now, once a new release cycle
      starts, we need to add an (initially) empty section to libbpf.map with
      the correct latest version.
      
      This will also make it possible to keep the Github projection consistent
      with the kernel-sources version of libbpf by adopting similar parsing of
      the version from libbpf.map.
      
      v2->v3:
      - grep -o + sort -rV (Andrey);
      
      v1->v2:
      - eager version vars evaluation (Jakub);
      - simplified version regex (Andrey);
      
      Cc: Andrey Ignatov <rdna@fb.com>
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Acked-by: Andrey Ignatov <rdna@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      dadb81d0
  23. 08 July 2019, 1 commit
    • libbpf: add perf buffer API · fb84b822
      Committed by Andrii Nakryiko
      The BPF_MAP_TYPE_PERF_EVENT_ARRAY map is often used to send data from a
      BPF program to user space for additional processing. libbpf already has a
      very low-level API to read a single CPU's perf buffer,
      bpf_perf_event_read_simple(), but it's hard to use and requires a lot of
      code to set everything up. This patch adds a perf_buffer abstraction on
      top of it, wrapping the per-CPU setup and polling logic into a simple and
      convenient API, similar to what BCC provides.
      
      perf_buffer__new() sets up per-CPU ring buffers and updates the
      corresponding BPF map entries. It accepts two user-provided callbacks:
      one for handling raw samples and one for getting notifications of samples
      lost due to buffer overflow.
      
      perf_buffer__new_raw() is similar, but provides more control over how
      perf events are set up (by accepting user-provided perf_event_attr), how
      they are handled (perf_event_header pointer is passed directly to
      user-provided callback), and on which CPUs ring buffers are created
      (it's possible to provide a list of CPUs and corresponding map keys to
      update). This API allows advanced users fuller control.
      
      perf_buffer__poll() is used to fetch ring buffer data across all CPUs,
      utilizing an epoll instance.
      
      perf_buffer__free() does the corresponding clean-up and unsets FDs from
      the BPF map.
      
      None of these APIs are thread-safe. Users should ensure proper
      locking/coordination if they are used in a multi-threaded setup.
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Acked-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      fb84b822
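
      A condensed usage sketch of the perf_buffer API described above (hedged;
      the map FD, page count, and callback bodies are illustrative, and the
      opts-based perf_buffer__new() signature matches this era of libbpf):

      #include <stdio.h>
      #include <linux/types.h>
      #include <bpf/libbpf.h>

      static void handle_sample(void *ctx, int cpu, void *data, __u32 size)
      {
      	/* process one raw sample delivered by the BPF program */
      	fprintf(stderr, "cpu %d: got %u bytes\n", cpu, size);
      }

      static void handle_lost(void *ctx, int cpu, __u64 cnt)
      {
      	fprintf(stderr, "cpu %d: lost %llu samples\n", cpu,
      		(unsigned long long)cnt);
      }

      int poll_events(int perf_map_fd)
      {
      	struct perf_buffer_opts pb_opts = {
      		.sample_cb = handle_sample,
      		.lost_cb = handle_lost,
      	};
      	struct perf_buffer *pb;
      	int err;

      	/* 8 pages per per-CPU ring buffer; map entries are set internally */
      	pb = perf_buffer__new(perf_map_fd, 8, &pb_opts);
      	if (libbpf_get_error(pb))
      		return -1;

      	/* poll all per-CPU buffers with a 100ms timeout */
      	while ((err = perf_buffer__poll(pb, 100)) >= 0)
      		;

      	perf_buffer__free(pb);
      	return err;
      }
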
  24. 06 July 2019, 5 commits
  25. 11 June 2019, 1 commit
  26. 28 May 2019, 1 commit
  27. 25 May 2019, 1 commit
    • libbpf: add btf_dump API for BTF-to-C conversion · 351131b5
      Committed by Andrii Nakryiko
      BTF contains enough type information to allow generating a valid,
      compilable C header with the correct layout of structs/unions and all the
      typedef/enum definitions. This patch adds a new "object", btf_dump, to
      facilitate dumping BTF as valid C. btf_dump__dump_type() is the main API,
      which takes care of dumping out (through a user-provided printf-like
      callback function) C definitions for a given type ID and its required
      dependencies. This allows not just dumping out the entirety of BTF types,
      but also selective filtering based on user-provided criteria with a
      minimal set of dependent types.
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      351131b5
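
      A small sketch of the btf_dump API described above (hedged; it follows
      the constructor signature from the time of this commit, and the printf
      callback simply writes to stdout):

      #include <stdio.h>
      #include <stdarg.h>
      #include <linux/types.h>
      #include <bpf/btf.h>
      #include <bpf/libbpf.h>

      static void print_to_stdout(void *ctx, const char *fmt, va_list args)
      {
      	vprintf(fmt, args);
      }

      /* emit the C definition of one BTF type plus its required dependencies */
      int dump_one_type(const struct btf *btf, __u32 type_id)
      {
      	struct btf_dump *d;
      	int err;

      	d = btf_dump__new(btf, NULL /* btf_ext */, NULL /* opts */,
      			  print_to_stdout);
      	if (libbpf_get_error(d))
      		return -1;

      	err = btf_dump__dump_type(d, type_id);
      	btf_dump__free(d);
      	return err;
      }
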