1. 22 Aug 2020, 1 commit
    • libbpf: Add perf_buffer APIs for better integration with outside epoll loop · dca5612f
      Committed by Andrii Nakryiko
      Add a set of APIs to the perf_buffer manager that allow applications to
      integrate perf buffer polling into existing epoll-based infrastructure. One
      example is an application that already uses libevent and wants to plug in
      perf_buffer polling instead of relying on perf_buffer__poll() and wasting an
      extra thread on it. Even for such use cases, perf_buffer remains extremely
      useful for setting up and consuming the perf buffer rings.
      
      So to accommodate such new use cases, add three new APIs:
        - perf_buffer__buffer_cnt() returns the number of per-CPU buffers maintained
          by a given instance of the perf_buffer manager;
        - perf_buffer__buffer_fd() returns the FD of the perf_event corresponding to
          a specified per-CPU buffer; this FD can then be polled independently;
        - perf_buffer__consume_buffer() consumes data from a single per-CPU buffer,
          identified by its slot index.
      
      To support a simpler, but less efficient, way to integrate perf_buffer into
      external polling logic, also expose the underlying epoll FD through the
      perf_buffer__epoll_fd() API. It needs to be followed by perf_buffer__poll(),
      wasting an extra syscall, or perf_buffer__consume(), wasting CPU on iterating
      buffers with no data, but it can be simpler and more convenient for some
      cases.
      
      These APIs allow for great flexibility, but do not sacrifice the general
      usability of perf_buffer.
      
      Also exercise and check new APIs in perf_buffer selftest.
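      
      As a minimal illustration (not part of the patch), the per-buffer FDs could be
      wired into an external epoll loop roughly like this; error handling is omitted
      and the helper names are made up:
      
      #include <sys/epoll.h>
      #include <bpf/libbpf.h>
      
      /* register every per-CPU buffer FD with an existing epoll instance */
      static int register_perf_buffers(struct perf_buffer *pb, int epfd)
      {
      	size_t i, n = perf_buffer__buffer_cnt(pb);
      
      	for (i = 0; i < n; i++) {
      		struct epoll_event ev = {
      			.events = EPOLLIN,
      			.data.u32 = i, /* remember which per-CPU buffer this FD belongs to */
      		};
      
      		if (epoll_ctl(epfd, EPOLL_CTL_ADD, perf_buffer__buffer_fd(pb, i), &ev))
      			return -1;
      	}
      	return 0;
      }
      
      /* consume only the buffers that epoll reported as ready */
      static void drain_ready_buffers(struct perf_buffer *pb, int epfd)
      {
      	struct epoll_event events[8];
      	int i, n = epoll_wait(epfd, events, 8, -1 /* block indefinitely */);
      
      	for (i = 0; i < n; i++)
      		perf_buffer__consume_buffer(pb, events[i].data.u32);
      }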
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Reviewed-by: Alan Maguire <alan.maguire@oracle.com>
      Link: https://lore.kernel.org/bpf/20200821165927.849538-1-andriin@fb.com
  2. 07 Aug 2020, 1 commit
  3. 02 Aug 2020, 1 commit
  4. 26 Jul 2020, 2 commits
  5. 18 Jul 2020, 1 commit
  6. 29 Jun 2020, 1 commit
    • libbpf: Support disabling auto-loading BPF programs · d9297581
      Committed by Andrii Nakryiko
      Currently, bpf_object__load() (and, by extension, the skeleton's load) will
      always attempt to prepare, relocate, and load into the kernel every single BPF
      program found inside the BPF object file. This is often convenient, the right
      thing to do, and what users expect.
      
      But there are plenty of cases (especially with BPF development constantly
      picking up the pace) where a BPF application is intended to work with old
      kernels and their potentially reduced set of features, while on kernels
      supporting extra features it would like to take full advantage of them by
      employing extra BPF programs. This could be a choice of fentry/fexit over
      kprobe/kretprobe if the kernel is recent enough and built with BTF, or a BPF
      program providing an optimized bpf_iter-based solution that user space wants
      to use whenever available, and so on.
      
      With libbpf and BPF CO-RE in particular, it's advantageous not to have to
      maintain two separate BPF object files to achieve this. So to enable such use
      cases, this patch adds the ability to request that chosen BPF programs not be
      auto-loaded. In that case, libbpf won't attempt to perform relocations (which
      might fail due to an old kernel), won't try to resolve BTF types for BTF-aware
      (tp_btf/fentry/fexit/etc.) program types (because BTF might not be present),
      and so on. The skeleton will also automatically skip the auto-attachment step
      for such non-loaded BPF programs.
      
      Overall, this feature simplifies development and deployment of real-world BPF
      applications with complicated compatibility requirements.
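      
      A sketch of the intended usage with a skeleton (the skeleton name, program
      name, and feature probe below are hypothetical):
      
      #include <bpf/libbpf.h>
      #include "my_prog.skel.h" /* hypothetical skeleton */
      
      int load_with_fallback(void)
      {
      	struct my_prog *skel = my_prog__open();
      
      	if (!skel)
      		return -1;
      
      	/* On older kernels, skip the fentry-based program and rely on the
      	 * kprobe-based fallback also present in the object file. */
      	if (!kernel_supports_fentry() /* hypothetical probe */)
      		bpf_program__set_autoload(skel->progs.handle_exec_fentry, false);
      
      	return my_prog__load(skel);
      }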
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20200625232629.3444003-2-andriin@fb.com
  7. 23 Jun 2020, 1 commit
    • libbpf: Add a bunch of attribute getters/setters for map definitions · 1bdb6c9a
      Committed by Andrii Nakryiko
      Add a bunch of getters for various aspects of a BPF map. Some of these
      attributes (e.g., key_size, value_size, type) are available right now in
      struct bpf_map_def, but this patch adds getters allowing them to be fetched
      individually. The bpf_map_def approach isn't very scalable once ABI stability
      requirements are taken into account; it's much easier to extend libbpf and add
      support for new features when each aspect of a BPF map has a separate
      getter/setter.
      
      Getters follow the common naming convention of not having "get" in their
      names: bpf_map__type() returns the map type, bpf_map__key_size() returns the
      key_size. Setters, though, explicitly have "set" in their names:
      bpf_map__set_type(), bpf_map__set_key_size().
      
      This patch ensures we now have a getter and a setter for the following
      map attributes:
        - type;
        - max_entries;
        - map_flags;
        - numa_node;
        - key_size;
        - value_size;
        - ifindex.
      
      bpf_map__resize() enforces an unnecessary restriction of max_entries > 0. It
      is unnecessary because libbpf actually supports zero max_entries in some cases
      (e.g., for PERF_EVENT_ARRAY maps) and treats it specially at map creation
      time. To allow setting max_entries = 0, a new bpf_map__set_max_entries()
      setter is added. bpf_map__resize()'s behavior is preserved for
      backwards-compatibility reasons.
      
      A map ifindex getter is added as well: there was already a setter, but no
      corresponding getter, so fix this asymmetry too. bpf_map__set_ifindex() itself
      is converted from a void function into an error-returning one, similar to the
      other setters. The only error returned right now is -EBUSY, if the BPF map is
      already loaded and has a corresponding FD.
      
      One attribute that could not be read, set, or even specified declaratively was
      numa_node. This patch fills that gap by adding a programmatic getter/setter as
      well as support for a numa_node field in BTF-defined maps.
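      
      A minimal sketch of the per-attribute getters/setters (the object and map
      names are illustrative):
      
      #include <stdio.h>
      #include <bpf/libbpf.h>
      
      int tune_map(struct bpf_object *obj)
      {
      	struct bpf_map *map = bpf_object__find_map_by_name(obj, "events");
      	int err;
      
      	if (!map)
      		return -1;
      
      	/* unlike bpf_map__resize(), the new setter accepts 0, which libbpf
      	 * treats specially for PERF_EVENT_ARRAY maps */
      	err = bpf_map__set_max_entries(map, 0);
      	if (!err)
      		err = bpf_map__set_numa_node(map, 0);
      	if (err)
      		return err;
      
      	printf("type=%d key=%u value=%u\n", bpf_map__type(map),
      	       bpf_map__key_size(map), bpf_map__value_size(map));
      	return 0;
      }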
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
      Link: https://lore.kernel.org/bpf/20200621062112.3006313-1-andriin@fb.com
  8. 02 Jun 2020, 3 commits
    • libbpf: Add support for bpf_link-based netns attachment · d60d81ac
      Committed by Jakub Sitnicki
      Add bpf_program__attach_netns(), which uses the LINK_CREATE subcommand to
      create an FD-based kernel bpf_link for attach types tied to a network
      namespace, that is, BPF_FLOW_DISSECTOR for the moment.
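      
      A minimal sketch of attaching a flow-dissector program to the current network
      namespace (the program name is illustrative, and closing of the namespace FD
      is omitted):
      
      #include <fcntl.h>
      #include <bpf/libbpf.h>
      
      struct bpf_link *attach_flow_dissector(struct bpf_object *obj)
      {
      	struct bpf_program *prog =
      		bpf_object__find_program_by_name(obj, "flow_dissector");
      	int netns_fd = open("/proc/self/ns/net", O_RDONLY);
      
      	if (!prog || netns_fd < 0)
      		return NULL;
      
      	/* creates an FD-based bpf_link tied to this network namespace */
      	return bpf_program__attach_netns(prog, netns_fd);
      }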
      Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20200531082846.2117903-7-jakub@cloudflare.com
    • libbpf: Add BPF ring buffer support · bf99c936
      Committed by Andrii Nakryiko
      Declaring and instantiating a BPF ring buffer doesn't require any changes to
      libbpf, as it's just another type of map. So the existing BTF-defined map
      syntax with __uint(type, BPF_MAP_TYPE_RINGBUF) and __uint(max_entries,
      <size-of-ring-buf>) is all that's necessary to create and use a BPF ring
      buffer.
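      
      For example, the BPF-side declaration could look like this (the map name is
      illustrative; the usual <linux/bpf.h> and <bpf/bpf_helpers.h> includes are
      assumed):
      
      struct {
      	__uint(type, BPF_MAP_TYPE_RINGBUF);
      	__uint(max_entries, 256 * 1024); /* ring buffer size in bytes */
      } rb SEC(".maps");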
      
      This patch adds a BPF ring buffer consumer to libbpf. It is very similar to
      the perf_buffer implementation in terms of API, but also attempts to fix some
      minor problems and inconveniences of the existing perf_buffer API.
      
      ring_buffer supports both the single ring buffer use case (just use
      ring_buffer__new()) and adding more ring buffers, each with its own callback
      and context. This allows efficiently polling and consuming multiple,
      potentially completely independent, ring buffers using a single epoll
      instance.
      
      The latter is actually a problem in practice for applications that use
      multiple sets of perf buffers. They have to create multiple instances of
      struct perf_buffer and poll them independently or in a loop, each approach
      having its own problems (e.g., the inability to use a common poll timeout).
      struct ring_buffer eliminates this problem by aggregating many independent
      ring buffer instances under a single "ring buffer manager".
      
      Second, perf_buffer's callback can't return an error, so applications that
      need to stop polling, due to an error in the data or data signalling the end,
      have to use extra mechanisms to signal that polling has to stop. ring_buffer's
      callback can return an error, which is passed back to user code and can be
      acted upon appropriately.
      
      Two APIs allow consuming ring buffer data:
        - ring_buffer__poll() waits for a data availability notification and
          consumes data only from the reported ring buffer(s); this API uses
          resources efficiently by reading data only when it becomes available;
        - ring_buffer__consume() attempts to read new records regardless of the
          data availability notification sub-system. This API is useful when the
          lowest latency is required, at the expense of burning CPU resources.
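      
      A minimal userspace consumer sketch (the "rb" map name matches the declaration
      above; error handling is simplified):
      
      #include <bpf/libbpf.h>
      
      static int handle_event(void *ctx, void *data, size_t size)
      {
      	/* returning a negative value stops ring_buffer__poll()/__consume() */
      	return 0;
      }
      
      int consume(struct bpf_object *obj)
      {
      	int map_fd = bpf_object__find_map_fd_by_name(obj, "rb");
      	struct ring_buffer *rb = ring_buffer__new(map_fd, handle_event, NULL, NULL);
      	int err = 0;
      
      	if (!rb)
      		return -1;
      
      	while (err >= 0)
      		err = ring_buffer__poll(rb, 100 /* timeout, ms */);
      
      	ring_buffer__free(rb);
      	return err;
      }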
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/20200529075424.3139988-3-andriin@fb.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • libbpf: Add API to consume the perf ring buffer content · 272d51af
      Committed by Eelco Chaudron
      This new API, perf_buffer__consume, can be used as follows:
      
      - When you have a perf ring where wakeup_events is higher than 1,
        and you have remaining data in the rings you would like to pull
        out on exit (or maybe based on a timeout).
      
      - For low latency cases where you burn a CPU that constantly polls
        the queues.
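      
      A small sketch of the first case, assuming "pb" is an already set-up
      perf_buffer:
      
      #include <bpf/libbpf.h>
      
      void drain_and_exit(struct perf_buffer *pb)
      {
      	/* reads whatever is already in the per-CPU rings, without waiting
      	 * for a wakeup notification */
      	perf_buffer__consume(pb);
      	perf_buffer__free(pb);
      }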
      Signed-off-by: Eelco Chaudron <echaudro@redhat.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Andrii Nakryiko <andriin@fb.com>
      Link: https://lore.kernel.org/bpf/159048487929.89441.7465713173442594608.stgit@ebuild
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
  9. 10 May 2020, 1 commit
  10. 15 Apr 2020, 1 commit
  11. 31 Mar 2020, 1 commit
  12. 30 Mar 2020, 2 commits
  13. 29 Mar 2020, 1 commit
  14. 03 Mar 2020, 1 commit
    • libbpf: Add bpf_link pinning/unpinning · c016b68e
      Committed by Andrii Nakryiko
      With the bpf_link abstraction now explicitly supported by the kernel, add a
      pinning/unpinning API for links. Also allow creating (opening) a bpf_link from
      a BPF FS file.
      
      This API allows otherwise "ephemeral" FD-based BPF links (like raw tracepoint
      or fexit/freplace attachments) to survive user process exit by pinning them in
      BPF FS, which is an important use case for long-running BPF programs.
      
      As part of this, expose the underlying FD of a bpf_link. While legacy
      bpf_links might not have an FD associated with them (expressed as a bpf_link
      with fd=-1), the kernel's abstraction is based around FD-based usage, so match
      it closely. This, in turn, allows a generic pinning/unpinning API for the
      generalized bpf_link. For some types of bpf_links the kernel might not support
      pinning, in which case bpf_link__pin() will return an error.
      
      With the FD being part of the generic bpf_link, also get rid of bpf_link_fd in
      favor of using vanilla bpf_link.
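      
      A sketch of the pin/reopen flow (the pin path is illustrative):
      
      #include <bpf/libbpf.h>
      
      int pin_link(struct bpf_link *link)
      {
      	/* may fail if the kernel doesn't support pinning this link type */
      	return bpf_link__pin(link, "/sys/fs/bpf/my_link");
      }
      
      struct bpf_link *reopen_link(void)
      {
      	/* later, possibly from another process, recreate the bpf_link from
      	 * its BPF FS file */
      	return bpf_link__open("/sys/fs/bpf/my_link");
      }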
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20200303043159.323675-3-andriin@fb.com
  15. 21 Feb 2020, 1 commit
  16. 25 Jan 2020, 1 commit
    • libbpf: Improve handling of failed CO-RE relocations · d7a25270
      Committed by Andrii Nakryiko
      Previously, if libbpf failed to resolve a CO-RE relocation for some
      instruction, it would either return an error immediately or, if the
      .relaxed_core_relocs option was set, replace the relocatable offset/imm part
      of the instruction with a bogus value (-1). Neither approach is good: there
      are many possible scenarios where a relocation is expected to fail (e.g., when
      some field is knowingly missing on specific kernel versions), while replacing
      the offset with an invalid one can hide programmer errors if the relocation
      failure wasn't anticipated.
      
      This patch deprecates the .relaxed_core_relocs option and changes the approach
      to always replacing the instruction for which relocation failed with an
      invalid BPF helper call instruction. For cases where this is expected, the BPF
      program should already ensure that the instruction is unreachable, in which
      case the invalid instruction is silently ignored. But if the instruction
      wasn't guarded, the BPF program will be rejected at the verification step,
      with the verifier log pointing precisely to the place in the assembly where
      the problem is.
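      
      A sketch of such guarding on the BPF side, assuming a field that exists in the
      local BTF but may be missing on the running kernel (struct/field names are
      illustrative; requires vmlinux.h, <bpf/bpf_helpers.h>, <bpf/bpf_core_read.h>):
      
      SEC("kprobe/do_exit")
      int probe_exit(void *ctx)
      {
      	struct task_struct *task = (struct task_struct *)bpf_get_current_task();
      	int val = 0;
      
      	/* if the field is missing on this kernel, this branch (and the
      	 * instruction whose relocation failed) is never executed */
      	if (bpf_core_field_exists(task->some_new_field))
      		val = BPF_CORE_READ(task, some_new_field);
      
      	return val;
      }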
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Martin KaFai Lau <kafai@fb.com>
      Link: https://lore.kernel.org/bpf/20200124053837.2434679-1-andriin@fb.com
  17. 23 Jan 2020, 1 commit
  18. 10 Jan 2020, 1 commit
    • bpf: libbpf: Add STRUCT_OPS support · 590a0088
      Committed by Martin KaFai Lau
      This patch adds BPF STRUCT_OPS support to libbpf.
      
      The only sec_name convention is SEC(".struct_ops"), used to identify the
      struct_ops implemented in BPF. For example, to implement a
      tcp_congestion_ops:
      
      SEC(".struct_ops")
      struct tcp_congestion_ops dctcp = {
      	.init           = (void *)dctcp_init,  /* <-- a bpf_prog */
      	/* ... some more func ptrs ... */
      	.name           = "bpf_dctcp",
      };
      
      Each struct_ops is defined as a global variable under SEC(".struct_ops")
      as above.  libbpf creates a map for each variable and the variable name
      is the map's name.  Multiple struct_ops are supported under
      SEC(".struct_ops").
      
      In the bpf_object__open phase, libbpf will look for the SEC(".struct_ops")
      section and find out which btf-type the struct_ops is implementing. Note that
      the btf-type here refers to a type in bpf_prog.o's BTF. A "struct bpf_map" is
      added by bpf_object__add_map() as for other maps. libbpf will then collect
      (through SHT_REL) which bpf progs the func ptrs are referring to. No
      btf_vmlinux is needed in the open phase.
      
      In the bpf_object__load phase, the map fields, which depend on btf_vmlinux,
      are initialized (in bpf_map__init_kern_struct_ops()). It also sets prog->type,
      prog->attach_btf_id, and prog->expected_attach_type. Thus, the prog's
      properties do not rely on its section name.
      [ Currently, the bpf_prog's btf-type ==> btf_vmlinux's btf-type matching
        process is as simple as: member-name match + btf-kind match + size match.
        If these matching conditions fail, libbpf will reject the load.
        The currently targeted support is "struct tcp_congestion_ops", most of
        whose members are function pointers.
        The member ordering of the bpf_prog's btf-type can be different from the
        btf_vmlinux's btf-type. ]
      
      Then, all obj->maps are created as usual (in bpf_object__create_maps()).
      
      Once the maps are created and prog's properties are all set,
      libbpf will proceed to load all the progs.
      
      bpf_map__attach_struct_ops() is added to register a struct_ops
      map to a kernel subsystem.
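      
      On the userspace side, registration could look roughly like this (assuming the
      "dctcp" map from the example above and an already opened object):
      
      #include <bpf/libbpf.h>
      
      struct bpf_link *register_dctcp(struct bpf_object *obj)
      {
      	struct bpf_map *map;
      
      	if (bpf_object__load(obj))
      		return NULL;
      
      	map = bpf_object__find_map_by_name(obj, "dctcp");
      	if (!map)
      		return NULL;
      
      	/* registers the struct_ops (here a tcp_congestion_ops) with the kernel */
      	return bpf_map__attach_struct_ops(map);
      }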
      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20200109003514.3856730-1-kafai@fb.com
  19. 09 Jan 2020, 1 commit
  20. 19 Dec 2019, 2 commits
    • libbpf: Allow to augment system Kconfig through extra optional config · 8601fd42
      Committed by Andrii Nakryiko
      Instead of an all-or-nothing approach of overriding the Kconfig file location,
      allow extending it with extra values and overriding a chosen subset of values
      through an optional user-provided extra config, passed as a string through the
      open options' .kconfig option. If the same config key is present in both the
      user-supplied config and Kconfig, the user-supplied one wins. This allows
      applications to more easily test various conditions regardless of the host
      kernel's real configuration. If all of a BPF object's __kconfig externs are
      satisfied from the user-supplied config, the system Kconfig won't be read at
      all.
      
      Simplify selftests by not needing to create temporary Kconfig files.
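      
      For illustration, the open option could be supplied like this (the config
      values are arbitrary examples):
      
      #include <bpf/libbpf.h>
      
      struct bpf_object *open_with_kconfig(const char *path)
      {
      	DECLARE_LIBBPF_OPTS(bpf_object_open_opts, opts,
      		/* supplements or overrides values read from the system Kconfig */
      		.kconfig = "CONFIG_BPF_JIT=y\nCONFIG_HZ=1000",
      	);
      
      	return bpf_object__open_file(path, &opts);
      }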
      Suggested-by: Alexei Starovoitov <ast@fb.com>
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20191219002837.3074619-3-andriin@fb.com
    • libbpf: Add bpf_link__disconnect() API to preserve underlying BPF resource · d6958706
      Committed by Andrii Nakryiko
      There are cases in which a BPF resource (program, map, etc.) has to outlive
      the userspace program that "installed" it in the system in the first place.
      When a BPF program is attached, libbpf returns a bpf_link object, which is
      supposed to be destroyed, once no longer necessary, through the
      bpf_link__destroy() API. Currently, bpf_link destruction causes both automatic
      detachment and freeing of any resources allocated for the bpf_link in-memory
      representation. This is inconvenient for the case described above because it
      couples detachment and resource freeing.
      
      This patch introduces the bpf_link__disconnect() API call, which marks a
      bpf_link as disconnected from its underlying BPF resources. This means that
      when the bpf_link is destroyed later, all its memory resources will be freed,
      but the BPF resource itself won't be detached.
      
      This design keeps the default behavior strict and resource-leak-free, while
      giving user code an easy and straightforward way to opt for keeping a BPF
      resource attached beyond the lifetime of a bpf_link. For some BPF programs
      (e.g., FS-based tracepoints, kprobes, raw tracepoints, etc.), the user has to
      make sure to pin the BPF program to prevent the kernel from automatically
      detaching it on process exit. This is typically achieved by pinning the BPF
      program (or the map in some cases) in BPF FS.
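      
      A sketch of the intended pattern (the pin path is illustrative, and error
      handling is simplified):
      
      #include <bpf/libbpf.h>
      
      int install_and_leave_attached(struct bpf_program *prog)
      {
      	struct bpf_link *link = bpf_program__attach(prog);
      
      	if (!link)
      		return -1;
      
      	/* keep the program alive beyond this process, as described above */
      	bpf_program__pin(prog, "/sys/fs/bpf/my_prog");
      
      	/* mark the link as disconnected, so destroying it ... */
      	bpf_link__disconnect(link);
      	/* ... only frees the in-memory representation, without detaching */
      	bpf_link__destroy(link);
      	return 0;
      }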
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Martin KaFai Lau <kafai@fb.com>
      Link: https://lore.kernel.org/bpf/20191218225039.2668205-1-andriin@fb.com
  21. 18 Dec 2019, 1 commit
  22. 16 Dec 2019, 7 commits
  23. 16 Nov 2019, 2 commits
  24. 11 Nov 2019, 2 commits
  25. 03 Nov 2019, 2 commits
  26. 31 Oct 2019, 1 commit