1. 11 November 2019, 1 commit
  2. 07 November 2019, 2 commits
  3. 04 November 2019, 2 commits
  4. 03 November 2019, 4 commits
  5. 31 October 2019, 1 commit
  6. 29 October 2019, 1 commit
  7. 24 October 2019, 1 commit
  8. 23 October 2019, 1 commit
  9. 21 October 2019, 4 commits
  10. 19 October 2019, 1 commit
  11. 17 October 2019, 1 commit
  12. 16 October 2019, 3 commits
  13. 06 October 2019, 3 commits
    • libbpf: fix bpf_object__name() to actually return object name · c9e4c301
      Authored by Andrii Nakryiko
      bpf_object__name() was returning the file path rather than the object
      name. Fix it to return the actual object name (a short usage sketch
      follows this entry).
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
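      To illustrate the fixed behavior, here is a minimal, hedged sketch of
      calling the accessor; the object file path is a placeholder and error
      handling is simplified:

      #include <stdio.h>
      #include <bpf/libbpf.h>

      int main(void)
      {
              struct bpf_object *obj;

              /* "prog.bpf.o" is a hypothetical object file path */
              obj = bpf_object__open("prog.bpf.o");
              if (libbpf_get_error(obj))
                      return 1;

              /* With this fix, the object's name is printed, not the full
               * file path that was passed to bpf_object__open().
               */
              printf("object name: %s\n", bpf_object__name(obj));

              bpf_object__close(obj);
              return 0;
      }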
    • libbpf: add bpf_object__open_{file, mem} w/ extensible opts · 2ce8450e
      Authored by Andrii Nakryiko
      Add a new set of bpf_object__open APIs that use a new approach to
      optional-parameter extensibility, allowing a simpler path to ABI
      compatibility.
      
      This patch demonstrates an approach to implementing libbpf APIs that
      makes it easy to extend existing APIs with extra optional parameters in
      such a way that ABI compatibility is preserved, without resorting to
      symbol versioning or generating lots of boilerplate code to handle it.
      To facilitate succinct code for working with options, add OPTS_VALID,
      OPTS_HAS, and OPTS_GET macros that hide all the NULL, size, and zero
      checks.
      
      Additionally, newly added libbpf APIs are encouraged to follow a similar
      pattern: all mandatory parameters are formal function parameters,
      followed by an optional (NULL-able) xxx_opts struct whose first field is
      the real struct size. The remaining fields are optional parameters,
      added over time, that tune the behavior of the existing API when
      specified by the user (a usage sketch follows this entry).
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
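      As a hedged illustration of the opts pattern (the object path and name
      below are hypothetical, and the exact set of bpf_object_open_opts fields
      has grown in later libbpf versions):

      #include <bpf/libbpf.h>

      int open_with_opts(void)
      {
              /* Zero-initialize the opts struct and record its size in .sz so
               * libbpf can tell which optional fields this caller knows about.
               */
              struct bpf_object_open_opts opts = {
                      .sz = sizeof(opts),
                      .object_name = "my_object", /* optional override, example name */
              };
              struct bpf_object *obj;

              /* passing NULL instead of &opts is also valid and means "all defaults" */
              obj = bpf_object__open_file("prog.bpf.o", &opts);
              if (libbpf_get_error(obj))
                      return -1;

              bpf_object__close(obj);
              return 0;
      }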
    • libbpf: stop enforcing kern_version, populate it for users · 5e61f270
      Authored by Andrii Nakryiko
      Kernel version enforcement for kprobes/kretprobes was removed in the 5.0
      kernel by commit 6c4fc209 ("bpf: remove useless version check for prog load").
      Since then, BPF programs have been specifying SEC("version") just to
      please libbpf. Stop enforcing this in libbpf, since even the kernel
      doesn't care. Furthermore, libbpf will now pre-populate the current
      kernel version of the host system, in case we are still running on an
      old kernel (a minimal program sketch follows this entry).
      
      This patch also removes __bpf_object__open_xattr from libbpf.h, as
      nothing in libbpf relies on having it in that header. That function was
      never exported as LIBBPF_API, and even its name suggests an internal
      variant, so it is safe to remove without breaking ABI.
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
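      For illustration, a minimal kprobe program sketch that no longer needs
      an explicit SEC("version") section; the traced function name is just an
      example, and the helper header path may differ in older libbpf setups:

      // SPDX-License-Identifier: GPL-2.0
      #include <linux/bpf.h>
      #include <bpf/bpf_helpers.h>

      /* No "int _version SEC("version")" needed any more: libbpf fills in the
       * host kernel version automatically for kernels that still require it.
       */
      SEC("kprobe/do_sys_open")
      int trace_open(void *ctx)
      {
              return 0;
      }

      char _license[] SEC("license") = "GPL";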
  14. 14 August 2019, 1 commit
  15. 08 August 2019, 2 commits
  16. 02 August 2019, 1 commit
  17. 01 August 2019, 1 commit
  18. 28 July 2019, 1 commit
  19. 27 July 2019, 1 commit
    • libbpf: fix erroneous multi-closing of BTF FD · 5d01ab7b
      Authored by Andrii Nakryiko
      libbpf stores an associated BTF FD for each instance of bpf_program.
      When a program is unloaded, that FD is closed. This is wrong, because it
      leads to a race and possibly to closing unrelated files if the
      application opens new files while bpf_programs are being unloaded.
      
      It's also unnecessary, because struct btf "owns" that FD, and
      btf__free(), called from bpf_object__close(), will close it. Thus the
      fix is to never keep a per-program BTF FD and to fetch it from obj->btf
      when necessary (a small sketch of this ownership model follows this
      entry).
      
      Fixes: 2993e051 ("tools/bpf: add support to read .BTF.ext sections")
      Reported-by: Andrey Ignatov <rdna@fb.com>
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
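      As a hedged, minimal user-space sketch of the ownership model the fix
      relies on (the function name is hypothetical; the object is assumed to
      already be loaded):

      #include <bpf/libbpf.h>
      #include <bpf/btf.h>

      /* Fetch the BTF FD on demand from the object-owned struct btf instead
       * of caching it per program, so there is nothing to double-close when
       * individual programs are unloaded.
       */
      int get_obj_btf_fd(struct bpf_object *obj)
      {
              struct btf *btf = bpf_object__btf(obj);

              if (!btf)
                      return -1; /* object carries no BTF */

              /* valid until bpf_object__close() calls btf__free() */
              return btf__fd(btf);
      }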
  20. 24 July 2019, 1 commit
  21. 22 July 2019, 2 commits
  22. 20 July 2019, 2 commits
  23. 12 July 2019, 1 commit
  24. 08 July 2019, 2 commits
    • libbpf: auto-set PERF_EVENT_ARRAY size to number of CPUs · d7ff34d5
      Authored by Andrii Nakryiko
      For BPF_MAP_TYPE_PERF_EVENT_ARRAY, the typically correct size is the
      number of possible CPUs, which is impossible to specify at compile time.
      This change automatically sets the PERF_EVENT_ARRAY size to the number
      of system CPUs, unless a non-zero size is specified explicitly. This
      still allows the size to be tuned for advanced, specific cases, while
      providing a convenient and logical default (a map definition sketch
      follows this entry).
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Acked-by: Song Liu <songliubraving@fb.com>
      Acked-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
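      To illustrate the new default, a hedged sketch of a BTF-defined map (the
      map name is an example; older setups may use the legacy bpf_map_def
      syntax instead):

      #include <linux/bpf.h>
      #include <bpf/bpf_helpers.h>

      /* max_entries deliberately left at zero: libbpf resizes this
       * PERF_EVENT_ARRAY to the number of possible CPUs at load time.
       * Setting a non-zero max_entries explicitly overrides the default.
       */
      struct {
              __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
              __uint(key_size, sizeof(int));
              __uint(value_size, sizeof(int));
      } events SEC(".maps");

      char _license[] SEC("license") = "GPL";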
    • libbpf: add perf buffer API · fb84b822
      Authored by Andrii Nakryiko
      The BPF_MAP_TYPE_PERF_EVENT_ARRAY map is often used to send data from a
      BPF program to user space for additional processing. libbpf already has
      a very low-level API for reading a single CPU's perf buffer,
      bpf_perf_event_read_simple(), but it's hard to use and requires a lot of
      setup code. This patch adds a perf_buffer abstraction on top of it,
      wrapping the per-CPU setup and polling logic in a simple and convenient
      API, similar to what BCC provides.
      
      perf_buffer__new() sets up per-CPU ring buffers and updates the
      corresponding BPF map entries. It accepts two user-provided callbacks:
      one for handling raw samples and one for being notified of samples lost
      due to buffer overflow.
      
      perf_buffer__new_raw() is similar, but provides more control over how
      perf events are set up (by accepting a user-provided perf_event_attr),
      how they are handled (the perf_event_header pointer is passed directly
      to the user-provided callback), and on which CPUs ring buffers are
      created (it's possible to provide a list of CPUs and the corresponding
      map keys to update). This API gives advanced users fuller control.
      
      perf_buffer__poll() is used to fetch ring buffer data across all CPUs,
      utilizing an epoll instance.
      
      perf_buffer__free() does the corresponding cleanup and removes the FDs
      from the BPF map.
      
      None of these APIs are thread-safe; the user should ensure proper
      locking/coordination if they are used in a multi-threaded setup (a usage
      sketch follows this entry).
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Acked-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
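      Putting it together, a hedged user-space sketch using the calling
      convention introduced by this patch (callbacks passed via
      perf_buffer_opts; later libbpf versions pass them directly to
      perf_buffer__new()). The map FD is assumed to come from an
      already-loaded BPF_MAP_TYPE_PERF_EVENT_ARRAY map:

      #include <stdio.h>
      #include <linux/types.h>
      #include <bpf/libbpf.h>

      static void handle_sample(void *ctx, int cpu, void *data, __u32 size)
      {
              /* one raw sample delivered by the BPF program */
              printf("cpu %d: got %u bytes\n", cpu, size);
      }

      static void handle_lost(void *ctx, int cpu, __u64 cnt)
      {
              fprintf(stderr, "cpu %d: lost %llu samples\n", cpu,
                      (unsigned long long)cnt);
      }

      int poll_events(int map_fd)
      {
              struct perf_buffer_opts pb_opts = {
                      .sample_cb = handle_sample,
                      .lost_cb = handle_lost,
              };
              struct perf_buffer *pb;
              int err;

              /* 8 pages of ring buffer per CPU */
              pb = perf_buffer__new(map_fd, 8, &pb_opts);
              err = libbpf_get_error(pb);
              if (err)
                      return err;

              /* poll all per-CPU buffers through one epoll instance */
              while ((err = perf_buffer__poll(pb, 100 /* ms */)) >= 0)
                      ;

              perf_buffer__free(pb);
              return err;
      }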