1. 22 July 2019, 2 commits
  2. 20 July 2019, 2 commits
  3. 12 July 2019, 1 commit
  4. 08 July 2019, 2 commits
    • libbpf: auto-set PERF_EVENT_ARRAY size to number of CPUs · d7ff34d5
      Andrii Nakryiko committed
      For BPF_MAP_TYPE_PERF_EVENT_ARRAY, the correct size is typically the
      number of possible CPUs, which is impossible to specify at compilation
      time. This change automatically sets the PERF_EVENT_ARRAY size to the
      number of system CPUs, unless a non-zero size is specified explicitly.
      This still allows the size to be adjusted for advanced, specific use
      cases, while providing convenient and sensible defaults.
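
      As an illustration (a hedged sketch, not code from the patch; the map
      name and the legacy struct bpf_map_def / SEC() style from
      selftests-like bpf_helpers.h are assumptions), a definition can now
      simply leave max_entries at zero and let libbpf size the array:

      struct bpf_map_def SEC("maps") events = {
      	.type        = BPF_MAP_TYPE_PERF_EVENT_ARRAY,
      	.key_size    = sizeof(int),
      	.value_size  = sizeof(int),
      	.max_entries = 0, /* auto-set to the number of CPUs at load time */
      };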
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Acked-by: Song Liu <songliubraving@fb.com>
      Acked-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      d7ff34d5
    • libbpf: add perf buffer API · fb84b822
      Andrii Nakryiko committed
      A BPF_MAP_TYPE_PERF_EVENT_ARRAY map is often used to send data from a BPF
      program to user space for additional processing. libbpf already has a very
      low-level API for reading a single CPU's perf buffer,
      bpf_perf_event_read_simple(), but it's hard to use and requires a lot of
      setup code. This patch adds a perf_buffer abstraction on top of it,
      wrapping the per-CPU setup and polling logic in a simple and convenient
      API, similar to what BCC provides.
      
      perf_buffer__new() sets up per-CPU ring buffers and updates the
      corresponding BPF map entries. It accepts two user-provided callbacks:
      one for handling raw samples and one for notifications of samples lost
      due to buffer overflow.
      
      perf_buffer__new_raw() is similar, but provides more control over how
      perf events are set up (by accepting user-provided perf_event_attr), how
      they are handled (perf_event_header pointer is passed directly to
      user-provided callback), and on which CPUs ring buffers are created
      (it's possible to provide a list of CPUs and corresponding map keys to
      update). This API allows advanced users fuller control.
      
      perf_buffer__poll() is used to fetch ring buffer data across all CPUs,
      utilizing an epoll instance.
      
      perf_buffer__free() does the corresponding clean-up and removes the FDs
      from the BPF map.
      
      These APIs are not thread-safe. Users should ensure proper
      locking/coordination if they are used in a multi-threaded setup.
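
      Below is a minimal usage sketch assuming the opts-based
      perf_buffer__new() signature introduced here (later libbpf versions
      changed this signature); the callback names, map fd parameter, page
      count, and poll timeout are illustrative, and error handling is
      trimmed:

      #include <bpf/libbpf.h>
      #include <stdio.h>

      static void on_sample(void *ctx, int cpu, void *data, __u32 size)
      {
      	printf("cpu %d: sample of %u bytes\n", cpu, size);
      }

      static void on_lost(void *ctx, int cpu, __u64 cnt)
      {
      	fprintf(stderr, "cpu %d: lost %llu samples\n", cpu,
      		(unsigned long long)cnt);
      }

      int consume_events(int perf_map_fd)
      {
      	struct perf_buffer_opts opts = {
      		.sample_cb = on_sample,
      		.lost_cb   = on_lost,
      	};
      	struct perf_buffer *pb;
      	int err;

      	/* 8 pages of ring buffer per CPU */
      	pb = perf_buffer__new(perf_map_fd, 8, &opts);
      	if (libbpf_get_error(pb))
      		return -1;

      	/* callbacks are invoked from inside poll; loop until an error */
      	while ((err = perf_buffer__poll(pb, 100 /* ms */)) >= 0)
      		;

      	perf_buffer__free(pb);
      	return err;
      }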
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Acked-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      fb84b822
  5. 06 July 2019, 6 commits
  6. 03 July 2019, 1 commit
    • bpf, libbpf, smatch: Fix potential NULL pointer dereference · 33bae185
      Leo Yan committed
      Based on the following report from Smatch, fix a potential NULL
      pointer dereference:
      
        tools/lib/bpf/libbpf.c:3493
        bpf_prog_load_xattr() warn: variable dereferenced before check 'attr'
        (see line 3483)
      
        3479 int bpf_prog_load_xattr(const struct bpf_prog_load_attr *attr,
        3480                         struct bpf_object **pobj, int *prog_fd)
        3481 {
        3482         struct bpf_object_open_attr open_attr = {
        3483                 .file           = attr->file,
        3484                 .prog_type      = attr->prog_type,
                                               ^^^^^^
        3485         };
      
      At the head of the function, 'attr' is dereferenced directly without
      checking whether it is a NULL pointer. This patch moves the value
      assignments after the validation of 'attr' and 'attr->file'.
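
      After the change, the head of the function looks roughly like the
      sketch below (mirroring the snippet quoted above, not the verbatim
      diff):

        int bpf_prog_load_xattr(const struct bpf_prog_load_attr *attr,
                                struct bpf_object **pobj, int *prog_fd)
        {
                struct bpf_object_open_attr open_attr = {};

                if (!attr || !attr->file)
                        return -EINVAL;

                /* only dereference 'attr' once it has been validated */
                open_attr.file      = attr->file;
                open_attr.prog_type = attr->prog_type;
                /* rest of the loading logic is unchanged */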
      Signed-off-by: Leo Yan <leo.yan@linaro.org>
      Acked-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      33bae185
  7. 28 June 2019, 1 commit
  8. 26 June 2019, 1 commit
  9. 25 June 2019, 1 commit
  10. 19 June 2019, 1 commit
  11. 18 June 2019, 6 commits
    • libbpf: allow specifying map definitions using BTF · abd29c93
      Andrii Nakryiko committed
      This patch adds support for a new way to define BPF maps. It relies on
      BTF to describe the mandatory and optional attributes of a map, and it
      captures the type information of the key and value naturally. This
      eliminates the need for the BPF_ANNOTATE_KV_PAIR hack and ensures that
      key/value sizes are always in sync with the key/value types.
      
      Relying on BTF, this approach allows for both forward and backward
      compatibility w.r.t. extending the supported map definition features. By
      default, any unrecognized attributes are treated as an error, but it's
      possible to relax this using the MAPS_RELAX_COMPAT flag. New attributes
      added in the future will need to be optional.
      
      The outline of the new map definition (short, BTF-defined maps) is as follows:
      1. All the maps should be defined in .maps ELF section. It's possible to
         have both "legacy" map definitions in `maps` sections and BTF-defined
         maps in .maps sections. Everything will still work transparently.
       2. The map declaration and initialization is done through
          a global/static variable of a struct type with a few mandatory
          and extra optional fields:
          - the type field is mandatory and specifies the type of the BPF map;
          - key/value fields are mandatory and capture key/value type/size information;
          - the max_entries attribute is optional; if max_entries is not specified or
            initialized, it has to be provided at runtime through the libbpf API
            before loading the bpf_object;
          - map_flags is optional and, if not defined, is assumed to be 0.
       3. Key/value fields should be **a pointer** to a type describing
          key/value. The pointee type is assumed (and will be recorded as such,
          and used for size determination) to be the type describing the
          key/value of the map. This is done to avoid allocating excessive
          amounts of space in the corresponding ELF sections for large
          key/value types.
      4. As some maps disallow having BTF type ID associated with key/value,
         it's possible to specify key/value size explicitly without
         associating BTF type ID with it. Use key_size and value_size fields
         to do that (see example below).
      
       Here's an example of a simple ARRAY map definition:
      
      struct my_value { int x, y, z; };
      
      struct {
      	int type;
      	int max_entries;
      	int *key;
      	struct my_value *value;
      } btf_map SEC(".maps") = {
      	.type = BPF_MAP_TYPE_ARRAY,
      	.max_entries = 16,
      };
      
      This will define BPF ARRAY map 'btf_map' with 16 elements. The key will
      be of type int and thus key size will be 4 bytes. The value is struct
      my_value of size 12 bytes. This map can be used from C code exactly the
      same as with existing maps defined through struct bpf_map_def.
      
      Here's an example of STACKMAP definition (which currently disallows BTF type
      IDs for key/value):
      
      struct {
      	__u32 type;
      	__u32 max_entries;
      	__u32 map_flags;
      	__u32 key_size;
      	__u32 value_size;
      } stackmap SEC(".maps") = {
      	.type = BPF_MAP_TYPE_STACK_TRACE,
      	.max_entries = 128,
      	.map_flags = BPF_F_STACK_BUILD_ID,
      	.key_size = sizeof(__u32),
      	.value_size = PERF_MAX_STACK_DEPTH * sizeof(struct bpf_stack_build_id),
      };
      
       This approach extends naturally to support map-in-map, by making the
       value field another struct that describes the inner map. This feature is
       not implemented yet. It's also possible to incrementally add features
       like pinning with full backward and forward compatibility. Support for
       static initialization of BPF_MAP_TYPE_PROG_ARRAY using pointers to BPF
       programs is also on the roadmap.
       Signed-off-by: Andrii Nakryiko <andriin@fb.com>
       Acked-by: Song Liu <songliubraving@fb.com>
       Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      abd29c93
    • libbpf: split initialization and loading of BTF · 063183bf
      Andrii Nakryiko committed
      libbpf sanitizes BTF before loading it into the kernel if the kernel
      doesn't support some of the newer BTF features. This removes some
      important information from the BTF (e.g., DATASEC and VAR descriptions),
      which is needed for map construction. This patch splits BTF processing
      into an initialization step, in which BTF is initialized from ELF and
      all of the original data is preserved, and a sanitization/loading step,
      which ensures that the BTF is safe to load into the kernel. This allows
      full BTF information to be used to construct maps, while still loading
      valid BTF into older kernels.
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Acked-by: Song Liu <songliubraving@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      063183bf
    • libbpf: identify maps by section index in addition to offset · db48814b
      Andrii Nakryiko committed
      To support maps defined in multiple sections, it's important to identify
      a map not just by its offset within a section, but by the section index
      as well. This patch adds tracking of the section index.

      For global data, we record the section index of the corresponding
      .data/.bss/.rodata ELF section for uniformity, and thus don't need
      a special offset value for those maps.
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Acked-by: Song Liu <songliubraving@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      db48814b
    • libbpf: refactor map initialization · bf829271
      Andrii Nakryiko committed
      Initialization of user-defined and global data maps has gotten pretty
      complicated and unnecessarily convoluted. This patch splits out the
      logic for global data map and user-defined map initialization. It also
      removes the requirement to pre-calculate how many maps will be
      initialized, instead allowing new maps to keep being added as they are
      discovered, which will be used later for BTF-defined map definitions.
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Acked-by: Song Liu <songliubraving@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      bf829271
    • libbpf: streamline ELF parsing error-handling · 01b29d1d
      Andrii Nakryiko committed
      Simplify ELF parsing logic by exiting early, as there is no common clean
      up path to execute. That makes it unnecessary to track when err was set
      and when it was cleared. It also reduces nesting in some places.
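
      As a generic illustration of the pattern (a hypothetical helper, not
      the actual libbpf ELF-parsing code):

      #include <errno.h>
      #include <gelf.h>

      static int check_ehdr(const GElf_Ehdr *ehdr)
      {
      	if (!ehdr)
      		return -EINVAL;	/* exit early: no shared cleanup to run */
      	if (ehdr->e_type != ET_REL)
      		return -EINVAL;	/* no 'err' bookkeeping, one less nesting level */
      	return 0;
      }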
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Acked-by: Song Liu <songliubraving@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      01b29d1d
    • libbpf: extract BTF loading logic · 9c6660d0
      Andrii Nakryiko committed
      As a preparation for adding BTF-based BPF map loading, extract .BTF and
      .BTF.ext loading logic.
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Acked-by: Song Liu <songliubraving@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      9c6660d0
  12. 15 June 2019, 1 commit
  13. 11 June 2019, 1 commit
  14. 07 June 2019, 1 commit
  15. 01 June 2019, 1 commit
    • libbpf: Return btf_fd for load_sk_storage_btf · cfd49210
      Michal Rostecki committed
      Before this change, the function load_sk_storage_btf expected
      libbpf__probe_raw_btf to return a BTF descriptor, but in fact it
      returned information about whether the probe was successful (0 or 1).
      load_sk_storage_btf was using that value as an argument to the close
      function, which resulted in closing stdout and thus terminating the
      process which called that function.
      
      That bug was visible in bpftool. `bpftool feature` subcommand was always
      exiting too early (because of closed stdout) and it didn't display all
      requested probes. `bpftool -j feature` or `bpftool -p feature` were not
      returning a valid json object.
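
      As a self-contained illustration of the failure mode (not libbpf or
      bpftool code), passing a 0/1 probe flag to close() silently closes
      stdout:

      #include <stdio.h>
      #include <unistd.h>

      int main(void)
      {
      	int probe_result = 1;	/* "feature supported" flag, NOT a file descriptor */

      	close(probe_result);	/* actually closes stdout (fd 1) */
      	printf("this line never reaches the terminal\n");
      	return 0;
      }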
      
      This change renames the libbpf__probe_raw_btf function to
      libbpf__load_raw_btf, which now returns a BTF descriptor, as expected in
      load_sk_storage_btf.
      
      v2:
      - Fix typo in the commit message.
      
      v3:
      - Simplify BTF descriptor handling in bpf_object__probe_btf_* functions.
      - Rename libbpf__probe_raw_btf function to libbpf__load_raw_btf and
      return a BTF descriptor.
      
      v4:
      - Fix typo in the commit message.
      
      Fixes: d7c4b398 ("libbpf: detect supported kernel BTF features and sanitize BTF")
      Signed-off-by: Michal Rostecki <mrostecki@opensuse.org>
      Acked-by: Andrii Nakryiko <andriin@fb.com>
      Acked-by: Song Liu <songliubraving@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      cfd49210
  16. 30 May 2019, 10 commits
  17. 28 May 2019, 2 commits