1. 18 Dec 2019 (1 commit)
  2. 16 Dec 2019 (7 commits)
  3. 16 Nov 2019 (2 commits)
  4. 11 Nov 2019 (2 commits)
  5. 03 Nov 2019 (2 commits)
  6. 31 Oct 2019 (1 commit)
  7. 23 Oct 2019 (1 commit)
  8. 21 Oct 2019 (1 commit)
  9. 16 Oct 2019 (1 commit)
  10. 06 Oct 2019 (2 commits)
    • libbpf: add bpf_object__open_{file, mem} w/ extensible opts · 2ce8450e
      Authored by Andrii Nakryiko
      Add a new set of bpf_object__open APIs that take optional parameters
      through an extensible options struct, enabling a simpler approach to
      ABI compatibility.
      
      This patch demonstrates an approach to implementing libbpf APIs that
      makes it easy to extend existing APIs with extra optional parameters
      in such a way that ABI compatibility is preserved, without having to
      do symbol versioning and generate lots of boilerplate code to handle
      it. To facilitate succinct code for working with options, add
      OPTS_VALID, OPTS_HAS, and OPTS_GET macros that hide all the NULL,
      size, and zero checks.
      
      Additionally, newly added libbpf APIs are encouraged to follow a
      similar pattern: all mandatory parameters are formal function
      parameters, followed by an optional (NULL-able) xxx_opts struct
      whose first field is always the real struct size. The remaining
      fields are optional parameters, added over time, that tune the
      behavior of the existing API when specified by the user.
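
      A rough caller-side sketch (assuming the commit-era names
      DECLARE_LIBBPF_OPTS, bpf_object_open_opts and bpf_object__open_file;
      later libbpf releases renamed the macro to LIBBPF_OPTS):

        #include <string.h>
        #include <bpf/libbpf.h>

        int open_example(void)
        {
                /* DECLARE_LIBBPF_OPTS() zero-initializes the struct and
                 * fills in .sz, so fields added by future libbpf
                 * versions are seen as unset by the library. */
                DECLARE_LIBBPF_OPTS(bpf_object_open_opts, opts,
                        .object_name = "my_object", /* optional override */
                );
                struct bpf_object *obj;

                obj = bpf_object__open_file("prog.bpf.o", &opts);
                if (libbpf_get_error(obj))
                        return -1;

                bpf_object__close(obj);
                return 0;
        }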
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • libbpf: stop enforcing kern_version, populate it for users · 5e61f270
      Authored by Andrii Nakryiko
      Kernel version enforcement for kprobes/kretprobes was removed from
      the 5.0 kernel in 6c4fc209 ("bpf: remove useless version check for
      prog load"). Since then, BPF programs have been specifying
      SEC("version") just to please libbpf. We should stop enforcing this
      in libbpf if even the kernel doesn't care. Furthermore, libbpf will
      now pre-populate the host system's current kernel version, in case
      we are still running on an old kernel.
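
      As an illustration (a hypothetical minimal program; the helper
      header location varies by tree), this is the kind of boilerplate
      that can now be dropped:

        #include <linux/bpf.h>
        #include <linux/version.h>
        #include "bpf_helpers.h" /* provides SEC(); path varies by tree */

        /* Previously required only to satisfy libbpf's kern_version
         * check; with this change the whole "version" section can go: */
        __u32 _version SEC("version") = LINUX_VERSION_CODE;

        SEC("kprobe/sys_execve")
        int trace_execve(void *ctx)
        {
                return 0;
        }

        char _license[] SEC("license") = "GPL";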
      
      This patch also removes __bpf_object__open_xattr from libbpf.h, as
      nothing in libbpf relies on having it in that header. That function
      was never exported as LIBBPF_API, and even its name suggests an
      internal variant. So it should be safe to remove, as doing so
      doesn't break the ABI.
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
  11. 08 Aug 2019 (1 commit)
  12. 28 Jul 2019 (1 commit)
  13. 08 Jul 2019 (1 commit)
    • libbpf: add perf buffer API · fb84b822
      Authored by Andrii Nakryiko
      A BPF_MAP_TYPE_PERF_EVENT_ARRAY map is often used to send data from
      a BPF program to user space for additional processing. libbpf
      already has a very low-level API for reading a single CPU's perf
      buffer, bpf_perf_event_read_simple(), but it's hard to use and
      requires a lot of setup code. This patch adds a perf_buffer
      abstraction on top of it, wrapping the per-CPU setup and polling
      logic in a simple and convenient API, similar to what BCC provides.
      
      perf_buffer__new() sets up the per-CPU ring buffers and updates the
      corresponding BPF map entries. It accepts two user-provided
      callbacks: one for handling raw samples and one for notifications
      of samples lost due to buffer overflow.
      
      perf_buffer__new_raw() is similar, but provides more control over
      how perf events are set up (by accepting a user-provided
      perf_event_attr), how they are handled (the perf_event_header
      pointer is passed directly to the user-provided callback), and on
      which CPUs ring buffers are created (it's possible to provide a
      list of CPUs and the corresponding map keys to update). This API
      gives advanced users fuller control.
      
      perf_buffer__poll() is used to fetch ring buffer data across all
      CPUs, utilizing an epoll instance.

      perf_buffer__free() does the corresponding cleanup and unsets the
      FDs from the BPF map.
      
      None of these APIs are thread-safe. Users should ensure proper
      locking/coordination if they are used in a multi-threaded setup.
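
      A minimal consumer sketch against the signatures added here (note
      that later libbpf versions changed perf_buffer__new() to take the
      callbacks directly; map_fd is assumed to be the FD of a
      BPF_MAP_TYPE_PERF_EVENT_ARRAY map):

        #include <stdio.h>
        #include <linux/types.h>
        #include <bpf/libbpf.h>

        static void on_sample(void *ctx, int cpu, void *data, __u32 size)
        {
                printf("CPU %d: sample of %u bytes\n", cpu, size);
        }

        static void on_lost(void *ctx, int cpu, __u64 cnt)
        {
                fprintf(stderr, "CPU %d: lost %llu samples\n", cpu,
                        (unsigned long long)cnt);
        }

        static int consume(int map_fd)
        {
                struct perf_buffer_opts pb_opts = {
                        .sample_cb = on_sample,
                        .lost_cb = on_lost,
                };
                struct perf_buffer *pb;

                /* 8 pages of ring buffer per CPU */
                pb = perf_buffer__new(map_fd, 8, &pb_opts);
                if (libbpf_get_error(pb))
                        return -1;

                /* callbacks fire from inside poll */
                while (perf_buffer__poll(pb, 100 /* ms */) >= 0)
                        ;

                perf_buffer__free(pb);
                return 0;
        }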
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Acked-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
  14. 06 Jul 2019 (5 commits)
  15. 19 Jun 2019 (1 commit)
  16. 11 Jun 2019 (1 commit)
  17. 28 May 2019 (1 commit)
  18. 25 May 2019 (1 commit)
  19. 10 Apr 2019 (2 commits)
    • bpf, libbpf: add support for BTF Var and DataSec · 1713d68b
      Authored by Daniel Borkmann
      This adds libbpf support for the BTF Var and DataSec kinds. The
      main point here is that libbpf needs to do some preparatory work
      before the whole BTF object can be loaded into the kernel, namely
      fixing up the DataSec size, taken from the ELF section size, and
      the non-static variable offsets, which need to be taken from the
      ELF symbol table.
      
      Upstream LLVM doesn't fix these up, since at the time of BTF
      emission it is too early in the compilation process and this
      information isn't available yet; hence the loader needs to take
      care of it.
      
      Note that deduplication handling has not been in the scope of this
      work and needs to be addressed in a future commit.
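
      For reference, the corresponding kernel-side UAPI additions look
      roughly like this (a DataSec's vlen entries each describe one
      variable placed in the section):

        struct btf_var {
                __u32 linkage; /* BTF_VAR_STATIC or
                                * BTF_VAR_GLOBAL_ALLOCATED */
        };

        /* one per variable in a DataSec (.data/.bss/.rodata) */
        struct btf_var_secinfo {
                __u32 type;   /* BTF type ID of the variable */
                __u32 offset; /* offset within the section; the field
                               * libbpf fixes up from the ELF symbol
                               * table */
                __u32 size;   /* size of the variable in bytes */
        };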
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://reviews.llvm.org/D59441
      Acked-by: Martin KaFai Lau <kafai@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf, libbpf: support global data/bss/rodata sections · d859900c
      Authored by Daniel Borkmann
      This work adds BPF loader support for global data sections to
      libbpf. It allows writing BPF programs in a more natural, C-like
      way, by making it possible to define global variables and const
      data.
      
      Back at LPC 2018 [0] we presented a first prototype which
      implemented support for global data sections by extending the BPF
      syscall: union bpf_attr would get an additional memory/size pair
      for each section passed during prog load, in order to later add
      this base address into the ldimm64 instruction along with the
      user-provided offset when accessing a variable. The consensus from
      LPC was that for proper upstream support, it would be more
      desirable to use maps instead of a bpf_attr extension, as this
      would allow for introspection of these sections as well as
      potential live updates of their content. This work follows that
      path by taking the following steps on the loader side:
      
       1) In the bpf_object__elf_collect() step we pick up ".data",
          ".rodata", and ".bss" section information.

       2) If present, in bpf_object__init_internal_map() we add to the
          object's map array a map corresponding to each of the present
          sections. Given that section size and access properties can
          differ, a single-entry array map is created with a value size
          corresponding to the ELF section size of .data, .bss or
          .rodata. These internal maps are integrated into libbpf's
          normal map handling, such that when the user traverses all
          object maps, they can be differentiated from user-created ones
          via bpf_map__is_internal(). In later steps, when we actually
          create these maps in the kernel via bpf_object__create_maps(),
          the content of the .data and .rodata sections is copied into
          the respective map through bpf_map_update_elem(). For .bss
          this is not necessary, since the array map is already
          zero-initialized by default. Additionally, the .rodata map is
          frozen as read-only after setup, so that writes are possible
          neither from the program nor from the syscall side.
      
       3) In the bpf_program__collect_reloc() step, we record the
          corresponding map, insn index, and relocation type for the
          global data.

       4) Last but not least, in the actual relocation step in
          bpf_program__relocate(), we mark the ldimm64 instruction with
          src_reg = BPF_PSEUDO_MAP_VALUE: the map's file descriptor is
          stored in the first imm field, similarly to BPF_PSEUDO_MAP_FD,
          and the access offset into the section is stored in the second
          imm field (as ldimm64 is 2 insns wide). Given these maps have
          only a single element, ldimm64's off field remains zero in
          both parts.
      
       5) On the kernel side, this specially marked BPF_PSEUDO_MAP_VALUE
          load will then store the actual target address, in order to
          have 'map-lookup'-free access, that is, the actual map value
          base address + offset. The destination register in the
          verifier will then be marked as PTR_TO_MAP_VALUE, containing
          the fixed offset as reg->off and the backing BPF map as
          reg->map_ptr. Meaning, it's treated like any other normal map
          value from the verification side, only with efficient, direct
          value access instead of an actual call to the map lookup
          helper, as in the typical case.
      
      Currently, only support for static global variables has been
      added, and libbpf rejects non-static global variables from
      loading. This restriction can be lifted once we have proper
      semantics for how BPF will treat multi-object BPF loads. On the
      BTF side, libbpf will set the value type id of the types
      corresponding to the ".bss", ".data" and ".rodata" names, which
      LLVM emits without the object name prefix. The key type is left
      as zero, thus making use of the key-less BTF option in array
      maps.
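
      As a sketch of what this enables on the BPF C side (a hypothetical
      program; the helper header location varies by tree), each of these
      definitions lands in one of the new section-backed maps:

        #include <linux/bpf.h>
        #include "bpf_helpers.h" /* provides SEC(); path varies by tree */

        static __u32 counter;               /* zero-init   -> .bss    */
        static __u32 scale = 42;            /* initialized -> .data   */
        static const char tag[] = "hello";  /* const       -> .rodata */

        SEC("classifier")
        int use_globals(struct __sk_buff *skb)
        {
                counter++;      /* becomes a direct map-value access */
                return scale + tag[0];
        }

        char _license[] SEC("license") = "GPL";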
      
      Simple example dump of a program using global vars in each
      section:
      
        # bpftool prog
        [...]
        6784: sched_cls  name load_static_dat  tag a7e1291567277844  gpl
              loaded_at 2019-03-11T15:39:34+0000  uid 0
              xlated 1776B  jited 993B  memlock 4096B  map_ids 2238,2237,2235,2236,2239,2240
      
        # bpftool map show id 2237
        2237: array  name test_glo.bss  flags 0x0
              key 4B  value 64B  max_entries 1  memlock 4096B
        # bpftool map show id 2235
        2235: array  name test_glo.data  flags 0x0
              key 4B  value 64B  max_entries 1  memlock 4096B
        # bpftool map show id 2236
        2236: array  name test_glo.rodata  flags 0x80
              key 4B  value 96B  max_entries 1  memlock 4096B
      
        # bpftool prog dump xlated id 6784
        int load_static_data(struct __sk_buff * skb):
        ; int load_static_data(struct __sk_buff *skb)
           0: (b7) r6 = 0
        ; test_reloc(number, 0, &num0);
           1: (63) *(u32 *)(r10 -4) = r6
           2: (bf) r2 = r10
        ; int load_static_data(struct __sk_buff *skb)
           3: (07) r2 += -4
        ; test_reloc(number, 0, &num0);
           4: (18) r1 = map[id:2238]
           6: (18) r3 = map[id:2237][0]+0    <-- direct addr in .bss area
           8: (b7) r4 = 0
           9: (85) call array_map_update_elem#100464
          10: (b7) r1 = 1
        ; test_reloc(number, 1, &num1);
        [...]
        ; test_reloc(string, 2, str2);
         120: (18) r8 = map[id:2237][0]+16   <-- same here at offset +16
         122: (18) r1 = map[id:2239]
         124: (18) r3 = map[id:2237][0]+16
         126: (b7) r4 = 0
         127: (85) call array_map_update_elem#100464
         128: (b7) r1 = 120
        ; str1[5] = 'x';
         129: (73) *(u8 *)(r9 +5) = r1
        ; test_reloc(string, 3, str1);
         130: (b7) r1 = 3
         131: (63) *(u32 *)(r10 -4) = r1
         132: (b7) r9 = 3
         133: (bf) r2 = r10
        ; int load_static_data(struct __sk_buff *skb)
         134: (07) r2 += -4
        ; test_reloc(string, 3, str1);
         135: (18) r1 = map[id:2239]
         137: (18) r3 = map[id:2235][0]+16   <-- direct addr in .data area
         139: (b7) r4 = 0
         140: (85) call array_map_update_elem#100464
         141: (b7) r1 = 111
        ; __builtin_memcpy(&str2[2], "hello", sizeof("hello"));
         142: (73) *(u8 *)(r8 +6) = r1       <-- further access based on .bss data
         143: (b7) r1 = 108
         144: (73) *(u8 *)(r8 +5) = r1
        [...]
      
      For the Cilium use-case in particular, this enables migrating
      configuration constants from the Cilium daemon's generated header
      defines into global data sections, such that expensive runtime
      recompilations with LLVM can be avoided altogether. Instead, the
      ELF file effectively becomes a "template": it is compiled only
      once (!), and the Cilium daemon then rewrites the relevant
      configuration data directly in the ELF's .data or .rodata sections
      instead of recompiling the program. The updated ELF is then loaded
      into the kernel and atomically replaces the existing program in
      the networking datapath. More info in [0].
      
      Based upon recent fix in LLVM, commit c0db6b6bd444 ("[BPF] Don't fail
      for static variables").
      
        [0] LPC 2018, BPF track, "ELF relocation for static data in BPF",
            http://vger.kernel.org/lpc-bpf2018.html#session-3
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Andrii Nakryiko <andriin@fb.com>
      Acked-by: Martin KaFai Lau <kafai@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
  20. 04 Apr 2019 (1 commit)
    • libbpf: teach libbpf about log_level bit 2 · da11b417
      Authored by Alexei Starovoitov
      Allow bpf_prog_load_xattr() to specify the log_level for program
      loading.

      Teach libbpf to accept a log_level with bit 2 set.

      Increase the default BPF_LOG_BUF_SIZE from 256k to 16M. There is
      no downside to increasing it to the maximum allowed by old kernels.
      The existing 256k limit caused ENOSPC errors, and users were not
      able to see the verifier error, which is printed at the end of the
      verifier log.

      If ENOSPC is hit, double the verifier log buffer and try again to
      capture the verifier error.
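
      A sketch of the extended attribute usage (the object file name is
      hypothetical; bit 2, i.e. value 4, requests verifier statistics):

        #include <bpf/libbpf.h>

        int load_verbose(void)
        {
                struct bpf_prog_load_attr attr = {
                        .file = "prog.bpf.o",
                        /* full log + per-insn states + stats */
                        .log_level = 1 | 2 | 4,
                };
                struct bpf_object *obj;
                int prog_fd;

                /* the verifier log is emitted through libbpf's
                 * print callback */
                return bpf_prog_load_xattr(&attr, &obj, &prog_fd);
        }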
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
  21. 20 Mar 2019 (1 commit)
  22. 12 Mar 2019 (1 commit)
    • tools lib bpf: Fix the build by adding a missing stdarg.h include · dfcbc2f2
      Authored by Arnaldo Carvalho de Melo
      The libbpf_print_fn_t typedef uses va_list without including the
      header where that type is defined, stdarg.h. This breaks the build
      in places where we're unlucky and the type isn't already defined
      by some previously included header.
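
      The fix is a one-line include in the header itself, so the typedef
      is self-contained wherever libbpf.h is pulled in (a sketch of the
      affected declaration):

        /* tools/lib/bpf/libbpf.h */
        #include <stdarg.h> /* the missing include: defines va_list */

        typedef int (*libbpf_print_fn_t)(enum libbpf_print_level level,
                                         const char *, va_list ap);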
      
      Noticed while building on fedora 24, cross-building tools/perf to
      the ARC architecture with the uClibc C library:
      
        28 fedora:24-x-ARC-uClibc   : FAIL arc-linux-gcc (ARCompact ISA Linux uClibc toolchain 2017.09-rc2) 7.1.1 20170710
      
          CC       /tmp/build/perf/tests/llvm.o
        In file included from tests/llvm.c:3:0:
        /git/linux/tools/lib/bpf/libbpf.h:57:20: error: unknown type name 'va_list'
              const char *, va_list ap);
                            ^~~~~~~
        /git/linux/tools/lib/bpf/libbpf.h:59:34: error: unknown type name 'libbpf_print_fn_t'
         LIBBPF_API void libbpf_set_print(libbpf_print_fn_t fn);
                                          ^~~~~~~~~~~~~~~~~
        mv: cannot stat '/tmp/build/perf/tests/.llvm.o.tmp': No such file or directory
      
      Cc: Alexei Starovoitov <ast@kernel.org>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Cc: Jakub Kicinski <jakub.kicinski@netronome.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Quentin Monnet <quentin.monnet@netronome.com>
      Cc: Stanislav Fomichev <sdf@google.com>
      Cc: Yonghong Song <yhs@fb.com>
      Fixes: a8a1f7d0 ("libbpf: fix libbpf_print")
      Link: https://lkml.kernel.org/n/tip-5270n2quu2gqz22o7itfdx00@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  23. 01 Mar 2019 (1 commit)
  24. 15 Feb 2019 (2 commits)
    • libbpf: Introduce bpf_object__btf · 789f6bab
      Authored by Andrey Ignatov
      Add a new accessor for bpf_object to get the opaque struct btf *
      from it.

      struct btf * is needed for all operations with BTF, and it's
      present in bpf_object. The only thing missing is a way to get it.

      An example use-case is getting the BTF key_type_id and
      value_type_id for a map in a bpf_object. That can be done with
      btf__get_map_kv_tids(), but that function requires a struct btf *.

      A similar API could be added for struct btf_ext, but there is no
      use-case for it yet.
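
      A sketch of that use-case with the new accessor (the map name and
      wrapper function are hypothetical):

        #include <linux/types.h>
        #include <bpf/libbpf.h>
        #include <bpf/btf.h>

        static int get_map_kv_tids(struct bpf_object *obj,
                                   const char *name,
                                   __u32 *key_tid, __u32 *val_tid)
        {
                struct btf *btf = bpf_object__btf(obj); /* new accessor */
                struct bpf_map *map =
                        bpf_object__find_map_by_name(obj, name);
                const struct bpf_map_def *def;

                if (!btf || !map)
                        return -1;

                def = bpf_map__def(map);
                return btf__get_map_kv_tids(btf, name, def->key_size,
                                            def->value_size,
                                            key_tid, val_tid);
        }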
      Signed-off-by: Andrey Ignatov <rdna@fb.com>
      Acked-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • libbpf: Introduce bpf_map__resize · 1a11a4c7
      Authored by Andrey Ignatov
      Add bpf_map__resize() to change max_entries for a map.
      
      Quite often the necessary map size is unknown at compile time and
      can only be calculated at run time.
      
      Currently the following approach is used to do so:
      * bpf_object__open_buffer() to open Elf file from a buffer;
      * bpf_object__find_map_by_name() to find relevant map;
      * bpf_map__def() to get map attributes and create struct
        bpf_create_map_attr from them;
      * update max_entries in bpf_create_map_attr;
      * bpf_create_map_xattr() to create new map with updated max_entries;
      * bpf_map__reuse_fd() to replace the map in bpf_object with newly
        created one.
      
      And only after all this can the bpf_object finally be loaded; the
      map will then have the new size.
      
      This 1) is quite a lot of steps and 2) doesn't take BTF into
      account.

      For 2), even more steps would be needed, and some of them require
      changes to libbpf (e.g. a way to get struct btf * from a
      bpf_object).

      Instead, the whole problem can be solved by introducing a simple
      bpf_map__resize() API that checks the map and sets the new
      max_entries if the map is not loaded yet.
      
      So the new steps are:
      * bpf_object__open_buffer() to open Elf file from a buffer;
      * bpf_object__find_map_by_name() to find relevant map;
      * bpf_map__resize() to update max_entries.
      
      That's much simpler and works with BTF.
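
      A sketch of the new flow (the ELF buffer, object name and map name
      are hypothetical):

        #include <linux/types.h>
        #include <bpf/libbpf.h>

        static int open_with_runtime_size(void *elf_buf, size_t elf_sz,
                                          __u32 max_entries)
        {
                struct bpf_object *obj;
                struct bpf_map *map;

                obj = bpf_object__open_buffer(elf_buf, elf_sz, "my_obj");
                if (libbpf_get_error(obj))
                        return -1;

                map = bpf_object__find_map_by_name(obj, "events");
                /* resizing only works before the map is created
                 * in the kernel */
                if (!map || bpf_map__resize(map, max_entries))
                        return -1;

                return bpf_object__load(obj);
        }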
      Signed-off-by: Andrey Ignatov <rdna@fb.com>
      Acked-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>