1. 13 Jan 2022, 1 commit
  2. 06 Jan 2022, 3 commits
  3. 29 Dec 2021, 1 commit
  4. 15 Dec 2021, 2 commits
    • libbpf: Avoid reading past ELF data section end when copying license · f9798239
      Andrii Nakryiko authored
      Fix a possible read beyond the end of the ELF "license" data section
      if the license string is not properly zero-terminated. Use the fact
      that libbpf_strlcpy never accesses the (N-1)st byte of the source
      string, because it's replaced with '\0' anyway.
      
      If this happens, it's a violation of contract between libbpf and a user,
      but not handling this more robustly upsets CIFuzz, so given the fix is
      trivial, let's fix the potential issue.
      
      Fixes: 9fc205b4 ("libbpf: Add sane strncpy alternative and use it internally")
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20211214232054.3458774-1-andrii@kernel.org
    • libbpf: Auto-bump RLIMIT_MEMLOCK if kernel needs it for BPF · e542f2c4
      Andrii Nakryiko authored
      The need to increase RLIMIT_MEMLOCK to do anything useful with BPF is
      one of the first extremely frustrating gotchas that all new BPF users
      run into, and one that some have to learn the hard way.
      
      Luckily, starting with upstream Linux kernel version 5.11, BPF subsystem
      dropped the dependency on memlock and uses memcg-based memory accounting
      instead. Unfortunately, detecting memcg-based BPF memory accounting is
      far from trivial (as can be evidenced by this patch), so in practice
      most BPF applications still do unconditional RLIMIT_MEMLOCK increase.
      
      As we move towards libbpf 1.0, it would be good to allow users to forget
      about RLIMIT_MEMLOCK vs memcg and let libbpf do the sensible adjustment
      automatically. This patch paves the way forward in this matter. Libbpf
      will do feature detection of memcg-based accounting, and if detected,
      will do nothing. But if the kernel is too old, just like BCC, libbpf
      will automatically increase RLIMIT_MEMLOCK on behalf of user
      application ([0]).
      
      As this is technically a breaking change, during the transition period
      applications have to opt into libbpf 1.0 mode by setting
      LIBBPF_STRICT_AUTO_RLIMIT_MEMLOCK bit when calling
      libbpf_set_strict_mode().
      
      Libbpf allows controlling the exact RLIMIT_MEMLOCK limit with the
      libbpf_set_memlock_rlim_max() API. Passing 0 makes libbpf leave
      RLIMIT_MEMLOCK untouched. libbpf_set_memlock_rlim_max() has to be
      called before the first bpf_prog_load(), bpf_btf_load(), or
      bpf_object__load() call; otherwise it has no effect and returns
      -EBUSY.
      
        [0] Closes: https://github.com/libbpf/libbpf/issues/369

      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/20211214195904.1785155-2-andrii@kernel.org
  5. 14 Dec 2021, 1 commit
  6. 11 Dec 2021, 6 commits
  7. 04 Dec 2021, 1 commit
  8. 03 Dec 2021, 5 commits
  9. 30 Nov 2021, 1 commit
  10. 29 Nov 2021, 1 commit
    • libbpf: Support static initialization of BPF_MAP_TYPE_PROG_ARRAY · 341ac5ff
      Hengqi Chen authored
      Support static initialization of BPF_MAP_TYPE_PROG_ARRAY with a
      syntax similar to map-in-map initialization ([0]):
      
          SEC("socket")
          int tailcall_1(void *ctx)
          {
              return 0;
          }
      
          struct {
              __uint(type, BPF_MAP_TYPE_PROG_ARRAY);
              __uint(max_entries, 2);
              __uint(key_size, sizeof(__u32));
              __array(values, int (void *));
          } prog_array_init SEC(".maps") = {
              .values = {
                  [1] = (void *)&tailcall_1,
              },
          };
      
      Here's the relevant part of libbpf debug log showing what's
      going on with prog-array initialization:
      
      libbpf: sec '.relsocket': collecting relocation for section(3) 'socket'
      libbpf: sec '.relsocket': relo #0: insn #2 against 'prog_array_init'
      libbpf: prog 'entry': found map 0 (prog_array_init, sec 4, off 0) for insn #0
      libbpf: .maps relo #0: for 3 value 0 rel->r_offset 32 name 53 ('tailcall_1')
      libbpf: .maps relo #0: map 'prog_array_init' slot [1] points to prog 'tailcall_1'
      libbpf: map 'prog_array_init': created successfully, fd=5
      libbpf: map 'prog_array_init': slot [1] set to prog 'tailcall_1' fd=6
      
        [0] Closes: https://github.com/libbpf/libbpf/issues/354

      Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20211128141633.502339-2-hengqi.chen@gmail.com
  11. 26 Nov 2021, 4 commits
  12. 20 Nov 2021, 1 commit
  13. 19 Nov 2021, 1 commit
  14. 13 Nov 2021, 1 commit
    • libbpf: Perform map fd cleanup for gen_loader in case of error · ba05fd36
      Kumar Kartikeya Dwivedi authored
      Alexei reported an fd leak issue in the gen loader (when invoked from
      bpftool) [0]. When adding ksym support, map fd allocation was moved
      from the stack to the loader map; however, I missed closing these fds
      (relevant when the cleanup label is jumped to on error). In the
      success case, the allocated fd is returned in the loader ctx, hence
      this problem goes unnoticed.
      
      Make three changes. First, use MAX_USED_MAPS instead of
      MAX_USED_PROGS in MAX_FD_ARRAY_SZ; this braino was not a problem
      until now because we didn't try to close map fds (otherwise it would
      have tried closing 32 additional fds in the ksym BTF fd range).
      Second, clean up all nr_maps fds in the cleanup label code, so that
      on error all temporary map fds from bpf_gen__map_create are closed.
      
      Third, adjust the cleanup label to only generate code for the
      required number of program and map fds. To trim code for the
      remaining program fds, lay out the prog_fd array at the end of the
      stack, so that we can directly skip the remaining instances. The
      stack size remains the same, since changing it would require changes
      in a lot of places (including adjustment of the stack_off macro);
      the nr_progs_sz variable is only used to track the required number
      of iterations (and to jump over the cleanup size calculated from
      it), and the stack offset calculation remains unaffected.
      
      The difference for test_ksyms_module.o is as follows:
      libbpf: //prog cleanup iterations: before = 34, after = 5
      libbpf: //maps cleanup iterations: before = 64, after = 2
      
      Also, move allocation of the gen->fd_array offset to bpf_gen__init.
      Since the offset can now be 0, and we already continue even if
      add_data returns 0 on failure, we do not need to distinguish between
      a 0 offset and the failure case, as we rely on bpf_gen__finish to
      check for errors. We can also skip the check for gen->fd_array in
      the add_*_fd functions, since bpf_gen__init takes care of it.
      
        [0]: https://lore.kernel.org/bpf/CAADnVQJ6jSitKSNKyxOrUzwY2qDRX0sPkJ=VLGHuCLVJ=qOt9g@mail.gmail.com
      
      Fixes: 18f4fccb ("libbpf: Update gen_loader to emit BTF_KIND_FUNC relocations")
      Reported-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20211112232022.899074-1-memxor@gmail.com
  15. 12 Nov 2021, 3 commits
  16. 08 Nov 2021, 4 commits
  17. 04 Nov 2021, 4 commits