1. 27 Jul 2022 (2 commits)
  2. 18 Jul 2022 (1 commit)
    • libbpf: Fix build issue with llvm-readelf · e2e71220
      Yonghong Song authored
      stable inclusion
      from stable-v5.10.111
      commit 5baf92a2c46c543ddcc2e3b1a96cd67a10f7e7fd
      category: bugfix
      bugzilla: https://gitee.com/openeuler/kernel/issues/I5GL1Z
      
      Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=5baf92a2c46c543ddcc2e3b1a96cd67a10f7e7fd
      
      --------------------------------
      
      [ Upstream commit 0908a66a ]
      
      There are cases where the clang compiler is packaged in such a way
      that readelf is a symbolic link to llvm-readelf. In such cases,
      llvm-readelf will be used instead of the default binutils readelf,
      and the following error will appear during the libbpf build:
      
      #  Warning: Num of global symbols in
      #   /home/yhs/work/bpf-next/tools/testing/selftests/bpf/tools/build/libbpf/sharedobjs/libbpf-in.o (367)
      #   does NOT match with num of versioned symbols in
      #   /home/yhs/work/bpf-next/tools/testing/selftests/bpf/tools/build/libbpf/libbpf.so libbpf.map (383).
      #   Please make sure all LIBBPF_API symbols are versioned in libbpf.map.
      #  --- /home/yhs/work/bpf-next/tools/testing/selftests/bpf/tools/build/libbpf/libbpf_global_syms.tmp ...
      #  +++ /home/yhs/work/bpf-next/tools/testing/selftests/bpf/tools/build/libbpf/libbpf_versioned_syms.tmp ...
      #  @@ -324,6 +324,22 @@
      #   btf__str_by_offset
      #   btf__type_by_id
      #   btf__type_cnt
      #  +LIBBPF_0.0.1
      #  +LIBBPF_0.0.2
      #  +LIBBPF_0.0.3
      #  +LIBBPF_0.0.4
      #  +LIBBPF_0.0.5
      #  +LIBBPF_0.0.6
      #  +LIBBPF_0.0.7
      #  +LIBBPF_0.0.8
      #  +LIBBPF_0.0.9
      #  +LIBBPF_0.1.0
      #  +LIBBPF_0.2.0
      #  +LIBBPF_0.3.0
      #  +LIBBPF_0.4.0
      #  +LIBBPF_0.5.0
      #  +LIBBPF_0.6.0
      #  +LIBBPF_0.7.0
      #   libbpf_attach_type_by_name
      #   libbpf_find_kernel_btf
      #   libbpf_find_vmlinux_btf_id
      #  make[2]: *** [Makefile:184: check_abi] Error 1
      #  make[1]: *** [Makefile:140: all] Error 2
      
      The above failure is due to different printouts for some ABS
      versioned symbols. For example, with the same libbpf.so,
        $ /bin/readelf --dyn-syms --wide tools/lib/bpf/libbpf.so | grep "LIBBPF" | grep ABS
           134: 0000000000000000     0 OBJECT  GLOBAL DEFAULT  ABS LIBBPF_0.5.0
           202: 0000000000000000     0 OBJECT  GLOBAL DEFAULT  ABS LIBBPF_0.6.0
           ...
        $ /opt/llvm/bin/readelf --dyn-syms --wide tools/lib/bpf/libbpf.so | grep "LIBBPF" | grep ABS
           134: 0000000000000000     0 OBJECT  GLOBAL DEFAULT   ABS LIBBPF_0.5.0@@LIBBPF_0.5.0
           202: 0000000000000000     0 OBJECT  GLOBAL DEFAULT   ABS LIBBPF_0.6.0@@LIBBPF_0.6.0
           ...
      The binutils readelf doesn't print out the LIBBPF_* symbol versions,
      while llvm-readelf does. This difference causes the libbpf build
      failure with llvm-readelf.
      
      The proposed fix filters out all ABS symbols as they are not part of the comparison.
      This works for both binutils readelf and llvm-readelf.
      Reported-by: Delyan Kratunov <delyank@fb.com>
      Signed-off-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20220204214355.502108-1-yhs@fb.com
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
      Reviewed-by: Wei Li <liwei391@huawei.com>
  3. 06 Jul 2022 (3 commits)
  4. 23 May 2022 (1 commit)
  5. 06 Dec 2021 (5 commits)
  6. 21 Oct 2021 (2 commits)
  7. 19 Oct 2021 (4 commits)
  8. 06 Jul 2021 (1 commit)
  9. 03 Jun 2021 (5 commits)
    • libbpf: Fix signed overflow in ringbuf_process_ring · dd40e96b
      Brendan Jackman authored
      stable inclusion
      from stable-5.10.38
      commit 4aae6eb6af7d1ac2ee5762077892185884d8f169
      bugzilla: 51875
      CVE: NA
      
      --------------------------------
      
      [ Upstream commit 2a30f944 ]
      
      One of our benchmarks running in (Google-internal) CI pushes data
      through the ringbuf faster than userspace is able to consume
      it. In this case it seems we're actually able to get >INT_MAX entries
      in a single ring_buffer__consume() call. ASAN detected that cnt
      overflows in this case.
      
      Fix by using a 64-bit counter internally and then capping the result
      to INT_MAX before converting to the int return type. Do the same for
      ring_buffer__poll().
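
      A hedged illustration of the fix (not libbpf's verbatim code;
      consume_records and n_pending are made-up names):

        #include <limits.h>
        #include <stdint.h>

        /* Count consumed records in a 64-bit variable so that more than
         * INT_MAX records in a single call cannot overflow, then cap the
         * result before converting to the int return type. */
        static int consume_records(int64_t n_pending)
        {
                int64_t cnt = 0;

                while (n_pending-- > 0)
                        cnt++;          /* one record handed to the callback */

                return cnt > INT_MAX ? INT_MAX : (int)cnt;
        }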
      
      Fixes: bf99c936 ("libbpf: Add BPF ring buffer support")
      Signed-off-by: Brendan Jackman <jackmanb@google.com>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20210429130510.1621665-1-jackmanb@google.com
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      Signed-off-by: Chen Jun <chenjun102@huawei.com>
      Acked-by: Weilong Chen <chenweilong@huawei.com>
      Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
    • selftests/bpf: Fix BPF_CORE_READ_BITFIELD() macro · d44c2aa5
      Andrii Nakryiko authored
      stable inclusion
      from stable-5.10.37
      commit 3769c54d341cf94b7e289b070c8fa5d1f57b2029
      bugzilla: 51868
      CVE: NA
      
      --------------------------------
      
      [ Upstream commit 0f20615d ]
      
      Fix the BPF_CORE_READ_BITFIELD() macro used for reading CO-RE-relocatable
      bitfields. Missing breaks in a switch caused 8-byte reads in all cases.
      This can confuse libbpf, because it does strict checks that the memory
      load size corresponds to the original size of the field, which in this
      case would quite often be wrong.
      
      After fixing that, we run into another problem, which is quite subtle,
      so it is worth documenting here. The issue is in the interaction between
      Clang optimizations and CO-RE relocations. Without the asm volatile
      construct (also known as barrier_var()), Clang will re-order BYTE_OFFSET
      and BYTE_SIZE relocations and will apply BYTE_OFFSET four times, once
      for each switch case arm. This results in the same libbpf error about a
      mismatch between memory load size and original field size. I.e., if we
      were reading a u32, we'd still have *(u8 *), *(u16 *), *(u32 *), and
      *(u64 *) memory loads, three of which would fail. Using barrier_var()
      forces Clang to apply the BYTE_OFFSET relocation first (and only once)
      to calculate p, after which the value of p is used without relocation
      in each switch case arm, doing an appropriately-sized memory load.
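
      As a hedged sketch of the fixed pattern (the real macro lives in
      tools/lib/bpf/bpf_core_read.h; read_bitfield, base, byte_offset and
      byte_size are illustrative names, and this barrier_var() definition is
      only an approximation of libbpf's):

        #define barrier_var(var) asm volatile("" : "+r"(var))

        static unsigned long long read_bitfield(const void *base,
                                                unsigned long byte_offset,
                                                unsigned long byte_size)
        {
                /* BYTE_OFFSET relocation, applied exactly once */
                const void *p = (const char *)base + byte_offset;
                unsigned long long val = 0;

                barrier_var(p);          /* p is now opaque to the optimizer */
                switch (byte_size) {     /* BYTE_SIZE relocation */
                case 1: val = *(const unsigned char *)p; break;
                case 2: val = *(const unsigned short *)p; break;
                case 4: val = *(const unsigned int *)p; break;
                case 8: val = *(const unsigned long long *)p; break;
                }                        /* the missing breaks were the first bug */
                return val;
        }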
      
      Here's the list of relevant relocations and pieces of generated BPF code
      before and after this patch for test_core_reloc_bitfields_direct selftests.
      
      BEFORE
      =====
       #45: core_reloc: insn #160 --> [5] + 0:5: byte_sz --> struct core_reloc_bitfields.u32
       #46: core_reloc: insn #167 --> [5] + 0:5: byte_off --> struct core_reloc_bitfields.u32
       #47: core_reloc: insn #174 --> [5] + 0:5: byte_off --> struct core_reloc_bitfields.u32
       #48: core_reloc: insn #178 --> [5] + 0:5: byte_off --> struct core_reloc_bitfields.u32
       #49: core_reloc: insn #182 --> [5] + 0:5: byte_off --> struct core_reloc_bitfields.u32
      
           157:       18 02 00 00 00 00 00 00 00 00 00 00 00 00 00 00 r2 = 0 ll
           159:       7b 12 20 01 00 00 00 00 *(u64 *)(r2 + 288) = r1
           160:       b7 02 00 00 04 00 00 00 r2 = 4
      ; BYTE_SIZE relocation here                 ^^^
           161:       66 02 07 00 03 00 00 00 if w2 s> 3 goto +7 <LBB0_63>
           162:       16 02 0d 00 01 00 00 00 if w2 == 1 goto +13 <LBB0_65>
           163:       16 02 01 00 02 00 00 00 if w2 == 2 goto +1 <LBB0_66>
           164:       05 00 12 00 00 00 00 00 goto +18 <LBB0_69>
      
      0000000000000528 <LBB0_66>:
           165:       18 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 r1 = 0 ll
           167:       69 11 08 00 00 00 00 00 r1 = *(u16 *)(r1 + 8)
      ; BYTE_OFFSET relo here w/ WRONG size        ^^^^^^^^^^^^^^^^
           168:       05 00 0e 00 00 00 00 00 goto +14 <LBB0_69>
      
      0000000000000548 <LBB0_63>:
           169:       16 02 0a 00 04 00 00 00 if w2 == 4 goto +10 <LBB0_67>
           170:       16 02 01 00 08 00 00 00 if w2 == 8 goto +1 <LBB0_68>
           171:       05 00 0b 00 00 00 00 00 goto +11 <LBB0_69>
      
      0000000000000560 <LBB0_68>:
           172:       18 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 r1 = 0 ll
           174:       79 11 08 00 00 00 00 00 r1 = *(u64 *)(r1 + 8)
      ; BYTE_OFFSET relo here w/ WRONG size        ^^^^^^^^^^^^^^^^
           175:       05 00 07 00 00 00 00 00 goto +7 <LBB0_69>
      
      0000000000000580 <LBB0_65>:
           176:       18 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 r1 = 0 ll
           178:       71 11 08 00 00 00 00 00 r1 = *(u8 *)(r1 + 8)
      ; BYTE_OFFSET relo here w/ WRONG size        ^^^^^^^^^^^^^^^^
           179:       05 00 03 00 00 00 00 00 goto +3 <LBB0_69>
      
      00000000000005a0 <LBB0_67>:
           180:       18 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 r1 = 0 ll
           182:       61 11 08 00 00 00 00 00 r1 = *(u32 *)(r1 + 8)
      ; BYTE_OFFSET relo here w/ RIGHT size        ^^^^^^^^^^^^^^^^
      
      00000000000005b8 <LBB0_69>:
           183:       67 01 00 00 20 00 00 00 r1 <<= 32
           184:       b7 02 00 00 00 00 00 00 r2 = 0
           185:       16 02 02 00 00 00 00 00 if w2 == 0 goto +2 <LBB0_71>
           186:       c7 01 00 00 20 00 00 00 r1 s>>= 32
           187:       05 00 01 00 00 00 00 00 goto +1 <LBB0_72>
      
      00000000000005e0 <LBB0_71>:
           188:       77 01 00 00 20 00 00 00 r1 >>= 32
      
      AFTER
      =====
      
       #30: core_reloc: insn #132 --> [5] + 0:5: byte_off --> struct core_reloc_bitfields.u32
       #31: core_reloc: insn #134 --> [5] + 0:5: byte_sz --> struct core_reloc_bitfields.u32
      
           129:       18 02 00 00 00 00 00 00 00 00 00 00 00 00 00 00 r2 = 0 ll
           131:       7b 12 20 01 00 00 00 00 *(u64 *)(r2 + 288) = r1
           132:       b7 01 00 00 08 00 00 00 r1 = 8
      ; BYTE_OFFSET relo here                     ^^^
      ; no size check for non-memory dereferencing instructions
           133:       0f 12 00 00 00 00 00 00 r2 += r1
           134:       b7 03 00 00 04 00 00 00 r3 = 4
      ; BYTE_SIZE relocation here                 ^^^
           135:       66 03 05 00 03 00 00 00 if w3 s> 3 goto +5 <LBB0_63>
           136:       16 03 09 00 01 00 00 00 if w3 == 1 goto +9 <LBB0_65>
           137:       16 03 01 00 02 00 00 00 if w3 == 2 goto +1 <LBB0_66>
           138:       05 00 0a 00 00 00 00 00 goto +10 <LBB0_69>
      
      0000000000000458 <LBB0_66>:
           139:       69 21 00 00 00 00 00 00 r1 = *(u16 *)(r2 + 0)
      ; NO CO-RE relocation here                   ^^^^^^^^^^^^^^^^
           140:       05 00 08 00 00 00 00 00 goto +8 <LBB0_69>
      
      0000000000000468 <LBB0_63>:
           141:       16 03 06 00 04 00 00 00 if w3 == 4 goto +6 <LBB0_67>
           142:       16 03 01 00 08 00 00 00 if w3 == 8 goto +1 <LBB0_68>
           143:       05 00 05 00 00 00 00 00 goto +5 <LBB0_69>
      
      0000000000000480 <LBB0_68>:
           144:       79 21 00 00 00 00 00 00 r1 = *(u64 *)(r2 + 0)
      ; NO CO-RE relocation here                   ^^^^^^^^^^^^^^^^
           145:       05 00 03 00 00 00 00 00 goto +3 <LBB0_69>
      
      0000000000000490 <LBB0_65>:
           146:       71 21 00 00 00 00 00 00 r1 = *(u8 *)(r2 + 0)
      ; NO CO-RE relocation here                   ^^^^^^^^^^^^^^^^
           147:       05 00 01 00 00 00 00 00 goto +1 <LBB0_69>
      
      00000000000004a0 <LBB0_67>:
           148:       61 21 00 00 00 00 00 00 r1 = *(u32 *)(r2 + 0)
      ; NO CO-RE relocation here                   ^^^^^^^^^^^^^^^^
      
      00000000000004a8 <LBB0_69>:
           149:       67 01 00 00 20 00 00 00 r1 <<= 32
           150:       b7 02 00 00 00 00 00 00 r2 = 0
           151:       16 02 02 00 00 00 00 00 if w2 == 0 goto +2 <LBB0_71>
           152:       c7 01 00 00 20 00 00 00 r1 s>>= 32
           153:       05 00 01 00 00 00 00 00 goto +1 <LBB0_72>
      
      00000000000004d0 <LBB0_71>:
           154:       77 01 00 00 20 00 00 00 r1 >>= 32
      
      Fixes: ee26dade ("libbpf: Add support for relocatable bitfields")
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Lorenz Bauer <lmb@cloudflare.com>
      Link: https://lore.kernel.org/bpf/20210426192949.416837-4-andrii@kernel.org
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      Signed-off-by: Chen Jun <chenjun102@huawei.com>
      Acked-by: Weilong Chen <chenweilong@huawei.com>
      Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
    • libbpf: Initialize the bpf_seq_printf parameters array field by field · 2e0a0267
      Florent Revest authored
      stable inclusion
      from stable-5.10.37
      commit 78d8b34751cf3c61b8dcd6ac40b0fc453de3c6a3
      bugzilla: 51868
      CVE: NA
      
      --------------------------------
      
      [ Upstream commit 83cd92b4 ]
      
      When initializing the __param array with a one-liner, if all args are
      const, the initial array value will be placed in the .rodata section,
      but because libbpf does not support relocations in the .rodata section,
      any pointer in this array will stay NULL.
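
      A hedged sketch of the difference (build_params, param, fmt and arg are
      illustrative names, not the exact macro internals):

        static long build_params(const char *fmt, unsigned long long arg)
        {
                unsigned long long param[2];

                /* Field-by-field stores keep 'param' on the stack, so the
                 * fmt pointer is written by an instruction the loader can
                 * relocate. A one-liner '= { (unsigned long long)fmt, arg }'
                 * with all-const args may instead be materialized in .rodata,
                 * where libbpf applies no relocations and the pointer would
                 * stay NULL. */
                param[0] = (unsigned long long)fmt;
                param[1] = arg;

                /* stand-in for passing &param to bpf_seq_printf() */
                return (long)(param[0] + param[1]);
        }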
      
      Fixes: c09add2f ("tools/libbpf: Add bpf_iter support")
      Signed-off-by: Florent Revest <revest@chromium.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20210419155243.1632274-5-revest@chromium.org
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      Signed-off-by: Chen Jun <chenjun102@huawei.com>
      Acked-by: Weilong Chen <chenweilong@huawei.com>
      Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
    • libbpf: Add explicit padding to btf_dump_emit_type_decl_opts · f71b06b2
      KP Singh authored
      stable inclusion
      from stable-5.10.37
      commit 454fb207476b34daa26fca1692eacd763b0adea9
      bugzilla: 51868
      CVE: NA
      
      --------------------------------
      
      [ Upstream commit ea24b195 ]
      
      Similar to
      https://lore.kernel.org/bpf/20210313210920.1959628-2-andrii@kernel.org/
      
      When DECLARE_LIBBPF_OPTS is used with inline field initialization, e.g.:
      
        DECLARE_LIBBPF_OPTS(btf_dump_emit_type_decl_opts, opts,
          .field_name = var_ident,
          .indent_level = 2,
          .strip_mods = strip_mods,
        );
      
      and compiled in debug mode, the compiler generates code that leaves the
      padding uninitialized, which triggers errors within libbpf APIs that
      require strict zero initialization of OPTS structs.
      
      Adding an anonymous padding field fixes the issue.
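
      The shape of the fix, sketched under the assumption that the padding is
      the usual zero-width bitfield trick (comments abridged; see the upstream
      commit for the exact struct):

        struct btf_dump_emit_type_decl_opts {
                size_t sz;              /* size of this struct, for compat checks */
                const char *field_name; /* optional name to embed in the declaration */
                int indent_level;
                bool strip_mods;        /* strip const/volatile/restrict mods */
                size_t :0;              /* anonymous padding; nudges the compiler
                                         * to zero the tail bytes on named-field
                                         * initialization */
        };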
      
      Fixes: 9f81654e ("libbpf: Expose BTF-to-C type declaration emitting API")
      Suggested-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: KP Singh <kpsingh@kernel.org>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20210319192117.2310658-1-kpsingh@kernel.org
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      Signed-off-by: Chen Jun <chenjun102@huawei.com>
      Acked-by: Weilong Chen <chenweilong@huawei.com>
      Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
    • libbpf: Add explicit padding to bpf_xdp_set_link_opts · f2d36210
      Andrii Nakryiko authored
      stable inclusion
      from stable-5.10.37
      commit b1ed7a57175082024eed73259dbd97d7f5d888fc
      bugzilla: 51868
      CVE: NA
      
      --------------------------------
      
      [ Upstream commit dde7b3f5 ]
      
      Adding such an anonymous padding field fixes the issue with uninitialized
      portions of bpf_xdp_set_link_opts when using the DECLARE_LIBBPF_OPTS
      macro with inline field initialization:
      
      DECLARE_LIBBPF_OPTS(bpf_xdp_set_link_opts, opts, .old_fd = -1);
      
      When such code is compiled in debug mode, the compiler generates code
      that leaves the padding bytes uninitialized, which triggers errors inside
      libbpf APIs that do strict zero-initialization checks on OPTS structs.
      
      Adding an anonymous padding field fixes the issue.
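
      For context, a hedged simplification of the strict zero check that such
      padding trips over: libbpf treats every byte past the last field it
      knows about, up to the caller-reported sz, as a tail that must be zero,
      and trailing struct padding falls into that range. opts_tail_is_zeroed
      is an illustrative name, not libbpf's actual helper:

        #include <stdbool.h>
        #include <stddef.h>

        static bool opts_tail_is_zeroed(const char *opts, size_t known_sz,
                                        size_t user_sz)
        {
                size_t i;

                /* Any non-zero byte past the fields libbpf knows about,
                 * including uninitialized padding, fails the validation. */
                for (i = known_sz; i < user_sz; i++)
                        if (opts[i])
                                return false;
                return true;
        }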
      
      Fixes: bd5ca3ef ("libbpf: Add function to set link XDP fd while specifying old program")
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20210313210920.1959628-2-andrii@kernel.org
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      Signed-off-by: Chen Jun <chenjun102@huawei.com>
      Acked-by: Weilong Chen <chenweilong@huawei.com>
      Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
  10. 26 Apr 2021 (5 commits)
  11. 19 Apr 2021 (4 commits)
  12. 09 Apr 2021 (2 commits)
  13. 12 Jan 2021 (1 commit)
  14. 02 Dec 2020 (1 commit)
  15. 20 Nov 2020 (1 commit)
  16. 10 Nov 2020 (1 commit)
  17. 05 Nov 2020 (1 commit)