1. 17 May 2022 (1 commit)
  2. 09 May 2022 (1 commit)
    • selftests/bpf: Prevent skeleton generation race · 1e2666e0
      Andrii Nakryiko committed
      Prevent "classic" and light skeleton generation rules from stomping on
      each other's toes due to the use of the same <obj>.linked{1,2,3}.o
      naming pattern. There is no coordination and synchronization between
      .skel.h and .lskel.h rules, so they can easily overwrite each other's
      intermediate object files, leading to errors like:
      
        /bin/sh: line 1: 170928 Bus error               (core dumped)
        /data/users/andriin/linux/tools/testing/selftests/bpf/tools/sbin/bpftool gen skeleton
        /data/users/andriin/linux/tools/testing/selftests/bpf/test_ksyms_weak.linked3.o
        name test_ksyms_weak
        > /data/users/andriin/linux/tools/testing/selftests/bpf/test_ksyms_weak.skel.h
        make: *** [Makefile:507: /data/users/andriin/linux/tools/testing/selftests/bpf/test_ksyms_weak.skel.h] Error 135
        make: *** Deleting file '/data/users/andriin/linux/tools/testing/selftests/bpf/test_ksyms_weak.skel.h'
      
      Fix by using different suffix for light skeleton rule.
      
      Fixes: c48e51c8 ("bpf: selftests: Add selftests for module kfunc support")
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/20220509004148.1801791-2-andrii@kernel.org
  3. 06 Apr 2022 (2 commits)
    • selftests/bpf: Add urandom_read shared lib and USDTs · 00a0fa2d
      Andrii Nakryiko committed
      Extend urandom_read helper binary to include USDTs of 4 combinations:
      semaphore/semaphoreless (refcounted and non-refcounted) and based in
      executable or shared library. We also extend urandom_read with the ability
      to report its own PID to the parent process and wait for the parent process to
      ready itself up for tracing urandom_read. We utilize popen() and
      underlying pipe properties for proper signaling.
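The PID-report/readiness handshake can be sketched with popen()-style pipes. This is a hypothetical Python illustration of the mechanism only, not the actual C code in urandom_read:

```python
import subprocess
import sys

def popen_handshake():
    # Child reports its own PID on stdout, then blocks reading stdin,
    # mirroring how urandom_read waits for its parent to ready itself
    # (e.g., attach probes) before producing events.
    child = subprocess.Popen(
        [sys.executable, "-c",
         "import os, sys;"
         "print(os.getpid(), flush=True);"
         "sys.stdin.readline()"],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)
    reported = int(child.stdout.readline())  # child is now paused
    # ... the parent would attach uprobes/USDTs here ...
    child.stdin.write("go\n")                # release the child
    child.stdin.flush()
    child.wait()
    return reported == child.pid
```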
      
      Once urandom_read is ready, we add a few tests to validate that libbpf's
      USDT attachment handles all the above combinations of semaphore (or lack
      of it) and static or shared library USDTs. Also, we validate that libbpf
      handles shared libraries both with PID filter and without one (i.e., -1
      for PID argument).
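The semaphore (refcounted) variant only pays for argument collection while a tracer is attached. A minimal sketch of that gating, with hypothetical names (the real mechanism is the sys/sdt.h semaphore word incremented by the attaching tracer):

```python
class SemaphoreUsdt:
    """Hypothetical model of a refcounted USDT site."""
    def __init__(self):
        self.semaphore = 0  # bumped/decremented by tracers on attach/detach
        self.fired = []

    def attach(self):
        self.semaphore += 1

    def detach(self):
        self.semaphore -= 1

    def probe(self, make_args):
        # Argument collection (make_args) is skipped entirely while no
        # tracer is attached; a semaphoreless USDT would always collect.
        if self.semaphore > 0:
            self.fired.append(make_args())
```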
      
      Having the shared library case tested with and without PID is important
      because internal logic differs on kernels that don't support BPF
      cookies. On such older kernels, attaching to USDTs in shared libraries
      without specifying concrete PID doesn't work in principle, because it's
      impossible to determine shared library's load address to derive absolute
      IPs for uprobe attachments. Without absolute IPs, it's impossible to
      perform a correct lookup of the USDT spec based on the uprobe's absolute IP (the
      only kind available from BPF at runtime). This is not the problem on
      newer kernels with BPF cookie as we don't need IP-to-ID lookup because
      BPF cookie value *is* spec ID.
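The two lookup strategies can be sketched as follows (hypothetical data layout; the real spec table lives in BPF maps driven by libbpf's USDT logic):

```python
def spec_by_abs_ip(ip_to_spec, lib_base, usdt_rel_ip):
    # Pre-cookie kernels: the only runtime key is the uprobe's absolute
    # IP, so we must know the shared library's load address. With
    # pid == -1 there is no single load address to use.
    if lib_base is None:
        raise ValueError("PID-less shared-lib attach: load address unknown")
    return ip_to_spec[lib_base + usdt_rel_ip]

def spec_by_cookie(specs, cookie):
    # BPF-cookie kernels: the cookie stored at attach time *is* the
    # spec ID, so no IP-to-ID translation is needed.
    return specs[cookie]
```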
      
      Having those two situations as separate subtests is good because
      libbpf CI is able to test the latest selftests against old kernels (e.g.,
      4.9 and 5.5): we'll be able to disable PID-less shared lib attachment
      for old kernels, but still leave the PID-specific one enabled to validate
      that this legacy logic is working correctly.
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Dave Marchevsky <davemarchevsky@fb.com>
      Link: https://lore.kernel.org/bpf/20220404234202.331384-8-andrii@kernel.org
    • selftests/bpf: Add basic USDT selftests · 630301b0
      Andrii Nakryiko committed
      Add semaphore-based USDT to test_progs itself and write basic tests to
      validate both auto-attachment and manual attachment logic, as well as
      BPF-side functionality.
      
      Also add subtests to validate that libbpf properly deduplicates USDT
      specs and handles spec overflow situations correctly, as well as proper
      "rollback" of partially-attached multi-spec USDT.
      
      The BPF side of the selftest intentionally consists of two files, to validate
      that the usdt.bpf.h header can be included from multiple source code files
      that are subsequently linked into final BPF object file without causing
      any symbol duplication or other issues. We are validating that __weak
      maps and bpf_usdt_xxx() API functions defined in usdt.bpf.h do work as
      intended.
      
      USDT selftests utilize the sys/sdt.h header, which on Ubuntu systems comes
      from the systemtap-sdt-devel package. But to simplify everyone's life,
      including CI but especially casual contributors to bpf/bpf-next that
      are trying to build selftests, I've checked in sys/sdt.h header from [0]
      directly. This way it will work on all architectures and distros without
      having to figure it out for every relevant combination and adding any
      extra implicit package dependencies.
      
        [0] https://sourceware.org/git?p=systemtap.git;a=blob_plain;f=includes/sys/sdt.h;h=ca0162b4dc57520b96638c8ae79ad547eb1dd3a1;hb=HEAD
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Reviewed-by: Alan Maguire <alan.maguire@oracle.com>
      Acked-by: Dave Marchevsky <davemarchevsky@fb.com>
      Link: https://lore.kernel.org/bpf/20220404234202.331384-7-andrii@kernel.org
  4. 18 Mar 2022 (1 commit)
  5. 17 Mar 2022 (2 commits)
  6. 21 Feb 2022 (1 commit)
    • selftests/bpf: Fix btfgen tests · b03e1946
      Andrii Nakryiko committed
      There turned out to be a few problems with btfgen selftests.
      
      First, core_btfgen tests are failing in BPF CI due to the use of
      full-featured bpftool, which has extra dependencies on libbfd, libcap,
      etc, which are present in BPF CI's build environment, but those shared
      libraries are missing in QEMU image in which test_progs is running.
      
      To fix this problem, use the minimal bootstrap version of bpftool instead.
      It only depends on libelf and libz, same as libbpf, so it doesn't add any
      new requirements (and bootstrap bpftool still implements the entire
      `bpftool gen` functionality, which is quite convenient).
      
      The second problem is even more interesting. Both core_btfgen and
      core_reloc reuse the same array of struct core_reloc_test_case test
      case definitions. That in itself is not a problem, but the btfgen test
      replaces the test_case->btf_src_file property with the path to the
      temporary file into which minimized BTF is output by bpftool. This
      interferes with the original core_reloc tests, depending on the order
      of test execution (core_btfgen is run first in sequential mode and
      screws up the subsequent core_reloc run by pointing to an already
      deleted temporary file instead of the original BTF files) and on
      whether those two runs share the same process (in parallel mode the
      chances are high for them to run in two separate processes and so not
      interfere with each other).
      
      To prevent this interference, create and use local copy of a test
      definition. Mark the original array as constant to catch accidental
      modifications. Note that setup_type_id_case_success() and
      setup_type_id_case_failure() still modify the common test_case->output
      memory area, but it is ok as each setup function has to re-initialize it
      completely anyways. In sequential mode it leads to deterministic and
      correct initialization. In parallel mode they will either each have
      their own process, or if core_reloc and core_btfgen happen to be run by
      the same worker process, they will still do that sequentially within the
      worker process. If they are sharded across multiple processes, they
      don't really share anything anyways.
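The copy-before-mutate fix can be sketched like this (hypothetical field and file names; the real code works on struct core_reloc_test_case in C):

```python
import dataclasses

@dataclasses.dataclass(frozen=True)  # analogous to marking the C array const
class CoreRelocCase:
    name: str
    btf_src_file: str

# Shared, read-only test definitions reused by both test flavors.
CASES = (CoreRelocCase("type_id", "btf__core_reloc_type_id.o"),)

def run_btfgen_flavor(case, tmp_btf):
    # btfgen mutates only a local copy, so a later core_reloc run still
    # sees the original btf_src_file.
    local = dataclasses.replace(case, btf_src_file=tmp_btf)
    return local
```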
      
      Also, rename core_btfgen to core_reloc_btfgen, as it is indeed just
      a "flavor" of the core_reloc test, not an independent set of tests;
      this makes that relationship more obvious.
      
      The last problem that needed solving was that the location of bpftool
      differs between test_progs and test_progs' flavors (e.g.,
      test_progs-no_alu32). To keep it simple, create a symlink to bpftool
      both inside the selftests/bpf/ directory and the selftests/bpf/<flavor>
      subdirectory. That way, from inside the core_reloc test, the location
      of bpftool is just "./bpftool".
      
      v2->v3:
        - fix bpftool location relative to test_progs-no_alu32;
      v1->v2:
        - fix corruption of core_reloc_test_case.
      
      Fixes: 704c91e5 ("selftests/bpf: Test "bpftool gen min_core_btf")
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Yucong Sun <sunyucong@gmail.com>
      Link: https://lore.kernel.org/bpf/20220220042720.3336684-1-andrii@kernel.org
  7. 05 Feb 2022 (1 commit)
  8. 21 Jan 2022 (1 commit)
  9. 17 Dec 2021 (1 commit)
  10. 12 Dec 2021 (1 commit)
    • selftests/bpf: Add benchmark for bpf_strncmp() helper · 9c42652f
      Hou Tao committed
      Add a benchmark to compare the performance of a home-made strncmp()
      in a BPF program and the bpf_strncmp() helper. In summary, the performance
      win of bpf_strncmp() under x86-64 is greater than 18% when the compared
      string length is greater than 64, and is 179% when the length is 4095.
      Under arm64 the performance win is even bigger: 33% when the length
      is greater than 64 and 600% when the length is 4095.
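The "home-made" variant is just a byte-by-byte bounded compare; a Python sketch of its semantics (the benchmarked version is an unrolled loop in BPF C):

```python
def homemade_strncmp(s1, s2, n):
    # Compare at most n bytes, stopping at the first difference or NUL,
    # matching C strncmp() semantics. bpf_strncmp() moves this loop into
    # the kernel helper, which is where the speedup comes from.
    for i in range(n):
        c1 = s1[i] if i < len(s1) else 0
        c2 = s2[i] if i < len(s2) else 0
        if c1 != c2:
            return c1 - c2
        if c1 == 0:
            return 0
    return 0
```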
      
      The details are as follows:
      
      no-helper-X: use home-made strncmp() to compare X-sized string
      helper-Y: use bpf_strncmp() to compare Y-sized string
      
      Under x86-64:
      
      no-helper-1          3.504 ± 0.000M/s (drops 0.000 ± 0.000M/s)
      helper-1             3.347 ± 0.001M/s (drops 0.000 ± 0.000M/s)
      
      no-helper-8          3.357 ± 0.001M/s (drops 0.000 ± 0.000M/s)
      helper-8             3.307 ± 0.001M/s (drops 0.000 ± 0.000M/s)
      
      no-helper-32         3.064 ± 0.000M/s (drops 0.000 ± 0.000M/s)
      helper-32            3.253 ± 0.001M/s (drops 0.000 ± 0.000M/s)
      
      no-helper-64         2.563 ± 0.001M/s (drops 0.000 ± 0.000M/s)
      helper-64            3.040 ± 0.001M/s (drops 0.000 ± 0.000M/s)
      
      no-helper-128        1.975 ± 0.000M/s (drops 0.000 ± 0.000M/s)
      helper-128           2.641 ± 0.000M/s (drops 0.000 ± 0.000M/s)
      
      no-helper-512        0.759 ± 0.000M/s (drops 0.000 ± 0.000M/s)
      helper-512           1.574 ± 0.000M/s (drops 0.000 ± 0.000M/s)
      
      no-helper-2048       0.329 ± 0.000M/s (drops 0.000 ± 0.000M/s)
      helper-2048          0.602 ± 0.000M/s (drops 0.000 ± 0.000M/s)
      
      no-helper-4095       0.117 ± 0.000M/s (drops 0.000 ± 0.000M/s)
      helper-4095          0.327 ± 0.000M/s (drops 0.000 ± 0.000M/s)
      
      Under arm64:
      
      no-helper-1          2.806 ± 0.004M/s (drops 0.000 ± 0.000M/s)
      helper-1             2.819 ± 0.002M/s (drops 0.000 ± 0.000M/s)
      
      no-helper-8          2.797 ± 0.109M/s (drops 0.000 ± 0.000M/s)
      helper-8             2.786 ± 0.025M/s (drops 0.000 ± 0.000M/s)
      
      no-helper-32         2.399 ± 0.011M/s (drops 0.000 ± 0.000M/s)
      helper-32            2.703 ± 0.002M/s (drops 0.000 ± 0.000M/s)
      
      no-helper-64         2.020 ± 0.015M/s (drops 0.000 ± 0.000M/s)
      helper-64            2.702 ± 0.073M/s (drops 0.000 ± 0.000M/s)
      
      no-helper-128        1.604 ± 0.001M/s (drops 0.000 ± 0.000M/s)
      helper-128           2.516 ± 0.002M/s (drops 0.000 ± 0.000M/s)
      
      no-helper-512        0.699 ± 0.000M/s (drops 0.000 ± 0.000M/s)
      helper-512           2.106 ± 0.003M/s (drops 0.000 ± 0.000M/s)
      
      no-helper-2048       0.215 ± 0.000M/s (drops 0.000 ± 0.000M/s)
      helper-2048          1.223 ± 0.003M/s (drops 0.000 ± 0.000M/s)
      
      no-helper-4095       0.112 ± 0.000M/s (drops 0.000 ± 0.000M/s)
      helper-4095          0.796 ± 0.000M/s (drops 0.000 ± 0.000M/s)
      Signed-off-by: Hou Tao <houtao1@huawei.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20211210141652.877186-4-houtao1@huawei.com
  11. 03 Dec 2021 (4 commits)
  12. 01 Dec 2021 (1 commit)
    • selftest/bpf/benchs: Add bpf_loop benchmark · ec151037
      Joanne Koong committed
      Add benchmark to measure the throughput and latency of the bpf_loop
      call.
      
      Testing this on my dev machine on 1 thread, the data is as follows:
      
              nr_loops: 10
      bpf_loop - throughput: 198.519 ± 0.155 M ops/s, latency: 5.037 ns/op
      
              nr_loops: 100
      bpf_loop - throughput: 247.448 ± 0.305 M ops/s, latency: 4.041 ns/op
      
              nr_loops: 500
      bpf_loop - throughput: 260.839 ± 0.380 M ops/s, latency: 3.834 ns/op
      
              nr_loops: 1000
      bpf_loop - throughput: 262.806 ± 0.629 M ops/s, latency: 3.805 ns/op
      
              nr_loops: 5000
      bpf_loop - throughput: 264.211 ± 1.508 M ops/s, latency: 3.785 ns/op
      
              nr_loops: 10000
      bpf_loop - throughput: 265.366 ± 3.054 M ops/s, latency: 3.768 ns/op
      
              nr_loops: 50000
      bpf_loop - throughput: 235.986 ± 20.205 M ops/s, latency: 4.238 ns/op
      
              nr_loops: 100000
      bpf_loop - throughput: 264.482 ± 0.279 M ops/s, latency: 3.781 ns/op
      
              nr_loops: 500000
      bpf_loop - throughput: 309.773 ± 87.713 M ops/s, latency: 3.228 ns/op
      
              nr_loops: 1000000
      bpf_loop - throughput: 262.818 ± 4.143 M ops/s, latency: 3.805 ns/op
      
      From this data, we can see that the latency per loop decreases as the
      number of loops increases. On this particular machine, each loop had an
      overhead of about ~4 ns, and we were able to run ~250 million loops
      per second.
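The latency column is simply the reciprocal of the throughput; with throughput in M ops/s, latency in ns/op is 1000 / throughput:

```python
def latency_ns_per_op(throughput_mops):
    # 1 op / (X * 10^6 ops/s) = (1000 / X) ns per op
    return 1000.0 / throughput_mops
```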
      Signed-off-by: Joanne Koong <joannekoong@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20211130030622.4131246-5-joannekoong@fb.com
  13. 16 Nov 2021 (1 commit)
    • selftests/bpf: Add uprobe triggering overhead benchmarks · d41bc48b
      Andrii Nakryiko committed
      Add benchmark to measure overhead of uprobes and uretprobes. Also have
      a baseline (no uprobe attached) benchmark.
      
      On my dev machine, the baseline benchmark can trigger 130M user_target()
      invocations per second. When a uprobe is attached, this falls to just
      700K. With a uretprobe, we get down to 520K:
      
        $ sudo ./bench trig-uprobe-base -a
        Summary: hits  131.289 ± 2.872M/s
      
        # UPROBE
        $ sudo ./bench -a trig-uprobe-without-nop
        Summary: hits    0.729 ± 0.007M/s
      
        $ sudo ./bench -a trig-uprobe-with-nop
        Summary: hits    1.798 ± 0.017M/s
      
        # URETPROBE
        $ sudo ./bench -a trig-uretprobe-without-nop
        Summary: hits    0.508 ± 0.012M/s
      
        $ sudo ./bench -a trig-uretprobe-with-nop
        Summary: hits    0.883 ± 0.008M/s
      
      So there is almost a 2.5x performance difference between probing a nop
      vs a non-nop instruction for an entry uprobe, and a 1.7x difference for
      a uretprobe.
      
      This means the overhead is around 1.4 microseconds for a non-nop uprobe
      and 2 microseconds for a non-nop uretprobe.
      
      For nop variants, uprobe and uretprobe overhead is down to 0.556 and
      1.13 microseconds, respectively.
      
      For comparison, just doing a very low-overhead syscall (with no BPF
      programs attached anywhere) gives:
      
        $ sudo ./bench trig-base -a
        Summary: hits    4.830 ± 0.036M/s
      
      So uprobes are about 2.67x slower than pure context switch.
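The microsecond overheads quoted above follow from the hit rates: per-hit time is the reciprocal of the rate, and the probe's cost is that time minus the baseline's:

```python
def per_hit_overhead_us(hits_mps, baseline_mps):
    # Rates are in M hits/s, so 1/rate is microseconds per hit.
    return 1.0 / hits_mps - 1.0 / baseline_mps
```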
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/20211116013041.4072571-1-andrii@kernel.org
  14. 13 Nov 2021 (1 commit)
    • tools/runqslower: Fix cross-build · e4ac80ef
      Jean-Philippe Brucker committed
      Commit be79505c ("tools/runqslower: Install libbpf headers when
      building") uses the target libbpf to build the host bpftool, which
      doesn't work when cross-building:
      
        make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- -C tools/bpf/runqslower O=/tmp/runqslower
        ...
          LINK    /tmp/runqslower/bpftool/bpftool
        /usr/bin/ld: /tmp/runqslower/libbpf/libbpf.a(libbpf-in.o): Relocations in generic ELF (EM: 183)
        /usr/bin/ld: /tmp/runqslower/libbpf/libbpf.a: error adding symbols: file in wrong format
        collect2: error: ld returned 1 exit status
      
      When cross-building, the target architecture differs from the host. The
      bpftool used for building runqslower is executed on the host, and thus
      must use a different libbpf than that used for runqslower itself.
      Remove the LIBBPF_OUTPUT and LIBBPF_DESTDIR parameters, so the bpftool
      build makes its own library if necessary.
      
      In the selftests, pass the host bpftool, already a prerequisite for the
      runqslower recipe, as BPFTOOL_OUTPUT. The runqslower Makefile will use
      the bpftool that's already built for selftests instead of making a new
      one.
      
      Fixes: be79505c ("tools/runqslower: Install libbpf headers when building")
      Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Reviewed-by: Quentin Monnet <quentin@isovalent.com>
      Link: https://lore.kernel.org/bpf/20211112155128.565680-1-jean-philippe@linaro.org
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
  15. 12 Nov 2021 (1 commit)
  16. 08 Nov 2021 (5 commits)
  17. 02 Nov 2021 (1 commit)
  18. 29 Oct 2021 (2 commits)
    • selftests/bpf: Add weak/typeless ksym test for light skeleton · 087cba79
      Kumar Kartikeya Dwivedi committed
      Also, avoid using CO-RE features, as lskel doesn't support CO-RE, yet.
      Include both light and libbpf skeleton in same file to test both of them
      together.
      
      In c48e51c8 ("bpf: selftests: Add selftests for module kfunc support"),
      I added support for generating both lskel and libbpf skel for a BPF
      object, however the name parameter for bpftool caused collisions when
      both were included in the same file. This meant that every test needed
      a separate file to keep the libbpf and light skeletons apart, instead
      of using subtests.
      
      Change that by appending a "_lskel" suffix to the name for files using
      light skeleton, and convert all existing users.
      Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20211028063501.2239335-7-memxor@gmail.com
    • bpf/benchs: Add benchmark tests for bloom filter throughput + false positive · 57fd1c63
      Joanne Koong committed
      This patch adds benchmark tests for the throughput (for lookups + updates)
      and the false positive rate of bloom filter lookups, as well as some
      minor refactoring of the bash script for running the benchmarks.
      
      These benchmarks show that as the number of hash functions increases,
      the throughput and the false positive rate of the bloom filter decrease.
      From the benchmark data, the approximate average false-positive rates
      are roughly as follows:
      
      1 hash function = ~30%
      2 hash functions = ~15%
      3 hash functions = ~5%
      4 hash functions = ~2.5%
      5 hash functions = ~1%
      6 hash functions = ~0.5%
      7 hash functions = ~0.35%
      8 hash functions = ~0.15%
      9 hash functions = ~0.1%
      10 hash functions = ~0%
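The measured trend matches the textbook Bloom-filter approximation (1 - e^(-kn/m))^k for k hash functions, n entries, and m bits. A sketch of that formula (illustrative only, not the BPF map's implementation):

```python
import math

def bloom_fp_rate(k_hashes, n_entries, m_bits):
    # Approximate false-positive probability after inserting n entries
    # into an m-bit filter using k independent hash functions.
    k = k_hashes
    return (1.0 - math.exp(-k * n_entries / m_bits)) ** k
```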
      
      For reference data, the benchmarks run on one thread on a machine
      with one NUMA node for 1 to 5 hash functions for 8-byte and 64-byte
      values are as follows:
      
      1 hash function:
        50k entries
      	8-byte value
      	    Lookups - 51.1 M/s operations
      	    Updates - 33.6 M/s operations
      	    False positive rate: 24.15%
      	64-byte value
      	    Lookups - 15.7 M/s operations
      	    Updates - 15.1 M/s operations
      	    False positive rate: 24.2%
        100k entries
      	8-byte value
      	    Lookups - 51.0 M/s operations
      	    Updates - 33.4 M/s operations
      	    False positive rate: 24.04%
      	64-byte value
      	    Lookups - 15.6 M/s operations
      	    Updates - 14.6 M/s operations
      	    False positive rate: 24.06%
        500k entries
      	8-byte value
      	    Lookups - 50.5 M/s operations
      	    Updates - 33.1 M/s operations
      	    False positive rate: 27.45%
      	64-byte value
      	    Lookups - 15.6 M/s operations
      	    Updates - 14.2 M/s operations
      	    False positive rate: 27.42%
        1 mil entries
      	8-byte value
      	    Lookups - 49.7 M/s operations
      	    Updates - 32.9 M/s operations
      	    False positive rate: 27.45%
      	64-byte value
      	    Lookups - 15.4 M/s operations
      	    Updates - 13.7 M/s operations
      	    False positive rate: 27.58%
        2.5 mil entries
      	8-byte value
      	    Lookups - 47.2 M/s operations
      	    Updates - 31.8 M/s operations
      	    False positive rate: 30.94%
      	64-byte value
      	    Lookups - 15.3 M/s operations
      	    Updates - 13.2 M/s operations
      	    False positive rate: 30.95%
        5 mil entries
      	8-byte value
      	    Lookups - 41.1 M/s operations
      	    Updates - 28.1 M/s operations
      	    False positive rate: 31.01%
      	64-byte value
      	    Lookups - 13.3 M/s operations
      	    Updates - 11.4 M/s operations
      	    False positive rate: 30.98%
      
      2 hash functions:
        50k entries
      	8-byte value
      	    Lookups - 34.1 M/s operations
      	    Updates - 20.1 M/s operations
      	    False positive rate: 9.13%
      	64-byte value
      	    Lookups - 8.4 M/s operations
      	    Updates - 7.9 M/s operations
      	    False positive rate: 9.21%
        100k entries
      	8-byte value
      	    Lookups - 33.7 M/s operations
      	    Updates - 18.9 M/s operations
      	    False positive rate: 9.13%
      	64-byte value
      	    Lookups - 8.4 M/s operations
      	    Updates - 7.7 M/s operations
      	    False positive rate: 9.19%
        500k entries
      	8-byte value
      	    Lookups - 32.7 M/s operations
      	    Updates - 18.1 M/s operations
      	    False positive rate: 12.61%
      	64-byte value
      	    Lookups - 8.4 M/s operations
      	    Updates - 7.5 M/s operations
      	    False positive rate: 12.61%
        1 mil entries
      	8-byte value
      	    Lookups - 30.6 M/s operations
      	    Updates - 18.9 M/s operations
      	    False positive rate: 12.54%
      	64-byte value
      	    Lookups - 8.0 M/s operations
      	    Updates - 7.0 M/s operations
      	    False positive rate: 12.52%
        2.5 mil entries
      	8-byte value
      	    Lookups - 25.3 M/s operations
      	    Updates - 16.7 M/s operations
      	    False positive rate: 16.77%
      	64-byte value
      	    Lookups - 7.9 M/s operations
      	    Updates - 6.5 M/s operations
      	    False positive rate: 16.88%
        5 mil entries
      	8-byte value
      	    Lookups - 20.8 M/s operations
      	    Updates - 14.7 M/s operations
      	    False positive rate: 16.78%
      	64-byte value
      	    Lookups - 7.0 M/s operations
      	    Updates - 6.0 M/s operations
      	    False positive rate: 16.78%
      
      3 hash functions:
        50k entries
      	8-byte value
      	    Lookups - 25.1 M/s operations
      	    Updates - 14.6 M/s operations
      	    False positive rate: 7.65%
      	64-byte value
      	    Lookups - 5.8 M/s operations
      	    Updates - 5.5 M/s operations
      	    False positive rate: 7.58%
        100k entries
      	8-byte value
      	    Lookups - 24.7 M/s operations
      	    Updates - 14.1 M/s operations
      	    False positive rate: 7.71%
      	64-byte value
      	    Lookups - 5.8 M/s operations
      	    Updates - 5.3 M/s operations
      	    False positive rate: 7.62%
        500k entries
      	8-byte value
      	    Lookups - 22.9 M/s operations
      	    Updates - 13.9 M/s operations
      	    False positive rate: 2.62%
      	64-byte value
      	    Lookups - 5.6 M/s operations
      	    Updates - 4.8 M/s operations
      	    False positive rate: 2.7%
        1 mil entries
      	8-byte value
      	    Lookups - 19.8 M/s operations
      	    Updates - 12.6 M/s operations
      	    False positive rate: 2.60%
      	64-byte value
      	    Lookups - 5.3 M/s operations
      	    Updates - 4.4 M/s operations
      	    False positive rate: 2.69%
        2.5 mil entries
      	8-byte value
      	    Lookups - 16.2 M/s operations
      	    Updates - 10.7 M/s operations
      	    False positive rate: 4.49%
      	64-byte value
      	    Lookups - 4.9 M/s operations
      	    Updates - 4.1 M/s operations
      	    False positive rate: 4.41%
        5 mil entries
      	8-byte value
      	    Lookups - 18.8 M/s operations
      	    Updates - 9.2 M/s operations
      	    False positive rate: 4.45%
      	64-byte value
      	    Lookups - 5.2 M/s operations
      	    Updates - 3.9 M/s operations
      	    False positive rate: 4.54%
      
      4 hash functions:
        50k entries
      	8-byte value
      	    Lookups - 19.7 M/s operations
      	    Updates - 11.1 M/s operations
      	    False positive rate: 1.01%
      	64-byte value
      	    Lookups - 4.4 M/s operations
      	    Updates - 4.0 M/s operations
      	    False positive rate: 1.00%
        100k entries
      	8-byte value
      	    Lookups - 19.5 M/s operations
      	    Updates - 10.9 M/s operations
      	    False positive rate: 1.00%
      	64-byte value
      	    Lookups - 4.3 M/s operations
      	    Updates - 3.9 M/s operations
      	    False positive rate: 0.97%
        500k entries
      	8-byte value
      	    Lookups - 18.2 M/s operations
      	    Updates - 10.6 M/s operations
      	    False positive rate: 2.05%
      	64-byte value
      	    Lookups - 4.3 M/s operations
      	    Updates - 3.7 M/s operations
      	    False positive rate: 2.05%
        1 mil entries
      	8-byte value
      	    Lookups - 15.5 M/s operations
      	    Updates - 9.6 M/s operations
      	    False positive rate: 1.99%
      	64-byte value
      	    Lookups - 4.0 M/s operations
      	    Updates - 3.4 M/s operations
      	    False positive rate: 1.99%
        2.5 mil entries
      	8-byte value
      	    Lookups - 13.8 M/s operations
      	    Updates - 7.7 M/s operations
      	    False positive rate: 3.91%
      	64-byte value
      	    Lookups - 3.7 M/s operations
      	    Updates - 3.6 M/s operations
      	    False positive rate: 3.78%
        5 mil entries
      	8-byte value
      	    Lookups - 13.0 M/s operations
      	    Updates - 6.9 M/s operations
      	    False positive rate: 3.93%
      	64-byte value
      	    Lookups - 3.5 M/s operations
      	    Updates - 3.7 M/s operations
      	    False positive rate: 3.39%
      
      5 hash functions:
        50k entries
      	8-byte value
      	    Lookups - 16.4 M/s operations
      	    Updates - 9.1 M/s operations
      	    False positive rate: 0.78%
      	64-byte value
      	    Lookups - 3.5 M/s operations
      	    Updates - 3.2 M/s operations
      	    False positive rate: 0.77%
        100k entries
      	8-byte value
      	    Lookups - 16.3 M/s operations
      	    Updates - 9.0 M/s operations
      	    False positive rate: 0.79%
      	64-byte value
      	    Lookups - 3.5 M/s operations
      	    Updates - 3.2 M/s operations
      	    False positive rate: 0.78%
        500k entries
      	8-byte value
      	    Lookups - 15.1 M/s operations
      	    Updates - 8.8 M/s operations
      	    False positive rate: 1.82%
      	64-byte value
      	    Lookups - 3.4 M/s operations
      	    Updates - 3.0 M/s operations
      	    False positive rate: 1.78%
        1 mil entries
      	8-byte value
      	    Lookups - 13.2 M/s operations
      	    Updates - 7.8 M/s operations
      	    False positive rate: 1.81%
      	64-byte value
      	    Lookups - 3.2 M/s operations
      	    Updates - 2.8 M/s operations
      	    False positive rate: 1.80%
        2.5 mil entries
      	8-byte value
      	    Lookups - 10.5 M/s operations
      	    Updates - 5.9 M/s operations
      	    False positive rate: 0.29%
      	64-byte value
      	    Lookups - 3.2 M/s operations
      	    Updates - 2.4 M/s operations
      	    False positive rate: 0.28%
        5 mil entries
      	8-byte value
      	    Lookups - 9.6 M/s operations
      	    Updates - 5.7 M/s operations
      	    False positive rate: 0.30%
      	64-byte value
      	    Lookups - 3.2 M/s operations
      	    Updates - 2.7 M/s operations
      	    False positive rate: 0.30%
      Signed-off-by: Joanne Koong <joannekoong@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20211027234504.30744-5-joannekoong@fb.com
  19. 26 Oct 2021 (1 commit)
  20. 09 Oct 2021 (4 commits)
    • bpftool: Add install-bin target to install binary only · d7db0a4e
      Quentin Monnet committed
      With "make install", bpftool installs its binary and its bash completion
      file. Usually, this is what we want. But a few components in the kernel
      repository (namely, BPF iterators and selftests) also install bpftool
      locally before using it. In such a case, bash completion is not
      necessary and is just a useless build artifact.
      
      Let's add an "install-bin" target to bpftool, to offer a way to install
      the binary only.
      Signed-off-by: Quentin Monnet <quentin@isovalent.com>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20211007194438.34443-13-quentin@isovalent.com
    • tools/runqslower: Install libbpf headers when building · be79505c
      Quentin Monnet committed
      API headers from libbpf should not be accessed directly from the
      library's source directory. Instead, they should be exported with "make
      install_headers". Let's make sure that runqslower installs the
      headers properly when building.
      
      We use a libbpf_hdrs target to mark the logical dependency on libbpf's
      headers export for a number of object files, even though the headers
      should have been exported at this time (since bpftool needs them, and is
      required to generate the skeleton or the vmlinux.h).
      
      When descending from a parent Makefile, the specific output directories
      for building the library and exporting the headers are configurable with
      BPFOBJ_OUTPUT and BPF_DESTDIR, respectively. This is in addition to
      OUTPUT, on top of which those variables are constructed by default.
      
      Also adjust the Makefile for the BPF selftests. We pass a number of
      variables to the "make" invocation, because we want to point runqslower
      to the (target) libbpf shared with other tools, instead of building its
      own version. In addition, runqslower relies on (target) bpftool, and we
      also want to pass the proper variables to its Makefile so that bpftool
      itself reuses the same libbpf.
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20211007194438.34443-6-quentin@isovalent.com
      be79505c
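The wiring described above can be sketched roughly as follows (a simplified, hypothetical fragment in the spirit of runqslower's Makefile, not its exact contents; paths and variable defaults are illustrative):

```make
# Sketch only: consume libbpf through exported headers instead of
# reaching into the library's source tree with -I.
OUTPUT        ?= .output/
BPFOBJ_OUTPUT ?= $(OUTPUT)libbpf/
BPF_DESTDIR   ?= $(BPFOBJ_OUTPUT)
BPFOBJ        := $(BPFOBJ_OUTPUT)libbpf.a
CFLAGS        += -I$(BPF_DESTDIR)include

# Logical dependency marker: objects needing libbpf headers depend on it.
libbpf_hdrs: $(BPFOBJ)

$(BPFOBJ): | $(BPFOBJ_OUTPUT)
	$(MAKE) -C ../lib/bpf OUTPUT=$(BPFOBJ_OUTPUT) \
		DESTDIR=$(BPF_DESTDIR) prefix= install_headers

$(BPFOBJ_OUTPUT):
	mkdir -p $@
```

A parent Makefile can then override BPFOBJ_OUTPUT and BPF_DESTDIR to point all tools at one shared libbpf build.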
    • Q
      tools/resolve_btfids: Install libbpf headers when building · 1478994a
Committed by Quentin Monnet
      API headers from libbpf should not be accessed directly from the
      library's source directory. Instead, they should be exported with "make
      install_headers". Let's make sure that resolve_btfids installs the
      headers properly when building.
      
      When descending from a parent Makefile, the specific output directories
      for building the library and exporting the headers are configurable with
      LIBBPF_OUT and LIBBPF_DESTDIR, respectively. This is in addition to
      OUTPUT, on top of which those variables are constructed by default.
      
      Also adjust the Makefile for the BPF selftests in order to point to the
      (target) libbpf shared with other tools, instead of building a version
      specific to resolve_btfids. Remove libbpf's order-only dependencies on
      the include directories (they are created by libbpf and don't need to
      exist beforehand).
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20211007194438.34443-5-quentin@isovalent.com
      1478994a
    • Q
      bpftool: Install libbpf headers instead of including the dir · f012ade1
Committed by Quentin Monnet
      Bpftool relies on libbpf, therefore it relies on a number of headers
      from the library and must be linked against the library. The Makefile
      for bpftool exposes these objects by adding tools/lib as an include
directory ("-I$(srctree)/tools/lib"). This works, but it is not the
cleanest solution: it risks inadvertently including objects that libbpf
does not intend to expose.
      
      The headers needed to compile bpftool should in fact be "installed" from
      libbpf, with its "install_headers" Makefile target. In addition, there
      is one header which is internal to the library and not supposed to be
      used by external applications, but that bpftool uses anyway.
      
      Adjust the Makefile in order to install the header files properly before
      compiling bpftool. Also copy the additional internal header file
      (nlattr.h), but call it out explicitly. Build (and install headers) in a
      subdirectory under bpftool/ instead of tools/lib/bpf/. When descending
      from a parent Makefile, this is configurable by setting the OUTPUT,
      LIBBPF_OUTPUT and LIBBPF_DESTDIR variables.
      
      Also adjust the Makefile for BPF selftests, so as to reuse the (host)
      libbpf compiled earlier and to avoid compiling a separate version of the
      library just for bpftool.
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20211007194438.34443-4-quentin@isovalent.com
      f012ade1
21. 06 Oct 2021, 2 commits
22. 28 Sep 2021, 1 commit
    • J
      selftests, bpf: Fix makefile dependencies on libbpf · d888eaac
Committed by Jiri Benc
When building the BPF selftests with make -j, I'm randomly getting build failures
      such as this one:
      
        In file included from progs/bpf_flow.c:19:
        [...]/tools/testing/selftests/bpf/tools/include/bpf/bpf_helpers.h:11:10: fatal error: 'bpf_helper_defs.h' file not found
        #include "bpf_helper_defs.h"
                 ^~~~~~~~~~~~~~~~~~~
      
      The file that fails the build varies between runs but it's always in the
      progs/ subdir.
      
      The reason is a missing make dependency on libbpf for the .o files in
      progs/. There was a dependency before commit 3ac2e20f but that commit
      removed it to prevent unneeded rebuilds. However, that only works if libbpf
      has been built already; the 'wildcard' prerequisite does not trigger when
      there's no bpf_helper_defs.h generated yet.
      
Keep libbpf as an order-only prerequisite to satisfy both goals. It is
      always built before the progs/ objects but it does not trigger unnecessary
      rebuilds by itself.
      
      Fixes: 3ac2e20f ("selftests/bpf: BPF object files should depend only on libbpf headers")
Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/ee84ab66436fba05a197f952af23c98d90eb6243.1632758415.git.jbenc@redhat.com
      d888eaac
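The fix hinges on make's order-only prerequisite semantics (the part of a rule after `|`). A hypothetical, self-contained demo of the mechanism; "headers" stands in for the libbpf build step and "a.o" for the progs/ objects (`.RECIPEPREFIX` is used here only to avoid literal tabs, and needs GNU make 3.82+):

```shell
set -e
dir=$(mktemp -d)
{
  echo '.RECIPEPREFIX := >'
  echo '# "| headers" is order-only: it is built before a.o, but a newer'
  echo '# "headers" timestamp alone never triggers a rebuild of a.o.'
  echo 'a.o: a.c | headers'
  echo '>@echo CC a.o; touch a.o'
  echo 'headers:'
  echo '>@echo GEN headers; touch headers'
} > "$dir/Makefile"
touch "$dir/a.c"
make -C "$dir"        # first run: GEN headers, then CC a.o
touch "$dir/headers"  # make "headers" newer than a.o
make -C "$dir"        # a.o is still considered up to date
```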
23. 18 Sep 2021, 1 commit
24. 10 Sep 2021, 1 commit
    • Q
      libbpf: Add LIBBPF_DEPRECATED_SINCE macro for scheduling API deprecations · 0b46b755
Committed by Quentin Monnet
      Introduce a macro LIBBPF_DEPRECATED_SINCE(major, minor, message) to prepare
the deprecation of two API functions. The macro marks a function as deprecated
once libbpf's version reaches the major/minor values passed as arguments.
      
      As part of this change libbpf_version.h header is added with recorded major
      (LIBBPF_MAJOR_VERSION) and minor (LIBBPF_MINOR_VERSION) libbpf version macros.
      They are now part of libbpf public API and can be relied upon by user code.
libbpf_version.h is installed system-wide alongside the other libbpf public headers.
      
      Due to this new build-time auto-generated header, in-kernel applications
      relying on libbpf (resolve_btfids, bpftool, bpf_preload) are updated to
      include libbpf's output directory as part of a list of include search paths.
A better fix would be to use libbpf's make_install target to install public
API headers, but that cleanup is left as a future improvement. The build
changes were tested by building the kernel (with KBUILD_OUTPUT and O= specified
      explicitly), bpftool, libbpf, selftests/bpf, and resolve_btfids builds. No
      problems were detected.
      
      Note that because of the constraints of the C preprocessor we have to write
      a few lines of macro magic for each version used to prepare deprecation (0.6
      for now).
      
      Also, use LIBBPF_DEPRECATED_SINCE() to schedule deprecation of
      btf__get_from_id() and btf__load(), which are replaced by
      btf__load_from_kernel_by_id() and btf__load_into_kernel(), respectively,
      starting from future libbpf v0.6. This is part of libbpf 1.0 effort ([0]).
      
  [0] Closes: https://github.com/libbpf/libbpf/issues/278

Co-developed-by: Quentin Monnet <quentin@isovalent.com>
Co-developed-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/20210908213226.1871016-1-andrii@kernel.org
      0b46b755
25. 25 Aug 2021, 1 commit
26. 05 Aug 2021, 1 commit