- 05 March 2021, 17 commits
-
-
Committed by Joe Stringer

Commit 468e2f64 ("bpf: introduce BPF_PROG_QUERY command") originally introduced this, but there have been several additions since then. Unlike BPF_PROG_ATTACH, it appears that the sockmap progs cannot be queried so far.

Signed-off-by: Joe Stringer <joe@cilium.io>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210302171947.2268128-8-joe@cilium.io
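For context only, here is a minimal userspace sketch (not part of the commit) of issuing BPF_PROG_QUERY through libbpf's bpf_prog_query() wrapper; the cgroup path and buffer size are assumptions made for illustration:

  #include <stdio.h>
  #include <fcntl.h>
  #include <unistd.h>
  #include <bpf/bpf.h>

  int main(void)
  {
          __u32 i, prog_ids[64], prog_cnt = 64, attach_flags = 0;
          int cg_fd = open("/sys/fs/cgroup/unified", O_RDONLY); /* hypothetical path */

          if (cg_fd < 0)
                  return 1;
          /* BPF_PROG_QUERY: list programs attached to this cgroup for ingress. */
          if (bpf_prog_query(cg_fd, BPF_CGROUP_INET_INGRESS, 0,
                             &attach_flags, prog_ids, &prog_cnt)) {
                  perror("bpf_prog_query");
                  close(cg_fd);
                  return 1;
          }
          for (i = 0; i < prog_cnt; i++)
                  printf("attached prog id: %u\n", prog_ids[i]);
          close(cg_fd);
          return 0;
  }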
-
Committed by Joe Stringer

Based on a brief read of the corresponding source code.

Signed-off-by: Joe Stringer <joe@cilium.io>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210302171947.2268128-7-joe@cilium.io
-
Committed by Joe Stringer

Document the prog attach command in more detail, based on git commits:

* commit f4324551 ("bpf: add BPF_PROG_ATTACH and BPF_PROG_DETACH commands")
* commit 4f738adb ("bpf: create tcp_bpf_ulp allowing BPF to monitor socket TX/RX data")
* commit f4364dcf ("media: rc: introduce BPF_PROG_LIRC_MODE2")
* commit d58e468b ("flow_dissector: implements flow dissector BPF hook")

Signed-off-by: Joe Stringer <joe@cilium.io>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210302171947.2268128-6-joe@cilium.io
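As a hedged usage sketch (not taken from the commit), attaching an already-loaded cgroup program with libbpf's bpf_prog_attach() wrapper, which issues the BPF_PROG_ATTACH command, could look like this; prog_fd and cgroup_fd are assumed to come from bpf_prog_load() and open() elsewhere:

  #include <bpf/bpf.h>

  /* Attach a BPF_PROG_TYPE_CGROUP_SKB program to a cgroup's ingress hook.
   * BPF_F_ALLOW_MULTI lets several programs coexist on the same hook. */
  static int attach_ingress(int prog_fd, int cgroup_fd)
  {
          return bpf_prog_attach(prog_fd, cgroup_fd,
                                 BPF_CGROUP_INET_INGRESS,
                                 BPF_F_ALLOW_MULTI);
  }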
-
Committed by Joe Stringer

Commit b2197755 ("bpf: add support for persistent maps/progs") contains the original implementation and git logs, used as the reference for this documentation. Also pull in the filename restriction as documented in commit 6d8cb045 ("bpf: comment why dots in filenames under BPF virtual FS are not allowed").

Signed-off-by: Joe Stringer <joe@cilium.io>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210302171947.2268128-5-joe@cilium.io
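To illustrate object pinning under the BPF virtual filesystem (a hedged sketch; the pin path is hypothetical and not from the commit):

  #include <bpf/bpf.h>

  /* Pin a map or program fd under the BPF virtual filesystem so it
   * outlives the creating process. Filenames containing dots are
   * rejected by the kernel, per the restriction noted above. */
  static int pin_object(int fd)
  {
          return bpf_obj_pin(fd, "/sys/fs/bpf/my_map"); /* hypothetical path */
  }

  /* Another process can later re-obtain an fd for the pinned object. */
  static int get_pinned_object(void)
  {
          return bpf_obj_get("/sys/fs/bpf/my_map");
  }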
-
Committed by Joe Stringer

Document the meaning of the BPF_F_LOCK flag in the map lookup/update descriptions. Based on commit 96049f3a ("bpf: introduce BPF_F_LOCK flag").

Signed-off-by: Joe Stringer <joe@cilium.io>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210302171947.2268128-4-joe@cilium.io
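To make the flag concrete, here is a hedged userspace sketch (not from the commit) using libbpf's map wrappers; the value layout with a leading struct bpf_spin_lock is an assumption for illustration, and the map is expected to have been created with BTF describing that layout:

  #include <bpf/bpf.h>
  #include <linux/bpf.h>

  /* Assumed value layout: the spin lock must be embedded in the map
   * value for BPF_F_LOCK to apply. */
  struct val {
          struct bpf_spin_lock lock;
          __u64 counter;
  };

  static int locked_update(int map_fd, __u32 key, __u64 counter)
  {
          struct val v = { .counter = counter };

          /* The kernel takes v.lock while copying the value in. */
          return bpf_map_update_elem(map_fd, &key, &v, BPF_F_LOCK);
  }

  static int locked_lookup(int map_fd, __u32 key, struct val *v)
  {
          /* Copy the value out under the same per-element lock. */
          return bpf_map_lookup_elem_flags(map_fd, &key, v, BPF_F_LOCK);
  }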
-
Committed by Joe Stringer

Introduce high-level descriptions of the intent and return codes of the bpf() syscall commands. Subsequent patches may further flesh out the content to provide a more useful programming reference.

Signed-off-by: Joe Stringer <joe@cilium.io>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210302171947.2268128-3-joe@cilium.io
-
Committed by Joe Stringer

These descriptions are present in the man-pages project from the original submissions around 2015-2016. Import them so that they can be kept up to date as developers extend the bpf syscall commands.

These descriptions follow the pattern used by scripts/bpf_helpers_doc.py, so that we can take advantage of the parser to generate more up-to-date man page writing based upon these headers. Some minor wording adjustments were made to make the descriptions more consistent with the description/return format.

Signed-off-by: Joe Stringer <joe@cilium.io>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210302171947.2268128-2-joe@cilium.io
Co-authored-by: Alexei Starovoitov <ast@kernel.org>
Co-authored-by: Michael Kerrisk <mtk.manpages@gmail.com>
-
Committed by Alexei Starovoitov

Ilya Leoshkevich says:

====================

Some BPF programs compiled on s390 fail to load, because s390 arch-specific linux headers contain float and double types. Introduce support for such types by representing them using the new BTF_KIND_FLOAT. This series deals with libbpf, bpftool, the in-kernel BTF parser, as well as selftests and documentation.

There are also pahole and LLVM parts:

* https://github.com/iii-i/dwarves/commit/btf-kind-float-v2
* https://reviews.llvm.org/D83289

but they should go in after the libbpf part is integrated.

---

v0: https://lore.kernel.org/bpf/20210210030317.78820-1-iii@linux.ibm.com/

v0 -> v1: Per Andrii's suggestion, remove the unnecessary trailing u32.

v1: https://lore.kernel.org/bpf/20210216011216.3168-1-iii@linux.ibm.com/

v1 -> v2: John noticed that sanitization corrupts BTF, because the new and old sizes don't match. Per Yonghong's suggestion, use a modifier type (which has the same size as the float type) as a replacement. Per Yonghong's suggestion, add size and alignment checks to the kernel BTF parser.

v2: https://lore.kernel.org/bpf/20210219022543.20893-1-iii@linux.ibm.com/

v2 -> v3: Based on Yonghong's suggestions: use BTF_KIND_CONST instead of BTF_KIND_TYPEDEF and make sure that the C code generated from the sanitized BTF is well-formed; fix size calculation in tests and use NAME_TBD everywhere; limit allowed sizes to 2, 4, 8, 12 and 16 (this should also fix the m68k and nds32le builds).

v3: https://lore.kernel.org/bpf/20210220034959.27006-1-iii@linux.ibm.com/

v3 -> v4: More fixes for Yonghong's findings: fix the outdated comment in bpf_object__sanitize_btf() and add error handling there (I've decided to check uint_id and uchar_id too in order to simplify debugging); add a bpftool output example; use div64_u64_rem() instead of % in order to fix the linker error. Also fix the "invalid BTF_INFO" test (new commit, #4).

v4: https://lore.kernel.org/bpf/20210222214917.83629-1-iii@linux.ibm.com/

v4 -> v5: Fixes for Andrii's findings: use BTF_KIND_STRUCT instead of BTF_KIND_TYPEDEF for sanitization; check byte_sz in libbpf; move btf__add_float; remove relo support; add a dedup test (new commit, #7).

v5: https://lore.kernel.org/bpf/20210223231459.99664-1-iii@linux.ibm.com/

v5 -> v6: Fixes for further findings by Andrii: split the whitespace fix into a separate patch; add a 12-byte float to "float test #1, well-formed".

v6: https://lore.kernel.org/bpf/20210224234535.106970-1-iii@linux.ibm.com/

v6 -> v7: John suggested adding a comment explaining why sanitization does not preserve the type name, as well as what effect it has on running the code on older kernels. Yonghong asked to add a comment explaining why we are not checking the alignment very precisely in the kernel. John suggested adding a bpf_core_field_size test (commit #9).

Based on Alexei's feedback [1] I'm proceeding with the BTF_KIND_FLOAT approach.

[1] https://lore.kernel.org/bpf/CAADnVQKWPODWZ2RSJ5FJhfYpxkuV0cvSAL1O+FSr9oP1ercoBg@mail.gmail.com/
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Committed by Ilya Leoshkevich

Also document the expansion of the kind bitfield.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210226202256.116518-11-iii@linux.ibm.com
-
Committed by Ilya Leoshkevich

Check that floats don't interfere with struct deduplication, that they are not merged with other kinds, and that floats of different sizes are not merged with each other.

Suggested-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210226202256.116518-9-iii@linux.ibm.com
-
Committed by Ilya Leoshkevich

Test the good variants as well as the potentially malformed ones.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210226202256.116518-8-iii@linux.ibm.com
-
Committed by Ilya Leoshkevich

On the kernel side, introduce a new btf_kind_operations. It is similar to that of BTF_KIND_INT; however, it does not need to handle encodings and bit offsets. Do not implement printing, since the kernel does not know how to format floating-point values.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210226202256.116518-7-iii@linux.ibm.com
-
Committed by Ilya Leoshkevich

The bit being checked by this test is no longer reserved after introducing BTF_KIND_FLOAT, so use the next one instead.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210226202256.116518-6-iii@linux.ibm.com
-
Committed by Ilya Leoshkevich

Only dumping support needs to be adjusted; the code structure follows that of BTF_KIND_INT. Example plain and JSON outputs:

  [4] FLOAT 'float' size=4

  {"id":4,"kind":"FLOAT","name":"float","size":4}

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210226202256.116518-5-iii@linux.ibm.com
-
Committed by Ilya Leoshkevich

The logic follows that of BTF_KIND_INT most of the time. Sanitization replaces BTF_KIND_FLOATs with equally-sized empty BTF_KIND_STRUCTs on older kernels; for example, the following:

  [4] FLOAT 'float' size=4

becomes the following:

  [4] STRUCT '(anon)' size=4 vlen=0

With the dwarves patch [1] and this patch, the older kernels, which were failing with floating-point-related errors, will now start working correctly.

[1] https://github.com/iii-i/dwarves/commit/btf-kind-float-v2

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210226202256.116518-4-iii@linux.ibm.com
-
Committed by Ilya Leoshkevich

Remove trailing space.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210226202256.116518-3-iii@linux.ibm.com
-
Committed by Ilya Leoshkevich

Add a new kind value and expand the kind bitfield.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210226202256.116518-2-iii@linux.ibm.com
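As a brief illustration beyond the commit text, the new kind can be emitted from userspace via the btf__add_float() API added in this series; the type names and sizes below are arbitrary examples:

  #include <bpf/btf.h>

  /* Build a small in-memory BTF containing float types of several sizes.
   * Error handling is elided for brevity; btf__add_float() returns the
   * new type ID on success or a negative error. Allowed sizes are
   * 2, 4, 8, 12 and 16 bytes. */
  static struct btf *build_float_btf(void)
  {
          struct btf *btf = btf__new_empty();

          btf__add_float(btf, "float", 4);
          btf__add_float(btf, "double", 8);
          btf__add_float(btf, "long double", 16);
          return btf;
  }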
-
- 04 March 2021, 2 commits
-
-
Committed by Yonghong Song

The original bcc pull request https://github.com/iovisor/bcc/pull/3270 exposed a verifier failure with Clang 12/13, while Clang 4 works fine. Further investigation exposed two issues:

Issue 1: LLVM may generate code which uses a less refined value. The issue is fixed in LLVM patch https://reviews.llvm.org/D97479.

Issue 2: Spills with initial value 0 are marked as precise, which makes later state pruning less effective. This is my rough initial analysis, and further investigation is needed to find out how to improve verifier pruning in such cases.

With the above LLVM patch, for the new loop6.c test, which has a smaller loop bound compared to the original test, I got:

  $ test_progs -s -n 10/16
  ...
  stack depth 64
  processed 390735 insns (limit 1000000) max_states_per_insn 87 total_states 8658 peak_states 964 mark_read 6
  #10/16 loop6.o:OK

Using the original loop bound, i.e., commenting out "#define WORKAROUND", I got:

  $ test_progs -s -n 10/16
  ...
  BPF program is too large. Processed 1000001 insn
  stack depth 64
  processed 1000001 insns (limit 1000000) max_states_per_insn 91 total_states 23176 peak_states 5069 mark_read 6
  ...
  #10/16 loop6.o:FAIL

The purpose of this patch is to provide a regression test for the above LLVM fix and also a test case for further analysis of the verifier pruning issue.

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Zhenwei Pi <pizhenwei@bytedance.com>
Link: https://lore.kernel.org/bpf/20210226223810.236472-1-yhs@fb.com
-
Committed by Cong Wang

This should fix the following warning:

  include/linux/skbuff.h:932: warning: Function parameter or member '_sk_redir' not described in 'sk_buff'

Reported-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Cong Wang <cong.wang@bytedance.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Lorenz Bauer <lmb@cloudflare.com>
Link: https://lore.kernel.org/bpf/20210301184805.8174-1-xiyou.wangcong@gmail.com
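For background, scripts/kernel-doc emits this class of warning whenever a documented struct has a member without an @name line; a generic, hedged illustration (not the actual patch) of the expected markup:

  /**
   * struct example - a struct documented in kernel-doc style
   * @_sk_redir: each member needs an @name line like this one, otherwise
   *             kernel-doc emits the warning quoted above
   */
  struct example {
          unsigned long _sk_redir;
  };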
-
- 03 March 2021, 1 commit
-
-
Committed by Andrii Nakryiko

Just as was done for bpftool and selftests in ec23eb70 ("tools/bpftool: Allow substituting custom vmlinux.h for the build") and ca4db638 ("selftests/bpf: Allow substituting custom vmlinux.h for selftests build"), allow providing a pre-generated vmlinux.h for the runqslower build.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210303004010.653954-1-andrii@kernel.org
-
- 27 February 2021, 20 commits
-
-
Committed by Ian Denhardt

... so callers can correctly detect failure.

Signed-off-by: Ian Denhardt <ian@zenhack.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/b0bea780bc292f29e7b389dd062f20adc2a2d634.1614201868.git.ian@zenhack.net
-
Committed by Ian Denhardt

Per the discussion at [0], this was originally introduced as a warning due to concerns about breaking existing code, but a hard error probably makes more sense, especially given that the concerns about breakage were only speculation.

[0] https://lore.kernel.org/bpf/c964892195a6b91d20a67691448567ef528ffa6d.camel@linux.ibm.com/T/#t

Signed-off-by: Ian Denhardt <ian@zenhack.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
Link: https://lore.kernel.org/bpf/a6b6c7516f5d559049d669968e953b4a8d7adea3.1614201868.git.ian@zenhack.net
-
Committed by Alexei Starovoitov

Yonghong Song says:

====================

This patch set introduces the bpf_for_each_map_elem() helper. The helper permits a bpf program to iterate through all elements of a particular map.

The work was originally inspired by an internal discussion where firewall rules are kept in a map and a bpf prog wants to check packet 5-tuples against all rules in the map. A bounded loop can be used, but it has a few drawbacks. As the loop iteration count goes up, verification time goes up too. For really large maps, verification may fail. A helper which abstracts out the loop itself will not have the verification time issue.

A recent discussion in [1] involves iterating over all hash map elements in a bpf program. Currently, iterating over all hashmap elements in a bpf program is not easy if the key space is really big. Having a helper to abstract out the loop itself is even more meaningful.

The proposed helper signature looks like:

  long bpf_for_each_map_elem(map, callback_fn, callback_ctx, flags)

where callback_fn is a static function and callback_ctx is a piece of data allocated on the caller stack which can be accessed by the callback_fn. The callback_fn signature might be different for different maps. For example, for hash/array maps, the signature is

  long callback_fn(map, key, val, callback_ctx)

In the rest of the series, patches 1/2/3/4 do some refactoring. Patch 5 implements core kernel support for the helper. Patches 6 and 7 add hashmap and arraymap support. Patches 8/9 add libbpf support. Patch 10 adds bpftool support. Patches 11 and 12 add selftests for hashmap and arraymap.

[1]: https://lore.kernel.org/bpf/20210122205415.113822-1-xiyou.wangcong@gmail.com/

Changelogs:

v4 -> v5:
- rebase on top of bpf-next.

v3 -> v4:
- better refactoring of check_func_call(), calculate subprogno outside of the __check_func_call() helper. (Andrii)
- better documentation (like the list of supported maps and their callback signatures) in the uapi header. (Andrii)
- implement and use ASSERT_LT in selftests. (Andrii)
- a few other minor changes.

v2 -> v3:
- add comments in retrieve_ptr_limit(), which is in sanitize_ptr_alu(), to clarify the code is not executed for PTR_TO_MAP_KEY handling, but the code is manually tested. (Alexei)
- require BTF for the callback function. (Alexei)
- simplify hashmap/arraymap callback return handling as the return value [0, 1] has been enforced by the verifier. (Alexei)
- also mark a global subprog (if used in ld_imm64) as RELO_SUBPROG_ADDR. (Andrii)
- handle the condition to mark RELO_SUBPROG_ADDR properly. (Andrii)
- make bpftool subprog insn offset dumping consistent with pcrel calls. (Andrii)

v1 -> v2:
- setup the callee frame in check_helper_call() and then proceed to verify the helper return value as normal (Alexei)
- use meta data to keep track of map/func pointer to avoid hard-coding the register number (Alexei)
- verify the callback_fn return value range [0, 1]. (Alexei)
- add migrate_{disable, enable} to ensure the percpu value is the one the bpf program expects to see. (Alexei)
- change the bpf_for_each_map_elem() return value to the number of iterated elements. (Andrii)
- change the libbpf pseudo_func relo name to RELO_SUBPROG_ADDR and use more rigid checking for the relocation. (Andrii)
- better format to print out the subprog address with bpftool. (Andrii)
- use bpf_prog_test_run to trigger the bpf run, instead of bpf_iter. (Andrii)
- other misc changes.
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Committed by Yonghong Song

A test is added for arraymap and percpu arraymap. The test also exercises the early return for the helper which does not traverse all elements.

  $ ./test_progs -n 45
  #45/1 hash_map:OK
  #45/2 array_map:OK
  #45 for_each:OK
  Summary: 1/2 PASSED, 0 SKIPPED, 0 FAILED

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210226204934.3885756-1-yhs@fb.com
-
Committed by Yonghong Song

A test case is added for hashmap and percpu hashmap. The test also exercises nested bpf_for_each_map_elem() calls like this:

  bpf_prog:
    bpf_for_each_map_elem(func1)
  func1:
    bpf_for_each_map_elem(func2)
  func2:

  $ ./test_progs -n 45
  #45/1 hash_map:OK
  #45 for_each:OK
  Summary: 1/1 PASSED, 0 SKIPPED, 0 FAILED

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210226204933.3885657-1-yhs@fb.com
-
Committed by Yonghong Song

With the later hashmap example, the bpftool xlated output may look like:

  int dump_task(struct bpf_iter__task * ctx):
  ; struct task_struct *task = ctx->task;
     0: (79) r2 = *(u64 *)(r1 +8)
  ; if (task == (void *)0 || called > 0)
  ...
    19: (18) r2 = subprog[+17]
    30: (18) r2 = subprog[+25]
  ...
    36: (95) exit
  __u64 check_hash_elem(struct bpf_map * map, __u32 * key, __u64 * val, struct callback_ctx * data):
  ; struct bpf_iter__task *ctx = data->ctx;
    37: (79) r5 = *(u64 *)(r4 +0)
  ...
    55: (95) exit
  __u64 check_percpu_elem(struct bpf_map * map, __u32 * key, __u64 * val, void * unused):
  ; check_percpu_elem(struct bpf_map *map, __u32 *key, __u64 *val, void *unused)
    56: (bf) r6 = r3
  ...
    83: (18) r2 = subprog[-47]

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210226204931.3885458-1-yhs@fb.com
-
Committed by Yonghong Song

A new relocation, RELO_SUBPROG_ADDR, is added to capture subprog addresses loaded with ld_imm64 insns. Such ld_imm64 insns are marked with BPF_PSEUDO_FUNC and will be passed to the kernel. For the bpf_for_each_map_elem() case, the kernel will check that the to-be-used subprog address refers to a static function and replace it with the proper actual jited func address.

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210226204930.3885367-1-yhs@fb.com
-
Committed by Yonghong Song

Move the function is_ldimm64() close to the beginning of libbpf.c, so it can be reused by later code and by the next patch as well. There is no functionality change.

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210226204929.3885295-1-yhs@fb.com
-
Committed by Yonghong Song

This patch adds support for arraymap and percpu arraymap.

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210226204928.3885192-1-yhs@fb.com
-
Committed by Yonghong Song

This patch adds support for hashmap, percpu hashmap, LRU hashmap and percpu LRU hashmap.

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210226204927.3885020-1-yhs@fb.com
-
Committed by Yonghong Song

The bpf_for_each_map_elem() helper is introduced, which iterates over all map elements with a callback function. The helper signature looks like

  long bpf_for_each_map_elem(map, callback_fn, callback_ctx, flags)

and for each map element, the callback_fn will be called. For example, for a hashmap, the callback signature may look like

  long callback_fn(map, key, val, callback_ctx)

There are two known use cases for this. One is from upstream ([1]) where a for_each_map_elem helper may help implement a timeout mechanism in a more generic way. Another is from our internal discussion of a firewall use case where a map contains all the rules. The packet data can be compared against all these rules to decide whether to allow or deny the packet.

For array maps, users can already use a bounded loop to traverse elements; using this helper avoids the bounded loop. For other types of maps (e.g., hash maps) where a bounded loop is hard or impossible to use, this helper provides a convenient way to operate on all elements.

For callback_fn, besides the map and the map element, a callback_ctx, allocated on the caller stack, is also passed to the callback function. This callback_ctx argument can provide additional input and allow writing to the caller stack for output.

If the callback_fn returns 0, the helper will iterate to the next element if available. If the callback_fn returns 1, the helper will stop iterating and return to the bpf program. Other return values are not used for now.

Currently, this helper is only available with the jit. It is possible to make it work with the interpreter with some effort, but I leave it as future work.

[1]: https://lore.kernel.org/bpf/20210122205415.113822-1-xiyou.wangcong@gmail.com/

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210226204925.3884923-1-yhs@fb.com
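To show the helper and callback contract end to end, here is a hedged BPF-side sketch (not taken from the selftests; the map layout and section name are assumptions made for illustration):

  // SPDX-License-Identifier: GPL-2.0
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  struct {
          __uint(type, BPF_MAP_TYPE_HASH);
          __uint(max_entries, 64);
          __type(key, __u32);
          __type(value, __u64);
  } vals SEC(".maps");

  /* callback_ctx lives on the caller's stack and carries in/out data. */
  struct callback_ctx {
          __u64 sum;
  };

  /* Must be a static function; returning 0 continues, 1 stops early. */
  static __u64 sum_elem(struct bpf_map *map, __u32 *key, __u64 *val,
                        struct callback_ctx *data)
  {
          data->sum += *val;
          return 0;
  }

  SEC("classifier")
  int sum_all(struct __sk_buff *skb)
  {
          struct callback_ctx data = { .sum = 0 };

          /* Returns the number of elements iterated or a negative error. */
          bpf_for_each_map_elem(&vals, sum_elem, &data, 0);
          return 0;
  }

  char _license[] SEC("license") = "GPL";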
-
Committed by Yonghong Song

Currently, the verifier function add_subprog() returns 0 for success and a negative value for failure. Change the return value to be the subprog number on success. This functionality will be used in the next patch to save a call to find_subprog().

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210226204924.3884848-1-yhs@fb.com
-
Committed by Yonghong Song

The later proposed bpf_for_each_map_elem() helper has a callback function as one of its arguments. This patch refactors check_func_call() to permit a callback function which sets the callee state. Different callback functions may have different callee states. There is no functionality change in this patch.

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210226204923.3884627-1-yhs@fb.com
-
Committed by Yonghong Song

Factor out the function verbose_invalid_scalar() to verbose-print when a scalar is not in a tnum range. There is no functionality change; the function will be used by a later patch which introduces bpf_for_each_map_elem().

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210226204922.3884375-1-yhs@fb.com
-
Committed by Yonghong Song

During verifier check_cfg(), all instructions are visited to ensure the verifier can handle the program's control flow. This patch factors out the function visit_func_call_insn() so it can be reused in a later patch to visit callback function calls. There is no functionality change in this patch.

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210226204920.3884136-1-yhs@fb.com
-
Committed by Ilya Leoshkevich

Building selftests in a separate directory like this:

  make O="$BUILD" -C tools/testing/selftests/bpf

and then running:

  cd "$BUILD" && ./test_progs -t btf

causes all the non-flavored btf_dump_test_case_*.c tests to fail, because these files are not copied to where test_progs expects to find them.

Fix this by not skipping EXT-COPY when the original $(OUTPUT) is not empty (lib.mk sets it to $(shell pwd) in that case) and by using rsync instead of cp: cp fails because e.g. urandom_read is being copied into itself, and rsync simply skips such cases. rsync is already used by kselftests and is therefore not a new dependency.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210224111445.102342-1-iii@linux.ibm.com
-
Committed by KP Singh

When vmtest.sh ran a command in a VM, it did not record or propagate the error code of that command. This made the script less "script-able". The script now saves the error code of the said command in a file in the VM, copies the file back to the host and (when available) uses this error code instead of its own.

Signed-off-by: KP Singh <kpsingh@google.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210225161947.1778590-1-kpsingh@kernel.org
-
Committed by Alexei Starovoitov

Cong Wang says:

====================

From: Cong Wang <cong.wang@bytedance.com>

This patchset is the first series of patches separated out from the original large patchset, to make reviews easier. This patchset does not add any new feature or change any functionality but merely cleans up the existing sockmap and skmsg code and refactors it, to prepare for the patches that follow. This passed all BPF selftests.

To see the big picture, the original whole patchset is available on github:

  https://github.com/congwang/linux/tree/sockmap

and this patchset is also available on github:

  https://github.com/congwang/linux/tree/sockmap1

---

v7:
  add 1 trivial cleanup patch
  define a mask for sk_redir
  fix CONFIG_BPF_SYSCALL in include/net/udp.h
  make sk_psock_done_strp() static
  move skb_bpf_redirect_clear() to sk_psock_backlog()

v6:
  fix !CONFIG_INET case

v5:
  improve CONFIG_BPF_SYSCALL dependency
  add 3 trivial cleanup patches

v4:
  reuse skb dst instead of skb ext
  fix another Kconfig error

v3:
  fix a few Kconfig compile errors
  remove an unused variable
  add a comment for bpf_convert_data_end_access()

v2:
  split the original patchset
  compute data_end with bpf_convert_data_end_access()
  get rid of psock->bpf_running
  reduce the scope of CONFIG_BPF_STREAM_PARSER
  do not add CONFIG_BPF_SOCK_MAP
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Committed by Cong Wang

It is not defined or used anywhere.

Signed-off-by: Cong Wang <cong.wang@bytedance.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Jakub Sitnicki <jakub@cloudflare.com>
Link: https://lore.kernel.org/bpf/20210223184934.6054-10-xiyou.wangcong@gmail.com
-
Committed by Cong Wang

It is now nearly identical to bpf_prog_run_pin_on_cpu() and it has an unused parameter 'psock', so we can just get rid of it and call bpf_prog_run_pin_on_cpu() directly.

Signed-off-by: Cong Wang <cong.wang@bytedance.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Jakub Sitnicki <jakub@cloudflare.com>
Link: https://lore.kernel.org/bpf/20210223184934.6054-9-xiyou.wangcong@gmail.com
-