- 03 June 2021, 4 commits
-
Committed by Andrii Nakryiko

When the xdp_redirect_multi test binary was added recently, it wasn't added to .gitignore. Fix that.

Fixes: d2329247 ("selftests/bpf: Add xdp_redirect_multi test")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210603004026.2698513-5-andrii@kernel.org
-
Committed by Andrii Nakryiko

Light skeleton code assumes the skel_internal.h header to be installed system-wide by the libbpf package. Make sure it is actually installed.

Fixes: 67234743 ("libbpf: Generate loader program out of BPF ELF file.")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210603004026.2698513-4-andrii@kernel.org
-
Committed by Andrii Nakryiko

As we gradually get more headers that have to be installed, it's quite annoying to copy/paste long $(call) commands. So extract that logic and do a simple $(foreach) over the list of headers.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210603004026.2698513-3-andrii@kernel.org
-
Committed by Andrii Nakryiko

The official libbpf 0.4 release doesn't include three APIs that were tentatively put into the 0.4 section. Fix libbpf.map and move these three APIs:
- bpf_map__initial_value;
- bpf_map_lookup_and_delete_elem_flags;
- bpf_object__gen_loader.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210603004026.2698513-2-andrii@kernel.org
-
- 01 June 2021, 1 commit
-
Committed by Harishankar Vishwanathan

This patch introduces a new algorithm for multiplication of tristate numbers (tnums) that is provably sound. It is faster and more precise when compared to the existing method.

Like the existing method, this new algorithm follows the long multiplication algorithm. The idea is to generate partial products by multiplying each bit in the multiplier (tnum a) with the multiplicand (tnum b), and adding the partial products after appropriately bit-shifting them. The new algorithm, however, uses just a single loop over the bits of the multiplier (tnum a) and accumulates only the uncertain components of the multiplicand (tnum b) into a mask-only tnum. The following paper explains the algorithm in more detail: https://arxiv.org/abs/2105.05398.

A natural way to construct the tnum product is by performing a tnum addition on all the partial products. This algorithm presents another method of doing this: decompose each partial product into two tnums, consisting of the values and the masks separately. The mask-sum is accumulated within the loop in acc_m. The value-sum tnum is generated using a.value * b.value. The tnum constructed by tnum addition of the value-sum and the mask-sum contains all possible summations of concrete values drawn from the partial product tnums pairwise. We prove this result in the paper.

Our evaluations show that the new algorithm is overall more precise (producing tnums with fewer uncertain components) than the existing method. As an illustrative example, consider the input tnums A and B. The numbers in parentheses correspond to (value; mask).

A                = 000000x1 (1; 2)
B                = 0010011x (38; 1)
A * B (existing) = xxxxxxxx (0; 255)
A * B (new)      = 0x1xxxxx (32; 95)

Importantly, we present a proof of soundness of the new algorithm in the aforementioned paper. Additionally, we show that this new algorithm is empirically faster than the existing method.

Co-developed-by: Matan Shachnai <m.shachnai@rutgers.edu>
Co-developed-by: Srinivas Narayana <srinivas.narayana@rutgers.edu>
Co-developed-by: Santosh Nagarakatte <santosh.nagarakatte@rutgers.edu>
Signed-off-by: Matan Shachnai <m.shachnai@rutgers.edu>
Signed-off-by: Srinivas Narayana <srinivas.narayana@rutgers.edu>
Signed-off-by: Santosh Nagarakatte <santosh.nagarakatte@rutgers.edu>
Signed-off-by: Harishankar Vishwanathan <harishankar.vishwanathan@rutgers.edu>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Edward Cree <ecree.xilinx@gmail.com>
Link: https://arxiv.org/abs/2105.05398
Link: https://lore.kernel.org/bpf/20210531020157.7386-1-harishankar.vishwanathan@rutgers.edu
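For illustration, a minimal C sketch of the single-loop scheme described above, reusing the kernel's existing tnum helpers (struct tnum, tnum_add(), tnum_lshift(), tnum_rshift() from include/linux/tnum.h); this is a sketch of the approach, and the in-tree tnum_mul() may differ in details:

#include <linux/types.h>
#include <linux/tnum.h>

#define TNUM(_v, _m)	((struct tnum){.value = _v, .mask = _m})

static struct tnum tnum_mul_sketch(struct tnum a, struct tnum b)
{
	u64 acc_v = a.value * b.value;	/* value-sum: product of the certain parts */
	struct tnum acc_m = TNUM(0, 0);	/* mask-sum, accumulated over the loop */

	while (a.value || a.mask) {
		if (a.value & 1)
			/* certain 1 bit: the partial product contributes b's mask */
			acc_m = tnum_add(acc_m, TNUM(0, b.mask));
		else if (a.mask & 1)
			/* uncertain bit: the partial product is either 0 or b,
			 * so both b's value and mask bits become uncertain
			 */
			acc_m = tnum_add(acc_m, TNUM(0, b.value | b.mask));
		/* a certain 0 bit contributes nothing */
		a = tnum_rshift(a, 1);
		b = tnum_lshift(b, 1);
	}
	return tnum_add(TNUM(acc_v, 0), acc_m);
}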
-
- 29 May 2021, 2 commits
-
Committed by Hangbin Liu

As Colin pointed out, the first drops assignment after declaration will be overwritten by the second drops assignment before being used, which makes it useless. Since the drops variable is only used once, just remove it and use "cnt - sent" in trace_xdp_devmap_xmit().

Fixes: cb261b59 ("bpf: Run devmap xdp_prog on flush instead of bulk enqueue")
Reported-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20210528024356.24333-1-liuhangbin@gmail.com
-
Committed by Yonghong Song

LLVM upstream commit https://reviews.llvm.org/D102712 made some changes to BPF relocations to make them friendly to the LLVM linker lld. The scope of the existing relocations R_BPF_64_{64,32} is narrowed and new relocations R_BPF_64_{ABS32,ABS64,NODYLD32} are introduced. Let us add some documentation about LLVM BPF relocations so people can understand how to resolve them properly in their respective tools.

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20210526152457.335210-1-yhs@fb.com
-
- 27 May 2021, 1 commit
-
Committed by Florent Revest

These macros are convenient wrappers around the bpf_seq_printf and bpf_snprintf helpers. They are currently provided by bpf_tracing.h, which targets low-level tracing primitives; bpf_helpers.h is a better fit. The __bpf_narg and __bpf_apply macros are needed in both files and are provided twice. __bpf_empty isn't used anywhere and is removed from bpf_tracing.h.

Reported-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Florent Revest <revest@chromium.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210526164643.2881368-1-revest@chromium.org
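As a usage illustration, a minimal BPF iterator program that formats its output with BPF_SEQ_PRINTF while including only bpf_helpers.h; this is a sketch (not part of the patch) and assumes a vmlinux.h generated with bpftool for the iterator context types:

// SPDX-License-Identifier: GPL-2.0
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

SEC("iter/task")
int dump_task(struct bpf_iter__task *ctx)
{
	struct seq_file *seq = ctx->meta->seq;
	struct task_struct *task = ctx->task;

	if (!task)
		return 0;

	/* BPF_SEQ_PRINTF wraps bpf_seq_printf() and hides the
	 * argument array the raw helper expects.
	 */
	BPF_SEQ_PRINTF(seq, "%8d %16s\n", task->pid, task->comm);
	return 0;
}

char _license[] SEC("license") = "GPL";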
-
- 26 May 2021, 11 commits
-
Committed by Daniel Borkmann

Hangbin Liu says:

====================
This patchset is a new implementation for XDP multicast support based on my previous 2-maps implementation [1]. The reason is that Daniel thinks the exclude-map implementation is missing proper bond support in the XDP context, and there is a plan to add native XDP bonding support. Adding an exclude map to the helper also increases the complexity of the verifier and has drawbacks on performance.

The new implementation just adds two new flags, BPF_F_BROADCAST and BPF_F_EXCLUDE_INGRESS, to extend xdp_redirect_map for broadcast support. With BPF_F_BROADCAST the packet will be broadcast to all the interfaces in the map. With BPF_F_EXCLUDE_INGRESS the ingress interface will be excluded when broadcasting.

The v11 link is here [2].

[1] https://lore.kernel.org/bpf/20210223125809.1376577-1-liuhangbin@gmail.com
[2] https://lore.kernel.org/bpf/20210513070447.1878448-1-liuhangbin@gmail.com

v12: As Daniel pointed out:
     a) define flag_mask and action_mask in __bpf_xdp_redirect_map() as const u64
     b) remove BPF_F_ACTION_MASK from the uapi header
     c) remove EXPORT_SYMBOL_GPL for xdpf_clone()
v11: a) Use unlikely() when checking if this is for broadcast redirecting.
     b) Fix a tracepoint NULL pointer issue Jesper found.
     c) Remove BPF_F_REDIR_MASK and just use OR'ed flags to make it clearer to the reader what flags we are using.
     d) Add the performance numbers with multiple veth interfaces in the patch 01 description.
     e) Remove some sleeps to reduce the testing time in patch 04. Restructure the test and make clear what flags we are testing.
v10: Use READ/WRITE_ONCE when reading/writing the map instead of xchg().
v9: Update the patch 01 commit description.
v8: Use hlist_for_each_entry_rcu() when looping over the devmap hash objects.
v7: No need to free xdpf in dev_map_enqueue_clone() if xdpf_clone failed.
v6: Fix an skb leak in the error path for generic XDP.
v5: Just walk the map directly to get interfaces, as get_next_key() of the devmap hash may restart looping from the first key if a device gets removed. After the update the performance has improved 10% compared with v4.
v4: Fix a flags-never-cleared issue in patch 02. Update the selftest to cover this.
v3: Rebase the code on latest bpf-next.
v2: Fix a flag renaming issue in patch 02.
====================

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Committed by Hangbin Liu

Add a bpf selftest for the new helper xdp_redirect_map_multi(). In this test there are 3 forward groups and 1 exclude group. The test will redirect each interface's packets to all the interfaces in the forward group, and exclude the interface in the exclude map. Two map types (DEVMAP, DEVMAP_HASH) and two XDP modes (generic, driver) will be tested. The XDP egress program will also be tested by setting the packet's source MAC to the egress interface's MAC address. For more test details, see the test script.

Here is the test result.

]# time ./test_xdp_redirect_multi.sh
Pass: xdpgeneric arp(F_BROADCAST) ns1-1
Pass: xdpgeneric arp(F_BROADCAST) ns1-2
Pass: xdpgeneric arp(F_BROADCAST) ns1-3
Pass: xdpgeneric IPv4 (F_BROADCAST|F_EXCLUDE_INGRESS) ns1-1
Pass: xdpgeneric IPv4 (F_BROADCAST|F_EXCLUDE_INGRESS) ns1-2
Pass: xdpgeneric IPv4 (F_BROADCAST|F_EXCLUDE_INGRESS) ns1-3
Pass: xdpgeneric IPv6 (no flags) ns1-1
Pass: xdpgeneric IPv6 (no flags) ns1-2
Pass: xdpdrv arp(F_BROADCAST) ns1-1
Pass: xdpdrv arp(F_BROADCAST) ns1-2
Pass: xdpdrv arp(F_BROADCAST) ns1-3
Pass: xdpdrv IPv4 (F_BROADCAST|F_EXCLUDE_INGRESS) ns1-1
Pass: xdpdrv IPv4 (F_BROADCAST|F_EXCLUDE_INGRESS) ns1-2
Pass: xdpdrv IPv4 (F_BROADCAST|F_EXCLUDE_INGRESS) ns1-3
Pass: xdpdrv IPv6 (no flags) ns1-1
Pass: xdpdrv IPv6 (no flags) ns1-2
Pass: xdpegress mac ns1-2
Pass: xdpegress mac ns1-3
Summary: PASS 18, FAIL 0

real	1m18.321s
user	0m0.123s
sys	0m0.350s

Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/bpf/20210519090747.1655268-5-liuhangbin@gmail.com
-
Committed by Hangbin Liu

This is a sample for XDP redirect broadcast. In the sample we forward all packets between the given interfaces. There is also an option -X that enables a second xdp_prog on the egress interface.

Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/bpf/20210519090747.1655268-4-liuhangbin@gmail.com
-
Committed by Hangbin Liu

This patch adds two flags, BPF_F_BROADCAST and BPF_F_EXCLUDE_INGRESS, to extend xdp_redirect_map for broadcast support. With BPF_F_BROADCAST the packet will be broadcast to all the interfaces in the map. With BPF_F_EXCLUDE_INGRESS the ingress interface will be excluded when broadcasting.

When getting the devices in a dev hash map via dev_map_hash_get_next_key(), there is a possibility that we fall back to the first key when a device is removed. This would duplicate packets on some interfaces, so just walk the whole set of buckets to avoid the issue. For the dev array map, we also walk the whole map to find valid interfaces.

Function bpf_clear_redirect_map() was removed in commit ee75aef2 ("bpf, xdp: Restructure redirect actions"). Add it back as we need to use ri->map again.

With test topology:

 +-------------------+             +-------------------+
 | Host A (i40e 10G) |  ---------- | eno1 (i40e 10G)   |
 +-------------------+             |                   |
                                   |   Host B          |
 +-------------------+             |                   |
 | Host C (i40e 10G) |  ---------- | eno2 (i40e 10G)   |
 +-------------------+             |                   |
                                   |          +------+ |
                                   | veth0 -- | Peer | |
                                   | veth1 -- |      | |
                                   | veth2 -- |  NS  | |
                                   |          +------+ |
                                   +-------------------+

On Host A:
 # pktgen/pktgen_sample03_burst_single_flow.sh -i eno1 -d $dst_ip -m $dst_mac -s 64

On Host B (Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz, 128G memory), use xdp_redirect_map and xdp_redirect_map_multi in samples/bpf for testing. All the veth peers in the NS have an XDP_DROP program loaded. The forward_map max_entries in xdp_redirect_map_multi is modified to 4.

Testing the performance impact on the regular xdp_redirect path with and without the patch (to check the impact of the additional check for broadcast mode):

5.12 rc4         | redirect_map        i40e->i40e      | 2.0M |  9.7M
5.12 rc4         | redirect_map        i40e->veth      | 1.7M | 11.8M
5.12 rc4 + patch | redirect_map        i40e->i40e      | 2.0M |  9.6M
5.12 rc4 + patch | redirect_map        i40e->veth      | 1.7M | 11.7M

Testing the performance when cloning packets with the redirect_map_multi test, using a redirect map size of 4, filled with 1-3 devices:

5.12 rc4 + patch | redirect_map multi  i40e->veth (x1) | 1.7M | 11.4M
5.12 rc4 + patch | redirect_map multi  i40e->veth (x2) | 1.1M |  4.3M
5.12 rc4 + patch | redirect_map multi  i40e->veth (x3) | 0.8M |  2.6M

Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Link: https://lore.kernel.org/bpf/20210519090747.1655268-3-liuhangbin@gmail.com
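A minimal sketch of an XDP program using the new flags (the map name and sizes are illustrative, not taken from the patch):

// SPDX-License-Identifier: GPL-2.0
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_DEVMAP_HASH);
	__uint(key_size, sizeof(int));
	__uint(value_size, sizeof(int));
	__uint(max_entries, 32);
} forward_map SEC(".maps");

SEC("xdp")
int xdp_broadcast(struct xdp_md *ctx)
{
	/* Broadcast the packet to every interface in forward_map,
	 * except the one it arrived on; the key is ignored in
	 * broadcast mode.
	 */
	return bpf_redirect_map(&forward_map, 0,
				BPF_F_BROADCAST | BPF_F_EXCLUDE_INGRESS);
}

char _license[] SEC("license") = "GPL";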
-
Committed by Jesper Dangaard Brouer

This changes the devmap XDP program support to run the program when the bulk queue is flushed instead of before the frame is enqueued. This has a couple of benefits:

- It "sorts" the packets by destination devmap entry, and then runs the same BPF program on all the packets in sequence. This ensures that we keep the XDP program and destination device properties hot in I-cache.

- It makes the multicast implementation simpler because it can just enqueue packets using bq_enqueue() without having to deal with the devmap program at all.

The drawback is that if the devmap program drops the packet, the enqueue step is redundant. However, arguably this is mostly visible in a micro-benchmark, and with more mixed traffic the I-cache benefit should win out.

The performance impact of just this patch is as follows. Using two 10Gb i40e NICs, redirecting one to another, or into a veth interface which does XDP_DROP on the veth peer. With xdp_redirect_map in samples/bpf, send packets via pktgen:

 # ./pktgen_sample03_burst_single_flow.sh -i eno1 -d $dst_ip -m $dst_mac -t 10 -s 64

There is about +/- 0.1M deviation for native testing; performance improved for the base case, but dropped back somewhat with an XDP devmap prog attached.

Version          | Test                        | Generic | Native | Native + 2nd xdp_prog
5.12 rc4         | xdp_redirect_map i40e->i40e |    1.9M |   9.6M |  8.4M
5.12 rc4         | xdp_redirect_map i40e->veth |    1.7M |  11.7M |  9.8M
5.12 rc4 + patch | xdp_redirect_map i40e->i40e |    1.9M |   9.8M |  8.0M
5.12 rc4 + patch | xdp_redirect_map i40e->veth |    1.7M |  12.0M |  9.4M

When bq_xmit_all() is called from bq_enqueue(), another packet will always be enqueued immediately after, so clearing dev_rx, xdp_prog and flush_node in bq_xmit_all() is redundant. Move the clearing to __dev_flush(), and only check them once in bq_enqueue() since they are all modified together.

This change also has the side effect of extending the lifetime of the RCU-protected xdp_prog that lives inside the devmap entries: instead of just living for the duration of the XDP program invocation, the reference now lives all the way until the bq is flushed. This is safe because the bq flush happens at the end of the NAPI poll loop, so everything happens between a local_bh_disable()/local_bh_enable() pair. However, this is by no means obvious from looking at the call sites; in particular, some drivers have an additional rcu_read_lock() around only the XDP program invocation, which only confuses matters further. Cleaning this up will be done in a separate patch series.

Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20210519090747.1655268-2-liuhangbin@gmail.com
-
Committed by Alexei Starovoitov

Andrii Nakryiko says:

====================
Implement the error reporting changes discussed in the "Libbpf: the road to v1.0" document ([0]).

Libbpf gets a new API, libbpf_set_strict_mode(), which accepts a set of flags that turn on a set of libbpf 1.0 changes that might be potentially breaking. It's possible to opt in to all current and future 1.0 features by specifying the LIBBPF_STRICT_ALL flag.

When some of the 1.0 "features" are requested, libbpf APIs might behave differently. In this patch set a first set of changes is implemented, all related to the way libbpf returns errors. See individual patches for details.

Patch #1 adds a no-op libbpf_set_strict_mode() functionality to enable updating selftests.

Patch #2 gets rid of all the bad code patterns that will break in libbpf 1.0 (exact -1 comparison for low-level APIs, direct IS_ERR() macro usage to check pointer-returning APIs for error, etc). These changes make selftests work in both legacy and 1.0 libbpf modes. Selftests also opt in to 100% libbpf 1.0 mode to automatically gain all the subsequent changes, which will come in follow-up patches.

Patch #3 streamlines error reporting for low-level APIs wrapping the bpf() syscall.

Patch #4 streamlines errors for all the rest of the APIs.

Patch #5 ensures that BPF skeletons propagate errors properly as well, as currently on error some APIs will return NULL with no way of checking the exact error code.

[0] https://docs.google.com/document/d/1UyjTZuPFWiPFyKk1tV5an11_iaRuec6U-ZESZ54nNTY

v1->v2:
- move libbpf_set_strict_mode() implementation to patch #1, where it belongs (Alexei);
- add acks, slight rewording of commit messages.
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Committed by Andrii Nakryiko

Follow libbpf's error handling conventions and pass through errors and errno properly. Skeleton code always returned NULL on errors (not ERR_PTR(err)), so there are no backwards compatibility concerns. But now we also set errno properly, so it's possible to distinguish different reasons for failure, if necessary.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/bpf/20210525035935.1461796-6-andrii@kernel.org
-
Committed by Andrii Nakryiko

Implement changes to error reporting for high-level libbpf APIs to make them less surprising and less error-prone to users:

- in all the cases when an error happens, errno is set to an appropriate error value;
- in libbpf 1.0 mode, all pointer-returning APIs return NULL on error and the error code is communicated through errno; this applies both to APIs that already returned NULL before (so now they communicate more detailed error codes), as well as to many APIs that used the ERR_PTR() macro and encoded error numbers as fake pointers;
- in legacy (default) mode, those APIs that were returning ERR_PTR(err) continue doing so, but still set errno.

With these changes, errno can always be used to extract the actual error, regardless of legacy or libbpf 1.0 mode. This is utilized internally in libbpf in places where libbpf uses its own high-level APIs. libbpf_get_error() is adapted to handle both cases completely transparently to end users (and is used by libbpf consistently as well).

More context, justification, and discussion can be found in the "Libbpf: the road to v1.0" document ([0]).

[0] https://docs.google.com/document/d/1UyjTZuPFWiPFyKk1tV5an11_iaRuec6U-ZESZ54nNTY

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/bpf/20210525035935.1461796-5-andrii@kernel.org
-
Committed by Andrii Nakryiko

Ensure that low-level APIs behave uniformly across libbpf as follows:

- in case of an error, errno is always set to the correct error code;
- when libbpf 1.0 mode is enabled with the LIBBPF_STRICT_DIRECT_ERRS option to libbpf_set_strict_mode(), return the -Exxx error value directly, instead of -1;
- by default, until libbpf 1.0 is released, keep returning -1 directly.

More context, justification, and discussion can be found in the "Libbpf: the road to v1.0" document ([0]).

[0] https://docs.google.com/document/d/1UyjTZuPFWiPFyKk1tV5an11_iaRuec6U-ZESZ54nNTY

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/bpf/20210525035935.1461796-4-andrii@kernel.org
-
Committed by Andrii Nakryiko

Turn on libbpf 1.0 mode. Fix all the explicit IS_ERR checks that would now be broken because libbpf returns NULL on error (and sets errno). Fix ASSERT_OK_PTR and ASSERT_ERR_PTR to work for both old and new modes and use them throughout selftests. This is trivial to do by using the libbpf_get_error() API that all libbpf users are supposed to use, instead of IS_ERR checks.

A bunch of checks also did explicit -1 comparisons for various fd-returning APIs. Such checks are replaced with >= 0 or < 0 cases.

There were also a few misuses of bpf_object__find_map_by_name() in test_maps. Those are fixed in this patch as well.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/bpf/20210525035935.1461796-3-andrii@kernel.org
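A rough sketch of the check pattern this conversion relies on (the helper name is made up for illustration); libbpf_get_error() works in both legacy and 1.0 modes, so the same code survives the switch:

#include <bpf/libbpf.h>

/* libbpf_get_error() decodes either an ERR_PTR-encoded pointer (legacy
 * mode) or a NULL return with errno set (libbpf 1.0 mode).
 */
static long attach_prog_checked(struct bpf_program *prog,
				struct bpf_link **link)
{
	long err;

	*link = bpf_program__attach(prog);
	err = libbpf_get_error(*link);
	if (err)
		*link = NULL;	/* never hand an ERR_PTR back to callers */
	return err;
}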
-
Committed by Andrii Nakryiko

Add a libbpf_set_strict_mode() API that allows an application to simulate libbpf 1.0 breaking changes before libbpf 1.0 is released. This will help users migrate gradually and with confidence. For now only the ALL or NONE options are available; subsequent patches will add more flags. This patch is preliminary for the selftests/bpf changes.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/bpf/20210525035935.1461796-2-andrii@kernel.org
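A minimal user-space sketch of opting in to strict mode; the object file name is hypothetical:

#include <errno.h>
#include <stdio.h>
#include <bpf/libbpf.h>

int main(void)
{
	struct bpf_object *obj;

	/* Opt in to all libbpf 1.0 behaviors, including strict error
	 * reporting (NULL return + errno set on failure).
	 */
	libbpf_set_strict_mode(LIBBPF_STRICT_ALL);

	obj = bpf_object__open_file("prog.bpf.o", NULL);
	if (!obj) {
		fprintf(stderr, "failed to open object: %d\n", -errno);
		return 1;
	}

	bpf_object__close(obj);
	return 0;
}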
-
- 25 May 2021, 9 commits
-
Committed by Magnus Karlsson

Use kvcalloc() instead of kcalloc() to support large umems with, on my server, one million pages or more in the umem.

Reported-by: Dan Siemon <dan@coverfire.com>
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Björn Töpel <bjorn@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20210521083301.26921-1-magnus.karlsson@gmail.com
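A generic sketch of the pattern (function names are illustrative, not the driver's own code):

#include <linux/mm.h>

/* kvcalloc() tries kmalloc first and falls back to vmalloc, so an array
 * with millions of entries (one pointer per umem page) still allocates.
 */
static struct page **alloc_umem_page_array(u32 npgs)
{
	return kvcalloc(npgs, sizeof(struct page *), GFP_KERNEL);
}

static void free_umem_page_array(struct page **pgs)
{
	kvfree(pgs);	/* matches either allocation path */
}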
-
Committed by Zhen Lei

Fix some spelling mistakes in comments:

aother ==> another
Netiher ==> Neither
desribe ==> describe
intializing ==> initializing
funciton ==> function
wont ==> won't, and move the word 'the' at the end to the next line
accross ==> across
pathes ==> paths
triggerred ==> triggered
excute ==> execute
ether ==> either
conervative ==> conservative
convetion ==> convention
markes ==> marks
interpeter ==> interpreter

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210525025659.8898-2-thunder.leizhen@huawei.com
-
Committed by Aditya Srivastava

The opening comment mark '/**' is used for highlighting the beginning of kernel-doc comments. The header for samples/bpf/ibumad_kern.c follows this syntax, but the content inside does not comply with kernel-doc. This line was probably not meant for kernel-doc parsing, but is parsed due to the presence of kernel-doc-like comment syntax (i.e., '/**'), which causes unexpected warnings from kernel-doc:

warning: This comment starts with '/**', but isn't a kernel-doc comment. Refer Documentation/doc-guide/kernel-doc.rst
* ibumad BPF sample kernel side

Provide a simple fix by replacing this occurrence with the general comment format, i.e. '/*', to prevent kernel-doc from parsing it.

Signed-off-by: Aditya Srivastava <yashsri421@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Randy Dunlap <rdunlap@infradead.org>
Link: https://lore.kernel.org/bpf/20210523151408.22280-1-yashsri421@gmail.com
-
Committed by Yonghong Song

LLVM patch https://reviews.llvm.org/D102712 narrowed the scope of the existing R_BPF_64_64 and R_BPF_64_32 relocations, and added three new relocations, R_BPF_64_ABS64, R_BPF_64_ABS32 and R_BPF_64_NODYLD32. The main motivation is to make relocations linker friendly.

This change, unfortunately, breaks the libbpf build, and we will see errors like below:

libbpf: ELF relo #0 in section #6 has unexpected type 2 in /home/yhs/work/bpf-next/tools/testing/selftests/bpf/bpf_tcp_nogpl.o
Error: failed to link '/home/yhs/work/bpf-next/tools/testing/selftests/bpf/bpf_tcp_nogpl.o': Unknown error -22 (-22)

The new relocation R_BPF_64_ABS64 is generated and the libbpf linker sanity check doesn't understand it:

Relocation section '.rel.struct_ops' at offset 0x1410 contains 1 entries:
    Offset             Info             Type               Symbol's Value  Symbol's Name
0000000000000018  0000000700000002 R_BPF_64_ABS64         0000000000000000 nogpltcp_init

Looking at selftests/bpf/bpf_tcp_nogpl.c:

void BPF_STRUCT_OPS(nogpltcp_init, struct sock *sk)
{
}

SEC(".struct_ops")
struct tcp_congestion_ops bpf_nogpltcp = {
	.init		= (void *)nogpltcp_init,
	.name		= "bpf_nogpltcp",
};

The new LLVM relocation scheme categorizes the 'nogpltcp_init' reference as R_BPF_64_ABS64 instead of R_BPF_64_64, which is used to specify ld_imm64 relocation in the new scheme. Let us fix the linker sanity checking by including R_BPF_64_ABS64 and R_BPF_64_ABS32. There is no need to check R_BPF_64_NODYLD32, which is used for .BTF and .BTF.ext.

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20210522162341.3687617-1-yhs@fb.com
-
Committed by Andrii Nakryiko

Denis Salopek says:

====================
This patch series extends the existing bpf_map_lookup_and_delete_elem() functionality with 4 more map types:
- BPF_MAP_TYPE_HASH,
- BPF_MAP_TYPE_PERCPU_HASH,
- BPF_MAP_TYPE_LRU_HASH and
- BPF_MAP_TYPE_LRU_PERCPU_HASH.

Patch 1 adds most of its functionality and logic as well as documentation. As it was previously limited to only stacks and queues, which do not support the BPF_F_LOCK flag, patch 2 enables its usage by adding a new libbpf API bpf_map_lookup_and_delete_elem_flags() based on the existing bpf_map_lookup_elem_flags(). Patch 3 adds selftests for lookup_and_delete_elem().

Changes in patch 1:
v7: Minor formatting nits, add Acked-by.
v6: Remove unneeded flag check, minor code/format fixes.
v5: Split patch into 3 patches. Extend BPF_MAP_LOOKUP_AND_DELETE_ELEM documentation with these changes.
v4: Fix the return value for unsupported map types.
v3: Add bpf_map_lookup_and_delete_elem_flags() and enable the BPF_F_LOCK flag, change CHECKs to ASSERT_OKs, initialize variables to 0.
v2: Add functionality for LRU/per-CPU, add test_progs tests.

Changes in patch 2:
v7: No change.
v6: Add Acked-by.
v5: Move to the newest libbpf version (0.4.0).

Changes in patch 3:
v7: Remove the ASSERT_GE macro, which is already added in some other commit; change ASSERT_OK to ASSERT_OK_PTR; add Acked-by.
v6: Remove PERCPU macros, add the ASSERT_GE macro to test_progs.h, remove leftover code.
v5: Use more appropriate macros. Better check for the changed value.
====================

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
-
Committed by Denis Salopek

Add bpf selftests and extend existing ones for the new lookup-and-delete functionality for (per-CPU) hash and (per-CPU) LRU hash map types. In test_lru_map and test_maps we add an element, lookup_and_delete it, then check whether it's deleted. The newly added lookup_and_delete prog tests practically do the same thing, but additionally use a BPF program to change the value of the element for LRU maps.

Signed-off-by: Denis Salopek <denis.salopek@sartura.hr>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/d30d3e0060c1f750e133579623cf1c60ff58f3d9.1620763117.git.denis.salopek@sartura.hr
-
Committed by Denis Salopek

Add a bpf_map_lookup_and_delete_elem_flags() libbpf API in order to use the BPF_F_LOCK flag with the map_lookup_and_delete_elem() function.

Signed-off-by: Denis Salopek <denis.salopek@sartura.hr>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/15b05dafe46c7e0750d110f233977372029d1f62.1620763117.git.denis.salopek@sartura.hr
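A minimal user-space sketch of the new API (the helper and map fd are illustrative):

#include <errno.h>
#include <stdio.h>
#include <bpf/bpf.h>

/* Atomically read and remove one element from a hash map whose value
 * embeds a struct bpf_spin_lock, holding the lock during the operation.
 */
static int pop_locked(int map_fd, const void *key, void *value)
{
	int err = bpf_map_lookup_and_delete_elem_flags(map_fd, key, value,
						       BPF_F_LOCK);
	if (err)
		fprintf(stderr, "lookup_and_delete failed: %d\n", -errno);
	return err;
}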
-
Committed by Denis Salopek

Extend the existing bpf_map_lookup_and_delete_elem() functionality to hashtab map types, in addition to stacks and queues. Create a new hashtab bpf_map_ops function that does lookup and deletion of the element under the same bucket lock and add the created map_ops to bpf.h.

Signed-off-by: Denis Salopek <denis.salopek@sartura.hr>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/4d18480a3e990ffbf14751ddef0325eed3be2966.1620763117.git.denis.salopek@sartura.hr
-
Committed by Stanislav Fomichev

I'm getting the following error when running 'gen skeleton -L' as a regular user:

libbpf: Error in bpf_object__probe_loading():Operation not permitted(1). Couldn't load trivial BPF program. Make sure your kernel supports BPF (CONFIG_BPF_SYSCALL=y) and/or that RLIMIT_MEMLOCK is set to big enough value.

Fixes: 67234743 ("libbpf: Generate loader program out of BPF ELF file.")
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210521030653.2626513-1-sdf@google.com
-
- 24 May 2021, 9 commits
-
Committed by YueHaibing

Issue identified with Coccinelle.

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Florian Fainelli

Use phy_ethtool_nway_reset() since the driver makes use of the PHY library.

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
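For context, a hedged sketch of what wiring the generic helper into a driver's ethtool_ops typically looks like (the struct name is illustrative, not the driver's):

#include <linux/ethtool.h>
#include <linux/phy.h>

/* With the PHY managed by phylib, the generic helper can back the
 * ethtool nway_reset operation directly instead of driver-private code.
 */
static const struct ethtool_ops example_ethtool_ops = {
	.nway_reset = phy_ethtool_nway_reset,
};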
-
Committed by David S. Miller

Florian Fainelli says:

====================
net: r6040: Non-functional changes

These two patches clean up the r6040 driver a little bit in preparation for adding additional features such as dumping MAC counters and properly dealing with DMA-API mapping.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Florian Fainelli

Instead of the open-coded constant 4.

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Florian Fainelli

This is not a functional change, but we should be using a logical OR to assign the bits we will be writing to the MDIO read and write registers.

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by YueHaibing

Use the DEVICE_ATTR_*() helpers instead of plain DEVICE_ATTR, which makes the code a bit shorter and easier to read.

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
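A generic sketch of this conversion (the 'foo' attribute is hypothetical, not from the driver):

#include <linux/device.h>
#include <linux/sysfs.h>

/* DEVICE_ATTR_RO(foo) finds foo_show() by name and sets mode 0444,
 * replacing an open-coded DEVICE_ATTR(foo, 0444, foo_show, NULL).
 */
static ssize_t foo_show(struct device *dev, struct device_attribute *attr,
			char *buf)
{
	return sysfs_emit(buf, "%d\n", 42);
}
static DEVICE_ATTR_RO(foo);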
-
Committed by YueHaibing

Use the DEVICE_ATTR_*() helpers instead of plain DEVICE_ATTR, which makes the code a bit shorter and easier to read.

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by YueHaibing

Use the DEVICE_ATTR_*() helpers instead of plain DEVICE_ATTR, which makes the code a bit shorter and easier to read.

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Yang Yingliang

The variables will be freed on the err_phy_connect path; it should return an error code, or it will cause a double free when calling ftgmac100_remove().

Fixes: bd466c3f ("net/faraday: Support NCSI mode")
Fixes: 39bfab88 ("net: ftgmac100: Add support for DT phy-handle property")
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 22 May 2021, 3 commits
-
Committed by Vladimir Oltean

When booting a board with DPAA2 interfaces defined statically via DPL (as opposed to creating them dynamically using restool), the driver will print an unspecific error message. This change adds the error code to the message, and avoids printing altogether if the error code is EPROBE_DEFER, because that is not a cause for alarm.

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
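A generic sketch of the reporting pattern described here (the function and message are illustrative, not the driver's actual code):

#include <linux/device.h>
#include <linux/errno.h>

/* Report the error code, but stay quiet on -EPROBE_DEFER: deferral just
 * means a dependency isn't ready yet and probing will be retried.
 */
static int report_connect_error(struct device *dev, int err)
{
	if (err && err != -EPROBE_DEFER)
		dev_err(dev, "error connecting to MAC: %d\n", err);
	return err;
}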
-
Committed by David S. Miller

Ioana Ciornei says:

====================
dpaa2-eth: setup the of_node

This patch set allows DSA to work with a DPAA2 master device by setting up the of_node to point to the corresponding MAC DTS node, so that of_find_net_device_by_node() can find it. As an added benefit, udev rules can now be used to create a naming scheme based on the physical MAC.

The second patch renames the debugfs directory to use the DPNI name instead of the netdev name, since the latter can be changed after probe time.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Ioana Ciornei

Name the debugfs directory after the DPNI object instead of the netdev name, since the latter can be changed after probe by udev rules.

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-