- 18 November 2020, 1 commit
-
-
By Magnus Karlsson
Increment the statistic for how many Tx packets have been sent at the time of sending instead of at the time of completion, since a completion event means that the buffer has been sent AND returned to user space. The packet always gets sent shortly after sendto() is called. The kernel might, for performance reasons, decide not to return every single buffer to user space immediately after sending, for example, only after a batch of packets has been transmitted. Incrementing the number of packets sent at completion would in that case be confusing: if you send a single packet, the counter might show zero for a while even though the packet has been transmitted.
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/1605525167-14450-2-git-send-email-magnus.karlsson@gmail.com
-
- 11 November 2020, 1 commit
-
-
By Hangbin Liu
The tcbpf2_kern.o and related kernel sections were moved to the bpf selftests folder in commit b05cd740 ("samples/bpf: remove the bpf tunnel testsuite."). Remove this leftover as well.
Fixes: b05cd740 ("samples/bpf: remove the bpf tunnel testsuite.")
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20201110015013.1570716-3-liuhangbin@gmail.com
-
- 10 November 2020, 1 commit
-
-
By Menglong Dong
The 'bpf/bpf.h' include in 'samples/bpf/hbm.c' is duplicated; remove the redundant copy.
Signed-off-by: Menglong Dong <dong.menglong@zte.com.cn>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/1604654034-52821-1-git-send-email-dong.menglong@zte.com.cn
-
- 28 October 2020, 1 commit
-
-
By Toke Høiland-Jørgensen
The memlock rlimit is a notorious source of failure for BPF programs. Most of the samples just set it to infinity, but a few used a lower limit. The problem with unconditionally setting a lower limit is that this will also override the limit if the system-wide setting is *higher* than the limit being set, which can lead to failures on systems that lock a lot of memory but set 'ulimit -l' to unlimited before running a sample. One fix would be to set the limit only if the current limit is lower, but it is simpler to unify all the samples and have them all set the limit to infinity.
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Link: https://lore.kernel.org/bpf/20201026233623.91728-1-toke@redhat.com
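The unified pattern boils down to raising RLIMIT_MEMLOCK before loading any BPF objects. A minimal sketch (the helper name is illustrative, not the samples' exact code):

```c
#include <sys/resource.h>

/* Lift the memlock limit so BPF map and program allocations do not fail;
 * called once at startup, before loading the BPF object. */
static int bump_memlock_rlimit(void)
{
	struct rlimit rlim = {
		.rlim_cur = RLIM_INFINITY,
		.rlim_max = RLIM_INFINITY,
	};

	return setrlimit(RLIMIT_MEMLOCK, &rlim);
}
```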
-
- 22 October 2020, 1 commit
-
-
By Daniel Borkmann
Yaniv reported a compilation error after pulling the latest libbpf:

  [...]
  ../libbpf/src/root/usr/include/bpf/bpf_helpers.h:99:10: error: unknown register name 'r0' in asm
               : "r0", "r1", "r2", "r3", "r4", "r5");
  [...]

The issue got triggered because Yaniv was compiling tracing programs with the native target (e.g. x86) instead of the BPF target, hence no BTF-generated vmlinux.h nor CO-RE was used, and llc with -march=bpf was later invoked to compile from LLVM IR to a BPF object file. Since clang was expecting x86 inline asm rather than BPF asm, the error complained that these registers do not exist on the former. Guard bpf_tail_call_static() with defined(__bpf__), where BPF inline asm is valid to use. BPF tracing programs on more modern kernels use the BPF target anyway, so bpf_tail_call_static() will still be available for them. BPF inline asm is supported since clang 7 (clang <= 6 otherwise throws the same error as above), and __bpf_unreachable() since clang 8, therefore include the latter condition as well in order to prevent compilation errors for older clang versions. Given that even an old Ubuntu 18.04 LTS has official LLVM packages all the way up to llvm-10, I did not bother to special-case __bpf_unreachable() inside bpf_tail_call_static() further. Also, undo sockex3_kern's use of bpf_tail_call_static(), given the samples still use the old hacky way of compiling networking progs with the native instead of the BPF target, so bpf_tail_call_static() will not be defined there anymore.
Fixes: 0e9f6841 ("bpf, libbpf: Add bpf_tail_call_static helper for bpf programs")
Reported-by: Yaniv Agman <yanivagman@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Tested-by: Yaniv Agman <yanivagman@gmail.com>
Link: https://lore.kernel.org/bpf/CAMy7=ZUk08w5Gc2Z-EKi4JFtuUCaZYmE4yzhJjrExXpYKR4L8w@mail.gmail.com
Link: https://lore.kernel.org/bpf/20201021203257.26223-1-daniel@iogearbox.net
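The resulting guard in bpf_helpers.h looks roughly like the fragment below; this is a sketch, with the helper body (the BPF inline asm that loads r1-r3 and emits the tail call) elided, since the surrounding header provides __always_inline, __u32 and __bpf_unreachable():

```c
/* Only visible when clang >= 8 compiles for the BPF target, so a native
 * (e.g. x86) build never sees the BPF register names in the asm clobbers. */
#if __clang_major__ >= 8 && defined(__bpf__)
static __always_inline void
bpf_tail_call_static(void *ctx, const void *map, const __u32 slot)
{
	if (!__builtin_constant_p(slot))
		__bpf_unreachable();
	/* BPF inline asm elided: valid only when targeting BPF */
}
#endif
```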
-
- 12 October 2020, 3 commits
-
-
By Daniel T. Lee
Most of the samples were converted to the new BTF-defined maps as they moved to libbpf, but some were missed. Instead of using the previous BPF map definition, this commit refactors the xdp_monitor and xdp_sample_pkts_kern map definitions with the new BTF-defined map format. It also removes the max_entries attribute from the PERF_EVENT_ARRAY map type; libbpf's bpf_object__create_map() will automatically set max_entries to the maximum configured number of CPUs on the host.
Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20201010181734.1109-4-danieltimlee@gmail.com
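For reference, a BTF-defined perf event array in this style looks roughly as follows (the map name is illustrative):

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* BTF-defined map: declared in the ".maps" section instead of the old
 * struct bpf_map_def. max_entries is intentionally omitted for a
 * PERF_EVENT_ARRAY; libbpf sizes it to the number of possible CPUs. */
struct {
	__uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
	__uint(key_size, sizeof(int));
	__uint(value_size, sizeof(__u32));
} my_map SEC(".maps");
```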
-
By Daniel T. Lee
Since commit d7a18ea7 ("libbpf: Add generic bpf_program__attach()"), it is possible to attach some BPF programs with the generic __attach() instead of explicitly calling __attach_<type>(). This commit refactors the __attach_tracepoint() usage with libbpf's generic __attach() method. In addition, it simplifies the logic of setting the map FD. The missing removal of bpf_load.o from the Makefile has also been fixed.
Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20201010181734.1109-3-danieltimlee@gmail.com
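A minimal sketch of the generic attach path (error handling trimmed; `prog` is assumed to come from an already loaded bpf_object with a SEC("tracepoint/...") annotation):

```c
#include <stdio.h>
#include <bpf/libbpf.h>

/* The generic attach infers the attach type from the program's section
 * name, so no explicit bpf_program__attach_tracepoint() call is needed. */
static struct bpf_link *attach_one(struct bpf_program *prog)
{
	struct bpf_link *link = bpf_program__attach(prog);

	if (libbpf_get_error(link)) {
		fprintf(stderr, "failed to attach program\n");
		return NULL;
	}
	return link;
}
```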
-
By Daniel T. Lee
To avoid confusion caused by the increasing fragmentation of BPF loader programs, switch from bpf_load to the libbpf loader. Thanks to libbpf's bpf_link interface, managing a tracepoint BPF program is much easier: bpf_program__attach_tracepoint() handles both enabling the tracepoint event and attaching the BPF program to it behind a single bpf_link, so there is no need to manage event_fd and prog_fd separately. This commit refactors xdp_monitor using this libbpf API; bpf_load is removed and the sample is migrated to libbpf.
Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20201010181734.1109-2-danieltimlee@gmail.com
-
- 07 October 2020, 5 commits
-
-
By Ciara Loftus
Add an option to count the number of interrupts generated per second and the total number of interrupts over the lifetime of the application for a given interface. This information is extracted from /proc/interrupts. Since there is no naming convention across drivers, the user must provide, on the command line, the string which is specific to their interface in the /proc/interrupts file.

Usage:
  ./xdpsock ... -I <irq_str>

e.g. for queue 0 of i40e device eth0:
  ./xdpsock ... -I i40e-eth0-TxRx-0

Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20201002133612.31536-3-ciara.loftus@intel.com
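Counting from /proc/interrupts amounts to summing the per-CPU columns of the matching line; a simplified sketch (the function name and parsing details are illustrative, not the sample's exact code):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Sum the per-CPU interrupt counts on the /proc/interrupts line whose
 * name contains irq_str (e.g. "i40e-eth0-TxRx-0"). Returns 0 if absent. */
static unsigned long irq_count(const char *irq_str)
{
	char line[4096];
	unsigned long total = 0;
	FILE *f = fopen("/proc/interrupts", "r");

	if (!f)
		return 0;
	while (fgets(line, sizeof(line), f)) {
		char *p, *end;

		if (!strstr(line, irq_str))
			continue;
		p = strchr(line, ':');	/* skip the "NNN:" IRQ number */
		if (!p)
			break;
		p++;
		for (;;) {		/* per-CPU columns until a non-number */
			unsigned long v = strtoul(p, &end, 10);

			if (end == p)
				break;
			total += v;
			p = end;
		}
		break;
	}
	fclose(f);
	return total;
}
```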
-
By Ciara Loftus
Categorise and record the syscalls issued by the xdpsock sample app. The categories recorded are:
  rx_empty_polls:     polls when the rx ring is empty
  fill_fail_polls:    polls when failing to get an addr from the fill ring
  copy_tx_sendtos:    sendtos issued for tx when copy mode is enabled
  tx_wakeup_sendtos:  sendtos issued when the tx ring needs waking up
  opt_polls:          polls issued because the '-p' flag is set
Print the stats using '-a' on the xdpsock command line.
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20201002133612.31536-2-ciara.loftus@intel.com
-
By Ciara Loftus
New statistics will be added in future commits. In preparation for this, split the existing statistics out into their own struct.
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20201002133612.31536-1-ciara.loftus@intel.com
-
By Yonghong Song
Compiling samples/bpf hits an error related to fallthrough marking:

  ...
    CC  samples/bpf/hbm.o
  samples/bpf/hbm.c: In function 'main':
  samples/bpf/hbm.c:486:4: error: 'fallthrough' undeclared (first use in this function)
      fallthrough;
      ^~~~~~~~~~~

"fallthrough" is not defined under the tools/include directory; rather, "__fallthrough" is defined in linux/compiler.h. Including "linux/compiler.h" and using "__fallthrough" fixes the issue.
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20201006043427.1891805-1-yhs@fb.com
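In other words, user-space code built against tools/include marks intentional fall-through like this (the switch is hypothetical, not hbm.c's actual option handling):

```c
#include <linux/compiler.h>	/* tools/include copy; provides __fallthrough */

static int handle_opt(int opt, int *work_conserving)
{
	switch (opt) {
	case 'w':
		*work_conserving = 1;
		__fallthrough;	/* 'w' intentionally also enables what 's' does */
	case 's':
		return 1;
	default:
		return 0;
	}
}
```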
-
By Yonghong Song
With the latest llvm trunk, bpf programs under the samples/bpf directory that use CO-RE may experience the following errors:

  LLVM ERROR: Cannot select: intrinsic %llvm.preserve.struct.access.index
  PLEASE submit a bug report to https://bugs.llvm.org/ and include the crash backtrace.
  Stack dump:
  0.  Program arguments: llc -march=bpf -filetype=obj -o samples/bpf/test_probe_write_user_kern.o
  1.  Running pass 'Function Pass Manager' on module '<stdin>'.
  2.  Running pass 'BPF DAG->DAG Pattern Instruction Selection' on function '@bpf_prog1'
  #0 0x000000000183c26c llvm::sys::PrintStackTrace(llvm::raw_ostream&, int) (/data/users/yhs/work/llvm-project/llvm/build.cur/install/bin/llc+0x183c26c)
  ...
  #7 0x00000000017c375e (/data/users/yhs/work/llvm-project/llvm/build.cur/install/bin/llc+0x17c375e)
  #8 0x00000000016a75c5 llvm::SelectionDAGISel::CannotYetSelect(llvm::SDNode*) (/data/users/yhs/work/llvm-project/llvm/build.cur/install/bin/llc+0x16a75c5)
  #9 0x00000000016ab4f8 llvm::SelectionDAGISel::SelectCodeCommon(llvm::SDNode*, unsigned char const*, unsigned int) (/data/users/yhs/work/llvm-project/llvm/build.cur/install/bin/llc+0x16ab4f8)
  ...
  Aborted (core dumped) | llc -march=bpf -filetype=obj -o samples/bpf/test_probe_write_user_kern.o

The reason is llvm change https://reviews.llvm.org/D87153, where CO-RE relocation global generation moved from the beginning of target-dependent optimization (llc) to the beginning of target-independent optimization (opt). Since samples/bpf programs do not use vmlinux.h and their clang compilation uses the native architecture, the arch triple needs to be adjusted at the opt level to do CO-RE relocation global generation properly; otherwise the above error appears. This patch fixes the issue by introducing opt and llvm-dis into the compilation chain, which performs proper CO-RE relocation global generation as well as O2-level optimization. Tested with llvm10, llvm11 and trunk/llvm12.
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20201006043427.1891742-1-yhs@fb.com
-
- 01 October 2020, 1 commit
-
-
By Daniel Borkmann
For those locations where we use an immediate tail call map index, use the newly added bpf_tail_call_static() helper.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/3cfb2b799a62d22c6e7ae5897c23940bdcc24cbc.1601477936.git.daniel@iogearbox.net
-
- 19 September 2020, 1 commit
-
-
By Ilya Leoshkevich
s390 uses the socketcall multiplexer instead of individual socket syscalls. Therefore, "kprobe/" SYSCALL(sys_connect) does not trigger and test_map_in_map fails. Fix by using "kprobe/__sys_connect" instead.
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20200915115519.3769807-1-iii@linux.ibm.com
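The change is essentially the section annotation on the kprobe program; a sketch (program name and body are illustrative):

```c
#include <linux/ptrace.h>
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* __sys_connect() is the arch-independent implementation, so the kprobe
 * also fires on s390, where sys_connect is dispatched via socketcall. */
SEC("kprobe/__sys_connect")
int trace_sys_connect(struct pt_regs *ctx)
{
	return 0;
}

char _license[] SEC("license") = "GPL";
```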
-
- 15 September 2020, 3 commits
-
-
By Magnus Karlsson
Add a quiet option (-Q) that disables the statistics printouts of xdpsock. This is good to have when measuring 0% loss rate performance, which suffers badly if the application is doing printfs.
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/1599726666-8431-4-git-send-email-magnus.karlsson@gmail.com
-
By Magnus Karlsson
Fix a possible deadlock in the l2fwd application in xdpsock that can occur when there is no space in the Tx ring. There are two ways to get the kernel to consume entries in the Tx ring: calling sendto() to make it send packets, and freeing entries from the completion ring, as the kernel will not send a packet if there is no space to add a completion entry in the completion ring. The Tx loop in l2fwd used to only call sendto(); this patch adds cleaning of the completion ring to that loop.
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/1599726666-8431-3-git-send-email-magnus.karlsson@gmail.com
-
By Magnus Karlsson
Fix the sending of a single packet (or a small burst) in xdpsock when executing in copy mode. Currently, the l2fwd application in xdpsock only transmits packets after a batch of them has been received, which can be confusing if you send only one packet and expect it to be returned promptly. Fix this by calling sendto() more often, and add a comment in the code noting that this can be optimized if needed.
Reported-by: Tirthendu Sarkar <tirthendu.sarkar@intel.com>
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/1599726666-8431-2-git-send-email-magnus.karlsson@gmail.com
-
- 04 September 2020, 2 commits
-
-
By Daniel T. Lee
This commit adds the xsk_fwd test file, newly added to samples/bpf, to .gitignore.
Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20200904063434.24963-2-danieltimlee@gmail.com
-
By Daniel T. Lee
Since commit 52109584 ("libbpf: Deprecate notion of BPF program "title" in favor of "section name""), the term "title" has been replaced with "section name" in libbpf. As bpf_program__title() has been deprecated, this commit switches this usage to bpf_program__section_name(). This also resolves the resulting compilation warning.
Fixes: 52109584 ("libbpf: Deprecate notion of BPF program "title" in favor of "section name"")
Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20200904063434.24963-1-danieltimlee@gmail.com
-
- 01 September 2020, 3 commits
-
-
By Weqaar Janjua
The txpush program in the xdpsock sample application is supposed to send out all packets in the umem in a round-robin fashion. The problem is that it only cycled through the first BATCH_SIZE worth of packets. Fix this so that it cycles through all buffers in the umem as intended.
Fixes: 248c7f9c ("samples/bpf: convert xdpsock to use libbpf for AF_XDP access")
Signed-off-by: Weqaar Janjua <weqaar.a.janjua@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Björn Töpel <bjorn.topel@intel.com>
Link: https://lore.kernel.org/bpf/20200828161717.42705-1-weqaar.a.janjua@intel.com
-
By Magnus Karlsson
Optimize the throughput performance of the l2fwd sub-app in the xdpsock sample application by removing a duplicate syscall and increasing the size of the fill ring. The latter needs some further explanation: we recommend setting the fill ring size >= HW RX ring size + AF_XDP RX ring size. Make sure you fill up the fill ring with buffers at regular intervals, and with this setting you will avoid allocation failures in the driver. These are usually quite expensive, since drivers have not been written to assume that allocation failures are common; for regular sockets, kernel-allocated memory is used, which only runs out in OOM situations that should be rare. These two performance optimizations together lead to a 6% improvement for the l2fwd app on my machine.
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/1598619065-1944-1-git-send-email-magnus.karlsson@intel.com
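With libbpf's xsk API the larger fill ring is just a umem configuration choice; a sketch of the sizing rule (the constants are libbpf defaults, and the factor of two stands in for "HW RX ring + AF_XDP RX ring"):

```c
#include <bpf/xsk.h>

/* Fill ring sized to cover both the HW RX ring and the AF_XDP RX ring,
 * so refilling at regular intervals keeps the driver from hitting
 * expensive allocation failures. */
static const struct xsk_umem_config umem_cfg = {
	.fill_size = XSK_RING_PROD__DEFAULT_NUM_DESCS * 2,
	.comp_size = XSK_RING_CONS__DEFAULT_NUM_DESCS,
	.frame_size = XSK_UMEM__DEFAULT_FRAME_SIZE,
	.frame_headroom = XSK_UMEM__DEFAULT_FRAME_HEADROOM,
};
```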
-
By Cristian Dumitrescu
This sample code illustrates packet forwarding between multiple AF_XDP sockets in a multi-threaded environment. All the threads and sockets share a common buffer pool, with each socket having its own private buffer cache. The sockets are created with the xsk_socket__create_shared() function, which allows multiple AF_XDP sockets to share the same UMEM object.

Example 1: Single thread handling two sockets. Packets received from socket A (on top of interface IFA, queue QA) are forwarded to socket B (on top of interface IFB, queue QB) and vice-versa. The thread is affinitized to CPU core C:

  ./xsk_fwd -i IFA -q QA -i IFB -q QB -c C

Example 2: Two threads, each handling two sockets. Packets from socket A are sent to socket B (by thread X), packets from socket B are sent to socket A (by thread X); packets from socket C are sent to socket D (by thread Y), packets from socket D are sent to socket C (by thread Y). The two threads are bound to CPU cores CX and CY:

  ./xsk_fwd -i IFA -q QA -i IFB -q QB -i IFC -q QC -i IFD -q QD -c CX -c CY

Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Björn Töpel <bjorn.topel@intel.com>
Link: https://lore.kernel.org/bpf/1598603189-32145-15-git-send-email-magnus.karlsson@intel.com
-
- 25 August 2020, 3 commits
-
-
By Daniel T. Lee
To address the increasing fragmentation of BPF loader programs, this commit refactors the existing tracepoint tracing programs with the libbpf loader instead of bpf_load.o, which is used in samples/bpf.
- Adding a tracepoint event and attaching a BPF program to it is done through bpf_program__attach().
- Instead of the existing BPF map definitions, the map definitions have been refactored to the new BTF-defined map format.
Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200823085334.9413-4-danieltimlee@gmail.com
-
By Daniel T. Lee
To address the increasing fragmentation of BPF loader programs, this commit refactors the existing kprobe tracing programs with the libbpf loader instead of bpf_load.o, which is used in samples/bpf.
- For kprobe events pointing to system calls, the SYSCALL() macro in trace_common.h is used.
- Adding a kprobe event and attaching a BPF program to it is done through bpf_program__attach().
- Instead of the existing BPF map definitions, the map definitions have been refactored to the new BTF-defined map format.
Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200823085334.9413-3-danieltimlee@gmail.com
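The SYSCALL() macro exists because x86-64 renamed its syscall entry points (see the sys_connect fix further down in this log); trace_common.h amounts to roughly the following sketch:

```c
#include <linux/stringify.h>	/* for __stringify() */

/* Sketch of samples/bpf/trace_common.h: prepend the per-arch syscall
 * wrapper prefix so that SEC(SYSCALL(sys_write)) expands to a section
 * name matching the real kernel symbol, e.g. "kprobe/__x64_sys_write". */
#ifdef __x86_64__
#define SYSCALL(SYS) "kprobe/__x64_" __stringify(SYS)
#else
#define SYSCALL(SYS) "kprobe/" __stringify(SYS)
#endif
```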
-
By Daniel T. Lee
Since commit cc7f641d ("samples: bpf: Refactor BPF map performance test with libbpf") omitted the removal of bpf_load.o from the Makefile, this commit removes the bpf_load.o rule for targets where bpf_load.o is not used.
Fixes: cc7f641d ("samples: bpf: Refactor BPF map performance test with libbpf")
Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200823085334.9413-2-danieltimlee@gmail.com
-
- 24 August 2020, 1 commit
-
-
By Gustavo A. R. Silva
Replace the existing /* fall through */ comments and their variants with the new pseudo-keyword macro fallthrough[1]. Also, remove unnecessary fall-through markings where that is the case.
[1] https://www.kernel.org/doc/html/v5.7/process/deprecated.html?highlight=fallthrough#implicit-switch-case-fall-through
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
-
- 19 August 2020, 1 commit
-
-
By Daniel T. Lee
Since commit f1394b79 ("block: mark blk_account_io_completion static"), the symbol blk_account_io_completion() has been marked static, which makes it no longer possible to attach a kprobe to this function, and some samples are currently broken because of this. As a solution, attach the kprobe events to blk_account_io_done() instead, so the samples behave as before.
Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20200818051641.21724-1-danieltimlee@gmail.com
-
- 22 July 2020, 1 commit
-
-
By Ilya Leoshkevich
A handful of samples and selftests fail to build on s390, because after commit 0ebeea8c ("bpf: Restrict bpf_probe_read{, str}() only to archs where they work") bpf_probe_read is not available anymore. Fix by using bpf_probe_read_kernel.
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200720114806.88823-1-iii@linux.ibm.com
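The conversion is mechanical: same arguments, but with the helper that is explicit about reading kernel memory. An illustrative kprobe in the style of the converted samples (not necessarily one of the files touched by this patch):

```c
#include <linux/ptrace.h>
#include <linux/sched.h>
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

/* Copy a field out of a kernel struct with bpf_probe_read_kernel(),
 * which is available on every architecture, unlike the removed
 * generic bpf_probe_read(). */
SEC("kprobe/__set_task_comm")
int prog(struct pt_regs *ctx)
{
	struct task_struct *tsk = (void *)PT_REGS_PARM1(ctx);
	char comm[TASK_COMM_LEN] = {};

	bpf_probe_read_kernel(comm, sizeof(comm), &tsk->comm);
	return 0;
}

char _license[] SEC("license") = "GPL";
```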
-
- 16 July 2020, 2 commits
-
-
By Lorenzo Bianconi
Extend xdp_redirect_cpu_{usr,kern}.c, adding the possibility to load an XDP program on cpumap entries. The following options have been added:
- mprog-name: cpumap entry program name
- mprog-filename: cpumap entry program filename
- redirect-device: output interface if the cpumap program performs a XDP_REDIRECT to an egress interface
- redirect-map: bpf map used to perform XDP_REDIRECT to an egress interface
- mprog-disable: disable loading the XDP program on cpumap entries
Also add xdp_pass, xdp_drop and xdp_redirect stats accounting.
Co-developed-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/aa5a9a281b9dac425620fdabe82670ffb6bbdb92.1594734381.git.lorenzo@kernel.org
-
By Lorenzo Bianconi
Do not update the xdp_redirect_cpu maps while running the option-parsing loop, but defer it until all available options have been parsed. This is a preliminary patch for passing the name of the program we want to attach to the map entries as a user option.
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Link: https://lore.kernel.org/bpf/95dc46286fd2c609042948e04bb7ae1f5b425538.1594734381.git.lorenzo@kernel.org
-
- 14 July 2020, 1 commit
-
-
By Ciara Loftus
Introduce the --extra-stats (or simply -x) flag to the xdpsock application, which prints additional statistics alongside the regular rx and tx counters. The new statistics report error conditions, e.g. rx ring full, invalid descriptors, etc.
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200708072835.4427-3-ciara.loftus@intel.com
-
- 11 July 2020, 1 commit
-
-
By Wenbo Zhang
BPF_LOG_BUF_SIZE is UINT32_MAX >> 8, so defining an array of that size on the stack causes a stack overflow.
Signed-off-by: Wenbo Zhang <ethercflow@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20200710092035.28919-1-ethercflow@gmail.com
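The usual remedy is to move the verifier log buffer off the stack; a minimal sketch (error handling trimmed):

```c
#include <stdio.h>
#include <stdlib.h>
#include <bpf/bpf.h>	/* BPF_LOG_BUF_SIZE */

int main(void)
{
	/* BPF_LOG_BUF_SIZE is (UINT32_MAX >> 8), roughly 16 MB: far too big
	 * for a local array, so allocate the log buffer on the heap. */
	char *log_buf = malloc(BPF_LOG_BUF_SIZE);

	if (!log_buf)
		return 1;
	/* ... pass log_buf and BPF_LOG_BUF_SIZE to the program-load call ... */
	free(log_buf);
	return 0;
}
```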
-
- 08 July 2020, 3 commits
-
-
By Daniel T. Lee
Previously, in order to set the numa_node attribute at map creation time with libbpf, it was necessary to call bpf_create_map_node() directly (the bpf_load approach), instead of calling bpf_object__load(), which handles everything on its own, including map creation. Because of this, this sample had problems being refactored from bpf_load to libbpf. However, commit 1bdb6c9a ("libbpf: Add a bunch of attribute getters/setters for map definitions") added the numa_node attribute and allowed it to be set on the map. By using libbpf instead of bpf_load, the inner map definition has been explicitly declared in the BTF-defined format, and the elements of the ARRAY_OF_MAPS were also statically specified using the BTF format. For this reason, some logic in fixup_map() was no longer needed and has been changed or removed.
Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20200707184855.30968-4-danieltimlee@gmail.com
-
By Daniel T. Lee
Since commit 646f02ff ("libbpf: Add BTF-defined map-in-map support"), there is a way to define an inner map within a BTF-defined map. Instead of the previous 'inner_map_idx' definition, the structure to be used for the inner map can be referenced directly with the array directive:
  __array(values, struct inner_map)
This commit refactors the map-in-map test program with libbpf by explicitly defining the inner map in the BTF-defined format.
Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20200707184855.30968-3-danieltimlee@gmail.com
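Put together, a BTF-defined outer/inner map pair in this style looks roughly like the following (names and sizes are illustrative, not the sample's exact maps):

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Inner map template: its type information is what __array() references. */
struct inner_map {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, __u32);
	__type(value, __u64);
} inner_map1 SEC(".maps");

/* Outer map: the values member declares the inner map type, and the
 * initializer pre-populates slot 0 with inner_map1 at load time. */
struct {
	__uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS);
	__uint(max_entries, 1);
	__uint(key_size, sizeof(__u32));
	__array(values, struct inner_map);
} outer_map SEC(".maps") = {
	.values = { &inner_map1 },
};
```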
-
By Daniel T. Lee
Currently, BPF programs with kprobe/sys_connect do not work properly.

Commit 34745aed ("samples/bpf: fix kprobe attachment issue on x64") modified the bpf_load behavior for kprobe events on the x64 architecture: if the current kprobe event target starts with "sys_*", the prefix "__x64_" is added to the front of the event. Appending the "__x64_" prefix to kprobe/sys_* events was an appropriate solution to most of the problems caused by commit d5a00528 ("syscalls/core, syscalls/x86: Rename struct pt_regs-based sys_*() to __x64_sys_*()").

However, the sys_connect kprobe event still does not work properly. For the __sys_connect event, parameters can be fetched normally, but for __x64_sys_connect they cannot:

  ffffffff818d3520 <__x64_sys_connect>:
  ffffffff818d3520: e8 fb df 32 00        callq 0xffffffff81c01520 <__fentry__>
  ffffffff818d3525: 48 8b 57 60           movq 96(%rdi), %rdx
  ffffffff818d3529: 48 8b 77 68           movq 104(%rdi), %rsi
  ffffffff818d352d: 48 8b 7f 70           movq 112(%rdi), %rdi
  ffffffff818d3531: e8 1a ff ff ff        callq 0xffffffff818d3450 <__sys_connect>
  ffffffff818d3536: 48 98                 cltq
  ffffffff818d3538: c3                    retq
  ffffffff818d3539: 0f 1f 80 00 00 00 00  nopl (%rax)

As the assembly for __x64_sys_connect shows, the real parameters are loaded into the rdi, rsi and rdx registers from a struct pt_regs (passed in rdi) before __sys_connect is called. Because of this, this commit fixes the sys_connect event by first getting the value of the rdi register and then reading the rdi, rsi and rdx values through offsets based on that value.

Fixes: 34745aed ("samples/bpf: fix kprobe attachment issue on x64")
Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20200707184855.30968-2-danieltimlee@gmail.com
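In BPF C, that double dereference looks roughly like the sketch below (using today's bpf_probe_read_kernel(); argument names are illustrative, not the sample's exact code):

```c
#include <linux/ptrace.h>
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

SEC("kprobe/__x64_sys_connect")
int trace_sys_connect(struct pt_regs *ctx)
{
	/* The x64 syscall wrapper takes a single argument: a pointer to the
	 * struct pt_regs that holds the real syscall arguments. */
	struct pt_regs *real_regs = (struct pt_regs *)PT_REGS_PARM1(ctx);
	void *uservaddr;
	int addrlen;

	bpf_probe_read_kernel(&uservaddr, sizeof(uservaddr),
			      &PT_REGS_PARM2(real_regs));
	bpf_probe_read_kernel(&addrlen, sizeof(addrlen),
			      &PT_REGS_PARM3(real_regs));
	/* ... inspect the sockaddr as the sample did before ... */
	return 0;
}

char _license[] SEC("license") = "GPL";
```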
-
- 16 June 2020, 1 commit
-
-
By Gaurav Singh
Calling memset() on the pointer right after malloc() causes a NULL pointer dereference if the allocation failed. A simple fix is to replace the malloc()/memset() pair with a single call to calloc().
Fixes: 0fca931a ("samples/bpf: program demonstrating access to xdp_rxq_info")
Signed-off-by: Gaurav Singh <gaurav1086@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
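The pattern in question, sketched with an illustrative struct (not the sample's exact types):

```c
#include <stdio.h>
#include <stdlib.h>

struct record { unsigned long processed; unsigned long dropped; };

int main(void)
{
	int nr_cpus = 4;	/* stand-in for the real CPU-count lookup */

	/* Buggy pattern: if malloc() fails, memset() dereferences NULL.
	 *	struct record *rec = malloc(nr_cpus * sizeof(*rec));
	 *	memset(rec, 0, nr_cpus * sizeof(*rec));
	 * Fixed: calloc() zeroes the memory and lets us check for failure first. */
	struct record *rec = calloc(nr_cpus, sizeof(*rec));

	if (!rec) {
		fprintf(stderr, "allocation failed\n");
		return 1;
	}
	free(rec);
	return 0;
}
```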
-
- 19 May 2020, 3 commits
-
-
By Daniel T. Lee
Because the previous two commits replaced the bpf_load implementation of the user programs with libbpf, the corresponding kernel programs' map definitions can be replaced with the new BTF-defined map syntax. This commit only updates the samples that use the libbpf API for loading bpf programs, not those still using bpf_load.
Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20200516040608.1377876-6-danieltimlee@gmail.com
-
By Daniel T. Lee
This commit adds the tracex7 test file (testfile.img), which comes from test_override_return.sh, to .gitignore.
Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20200516040608.1377876-5-danieltimlee@gmail.com
-
By Daniel T. Lee
BPF tail calls use a BPF_MAP_TYPE_PROG_ARRAY map for calling into other BPF programs, and this PROG_ARRAY must be filled prior to use. Currently, samples with a PROG_ARRAY-type map fill the program array with bpf_load. For bpf_load to fill this map, the kernel BPF program must place each program in a section with the specific format <prog_type>/<array_idx> (e.g. SEC("socket/0")). By using libbpf instead of bpf_load, the user program can choose which programs should be added to the PROG_ARRAY. The advantage of this approach is that you can selectively add only the programs you want, rather than adding all of them, and it is much more intuitive than the traditional approach. This commit refactors the user programs that use a PROG_ARRAY-type map to use libbpf instead of bpf_load.
Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20200516040608.1377876-4-danieltimlee@gmail.com
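With libbpf, filling a PROG_ARRAY slot from user space is an explicit map update with the program's fd; a sketch (object, map and program handles are assumed to exist, names are illustrative):

```c
#include <stdio.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>

/* Put the fd of 'prog' into slot 'key' of the tail-call map named map_name. */
static int fill_prog_array(struct bpf_object *obj, const char *map_name,
			   struct bpf_program *prog, __u32 key)
{
	int map_fd = bpf_object__find_map_fd_by_name(obj, map_name);
	int prog_fd = bpf_program__fd(prog);

	if (map_fd < 0 || prog_fd < 0) {
		fprintf(stderr, "missing map or program fd\n");
		return -1;
	}
	return bpf_map_update_elem(map_fd, &key, &prog_fd, BPF_ANY);
}
```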
-