- 13 April 2019, 16 commits
-
-
Committed by Andrey Ignatov
Add a file_pos field to the bpf_sysctl context to read and write the sysctl file position at which the sysctl is being accessed (read or written). The field can be used, for example, to override the whole sysctl value on a write even when sys_write is called by user space with file_pos > 0, or a BPF program may reject such accesses altogether.

Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
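A minimal sketch of how a program might use the new field follows; the section name, return convention, and header locations assume a typical libbpf-style build and are not quoted from the patch.

    /* Hypothetical sketch: reject partial writes (file_pos > 0) to any sysctl
     * reachable from the attached cgroup; reads pass through untouched. */
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("cgroup/sysctl")
    int forbid_partial_writes(struct bpf_sysctl *ctx)
    {
        if (ctx->write && ctx->file_pos != 0)
            return 0;   /* deny: user space called sys_write at an offset > 0 */

        /* Alternatively the program could force the write to start at the
         * beginning of the value: ctx->file_pos = 0; */
        return 1;       /* allow */
    }

    char _license[] SEC("license") = "GPL";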
-
Committed by Andrey Ignatov
Add helpers to work with the new value being written to a sysctl by user space.

bpf_sysctl_get_new_value() copies the value being written to the sysctl into the provided buffer. bpf_sysctl_set_new_value() overrides the new value being written by user space with the one from the provided buffer. The buffer should contain a string representation of the value, similar to what can be seen in /proc/sys/.

Both helpers can be used only on a sysctl write. The file position matters and can be managed by an interface that will be introduced separately. E.g. if user space calls sys_write on a file in /proc/sys/ at file position X, where X > 0, then the value set by bpf_sysctl_set_new_value() will be written starting from X. If the program wants to override the whole value with the specified buffer, the file position has to be set to zero.

Documentation for the new helpers is provided in the bpf.h UAPI.

Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
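A sketch of how the two helpers might be combined on a write path; the exact return-value conventions and buffer-length semantics are not spelled out above, so the error handling below is an assumption.

    /* Hypothetical sketch: inspect what user space is writing and replace the
     * whole value with a fixed string. */
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("cgroup/sysctl")
    int rewrite_new_value(struct bpf_sysctl *ctx)
    {
        char new_val[16] = {};
        const char repl[] = "100";

        if (!ctx->write)
            return 1;   /* the new-value helpers only apply to writes */

        if (bpf_sysctl_get_new_value(ctx, new_val, sizeof(new_val)) < 0)
            return 0;   /* could not read the pending value, deny */

        /* Override the whole value: per the description above, file_pos
         * must be zero so the replacement starts at the beginning. */
        ctx->file_pos = 0;
        if (bpf_sysctl_set_new_value(ctx, repl, sizeof(repl) - 1) < 0)
            return 0;

        return 1;
    }

    char _license[] SEC("license") = "GPL";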
-
Committed by Andrey Ignatov
Add a bpf_sysctl_get_current_value() helper to copy the current sysctl value into a buffer provided by the BPF_PROG_TYPE_CGROUP_SYSCTL program. It provides the same string user space sees when reading the corresponding file in /proc/sys/, including the trailing new line, etc.

Documentation for the new helper is provided in the bpf.h UAPI.

Since the current value is kept in ctl_table->data in a parsed form, ctl_table->proc_handler() is called with write=0 to read that data and convert it to a string. Such a string can later be parsed by a program using helpers that will be introduced separately.

Unfortunately it's not trivial to provide an API to access the parsed data due to the variety of data representations (string, intvec, uintvec, ulongvec, custom structures, even NULL, etc). Instead it's assumed that users know how to handle the specific sysctl they're interested in, and the appropriate helpers can be used.

Since ctl_table->proc_handler() expects a __user buffer, a conversion to __user happens for the kernel-allocated buffer where the value is stored.

Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
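A sketch of the helper in use; the buffer size and the check against the /proc/sys-style string (value plus new line) are illustrative assumptions.

    /* Hypothetical sketch: refuse to change a sysctl whose current value is
     * already "1" (the helper returns the same string /proc/sys would show,
     * including the trailing newline). */
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("cgroup/sysctl")
    int keep_enabled(struct bpf_sysctl *ctx)
    {
        char cur[8] = {};

        if (bpf_sysctl_get_current_value(ctx, cur, sizeof(cur)) < 0)
            return 1;   /* could not read the current value, do not interfere */

        if (ctx->write && cur[0] == '1' && cur[1] == '\n')
            return 0;   /* already enabled, deny the write */

        return 1;
    }

    char _license[] SEC("license") = "GPL";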
-
Committed by Andrey Ignatov
Add a bpf_sysctl_get_name() helper to copy the sysctl name (/proc/sys/ entry) into a buffer provided by the BPF_PROG_TYPE_CGROUP_SYSCTL program. By default the full name (without the /proc/sys/ prefix) is copied, e.g. "net/ipv4/tcp_mem". If the BPF_F_SYSCTL_BASE_NAME flag is set, only the base name is copied, e.g. "tcp_mem".

Documentation for the new helper is provided in the bpf.h UAPI.

Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
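A sketch of a name-based policy; since BPF programs have no libc, the prefix comparison is done by hand, and the buffer size and error handling are assumptions.

    /* Hypothetical sketch: allow access only to sysctls under net/, using the
     * full-name form ("net/ipv4/tcp_mem") described above. */
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("cgroup/sysctl")
    int net_sysctls_only(struct bpf_sysctl *ctx)
    {
        char name[64] = {};

        if (bpf_sysctl_get_name(ctx, name, sizeof(name), 0) < 0)
            return 0;

        /* Compare the "net/" prefix byte by byte. */
        if (name[0] == 'n' && name[1] == 'e' && name[2] == 't' && name[3] == '/')
            return 1;

        return 0;   /* everything outside net/ is denied */
    }

    char _license[] SEC("license") = "GPL";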
-
Committed by Andrey Ignatov
Containerized applications may run as root, and that can create problems for the whole host. Specifically, such applications may change a sysctl and affect applications in other containers. Furthermore, in an existing infrastructure it may not be possible to simply disable writing to sysctls completely; instead, such a process should be gradual, with the ability to log which sysctls are being changed by a container, investigate, limit the set of writable sysctls to the ones currently used (so that new ones can not be changed), and eventually reduce this set to zero.

The patch introduces the new program type BPF_PROG_TYPE_CGROUP_SYSCTL and attach type BPF_CGROUP_SYSCTL to solve these problems on a cgroup basis.

The new program type has access to the following minimal context:

    struct bpf_sysctl {
        __u32 write;
    };

where @write indicates whether the sysctl is being read (= 0) or written (= 1). Helpers to access the sysctl name and value will be introduced separately.

The BPF_CGROUP_SYSCTL attach point is added to the sysctl code right before control is passed to ctl_table->proc_handler so that the BPF program can either allow or deny access to the sysctl.

Suggested-by: Roman Gushchin <guro@fb.com>
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
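A sketch of a program that uses only this minimal context to implement the gradual-lockdown step described above (log, then deny). The availability of bpf_trace_printk() from this program type is an assumption for illustration.

    /* Hypothetical sketch: log and deny every sysctl write issued from the
     * attached cgroup, while leaving reads untouched. */
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("cgroup/sysctl")
    int deny_sysctl_writes(struct bpf_sysctl *ctx)
    {
        if (ctx->write) {
            const char msg[] = "sysctl write blocked in this cgroup\n";

            bpf_trace_printk(msg, sizeof(msg));
            return 0;   /* deny: ctl_table->proc_handler is never reached */
        }
        return 1;       /* allow reads */
    }

    char _license[] SEC("license") = "GPL";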
-
Committed by Andrey Ignatov
Currently kernel/bpf/cgroup.c contains only one program type and one proto function, cgroup_dev_func_proto(). It'd be useful to have a base proto function that can be reused for the new cgroup-bpf program types coming soon. Introduce cgroup_base_func_proto().

Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
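The refactor is mechanical; the sketch below shows the usual shape of such a shared proto function. The exact set of helpers handled in the base function is an assumption and not taken from the patch.

    /* Hypothetical sketch: helpers common to all cgroup-bpf program types live
     * in one base function, and per-type proto functions defer to it. */
    static const struct bpf_func_proto *
    cgroup_base_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
    {
        switch (func_id) {
        case BPF_FUNC_map_lookup_elem:
            return &bpf_map_lookup_elem_proto;
        case BPF_FUNC_map_update_elem:
            return &bpf_map_update_elem_proto;
        case BPF_FUNC_map_delete_elem:
            return &bpf_map_delete_elem_proto;
        case BPF_FUNC_get_local_storage:
            return &bpf_get_local_storage_proto;
        case BPF_FUNC_trace_printk:
            if (capable(CAP_SYS_ADMIN))
                return bpf_get_trace_printk_proto();
            return NULL;
        default:
            return NULL;
        }
    }

    /* A type-specific proto function then only adds its own helpers and
     * defers everything else to the base:
     *
     *	static const struct bpf_func_proto *
     *	cgroup_dev_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
     *	{
     *		return cgroup_base_func_proto(func_id, prog);
     *	}
     */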
-
Committed by David S. Miller
Ursula Braun says:

====================
net/smc: patches 2019-04-12

Here are patches for SMC:
* patch 1 improves the behavior of non-blocking connect
* patches 2, 3, 5, 7, and 8 improve connecting return codes
* patches 4 and 6 are cleanups without functional change
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Karsten Graul
Rework smc_conn_create() to always return a valid DECLINE reason code. This removes the need to translate the return codes in 4 different places and makes it easy to add more detailed return codes by changing smc_conn_create() only.

Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Karsten Graul
Rework smc_listen_work() to provide improved reason codes when an SMC connection is declined. This allows better debugging on the user side. This also adds 3 more detailed reason codes in smc_clc.h to indicate what type of device was not found (ISM or RDMA or both), or whether ISM cannot talk to the peer.

Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Karsten Graul
In smc_listen_work() the variables rc and reason_code are defined with the same meaning. Eliminate reason_code in favor of the shorter name rc. No functional changes.

Rename the functions smc_check_ism() and smc_check_rdma() into smc_find_ism_device() and smc_find_rdma_device() to make their purpose clearer. No functional changes.

Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Karsten Graul
The vlan_id of the underlying CLC socket was retrieved twice during processing of the listen handshake. Change this to get the vlan id once in connect and in listen processing, and reuse the id. Also add a new CLC DECLINE return code for the case when the retrieval of the vlan id fails.

Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Karsten Graul
During initialization of an SMC socket a lot of function parameters need to be passed down the function call path. Consolidate the parameters in a helper struct so there are few enough parameters left that they can all be passed in registers.

Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
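A sketch of the consolidation pattern; the struct and field names below are illustrative and not necessarily the ones the patch uses.

    /* Hypothetical sketch: gather the per-connection init parameters in one
     * struct so the connect/listen helpers take a single pointer. */
    struct smc_init_info {
        u8                      is_smcd;   /* SMC-D (ISM) vs SMC-R (RDMA) path */
        unsigned short          vlan_id;   /* VLAN of the underlying CLC socket */
        struct smc_ib_device    *ib_dev;   /* chosen RDMA device, if any */
        u8                      ib_port;
        struct smcd_dev         *ism_dev;  /* chosen ISM device, if any */
    };

    /* Callers fill it once and pass it down the path, e.g.:
     *
     *	struct smc_init_info ini = { 0 };
     *
     *	rc = smc_find_rdma_device(new_smc, &ini);
     *	if (!rc)
     *		rc = smc_conn_create(new_smc, &ini);
     */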
-
Committed by Karsten Graul
The check for a matching IP prefix and subnet was only done for SMC-R in smc_listen_rdma_check(), but not when an SMC-D connection was possible. Rename the function to smc_listen_prfx_check() and move its call to a place where it is called for both SMC variants. Also add a new CLC DECLINE reason for the case when the IP prefix or subnet check fails, so the reason for a failing SMC connection can be determined more easily.

Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Karsten Graul
Correct the CLC decline reason codes for internal problems so they do not have the sign bit set; negative reason codes are interpreted as not eligible for TCP fallback.

Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Ursula Braun
For nonblocking sockets, move the kernel_connect() call from the connect worker into the initial smc_connect part so that kernel_connect() errors other than -EINPROGRESS are returned to user space.

Reviewed-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Dongli Zhang
During coredump analysis, it is not easy to obtain the address of backend_info in xen-netback. So far there are two ways to obtain backend_info:

1. Do what xenbus_device_find() does against the vmcore to find the xenbus_device and then derive it from dev_get_drvdata().
2. Extract backend_info from the call stack of xenwatch (e.g., netback_remove() or frontend_changed()).

This patch adds a reference from xenvif to backend_info so that it is much easier to obtain backend_info during coredump analysis.

Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
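A sketch of the back-pointer being described; where exactly the field sits in struct xenvif and where it is assigned are assumptions for illustration.

    /* Hypothetical sketch: a vmcore reader that has a struct xenvif (for
     * example via netdev_priv() on the vif's net_device) can now follow one
     * pointer to reach the backend_info. */
    struct xenvif {
        /* ... existing fields ... */
        struct backend_info *be;    /* back-pointer for coredump analysis */
    };

    /* Assigned once when the backend and vif are wired together, e.g.:
     *
     *	be->vif = vif;
     *	vif->be = be;
     */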
-
- 12 April 2019, 24 commits
-
-
Committed by David S. Miller
David Miller says:

====================
SCTP: Event skb list overhaul.

This patch series eliminates explicit references to the skb list implementation via skb->prev dereferences. The approach used is to pass a non-empty skb list around instead of an event skb object which may or may not be on a list.

I'd like to thank Marcelo Leitner, Xin Long, and Neil Horman for reviewing previous versions of this series. Testing would be very much appreciated, in addition to the review of course.

v4 --> v5: Rebase to net-next
v3 --> v4: Fix the logic in patch #4 so that we don't miss cases where we should add the event to the on-stack temp list.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David Miller
Now the SKB list implementation assumption can be removed. And now that we know that the list head is always non-NULL we can remove the code blocks dealing with that as well.

Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David Miller
Pass this, instead of an event. Then everything trickles down and we always have the events on a non-empty list. We then need a list-creating stub to place into .enqueue_event for sctp_stream_interleave_1.

Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David Miller
This way we can make sure events sent to sctp_ulpq_tail_event() are on a list as well. Now all such code paths are fully covered.

Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David Miller
This way we can simplify the logic and remove assumptions about the implementation of skb lists.

Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David Miller
Inside the loop, we always start with a non-NULL event.

Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Vlad Buslov
Fix net reference counting in fl_change() and remove the redundant call to tcf_exts_get_net() from __fl_delete(). __fl_put() already tries to get net before releasing exts and deallocating a filter, so this code caused the flower classifier to obtain net twice per filter being deleted.

The implementation of __fl_delete() called tcf_exts_get_net() to pass its result as the 'async' flag to fl_mask_put(). However, the 'async' flag is redundant and only complicates the fl_mask_put() implementation. This functionality seems to have been copied from the filter cleanup code, where it was added by Cong with the following explanation:

    This patchset tries to fix the race between call_rcu() and
    cleanup_net() again. Without holding the netns refcnt the
    tc_action_net_exit() in netns workqueue could be called before
    filter destroy works in tc filter workqueue. This patchset
    moves the netns refcnt from tc actions to tcf_exts, without
    breaking per-netns tc actions.

This doesn't apply to the flower mask, which doesn't call any tc action code during cleanup. Simplify fl_mask_put() by removing the flag parameter and always using tcf_queue_work() to free mask objects.

Fixes: 06177558 ("net: sched: flower: introduce reference counting for filters")
Fixes: 1f17f774 ("net: sched: flower: insert filter to ht before offloading it to hw")
Fixes: 05cd271f ("cls_flower: Support multiple masks per priority")
Reported-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David Ahern
The pmtu.sh script runs a number of tests and dumps a summary of pass/fail. If a test fails, it is nearly impossible to debug why. For example:

    TEST: ipv6: PMTU exceptions    [FAIL]

There are a lot of commands run behind the scenes for this test. Which one is failing?

Add a VERBOSE option to show the commands that are run and any output from those commands. Add a PAUSE_ON_FAIL option to halt the script if a test fails, allowing users to poke around with the setup in the failed state.

In the process, rename tracing to TRACING and move the declaration to the top with the new variables.

Signed-off-by: David Ahern <dsahern@gmail.com>
Reviewed-by: Stefano Brivio <sbrivio@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller (pulled from git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next)
Daniel Borkmann says:

====================
pull-request: bpf-next 2019-04-12

The following pull-request contains BPF updates for your *net-next* tree. The main changes are:

1) Improve BPF verifier scalability for large programs through two optimizations: i) remove verifier states that are not useful in pruning, ii) stop walking parentage chain once first LIVE_READ is seen. Combined gives approx 20x speedup. Increase limits for accepting large programs under root, and add various stress tests, from Alexei.

2) Implement global data support in BPF. This enables static global variables for .data, .rodata and .bss sections to be properly handled which allows for more natural program development. This also opens up the possibility to optimize program workflow by compiling ELFs only once and later only rewriting section data before reload, from Daniel and with test cases and libbpf refactoring from Joe.

3) Add config option to generate BTF type info for vmlinux as part of the kernel build process. DWARF debug info is converted via pahole to BTF. Latter relies on libbpf and makes use of BTF deduplication algorithm which results in 100x savings compared to DWARF data. Resulting .BTF section is typically about 2MB in size, from Andrii.

4) Add BPF verifier support for stack access with variable offset from helpers and add various test cases along with it, from Andrey.

5) Extend bpf_skb_adjust_room() growth BPF helper to mark inner MAC header so that L2 encapsulation can be used for tc tunnels, from Alan.

6) Add support for input __sk_buff context in BPF_PROG_TEST_RUN so that users can define a subset of allowed __sk_buff fields that get fed into the test program, from Stanislav.

7) Add bpf fs multi-dimensional array tests for BTF test suite and fix up various UBSAN warnings in bpftool, from Yonghong.

8) Generate a pkg-config file for libbpf, from Luca.

9) Dump program's BTF id in bpftool, from Prashant.

10) libbpf fix to use smaller BPF log buffer size for AF_XDP's XDP program, from Magnus.

11) kallsyms related fixes for the case when symbols are not present in BPF selftests and samples, from Daniel
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Stanislav Fomichev
This should allow us to later extend BPF_PROG_TEST_RUN for the non-skb case and be sure that nobody is erroneously setting ctx_{in,out}.

Fixes: b0b9395d ("bpf: support input __sk_buff context in BPF_PROG_TEST_RUN")
Reported-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Committed by Daniel Borkmann
Add definitions for smp_rmb(), smp_wmb(), and smp_mb() to the tools include infrastructure: this patch adds implementations for x86-64 and arm64, and falls back, as it currently does, for other archs which do not have them implemented at this point. The x86-64 one uses a lock + add combination for smp_mb() with an address below the red zone.

This is on top of 09d62154 ("tools, perf: add and use optimized ring_buffer_{read_head, write_tail} helpers"), which didn't touch smp_* barrier implementations. Magnus recently rightfully reported however that the latter on x86-64 still wrongly falls back to sfence, lfence and mfence respectively, so fix that for applications under tools making use of these to avoid such ugly surprises. The main header under tools (include/asm/barrier.h) will in that case not select the fallback implementation.

Reported-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
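The "lock + add below the red zone" variant typically looks like the sketch below; the exact offset and the arm64 instruction choices are assumptions here, not quoted from the patch.

    /* Hypothetical sketch of the per-arch definitions: on x86-64 a lock-prefixed
     * add to an address below the 128-byte red zone acts as a full barrier,
     * while loads and stores are already ordered, so smp_rmb()/smp_wmb() only
     * need a compiler barrier. arm64 uses dmb with the matching domain. */
    #define barrier()   asm volatile("" ::: "memory")

    #if defined(__x86_64__)
    #define smp_mb()    asm volatile("lock; addl $0,-132(%%rsp)" ::: "memory", "cc")
    #define smp_rmb()   barrier()
    #define smp_wmb()   barrier()
    #elif defined(__aarch64__)
    #define smp_mb()    asm volatile("dmb ish" ::: "memory")
    #define smp_rmb()   asm volatile("dmb ishld" ::: "memory")
    #define smp_wmb()   asm volatile("dmb ishst" ::: "memory")
    #endif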
-
Committed by David S. Miller
David Ahern says:

====================
ipv6: Refactor nexthop selection helpers during a fib lookup

IPv6 has a fib6_nh embedded within each fib6_info and a separate fib6_info for each path in a multipath route. A side effect is that a fib6_info is passed all the way down the stack when selecting a path on a fib lookup. Refactor the fib lookup functions and associated helper functions to take a fib6_nh when appropriate to enable IPv6 to work with nexthop objects where the fib6_nh is not directly part of a fib entry.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David Ahern
Move the nexthop evaluation of a fib entry to a helper that can be leveraged for each fib6_nh in a multipath nexthop object. In the move, 'continue' statements mean the helper returns false (the loop should continue) and 'break' means it returns true (the entry of interest was found).

Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
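A sketch of the continue/break-to-boolean conversion being described; the helper name, its argument list, and the check shown are hypothetical stand-ins for the real evaluation logic.

    /* Hypothetical sketch: evaluate one fib6_nh and tell the caller's loop
     * whether to stop (true, the old 'break') or keep going (false, the old
     * 'continue'). The caller can then run the same helper over every fib6_nh
     * of a multipath nexthop object. */
    static bool fib6_nh_matches(const struct fib6_nh *nh, int oif)
    {
        if (oif && nh->fib_nh_dev->ifindex != oif)
            return false;   /* was: continue */

        return true;        /* was: break - found the entry of interest */
    }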
-
Committed by David Ahern
Move the device and gateway checks in the fib6_next loop to a helper that can be called per fib6_nh entry.

Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David Ahern
Move the siblings and fib6_multipath_select handling after the null-entry check, since a null entry can not have siblings.

Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David Ahern
Clean up the fib6_null_entry handling in ip6_pol_route_lookup. rt6_device_match can return fib6_null_entry, but fib6_multipath_select can not. Consolidate the fib6_null_entry handling, and on the final null_entry check set rt and goto out - no need to defer to a second check after rt6_find_cached_rt.

Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David Ahern
find_rr_leaf has 3 loops over the fib entries, each calling find_match. The loops are very similar, with differences in the start point and whether the metric is evaluated:

1. start at rr_head, no extra loop compare, check the fib metric
2. start at leaf, compare rt against rr_head, check the metric
3. start at cont (potential saved point from the earlier loops), no extra loop compare, no metric check

Create 1 loop that is called 3 different times. This will make a later change with multipath nexthop objects much simpler.

Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
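A sketch of the shape such a consolidated loop can take; the function name, parameters, and the way the three call sites differ (start point, stop point, metric check) are assumptions drawn only from the description above.

    /* Hypothetical sketch: one worker walks the fib6_next chain from an
     * arbitrary start point, optionally stopping when the metric changes, and
     * funnels every candidate through find_match(). The three original loops
     * become three calls with different start/metric arguments. */
    static void __find_rr_leaf(struct fib6_info *start, struct fib6_info *nomatch,
                               u32 metric, struct fib6_info **match,
                               struct fib6_info **cont, int oif, int strict,
                               bool *do_rr, int *mpri)
    {
        struct fib6_info *f6i;

        for (f6i = start; f6i && f6i != nomatch;
             f6i = rcu_dereference(f6i->fib6_next)) {
            if (metric && f6i->fib6_metric != metric) {
                *cont = f6i;    /* remember where to resume later */
                return;
            }
            *match = find_match(f6i, oif, strict, mpri, *match, do_rr);
        }
    }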
-
Committed by David Ahern
find_match primarily needs a fib6_nh (and fib6_flags, which it passes through to rt6_score_route). Move fib6_check_expired up to the call sites so find_match is only called for relevant entries. Remove the match argument, which is mostly a pass-through, and use the boolean return to decide whether match gets set in the call sites.

The end result is a helper that can be called per fib6_nh struct, which is needed once fib entries reference nexthop objects that have more than one fib6_nh.

Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David Ahern
rt6_score_route only needs the fib6_flags and nexthop data. Change it accordingly. Allows re-use later for nexthop based fib6_nh.

Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David Ahern
rt6_probe sends probes for gateways in a nexthop. As such, it really depends on a fib6_nh, not a fib entry. Move last_probe to fib6_nh and update rt6_probe to take a fib6_nh struct.

Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David Ahern
rt6_check_dev is a simple helper with only 1 caller. Fold the code into rt6_score_route.

Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David Ahern
Change rt6_check_neigh to take a fib6_nh instead of a fib entry. Move the check on fib_flags and whether the nexthop has a gateway up to the one caller. Remove the inline from the definition as well; it is not necessary.

Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Colin Ian King
The zero namelen check is redundant as it has already been checked for zero at the start of the function. Remove the redundant check.

Addresses-Coverity: ("Logically Dead Code")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Daniel Borkmann
Alan Maguire says:

====================
Extend bpf_skb_adjust_room growth to mark the inner MAC header so that L2 encapsulation can be used for tc tunnels.

Patch #1 extends the existing test_tc_tunnel to support UDP encapsulation; later we want to be able to test MPLS over UDP and MPLS over GRE encapsulation.

Patch #2 adds the BPF_F_ADJ_ROOM_ENCAP_L2(len) macro, which allows specification of the inner mac length. Other approaches were explored prior to taking this approach. Specifically, I tried automatically computing the inner mac length on the basis of the specified flags (so inner maclen for GRE/IPv4 encap is the len_diff specified to bpf_skb_adjust_room minus GRE + IPv4 header length, for example). The problem with this is that we don't know for sure what form of GRE/UDP header we have; is it a full GRE header, or is it a FOU UDP header or a generic UDP encap header? My fear here was we'd end up with an explosion of flags. The other approach tried was to support inner L2 header marking as a separate room adjustment, i.e. adjust for L3/L4 encap, then call bpf_skb_adjust_room for L2 encap. This can be made to work, but because it imposed an order on operations, it felt a bit clunky.

Patch #3 syncs tools/ bpf.h.

Patch #4 extends the tests again to support MPLSoverGRE, MPLSoverUDP, and transparent ethernet bridging (TEB) where the inner L2 header is an ethernet header. Testing of BPF encap against tunnels is done for cases where configuration of such tunnels is possible (MPLSoverGRE[6], MPLSoverUDP, gre[6]tap), and skipped otherwise. Testing of BPF encap/decap is always carried out.

Changes since v2:
- updated tools/testing/selftest/bpf/config with FOU/MPLS CONFIG variables (patches 1, 4)
- reduced noise in patch 1 by avoiding unnecessary movement of code
- eliminated inner_mac variable in bpf_skb_net_grow (patch 2)

Changes since v1:
- fixed formatting of commit references
- BPF_F_ADJ_ROOM_FIXED_GSO flag enabled on all variants (patch 1)
- fixed fou6 options for UDP encap; checksum errors observed were due to the fact the fou6 tunnel was not set up with correct ipproto options (41 -6). 0 checksums work fine (patch 1)
- added definitions for mask and shift used in setting L2 length (patch 2)
- allow udp encap with fixed GSO (patch 2)
- changed "elen" to "l2_len" to be more descriptive (patch 4)
====================

Acked-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
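The "mask and shift" mentioned in the v1 changes suggest the macro packs the inner L2 length into the upper bits of the flags argument. The sketch below shows one way a tc program could use it together with the existing encapsulation flags; the shift/mask values, the section name, and the 4-byte basic GRE header size are assumptions for illustration.

    /* Hypothetical sketch: grow room at the MAC layer for outer IPv4 + GRE plus
     * an inner Ethernet header (TEB), telling the helper how long the inner L2
     * header is via BPF_F_ADJ_ROOM_ENCAP_L2(). */
    #include <linux/bpf.h>
    #include <linux/if_ether.h>
    #include <linux/ip.h>
    #include <linux/pkt_cls.h>
    #include <bpf/bpf_helpers.h>

    #ifndef BPF_F_ADJ_ROOM_ENCAP_L2
    #define BPF_F_ADJ_ROOM_ENCAP_L2_MASK    0xff
    #define BPF_F_ADJ_ROOM_ENCAP_L2_SHIFT   56
    #define BPF_F_ADJ_ROOM_ENCAP_L2(len)    (((__u64)(len) & \
                                              BPF_F_ADJ_ROOM_ENCAP_L2_MASK) << \
                                             BPF_F_ADJ_ROOM_ENCAP_L2_SHIFT)
    #endif

    SEC("tc")
    int encap_gre_teb(struct __sk_buff *skb)
    {
        __u64 flags = BPF_F_ADJ_ROOM_ENCAP_L3_IPV4 |
                      BPF_F_ADJ_ROOM_ENCAP_L4_GRE |
                      BPF_F_ADJ_ROOM_ENCAP_L2(ETH_HLEN);
        __u32 len = sizeof(struct iphdr) + 4 /* basic GRE header */ + ETH_HLEN;

        if (bpf_skb_adjust_room(skb, len, BPF_ADJ_ROOM_MAC, flags))
            return TC_ACT_SHOT;

        /* The program would now fill the newly created room with the outer
         * IPv4/GRE headers and the inner Ethernet header. */
        return TC_ACT_OK;
    }

    char _license[] SEC("license") = "GPL";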
-