- 07 Sep 2022, 26 commits
-
-
Committed by Lior Nahmanson
Add a new namespace for MACsec RX flows. Encrypted MACsec packets should first be decrypted and stripped of the MACsec header, and only then continue through the kernel's steering pipeline. Signed-off-by: Lior Nahmanson <liorna@nvidia.com> Reviewed-by: Raed Salem <raeds@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Lior Nahmanson
Add support for Connect-X MACsec offload Rx SA & SC commands: add, update and delete. SCs are created on demand, are not limited in number, and are unique per SCI. Each Rx SA must be associated with an Rx SC according to its SCI. Follow-up patches will implement the Rx steering. Signed-off-by: Lior Nahmanson <liorna@nvidia.com> Reviewed-by: Raed Salem <raeds@nvidia.com> Signed-off-by: Raed Salem <raeds@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Lior Nahmanson
The MACsec driver marks Tx packets for device offload using a dedicated skb_metadata_dst which holds a 64-bit SCI number. A previously set rule will match on this number so the correct SA is used for the MACsec operation. Since the device driver can only provide 32 bits of metadata to flow tables, a mapping from the 64-bit SCI to a 32-bit marker (id) is needed. This is achieved by allocating a unique 32-bit flow id in the control path and using a hash table to map the 64-bit SCI to that unique id in the datapath. Signed-off-by: Lior Nahmanson <liorna@nvidia.com> Reviewed-by: Raed Salem <raeds@nvidia.com> Signed-off-by: Raed Salem <raeds@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
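A minimal sketch of such a 64-bit SCI to 32-bit flow id mapping using the kernel's generic hashtable helpers; the struct and function names here are hypothetical illustrations, not the actual mlx5 code:

    #include <linux/hashtable.h>
    #include <linux/slab.h>

    /* Hypothetical entry tying a 64-bit SCI to the 32-bit flow id that
     * the control path programmed into the steering rule's metadata. */
    struct sci_fs_id {
            struct hlist_node hlist;
            u64 sci;
            u32 fs_id;
    };

    static DEFINE_HASHTABLE(sci_hash, 8);  /* 256 buckets */

    /* Control path: record the unique flow id chosen for this SCI. */
    static int sci_map_add(u64 sci, u32 fs_id)
    {
            struct sci_fs_id *e = kzalloc(sizeof(*e), GFP_KERNEL);

            if (!e)
                    return -ENOMEM;
            e->sci = sci;
            e->fs_id = fs_id;
            hash_add(sci_hash, &e->hlist, sci);
            return 0;
    }

    /* Datapath: translate the 64-bit SCI into its 32-bit flow id. */
    static bool sci_map_lookup(u64 sci, u32 *fs_id)
    {
            struct sci_fs_id *e;

            hash_for_each_possible(sci_hash, e, hlist, sci) {
                    if (e->sci == sci) {
                            *fs_id = e->fs_id;
                            return true;
                    }
            }
            return false;
    }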
-
Committed by Lior Nahmanson
Tx flow steering consists of two flow tables (FTs). The first FT (crypto table) has two fixed rules: one default miss rule so non-MACsec-offloaded packets bypass the MACsec tables, and another rule (matched on ethertype) to make sure that MACsec key exchange (MKE) traffic passes unencrypted as expected. For each new MACsec offload flow, a new MACsec rule is added. This rule matches on metadata_reg_a (which contains the id of the flow) and invokes the MACsec offload action on match. The second FT (check table) has two fixed rules: one rule for verifying that the previous offload actions finished successfully, in which case the packet should be transmitted, and a default rule for dropping packets that failed the offload actions. The MACsec FTs are created on demand when the first MACsec rule is added and destroyed when the last MACsec rule is deleted. Signed-off-by: Lior Nahmanson <liorna@nvidia.com> Reviewed-by: Raed Salem <raeds@nvidia.com> Signed-off-by: Raed Salem <raeds@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Lior Nahmanson
Rename the EGRESS_KERNEL namespace to EGRESS_IPSEC and add a new namespace for MACsec TX. This namespace should be the last namespace for transmitted packets. Signed-off-by: Lior Nahmanson <liorna@nvidia.com> Reviewed-by: Raed Salem <raeds@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Lior Nahmanson
This patch adds support for Connect-X MACsec offload Tx SA commands: add, update and delete. In Connect-X MACsec, a Security Association (SA) is added or deleted by allocating a HW context for an encryption/decryption key and a HW context for a matching SA (MACsec object). When a new SA is added: - Use a separate crypto key HW context. - Create a separate MACsec context in HW to hold the SA properties. Introduce a new compilation flag, MLX5_EN_MACSEC, for this. Follow-up patches will implement the Tx steering. Signed-off-by: Lior Nahmanson <liorna@nvidia.com> Reviewed-by: Raed Salem <raeds@nvidia.com> Signed-off-by: Raed Salem <raeds@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Lior Nahmanson
Add MACsec offload related IFC structs, layouts and enumerations. Signed-off-by: Lior Nahmanson <liorna@nvidia.com> Reviewed-by: Raed Salem <raeds@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Lior Nahmanson
In order to support MACsec offload (and possibly other crypto features in the future), generalize the flow action parameters/defines to be usable by crypto offloads other than IPsec. The following changes were made: the ipsec_obj_id field in the flow action context was changed to crypto_obj_id, and a new crypto_type field was introduced, where IPsec is the default zero type for backward compatibility. Action ipsec_decrypt was changed to crypto_decrypt, and action ipsec_encrypt was changed to crypto_encrypt. The IPsec offload code was updated accordingly for backward compatibility. Signed-off-by: Lior Nahmanson <liorna@nvidia.com> Reviewed-by: Raed Salem <raeds@nvidia.com> Signed-off-by: Raed Salem <raeds@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Lior Nahmanson
esp_id is no longer in use. Signed-off-by: Lior Nahmanson <liorna@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Lior Nahmanson
Move some MACsec infrastructure, such as defines and functions, in order to avoid code duplication for future drivers which implement MACsec offload. Signed-off-by: Lior Nahmanson <liorna@nvidia.com> Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Reviewed-by: Ben Ben-Ishay <benishay@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Lior Nahmanson
As with the Tx changes, if more than one MACsec device has the same MAC address as the packet's destination MAC, the packet will be forwarded only to that device and not necessarily to the desired one. Offloading device drivers will mark offloaded MACsec SKBs with the corresponding SCI in the skb_metadata_dst, so the MACsec rx handler knows to which port to divert those SKBs, instead of wrongly relying solely on destination MAC address comparison. Signed-off-by: Lior Nahmanson <liorna@nvidia.com> Reviewed-by: Raed Salem <raeds@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Lior Nahmanson
In the current MACsec offload implementation, MACsec interfaces share the same MAC address by default. Therefore, the HW can't distinguish which MACsec interface the traffic originated from. The MACsec stack will use skb_metadata_dst to store the SCI value, which is unique per MACsec interface; the skb_metadata_dst will be used by the offloading device driver to associate the SKB with the corresponding offloaded interface (SCI). Signed-off-by: Lior Nahmanson <liorna@nvidia.com> Reviewed-by: Raed Salem <raeds@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
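A rough sketch of both halves of that contract, based on the METADATA_MACSEC metadata_dst this series introduces; allocation, refcounting and error handling are elided, and the helper names are illustrative assumptions:

    #include <net/dst_metadata.h>
    #include <net/macsec.h>

    /* Tx side (MACsec stack): tag the skb with the interface's SCI via a
     * metadata_dst assumed to have been pre-allocated once per interface
     * with metadata_dst_alloc(0, METADATA_MACSEC, GFP_KERNEL). */
    static void macsec_tag_skb(struct sk_buff *skb,
                               struct metadata_dst *md_dst, sci_t sci)
    {
            md_dst->u.macsec_info.sci = sci;
            skb_dst_set(skb, dst_clone(&md_dst->dst));
    }

    /* Driver side: recover the SCI to pick the matching offloaded
     * interface instead of comparing destination MAC addresses. */
    static sci_t macsec_skb_sci(struct sk_buff *skb)
    {
            struct metadata_dst *md_dst = skb_metadata_dst(skb);

            return md_dst ? md_dst->u.macsec_info.sci : 0;
    }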
-
Committed by David S. Miller
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Florian Westphal
Now that nla_policy allows range checks for big-endian data, make use of this to reject out-of-range attributes. At this time, the reject happens later, from the init or select_ops callbacks, but that is prone to errors. In the future, new attributes can be handled via NLA_POLICY_MAX_BE and existing ones can be converted one by one. Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Florian Westphal
netlink allows specifying allowed ranges for integer types. Unfortunately, nfnetlink passes integers in big endian, so the existing NLA_POLICY_MAX() cannot be used. At the moment, nfnetlink users, such as nf_tables, need to resort to programmatic checking via helpers such as nft_parse_u32_check(). This is both cumbersome and error prone. This adds NLA_POLICY_MAX_BE, which adds range check support for BE16, BE32 and BE64 integers. Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: David S. Miller <davem@davemloft.net>
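A hedged usage sketch: the attribute names and bound below are made up for illustration, but the shape follows how a big-endian range check would be declared in an nla_policy table:

    #include <net/netlink.h>

    static const struct nla_policy example_policy[EXAMPLE_ATTR_MAX + 1] = {
            /* The core netlink parser now rejects the attribute when the
             * big-endian 32-bit payload exceeds 255, removing the need
             * for a later nft_parse_u32_check() call in an init callback. */
            [EXAMPLE_ATTR_VALUE] = NLA_POLICY_MAX_BE(NLA_U32, 255),
    };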
-
Committed by David S. Miller
Edward Cree says:

====================
sfc: add support for PTP over IPv6 and 802.3

Most recent cards (8000 series and newer) have enough hardware support for this, but it was not enabled in the driver. The transmission of PTP packets over these protocols was already added in commit bd4a2697 ("sfc: use hardware tx timestamps for more than PTP"), but receiving them was still unsupported, so synchronization didn't happen. These patches add support for timestamping received packets over IPv6/UDP and IEEE 802.3.

v2: fixed weird indentation in efx_ptp_init_filter
v3: fixed a bug caused by the usage of htons in the PTP_EVENT_PORT definition. It was used in more places where htons was applied too, so using it twice left the value in host order again. I didn't detect it in my tests because it only affects timestamping through the MC, but the model I used does it through the MAC. Detected by kernel test robot <lkp@intel.com>
v4: removed `inline` specifiers from 2 local functions
v5: restored a deleted comment with a useful explanation about packet reordering. Deleted useless whitespace.
====================

Reviewed-by: Edward Cree <ecree.xilinx@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Íñigo Huguet
The previous patch added support for PTP over IPv6/UDP (only for the 8000 series and newer), and this one adds support for PTP over 802.3. Tested: sync as master and as slave is correct with ptp4l. PTP over IPv4 and IPv6 still works fine. Suggested-by: Edward Cree <ecree.xilinx@gmail.com> Signed-off-by: Íñigo Huguet <ihuguet@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Íñigo Huguet
commit bd4a2697 ("sfc: use hardware tx timestamps for more than PTP") added support for hardware timestamping on TX for cards of the 8000 series and newer, in an effort to support transports other than IPv4/UDP. However, timestamping was still not working on RX for these other transports. This patch adds support for PTP over IPv6/UDP. Tested: sync as master and as slave is correct using ptp4l from the linuxptp package, both with IPv4 and IPv6. Suggested-by: Edward Cree <ecree.xilinx@gmail.com> Signed-off-by: Íñigo Huguet <ihuguet@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Íñigo Huguet
In preparation for the support of PTP over IPv6/UDP and Ethernet in the next patches, allow a more flexible way of adding and removing RX filters for PTP. Right now, only 2 filters are allowed, which are the ones needed for PTP over IPv4/UDP. Signed-off-by: Íñigo Huguet <ihuguet@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Jerry Ray
Add support for the LAN9354 device by allowing it to use the LAN9303 DSA driver. These devices have the same underlying access and control methods, and from a feature-set point of view the LAN9354 is a superset of the LAN9303. The MDIO access method has been tested on a SAMA5D3-EDS board with a LAN9354 RMII daughter card. While the SPI access method should also be the same, it has not been tested and as such is not included at this time. Signed-off-by: Jerry Ray <jerry.ray@microchip.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Jerry Ray
Add an initial BYTE_ORDER read to sync the 32-bit accesses over the 16-bit mdio bus, to improve driver robustness. The lan9303 expects two mdio read transactions back-to-back to read a 32-bit register. The first read transaction causes the other half of the 32-bit register to get latched. The subsequent read returns the latched second half of the 32-bit read. The BYTE_ORDER register is an exception to this rule. As it is a constant value, there is no need to latch the second half. We read this register first in case there were reads during the boot loader process that might have occurred prior to this driver taking over ownership of accessing this device. This patch has been tested on the SAMA5D3-EDS with a LAN9303 RMII daughter card. Signed-off-by: Jerry Ray <jerry.ray@microchip.com> Signed-off-by: David S. Miller <davem@davemloft.net>
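A simplified sketch of the latched two-transaction read described above; the register-offset math is illustrative rather than the lan9303 driver's exact encoding:

    #include <linux/phy.h>

    static int read32_over_mdio(struct mii_bus *bus, int phy_addr, int reg,
                                u32 *val)
    {
            int lo, hi;

            /* First transaction returns the low half and latches the
             * high half inside the switch. */
            lo = mdiobus_read(bus, phy_addr, reg);
            if (lo < 0)
                    return lo;

            /* The back-to-back second transaction returns the latched
             * high half; any interleaved read would break the pairing. */
            hi = mdiobus_read(bus, phy_addr, reg + 1);
            if (hi < 0)
                    return hi;

            *val = ((u32)hi << 16) | (u32)lo;
            return 0;
    }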
-
Committed by Romain Naour
Add register validation for KSZ9896. Signed-off-by: Romain Naour <romain.naour@skf.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Romain Naour
According to the KSZ9477S datasheet, there are no global registers at addresses 0x033C and 0x033D. Signed-off-by: Romain Naour <romain.naour@skf.com> Cc: Oleksij Rempel <o.rempel@pengutronix.de> Tested-by: Oleksij Rempel <o.rempel@pengutronix.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Romain Naour
Add support for the KSZ9896 6-port Gigabit Ethernet Switch to the ksz9477 driver. The KSZ9896 supports both SPI (already supported) and I2C. Signed-off-by: Romain Naour <romain.naour@skf.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Romain Naour
Add support for the KSZ9896 6-port Gigabit Ethernet Switch to the ksz9477 driver. Although the KSZ9896 has been listed in the device tree binding documentation since commit a1c0ed24 ("dt-bindings: net: dsa: document additional Microchip KSZ9477 family switches"), the chip id (0x00989600) is not recognized by ksz_switch_detect() and is rejected by the driver. The KSZ9896 is similar to the KSZ9897 but has only one configurable MII/RMII/RGMII/GMII cpu port. Signed-off-by: Romain Naour <romain.naour@skf.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Merge of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next — Committed by Paolo Abeni
Daniel Borkmann says:

====================
pull-request: bpf-next 2022-09-05

The following pull-request contains BPF updates for your *net-next* tree.

We've added 106 non-merge commits during the last 18 day(s) which contain a total of 159 files changed, 5225 insertions(+), 1358 deletions(-).

There are two small merge conflicts, resolve them as follows:

1) tools/testing/selftests/bpf/DENYLIST.s390x

  Commit 27e23836 ("selftests/bpf: Add lru_bug to s390x deny list") in bpf tree was needed to get BPF CI green on s390x, but it conflicted with newly added tests on bpf-next. Resolve by adding both hunks, result:

  [...]
  lru_bug # prog 'printk': failed to auto-attach: -524
  setget_sockopt # attach unexpected error: -524 (trampoline)
  cb_refs # expected error message unexpected error: -524 (trampoline)
  cgroup_hierarchical_stats # JIT does not support calling kernel function (kfunc)
  htab_update # failed to attach: ERROR: strerror_r(-524)=22 (trampoline)
  [...]

2) net/core/filter.c

  Commit 1227c177 ("net: Fix data-races around sysctl_[rw]mem_(max|default).") from net tree conflicts with commit 29003875 ("bpf: Change bpf_setsockopt(SOL_SOCKET) to reuse sk_setsockopt()") from bpf-next tree. Take the code as it is from bpf-next tree, result:

  [...]
  if (getopt) {
          if (optname == SO_BINDTODEVICE)
                  return -EINVAL;
          return sk_getsockopt(sk, SOL_SOCKET, optname,
                               KERNEL_SOCKPTR(optval),
                               KERNEL_SOCKPTR(optlen));
  }
  return sk_setsockopt(sk, SOL_SOCKET, optname,
                       KERNEL_SOCKPTR(optval), *optlen);
  [...]

The main changes are:

1) Add any-context BPF specific memory allocator which is useful in particular for BPF tracing with bonus of performance equal to full prealloc, from Alexei Starovoitov.

2) Big batch to remove duplicated code from bpf_{get,set}sockopt() helpers as an effort to reuse the existing core socket code as much as possible, from Martin KaFai Lau.

3) Extend BPF flow dissector for BPF programs to just augment the in-kernel dissector with custom logic. In other words, allow for partial replacement, from Shmulik Ladkani.

4) Add a new cgroup iterator to BPF with different traversal options, from Hao Luo.

5) Support for BPF to collect hierarchical cgroup statistics efficiently through BPF integration with the rstat framework, from Yosry Ahmed.

6) Support bpf_{g,s}et_retval() under more BPF cgroup hooks, from Stanislav Fomichev.

7) BPF hash table and local storage fixes under fully preemptible kernel, from Hou Tao.

8) Add various improvements to BPF selftests and libbpf for compilation with gcc BPF backend, from James Hilliard.

9) Fix verifier helper permissions and reference state management for synchronous callbacks, from Kumar Kartikeya Dwivedi.

10) Add support for BPF selftest's xskxceiver to also be used against real devices that support MAC loopback, from Maciej Fijalkowski.

11) Various fixes to the bpf-helpers(7) man page generation script, from Quentin Monnet.

12) Document BPF verifier's tnum_in(tnum_range(), ...) gotchas, from Shung-Hsi Yu.

13) Various minor misc improvements all over the place.

* https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (106 commits)
  bpf: Optimize rcu_barrier usage between hash map and bpf_mem_alloc.
  bpf: Remove usage of kmem_cache from bpf_mem_cache.
  bpf: Remove prealloc-only restriction for sleepable bpf programs.
  bpf: Prepare bpf_mem_alloc to be used by sleepable bpf programs.
  bpf: Remove tracing program restriction on map types
  bpf: Convert percpu hash map to per-cpu bpf_mem_alloc.
  bpf: Add percpu allocation support to bpf_mem_alloc.
  bpf: Batch call_rcu callbacks instead of SLAB_TYPESAFE_BY_RCU.
  bpf: Adjust low/high watermarks in bpf_mem_cache
  bpf: Optimize call_rcu in non-preallocated hash map.
  bpf: Optimize element count in non-preallocated hash map.
  bpf: Relax the requirement to use preallocated hash maps in tracing progs.
  samples/bpf: Reduce syscall overhead in map_perf_test.
  selftests/bpf: Improve test coverage of test_maps
  bpf: Convert hash map to bpf_mem_alloc.
  bpf: Introduce any context BPF specific memory allocator.
  selftest/bpf: Add test for bpf_getsockopt()
  bpf: Change bpf_getsockopt(SOL_IPV6) to reuse do_ipv6_getsockopt()
  bpf: Change bpf_getsockopt(SOL_IP) to reuse do_ip_getsockopt()
  bpf: Change bpf_getsockopt(SOL_TCP) to reuse do_tcp_getsockopt()
  ...
====================

Link: https://lore.kernel.org/r/20220905161136.9150-1-daniel@iogearbox.net
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
- 06 Sep 2022, 3 commits
-
-
Committed by Sergei Antonov
The sparse checker found two endianness-related issues:

.../moxart_ether.c:34:15: warning: incorrect type in assignment (different base types)
.../moxart_ether.c:34:15: expected unsigned int [usertype]
.../moxart_ether.c:34:15: got restricted __le32 [usertype]
.../moxart_ether.c:39:16: warning: cast to restricted __le32

Fix them by using the __le32 type instead of u32.

Signed-off-by: Sergei Antonov <saproj@gmail.com>
Link: https://lore.kernel.org/r/20220902125037.1480268-1-saproj@gmail.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Committed by Sergei Antonov
Sparse found a number of endianness-related issues of these kinds:

.../ftmac100.c:192:32: warning: restricted __le32 degrades to integer
.../ftmac100.c:208:23: warning: incorrect type in assignment (different base types)
.../ftmac100.c:208:23: expected unsigned int rxdes0
.../ftmac100.c:208:23: got restricted __le32 [usertype]
.../ftmac100.c:249:23: warning: invalid assignment: &=
.../ftmac100.c:249:23: left side has type unsigned int
.../ftmac100.c:249:23: right side has type restricted __le32
.../ftmac100.c:527:16: warning: cast to restricted __le32

Change the type of some fields from 'unsigned int' to '__le32' to fix it.

Signed-off-by: Sergei Antonov <saproj@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Link: https://lore.kernel.org/r/20220902113749.1408562-1-saproj@gmail.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
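An illustrative before/after of the pattern such fixes follow; the descriptor and field names below are made up, not ftmac100's actual layout:

    #include <linux/types.h>
    #include <asm/byteorder.h>

    /* Hardware descriptor word declared as __le32, so every conversion
     * between CPU and device byte order is explicit and sparse-checkable. */
    struct example_rxdes {
            __le32 rxdes0;  /* was: unsigned int rxdes0; */
    };

    static inline bool example_rxdes_owned_by_hw(const struct example_rxdes *d,
                                                 u32 own_bit)
    {
            /* le32_to_cpu() marks the endianness conversion for sparse. */
            return le32_to_cpu(d->rxdes0) & own_bit;
    }

    static inline void example_rxdes_set(struct example_rxdes *d, u32 val)
    {
            d->rxdes0 = cpu_to_le32(val);
    }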
-
Committed by Horatiu Vultur
Extend lan966x with RGMII support. The MAC supports all RGMII_* modes. Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com> Link: https://lore.kernel.org/r/20220902111548.614525-1-horatiu.vultur@microchip.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
- 05 Sep 2022, 11 commits
-
-
Committed by Heiner Kallweit
We're not in a hot path and don't want to miss this message, therefore remove the net_ratelimit() check. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Kees Cook
In preparation for FORTIFY_SOURCE doing bounds checking on memcpy(), switch from __nlmsg_put() to nlmsg_put(), and explain the bounds check for dealing with the memcpy() across a composite flexible array struct. Avoids this future run-time warning: memcpy: detected field-spanning write (size 32) of single field "&errmsg->msg" at net/netlink/af_netlink.c:2447 (size 16) Cc: Jakub Kicinski <kuba@kernel.org> Cc: Pablo Neira Ayuso <pablo@netfilter.org> Cc: Jozsef Kadlecsik <kadlec@netfilter.org> Cc: Florian Westphal <fw@strlen.de> Cc: "David S. Miller" <davem@davemloft.net> Cc: Eric Dumazet <edumazet@google.com> Cc: Paolo Abeni <pabeni@redhat.com> Cc: syzbot <syzkaller@googlegroups.com> Cc: netfilter-devel@vger.kernel.org Cc: coreteam@netfilter.org Cc: netdev@vger.kernel.org Signed-off-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20220901071336.1418572-1-keescook@chromium.org Signed-off-by: David S. Miller <davem@davemloft.net>
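A minimal sketch of the safer pattern: nlmsg_put() verifies the message fits in the skb's tailroom, unlike raw __nlmsg_put(). The message type and empty payload here are hypothetical:

    #include <net/netlink.h>

    static int example_fill(struct sk_buff *skb, u32 portid, u32 seq)
    {
            struct nlmsghdr *nlh;

            /* nlmsg_put() returns NULL instead of overrunning the skb. */
            nlh = nlmsg_put(skb, portid, seq, NLMSG_DONE, 0, 0);
            if (!nlh)
                    return -EMSGSIZE;

            nlmsg_end(skb, nlh);
            return 0;
    }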
-
Committed by Daniel Borkmann
Alexei Starovoitov says:

====================
Introduce any context BPF specific memory allocator.

Tracing BPF programs can attach to kprobe and fentry. Hence they run in unknown context where calling plain kmalloc() might not be safe. Front-end kmalloc() with a per-cpu cache of free elements. Refill this cache asynchronously from irq_work.

Major achievements enabled by bpf_mem_alloc:
- Dynamically allocated hash maps used to be 10 times slower than fully preallocated. With bpf_mem_alloc and subsequent optimizations the speed of dynamic maps is equal to full prealloc.
- Tracing bpf programs can use dynamically allocated hash maps. Potentially saving lots of memory. A typical hash map is sparsely populated.
- Sleepable bpf programs can use dynamically allocated hash maps.

Future work:
- Expose bpf_mem_alloc as uapi FD to be used in dynptr_alloc, kptr_alloc
- Convert lru map to bpf_mem_alloc
- Further cleanup htab code. Example: htab_use_raw_lock can be removed.

Changelog:

v5->v6:
- Debugged the reason for selftests/bpf/test_maps ooming in a small VM that BPF CI is using. Added patch 16 that optimizes the usage of rcu_barrier-s between bpf_mem_alloc and hash map. It drastically improved the speed of htab destruction.

v4->v5:
- Fixed missing migrate_disable in hash tab free path (Daniel)
- Replaced impossible "memory leak" with WARN_ON_ONCE (Martin)
- Dropped sysctl kernel.bpf_force_dyn_alloc patch (Daniel)
- Added Andrii's ack
- Added new patch 15 that removes kmem_cache usage from bpf_mem_alloc. It saves memory, speeds up map create/destroy operations while maintaining hash map update/delete performance.

v3->v4:
- fix build issue due to missing local.h on 32-bit arch
- add Kumar's ack
- proposal for next steps from Delyan: https://lore.kernel.org/bpf/d3f76b27f4e55ec9e400ae8dcaecbb702a4932e8.camel@fb.com/

v2->v3:
- Rewrote the free_list algorithm based on discussions with Kumar. Patch 1.
- Allowed sleepable bpf progs to use dynamically allocated maps. Patches 13 and 14.
- Added sysctl to force bpf_mem_alloc in hash map even if pre-alloc is requested to reduce memory consumption. Patch 15.
- Fix: zero-fill percpu allocation
- Single rcu_barrier at the end instead of each cpu during bpf_mem_alloc destruction

v2 thread: https://lore.kernel.org/bpf/20220817210419.95560-1-alexei.starovoitov@gmail.com/

v1->v2:
- Moved unsafe direct call_rcu() from hash map into safe place inside bpf_mem_alloc. Patches 7 and 9.
- Optimized atomic_inc/dec in hash map with percpu_counter. Patch 6.
- Tuned watermarks per allocation size. Patch 8
- Adopted this approach to per-cpu allocation. Patch 10.
- Fully converted hash map to bpf_mem_alloc. Patch 11.
- Removed tracing prog restriction on map types. Combination of all patches and final patch 12.

v1 thread: https://lore.kernel.org/bpf/20220623003230.37497-1-alexei.starovoitov@gmail.com/

LWN article: https://lwn.net/Articles/899274/
====================

Link: https://lore.kernel.org/r/
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
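A sketch of how a map-style user would drive the allocator, following the API this series adds (bpf_mem_alloc_init / bpf_mem_cache_alloc / bpf_mem_cache_free / bpf_mem_alloc_destroy); the element struct and wrapper names are hypothetical:

    #include <linux/bpf_mem_alloc.h>

    struct my_elem {
            struct hlist_node node;
            u64 key;
    };

    static struct bpf_mem_alloc ma;

    static int my_map_init(void)
    {
            /* Fixed-size cache; false = not the per-cpu flavor. */
            return bpf_mem_alloc_init(&ma, sizeof(struct my_elem), false);
    }

    static struct my_elem *my_elem_alloc(void)
    {
            /* Safe in any context, including NMI; the per-cpu free list
             * is refilled asynchronously from irq_work. */
            return bpf_mem_cache_alloc(&ma);
    }

    static void my_elem_free(struct my_elem *e)
    {
            /* Returned to the cache; drained to slabs after RCU GPs. */
            bpf_mem_cache_free(&ma, e);
    }

    static void my_map_destroy(void)
    {
            bpf_mem_alloc_destroy(&ma);
    }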
-
Committed by Alexei Starovoitov
User space might be creating and destroying a lot of hash maps. Synchronous rcu_barrier-s in the destruction path of a hash map delay the freeing of hash buckets and other map memory and may cause an artificial OOM situation under stress. Optimize rcu_barrier usage between bpf hash map and bpf_mem_alloc:
- remove rcu_barrier from the hash map, since htab doesn't use call_rcu directly and there are no callbacks to wait for;
- bpf_mem_alloc has a call_rcu_in_progress flag that indicates pending callbacks; use it to avoid barriers in the fast path;
- when barriers are needed, copy bpf_mem_alloc into a temp structure and wait for the rcu barrier-s in the worker to let the rest of the hash map freeing proceed.
Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20220902211058.60789-17-alexei.starovoitov@gmail.com
-
Committed by Alexei Starovoitov
For bpf_mem_cache based hash maps the following stress test:

    for (i = 1; i <= 512; i <<= 1)
            for (j = 1; j <= 1 << 18; j <<= 1)
                    fd = bpf_map_create(BPF_MAP_TYPE_HASH, NULL, i, j, 2, 0);

creates many kmem_cache-s that are not mergeable in debug kernels and consume an unnecessary amount of memory. It turned out bpf_mem_cache's free_list logic does batching well, so using kmem_cache for fixed size allocations doesn't bring any performance benefit vs normal kmalloc. Hence get rid of kmem_cache in bpf_mem_cache. That saves memory and speeds up map create/destroy operations, while maintaining hash map update/delete performance. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20220902211058.60789-16-alexei.starovoitov@gmail.com
-
Committed by Alexei Starovoitov
Since the hash map is now converted to bpf_mem_alloc, and it waits for rcu and rcu_tasks_trace GPs before freeing elements into global memory slabs, it's safe to use dynamically allocated hash maps in sleepable bpf programs. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220902211058.60789-15-alexei.starovoitov@gmail.com
-
Committed by Alexei Starovoitov
Use call_rcu_tasks_trace() to wait for sleepable progs to finish. Then use call_rcu() to wait for normal progs to finish, and finally do free_one() on each element when freeing objects into the global memory pool. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220902211058.60789-14-alexei.starovoitov@gmail.com
-
Committed by Alexei Starovoitov
The hash map is now fully converted to bpf_mem_alloc. Its implementation is not allocating synchronously and not calling call_rcu() directly. It's now safe to use non-preallocated hash maps in all types of tracing programs, including BPF_PROG_TYPE_PERF_EVENT, which runs out of NMI context. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220902211058.60789-13-alexei.starovoitov@gmail.com
-
Committed by Alexei Starovoitov
Convert dynamic allocations in the percpu hash map from alloc_percpu() to bpf_mem_cache_alloc() from a per-cpu bpf_mem_alloc. Since bpf_mem_alloc frees objects after an RCU grace period, the call_rcu() is removed. pcpu_init_value() now needs to zero-fill per-cpu allocations, since dynamically allocated map elements are now similar to full prealloc: alloc_percpu() is no longer called inline and the elements are reused in the freelist. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220902211058.60789-12-alexei.starovoitov@gmail.com
-
Committed by Alexei Starovoitov
Extend bpf_mem_alloc to cache a free list of fixed size per-cpu allocations. Once such a cache is created, bpf_mem_cache_alloc() will return per-cpu objects. bpf_mem_cache_free() will free them back into the global per-cpu pool after observing an RCU grace period. The per-cpu flavor of bpf_mem_alloc is going to be used by per-cpu hash maps. The free list cache consists of tuples { llist_node, per-cpu pointer }. Unlike alloc_percpu(), which returns a per-cpu pointer, bpf_mem_cache_alloc() returns a pointer to a per-cpu pointer, and bpf_mem_cache_free() expects to receive it back. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220902211058.60789-11-alexei.starovoitov@gmail.com
-
Committed by Alexei Starovoitov
SLAB_TYPESAFE_BY_RCU makes kmem_caches non-mergeable and slows down kmem_cache_destroy. All bpf_mem_cache-s are safe to share across different maps and programs. Convert SLAB_TYPESAFE_BY_RCU to batched call_rcu. This change solves the memory consumption issue, avoids kmem_cache_destroy latency and keeps bpf hash map performance the same. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220902211058.60789-10-alexei.starovoitov@gmail.com
-