- 09 January 2021, 2 commits
-
-
Submitted by Lorenzo Bianconi
Introduce the xdp_prepare_buff utility routine to initialize the per-descriptor xdp_buff fields (e.g. the xdp_buff data pointers). Rely on xdp_prepare_buff() in all XDP-capable drivers. Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Alexander Duyck <alexanderduyck@fb.com> Acked-by: Jesper Dangaard Brouer <brouer@redhat.com> Acked-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Shay Agroskin <shayagr@amazon.com> Acked-by: Martin Habets <habetsm.xilinx@gmail.com> Acked-by: Camelia Groza <camelia.groza@nxp.com> Acked-by: Marcin Wojtas <mw@semihalf.com> Link: https://lore.kernel.org/bpf/45f46f12295972a97da8ca01990b3e71501e9d89.1608670965.git.lorenzo@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Submitted by Lorenzo Bianconi
Introduce the xdp_init_buff utility routine to initialize the xdp_buff fields that are constant across NAPI iterations (e.g. frame_sz or the rxq pointer). Rely on xdp_init_buff() in all XDP-capable drivers. Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Alexander Duyck <alexanderduyck@fb.com> Acked-by: Jesper Dangaard Brouer <brouer@redhat.com> Acked-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Shay Agroskin <shayagr@amazon.com> Acked-by: Martin Habets <habetsm.xilinx@gmail.com> Acked-by: Camelia Groza <camelia.groza@nxp.com> Acked-by: Marcin Wojtas <mw@semihalf.com> Link: https://lore.kernel.org/bpf/7f8329b6da1434dc2b05a77f2e800b29628a8913.1608670965.git.lorenzo@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
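A minimal sketch of how a driver might use the two helpers from the commits above; the mydrv_* names, the PAGE_SIZE frame size and the headroom handling are illustrative assumptions, not taken from any particular driver:

	#include <net/xdp.h>

	/* Hypothetical per-ring state; only the xdp_rxq member matters here. */
	struct mydrv_rx_ring {
		struct xdp_rxq_info xdp_rxq;
	};

	/* Called once per NAPI poll: frame_sz and the rxq pointer are constant
	 * for the whole iteration, so xdp_init_buff() runs only once. */
	static void mydrv_xdp_init(struct xdp_buff *xdp, struct mydrv_rx_ring *ring)
	{
		xdp_init_buff(xdp, PAGE_SIZE, &ring->xdp_rxq);
	}

	/* Called once per received descriptor: the data pointers change for
	 * every packet, so xdp_prepare_buff() runs inside the RX loop. */
	static void mydrv_xdp_fill(struct xdp_buff *xdp, unsigned char *hard_start,
				   unsigned int headroom, unsigned int len)
	{
		/* last argument: this driver does not prepend metadata to the frame */
		xdp_prepare_buff(xdp, hard_start, headroom, len, false);
	}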
-
- 08 January 2021, 38 commits
-
-
Submitted by Aleksander Jan Bajkowski
Exclude RMII from the modes that report 1 GbE support; Reduced MII only supports speeds up to 100 Mbit/s. Fixes: 14fceff4 ("net: dsa: Add Lantiq / Intel DSA driver for vrx200") Signed-off-by: Aleksander Jan Bajkowski <olek2@wp.pl> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Link: https://lore.kernel.org/r/20210107195818.3878-1-olek2@wp.pl Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Colin Ian King
Currently the error return paths do not kfree lmac and lmac->name, leading to memory leaks. Fix this by adding two error return paths that kfree these objects. Addresses-Coverity: ("Resource leak") Fixes: 1463f382 ("octeontx2-af: Add support for CGX link management") Signed-off-by: Colin Ian King <colin.king@canonical.com> Link: https://lore.kernel.org/r/20210107123916.189748-1-colin.king@canonical.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
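A hedged sketch of the kind of unwind the fix describes; the structure layout and allocation sizes are assumptions for illustration, not the driver's actual code:

	#include <linux/slab.h>

	struct lmac {
		char *name;
	};

	static int lmac_alloc(struct lmac **out)
	{
		struct lmac *lmac;

		lmac = kzalloc(sizeof(*lmac), GFP_KERNEL);
		if (!lmac)
			return -ENOMEM;

		lmac->name = kzalloc(16, GFP_KERNEL);
		if (!lmac->name)
			goto err_free_lmac;	/* first added error path */

		/* further setup would go here; on failure it must also free
		 * lmac->name and lmac (the second added error path) */

		*out = lmac;
		return 0;

	err_free_lmac:
		kfree(lmac);
		return -ENOMEM;
	}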
-
Submitted by Ayush Sawal
CPL_ABORT_RPL is sent after the resources have already been released via chtls_release_resources(sk) and chtls_conn_done(sk), eventually causing a kernel panic. Fix it by performing the release calls in the appropriate order. Fixes: cc35c88a ("crypto : chtls - CPL handler definition") Signed-off-by: Vinay Kumar Yadav <vinay.yadav@chelsio.com> Signed-off-by: Ayush Sawal <ayush.sawal@chelsio.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Ayush Sawal
In case of server removal, lookup_stid() may return a NULL pointer, which is then used as listen_ctx. Add a check before accessing this pointer. Fixes: cc35c88a ("crypto : chtls - CPL handler definition") Signed-off-by: Vinay Kumar Yadav <vinay.yadav@chelsio.com> Signed-off-by: Ayush Sawal <ayush.sawal@chelsio.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Ayush Sawal
The skb is unlinked twice: once by __skb_dequeue() in chtls_reset_synq() and again in cleanup_syn_rcv_conn(). Use skb_peek() instead of __skb_dequeue() so that the unlink is handled only in cleanup_syn_rcv_conn(). Fixes: cc35c88a ("crypto : chtls - CPL handler definition") Signed-off-by: Vinay Kumar Yadav <vinay.yadav@chelsio.com> Signed-off-by: Ayush Sawal <ayush.sawal@chelsio.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
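The before/after difference, sketched with an assumed listen-context type (the real driver structure and field names may differ); skb_peek() and __skb_dequeue() are the standard skbuff queue helpers:

	#include <linux/skbuff.h>

	/* Assumed context structure holding the SYN queue. */
	struct listen_ctx {
		struct sk_buff_head synq;
	};

	static void reset_synq(struct listen_ctx *ctx)
	{
		/* before: skb = __skb_dequeue(&ctx->synq);  -- this already unlinks,
		 * making the later unlink in cleanup_syn_rcv_conn() a double unlink */
		struct sk_buff *skb = skb_peek(&ctx->synq);

		if (skb) {
			/* inspect/abort the embryonic connection here;
			 * cleanup_syn_rcv_conn() stays the only place that unlinks skb */
		}
	}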
-
Submitted by Ayush Sawal
In chtls_pass_accept_request(), remove the chtls_reqsk_free() call to avoid freeing oreq twice; oreq is the pointer to the struct request_sock. Fixes: cc35c88a ("crypto : chtls - CPL handler definition") Signed-off-by: Rohit Maheshwari <rohitm@chelsio.com> Signed-off-by: Ayush Sawal <ayush.sawal@chelsio.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Ayush Sawal
If the route to the peer is not configured, dst_neigh_lookup() might return a non-TLS device, which is invalid; add a check to avoid it. Fixes: cc35c88a ("crypto : chtls - CPL handler definition") Signed-off-by: Rohit Maheshwari <rohitm@chelsio.com> Signed-off-by: Ayush Sawal <ayush.sawal@chelsio.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Ayush Sawal
At the time of SYN_RECV, the connection information is not yet initialized in firmware, and updating a TCB flag on an uninitialized connection causes an adapter crash. The flag does not need to be updated in the SYN_RECV state, so avoid it. Fixes: cc35c88a ("crypto : chtls - CPL handler definition") Signed-off-by: Rohit Maheshwari <rohitm@chelsio.com> Signed-off-by: Ayush Sawal <ayush.sawal@chelsio.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Ayush Sawal
send_abort_rpl() does not calculate the cpl_abort_req_rss offset and ends up sending the wrong TID with the abort_rpl WR, causing TID leaks. Replace send_abort_rpl() with chtls_send_abort_rpl(), as the former is redundant. Fixes: cc35c88a ("crypto : chtls - CPL handler definition") Signed-off-by: Rohit Maheshwari <rohitm@chelsio.com> Signed-off-by: Ayush Sawal <ayush.sawal@chelsio.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Jonathan Lemon
Replace direct assignments with skb_zcopy_init() for zerocopy cases where a new skb is initialized, without changing the reference counts. Signed-off-by: Jonathan Lemon <jonathan.lemon@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
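A hedged illustration of the substitution; uarg stands for the ubuf_info already associated with the original skb, and the wrapper function is assumed context rather than one of the call sites actually touched by the patch:

	#include <linux/skbuff.h>

	static void attach_zerocopy_state(struct sk_buff *nskb, struct ubuf_info *uarg)
	{
		/* before: the new skb was wired up by hand, e.g.
		 *   skb_shinfo(nskb)->destructor_arg = uarg;
		 * after: the helper records the zerocopy state consistently,
		 * and -- matching the commit message -- takes no extra
		 * reference on uarg */
		skb_zcopy_init(nskb, uarg);
	}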
-
Submitted by Jonathan Lemon
In preparation for expanded zerocopy (TX and RX), move the zerocopy-related bits out of tx_flags into their own flag word. Signed-off-by: Jonathan Lemon <jonathan.lemon@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Jonathan Lemon
Add an optional skb parameter to the zerocopy callback, passed down from skb_zcopy_clear(). This gives access to the original skb, which is needed for upcoming RX zero-copy error handling. Signed-off-by: Jonathan Lemon <jonathan.lemon@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Martin Blumenstingl
Amlogic Meson G12A (and newer: G12B, SM1) SoCs have more advanced RX delay logic. Instead of fine-tuning the delay in the nanosecond range, it now allows tuning in 200 picosecond steps. This support comes with new bits in the PRG_ETH1[19:16] register. Add support for validating the RGMII RX delay as well as configuring the register accordingly on these platforms. Reviewed-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Martin Blumenstingl <martin.blumenstingl@googlemail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Martin Blumenstingl
Newer SoCs, starting with the Amlogic Meson G12A, have a more precise RGMII RX delay configuration register, which means more complexity in the code. Extract the existing RGMII delay configuration code into a separate function to make it easier to read and understand, even when more logic is added in the future. Reviewed-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Martin Blumenstingl <martin.blumenstingl@googlemail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Martin Blumenstingl
Amlogic Meson G12A, G12B and SM1 SoCs have a more advanced RGMII RX delay register which allows picosecond precision. Parse the new "rx-internal-delay-ps" property, or fall back to the value from the old "amlogic,rx-delay-ns" property. No upstream DTB uses the old "amlogic,rx-delay-ns" property (yet). Only include minimalistic logic to fall back to the old property, without any special validation (for example when both the old and the new property are given at the same time). Reviewed-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Martin Blumenstingl <martin.blumenstingl@googlemail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
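A minimal sketch of what such a property fallback can look like; the function name and the returned-in-picoseconds convention are assumptions, while of_property_read_u32() is the standard device-tree accessor:

	#include <linux/of.h>

	static u32 meson_rx_delay_ps(struct device_node *np)
	{
		u32 ps = 0;

		/* prefer the new picosecond property... */
		if (!of_property_read_u32(np, "rx-internal-delay-ps", &ps))
			return ps;

		/* ...and fall back to the old nanosecond property, converted to ps */
		if (!of_property_read_u32(np, "amlogic,rx-delay-ns", &ps))
			return ps * 1000;

		return 0;
	}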
-
Submitted by Martin Blumenstingl
The timing-adjustment clock only has to be enabled when a) a 2ns RX delay is configured via device tree and b) the phy-mode indicates that the RX delay should be enabled. Only enable the RX delay if both are true, instead of (by accident) also enabling it when the 2ns RX delay is configured but the phy-mode indicates that the RX delay is not used. Fixes: 9308c476 ("net: stmmac: dwmac-meson8b: add support for the RX delay configuration") Reported-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Martin Blumenstingl <martin.blumenstingl@googlemail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Vladimir Oltean
The SYSTEMPORT driver maps each port of the embedded Broadcom DSA switch to a certain queue of the master Ethernet controller. For that it currently uses a dedicated notifier infrastructure which was added in commit 60724d4b ("net: dsa: Add support for DSA specific notifiers"). However, since commit 2f1e8ea7 ("net: dsa: link interfaces with the DSA master to get rid of lockdep warnings"), DSA is actually an upper of the Broadcom SYSTEMPORT as far as the netdevice adjacency lists are concerned, so the plain NETDEV_CHANGEUPPER net device notifiers are emitted naturally. There is enough API exposed by DSA to the outside world already to make the call_dsa_notifiers API redundant, so convert its only user to plain netdev notifiers. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Tested-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
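A hedged sketch of what reacting to plain netdev notifiers can look like; the handler name and the queue-mapping placeholder are assumptions, while register_netdevice_notifier(), NETDEV_CHANGEUPPER and dsa_slave_dev_check() are existing kernel APIs:

	#include <linux/netdevice.h>
	#include <net/dsa.h>

	static int mydrv_netdevice_event(struct notifier_block *nb,
					 unsigned long event, void *ptr)
	{
		struct net_device *dev = netdev_notifier_info_to_dev(ptr);
		struct netdev_notifier_changeupper_info *info = ptr;

		if (event != NETDEV_CHANGEUPPER)
			return NOTIFY_DONE;

		/* only DSA user ports matter for the queue mapping */
		if (!dsa_slave_dev_check(dev))
			return NOTIFY_DONE;

		/* info->upper_dev and info->linking describe which upper device
		 * was linked or unlinked; the queue (re)mapping would go here */
		return NOTIFY_OK;
	}

	static struct notifier_block mydrv_netdevice_nb = {
		.notifier_call = mydrv_netdevice_event,
	};

	/* registered once from the driver's probe path:
	 *	register_netdevice_notifier(&mydrv_netdevice_nb);
	 */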
-
Submitted by Vladimir Oltean
It is a bit strange to see something as specific as Broadcom SYSTEMPORT bits in the main DSA include file. Move these away into a separate header, and have the tagger and the SYSTEMPORT driver include them. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Acked-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Vladimir Oltean
Given the following setup:
ip link add br0 type bridge
ip link set eno0 master br0
ip link set swp0 master br0
ip link set swp1 master br0
ip link set swp2 master br0
ip link set swp3 master br0
Currently, packets received on a DSA slave interface (such as swp0) which should be routed by the software bridge towards a non-switch port (such as eno0) are also flooded towards the other switch ports (swp1, swp2, swp3), because the destination is unknown to the hardware switch. This patch addresses the issue by monitoring the addresses learnt by the software bridge on eno0 and adding/deleting them as static FDB entries on the CPU port accordingly. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Heiner Kallweit
According to Realtek, the ERI register 0x1a8 quirk is needed to work around a hardware issue with the PHY on RTL8168g; the register needs to be changed before powering down the PHY. Currently we don't meet this requirement; however, I'm not aware of any problems caused by this, therefore I see the change as an improvement. The PHY driver has no means to access the chip's ERI registers, therefore we have to intercept MDIO writes to the BMCR register. If the BMCR_PDOWN bit is going to be set, apply the quirk before actually powering down the PHY. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Heiner Kallweit
No functional change here. We just move a code block to avoid a function forward declaration in a subsequent change. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Heiner Kallweit
Switch to lockdep_assert_held(_once), similar to what is being done in other subsystems. One advantage is that there is zero runtime overhead if lockdep support isn't enabled. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://lore.kernel.org/r/ccc40b9d-8ee0-43a1-5009-2cc95ca79c85@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
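An illustrative use of the assertion; the function name is a placeholder, while lockdep_assert_held() and the phydev->lock mutex are existing kernel constructs:

	#include <linux/lockdep.h>
	#include <linux/phy.h>

	static void mydrv_phy_config_locked(struct phy_device *phydev)
	{
		/* compiled out entirely when CONFIG_LOCKDEP is disabled, unlike
		 * an explicit WARN_ON(!mutex_is_locked(&phydev->lock)) */
		lockdep_assert_held(&phydev->lock);

		/* ... code that requires phydev->lock to be held ... */
	}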
-
Submitted by Florian Fainelli
BCM72116 features a 28nm integrated EPHY; add an entry to match this PHY OUI. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://lore.kernel.org/r/20210106170944.1253046-1-f.fainelli@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Jakub Kicinski
All UDP tunnel port management is now routed directly via the udp_tunnel_nic infrastructure. Remove the old callbacks. Reviewed-by: Alexander Duyck <alexanderduyck@fb.com> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Jakub Kicinski
udp_tunnel_nic handles the REGISTER and UNREGISTER events; now that all drivers use that infrastructure, we can drop the event handling in the tunnel drivers. Reviewed-by: Alexander Duyck <alexanderduyck@fb.com> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Florian Fainelli
All of the OF code that is used has stubs and will compile and link just fine; keeping COMPILE_TEST is enough. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Acked-by: Randy Dunlap <rdunlap@infradead.org> Link: https://lore.kernel.org/r/20210106191546.1358324-1-f.fainelli@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Sieng Piaw Liew
Use the existing count of processed RX packets to track against the budget, thereby making the budget decrement operation redundant. rx_desc_count can be calculated outside the RX loop, making the loop a bit smaller. Signed-off-by: Sieng Piaw Liew <liew.s.piaw@gmail.com> Acked-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
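The change boils down to driving the loop with one counter instead of two; a generic sketch with a placeholder per-packet helper:

	#include <linux/netdevice.h>

	/* placeholder: pulls one packet off the ring, returns false when empty */
	static bool mydrv_receive_one(struct napi_struct *napi);

	static int mydrv_rx_poll(struct napi_struct *napi, int budget)
	{
		int processed = 0;

		/* before: a separate budget-- ran alongside processed++ */
		while (processed < budget) {
			if (!mydrv_receive_one(napi))
				break;
			processed++;
		}

		return processed;
	}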
-
Submitted by Sieng Piaw Liew
We can increase the efficiency of the RX path by receiving packets into plain buffers and building SKBs around them just before passing them into the network stack. In contrast, preallocating SKBs too early reduces CPU cache efficiency. Check if we're in NAPI context when refilling RX; since we're almost always running in NAPI context, dispatch to napi_alloc_frag directly instead of relying on netdev_alloc_frag, which does the same but with the overhead of local_bh_disable/enable. Tested on a BCM6328 at 320 MHz with iperf3 -M 512 to measure packet/sec performance; the netif_receive_skb_list and NET_IP_ALIGN optimizations are included.
Before:
[ ID] Interval         Transfer    Bandwidth       Retr
[  4] 0.00-10.00 sec   49.9 MBytes 41.9 Mbits/sec  197   sender
[  4] 0.00-10.00 sec   49.3 MBytes 41.3 Mbits/sec        receiver
After:
[ ID] Interval         Transfer    Bandwidth       Retr
[  4] 0.00-30.00 sec   171 MBytes  47.8 Mbits/sec  272   sender
[  4] 0.00-30.00 sec   170 MBytes  47.6 Mbits/sec        receiver
Signed-off-by: Sieng Piaw Liew <liew.s.piaw@gmail.com> Acked-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
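A hedged sketch of the buffer-first receive pattern described above; the function names and the headroom layout are assumptions, while napi_alloc_frag(), build_skb(), skb_reserve() and skb_put() are the standard helpers involved:

	#include <linux/netdevice.h>
	#include <linux/skbuff.h>

	/* refill path, running in NAPI context: allocate a raw page fragment
	 * for the hardware to DMA into -- no skb yet */
	static void *mydrv_refill_buf(unsigned int frag_size)
	{
		return napi_alloc_frag(frag_size);
	}

	/* receive path: wrap the already-filled buffer in an skb only now,
	 * right before handing it to the stack */
	static struct sk_buff *mydrv_build_skb(void *data, unsigned int frag_size,
					       unsigned int pkt_len)
	{
		struct sk_buff *skb = build_skb(data, frag_size);

		if (!skb)
			return NULL;

		skb_reserve(skb, NET_SKB_PAD + NET_IP_ALIGN);
		skb_put(skb, pkt_len);
		return skb;
	}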
-
Submitted by Sieng Piaw Liew
The RX SKB ring uses the same cleanup code at various points. Combine it into a function to reduce lines of code. Signed-off-by: Sieng Piaw Liew <liew.s.piaw@gmail.com> Acked-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Sieng Piaw Liew
Use netdev_alloc_skb_ip_align on newer SoCs with an integrated switch (enetsw) when refilling RX; this increases packet processing performance by 30% (with netif_receive_skb_list). Non-enetsw SoCs cannot function with the extra pad, so they continue to use the regular netdev_alloc_skb. Tested on a BCM6328 at 320 MHz with iperf3 -M 512 to measure packet/sec performance.
Before:
[ ID] Interval         Transfer    Bandwidth       Retr
[  4] 0.00-30.00 sec   120 MBytes  33.7 Mbits/sec  277   sender
[  4] 0.00-30.00 sec   120 MBytes  33.5 Mbits/sec        receiver
After (+netif_receive_skb_list):
[  4] 0.00-30.00 sec   155 MBytes  43.3 Mbits/sec  354   sender
[  4] 0.00-30.00 sec   154 MBytes  43.1 Mbits/sec        receiver
Signed-off-by: Sieng Piaw Liew <liew.s.piaw@gmail.com> Acked-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Sieng Piaw Liew
Support bulking the hardware TX queue by using netdev_xmit_more(). Signed-off-by: Sieng Piaw Liew <liew.s.piaw@gmail.com> Acked-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
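The common xmit_more pattern inside an ndo_start_xmit implementation, sketched with a placeholder doorbell register; netdev_xmit_more() and netif_xmit_stopped() are the kernel helpers this relies on:

	#include <linux/io.h>
	#include <linux/netdevice.h>

	/* called after queueing a descriptor in the driver's ndo_start_xmit() */
	static void mydrv_maybe_kick(struct netdev_queue *txq,
				     void __iomem *doorbell)
	{
		/* defer the expensive MMIO kick while the stack says more
		 * packets are about to follow, unless the queue just stopped */
		if (!netdev_xmit_more() || netif_xmit_stopped(txq))
			writel(1, doorbell);
	}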
-
Submitted by Sieng Piaw Liew
Add Byte Queue Limits support to reduce/remove bufferbloat in bcm63xx_enet. Signed-off-by: Sieng Piaw Liew <liew.s.piaw@gmail.com> Acked-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
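Byte Queue Limits boils down to three calls paired across the transmit, completion and reset paths; a generic sketch with placeholder driver hooks:

	#include <linux/netdevice.h>

	/* transmit path: account the bytes handed to the hardware */
	static void mydrv_tx_account(struct net_device *dev, struct sk_buff *skb)
	{
		netdev_sent_queue(dev, skb->len);
	}

	/* TX-completion path: report what the hardware actually finished */
	static void mydrv_tx_done(struct net_device *dev, unsigned int pkts,
				  unsigned int bytes)
	{
		netdev_completed_queue(dev, pkts, bytes);
	}

	/* ring teardown / channel reset */
	static void mydrv_tx_reset(struct net_device *dev)
	{
		netdev_reset_queue(dev);
	}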
-
Submitted by Sieng Piaw Liew
Use netif_receive_skb_list to batch-process RX skbs. Tested on a BCM6328 at 320 MHz using iperf3 -M 512, increasing performance by 12.5%.
Before:
[ ID] Interval         Transfer    Bandwidth       Retr
[  4] 0.00-30.00 sec   120 MBytes  33.7 Mbits/sec  277   sender
[  4] 0.00-30.00 sec   120 MBytes  33.5 Mbits/sec        receiver
After:
[  4] 0.00-30.00 sec   136 MBytes  37.9 Mbits/sec  203   sender
[  4] 0.00-30.00 sec   135 MBytes  37.7 Mbits/sec        receiver
Signed-off-by: Sieng Piaw Liew <liew.s.piaw@gmail.com> Acked-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
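The batching idiom is a local list filled inside the RX loop and flushed once per poll; a minimal sketch with a placeholder per-packet helper:

	#include <linux/netdevice.h>
	#include <linux/skbuff.h>

	/* placeholder: produces the next completed skb, false when none left */
	static bool mydrv_get_skb(struct sk_buff **skb);

	static int mydrv_poll_rx(int budget)
	{
		struct list_head rx_list;
		struct sk_buff *skb;
		int processed = 0;

		INIT_LIST_HEAD(&rx_list);

		while (processed < budget && mydrv_get_skb(&skb)) {
			/* queue instead of calling netif_receive_skb() per packet */
			list_add_tail(&skb->list, &rx_list);
			processed++;
		}

		/* one batched hand-off to the stack amortizes per-packet overhead */
		netif_receive_skb_list(&rx_list);
		return processed;
	}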
-
Submitted by Dinghao Liu
When mlx5_create_flow_group() fails, ft->g should be freed just like when kvzalloc() fails. The caller of mlx5e_create_l2_table_groups() does not catch this issue on failure, which leads to a memory leak. Fixes: 33cfaaa8 ("net/mlx5e: Split the main flow steering table") Signed-off-by: Dinghao Liu <dinghao.liu@zju.edu.cn> Reviewed-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Submitted by Dinghao Liu
mlx5e_create_ttc_table_groups() frees ft->g on failure of kvzalloc(), but that failure is also caught by its caller in mlx5e_create_ttc_table(), and ft->g is then freed again in mlx5e_destroy_flow_table(). The same issue also occurs in mlx5e_create_inner_ttc_table_groups(). Set ft->g to NULL after kfree() to avoid the double free. Fixes: 7b3722fa ("net/mlx5e: Support RSS for GRE tunneled packets") Fixes: 33cfaaa8 ("net/mlx5e: Split the main flow steering table") Signed-off-by: Dinghao Liu <dinghao.liu@zju.edu.cn> Reviewed-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
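The pattern of the fix in isolation; the structure and allocation here are stand-ins for the mlx5 flow table, not its real layout:

	#include <linux/slab.h>

	struct my_flow_table {
		void *g;			/* array of group handles */
	};

	static int my_create_groups(struct my_flow_table *ft)
	{
		ft->g = kcalloc(8, sizeof(void *), GFP_KERNEL);
		if (!ft->g)
			return -ENOMEM;

		/* ... a later allocation or group creation fails ... */
		kfree(ft->g);
		ft->g = NULL;	/* the caller's cleanup path may free ft->g again;
				 * with NULL that second kfree() is a harmless no-op */
		return -ENOMEM;
	}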
-
Submitted by Leon Romanovsky
Add the missed freeing of the previously allocated devlink object. Fixes: a925b5e3 ("net/mlx5: Register mlx5 devices to auxiliary virtual bus") Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Submitted by Aya Levin
Prior to this patch, configuring the speed to 50G with autoneg off on devices supporting 50G per lane failed. Support for 50G per lane introduced a new set of link modes, on which the driver always performed speed validation as if only legacy link modes were configured. Fix the driver's speed validation to force setting autoneg over 56G only in legacy link mode. Fixes: 3d7cadae ("net/mlx5e: ethtool, Fix analysis of speed setting") Signed-off-by: Aya Levin <ayal@nvidia.com> Reviewed-by: Eran Ben Elisha <eranbe@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Submitted by Maor Dickman
The sop_drop_qpn field in the CQE is used by two features: in SWITCHDEV mode to restore the chain id in case of a miss, and in LEGACY mode to support the skbedit mark action. When building the RX skb, the skb mark field is set regardless of the configured mode, which corrupts the mark field in switchdev mode. Fix this by overriding the mark value back to 0 in the representor's tc update skb flow. Fixes: 8f1e0b97 ("net/mlx5: E-Switch, Mark miss packets with new chain id mapping") Signed-off-by: Maor Dickman <maord@nvidia.com> Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-