- 18 May 2022, 27 commits
-
-
git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux
Committed by David S. Miller

Saeed Mahameed says:

====================
mlx5-updates-2022-05-17

Misc updates to the mlx5 driver:

1) Aya Levin allows relaxed ordering over VFs.
2) Gal Pressman adds support for XDP SQs for uplink representors in switchdev mode.
3) Add debugfs TC stats and command failure syndrome for debuggability.
4) Tariq uses variants of vzalloc where it could help.
5) Multiport eswitch support from Eli Cohen:

Eli Cohen says:
===============
The multiport eswitch feature allows forwarding traffic from a representor net device to the uplink port of the associated eswitch. This feature requires creating a LAG object. Since LAG can be created only once for a function, the feature is mutually exclusive with either bonding or multipath.

Multiport eswitch mode is entered automatically when these conditions are met:
1. No other LAG related mode is active.
2. A rule that explicitly forwards to an uplink port is inserted.

The implementation maintains a reference count on such rules. When the reference count reaches zero, the LAG is released and other modes may be used. When a rule that explicitly forwards to an uplink port is inserted while another LAG mode is active, that rule will not be offloaded by the hardware, since the hardware cannot guarantee that the rule will actually be forwarded to that port.

Example rules that forward to an uplink port are:

$ tc filter add dev rep0 root flower action mirred egress \
  redirect dev uplinkrep0
$ tc filter add dev rep0 root flower action mirred egress \
  redirect dev uplinkrep1

This feature is supported only if the LAG_RESOURCE_ALLOCATION firmware configuration parameter is set to true.

The series consists of three patches:
1. LAG state machine refactor. This patch does not add new functionality but rather changes the way the state of the LAG is maintained.
2. Small fix to remove an unused argument.
3. The actual implementation of the feature.
===============
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Eli Cohen

Multiport eswitch mode is a LAG mode that allows adding rules that forward traffic to a specific physical port without being affected by the LAG affinity configuration. This mode of operation is mutually exclusive with the other LAG modes used by multipath and bonding.

To make the transition between the modes, we maintain a counter of the rules that specify one of the uplink representors as the target of a mirred egress redirect action. An example of such a rule would be:

$ tc filter add dev enp8s0f0_0 prot all root flower dst_mac \
  00:11:22:33:44:55 action mirred egress redirect dev enp8s0f0

When the reference count grows to one and LAG is not in use, we create the LAG in multiport eswitch mode. Other mode changes are not allowed while in this mode. When the reference count reaches zero, we destroy the LAG and let other modes be used if needed.

The logic is also changed such that if forwarding to some uplink destination cannot be guaranteed, we fail the operation so the rule will eventually be handled in software and not in hardware.

Signed-off-by: Eli Cohen <elic@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
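For illustration only, the reference-counting scheme described above might look roughly like the sketch below. All names (mpesw_rule_*, the ldev lock, the helper functions) are hypothetical and are not the driver's actual symbols.

    /* Hedged sketch of the multiport-eswitch rule reference count. */
    static int mpesw_rule_count;

    static int mpesw_rule_add(struct mlx5_lag *ldev)
    {
        int err = 0;

        mutex_lock(&ldev->lock);
        if (mpesw_rule_count++ == 0) {
            if (other_lag_mode_active(ldev))
                err = -EOPNOTSUPP;    /* cannot guarantee the uplink port */
            else
                err = create_multiport_eswitch_lag(ldev);
            if (err)
                mpesw_rule_count--;   /* rule stays in software */
        }
        mutex_unlock(&ldev->lock);
        return err;
    }

    static void mpesw_rule_del(struct mlx5_lag *ldev)
    {
        mutex_lock(&ldev->lock);
        if (--mpesw_rule_count == 0)
            destroy_multiport_eswitch_lag(ldev);  /* other modes usable again */
        mutex_unlock(&ldev->lock);
    }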
-
Committed by Eli Cohen

Argument ndev is not used in mlx5_handle_changeupper_event(). Remove it.

Signed-off-by: Eli Cohen <elic@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Committed by Eli Cohen

The LAG state machine is implemented using bit flags. However, all these bit flags, except for MLX5_LAG_FLAG_HASH_BASED, are really mutually exclusive. In addition, MLX5_LAG_FLAG_READY is used by bonding to mark whether our netdevices have been successfully added to the LAG, and does not really belong in the same flags variable as the other flags.

Rename MLX5_LAG_FLAG_READY to MLX5_LAG_FLAG_NDEVS_READY to better reflect its purpose and put it in a new flags variable. For the rest of the flags, introduce a mode enum to hold the state of the LAG. Remove the shared fdb boolean flag from struct mlx5_lag and store this configuration as a mode flag. Change all flag related operations to use standard Linux APIs.

Signed-off-by: Eli Cohen <elic@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
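As a rough sketch of the shape of this refactor (identifiers below are illustrative, not necessarily the exact definitions that landed):

    /* One explicit mode instead of mutually exclusive bit flags,
     * plus separate words for mode modifiers and netdev state.
     */
    enum mlx5_lag_mode {
        MLX5_LAG_MODE_NONE,
        MLX5_LAG_MODE_ROCE,
        MLX5_LAG_MODE_SRIOV,
        MLX5_LAG_MODE_MULTIPATH,
    };

    enum {
        MLX5_LAG_MODE_FLAG_HASH_BASED,   /* modifier, may combine with a mode */
        MLX5_LAG_MODE_FLAG_SHARED_FDB,   /* replaces the old bool in mlx5_lag */
    };

    enum {
        MLX5_LAG_FLAG_NDEVS_READY,       /* was MLX5_LAG_FLAG_READY */
    };

    struct mlx5_lag {
        enum mlx5_lag_mode mode;
        unsigned long mode_flags;        /* set_bit()/test_bit()/clear_bit() */
        unsigned long state_flags;
        /* ... */
    };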
-
Committed by Gal Pressman

This patch adds the XDP SQs to the uplink representors' steering tables in switchdev mode and enables XDP usage on them.

Signed-off-by: Gal Pressman <gal@nvidia.com>
Reviewed-by: Maor Dickman <maord@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Committed by Moshe Tal

Correct the calculation of the maximum number of channels for representors to better utilize the hardware resources and allow a larger scale of representors. This allows creating all of the configured virtual ports.

Fixes: 473baf2e ("net/mlx5e: Allow profile-specific limitation on max num of channels")
Signed-off-by: Moshe Tal <moshet@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Committed by Saeed Mahameed

Connection offload is translated to multiple rules over several hardware flow tables. Unhandled end-cases may cause a hardware resource leak, causing multiple system symptoms such as a host memory leak, decreased performance and other scale related issues.

Export the current number of firmware FTEs related to the CT table as a debugfs counter. Also add a dropped packets counter to help debug packets dropped on restore failure.

To show the offloaded count:
cat /sys/kernel/debug/mlx5/<PCI>/ct_nic/offloaded

To show the dropped count:
cat /sys/kernel/debug/mlx5/<PCI>/ct_nic/rx_dropped

Signed-off-by: Paul Blakey <paulb@mellanox.com>
Signed-off-by: Roi Dayan <paulb@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Reviewed-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Paul Blakey <paulb@nvidia.com>
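A hedged sketch of how counters like these are typically exposed through debugfs (the directory layout follows the paths above; structure and function names here are assumptions, not the mlx5 code):

    #include <linux/debugfs.h>
    #include <linux/atomic.h>

    struct ct_debug_stats {
        atomic_t offloaded;      /* firmware FTEs currently used for CT */
        atomic_t rx_dropped;     /* packets dropped on restore failure */
    };

    static void ct_debugfs_init(struct ct_debug_stats *stats,
                                struct dentry *parent)
    {
        struct dentry *dir = debugfs_create_dir("ct_nic", parent);

        debugfs_create_atomic_t("offloaded", 0400, dir, &stats->offloaded);
        debugfs_create_atomic_t("rx_dropped", 0400, dir, &stats->rx_dropped);
    }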
-
Committed by Aya Levin

By the PCI spec, the config space of a VF always reports relaxed ordering as not supported, while the VF actually inherits this property from its PF. Hence, using pcie_relaxed_ordering_enabled() always disables relaxed ordering on all VFs. Remove this check and rely on the firmware, which queries the config space of the PF and sets the capability bit accordingly.

Signed-off-by: Aya Levin <ayal@nvidia.com>
Reviewed-by: Gal Pressman <gal@nvidia.com>
Reviewed-by: Marina Varshaver <marinav@nvidia.com>
Reviewed-by: Gal Shalom <galshalom@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Committed by Gal Pressman

Offloading the outer checksum on tunnels requires GSO partial; add it to 'vlan_features' to allow offloading tunnels over VLANs. For example, running GENEVE over vlan & ipv6 (mandatory UDP checksum) now allows for hardware TSO instead of software segmentation in GSO only.

Signed-off-by: Gal Pressman <gal@nvidia.com>
Reviewed-by: Aya Levin <ayal@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
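The wiring this implies is small; a minimal sketch (the exact flag set chosen by mlx5e may differ):

    static void enable_vlan_tunnel_offloads(struct net_device *netdev)
    {
        /* GSO partial is required to offload the outer checksum, so make
         * it (and the tunnel GSO types) available to VLAN upper devices.
         */
        netdev->vlan_features |= NETIF_F_GSO_PARTIAL |
                                 NETIF_F_GSO_UDP_TUNNEL |
                                 NETIF_F_GSO_UDP_TUNNEL_CSUM;
    }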
-
Committed by Gal Pressman

Follow up on commit 79ce39be ("net/mlx5e: Improve ethtool rxnfc callback structure") and handle CONFIG_MLX5_EN_RXNFC enabled/disabled inside the fs layer so the ethtool callbacks are always available. The fs layer will provide stubs when CONFIG_MLX5_EN_RXNFC is compiled out.

Signed-off-by: Gal Pressman <gal@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
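The stub arrangement described is the usual kernel pattern, sketched here with hypothetical function names (the real mlx5e names will differ):

    /* In the fs layer header. */
    #ifdef CONFIG_MLX5_EN_RXNFC
    int mlx5e_fs_set_rxnfc(struct mlx5e_priv *priv, struct ethtool_rxnfc *cmd);
    int mlx5e_fs_get_rxnfc(struct mlx5e_priv *priv, struct ethtool_rxnfc *cmd,
                           u32 *rule_locs);
    #else
    static inline int mlx5e_fs_set_rxnfc(struct mlx5e_priv *priv,
                                         struct ethtool_rxnfc *cmd)
    {
        return -EOPNOTSUPP;
    }

    static inline int mlx5e_fs_get_rxnfc(struct mlx5e_priv *priv,
                                         struct ethtool_rxnfc *cmd,
                                         u32 *rule_locs)
    {
        return -EOPNOTSUPP;
    }
    #endif

This way the ethtool callbacks can call the fs helpers unconditionally, and the config option only changes what the helpers do.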
-
Committed by Tariq Toukan

Physical contiguity is not necessary, and the requested allocation size might be larger than PAGE_SIZE. Hence, use the v-alloc/free API.

Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Moshe Shemesh <moshe@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
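The substitution in this and the following three patches amounts to the pattern below (a sketch with hypothetical function names; the buffer is only touched by the CPU, so virtual contiguity suffices):

    static void *cmd_stats_alloc(size_t sz)
    {
        /* was: kzalloc(sz, GFP_KERNEL), paired with kfree() */
        return vzalloc(sz);   /* zeroed, virtually contiguous */
    }

    static void cmd_stats_free(void *buf)
    {
        vfree(buf);
    }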
-
Committed by Tariq Toukan

Physical contiguity is not necessary, and the requested allocation size might be larger than PAGE_SIZE. Hence, use the v-alloc/free API.

Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Moshe Shemesh <moshe@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Committed by Tariq Toukan

Physical contiguity is not necessary, and the requested allocation size might be larger than PAGE_SIZE. Hence, use the v-alloc/free API.

Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Moshe Shemesh <moshe@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Committed by Tariq Toukan

Physical contiguity is not necessary, and the requested allocation size might be larger than PAGE_SIZE. Hence, use the v-alloc/free API.

Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Moshe Shemesh <moshe@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Committed by Tariq Toukan

Move the wrapper version, which picks the default node, into a header file. This reduces the number of exported functions.

Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Moshe Shemesh <moshe@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Committed by Moshe Shemesh

Add the syndrome of the last command failure, per command type, to debugfs to ease debugging of such failures.

last_failed_syndrome - syndrome of the last command failure, as returned by FW.

Signed-off-by: Moshe Shemesh <moshe@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Committed by Saeed Mahameed

Removing the annotation resolves the issue for some reason.

Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Committed by Suman Ghosh

Add support for adaptive IRQ coalescing. It uses the net_dim algorithm to find a suitable delay/IRQ count based on the current packet rate.

Signed-off-by: Suman Ghosh <sumang@marvell.com>
Link: https://lore.kernel.org/r/20220517044055.876158-1-sumang@marvell.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
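For context, the way a driver typically plugs into net_dim looks roughly like this sketch against the dim API of this kernel era; the struct my_cq wrapper and the otx2-specific integration details are assumptions:

    #include <linux/dim.h>

    struct my_cq {
        struct dim dim;
        u16 irq_events;
        u64 rx_packets;
        u64 rx_bytes;
    };

    /* NAPI poll path: feed a sample into the DIM state machine. */
    static void my_cq_dim_update(struct my_cq *cq)
    {
        struct dim_sample sample = {};

        dim_update_sample(cq->irq_events, cq->rx_packets, cq->rx_bytes,
                          &sample);
        net_dim(&cq->dim, sample);
    }

    /* Work callback scheduled by net_dim(): apply the chosen profile. */
    static void my_dim_work(struct work_struct *work)
    {
        struct dim *dim = container_of(work, struct dim, work);
        struct dim_cq_moder moder =
            net_dim_get_rx_moderation(dim->mode, dim->profile_ix);

        /* program moder.usec / moder.pkts into the interrupt coalescing HW */
        dim->state = DIM_START_MEASURE;
    }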
-
Committed by Xin Long

Like other places in the ipv4/6 dst ifdown paths, change to use blackhole_netdev instead of the pernet loopback_dev in the dn dst ifdown path.

Signed-off-by: Xin Long <lucien.xin@gmail.com>
Link: https://lore.kernel.org/r/0cdf10e5a4af509024f08644919121fb71645bc2.1652751029.git.lucien.xin@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Committed by Min Li

Also remove PEROUT_ENABLE_OUTPUT_MASK.

Signed-off-by: Min Li <min.li.xe@renesas.com>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Link: https://lore.kernel.org/r/1652712427-14703-2-git-send-email-min.li.xe@renesas.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Committed by Min Li

Use TOD_READ_SECONDARY for extts to keep TOD_READ_PRIMARY for gettime and settime exclusively. Before this change, TOD_READ_PRIMARY was used for both extts and gettime/settime, which would result in changing TOD read/write triggers between operations. Using TOD_READ_SECONDARY makes extts independent of gettime/settime operations.

Signed-off-by: Min Li <min.li.xe@renesas.com>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Link: https://lore.kernel.org/r/1652712427-14703-1-git-send-email-min.li.xe@renesas.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Committed by Guo Zhengkui

Fix the following coccicheck warning:

drivers/net/ethernet/smsc/smc911x.c:483:20-22: WARNING opportunity for min()

Signed-off-by: Guo Zhengkui <guozhengkui@vivo.com>
Link: https://lore.kernel.org/r/20220516115627.66363-1-guozhengkui@vivo.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
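The warning points at an open-coded minimum; the fix is the usual one-liner, sketched here rather than the exact smc911x code:

    #include <linux/minmax.h>

    static unsigned int tx_chunk_len(unsigned int pkt_len, unsigned int space)
    {
        /* before: return pkt_len < space ? pkt_len : space; */
        return min(pkt_len, space);
    }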
-
Committed by Haowen Bai

container_of() will never return NULL, so remove the useless check.

Signed-off-by: Haowen Bai <baihaowen@meizu.com>
Link: https://lore.kernel.org/r/1652696212-17516-1-git-send-email-baihaowen@meizu.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Committed by Xiu Jianfeng

Use the memset_startat() helper to simplify the code; there is no functional change in this patch.

Signed-off-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Link: https://lore.kernel.org/r/20220516092337.131653-1-xiujianfeng@huawei.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
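memset_startat() zeroes everything from a given member to the end of the structure; a sketch of the substitution with hypothetical struct and field names:

    #include <linux/string.h>
    #include <linux/types.h>

    struct ring_stats {
        u32 id;              /* preserved */
        u64 rx_packets;      /* cleared from here ... */
        u64 rx_bytes;
        u64 rx_errors;       /* ... through the end of the struct */
    };

    static void reset_ring_stats(struct ring_stats *s)
    {
        /* before:
         *   memset(&s->rx_packets, 0,
         *          sizeof(*s) - offsetof(struct ring_stats, rx_packets));
         */
        memset_startat(s, 0, rx_packets);
    }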
-
Committed by Jakub Kicinski

Guangguan Wang says:

====================
net/smc: send and write inline optimization for smc

Send cdc msgs and write data inline if the qp has sufficient inline space, which helps reduce latency.

In my test environment, which is 2 VMs running on the same physical host and whose NICs (ConnectX-4 Lx) are working in SR-IOV mode, qperf shows a 0.4us-1.3us improvement in latency.

Test command:
server: smc_run taskset -c 1 qperf
client: smc_run taskset -c 1 qperf <server ip> -oo \
        msg_size:1:2K:*2 -t 30 -vu tcp_lat

The results are shown below:
msgsize    before     after
1B         11.9 us    10.6 us (-1.3 us)
2B         11.7 us    10.7 us (-1.0 us)
4B         11.7 us    10.7 us (-1.0 us)
8B         11.6 us    10.6 us (-1.0 us)
16B        11.7 us    10.7 us (-1.0 us)
32B        11.7 us    10.6 us (-1.1 us)
64B        11.7 us    11.2 us (-0.5 us)
128B       11.6 us    11.2 us (-0.4 us)
256B       11.8 us    11.2 us (-0.6 us)
512B       11.8 us    11.3 us (-0.5 us)
1KB        11.9 us    11.5 us (-0.4 us)
2KB        12.1 us    11.5 us (-0.6 us)
====================

Link: https://lore.kernel.org/r/20220516055137.51873-1-guangguan.wang@linux.alibaba.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Committed by Guangguan Wang

RDMA write with the inline flag when sending small packets, whose length is shorter than the qp's max_inline_data, can help reduce latency.

In my test environment, which is 2 VMs running on the same physical host and whose NICs (ConnectX-4 Lx) are working in SR-IOV mode, qperf shows a 0.5us-0.7us improvement in latency.

Test command:
server: smc_run taskset -c 1 qperf
client: smc_run taskset -c 1 qperf <server ip> -oo \
        msg_size:1:2K:*2 -t 30 -vu tcp_lat

The results are shown below:
msgsize    before     after
1B         11.2 us    10.6 us (-0.6 us)
2B         11.2 us    10.7 us (-0.5 us)
4B         11.3 us    10.7 us (-0.6 us)
8B         11.2 us    10.6 us (-0.6 us)
16B        11.3 us    10.7 us (-0.6 us)
32B        11.3 us    10.6 us (-0.7 us)
64B        11.2 us    11.2 us (0 us)
128B       11.2 us    11.2 us (0 us)
256B       11.2 us    11.2 us (0 us)
512B       11.4 us    11.3 us (-0.1 us)
1KB        11.4 us    11.5 us (0.1 us)
2KB        11.5 us    11.5 us (0 us)

Signed-off-by: Guangguan Wang <guangguan.wang@linux.alibaba.com>
Reviewed-by: Tony Lu <tonylu@linux.alibaba.com>
Tested-by: kernel test robot <lkp@intel.com>
Acked-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
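At the verbs level this maps to the IB_SEND_INLINE flag, which may only be used when the payload fits within the QP's max_inline_data; a hedged sketch, not the actual smc_wr/smc_tx code:

    #include <rdma/ib_verbs.h>

    static int post_rdma_write(struct ib_qp *qp, struct ib_rdma_wr *wr,
                               unsigned int len, u32 max_inline_data)
    {
        wr->wr.opcode = IB_WR_RDMA_WRITE;
        wr->wr.send_flags = IB_SEND_SIGNALED;
        if (len <= max_inline_data)
            wr->wr.send_flags |= IB_SEND_INLINE;  /* data copied into the WQE */

        return ib_post_send(qp, &wr->wr, NULL);
    }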
-
Committed by Guangguan Wang

As a cdc msg's length is 44B, cdc msgs can be sent inline on most rdma devices, which can help reduce sending latency.

In my test environment, which is 2 VMs running on the same physical host and whose NICs (ConnectX-4 Lx) are working in SR-IOV mode, qperf shows a 0.4us-0.7us improvement in latency.

Test command:
server: smc_run taskset -c 1 qperf
client: smc_run taskset -c 1 qperf <server ip> -oo \
        msg_size:1:2K:*2 -t 30 -vu tcp_lat

The results are shown below:
msgsize    before     after
1B         11.9 us    11.2 us (-0.7 us)
2B         11.7 us    11.2 us (-0.5 us)
4B         11.7 us    11.3 us (-0.4 us)
8B         11.6 us    11.2 us (-0.4 us)
16B        11.7 us    11.3 us (-0.4 us)
32B        11.7 us    11.3 us (-0.4 us)
64B        11.7 us    11.2 us (-0.5 us)
128B       11.6 us    11.2 us (-0.4 us)
256B       11.8 us    11.2 us (-0.6 us)
512B       11.8 us    11.4 us (-0.4 us)
1KB        11.9 us    11.4 us (-0.5 us)
2KB        12.1 us    11.5 us (-0.6 us)

Signed-off-by: Guangguan Wang <guangguan.wang@linux.alibaba.com>
Reviewed-by: Tony Lu <tonylu@linux.alibaba.com>
Tested-by: kernel test robot <lkp@intel.com>
Acked-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
- 17 May 2022, 13 commits
-
-
Committed by Leszek Polak

As per Errata Section 5.1, if EEE is intended to be used, some register writes must be done once after every hardware reset. This patch adds the necessary register writes as listed in the Marvell errata. Without this fix we experience ethernet problems on some of our boards equipped with a new version of this ethernet PHY (different supplier).

The fix applies to Marvell Alaska 88E1510/88E1518/88E1512/88E1514 Rev. A0.

Signed-off-by: Leszek Polak <lpolak@arri.de>
Signed-off-by: Stefan Roese <sr@denx.de>
Cc: Marek Behún <kabel@kernel.org>
Cc: Andrew Lunn <andrew@lunn.ch>
Cc: Heiner Kallweit <hkallweit1@gmail.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: David S. Miller <davem@davemloft.net>
Reviewed-by: Marek Behún <kabel@kernel.org>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Link: https://lore.kernel.org/r/20220516070859.549170-1-sr@denx.de
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Committed by Minghao Chi

Calling synchronize_irq() right before free_irq() is quite useless. On one hand the IRQ can easily fire again before free_irq() is entered; on the other hand, free_irq() itself calls synchronize_irq() internally (in a race condition free way) before any state associated with the IRQ is freed.

Reported-by: Zeal Robot <zealci@zte.com.cn>
Signed-off-by: Minghao Chi <chi.minghao@zte.com.cn>
Link: https://lore.kernel.org/r/20220516082251.1651350-1-chi.minghao@zte.com.cn
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
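The change itself is just dropping the redundant call, since free_irq() already waits for running handlers; a generic sketch:

    #include <linux/interrupt.h>

    static void teardown_irq(int irq, void *dev_id)
    {
        /* before:
         *   synchronize_irq(irq);
         *   free_irq(irq, dev_id);
         */
        free_irq(irq, dev_id);   /* synchronizes internally, race-free */
    }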
-
Committed by Minghao Chi

Calling synchronize_irq() right before free_irq() is quite useless. On one hand the IRQ can easily fire again before free_irq() is entered; on the other hand, free_irq() itself calls synchronize_irq() internally (in a race condition free way) before any state associated with the IRQ is freed.

Reported-by: Zeal Robot <zealci@zte.com.cn>
Signed-off-by: Minghao Chi <chi.minghao@zte.com.cn>
Link: https://lore.kernel.org/r/20220516081914.1651281-1-chi.minghao@zte.com.cn
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Committed by Minghao Chi

Calling synchronize_irq() right before free_irq() is quite useless. On one hand the IRQ can easily fire again before free_irq() is entered; on the other hand, free_irq() itself calls synchronize_irq() internally (in a race condition free way) before any state associated with the IRQ is freed.

Reported-by: Zeal Robot <zealci@zte.com.cn>
Signed-off-by: Minghao Chi <chi.minghao@zte.com.cn>
Link: https://lore.kernel.org/r/20220516072646.1651109-1-chi.minghao@zte.com.cn
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Committed by Lu Wei

Merge repeated code to reduce duplication.

Signed-off-by: Lu Wei <luwei32@huawei.com>
Link: https://lore.kernel.org/r/20220516062804.254742-1-luwei32@huawei.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Committed by Lu Wei

Use eth_zero_addr() to clear the MAC address instead of memset().

Signed-off-by: Lu Wei <luwei32@huawei.com>
Link: https://lore.kernel.org/r/20220516033343.329178-1-luwei32@huawei.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
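eth_zero_addr() is the etherdevice.h helper for exactly this; a sketch of the substitution:

    #include <linux/etherdevice.h>

    static void clear_uc_entry(u8 *mac)
    {
        /* before: memset(mac, 0, ETH_ALEN); */
        eth_zero_addr(mac);
    }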
-
Committed by Bernard Zhao

devm_kfree() checks the pointer itself, so there is no need to check it before the devm_kfree() call. This change cleans up the code a bit.

Signed-off-by: Bernard Zhao <bernard@vivo.com>
Link: https://lore.kernel.org/r/20220516015208.6526-1-bernard@vivo.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Committed by Wells Lu

Remove the unnecessary:
select COMMON_CLK_SP7021
select RESET_SUNPLUS
select NVMEM_SUNPLUS_OCOTP
from Kconfig.

Reported-by: kernel test robot <yujie.liu@intel.com>
Signed-off-by: Wells Lu <wellslutw@gmail.com>
Link: https://lore.kernel.org/r/1652443036-24731-1-git-send-email-wellslutw@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Committed by Jakub Kicinski

Merge tag 'linux-can-next-for-5.19-20220516' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next

Marc Kleine-Budde says:

====================
pull-request: can-next 2022-05-16

The first 2 patches are by me and target the CAN raw protocol. The 1st removes an unneeded assignment, the other one adds support for SO_TXTIME/SCM_TXTIME.

Oliver Hartkopp contributes 2 patches for the ISOTP protocol. The 1st adds support for transmission without flow control, the other lets bind() return an error on incorrect CAN ID formatting.

Geert Uytterhoeven contributes a patch to clean up ctucanfd's Kconfig file.

Vincent Mailhol's patch for the slcan driver uses the proper function to check for invalid CAN frames in the xmit callback.

The next patch is by Geert Uytterhoeven and makes the interrupt-names of the renesas,rcar-canfd dt bindings mandatory.

A patch by me updates the ctucanfd dt bindings to include the common CAN controller bindings.

The last patch is by Akira Yokosawa and fixes a breakage in the ctucanfd documentation.

* tag 'linux-can-next-for-5.19-20220516' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next:
  docs: ctucanfd: Use 'kernel-figure' directive instead of 'figure'
  dt-bindings: can: ctucanfd: include common CAN controller bindings
  dt-bindings: can: renesas,rcar-canfd: Make interrupt-names required
  can: slcan: slc_xmit(): use can_dropped_invalid_skb() instead of manual check
  can: ctucanfd: Let users select instead of depend on CAN_CTUCANFD
  can: isotp: isotp_bind(): return -EINVAL on incorrect CAN ID formatting
  can: isotp: add support for transmission without flow control
  can: raw: add support for SO_TXTIME/SCM_TXTIME
  can: raw: raw_sendmsg(): remove not needed setting of skb->sk
====================

Link: https://lore.kernel.org/r/20220516202625.1129281-1-mkl@pengutronix.de
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Committed by Jakub Kicinski

Ricardo Martinez says:

====================
net: skb: Remove skb_data_area_size()

This patch series removes the skb_data_area_size() helper, replacing it in the t7xx driver with the size used during skb allocation.

https://lore.kernel.org/netdev/CAHNKnsTmH-rGgWi3jtyC=ktM1DW2W1VJkYoTMJV2Z_Bt498bsg@mail.gmail.com/
====================

Link: https://lore.kernel.org/r/20220513173400.3848271-1-ricardo.martinez@linux.intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Committed by Ricardo Martinez

skb_data_area_size() is not needed. As Jakub pointed out [1]:
For Rx, drivers can use the size passed during skb allocation or use skb_tailroom().
For Tx, drivers should use skb_headlen().

[1] https://lore.kernel.org/netdev/CAHNKnsTmH-rGgWi3jtyC=ktM1DW2W1VJkYoTMJV2Z_Bt498bsg@mail.gmail.com/

Signed-off-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Reviewed-by: Sergey Ryazanov <ryazanov.s.a@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
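A hedged sketch of the two substitutions suggested in [1] (the function names and DMA-mapping context are placeholders, not the t7xx code):

    #include <linux/dma-mapping.h>
    #include <linux/skbuff.h>

    /* Rx: size the mapping from the tailroom (or the original alloc size). */
    static dma_addr_t map_rx_skb(struct device *dev, struct sk_buff *skb)
    {
        return dma_map_single(dev, skb->data, skb_tailroom(skb),
                              DMA_FROM_DEVICE);
    }

    /* Tx: map only the linear data actually queued, i.e. skb_headlen(). */
    static dma_addr_t map_tx_skb(struct device *dev, struct sk_buff *skb)
    {
        return dma_map_single(dev, skb->data, skb_headlen(skb),
                              DMA_TO_DEVICE);
    }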
-
Committed by Ricardo Martinez

The skb_data_area_size() helper was used to calculate the size of the DMA mapped buffer passed to the HW. Instead of doing this, use the size passed to allocate the skbs.

Signed-off-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Reviewed-by: Sergey Ryazanov <ryazanov.s.a@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Committed by Jakub Kicinski

Mat Martineau says:

====================
mptcp: Updates for net-next

Three independent fixes/features from the MPTCP tree:

Patch 1 is a selftest workaround for older iproute2 packages.

Patch 2 removes superfluous locks that were added with recent MP_FAIL patches.

Patch 3 adds support for the TCP_DEFER_ACCEPT sockopt.
====================

Link: https://lore.kernel.org/r/20220514002115.725976-1-mathew.j.martineau@linux.intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
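TCP_DEFER_ACCEPT is set from userspace just as on plain TCP sockets; with patch 3 the same call is honoured on an MPTCP listener. A minimal userspace sketch:

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    #ifndef IPPROTO_MPTCP
    #define IPPROTO_MPTCP 262   /* MPTCP protocol number on Linux */
    #endif

    static int mptcp_deferred_listener(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_MPTCP);
        int secs = 5;   /* wake accept() only once data has arrived */

        if (fd < 0)
            return -1;
        setsockopt(fd, IPPROTO_TCP, TCP_DEFER_ACCEPT, &secs, sizeof(secs));
        return fd;      /* bind()/listen()/accept() as usual */
    }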
-