- 08 Jan 2021, 12 commits
-
-
By Jakub Kicinski
All UDP tunnel port management is now routed directly via the udp_tunnel_nic infrastructure. Remove the old callbacks. Reviewed-by: Alexander Duyck <alexanderduyck@fb.com> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
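For context, a hedged sketch of the model that replaces those callbacks (not this patch's diff): with udp_tunnel_nic, a driver publishes a static table description plus set/unset callbacks in `struct udp_tunnel_nic_info` instead of implementing the old per-driver UDP-tunnel ndo hooks. The `my_*` names and `my_hw_*` helpers below are hypothetical.

```c
#include <net/udp_tunnel.h>

/* Hypothetical driver callbacks invoked by the udp_tunnel_nic core. */
static int my_udp_tunnel_set_port(struct net_device *dev, unsigned int table,
				  unsigned int entry, struct udp_tunnel_info *ti)
{
	/* my_hw_vxlan_add() stands in for the device-specific command */
	return my_hw_vxlan_add(netdev_priv(dev), be16_to_cpu(ti->port));
}

static int my_udp_tunnel_unset_port(struct net_device *dev, unsigned int table,
				    unsigned int entry, struct udp_tunnel_info *ti)
{
	return my_hw_vxlan_del(netdev_priv(dev), be16_to_cpu(ti->port));
}

static const struct udp_tunnel_nic_info my_udp_tunnels = {
	.set_port	= my_udp_tunnel_set_port,
	.unset_port	= my_udp_tunnel_unset_port,
	.tables		= {
		{ .n_entries = 4, .tunnel_types = UDP_TUNNEL_TYPE_VXLAN, },
	},
};

/* In the probe path: netdev->udp_tunnel_nic_info = &my_udp_tunnels; */
```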
-
By Dinghao Liu
When mlx5_create_flow_group() fails, ft->g should be freed just as when kvzalloc() fails. The caller of mlx5e_create_l2_table_groups() does not catch this issue on failure, which leads to a memory leak. Fixes: 33cfaaa8 ("net/mlx5e: Split the main flow steering table") Signed-off-by: Dinghao Liu <dinghao.liu@zju.edu.cn> Reviewed-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
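A simplified sketch of the error-path pattern this fix establishes (not the exact mlx5e code; `MAX_GROUPS` and the `in` parameter are placeholders): free ft->g on the group-creation failure path just as on the allocation failure path.

```c
static int create_groups_sketch(struct mlx5e_flow_table *ft, u32 *in)
{
	int err;

	ft->g = kcalloc(MAX_GROUPS, sizeof(*ft->g), GFP_KERNEL);
	if (!ft->g)
		return -ENOMEM;

	ft->g[ft->num_groups] = mlx5_create_flow_group(ft->t, in);
	if (IS_ERR(ft->g[ft->num_groups])) {
		err = PTR_ERR(ft->g[ft->num_groups]);
		kfree(ft->g);	/* the previously missed free */
		ft->g = NULL;
		return err;
	}
	ft->num_groups++;
	return 0;
}
```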
-
By Dinghao Liu
mlx5e_create_ttc_table_groups() frees ft->g on failure of kvzalloc(), but such a failure will be caught by its caller in mlx5e_create_ttc_table(), and ft->g will be freed again in mlx5e_destroy_flow_table(). The same issue also occurs in mlx5e_create_l2_table_groups(). Set ft->g to NULL after kfree() to avoid the double free. Fixes: 7b3722fa ("net/mlx5e: Support RSS for GRE tunneled packets") Fixes: 33cfaaa8 ("net/mlx5e: Split the main flow steering table") Signed-off-by: Dinghao Liu <dinghao.liu@zju.edu.cn> Reviewed-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
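The NULLing idiom, as a minimal fragment (simplified from the description above): kfree(NULL) is defined to be a no-op, so the later free in mlx5e_destroy_flow_table() becomes harmless.

```c
	in = kvzalloc(inlen, GFP_KERNEL);
	if (!in) {
		kfree(ft->g);
		ft->g = NULL;	/* caller's cleanup may kfree(ft->g) again: now a no-op */
		return -ENOMEM;
	}
```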
-
By Leon Romanovsky
Add the missed freeing of a previously allocated devlink object. Fixes: a925b5e3 ("net/mlx5: Register mlx5 devices to auxiliary virtual bus") Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
By Aya Levin
Prior to this patch, configuring a speed of 50G with autoneg off on devices supporting 50G per lane failed. Support for 50G per lane introduced a new set of link modes, but the driver always performed speed validation as if only legacy link modes were configured. Fix the driver's speed validation to force setting autoneg for 56G only when in a legacy link mode. Fixes: 3d7cadae ("net/mlx5e: ethtool, Fix analysis of speed setting") Signed-off-by: Aya Levin <ayal@nvidia.com> Reviewed-by: Eran Ben Elisha <eranbe@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
By Maor Dickman
The sop_drop_qpn field in the CQE is used by two features: in SWITCHDEV mode to restore the chain id in case of a miss, and in LEGACY mode to support the skbedit mark action. When building the RX skb, the skb mark field is set regardless of the configured mode, which corrupts the mark field in switchdev mode. Fix by overriding the mark value back to 0 in the representor tc update-skb flow. Fixes: 8f1e0b97 ("net/mlx5: E-Switch, Mark miss packets with new chain id mapping") Signed-off-by: Maor Dickman <maord@nvidia.com> Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
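A hedged sketch of the mode-dependent handling described above (the mask name is illustrative, not the driver's; only the branch structure is taken from the description):

```c
static void sketch_rx_handle_cqe_mark(struct mlx5_cqe64 *cqe,
				      struct sk_buff *skb,
				      bool switchdev_mode)
{
	/* ILLUSTRATIVE_MARK_MASK stands in for the real field extraction */
	u32 val = be32_to_cpu(cqe->sop_drop_qpn) & ILLUSTRATIVE_MARK_MASK;

	if (switchdev_mode)
		skb->mark = 0;	/* bits carry the miss chain mapping, not a mark */
	else
		skb->mark = val;	/* legacy mode: skbedit mark action */
}
```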
-
By Alaa Hleihel
Adding a vf VLANID for the first time, or after having cleared a previously defined VLANID, works fine; however, attempting to change an existing vf VLANID clears the rules on the firmware but does not add new rules for the new vf VLANID. Fix this by changing the logic in esw_acl_egress_lgcy_setup() so that it always configures the egress rules. Fixes: ea651a86 ("net/mlx5: E-Switch, Refactor eswitch egress acl codes") Signed-off-by: Alaa Hleihel <alaa@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
By Moshe Shemesh
When the WQE includes an inline header, the VLAN is inserted by the driver even if VLAN offload is set. On a geneve-over-VLAN interface, where the software parser is used, the SWP offsets should be updated to account for the added VLAN. Fixes: e3cfc7e6 ("net/mlx5e: TX, Add geneve tunnel stateless offload support") Signed-off-by: Moshe Shemesh <moshe@mellanox.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
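To make the offset arithmetic concrete, a hedged sketch (SWP offsets are expressed in 2-byte words, so a 4-byte VLAN tag shifts each one by VLAN_HLEN / 2; the helper name is made up, and the eseg field names follow include/linux/mlx5/qp.h):

```c
#include <linux/if_vlan.h>

static void sketch_swp_account_vlan(struct mlx5_wqe_eth_seg *eseg,
				    bool driver_inserted_vlan)
{
	if (!driver_inserted_vlan)
		return;

	/* everything after the MAC addresses moves 4 bytes == 2 words down */
	eseg->swp_outer_l3_offset += VLAN_HLEN / 2;
	eseg->swp_outer_l4_offset += VLAN_HLEN / 2;
	eseg->swp_inner_l3_offset += VLAN_HLEN / 2;
	eseg->swp_inner_l4_offset += VLAN_HLEN / 2;
}
```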
-
By Oz Shlomo
Connection counters may be shared between both directions when the counter is used for connection aging purposes. However, if TC flow accounting is enabled, a unique counter is required per direction. Instantiate a unique counter per direction when the conntrack accounting extension is enabled, and use a shared counter when it is disabled. Fixes: 1edae233 ("net/mlx5e: CT: Use the same counter for both directions") Signed-off-by: Oz Shlomo <ozsh@nvidia.com> Reported-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Paul Blakey <paulb@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
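A sketch of the selection policy, assuming stand-in helper names (nf_conn_acct_find() returns the conntrack accounting extension if present, which is the condition the description names):

```c
#include <net/netfilter/nf_conntrack_acct.h>

static struct mlx5_fc *
sketch_ct_get_counter(struct mlx5_core_dev *dev, struct nf_conn *ct,
		      struct mlx5_fc *shared_counter)
{
	/* accounting extension present: each direction needs its own counter */
	if (nf_conn_acct_find(ct))
		return mlx5_fc_create(dev, true);

	/* aging only: one counter can serve both directions */
	return shared_counter;
}
```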
-
By Mark Zhang
In multi-port mode, the FW reports syndrome 0x2ea48 (invalid vhca_port_number) if the port_num is not 1 or 2. Fixes: 80f09dfc ("net/mlx5: Eswitch, enable RoCE loopback traffic") Signed-off-by: Mark Zhang <markzhang@nvidia.com> Reviewed-by: Maor Gottlieb <maorg@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
By Aya Levin
Expose a firmware indication that it supports setting the eswitch uplink state to "follow" (follow the physical link), and condition setting the eswitch uplink admin state on this capability bit, since older FW may not support the uplink state setting. Fixes: 7d0314b1 ("net/mlx5e: Modify uplink state on interface up/down") Signed-off-by: Aya Levin <ayal@nvidia.com> Reviewed-by: Moshe Shemesh <moshe@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
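A hedged sketch of the gating (the capability field and admin-state names below are inferred from the description and should be treated as illustrative, not as the patch's exact identifiers):

```c
static void sketch_uplink_state_follow(struct mlx5_core_dev *mdev)
{
	/* older FW without the capability bit: leave the uplink state alone */
	if (!MLX5_CAP_GEN(mdev, uplink_follow))
		return;

	/* "follow the physical link" maps to an automatic admin state */
	mlx5_modify_vport_admin_state(mdev, MLX5_VPORT_STATE_OP_MOD_UPLINK,
				      0, 0, MLX5_VPORT_ADMIN_STATE_AUTO);
}
```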
-
By Mark Zhang
This patch fixes a memory leak by not creating a lag and adding PFs when lag is not supported. The kmemleak report: comm "python3", pid 349349, jiffies 4296985507 (age 1446.976s) hex dump (first 32 bytes): 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ backtrace: [<000000005b216ae7>] mlx5_lag_add+0x1d5/0x3f0 [mlx5_core] [<000000000445aa55>] mlx5e_nic_enable+0x66/0x1b0 [mlx5_core] [<00000000c56734c3>] mlx5e_attach_netdev+0x16e/0x200 [mlx5_core] [<0000000030439d1f>] mlx5e_attach+0x5c/0x90 [mlx5_core] [<0000000018fd8615>] mlx5e_add+0x1a4/0x410 [mlx5_core] [<0000000068bc504b>] mlx5_add_device+0x72/0x120 [mlx5_core] [<000000009fce51f9>] mlx5_register_device+0x77/0xb0 [mlx5_core] [<00000000d0d81ff3>] mlx5_load_one+0xc58/0x1eb0 [mlx5_core] [<0000000045077adc>] init_one+0x3ea/0x920 [mlx5_core] [<0000000043287674>] pci_device_probe+0xcd/0x150 [<00000000dafd3279>] really_probe+0x1c9/0x4b0 [<00000000f06bdd84>] driver_probe_device+0x5d/0x140 [<00000000e3d508b6>] device_driver_attach+0x4f/0x60 [<0000000084fba0f0>] bind_store+0xbf/0x120 [<00000000bf6622b3>] kernfs_fop_write+0x114/0x1b0 Fixes: 9b412cc3 ("net/mlx5e: Add LAG warning if bond slave is not lag master") Signed-off-by: Mark Zhang <markzhang@nvidia.com> Reviewed-by: Leon Romanovsky <leonro@nvidia.com> Reviewed-by: Maor Gottlieb <maorg@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
- 06 Jan 2021, 17 commits
-
-
By Zheng Yongjun
Use kzalloc rather than kcalloc(1, ...). The semantic patch that makes this change is as follows (http://coccinelle.lip6.fr/): // <smpl> @@ @@ - kcalloc(1, + kzalloc( ...) // </smpl> Signed-off-by: Zheng Yongjun <zhengyongjun3@huawei.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
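A concrete instance of the transformation (illustrative; `entry` is a hypothetical pointer). Both calls return zeroed memory for a single object, so behavior is unchanged; kzalloc() simply states the intent directly:

```c
/* before: allocating one zeroed element — the count argument is noise */
entry = kcalloc(1, sizeof(*entry), GFP_KERNEL);

/* after: same zeroed allocation, intent stated directly */
entry = kzalloc(sizeof(*entry), GFP_KERNEL);
```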
-
By Yevgeny Kliteynik
Move the HW-specific modify-header fields and logic to the STEv0 file and use the new STE context callbacks. Since the STEv0 and STEv1 modify-action values are different, each version has its own implementation. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
By Yevgeny Kliteynik
Extend the STE context struct with per-device modify-header actions. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
By Yevgeny Kliteynik
Use the per-device STE tx/rx actions API: move the HW-specific action-apply logic from dr_ste to the STEv0 file. The STEv0 and STEv1 action formats are different, so each version should have its own implementation. Signed-off-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
By Yevgeny Kliteynik
Extend the STE context struct with per-device tx/rx actions. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
By Yevgeny Kliteynik
Use the new setters and getters API for STEv0: move the HW-specific setters and getters from dr_ste to the STEv0 file. Since the STEv0 and STEv1 formats are different, each version should implement its own setters and getters. Also rename the remaining static functions to drop the mlx5 prefix. Signed-off-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
By Yevgeny Kliteynik
Extend the STE context struct with various per-device setters and getters. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
By Yevgeny Kliteynik
The action-apply logic is device-specific per STE version; moving it to the STE layer allows implementing it for both devices while keeping the DR upper layers the same. Signed-off-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
By Yevgeny Kliteynik
Rework the ICMP tag builder to better handle ICMPv4/v6 fields and avoid unneeded code duplication and 'if' statements; also remove an unused macro and change a bitfield of length 8 to u8. Signed-off-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
By Yevgeny Kliteynik
The lookup types are device-specific and should not be exposed to the DR upper layers (matchers/tables); each HW STE version should keep them internal. The lu_type size is updated to support the larger lu_types required for STEv1. Signed-off-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
By Yevgeny Kliteynik
Merge similar DR_STE_SET macros for better code reuse: DR_STE_SET_MASK_V and DR_STE_SET_TAG are merged to avoid generating separate tag and bit_mask functions, which are usually the same. Signed-off-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
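For reference, a sketch of the merged macro's shape (simplified; the real definition lives in the DR STE header): one macro both writes the tag field and clears the consumed match bit, so a nearly identical bit_mask variant no longer needs to be generated.

```c
#define DR_STE_SET_TAG(lookup_type, tag, t_fname, spec, s_fname) do {	\
	if ((spec)->s_fname) {						\
		MLX5_SET(ste_##lookup_type, tag, t_fname,		\
			 (spec)->s_fname);				\
		(spec)->s_fname = 0;	/* mark the field as consumed */\
	}								\
} while (0)
```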
-
By Yevgeny Kliteynik
Check vport_cap only if a match on the source gvmi is required. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
By Yevgeny Kliteynik
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
By Yevgeny Kliteynik
Move the current STE match logic to a separate file. This file will be used for the HW-specific STEv0; future patches will add functionality for v1 steering. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
By Yevgeny Kliteynik
Split the STE builders' functionality into a common part and a device-specific part. All the device-specific parts (with 'v0' in the function names) are accessed through the STE context structure. Subsequent patches will move the device-specific logic to a separate file. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
By Yevgeny Kliteynik
Move some macros from dr_ste.c to the header; these macros will be used by all the format-specific functions. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
By Yevgeny Kliteynik
Add a struct of device-specific callbacks for the STE layer below dr_ste. Each device will implement its HW-specific functions, and the common DR code will access them through the new ste_ctx API. More callbacks will follow in subsequent patches. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
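A minimal sketch of the pattern (member and function names illustrative; per this series, the real struct grows callback by callback): DR common code resolves a per-version ops table once and invokes STE operations only through it.

```c
struct mlx5dr_ste_ctx {
	/* one callback per HW-specific STE operation; more follow later */
	void (*build_eth_l2_src_dst_init)(struct mlx5dr_ste_build *sb,
					  struct mlx5dr_match_param *mask);
};

/* STEv0 (ConnectX-5 class) implementation */
static struct mlx5dr_ste_ctx ste_ctx_v0 = {
	.build_eth_l2_src_dst_init = dr_ste_v0_build_eth_l2_src_dst_init,
};

struct mlx5dr_ste_ctx *mlx5dr_ste_get_ctx(u8 version)
{
	if (version == MLX5_STEERING_FORMAT_CONNECTX_5)
		return &ste_ctx_v0;
	return NULL;	/* STEv1 support arrives in a later series */
}
```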
-
- 15 Dec 2020, 11 commits
-
-
By Thomas Gleixner
Using the interrupt affinity mask for checking locality does not really work well on architectures which support effective affinity masks. The affinity mask is either the system-wide default or set by user space, but the architecture can (or even must) reduce the mask to the effective set, which means that checking the affinity mask itself does not really tell about the actual target CPUs. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Saeed Mahameed <saeedm@nvidia.com> Link: https://lore.kernel.org/r/20201210194044.876342330@linutronix.de
-
By Thomas Gleixner
No driver has any business with the internals of an interrupt descriptor. Storing a pointer to it just to use yet another helper at the actual usage site to retrieve the affinity mask is creative at best. Just because C does not allow encapsulation does not mean that the kernel has no limits. Retrieve a pointer to the affinity mask itself and use that. It is still an interface which is usually not for random drivers, but definitely less hideous than the previous hack. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Saeed Mahameed <saeedm@nvidia.com> Link: https://lore.kernel.org/r/20201210194044.769458162@linutronix.de
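The shape of the replacement, as a hedged sketch (the locality-check function is illustrative; irq_get_affinity_mask() is the core helper the description refers to, and the companion locality patches use the effective mask via an analogous accessor):

```c
#include <linux/interrupt.h>

static bool sketch_irq_is_local(unsigned int irq, int cpu)
{
	/* old (hideous): poking at irq_to_desc(irq) internals for the mask */
	const struct cpumask *mask = irq_get_affinity_mask(irq);

	return mask && cpumask_test_cpu(cpu, mask);
}
```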
-
By Thomas Gleixner
Using the interrupt affinity mask for checking locality does not really work well on architectures which support effective affinity masks. The affinity mask is either the system-wide default or set by user space, but the architecture can (or even must) reduce the mask to the effective set, which means that checking the affinity mask itself does not really tell about the actual target CPUs. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Tariq Toukan <tariqt@nvidia.com> Link: https://lore.kernel.org/r/20201210194044.672935978@linutronix.de
-
By Thomas Gleixner
No driver has any business with the internals of an interrupt descriptor. Storing a pointer to it just to use yet another helper at the actual usage site to retrieve the affinity mask is creative at best. Just because C does not allow encapsulation does not mean that the kernel has no limits. Retrieve a pointer to the affinity mask itself and use that. It is still an interface which is usually not for random drivers, but definitely less hideous than the previous hack. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Tariq Toukan <tariqt@nvidia.com> Link: https://lore.kernel.org/r/20201210194044.580936243@linutronix.de
-
By Jiri Pirko
In case the eXtended mezzanine is present on the system, use it for IPv4 router offload. Signed-off-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
By Jiri Pirko
Set a profile option to instruct the FW to use 1/2 of the KVH for the XLT cache, not the whole of it. Signed-off-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
By Jiri Pirko
Upon route insertion and removal, possibly cached entries need to be flushed from the XM cache. Extend the XM op context to carry the information needed for the flush. Implement the flush in a delayed work, since for HW design reasons there is a need to wait 50usec before the flush can be done. If the same flush request arrives during this time, consolidate it with the first one. Track these queued flushes in a hashtable. v2: * Fix GENMASK() high bit Signed-off-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
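A hedged sketch of the consolidation scheme (all names hypothetical; the real implementation is in mlxsw's XM router code): a hashtable keyed by the flush parameters lets a request arriving inside the 50usec window piggyback on the already-queued work item instead of queueing another.

```c
#include <linux/rhashtable.h>
#include <linux/workqueue.h>
#include <linux/slab.h>

#define XM_FLUSH_DELAY_US	50	/* HW-mandated settle time */

struct xm_flush_node {
	struct rhash_head ht_node;
	struct xm_flush_key key;	/* derived from prefix/M-index */
	struct delayed_work dw;
};

static void sketch_xm_cache_flush_schedule(struct xm_flush_ctx *ctx,
					   const struct xm_flush_key *key)
{
	struct xm_flush_node *node;

	node = rhashtable_lookup_fast(&ctx->ht, key, xm_flush_ht_params);
	if (node)
		return;	/* an equivalent flush is already pending: consolidate */

	node = kzalloc(sizeof(*node), GFP_KERNEL);
	if (!node)
		return;	/* error handling elided in this sketch */

	node->key = *key;
	INIT_DELAYED_WORK(&node->dw, sketch_xm_cache_flush_work);
	rhashtable_insert_fast(&ctx->ht, &node->ht_node, xm_flush_ht_params);
	queue_delayed_work(ctx->wq, &node->dw,
			   usecs_to_jiffies(XM_FLUSH_DELAY_US));
}
```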
-
By Jiri Pirko
The RLPMCE register allows disabling the LPM cache; this can be changed on the fly. Signed-off-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
By Jiri Pirko
The RLCMLD register is used to bulk-delete the XLT-LPM cache ML entries. This can be used by SW when L is increased or decreased and entries with old ML values need to be removed. Signed-off-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
By Jiri Pirko
There is a table that assigns an L-value per M-index. The L-value is always the largest among the currently inserted prefixes. Set up a hashtable to track the M-index information and the prefixes related to it, and ensure the L-value is always correctly set. Signed-off-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
By Jiri Pirko
The XRMT register configures the M-Table for the XLT-LPM. Signed-off-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-