- 10 May 2020, 1 commit
-
-
Committed by Jiri Pirko
The HW supports packet sampling on ingress only. Check for this and fail if the user is adding a sample rule on egress. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
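A minimal sketch of the kind of check described, assuming a hypothetical helper name and standard extack error reporting:

    #include <linux/errno.h>
    #include <linux/netlink.h>

    /* Hypothetical helper: reject sampling on egress, since the HW
     * supports it on ingress only.
     */
    static int sample_check_ingress(bool ingress, struct netlink_ext_ack *extack)
    {
            if (!ingress) {
                    NL_SET_ERR_MSG(extack, "Sampling is not supported on egress");
                    return -EOPNOTSUPP;
            }
            return 0;
    }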
-
- 09 May 2020, 13 commits
-
-
Committed by Tariq Toukan
The same WQE opcode might be used in different ICOSQ flows and WQE types. For better distinguishability, replace it with an enum that better indicates the WQE type and the flow it is used for. Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Reviewed-by: Maxim Mikityanskiy <maximmi@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
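A sketch of what such an enum might look like; the value names are illustrative, not necessarily the ones used in the driver:

    /* Name ICOSQ WQEs by the flow they serve rather than by the raw
     * opcode, which can be shared between flows.
     */
    enum mlx5e_icosq_wqe_type {
            MLX5E_ICOSQ_WQE_NOP,
            MLX5E_ICOSQ_WQE_UMR_RX,
    };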
-
Committed by Tariq Toukan
Including the Ethernet driver header in the core is not needed and actually wrong. Remove it. Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Reviewed-by: Maxim Mikityanskiy <maximmi@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Committed by Tariq Toukan
Struct assignment looks cleaner, and it implicitly resets the unassigned fields to zero instead of keeping values from older ring cycles. Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Reviewed-by: Maxim Mikityanskiy <maximmi@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
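An illustrative, self-contained example of the idiom (the struct and its fields are made up for demonstration):

    #include <linux/types.h>

    struct demo_wqe_info {
            u8    opcode;
            u8    num_wqebbs;
            void *frag;     /* would hold a stale value from an older ring cycle */
    };

    static void demo_set_wqe_info(struct demo_wqe_info *wi, u8 opcode)
    {
            /* Compound-literal assignment zeroes every field not named,
             * so .frag cannot leak a value from a previous cycle.
             */
            *wi = (struct demo_wqe_info) {
                    .opcode     = opcode,
                    .num_wqebbs = 1,
            };
    }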
-
Committed by Tariq Toukan
Move it into the txrx header file. The mlx5e_sq_wqe_info structure describes WQE info for the ICOSQ; rename it to better reflect this. Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Reviewed-by: Maxim Mikityanskiy <maximmi@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Committed by Tariq Toukan
Every single DUMP WQE resides in a single WQEBB. As the pi is calculated for each one separately, there is no real need for contiguous room for them; allow them to populate different WQ fragments. This reduces WQ waste and improves its utilization. Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Reviewed-by: Maxim Mikityanskiy <maximmi@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Committed by Tariq Toukan
For the static and progress context params WQEs, do the edge filling separately. This improves WQ utilization and code readability, and reduces the chance of future bugs. Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Reviewed-by: Maxim Mikityanskiy <maximmi@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Committed by Maxim Mikityanskiy
After the previous modifications, the offloads are no longer called one by one; the pi is calculated and the WQE is cleared in between the TLS and IPSEC offloads, which doesn't quite fit mlx5e_accel_handle_tx's purpose. This patch splits mlx5e_accel_handle_tx into two functions that correspond to the two logical phases of running offloads:

1. Before fetching a WQE. Here runs the code that can post WQEs on its own, before the main WQE is fetched. It's the main part of the TLS offload.

2. After fetching a WQE. Here runs the code that updates the WQE's fields but can't post other WQEs any more. It's a minor part of the TLS offload that sets the tisn field in the cseg, plus the eseg-based offloads (currently IPSEC; later patches will move the GENEVE and checksum offloads there, too).

This allows mlx5e_xmit to take care of all the actions needed to transmit a packet in the right order, improves the structure of the code and reduces unnecessary operations. The structure will be further improved in the following patches (all eseg-based offloads will be moved to a single place, and reserving space for the main WQE will happen between phase 1 and phase 2 of the offloads, to eliminate unneeded data movements). A sketch of the resulting call order follows. Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com> Reviewed-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
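A rough sketch of the resulting call order in the transmit path; the phase split matches the description above, but the exact signatures and the state type are assumptions:

    /* Phase 1: may post standalone WQEs (e.g. for TLS) before the
     * main WQE is fetched; returns false to drop the packet.
     */
    if (unlikely(!mlx5e_accel_tx_begin(dev, sq, skb, &accel)))
            return NETDEV_TX_OK;

    pi  = mlx5e_sq_get_next_pi(sq, num_wqebbs);     /* hypothetical helper */
    wqe = MLX5E_TX_FETCH_WQE(sq, pi);

    /* Phase 2: may only edit fields of the already-fetched WQE,
     * e.g. the tisn in the cseg or IPSEC bits in the eseg.
     */
    mlx5e_accel_tx_finish(priv, sq, wqe, &accel);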
-
Committed by Maxim Mikityanskiy
mlx5e_udp_gso_handle_tx_skb updates the length field in the UDP header in the case of GSO. It doesn't interfere with other offloads, so do it first to simplify further restructuring of the code. This way we make all the independent modifications to the SKB before starting to work with WQEs. Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com> Reviewed-by: Raed Salem <raeds@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Committed by Maxim Mikityanskiy
TLS offload may write a 32-bit field (tisn) to the cseg of the WQE. To do that, it receives pi and wqe pointers. As TLS offload may also send additional WQEs, it has to update pi and wqe, and in many cases it doesn't even use the pi calculated before and the wqe zeroed before, but recalculates them itself. Also, mlx5e_sq_xmit has to copy the whole cseg if it goes into the mlx5e_fill_sq_frag_edge flow. All of this is inefficient. It's more efficient to do the following:

1. Just return tisn from the TLS offload and make the caller fill it in at a more appropriate place.

2. Calculate pi and clear the wqe after calling the TLS offload.

3. If the TLS offload has to send WQEs, calculate pi and clear the wqe just before that. It's already done in all places anyway, so this commit allows removing some redundant memsets and calls.

Copying of the cseg will be eliminated in one of the following commits; all the other stuff is done here. Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com> Reviewed-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Committed by Maxim Mikityanskiy
IPSEC offload needs to modify the eseg of the WQE that is being filled, but it receives a pointer to the whole WQE. To make the contract stricter, pass only a pointer to the eseg of that WQE. This commit is preparation for the following refactoring of offloads in the TX path and for the MPWQE support. Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com> Reviewed-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Committed by Maxim Mikityanskiy
mlx5e_sq_xmit and mlx5i_sq_xmit always return NETDEV_TX_OK. Drop the return value to simplify the code. Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com> Reviewed-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Committed by Maxim Mikityanskiy
Both the INNOVA and ConnectX TLS offloads perform the same checks at the beginning. Unify them to reduce repeated code. Do a WARN_ON_ONCE on netdev mismatch and finish with an error in both offloads, not only in the ConnectX one. Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com> Reviewed-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Committed by Maxim Mikityanskiy
The TLS and IPSEC offloads currently return struct sk_buff *, but the value is either NULL or the same skb that was passed as a parameter. Return bool instead, to provide stronger guarantees to the calling code (it no longer needs to handle a different SKB that could potentially be returned) and to simplify restructuring this code in the following commits. Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com> Reviewed-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
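Illustrative before/after of the contract, using a stand-in hook name:

    /* Before: NULL means "drop"; any non-NULL return is in practice
     * always the same skb that was passed in.
     */
    struct sk_buff *accel_handle_tx(struct mlx5e_txqsq *sq, struct sk_buff *skb);

    /* After: true means "continue transmitting skb", false means
     * "drop"; the skb pointer can no longer change under the caller.
     */
    bool accel_handle_tx(struct mlx5e_txqsq *sq, struct sk_buff *skb);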
-
- 08 May 2020, 2 commits
-
-
Committed by Jacob Keller
The NL_SET_ERR_MSG_MOD macro is used to report a string describing an error message to userspace via the netlink extended ACK structure. It should not have a trailing newline. Add a cocci script which catches cases where the newline marker is present. Using this script, fix the handful of cases which accidentally included a trailing newline. I couldn't figure out a way to get a patch mode working, so this script only implements context, report, and org. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Cc: Jakub Kicinski <kuba@kernel.org> Cc: Andy Whitcroft <apw@canonical.com> Cc: Joe Perches <joe@perches.com> Signed-off-by: David S. Miller <davem@davemloft.net>
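The class of fix looks like this (the message text is made up):

    /* Wrong: extack messages are not printk strings and must not end
     * with a newline.
     */
    NL_SET_ERR_MSG_MOD(extack, "Unsupported action\n");

    /* Right: */
    NL_SET_ERR_MSG_MOD(extack, "Unsupported action");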
-
Committed by Jason Yan
Fix the following coccicheck warning: drivers/net/ethernet/mellanox/mlx4/en_ethtool.c:1396:5-8: Unneeded variable: "err". Return "0" on line 1411. Signed-off-by: Jason Yan <yanaijie@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
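The pattern being cleaned up, shown in a generic before/after form (the function and struct are illustrative):

    /* Before: 'err' is assigned once and never modified. */
    static int demo_set_param(struct demo_dev *dev)
    {
            int err = 0;

            dev->param = 1;
            return err;
    }

    /* After: drop the variable and return 0 directly. */
    static int demo_set_param(struct demo_dev *dev)
    {
            dev->param = 1;
            return 0;
    }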
-
- 07 May 2020, 2 commits
-
-
Committed by Pablo Neira Ayuso
This patch adds FLOW_ACTION_HW_STATS_DONT_CARE, which tells the driver that the frontend does not need counters; this hw stats type request never fails. The FLOW_ACTION_HW_STATS_DISABLED type explicitly requests the driver to disable the stats; however, if the driver cannot disable counters, it bails out. TCA_ACT_HW_STATS_* maintains the 1:1 mapping with FLOW_ACTION_HW_STATS_*, except for disabled, which is mapped to FLOW_ACTION_HW_STATS_DISABLED (this is 0 in tc). Add tc_act_hw_stats() to perform the mapping between TCA_ACT_HW_STATS_* and FLOW_ACTION_HW_STATS_*. Fixes: 319a1d19 ("flow_offload: check for basic action hw stats type") Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: David S. Miller <davem@davemloft.net>
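A sketch of such a mapping helper, relying on the 1:1 bit values described above (disabled is 0 on the tc side); the body is an assumption, not the verbatim implementation:

    static enum flow_action_hw_stats tc_act_hw_stats(u8 hw_stats)
    {
            /* 0 in tc means "disabled", which is a named type on the
             * flow_offload side; all other values map 1:1.
             */
            if (!hw_stats)
                    return FLOW_ACTION_HW_STATS_DISABLED;
            return hw_stats;
    }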
-
Committed by Jason Yan
Fix the following coccicheck warning: drivers/net/ethernet/mellanox/mlx4/en_ethtool.c:1238:5-8: Unneeded variable: "err". Return "0" on line 1252. Signed-off-by: Jason Yan <yanaijie@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 05 May 2020, 1 commit
-
-
Committed by Tariq Toukan
When ENOSPC is set, the idx is still valid and gets set to the global MLX4_SINK_COUNTER_INDEX. However, gcc's static analysis cannot tell that ENOSPC is impossible from mlx4_cmd_imm() and gives this warning: drivers/net/ethernet/mellanox/mlx4/main.c:2552:28: warning: 'idx' may be used uninitialized in this function [-Wmaybe-uninitialized] 2552 | priv->def_counter[port] = idx; Also, when ENOSPC is returned, mlx4_allocate_default_counters should not fail. Fixes: 6de5f7f6 ("net/mlx4_core: Allocate default counter per port") Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 02 May 2020, 2 commits
-
-
Committed by Maor Gottlieb
Add a function to get the device physical port of the LAG slave. Signed-off-by: Maor Gottlieb <maorg@mellanox.com> Reviewed-by: Leon Romanovsky <leonro@mellanox.com> Acked-by: David S. Miller <davem@davemloft.net> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Committed by Maor Gottlieb
The LAG lock can be a spin lock; the critical section is short and there is no need for the thread to sleep. Change the lock that protects the LAG structure from a mutex to a spin lock. This is required for the next patch, which needs to access this structure from a context in which we can't sleep. In addition, there is no need to hold this lock when querying the congestion counters. Signed-off-by: Maor Gottlieb <maorg@mellanox.com> Reviewed-by: Leon Romanovsky <leonro@mellanox.com> Acked-by: David S. Miller <davem@davemloft.net> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
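The shape of the conversion, on a made-up stand-in structure:

    #include <linux/spinlock.h>

    struct demo_lag {
            spinlock_t lock;        /* was: struct mutex lock; */
            int        port_map[2];
    };

    /* The critical section is short and never sleeps, so a spin lock
     * is safe here and, unlike a mutex, usable from atomic context.
     */
    static void demo_lag_set_port_map(struct demo_lag *ldev, int p0, int p1)
    {
            spin_lock(&ldev->lock);
            ldev->port_map[0] = p0;
            ldev->port_map[1] = p1;
            spin_unlock(&ldev->lock);
    }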
-
- 01 May 2020, 19 commits
-
-
Committed by Jiri Pirko
The vregion helpers that get the min and max priority depend on the correct ordering of vchunks in the vregion list. However, the current code always adds a new vchunk to the end of the list, no matter what its priority is. Fix this by finding the correct place in the list and putting the vchunk there. Fixes: 22a67766 ("mlxsw: spectrum: Introduce ACL core with simple TCAM implementation") Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
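A sketch of the ordered insert, with hypothetical struct and field names:

    #include <linux/list.h>

    struct demo_vchunk {
            struct list_head list;
            unsigned int     priority;
    };

    /* Keep the list sorted by priority so the min/max helpers can
     * simply look at the head and the tail.
     */
    static void demo_vchunk_insert(struct list_head *head, struct demo_vchunk *new)
    {
            struct demo_vchunk *pos;

            list_for_each_entry(pos, head, list) {
                    if (pos->priority > new->priority)
                            break;
            }
            /* list_add_tail() inserts before 'pos'; if no higher priority
             * was found, &pos->list is the list head and this appends.
             */
            list_add_tail(&new->list, &pos->list);
    }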
-
Committed by Ido Schimmel
Remove the old SPAN API now that matchall-based and flower-based mirroring have been converted to use the new API. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Ido Schimmel
As previously explained, each port whose outgoing traffic is analyzed needs to have an egress mirror buffer. The size of the egress mirror buffer is calculated based on various parameters, two of which are the speed and the MTU of the port. Therefore, when the MTU or the speed of a port changes, the SPAN code is called to see if the egress mirror buffer of the port needs to be adjusted. Currently, this is done by traversing all the SPAN agents, and for each SPAN agent the list of bound ports is traversed. Instead of the above, traverse the recently added list of analyzed ports. This will later allow us to remove the old SPAN API. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Ido Schimmel
In flower-based mirroring, mirroring is done with ACLs and the SPAN agent is not bound to a port. Instead, its identifier is specified in an ACL action. Convert this type of mirroring to use the new API. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Ido Schimmel
In matchall-based mirroring, mirroring is not done with ACLs; instead, a SPAN agent is bound to the ingress / egress of a port and all incoming / outgoing traffic is mirrored. Convert this type of mirroring to use the new API. First the SPAN agent is resolved, then the port is marked as analyzed and its egress mirror buffer is potentially allocated. Lastly, the binding is performed. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Ido Schimmel
Currently, a SPAN agent can only be bound to a per-port trigger, where the trigger is either an incoming packet (INGRESS) or an outgoing packet (EGRESS) to / from the port. A follow-up patch set will introduce the concept of global triggers and per-{port, TC} enablement. With global triggers, the trigger entry is keyed only by a trigger, not by a port and a trigger. The trigger can be, for example, a packet that was early-dropped. While the binding between the SPAN agent and the trigger is performed only once, the trigger entry needs to be reference counted, as the trigger can be enabled on multiple ports. Add APIs to bind / unbind a SPAN agent to a trigger and reference count the trigger entry, in preparation for global triggers. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Ido Schimmel
The code that adjusts the egress buffer size is not symmetric at the moment. The update is done via a call to mlxsw_sp_span_port_buffer_update(), but the disablement is done inline by invoking the write to the SBIB register directly. Wrap the disablement code in mlxsw_sp_span_port_buffer_disable(). Signed-off-by: Ido Schimmel <idosch@mellanox.com> Suggested-by: Petr Machata <petrm@mellanox.com> Reviewed-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Ido Schimmel
The next patch will introduce the mlxsw_sp_span_port_buffer_disable() function, which disables the egress buffer on an analyzed port. Rename the opposite function, which updates the buffer on an analyzed port, accordingly. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Suggested-by: Petr Machata <petrm@mellanox.com> Reviewed-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Ido Schimmel
An analyzed port is a port whose incoming / outgoing traffic is mirrored to a SPAN agent and analyzed on a remote server. A port can be analyzed by multiple tc filters, and therefore the corresponding analyzed port entry needs to be reference counted. This is significant because ports whose outgoing traffic is analyzed need to have an egress mirror buffer. Add APIs to get / put an analyzed port. Allocate an egress mirror buffer on a port when it is first inspected at egress, and free the buffer when it is no longer inspected at egress. Protect the list of analyzed ports with a mutex, as a later patch will traverse it from a context in which the RTNL lock is not held. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
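A sketch of the get side of the described API, with hypothetical names and error handling elided (the real code also allocates the egress mirror buffer when a port is first analyzed at egress):

    static struct demo_analyzed_port *
    demo_analyzed_port_get(struct demo_span *span, u16 local_port, bool egress)
    {
            struct demo_analyzed_port *ap;

            /* A dedicated mutex, not RTNL: a later patch traverses
             * this list from a context where RTNL is not held.
             */
            mutex_lock(&span->analyzed_ports_lock);
            ap = demo_analyzed_port_find(span, local_port, egress);
            if (ap) {
                    refcount_inc(&ap->ref_count);
                    goto out;
            }
            ap = demo_analyzed_port_create(span, local_port, egress);
    out:
            mutex_unlock(&span->analyzed_ports_lock);
            return ap;
    }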
-
Committed by Ido Schimmel
Given a netdev that packets should be mirrored to, create a SPAN agent and return its identifier to the caller. The SPAN agent is reference counted, as multiple tc-mirred actions can point to the same destination netdev. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Maxim Mikityanskiy
In our fast-path design, a WQE (Work Queue Element) must not cross the page boundary. To enforce that, for WQEs consisting of more than one BB (Basic Block), the driver checks the available contiguous space in the WQ in advance and, if it's not enough, pads it with NOPs. This patch modifies the code that calculates the position of the next WQE, considering the padding, and prepares the WQE. This code is common to all SQ types. In this patch it is reorganized in a way that makes the usage pattern unified for all SQ types, and makes the implementations self-contained and look almost the same, preparing the repeating code for further attempts to deduplicate it. One place is left as-is: mlx5e_sq_xmit and the mlx5e_fill_sq_frag_edge call inside it, because it is special in that it may also copy the WQE's cseg and eseg when reserving space. This will be eliminated in one of the following patches, and this place will be converted to the new approach, too. Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com> Reviewed-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
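A sketch of the unified calculation; the WQ helpers named below exist in the mlx5 WQ code, but the wrapper function and surrounding details are assumptions:

    /* If the WQE would not fit contiguously in the current WQ
     * fragment, pad up to the fragment edge with NOPs first.
     */
    static u16 demo_sq_get_next_pi(struct mlx5e_txqsq *sq, u8 num_wqebbs)
    {
            struct mlx5_wq_cyc *wq = &sq->wq;
            u16 pi, contig_wqebbs;

            pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
            contig_wqebbs = mlx5_wq_cyc_get_contig_wqebbs(wq, pi);
            if (unlikely(contig_wqebbs < num_wqebbs)) {
                    while (contig_wqebbs--)
                            mlx5e_post_nop(wq, sq->sqn, &sq->pc);
                    pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
            }
            return pi;
    }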
-
Committed by Maxim Mikityanskiy
The structs mlx5e_txqsq and mlx5e_xdpsq contain wqe_info arrays to store supplementary information corresponding to WQEs in the queue. Struct mlx5e_icosq also has such an array, but it's named differently: ico_wqe. This patch renames it to unify it with the other SQs. In addition, rename the struct to emphasize its specific usage. Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com> Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Committed by Maxim Mikityanskiy
There are multiple mlx5{e,i}_*_fetch_wqe functions that contain the same code, repeated because they operate on different SQ struct types. mlx5e_sq_fetch_wqe also returns void * instead of the concrete WQE type. This commit generalizes the fetch WQE operation by putting this code into a single function. To simplify calls of the generic function in concrete use cases, macros are provided that substitute the right WQE size and cast the return type. Before this patch, fetch_wqe used to calculate pi itself, but the value was often already known to the caller. This calculation is moved outside, to eliminate this unnecessary step and prepare for the fill_frag_edge refactoring in the next patch. Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com> Reviewed-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
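A sketch of the generalized helper plus a per-type wrapper macro (the names are illustrative, not guaranteed to match the driver's):

    static inline void *demo_fetch_wqe(struct mlx5_wq_cyc *wq, u16 pi,
                                       size_t wqe_size)
    {
            void *wqe = mlx5_wq_cyc_get_wqe(wq, pi);

            memset(wqe, 0, wqe_size);
            return wqe;
    }

    /* Per-SQ-type wrappers substitute the right WQE size and cast the
     * return value, so callers keep concrete WQE pointers.
     */
    #define DEMO_TX_FETCH_WQE(sq, pi) \
            ((struct mlx5e_tx_wqe *)demo_fetch_wqe(&(sq)->wq, pi, \
                                                   sizeof(struct mlx5e_tx_wqe)))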
-
Committed by Tariq Toukan
Upon an error completion on an XDP SQ, print the offending WQE to ease the debug process. Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Reviewed-by: Aya Levin <ayal@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Committed by Tariq Toukan
The error CQE was dumped only for TXQ SQs. Generalise the function, and add usage for error completions on ICO SQs and XDP SQs. Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Reviewed-by: Aya Levin <ayal@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Committed by Tariq Toukan
Even though some of the WQE control segment's fields share the same memory bits (a union of fields), prefer having the right field name for every different usage. Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Reviewed-by: Maxim Mikityanskiy <maximmi@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Committed by Eran Ben Elisha
If the FW sets the release_all_pages bit in MLX5_EVENT_TYPE_PAGE_REQUEST, the driver shall release all pages of a given function id, with no further page reclaim negotiation with the FW and no MANAGE_PAGES commands from the driver towards the FW. Upon receiving this bit as part of a pages reclaim event, the driver will initiate the release-all flow, in which it iterates over and releases all of the function's pages. As part of the driver <-> FW capabilities handshake, the FW will report the release_all_pages max HCA cap bit, and the driver will set the release_all_pages bit in the HCA cap.

NIC: ConnectX-4 Lx
CPU: Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz
Test case: simultaneously FLR 4 VFs, and measure how long the driver takes to release the FW pages.
Before: 3.18 sec
After: 0.31 sec

Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com> Reviewed-by: Moshe Shemesh <moshe@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Committed by Eran Ben Elisha
Thousands of pages are released with the free_addr() function. In the case of a buggy sync between the FW and the driver on a released address, the log would be flooded with error messages. Use mlx5_core_warn_rl() to rate-limit it. Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
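A sketch of the change; the rate-limited macro is the one the commit names, but the message text and surrounding code are illustrative:

    /* Rate-limited: a buggy FW/driver sync would otherwise emit one
     * error line per released page and flood the log.
     */
    if (!fwp) {
            mlx5_core_warn_rl(dev, "page not found when freeing addr 0x%llx\n",
                              addr);
            return;
    }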
-
Committed by Eran Ben Elisha
Factor out the fwp address release into a helper function; it will be used in a downstream patch. Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com> Reviewed-by: Moshe Shemesh <moshe@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-