- 17 Sep 2020, 2 commits
-
-
Submitted by Petr Machata

When a priority is marked as lossless using DCB PFC, or when pause frames are enabled on a port, mlxsw adds extra space to the port buffers to cover the traffic that will arrive between the time a pause or PFC frame is emitted and the time traffic actually stops. This is called the delay. The concept is the same for PFC and pause, but the way the extra buffer space is calculated differs. This patch unifies that handling. The delay is now measured in bytes of extra space and does not include the MTU. The PFC handler sets the delay directly from the parameter it gets through the DCB interface. To convert the pause handler, move MLXSW_SP_PAUSE_DELAY to the ethtool module, convert it to bytes, reduce it by the maximum MTU, and divide by two. It then has the same meaning as the delay_bytes set by the PFC handler. Keep the delay_bytes value in the struct mlxsw_sp_hdroom introduced in the previous patch. Change the PFC and pause handlers to store the new delay value there, and have __mlxsw_sp_port_headroom_set() take it from there. Instead of mlxsw_sp_pfc_delay_get() and mlxsw_sp_pg_buf_delay_get(), introduce mlxsw_sp_hdroom_buf_delay_get() to calculate the delay provision. Drop the unnecessary MLXSW_SP_CELL_FACTOR, and instead add an explanatory comment describing the formula used.

Signed-off-by: Petr Machata <petrm@nvidia.com>
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
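A minimal sketch of the pause-side conversion described here, assuming hypothetical names (MLXSW_SP_PAUSE_DELAY_BYTES, the max_mtu parameter and the helper itself are illustrative, not the actual mlxsw definitions):

    /* Illustrative only: convert the legacy pause delay constant into the
     * unified delay_bytes convention (bytes of extra space, excluding MTU).
     */
    static void mlxsw_sp_port_pause_delay_set(struct mlxsw_sp_hdroom *hdroom,
                                              u32 max_mtu)
    {
            /* Subtract the maximum MTU and halve, so the value matches the
             * delay_bytes that the PFC handler stores.
             */
            hdroom->delay_bytes = (MLXSW_SP_PAUSE_DELAY_BYTES - max_mtu) / 2;
    }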
-
Submitted by Petr Machata

The port headroom handling is currently strewn across several modules and tricky to follow: MTU, DCB PFC, DCB ETS and ethtool pause all influence the settings, and then there is the completely separate initial configuration in spectrum_buffers. A following patch will implement the dcbnl_setbuffer callback, which is going to further complicate the landscape. In order to simplify work with port buffers, the following patches are going to centralize all port-buffer handling in spectrum_buffers. As a first step, introduce a (currently empty) struct mlxsw_sp_hdroom that will keep the configuration parameters, and allocate and free it in the appropriate places.

Signed-off-by: Petr Machata <petrm@nvidia.com>
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
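A sketch of the shape this refactor takes, with illustrative names only; the struct is empty in this patch, the member shown is only added by follow-up patches, and the helper names and the hdroom pointer are assumptions:

    /* Illustrative only: per-port headroom configuration centralized in
     * spectrum_buffers.
     */
    struct mlxsw_sp_hdroom {
            u32 delay_bytes;        /* filled in by later patches */
    };

    /* Allocated when the port is created, freed when it is removed. */
    static int mlxsw_sp_port_hdroom_alloc(struct mlxsw_sp_port *mlxsw_sp_port)
    {
            mlxsw_sp_port->hdroom = kzalloc(sizeof(*mlxsw_sp_port->hdroom),
                                            GFP_KERNEL);
            return mlxsw_sp_port->hdroom ? 0 : -ENOMEM;
    }

    static void mlxsw_sp_port_hdroom_free(struct mlxsw_sp_port *mlxsw_sp_port)
    {
            kfree(mlxsw_sp_port->hdroom);
    }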
-
- 16 Sep 2020, 32 commits
-
-
Submitted by Geert Uytterhoeven

As CHELSIO_INLINE_CRYPTO is bool, and CHELSIO_T4 is tristate, the dependency of CHELSIO_INLINE_CRYPTO on CHELSIO_T4 is not sufficient to protect CRYPTO_DEV_CHELSIO_TLS and CHELSIO_IPSEC_INLINE. The latter two are also tristate, hence if CHELSIO_T4=m, they cannot be builtin, as that would lead to link failures like:

    drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_main.c:259: undefined reference to `cxgb4_port_viid'

and

    drivers/net/ethernet/chelsio/inline_crypto/ch_ipsec/chcr_ipsec.c:752: undefined reference to `cxgb4_reclaim_completed_tx'

Fix this by re-adding dependencies on CHELSIO_T4 to the tristate symbols. The dependency of CHELSIO_INLINE_CRYPTO on CHELSIO_T4 is kept to avoid asking the user.

Fixes: 6bd860ac ("chelsio/chtls: CHELSIO_INLINE_CRYPTO should depend on CHELSIO_T4")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jiri Pirko

Introduce a devlink health reporter to report FW fatal events. Implement the event listener using the MFDE trap and enable the events to be propagated using MFGD register configuration.

Signed-off-by: Jiri Pirko <jiri@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
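A rough sketch of what wiring up such a reporter can look like; the "fw_fatal" name, the helper, and the trap-handler comment are assumptions for illustration, not the actual mlxsw code:

    /* Illustrative only: a devlink health reporter for FW fatal events. */
    static const struct devlink_health_reporter_ops fw_fatal_ops = {
            .name = "fw_fatal",
            /* .dump would format the MFDE payload for "devlink health dump show" */
    };

    static int fw_fatal_reporter_create(struct devlink *devlink, void *priv,
                                        struct devlink_health_reporter **reporter)
    {
            *reporter = devlink_health_reporter_create(devlink, &fw_fatal_ops,
                                                       0, priv);
            return PTR_ERR_OR_ZERO(*reporter);
    }

    /* The MFDE trap handler would then report the event along the lines of:
     * devlink_health_report(reporter, "FW fatal event occurred", event_ctx);
     */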
-
Submitted by Jiri Pirko

Introduce the MFGD register, which is used to configure firmware debugging.

Signed-off-by: Jiri Pirko <jiri@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jiri Pirko

Introduce the MFDE register, which is passed through the MFDE trap in case of a fatal FW event.

Signed-off-by: Jiri Pirko <jiri@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jiri Pirko

As the FW flashing code was moved to core.c, move the related param there as well. Remove unnecessary parentheses on the way.

Signed-off-by: Jiri Pirko <jiri@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jiri Pirko

Extract the code calling the params register/unregister driver ops into separate functions. Call publish/unpublish unconditionally.

Signed-off-by: Jiri Pirko <jiri@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jiri Pirko

As the firmware flashing is not specific to Spectrum, move the code to core.c, avoiding one op call and two exported symbols. This also allows the flash to be done before the driver->init call, and makes it possible to do other core calls in between. Do some small renaming here and there along the way, to be consistent with the rest of the core.c code.

Signed-off-by: Jiri Pirko <jiri@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jiri Pirko

Among other changes, this version supports FW monitoring.

Signed-off-by: Jiri Pirko <jiri@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ong Boon Leong

The current implementation of stmmac_stop_all_queues() and stmmac_start_all_queues() will not work correctly when the value of tx_queues_to_use is changed through the 'ethtool -L DEVNAME rx N tx M' command. Also, netif_tx_start|stop_all_queues() are only needed in the driver's open() and close() paths.

Fixes: c22a3f48 ("net: stmmac: adding multiple napi mechanism")
Signed-off-by: Ong Boon Leong <boon.leong.ong@intel.com>
Signed-off-by: Voon Weifeng <weifeng.voon@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Aashish Verma

netif_set_real_num_tx_queues() & netif_set_real_num_rx_queues() should be used to inform the network stack about the real (active) number of Tx & Rx queues in both stmmac_open() and stmmac_resume(); therefore, move the code from stmmac_dvr_probe() to stmmac_hw_setup().

Fixes: c02b7a91 ("net: stmmac: use netif_set_real_num_{rx,tx}_queues")
Signed-off-by: Aashish Verma <aashishx.verma@intel.com>
Signed-off-by: Ong Boon Leong <boon.leong.ong@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ong Boon Leong

Restructure the NAPI add and delete process so that it can be called as needed from open() and ethtool_set_channels(). Introduce stmmac_reinit_queues() to handle the transition needed when changing the number of Rx & Tx channels.

Signed-off-by: Ong Boon Leong <boon.leong.ong@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
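A hedged sketch of the transition described here; the stmmac_napi_add/del helper names and the plat field names are assumptions based on the description, not a verbatim copy of the driver:

    /* Illustrative only: reinitialize queues for a new Rx/Tx channel count. */
    static int stmmac_reinit_queues(struct net_device *dev, u32 rx_cnt, u32 tx_cnt)
    {
            struct stmmac_priv *priv = netdev_priv(dev);
            int ret = 0;

            if (netif_running(dev))
                    stmmac_release(dev);            /* close() path */

            stmmac_napi_del(dev);                   /* drop old NAPI instances */
            priv->plat->rx_queues_to_use = rx_cnt;
            priv->plat->tx_queues_to_use = tx_cnt;
            stmmac_napi_add(dev);                   /* register NAPI for new counts */

            if (netif_running(dev))
                    ret = stmmac_open(dev);         /* open() path */

            return ret;
    }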
-
Submitted by Jakub Kicinski

Check whether the pause stats are reported by the HW by checking the bitmap. The calculation is based on the order of strings in main_strings from ethtool -S. Hopefully the semantics of these stats match the standard.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jakub Kicinski

Plumb through all the indirection and copy some code from ethtool -S. The names of the group indicate that these are the stats we are after (and Saeed confirms it).

v3:
 - fix build in mlx5_rep

v2:
 - drop the ethtool helper and call stats directly
 - don't pass 0 as initialized to in buffer
 - use local buffer

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jakub Kicinski

Report standard pause frame stats. They are already aggregated in struct ixgbe_hw_stats. The combination of the registers is suggested as equivalent to PAUSEMACCtrlFramesTransmitted / PAUSEMACCtrlFramesReceived by the Intel 82576EB datasheet; I could not find any such information for the HW actually supported by ixgbe.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
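A hedged sketch of what reporting these aggregated counters can look like; the lxon/lxoff field names in struct ixgbe_hw_stats are an assumption here, not verified against the driver:

    /* Illustrative only: sum the XON and XOFF counters the driver already
     * aggregates into the standard ethtool pause stats.
     */
    static void ixgbe_get_pause_stats(struct net_device *netdev,
                                      struct ethtool_pause_stats *stats)
    {
            struct ixgbe_adapter *adapter = netdev_priv(netdev);
            struct ixgbe_hw_stats *hwstats = &adapter->stats;

            stats->tx_pause_frames = hwstats->lxontxc + hwstats->lxofftxc;
            stats->rx_pause_frames = hwstats->lxonrxc + hwstats->lxoffrxc;
    }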
-
Submitted by Jakub Kicinski

These stats are already reported in ethtool -S. Michael confirms they are equivalent to the standard stats.

v2:
 - fix sparse warning about endianness by using the macro
 - use u64 for the pointer type

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Reviewed-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jakub Kicinski

Add a minimal ethtool interface for testing ethtool pause stats.

v2: add missing static on nsim_ethtool_ops

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
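A minimal sketch of such an interface, assuming hypothetical netdevsim field names (ns->ethtool.report_stats_rx/tx and the constants are illustrative, not the actual netdevsim code):

    /* Illustrative only: report fake pause counters so selftests can
     * exercise the ethtool pause-stats plumbing.
     */
    static void nsim_get_pause_stats(struct net_device *dev,
                                     struct ethtool_pause_stats *pause_stats)
    {
            struct netdevsim *ns = netdev_priv(dev);

            if (ns->ethtool.report_stats_rx)
                    pause_stats->rx_pause_frames = 1;
            if (ns->ethtool.report_stats_tx)
                    pause_stats->tx_pause_frames = 2;
    }

    static const struct ethtool_ops nsim_ethtool_ops = {
            .get_pause_stats = nsim_get_pause_stats,
    };

    /* Hooked up at device setup time: dev->ethtool_ops = &nsim_ethtool_ops; */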
-
Submitted by Ofer Levi

Add CQE compression support for completions of packets that span multiple strides in a Striding RQ, per the HW capability. In our memory model, we use small strides (256B as of today) for the non-linear SKB mode. This feature allows CQE compression to work also for multi-stride packets; in this case, decompressing the mini CQE array will use the stride index provided by HW as part of the mini CQE. Before this feature, compression was possible only for single-strided packets, i.e. for packets of size up to 256 bytes when in non-linear mode, and the index was maintained by SW. This feature is supported for ConnectX-5 and above.

Feature performance test:
This was whitebox-tested. We reduced the PCI speed from 125Gb/s to 62.5Gb/s to overload the PCI bus, and manipulated the mlx5 driver to drop incoming packets before building the SKB, to achieve low CPU utilization. The outcome is low CPU utilization and a bottleneck on PCI only.

Test setup:
 - Server: Intel(R) Xeon(R) Silver 4108 CPU @ 1.80GHz, 32 cores
 - NIC: ConnectX-6 Dx
 - Sender side generates 300 byte packets at full PCI bandwidth.
 - Receiver side configuration: single channel, one CPU processing with one ring allocated. CPU utilization is ~20% while PCI bandwidth is fully utilized.

For the generated traffic and an interface MTU of 4500B (to activate the non-linear SKB mode), the packet rate improvement is about 19%, from ~17.6Mpps to ~21Mpps. Without this feature, counters show no CQE compression blocks for this setup, while with the feature, counters show ~20.7Mpps of compressed CQEs in ~500K compression blocks.

Signed-off-by: Ofer Levi <oferle@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Submitted by Maor Dickman

Add support for rewriting the DSCP part of the IPv6 traffic class field. The following commands, for example, can be used to offload the rewrite action:

OVS:
    $ ovs-ofctl add-flow ovs-sriov "tcpv6, in_port=REP, \
        actions=mod_nw_tos:68, output:NIC"

iproute2:
    $ tc filter add dev REP ingress protocol ipv6 prio 1 flower skip_sw \
        ip_proto tcp \
        action pedit ex munge ip6 traffic_class set 68 retain 0xfc pipe \
        action mirred egress redirect dev NIC

Signed-off-by: Maor Dickman <maord@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Submitted by Eli Cohen

Support the tc trap action such that packets can explicitly be forwarded to the slow path if they match a specific rule. In the example below, we want packets with src IP equal to 7.7.7.8 to be forwarded to software, in which case they will reach the appropriate representor net device.

    $ tc filter add dev eth1 protocol ip prio 1 root flower skip_sw \
        src_ip 7.7.7.8 action trap

Signed-off-by: Eli Cohen <eli@mellanox.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Ariel Levkovich <lariel@nvidia.com>
Reviewed-by: Maor Dickman <maord@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Submitted by Vu Pham

Multiple features use metadata matching, such as bond vport in live migration, multi-port RoCE mode, and stacked devices; hence, enable vport metadata matching by default.

Fixes: 1e62e222 ("net/mlx5: E-Switch, Use vport metadata matching only when mandatory")
Signed-off-by: Vu Pham <vuhuong@mellanox.com>
Reviewed-by: Bodong Wang <bodong@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Parav Pandit <parav@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Oz Shlomo <ozsh@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Submitted by Vu Pham

In the merged eswitch configuration, a peer miss rule is set up for all vports. If metadata is enabled, a peer miss rule with metadata matching will be configured instead of source port matching; however, some vports that have not yet been enabled don't have default_metadata set up, so their default_metadata will be zero. Hence, set up and clean up the default metadata for all vports when the eswitch moves in or out of offloads mode.

Fixes: 133dcfc5 ("net/mlx5: E-Switch, Alloc and free unique metadata for match")
Signed-off-by: Vu Pham <vuhuong@mellanox.com>
Reviewed-by: Bodong Wang <bodong@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Submitted by Vu Pham

The uplink vport must have a dedicated metadata value, with the vhca_id being part of the metadata.

Fixes: 133dcfc5 ("net/mlx5: E-Switch, Alloc and free unique metadata for match")
Signed-off-by: Vu Pham <vuhuong@mellanox.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Submitted by Vu Pham

Check the E-Switch capabilities and enable the metadata support flag before using it to set up other features that need metadata.

Signed-off-by: Vu Pham <vuhuong@mellanox.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Submitted by Jianbo Liu

LAG offload can't be enabled if the enslaved PF is not a lag master, which is indicated by an HCA capabilities bit. That bit is cleared if more than 64 VFs are configured for the PF. Previously, a data structure was created to store lag info, including the PFs to be enslaved, and then a handler was registered for the netdev notifier. However, this initialization was skipped if the PF was not a lag master, so the PF couldn't handle the CHANGEUPPER event from the upper bond device. Even worse, the PF was enslaved silently, and LAG offload was not activated. Fix this by registering the netdev notifier also for PFs which are not lag masters. When a CHANGEUPPER event is received and both physical ports (and only them) on the same NIC are about to be enslaved, a warning is returned so the user knows about it.

Signed-off-by: Jianbo Liu <jianbol@mellanox.com>
Reviewed-by: Raed Salem <raeds@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
Reviewed-by: Raed Salem <raeds@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Submitted by Jianbo Liu

If the bond tx type is neither active-backup nor hash, LAG offload can't be enabled. When a CHANGEUPPER event is received and both PFs (and only them) under the same lag master are about to be enslaved, a warning is returned so the user knows about the offload failure; otherwise the PFs are enslaved silently without LAG offload being activated.

Signed-off-by: Jianbo Liu <jianbol@mellanox.com>
Reviewed-by: Raed Salem <raeds@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
Reviewed-by: Raed Salem <raeds@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Submitted by Jianbo Liu

Change the return value to -ENOENT to make it more meaningful.

Signed-off-by: Jianbo Liu <jianbol@mellanox.com>
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Submitted by Eran Ben Elisha

Before calling timecounter_cyc2time(), clock->lock must be taken. Use mlx5_timecounter_cyc2time() instead, which guarantees safe access.

Fixes: afc98a0b ("net/mlx5: Update ptp_clock_event foreach PPS event")
Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com>
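For illustration, a locked wrapper of the kind described here could look like the sketch below; the seqlock primitives and the tc member name are assumptions about the mlx5 clock internals, not verified code:

    /* Illustrative only: serialize the cycle-to-time conversion behind the
     * clock lock so callers don't have to take it themselves.
     */
    static u64 mlx5_timecounter_cyc2time(struct mlx5_clock *clock, u64 timestamp)
    {
            unsigned long flags;
            u64 nsec;

            write_seqlock_irqsave(&clock->lock, flags);
            nsec = timecounter_cyc2time(&clock->tc, timestamp);
            write_sequnlock_irqrestore(&clock->lock, flags);

            return nsec;
    }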
-
Submitted by Eran Ben Elisha

Holding the clock lock is not required when scheduling a PPS work item.

Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com>
Reviewed-by: Moshe Shemesh <moshe@mellanox.com>
-
Submitted by Eran Ben Elisha

Fix a typo in the ptp_clock_info name: mlx5_p2p -> mlx5_ptp.

Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com>
-
Submitted by Eran Ben Elisha

The clock struct is part of struct mlx5_core_dev. The code was inconsistent: in some places it used container_of and in others it used clock->mdev. Align the code to use container_of and remove the clock->mdev pointer. While here, fix reverse xmas tree coding style.

Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com>
Reviewed-by: Moshe Shemesh <moshe@mellanox.com>
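The pattern in question, as a small sketch; the helper name is made up, and the clock member of struct mlx5_core_dev is assumed from the description above:

    /* Illustrative only: recover the parent device from the embedded clock
     * rather than keeping a clock->mdev back-pointer.
     */
    static struct mlx5_core_dev *mlx5_clock_mdev_get(struct mlx5_clock *clock)
    {
            return container_of(clock, struct mlx5_core_dev, clock);
    }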
-
Submitted by Dan Carpenter

This isn't a fall-through because it was after a return statement. The fall-through annotation leads to a Smatch warning:

    drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c:246 mlx5e_ethtool_get_sset_count() warn: ignoring unreachable code.

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Gustavo A. R. Silva <gustavoars@kernel.org>
-
Submitted by Moshe Tal

Add variable initialization to eliminate the "variable may be used uninitialized" warning.

Fixes: 5f29458b ("net/mlx5e: Support dump callback in TX reporter")
Signed-off-by: Moshe Tal <moshet@mellanox.com>
Reviewed-by: Aya Levin <ayal@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
- 15 Sep 2020, 6 commits
-
-
Submitted by Shannon Nelson

Clean and rebuild the debugfs info for the queues being swapped.

Fixes: a34e25ab ("ionic: change the descriptor ring length without full reset")
Signed-off-by: Shannon Nelson <snelson@pensando.io>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Landen Chao

Add support for the MT7531, the next generation of the MT7530. It is also a 7-port switch with 5 embedded gigabit PHYs, 2 CPU ports, and the same MAC logic as the MT7530. CPU port 6 only supports the SGMII interface. CPU port 5 supports either RGMII or SGMII, depending on the HW SKU, but cannot be muxed to the PHY of port 0/4 like on the MT7530. Due to the SGMII interface support, the PLL and pad settings are different from the MT7530. This patch adds the different initial settings and the SGMII phylink handlers for the MT7531.

The MT7531 SGMII interface can be configured in the following modes:
 - 'SGMII AN mode' with in-band negotiation capability, which is compatible with PHY_INTERFACE_MODE_SGMII.
 - 'SGMII force mode' without in-band negotiation, which is compatible with the 10B/8B encoding of PHY_INTERFACE_MODE_1000BASEX with fixed full duplex and fixed pause.
 - 2.5 times faster clocked 'SGMII force mode' without in-band negotiation, which is compatible with the 10B/8B encoding of PHY_INTERFACE_MODE_2500BASEX with fixed full duplex and fixed pause.

Signed-off-by: Landen Chao <landen.chao@mediatek.com>
Signed-off-by: Sean Wang <sean.wang@mediatek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Landen Chao

Add a structure holding the required operations for each device, such as device initialization, PHY port read or write, a checker for whether a PHY interface is supported on a certain port, and MAC port setup for either bus pad or a specific PHY interface. This prepares for adding the new MT7531 hardware while keeping the same setup logic for the existing hardware.

Signed-off-by: Landen Chao <landen.chao@mediatek.com>
Signed-off-by: Sean Wang <sean.wang@mediatek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
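A sketch of what such a per-device operations structure could look like; the type and member names are assumptions derived from the description above, not the actual mt7530.h definitions:

    /* Illustrative only: per-model hooks so the MT7530 and MT7531 can share
     * the common DSA driver code.
     */
    struct mt753x_info {
            unsigned int id;        /* device model */

            int (*sw_setup)(struct dsa_switch *ds);
            int (*phy_read)(struct dsa_switch *ds, int port, int regnum);
            int (*phy_write)(struct dsa_switch *ds, int port, int regnum, u16 val);
            int (*pad_setup)(struct dsa_switch *ds, phy_interface_t interface);
            bool (*phy_mode_supported)(struct dsa_switch *ds, int port,
                                       const struct phylink_link_state *state);
            int (*mac_port_config)(struct dsa_switch *ds, int port,
                                   unsigned int mode, phy_interface_t interface);
    };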
-
Submitted by Landen Chao

Refine the message in Kconfig, fixing a typo and explicitly mentioning MT7621 support.

Signed-off-by: Landen Chao <landen.chao@mediatek.com>
Signed-off-by: Sean Wang <sean.wang@mediatek.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Xie He

x25_type_trans only needs to be called before we call netif_rx to pass the skb to upper layers. It does not need to be called before lapb_data_received; the LAPB module does not need the fields that it sets. In the other two X.25 drivers, lapbether and hdlc_x25, x25_type_trans is only called before netif_rx and not before lapb_data_received.

Cc: Martin Schiller <ms@dev.tdt.de>
Signed-off-by: Xie He <xie.he.0141@gmail.com>
Acked-by: Martin Schiller <ms@dev.tdt.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
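The resulting receive-path shape, as a hedged sketch; the xxx_ function names are placeholders and the error handling is simplified:

    /* Illustrative only: frames from the wire go straight to LAPB... */
    static void xxx_rx_frame(struct net_device *dev, struct sk_buff *skb)
    {
            if (lapb_data_received(dev, skb) != LAPB_OK)
                    kfree_skb(skb);
    }

    /* ...and x25_type_trans() is only applied when LAPB hands the data up,
     * right before netif_rx().
     */
    static int xxx_data_indication(struct net_device *dev, struct sk_buff *skb)
    {
            skb->protocol = x25_type_trans(skb, dev);
            return netif_rx(skb);
    }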
-
Submitted by Petr Machata

The SBIB register configures the size of an internal buffer that the Spectrum ASICs use when mirroring traffic on egress. This size should be taken into account when validating that the port headroom buffers are not larger than the chip can handle. Up until now this was not done, which is incidentally not a problem: the priority group buffers that mlxsw auto-configures are small enough that the boundary condition cannot be violated. However, when dcbnl_setbuffer is implemented, the user gains control over the sizes of the PG buffers and might overshoot the headroom capacity. The size of the SBIB buffer depends on port speed, and that cannot be vetoed; therefore the SBIB size should be deduced from the maximum port speed. Additionally, once the buffers are configured by hand, the user could get into an uncomfortable situation where their MTU change requests get vetoed because the SBIB no longer fits. Therefore derive the SBIB size from the maximum permissible MTU as well. Remove all the code that adjusted the SBIB size whenever the speed or MTU changed.

Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
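A hedged sketch of the sizing policy described above, done once per port instead of on every speed or MTU change; all helper names here are hypothetical, not the actual mlxsw functions:

    /* Illustrative only: provision the egress-mirroring (SBIB) buffer for the
     * worst case the port can reach, so later speed/MTU changes need no
     * adjustment and cannot be vetoed because of it.
     */
    static int port_sb_internal_buf_size_set(struct mlxsw_sp_port *mlxsw_sp_port)
    {
            u32 max_speed, max_mtu, buf_size;
            int err;

            err = port_max_speed_get(mlxsw_sp_port, &max_speed);
            if (err)
                    return err;
            max_mtu = port_max_mtu_get(mlxsw_sp_port);

            buf_size = mirror_buf_size(max_speed, max_mtu);
            return sbib_size_write(mlxsw_sp_port, buf_size);
    }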
-