- 03 Dec 2016, 20 commits
-
-
Submitted by Jacob Keller
Replace a check against the magic number 4095 with VLAN_N_VID. This makes it obvious that a later check against VLAN_N_VID is always true and can be removed. Change-ID: I28998f127a61a529480ce63d8a07e266f6c63b7b Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
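A minimal sketch of the kind of cleanup described, assuming an illustrative helper name (not the driver's actual code):

    #include <linux/if_vlan.h>    /* provides VLAN_N_VID (4096) */

    /* Illustrative only: validate a VLAN ID against the named constant
     * instead of the magic number 4095, so the bound is self-documenting.
     */
    static bool example_vlan_id_valid(u16 vid)
    {
        return vid < VLAN_N_VID;    /* replaces "vid <= 4095" */
    }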
-
Submitted by Jacob Keller
This label is unnecessary, as is jumping to a block that checks aq_ret and then immediately skipping it and returning. So just jump straight to error_param and remove this unnecessary label. Also use goto error_param even in the last check, for style consistency. Change-ID: If487c7d10c4048e37c594e5eca167693aaed45f6 Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
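A hedged sketch of the single-exit pattern being described; the function, state bit, and structure names are illustrative, not the driver's actual code:

    /* Illustrative: every failure jumps straight to one error_param label
     * instead of passing through intermediate labels.
     */
    static int example_vf_config(struct example_vf *vf, u16 vsi_id)
    {
        int aq_ret = 0;

        if (!test_bit(EXAMPLE_VF_STATE_ACTIVE, &vf->vf_states)) {
            aq_ret = -EINVAL;
            goto error_param;
        }

        if (!example_vsi_id_valid(vf, vsi_id)) {
            aq_ret = -EINVAL;
            goto error_param;   /* same label even for the last check */
        }

        /* ... actual configuration work ... */

    error_param:
        return aq_ret;
    }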
-
Submitted by Alexander Duyck
This change makes it so that we are much more robust about defining what we can and cannot offload. Previously we were performing no checks. This should bring us up to parity with the i40e PF driver. In addition, the device only supports GSO as long as the MSS is 64 or greater. We were not checking this, so an MSS less than that was resulting in Tx hangs. Change-ID: If533553ec92fc6ba694eab6ac81fdaf3004f3592 Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Alexander Duyck
This change makes it so that we are much more robust about defining what we can and cannot offload. Previously we were just checking the L4 tunnel header length; however, there are other fields we should be verifying, as there are multiple scenarios in which we cannot perform hardware offloads. In addition, the device only supports GSO as long as the MSS is 64 or greater. We were not checking this, so an MSS less than that was resulting in Tx hangs. Change-ID: I5e2fd5f3075c73601b4b36327b771c64fcb6c31b Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
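A hedged sketch of an .ndo_features_check-style hook implementing the two checks these entries describe (a too-small MSS and an unparsable tunnel header); the 80-byte limit and all names are illustrative assumptions, not the drivers' exact code:

    static netdev_features_t example_features_check(struct sk_buff *skb,
                                                    struct net_device *dev,
                                                    netdev_features_t features)
    {
        /* the hardware cannot segment frames with an MSS below 64 */
        if (skb_is_gso(skb) && skb_shinfo(skb)->gso_size < 64)
            features &= ~NETIF_F_GSO_MASK;

        /* fall back to software for tunnel headers the hardware cannot
         * parse (illustrative length limit)
         */
        if (skb->encapsulation &&
            skb_inner_mac_header(skb) - skb_transport_header(skb) > 80)
            features &= ~(NETIF_F_CSUM_MASK | NETIF_F_GSO_MASK);

        return features;
    }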
-
Submitted by Marcin Wojtas
Armada 3700 is a new ARMv8 SoC from Marvell using the same network controller as the older Armada 370/38x/XP. There are, however, some differences that need to be taken into account when adding support for it: * open the default MBus window to 4GB of DRAM - the Armada 3700 SoC's MBus configuration for the network controller has to be done on two levels: global and per-port. The first is inherited from the bootloader; the latter can be opened in a default way, leaving arbitration to the bus controller. Hence a filled mbus_dram_target_info structure is not needed. * make per-CPU operation optional - Recent patches adding RSS and XPS support for Armada 38x/XP enabled per-CPU operation of the controller by default. Contrary to the older SoCs, the Armada 3700 SoC's network controller is not capable of per-CPU processing due to its interrupt lines' connectivity. This patch restores non-per-CPU operation, which is now optional and depends on the neta_armada3700 flag value in the mvneta_port structure. In order not to complicate the code, a separate interrupt subroutine is implemented. For now, RSS is disabled on the Armada 3700, as the current implementation depends on the per-CPU interrupts. [gregory.clement@free-electrons.com: extracted from a larger patch, replaced some ifdefs and ported to net-next for v4.10] Signed-off-by: Marcin Wojtas <mw@semihalf.com> Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Tested-by: Marcin Wojtas <mw@semihalf.com> Signed-off-by: David S. Miller <davem@davemloft.net>
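A hedged sketch of how such a per-SoC capability flag can gate per-CPU interrupt handling; the handler names and the per-CPU cookie are illustrative, only the neta_armada3700 flag name comes from the description above:

    /* Illustrative: plain ISR on Armada 3700, per-CPU IRQs on older SoCs. */
    if (pp->neta_armada3700)
        err = request_irq(pp->dev->irq, example_isr, 0,
                          pp->dev->name, pp);
    else
        err = request_percpu_irq(pp->dev->irq, example_percpu_isr,
                                 pp->dev->name, pp->ports);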
-
Submitted by Gregory CLEMENT
Actually, only the mvneta_bm support is not 64-bit compatible; the mvneta code itself can run on 64-bit architectures. Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Tested-by: Marcin Wojtas <mw@semihalf.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Marcin Wojtas
Prepare the mvneta driver so it can be used on 64-bit platforms such as the Armada 3700. [gregory.clement@free-electrons.com: this patch was extracted from a larger one to ease review and maintenance] Signed-off-by: Marcin Wojtas <mw@semihalf.com> Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Tested-by: Marcin Wojtas <mw@semihalf.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Gregory CLEMENT
Until now the virtual address of the received buffer was stored in the cookie field of the rx descriptor. However, this field is only 32 bits wide, which prevents using the driver on a 64-bit architecture. With this patch the virtual address is stored in an array not shared with the hardware (so there is no more need to use the DMA API for it). Thanks to this, it is possible to use the cache, contrary to accesses to the rx descriptor members. The change is done in the swbm path only, because the hwbm uses the cookie field; this also means that the hwbm is currently not usable on 64 bits. Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Reviewed-by: Jisheng Zhang <jszhang@marvell.com> Tested-by: Marcin Wojtas <mw@semihalf.com> Signed-off-by: David S. Miller <davem@davemloft.net>
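A hedged sketch of the approach described: keep the buffer's virtual address in a CPU-only array indexed like the descriptor ring, leaving only the DMA address in the hardware descriptor. The structure and field names are illustrative:

    struct example_rx_queue {
        struct example_rx_desc *descs;          /* ring shared with hardware */
        void                  **buf_virt_addr;  /* CPU-only, cacheable */
        int                     size;
    };

    static void example_rx_refill(struct example_rx_queue *rxq, int i,
                                  void *data, dma_addr_t phys)
    {
        rxq->descs[i].buf_phys_addr = phys;  /* hardware sees only the DMA addr */
        rxq->buf_virt_addr[i] = data;        /* 64-bit VA kept on the side */
    }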
-
Submitted by Gregory CLEMENT
For HWBM all buffers are allocated in mvneta_bm_construct(), and at runtime they are put into the descriptors by hardware. There is no need to fill them at this point. Suggested-by: Marcin Wojtas <mw@semihalf.com> Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Tested-by: Marcin Wojtas <mw@semihalf.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Gregory CLEMENT
For small frames, reuse the phys_addr variable instead of accessing the uncacheable value in the rx descriptor. Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Tested-by: Marcin Wojtas <mw@semihalf.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Eric Dumazet
My recent commit to get more precise rx/tx counters in ndo_get_stats64() can lead to crashes at device dismantle, as Jesper found out. We must prevent mlx4_en_fold_software_stats() from trying to access tx/rx rings if they are deleted. Fix this by adding a test against priv->port_up in mlx4_en_fold_software_stats(). Calling mlx4_en_fold_software_stats() from mlx4_en_stop_port() allows us to eventually broadcast the latest/current counters to rtnetlink monitors. Fixes: 40931b85 ("mlx4: give precise rx/tx bytes/packets counters") Signed-off-by: Eric Dumazet <edumazet@google.com> Reported-and-bisected-by: Jesper Dangaard Brouer <brouer@redhat.com> Tested-by: Jesper Dangaard Brouer <brouer@redhat.com> Cc: Tariq Toukan <tariqt@mellanox.com> Cc: Saeed Mahameed <saeedm@dev.mellanox.co.il> Acked-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
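A hedged sketch of the guard described, with an illustrative structure name; only the priv->port_up test itself is taken from the text above:

    static void example_fold_software_stats(struct example_priv *priv)
    {
        /* rings may already be freed during device dismantle */
        if (!priv->port_up)
            return;

        /* ... walk rx/tx rings and fold their software counters ... */
    }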
-
Submitted by Sunil Goutham
A transmit queue timeout issue is seen in two cases: - Due to a race condition between setting stop_queue at xmit() and checking for stopped_queue in the NAPI poll routine, transmission from an SQ at times comes to a halt. This is fixed by using barriers and also adding a check for SQ free descriptors, in case the SQ is stopped and there are only CQE_RX entries, i.e. no CQE_TX. - Contrary to an assumption, a HW errata where the HW doesn't stop transmission even though there are not enough CQEs available for a CQE_TX is not fixed in T88 pass 2.x. This results in a Qset error with 'CQ_WR_FULL' stalling transmission. This is fixed by adjusting the RXQ's RED levels for the CQ level such that there is always enough space left for CQE_TXs. Signed-off-by: Sunil Goutham <sgoutham@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
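A hedged sketch of the classic stop/wake pairing with memory barriers that addresses this kind of xmit-vs-poll race; the free-descriptor helper and the threshold are illustrative, not the driver's exact code:

    /* xmit path: stop the queue, then re-check after a barrier */
    if (unlikely(example_sq_free_descs(sq) < MAX_SKB_FRAGS + 1)) {
        netif_tx_stop_queue(txq);
        smp_mb();    /* pairs with the barrier in the completion path */
        if (example_sq_free_descs(sq) >= MAX_SKB_FRAGS + 1)
            netif_tx_start_queue(txq);   /* a completion raced with us */
    }

    /* NAPI completion path: observe the stop before reading the count */
    smp_mb();
    if (netif_tx_queue_stopped(txq) &&
        example_sq_free_descs(sq) >= MAX_SKB_FRAGS + 1)
        netif_tx_wake_queue(txq);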
-
Submitted by Hadar Hen Zion
When ndo_setup_tc is called with the egress_dev flag set, it means that the ndo call was executed on the mirred action (egress) device and not on the ingress device. In order to support this kind of ndo_setup_tc call, and to insert the correct decap rule into the hardware, the uplink device on the same eswitch should be found. Currently, we use this resolution between the mirred device and the uplink on the same eswitch to offload vxlan shared device decap rules. Signed-off-by: Hadar Hen Zion <hadarh@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Hadar Hen Zion
Replace the representor private data with a net_device pointer holding the representor netdevice, instead of a void pointer holding mlx5e_priv. It will be used by a new eswitch service function returning the uplink representor netdevice. Signed-off-by: Hadar Hen Zion <hadarh@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Hadar Hen Zion
The VF Representor udp tunnel ndo entries were removed by mistake; restore them. Fixes: 370bad0f ('net/mlx5e: Support HW (offloaded) and SW counters for SRIOV switchdev mode') Signed-off-by: Hadar Hen Zion <hadarh@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Yuval Mintz
This patch adds out-of-order packet handling for hardware-offloaded iSCSI. Out-of-order packet handling requires driver buffer allocation and assistance. Signed-off-by: Arun Easi <arun.easi@cavium.com> Signed-off-by: Yuval Mintz <yuval.mintz@cavium.com> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Yuval Mintz
This adds the backbone required for the various HW initializations which are necessary for the iSCSI driver (qedi) for the QLogic FastLinQ 4xxxx line of adapters - FW notification, resource initializations, etc. Signed-off-by: Arun Easi <arun.easi@cavium.com> Signed-off-by: Yuval Mintz <yuval.mintz@cavium.com> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Rasmus Villemoes
This is already using the %pM printf extension; might as well also use %ph to make the code smaller. Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk> Signed-off-by: David S. Miller <davem@davemloft.net>
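A hedged illustration of the two printk extensions mentioned: %pM prints a MAC address and %*ph hex-dumps a small buffer, replacing hand-rolled per-byte formatting. The variable names and message text are made up:

    u8 mac[ETH_ALEN];
    u8 cmd[8];

    /* one %8ph specifier replaces eight separate "%02x" arguments */
    netdev_info(dev, "mac %pM, cmd bytes %8ph\n", mac, cmd);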
-
Submitted by Arnd Bergmann
When CONFIG_INET is disabled, the new selftest results in a link error: drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.o: In function `mlx5e_test_loopback': en_selftest.c:(.text.mlx5e_test_loopback+0x2ec): undefined reference to `ip_send_check' en_selftest.c:(.text.mlx5e_test_loopback+0x34c): undefined reference to `udp4_hwcsum' This hides the specific test in that configuration. Fixes: 0952da79 ("net/mlx5e: Add support for loopback selftest") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: David S. Miller <davem@davemloft.net>
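A hedged sketch of one way to compile a network selftest out when CONFIG_INET is disabled; the table layout and all names are illustrative, not mlx5e's actual selftest code:

    static const struct {
        const char *name;
        int (*test)(struct example_priv *priv);
    } example_selftests[] = {
        { "Link test",     example_test_link },
    #ifdef CONFIG_INET
        { "Loopback test", example_test_loopback },  /* uses ip_send_check() */
    #endif
    };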
-
Submitted by Daniel Borkmann
After 326fe02d ("net/mlx4_en: protect ring->xdp_prog with rcu_read_lock"), the rcu_read_lock() in bpf_prog_run_xdp() is superfluous, since callers need to hold rcu_read_lock() already to make sure the BPF program doesn't get released in the background. Thus, drop it from bpf_prog_run_xdp(), as it can otherwise be misleading. Still, keeping bpf_prog_run_xdp() is useful, as it allows for grepping in XDP-supporting drivers and keeps the typecheck on the context intact. For mlx4, this means we don't have a double rcu_read_lock() anymore. nfp can just make use of bpf_prog_run_xdp(), too. For qede, just move rcu_read_lock() out of the helper. When the driver gets atomic replace support, this will move to the call sites eventually. mlx5 needs actual fixing, as it has the same issue as described already in 326fe02d ("net/mlx4_en: protect ring->xdp_prog with rcu_read_lock"): that is, we're under RCU bh at this time, BPF programs are released via call_rcu(), and call_rcu() != call_rcu_bh(), so we need to properly mark the read side, as programs can get xchg()'ed in mlx5e_xdp_set() without a queue reset. Fixes: 86994156 ("net/mlx5e: XDP fast RX drop bpf programs support") Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
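A hedged sketch of the calling convention this describes: the driver's RX path takes rcu_read_lock() itself around both the program lookup and the run, so the helper no longer needs its own lock. The ring and field names are illustrative:

    rcu_read_lock();
    prog = rcu_dereference(ring->xdp_prog);
    if (prog) {
        xdp.data     = page_address(page) + offset;
        xdp.data_end = xdp.data + length;
        act = bpf_prog_run_xdp(prog, &xdp);  /* no rcu_read_lock() inside */
        /* ... act on XDP_PASS / XDP_TX / XDP_DROP ... */
    }
    rcu_read_unlock();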
-
- 02 Dec 2016, 11 commits
-
-
Submitted by Roi Dayan
Handling of the flow encap entry should be inside tc del flow, and is only relevant for offloaded eswitch TC rules. Fixes: 11a457e9b6c1 ("net/mlx5e: Add basic TC tunnel set action for SRIOV offloads") Signed-off-by: Roi Dayan <roid@mellanox.com> Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Roi Dayan
Change the function that deletes an offloaded TC rule to get a struct mlx5e_tc_flow instance, which contains both the flow handle and the flow attributes. This is a cleanup needed for downstream patches; it doesn't change any functionality. Signed-off-by: Roi Dayan <roid@mellanox.com> Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Roi Dayan
According to the reverse unwinding principle, at delete time we should first handle deletion of the steering rule and only later handle the vlan deletion from the eswitch. Fixes: 8b32580d ("net/mlx5e: Add TC vlan action for SRIOV offloads") Signed-off-by: Roi Dayan <roid@mellanox.com> Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Roi Dayan
We will never find a flow with the same cookie, as cls_flower always allocates a new flow and the cookie is the allocated memory address. Fixes: e3a2b7ed ("net/mlx5e: Support offload cls_flower with drop action") Signed-off-by: Roi Dayan <roid@mellanox.com> Reviewed-by: Hadar Hen Zion <hadarh@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Tariq Toukan
In the Striding RQ implementation, we used a single UMR (User-Mode Memory Registration) memory key for all RQs. When the product of the number of RQs and their size gets high, we hit the limitation of a u16 field size in FW. Here we move to using a UMR memory key per RQ, so we can scale to any number of rings, each with the maximum buffer size. Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Tariq Toukan
In the next patch we are going to create a UMR MKey per RQ, so we need mlx5e_create_umr_mkey declared before mlx5e_create_rq. Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Tariq Toukan
Add a new struct type, mlx5_frag_buf, which is used to allocate fragmented buffers rather than contiguous ones, and make the Completion Queues (CQs) use it, as they are big (a default of 2MB per CQ in Striding RQ). This fixes failures of the type "mlx5e_open_locked: mlx5e_open_channels failed, -12" caused by dma_zalloc_coherent not finding enough contiguous coherent memory to satisfy the driver's request when the user tries to set up more or larger rings. Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Reported-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
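A hedged sketch of what a fragmented buffer can look like: several page-sized coherent allocations instead of one large one, so each individual DMA allocation stays small. The structure and field names are illustrative, not the actual mlx5_frag_buf layout:

    struct example_frag {
        void       *buf;   /* one page-sized coherent allocation */
        dma_addr_t  map;
    };

    struct example_frag_buf {
        int                  size;    /* total size in bytes */
        int                  npages;  /* number of fragments */
        struct example_frag *frags;   /* array with one entry per fragment */
    };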
-
Submitted by Neill Whillans
Add support for the (optional) SGMII PCS functionality of the Altera TSE MAC. If the phy-mode is set to 'sgmii', then we attempt to discover and initialise the PCS so that the MAC can communicate with the PHY. The PCS IP block provides a scratch register for testing presence of the PCS, which is mapped into one of the two MDIO spaces present in the MAC's register space. Once we have determined that the scratch register is functioning, we attempt to initialise the PCS to auto-negotiate an SGMII link with the PHY. There is no need to monitor or manage the SGMII link beyond this, since the normal PHY MDIO will then be used to monitor the media layer. The Altera TSE MAC has only one way in which it can be configured with an SGMII PCS, and as such, this patch only looks at the phy-mode to select whether or not to attempt to initialise the PCS registers. During initialisation, we report the PCS's equivalent of a PHY ID register. This can be parameterised during the IP instantiation and is often left as '0x00000000', which is not an error. Signed-off-by: Neill Whillans <neill.whillans@codethink.co.uk> Reviewed-by: Daniel Silverstone <daniel.silverstone@codethink.co.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
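A hedged sketch of a scratch-register presence test like the one described; the register offset, test pattern, and accessor names are illustrative:

    #define EXAMPLE_PCS_SCRATCH   0x10    /* illustrative register offset */
    #define EXAMPLE_TEST_PATTERN  0xa5a5

    static bool example_pcs_present(struct example_priv *priv)
    {
        u16 val;

        example_pcs_write(priv, EXAMPLE_PCS_SCRATCH, EXAMPLE_TEST_PATTERN);
        val = example_pcs_read(priv, EXAMPLE_PCS_SCRATCH);

        return val == EXAMPLE_TEST_PATTERN;  /* read back what we wrote */
    }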
-
Submitted by Edward Cree
It's no longer used now that Falcon is gone. Also remove a reference in a comment to an ioctl that doesn't exist. Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Edward Cree
It is easy enough for Falcon users to enable it when making oldconfig. Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Edward Cree
Defalconisation removed one of the string arguments, but missed the corresponding %s. Fixes: 5a6681e2 ("sfc: separate out SFC4000 ("Falcon") support into new sfc-falcon driver") Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 01 Dec 2016, 9 commits
-
-
Submitted by Souptick Joarder
In alloc_cmd_box(), pci_pool_alloc() followed by memset is replaced by pci_pool_zalloc(). Signed-off-by: Souptick joarder <jrdr.linux@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
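A hedged before/after sketch of the substitution described in this and the following entry; the pool, size, and handle variables are illustrative:

    /* before: allocate, then clear by hand */
    buf = pci_pool_alloc(pool, GFP_KERNEL, &dma_handle);
    if (buf)
        memset(buf, 0, size);

    /* after: one call returns zeroed memory */
    buf = pci_pool_zalloc(pool, GFP_KERNEL, &dma_handle);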
-
Submitted by Souptick Joarder
In mlx4_alloc_cmd_mailbox(), pci_pool_alloc() followed by memset is replaced by pci_pool_zalloc(). Signed-off-by: Souptick joarder <jrdr.linux@gmail.com> Reviewed-by: Yuval Shaia <yuval.shaia@oracle.com> Reviewed-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ivan Khoronzhuk
Split the device budget between channels according to the channel rate. Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
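A hedged sketch of one way to split a polling budget proportionally to per-channel rates, falling back to an even split when no rates are configured; every name and the rounding choice are illustrative assumptions, not the cpsw code:

    static int example_channel_budget(int total_budget, u32 ch_rate,
                                      u32 total_rate, int num_channels)
    {
        if (!total_rate)
            return total_budget / num_channels;  /* even split */

        /* at least one packet per channel, rest proportional to rate */
        return max_t(int, 1, total_budget * ch_rate / total_rate);
    }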
-
Submitted by Ivan Khoronzhuk
Check budget fullness only after it's updated, and update the channel mask only once, to keep the budget balance between channels. It's also needed for further changes. Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ivan Khoronzhuk
This patch allows rate limiting of tx queues for a cpsw interface. The rate is set in absolute Mb/s units and cannot exceed the speed the interface is connected at. The rate for a tx queue can be tested with:
ethtool -L eth0 rx 4 tx 4
echo 100 > /sys/class/net/eth0/queues/tx-0/tx_maxrate
echo 200 > /sys/class/net/eth0/queues/tx-1/tx_maxrate
echo 50 > /sys/class/net/eth0/queues/tx-2/tx_maxrate
echo 30 > /sys/class/net/eth0/queues/tx-3/tx_maxrate
tc qdisc add dev eth0 root handle 1: multiq
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dport 5001 0xffff action skbedit queue_mapping 0
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dport 5002 0xffff action skbedit queue_mapping 1
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dport 5003 0xffff action skbedit queue_mapping 2
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dport 5004 0xffff action skbedit queue_mapping 3
iperf -c 192.168.2.1 -b 110M -p 5001 -f m -t 60
iperf -c 192.168.2.1 -b 215M -p 5002 -f m -t 60
iperf -c 192.168.2.1 -b 55M -p 5003 -f m -t 60
iperf -c 192.168.2.1 -b 32M -p 5004 -f m -t 60
Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ivan Khoronzhuk
The cpdma has 8 rate-limited tx channels. This patch adds the ability for the cpdma driver to use the 8 tx h/w shapers. If at least one channel is not rate limited, then it must have a higher number; this is because rate-limited channels have to have higher priority than non-rate-limited channels. The channel priority is already set in low-to-high direction, so when a new channel is added with ethtool and doesn't have a rate yet, it cannot affect the rate-limited channels. This can be useful for TSN streams and, more generally, whenever h/w rate-limited channels are needed. Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ivan Khoronzhuk
The weight of a channel is needed to split descriptors between channels. The weight can depend on the maximum rate of the channels, the maximum rate of the interface, or other factors. The channel weight is a percentage and is independent for rx and tx channels. Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Mintz, Yuval
Add support for forwarding via XDP. Once an eBPF program is attached, the driver allocates and configures a designated transmission queue meant solely for forwarding packets. Said queue shares the receive queue's interrupt line and has its own Tx statistics. Infrastructure changes required for this [spread out through the code]: - Determine the DMA direction of the receive buffers based on the presence of the eBPF program. - Turn the sw Tx ring into a union, as regular/XDP queues have different needs for releasing resources after completion [regular requires the SKB, XDP requires the transmitted page]. Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Mintz, Yuval
Add support for the ndo_xdp callback. This patch supports the XDP_PASS, XDP_DROP and XDP_ABORTED commands. It also adds a per-Rx-queue statistic which counts the number of packets which didn't reach the stack [due to XDP]. Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
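A hedged sketch of acting on an XDP verdict in an RX path, tying together the dedicated forwarding queue and the "didn't reach the stack" counter described in the two entries above; all helper and field names are illustrative, not qede's actual code:

    act = bpf_prog_run_xdp(prog, &xdp);
    switch (act) {
    case XDP_PASS:
        break;                           /* hand the packet to the stack */
    case XDP_TX:
        example_xdp_xmit(xdp_txq, page); /* forward via the dedicated queue */
        rxq->xdp_no_pass++;
        return;
    default:
        bpf_warn_invalid_xdp_action(act);
        /* fall through */
    case XDP_ABORTED:
    case XDP_DROP:
        example_recycle_page(rxq, page);
        rxq->xdp_no_pass++;              /* never reached the stack */
        return;
    }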
-