- 09 Dec 2016, 2 commits
-
-
Submitted by Martin KaFai Lau:

When XDP is active in mlx4, mlx4 is using one page/pkt. At the same time (i.e. when XDP is active), it is currently limiting the MTU to FRAG_SZ0 - ETH_HLEN - (2 * VLAN_HLEN), which is 1514 on x86. AFAICT, we can at least raise the MTU limit up to PAGE_SIZE - ETH_HLEN - (2 * VLAN_HLEN), which is what this patch does. It will be useful in the next patch, which allows an XDP program to extend the packet by adding new header(s).

Note: the earlier XDP patches already contain a guard to ensure the page/pkt scheme only applies when XDP is active in mlx4.

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
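A quick arithmetic check of the two ceilings mentioned above (a standalone sketch; FRAG_SZ0 = 1536 is inferred from the 1514 figure quoted in the commit, and PAGE_SIZE = 4096 assumes x86):

```c
#include <stdio.h>

#define PAGE_SIZE 4096   /* x86 default */
#define FRAG_SZ0  1536   /* inferred: 1536 - 14 - 8 = 1514 */
#define ETH_HLEN  14
#define VLAN_HLEN 4

int main(void)
{
	printf("old XDP MTU limit: %d\n", FRAG_SZ0 - ETH_HLEN - (2 * VLAN_HLEN));
	printf("new XDP MTU limit: %d\n", PAGE_SIZE - ETH_HLEN - (2 * VLAN_HLEN));
	return 0;
}
```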
-
Submitted by Martin KaFai Lau:

This patch allows an XDP prog to extend/remove the packet data at the head (like adding or removing a header). It is done by adding a new XDP helper, bpf_xdp_adjust_head(). It also renames bpf_helper_changes_skb_data() to bpf_helper_changes_pkt_data() to better reflect that an XDP prog does not work on skbs.

This patch adds one "xdp_adjust_head" bit to bpf_prog so that an XDP-capable driver can check whether the XDP prog requires bpf_xdp_adjust_head() support. The driver can then decide to error out during XDP_SETUP_PROG.

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
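A minimal sketch of an XDP program using the new helper (illustrative only: the ENCAP_LEN value, section name, and modern libbpf-style include are assumptions, not part of the patch):

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

#define ENCAP_LEN 8	/* hypothetical outer-header length to strip */

SEC("xdp")
int strip_outer_header(struct xdp_md *ctx)
{
	/* A positive delta moves the data pointer forward, i.e. removes
	 * bytes from the packet head; a negative delta would add headroom
	 * for a new header instead. */
	if (bpf_xdp_adjust_head(ctx, ENCAP_LEN))
		return XDP_ABORTED;
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
```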
-
- 04 Dec 2016, 1 commit
-
-
Submitted by Zhang Shengju:

Since the offset is zero, it's not necessary to use the set function. The reset function is more straightforward, and removes the unnecessary add operation present in the set function.

Signed-off-by: Zhang Shengju <zhangshengju@cmss.chinamobile.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
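The log does not name the helper pair involved; as a generic, standalone illustration of the set-with-zero-offset vs. reset pattern (analogous to the kernel's skb_set_mac_header(skb, 0) vs. skb_reset_mac_header(skb), used here only as a stand-in):

```c
#include <stdio.h>

struct buf {
	unsigned char *head;
	unsigned char *data;
	long mark;		/* recorded offset of some header */
};

/* "set" variant: compute data - head, then add a caller-supplied offset. */
static void set_mark(struct buf *b, long offset)
{
	b->mark = (b->data - b->head) + offset;
}

/* "reset" variant: identical result when offset is zero, without the add. */
static void reset_mark(struct buf *b)
{
	b->mark = b->data - b->head;
}

int main(void)
{
	unsigned char storage[64];
	struct buf b = { .head = storage, .data = storage + 16 };

	set_mark(&b, 0);
	printf("set(0)  -> %ld\n", b.mark);
	reset_mark(&b);
	printf("reset() -> %ld\n", b.mark);
	return 0;
}
```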
-
- 03 Dec 2016, 1 commit
-
-
Submitted by Eric Dumazet:

My recent commit to get more precise rx/tx counters in ndo_get_stats64() can lead to crashes at device dismantle, as Jesper found out.

We must prevent mlx4_en_fold_software_stats() from trying to access tx/rx rings if they are deleted. Fix this by adding a test against priv->port_up in mlx4_en_fold_software_stats().

Calling mlx4_en_fold_software_stats() from mlx4_en_stop_port() allows us to eventually broadcast the latest/current counters to rtnetlink monitors.

Fixes: 40931b85 ("mlx4: give precise rx/tx bytes/packets counters")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-and-bisected-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Tariq Toukan <tariqt@mellanox.com>
Cc: Saeed Mahameed <saeedm@dev.mellanox.co.il>
Acked-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
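A standalone sketch of the guard being described (fake structures, not the driver's actual code): fold per-ring counters into the device totals only while the port is up and the rings still exist.

```c
#include <stdbool.h>
#include <stdio.h>

struct ring_stats { unsigned long packets, bytes; };

struct fake_priv {
	bool port_up;			/* cleared before rings are freed */
	int rx_ring_num;
	struct ring_stats *rx_ring;	/* freed once the port is stopped */
	unsigned long rx_packets, rx_bytes;
};

static void fold_software_stats(struct fake_priv *priv)
{
	unsigned long packets = 0, bytes = 0;

	/* The fix: never walk rings that may already be gone. */
	if (!priv->port_up || !priv->rx_ring)
		return;

	for (int i = 0; i < priv->rx_ring_num; i++) {
		packets += priv->rx_ring[i].packets;
		bytes   += priv->rx_ring[i].bytes;
	}
	priv->rx_packets = packets;
	priv->rx_bytes   = bytes;
}

int main(void)
{
	struct ring_stats rings[2] = { { 10, 1000 }, { 5, 500 } };
	struct fake_priv priv = { .port_up = true, .rx_ring_num = 2, .rx_ring = rings };

	fold_software_stats(&priv);
	printf("%lu packets, %lu bytes\n", priv.rx_packets, priv.rx_bytes);
	return 0;
}
```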
-
- 01 Dec 2016, 1 commit
-
-
Submitted by Souptick Joarder:

In mlx4_alloc_cmd_mailbox(), pci_pool_alloc() followed by memset() is replaced by pci_pool_zalloc().

Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
Reviewed-by: Yuval Shaia <yuval.shaia@oracle.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
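A kernel-context sketch of the substitution as it stood in kernels of that era (illustrative function name, not the literal mailbox code): let the pool allocator hand back a zeroed buffer instead of zeroing it separately.

```c
#include <linux/pci.h>
#include <linux/dma-mapping.h>

/* Before:
 *	buf = pci_pool_alloc(pool, GFP_KERNEL, dma);
 *	if (buf)
 *		memset(buf, 0, MAILBOX_SIZE);
 * After:
 */
static void *alloc_zeroed_mailbox(struct pci_pool *pool, dma_addr_t *dma)
{
	return pci_pool_zalloc(pool, GFP_KERNEL, dma);
}
```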
-
- 30 Nov 2016, 1 commit
-
-
Submitted by Eric Dumazet:

mlx4 stats are chaotic because a deferred work queue is responsible for updating them every 250 ms. Even sampling stats every second with "sar -n DEV 1" gives variations like the following:

lpaa23:~# sar -n DEV 1 10 | grep eth0 | cut -c1-65
07:39:22  eth0 146877.00 3265554.00 9467.15 4828168.50
07:39:23  eth0 146587.00 3260329.00 9448.15 4820445.98
07:39:24  eth0 146894.00 3259989.00 9468.55 4819943.26
07:39:25  eth0 110368.00 2454497.00 7113.95 3629012.17  <<>>
07:39:26  eth0 146563.00 3257502.00 9447.25 4816266.23
07:39:27  eth0 145678.00 3258292.00 9389.79 4817414.39
07:39:28  eth0 145268.00 3253171.00 9363.85 4809852.46
07:39:29  eth0 146439.00 3262185.00 9438.97 4823172.48
07:39:30  eth0 146758.00 3264175.00 9459.94 4826124.13
07:39:31  eth0 146843.00 3256903.00 9465.44 4815381.97
Average:  eth0 142827.50 3179259.70 9206.30 4700578.16

This patch allows the rx/tx bytes/packets counters to be folded at the time we need stats. We can now fetch stats every 1 ms if we want to check NIC behavior over a small time window. It is also easier to detect anomalies.

lpaa23:~# sar -n DEV 1 10 | grep eth0 | cut -c1-65
07:42:50  eth0 142915.00 3177696.00 9212.06 4698270.42
07:42:51  eth0 143741.00 3200232.00 9265.15 4731593.02
07:42:52  eth0 142781.00 3171600.00 9202.92 4689260.16
07:42:53  eth0 143835.00 3192932.00 9271.80 4720761.39
07:42:54  eth0 141922.00 3165174.00 9147.64 4679759.21
07:42:55  eth0 142993.00 3207038.00 9216.78 4741653.05
07:42:56  eth0 141394.06 3154335.64 9113.85 4663731.73
07:42:57  eth0 141850.00 3161202.00 9144.48 4673866.07
07:42:58  eth0 143439.00 3180736.00 9246.05 4702755.35
07:42:59  eth0 143501.00 3210992.00 9249.99 4747501.84
Average:  eth0 142835.66 3182165.93 9206.98 4704874.08

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 29 Nov 2016, 2 commits
-
-
Submitted by Jack Morgenstein:

net/mlx4: Fix uninitialized fields in rule when adding promiscuous mode to device managed flow steering

In procedure mlx4_flow_steer_promisc_add(), several fields were left uninitialized in the rule structure. Correctly initialize these fields.

Fixes: 592e49dd ("net/mlx4: Implement promiscuous mode with device managed flow-steering")
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
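A generic, standalone illustration of the kind of fix being described (the struct and field names below are made up, not mlx4's flow-steering rule): zero the whole rule before filling in the relevant fields so nothing is left uninitialized.

```c
#include <stdio.h>
#include <string.h>

struct fake_rule {
	int port;
	int priority;
	int qpn;
	int allow_loopback;
};

static void build_promisc_rule(struct fake_rule *rule, int port, int qpn)
{
	memset(rule, 0, sizeof(*rule));	/* every field starts at a known value */
	rule->port = port;
	rule->qpn = qpn;
}

int main(void)
{
	struct fake_rule rule;

	build_promisc_rule(&rule, 1, 0x40);
	printf("port=%d qpn=%#x priority=%d loopback=%d\n",
	       rule.port, rule.qpn, rule.priority, rule.allow_loopback);
	return 0;
}
```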
-
Submitted by Tariq Toukan:

This reverts commit 9d769311.

Using unregister_netdev at shutdown flow prevents calling the netdev's ndos or trying to access its freed resources.

This fixes crashes like the following:

Call Trace:
 [<ffffffff81587a6e>] dev_get_phys_port_id+0x1e/0x30
 [<ffffffff815a36ce>] rtnl_fill_ifinfo+0x4be/0xff0
 [<ffffffff815a53f3>] rtmsg_ifinfo_build_skb+0x73/0xe0
 [<ffffffff815a5476>] rtmsg_ifinfo.part.27+0x16/0x50
 [<ffffffff815a54c8>] rtmsg_ifinfo+0x18/0x20
 [<ffffffff8158a6c6>] netdev_state_change+0x46/0x50
 [<ffffffff815a5e78>] linkwatch_do_dev+0x38/0x50
 [<ffffffff815a6165>] __linkwatch_run_queue+0xf5/0x170
 [<ffffffff815a6205>] linkwatch_event+0x25/0x30
 [<ffffffff81099a82>] process_one_work+0x152/0x400
 [<ffffffff8109a325>] worker_thread+0x125/0x4b0
 [<ffffffff8109a200>] ? rescuer_thread+0x350/0x350
 [<ffffffff8109fc6a>] kthread+0xca/0xe0
 [<ffffffff8109fba0>] ? kthread_park+0x60/0x60
 [<ffffffff816a1285>] ret_from_fork+0x25/0x30

Fixes: 9d769311 ("net/mlx4_en: Avoid unregister_netdev at shutdown flow")
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Reported-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reported-by: Steve Wise <swise@opengridcomputing.com>
Cc: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 28 Nov 2016, 1 commit
-
-
Submitted by Eric Dumazet:

Per-RX-ring packets/bytes counters are not protected by the global priv->stats_lock. Better not to confuse the reader: use READ_ONCE() to show that we read these counters without surrounding synchronization. Interrupt moderation is best effort, and we do not really care about ultra-precise counters.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
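A kernel-context sketch of the annotation (illustrative names, not the driver's interrupt-moderation code): the reader samples per-ring counters without taking the writer's lock, and READ_ONCE() documents that this lockless, possibly slightly stale read is intentional.

```c
#include <linux/compiler.h>

struct example_ring {
	unsigned long packets;
	unsigned long bytes;
};

static void sample_ring_counters(const struct example_ring *ring,
				 unsigned long *pkts, unsigned long *bytes)
{
	/* Lockless reads: good enough for best-effort moderation decisions. */
	*pkts  = READ_ONCE(ring->packets);
	*bytes = READ_ONCE(ring->bytes);
}
```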
-
- 25 Nov 2016, 1 commit
-
-
Submitted by Eric Dumazet:

Goal is to reorganize this critical structure to increase performance. ndo_start_xmit() should only dirty one cache line, and access as few cache lines as possible.

Add an sp_ (Slow Path) prefix to fields that are not used in the fast path, to make clear what is going on.

After this patch pahole reports something much better, as all the fields needed by ndo_start_xmit() are packed into two cache lines instead of seven or eight:

struct mlx4_en_tx_ring {
	u32                        last_nr_txbb;      /* 0     0x4 */
	u32                        cons;              /* 0x4   0x4 */
	long unsigned int          wake_queue;        /* 0x8   0x8 */
	struct netdev_queue *      tx_queue;          /* 0x10  0x8 */
	u32                        (*free_tx_desc)(struct mlx4_en_priv *, struct mlx4_en_tx_ring *, int, u8, u64, int); /* 0x18 0x8 */
	struct mlx4_en_rx_ring *   recycle_ring;      /* 0x20  0x8 */

	/* XXX 24 bytes hole, try to pack */

	/* --- cacheline 1 boundary (64 bytes) --- */
	u32                        prod;              /* 0x40  0x4 */
	unsigned int               tx_dropped;        /* 0x44  0x4 */
	long unsigned int          bytes;             /* 0x48  0x8 */
	long unsigned int          packets;           /* 0x50  0x8 */
	long unsigned int          tx_csum;           /* 0x58  0x8 */
	long unsigned int          tso_packets;       /* 0x60  0x8 */
	long unsigned int          xmit_more;         /* 0x68  0x8 */
	struct mlx4_bf             bf;                /* 0x70  0x18 */
	/* --- cacheline 2 boundary (128 bytes) was 8 bytes ago --- */
	__be32                     doorbell_qpn;      /* 0x88  0x4 */
	__be32                     mr_key;            /* 0x8c  0x4 */
	u32                        size;              /* 0x90  0x4 */
	u32                        size_mask;         /* 0x94  0x4 */
	u32                        full_size;         /* 0x98  0x4 */
	u32                        buf_size;          /* 0x9c  0x4 */
	void *                     buf;               /* 0xa0  0x8 */
	struct mlx4_en_tx_info *   tx_info;           /* 0xa8  0x8 */
	int                        qpn;               /* 0xb0  0x4 */
	u8                         queue_index;       /* 0xb4  0x1 */
	bool                       bf_enabled;        /* 0xb5  0x1 */
	bool                       bf_alloced;        /* 0xb6  0x1 */
	u8                         hwtstamp_tx_type;  /* 0xb7  0x1 */
	u8 *                       bounce_buf;        /* 0xb8  0x8 */
	/* --- cacheline 3 boundary (192 bytes) --- */
	long unsigned int          queue_stopped;     /* 0xc0  0x8 */
	struct mlx4_hwq_resources  sp_wqres;          /* 0xc8  0x58 */
	/* --- cacheline 4 boundary (256 bytes) was 32 bytes ago --- */
	struct mlx4_qp             sp_qp;             /* 0x120 0x30 */
	/* --- cacheline 5 boundary (320 bytes) was 16 bytes ago --- */
	struct mlx4_qp_context     sp_context;        /* 0x150 0xf8 */
	/* --- cacheline 9 boundary (576 bytes) was 8 bytes ago --- */
	cpumask_t                  sp_affinity_mask;  /* 0x248 0x20 */
	enum mlx4_qp_state         sp_qp_state;       /* 0x268 0x4 */
	u16                        sp_stride;         /* 0x26c 0x2 */
	u16                        sp_cqn;            /* 0x26e 0x2 */

	/* size: 640, cachelines: 10, members: 36 */
	/* sum members: 600, holes: 1, sum holes: 24 */
	/* padding: 16 */
};

Instead of this silly placement:

struct mlx4_en_tx_ring {
	u32                        last_nr_txbb;      /* 0     0x4 */
	u32                        cons;              /* 0x4   0x4 */
	long unsigned int          wake_queue;        /* 0x8   0x8 */

	/* XXX 48 bytes hole, try to pack */

	/* --- cacheline 1 boundary (64 bytes) --- */
	u32                        prod;              /* 0x40  0x4 */

	/* XXX 4 bytes hole, try to pack */

	long unsigned int          bytes;             /* 0x48  0x8 */
	long unsigned int          packets;           /* 0x50  0x8 */
	long unsigned int          tx_csum;           /* 0x58  0x8 */
	long unsigned int          tso_packets;       /* 0x60  0x8 */
	long unsigned int          xmit_more;         /* 0x68  0x8 */
	unsigned int               tx_dropped;        /* 0x70  0x4 */

	/* XXX 4 bytes hole, try to pack */

	struct mlx4_bf             bf;                /* 0x78  0x18 */
	/* --- cacheline 2 boundary (128 bytes) was 16 bytes ago --- */
	long unsigned int          queue_stopped;     /* 0x90  0x8 */
	cpumask_t                  affinity_mask;     /* 0x98  0x10 */
	struct mlx4_qp             qp;                /* 0xa8  0x30 */
	/* --- cacheline 3 boundary (192 bytes) was 24 bytes ago --- */
	struct mlx4_hwq_resources  wqres;             /* 0xd8  0x58 */
	/* --- cacheline 4 boundary (256 bytes) was 48 bytes ago --- */
	u32                        size;              /* 0x130 0x4 */
	u32                        size_mask;         /* 0x134 0x4 */
	u16                        stride;            /* 0x138 0x2 */

	/* XXX 2 bytes hole, try to pack */

	u32                        full_size;         /* 0x13c 0x4 */
	/* --- cacheline 5 boundary (320 bytes) --- */
	u16                        cqn;               /* 0x140 0x2 */

	/* XXX 2 bytes hole, try to pack */

	u32                        buf_size;          /* 0x144 0x4 */
	__be32                     doorbell_qpn;      /* 0x148 0x4 */
	__be32                     mr_key;            /* 0x14c 0x4 */
	void *                     buf;               /* 0x150 0x8 */
	struct mlx4_en_tx_info *   tx_info;           /* 0x158 0x8 */
	struct mlx4_en_rx_ring *   recycle_ring;      /* 0x160 0x8 */
	u32                        (*free_tx_desc)(struct mlx4_en_priv *, struct mlx4_en_tx_ring *, int, u8, u64, int); /* 0x168 0x8 */
	u8 *                       bounce_buf;        /* 0x170 0x8 */
	struct mlx4_qp_context     context;           /* 0x178 0xf8 */
	/* --- cacheline 9 boundary (576 bytes) was 48 bytes ago --- */
	int                        qpn;               /* 0x270 0x4 */
	enum mlx4_qp_state         qp_state;          /* 0x274 0x4 */
	u8                         queue_index;       /* 0x278 0x1 */
	bool                       bf_enabled;        /* 0x279 0x1 */
	bool                       bf_alloced;        /* 0x27a 0x1 */

	/* XXX 5 bytes hole, try to pack */

	/* --- cacheline 10 boundary (640 bytes) --- */
	struct netdev_queue *      tx_queue;          /* 0x280 0x8 */
	int                        hwtstamp_tx_type;  /* 0x288 0x4 */

	/* size: 704, cachelines: 11, members: 36 */
	/* sum members: 587, holes: 6, sum holes: 65 */
	/* padding: 52 */
};

Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 24 Nov 2016, 1 commit
-
-
Submitted by Tariq Toukan:

Make sure mlx4_en_free_resources is called under the netdev state lock. This is needed since the RCU dereference of the XDP prog should be protected.

Fixes: 326fe02d ("net/mlx4_en: protect ring->xdp_prog with rcu_read_lock")
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Reported-by: Sagi Grimberg <sagi@grimberg.me>
CC: Brenden Blanco <bblanco@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 22 Nov 2016, 1 commit
-
-
Submitted by Eric Dumazet:

While stressing a 40Gbit mlx4 NIC with busy polling, I found false sharing in the mlx4 driver that can be easily avoided. This patch brings an additional 7% performance improvement in the UDP_RR workload.

1) If we received no frame during one mlx4_en_process_rx_cq() invocation, there is no need to call mlx4_cq_set_ci() and/or dirty ring->cons.

2) Do not refill rx buffers if we have plenty of them. This avoids false sharing and allows some bulk/batch optimizations. The page allocator and its locks will thank us.

Finally, mlx4_en_poll_rx_cq() should not return 0 if it determined that the cpu handling the NIC IRQ should be changed. We should return budget - 1 instead, to not fool net_rx_action() and its netdev_budget.

v2: keep AVG_PERF_COUNTER(... polled) even if polled is 0

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Tariq Toukan <tariqt@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
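A kernel-context sketch of the poll-return convention from the last point; process_rx() and irq_on_right_cpu() are hypothetical stand-ins for the driver's own logic, not real functions.

```c
#include <linux/netdevice.h>

/* Hypothetical helpers standing in for the driver's own logic. */
static int process_rx(struct napi_struct *napi, int budget)
{
	return 0;	/* pretend no packets were processed */
}

static bool irq_on_right_cpu(struct napi_struct *napi)
{
	return true;	/* pretend the IRQ affinity is already correct */
}

static int example_poll(struct napi_struct *napi, int budget)
{
	int done = process_rx(napi, budget);

	if (done < budget) {
		if (irq_on_right_cpu(napi)) {
			napi_complete_done(napi, done);
			return done;
		}
		/* Wrong CPU: stay scheduled so the IRQ can migrate, but
		 * report budget - 1 rather than 0 so net_rx_action() and
		 * its netdev_budget accounting are not fooled. */
		return budget - 1;
	}
	return budget;	/* budget exhausted: NAPI stays scheduled */
}
```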
-
- 18 Nov 2016, 1 commit
-
-
Submitted by Eric Dumazet:

There is no need to call napi_hash_del() + synchronize_rcu() before calling netif_napi_del(); netif_napi_del() does this already.

Using napi_hash_del() in a driver is useful only when dealing with a batch of NAPI structures, so that a single synchronize_rcu() can be used. mlx4_en_deactivate_cq() is deactivating a single NAPI.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
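A kernel-context sketch of the simplification (illustrative function name; "napi" stands in for the CQ's embedded NAPI instance):

```c
#include <linux/netdevice.h>

static void deactivate_cq_napi(struct napi_struct *napi)
{
	/* Redundant form removed by the patch:
	 *	napi_hash_del(napi);
	 *	synchronize_rcu();
	 *	netif_napi_del(napi);
	 *
	 * Per the commit, netif_napi_del() already unhashes the NAPI
	 * instance and performs the required RCU synchronization, so
	 * this single call is enough for one NAPI:
	 */
	netif_napi_del(napi);
}
```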
-
- 17 Nov 2016, 1 commit
-
-
Submitted by Eric Dumazet:

Do not rearm interrupts if we are busy polling. mlx4 uses separate CQs for TX and RX, so the number of TX interrupts does not change, unfortunately.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Willem de Bruijn <willemb@google.com>
Cc: Adam Belay <abelay@google.com>
Cc: Tariq Toukan <tariqt@mellanox.com>
Cc: Yuval Mintz <Yuval.Mintz@cavium.com>
Cc: Ariel Elior <ariel.elior@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 13 Nov 2016, 1 commit
-
-
Submitted by Daniel Borkmann:

Commit 67f8b1dc ("net/mlx4_en: Refactor the XDP forwarding rings scheme") added a bug in that the prog's reference count is not dropped in the error path when mlx4_en_try_alloc_resources() fails in mlx4_xdp_set().

We previously took bpf_prog_add(prog, priv->rx_ring_num - 1), which we need to release again. Earlier in the call path, dev_change_xdp_fd() itself holds a reference to the prog as well (hence the '- 1' in the bpf_prog_add()), so a simple atomic_sub() is safe to use here. When an error is propagated, bpf_prog_put() is eventually called from dev_change_xdp_fd().

Fixes: 67f8b1dc ("net/mlx4_en: Refactor the XDP forwarding rings scheme")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 10 Nov 2016, 1 commit
-
-
Submitted by Tariq Toukan:

This reverts commit 9d2afba0.

The original issue would possibly exist if an external module tried calling our "ethtool_ops" without checking if it still exists. The right way of solving it is by simply doing the check on the caller side. Currently, no action is required, as there is no such use case.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 03 Nov 2016, 3 commits
-
-
Submitted by Tariq Toukan:

XDP statistics are reported in ethtool, in total and per ring, as follows:
- xdp_drop: the number of packets dropped by XDP.
- xdp_tx: the number of packets forwarded by XDP.
- xdp_tx_full: the number of times an XDP forward failed due to a full tx xdp ring.

In addition, all packets that are dropped/forwarded by XDP are no longer accounted in rx_packets/rx_bytes of the ring, so that these counters reflect only traffic that is passed to the stack.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
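A standalone sketch of the accounting split being described (enum and struct names are made up, not the driver's): packets handled by XDP bump the xdp_* counters, and only traffic passed to the stack touches rx_packets/rx_bytes.

```c
#include <stdio.h>

enum xdp_verdict { VERDICT_PASS, VERDICT_TX, VERDICT_TX_FULL, VERDICT_DROP };

struct rx_counters {
	unsigned long rx_packets, rx_bytes;
	unsigned long xdp_drop, xdp_tx, xdp_tx_full;
};

static void account_rx(struct rx_counters *c, enum xdp_verdict v, unsigned int len)
{
	switch (v) {
	case VERDICT_PASS:	/* delivered to the stack */
		c->rx_packets++;
		c->rx_bytes += len;
		break;
	case VERDICT_TX:	/* forwarded by XDP */
		c->xdp_tx++;
		break;
	case VERDICT_TX_FULL:	/* forward failed: XDP TX ring full */
		c->xdp_tx_full++;
		break;
	case VERDICT_DROP:	/* dropped by XDP */
		c->xdp_drop++;
		break;
	}
}

int main(void)
{
	struct rx_counters c = { 0 };

	account_rx(&c, VERDICT_PASS, 1500);
	account_rx(&c, VERDICT_DROP, 64);
	printf("rx=%lu/%lu xdp_drop=%lu\n", c.rx_packets, c.rx_bytes, c.xdp_drop);
	return 0;
}
```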
-
Submitted by Tariq Toukan:

Separately manage the two types of TX rings: regular ones, and XDP. Upon an XDP set, do not borrow regular TX rings and convert them into XDP ones, but allocate new ones, unless we hit the max number of rings. This means that on systems with fewer cores we will not consume the existing TX rings for XDP while we are still within the TX ring limit.

XDP TX ring counters are not shown in ethtool statistics. Instead, XDP counters will be added to the respective RX rings in a downstream patch.

This has no performance implications.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Tariq Toukan:

Support the XDP CQ type, and refactor the CQ type enum. Rename the is_tx field to match the change.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 30 Oct 2016, 11 commits
-
-
Submitted by Tariq Toukan:

Following the previous patch, as an optimization, the slave will not even bother sending the DUMP_ETH_STATS command over the comm channel.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jack Morgenstein:

mlx4_en_DUMP_ETH_STATS took the *counter mutex* and then called the FW command with the WRAPPED attribute. As a result, the FW command is wrapped on the Hypervisor when it calls mlx4_en_DUMP_ETH_STATS. The FW command wrapper flow on the Hypervisor takes the *slave_cmd_mutex* during processing.

At the same time, a VF could be in the process of coming up and could call mlx4_QUERY_FUNC_CAP. On the Hypervisor, the command flow takes the *slave_cmd_mutex*, then executes mlx4_QUERY_FUNC_CAP_wrapper. mlx4_QUERY_FUNC_CAP_wrapper calls mlx4_get_default_counter_index(), which takes the *counter mutex*. DEADLOCK.

The fix is that the DUMP_ETH_STATS FW command should be called with the NATIVE attribute, so that on the Hypervisor this command does not enter the wrapper flow.

Since the Hypervisor no longer goes through the wrapper code, we also simply return 0 in mlx4_DUMP_ETH_STATS_wrapper (i.e., the function succeeds, but the returned data will be all zeroes). There is no need to test whether it is the Hypervisor going through the wrapper.

Fixes: f9baff50 ("mlx4_core: Add "native" argument to mlx4_cmd ...")
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Eugenia Emantayev:

Currently the interrupt test that is part of the ethtool selftest runs the check over all interrupt vectors of the device. In an mlx4_en-only setup, part of the interrupt vectors are uninitialized, since mlx4_ib doesn't exist. This causes the NOP FW command to time out. Change the logic to test the current port's interrupt vectors only.

Signed-off-by: Eugenia Emantayev <eugenia@mellanox.com>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jack Morgenstein:

In the Hypervisor, several FW commands are invoked before the comm channel is initialized (in mlx4_multi_func_init). These include MOD_STAT_CONFIG, QUERY_DEV_CAP, INIT_HCA, and others.

If any of these commands fails, say with a timeout, the Hypervisor driver enters the internal error reset flow. In this flow, the driver attempts to notify all slaves via the comm channel that an internal error has occurred. Since the comm channel has not yet been initialized (i.e., mapped via ioremap), this causes a NULL pointer dereference.

To fix this, do not access the comm channel in the internal error flow if it has not yet been initialized.

Fixes: 55ad3592 ("net/mlx4_core: Enable device recovery flow with SRIOV")
Fixes: ab9c17a0 ("mlx4_core: Modify driver initialization flow to accommodate SRIOV for Ethernet")
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Eugenia Emantayev:

Fix a kernel panic that occurs as a result of an asynchronous event handled in roce_gid_mgmt: mlx4_en_get_drvinfo is called and accesses freed resources. This happens in the shutdown flow only, since the pci device is destroyed while the netdevice is still alive.

Fixes: c27a02cd ("mlx4_en: Add driver for Mellanox ConnectX 10GbE NIC")
Signed-off-by: Eugenia Emantayev <eugenia@mellanox.com>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Erez Shitrit:

Currently there is a race between incoming traffic and the initialization flow. The HW is able to receive packets after INIT_PORT is done and unicast steering is configured. Before we set priv->port_up, NAPI is not scheduled and the receive queues become full; therefore we never get new interrupts about the completions. This issue can happen when running heavy traffic while bringing the port up.

The resolution is to schedule NAPI once port_up is set. If the receive queues were full, this will process all cqes and release them.

Fixes: c27a02cd ("mlx4_en: Add driver for Mellanox ConnectX 10GbE NIC")
Signed-off-by: Erez Shitrit <erezsh@mellanox.com>
Signed-off-by: Eugenia Emantayev <eugenia@mellanox.com>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
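A kernel-context sketch of the fix (the fake_priv/fake_rx_cq layout below is illustrative, not the driver's actual structures): once port_up is set, kick NAPI on every RX ring so completions that accumulated while the queues were full get drained.

```c
#include <linux/netdevice.h>

struct fake_rx_cq {
	struct napi_struct napi;
};

struct fake_priv {
	bool port_up;
	int rx_ring_num;
	struct fake_rx_cq **rx_cq;
};

static void kick_rx_napi(struct fake_priv *priv)
{
	int i;

	priv->port_up = true;
	/* If the RX queues filled up before port_up was set, no interrupt
	 * will arrive; scheduling NAPI here drains the pending cqes. */
	for (i = 0; i < priv->rx_ring_num; i++)
		napi_schedule(&priv->rx_cq[i]->napi);
}
```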
-
Submitted by Eugenia Emantayev:

When doing roundup_pow_of_two for a large enough number with bit 31 set, an overflow occurs and a value equal to 1 is returned. In this case, 1 is subtracted from the return value and a division by zero is reached.

Fixes: 31c128b6 ("net/mlx4_en: Choose time-stamping shift value according to HW frequency")
Signed-off-by: Eugenia Emantayev <eugenia@mellanox.com>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
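A standalone illustration of how that overflow yields 1 and then a zero divisor. The kernel computes roundup_pow_of_two(n) roughly as 1UL << fls(n - 1); with bit 31 set the shift count reaches 32, and on x86 the shift count is effectively masked to 5 bits, so the result collapses to 1. The masking is written out explicitly below to keep the C well-defined.

```c
#include <stdio.h>

/* Bit position of the highest set bit, like the kernel's fls(). */
static unsigned fls32(unsigned v)
{
	unsigned bits = 0;

	while (v) {
		v >>= 1;
		bits++;
	}
	return bits;
}

int main(void)
{
	unsigned freq = 0x80000001u;		/* a value with bit 31 set */
	unsigned shift = fls32(freq - 1);	/* 32: too large for a 32-bit shift */
	unsigned rounded = 1u << (shift & 31);	/* x86 masks the count -> 1 */

	printf("shift=%u rounded=%u divisor=%u\n", shift, rounded, rounded - 1);
	return 0;
}
```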
-
Submitted by Moshe Lazer:

Change the default status of quality of service back to disabled, as it hurts performance in some cases.

Fixes: 38438f7c ("net/mlx4: Set enhanced QoS support by default when ...")
Signed-off-by: Moshe Lazer <moshel@mellanox.com>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Maor Gottlieb:

When only one port type is supported, it should be read-only. We reject change requests, even to the auto-sense mode.

Fixes: 27bf91d6 ("mlx4_core: Add link type autosensing")
Signed-off-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Moshe Shemesh <moshe@mellanox.com>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jack Morgenstein:

The resource type enum in the resource tracker was incorrect: RES_EQ was put in the position of RES_NPORT_ID (an FC resource).

Since the remaining resources maintain their current values, and RES_EQ is not passed from slaves to the hypervisor in any FW command, this change affects only the hypervisor. Therefore, there is no backwards-compatibility issue.

Fixes: 623ed84b ("mlx4_core: initial header-file changes for SRIOV support")
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Moshe Shemesh <moshe@mellanox.com>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Arnd Bergmann:

There is an old warning about mlx4_SW2HW_EQ_wrapper on x86:

ethernet/mellanox/mlx4/resource_tracker.c: In function ‘mlx4_SW2HW_EQ_wrapper’:
ethernet/mellanox/mlx4/resource_tracker.c:3071:10: error: ‘eq’ may be used uninitialized in this function [-Werror=maybe-uninitialized]

The problem here is that gcc won't track the state of the variable across a spin_unlock. Moving the assignment out of the lock is safe here and avoids the warning.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 18 Oct 2016, 1 commit
-
-
Submitted by Jarod Wilson:

mlx4: min_mtu 46, max_mtu depends on hardware
mlx5: min_mtu 68, max_mtu depends on hardware

CC: netdev@vger.kernel.org
CC: Tariq Toukan <tariqt@mellanox.com>
CC: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Jarod Wilson <jarod@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
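A kernel-context sketch of the net_device MTU range fields this series populates (the function name and hw_max_mtu parameter are illustrative; 46 = ETH_ZLEN - ETH_HLEN matches the mlx4 minimum quoted above):

```c
#include <linux/netdevice.h>
#include <linux/if_ether.h>

static void set_mtu_range(struct net_device *dev, unsigned int hw_max_mtu)
{
	dev->min_mtu = ETH_ZLEN - ETH_HLEN;	/* 46, the mlx4 minimum */
	dev->max_mtu = hw_max_mtu;		/* hardware-dependent ceiling */
}
```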
-
- 14 Oct 2016, 1 commit
-
-
Submitted by Brenden Blanco:

In cases where the number of tx rings is not a multiple of the number of rx rings, the tx completion event will be handled on a different core from the one that transmits and populates the ring. Races on the ring will lead to a double-free of the page, and possibly other corruption.

The rings are initialized by default with a valid multiple of rings, based on the number of cpus; therefore an invalid configuration requires ethtool to change the ring layout. For instance 'ethtool -L eth0 rx 9 tx 8' will cause packets received on rx0, and XDP_TX'd to tx48, to be completed on cpu3 (48 % 9 == 3).

Resolve this discrepancy by shifting the irq for the xdp tx queues to start again from 0, modulo rx_ring_num.

Fixes: 9ecc2d86 ("net/mlx4_en: add xdp forwarding and data write support")
Reported-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Brenden Blanco <bblanco@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
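A standalone illustration of the arithmetic in the example above (the queue index 48 for rx0's XDP TX ring is taken from the commit's own example; everything else is a toy model, not driver code):

```c
#include <stdio.h>

int main(void)
{
	int rx_rings = 9;		/* ethtool -L eth0 rx 9 tx 8 */
	int xdp_txq_base = 48;		/* XDP TX queue paired with rx0 in the example */

	for (int rx = 0; rx < rx_rings; rx++) {
		int txq = xdp_txq_base + rx;
		/* Old behaviour: completion vector follows the raw queue
		 * index; fixed behaviour: it restarts from 0 per RX ring. */
		printf("rx%-2d -> tx%-2d: completion cpu %d (old), %d (fixed)\n",
		       rx, txq, txq % rx_rings, rx % rx_rings);
	}
	return 0;
}
```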
-
- 08 Oct 2016, 1 commit
-
-
Submitted by Jack Morgenstein:

In MLX qp packets, the LRH (built by the driver) has both a VL field and an SL field. When building a QP1 packet, the VL field should reflect the SLtoVL mapping and not arbitrarily contain zero (as is done now). This bug causes credit problems in IB switches at high rates of QP1 packets.

The fix is to cache the SL to VL mapping in the driver, and look up the VL mapped to the SL provided in the send request when sending QP1 packets.

For FW versions which support generating a port_management_config_change event with subtype sl-to-vl-table-change, the driver uses that event to update its sl-to-vl mapping cache. Otherwise, the driver snoops incoming SMP mads to update the cache.

There remains the case where the FW is running in secure-host mode (so no QP0 packets are delivered to the driver), and the FW does not generate the sl2vl mapping change event. To support this case, the driver updates (via querying the FW) its sl2vl mapping cache when running in secure-host mode when it receives either a Port Up event or a client-reregister event (where the port is still up, but there may have been an opensm failover). OpenSM modifies the sl2vl mapping before Port Up and Client-reregister events occur, so if there is a mapping change the driver's cache will be properly updated.

Fixes: 225c7b1f ("IB/mlx4: Add a driver Mellanox ConnectX InfiniBand adapters")
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
- 30 Sep 2016, 1 commit
-
-
Submitted by David Decotigny:

This also addresses the following UBSAN warnings:

[   36.640343] ================================================================================
[   36.648772] UBSAN: Undefined behaviour in drivers/net/ethernet/mellanox/mlx4/fw.c:857:26
[   36.656853] shift exponent 64 is too large for 32-bit type 'int'
[   36.663348] ================================================================================
[   36.671783] ================================================================================
[   36.680213] UBSAN: Undefined behaviour in drivers/net/ethernet/mellanox/mlx4/fw.c:861:27
[   36.688297] shift exponent 35 is too large for 32-bit type 'int'
[   36.694702] ================================================================================

Tested: reboot with UBSAN, no warning.

Signed-off-by: David Decotigny <decot@googlers.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
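A standalone illustration of the class of undefined behaviour UBSAN is flagging, and one well-defined way to build such masks (a generic sketch, not the actual fw.c change): shifting a 32-bit int by 35 is undefined, and even a 64-bit shift by 64 must be special-cased.

```c
#include <stdint.h>
#include <stdio.h>

/* Mask with the low "bits" bits set, without ever shifting by the
 * full width of the type. */
static uint64_t low_bits_mask(unsigned bits)
{
	if (bits >= 64)
		return ~0ULL;		/* avoid the undefined 1ULL << 64 */
	return (1ULL << bits) - 1;	/* e.g. bits = 35 stays well-defined */
}

int main(void)
{
	printf("mask(35) = 0x%llx\n", (unsigned long long)low_bits_mask(35));
	printf("mask(64) = 0x%llx\n", (unsigned long long)low_bits_mask(64));
	return 0;
}
```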
-
- 24 Sep 2016, 5 commits
-
-
Submitted by Moshe Shemesh:

Move the vf to VST 802.1ad mode (mlx4 VST QinQ mode) by setting the vf vlan protocol to 802.1ad. VST 802.1ad mode in mlx4 is used for STAG strip/insertion by the PF, while the CTAG is set by the VF.

Read the current vlan protocol as part of the vf configuration state.

Upon setting the vf vlan protocol to 802.1ad, we use a handshake mechanism to verify that both the vf and the pf driver version support it. The handshake uses the command QUERY_FUNC_CAP:
- The vf sets a pre-defined support bit in the input modifier.
- A pf that supports the feature sends the request to the vf through a pre-defined field in the output mailbox.
- In case the vf does not support the feature, the pf will fail the control command (in this case, the ip link tool command to set the vf vlan protocol to 802.1ad).

No change in VST 802.1Q mode.

Signed-off-by: Moshe Shemesh <moshe@mellanox.com>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Moshe Shemesh:

Introduce a new rtnl UAPI that exposes a list of vlans per VF, giving user-space applications the ability to specify it for the VF, as an option to support 802.1ad. We adjusted the IP link tool to support this option.

For future use cases, the new UAPI supports multiple vlans. For now we limit the list size to a single vlan in the kernel. Add IFLA_VF_VLAN_LIST in addition to IFLA_VF_VLAN to keep backward compatibility with older versions of the IP link tool.

Add a vlan protocol parameter to the ndo_set_vf_vlan callback (a sketch of the extended callback follows this entry). We kept 802.1Q as the drivers' default vlan protocol.

Suitable ip link tool command examples:
- Set vf vlan protocol 802.1ad: ip link set eth0 vf 1 vlan 100 proto 802.1ad
- Set vf to VST (802.1Q) mode: ip link set eth0 vf 1 vlan 100 proto 802.1Q, or by omitting the new parameter: ip link set eth0 vf 1 vlan 100

Signed-off-by: Moshe Shemesh <moshe@mellanox.com>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
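A kernel-context sketch of what the extended callback can look like in a driver (the function body is illustrative; only the signature with the new __be16 protocol argument comes from this series):

```c
#include <linux/netdevice.h>
#include <linux/if_vlan.h>
#include <linux/errno.h>

static int example_set_vf_vlan(struct net_device *dev, int vf, u16 vlan,
			       u8 qos, __be16 vlan_proto)
{
	/* A driver that only handles VST 802.1Q and 802.1ad rejects
	 * anything else. */
	if (vlan_proto != htons(ETH_P_8021Q) &&
	    vlan_proto != htons(ETH_P_8021AD))
		return -EPROTONOSUPPORT;

	/* ... program the VF's vlan, qos and protocol here ... */
	return 0;
}
```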
-
Submitted by Moshe Shemesh:

In an Ethernet VF, disable vlan HW acceleration on the VF while it is set to VF vlan protocol 802.1ad mode.

Signed-off-by: Moshe Shemesh <moshe@mellanox.com>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Moshe Shemesh:

Check the device capability to support VF vlan protocol 802.1ad mode. Add a vport vlan protocol attribute and initialize it by default to 802.1Q. Add update QP support for VF vlan protocol 802.1ad. Add the func capability vlan_offload_disable to disable all vlan HW acceleration on the VF while the VF is set to VF vlan protocol 802.1ad mode.

No change in VF vlan protocol 802.1Q (VST) mode.

Signed-off-by: Moshe Shemesh <moshe@mellanox.com>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Moshe Shemesh:

Separate QUERY_FUNC_CAP flags0 from QUERY_FUNC_CAP flags, as 'flags' is already used for another set of flags in FUNC CAP, while the phv bit should be part of a different set of flags. Remove the QUERY_FUNC_CAP port_flags field, as it is not in use.

Fixes: 77fc29c4 ('net/mlx4_core: Preparations for 802.1ad VLAN support')
Fixes: 5cc914f1 ('mlx4_core: Added FW commands and their wrappers for supporting SRIOV')
Signed-off-by: Moshe Shemesh <moshe@mellanox.com>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 22 Sep 2016, 1 commit
-
-
Submitted by Nicolas Pitre:

Drivers must be ready to accept NULL from ptp_clock_register() if the PTP clock subsystem is configured out. This patch documents that and ensures that all drivers cope well with a NULL return.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Reviewed-by: Eugenia Emantayev <eugenia@mellanox.com>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Acked-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
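A kernel-context sketch of the registration pattern being described (an illustrative wrapper, not any specific driver's code): an ERR_PTR means registration failed, while NULL simply means PTP support is compiled out and the driver should carry on without a clock.

```c
#include <linux/ptp_clock_kernel.h>
#include <linux/err.h>

static struct ptp_clock *register_ptp_clock(struct ptp_clock_info *info,
					    struct device *parent)
{
	struct ptp_clock *clock = ptp_clock_register(info, parent);

	if (IS_ERR(clock))
		return NULL;	/* real registration error: run without a clock */

	/* clock may legitimately be NULL here: the PTP subsystem is
	 * configured out, so the driver must just skip its PTP features. */
	return clock;
}
```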
-