Commit 1dcf3ac4 authored by David S. Miller

Merge branch 'mlx5-next'

Amir Vadai says:

====================
net/mlx5: ConnectX-4 100G Ethernet driver

This patchset extends the mlx5_core driver to support Ethernet
functionality. The Ethernet functionality in the mlx5 driver is
integrated into the core driver rather than provided as a separate
driver. The IB functionality remains in the mlx5_ib driver as before.

This functionality will enable the Ethernet capability of Mellanox's new
family of cards - ConnectX-4. Since backward compatibility is
maintained, existing Connect-IB cards that use this driver keep working
with the modified driver, and no issues with current deployments should
be seen.

Like the ConnectX-3 cards, ConnectX-4 is a VPI (Virtual Port Interface -
every port can be configured as Infiniband or Ethernet) card.
Unlike previous generations, the ConnectX-4 has a separate PCI function
per port.

The current code has the limitation that the InfiniBand and Ethernet
port types are mutually exclusive: when the driver is compiled with
Ethernet support, the InfiniBand functionality is disabled and vice
versa. To control this we added the CONFIG_MLX5_CORE_EN config option,
which is 'n' by default but can be changed by the user.

This limitation is short-lived and will be addressed soon.
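
As an illustration, a kernel .config fragment that enables the new
Ethernet support might look as follows (a sketch based on the Kconfig
entries added by this series; because of the mutual exclusion,
MLX5_INFINIBAND must stay disabled):

    CONFIG_MLX5_CORE=y
    CONFIG_MLX5_CORE_EN=y
    # CONFIG_MLX5_INFINIBAND is not set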

As part of this patchset, mlx5_ifc.h was heavily modified [1]. This file
is now generated automatically from the device specification document.
Since this patch is too big for the mail server, it might be missing from
the mailing list, but it can be pulled from an external git repository [2].

irq names are selected at driver initialization and therefore do not
contain the network interface name.
irq_balancer will still work thanks to an improvement introduced by Neil Horman
[3] that uses sysfs instead of /proc/interrupts.

Patchset was applied on top of commit ed2dfd90 ("tcp/dccp: warn user for
preferred ip_local_port_range")

[1] - Patch 4/11 ("net/mlx5_core: HW data structs/types definitions preparation for mlx5 ethernet driver")
[2] - http://git.openfabrics.org/?p=~amirv/linux.git;a=shortlog;h=refs/heads/mlx5e_v1
[3] - kernel: da8d1c8b PCI/sysfs: add per pci device msi[x] irq listing (v5)
      irq_balancer: 32a7757 Complete rework of how we detect and classify irqs

Thanks to Achiad, Saeed, Yevheny, Or and the whole team for making this happen,
Amir

Changes from V4:
- Removed Patch 3/12: net/mlx5_core: Add EQ renaming mechanism
- Patch 12/12: net/mlx5: Extend mlx5_core to support ConnectX-4 Ethernet functionality
  - irq name is created on driver initialization, therefore it won't contain
    the network interface name. This won't affect irq_balancer thanks to
    patches introduced by Neil Horman to use sysfs instead of /proc/interrupts.

Changes from V3:
- PATCH 8/11: net/mlx5_core: Set/Query port MTU commands
  - Return value directly - no need for err.

Changes from V2:
- Improved changelogs and cover-letter
- Added CONFIG_MLX5_EN to disable/enable the Ethernet functionality
- Moved en.h and wq.[ch] into the patch with data-path related code

Changes from V1:
- Added patch 1/12 ("net/mlx5_core,mlx5_ib: Do not use vmap() on coherent
  memory")

Changes from V0:
- Removed V0 Patch 1/11 ("net/mlx5_core: Virtually extend work/completion queue
  buffers by one page") due to misuse of DMA API. Thanks Dave.
- Patch 1/11 ("net/mlx5_core: Set irq affinity hints"):
  - Use kcalloc instead of kzalloc
  - Fix build error when CONFIG_CPUMASK_OFFSTACK=n. Driver loading will now
    fail if cpumask allocation fails.
  - Using dev_to_node helper. Thanks, Ido.
- Patch 3/11 ("net/mlx5_core: HW data structs/types definitions preparation for
  mlx5 ehternet driver")
  - Removed Mellanox internal comment at the head of the file. Thanks Joe
- Patch 6/11 ("net/mlx5_core: Implement get/set port status")
  - Use direct return of function's result. Thanks Sergei.
- Added Patch 8/11 ("net/mlx5_core: Set/Query port MTU commands")
- Patch 9/11 ("net/mlx5: Ethernet Datapath files")
  - Use rq->wqe_sz instead of skb_end_offset. Thanks Ido.
  - Use dma_wmb() when possible instead of wmb(). Thanks Alex.
  - Fix checkpatch issues
- Patch 10/11 ("net/mlx5: ethernet resources handling")
  - checkpatch issues
  - Added missing include
- Patch 11/11 ("net/mlx5: Ethernet driver")
  - checkpatch issues
  - fixed typo
  - Modified use of affinity hint
  - Using dev_to_node helper. Thanks, Ido.
  - Use new hardware commands from Patch 8/11 ("net/mlx5_core: Set/Query port
    MTU commands") to get/set port MTU in hardware.
  - Removed NETIF_F_SG since hardware ring wraparound is not supported
  - Use dma_wmb() when possible instead of wmb(). Thanks Alex.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
config MLX5_INFINIBAND
tristate "Mellanox Connect-IB HCA support"
depends on NETDEVICES && ETHERNET && PCI
select NET_VENDOR_MELLANOX
select MLX5_CORE
depends on NETDEVICES && ETHERNET && PCI && MLX5_CORE
---help---
This driver provides low-level InfiniBand support for
Mellanox Connect-IB PCI Express host channel adapters (HCAs).
......
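
Most of the mlx5_ib hunks that follow apply one recurring change: fields
that used to be read from the cached mdev->caps.gen structure are now read
through the MLX5_CAP_GEN() accessor macro, which queries the firmware-
reported HCA capability layout per field. A minimal before/after sketch
(the field name is taken from the mlx5_ib_query_device() hunk below):

    /* before: cached general-capabilities structure */
    props->max_qp = 1 << mdev->caps.gen.log_max_qp;

    /* after: per-field accessor over the firmware capability layout */
    props->max_qp = 1 << MLX5_CAP_GEN(mdev, log_max_qp);
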
......@@ -590,8 +590,7 @@ static int alloc_cq_buf(struct mlx5_ib_dev *dev, struct mlx5_ib_cq_buf *buf,
{
int err;
err = mlx5_buf_alloc(dev->mdev, nent * cqe_size,
PAGE_SIZE * 2, &buf->buf);
err = mlx5_buf_alloc(dev->mdev, nent * cqe_size, &buf->buf);
if (err)
return err;
......@@ -754,7 +753,7 @@ struct ib_cq *mlx5_ib_create_cq(struct ib_device *ibdev, int entries,
return ERR_PTR(-EINVAL);
entries = roundup_pow_of_two(entries + 1);
if (entries > dev->mdev->caps.gen.max_cqes)
if (entries > (1 << MLX5_CAP_GEN(dev->mdev, log_max_cq_sz)))
return ERR_PTR(-EINVAL);
cq = kzalloc(sizeof(*cq), GFP_KERNEL);
......@@ -921,7 +920,7 @@ int mlx5_ib_modify_cq(struct ib_cq *cq, u16 cq_count, u16 cq_period)
int err;
u32 fsel;
if (!(dev->mdev->caps.gen.flags & MLX5_DEV_CAP_FLAG_CQ_MODER))
if (!MLX5_CAP_GEN(dev->mdev, cq_moderation))
return -ENOSYS;
in = kzalloc(sizeof(*in), GFP_KERNEL);
......@@ -1076,7 +1075,7 @@ int mlx5_ib_resize_cq(struct ib_cq *ibcq, int entries, struct ib_udata *udata)
int uninitialized_var(cqe_size);
unsigned long flags;
if (!(dev->mdev->caps.gen.flags & MLX5_DEV_CAP_FLAG_RESIZE_CQ)) {
if (!MLX5_CAP_GEN(dev->mdev, cq_resize)) {
pr_info("Firmware does not support resize CQ\n");
return -ENOSYS;
}
......@@ -1085,7 +1084,7 @@ int mlx5_ib_resize_cq(struct ib_cq *ibcq, int entries, struct ib_udata *udata)
return -EINVAL;
entries = roundup_pow_of_two(entries + 1);
if (entries > dev->mdev->caps.gen.max_cqes + 1)
if (entries > (1 << MLX5_CAP_GEN(dev->mdev, log_max_cq_sz)) + 1)
return -EINVAL;
if (entries == ibcq->cqe + 1)
......
......@@ -129,7 +129,7 @@ int mlx5_query_ext_port_caps(struct mlx5_ib_dev *dev, u8 port)
packet_error = be16_to_cpu(out_mad->status);
dev->mdev->caps.gen.ext_port_cap[port - 1] = (!err && !packet_error) ?
dev->mdev->port_caps[port - 1].ext_port_cap = (!err && !packet_error) ?
MLX_EXT_PORT_CAP_FLAG_EXTENDED_PORT_INFO : 0;
out:
......
......@@ -66,15 +66,13 @@ static int mlx5_ib_query_device(struct ib_device *ibdev,
struct ib_device_attr *props)
{
struct mlx5_ib_dev *dev = to_mdev(ibdev);
struct mlx5_core_dev *mdev = dev->mdev;
struct ib_smp *in_mad = NULL;
struct ib_smp *out_mad = NULL;
struct mlx5_general_caps *gen;
int err = -ENOMEM;
int max_rq_sg;
int max_sq_sg;
u64 flags;
gen = &dev->mdev->caps.gen;
in_mad = kzalloc(sizeof(*in_mad), GFP_KERNEL);
out_mad = kmalloc(sizeof(*out_mad), GFP_KERNEL);
if (!in_mad || !out_mad)
......@@ -96,18 +94,18 @@ static int mlx5_ib_query_device(struct ib_device *ibdev,
IB_DEVICE_PORT_ACTIVE_EVENT |
IB_DEVICE_SYS_IMAGE_GUID |
IB_DEVICE_RC_RNR_NAK_GEN;
flags = gen->flags;
if (flags & MLX5_DEV_CAP_FLAG_BAD_PKEY_CNTR)
if (MLX5_CAP_GEN(mdev, pkv))
props->device_cap_flags |= IB_DEVICE_BAD_PKEY_CNTR;
if (flags & MLX5_DEV_CAP_FLAG_BAD_QKEY_CNTR)
if (MLX5_CAP_GEN(mdev, qkv))
props->device_cap_flags |= IB_DEVICE_BAD_QKEY_CNTR;
if (flags & MLX5_DEV_CAP_FLAG_APM)
if (MLX5_CAP_GEN(mdev, apm))
props->device_cap_flags |= IB_DEVICE_AUTO_PATH_MIG;
props->device_cap_flags |= IB_DEVICE_LOCAL_DMA_LKEY;
if (flags & MLX5_DEV_CAP_FLAG_XRC)
if (MLX5_CAP_GEN(mdev, xrc))
props->device_cap_flags |= IB_DEVICE_XRC;
props->device_cap_flags |= IB_DEVICE_MEM_MGT_EXTENSIONS;
if (flags & MLX5_DEV_CAP_FLAG_SIG_HAND_OVER) {
if (MLX5_CAP_GEN(mdev, sho)) {
props->device_cap_flags |= IB_DEVICE_SIGNATURE_HANDOVER;
/* At this stage no support for signature handover */
props->sig_prot_cap = IB_PROT_T10DIF_TYPE_1 |
......@@ -116,7 +114,7 @@ static int mlx5_ib_query_device(struct ib_device *ibdev,
props->sig_guard_cap = IB_GUARD_T10DIF_CRC |
IB_GUARD_T10DIF_CSUM;
}
if (flags & MLX5_DEV_CAP_FLAG_BLOCK_MCAST)
if (MLX5_CAP_GEN(mdev, block_lb_mc))
props->device_cap_flags |= IB_DEVICE_BLOCK_MULTICAST_LOOPBACK;
props->vendor_id = be32_to_cpup((__be32 *)(out_mad->data + 36)) &
......@@ -126,37 +124,38 @@ static int mlx5_ib_query_device(struct ib_device *ibdev,
memcpy(&props->sys_image_guid, out_mad->data + 4, 8);
props->max_mr_size = ~0ull;
props->page_size_cap = gen->min_page_sz;
props->max_qp = 1 << gen->log_max_qp;
props->max_qp_wr = gen->max_wqes;
max_rq_sg = gen->max_rq_desc_sz / sizeof(struct mlx5_wqe_data_seg);
max_sq_sg = (gen->max_sq_desc_sz - sizeof(struct mlx5_wqe_ctrl_seg)) /
sizeof(struct mlx5_wqe_data_seg);
props->page_size_cap = 1ull << MLX5_CAP_GEN(mdev, log_pg_sz);
props->max_qp = 1 << MLX5_CAP_GEN(mdev, log_max_qp);
props->max_qp_wr = 1 << MLX5_CAP_GEN(mdev, log_max_qp_sz);
max_rq_sg = MLX5_CAP_GEN(mdev, max_wqe_sz_rq) /
sizeof(struct mlx5_wqe_data_seg);
max_sq_sg = (MLX5_CAP_GEN(mdev, max_wqe_sz_sq) -
sizeof(struct mlx5_wqe_ctrl_seg)) /
sizeof(struct mlx5_wqe_data_seg);
props->max_sge = min(max_rq_sg, max_sq_sg);
props->max_cq = 1 << gen->log_max_cq;
props->max_cqe = gen->max_cqes - 1;
props->max_mr = 1 << gen->log_max_mkey;
props->max_pd = 1 << gen->log_max_pd;
props->max_qp_rd_atom = 1 << gen->log_max_ra_req_qp;
props->max_qp_init_rd_atom = 1 << gen->log_max_ra_res_qp;
props->max_srq = 1 << gen->log_max_srq;
props->max_srq_wr = gen->max_srq_wqes - 1;
props->local_ca_ack_delay = gen->local_ca_ack_delay;
props->max_cq = 1 << MLX5_CAP_GEN(mdev, log_max_cq);
props->max_cqe = (1 << MLX5_CAP_GEN(mdev, log_max_eq_sz)) - 1;
props->max_mr = 1 << MLX5_CAP_GEN(mdev, log_max_mkey);
props->max_pd = 1 << MLX5_CAP_GEN(mdev, log_max_pd);
props->max_qp_rd_atom = 1 << MLX5_CAP_GEN(mdev, log_max_ra_req_qp);
props->max_qp_init_rd_atom = 1 << MLX5_CAP_GEN(mdev, log_max_ra_res_qp);
props->max_srq = 1 << MLX5_CAP_GEN(mdev, log_max_srq);
props->max_srq_wr = (1 << MLX5_CAP_GEN(mdev, log_max_srq_sz)) - 1;
props->local_ca_ack_delay = MLX5_CAP_GEN(mdev, local_ca_ack_delay);
props->max_res_rd_atom = props->max_qp_rd_atom * props->max_qp;
props->max_srq_sge = max_rq_sg - 1;
props->max_fast_reg_page_list_len = (unsigned int)-1;
props->local_ca_ack_delay = gen->local_ca_ack_delay;
props->atomic_cap = IB_ATOMIC_NONE;
props->masked_atomic_cap = IB_ATOMIC_NONE;
props->max_pkeys = be16_to_cpup((__be16 *)(out_mad->data + 28));
props->max_mcast_grp = 1 << gen->log_max_mcg;
props->max_mcast_qp_attach = gen->max_qp_mcg;
props->max_mcast_grp = 1 << MLX5_CAP_GEN(mdev, log_max_mcg);
props->max_mcast_qp_attach = MLX5_CAP_GEN(mdev, max_qp_mcg);
props->max_total_mcast_qp_attach = props->max_mcast_qp_attach *
props->max_mcast_grp;
props->max_map_per_fmr = INT_MAX; /* no limit in ConnectIB */
#ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING
if (dev->mdev->caps.gen.flags & MLX5_DEV_CAP_FLAG_ON_DMND_PG)
if (MLX5_CAP_GEN(mdev, pg))
props->device_cap_flags |= IB_DEVICE_ON_DEMAND_PAGING;
props->odp_caps = dev->odp_caps;
#endif
......@@ -172,14 +171,13 @@ int mlx5_ib_query_port(struct ib_device *ibdev, u8 port,
struct ib_port_attr *props)
{
struct mlx5_ib_dev *dev = to_mdev(ibdev);
struct mlx5_core_dev *mdev = dev->mdev;
struct ib_smp *in_mad = NULL;
struct ib_smp *out_mad = NULL;
struct mlx5_general_caps *gen;
int ext_active_speed;
int err = -ENOMEM;
gen = &dev->mdev->caps.gen;
if (port < 1 || port > gen->num_ports) {
if (port < 1 || port > MLX5_CAP_GEN(mdev, num_ports)) {
mlx5_ib_warn(dev, "invalid port number %d\n", port);
return -EINVAL;
}
......@@ -210,8 +208,8 @@ int mlx5_ib_query_port(struct ib_device *ibdev, u8 port,
props->phys_state = out_mad->data[33] >> 4;
props->port_cap_flags = be32_to_cpup((__be32 *)(out_mad->data + 20));
props->gid_tbl_len = out_mad->data[50];
props->max_msg_sz = 1 << gen->log_max_msg;
props->pkey_tbl_len = gen->port[port - 1].pkey_table_len;
props->max_msg_sz = 1 << MLX5_CAP_GEN(mdev, log_max_msg);
props->pkey_tbl_len = mdev->port_caps[port - 1].pkey_table_len;
props->bad_pkey_cntr = be16_to_cpup((__be16 *)(out_mad->data + 46));
props->qkey_viol_cntr = be16_to_cpup((__be16 *)(out_mad->data + 48));
props->active_width = out_mad->data[31] & 0xf;
......@@ -238,7 +236,7 @@ int mlx5_ib_query_port(struct ib_device *ibdev, u8 port,
/* If reported active speed is QDR, check if is FDR-10 */
if (props->active_speed == 4) {
if (gen->ext_port_cap[port - 1] &
if (mdev->port_caps[port - 1].ext_port_cap &
MLX_EXT_PORT_CAP_FLAG_EXTENDED_PORT_INFO) {
init_query_mad(in_mad);
in_mad->attr_id = MLX5_ATTR_EXTENDED_PORT_INFO;
......@@ -392,7 +390,6 @@ static struct ib_ucontext *mlx5_ib_alloc_ucontext(struct ib_device *ibdev,
struct mlx5_ib_alloc_ucontext_req_v2 req;
struct mlx5_ib_alloc_ucontext_resp resp;
struct mlx5_ib_ucontext *context;
struct mlx5_general_caps *gen;
struct mlx5_uuar_info *uuari;
struct mlx5_uar *uars;
int gross_uuars;
......@@ -403,7 +400,6 @@ static struct ib_ucontext *mlx5_ib_alloc_ucontext(struct ib_device *ibdev,
int i;
size_t reqlen;
gen = &dev->mdev->caps.gen;
if (!dev->ib_active)
return ERR_PTR(-EAGAIN);
......@@ -436,14 +432,14 @@ static struct ib_ucontext *mlx5_ib_alloc_ucontext(struct ib_device *ibdev,
num_uars = req.total_num_uuars / MLX5_NON_FP_BF_REGS_PER_PAGE;
gross_uuars = num_uars * MLX5_BF_REGS_PER_PAGE;
resp.qp_tab_size = 1 << gen->log_max_qp;
resp.bf_reg_size = gen->bf_reg_size;
resp.cache_line_size = L1_CACHE_BYTES;
resp.max_sq_desc_sz = gen->max_sq_desc_sz;
resp.max_rq_desc_sz = gen->max_rq_desc_sz;
resp.max_send_wqebb = gen->max_wqes;
resp.max_recv_wr = gen->max_wqes;
resp.max_srq_recv_wr = gen->max_srq_wqes;
resp.qp_tab_size = 1 << MLX5_CAP_GEN(dev->mdev, log_max_qp);
resp.bf_reg_size = 1 << MLX5_CAP_GEN(dev->mdev, log_bf_reg_size);
resp.cache_line_size = L1_CACHE_BYTES;
resp.max_sq_desc_sz = MLX5_CAP_GEN(dev->mdev, max_wqe_sz_sq);
resp.max_rq_desc_sz = MLX5_CAP_GEN(dev->mdev, max_wqe_sz_rq);
resp.max_send_wqebb = 1 << MLX5_CAP_GEN(dev->mdev, log_max_qp_sz);
resp.max_recv_wr = 1 << MLX5_CAP_GEN(dev->mdev, log_max_qp_sz);
resp.max_srq_recv_wr = 1 << MLX5_CAP_GEN(dev->mdev, log_max_srq_sz);
context = kzalloc(sizeof(*context), GFP_KERNEL);
if (!context)
......@@ -493,7 +489,7 @@ static struct ib_ucontext *mlx5_ib_alloc_ucontext(struct ib_device *ibdev,
mutex_init(&context->db_page_mutex);
resp.tot_uuars = req.total_num_uuars;
resp.num_ports = gen->num_ports;
resp.num_ports = MLX5_CAP_GEN(dev->mdev, num_ports);
err = ib_copy_to_udata(udata, &resp,
sizeof(resp) - sizeof(resp.reserved));
if (err)
......@@ -895,11 +891,9 @@ static void mlx5_ib_event(struct mlx5_core_dev *dev, void *context,
static void get_ext_port_caps(struct mlx5_ib_dev *dev)
{
struct mlx5_general_caps *gen;
int port;
gen = &dev->mdev->caps.gen;
for (port = 1; port <= gen->num_ports; port++)
for (port = 1; port <= MLX5_CAP_GEN(dev->mdev, num_ports); port++)
mlx5_query_ext_port_caps(dev, port);
}
......@@ -907,11 +901,9 @@ static int get_port_caps(struct mlx5_ib_dev *dev)
{
struct ib_device_attr *dprops = NULL;
struct ib_port_attr *pprops = NULL;
struct mlx5_general_caps *gen;
int err = -ENOMEM;
int port;
gen = &dev->mdev->caps.gen;
pprops = kmalloc(sizeof(*pprops), GFP_KERNEL);
if (!pprops)
goto out;
......@@ -926,14 +918,17 @@ static int get_port_caps(struct mlx5_ib_dev *dev)
goto out;
}
for (port = 1; port <= gen->num_ports; port++) {
for (port = 1; port <= MLX5_CAP_GEN(dev->mdev, num_ports); port++) {
err = mlx5_ib_query_port(&dev->ib_dev, port, pprops);
if (err) {
mlx5_ib_warn(dev, "query_port %d failed %d\n", port, err);
mlx5_ib_warn(dev, "query_port %d failed %d\n",
port, err);
break;
}
gen->port[port - 1].pkey_table_len = dprops->max_pkeys;
gen->port[port - 1].gid_table_len = pprops->gid_tbl_len;
dev->mdev->port_caps[port - 1].pkey_table_len =
dprops->max_pkeys;
dev->mdev->port_caps[port - 1].gid_table_len =
pprops->gid_tbl_len;
mlx5_ib_dbg(dev, "pkey_table_len %d, gid_table_len %d\n",
dprops->max_pkeys, pprops->gid_tbl_len);
}
......@@ -1207,8 +1202,8 @@ static void *mlx5_ib_add(struct mlx5_core_dev *mdev)
strlcpy(dev->ib_dev.name, "mlx5_%d", IB_DEVICE_NAME_MAX);
dev->ib_dev.owner = THIS_MODULE;
dev->ib_dev.node_type = RDMA_NODE_IB_CA;
dev->ib_dev.local_dma_lkey = mdev->caps.gen.reserved_lkey;
dev->num_ports = mdev->caps.gen.num_ports;
dev->ib_dev.local_dma_lkey = 0 /* not supported for now */;
dev->num_ports = MLX5_CAP_GEN(mdev, num_ports);
dev->ib_dev.phys_port_cnt = dev->num_ports;
dev->ib_dev.num_comp_vectors =
dev->mdev->priv.eq_table.num_comp_vectors;
......@@ -1286,9 +1281,9 @@ static void *mlx5_ib_add(struct mlx5_core_dev *mdev)
dev->ib_dev.free_fast_reg_page_list = mlx5_ib_free_fast_reg_page_list;
dev->ib_dev.check_mr_status = mlx5_ib_check_mr_status;
mlx5_ib_internal_query_odp_caps(dev);
mlx5_ib_internal_fill_odp_caps(dev);
if (mdev->caps.gen.flags & MLX5_DEV_CAP_FLAG_XRC) {
if (MLX5_CAP_GEN(mdev, xrc)) {
dev->ib_dev.alloc_xrcd = mlx5_ib_alloc_xrcd;
dev->ib_dev.dealloc_xrcd = mlx5_ib_dealloc_xrcd;
dev->ib_dev.uverbs_cmd_mask |=
......
......@@ -617,7 +617,7 @@ int mlx5_ib_check_mr_status(struct ib_mr *ibmr, u32 check_mask,
#ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING
extern struct workqueue_struct *mlx5_ib_page_fault_wq;
int mlx5_ib_internal_query_odp_caps(struct mlx5_ib_dev *dev);
void mlx5_ib_internal_fill_odp_caps(struct mlx5_ib_dev *dev);
void mlx5_ib_mr_pfault_handler(struct mlx5_ib_qp *qp,
struct mlx5_ib_pfault *pfault);
void mlx5_ib_odp_create_qp(struct mlx5_ib_qp *qp);
......@@ -631,9 +631,9 @@ void mlx5_ib_invalidate_range(struct ib_umem *umem, unsigned long start,
unsigned long end);
#else /* CONFIG_INFINIBAND_ON_DEMAND_PAGING */
static inline int mlx5_ib_internal_query_odp_caps(struct mlx5_ib_dev *dev)
static inline void mlx5_ib_internal_fill_odp_caps(struct mlx5_ib_dev *dev)
{
return 0;
return;
}
static inline void mlx5_ib_odp_create_qp(struct mlx5_ib_qp *qp) {}
......
......@@ -975,8 +975,7 @@ static struct mlx5_ib_mr *reg_create(struct ib_pd *pd, u64 virt_addr,
struct mlx5_ib_mr *mr;
int inlen;
int err;
bool pg_cap = !!(dev->mdev->caps.gen.flags &
MLX5_DEV_CAP_FLAG_ON_DMND_PG);
bool pg_cap = !!(MLX5_CAP_GEN(dev->mdev, pg));
mr = kzalloc(sizeof(*mr), GFP_KERNEL);
if (!mr)
......
......@@ -109,40 +109,33 @@ void mlx5_ib_invalidate_range(struct ib_umem *umem, unsigned long start,
ib_umem_odp_unmap_dma_pages(umem, start, end);
}
#define COPY_ODP_BIT_MLX_TO_IB(reg, ib_caps, field_name, bit_name) do { \
if (be32_to_cpu(reg.field_name) & MLX5_ODP_SUPPORT_##bit_name) \
ib_caps->field_name |= IB_ODP_SUPPORT_##bit_name; \
} while (0)
int mlx5_ib_internal_query_odp_caps(struct mlx5_ib_dev *dev)
void mlx5_ib_internal_fill_odp_caps(struct mlx5_ib_dev *dev)
{
int err;
struct mlx5_odp_caps hw_caps;
struct ib_odp_caps *caps = &dev->odp_caps;
memset(caps, 0, sizeof(*caps));
if (!(dev->mdev->caps.gen.flags & MLX5_DEV_CAP_FLAG_ON_DMND_PG))
return 0;
err = mlx5_query_odp_caps(dev->mdev, &hw_caps);
if (err)
goto out;
if (!MLX5_CAP_GEN(dev->mdev, pg))
return;
caps->general_caps = IB_ODP_SUPPORT;
COPY_ODP_BIT_MLX_TO_IB(hw_caps, caps, per_transport_caps.ud_odp_caps,
SEND);
COPY_ODP_BIT_MLX_TO_IB(hw_caps, caps, per_transport_caps.rc_odp_caps,
SEND);
COPY_ODP_BIT_MLX_TO_IB(hw_caps, caps, per_transport_caps.rc_odp_caps,
RECV);
COPY_ODP_BIT_MLX_TO_IB(hw_caps, caps, per_transport_caps.rc_odp_caps,
WRITE);
COPY_ODP_BIT_MLX_TO_IB(hw_caps, caps, per_transport_caps.rc_odp_caps,
READ);
out:
return err;
if (MLX5_CAP_ODP(dev->mdev, ud_odp_caps.send))
caps->per_transport_caps.ud_odp_caps |= IB_ODP_SUPPORT_SEND;
if (MLX5_CAP_ODP(dev->mdev, rc_odp_caps.send))
caps->per_transport_caps.rc_odp_caps |= IB_ODP_SUPPORT_SEND;
if (MLX5_CAP_ODP(dev->mdev, rc_odp_caps.receive))
caps->per_transport_caps.rc_odp_caps |= IB_ODP_SUPPORT_RECV;
if (MLX5_CAP_ODP(dev->mdev, rc_odp_caps.write))
caps->per_transport_caps.rc_odp_caps |= IB_ODP_SUPPORT_WRITE;
if (MLX5_CAP_ODP(dev->mdev, rc_odp_caps.read))
caps->per_transport_caps.rc_odp_caps |= IB_ODP_SUPPORT_READ;
return;
}
static struct mlx5_ib_mr *mlx5_ib_odp_find_mr_lkey(struct mlx5_ib_dev *dev,
......
......@@ -220,13 +220,11 @@ static void mlx5_ib_qp_event(struct mlx5_core_qp *qp, int type)
static int set_rq_size(struct mlx5_ib_dev *dev, struct ib_qp_cap *cap,
int has_rq, struct mlx5_ib_qp *qp, struct mlx5_ib_create_qp *ucmd)
{
struct mlx5_general_caps *gen;
int wqe_size;
int wq_size;
gen = &dev->mdev->caps.gen;
/* Sanity check RQ size before proceeding */
if (cap->max_recv_wr > gen->max_wqes)
if (cap->max_recv_wr > (1 << MLX5_CAP_GEN(dev->mdev, log_max_qp_sz)))
return -EINVAL;
if (!has_rq) {
......@@ -246,10 +244,11 @@ static int set_rq_size(struct mlx5_ib_dev *dev, struct ib_qp_cap *cap,
wq_size = roundup_pow_of_two(cap->max_recv_wr) * wqe_size;
wq_size = max_t(int, wq_size, MLX5_SEND_WQE_BB);
qp->rq.wqe_cnt = wq_size / wqe_size;
if (wqe_size > gen->max_rq_desc_sz) {
if (wqe_size > MLX5_CAP_GEN(dev->mdev, max_wqe_sz_rq)) {
mlx5_ib_dbg(dev, "wqe_size %d, max %d\n",
wqe_size,
gen->max_rq_desc_sz);
MLX5_CAP_GEN(dev->mdev,
max_wqe_sz_rq));
return -EINVAL;
}
qp->rq.wqe_shift = ilog2(wqe_size);
......@@ -330,11 +329,9 @@ static int calc_send_wqe(struct ib_qp_init_attr *attr)
static int calc_sq_size(struct mlx5_ib_dev *dev, struct ib_qp_init_attr *attr,
struct mlx5_ib_qp *qp)
{
struct mlx5_general_caps *gen;
int wqe_size;
int wq_size;
gen = &dev->mdev->caps.gen;
if (!attr->cap.max_send_wr)
return 0;
......@@ -343,9 +340,9 @@ static int calc_sq_size(struct mlx5_ib_dev *dev, struct ib_qp_init_attr *attr,
if (wqe_size < 0)
return wqe_size;
if (wqe_size > gen->max_sq_desc_sz) {
if (wqe_size > MLX5_CAP_GEN(dev->mdev, max_wqe_sz_sq)) {
mlx5_ib_dbg(dev, "wqe_size(%d) > max_sq_desc_sz(%d)\n",
wqe_size, gen->max_sq_desc_sz);
wqe_size, MLX5_CAP_GEN(dev->mdev, max_wqe_sz_sq));
return -EINVAL;
}
......@@ -358,9 +355,10 @@ static int calc_sq_size(struct mlx5_ib_dev *dev, struct ib_qp_init_attr *attr,
wq_size = roundup_pow_of_two(attr->cap.max_send_wr * wqe_size);
qp->sq.wqe_cnt = wq_size / MLX5_SEND_WQE_BB;
if (qp->sq.wqe_cnt > gen->max_wqes) {
if (qp->sq.wqe_cnt > (1 << MLX5_CAP_GEN(dev->mdev, log_max_qp_sz))) {
mlx5_ib_dbg(dev, "wqe count(%d) exceeds limits(%d)\n",
qp->sq.wqe_cnt, gen->max_wqes);
qp->sq.wqe_cnt,
1 << MLX5_CAP_GEN(dev->mdev, log_max_qp_sz));
return -ENOMEM;
}
qp->sq.wqe_shift = ilog2(MLX5_SEND_WQE_BB);
......@@ -375,13 +373,11 @@ static int set_user_buf_size(struct mlx5_ib_dev *dev,
struct mlx5_ib_qp *qp,
struct mlx5_ib_create_qp *ucmd)
{
struct mlx5_general_caps *gen;
int desc_sz = 1 << qp->sq.wqe_shift;
gen = &dev->mdev->caps.gen;
if (desc_sz > gen->max_sq_desc_sz) {
if (desc_sz > MLX5_CAP_GEN(dev->mdev, max_wqe_sz_sq)) {
mlx5_ib_warn(dev, "desc_sz %d, max_sq_desc_sz %d\n",
desc_sz, gen->max_sq_desc_sz);
desc_sz, MLX5_CAP_GEN(dev->mdev, max_wqe_sz_sq));
return -EINVAL;
}
......@@ -393,9 +389,10 @@ static int set_user_buf_size(struct mlx5_ib_dev *dev,
qp->sq.wqe_cnt = ucmd->sq_wqe_count;
if (qp->sq.wqe_cnt > gen->max_wqes) {
if (qp->sq.wqe_cnt > (1 << MLX5_CAP_GEN(dev->mdev, log_max_qp_sz))) {
mlx5_ib_warn(dev, "wqe_cnt %d, max_wqes %d\n",
qp->sq.wqe_cnt, gen->max_wqes);
qp->sq.wqe_cnt,
1 << MLX5_CAP_GEN(dev->mdev, log_max_qp_sz));
return -EINVAL;
}
......@@ -768,7 +765,7 @@ static int create_kernel_qp(struct mlx5_ib_dev *dev,
qp->sq.offset = qp->rq.wqe_cnt << qp->rq.wqe_shift;
qp->buf_size = err + (qp->rq.wqe_cnt << qp->rq.wqe_shift);
err = mlx5_buf_alloc(dev->mdev, qp->buf_size, PAGE_SIZE * 2, &qp->buf);
err = mlx5_buf_alloc(dev->mdev, qp->buf_size, &qp->buf);
if (err) {
mlx5_ib_dbg(dev, "err %d\n", err);
goto err_uuar;
......@@ -866,22 +863,21 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
struct ib_udata *udata, struct mlx5_ib_qp *qp)
{
struct mlx5_ib_resources *devr = &dev->devr;
struct mlx5_core_dev *mdev = dev->mdev;
struct mlx5_ib_create_qp_resp resp;
struct mlx5_create_qp_mbox_in *in;
struct mlx5_general_caps *gen;
struct mlx5_ib_create_qp ucmd;
int inlen = sizeof(*in);
int err;
mlx5_ib_odp_create_qp(qp);
gen = &dev->mdev->caps.gen;
mutex_init(&qp->mutex);
spin_lock_init(&qp->sq.lock);
spin_lock_init(&qp->rq.lock);
if (init_attr->create_flags & IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK) {
if (!(gen->flags & MLX5_DEV_CAP_FLAG_BLOCK_MCAST)) {
if (!MLX5_CAP_GEN(mdev, block_lb_mc)) {
mlx5_ib_dbg(dev, "block multicast loopback isn't supported\n");
return -EINVAL;
} else {
......@@ -914,15 +910,17 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
if (pd) {
if (pd->uobject) {
__u32 max_wqes =
1 << MLX5_CAP_GEN(mdev, log_max_qp_sz);
mlx5_ib_dbg(dev, "requested sq_wqe_count (%d)\n", ucmd.sq_wqe_count);
if (ucmd.rq_wqe_shift != qp->rq.wqe_shift ||
ucmd.rq_wqe_count != qp->rq.wqe_cnt) {
mlx5_ib_dbg(dev, "invalid rq params\n");
return -EINVAL;
}
if (ucmd.sq_wqe_count > gen->max_wqes) {
if (ucmd.sq_wqe_count > max_wqes) {
mlx5_ib_dbg(dev, "requested sq_wqe_count (%d) > max allowed (%d)\n",
ucmd.sq_wqe_count, gen->max_wqes);
ucmd.sq_wqe_count, max_wqes);
return -EINVAL;
}
err = create_user_qp(dev, pd, qp, udata, &in, &resp, &inlen);
......@@ -1226,7 +1224,6 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
struct ib_qp_init_attr *init_attr,
struct ib_udata *udata)
{
struct mlx5_general_caps *gen;
struct mlx5_ib_dev *dev;
struct mlx5_ib_qp *qp;
u16 xrcdn = 0;
......@@ -1244,12 +1241,11 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
}
dev = to_mdev(to_mxrcd(init_attr->xrcd)->ibxrcd.device);
}
gen = &dev->mdev->caps.gen;
switch (init_attr->qp_type) {
case IB_QPT_XRC_TGT:
case IB_QPT_XRC_INI:
if (!(gen->flags & MLX5_DEV_CAP_FLAG_XRC)) {
if (!MLX5_CAP_GEN(dev->mdev, xrc)) {
mlx5_ib_dbg(dev, "XRC not supported\n");
return ERR_PTR(-ENOSYS);
}
......@@ -1356,9 +1352,6 @@ enum {
static int ib_rate_to_mlx5(struct mlx5_ib_dev *dev, u8 rate)
{
struct mlx5_general_caps *gen;
gen = &dev->mdev->caps.gen;
if (rate == IB_RATE_PORT_CURRENT) {
return 0;
} else if (rate < IB_RATE_2_5_GBPS || rate > IB_RATE_300_GBPS) {
......@@ -1366,7 +1359,7 @@ static int ib_rate_to_mlx5(struct mlx5_ib_dev *dev, u8 rate)
} else {
while (rate != IB_RATE_2_5_GBPS &&
!(1 << (rate + MLX5_STAT_RATE_OFFSET) &
gen->stat_rate_support))
MLX5_CAP_GEN(dev->mdev, stat_rate_support)))
--rate;
}
......@@ -1377,10 +1370,8 @@ static int mlx5_set_path(struct mlx5_ib_dev *dev, const struct ib_ah_attr *ah,
struct mlx5_qp_path *path, u8 port, int attr_mask,
u32 path_flags, const struct ib_qp_attr *attr)
{
struct mlx5_general_caps *gen;
int err;
gen = &dev->mdev->caps.gen;
path->fl = (path_flags & MLX5_PATH_FLAG_FL) ? 0x80 : 0;
path->free_ar = (path_flags & MLX5_PATH_FLAG_FREE_AR) ? 0x80 : 0;
......@@ -1391,9 +1382,11 @@ static int mlx5_set_path(struct mlx5_ib_dev *dev, const struct ib_ah_attr *ah,
path->rlid = cpu_to_be16(ah->dlid);
if (ah->ah_flags & IB_AH_GRH) {
if (ah->grh.sgid_index >= gen->port[port - 1].gid_table_len) {
if (ah->grh.sgid_index >=
dev->mdev->port_caps[port - 1].gid_table_len) {
pr_err("sgid_index (%u) too large. max is %d\n",
ah->grh.sgid_index, gen->port[port - 1].gid_table_len);
ah->grh.sgid_index,
dev->mdev->port_caps[port - 1].gid_table_len);
return -EINVAL;
}
path->grh_mlid |= 1 << 7;
......@@ -1570,7 +1563,6 @@ static int __mlx5_ib_modify_qp(struct ib_qp *ibqp,
struct mlx5_ib_qp *qp = to_mqp(ibqp);
struct mlx5_ib_cq *send_cq, *recv_cq;
struct mlx5_qp_context *context;
struct mlx5_general_caps *gen;
struct mlx5_modify_qp_mbox_in *in;
struct mlx5_ib_pd *pd;
enum mlx5_qp_state mlx5_cur, mlx5_new;
......@@ -1579,7 +1571,6 @@ static int __mlx5_ib_modify_qp(struct ib_qp *ibqp,
int mlx5_st;
int err;
gen = &dev->mdev->caps.gen;
in = kzalloc(sizeof(*in), GFP_KERNEL);
if (!in)
return -ENOMEM;
......@@ -1619,7 +1610,8 @@ static int __mlx5_ib_modify_qp(struct ib_qp *ibqp,
err = -EINVAL;
goto out;
}
context->mtu_msgmax = (attr->path_mtu << 5) | gen->log_max_msg;
context->mtu_msgmax = (attr->path_mtu << 5) |
(u8)MLX5_CAP_GEN(dev->mdev, log_max_msg);
}
if (attr_mask & IB_QP_DEST_QPN)
......@@ -1777,11 +1769,9 @@ int mlx5_ib_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
struct mlx5_ib_dev *dev = to_mdev(ibqp->device);
struct mlx5_ib_qp *qp = to_mqp(ibqp);
enum ib_qp_state cur_state, new_state;
struct mlx5_general_caps *gen;
int err = -EINVAL;
int port;
gen = &dev->mdev->caps.gen;
mutex_lock(&qp->mutex);
cur_state = attr_mask & IB_QP_CUR_STATE ? attr->cur_qp_state : qp->state;
......@@ -1793,21 +1783,25 @@ int mlx5_ib_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
goto out;
if ((attr_mask & IB_QP_PORT) &&
(attr->port_num == 0 || attr->port_num > gen->num_ports))
(attr->port_num == 0 ||
attr->port_num > MLX5_CAP_GEN(dev->mdev, num_ports)))
goto out;
if (attr_mask & IB_QP_PKEY_INDEX) {
port = attr_mask & IB_QP_PORT ? attr->port_num : qp->port;
if (attr->pkey_index >= gen->port[port - 1].pkey_table_len)
if (attr->pkey_index >=
dev->mdev->port_caps[port - 1].pkey_table_len)
goto out;
}
if (attr_mask & IB_QP_MAX_QP_RD_ATOMIC &&
attr->max_rd_atomic > (1 << gen->log_max_ra_res_qp))
attr->max_rd_atomic >
(1 << MLX5_CAP_GEN(dev->mdev, log_max_ra_res_qp)))
goto out;
if (attr_mask & IB_QP_MAX_DEST_RD_ATOMIC &&
attr->max_dest_rd_atomic > (1 << gen->log_max_ra_req_qp))
attr->max_dest_rd_atomic >
(1 << MLX5_CAP_GEN(dev->mdev, log_max_ra_req_qp)))
goto out;
if (cur_state == new_state && cur_state == IB_QPS_RESET) {
......@@ -3009,7 +3003,7 @@ static void to_ib_ah_attr(struct mlx5_ib_dev *ibdev, struct ib_ah_attr *ib_ah_at
ib_ah_attr->port_num = path->port;
if (ib_ah_attr->port_num == 0 ||
ib_ah_attr->port_num > dev->caps.gen.num_ports)
ib_ah_attr->port_num > MLX5_CAP_GEN(dev, num_ports))
return;
ib_ah_attr->sl = path->sl & 0xf;
......@@ -3135,12 +3129,10 @@ struct ib_xrcd *mlx5_ib_alloc_xrcd(struct ib_device *ibdev,
struct ib_udata *udata)
{
struct mlx5_ib_dev *dev = to_mdev(ibdev);
struct mlx5_general_caps *gen;
struct mlx5_ib_xrcd *xrcd;
int err;
gen = &dev->mdev->caps.gen;
if (!(gen->flags & MLX5_DEV_CAP_FLAG_XRC))
if (!MLX5_CAP_GEN(dev->mdev, xrc))
return ERR_PTR(-ENOSYS);
xrcd = kmalloc(sizeof(*xrcd), GFP_KERNEL);
......
......@@ -165,7 +165,7 @@ static int create_srq_kernel(struct mlx5_ib_dev *dev, struct mlx5_ib_srq *srq,
return err;
}
if (mlx5_buf_alloc(dev->mdev, buf_size, PAGE_SIZE * 2, &srq->buf)) {
if (mlx5_buf_alloc(dev->mdev, buf_size, &srq->buf)) {
mlx5_ib_dbg(dev, "buf alloc failed\n");
err = -ENOMEM;
goto err_db;
......@@ -236,7 +236,6 @@ struct ib_srq *mlx5_ib_create_srq(struct ib_pd *pd,
struct ib_udata *udata)
{
struct mlx5_ib_dev *dev = to_mdev(pd->device);
struct mlx5_general_caps *gen;
struct mlx5_ib_srq *srq;
int desc_size;
int buf_size;
......@@ -245,13 +244,13 @@ struct ib_srq *mlx5_ib_create_srq(struct ib_pd *pd,
int uninitialized_var(inlen);
int is_xrc;
u32 flgs, xrcdn;
__u32 max_srq_wqes = 1 << MLX5_CAP_GEN(dev->mdev, log_max_srq_sz);
gen = &dev->mdev->caps.gen;
/* Sanity check SRQ size before proceeding */
if (init_attr->attr.max_wr >= gen->max_srq_wqes) {
if (init_attr->attr.max_wr >= max_srq_wqes) {
mlx5_ib_dbg(dev, "max_wr %d, cap %d\n",
init_attr->attr.max_wr,
gen->max_srq_wqes);
max_srq_wqes);
return ERR_PTR(-EINVAL);
}
......
......@@ -3,6 +3,18 @@
#
config MLX5_CORE
tristate
tristate "Mellanox Technologies ConnectX-4 and Connect-IB core driver"
depends on PCI
default n
---help---
Core driver for low level functionality of the ConnectX-4 and
Connect-IB cards by Mellanox Technologies.
config MLX5_CORE_EN
bool "Mellanox Technologies ConnectX-4 Ethernet support"
depends on MLX5_INFINIBAND=n && NETDEVICES && ETHERNET && PCI && MLX5_CORE
default n
---help---
Ethernet support in Mellanox Technologies ConnectX-4 NIC.
Ethernet and Infiniband support in ConnectX-4 are currently mutually
exclusive.
......@@ -3,3 +3,6 @@ obj-$(CONFIG_MLX5_CORE) += mlx5_core.o
mlx5_core-y := main.o cmd.o debugfs.o fw.o eq.o uar.o pagealloc.o \
health.o mcg.o cq.o srq.o alloc.o qp.o port.o mr.o pd.o \
mad.o
mlx5_core-$(CONFIG_MLX5_CORE_EN) += wq.o flow_table.o vport.o transobj.o \
en_main.o en_flow_table.o en_ethtool.o en_tx.o en_rx.o \
en_txrx.o
......@@ -42,95 +42,36 @@
#include "mlx5_core.h"
/* Handling for queue buffers -- we allocate a bunch of memory and
* register it in a memory region at HCA virtual address 0. If the
* requested size is > max_direct, we split the allocation into
* multiple pages, so we don't require too much contiguous memory.
* register it in a memory region at HCA virtual address 0.
*/
int mlx5_buf_alloc(struct mlx5_core_dev *dev, int size, int max_direct,
struct mlx5_buf *buf)
int mlx5_buf_alloc(struct mlx5_core_dev *dev, int size, struct mlx5_buf *buf)
{
dma_addr_t t;
buf->size = size;
if (size <= max_direct) {
buf->nbufs = 1;
buf->npages = 1;
buf->page_shift = (u8)get_order(size) + PAGE_SHIFT;
buf->direct.buf = dma_zalloc_coherent(&dev->pdev->dev,
size, &t, GFP_KERNEL);
if (!buf->direct.buf)
return -ENOMEM;
buf->direct.map = t;
while (t & ((1 << buf->page_shift) - 1)) {
--buf->page_shift;
buf->npages *= 2;
}
} else {
int i;
buf->direct.buf = NULL;
buf->nbufs = (size + PAGE_SIZE - 1) / PAGE_SIZE;
buf->npages = buf->nbufs;
buf->page_shift = PAGE_SHIFT;
buf->page_list = kcalloc(buf->nbufs, sizeof(*buf->page_list),
GFP_KERNEL);
if (!buf->page_list)
return -ENOMEM;
for (i = 0; i < buf->nbufs; i++) {
buf->page_list[i].buf =
dma_zalloc_coherent(&dev->pdev->dev, PAGE_SIZE,
&t, GFP_KERNEL);
if (!buf->page_list[i].buf)
goto err_free;
buf->page_list[i].map = t;
}
if (BITS_PER_LONG == 64) {
struct page **pages;
pages = kmalloc(sizeof(*pages) * buf->nbufs, GFP_KERNEL);
if (!pages)
goto err_free;
for (i = 0; i < buf->nbufs; i++)
pages[i] = virt_to_page(buf->page_list[i].buf);
buf->direct.buf = vmap(pages, buf->nbufs, VM_MAP, PAGE_KERNEL);
kfree(pages);
if (!buf->direct.buf)
goto err_free;
}
}
buf->npages = 1;
buf->page_shift = (u8)get_order(size) + PAGE_SHIFT;
buf->direct.buf = dma_zalloc_coherent(&dev->pdev->dev,
size, &t, GFP_KERNEL);
if (!buf->direct.buf)
return -ENOMEM;
return 0;
buf->direct.map = t;
err_free:
mlx5_buf_free(dev, buf);
while (t & ((1 << buf->page_shift) - 1)) {
--buf->page_shift;
buf->npages *= 2;
}
return -ENOMEM;
return 0;
}
EXPORT_SYMBOL_GPL(mlx5_buf_alloc);
void mlx5_buf_free(struct mlx5_core_dev *dev, struct mlx5_buf *buf)
{
int i;
if (buf->nbufs == 1)
dma_free_coherent(&dev->pdev->dev, buf->size, buf->direct.buf,
buf->direct.map);
else {
if (BITS_PER_LONG == 64)
vunmap(buf->direct.buf);
for (i = 0; i < buf->nbufs; i++)
if (buf->page_list[i].buf)
dma_free_coherent(&dev->pdev->dev, PAGE_SIZE,
buf->page_list[i].buf,
buf->page_list[i].map);
kfree(buf->page_list);
}
dma_free_coherent(&dev->pdev->dev, buf->size, buf->direct.buf,
buf->direct.map);
}
EXPORT_SYMBOL_GPL(mlx5_buf_free);
......@@ -230,10 +171,7 @@ void mlx5_fill_page_array(struct mlx5_buf *buf, __be64 *pas)
int i;
for (i = 0; i < buf->npages; i++) {
if (buf->nbufs == 1)
addr = buf->direct.map + (i << buf->page_shift);
else
addr = buf->page_list[i].map;
addr = buf->direct.map + (i << buf->page_shift);
pas[i] = cpu_to_be64(addr);
}
......
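
The simplified allocator above always returns a single coherent DMA
buffer: callers drop the old max_direct argument and the page-list/vmap
fallback is gone (see also the "Do not use vmap() on coherent memory"
patch in the changelog). A minimal usage sketch mirroring the callers
updated in this series:

    err = mlx5_buf_alloc(dev->mdev, nent * cqe_size, &buf->buf);
    if (err)
            return err;
    /* ... use buf->direct.buf, or mlx5_fill_page_array() for the HW ... */
    mlx5_buf_free(dev->mdev, &buf->buf);
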
......@@ -75,25 +75,6 @@ enum {
MLX5_CMD_DELIVERY_STAT_CMD_DESCR_ERR = 0x10,
};
enum {
MLX5_CMD_STAT_OK = 0x0,
MLX5_CMD_STAT_INT_ERR = 0x1,
MLX5_CMD_STAT_BAD_OP_ERR = 0x2,
MLX5_CMD_STAT_BAD_PARAM_ERR = 0x3,
MLX5_CMD_STAT_BAD_SYS_STATE_ERR = 0x4,
MLX5_CMD_STAT_BAD_RES_ERR = 0x5,
MLX5_CMD_STAT_RES_BUSY = 0x6,
MLX5_CMD_STAT_LIM_ERR = 0x8,
MLX5_CMD_STAT_BAD_RES_STATE_ERR = 0x9,
MLX5_CMD_STAT_IX_ERR = 0xa,
MLX5_CMD_STAT_NO_RES_ERR = 0xf,
MLX5_CMD_STAT_BAD_INP_LEN_ERR = 0x50,
MLX5_CMD_STAT_BAD_OUTP_LEN_ERR = 0x51,
MLX5_CMD_STAT_BAD_QP_STATE_ERR = 0x10,
MLX5_CMD_STAT_BAD_PKT_ERR = 0x30,
MLX5_CMD_STAT_BAD_SIZE_OUTS_CQES_ERR = 0x40,
};
static struct mlx5_cmd_work_ent *alloc_cmd(struct mlx5_cmd *cmd,
struct mlx5_cmd_msg *in,
struct mlx5_cmd_msg *out,
......@@ -390,8 +371,17 @@ const char *mlx5_command_str(int command)
case MLX5_CMD_OP_ARM_RQ:
return "ARM_RQ";
case MLX5_CMD_OP_RESIZE_SRQ:
return "RESIZE_SRQ";
case MLX5_CMD_OP_CREATE_XRC_SRQ:
return "CREATE_XRC_SRQ";
case MLX5_CMD_OP_DESTROY_XRC_SRQ:
return "DESTROY_XRC_SRQ";
case MLX5_CMD_OP_QUERY_XRC_SRQ:
return "QUERY_XRC_SRQ";
case MLX5_CMD_OP_ARM_XRC_SRQ:
return "ARM_XRC_SRQ";
case MLX5_CMD_OP_ALLOC_PD:
return "ALLOC_PD";
......@@ -408,8 +398,8 @@ const char *mlx5_command_str(int command)
case MLX5_CMD_OP_ATTACH_TO_MCG:
return "ATTACH_TO_MCG";
case MLX5_CMD_OP_DETACH_FROM_MCG:
return "DETACH_FROM_MCG";
case MLX5_CMD_OP_DETTACH_FROM_MCG:
return "DETTACH_FROM_MCG";
case MLX5_CMD_OP_ALLOC_XRCD:
return "ALLOC_XRCD";
......
......@@ -219,6 +219,24 @@ int mlx5_core_modify_cq(struct mlx5_core_dev *dev, struct mlx5_core_cq *cq,
}
EXPORT_SYMBOL(mlx5_core_modify_cq);
int mlx5_core_modify_cq_moderation(struct mlx5_core_dev *dev,
struct mlx5_core_cq *cq,
u16 cq_period,
u16 cq_max_count)
{
struct mlx5_modify_cq_mbox_in in;
memset(&in, 0, sizeof(in));
in.cqn = cpu_to_be32(cq->cqn);
in.ctx.cq_period = cpu_to_be16(cq_period);
in.ctx.cq_max_count = cpu_to_be16(cq_max_count);
in.field_select = cpu_to_be32(MLX5_CQ_MODIFY_PERIOD |
MLX5_CQ_MODIFY_COUNT);
return mlx5_core_modify_cq(dev, cq, &in, sizeof(in));
}
int mlx5_init_cq_table(struct mlx5_core_dev *dev)
{
struct mlx5_cq_table *table = &dev->priv.cq_table;
......
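
A usage sketch for the new helper (it mirrors the ethtool coalescing path
added later in this series, which retunes a live CQ's interrupt moderation
without recreating it):

    /* illustrative: retune interrupt moderation on one channel's RX CQ */
    mlx5_core_modify_cq_moderation(mdev, &c->rq.cq.mcq,
                                   coal->rx_coalesce_usecs,
                                   coal->rx_max_coalesced_frames);
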
/*
* Copyright (c) 2015, Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#include <linux/if_vlan.h>
#include <linux/etherdevice.h>
#include <linux/mlx5/driver.h>
#include <linux/mlx5/qp.h>
#include <linux/mlx5/cq.h>
#include "vport.h"
#include "wq.h"
#include "transobj.h"
#include "mlx5_core.h"
#define MLX5E_MAX_NUM_TC 8
#define MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE 0x7
#define MLX5E_PARAMS_DEFAULT_LOG_SQ_SIZE 0xa
#define MLX5E_PARAMS_MAXIMUM_LOG_SQ_SIZE 0xd
#define MLX5E_PARAMS_MINIMUM_LOG_RQ_SIZE 0x7
#define MLX5E_PARAMS_DEFAULT_LOG_RQ_SIZE 0xa
#define MLX5E_PARAMS_MAXIMUM_LOG_RQ_SIZE 0xd
#define MLX5E_PARAMS_DEFAULT_LRO_WQE_SZ (16 * 1024)
#define MLX5E_PARAMS_DEFAULT_RX_CQ_MODERATION_USEC 0x10
#define MLX5E_PARAMS_DEFAULT_RX_CQ_MODERATION_PKTS 0x20
#define MLX5E_PARAMS_DEFAULT_TX_CQ_MODERATION_USEC 0x10
#define MLX5E_PARAMS_DEFAULT_TX_CQ_MODERATION_PKTS 0x20
#define MLX5E_PARAMS_DEFAULT_MIN_RX_WQES 0x80
#define MLX5E_PARAMS_DEFAULT_RX_HASH_LOG_TBL_SZ 0x7
#define MLX5E_PARAMS_MIN_MTU 46
#define MLX5E_TX_CQ_POLL_BUDGET 128
#define MLX5E_UPDATE_STATS_INTERVAL 200 /* msecs */
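/* Illustrative note (not part of the original file): with the defaults
 * above, SQs and RQs have 1 << 0xa = 1024 entries each, and RX/TX CQ
 * interrupt moderation starts at 0x10 = 16 usec / 0x20 = 32 packets.
 */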
static const char vport_strings[][ETH_GSTRING_LEN] = {
/* vport statistics */
"rx_packets",
"rx_bytes",
"tx_packets",
"tx_bytes",
"rx_error_packets",
"rx_error_bytes",
"tx_error_packets",
"tx_error_bytes",
"rx_unicast_packets",
"rx_unicast_bytes",
"tx_unicast_packets",
"tx_unicast_bytes",
"rx_multicast_packets",
"rx_multicast_bytes",
"tx_multicast_packets",
"tx_multicast_bytes",
"rx_broadcast_packets",
"rx_broadcast_bytes",
"tx_broadcast_packets",
"tx_broadcast_bytes",
/* SW counters */
"tso_packets",
"tso_bytes",
"lro_packets",
"lro_bytes",
"rx_csum_good",
"rx_csum_none",
"tx_csum_offload",
"tx_queue_stopped",
"tx_queue_wake",
"tx_queue_dropped",
"rx_wqe_err",
};
struct mlx5e_vport_stats {
/* HW counters */
u64 rx_packets;
u64 rx_bytes;
u64 tx_packets;
u64 tx_bytes;
u64 rx_error_packets;
u64 rx_error_bytes;
u64 tx_error_packets;
u64 tx_error_bytes;
u64 rx_unicast_packets;
u64 rx_unicast_bytes;
u64 tx_unicast_packets;
u64 tx_unicast_bytes;
u64 rx_multicast_packets;
u64 rx_multicast_bytes;
u64 tx_multicast_packets;
u64 tx_multicast_bytes;
u64 rx_broadcast_packets;
u64 rx_broadcast_bytes;
u64 tx_broadcast_packets;
u64 tx_broadcast_bytes;
/* SW counters */
u64 tso_packets;
u64 tso_bytes;
u64 lro_packets;
u64 lro_bytes;
u64 rx_csum_good;
u64 rx_csum_none;
u64 tx_csum_offload;
u64 tx_queue_stopped;
u64 tx_queue_wake;
u64 tx_queue_dropped;
u64 rx_wqe_err;
#define NUM_VPORT_COUNTERS 31
};
static const char rq_stats_strings[][ETH_GSTRING_LEN] = {
"packets",
"csum_none",
"lro_packets",
"lro_bytes",
"wqe_err"
};
struct mlx5e_rq_stats {
u64 packets;
u64 csum_none;
u64 lro_packets;
u64 lro_bytes;
u64 wqe_err;
#define NUM_RQ_STATS 5
};
static const char sq_stats_strings[][ETH_GSTRING_LEN] = {
"packets",
"tso_packets",
"tso_bytes",
"csum_offload_none",
"stopped",
"wake",
"dropped",
"nop"
};
struct mlx5e_sq_stats {
u64 packets;
u64 tso_packets;
u64 tso_bytes;
u64 csum_offload_none;
u64 stopped;
u64 wake;
u64 dropped;
u64 nop;
#define NUM_SQ_STATS 8
};
struct mlx5e_stats {
struct mlx5e_vport_stats vport;
};
struct mlx5e_params {
u8 log_sq_size;
u8 log_rq_size;
u16 num_channels;
u8 default_vlan_prio;
u8 num_tc;
u16 rx_cq_moderation_usec;
u16 rx_cq_moderation_pkts;
u16 tx_cq_moderation_usec;
u16 tx_cq_moderation_pkts;
u16 min_rx_wqes;
u16 rx_hash_log_tbl_sz;
bool lro_en;
u32 lro_wqe_sz;
};
enum {
MLX5E_RQ_STATE_POST_WQES_ENABLE,
};
enum cq_flags {
MLX5E_CQ_HAS_CQES = 1,
};
struct mlx5e_cq {
/* data path - accessed per cqe */
struct mlx5_cqwq wq;
void *sqrq;
unsigned long flags;
/* data path - accessed per napi poll */
struct napi_struct *napi;
struct mlx5_core_cq mcq;
struct mlx5e_channel *channel;
/* control */
struct mlx5_wq_ctrl wq_ctrl;
} ____cacheline_aligned_in_smp;
struct mlx5e_rq {
/* data path */
struct mlx5_wq_ll wq;
u32 wqe_sz;
struct sk_buff **skb;
struct device *pdev;
struct net_device *netdev;
struct mlx5e_rq_stats stats;
struct mlx5e_cq cq;
unsigned long state;
int ix;
/* control */
struct mlx5_wq_ctrl wq_ctrl;
u32 rqn;
struct mlx5e_channel *channel;
} ____cacheline_aligned_in_smp;
struct mlx5e_tx_skb_cb {
u32 num_bytes;
u8 num_wqebbs;
u8 num_dma;
};
#define MLX5E_TX_SKB_CB(__skb) ((struct mlx5e_tx_skb_cb *)__skb->cb)
struct mlx5e_sq_dma {
dma_addr_t addr;
u32 size;
};
enum {
MLX5E_SQ_STATE_WAKE_TXQ_ENABLE,
};
struct mlx5e_sq {
/* data path */
/* dirtied @completion */
u16 cc;
u32 dma_fifo_cc;
/* dirtied @xmit */
u16 pc ____cacheline_aligned_in_smp;
u32 dma_fifo_pc;
u32 bf_offset;
struct mlx5e_sq_stats stats;
struct mlx5e_cq cq;
/* pointers to per packet info: write@xmit, read@completion */
struct sk_buff **skb;
struct mlx5e_sq_dma *dma_fifo;
/* read only */
struct mlx5_wq_cyc wq;
u32 dma_fifo_mask;
void __iomem *uar_map;
struct netdev_queue *txq;
u32 sqn;
u32 bf_buf_size;
struct device *pdev;
__be32 mkey_be;
unsigned long state;
/* control path */
struct mlx5_wq_ctrl wq_ctrl;
struct mlx5_uar uar;
struct mlx5e_channel *channel;
int tc;
} ____cacheline_aligned_in_smp;
static inline bool mlx5e_sq_has_room_for(struct mlx5e_sq *sq, u16 n)
{
return (((sq->wq.sz_m1 & (sq->cc - sq->pc)) >= n) ||
(sq->cc == sq->pc));
}
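/*
 * Illustrative worked example (not part of the original file) for the
 * occupancy check above: with a 256-entry ring, sz_m1 = 255. If the
 * producer counter pc = 300 and the consumer counter cc = 60, then 240
 * WQEBBs are in flight and sz_m1 & (cc - pc) = 255 & -240 = 16, i.e.
 * 16 slots are free. The extra "cc == pc" term covers the empty ring,
 * where the masked expression evaluates to 0.
 */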
enum channel_flags {
MLX5E_CHANNEL_NAPI_SCHED = 1,
};
struct mlx5e_channel {
/* data path */
struct mlx5e_rq rq;
struct mlx5e_sq sq[MLX5E_MAX_NUM_TC];
struct napi_struct napi;
struct device *pdev;
struct net_device *netdev;
__be32 mkey_be;
u8 num_tc;
unsigned long flags;
/* control */
struct mlx5e_priv *priv;
int ix;
int cpu;
};
enum mlx5e_traffic_types {
MLX5E_TT_IPV4_TCP = 0,
MLX5E_TT_IPV6_TCP = 1,
MLX5E_TT_IPV4_UDP = 2,
MLX5E_TT_IPV6_UDP = 3,
MLX5E_TT_IPV4 = 4,
MLX5E_TT_IPV6 = 5,
MLX5E_TT_ANY = 6,
MLX5E_NUM_TT = 7,
};
enum {
MLX5E_RQT_SPREADING = 0,
MLX5E_RQT_DEFAULT_RQ = 1,
MLX5E_NUM_RQT = 2,
};
struct mlx5e_eth_addr_info {
u8 addr[ETH_ALEN + 2];
u32 tt_vec;
u32 ft_ix[MLX5E_NUM_TT]; /* flow table index per traffic type */
};
#define MLX5E_ETH_ADDR_HASH_SIZE (1 << BITS_PER_BYTE)
struct mlx5e_eth_addr_db {
struct hlist_head netdev_uc[MLX5E_ETH_ADDR_HASH_SIZE];
struct hlist_head netdev_mc[MLX5E_ETH_ADDR_HASH_SIZE];
struct mlx5e_eth_addr_info broadcast;
struct mlx5e_eth_addr_info allmulti;
struct mlx5e_eth_addr_info promisc;
bool broadcast_enabled;
bool allmulti_enabled;
bool promisc_enabled;
};
enum {
MLX5E_STATE_ASYNC_EVENTS_ENABLE,
MLX5E_STATE_OPENED,
};
struct mlx5e_vlan_db {
unsigned long active_vlans[BITS_TO_LONGS(VLAN_N_VID)];
u32 active_vlans_ft_ix[VLAN_N_VID];
u32 untagged_rule_ft_ix;
u32 any_vlan_rule_ft_ix;
bool filter_disabled;
};
struct mlx5e_flow_table {
void *vlan;
void *main;
};
struct mlx5e_priv {
/* priv data path fields - start */
int order_base_2_num_channels;
int queue_mapping_channel_mask;
int num_tc;
int default_vlan_prio;
/* priv data path fields - end */
unsigned long state;
struct mutex state_lock; /* Protects Interface state */
struct mlx5_uar cq_uar;
u32 pdn;
struct mlx5_core_mr mr;
struct mlx5e_channel **channel;
u32 tisn[MLX5E_MAX_NUM_TC];
u32 rqtn;
u32 tirn[MLX5E_NUM_TT];
struct mlx5e_flow_table ft;
struct mlx5e_eth_addr_db eth_addr;
struct mlx5e_vlan_db vlan;
struct mlx5e_params params;
spinlock_t async_events_spinlock; /* sync hw events */
struct work_struct update_carrier_work;
struct work_struct set_rx_mode_work;
struct delayed_work update_stats_work;
struct mlx5_core_dev *mdev;
struct net_device *netdev;
struct mlx5e_stats stats;
};
#define MLX5E_NET_IP_ALIGN 2
struct mlx5e_tx_wqe {
struct mlx5_wqe_ctrl_seg ctrl;
struct mlx5_wqe_eth_seg eth;
};
struct mlx5e_rx_wqe {
struct mlx5_wqe_srq_next_seg next;
struct mlx5_wqe_data_seg data;
};
enum mlx5e_link_mode {
MLX5E_1000BASE_CX_SGMII = 0,
MLX5E_1000BASE_KX = 1,
MLX5E_10GBASE_CX4 = 2,
MLX5E_10GBASE_KX4 = 3,
MLX5E_10GBASE_KR = 4,
MLX5E_20GBASE_KR2 = 5,
MLX5E_40GBASE_CR4 = 6,
MLX5E_40GBASE_KR4 = 7,
MLX5E_56GBASE_R4 = 8,
MLX5E_10GBASE_CR = 12,
MLX5E_10GBASE_SR = 13,
MLX5E_10GBASE_ER = 14,
MLX5E_40GBASE_SR4 = 15,
MLX5E_40GBASE_LR4 = 16,
MLX5E_100GBASE_CR4 = 20,
MLX5E_100GBASE_SR4 = 21,
MLX5E_100GBASE_KR4 = 22,
MLX5E_100GBASE_LR4 = 23,
MLX5E_100BASE_TX = 24,
MLX5E_100BASE_T = 25,
MLX5E_10GBASE_T = 26,
MLX5E_25GBASE_CR = 27,
MLX5E_25GBASE_KR = 28,
MLX5E_25GBASE_SR = 29,
MLX5E_50GBASE_CR2 = 30,
MLX5E_50GBASE_KR2 = 31,
MLX5E_LINK_MODES_NUMBER,
};
#define MLX5E_PROT_MASK(link_mode) (1 << link_mode)
u16 mlx5e_select_queue(struct net_device *dev, struct sk_buff *skb,
void *accel_priv, select_queue_fallback_t fallback);
netdev_tx_t mlx5e_xmit(struct sk_buff *skb, struct net_device *dev);
netdev_tx_t mlx5e_xmit_multi_tc(struct sk_buff *skb, struct net_device *dev);
void mlx5e_completion_event(struct mlx5_core_cq *mcq);
void mlx5e_cq_error_event(struct mlx5_core_cq *mcq, enum mlx5_event event);
int mlx5e_napi_poll(struct napi_struct *napi, int budget);
bool mlx5e_poll_tx_cq(struct mlx5e_cq *cq);
bool mlx5e_poll_rx_cq(struct mlx5e_cq *cq, int budget);
bool mlx5e_post_rx_wqes(struct mlx5e_rq *rq);
struct mlx5_cqe64 *mlx5e_get_cqe(struct mlx5e_cq *cq);
void mlx5e_update_stats(struct mlx5e_priv *priv);
int mlx5e_open_flow_table(struct mlx5e_priv *priv);
void mlx5e_close_flow_table(struct mlx5e_priv *priv);
void mlx5e_init_eth_addr(struct mlx5e_priv *priv);
void mlx5e_set_rx_mode_core(struct mlx5e_priv *priv);
void mlx5e_set_rx_mode_work(struct work_struct *work);
int mlx5e_vlan_rx_add_vid(struct net_device *dev, __always_unused __be16 proto,
u16 vid);
int mlx5e_vlan_rx_kill_vid(struct net_device *dev, __always_unused __be16 proto,
u16 vid);
void mlx5e_enable_vlan_filter(struct mlx5e_priv *priv);
void mlx5e_disable_vlan_filter(struct mlx5e_priv *priv);
int mlx5e_add_all_vlan_rules(struct mlx5e_priv *priv);
void mlx5e_del_all_vlan_rules(struct mlx5e_priv *priv);
int mlx5e_open_locked(struct net_device *netdev);
int mlx5e_close_locked(struct net_device *netdev);
int mlx5e_update_priv_params(struct mlx5e_priv *priv,
struct mlx5e_params *new_params);
static inline void mlx5e_tx_notify_hw(struct mlx5e_sq *sq,
struct mlx5e_tx_wqe *wqe)
{
/* ensure wqe is visible to device before updating doorbell record */
dma_wmb();
*sq->wq.db = cpu_to_be32(sq->pc);
/* ensure doorbell record is visible to device before ringing the
* doorbell
*/
wmb();
mlx5_write64((__be32 *)&wqe->ctrl,
sq->uar_map + MLX5_BF_OFFSET + sq->bf_offset,
NULL);
sq->bf_offset ^= sq->bf_buf_size;
}
static inline void mlx5e_cq_arm(struct mlx5e_cq *cq)
{
struct mlx5_core_cq *mcq;
mcq = &cq->mcq;
mlx5_cq_arm(mcq, MLX5_CQ_DB_REQ_NOT, mcq->uar->map, NULL, cq->wq.cc);
}
extern const struct ethtool_ops mlx5e_ethtool_ops;
/*
* Copyright (c) 2015, Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#include "en.h"
static void mlx5e_get_drvinfo(struct net_device *dev,
struct ethtool_drvinfo *drvinfo)
{
struct mlx5e_priv *priv = netdev_priv(dev);
struct mlx5_core_dev *mdev = priv->mdev;
strlcpy(drvinfo->driver, DRIVER_NAME, sizeof(drvinfo->driver));
strlcpy(drvinfo->version, DRIVER_VERSION " (" DRIVER_RELDATE ")",
sizeof(drvinfo->version));
snprintf(drvinfo->fw_version, sizeof(drvinfo->fw_version),
"%d.%d.%d",
fw_rev_maj(mdev), fw_rev_min(mdev), fw_rev_sub(mdev));
strlcpy(drvinfo->bus_info, pci_name(mdev->pdev),
sizeof(drvinfo->bus_info));
}
static const struct {
u32 supported;
u32 advertised;
u32 speed;
} ptys2ethtool_table[MLX5E_LINK_MODES_NUMBER] = {
[MLX5E_1000BASE_CX_SGMII] = {
.supported = SUPPORTED_1000baseKX_Full,
.advertised = ADVERTISED_1000baseKX_Full,
.speed = 1000,
},
[MLX5E_1000BASE_KX] = {
.supported = SUPPORTED_1000baseKX_Full,
.advertised = ADVERTISED_1000baseKX_Full,
.speed = 1000,
},
[MLX5E_10GBASE_CX4] = {
.supported = SUPPORTED_10000baseKX4_Full,
.advertised = ADVERTISED_10000baseKX4_Full,
.speed = 10000,
},
[MLX5E_10GBASE_KX4] = {
.supported = SUPPORTED_10000baseKX4_Full,
.advertised = ADVERTISED_10000baseKX4_Full,
.speed = 10000,
},
[MLX5E_10GBASE_KR] = {
.supported = SUPPORTED_10000baseKR_Full,
.advertised = ADVERTISED_10000baseKR_Full,
.speed = 10000,
},
[MLX5E_20GBASE_KR2] = {
.supported = SUPPORTED_20000baseKR2_Full,
.advertised = ADVERTISED_20000baseKR2_Full,
.speed = 20000,
},
[MLX5E_40GBASE_CR4] = {
.supported = SUPPORTED_40000baseCR4_Full,
.advertised = ADVERTISED_40000baseCR4_Full,
.speed = 40000,
},
[MLX5E_40GBASE_KR4] = {
.supported = SUPPORTED_40000baseKR4_Full,
.advertised = ADVERTISED_40000baseKR4_Full,
.speed = 40000,
},
[MLX5E_56GBASE_R4] = {
.supported = SUPPORTED_56000baseKR4_Full,
.advertised = ADVERTISED_56000baseKR4_Full,
.speed = 56000,
},
[MLX5E_10GBASE_CR] = {
.supported = SUPPORTED_10000baseKR_Full,
.advertised = ADVERTISED_10000baseKR_Full,
.speed = 10000,
},
[MLX5E_10GBASE_SR] = {
.supported = SUPPORTED_10000baseKR_Full,
.advertised = ADVERTISED_10000baseKR_Full,
.speed = 10000,
},
[MLX5E_10GBASE_ER] = {
.supported = SUPPORTED_10000baseKR_Full,
.advertised = ADVERTISED_10000baseKR_Full,
.speed = 10000,
},
[MLX5E_40GBASE_SR4] = {
.supported = SUPPORTED_40000baseSR4_Full,
.advertised = ADVERTISED_40000baseSR4_Full,
.speed = 40000,
},
[MLX5E_40GBASE_LR4] = {
.supported = SUPPORTED_40000baseLR4_Full,
.advertised = ADVERTISED_40000baseLR4_Full,
.speed = 40000,
},
[MLX5E_100GBASE_CR4] = {
.speed = 100000,
},
[MLX5E_100GBASE_SR4] = {
.speed = 100000,
},
[MLX5E_100GBASE_KR4] = {
.speed = 100000,
},
[MLX5E_100GBASE_LR4] = {
.speed = 100000,
},
[MLX5E_100BASE_TX] = {
.speed = 100,
},
[MLX5E_100BASE_T] = {
.supported = SUPPORTED_100baseT_Full,
.advertised = ADVERTISED_100baseT_Full,
.speed = 100,
},
[MLX5E_10GBASE_T] = {
.supported = SUPPORTED_10000baseT_Full,
.advertised = ADVERTISED_10000baseT_Full,
.speed = 1000,
},
[MLX5E_25GBASE_CR] = {
.speed = 25000,
},
[MLX5E_25GBASE_KR] = {
.speed = 25000,
},
[MLX5E_25GBASE_SR] = {
.speed = 25000,
},
[MLX5E_50GBASE_CR2] = {
.speed = 50000,
},
[MLX5E_50GBASE_KR2] = {
.speed = 50000,
},
};
static int mlx5e_get_sset_count(struct net_device *dev, int sset)
{
struct mlx5e_priv *priv = netdev_priv(dev);
switch (sset) {
case ETH_SS_STATS:
return NUM_VPORT_COUNTERS +
priv->params.num_channels * NUM_RQ_STATS +
priv->params.num_channels * priv->num_tc *
NUM_SQ_STATS;
/* fallthrough */
default:
return -EOPNOTSUPP;
}
}
static void mlx5e_get_strings(struct net_device *dev,
uint32_t stringset, uint8_t *data)
{
int i, j, tc, idx = 0;
struct mlx5e_priv *priv = netdev_priv(dev);
switch (stringset) {
case ETH_SS_PRIV_FLAGS:
break;
case ETH_SS_TEST:
break;
case ETH_SS_STATS:
/* VPORT counters */
for (i = 0; i < NUM_VPORT_COUNTERS; i++)
strcpy(data + (idx++) * ETH_GSTRING_LEN,
vport_strings[i]);
/* per channel counters */
for (i = 0; i < priv->params.num_channels; i++)
for (j = 0; j < NUM_RQ_STATS; j++)
sprintf(data + (idx++) * ETH_GSTRING_LEN,
"rx%d_%s", i, rq_stats_strings[j]);
for (i = 0; i < priv->params.num_channels; i++)
for (tc = 0; tc < priv->num_tc; tc++)
for (j = 0; j < NUM_SQ_STATS; j++)
sprintf(data +
(idx++) * ETH_GSTRING_LEN,
"tx%d_%d_%s", i, tc,
sq_stats_strings[j]);
break;
}
}
static void mlx5e_get_ethtool_stats(struct net_device *dev,
struct ethtool_stats *stats, u64 *data)
{
struct mlx5e_priv *priv = netdev_priv(dev);
int i, j, tc, idx = 0;
if (!data)
return;
mutex_lock(&priv->state_lock);
if (test_bit(MLX5E_STATE_OPENED, &priv->state))
mlx5e_update_stats(priv);
mutex_unlock(&priv->state_lock);
for (i = 0; i < NUM_VPORT_COUNTERS; i++)
data[idx++] = ((u64 *)&priv->stats.vport)[i];
/* per channel counters */
for (i = 0; i < priv->params.num_channels; i++)
for (j = 0; j < NUM_RQ_STATS; j++)
data[idx++] = !test_bit(MLX5E_STATE_OPENED,
&priv->state) ? 0 :
((u64 *)&priv->channel[i]->rq.stats)[j];
for (i = 0; i < priv->params.num_channels; i++)
for (tc = 0; tc < priv->num_tc; tc++)
for (j = 0; j < NUM_SQ_STATS; j++)
data[idx++] = !test_bit(MLX5E_STATE_OPENED,
&priv->state) ? 0 :
((u64 *)&priv->channel[i]->sq[tc].stats)[j];
}
static void mlx5e_get_ringparam(struct net_device *dev,
struct ethtool_ringparam *param)
{
struct mlx5e_priv *priv = netdev_priv(dev);
param->rx_max_pending = 1 << MLX5E_PARAMS_MAXIMUM_LOG_RQ_SIZE;
param->tx_max_pending = 1 << MLX5E_PARAMS_MAXIMUM_LOG_SQ_SIZE;
param->rx_pending = 1 << priv->params.log_rq_size;
param->tx_pending = 1 << priv->params.log_sq_size;
}
static int mlx5e_set_ringparam(struct net_device *dev,
struct ethtool_ringparam *param)
{
struct mlx5e_priv *priv = netdev_priv(dev);
struct mlx5e_params new_params;
u16 min_rx_wqes;
u8 log_rq_size;
u8 log_sq_size;
int err = 0;
if (param->rx_jumbo_pending) {
netdev_info(dev, "%s: rx_jumbo_pending not supported\n",
__func__);
return -EINVAL;
}
if (param->rx_mini_pending) {
netdev_info(dev, "%s: rx_mini_pending not supported\n",
__func__);
return -EINVAL;
}
if (param->rx_pending < (1 << MLX5E_PARAMS_MINIMUM_LOG_RQ_SIZE)) {
netdev_info(dev, "%s: rx_pending (%d) < min (%d)\n",
__func__, param->rx_pending,
1 << MLX5E_PARAMS_MINIMUM_LOG_RQ_SIZE);
return -EINVAL;
}
if (param->rx_pending > (1 << MLX5E_PARAMS_MAXIMUM_LOG_RQ_SIZE)) {
netdev_info(dev, "%s: rx_pending (%d) > max (%d)\n",
__func__, param->rx_pending,
1 << MLX5E_PARAMS_MAXIMUM_LOG_RQ_SIZE);
return -EINVAL;
}
if (param->tx_pending < (1 << MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE)) {
netdev_info(dev, "%s: tx_pending (%d) < min (%d)\n",
__func__, param->tx_pending,
1 << MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE);
return -EINVAL;
}
if (param->tx_pending > (1 << MLX5E_PARAMS_MAXIMUM_LOG_SQ_SIZE)) {
netdev_info(dev, "%s: tx_pending (%d) > max (%d)\n",
__func__, param->tx_pending,
1 << MLX5E_PARAMS_MAXIMUM_LOG_SQ_SIZE);
return -EINVAL;
}
log_rq_size = order_base_2(param->rx_pending);
log_sq_size = order_base_2(param->tx_pending);
min_rx_wqes = min_t(u16, param->rx_pending - 1,
MLX5E_PARAMS_DEFAULT_MIN_RX_WQES);
if (log_rq_size == priv->params.log_rq_size &&
log_sq_size == priv->params.log_sq_size &&
min_rx_wqes == priv->params.min_rx_wqes)
return 0;
mutex_lock(&priv->state_lock);
new_params = priv->params;
new_params.log_rq_size = log_rq_size;
new_params.log_sq_size = log_sq_size;
new_params.min_rx_wqes = min_rx_wqes;
err = mlx5e_update_priv_params(priv, &new_params);
mutex_unlock(&priv->state_lock);
return err;
}
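/*
 * Only combined (RX+TX) channels are exposed; the upper bound is the number
 * of completion vectors provided by the core driver.
 */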
static void mlx5e_get_channels(struct net_device *dev,
struct ethtool_channels *ch)
{
struct mlx5e_priv *priv = netdev_priv(dev);
int ncv = priv->mdev->priv.eq_table.num_comp_vectors;
ch->max_combined = ncv;
ch->combined_count = priv->params.num_channels;
}
static int mlx5e_set_channels(struct net_device *dev,
struct ethtool_channels *ch)
{
struct mlx5e_priv *priv = netdev_priv(dev);
int ncv = priv->mdev->priv.eq_table.num_comp_vectors;
unsigned int count = ch->combined_count;
struct mlx5e_params new_params;
int err = 0;
if (!count) {
netdev_info(dev, "%s: combined_count=0 not supported\n",
__func__);
return -EINVAL;
}
if (ch->rx_count || ch->tx_count) {
netdev_info(dev, "%s: separate rx/tx count not supported\n",
__func__);
return -EINVAL;
}
if (count > ncv) {
netdev_info(dev, "%s: count (%d) > max (%d)\n",
__func__, count, ncv);
return -EINVAL;
}
if (priv->params.num_channels == count)
return 0;
mutex_lock(&priv->state_lock);
new_params = priv->params;
new_params.num_channels = count;
err = mlx5e_update_priv_params(priv, &new_params);
mutex_unlock(&priv->state_lock);
return err;
}
static int mlx5e_get_coalesce(struct net_device *netdev,
struct ethtool_coalesce *coal)
{
struct mlx5e_priv *priv = netdev_priv(netdev);
coal->rx_coalesce_usecs = priv->params.rx_cq_moderation_usec;
coal->rx_max_coalesced_frames = priv->params.rx_cq_moderation_pkts;
coal->tx_coalesce_usecs = priv->params.tx_cq_moderation_usec;
coal->tx_max_coalesced_frames = priv->params.tx_cq_moderation_pkts;
return 0;
}
static int mlx5e_set_coalesce(struct net_device *netdev,
struct ethtool_coalesce *coal)
{
struct mlx5e_priv *priv = netdev_priv(netdev);
struct mlx5_core_dev *mdev = priv->mdev;
struct mlx5e_channel *c;
int tc;
int i;
priv->params.tx_cq_moderation_usec = coal->tx_coalesce_usecs;
priv->params.tx_cq_moderation_pkts = coal->tx_max_coalesced_frames;
priv->params.rx_cq_moderation_usec = coal->rx_coalesce_usecs;
priv->params.rx_cq_moderation_pkts = coal->rx_max_coalesced_frames;
for (i = 0; i < priv->params.num_channels; ++i) {
c = priv->channel[i];
for (tc = 0; tc < c->num_tc; tc++) {
mlx5_core_modify_cq_moderation(mdev,
&c->sq[tc].cq.mcq,
coal->tx_coalesce_usecs,
coal->tx_max_coalesced_frames);
}
mlx5_core_modify_cq_moderation(mdev, &c->rq.cq.mcq,
coal->rx_coalesce_usecs,
coal->rx_max_coalesced_frames);
}
return 0;
}
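/*
 * The helpers below translate the PTYS eth_proto_* bitmasks reported by the
 * device into the legacy 32-bit ethtool SUPPORTED_/ADVERTISED_ masks and
 * port types, using ptys2ethtool_table as the mapping.
 */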
static u32 ptys2ethtool_supported_link(u32 eth_proto_cap)
{
int i;
u32 supported_modes = 0;
for (i = 0; i < MLX5E_LINK_MODES_NUMBER; ++i) {
if (eth_proto_cap & MLX5E_PROT_MASK(i))
supported_modes |= ptys2ethtool_table[i].supported;
}
return supported_modes;
}
static u32 ptys2ethtool_adver_link(u32 eth_proto_cap)
{
int i;
u32 advertising_modes = 0;
for (i = 0; i < MLX5E_LINK_MODES_NUMBER; ++i) {
if (eth_proto_cap & MLX5E_PROT_MASK(i))
advertising_modes |= ptys2ethtool_table[i].advertised;
}
return advertising_modes;
}
static u32 ptys2ethtool_supported_port(u32 eth_proto_cap)
{
if (eth_proto_cap & (MLX5E_PROT_MASK(MLX5E_10GBASE_CR)
| MLX5E_PROT_MASK(MLX5E_10GBASE_SR)
| MLX5E_PROT_MASK(MLX5E_40GBASE_CR4)
| MLX5E_PROT_MASK(MLX5E_40GBASE_SR4)
| MLX5E_PROT_MASK(MLX5E_100GBASE_SR4)
| MLX5E_PROT_MASK(MLX5E_1000BASE_CX_SGMII))) {
return SUPPORTED_FIBRE;
}
if (eth_proto_cap & (MLX5E_PROT_MASK(MLX5E_100GBASE_KR4)
| MLX5E_PROT_MASK(MLX5E_40GBASE_KR4)
| MLX5E_PROT_MASK(MLX5E_10GBASE_KR)
| MLX5E_PROT_MASK(MLX5E_10GBASE_KX4)
| MLX5E_PROT_MASK(MLX5E_1000BASE_KX))) {
return SUPPORTED_Backplane;
}
return 0;
}
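/*
 * Speed is taken from the first operational protocol bit that is set; all
 * supported modes are full duplex. Unknown speed/duplex is reported while
 * the carrier is down.
 */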
static void get_speed_duplex(struct net_device *netdev,
u32 eth_proto_oper,
struct ethtool_cmd *cmd)
{
int i;
u32 speed = SPEED_UNKNOWN;
u8 duplex = DUPLEX_UNKNOWN;
if (!netif_carrier_ok(netdev))
goto out;
for (i = 0; i < MLX5E_LINK_MODES_NUMBER; ++i) {
if (eth_proto_oper & MLX5E_PROT_MASK(i)) {
speed = ptys2ethtool_table[i].speed;
duplex = DUPLEX_FULL;
break;
}
}
out:
ethtool_cmd_speed_set(cmd, speed);
cmd->duplex = duplex;
}
static void get_supported(u32 eth_proto_cap, u32 *supported)
{
*supported |= ptys2ethtool_supported_port(eth_proto_cap);
*supported |= ptys2ethtool_supported_link(eth_proto_cap);
*supported |= SUPPORTED_Pause | SUPPORTED_Asym_Pause;
}
static void get_advertising(u32 eth_proto_cap, u8 tx_pause,
u8 rx_pause, u32 *advertising)
{
*advertising |= ptys2ethtool_adver_link(eth_proto_cap);
*advertising |= tx_pause ? ADVERTISED_Pause : 0;
*advertising |= (tx_pause ^ rx_pause) ? ADVERTISED_Asym_Pause : 0;
}
static u8 get_connector_port(u32 eth_proto)
{
if (eth_proto & (MLX5E_PROT_MASK(MLX5E_10GBASE_SR)
| MLX5E_PROT_MASK(MLX5E_40GBASE_SR4)
| MLX5E_PROT_MASK(MLX5E_100GBASE_SR4)
| MLX5E_PROT_MASK(MLX5E_1000BASE_CX_SGMII))) {
return PORT_FIBRE;
}
if (eth_proto & (MLX5E_PROT_MASK(MLX5E_40GBASE_CR4)
| MLX5E_PROT_MASK(MLX5E_10GBASE_CR)
| MLX5E_PROT_MASK(MLX5E_100GBASE_CR4))) {
return PORT_DA;
}
if (eth_proto & (MLX5E_PROT_MASK(MLX5E_10GBASE_KX4)
| MLX5E_PROT_MASK(MLX5E_10GBASE_KR)
| MLX5E_PROT_MASK(MLX5E_40GBASE_KR4)
| MLX5E_PROT_MASK(MLX5E_100GBASE_KR4))) {
return PORT_NONE;
}
return PORT_OTHER;
}
static void get_lp_advertising(u32 eth_proto_lp, u32 *lp_advertising)
{
*lp_advertising = ptys2ethtool_adver_link(eth_proto_lp);
}
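/*
 * A single PTYS query provides the capability, admin, operational and
 * link-partner protocol masks, from which the supported/advertising masks,
 * speed/duplex and connector type are derived.
 */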
static int mlx5e_get_settings(struct net_device *netdev,
struct ethtool_cmd *cmd)
{
struct mlx5e_priv *priv = netdev_priv(netdev);
struct mlx5_core_dev *mdev = priv->mdev;
u32 out[MLX5_ST_SZ_DW(ptys_reg)];
u32 eth_proto_cap;
u32 eth_proto_admin;
u32 eth_proto_lp;
u32 eth_proto_oper;
int err;
err = mlx5_query_port_ptys(mdev, out, sizeof(out), MLX5_PTYS_EN);
if (err) {
netdev_err(netdev, "%s: query port ptys failed: %d\n",
__func__, err);
goto err_query_ptys;
}
eth_proto_cap = MLX5_GET(ptys_reg, out, eth_proto_capability);
eth_proto_admin = MLX5_GET(ptys_reg, out, eth_proto_admin);
eth_proto_oper = MLX5_GET(ptys_reg, out, eth_proto_oper);
eth_proto_lp = MLX5_GET(ptys_reg, out, eth_proto_lp_advertise);
cmd->supported = 0;
cmd->advertising = 0;
get_supported(eth_proto_cap, &cmd->supported);
get_advertising(eth_proto_admin, 0, 0, &cmd->advertising);
get_speed_duplex(netdev, eth_proto_oper, cmd);
eth_proto_oper = eth_proto_oper ? eth_proto_oper : eth_proto_cap;
cmd->port = get_connector_port(eth_proto_oper);
get_lp_advertising(eth_proto_lp, &cmd->lp_advertising);
cmd->transceiver = XCVR_INTERNAL;
err_query_ptys:
return err;
}
static u32 mlx5e_ethtool2ptys_adver_link(u32 link_modes)
{
u32 i, ptys_modes = 0;
for (i = 0; i < MLX5E_LINK_MODES_NUMBER; ++i) {
if (ptys2ethtool_table[i].advertised & link_modes)
ptys_modes |= MLX5E_PROT_MASK(i);
}
return ptys_modes;
}
static u32 mlx5e_ethtool2ptys_speed_link(u32 speed)
{
u32 i, speed_links = 0;
for (i = 0; i < MLX5E_LINK_MODES_NUMBER; ++i) {
if (ptys2ethtool_table[i].speed == speed)
speed_links |= MLX5E_PROT_MASK(i);
}
return speed_links;
}
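/*
 * With autoneg enabled, the advertised ethtool modes are mapped to PTYS
 * protocol bits; otherwise all modes matching the forced speed are used.
 * The new admin mask is written only if it differs from the current one,
 * and the port is toggled down/up so the change takes effect when the
 * link is active.
 */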
static int mlx5e_set_settings(struct net_device *netdev,
struct ethtool_cmd *cmd)
{
struct mlx5e_priv *priv = netdev_priv(netdev);
struct mlx5_core_dev *mdev = priv->mdev;
u32 link_modes;
u32 speed;
u32 eth_proto_cap, eth_proto_admin;
u8 port_status;
int err;
speed = ethtool_cmd_speed(cmd);
link_modes = cmd->autoneg == AUTONEG_ENABLE ?
mlx5e_ethtool2ptys_adver_link(cmd->advertising) :
mlx5e_ethtool2ptys_speed_link(speed);
err = mlx5_query_port_proto_cap(mdev, &eth_proto_cap, MLX5_PTYS_EN);
if (err) {
netdev_err(netdev, "%s: query port eth proto cap failed: %d\n",
__func__, err);
goto out;
}
link_modes = link_modes & eth_proto_cap;
if (!link_modes) {
netdev_err(netdev, "%s: Not supported link mode(s) requested",
__func__);
err = -EINVAL;
goto out;
}
err = mlx5_query_port_proto_admin(mdev, &eth_proto_admin, MLX5_PTYS_EN);
if (err) {
netdev_err(netdev, "%s: query port eth proto admin failed: %d\n",
__func__, err);
goto out;
}
if (link_modes == eth_proto_admin)
goto out;
err = mlx5_set_port_proto(mdev, link_modes, MLX5_PTYS_EN);
if (err) {
netdev_err(netdev, "%s: set port eth proto admin failed: %d\n",
__func__, err);
goto out;
}
err = mlx5_query_port_status(mdev, &port_status);
if (err)
goto out;
if (port_status == MLX5_PORT_DOWN)
return 0;
err = mlx5_set_port_status(mdev, MLX5_PORT_DOWN);
if (err)
goto out;
err = mlx5_set_port_status(mdev, MLX5_PORT_UP);
out:
return err;
}
const struct ethtool_ops mlx5e_ethtool_ops = {
.get_drvinfo = mlx5e_get_drvinfo,
.get_link = ethtool_op_get_link,
.get_strings = mlx5e_get_strings,
.get_sset_count = mlx5e_get_sset_count,
.get_ethtool_stats = mlx5e_get_ethtool_stats,
.get_ringparam = mlx5e_get_ringparam,
.set_ringparam = mlx5e_set_ringparam,
.get_channels = mlx5e_get_channels,
.set_channels = mlx5e_set_channels,
.get_coalesce = mlx5e_get_coalesce,
.set_coalesce = mlx5e_set_coalesce,
.get_settings = mlx5e_get_settings,
.set_settings = mlx5e_set_settings,
};
/*
* Copyright (c) 2015, Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#include "en.h"
struct mlx5_cqe64 *mlx5e_get_cqe(struct mlx5e_cq *cq)
{
struct mlx5_cqwq *wq = &cq->wq;
u32 ci = mlx5_cqwq_get_ci(wq);
struct mlx5_cqe64 *cqe = mlx5_cqwq_get_wqe(wq, ci);
int cqe_ownership_bit = cqe->op_own & MLX5_CQE_OWNER_MASK;
int sw_ownership_val = mlx5_cqwq_get_wrap_cnt(wq) & 1;
if (cqe_ownership_bit != sw_ownership_val)
return NULL;
mlx5_cqwq_pop(wq);
/* ensure cqe content is read after cqe ownership bit */
rmb();
return cqe;
}
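/*
 * NAPI poll: drain the TX CQs, poll the RX CQ within the budget and repost
 * RX WQEs. If any ring still has work, request to be polled again by
 * returning the full budget; otherwise complete and re-arm the CQs,
 * rescheduling if a completion event raced with napi_complete().
 */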
int mlx5e_napi_poll(struct napi_struct *napi, int budget)
{
struct mlx5e_channel *c = container_of(napi, struct mlx5e_channel,
napi);
bool busy = false;
int i;
clear_bit(MLX5E_CHANNEL_NAPI_SCHED, &c->flags);
for (i = 0; i < c->num_tc; i++)
busy |= mlx5e_poll_tx_cq(&c->sq[i].cq);
busy |= mlx5e_poll_rx_cq(&c->rq.cq, budget);
busy |= mlx5e_post_rx_wqes(c->rq.cq.sqrq);
if (busy)
return budget;
napi_complete(napi);
/* avoid losing completion event during/after polling cqs */
if (test_bit(MLX5E_CHANNEL_NAPI_SCHED, &c->flags)) {
napi_schedule(napi);
return 0;
}
for (i = 0; i < c->num_tc; i++)
mlx5e_cq_arm(&c->sq[i].cq);
mlx5e_cq_arm(&c->rq.cq);
return 0;
}
void mlx5e_completion_event(struct mlx5_core_cq *mcq)
{
struct mlx5e_cq *cq = container_of(mcq, struct mlx5e_cq, mcq);
set_bit(MLX5E_CQ_HAS_CQES, &cq->flags);
set_bit(MLX5E_CHANNEL_NAPI_SCHED, &cq->channel->flags);
barrier();
napi_schedule(cq->napi);
}
void mlx5e_cq_error_event(struct mlx5_core_cq *mcq, enum mlx5_event event)
{
struct mlx5e_cq *cq = container_of(mcq, struct mlx5e_cq, mcq);
struct mlx5e_channel *c = cq->channel;
struct mlx5e_priv *priv = c->priv;
struct net_device *netdev = priv->netdev;
netdev_err(netdev, "%s: cqn=0x%.6x event=0x%.2x\n",
__func__, mcq->cqn, event);
}