diff --git a/Documentation/networking/device_drivers/ethernet/mellanox/mlx5.rst b/Documentation/networking/device_drivers/ethernet/mellanox/mlx5.rst
index 5edf50d7dbd5278c63241cadb56d7b22b31adb45..e8fa7ac9e6b117c5cd2a68bf7b90fb956262fa2c 100644
--- a/Documentation/networking/device_drivers/ethernet/mellanox/mlx5.rst
+++ b/Documentation/networking/device_drivers/ethernet/mellanox/mlx5.rst
@@ -25,7 +25,7 @@ Enabling the driver and kconfig options
| at build time via kernel Kconfig flags.
| Basic features, ethernet net device rx/tx offloads and XDP, are available with the most basic flags
| CONFIG_MLX5_CORE=y/m and CONFIG_MLX5_CORE_EN=y.
-| For the list of advanced features please see below.
+| For the list of advanced features, please see below.

**CONFIG_MLX5_CORE=(y/m/n)** (module mlx5_core.ko)

@@ -89,11 +89,11 @@ Enabling the driver and kconfig options

**CONFIG_MLX5_EN_IPSEC=(y/n)**

-| Enables `IPSec XFRM cryptography-offload accelaration `_.
+| Enables `IPSec XFRM cryptography-offload acceleration `_.

**CONFIG_MLX5_EN_TLS=(y/n)**

-| TLS cryptography-offload accelaration.
+| TLS cryptography-offload acceleration.

**CONFIG_MLX5_INFINIBAND=(y/n/m)** (module mlx5_ib.ko)

@@ -139,14 +139,14 @@ flow_steering_mode: Device flow steering mode
The flow steering mode parameter controls the flow steering mode of the driver.
Two modes are supported:
1. 'dmfs' - Device managed flow steering.
-2. 'smfs - Software/Driver managed flow steering.
+2. 'smfs' - Software/Driver managed flow steering.

In DMFS mode, the HW steering entities are created and managed through the
Firmware.
In SMFS mode, the HW steering entities are created and managed though by
-the driver directly into Hardware without firmware intervention.
+the driver directly into hardware without firmware intervention.

-SMFS mode is faster and provides better rule inserstion rate compared to default DMFS mode.
+SMFS mode is faster and provides a better rule insertion rate compared to the default DMFS mode.

User command examples:

@@ -165,9 +165,9 @@ User command examples:
enable_roce: RoCE enablement state
----------------------------------
RoCE enablement state controls driver support for RoCE traffic.
-When RoCE is disabled, there is no gid table, only raw ethernet QPs are supported and traffic on the well known UDP RoCE port is handled as raw ethernet traffic.
+When RoCE is disabled, there is no gid table; only raw ethernet QPs are supported, and traffic on the well-known UDP RoCE port is handled as raw ethernet traffic.

-To change RoCE enablement state a user must change the driverinit cmode value and run devlink reload.
+To change RoCE enablement state, a user must change the driverinit cmode value and run devlink reload.

User command examples:

@@ -186,7 +186,7 @@ User command examples:
esw_port_metadata: Eswitch port metadata state
----------------------------------------------
-When applicable, disabling Eswitch metadata can increase packet rate
+When applicable, disabling eswitch metadata can increase packet rate
up to 20% depending on the use case and packet sizes.

Eswitch port metadata state controls whether to internally tag packets with
@@ -253,26 +253,26 @@ mlx5 subfunction
================
mlx5 supports subfunction management using devlink port (see
:ref:`Documentation/networking/devlink/devlink-port.rst `) interface.

-A Subfunction has its own function capabilities and its own resources. This
+A subfunction has its own function capabilities and its own resources. This
means a subfunction has its own dedicated queues (txq, rxq, cq, eq).
These queues are neither shared nor stolen from the parent PCI function.

-When a subfunction is RDMA capable, it has its own QP1, GID table and rdma
+When a subfunction is RDMA capable, it has its own QP1, GID table, and RDMA
resources neither shared nor stolen from the parent PCI function.

A subfunction has a dedicated window in PCI BAR space that is not shared
-with ther other subfunctions or the parent PCI function. This ensures that all
-devices (netdev, rdma, vdpa etc.) of the subfunction accesses only assigned
+with the other subfunctions or the parent PCI function. This ensures that all
+devices (netdev, rdma, vdpa, etc.) of the subfunction access only assigned
PCI BAR space.

-A Subfunction supports eswitch representation through which it supports tc
+A subfunction supports eswitch representation through which it supports tc
offloads. The user configures eswitch to send/receive packets from/to
the subfunction port.

Subfunctions share PCI level resources such as PCI MSI-X IRQs with
other subfunctions and/or with its parent PCI function.

-Example mlx5 software, system and device view::
+Example mlx5 software, system, and device view::

       _______
      | admin |
@@ -310,7 +310,7 @@ Example mlx5 software, system and device view::
                                        | (device add/del)
             _____|____                 ____|________
            |          |               | subfunction |
-           | PCI NIC  |---- activate/deactive events---->| host driver |
+           | PCI NIC  |--- activate/deactivate events--->| host driver |
            |__________|               | (mlx5_core) |
                                       |_____________|

@@ -320,7 +320,7 @@ Subfunction is created using devlink port interface.

    $ devlink dev eswitch set pci/0000:06:00.0 mode switchdev

-- Add a devlink port of subfunction flaovur::
+- Add a devlink port of subfunction flavour::

    $ devlink port add pci/0000:06:00.0 flavour pcisf pfnum 0 sfnum 88
    pci/0000:06:00.0/32768: type eth netdev eth6 flavour pcisf controller 0 pfnum 0 sfnum 88 external false splittable false

@@ -379,18 +379,18 @@ device created for the PCI VF/SF.
        function:
        hw_addr 00:00:00:00:00:00

-- Set the MAC address of the VF identified by its unique devlink port index::
+- Set the MAC address of the SF identified by its unique devlink port index::

    $ devlink port function set pci/0000:06:00.0/32768 hw_addr 00:00:00:00:88:88

    $ devlink port show pci/0000:06:00.0/32768
-   pci/0000:06:00.0/32768: type eth netdev enp6s0pf0sf88 flavour pcivf pfnum 0 sfnum 88
+   pci/0000:06:00.0/32768: type eth netdev enp6s0pf0sf88 flavour pcisf pfnum 0 sfnum 88
        function:
        hw_addr 00:00:00:00:88:88

SF state setup
--------------

-To use the SF, the user must active the SF using the SF function state
+To use the SF, the user must activate the SF using the SF function state
attribute.

- Get the state of the SF identified by its unique devlink port index::

@@ -447,7 +447,7 @@ for it.

Additionally, the SF port also gets the event when the driver attaches to the
auxiliary device of the subfunction. This results in changing the operational
-state of the function. This provides visiblity to the user to decide when is it
+state of the function. This provides visibility to the user to decide when it is
safe to delete the SF port for graceful termination of the subfunction.

- Show the SF port operational state::

@@ -464,14 +464,14 @@ tx reporter
-----------
The tx reporter is responsible for reporting and recovering of the following two error scenarios:

-- TX timeout
-  Report on kernel tx timeout detection.
-  Recover by searching lost interrupts.
+- tx timeout
+  Report on kernel tx timeout detection.
+  Recover by searching lost interrupts.

-- TX error completion
-  Report on error tx completion.
+- tx error completion
+  Report on error tx completion.
-  Recover by flushing the TX queue and reset it.
+  Recover by flushing the tx queue and resetting it.

-TX reporter also support on demand diagnose callback, on which it provides
+tx reporter also supports an on demand diagnose callback, on which it provides
real time information of its send queues status.

User commands examples:

@@ -491,32 +491,32 @@ rx reporter
-----------
The rx reporter is responsible for reporting and recovering of the following two error scenarios:

-- RX queues initialization (population) timeout
-  RX queues descriptors population on ring initialization is done in
-  napi context via triggering an irq, in case of a failure to get
-  the minimum amount of descriptors, a timeout would occur and it
-  could be recoverable by polling the EQ (Event Queue).
-- RX completions with errors (reported by HW on interrupt context)
+- rx queues' initialization (population) timeout
+  Population of rx queues' descriptors on ring initialization is done
+  in napi context via triggering an irq. In case of a failure to get
+  the minimum amount of descriptors, a timeout would occur, and
+  descriptors could be recovered by polling the EQ (Event Queue).
+- rx completions with errors (reported by HW on interrupt context)
  Report on rx completion error.
  Recover (if needed) by flushing the related queue and reset it.

-RX reporter also supports on demand diagnose callback, on which it
-provides real time information of its receive queues status.
+rx reporter also supports an on demand diagnose callback, on which it
+provides real time information of its receive queues' status.

-- Diagnose rx queues status, and corresponding completion queue::
+- Diagnose rx queues' status and corresponding completion queue::

    $ devlink health diagnose pci/0000:82:00.0 reporter rx

-NOTE: This command has valid output only when interface is up, otherwise the command has empty output.
+NOTE: This command has valid output only when the interface is up. Otherwise, the command has empty output.

- Show number of rx errors indicated, number of recover flows ended successfully,
-  is autorecover enabled and graceful period from last recover::
+  is autorecover enabled, and graceful period from last recover::

    $ devlink health show pci/0000:82:00.0 reporter rx

fw reporter
-----------
-The fw reporter implements diagnose and dump callbacks.
+The fw reporter implements `diagnose` and `dump` callbacks.
It follows symptoms of fw error such as fw syndrome by triggering
fw core dump and storing it into the dump buffer.
The fw reporter diagnose command can be triggered any time by the user to check
@@ -537,7 +537,7 @@ running it on other PF or any VF will return "Operation not permitted".

fw fatal reporter
-----------------
-The fw fatal reporter implements dump and recover callbacks.
+The fw fatal reporter implements `dump` and `recover` callbacks.
It follows fatal errors indications by CR-space dump and recover flow.
The CR-space dump uses vsc interface which is valid even if the FW command
interface is not functional, which is the case in most FW fatal errors.
@@ -552,7 +552,7 @@ User commands examples:

    $ devlink health recover pci/0000:82:00.0 reporter fw_fatal

-- Read FW CR-space dump if already strored or trigger new one::
+- Read FW CR-space dump if already stored or trigger a new one::

    $ devlink health dump show pci/0000:82:00.1 reporter fw_fatal

@@ -561,10 +561,10 @@ NOTE: This command can run only on PF.
mlx5 tracepoints
================

-mlx5 driver provides internal trace points for tracking and debugging using
+The mlx5 driver provides internal tracepoints for tracking and debugging using
kernel tracepoints interfaces (refer to Documentation/trace/ftrace.rst).

-For the list of support mlx5 events check /sys/kernel/debug/tracing/events/mlx5/
+For the list of supported mlx5 events, check `/sys/kernel/debug/tracing/events/mlx5/`.

tc and eswitch offloads tracepoints:

diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
index bc97958818bb5ace9bad3da86af3a70761e0502f..e6e021af6aa923a6d749f07caf5d81b983abcb1f 100644
--- a/drivers/infiniband/hw/mlx5/odp.c
+++ b/drivers/infiniband/hw/mlx5/odp.c
@@ -230,8 +230,7 @@ static bool mlx5_ib_invalidate_range(struct mmu_interval_notifier *mni,
	struct ib_umem_odp *umem_odp =
		container_of(mni, struct ib_umem_odp, notifier);
	struct mlx5_ib_mr *mr;
-	const u64 umr_block_mask = (MLX5_UMR_MTT_ALIGNMENT /
-				    sizeof(struct mlx5_mtt)) - 1;
+	const u64 umr_block_mask = MLX5_UMR_MTT_NUM_ENTRIES_ALIGNMENT - 1;
	u64 idx = 0, blk_start_idx = 0;
	u64 invalidations = 0;
	unsigned long start;
diff --git a/drivers/infiniband/hw/mlx5/umr.c b/drivers/infiniband/hw/mlx5/umr.c
index d5105b5c9979b582083fed732fff25b0f5cc2462..029e9536ec28f2ffc252890b8ef1ae5c2b4b91c6 100644
--- a/drivers/infiniband/hw/mlx5/umr.c
+++ b/drivers/infiniband/hw/mlx5/umr.c
@@ -418,7 +418,7 @@ int mlx5r_umr_rereg_pd_access(struct mlx5_ib_mr *mr, struct ib_pd *pd,
}

#define MLX5_MAX_UMR_CHUNK \
-	((1 << (MLX5_MAX_UMR_SHIFT + 4)) - MLX5_UMR_MTT_ALIGNMENT)
+	((1 << (MLX5_MAX_UMR_SHIFT + 4)) - MLX5_UMR_FLEX_ALIGNMENT)
#define MLX5_SPARE_UMR_CHUNK 0x10000

/*
@@ -428,11 +428,11 @@ int mlx5r_umr_rereg_pd_access(struct mlx5_ib_mr *mr, struct ib_pd *pd,
 */
static void *mlx5r_umr_alloc_xlt(size_t *nents, size_t ent_size, gfp_t gfp_mask)
{
-	const size_t xlt_chunk_align = MLX5_UMR_MTT_ALIGNMENT / ent_size;
+	const size_t xlt_chunk_align = MLX5_UMR_FLEX_ALIGNMENT / ent_size;
	size_t size;
	void *res = NULL;

-	static_assert(PAGE_SIZE % MLX5_UMR_MTT_ALIGNMENT == 0);
+	static_assert(PAGE_SIZE % MLX5_UMR_FLEX_ALIGNMENT == 0);

	/*
	 * MLX5_IB_UPD_XLT_ATOMIC doesn't signal an atomic context just that the
@@ -666,7 +666,7 @@ int mlx5r_umr_update_mr_pas(struct mlx5_ib_mr *mr, unsigned int flags)
	}

	final_size = (void *)cur_mtt - (void *)mtt;
-	sg.length = ALIGN(final_size, MLX5_UMR_MTT_ALIGNMENT);
+	sg.length = ALIGN(final_size, MLX5_UMR_FLEX_ALIGNMENT);
	memset(cur_mtt, 0, sg.length - final_size);
	mlx5r_umr_final_update_xlt(dev, &wqe, mr, &sg, flags);

@@ -690,7 +690,7 @@ int mlx5r_umr_update_xlt(struct mlx5_ib_mr *mr, u64 idx, int npages,
	int desc_size = (flags & MLX5_IB_UPD_XLT_INDIRECT) ?
sizeof(struct mlx5_klm) : sizeof(struct mlx5_mtt); - const int page_align = MLX5_UMR_MTT_ALIGNMENT / desc_size; + const int page_align = MLX5_UMR_FLEX_ALIGNMENT / desc_size; struct mlx5_ib_dev *dev = mr_to_mdev(mr); struct device *ddev = &dev->mdev->pdev->dev; const int page_mask = page_align - 1; @@ -711,7 +711,7 @@ int mlx5r_umr_update_xlt(struct mlx5_ib_mr *mr, u64 idx, int npages, if (WARN_ON(!mr->umem->is_odp)) return -EINVAL; - /* UMR copies MTTs in units of MLX5_UMR_MTT_ALIGNMENT bytes, + /* UMR copies MTTs in units of MLX5_UMR_FLEX_ALIGNMENT bytes, * so we need to align the offset and length accordingly */ if (idx & page_mask) { @@ -748,7 +748,7 @@ int mlx5r_umr_update_xlt(struct mlx5_ib_mr *mr, u64 idx, int npages, mlx5_odp_populate_xlt(xlt, idx, npages, mr, flags); dma_sync_single_for_device(ddev, sg.addr, sg.length, DMA_TO_DEVICE); - sg.length = ALIGN(size_to_map, MLX5_UMR_MTT_ALIGNMENT); + sg.length = ALIGN(size_to_map, MLX5_UMR_FLEX_ALIGNMENT); if (pages_mapped + pages_iter >= pages_to_map) mlx5r_umr_final_update_xlt(dev, &wqe, mr, &sg, flags); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c index e7a894ba5c3eae9584673149ba8febd0e9e9ba90..d3ca745d107d62ca9e6cac4bb2d9ef729ed7dded 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c @@ -37,7 +37,6 @@ #include #include #include -#include #include #include #include diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h index ff5b302531d516692f8a5250279421f95cc3f772..65790ff58a743abdf9f7c6e38ead3d1f70c26392 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h @@ -103,11 +103,11 @@ struct page_pool; * size actually used at runtime, but it's not a problem when calculating static * array sizes. */ -#define MLX5_UMR_MAX_MTT_SPACE \ +#define MLX5_UMR_MAX_FLEX_SPACE \ (ALIGN_DOWN(MLX5_SEND_WQE_MAX_SIZE - sizeof(struct mlx5e_umr_wqe), \ - MLX5_UMR_MTT_ALIGNMENT)) + MLX5_UMR_FLEX_ALIGNMENT)) #define MLX5_MPWRQ_MAX_PAGES_PER_WQE \ - rounddown_pow_of_two(MLX5_UMR_MAX_MTT_SPACE / sizeof(struct mlx5_mtt)) + rounddown_pow_of_two(MLX5_UMR_MAX_FLEX_SPACE / sizeof(struct mlx5_mtt)) #define MLX5E_MAX_RQ_NUM_MTTS \ (ALIGN_DOWN(U16_MAX, 4) * 2) /* Fits into u16 and aligned by WQEBB. */ @@ -160,7 +160,7 @@ struct page_pool; (((wqe_size) - sizeof(struct mlx5e_umr_wqe)) / sizeof(struct mlx5_klm)) #define MLX5E_KLM_ENTRIES_PER_WQE(wqe_size)\ - ALIGN_DOWN(MLX5E_KLM_MAX_ENTRIES_PER_WQE(wqe_size), MLX5_UMR_KLM_ALIGNMENT) + ALIGN_DOWN(MLX5E_KLM_MAX_ENTRIES_PER_WQE(wqe_size), MLX5_UMR_KLM_NUM_ENTRIES_ALIGNMENT) #define MLX5E_MAX_KLM_PER_WQE(mdev) \ MLX5E_KLM_ENTRIES_PER_WQE(MLX5_SEND_WQE_BB * mlx5e_get_max_sq_aligned_wqebbs(mdev)) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c index 9ec9662d1d0b81a7c85dfe681bd77712e2f62c55..585bdc8383ee21fb6371757b66a6ee1bf3edd425 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c @@ -107,7 +107,7 @@ u8 mlx5e_mpwrq_log_wqe_sz(struct mlx5_core_dev *mdev, u8 page_shift, /* Keep in sync with MLX5_MPWRQ_MAX_PAGES_PER_WQE. 
*/ max_wqe_size = mlx5e_get_max_sq_aligned_wqebbs(mdev) * MLX5_SEND_WQE_BB; max_pages_per_wqe = ALIGN_DOWN(max_wqe_size - sizeof(struct mlx5e_umr_wqe), - MLX5_UMR_MTT_ALIGNMENT) / umr_entry_size; + MLX5_UMR_FLEX_ALIGNMENT) / umr_entry_size; max_log_mpwqe_size = ilog2(max_pages_per_wqe) + page_shift; WARN_ON_ONCE(max_log_mpwqe_size < MLX5E_ORDER2_MAX_PACKET_MTU); @@ -146,7 +146,7 @@ u16 mlx5e_mpwrq_umr_wqe_sz(struct mlx5_core_dev *mdev, u8 page_shift, u16 umr_wqe_sz; umr_wqe_sz = sizeof(struct mlx5e_umr_wqe) + - ALIGN(pages_per_wqe * umr_entry_size, MLX5_UMR_MTT_ALIGNMENT); + ALIGN(pages_per_wqe * umr_entry_size, MLX5_UMR_FLEX_ALIGNMENT); WARN_ON_ONCE(DIV_ROUND_UP(umr_wqe_sz, MLX5_SEND_WQE_DS) > MLX5_WQE_CTRL_DS_MASK); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/trap.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/trap.c index 53b270f652b91bc6a5fa100dac6f279e698691ec..915ce201aeb26536bcdd7e7327e17623d3de3d6f 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/trap.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/trap.c @@ -3,6 +3,7 @@ #include "act.h" #include "en/tc_priv.h" +#include "eswitch.h" static bool tc_act_can_offload_trap(struct mlx5e_tc_act_parse_state *parse_state, @@ -10,13 +11,6 @@ tc_act_can_offload_trap(struct mlx5e_tc_act_parse_state *parse_state, int act_index, struct mlx5_flow_attr *attr) { - struct netlink_ext_ack *extack = parse_state->extack; - - if (parse_state->flow_action->num_entries != 1) { - NL_SET_ERR_MSG_MOD(extack, "action trap is supported as a sole action only"); - return false; - } - return true; } @@ -27,7 +21,7 @@ tc_act_parse_trap(struct mlx5e_tc_act_parse_state *parse_state, struct mlx5_flow_attr *attr) { attr->action |= MLX5_FLOW_CONTEXT_ACTION_FWD_DEST; - attr->flags |= MLX5_ATTR_FLAG_SLOW_PATH; + attr->dest_ft = mlx5_eswitch_get_slow_fdb(priv->mdev->priv.eswitch); return 0; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c index a715601865d319a519fe2e63708949d7c152d0e0..1b03ab03fc5a3743c072a5509250ffd687b42a91 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c @@ -345,29 +345,27 @@ static void mlx5e_xfrm_free_state(struct xfrm_state *x) kfree(sa_entry); } -int mlx5e_ipsec_init(struct mlx5e_priv *priv) +void mlx5e_ipsec_init(struct mlx5e_priv *priv) { struct mlx5e_ipsec *ipsec; - int ret; + int ret = -ENOMEM; if (!mlx5_ipsec_device_caps(priv->mdev)) { netdev_dbg(priv->netdev, "Not an IPSec offload device\n"); - return 0; + return; } ipsec = kzalloc(sizeof(*ipsec), GFP_KERNEL); if (!ipsec) - return -ENOMEM; + return; hash_init(ipsec->sadb_rx); spin_lock_init(&ipsec->sadb_rx_lock); ipsec->mdev = priv->mdev; ipsec->wq = alloc_ordered_workqueue("mlx5e_ipsec: %s", 0, priv->netdev->name); - if (!ipsec->wq) { - ret = -ENOMEM; + if (!ipsec->wq) goto err_wq; - } ret = mlx5e_accel_ipsec_fs_init(ipsec); if (ret) @@ -375,13 +373,14 @@ int mlx5e_ipsec_init(struct mlx5e_priv *priv) priv->ipsec = ipsec; netdev_dbg(priv->netdev, "IPSec attached to netdevice\n"); - return 0; + return; err_fs_init: destroy_workqueue(ipsec->wq); err_wq: kfree(ipsec); - return (ret != -EOPNOTSUPP) ? 
ret : 0; + mlx5_core_err(priv->mdev, "IPSec initialization failed, %d\n", ret); + return; } void mlx5e_ipsec_cleanup(struct mlx5e_priv *priv) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h index 16bcceec16c445500ccb979e6a67f496c90304c6..4c47347d0ee2386d11846864e03e939eb9b8d72b 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h @@ -146,7 +146,7 @@ struct mlx5e_ipsec_sa_entry { struct mlx5e_ipsec_modify_state_work modify_work; }; -int mlx5e_ipsec_init(struct mlx5e_priv *priv); +void mlx5e_ipsec_init(struct mlx5e_priv *priv); void mlx5e_ipsec_cleanup(struct mlx5e_priv *priv); void mlx5e_ipsec_build_netdev(struct mlx5e_priv *priv); @@ -174,9 +174,8 @@ mlx5e_ipsec_sa2dev(struct mlx5e_ipsec_sa_entry *sa_entry) return sa_entry->ipsec->mdev; } #else -static inline int mlx5e_ipsec_init(struct mlx5e_priv *priv) +static inline void mlx5e_ipsec_init(struct mlx5e_priv *priv) { - return 0; } static inline void mlx5e_ipsec_cleanup(struct mlx5e_priv *priv) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/macsec.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/macsec.c index f900709639f6e6af866f6bd46fa153aff910333c..9369a580743e1ffd9ce1955b64b4c87f7113234b 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/macsec.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/macsec.c @@ -186,7 +186,7 @@ static int mlx5e_macsec_aso_reg_mr(struct mlx5_core_dev *mdev, struct mlx5e_macs return err; } - dma_device = &mdev->pdev->dev; + dma_device = mlx5_core_dma_dev(mdev); dma_addr = dma_map_single(dma_device, umr->ctx, sizeof(umr->ctx), DMA_BIDIRECTIONAL); err = dma_mapping_error(dma_device, dma_addr); if (err) { @@ -1299,12 +1299,12 @@ static void macsec_aso_build_wqe_ctrl_seg(struct mlx5e_macsec_aso *macsec_aso, struct mlx5_wqe_aso_ctrl_seg *aso_ctrl, struct mlx5_aso_ctrl_param *param) { + struct mlx5e_macsec_umr *umr = macsec_aso->umr; + memset(aso_ctrl, 0, sizeof(*aso_ctrl)); - if (macsec_aso->umr->dma_addr) { - aso_ctrl->va_l = cpu_to_be32(macsec_aso->umr->dma_addr | ASO_CTRL_READ_EN); - aso_ctrl->va_h = cpu_to_be32((u64)macsec_aso->umr->dma_addr >> 32); - aso_ctrl->l_key = cpu_to_be32(macsec_aso->umr->mkey); - } + aso_ctrl->va_l = cpu_to_be32(umr->dma_addr | ASO_CTRL_READ_EN); + aso_ctrl->va_h = cpu_to_be32((u64)umr->dma_addr >> 32); + aso_ctrl->l_key = cpu_to_be32(umr->mkey); if (!param) return; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c index 217c8a478977119b605f6f76826dd321b891564b..8d36e2de53a992c9542ccadda90234d8c688de06 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c @@ -208,7 +208,7 @@ static u16 mlx5e_mpwrq_umr_octowords(u32 entries, enum mlx5e_mpwrq_umr_mode umr_ u8 umr_entry_size = mlx5e_mpwrq_umr_entry_size(umr_mode); u32 sz; - sz = ALIGN(entries * umr_entry_size, MLX5_UMR_MTT_ALIGNMENT); + sz = ALIGN(entries * umr_entry_size, MLX5_UMR_FLEX_ALIGNMENT); return sz / MLX5_OCTWORD; } @@ -5238,10 +5238,6 @@ static int mlx5e_nic_init(struct mlx5_core_dev *mdev, } priv->fs = fs; - err = mlx5e_ipsec_init(priv); - if (err) - mlx5_core_err(mdev, "IPSec initialization failed, %d\n", err); - err = mlx5e_ktls_init(priv); if (err) mlx5_core_err(mdev, "TLS initialization failed, %d\n", err); @@ -5254,7 +5250,6 @@ static void mlx5e_nic_cleanup(struct mlx5e_priv *priv) { 
mlx5e_health_destroy_reporters(priv); mlx5e_ktls_cleanup(priv); - mlx5e_ipsec_cleanup(priv); mlx5e_fs_cleanup(priv->fs); } @@ -5383,6 +5378,7 @@ static void mlx5e_nic_enable(struct mlx5e_priv *priv) int err; mlx5e_fs_init_l2_addr(priv->fs, netdev); + mlx5e_ipsec_init(priv); err = mlx5e_macsec_init(priv); if (err) @@ -5446,6 +5442,7 @@ static void mlx5e_nic_disable(struct mlx5e_priv *priv) mlx5_lag_remove_netdev(mdev, priv->netdev); mlx5_vxlan_reset_to_default(mdev->vxlan); mlx5e_macsec_cleanup(priv); + mlx5e_ipsec_cleanup(priv); } int mlx5e_update_nic_rx(struct mlx5e_priv *priv) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c index 1b53e8852c86c9f0a87f7d4f2def900074985bf9..623886462c1022d88542e4f887edaa983dfa5d61 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c @@ -751,7 +751,6 @@ static int mlx5e_init_ul_rep(struct mlx5_core_dev *mdev, struct net_device *netdev) { struct mlx5e_priv *priv = netdev_priv(netdev); - int err; priv->fs = mlx5e_fs_init(priv->profile, mdev, !test_bit(MLX5E_STATE_DESTROYING, &priv->state)); @@ -760,10 +759,6 @@ static int mlx5e_init_ul_rep(struct mlx5_core_dev *mdev, return -ENOMEM; } - err = mlx5e_ipsec_init(priv); - if (err) - mlx5_core_err(mdev, "Uplink rep IPsec initialization failed, %d\n", err); - mlx5e_vxlan_set_netdev_info(priv); mlx5e_build_rep_params(netdev); mlx5e_timestamp_init(priv); @@ -773,7 +768,6 @@ static int mlx5e_init_ul_rep(struct mlx5_core_dev *mdev, static void mlx5e_cleanup_rep(struct mlx5e_priv *priv) { mlx5e_fs_cleanup(priv->fs); - mlx5e_ipsec_cleanup(priv); } static int mlx5e_create_rep_ttc_table(struct mlx5e_priv *priv) @@ -1112,6 +1106,8 @@ static void mlx5e_uplink_rep_enable(struct mlx5e_priv *priv) struct mlx5_core_dev *mdev = priv->mdev; u16 max_mtu; + mlx5e_ipsec_init(priv); + netdev->min_mtu = ETH_MIN_MTU; mlx5_query_port_max_mtu(priv->mdev, &max_mtu, 1); netdev->max_mtu = MLX5E_HW2SW_MTU(&priv->channels.params, max_mtu); @@ -1158,6 +1154,8 @@ static void mlx5e_uplink_rep_disable(struct mlx5e_priv *priv) mlx5e_rep_tc_disable(priv); mlx5_lag_remove_netdev(mdev, priv->netdev); mlx5_vxlan_reset_to_default(mdev->vxlan); + + mlx5e_ipsec_cleanup(priv); } static MLX5E_DEFINE_STATS_GRP(sw_rep, 0); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c index b1ea0b995d9c90c09c91b085c4f8d13c1ac14871..c8820ab221694aae3afb8472640ccdf5fe65b5b2 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c @@ -593,8 +593,8 @@ static int mlx5e_build_shampo_hd_umr(struct mlx5e_rq *rq, int headroom, i; headroom = rq->buff.headroom; - new_entries = klm_entries - (shampo->pi & (MLX5_UMR_KLM_ALIGNMENT - 1)); - entries = ALIGN(klm_entries, MLX5_UMR_KLM_ALIGNMENT); + new_entries = klm_entries - (shampo->pi & (MLX5_UMR_KLM_NUM_ENTRIES_ALIGNMENT - 1)); + entries = ALIGN(klm_entries, MLX5_UMR_KLM_NUM_ENTRIES_ALIGNMENT); wqe_bbs = MLX5E_KLM_UMR_WQEBBS(entries); pi = mlx5e_icosq_get_next_pi(sq, wqe_bbs); umr_wqe = mlx5_wq_cyc_get_wqe(&sq->wq, pi); @@ -603,7 +603,7 @@ static int mlx5e_build_shampo_hd_umr(struct mlx5e_rq *rq, for (i = 0; i < entries; i++, index++) { dma_info = &shampo->info[index]; if (i >= klm_entries || (index < shampo->pi && shampo->pi - index < - MLX5_UMR_KLM_ALIGNMENT)) + MLX5_UMR_KLM_NUM_ENTRIES_ALIGNMENT)) goto update_klm; header_offset = (index & (MLX5E_SHAMPO_WQ_HEADER_PER_PAGE - 1)) << 
MLX5E_SHAMPO_LOG_MAX_HEADER_ENTRY_SIZE; @@ -668,8 +668,8 @@ static int mlx5e_alloc_rx_hd_mpwqe(struct mlx5e_rq *rq) if (!klm_entries) return 0; - klm_entries += (shampo->pi & (MLX5_UMR_KLM_ALIGNMENT - 1)); - index = ALIGN_DOWN(shampo->pi, MLX5_UMR_KLM_ALIGNMENT); + klm_entries += (shampo->pi & (MLX5_UMR_KLM_NUM_ENTRIES_ALIGNMENT - 1)); + index = ALIGN_DOWN(shampo->pi, MLX5_UMR_KLM_NUM_ENTRIES_ALIGNMENT); entries_before = shampo->hd_per_wq - index; if (unlikely(entries_before < klm_entries)) @@ -727,6 +727,17 @@ static int mlx5e_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix) }; } + /* Pad if needed, in case the value set to ucseg->xlt_octowords + * in mlx5e_build_umr_wqe() needed alignment. + */ + if (rq->mpwqe.pages_per_wqe & (MLX5_UMR_MTT_NUM_ENTRIES_ALIGNMENT - 1)) { + int pad = ALIGN(rq->mpwqe.pages_per_wqe, MLX5_UMR_MTT_NUM_ENTRIES_ALIGNMENT) - + rq->mpwqe.pages_per_wqe; + + memset(&umr_wqe->inline_mtts[rq->mpwqe.pages_per_wqe], 0, + sizeof(*umr_wqe->inline_mtts) * pad); + } + bitmap_zero(wi->xdp_xmit_bitmap, rq->mpwqe.pages_per_wqe); wi->consumed_strides = 0; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h index 48241317a53545ce7c46e40c9a60aafbcf301650..0db41fa4a9a64e152acfac728573e581acf85ca2 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h @@ -97,8 +97,8 @@ struct mlx5_flow_attr { } lag; /* keep this union last */ union { - struct mlx5_esw_flow_attr esw_attr[0]; - struct mlx5_nic_flow_attr nic_attr[0]; + DECLARE_FLEX_ARRAY(struct mlx5_esw_flow_attr, esw_attr); + DECLARE_FLEX_ARRAY(struct mlx5_nic_flow_attr, nic_attr); }; }; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h index 3029bc1c0dd04c073f48e13bcbcc5590a7ca8209..42d9df417e20ddb659263954f9e604be5d424d57 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h @@ -744,6 +744,11 @@ static inline int mlx5_eswitch_num_vfs(struct mlx5_eswitch *esw) return 0; } +static inline struct mlx5_flow_table * +mlx5_eswitch_get_slow_fdb(struct mlx5_eswitch *esw) +{ + return esw->fdb_table.offloads.slow_fdb; +} #else /* CONFIG_MLX5_ESWITCH */ /* eswitch API stubs */ static inline int mlx5_eswitch_init(struct mlx5_core_dev *dev) { return 0; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c index 8c6c9bcb3dc3ff61befdb12d2007055e34a5115c..9b6fbb19c22a95bed78bb88f74270d17bea48533 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c @@ -248,7 +248,7 @@ esw_setup_slow_path_dest(struct mlx5_flow_destination *dest, struct mlx5_flow_ac if (MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, ignore_flow_level)) flow_act->flags |= FLOW_ACT_IGNORE_FLOW_LEVEL; dest[i].type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE; - dest[i].ft = esw->fdb_table.offloads.slow_fdb; + dest[i].ft = mlx5_eswitch_get_slow_fdb(esw); } static int @@ -479,13 +479,15 @@ esw_setup_dests(struct mlx5_flow_destination *dest, esw_src_port_rewrite_supported(esw)) attr->flags |= MLX5_ATTR_FLAG_SRC_REWRITE; - if (attr->flags & MLX5_ATTR_FLAG_SAMPLE && - !(attr->flags & MLX5_ATTR_FLAG_SLOW_PATH)) { - esw_setup_sampler_dest(dest, flow_act, attr->sample_attr.sampler_id, *i); - (*i)++; - } else if (attr->flags & MLX5_ATTR_FLAG_SLOW_PATH) { + if (attr->flags & MLX5_ATTR_FLAG_SLOW_PATH) { 
esw_setup_slow_path_dest(dest, flow_act, esw, *i); (*i)++; + goto out; + } + + if (attr->flags & MLX5_ATTR_FLAG_SAMPLE) { + esw_setup_sampler_dest(dest, flow_act, attr->sample_attr.sampler_id, *i); + (*i)++; } else if (attr->flags & MLX5_ATTR_FLAG_ACCEPT) { esw_setup_accept_dest(dest, flow_act, chains, *i); (*i)++; @@ -506,6 +508,7 @@ esw_setup_dests(struct mlx5_flow_destination *dest, } } +out: return err; } @@ -1046,7 +1049,7 @@ mlx5_eswitch_add_send_to_vport_rule(struct mlx5_eswitch *on_esw, if (rep->vport == MLX5_VPORT_UPLINK) spec->flow_context.flow_source = MLX5_FLOW_CONTEXT_FLOW_SOURCE_LOCAL_VPORT; - flow_rule = mlx5_add_flow_rules(on_esw->fdb_table.offloads.slow_fdb, + flow_rule = mlx5_add_flow_rules(mlx5_eswitch_get_slow_fdb(on_esw), spec, &flow_act, &dest, 1); if (IS_ERR(flow_rule)) esw_warn(on_esw->dev, "FDB: Failed to add send to vport rule err %ld\n", @@ -1095,7 +1098,7 @@ mlx5_eswitch_add_send_to_vport_meta_rule(struct mlx5_eswitch *esw, u16 vport_num mlx5_eswitch_get_vport_metadata_for_match(esw, vport_num)); dest.vport.num = vport_num; - flow_rule = mlx5_add_flow_rules(esw->fdb_table.offloads.slow_fdb, + flow_rule = mlx5_add_flow_rules(mlx5_eswitch_get_slow_fdb(esw), spec, &flow_act, &dest, 1); if (IS_ERR(flow_rule)) esw_warn(esw->dev, "FDB: Failed to add send to vport meta rule vport %d, err %ld\n", @@ -1248,7 +1251,7 @@ static int esw_add_fdb_peer_miss_rules(struct mlx5_eswitch *esw, esw_set_peer_miss_rule_source_port(esw, peer_dev->priv.eswitch, spec, MLX5_VPORT_PF); - flow = mlx5_add_flow_rules(esw->fdb_table.offloads.slow_fdb, + flow = mlx5_add_flow_rules(mlx5_eswitch_get_slow_fdb(esw), spec, &flow_act, &dest, 1); if (IS_ERR(flow)) { err = PTR_ERR(flow); @@ -1260,7 +1263,7 @@ static int esw_add_fdb_peer_miss_rules(struct mlx5_eswitch *esw, if (mlx5_ecpf_vport_exists(esw->dev)) { vport = mlx5_eswitch_get_vport(esw, MLX5_VPORT_ECPF); MLX5_SET(fte_match_set_misc, misc, source_port, MLX5_VPORT_ECPF); - flow = mlx5_add_flow_rules(esw->fdb_table.offloads.slow_fdb, + flow = mlx5_add_flow_rules(mlx5_eswitch_get_slow_fdb(esw), spec, &flow_act, &dest, 1); if (IS_ERR(flow)) { err = PTR_ERR(flow); @@ -1274,7 +1277,7 @@ static int esw_add_fdb_peer_miss_rules(struct mlx5_eswitch *esw, peer_dev->priv.eswitch, spec, vport->vport); - flow = mlx5_add_flow_rules(esw->fdb_table.offloads.slow_fdb, + flow = mlx5_add_flow_rules(mlx5_eswitch_get_slow_fdb(esw), spec, &flow_act, &dest, 1); if (IS_ERR(flow)) { err = PTR_ERR(flow); @@ -1363,7 +1366,7 @@ static int esw_add_fdb_miss_rule(struct mlx5_eswitch *esw) dest.vport.num = esw->manager_vport; flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST; - flow_rule = mlx5_add_flow_rules(esw->fdb_table.offloads.slow_fdb, + flow_rule = mlx5_add_flow_rules(mlx5_eswitch_get_slow_fdb(esw), spec, &flow_act, &dest, 1); if (IS_ERR(flow_rule)) { err = PTR_ERR(flow_rule); @@ -1378,7 +1381,7 @@ static int esw_add_fdb_miss_rule(struct mlx5_eswitch *esw) dmac_v = MLX5_ADDR_OF(fte_match_param, headers_v, outer_headers.dmac_47_16); dmac_v[0] = 0x01; - flow_rule = mlx5_add_flow_rules(esw->fdb_table.offloads.slow_fdb, + flow_rule = mlx5_add_flow_rules(mlx5_eswitch_get_slow_fdb(esw), spec, &flow_act, &dest, 1); if (IS_ERR(flow_rule)) { err = PTR_ERR(flow_rule); @@ -1927,7 +1930,7 @@ static int esw_create_offloads_fdb_tables(struct mlx5_eswitch *esw) fdb_chains_err: mlx5_destroy_flow_table(esw->fdb_table.offloads.tc_miss_table); tc_miss_table_err: - mlx5_destroy_flow_table(esw->fdb_table.offloads.slow_fdb); + mlx5_destroy_flow_table(mlx5_eswitch_get_slow_fdb(esw)); 
slow_fdb_err: /* Holds true only as long as DMFS is the default */ mlx5_flow_namespace_set_mode(root_ns, MLX5_FLOW_STEERING_MODE_DMFS); @@ -1938,7 +1941,7 @@ static int esw_create_offloads_fdb_tables(struct mlx5_eswitch *esw) static void esw_destroy_offloads_fdb_tables(struct mlx5_eswitch *esw) { - if (!esw->fdb_table.offloads.slow_fdb) + if (!mlx5_eswitch_get_slow_fdb(esw)) return; esw_debug(esw->dev, "Destroy offloads FDB Tables\n"); @@ -1954,7 +1957,7 @@ static void esw_destroy_offloads_fdb_tables(struct mlx5_eswitch *esw) esw_chains_destroy(esw, esw_chains(esw)); mlx5_destroy_flow_table(esw->fdb_table.offloads.tc_miss_table); - mlx5_destroy_flow_table(esw->fdb_table.offloads.slow_fdb); + mlx5_destroy_flow_table(mlx5_eswitch_get_slow_fdb(esw)); /* Holds true only as long as DMFS is the default */ mlx5_flow_namespace_set_mode(esw->fdb_table.offloads.ns, MLX5_FLOW_STEERING_MODE_DMFS); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c index edd9102583144192c91e6d5103534be02dc02469..3a9a6bb9158de2e8f4777d595f4aff700fa1db54 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c @@ -210,6 +210,18 @@ static bool mlx5_eswitch_offload_is_uplink_port(const struct mlx5_eswitch *esw, return (port_mask & port_value) == MLX5_VPORT_UPLINK; } +static bool +mlx5_eswitch_is_push_vlan_no_cap(struct mlx5_eswitch *esw, + struct mlx5_flow_act *flow_act) +{ + if (flow_act->action & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH && + !(mlx5_fs_get_capabilities(esw->dev, MLX5_FLOW_NAMESPACE_FDB) & + MLX5_FLOW_STEERING_CAP_VLAN_PUSH_ON_RX)) + return true; + + return false; +} + bool mlx5_eswitch_termtbl_required(struct mlx5_eswitch *esw, struct mlx5_flow_attr *attr, @@ -225,10 +237,7 @@ mlx5_eswitch_termtbl_required(struct mlx5_eswitch *esw, (!mlx5_eswitch_offload_is_uplink_port(esw, spec) && !esw_attr->int_port)) return false; - /* push vlan on RX */ - if (flow_act->action & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH && - !(mlx5_fs_get_capabilities(esw->dev, MLX5_FLOW_NAMESPACE_FDB) & - MLX5_FLOW_STEERING_CAP_VLAN_PUSH_ON_RX)) + if (mlx5_eswitch_is_push_vlan_no_cap(esw, flow_act)) return true; /* hairpin */ @@ -252,19 +261,31 @@ mlx5_eswitch_add_termtbl_rule(struct mlx5_eswitch *esw, struct mlx5_flow_act term_tbl_act = {}; struct mlx5_flow_handle *rule = NULL; bool term_table_created = false; + bool is_push_vlan_on_rx; int num_vport_dests = 0; int i, curr_dest; + is_push_vlan_on_rx = mlx5_eswitch_is_push_vlan_no_cap(esw, flow_act); mlx5_eswitch_termtbl_actions_move(flow_act, &term_tbl_act); term_tbl_act.action |= MLX5_FLOW_CONTEXT_ACTION_FWD_DEST; for (i = 0; i < num_dest; i++) { struct mlx5_termtbl_handle *tt; + bool hairpin = false; /* only vport destinations can be terminated */ if (dest[i].type != MLX5_FLOW_DESTINATION_TYPE_VPORT) continue; + if (attr->dests[num_vport_dests].rep && + attr->dests[num_vport_dests].rep->vport == MLX5_VPORT_UPLINK) + hairpin = true; + + if (!is_push_vlan_on_rx && !hairpin) { + num_vport_dests++; + continue; + } + if (attr->dests[num_vport_dests].flags & MLX5_ESW_DEST_ENCAP) { term_tbl_act.action |= MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT; term_tbl_act.pkt_reformat = attr->dests[num_vport_dests].pkt_reformat; @@ -312,6 +333,9 @@ mlx5_eswitch_add_termtbl_rule(struct mlx5_eswitch *esw, for (curr_dest = 0; curr_dest < num_vport_dests; curr_dest++) { struct mlx5_termtbl_handle *tt = 
attr->dests[curr_dest].termtbl; + if (!tt) + continue; + attr->dests[curr_dest].termtbl = NULL; /* search for the destination associated with the diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/aso.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/aso.c index c971ff04dd0463a368575dd85f8f7f5b069ccb7c..0f9e4f01c85a2cd0418ede16a735feb2e051b5df 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/lib/aso.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/aso.c @@ -334,9 +334,6 @@ struct mlx5_aso *mlx5_aso_create(struct mlx5_core_dev *mdev, u32 pdn) void mlx5_aso_destroy(struct mlx5_aso *aso) { - if (IS_ERR_OR_NULL(aso)) - return; - mlx5_aso_destroy_sq(aso); mlx5_aso_destroy_cq(&aso->cq); kfree(aso); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c index 70e8dc305bec6a2c653201551fe32ad2f3ecd672..7f5db13e3550c0cdca124a6e714410d7b12cb505 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c @@ -37,7 +37,6 @@ #include #include #include -#include #include #include #include @@ -1604,8 +1603,6 @@ int mlx5_mdev_init(struct mlx5_core_dev *dev, int profile_idx) int err; memcpy(&dev->profile, &profile[profile_idx], sizeof(dev->profile)); - INIT_LIST_HEAD(&priv->ctx_list); - spin_lock_init(&priv->ctx_lock); lockdep_register_key(&dev->lock_key); mutex_init(&dev->intf_state_mutex); lockdep_set_class(&dev->intf_state_mutex, &dev->lock_key); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/uar.c b/drivers/net/ethernet/mellanox/mlx5/core/uar.c index 8455e79bc44a69bf32237c4a7d918fd50b0f5a1c..1513112ecec8f75c24e7ec1026a68edd6861f18d 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/uar.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/uar.c @@ -31,7 +31,6 @@ */ #include -#include #include #include "mlx5_core.h" diff --git a/include/linux/mlx5/device.h b/include/linux/mlx5/device.h index eb3fac30488bb6473a9fcdbb5ae52f3631e8dc10..5fe5d198b57ade8b1ec02435f63da2fa0e17aebd 100644 --- a/include/linux/mlx5/device.h +++ b/include/linux/mlx5/device.h @@ -290,10 +290,9 @@ enum { MLX5_UMR_INLINE = (1 << 7), }; -#define MLX5_UMR_KLM_ALIGNMENT 4 -#define MLX5_UMR_MTT_ALIGNMENT 0x40 -#define MLX5_UMR_MTT_MASK (MLX5_UMR_MTT_ALIGNMENT - 1) -#define MLX5_UMR_MTT_MIN_CHUNK_SIZE MLX5_UMR_MTT_ALIGNMENT +#define MLX5_UMR_FLEX_ALIGNMENT 0x40 +#define MLX5_UMR_MTT_NUM_ENTRIES_ALIGNMENT (MLX5_UMR_FLEX_ALIGNMENT / sizeof(struct mlx5_mtt)) +#define MLX5_UMR_KLM_NUM_ENTRIES_ALIGNMENT (MLX5_UMR_FLEX_ALIGNMENT / sizeof(struct mlx5_klm)) #define MLX5_USER_INDEX_LEN (MLX5_FLD_SZ_BYTES(qpc, user_index) * 8) diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h index 06cbad166225a317cfc4838e3e53f19bd9d610e1..d476255c9a3f0d9ea7f0dcd3c8231fbcd9e30d3e 100644 --- a/include/linux/mlx5/driver.h +++ b/include/linux/mlx5/driver.h @@ -606,8 +606,6 @@ struct mlx5_priv { struct list_head pgdir_list; /* end: alloc staff */ - struct list_head ctx_list; - spinlock_t ctx_lock; struct mlx5_adev **adev; int adev_idx; int sw_vhca_id;
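
A short illustration of the health reporter plumbing documented above: devlink
dispatches the `diagnose`/`dump`/`recover` commands through struct
devlink_health_reporter_ops. Below is a minimal, diagnose-only sketch, not
mlx5's actual code (the real reporters live under
drivers/net/ethernet/mellanox/mlx5/core/); all demo_* names and the constant
syndrome value are made up:

#include <net/devlink.h>

static int demo_fw_diagnose(struct devlink_health_reporter *reporter,
			    struct devlink_fmsg *fmsg,
			    struct netlink_ext_ack *extack)
{
	/* A real driver would query device state here; report a fixed value. */
	return devlink_fmsg_u32_pair_put(fmsg, "syndrome", 0);
}

static const struct devlink_health_reporter_ops demo_fw_ops = {
	.name = "fw",
	.diagnose = demo_fw_diagnose,
};

/* Called once from the driver's devlink registration path; a graceful
 * period of 0 places no minimum interval between automatic recoveries. */
static struct devlink_health_reporter *
demo_fw_reporter_create(struct devlink *devlink)
{
	return devlink_health_reporter_create(devlink, &demo_fw_ops, 0, NULL);
}

With such a reporter registered, the `devlink health diagnose ... reporter fw`
command shown in the documentation hunks routes to the .diagnose callback.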
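
On the MLX5_UMR_* rename in include/linux/mlx5/device.h above: the patch keeps
one byte-granularity constant, MLX5_UMR_FLEX_ALIGNMENT (0x40), and derives the
per-entry-type counts from it; with the 8-byte struct mlx5_mtt and 16-byte
struct mlx5_klm, this works out to 8 MTT or 4 KLM entries per alignment unit.
A sketch of the two equivalent roundings used by the callers (the demo_*
helper names are invented for illustration):

#include <linux/align.h>
#include <linux/mlx5/device.h>

/* Byte form: pad a UMR translation table to the flex alignment, as the
 * ALIGN(..., MLX5_UMR_FLEX_ALIGNMENT) calls in umr.c and en/params.c do.
 */
static size_t demo_umr_xlt_padded_bytes(size_t nentries, size_t ent_size)
{
	return ALIGN(nentries * ent_size, MLX5_UMR_FLEX_ALIGNMENT);
}

/* Entry-count form of the same rule, as odp.c and en_rx.c now express it
 * through MLX5_UMR_MTT_NUM_ENTRIES_ALIGNMENT.
 */
static size_t demo_umr_mtt_padded_entries(size_t nentries)
{
	return ALIGN(nentries, MLX5_UMR_MTT_NUM_ENTRIES_ALIGNMENT);
}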
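
On the DECLARE_FLEX_ARRAY() change in en_tc.h above: a true flexible array
member ("type name[];") is not valid directly inside a union, which is why the
old zero-length arrays were used; the helper from <linux/stddef.h> wraps the
member in an anonymous struct, keeping flexible-array semantics in a form that
modern compilers and array-bounds checking understand. A self-contained sketch
of the pattern (all demo_* types are invented for the example):

#include <linux/slab.h>
#include <linux/stddef.h>

struct demo_esw_attr { int vport; };
struct demo_nic_attr { int flow_tag; };

struct demo_flow_attr {
	int action;
	/* keep this union last, as mlx5_flow_attr does */
	union {
		DECLARE_FLEX_ARRAY(struct demo_esw_attr, esw_attr);
		DECLARE_FLEX_ARRAY(struct demo_nic_attr, nic_attr);
	};
};

/* Allocate with trailing storage for the eswitch view of the union. */
static struct demo_flow_attr *demo_flow_attr_alloc_esw(void)
{
	return kzalloc(sizeof(struct demo_flow_attr) +
		       sizeof(struct demo_esw_attr), GFP_KERNEL);
}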