- 11 September 2020, 1 commit
-
-
Submitted by Jason Gunthorpe
mtr_umem_page_count() does the same thing, so replace it with the core code. Also, ib_umem_find_best_pgsz() should always be called to check that the umem meets the page_size requirement. If there is a limited set of page_sizes that work, the pgsz_bitmap should be set to that set. A return value of 0 is a failure and the umem cannot be used. Lightly tidy the control flow to implement this flow properly.
Link: https://lore.kernel.org/r/12-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Acked-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
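A minimal sketch of the check described above; the wrapper function is hypothetical, only ib_umem_find_best_pgsz() and ib_umem_num_dma_blocks() are core API:

```c
#include <rdma/ib_umem.h>

/* Hedged sketch: check_umem_page_size() is a hypothetical wrapper. */
static int check_umem_page_size(struct ib_umem *umem, u64 virt_addr,
				unsigned long hw_pgsz_bitmap)
{
	unsigned long page_size;

	/* Ask the core for the best page size allowed by the HW bitmap. */
	page_size = ib_umem_find_best_pgsz(umem, hw_pgsz_bitmap, virt_addr);
	if (!page_size)
		return -EINVAL;	/* 0 means the umem cannot be used */

	/* ib_umem_num_dma_blocks() replaces driver-local page counting. */
	pr_debug("umem maps to %zu blocks of %lu bytes\n",
		 ib_umem_num_dma_blocks(umem, page_size), page_size);
	return 0;
}
```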
-
- 10 September 2020, 6 commits
-
-
Submitted by Jason Gunthorpe
This helper does the same as rdma_for_each_block(), except it works on a umem. This simplifies most of the call sites.
Link: https://lore.kernel.org/r/4-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Acked-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
Acked-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
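A rough illustration of the helper; the surrounding function is hypothetical, not a driver function:

```c
#include <rdma/ib_umem.h>

/* Hedged sketch: fill_page_list() is illustrative only. */
static void fill_page_list(struct ib_umem *umem, unsigned long page_size,
			   u64 *pages)
{
	struct ib_block_iter biter;
	unsigned int i = 0;

	/* Walk the umem's SGL in aligned blocks of page_size bytes. */
	rdma_umem_for_each_dma_block(umem, &biter, page_size)
		pages[i++] = rdma_block_iter_dma_address(&biter);
}
```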
-
Submitted by Leon Romanovsky
Like any other verbs object, a CQ shouldn't fail during destroy, but mlx5_ib didn't follow this contract when IB verbs objects were mixed with DEVX. Such a mix leads to a situation where the FW and the kernel are fully interdependent on each other's reference counting. Kernel verbs and drivers that don't have DEVX flows shouldn't fail.
Fixes: e39afe3d ("RDMA: Convert CQ allocations to be under core responsibility")
Link: https://lore.kernel.org/r/20200907120921.476363-7-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Leon Romanovsky
In a similar way to other IB objects, restore the ability to return an error on SRQ destroy. Strictly speaking, this change is not necessary, and is provided here to ensure a symmetrical interface like the other destroy functions.
Fixes: 68e326de ("RDMA: Handle SRQ allocations by IB/core")
Link: https://lore.kernel.org/r/20200907120921.476363-5-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Leon Romanovsky
Like any other IB verbs objects, AHs are refcounted by ib_core. The release of those objects is controlled by ib_core with the promise that AH destroy can't fail. Since an AH is a SW-only object for now, this change makes dealloc_ah() behave like any other destroy IB flow.
Fixes: d3456914 ("RDMA: Handle AH allocations by IB/core")
Link: https://lore.kernel.org/r/20200907120921.476363-3-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Leon Romanovsky
IB verbs objects are refcounted by the kernel, and ib_core ensures that PD deallocation will succeed by calling it only once all other objects that depend on the PD have been released. This is achieved by managing various reference counters on such objects. The mlx5 driver didn't follow this standard flow when it allowed DEVX objects that are not managed by ib_core to be interleaved with the ones under ib_core responsibility. In such interleaved scenarios the deallocate command can fail, and ib_core will leave the uobject in its internal DB and attempt to clean it up later to free resources anyway. This change partially restores the return value of dealloc_pd() for all drivers, keeping in mind that non-DEVX devices and kernel verbs paths shouldn't fail.
Fixes: 21a428a0 ("RDMA: Handle PD allocations by IB/core")
Link: https://lore.kernel.org/r/20200907120921.476363-2-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
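A minimal sketch of the restored contract for a hypothetical driver; destroy_pd_fw_cmd() is a stand-in, not a real mlx5/hns call:

```c
#include <linux/bug.h>
#include <rdma/ib_verbs.h>

/* Hypothetical stand-in for the driver's firmware destroy command. */
static int destroy_pd_fw_cmd(struct ib_pd *ibpd)
{
	return 0; /* stub: a real driver would issue the FW command here */
}

/* .dealloc_pd returns int again, so a DEVX-interleaved user flow can
 * propagate a FW failure while kernel verbs paths still return 0. */
static int example_dealloc_pd(struct ib_pd *ibpd, struct ib_udata *udata)
{
	int err = destroy_pd_fw_cmd(ibpd);

	if (err && udata)
		return err;	/* user (possibly DEVX) flow: report the failure */

	WARN_ON(err);		/* kernel verbs must not fail */
	return 0;
}
```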
-
Submitted by Lijun Ou
Some variables are initialized at the point where they are used, so remove their unnecessary initial assignments.
Link: https://lore.kernel.org/r/1599547944-30671-1-git-send-email-oulijun@huawei.com
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 31 August 2020, 1 commit
-
-
Submitted by Weihang Li
The UDP source port number in RoCE v2 is used to create entropy for network routers (ECMP), load balancers and 802.3ad link aggregation switching that are not aware of RoCE IB headers. Now that the IB core provides an interface to get a hashed value for it, the fixed value used in the QPC and UD WQE of the hns driver can be replaced and the port number set dynamically. For the QPC of an RC QP, the value is hashed from flow_label if the user passes one in, or from the remote and local QPNs otherwise. For a UD WQE, it is set according to fl or as a random value.
Link: https://lore.kernel.org/r/1598002289-8611-1-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
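A sketch of how a driver might derive the port, assuming the core helper rdma_get_udp_sport(); the wrapper itself is illustrative:

```c
#include <rdma/ib_verbs.h>

/* Hedged sketch: example_get_udp_sport() is not a real hns function. */
static u16 example_get_udp_sport(u32 flow_label, u32 lqpn, u32 rqpn)
{
	/*
	 * The core hashes the user-supplied flow label if there is one,
	 * otherwise it derives a flow label from the local/remote QPNs,
	 * then maps it into the RoCE v2 UDP source port range.
	 */
	return rdma_get_udp_sport(flow_label, lqpn, rqpn);
}
```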
-
- 27 August 2020, 1 commit
-
-
Submitted by Lang Cheng
It should be considered an illegal operation if the ULP attempts to modify a QP from another state into the current hardware state. Otherwise, the ULP could modify some fields of the QPC at any time. For example, for a QP in the RTS state, modifying it "from RTR to RTS" can change the PSN, which is never what is expected.
Fixes: 9a443537 ("IB/hns: Add driver files for hns RoCE driver")
Link: https://lore.kernel.org/r/1598353674-24270-1-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
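A rough sketch of this kind of check; the helper itself is hypothetical, only ib_modify_qp_is_ok() is core API:

```c
#include <rdma/ib_verbs.h>

/* Hedged sketch, not the actual hns_roce_v2 code. */
static int example_check_modify_qp(enum ib_qp_state hw_state,
				   enum ib_qp_state claimed_cur_state,
				   enum ib_qp_state new_state,
				   enum ib_qp_type type,
				   enum ib_qp_attr_mask attr_mask)
{
	/*
	 * Reject a transition that starts from a state the QP is not
	 * actually in; otherwise fields such as the PSN could be rewritten
	 * behind the hardware's back (e.g. RTR->RTS on a QP already in RTS).
	 */
	if (claimed_cur_state != hw_state)
		return -EINVAL;

	if (!ib_modify_qp_is_ok(hw_state, new_state, type, attr_mask))
		return -EINVAL;

	return 0;
}
```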
-
- 20 August 2020, 1 commit
-
-
Submitted by Weihang Li
The reverted patch caused some issues with the SEND operation, so revert it to make the driver work correctly. A better solution that has been tested carefully will follow to solve the original problem. This reverts commit 711195e5.
Fixes: 711195e5 ("RDMA/hns: Reserve one sge in order to avoid local length error")
Link: https://lore.kernel.org/r/1597829984-20223-1-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 19 August 2020, 1 commit
-
-
Submitted by Colin Ian King
There is a spelling mistake in a dev_dbg message. Fix it.
Link: https://lore.kernel.org/r/20200805141111.22804-1-colin.king@canonical.com
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 31 July 2020, 3 commits
-
-
Submitted by Xi Wang
If the hns ROCEE reports a general error CQE (a type not specified by the IB General Specifications), there is no need to change the QP state to error; the driver should just skip it.
Fixes: 7c044adc ("RDMA/hns: Simplify the cqe code of poll cq")
Link: https://lore.kernel.org/r/1595932941-40613-8-git-send-email-liweihang@huawei.com
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
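A hedged sketch of the poll-CQ decision; the status values are placeholders, not the real hns register encodings:

```c
#include <linux/types.h>

/* Hedged sketch; EXAMPLE_* values are illustrative placeholders. */
enum {
	EXAMPLE_CQE_SUCCESS     = 0,
	EXAMPLE_CQE_GENERAL_ERR = 14,	/* not an IB-spec completion error */
};

static bool example_cqe_moves_qp_to_error(unsigned int cqe_status)
{
	/* Skip general errors: report nothing and leave the QP state alone. */
	return cqe_status != EXAMPLE_CQE_SUCCESS &&
	       cqe_status != EXAMPLE_CQE_GENERAL_ERR;
}
```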
-
Submitted by Lang Cheng
One legal QP state migration configuration was mistakenly deleted.
Fixes: 357f3429 ("RDMA/hns: Simplify the state judgment code of qp")
Link: https://lore.kernel.org/r/1595932941-40613-7-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Lang Cheng
hns_roce_cmq_setup_basic_desc() already clears the whole descriptor, so remove the redundant memset() operations.
Link: https://lore.kernel.org/r/1595932941-40613-6-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 30 July 2020, 4 commits
-
-
Submitted by Weihang Li
Some functions called by set_rc_wqe() take two parameters, "void *wqe" and "struct hns_roce_v2_rc_send_wqe *rc_sq_wqe", but the first one can be derived from the second. So remove the redundant wqe parameter from the related functions.
Link: https://lore.kernel.org/r/1595932941-40613-5-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Lang Cheng
HIP08_A is a temporary version and all of its features are supported by HIP08_B, so remove the related code.
Link: https://lore.kernel.org/r/1595932941-40613-4-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Weihang Li
The code that prepares and sends the mailbox to the hardware is not strongly related to the rest of hns_roce_v2_set_hem() and can be encapsulated into a separate function.
Link: https://lore.kernel.org/r/1595932941-40613-3-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Lang Cheng
The HNS_ROCE_SQ_OPCODE_XXX and HNS_ROCE_V2_WQE_OP_XXX definitions have the same values, so remove one of the redundant sets. In addition, remove the suffix of HNS_ROCE_V2_WQE_OP_BIND_MW_TYPE.
Link: https://lore.kernel.org/r/1595932941-40613-2-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 16 July 2020, 2 commits
-
-
Submitted by Xi Wang
RoCE uses "VA % buf_page_size" to calculate the offset into the PBL's first page; the actual PA corresponding to the MR's VA equals the MR's PA plus this offset. The first PA in the PBL has already been aligned to PAGE_SIZE after calling ib_umem_get(), but the MR's VA may not be. If buf_page_size is smaller than PAGE_SIZE, this leads the HW to access the wrong memory because the offset is smaller than expected.
Fixes: 9b2cf76c ("RDMA/hns: Optimize PBL buffer allocation process")
Link: https://lore.kernel.org/r/1594726935-45666-1-git-send-email-liweihang@huawei.com
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
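A small worked example of the mismatch, using illustrative numbers (PAGE_SIZE = 4096, buf_page_size = 1024) rather than values taken from the driver:

```c
#include <stdio.h>

int main(void)
{
	unsigned long page_size = 4096;      /* alignment of the first PBL PA */
	unsigned long buf_page_size = 1024;  /* HW page size chosen for the MR */
	unsigned long va = 0x10B45;          /* unaligned MR virtual address */

	/* Offset the HW adds to the PAGE_SIZE-aligned first PBL entry. */
	unsigned long hw_off = va % buf_page_size;   /* 0x345 */
	/* Offset actually needed relative to that aligned PA. */
	unsigned long real_off = va % page_size;     /* 0xB45 */

	printf("hw_off=%#lx real_off=%#lx -> HW reads the wrong address\n",
	       hw_off, real_off);
	return 0;
}
```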
-
Submitted by Weihang Li
The RoCE Engine schedules another QP after the current one has sent (2 ^ lp_pktn_ini) packets. lp_pktn_ini is set in the QPC and should be calculated from two factors: 1. the current MTU as an integer; 2. the RoCE Engine's maximum slice length, 64KB. But the driver used the MTU as an enum ib_mtu together with the max inline capability, so lp_pktn_ini became much bigger than expected, which may cause the traffic of some QPs to never get scheduled.
Fixes: b713128d ("RDMA/hns: Adjust lp_pktn_ini dynamically")
Link: https://lore.kernel.org/r/1594726138-49294-1-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
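A hedged sketch of the intended calculation; the macro and function names here are illustrative, ib_mtu_enum_to_int() and ilog2() are the only assumed kernel APIs:

```c
#include <linux/log2.h>
#include <rdma/ib_verbs.h>

#define EXAMPLE_MAX_SLICE_LEN	(64 * 1024)	/* RoCE Engine slice length */

/* Hedged sketch, not the exact hns macro. */
static u32 example_calc_lp_pktn_ini(enum ib_mtu mtu)
{
	/* Convert the enum (e.g. IB_MTU_1024) into a byte count first. */
	u32 mtu_bytes = ib_mtu_enum_to_int(mtu);

	/* 2^lp_pktn_ini packets of MTU bytes must fit into one 64KB slice. */
	return ilog2(EXAMPLE_MAX_SLICE_LEN / mtu_bytes);
}
```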
-
- 08 July 2020, 1 commit
-
-
Submitted by Xi Wang
If the hns ROCEE is set to level-0 addressing, the length of the entire buffer can be used as the page size. The driver doesn't need to split the buffer into small units because all pages are contiguous.
Link: https://lore.kernel.org/r/1593525696-12570-1-git-send-email-liweihang@huawei.com
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
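A minimal sketch of the idea with a hypothetical helper; order_base_2() is the only assumed kernel API:

```c
#include <linux/log2.h>

/* Hedged sketch: treat the whole contiguous buffer as one "page". */
static unsigned int example_level0_page_shift(size_t buf_len)
{
	/* One page spans the entire buffer, so no PBL split is needed. */
	return order_base_2(buf_len);
}
```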
-
- 07 July 2020, 1 commit
-
-
Submitted by Gal Pressman
The allocate MR flow can only be initiated by kernel users, not from userspace, so the udata parameter is redundant.
Link: https://lore.kernel.org/r/20200706120343.10816-4-galpress@amazon.com
Signed-off-by: Gal Pressman <galpress@amazon.com>
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 23 June 2020, 1 commit
-
-
Submitted by Maor Gottlieb
In order to avoid double multiplexing of the resource when it is a CQ, add a dedicated callback function.
Link: https://lore.kernel.org/r/20200623113043.1228482-6-leon@kernel.org
Signed-off-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 18 June 2020, 2 commits
-
-
Submitted by Yangyang Li
If an IMP reset caused by a hardware error and an hns RoCE driver reset occur at the same time, there is a possibility that the IMP will stop handling commands and users can't use the hardware. The logs are as follows:
hns3 0000:fd:00.1: cleaned 0, need to clean 1
hns3 0000:fd:00.1: firmware version query failed -11
hns3 0000:fd:00.1: Cmd queue init failed
hns3 0000:fd:00.1: Upgrade reset level
hns3 0000:fd:00.1: global reset interrupt
The hns NIC driver divides the reset process into three states: initialization, hardware resetting and software resetting. The RoCE driver gets the reset state through interfaces provided by the NIC driver, and commands are not sent to the IMP if the driver is in any of the above states. The main reason for this issue is that there is a time gap between states 1 and 2; if the RoCE driver sends commands to the IMP during this gap, the IMP will stop working because it is not ready. To eliminate the time gap, the hns NIC driver added a new interface in commit a4de0228 ("net: hns3: provide .get_cmdq_stat interface for the client"), so the RoCE driver can ensure that no commands are sent during resetting.
Link: https://lore.kernel.org/r/1592314778-52822-1-git-send-email-liweihang@huawei.com
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Yangyang Li
ibmr.device is assigned after the MR is successfully registered, but both write_mtpt() and frmr_write_mtpt() access it during the MR registration process, which may cause the following error when trying to register an MR from userspace with pbl_hop_num set to 0.
pc : hns_roce_mtr_find+0xa0/0x200 [hns_roce]
lr : set_mtpt_pbl+0x54/0x118 [hns_roce_hw_v2]
sp : ffff00023e73ba20
x29: ffff00023e73ba20 x28: ffff00023e73bad8 x27: 0000000000000000
x26: 0000000000000000 x25: 0000000000000002 x24: 0000000000000000
x23: ffff00023e73bad0 x22: 0000000000000000 x21: ffff0000094d9000
x20: 0000000000000000 x19: ffff8020a6bdb2c0 x18: 0000000000000000
x17: 0000000000000000 x16: 0000000000000000 x15: 0000000000000000
x14: 0000000000000000 x13: 0140000000000000 x12: 0040000000000041
x11: ffff000240000000 x10: 0000000000001000 x9 : 0000000000000000
x8 : ffff802fb7558480 x7 : ffff802fb7558480 x6 : 000000000003483d
x5 : ffff00023e73bad0 x4 : 0000000000000002 x3 : ffff00023e73bad8
x2 : 0000000000000000 x1 : 0000000000000000 x0 : ffff0000094d9708
Call trace:
 hns_roce_mtr_find+0xa0/0x200 [hns_roce]
 set_mtpt_pbl+0x54/0x118 [hns_roce_hw_v2]
 hns_roce_v2_write_mtpt+0x14c/0x168 [hns_roce_hw_v2]
 hns_roce_mr_enable+0x6c/0x148 [hns_roce]
 hns_roce_reg_user_mr+0xd8/0x130 [hns_roce]
 ib_uverbs_reg_mr+0x14c/0x2e0 [ib_uverbs]
 ib_uverbs_write+0x27c/0x3e8 [ib_uverbs]
 __vfs_write+0x60/0x190
 vfs_write+0xac/0x1c0
 ksys_write+0x6c/0xd8
 __arm64_sys_write+0x24/0x30
 el0_svc_common+0x78/0x130
 el0_svc_handler+0x38/0x78
 el0_svc+0x8/0xc
Solve the above issue by adding a pointer to struct hns_roce_dev as a parameter of write_mtpt() and frmr_write_mtpt(), so that both functions can access it before the MR registration finishes.
Fixes: 9b2cf76c ("RDMA/hns: Optimize PBL buffer allocation process")
Link: https://lore.kernel.org/r/1592314629-51715-1-git-send-email-liweihang@huawei.com
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
- 14 June 2020, 1 commit
-
-
Submitted by Masahiro Yamada
Since commit 84af7a61 ("checkpatch: kconfig: prefer 'help' over '---help---'"), the number of '---help---' instances has been gradually decreasing, but there are still more than 2400 of them. This commit finishes the conversion. While touching the lines, I also fixed the indentation. A variety of indentation styles were found:
a) 4 spaces + '---help---'
b) 7 spaces + '---help---'
c) 8 spaces + '---help---'
d) 1 space + 1 tab + '---help---'
e) 1 tab + '---help---' (correct indentation)
f) 1 tab + 1 space + '---help---'
g) 1 tab + 2 spaces + '---help---'
In order to convert all of them to 1 tab + 'help', I ran the following command:
$ find . -name 'Kconfig*' | xargs sed -i 's/^[[:space:]]*---help---/\thelp/'
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
-
- 03 June 2020, 1 commit
-
-
Submitted by Dan Carpenter
The "dmac" variable is used before it is initialized.
Fixes: 494c3b31 ("RDMA/hns: Refactor the QP context filling process related to WQE buffer configure")
Link: https://lore.kernel.org/r/20200529083918.GA1298465@mwanda
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
- 30 May 2020, 1 commit
-
-
Submitted by Colin Ian King
The pointer raq is assigned twice. Fix this by removing one of the redundant assignments.
Fixes: 14ba8730 ("RDMA/hns: Remove redundant type cast for general pointers")
Link: https://lore.kernel.org/r/20200528150427.420624-1-colin.king@canonical.com
Addresses-Coverity: ("Evaluation order violation")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
- 26 May 2020, 12 commits
-
-
Submitted by Yixian Liu
Comparing the sge number of the wr, rather than i, with the maximum supported sge number of the queue makes the comparison clearer: when the sge number in the wr is smaller than the maximum supported sge number of the queue, a stop sge needs to be filled in at the end of the wr's sges.
Link: https://lore.kernel.org/r/1590152579-32364-5-git-send-email-liweihang@huawei.com
Signed-off-by: Yixian Liu <liuyixian@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Lang Cheng
Make hns_roce_v2_cq_set_ci() an inline function and remove the unnecessary next_cqe_sw_v2().
Link: https://lore.kernel.org/r/1590152579-32364-4-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Wenpeng Liang
The redundant parameter "hr_dev" needs to be removed from free_kernel_wrid() and free_srq_wrid().
Link: https://lore.kernel.org/r/1590152579-32364-3-git-send-email-liweihang@huawei.com
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Weihang Li
There is no need to cast general (void *) pointers; they can be assigned to variables of any pointer type. In addition, optimize the initialization of some variables and adjust their order.
Link: https://lore.kernel.org/r/1590152579-32364-2-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Xi Wang
Currently, the MTR region is configured before hns_roce_mtr_map() is invoked, but in some scenarios the region is configured at MTR creation and the caller needs to store this configuration and call hns_roce_mtr_map() later. Optimize the usage by wrapping the MTR region configuration into the MTR itself.
Link: https://lore.kernel.org/r/1589982799-28728-10-git-send-email-liweihang@huawei.com
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Xi Wang
Split the code related to WQE buffer configuration out of the QPC filling process into two functions, config_qp_sq_buf() and config_qp_rq_buf(), to make the code more readable.
Link: https://lore.kernel.org/r/1589982799-28728-9-git-send-email-liweihang@huawei.com
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Weihang Li
The number of sges/eqes is always non-negative, so these fields should be defined with unsigned types.
Link: https://lore.kernel.org/r/1589982799-28728-8-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Weihang Li
page_shift is used to calculate the page size and is always non-negative, so it should be defined as an unsigned type.
Link: https://lore.kernel.org/r/1589982799-28728-7-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Xi Wang
Rename the functions related to the QP buffer to make the code more readable.
Link: https://lore.kernel.org/r/1589982799-28728-6-git-send-email-liweihang@huawei.com
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Yangyang Li
The code related to assert() is no longer used and should be deleted.
Link: https://lore.kernel.org/r/1589982799-28728-5-git-send-email-liweihang@huawei.com
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Lang Cheng
Add unlikely() and likely() hints to optimize the main I/O path code.
Link: https://lore.kernel.org/r/1589982799-28728-4-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
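An illustrative use of the hints on a hot posting path; the function itself is hypothetical, not hns code:

```c
#include <linux/compiler.h>
#include <linux/errno.h>

/* Hedged sketch of a fast-path check with a branch-prediction hint. */
static int example_post_one_wqe(void *wqe, unsigned int num_sge,
				unsigned int max_sge)
{
	/* A malformed work request is the rare case on the fast path. */
	if (unlikely(num_sge > max_sge))
		return -EINVAL;

	/* ... the WQE pointed to by 'wqe' would be filled here ... */
	return 0;
}
```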
-
Submitted by Lang Cheng
It's easier to understand and maintain the CQ enable flags as a single u32 field than to define a separate field for every flag in struct hns_roce_cq, and new feature flags can be added more conveniently in the future.
Link: https://lore.kernel.org/r/1589982799-28728-3-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
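A small sketch of the single-flags-field pattern; the structure and flag names are illustrative, not the real hns definitions:

```c
#include <linux/bits.h>
#include <linux/types.h>

/* Hedged sketch: illustrative flag names, not the hns ones. */
#define EXAMPLE_CQ_FLAG_RECORD_DB	BIT(0)
#define EXAMPLE_CQ_FLAG_ARMED		BIT(1)

struct example_cq {
	u32 flags;	/* one u32 replaces a separate field per feature */
};

static bool example_cq_uses_record_db(const struct example_cq *cq)
{
	return cq->flags & EXAMPLE_CQ_FLAG_RECORD_DB;
}
```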
-