- 24 June 2021: 1 commit
-
-
Submitted by Selvin Xavier

Updating the maintainers file as Devesh decided to leave Broadcom.

Link: https://lore.kernel.org/r/1624436089-28263-1-git-send-email-selvin.xavier@broadcom.com
Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 23 June 2021: 23 commits
-
-
Submitted by Kamal Heib

Instead of hard-coding the gid_table_len value, use the value returned in
the ib_query_port() attributes.

Fixes: b48c24c2 ("RDMA/irdma: Implement device supported verb APIs")
Link: https://lore.kernel.org/r/20210620201503.67055-1-kamalheib1@gmail.com
Signed-off-by: Kamal Heib <kamalheib1@gmail.com>
Acked-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
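A minimal sketch of the pattern, with invented names (my_port_immutable() is
illustrative, not the irdma callback itself): query the port attributes and
take gid_tbl_len from them instead of using a constant.

    static int my_port_immutable(struct ib_device *ibdev, u32 port_num,
                                 struct ib_port_immutable *immutable)
    {
            struct ib_port_attr attr;
            int err;

            err = ib_query_port(ibdev, port_num, &attr);
            if (err)
                    return err;

            /* use the queried value rather than a hard-coded table size */
            immutable->gid_tbl_len = attr.gid_tbl_len;
            immutable->pkey_tbl_len = attr.pkey_tbl_len;
            return 0;
    }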
-
Submitted by Bob Pearson

rxe_init_packet() in rxe_net.c calls skb_put_zero() to reserve space for
the payload and zero it out. All these bytes are then re-written with RoCE
headers and payload. Remove this useless extra copy.

Fixes: ecb238f6 ("IB/cxgb4: use skb_put_zero()/__skb_put_zero")
Link: https://lore.kernel.org/r/20210618045742.204195-7-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Bob Pearson

Currently prepare_ack_packet() writes almost all the fields of the BTH in
the ack packet twice. Replace this code with the subroutine init_bth().

Fixes: 8700e3e7 ("Soft RoCE driver")
Link: https://lore.kernel.org/r/20210618045742.204195-6-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Bob Pearson

Currently get_srq_wqe() in rxe_resp.c copies the maximum possible number
of bytes from the wqe into the QP's copy of the SRQ wqe. This is usually
extra work, and it risks reading past the end of the SRQ circular buffer
if the SRQ is configured with less than the maximum possible number of
SGEs. Check that the number of SGEs is not too large, compute the actual
number of bytes in the WR, and copy only those.

Fixes: 8700e3e7 ("Soft RoCE driver")
Link: https://lore.kernel.org/r/20210618045742.204195-5-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
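A minimal sketch of the bounds check and partial copy described above,
assuming an illustrative receive-WQE layout (the struct and field names are
placeholders, not the exact rxe definitions):

    #include <rdma/ib_verbs.h>

    struct recv_wqe {
            u64 wr_id;
            u32 num_sge;
            u32 padding;
            struct ib_sge sge[];    /* up to max_sge entries */
    };

    static int copy_srq_wqe(struct recv_wqe *dst, const struct recv_wqe *src,
                            u32 max_sge)
    {
            size_t size;

            /* reject WRs that would read past the SRQ circular buffer */
            if (unlikely(src->num_sge > max_sge))
                    return -EINVAL;

            /* copy only the header plus the SGEs actually present in the WR */
            size = sizeof(*src) + src->num_sge * sizeof(struct ib_sge);
            memcpy(dst, src, size);
            return 0;
    }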
-
Submitted by Bob Pearson

build_rdma_network_hdr() in rxe_resp.c does more copying than is needed.
Remove this subroutine, eliminating the extra copies for IPv6 and reducing
the extra copying for IPv4.

Fixes: e404f945 ("IB/rxe: improved debug prints & code cleanup")
Link: https://lore.kernel.org/r/20210618045742.204195-4-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Bob Pearson

For IPv4 packets sent on the wire, the rxe driver calls ip_local_out(),
which immediately calls __ip_local_out(), which sets iph->tot_len and
calls ip_send_check(). This code is duplicated in prepare4(). On the
loopback path the IP header checksum and tot_len fields are not used, so
they do not need to be set. Remove this redundant code.

Fixes: 8700e3e7 ("Soft RoCE driver")
Link: https://lore.kernel.org/r/20210618045742.204195-3-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Bob Pearson

In send_atomic_ack() in rxe_resp.c there is code copying ack_pkt into
skb->cb[]. This doesn't do anything useful because cb[] is not used in the
transmit path by the rxe driver. Remove this code.

Fixes: 4c93496f ("IB/rxe: do not copy extra stack memory to skb")
Link: https://lore.kernel.org/r/20210618045742.204195-2-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearson@hpe.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Lang Cheng

ULPs can get CQ error information through the verbs interface instead of
from driver prints.

Link: https://lore.kernel.org/r/1624362836-11631-1-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Shiraz Saleem

iwmr->page_size stores the return value of ib_umem_find_best_pgsz() and
may be zero; when it is later used in ib_umem_num_dma_blocks() this causes
a divide-by-zero error. Fix this by erroring out of irdma_reg_user_mr()
when ib_umem_find_best_pgsz() returns 0.

Fixes: b48c24c2 ("RDMA/irdma: Implement device supported verb APIs")
Link: https://lore.kernel.org/r/20210622175232.439-3-tatyana.e.nikolova@intel.com
Reported-by: coverity-bot <keescook+coverity-bot@chromium.org>
Addresses-Coverity-ID: 1505149 ("Integer handling issues")
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
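A minimal sketch of the fix, assuming illustrative variable names and an
illustrative page-size bitmap (the real driver passes its hardware's
supported page sizes): bail out before the zero value can reach
ib_umem_num_dma_blocks() as a divisor.

    iwmr->page_size = ib_umem_find_best_pgsz(region, SZ_4K | SZ_2M | SZ_1G,
                                              virt);
    if (unlikely(!iwmr->page_size)) {
            /* no supported page size found; do not continue registration */
            ib_umem_release(region);
            return ERR_PTR(-EOPNOTSUPP);
    }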
-
Submitted by Lang Cheng

'orignal' should be 'original'.

Fixes: 9a443537 ("IB/hns: Add driver files for hns RoCE driver")
Link: https://lore.kernel.org/r/1624011020-16992-11-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Yixing Liu

The QP type has already been checked in check_send_valid(); if it is not
RC, the UD/GSI branch is taken, so the check does not need to be repeated
here.

Link: https://lore.kernel.org/r/1624011020-16992-10-git-send-email-liweihang@huawei.com
Signed-off-by: Yixing Liu <liuyixing1@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Wenpeng Liang

The process of flushing CQEs can be encapsulated into a function, which
reduces duplicate code.

Link: https://lore.kernel.org/r/1624011020-16992-9-git-send-email-liweihang@huawei.com
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Yangyang Li

hns_roce_init_qp_table() can only return 0, so the function does not need
a return value; change it to void.

Link: https://lore.kernel.org/r/1624011020-16992-8-git-send-email-liweihang@huawei.com
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Xi Wang

Remove unused members from the EQ context structure.

Fixes: 782832f2 ("RDMA/hns: Simplify the function config_eqc()")
Link: https://lore.kernel.org/r/1624011020-16992-7-git-send-email-liweihang@huawei.com
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Yangyang Li

When query_qp is called from userspace, max_send_wr and max_send_sge are
set to 0 by the kernel driver. However, userspace does not use these two
return values from the kernel driver; it uses its own calculated values
instead. So there is no need for special treatment.

Fixes: 926a01dc ("RDMA/hns: Add QP operations support for hip08 SoC")
Link: https://lore.kernel.org/r/1624011020-16992-6-git-send-email-liweihang@huawei.com
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Yangyang Li

Some kernel ULPs rely on the values returned in qp_init_attr, so add the
missing member assignments for qp_init_attr.

Fixes: 926a01dc ("RDMA/hns: Add QP operations support for hip08 SoC")
Link: https://lore.kernel.org/r/1624011020-16992-5-git-send-email-liweihang@huawei.com
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Yixing Liu

Remove a redundant print and fix a character type mismatch.

Fixes: 0e0ab04b ("RDMA/hns: Refactor the MTR creation flow")
Link: https://lore.kernel.org/r/1624011020-16992-4-git-send-email-liweihang@huawei.com
Signed-off-by: Yixing Liu <liuyixing1@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Yixing Liu

A random value will be returned if the checked condition is not met, so
the variable needs to be initialized.

Fixes: 9ea9a53e ("RDMA/hns: Add mapped page count checking for MTR")
Link: https://lore.kernel.org/r/1624011020-16992-3-git-send-email-liweihang@huawei.com
Signed-off-by: Yixing Liu <liuyixing1@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Lang Cheng

When a non-inline WR reuses a WQE that was last used for an inline
operation, the leftover inline flag should be cleared.

Fixes: 62490fd5 ("RDMA/hns: Avoid unnecessary memset on WQEs in post_send")
Link: https://lore.kernel.org/r/1624011020-16992-2-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Jason Gunthorpe

Aharon Landau says:

====================
If the device supports only a real-time timestamp, the kernel will fail to
create the QP even though rdma-core requested that timestamp type. This is
because the device returns a free-running timestamp, and the conversion
from free-running to real-time is performed in user space. This series
fixes it by returning a real-time timestamp.
====================

* mlx5_realtime_ts:
  RDMA/mlx5: Support real-time timestamp directly from the device
  RDMA/mlx5: Refactor get_ts_format functions to simplify code

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Jason Gunthorpe

Linux 5.13-rc7

Needed for dependencies in the following patches. Merge conflict in
rxe_comp.c resolved by combining both patches.

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Aharon Landau

Currently, if the user asks for a real-time timestamp, the device will
return a free-running one, and the timestamp will be translated to
real-time in user space. When the device supports only the real-time
timestamp and not the free-running one, creation of the QP will fail even
though the user asked for a type the device supports. To prevent this,
return the real-time timestamp directly from the device.

Link: https://lore.kernel.org/r/c6cfc8e6f038575c5c2de6505830f7e74e4de80d.1623829775.git.leonro@nvidia.com
Signed-off-by: Aharon Landau <aharonl@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Kees Cook

In preparation for FORTIFY_SOURCE performing compile-time and run-time
field bounds checking for memcpy(), memmove(), and memset(), avoid
intentionally reading across neighboring array fields. Without a flexible
array, this looks like an attempt to perform a memcpy() read beyond the
end of the packet->mad.data array:

  drivers/infiniband/core/user_mad.c:
      memcpy(packet->msg->mad, packet->mad.data, IB_MGMT_MAD_HDR);

Switch from [0] to [] to use the appropriately handled type for trailing
bytes.

Link: https://lore.kernel.org/r/20210616202615.1247242-1-keescook@chromium.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
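A minimal illustration of the [0] to [] conversion, using invented struct
names rather than the exact UAPI definitions: a C99 flexible array member
tells the compiler, and FORTIFY_SOURCE, that trailing bytes may legitimately
follow the fixed header.

    #include <linux/types.h>

    struct mad_hdr {
            __u32 id;
            __u32 status;
    };

    struct mad_packet {
            struct mad_hdr hdr;
            __u64 data[];   /* was: __u64 data[0]; (zero-length array) */
    };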
-
- 22 June 2021: 16 commits
-
-
Submitted by Aharon Landau

The QPC, SQC and RQC timestamp formats and capabilities are always equal
because they represent general hardware support. So instead of duplicating
the code, merge them into a common enum and shared logic.

Signed-off-by: Aharon Landau <aharonl@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
-
Submitted by Xiao Yang

rxe_mr_init_user() always returns a fixed -EINVAL when ib_umem_get()
fails, so it is hard for the user to know which error actually occurred in
ib_umem_get(). For example, ib_umem_get() will return -EOPNOTSUPP when
trying to pin pages on a DAX file. Return the actual error, as mlx4/mlx5
do.

Link: https://lore.kernel.org/r/20210621071456.4259-1-ice_yangxiao@163.com
Signed-off-by: Xiao Yang <yangx.jy@fujitsu.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
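A minimal sketch of the change, assuming illustrative variable names:
propagate the error carried by the ERR_PTR from ib_umem_get() instead of
collapsing every failure to -EINVAL.

    umem = ib_umem_get(ibdev, start, length, access);
    if (IS_ERR(umem)) {
            pr_warn("%s: unable to pin memory region\n", __func__);
            err = PTR_ERR(umem);    /* was: err = -EINVAL; */
            goto err_out;
    }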
-
Submitted by Kees Cook

In preparation for FORTIFY_SOURCE performing compile-time and run-time
field bounds checking for memcpy(), memmove(), and memset(), avoid
intentionally writing across neighboring array fields. Use the
ether_addr_copy() helper instead, as already done for smac.

Link: https://lore.kernel.org/r/20210616203744.1248551-1-keescook@chromium.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
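A sketch of the substitution with illustrative destination naming (the
actual struct members differ per driver): ether_addr_copy() copies exactly
ETH_ALEN (6) bytes between byte arrays, so nothing is read or written across
neighbouring struct fields.

    #include <linux/etherdevice.h>

    /* was: memcpy(dmac, ah_attr->roce.dmac, ETH_ALEN); */
    ether_addr_copy(dmac, ah_attr->roce.dmac);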
-
Submitted by Jack Wang

With fast memory registration on the write request path, rnbd-clt can do
bigger IOs without splitting them. rnbd-clt can now query rtrs-clt for
max_segments instead of using BMAX_SEGMENTS. BMAX_SEGMENTS is no longer
needed, so remove it.

Link: https://lore.kernel.org/r/20210621055340.11789-6-jinpu.wang@ionos.com
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
Reviewed-by: Md Haris Iqbal <haris.iqbal@cloud.ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Jack Wang

As we can do fast memory registration on write, we can increase
max_segments; default it to 512K.

Link: https://lore.kernel.org/r/20210621055340.11789-5-jinpu.wang@ionos.com
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
Reviewed-by: Md Haris Iqbal <haris.iqbal@cloud.ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Jack Wang

With write-path fast memory registration, we need less memory for each
request, so max_send_sge can be reduced to save memory. Also convert the
kmalloc_array() to kcalloc().

Link: https://lore.kernel.org/r/20210621055340.11789-4-jinpu.wang@ionos.com
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
Reviewed-by: Md Haris Iqbal <haris.iqbal@cloud.ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Jack Wang

With fast memory registration in the write path, we can reduce memory
consumption by using a smaller max_send_sge, support IO bigger than 116 KB
(29 segments * 4 KB) without splitting, and make the IO path more
symmetric. To avoid occasional MR registration failures, wait for the
invalidation to finish before registering the new MR. Introduce a refcount
and only finish the request when both the local invalidation and the IO
reply have arrived.

Link: https://lore.kernel.org/r/20210621055340.11789-3-jinpu.wang@ionos.com
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
Signed-off-by: Md Haris Iqbal <haris.iqbal@ionos.com>
Signed-off-by: Dima Stepanov <dmitrii.stepanov@ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
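A minimal sketch of the two-event completion scheme, with invented names
(struct io_req and finish_request() are placeholders, not the rtrs-clt
symbols): the request holds one reference for the local-invalidate
completion and one for the IO reply, and is finished only after both have
dropped their reference.

    #include <linux/refcount.h>

    struct io_req {
            refcount_t ref;         /* one for the inval WC, one for the reply */
            /* ... request state ... */
    };

    static void req_start(struct io_req *req)
    {
            refcount_set(&req->ref, 2);
    }

    static void req_put(struct io_req *req)
    {
            /* called from both the invalidate completion and the reply handler */
            if (refcount_dec_and_test(&req->ref))
                    finish_request(req);    /* hypothetical completion helper */
    }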
-
Submitted by Jack Wang

Introduce a tail WR that we can send as the last WR; in a later patch we
want to send the local-invalidate WR after the RDMA WR. While at it, also
fix a coding style issue.

Link: https://lore.kernel.org/r/20210621055340.11789-2-jinpu.wang@ionos.com
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
Reviewed-by: Md Haris Iqbal <haris.iqbal@cloud.ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Devesh Sharma

Change the ucontext ABI response structure to pass wqe_mode to the user
library. A flag in comp_mask is set to indicate the presence of wqe_mode.
Also move the wqe_mode ABI to uapi/rdma/bnxt_re-abi.h.

Link: https://lore.kernel.org/r/20210616202817.1185276-1-devesh.sharma@broadcom.com
Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
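A sketch of the general ABI-extension pattern being used, with invented
names (my_uctx_resp and the flag constant are illustrative, not the bnxt_re
UAPI): the new field is appended to the response struct and its presence is
advertised through a comp_mask bit, so older user libraries simply ignore
it.

    enum {
            MY_UCNTX_CMASK_HAVE_WQE_MODE = 1ULL << 0,       /* illustrative flag */
    };

    struct my_uctx_resp {
            __aligned_u64 comp_mask;
            __u32 wqe_mode;
            __u32 rsvd;
    };

    /* kernel side, while building the alloc_ucontext response: */
    resp.wqe_mode = wqe_mode;                       /* assumed driver variable */
    resp.comp_mask |= MY_UCNTX_CMASK_HAVE_WQE_MODE;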
-
Submitted by Anand Khoje

pahole shows two 4-byte holes in struct ib_port_data, after pkey_list_lock
and netdev_lock respectively. Moving netdev_lock to sit right after
pkey_list_lock shaves eight bytes off the struct.

Link: https://lore.kernel.org/r/20210616154509.1047-3-anand.a.khoje@oracle.com
Suggested-by: Haakon Bugge <haakon.bugge@oracle.com>
Signed-off-by: Anand Khoje <anand.a.khoje@oracle.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
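An illustration of the reordering on a simplified field list (only the
relevant members are shown; on a 64-bit build without lock debugging a
spinlock_t is 4 bytes, so pairing the two locks removes both 4-byte holes):

    struct ib_port_data {
            /* ... */
            spinlock_t pkey_list_lock;
            spinlock_t netdev_lock;         /* moved up next to pkey_list_lock */
            struct list_head pkey_list;
            /* ... */
    };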
-
Submitted by Anand Khoje

Remove the port validity check from ib_get_cached_subnet_prefix(), as the
check is not needed because "port_num" is already valid.

Link: https://lore.kernel.org/r/20210616154509.1047-2-anand.a.khoje@oracle.com
Suggested-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Anand Khoje <anand.a.khoje@oracle.com>
Signed-off-by: Haakon Bugge <haakon.bugge@oracle.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Leon Romanovsky

Compilation with W=1 produces warnings similar to the one below.

  drivers/infiniband/ulp/ipoib/ipoib_main.c:320: warning: This comment
      starts with '/**', but isn't a kernel-doc comment. Refer
      Documentation/doc-guide/kernel-doc.rst

All such occurrences were found with the following one-liner:

  git grep -A 1 "\/\*\*" drivers/infiniband/

Link: https://lore.kernel.org/r/e57d5f4ddd08b7a19934635b44d6d632841b9ba7.1623823612.git.leonro@nvidia.com
Reviewed-by: Jack Wang <jinpu.wang@ionos.com> #rtrs
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
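A short illustration of the distinction the warning is about, using a
hypothetical function: plain comments open with a single asterisk, while the
double-asterisk opener is reserved for properly formatted kernel-doc blocks.

    /* Plain comment: a single asterisk is fine for internal notes. */

    /**
     * my_func() - one-line summary (hypothetical example)
     * @arg: description of the parameter
     *
     * Return: the value of @arg, unchanged.
     */
    static int my_func(int arg)
    {
            return arg;
    }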
-
Submitted by Yangyang Li

Switch xrcd index allocation and release from hns' own bitmap interface to
the IDA interface.

Link: https://lore.kernel.org/r/1623325814-55737-7-git-send-email-liweihang@huawei.com
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
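A generic sketch of the bitmap-to-IDA conversion used here and in the
following pd and mtpt patches, with invented names (my_index_ida,
alloc_index, free_index): the IDA handles locking and ID recycling
internally.

    #include <linux/idr.h>

    static DEFINE_IDA(my_index_ida);

    static int alloc_index(u32 max, u32 *index)
    {
            int id = ida_alloc_max(&my_index_ida, max, GFP_KERNEL);

            if (id < 0)
                    return id;
            *index = id;
            return 0;
    }

    static void free_index(u32 index)
    {
            ida_free(&my_index_ida, index);
    }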
-
Submitted by Yangyang Li

Switch pd index allocation and release from hns' own bitmap interface to
the IDA interface.

Link: https://lore.kernel.org/r/1623325814-55737-6-git-send-email-liweihang@huawei.com
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Yangyang Li

Switch mtpt index allocation and release from hns' own bitmap interface to
the IDA interface.

Link: https://lore.kernel.org/r/1623325814-55737-5-git-send-email-liweihang@huawei.com
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Yangyang Li

Round-robin (RR) is no longer used when allocating from the bitmap table,
and all function input parameters that select this mechanism are
BITMAP_NO_RR. Delete the code that defines and uses RR.

Link: https://lore.kernel.org/r/1623325814-55737-4-git-send-email-liweihang@huawei.com
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-