- 17 Jun, 2021: 3 commits
-
-
Submitted by Bob Pearson
Currently the rdma_rxe driver attempts to protect atomic responder resources by taking a reference to the qp, which is only freed when the resource is recycled for a new read or atomic operation. This means that in normal circumstances there is almost always an extra qp reference once an atomic operation has been executed, which prevents cleaning up the qp and its associated pd and cqs when the qp is destroyed. This patch removes the call to rxe_add_ref() in send_atomic_ack() and the call to rxe_drop_ref() in free_rd_atomic_resource(). If the qp is destroyed while a peer is retrying an atomic op, the operation will fail, which is acceptable.
Link: https://lore.kernel.org/r/20210604230558.4812-1-rpearsonhpe@gmail.com
Reported-by: Zhu Yanjun <zyjzyj2000@gmail.com>
Fixes: 86af6176 ("IB/rxe: remove unnecessary skb_clone")
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Xi Wang
All functions of HIP09's ROCEE share on-chip resources for all QPs, so the driver needs to configure the resource index and number for each function during the init stage.
Link: https://lore.kernel.org/r/1622541427-42193-1-git-send-email-liweihang@huawei.com
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Leon Romanovsky
mlx5_ib_bind_slave_port() doesn't remove the multiport device from the unaffiliated list, but mlx5_ib_unbind_slave_port() does. This unbalanced flow led to a situation where mlx5_ib_unaffiliated_port_list was changed during iteration.
Fixes: 32f69e4b ("{net, IB}/mlx5: Manage port association for multiport RoCE")
Link: https://lore.kernel.org/r/2726e6603b1e6ecfe76aa5a12a063af72173bcf7.1622477058.git.leonro@nvidia.com
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 10 Jun, 2021: 2 commits
-
-
Submitted by Shiraz Saleem
The level1 PBL info address is stored as a u64. This requires casting through a uintptr_t before it is used as a pointer type, and leads to sparse warnings such as this one when the uintptr_t is missing:
drivers/infiniband/hw/irdma/hw.c: In function 'irdma_destroy_virt_aeq':
drivers/infiniband/hw/irdma/hw.c:579:23: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
  579 |  dma_addr_t *pg_arr = (dma_addr_t *)aeq->palloc.level1.addr;
This can be fixed using an intermediate uintptr_t, but it is better to fix the structure irdma_pble_info to store the address as a u64 * and the VA it is assigned in irdma_chunk as a void *. This greatly reduces the casting on this address.
Fixes: 44d9e529 ("RDMA/irdma: Implement device initialization definitions")
Link: https://lore.kernel.org/r/20210609234924.938-1-shiraz.saleem@intel.com
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
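A minimal, hypothetical sketch of the casting problem; the struct and field names below are illustrative stand-ins, not the real irdma definitions:

```c
#include <stdint.h>

/* A VA stored as a plain integer forces casts back to a pointer,
 * which trips -Wint-to-pointer-cast on 32-bit builds. */
struct pble_info_as_u64 {
	uint64_t addr;          /* virtual address smuggled through a u64 */
};

struct pble_info_as_ptr {
	uint64_t *addr;         /* virtual address kept as a pointer */
};

static uint64_t *page_array_old(struct pble_info_as_u64 *info)
{
	/* Needs the intermediate uintptr_t to stay warning-free everywhere. */
	return (uint64_t *)(uintptr_t)info->addr;
}

static uint64_t *page_array_new(struct pble_info_as_ptr *info)
{
	return info->addr;      /* no cast at all */
}
```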
-
Submitted by Jason Gunthorpe
It turns out this is only being used to store the LID for SIDR mode to search the RB tree for request de-duplication. Store the LID value directly and don't pretend it is a GID.
Link: https://lore.kernel.org/r/2e7c87b6f662c90c642fc1838e363ad3e6ef14a4.1623236345.git.leonro@nvidia.com
Reviewed-by: Mark Zhang <markzhang@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 09 Jun, 2021: 13 commits
-
-
Submitted by Shiraz Saleem
Use list_last_entry() and list_first_entry() instead of using the prev and next pointers directly.
Link: https://lore.kernel.org/r/20210608211415.680-1-shiraz.saleem@intel.com
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
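For context, a small kernel-style sketch of what the two helpers replace; the struct here is hypothetical, not the irdma one:

```c
#include <linux/list.h>

struct item {
	int val;
	struct list_head entry;
};

static void peek_both_ends(struct list_head *head)
{
	struct item *first, *last;

	/* Open-coded form: repeats the container_of() arithmetic by hand. */
	first = container_of(head->next, struct item, entry);
	last = container_of(head->prev, struct item, entry);

	/* Helper form: same result, intent is obvious at a glance. */
	first = list_first_entry(head, struct item, entry);
	last = list_last_entry(head, struct item, entry);

	(void)first;
	(void)last;
}
```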
-
Submitted by Baokun Li
Use list_move() instead of list_del() + list_add().
Link: https://lore.kernel.org/r/20210608031041.2820429-1-libaokun1@huawei.com
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Baokun Li <libaokun1@huawei.com>
Acked-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
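As a quick illustration (generic kernel list code, not the irdma call site itself), the two forms are equivalent:

```c
#include <linux/list.h>

/* Move a node from whatever list it is currently on to the head of dst. */
static void requeue_open_coded(struct list_head *node, struct list_head *dst)
{
	list_del(node);       /* unlink from the current list */
	list_add(node, dst);  /* relink at the head of dst */
}

static void requeue_with_helper(struct list_head *node, struct list_head *dst)
{
	list_move(node, dst); /* same effect in a single, clearer call */
}
```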
-
Submitted by Weihang Li
The refcount_t API will WARN on underflow and overflow of a reference counter, which helps avoid use-after-free risks.
Link: https://lore.kernel.org/r/1622194663-2383-8-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
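This and the following patches in the series move plain atomic counters over to refcount_t. A generic sketch of the conversion, using a hypothetical object rather than any specific driver structure:

```c
#include <linux/refcount.h>

struct obj {
	refcount_t refcount;            /* was: atomic_t refcount */
};

static void obj_get(struct obj *o)
{
	refcount_inc(&o->refcount);     /* was: atomic_inc(&o->refcount) */
}

static bool obj_put(struct obj *o)
{
	/* refcount_dec_and_test() saturates and WARNs instead of wrapping,
	 * so a spurious extra put cannot silently drop the count to zero
	 * and free an object that is still referenced. */
	return refcount_dec_and_test(&o->refcount);
}
```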
-
Submitted by Weihang Li
The refcount_t API will WARN on underflow and overflow of a reference counter, which helps avoid use-after-free risks.
Link: https://lore.kernel.org/r/1622194663-2383-13-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Weihang Li
The refcount_t API will WARN on underflow and overflow of a reference counter, which helps avoid use-after-free risks.
Link: https://lore.kernel.org/r/1622194663-2383-12-git-send-email-liweihang@huawei.com
Cc: Potnuri Bharat Teja <bharat@chelsio.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Weihang Li
The refcount_t API will WARN on underflow and overflow of a reference counter, which helps avoid use-after-free risks.
Link: https://lore.kernel.org/r/1622194663-2383-11-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Weihang Li
The refcount_t API will WARN on underflow and overflow of a reference counter, which helps avoid use-after-free risks.
Link: https://lore.kernel.org/r/1622194663-2383-10-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Weihang Li
The refcount_t API will WARN on underflow and overflow of a reference counter, which helps avoid use-after-free risks.
Link: https://lore.kernel.org/r/1622194663-2383-9-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Weihang Li
The refcount_t API will WARN on underflow and overflow of a reference counter, which helps avoid use-after-free risks.
Link: https://lore.kernel.org/r/1622194663-2383-6-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Weihang Li
The refcount_t API will WARN on underflow and overflow of a reference counter, which helps avoid use-after-free risks.
Link: https://lore.kernel.org/r/1622194663-2383-5-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Jason Gunthorpe
The member is never used; delete it.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Weihang Li
The refcount_t API will WARN on underflow and overflow of a reference counter, which helps avoid use-after-free risks. Incrementing a refcount_t from 0 to 1 is treated as a use-after-free risk, so the counter should be set to 1 directly during initialization.
Link: https://lore.kernel.org/r/1622194663-2383-3-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
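A short, generic sketch of the initialization rule described above (hypothetical object, not a particular driver structure):

```c
#include <linux/refcount.h>

struct obj {
	refcount_t refcount;
};

static void obj_init(struct obj *o)
{
	/* Start life holding one reference. */
	refcount_set(&o->refcount, 1);

	/* The pattern the patch removes would instead leave the counter at 0
	 * and call refcount_inc() later; incrementing from 0 WARNs because it
	 * looks exactly like taking a reference on an already-freed object. */
}
```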
-
Submitted by Weihang Li
The refcount_t API will WARN on underflow and overflow of a reference counter, which helps avoid use-after-free risks.
Link: https://lore.kernel.org/r/1622194663-2383-2-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 08 Jun, 2021: 5 commits
-
-
Submitted by Kamal Heib
There is a sign typo in the error code returned from irdma_modify_qp() when the attr_mask is not supported. Fix it.
Fixes: b48c24c2 ("RDMA/irdma: Implement device supported verb APIs")
Link: https://lore.kernel.org/r/20210607221543.254144-1-kamalheib1@gmail.com
Signed-off-by: Kamal Heib <kamalheib1@gmail.com>
Acked-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
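A tiny sketch of why the sign matters; the errno value and the check below are illustrative, not the actual irdma_modify_qp() code:

```c
#include <linux/errno.h>

static int modify_example(unsigned int attr_mask, unsigned int supported_mask)
{
	/* Kernel APIs report failure as a negative errno; returning the positive
	 * value instead looks like a valid (non-error) return to most callers. */
	if (attr_mask & ~supported_mask)
		return -EOPNOTSUPP;   /* not EOPNOTSUPP: the minus sign is the fix */

	return 0;
}
```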
-
Submitted by Colin Ian King
There is a spelling mistake in a literal string. Fix it.
Link: https://lore.kernel.org/r/20210607113345.82206-1-colin.king@canonical.com
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Colin Ian King
The variable val is initialized with a value that is never read; it is updated later on. The assignment is redundant and can be removed.
Link: https://lore.kernel.org/r/20210605131347.26293-1-colin.king@canonical.com
Addresses-Coverity: ("Unused value")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Colin Ian King
A single statement is indented one level too deeply; clean up the code by removing the extraneous tab.
Link: https://lore.kernel.org/r/20210605130400.25987-1-colin.king@canonical.com
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Colin Ian King
When the u8 integer info->map[i] is shifted to the left, it is promoted to a 32-bit signed int and the result is then sign-extended to a u64. In the event that the top bit of the u8 is set, all the upper 32 bits of the u64 end up also being set because of the sign extension. Fix this by casting the u8 values to a u64 before the left shift.
Link: https://lore.kernel.org/r/20210605122059.25105-1-colin.king@canonical.com
Addresses-Coverity: ("Unintentional integer overflow / bad shift operation")
Fixes: 3f49d684 ("RDMA/irdma: Implement HW Admin Queue OPs")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
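A small standalone C program demonstrating the effect; the shift amount of 24 is illustrative (it just has to push the u8's top bit into bit 31), and the "bad" result reflects what common compilers produce:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint8_t map = 0x80;   /* top bit of the u8 is set */
	int shift = 24;

	/* The u8 is promoted to a signed 32-bit int before the shift, so bit 31
	 * ends up set and the conversion to u64 sign-extends it. Shifting a 1
	 * into the sign bit of a signed int is also a bad shift operation in
	 * its own right, which is what the Coverity tag refers to. */
	uint64_t bad = map << shift;

	/* Casting first makes the whole shift unsigned and 64 bits wide. */
	uint64_t good = (uint64_t)map << shift;

	printf("bad  = 0x%016llx\n", (unsigned long long)bad);   /* 0xffffffff80000000 */
	printf("good = 0x%016llx\n", (unsigned long long)good);  /* 0x0000000080000000 */
	return 0;
}
```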
-
- 04 Jun, 2021: 6 commits
-
-
Submitted by Jiapeng Chong
The error code is missing in this code scenario, so 0 will be returned. Add the error code '-EINVAL' to the return value 'ret'. This eliminates the following smatch warning:
drivers/infiniband/hw/cxgb4/qp.c:298 create_qp() warn: missing error code 'ret'.
Link: https://lore.kernel.org/r/1622545669-20625-1-git-send-email-jiapeng.chong@linux.alibaba.com
Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
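A generic sketch of the bug pattern smatch flags here; this is not the actual create_qp() body, just the shape of the error path:

```c
#include <linux/errno.h>
#include <linux/types.h>

static int create_example(bool setup_failed)
{
	int ret = 0;

	if (setup_failed) {
		ret = -EINVAL;   /* the fix: without this line, ret stays 0 */
		goto err;
	}

	/* ... normal setup ... */
	return 0;

err:
	/* ... cleanup ... */
	return ret;              /* previously returned 0 on this path */
}
```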
-
Submitted by Devesh Sharma
Enable atomic operations for Gen P5 devices if the underlying platform supports global atomic ops.
Link: https://lore.kernel.org/r/20210603131534.982257-2-devesh.sharma@broadcom.com
Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Kamal Heib
To avoid the following failure when trying to load the rdma_rxe module while IPv6 is disabled, add a check for EAFNOSUPPORT and ignore that failure; also delete the needless debug print from rxe_setup_udp_tunnel():
$ modprobe rdma_rxe
modprobe: ERROR: could not insert 'rdma_rxe': Operation not permitted
Fixes: dfdd6158 ("IB/rxe: Fix kernel panic in udp_setup_tunnel")
Link: https://lore.kernel.org/r/20210603090112.36341-1-kamalheib1@gmail.com
Reported-by: Yi Zhang <yi.zhang@redhat.com>
Signed-off-by: Kamal Heib <kamalheib1@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
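A hedged sketch of the error-handling idea; setup_udp6_tunnel() below is a hypothetical stand-in for the driver's IPv6 tunnel setup, not the real rxe function:

```c
#include <linux/err.h>
#include <linux/errno.h>

struct socket;
struct socket *setup_udp6_tunnel(void);   /* hypothetical helper */

static int ipv6_tunnel_init_example(struct socket **sk6)
{
	*sk6 = setup_udp6_tunnel();
	if (IS_ERR(*sk6)) {
		/* IPv6 is simply not available on this system: not a reason to
		 * refuse loading the module, just run IPv4-only. */
		if (PTR_ERR(*sk6) == -EAFNOSUPPORT) {
			*sk6 = NULL;
			return 0;
		}
		return PTR_ERR(*sk6);
	}
	return 0;
}
```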
-
Submitted by Bob Pearson
In order to prevent user space from modifying the index that belongs to the kernel for shared queues, let the kernel use a local copy of the index and copy any new values of that index to the shared rxe_queue_buf struct. This adds more switch statements, which decreases the performance of the queue API. Move the type into the parameter list for these functions so that the compiler can optimize out the switch statements when the explicit type is known. Modify all the calls in the driver on performance paths to pass in the explicit queue type.
Link: https://lore.kernel.org/r/20210527194748.662636-4-rpearsonhpe@gmail.com
Link: https://lore.kernel.org/linux-rdma/20210526165239.GP1002214@@nvidia.com/
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
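A sketch of the design idea, with illustrative names rather than the actual rxe queue API: making the queue type an explicit, usually constant parameter lets the compiler fold the switch away at each call site.

```c
/* Illustrative only: not the real rxe_queue definitions. */
enum queue_type { QUEUE_TYPE_KERNEL, QUEUE_TYPE_FROM_USER, QUEUE_TYPE_TO_USER };

struct queue {
	unsigned int prod;          /* kernel's private copy of the index */
	unsigned int *shared_prod;  /* index in the buffer mapped to user space */
};

static inline unsigned int producer_index(struct queue *q, enum queue_type type)
{
	switch (type) {
	case QUEUE_TYPE_FROM_USER:
	case QUEUE_TYPE_TO_USER:
		return *q->shared_prod;  /* user-visible copy (barriers omitted here) */
	case QUEUE_TYPE_KERNEL:
	default:
		return q->prod;          /* never re-read a value user space can modify */
	}
}

/* At a call site such as producer_index(q, QUEUE_TYPE_KERNEL) the type is a
 * compile-time constant, so after inlining the switch disappears entirely. */
```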
-
Submitted by Bob Pearson
Modify the queue APIs to protect all user space index loads with smp_load_acquire() and all user space index stores with smp_store_release(). Base this on the types of the queues, which can be one of ..KERNEL, ..FROM_USER or ..TO_USER. Kernel space indices are protected by locks, which also provide memory barriers.
Link: https://lore.kernel.org/r/20210527194748.662636-3-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
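A generic sketch of the acquire/release pattern for indices shared with user space; the queue layout is hypothetical, not the rxe one:

```c
#include <asm/barrier.h>

struct shared_queue {
	unsigned int producer_index;   /* advanced by the producing side */
	unsigned int consumer_index;   /* advanced by the consuming side */
};

static unsigned int read_producer_index(struct shared_queue *q)
{
	/* Acquire: everything the producer wrote into the queue element before
	 * publishing the new index is visible after this load. */
	return smp_load_acquire(&q->producer_index);
}

static void publish_consumer_index(struct shared_queue *q, unsigned int ci)
{
	/* Release: the element is fully consumed before the updated index can
	 * be observed by the other side. */
	smp_store_release(&q->consumer_index, ci);
}
```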
-
Submitted by Bob Pearson
To create optimal code, we only want to use smp_load_acquire() and smp_store_release() for user space indices in the rxe_queue APIs, since kernel indices are protected by locks which also act as memory barriers. By adding a type to the queues we can determine which indices need to be protected.
Link: https://lore.kernel.org/r/20210527194748.662636-2-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 03 Jun, 2021: 11 commits
-
-
Submitted by Shiraz Saleem
Add the Kconfig and Makefile to build the irdma driver. Remove the i40iw driver and add an alias for it in irdma. Remove the legacy exported symbols i40e_register_client and i40e_unregister_client from i40e as they are no longer used. irdma is the replacement driver that supports X722.
Link: https://lore.kernel.org/r/20210602205138.889-16-shiraz.saleem@intel.com
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Michael J. Ruhl
Add dynamic tracing functionality to debug connection management issues.
Link: https://lore.kernel.org/r/20210602205138.889-14-shiraz.saleem@intel.com
Signed-off-by: "Michael J. Ruhl" <michael.j.ruhl@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Mustafa Ismail
Add miscellaneous utility functions and headers.
Link: https://lore.kernel.org/r/20210602205138.889-13-shiraz.saleem@intel.com
Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Mustafa Ismail
Building the WQE descriptors for different verb operations is similar in kernel and user space. Add these shared libraries.
Link: https://lore.kernel.org/r/20210602205138.889-12-shiraz.saleem@intel.com
Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Mustafa Ismail
Add the header, data structures and functions to populate the WQE descriptors and issue the Control QP commands that support RoCEv2 UD operations.
Link: https://lore.kernel.org/r/20210602205138.889-11-shiraz.saleem@intel.com
Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Mustafa Ismail
Implement the device supported verb APIs. The supported APIs vary based on the underlying transport the ibdev is registered as (i.e. iWARP or RoCEv2).
Link: https://lore.kernel.org/r/20210602205138.889-10-shiraz.saleem@intel.com
Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Mustafa Ismail
Implement a Physical Buffer List Entry (PBLE) resource manager to manage a pool of PBLE HMC resource objects.
Link: https://lore.kernel.org/r/20210602205138.889-9-shiraz.saleem@intel.com
Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Mustafa Ismail
Add the connection management (CM) implementation for iWARP, including accept, reject, connect, create_listen, destroy_listen and the CM utility functions.
Link: https://lore.kernel.org/r/20210602205138.889-8-shiraz.saleem@intel.com
Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Mustafa Ismail
Add definitions for managing the RDMA HW work scheduler (WS) tree. A WS node is created via a control QP operation with the bandwidth allocation, arbitration scheme and traffic class of the QP specified. The returned Qset handle associates the QoS parameters with the QP. The Qset is registered with the LAN and an equivalent node is created in the LAN packet scheduler tree.
Link: https://lore.kernel.org/r/20210602205138.889-7-shiraz.saleem@intel.com
Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Mustafa Ismail
Implement privileged UDA queues to handle iWARP connection packets and receive exceptions.
Link: https://lore.kernel.org/r/20210602205138.889-6-shiraz.saleem@intel.com
Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Mustafa Ismail
The HW uses host memory as a backing store for a number of protocol context objects and for queue state tracking. The Host Memory Cache (HMC) is the component responsible for managing these objects stored in host memory. Add the functions and data structures to manage the allocation of backing pages used by the HMC for the various objects.
Link: https://lore.kernel.org/r/20210602205138.889-5-shiraz.saleem@intel.com
Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-