- 27 October 2020, 6 commits
-
-
Submitted by Jason Gunthorpe

uverbs was blocking srq_types the driver doesn't support based on the CREATE_XSRQ cmd_mask. Fix all drivers to check for supported srq_types during create_srq and move CREATE_XSRQ to the core code.

Link: https://lore.kernel.org/r/5-v1-caa70ba3d1ab+1436e-ucmd_mask_jgg@nvidia.com
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
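A minimal sketch of the per-driver check this describes, assuming a driver that supports only basic SRQs (the foo_ names are hypothetical, not from any real driver):

```c
/* Hypothetical create_srq op: reject SRQ types the hardware can't do. */
static int foo_create_srq(struct ib_srq *srq,
			  struct ib_srq_init_attr *init_attr,
			  struct ib_udata *udata)
{
	if (init_attr->srq_type != IB_SRQT_BASIC)
		return -EOPNOTSUPP;	/* no XRC or tag-matching SRQs here */

	/* ... allocate and program the hardware SRQ ... */
	return 0;
}
```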
-
Submitted by Jason Gunthorpe

These functions all depend on the driver providing a specific op:

- REREG_MR is rereg_user_mr(). bnxt_re set this without providing the op
- ATTACH/DETACH_MCAST is attach_mcast()/detach_mcast(). usnic set this without providing the op
- OPEN_QP doesn't involve the driver but requires an XRCD. qedr provides xrcd but forgot to set it, usnic doesn't provide XRCD but set it anyhow.
- OPEN/CLOSE_XRCD are the ops alloc_xrcd()/dealloc_xrcd()
- CREATE_SRQ/DESTROY_SRQ are the ops create_srq()/destroy_srq()
- QUERY/MODIFY_SRQ is op query_srq()/modify_srq(). hns sets this but sometimes supplies a NULL op.
- RESIZE_CQ is op resize_cq(). bnxt_re sets this but doesn't supply an op
- ALLOC/DEALLOC_MW is alloc_mw()/dealloc_mw(). cxgb4 provided a (now deleted) implementation but no userspace

All drivers were checked that no driver provides the op without also setting uverbs_cmd_mask, so this should have no functional change.

Link: https://lore.kernel.org/r/4-v1-caa70ba3d1ab+1436e-ucmd_mask_jgg@nvidia.com
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
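A sketch of what presetting these bits in the core looks like; the real code covers the full command list, and the helper name here is hypothetical:

```c
/* Enable userspace commands only when the matching driver op exists. */
static void setup_uverbs_cmd_mask(struct ib_device *dev)
{
	if (dev->ops.create_srq)
		dev->uverbs_cmd_mask |=
			BIT_ULL(IB_USER_VERBS_CMD_CREATE_SRQ) |
			BIT_ULL(IB_USER_VERBS_CMD_DESTROY_SRQ);
	if (dev->ops.alloc_mw)
		dev->uverbs_cmd_mask |=
			BIT_ULL(IB_USER_VERBS_CMD_ALLOC_MW) |
			BIT_ULL(IB_USER_VERBS_CMD_DEALLOC_MW);
}
```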
-
Submitted by Jason Gunthorpe

This is a step toward eliminating uverbs_cmd_mask. Preset this list in the core code. Only the op reg_user_mr wasn't already being required from the drivers.

Link: https://lore.kernel.org/r/3-v1-caa70ba3d1ab+1436e-ucmd_mask_jgg@nvidia.com
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Jason Gunthorpe

For a while now the uverbs layer has checked whether the driver implements a function before allowing the ucmd to proceed. This largely obsoletes the cmd_mask stuff, but there are some tricky bits in drivers preventing it from being removed. Remove the easy elements of uverbs_ex_cmd_mask by pre-setting them in the core code. These are triggered solely based on the related ops function pointer. query_device_ex is not triggered based on an op, but all drivers already implement something compatible with the extension, so enable it globally too.

Link: https://lore.kernel.org/r/2-v1-caa70ba3d1ab+1436e-ucmd_mask_jgg@nvidia.com
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Jason Gunthorpe

This driver never enabled IB_USER_VERBS_CMD_ALLOC_MW, so memory windows were not usable from userspace. The kernel side was removed long ago. Drop this dead code.

Fixes: feb7c1e3 ("IB: remove in-kernel support for memory windows")
Link: https://lore.kernel.org/r/1-v1-caa70ba3d1ab+1436e-ucmd_mask_jgg@nvidia.com
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Kamal Heib

The API for ib_query_qp requires the driver to set cur_qp_state on return; add the missing set.

Fixes: 1ac5a404 ("RDMA/bnxt_re: Add bnxt_re RoCE driver")
Link: https://lore.kernel.org/r/20201021114952.38876-1-kamalheib1@gmail.com
Signed-off-by: Kamal Heib <kamalheib1@gmail.com>
Acked-by: Selvin Xavier <selvin.xavier@broadcom.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
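A sketch of the kind of one-line fix this is, inside a driver's query_qp op (simplified, not the literal bnxt_re code):

```c
static int foo_query_qp(struct ib_qp *ib_qp, struct ib_qp_attr *qp_attr,
			int attr_mask, struct ib_qp_init_attr *init_attr)
{
	/* ... read the QP state from hardware into qp_attr->qp_state ... */

	/* The missing assignment: callers of ib_query_qp expect this too. */
	qp_attr->cur_qp_state = qp_attr->qp_state;
	return 0;
}
```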
-
- 17 October 2020, 5 commits
-
-
Submitted by Jann Horn

The preceding patches have ensured that core dumping properly takes the mmap_lock. Thanks to that, we can now remove mmget_still_valid() and all its users.

Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: "Eric W . Biederman" <ebiederm@xmission.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Link: http://lkml.kernel.org/r/20200827114932.3572699-8-jannh@google.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Maor Gottlieb

ucma_free_ctx() should call __destroy_id() on all the connection requests that have not been delivered to user space. Currently it calls it on the context itself, which causes a use-after-free. This fixes the trace:

  BUG: Unable to handle kernel data access on write at 0x5deadbeef0000108
  Faulting instruction address: 0xc0080000002428f4
  Oops: Kernel access of bad area, sig: 11 [#1]
  Call Trace:
  [c000000207f2b680] [c00800000024280c] .__destroy_id+0x28c/0x610 [rdma_ucm] (unreliable)
  [c000000207f2b750] [c0080000002429c4] .__destroy_id+0x444/0x610 [rdma_ucm]
  [c000000207f2b820] [c008000000242c24] .ucma_close+0x94/0xf0 [rdma_ucm]
  [c000000207f2b8c0] [c00000000046fbdc] .__fput+0xac/0x330
  [c000000207f2b960] [c00000000015d48c] .task_work_run+0xbc/0x110
  [c000000207f2b9f0] [c00000000012fb00] .do_exit+0x430/0xc50
  [c000000207f2bae0] [c0000000001303ec] .do_group_exit+0x5c/0xd0
  [c000000207f2bb70] [c000000000144a34] .get_signal+0x194/0xe30
  [c000000207f2bc60] [c00000000001f6b4] .do_notify_resume+0x124/0x470
  [c000000207f2bd60] [c000000000032484] .interrupt_exit_user_prepare+0x1b4/0x240
  [c000000207f2be20] [c000000000010034] interrupt_return+0x14/0x1c0

Rename listen_ctx to conn_req_ctx, as the poor name was the cause of this bug.

Fixes: a1d33b70 ("RDMA/ucma: Rework how new connections are passed through event delivery")
Link: https://lore.kernel.org/r/20201012045600.418271-4-leon@kernel.org
Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Bob Pearson

If skb_clone() is unable to allocate memory for a new sk_buff, this is not detected by the current code. Check for a NULL return and continue. This is similar to other errors in this loop over QPs attached to the multicast address and consistent with the unreliable UD transport.

Fixes: e7ec96fc ("RDMA/rxe: Fix skb lifetime in rxe_rcv_mcast_pkt()")
Addresses-Coverity-ID: 1497804: Null pointer dereferences (NULL_RETURNS)
Link: https://lore.kernel.org/r/20201013184236.5231-1-rpearson@hpe.com
Signed-off-by: Bob Pearson <rpearson@hpe.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
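A sketch of the fixed loop, assuming the surrounding rxe_rcv_mcast_pkt() structure (names simplified):

```c
/* Deliver a private clone of the skb to each QP attached to the
 * multicast group; skip a QP if the clone allocation fails, which
 * matches UD's unreliable delivery semantics. */
list_for_each_entry(mce, &mcg->qp_list, qp_list) {
	struct sk_buff *cskb = skb_clone(skb, GFP_ATOMIC);

	if (unlikely(!cskb))
		continue;	/* drop for this QP only */

	/* ... set up the per-clone packet info and queue it to the QP ... */
}
```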
-
Submitted by Jason Gunthorpe

RXE was wrongly using an internal kernel enum as part of its uAPI. Split this out into a dedicated uAPI enum just for RXE; it only uses the IPv4 and IPv6 values. This was exposed by changing the internal kernel enum definition, which broke RXE.

Fixes: 1c15b4f2 ("RDMA/core: Modify enum ib_gid_type and enum rdma_network_type")
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
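The dedicated uAPI enum looks roughly like this (in include/uapi/rdma/rdma_user_rxe.h; treat the exact value assignments as an assumption based on the description):

```c
/* RXE's own wire-visible network type, decoupled from the
 * kernel-internal enum rdma_network_type. */
enum rxe_network_type {
	RXE_NETWORK_TYPE_IPV4 = 1,
	RXE_NETWORK_TYPE_IPV6 = 2,
};
```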
-
Submitted by Jason Gunthorpe

The code in setup_dma_device has become rather convoluted, move all of this to the drivers. Drivers now pass in a DMA-capable struct device which will be used to set up DMA, or drivers must fully configure the ibdev for DMA and pass in NULL. Other than setting the masks in rvt all drivers were doing this already anyhow.

mthca, mlx4 and mlx5 were already setting the maximum DMA segment size based on their hardware limits in:

  __mthca_init_one()
    dma_set_max_seg_size (1G)
  __mlx4_init_one()
    dma_set_max_seg_size (1G)
  mlx5_pci_init()
    set_dma_caps()
      dma_set_max_seg_size (2G)

Other non-software drivers (except usnic) were extended to UINT_MAX [1, 2] instead of 2G as before.

[1] https://lore.kernel.org/linux-rdma/20200924114940.GE9475@nvidia.com/
[2] https://lore.kernel.org/linux-rdma/20200924114940.GE9475@nvidia.com/

Link: https://lore.kernel.org/r/20201008082752.275846-1-leon@kernel.org
Link: https://lore.kernel.org/r/6b2ed339933d066622d5715903870676d8cc523a.1602590106.git.mchehab+huawei@kernel.org
Suggested-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Parav Pandit <parav@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
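A sketch of the driver-side pattern after this change (device and variable names hypothetical):

```c
/* Option 1: hand a DMA-capable struct device to the core. */
dma_set_max_seg_size(&pdev->dev, UINT_MAX);	/* HW has no real limit */
ret = ib_register_device(&dev->ibdev, "foo%d", &pdev->dev);

/* Option 2 (software devices): fully configure the ibdev for DMA
 * yourself and pass NULL as the dma_device argument instead. */
```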
-
- 14 October 2020, 1 commit
-
-
Submitted by Heiner Kallweit

Simplify the code by using the new function dev_fetch_sw_netstats().

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Link: https://lore.kernel.org/r/6cad1a04-f021-d94b-45fd-7cc7cf07367d@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
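A sketch of the simplification in a ndo_get_stats64 handler, assuming the device keeps its per-cpu software counters reachable as dev->tstats (drivers often store them in their private struct instead):

```c
static void foo_get_stats64(struct net_device *dev,
			    struct rtnl_link_stats64 *stats)
{
	netdev_stats_to_stats64(stats, &dev->stats);
	/* Replaces an open-coded loop summing per-cpu tx/rx counters. */
	dev_fetch_sw_netstats(stats, dev->tstats);
}
```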
-
- 09 October 2020, 5 commits
-
-
Submitted by Håkon Bugge

This was missed during the initial review of the patch below.

Fixes: 227a0e14 ("IB/mlx4: Add support for REJ due to timeout")
Link: https://lore.kernel.org/r/1602253482-6718-1-git-send-email-haakon.bugge@oracle.com
Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Bob Pearson

Fix a bug in rxe_rcv() that causes all multicast packets to be dropped. Currently rxe_match_dgid() is called for each packet to verify that the destination IP address matches one of the entries in the port source GID table. This is incorrect for IP multicast addresses, since they do not appear in the GID table. Add code to detect multicast addresses, and change the function name to rxe_chk_dgid(), which is clearer.

Link: https://lore.kernel.org/r/20201008212753.265249-1-rpearson@hpe.com
Signed-off-by: Bob Pearson <rpearson@hpe.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
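A sketch of the renamed check, simplified from what the description implies for rxe_chk_dgid() (pdgid is assumed to point at the destination GID pulled from the packet):

```c
/* Multicast destination GIDs never appear in the port GID table,
 * so accept them before attempting a table lookup. */
if (rdma_is_multicast_addr((struct in6_addr *)pdgid))
	return 0;

/* Unicast: fall through to the existing GID table match. */
```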
-
Submitted by Bob Pearson

The changes referenced below replaced skb_clone() by taking additional references, passing the skb along and then freeing it. This deleted the packets before they could be processed, and additionally passed bad data in each packet: since pkt is stored in skb->cb, changing pkt->qp changed it for all the packets. Replace skb_get() with skb_clone() in rxe_rcv_mcast_pkt() for cases where multiple QPs are receiving multicast packets on the same address. Delete kfree_skb() because the packets need to live until they have been processed by each QP; they are freed later.

Fixes: 86af6176 ("IB/rxe: remove unnecessary skb_clone")
Fixes: fe896ceb ("IB/rxe: replace refcount_inc with skb_get")
Link: https://lore.kernel.org/r/20201008203651.256958-1-rpearson@hpe.com
Signed-off-by: Bob Pearson <rpearson@hpe.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Bob Pearson

Struct rxe_mem had pd, lkey and rkey values both in itself and in the struct ib_mr which is also included in rxe_mem. Delete these entries and replace references with the ones in ibmr. Add mr_pd, mr_lkey and mr_rkey macros which extract these values from mr.

Link: https://lore.kernel.org/r/20201008212818.265303-1-rpearson@hpe.com
Signed-off-by: Bob Pearson <rpearson@hpe.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
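The accessor macros likely look like this (to_rpd() is rxe's existing ib_pd-to-rxe_pd converter; the exact casts are an assumption):

```c
/* Pull pd/lkey/rkey from the embedded struct ib_mr instead of
 * duplicating them in struct rxe_mem. */
#define mr_pd(mr)	to_rpd((mr)->ibmr.pd)
#define mr_lkey(mr)	((mr)->ibmr.lkey)
#define mr_rkey(mr)	((mr)->ibmr.rkey)
```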
-
Submitted by Colin Ian King

An incorrect sizeof is being used: struct rvt_ibport ** is not correct, it should be struct rvt_ibport *. Note that since ** is the same size as *, this is not causing any issues. Improve the fix by using sizeof(*rdi->ports), as this allows us to not even reference the type of the pointer. Also remove line breaks, as the entire statement can fit on one line.

Link: https://lore.kernel.org/r/20201008095204.82683-1-colin.king@canonical.com
Addresses-Coverity: ("Sizeof not portable (SIZEOF_MISMATCH)")
Fixes: ff6acd69 ("IB/rdmavt: Add device structure allocation")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Acked-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
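The resulting idiom, sketched (the element count expression is an assumption):

```c
/* sizeof(*rdi->ports) stays correct even if the element type changes. */
rdi->ports = kcalloc(nports, sizeof(*rdi->ports), GFP_KERNEL);
```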
-
- 07 October 2020, 2 commits
-
-
Submitted by Colin Ian King

An incorrect sizeof is being used: u64 * is not correct, it should be just u64 for a table of umem_pgs u64 items in pbl_tbl. Use the idiom sizeof(*pbl_tbl) to get the object type without the need to explicitly use u64.

Link: https://lore.kernel.org/r/20201006114700.537916-1-colin.king@canonical.com
Addresses-Coverity: ("Sizeof not portable (SIZEOF_MISMATCH)")
Fixes: 1ac5a404 ("RDMA/bnxt_re: Add bnxt_re RoCE driver")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Jason Gunthorpe

This driver is taking the SGL out of the umem and passing it through a struct bnxt_qplib_sg_info. Instead of passing the SGL, pass the umem and then use rdma_umem_for_each_dma_block() directly. Move the calls of ib_umem_num_dma_blocks() closer to their actual point of use; npages is only set for non-umem pbl flows.

Link: https://lore.kernel.org/r/0-v1-b37437a73f35+49c-bnxt_re_dma_block_jgg@nvidia.com
Acked-by: Selvin Xavier <selvin.xavier@broadcom.com>
Tested-by: Selvin Xavier <selvin.xavier@broadcom.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
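A sketch of the iteration pattern the driver moves to (the hardware programming step is elided):

```c
struct ib_block_iter biter;

/* Walk the umem in device-page-size DMA blocks instead of raw SGEs. */
rdma_umem_for_each_dma_block(umem, &biter, page_size) {
	u64 dma_addr = rdma_block_iter_dma_address(&biter);

	/* ... write dma_addr into the HW page table (PBL) ... */
}
```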
-
- 06 October 2020, 3 commits
-
-
Submitted by Ming Lei

'struct percpu_ref' is often embedded into a user structure, and the instance is usually referenced in the fast path; however, only 'percpu_count_ptr' is actually needed in the fast path. So move the other fields into a new structure, 'percpu_ref_data', and allocate it dynamically via kzalloc(). The memory footprint of 'percpu_ref' in the fast path is then reduced a lot and becomes suitable to put into a hot cacheline of the user structure.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
Tested-by: Veronika Kabatova <vkabatov@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Tejun Heo <tj@kernel.org>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Tejun Heo <tj@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
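Sketched from the description, the resulting split looks like this (see include/linux/percpu-refcount.h for the authoritative definition):

```c
/* Cold fields, allocated once with kzalloc() at init time. */
struct percpu_ref_data {
	atomic_long_t		count;
	percpu_ref_func_t	*release;
	percpu_ref_func_t	*confirm_switch;
	bool			force_atomic:1;
	bool			allow_reinit:1;
	struct rcu_head		rcu;
	struct percpu_ref	*ref;
};

/* What stays embedded in the user structure: two words. */
struct percpu_ref {
	unsigned long		percpu_count_ptr;	/* fast path */
	struct percpu_ref_data	*data;
};
```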
-
Submitted by Maor Gottlieb

Remove the implementation of ib_umem_add_sg_table and instead call __sg_alloc_table_from_pages, which already has the logic to merge contiguous pages. Besides removing duplicated functionality, this reduces the memory consumption of the SG table significantly. Prior to this patch, the SG table was allocated in advance with no regard for contiguous pages. On a system with 2MB huge pages, without this change the SG table would contain 512x more entries than needed. E.g. for a 100GB memory registration:

            Number of entries    Size
  Before    26214400             600.0MB
  After     51200                1.2MB

Link: https://lore.kernel.org/r/20201004154340.1080481-5-leon@kernel.org
Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Kamal Heib

Report the "ipoib pkey", "mode" and "umcast" netlink attributes for every IPoIB interface type, not just children created with 'ip link add'. After setting the rtnl_link_ops for the parent interface, implement the dellink() callback to block users from trying to remove it.

Fixes: 862096a8 ("IB/ipoib: Add more rtnl_link_ops callbacks")
Link: https://lore.kernel.org/r/20201004132948.26669-1-kamalheib1@gmail.com
Signed-off-by: Kamal Heib <kamalheib1@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
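A sketch of the dellink() behavior described, assuming IPoIB's priv->parent pointer distinguishes child interfaces from the parent:

```c
static void ipoib_del_child_link(struct net_device *dev,
				 struct list_head *head)
{
	struct ipoib_dev_priv *priv = ipoib_priv(dev);

	/* The parent interface is owned by the IB device lifecycle,
	 * not by netlink; refuse to delete it here. */
	if (!priv->parent)
		return;

	unregister_netdevice_queue(dev, head);
}
```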
-
- 02 October 2020, 11 commits
-
-
Submitted by Avihai Horon

Expose the query GID table and entry API to user space by adding two new methods and method handlers to the device object. This API provides a faster way to query a GID table using a single call, and will be used in libibverbs to improve the current approach, which requires multiple calls to open, close and read multiple sysfs files for a single GID table entry.

Link: https://lore.kernel.org/r/20200923165015.2491894-5-leon@kernel.org
Signed-off-by: Avihai Horon <avihaih@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Avihai Horon

Introduce rdma_query_gid_table, which enables querying all the GID tables of a given device and copying the attributes of all valid GID entries to a provided buffer. This API provides a faster way to query a GID table using a single call, and will be used in libibverbs to improve the current approach, which requires multiple calls to open, close and read multiple sysfs files for a single GID table entry.

Link: https://lore.kernel.org/r/20200923165015.2491894-4-leon@kernel.org
Signed-off-by: Avihai Horon <avihaih@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
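A sketch of a kernel-side caller of the new API (the buffer sizing here is illustrative only):

```c
struct ib_uverbs_gid_entry entries[32];
ssize_t n;

/* One call returns every valid GID entry across all ports. */
n = rdma_query_gid_table(device, entries, ARRAY_SIZE(entries));
if (n < 0)
	return n;
/* entries[0..n-1] are now valid. */
```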
-
Submitted by Avihai Horon

Separate IB_GID_TYPE_IB and IB_GID_TYPE_ROCE into two different values, so that enum ib_gid_type will match the GID types of the new query GID table API introduced in the following patches. This change in enum ib_gid_type also requires changing enum rdma_network_type, by separating the RDMA_NETWORK_IB and RDMA_NETWORK_ROCE_V1 values.

Link: https://lore.kernel.org/r/20200923165015.2491894-3-leon@kernel.org
Signed-off-by: Avihai Horon <avihaih@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
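After the split, the kernel enum reads roughly as follows (the exact ordering is an assumption based on the description):

```c
enum ib_gid_type {
	IB_GID_TYPE_IB,			/* now distinct from RoCE v1 */
	IB_GID_TYPE_ROCE,		/* RoCE v1 */
	IB_GID_TYPE_ROCE_UDP_ENCAP,	/* RoCE v2 */
	IB_GID_TYPE_SIZE
};
```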
-
Submitted by Avihai Horon

Change the error code returned from rdma_get_gid_attr when the GID entry is invalid but the GID index is within the GID table size range to -ENODATA instead of -EINVAL. This change provides more accurate error reporting to be used by the new GID query API in user space. Nevertheless, -EINVAL is still returned from sysfs in the aforementioned case to maintain compatibility with user space that expects -EINVAL.

Link: https://lore.kernel.org/r/20200923165015.2491894-2-leon@kernel.org
Signed-off-by: Avihai Horon <avihaih@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Alok Prasad

Fix the following sparse warnings reported by the kbuild bot:

  CHECK drivers/infiniband/hw/qedr/verbs.c
  drivers/infiniband/hw/qedr/verbs.c:3872:59: warning: incorrect type in assignment (different base types)
  drivers/infiniband/hw/qedr/verbs.c:3872:59:    expected restricted __le32 [usertype] sge_prod
  drivers/infiniband/hw/qedr/verbs.c:3872:59:    got unsigned int [usertype] sge_prod
  drivers/infiniband/hw/qedr/verbs.c:3875:59: warning: incorrect type in assignment (different base types)
  drivers/infiniband/hw/qedr/verbs.c:3875:59:    expected restricted __le32 [usertype] wqe_prod
  drivers/infiniband/hw/qedr/verbs.c:3875:59:    got unsigned int [usertype] wqe_prod

Link: https://lore.kernel.org/r/20201001100959.19940-1-palok@marvell.com
Reported-by: kbuild test robot <lkp@intel.com>
Fixes: acca72e2 ("RDMA/qedr: SRQ's bug fixes")
Signed-off-by: Igor Russkikh <irusskikh@marvell.com>
Signed-off-by: Michal Kalderon <mkalderon@marvell.com>
Signed-off-by: Alok Prasad <palok@marvell.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
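What sparse is asking for is an explicit endian annotation on the store; a sketch (the surrounding struct access is paraphrased, not the literal qedr code):

```c
/* The producer fields are __le32 in the structure shared with the HW,
 * so CPU-order values must be converted explicitly. */
virt_prod->sge_prod = cpu_to_le32(sge_prod);
virt_prod->wqe_prod = cpu_to_le32(wqe_prod);
```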
-
Submitted by Rikard Falkeborn

The only usage of these is to pass their address to sysfs_create_group() and sysfs_remove_group(), both of which take const pointers. Make them const to allow the compiler to put them in read-only memory.

Link: https://lore.kernel.org/r/20200930224004.24279-3-rikard.falkeborn@gmail.com
Signed-off-by: Rikard Falkeborn <rikard.falkeborn@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
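The enabled pattern, sketched with hypothetical names:

```c
/* With const-taking sysfs helpers, the group can live in .rodata. */
static const struct attribute_group foo_attr_group = {
	.attrs = foo_attrs,
};

/* sysfs_create_group() accepts a const struct attribute_group *. */
ret = sysfs_create_group(kobj, &foo_attr_group);
```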
-
Submitted by Rikard Falkeborn

The only usage of the pma_table field in the ib_port struct is to pass its address to sysfs_create_group() and sysfs_remove_group(). Make it const to make it possible to constify a couple of static struct attribute_group instances. This allows the compiler to put them in read-only memory.

Link: https://lore.kernel.org/r/20200930224004.24279-2-rikard.falkeborn@gmail.com
Signed-off-by: Rikard Falkeborn <rikard.falkeborn@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Yishai Hadas

Sync the device with CPU pages upon ODP MR registration. mlx5 already has to zero the HW's version of the PAS list; it may as well deliver a PAS list that matches the current CPU page table configuration.

Link: https://lore.kernel.org/r/20200930163828.1336747-5-leon@kernel.org
Signed-off-by: Yishai Hadas <yishaih@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Yishai Hadas

Extend the advise MR API to support a non-faulting mode; this can improve performance by increasing the number of populated page tables in the device.

Link: https://lore.kernel.org/r/20200930163828.1336747-4-leon@kernel.org
Signed-off-by: Yishai Hadas <yishaih@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Yishai Hadas

Enable ODP sync without faulting; this improves performance by reducing the number of page faults in the system. The gain from this option is that the device page table can be aligned with the pages present in the CPU page table without causing page faults. With that, the hardware-side overhead of triggering a fault, which ends up calling into the driver to bring in the pages, is dropped.

Link: https://lore.kernel.org/r/20200930163828.1336747-3-leon@kernel.org
Signed-off-by: Yishai Hadas <yishaih@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Yishai Hadas

Move to hmm_range_fault() instead of get_user_pages_remote() to improve performance in a few aspects:

- Dropping the need to allocate and free memory to hold its output
- No more need to use put_page() to unpin the pages
- The logic to detect contiguous pages is done based on the returned order; no need to iterate per page and evaluate

In addition, moving to hmm_range_fault() makes it possible to reduce page faults in the system with its snapshot mode; this will be introduced in the next patches of this series. As part of this, clean up some flows and use the required data structures to work with hmm_range_fault().

Link: https://lore.kernel.org/r/20200930163828.1336747-2-leon@kernel.org
Signed-off-by: Yishai Hadas <yishaih@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
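A minimal sketch of the hmm_range_fault() pattern this series moves to (heavily simplified; the real ODP code also handles the mmu notifier sequence and its retry loop):

```c
struct hmm_range range = {
	.notifier	= &umem_odp->notifier,
	.start		= start,
	.end		= end,
	.hmm_pfns	= pfns,		/* caller-provided output array */
	.default_flags	= fault ? HMM_PFN_REQ_FAULT : 0, /* snapshot if !fault */
};
int ret;

mmap_read_lock(mm);
ret = hmm_range_fault(&range);	/* no pages pinned, no put_page() later */
mmap_read_unlock(mm);
```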
-
- 01 October 2020, 3 commits
-
-
Submitted by Jason Gunthorpe

This three-thread race can result in the work being run once the callback becomes NULL:

         CPU1                 CPU2                  CPU3
    netevent_callback()  process_one_req()     rdma_addr_cancel()
                          [..]
     spin_lock_bh()
      set_timeout()
     spin_unlock_bh()
                                                spin_lock_bh()
                                                list_del_init(&req->list);
                                                spin_unlock_bh()
                          req->callback = NULL
                          spin_lock_bh()
                           if (!list_empty(&req->list))
                                // Skipped!
                                // cancel_delayed_work(&req->work);
                          spin_unlock_bh()
                          process_one_req() // again
                           req->callback() // BOOM
                                                cancel_delayed_work_sync()

The solution is to always cancel the work once it is completed, so any in-between set_timeout() does not result in it running again.

Cc: stable@vger.kernel.org
Fixes: 44e75052 ("RDMA/rdma_cm: Make rdma_addr_cancel into a fence")
Link: https://lore.kernel.org/r/20200930072007.1009692-1-leon@kernel.org
Reported-by: Dan Aloni <dan@kernelim.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Jason Gunthorpe

Nothing reads this any more, and the reason for its existence has passed due to the deferred fput() scheme.

Fixes: 8ea1f989 ("drivers/IB,usnic: reduce scope of mmap_sem")
Link: https://lore.kernel.org/r/0-v1-df64ff042436+42-uctx_closing_jgg@nvidia.com
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Gioh Kim

The list field is not used anywhere.

Link: https://lore.kernel.org/r/20200930131407.6438-1-gi-oh.kim@clous.ionos.com
Signed-off-by: Gioh Kim <gi-oh.kim@cloud.ionos.com>
Acked-by: Jack Wang <jinpu.wang@cloud.ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 30 September 2020, 4 commits
-
-
Submitted by Lang Cheng

Some code was removed but the variables were still there, and some parameters are now queried from firmware, so their definitions are no longer needed.

Fixes: 2a3d923f ("RDMA/hns: Replace magic numbers with #defines")
Fixes: 82e620d9 ("RDMA/hns: Modify the data structure of hns_roce_av")
Fixes: 82547469 ("IB/hns: Implement the add_gid/del_gid and optimize the GIDs management")
Fixes: 21b97f53 ("RDMA/hns: Fixup qp release bug")
Link: https://lore.kernel.org/r/1601371934-40003-1-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Leon Romanovsky

There is no real need to have an intermediate pointer for the same struct; remove it and use the struct directly.

Link: https://lore.kernel.org/r/20200926102450.2966017-11-leon@kernel.org
Acked-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Leon Romanovsky

As preparation for the removal of QP allocation logic, we need to ensure that ib_core allocates the right amount of memory before calling the driver's create_qp(). This requires the driver to use the same struct for all types of QPs.

Link: https://lore.kernel.org/r/20200926102450.2966017-10-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Leon Romanovsky

A GSI QP can't be created from user space, hence the udata check is always false (udata == NULL). Remove that check and simplify the flow.

Link: https://lore.kernel.org/r/20200926102450.2966017-9-leon@kernel.org
Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-