- 20 Feb 2019, 7 commits
-
-
Committed by Jason Gunthorpe
There is no reason to have three allocations of per-port data. Combine them and make the lifetime of all the per-port data match the struct ib_device. Following patches will require more port-specific data; now there is a good place to put it. Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
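A minimal sketch of the resulting layout; the field names here are illustrative, not the exact kernel structures:

    /* One per-port struct, allocated once with the device, instead of
     * three parallel per-port arrays with separate allocations. */
    struct port_data_sketch {
            struct list_head pkey_list;     /* previously its own array */
            void            *cache;         /* previously its own array */
            spinlock_t       lock;          /* room for future per-port state */
    };

    struct ib_device_sketch {
            /* ... */
            u32 phys_port_cnt;
            struct port_data_sketch *port_data;     /* freed together with
                                                     * the ib_device */
    };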
-
Committed by Jason Gunthorpe
We have many loops iterating over all of the end port numbers on a struct ib_device; simplify them with a for_each helper. Reviewed-by: Parav Pandit <parav@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
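A sketch of what such a helper can look like; the in-tree macro is rdma_for_each_port(), and only the existing rdma_start_port()/rdma_end_port() helpers are assumed here:

    /* Simplified sketch of a for_each-style port iterator. */
    #define for_each_port_sketch(ibdev, port)                        \
            for ((port) = rdma_start_port(ibdev);                    \
                 (port) <= rdma_end_port(ibdev); (port)++)

    /* replaces open-coded loops such as:
     *   for (port = rdma_start_port(dev);
     *        port <= rdma_end_port(dev); port++)
     */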
-
Committed by Leon Romanovsky
The netlink dumpit handshake exchanges the index from which the kernel should start returning values. In the current code, this index also counted entries not visible to the calling PID, which indirectly revealed the total number of entries. Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Leon Romanovsky
This patch adds the ability to query a specific QP based on its LQPN (local QPN), which is assigned by HW and needs special treatment when inserted into the restrack DB. Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Leon Romanovsky
PD, MR and QP objects have parent objects: contexts and PDs. Exposing the parent IDs allows correlating the various objects and simplifies debug investigation. Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Leon Romanovsky
Give user space tools a unique identifier for PD, MR, CQ and CM_ID objects, so that they can query them with .doit callbacks. QP .doit is not supported yet, until all drivers are updated to make their LQPN equal to their restrack ID. Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Leon Romanovsky
As a preparation for extending rdma_restrack_root to provide software IDs, which will be per-type too, convert rdma_restrack_root from a struct with arrays to an array of structs. This conversion allows us to drop the rwsem lock in favour of the internal XArray lock. Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
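An illustrative before/after of the conversion; the field names are simplified guesses at the shape, with RDMA_RESTRACK_MAX as the existing per-type bound:

    /* Before: one root holding parallel per-type arrays, guarded by an
     * external rwsem. */
    struct restrack_root_before {
            struct rw_semaphore rwsem;
            struct xarray       xa[RDMA_RESTRACK_MAX];
    };

    /* After: an array of per-type roots; each XArray's internal lock
     * suffices, so the external rwsem can be dropped. */
    struct restrack_root_after {
            struct xarray xa;       /* entries of one restrack type */
    };
    /* used as: struct restrack_root_after res[RDMA_RESTRACK_MAX]; */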
-
- 19 Feb 2019, 5 commits
-
-
Committed by Leon Romanovsky
There is no need to expose the internals of the restrack DB to IB/core. Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Leon Romanovsky
XArray uses an internal lock for updates to the XArray. This means our external RW lock is needed to ensure that an entry is not deleted while we are iterating over the list. Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
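A sketch of the locking rule described here, with illustrative names; the XArray spinlock serializes individual updates, but a walk over many entries still needs the outer lock so entries are not freed mid-iteration (the 3-argument xa_for_each() form is assumed):

    static void walk_entries(struct xarray *xa, struct rw_semaphore *sem)
    {
            struct rdma_restrack_entry *entry;
            unsigned long id;

            down_read(sem);                 /* blocks concurrent deletion */
            xa_for_each(xa, id, entry) {
                    /* entry cannot be freed while the rwsem is held */
            }
            up_read(sem);
    }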
-
Committed by Leon Romanovsky
Implement the .doit callbacks, and ensure that users don't provide port values for resource entries allocated in per-device mode, as the .doit callbacks require. Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Leon Romanovsky
Add a new general helper to get a restrack entry given its ID and respective type. Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
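A hedged sketch of what such a helper can look like (the in-tree function is rdma_restrack_get_byid(); the locking and layout here are illustrative):

    static struct rdma_restrack_entry *
    restrack_get_byid_sketch(struct xarray *xa, u32 id)
    {
            struct rdma_restrack_entry *res;

            xa_lock(xa);
            res = xa_load(xa, id);
            if (!res || !rdma_restrack_get(res))    /* take a reference */
                    res = ERR_PTR(-ENOENT);
            xa_unlock(xa);
            return res;
    }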
-
Committed by Leon Romanovsky
The addition of .doit callbacks introduces a new access pattern: resource entries are looked up by a user-visible index. Back then, the legacy DB was implemented as a hash because per-index access wasn't needed and XArray hadn't been accepted yet. The acceptance of XArray, together with the need for per-index access, requires a refresh of the DB implementation. Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
- 16 Feb 2019, 9 commits
-
-
Committed by Parav Pandit
Move core device addition and removal from sysfs.c to device.c, as device.c is the more appropriate place for device management. Signed-off-by: Parav Pandit <parav@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Parav Pandit
Refactor the code for device and port sysfs attributes for reuse. While at it, rename the counterpart free function to ib_free_port_attrs. The attribute setup sequence is: (a) port-specific init, (b) device stats alloc/init. Cleanup therefore follows the reverse sequence: (a) device stats dealloc, (b) port-specific cleanup. Signed-off-by: Parav Pandit <parav@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Parav Pandit
Instead of holding an extra reference using get_device() that device_unregister() releases, simplify it as follows: device_add() balances with device_del(), and device_initialize() balances with put_device(), always via ib_dealloc_device(). Signed-off-by: Parav Pandit <parav@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
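A sketch of the balanced pairing described above (illustrative wrapper names, not the exact ib_core code):

    static int register_dev_sketch(struct device *dev)
    {
            device_initialize(dev);         /* ref = 1; pairs with put_device() */
            return device_add(dev);         /* pairs with device_del() */
    }

    static void unregister_dev_sketch(struct device *dev)
    {
            device_del(dev);                /* undoes device_add() */
            put_device(dev);                /* drops the device_initialize() ref;
                                             * the release callback frees memory */
    }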
-
Committed by Leon Romanovsky
Ucontext allocation and release aren't async events and don't need kref accounting. The common layer of the RDMA subsystem ensures that dealloc_ucontext will be called after all other objects are released. Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Reviewed-by: Steve Wise <swise@opengridcomputing.com> Tested-by: Raju Rangoju <rajur@chelsio.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Leon Romanovsky
The internal design of RDMA/core ensures that dealloc_ucontext will be called only if alloc_ucontext succeeded, hence there is no need to manage an internal variable marking the validity of the ucontext. As part of this change, remove a redundant memset too. Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Kaike Wan
The following build warning was produced for the TID RDMA READ patch ("IB/hfi1: Enable TID RDMA READ protocol"):

    drivers/infiniband/hw/hfi1/qp.c: In function 'hfi1_setup_wqe':
    drivers/infiniband/hw/hfi1/qp.c:328:3: warning: this statement may fall through [-Wimplicit-fallthrough=]
       hfi1_setup_tid_rdma_wqe(qp, wqe);
       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    drivers/infiniband/hw/hfi1/qp.c:329:2: note: here
      case IB_QPT_UC:
      ^~~~

This patch fixes the issue by adding a "fall through" comment. Fixes: f1ab4efa ("IB/hfi1: Enable TID RDMA READ protocol") Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Kaike Wan <kaike.wan@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
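The shape of the fix inside the switch quoted by the warning: a "fall through" comment placed immediately before the next case label tells GCC the fall-through is intentional and silences -Wimplicit-fallthrough (newer kernels use the fallthrough; keyword instead):

    switch (qp->ibqp.qp_type) {
    case IB_QPT_RC:
            hfi1_setup_tid_rdma_wqe(qp, wqe);
            /* fall through */
    case IB_QPT_UC:
            /* ... handling common to RC and UC ... */
            break;
    }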
-
Committed by Jason Gunthorpe
The new output_written block was wrongly placed before the ret = 0 assignment, causing the error code to be lost. uverbs_output_written is not expected to fail, and even if it does fail, it has no significant impact on the userspace flow. Reported-by: Bart Van Assche <bvanassche@acm.org> Fixes: d6f4a21f ("RDMA/uverbs: Mark ioctl responses with UVERBS_ATTR_F_VALID_OUTPUT") Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
-
Committed by Shamir Rabinovitch
Now that we have udata passed to all the ib_xxx object creation APIs, plus the additional macro 'rdma_udata_to_drv_context' to get the ib_ucontext from the ib_udata stored in uverbs_attr_bundle, we can finally start to remove the drivers' dependency on ib_xxx->uobject->context. Signed-off-by: Shamir Rabinovitch <shamir.rabinovitch@oracle.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
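A hedged sketch of the driver-side usage, with mlx5 structure names as the example and a hypothetical wrapper function:

    static int create_obj_sketch(struct ib_pd *pd, struct ib_udata *udata)
    {
            struct mlx5_ib_ucontext *ctx = rdma_udata_to_drv_context(
                    udata, struct mlx5_ib_ucontext, ibucontext);

            if (!ctx)
                    return -EINVAL;         /* kernel caller, no user context */

            /* ... use ctx instead of pd->uobject->context ... */
            return 0;
    }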
-
Committed by Shamir Rabinovitch
Add ib_ucontext to the uverbs_attr_bundle sent down the ioctl and cmd flows as soon as the flow has an ib_uobject. In addition, remove the rdma_get_ucontext helper function, which is only used by ib_umem_get. Signed-off-by: Shamir Rabinovitch <shamir.rabinovitch@oracle.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
- 15 Feb 2019, 7 commits
-
-
Committed by Erez Alfasi
Change debug statements to use %s and __func__ instead of hard-coded function names. Signed-off-by: Erez Alfasi <ereza@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by YueHaibing
Fixes the gcc '-Wunused-but-set-variable' warning:

    drivers/infiniband/core/iwpm_util.c: In function 'iwpm_send_hello':
    drivers/infiniband/core/iwpm_util.c:811:6: warning: variable 'msg_seq' set but not used [-Wunused-but-set-variable]

The variable has never been used since its introduction in commit b0bad9ad ("RDMA/IWPM: Support no port mapping requirements"). Signed-off-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Lijun Ou
This patch adds the new device capability IB_DEVICE_MEM_MGT_EXTENSIONS to indicate device support for the following features: 1. fast register memory region, 2. send with remote invalidate by FRMR, 3. local invalidate memory region. It also adds the max depth of the FRMR page list length. Signed-off-by: Yangyang Li <liyangyang20@huawei.com> Signed-off-by: Lijun Ou <oulijun@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Yixian Liu
Currently, all messages printed for AEQ subtype events are wrong. Thus, delete them and print only the value of the subtype event. Signed-off-by: Yixian Liu <liuyixian@huawei.com> Signed-off-by: Lijun Ou <oulijun@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Yixian Liu
The memory allocated for wrid should be initialized to zero. Signed-off-by: Yixian Liu <liuyixian@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Yixian Liu
The state of the MR after a reregister operation should be set to valid. Otherwise, it will remain the same as the state before the reregister. Signed-off-by: Yixian Liu <liuyixian@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by chenglang
This patch modifies the minimum CQ depth specification of hip08 to be consistent with the processing of hip06. Signed-off-by: chenglang <chenglang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
- 14 Feb 2019, 3 commits
-
-
Committed by Shiraz, Saleem
Use the for_each_sg_dma_page iterator variant to walk the umem DMA-mapped SGL and get the page DMA address. This avoids the extra loop needed to iterate pages within an SGE when the for_each_sg iterator is used. Additionally, purge umem->page_shift usage in the driver, as it's only relevant for ODP MRs; use the system page size and shift instead. Signed-off-by: Shiraz, Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
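A sketch of the iterator pattern this series of conversions adopts; the umem field names match this kernel era, and the driver-specific bookkeeping is elided:

    #include <linux/scatterlist.h>
    #include <rdma/ib_umem.h>

    static void walk_umem_dma_pages(struct ib_umem *umem)
    {
            struct sg_dma_page_iter sg_iter;

            /* iterate PAGE_SIZE DMA pages inside each DMA-mapped SGE */
            for_each_sg_dma_page(umem->sg_head.sgl, &sg_iter, umem->nmap, 0) {
                    dma_addr_t addr = sg_page_iter_dma_address(&sg_iter);

                    /* record addr in the driver's page table */
                    (void)addr;
            }
    }

The same commit message repeats for the per-driver conversions under 12 Feb below, which all follow this pattern.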
-
Committed by Shiraz, Saleem
rdmavt expects a uniform size for all umem SGEs, which is currently PAGE_SIZE. Adapt to a umem API change that can return non-uniformly sized SGEs due to combining contiguous PAGE_SIZE regions into one SGE. Use the for_each_sg_page variant to unfold the larger SGEs into a list of PAGE_SIZE elements. Additionally, purge umem->page_shift usage in the driver, as it's only relevant for ODP MRs; use the system page size and shift instead. Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com> Signed-off-by: Shiraz, Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Colin Ian King
The check for an allocation failure of pd currently tests whether pd is non-null rather than null. Fix this by adding the missing ! operator. Fixes: 21a428a0 ("RDMA: Handle PD allocations by IB/core") Signed-off-by: Colin Ian King <colin.king@canonical.com> Reviewed-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
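The one-character shape of the bug and fix, in an illustrative context (the actual allocation call in the patched code differs):

    struct ib_pd *pd = kzalloc(sizeof(*pd), GFP_KERNEL);

    /* before: if (pd) return ERR_PTR(-ENOMEM);  -- inverted test */
    if (!pd)                        /* after: fail only when NULL */
            return ERR_PTR(-ENOMEM);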
-
- 12 Feb 2019, 9 commits
-
-
Committed by Colin Ian King
The struct member comp_mask has not been initialized, yet a bit pattern is being bitwise-OR'd into it, so the other bit fields in comp_mask may contain garbage from the stack. Fix this by turning the bitwise OR into an assignment. Fixes: 95b86d1c ("RDMA/bnxt_re: Update kernel user abi to pass chip context") Signed-off-by: Colin Ian King <colin.king@canonical.com> Acked-by: Devesh Sharma <devesh.sharma@broadcom.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
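The shape of the fix: the first store into an uninitialized stack member must be an assignment, not a read-modify-write. The bnxt_re identifiers below are taken from its uapi header and should be treated as assumptions:

    struct bnxt_re_uctx_resp resp;  /* on the stack, not zeroed */

    /* before: resp.comp_mask |= BNXT_RE_UCNTX_CMASK_HAVE_CCTX;
     *         ORs the flag into stack garbage */
    resp.comp_mask = BNXT_RE_UCNTX_CMASK_HAVE_CCTX;         /* after */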
-
Committed by Mark Bloch
Make sure the IB device is freed on failure. Fixes: b5ca15ad ("IB/mlx5: Add proper representors support") Signed-off-by: Mark Bloch <markb@mellanox.com> Reviewed-by: Bodong Wang <bodong@mellanox.com> Reviewed-by: Håkon Bugge <haakon.bugge@oracle.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Yishai Hadas
Fix a bad flow in DEVX mkey creation to prevent deleting the indirect mkey from the radix tree when a previous attempt to insert it failed. Fixes: 534fd7aa ("IB/mlx5: Manage indirection mkey upon DEVX flow for ODP") Signed-off-by: Yishai Hadas <yishaih@mellanox.com> Reviewed-by: Artemy Kovalyov <artemyko@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Shiraz, Saleem
The driver walks the umem SGL assuming a 1:1 mapping between SGEs and system pages. Update it to use the for_each_sg_page iterator to get the individual pages contained in the SGEs. This is a prerequisite for adding page combining into SGEs while building the scatter table in IB core. Additionally, purge umem->page_shift usage in the driver, as it's only relevant for ODP MRs; use the system page size and shift instead. Signed-off-by: Shiraz, Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Shiraz, Saleem
Use the for_each_sg_dma_page iterator variant to walk the umem DMA-mapped SGL and get the page DMA address. This avoids the extra loop needed to iterate pages within an SGE when the for_each_sg iterator is used. Additionally, purge umem->page_shift usage in the driver, as it's only relevant for ODP MRs; use the system page size and shift instead. Signed-off-by: Shiraz, Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Shiraz, Saleem
Use the for_each_sg_dma_page iterator variant to walk the umem DMA-mapped SGL and get the page DMA address. This avoids the extra loop needed to iterate pages within an SGE when the for_each_sg iterator is used. Additionally, purge umem->page_shift usage in the driver, as it's only relevant for ODP MRs; use the system page size and shift instead. Signed-off-by: Shiraz, Saleem <shiraz.saleem@intel.com> Acked-by: Michal Kalderon <michal.kalderon@marvell.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Shiraz, Saleem
Use the for_each_sg_dma_page iterator variant to walk the umem DMA-mapped SGL and get the page DMA address. This avoids the extra loop needed to iterate pages within an SGE when the for_each_sg iterator is used. Additionally, purge umem->page_shift usage in the driver, as it's only relevant for ODP MRs; use the system page size and shift instead. Signed-off-by: Shiraz, Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Shiraz, Saleem
Use the for_each_sg_dma_page iterator variant to walk the umem DMA-mapped SGL and get the page DMA address. This avoids the extra loop needed to iterate pages within an SGE when the for_each_sg iterator is used. Additionally, purge umem->page_shift usage in the driver, as it's only relevant for ODP MRs; use the system page size and shift instead. Signed-off-by: Shiraz, Saleem <shiraz.saleem@intel.com> Acked-by: Steve Wise <swise@opengridcomputing.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Shiraz, Saleem
Use the for_each_sg_dma_page iterator variant to walk the umem DMA-mapped SGL and get the page DMA address. This avoids the extra loop needed to iterate pages within an SGE when the for_each_sg iterator is used. Additionally, purge umem->page_shift usage in the driver, as it's only relevant for ODP MRs; use the system page size and shift instead. Signed-off-by: Shiraz, Saleem <shiraz.saleem@intel.com> Acked-by: Steve Wise <swise@opengridcomputing.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-