- 05 July 2018, 4 commits
-
-
Submitted by Dan Carpenter
The srq->swq[] array is allocated in bnxt_qplib_create_srq(). It has srq->hwq.max_elements elements, so these tests should be >= instead of > or we might go beyond the end of the array. Fixes: 1ac5a404 ("RDMA/bnxt_re: Add bnxt_re RoCE driver") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Acked-by: Selvin Xavier <selvin.xavier@broadcom.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Dan Carpenter
The sgid_tbl->tbl[] array is allocated in bnxt_qplib_alloc_sgid_tbl(). It has sgid_tbl->max elements, so the > should be >= to prevent accessing one element beyond the end of the array. Fixes: 1ac5a404 ("RDMA/bnxt_re: Add bnxt_re RoCE driver") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Acked-by: Selvin Xavier <selvin.xavier@broadcom.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
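A minimal, self-contained sketch of the bug class fixed by the two bnxt_re patches above (the struct and helper names here are illustrative, not the driver's): an array allocated with max elements has valid indices 0 .. max - 1, so an index must be rejected when it is greater than or equal to max.

```c
#include <stdbool.h>

/* Illustrative stand-in for a table allocated with "max" entries. */
struct demo_table {
	unsigned int max;	/* number of allocated entries */
	int *entries;		/* entries[0] .. entries[max - 1] are valid */
};

static bool demo_index_ok(const struct demo_table *t, unsigned int idx)
{
	/*
	 * Rejecting only "idx > t->max" still lets idx == t->max through,
	 * which reads one element past the end of the array.
	 */
	return idx < t->max;
}
```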
-
Submitted by Leon Romanovsky
VMA lookup is supposed to be performed while mmap_sem is held. Fixes: f26c7c83 ("i40iw: Add 2MB page support") Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
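A minimal sketch of the locking rule this fix enforces, using the 2018-era name mmap_sem (later renamed mmap_lock); the surrounding function is illustrative, not the i40iw code itself.

```c
#include <linux/mm.h>

/*
 * find_vma() must be called with mmap_sem held, and the returned vma
 * must not be dereferenced after the lock is dropped.
 */
static bool demo_addr_is_mapped(struct mm_struct *mm, unsigned long addr)
{
	struct vm_area_struct *vma;
	bool mapped;

	down_read(&mm->mmap_sem);
	vma = find_vma(mm, addr);
	mapped = vma && vma->vm_start <= addr;
	up_read(&mm->mmap_sem);

	return mapped;
}
```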
-
Submitted by Tarick Bedeir
rdma_ah_find_type() can reach into ib_device->port_immutable with a potentially out-of-bounds port number, so check that the port number is valid first. Fixes: 44c58487 ("IB/core: Define 'ib' and 'roce' rdma_ah_attr types") Signed-off-by: Tarick Bedeir <tarick@google.com> Reviewed-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
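A minimal sketch of the kind of check described above; rdma_is_port_valid() and rdma_ah_get_port_num() are existing core verbs helpers, but the exact call site in the driver may differ.

```c
#include <rdma/ib_verbs.h>

/* Validate the AH's port number before any per-port array is indexed. */
static int demo_check_ah_port(struct ib_device *dev,
			      struct rdma_ah_attr *ah_attr)
{
	u8 port = rdma_ah_get_port_num(ah_attr);

	if (!rdma_is_port_valid(dev, port))
		return -EINVAL;	/* rdma_ah_find_type() would index port_immutable[] */

	return 0;
}
```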
-
- 04 July 2018, 5 commits
-
-
Submitted by Neil Horman
On repeated module load/unload cycles, it is possible for the pvrdma driver to encounter this crash:
...
[ 297.032448] RIP: 0010:[<ffffffff839e4620>] [<ffffffff839e4620>] netdev_walk_all_upper_dev_rcu+0x50/0xb0
[ 297.034078] RSP: 0018:ffff95087780bd08 EFLAGS: 00010286
[ 297.034986] RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffff95087a0c0000
[ 297.036196] RDX: ffff95087a0c0000 RSI: ffffffff839e44e0 RDI: ffff950835d0c000
[ 297.037421] RBP: ffff95087780bd40 R08: ffff95087a0e0ea0 R09: abddacd03f8e0ea0
[ 297.038636] R10: abddacd03f8e0ea0 R11: ffffef5901e9dbc0 R12: ffff95087a0c0000
[ 297.039854] R13: ffffffff839e44e0 R14: ffff95087a0c0000 R15: ffff950835d0c828
[ 297.041071] FS: 0000000000000000(0000) GS:ffff95087fc00000(0000) knlGS:0000000000000000
[ 297.042443] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 297.043429] CR2: ffffffffffffffe8 CR3: 000000007a652000 CR4: 00000000003607f0
[ 297.044674] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 297.045893] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 297.047109] Call Trace:
[ 297.047545] [<ffffffff839e4698>] netdev_has_upper_dev_all_rcu+0x18/0x20
[ 297.048691] [<ffffffffc05d31af>] is_eth_port_of_netdev+0x2f/0xa0 [ib_core]
[ 297.049886] [<ffffffffc05d3180>] ? is_eth_active_slave_of_bonding_rcu+0x70/0x70 [ib_core]
...
This occurs because vmw_pvrdma, on probe, stores a pointer to the netdev that exists on function 0 of the same bus/device/slot (which represents the vmxnet3 ethernet driver). However, it never clears this pointer when the vmxnet3 module is removed, leading to use-after-free crashes like the one above. The fix is straightforward: vmw_pvrdma should listen for NETDEV_REGISTER and NETDEV_UNREGISTER events in its event listener code and update the stored netdev pointer accordingly. This solution has been tested by myself and the reporter with successful results. The fix also allows the pvrdma driver to find its underlying ethernet device when vmxnet3 is loaded after pvrdma, which it could not do before. Signed-off-by: Neil Horman <nhorman@tuxdriver.com> Reported-by: ruquin@redhat.com Tested-by: Adit Ranadive <aditr@vmware.com> Acked-by: Adit Ranadive <aditr@vmware.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
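A hedged, self-contained sketch of the notifier handling described above; struct demo_rdma_dev and demo_is_paired_netdev() are illustrative stand-ins for the driver's device structure and matching logic, not pvrdma's actual code.

```c
#include <linux/netdevice.h>

struct demo_rdma_dev {
	struct net_device *netdev;	/* cached pointer to the paired ethernet device */
};

/* Hypothetical check that ndev is the ethernet function paired with dev. */
static bool demo_is_paired_netdev(struct demo_rdma_dev *dev,
				  struct net_device *ndev)
{
	return dev && ndev;	/* placeholder matching logic */
}

static void demo_netdev_event(struct demo_rdma_dev *dev,
			      struct net_device *ndev, unsigned long event)
{
	switch (event) {
	case NETDEV_REGISTER:
		/* vmxnet3 (re)appeared: attach to it, even if loaded after us. */
		if (!dev->netdev && demo_is_paired_netdev(dev, ndev)) {
			dev_hold(ndev);
			dev->netdev = ndev;
		}
		break;
	case NETDEV_UNREGISTER:
		/* vmxnet3 is going away: drop the pointer before it is freed. */
		if (dev->netdev == ndev) {
			dev_put(dev->netdev);
			dev->netdev = NULL;
		}
		break;
	}
}
```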
-
Submitted by Maor Gottlieb
Currently the driver sets the gre_protocol mask to 0xffff without considering the user request. Fix it by copying the mask from the verbs spec. Fixes: da2f22ae ("IB/mlx5: Add support for GRE flow specification") Signed-off-by: Maor Gottlieb <maorg@mellanox.com> Reviewed-by: Ariel Levkovich <lariel@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
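A hedged, self-contained illustration of the bug class (the names are illustrative, not the mlx5 flow-steering code): the hardware match mask should be taken from the user's verbs spec rather than hard-coded.

```c
#include <stdint.h>
#include <arpa/inet.h>

struct demo_gre_spec {
	uint16_t protocol;	/* big-endian, as supplied in the verbs spec */
};

/* Return the GRE protocol mask to program into the device. */
static uint16_t demo_gre_protocol_mask(const struct demo_gre_spec *user_mask)
{
	/* Before the fix the driver effectively ignored user_mask and used 0xffff. */
	return ntohs(user_mask->protocol);
}
```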
-
Submitted by Michael J. Ruhl
The general interrupt handler is_rcv_avail_int() has two paths: do_interrupt() (callback) and handle_user_interrupt(). The do_interrupt() callback is for threaded receive handling, but is_rcv_avail_int() cannot handle threaded IRQs. If the do_interrupt() path is taken and the IRQ returns IRQ_WAKE_THREAD, the IRQ behavior is indeterminate. Remove the incorrect call to do_interrupt() from is_rcv_avail_int(), leaving the un-threaded (handle_user_interrupt()) path. Fixes: f4f30031 ("staging/rdma/hfi1: Thread the receive interrupt.") Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Reviewed-by: Kamenee Arumugam <kamenee.arumugam@intel.com> Signed-off-by: Michael J. Ruhl <michael.j.ruhl@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Michael J. Ruhl
The in_use_ctxts bitmask is for user receive contexts only; setting it for any other type of receive context is incorrect. Move the initial setting of in_use_ctxts bits from the general context init to the user-context-specific init. Having this bit set can cause contexts to be incorrectly identified by some IRQ handlers; with this change handle_user_interrupt() filters user contexts correctly. Clean up the redundant user context check in is_rcv_urgent_int(). A follow-on patch will clean up an incorrect code path in is_rcv_avail_int(). Fixes: 8737ce95 ("IB/hfi1: Fix an assign/ordering issue with shared context IDs") Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Reviewed-by: Kamenee Arumugam <kamenee.arumugam@intel.com> Signed-off-by: Michael J. Ruhl <michael.j.ruhl@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Bart Van Assche
Avoid the compiler complaining about set-but-not-used variables when building with W=1. This patch does not change any functionality. Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com> Cc: Leon Romanovsky <leonro@mellanox.com> Acked-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
- 30 June 2018, 2 commits
-
-
Submitted by Yishai Hadas
Improve the uverbs_cleanup_ucontext algorithm so that it works properly when the topology graph of the objects cannot be determined at compile time. This is the case with objects created via the devx interface in mlx5. Typically uverbs objects must be created in a strict topologically sorted order, so LIFO ordering will generally cause them to be freed properly. There are only a few cases (e.g. memory windows) where objects can point to things out of the strict LIFO order. Instead of using an explicit ordering scheme where the HW destroy is not allowed to fail, go over the list multiple times and allow the destroy function to fail. If progress halts, a final, desperate cleanup is done before leaking the memory; this indicates a driver bug. Signed-off-by: Yishai Hadas <yishaih@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
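A hedged sketch of the retry strategy described above, with made-up types and an intrusive singly linked list standing in for the uobject list: sweep repeatedly, allow individual destroys to fail, and only force a final pass (then give up and leak) when a sweep makes no progress.

```c
#include <stdbool.h>
#include <stddef.h>

struct demo_uobj {
	struct demo_uobj *next;
	int (*destroy)(struct demo_uobj *obj, bool force);	/* may fail unless force */
};

static void demo_cleanup_ucontext(struct demo_uobj *head)
{
	bool force = false;

	while (head) {
		struct demo_uobj **pp = &head;
		bool made_progress = false;

		/* One pass over the remaining objects. */
		while (*pp) {
			struct demo_uobj *obj = *pp;

			if (obj->destroy(obj, force) == 0) {
				*pp = obj->next;	/* unlink destroyed object */
				made_progress = true;
			} else {
				pp = &obj->next;	/* leave it for a later pass */
			}
		}

		if (!made_progress) {
			if (force)
				break;		/* forced pass also failed: leak (driver bug) */
			force = true;		/* one last, desperate forced pass */
		}
	}
}
```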
-
Submitted by Leon Romanovsky
A failure to release one UAR doesn't mean that we can't continue to release the rest of the system pages, so don't return too early. As part of the cleanup, there is no need to print a warning if mlx5_cmd_free_uar() fails, because such a warning is already printed as part of mlx5_cmd_exec(). Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
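A minimal sketch of the control-flow change (illustrative names): an error while freeing one UAR no longer aborts the loop, so the remaining pages still get released, and no extra warning is printed at this level.

```c
/* Free every UAR index; do not return early if one free fails. */
static void demo_free_uars(const int *uar_index, int count,
			   int (*free_uar)(int index))
{
	int i;

	for (i = 0; i < count; i++)
		free_uar(uar_index[i]);	/* failures are logged by the lower layer */
}
```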
-
- 28 June 2018, 1 commit
-
-
Submitted by Yuval Shaia
This function is not in use - delete it. Signed-off-by: Yuval Shaia <yuval.shaia@oracle.com> Acked-by: Adit Ranadive <aditr@vmware.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
- 27 June 2018, 1 commit
-
-
Submitted by Jason Gunthorpe
Since slave GIDs do not exist in the core GID table, we can no longer use the core code to help do this without creating inconsistencies. Directly create the AH using mlx4 internal APIs. Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Reviewed-by: Jack Morgenstein <jackm@dev.mellanox.co.il> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
-
- 26 June 2018, 3 commits
-
-
Submitted by Jason Gunthorpe
usnic has a modified version of the core code's ib_umem_get() and related functions, and the copy misses many of the bug fixes done over the years:
Commit bc3e53f6 ("mm: distinguish between mlocked and pinned pages")
Commit 87773dd5 ("IB: ib_umem_release() should decrement mm->pinned_vm from ib_umem_get")
Commit 8494057a ("IB/uverbs: Prevent integer overflow in ib_umem_get address arithmetic")
Commit 8abaae62 ("IB/core: disallow registering 0-sized memory region")
Commit 66578b0b ("IB/core: don't disallow registering region starting at 0x0")
Commit 53376fed ("RDMA/core: not to set page dirty bit if it's already set.")
Commit 8e907ed4 ("IB/umem: Use the correct mm during ib_umem_release")
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Yishai Hadas
This patch follows the logic from ib_core but considers the internal device state when executing the involved commands. Specifically, when the device is in an internal error state, modifying a QP to the error state can be treated as a success, since every in-progress WR will be flushed in error anyway, which is exactly what that modify command expects. In addition, since the drain should never fail, the driver makes sure that post_send/recv will succeed even if the device is already in an internal error state; once the driver supplies the simulated/SW CQEs, the CQE for the drain WR will be handled as well. In an internal error state the CQE for the drain WR may be completed either as part of the main task that handles the error state or by the task that issued the drain WR. Because this depends on scheduling, the code takes the relevant locks and actions to make sure that the completion handler for that WR is always called after the post_send/recv was issued, and never in parallel with the other task that handles the error flow. Signed-off-by: Yishai Hadas <yishaih@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
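For context, a simplified sketch of the generic drain pattern in ib_core that this mlx5 logic mirrors (modelled on ib_drain_sq(); error handling and locking are omitted, and the mlx5-specific internal-error handling is not shown): move the QP to the error state, post a marker WR, and wait for its completion, which guarantees all earlier WRs have been flushed.

```c
#include <rdma/ib_verbs.h>
#include <linux/completion.h>

struct demo_drain_cqe {
	struct ib_cqe cqe;
	struct completion done;
};

static void demo_drain_done(struct ib_cq *cq, struct ib_wc *wc)
{
	struct demo_drain_cqe *d =
		container_of(wc->wr_cqe, struct demo_drain_cqe, cqe);

	complete(&d->done);
}

static void demo_drain_sq(struct ib_qp *qp)
{
	struct ib_qp_attr attr = { .qp_state = IB_QPS_ERR };
	struct demo_drain_cqe sdrain;
	struct ib_send_wr swr = {}, *bad_wr;

	init_completion(&sdrain.done);
	sdrain.cqe.done = demo_drain_done;
	swr.wr_cqe = &sdrain.cqe;
	swr.opcode = IB_WR_RDMA_WRITE;

	ib_modify_qp(qp, &attr, IB_QP_STATE);	/* all in-flight WRs flush in error */
	ib_post_send(qp, &swr, &bad_wr);	/* marker WR; its CQE signals "drained" */
	wait_for_completion(&sdrain.done);
}
```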
-
Submitted by Yishai Hadas
This patch follows the logic from ib_core but considers the internal device state when executing the involved commands. Specifically, when the device is in an internal error state, modifying a QP to the error state can be treated as a success, since every in-progress WR will be flushed in error anyway, which is exactly what that modify command expects. In addition, since the drain should never fail, the driver makes sure that post_send/recv will succeed even if the device is already in an internal error state; once the driver supplies the simulated/SW CQEs, the CQE for the drain WR will be handled as well. In an internal error state the CQE for the drain WR may be completed either as part of the main task that handles the error state or by the task that issued the drain WR. Because this depends on scheduling, the code takes the relevant locks and actions to make sure that the completion handler for that WR is always called after the post_send/recv was issued, and never in parallel with the other task that handles the error flow. Signed-off-by: Yishai Hadas <yishaih@mellanox.com> Reviewed-by: Max Gurtovoy <maxg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
- 22 June 2018, 8 commits
-
-
Submitted by Michael J. Ruhl
The INTx IRQ support does not work for all HFI1 IRQ handlers (specifically the receive data IRQs). Remove all supporting code for the INTx IRQ. If the MSI-X vector request is unsuccessful, do not allow the driver to continue. Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Reviewed-by: Kamenee Arumugam <kamenee.arumugam@intel.com> Reviewed-by: Sadanand Warrier <sadanand.warrier@intel.com> Signed-off-by: Michael J. Ruhl <michael.j.ruhl@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Mike Marciniszyn
Many fields in ctxtdata are incorrectly sized and the organization of the fields within the structure is a jumble. Fix by: correcting oversized fields; putting fields common to all contexts at the top, with hot fields first; and moving PSM fields to the bottom of the structure. Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com> Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Mike Marciniszyn
Remove the sizeable cache of the chip sizing CSRs and replace it with CSR reads as needed. Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com> Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Mike Marciniszyn
Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com> Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Mike Marciniszyn
Fields in this structure are sized excessively based on hardware limitations and input values. Fix by reducing fields as appropriate and repositioning them to close holes in the structure. Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com> Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Mike Marciniszyn
It is only ever written. Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com> Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Mike Marciniszyn
The usage of this ctxtdata field is not in the hot path, and the value can be computed on demand to cut down the ctxtdata bloat. Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com> Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Talat Batheesh
This patch adds support to query the counter that counts RoCE packets with corrupted ICRC (Invariant Cyclic Redundancy Code). The counter is exposed under /sys/class/infiniband/<mlx5-dev>/ports/<port>/hw_counters/rx_icrc_encapsulated - the number of RoCE packets with ICRC errors. Signed-off-by: Talat Batheesh <talatb@mellanox.com> Reviewed-by: Mark Bloch <markb@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
- 20 June 2018, 13 commits
-
-
Submitted by Leon Romanovsky
Put all relevant checks for the transport domain in the mlx5_ib_alloc/dealloc_transport_domain functions. Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Mike Marciniszyn
Move some s_flags defines out of rdmavt and into hfi1 because they are hfi1-specific and therefore should remain in the driver instead of bubbling up to rdmavt. Document device-specific ranges in rdmavt and remap those in hfi1. Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Kaike Wan <kaike.wan@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Mike Marciniszyn
The field is based on a constant that can never change. Use the define to assign the register instead. Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com> Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Mike Marciniszyn
This field should be in ctxtdata to allow for better locality of access by eliminating a dd dereference. The new field now sits side-by-side with rcvhdrqentsize, since rhf_offset is a function of rcvhdrqentsize. Both fields are now correctly sized as u8. Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com> Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Mike Marciniszyn
The current implementation precludes having receive-context-specific packet type receive handlers. Fix this by adding a C99 const array for the existing handlers and removing the current 72 bytes of pointers from devdata. A new pointer in hfi1_ctxtdata points to the const array. Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com> Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
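A hedged, self-contained sketch of the data-structure change (types and handler names are illustrative, not hfi1's): one shared const array of handlers, with each receive context holding a pointer to the table it should use.

```c
struct demo_packet;	/* opaque illustrative packet type */
typedef int (*demo_rcv_handler)(struct demo_packet *pkt);

static int demo_process_expected(struct demo_packet *pkt) { return 0; }
static int demo_process_eager(struct demo_packet *pkt) { return 0; }
static int demo_process_error(struct demo_packet *pkt) { return 0; }

/* One const table shared by everyone, instead of a pointer array per device. */
static const demo_rcv_handler demo_normal_handlers[] = {
	demo_process_expected,
	demo_process_eager,
	demo_process_error,
};

struct demo_ctxtdata {
	/* Each receive context can point at its own handler table. */
	const demo_rcv_handler *rcv_handlers;
};
```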
-
Submitted by Yishai Hadas
Expose the DEVX tree to be used by upper layers. Signed-off-by: Yishai Hadas <yishaih@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Yishai Hadas
Return the matching device EQN for a given user vector number via the DEVX interface. Note: EQs are owned by the kernel and shared by all user processes. Basically, a user CQ can point to any EQ; the kernel doesn't enforce any such limitation today either. Signed-off-by: Yishai Hadas <yishaih@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Yishai Hadas
Add support to register memory with the firmware via the DEVX interface. The driver translates a given user address to an ib_umem, registers the physical addresses with the firmware, and gets a unique id for this registration to be used for this virtual address. Signed-off-by: Yishai Hadas <yishaih@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Yishai Hadas
Return a device UAR index for a given user index via the DEVX interface. Security note: the hardware protection mechanism works like this. Each device object that is subject to UAR doorbells (QP/SQ/CQ) gets a UAR ID (called uar_page in the device specification manual) upon its creation. Upon a doorbell, hardware fetches the object context for which the doorbell was rung and validates that the UAR through which the DB was rung matches the UAR ID of the object. If they do not match, the doorbell is silently ignored by the hardware. Of course, the user cannot ring a doorbell on a UAR that was not mapped to it. Now in devx, as the devx kernel code does not manipulate the QP/SQ/CQ command mailboxes (except tagging them with the UID), we expose to the user its UAR ID so it can embed it in these objects in the expected specification format. So the only thing the user can do is hurt itself by creating a QP/SQ/CQ with a UAR ID other than its own, in which case other users may ring a doorbell on its objects. The consequence is that another user could schedule a QP/SQ of the buggy user for execution (just insert it into the hardware schedule queue or arm its CQ for event generation); no further harm is expected. Signed-off-by: Yishai Hadas <yishaih@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Yishai Hadas
Add support in DEVX for modify and query commands; the required lock (i.e. READ/WRITE) is taken by the KABI infrastructure accordingly. Signed-off-by: Yishai Hadas <yishaih@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Yishai Hadas
Add support to create and destroy firmware objects via the DEVX interface. Signed-off-by: Yishai Hadas <yishaih@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Yishai Hadas
Add support to run general firmware commands via the DEVX interface. Commands that work on an object (e.g. CQ, WQ, etc.) will be added in the next patches while maintaining the required object lock. Signed-off-by: Yishai Hadas <yishaih@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Yishai Hadas
Introduce DEVX to enable direct device commands in downstream patches of this series. In this mode of work the firmware manages the isolation between processes' resources, and as such a DEVX user id is created and assigned to the given user context upon allocation request. A capability check is done to make sure that this feature is really supported by the firmware before creating the DEVX user id. Signed-off-by: Yishai Hadas <yishaih@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
- 19 June 2018, 3 commits
-
-
Submitted by Bharat Potnuri
memcpy() of the mapped addresses is done twice in c4iw_create_listen(); remove the duplicate memcpy(). Fixes: 170003c8 ("iw_cxgb4: remove port mapper related code") Reviewed-by: Steve Wise <swise@opengridcomputing.com> Signed-off-by: Potnuri Bharat Teja <bharat@chelsio.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Steve Wise
This patch replaces ib_device_attr.max_sge with max_send_sge and max_recv_sge, allowing ULPs to take advantage of devices that have very different send and recv SGE depths. For example, cxgb4 has a max_recv_sge of 4, yet a max_send_sge of 16. Splitting out these attributes allows much more efficient use of the SQ for cxgb4 with ULPs that use the RDMA_RW API. Consider a large RDMA WRITE that has 16 scatter/gather entries: with max_sge of 4, the ULP would send 4 WRITE WRs, but with max_sge of 16 it can be done with 1 WRITE WR. Acked-by: Sagi Grimberg <sagi@grimberg.me> Acked-by: Christoph Hellwig <hch@lst.de> Acked-by: Selvin Xavier <selvin.xavier@broadcom.com> Acked-by: Shiraz Saleem <shiraz.saleem@intel.com> Acked-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Steve Wise <swise@opengridcomputing.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
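A minimal sketch of how a ULP consumes the split attributes (max_send_sge and max_recv_sge are the ib_device_attr fields this patch introduces; the helper itself is illustrative): size the send and receive queues independently instead of clamping both to one shared limit.

```c
#include <rdma/ib_verbs.h>

static void demo_fill_qp_caps(struct ib_qp_init_attr *init_attr,
			      const struct ib_device_attr *attr,
			      u32 wanted_send_sge, u32 wanted_recv_sge)
{
	/* e.g. cxgb4 reports max_send_sge == 16 but max_recv_sge == 4 */
	init_attr->cap.max_send_sge = min_t(u32, wanted_send_sge,
					    attr->max_send_sge);
	init_attr->cap.max_recv_sge = min_t(u32, wanted_recv_sge,
					    attr->max_recv_sge);
}
```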
-
Submitted by Parav Pandit
For UD, the drivers were doing an sgid_index lookup into the cache to get the attrs; however, we can now directly access the same attrs stored in the ib_ah instead and remove the lookup. Signed-off-by: Parav Pandit <parav@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
-