- 30 November 2018, 1 commit
-
-
Committed by Saeed Mahameed
Remove the deprecated mlx5_interface->event mlx5_ib callback and use the new mlx5 notifier API to subscribe to mlx5 events. For native mlx5_ib device profiles (pf_profile/nic_rep_profile), register the notifier callback mlx5_ib_handle_event, which treats the notifier context as an mlx5_ib_dev. For vport representors, don't register any notifier; as before, they do not receive any mlx5 events. For the slave port (mlx5_ib_multiport_info), register a different notifier callback, mlx5_ib_event_slave_port, which knows that the event is coming for an mlx5_ib_multiport_info and prepares the event job accordingly. Previously, in the event handler work we had to ask mlx5_core whether this is a slave port (mlx5_core_is_mp_slave(work->dev)); this is no longer needed. mlx5_ib_multiport_info notifier registration is done in mlx5_ib_bind_slave_port and de-registration in mlx5_ib_unbind_slave_port.

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
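As a rough illustration of the registration pattern described above (a sketch only; the exact mlx5_notifier_register()/mlx5_notifier_unregister() signatures and the mdev_events field name are assumptions based on this description):

    /* Sketch: subscribing a slave port to mlx5 events via a notifier block. */
    static int mlx5_ib_event_slave_port(struct notifier_block *nb,
                                        unsigned long event, void *data)
    {
        struct mlx5_ib_multiport_info *mpi =
            container_of(nb, struct mlx5_ib_multiport_info, mdev_events);

        /* Prepare and queue the event work, using mpi as the context. */
        return NOTIFY_OK;
    }

    static void bind_slave_port_sketch(struct mlx5_ib_multiport_info *mpi)
    {
        mpi->mdev_events.notifier_call = mlx5_ib_event_slave_port;
        mlx5_notifier_register(mpi->mdev, &mpi->mdev_events);
    }

    static void unbind_slave_port_sketch(struct mlx5_ib_multiport_info *mpi)
    {
        mlx5_notifier_unregister(mpi->mdev, &mpi->mdev_events);
    }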
-
- 21 November 2018, 2 commits
-
-
Committed by Saeed Mahameed
Use the new generic EQ API to move all ODP RDMA data structures and logic from the mlx5 core driver into the mlx5_ib driver.

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Acked-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
-
Committed by Saeed Mahameed
Move unnecessary EQ table structures and declarations from the public include/linux/mlx5/driver.h into the private area of mlx5_core and into eq.c/eq.h. Introduce new mlx5 EQ APIs:

mlx5_comp_vectors_count(dev);
mlx5_comp_irq_get_affinity_mask(dev, vector);

and use them from mlx5_ib or the mlx5e netdevice instead of accessing mlx5_core internal structures directly.

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
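A minimal sketch of how a consumer such as mlx5_ib or mlx5e might use the two new APIs instead of reaching into mlx5_core internals (return types are assumed from the description above):

    static void example_query_comp_vectors(struct mlx5_core_dev *mdev)
    {
        int nvec = mlx5_comp_vectors_count(mdev);
        int vector;

        for (vector = 0; vector < nvec; vector++) {
            struct cpumask *mask =
                mlx5_comp_irq_get_affinity_mask(mdev, vector);

            if (mask)
                pr_debug("comp vector %d: first CPU %u\n",
                         vector, cpumask_first(mask));
        }
    }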
-
- 13 November 2018, 2 commits
-
-
Committed by Moni Shoua
Add and modify debug messages for ODP-related error flows. In that context, the EAGAIN return code is considered less severe, and its print level is set to debug instead of warn.

Signed-off-by: Moni Shoua <monis@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
-
Committed by Moni Shoua
When a page fault event for a WQE arrives, the event data contains the resource (e.g. QP) number, which will later be used by the page fault handler to retrieve the resource. Meanwhile, another context can destroy the resource and cause a use-after-free. To avoid that, take a reference on the resource when the handler starts and release it when it ends. Page fault events for RDMA operations don't need to be protected because the driver doesn't need to access the QP in the page fault handler.

Fixes: d9aaed83 ("{net,IB}/mlx5: Refactor page fault handling")
Signed-off-by: Moni Shoua <monis@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
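The hold/release pattern might look roughly like the sketch below; res_lookup_and_hold(), res_put() and struct example_resource are hypothetical placeholders, not the actual mlx5 helpers:

    /* Placeholder resource type; the real code deals with QP/SRQ objects. */
    struct example_resource;
    struct example_resource *res_lookup_and_hold(struct mlx5_core_dev *dev,
                                                 u32 res_num); /* hypothetical */
    void res_put(struct example_resource *res);                /* hypothetical */

    static void wqe_pagefault_handler_sketch(struct mlx5_core_dev *dev, u32 res_num)
    {
        struct example_resource *res;

        res = res_lookup_and_hold(dev, res_num);  /* takes a reference */
        if (!res)
            return;  /* resource already destroyed; nothing to resolve */

        /* ... resolve the page fault using res; a concurrent destroy only
         * drops its own reference, so the object stays valid here ... */

        res_put(res);  /* release the reference taken above */
    }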
-
- 27 October 2018, 1 commit
-
-
Committed by Michal Hocko
Revert 5ff7091f ("mm, mmu_notifier: annotate mmu notifiers with blockable invalidate callbacks"). The MMU_INVALIDATE_DOES_NOT_BLOCK flag was the only one used, and it is no longer needed since 93065ac7 ("mm, oom: distinguish blockable mode for mmu notifiers"). We now have full support for per-range !blocking behavior, so we can drop the stop-gap workaround that the per-notifier flag was used for.

Link: http://lkml.kernel.org/r/20180827112623.8992-4-mhocko@kernel.org
Signed-off-by: Michal Hocko <mhocko@suse.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 19 October 2018, 1 commit
-
-
Committed by Tariq Toukan
Take struct mlx5_frag_buf out of mlx5_frag_buf_ctrl, as it is not needed to manage and control the datapath of the fragmented buffers API. struct mlx5_frag_buf contains control info to manage the allocation and de-allocation of the fragmented buffer. Its fields are not relevant for the datapath, so here I take them out of struct mlx5_frag_buf_ctrl, except for the fragments array itself. In addition, mlx5_fill_fbc is modified to initialise the frags pointers as well. This implies that the buffer must be allocated before the function is called. A set of type-specific *_get_byte_size() functions is replaced by a generic one.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
- 18 October 2018, 2 commits
-
-
Committed by Paul Blakey
If the no-append flag is set, we will add a new FTE instead of appending the actions of the inserted rule when the same match already exists. While here, move the has_flow_tag boolean indicator to be a flag too. This patch doesn't change any functionality.

Signed-off-by: Paul Blakey <paulb@mellanox.com>
Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
Reviewed-by: Mark Bloch <markb@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Committed by Mark Bloch
Currently, when a flow rule is created using the FS core layer, the caller has to pass the entire flow counter object and not just the counter HW handle (ID). This requires both the FS core and the caller to have knowledge of the inner implementation of the FS layer flow counters cache, and it limits the possible users. Move to using the counter ID everywhere when dealing with flows. With this decoupling we can now privatize the inner implementation of the flow counters.

Signed-off-by: Mark Bloch <markb@mellanox.com>
Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
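A sketch of what a caller might look like after the decoupling, assuming a counter-ID destination field and an mlx5_fc_id() accessor as implied by the description:

    static struct mlx5_flow_handle *
    add_counted_rule_sketch(struct mlx5_flow_table *ft,
                            struct mlx5_flow_spec *spec,
                            struct mlx5_flow_act *flow_act,
                            struct mlx5_fc *counter)
    {
        struct mlx5_flow_destination dest = {};

        /* Only the HW counter ID crosses the FS core boundary now. */
        dest.type = MLX5_FLOW_DESTINATION_TYPE_COUNTER;
        dest.counter_id = mlx5_fc_id(counter);
        flow_act->action |= MLX5_FLOW_CONTEXT_ACTION_COUNT;

        return mlx5_add_flow_rules(ft, spec, flow_act, &dest, 1);
    }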
-
- 17 October 2018, 8 commits
-
-
Committed by Yonatan Cohen
Extended atomic operations cmp&swp and fetch&add are a Mellanox feature extending the standard atomic operations to use varied operand sizes, as opposed to normal atomic operations that use an 8-byte operand only. Extended atomics allow masking the results and arguments. This patch configures the QP to support extended atomic operations with the maximum size possible, as exposed by the HCA capabilities.

Signed-off-by: Yonatan Cohen <yonatanc@mellanox.com>
Reviewed-by: Guy Levi <guyle@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Committed by Yonatan Cohen
Requester scatter to CQE is restricted to QPs configured to signal all WRs. This patch adds the ability to enable scatter to CQE (force enable) in the requester without sig_all, for users who do not want all WRs signaled but rather just the ones whose data is found in the CQE.

Signed-off-by: Yonatan Cohen <yonatanc@mellanox.com>
Reviewed-by: Guy Levi <guyle@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Committed by Yonatan Cohen
Flags sent down from user space might not be supported by the running driver. This might lead to unwanted bugs. To solve this, add a macro to test for unsupported flags.

Signed-off-by: Yonatan Cohen <yonatanc@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
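Such a test reduces to checking for any bit outside the supported mask; the macro and function names below are illustrative, not necessarily the ones added by the patch:

    /* True if 'flags' contains any bit not present in 'supported'. */
    #define HAS_UNSUPPORTED_FLAGS(flags, supported) (((flags) & ~(supported)) != 0)

    static int check_create_flags_sketch(u64 flags, u64 supported)
    {
        if (HAS_UNSUPPORTED_FLAGS(flags, supported))
            return -EOPNOTSUPP;  /* reject unknown bits coming from user space */
        return 0;
    }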
-
Committed by Yonatan Cohen
Scatter to CQE is a HW offload that saves PCI writes by scattering the payload into the CQE. This patch extends the already existing functionality to support the DC transport type.

Signed-off-by: Yonatan Cohen <yonatanc@mellanox.com>
Reviewed-by: Guy Levi <guyle@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Committed by Parav Pandit
Use rdma_set_device_sysfs_group() to register device attributes and simplify the driver.

Signed-off-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Håkon Bugge
Add said information and make the debug print format consistent.

Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com>
Acked-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Committed by Håkon Bugge
IB Subnet Management Packets (SMPs) were excluded from debug prints. Fixed by enabling the print even for QP0 MADs.

Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Committed by Parav Pandit
The iov sysfs tree is created under the ib device at /sys/class/infiniband/mlx4_0/iov, and ibdev->ports_parent->parent = &ibdev->dev. Therefore, refer to the device's kobject directly instead of accessing it indirectly. Additionally, iov entries are created under the device kobject and deleted before the device is removed, so there is no need to hold an additional reference to the device kobject in the provider driver.

Signed-off-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
- 16 October 2018, 18 commits
-
-
Committed by Artemy Kovalyov
Schedule MR cache work only after the bucket was initialized.

Cc: <stable@vger.kernel.org> # 4.10
Fixes: 49780d42 ("IB/mlx5: Expose MR cache for mlx5_ib")
Signed-off-by: Artemy Kovalyov <artemyko@mellanox.com>
Reviewed-by: Majd Dibbiny <majd@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Mark Bloch
A user can provide a hint that will be attached to the packet and written to the CQE on receive. This can be used as a way to offload operations into the HW, for example parsing a packet to check whether it is a tunneled packet and, if so, passing 0x1 as the hint. The software can then use that hint to decapsulate the packet and parse only the inner headers, thus saving CPU cycles.

Signed-off-by: Mark Bloch <markb@mellanox.com>
Reviewed-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Gal Pressman
Remove a double error check from the create user RQ error flow.

Fixes: 79b20a6c ("IB/mlx5: Add receive Work Queue verbs")
Signed-off-by: Gal Pressman <pressmangal@gmail.com>
Reviewed-by: Majd Dibbiny <majd@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Yishai Hadas
Verify that the input DEVX object type matches the created object. As the obj_id in the firmware is not globally unique, the object type must be considered when checking for a valid object id. Once both the type and the id match, we know that the lock was taken on the correct object by the uverbs layer.

Fixes: e662e14d ("IB/mlx5: Add DEVX support for modify and query commands")
Signed-off-by: Yishai Hadas <yishaih@mellanox.com>
Reviewed-by: Artemy Kovalyov <artemyko@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Yixian Liu
This patch adds fast register physical memory region (FRMR) support for hip08.

Signed-off-by: Yixian Liu <liuyixian@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Selvin Xavier
In case the NQ alloc/enable fails, free up the already allocated/enabled NQs before reporting failure. Also, track the alloc/enable state with proper state checking.

Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Selvin Xavier
The delayed work bnxt_re_worker could still be running even after cancel_delayed_work returns. This causes a crash as the driver proceeds with device removal. To make sure the work has finished before returning, use cancel_delayed_work_sync.

Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
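A sketch of the removal path after the change; the rdev->worker field name is an assumption:

    static void bnxt_re_remove_sketch(struct bnxt_re_dev *rdev)
    {
        /* cancel_delayed_work(&rdev->worker) may return while bnxt_re_worker()
         * is still executing on another CPU; tearing down resources right
         * after it risks a use-after-free. The _sync variant waits for any
         * running invocation to finish. */
        cancel_delayed_work_sync(&rdev->worker);

        /* Now it is safe to free resources the worker touches. */
    }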
-
Committed by Devesh Sharma
Some FW versions return pkey values larger than 0xFFFF, while pkey_tbl_len of ib_port_attr is a 16-bit value. So restrict max_pkeys to 0xFFFF.

Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
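The fix amounts to a simple clamp when reporting device attributes; a sketch, with the FW-side parameter name kept generic:

    static void set_max_pkeys_sketch(struct ib_device_attr *attr, u32 fw_max_pkeys)
    {
        /* pkey_tbl_len in ib_port_attr is only 16 bits wide, so any value
         * above 0xFFFF coming from firmware must be capped before it is
         * reported to the IB core. */
        attr->max_pkeys = min_t(u32, fw_max_pkeys, 0xFFFF);
    }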
-
Committed by Devesh Sharma
Report affiliated async events on the qp-async event channel instead of the global event channel.

Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Selvin Xavier
Expose out-of-sequence errors received from FW. This counter is a 32-bit counter and the driver has to accumulate it; store the previous value for calculating the difference in the next query. Also, update the HW statistics structure with the new fields.

Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
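A sketch of the accumulation described above, using placeholder field names:

    struct example_stats {
        u64 oos_errors;          /* accumulated value reported to the stack */
        u32 prev_fw_oos_count;   /* last raw 32-bit value read from FW */
    };

    static void accumulate_oos_errors_sketch(struct example_stats *stats,
                                             u32 fw_oos_count)
    {
        /* Unsigned 32-bit subtraction yields the correct delta even across
         * a single wrap-around of the FW counter between queries. */
        stats->oos_errors += (u32)(fw_oos_count - stats->prev_fw_oos_count);
        stats->prev_fw_oos_count = fw_oos_count;
    }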
-
Committed by Selvin Xavier
Expose the RoCE discard and drop counters from the HW statistics context.

Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Somnath Kotur
crsqe->resp will be NULL in case the host command timed out before getting a response from HW. Check for a NULL pointer to avoid a potential crash while printing the error message.

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
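A sketch of the guarded error print; the structure layout and message text are placeholders for the pattern:

    struct example_resp {
        u32 cookie;
    };

    struct example_crsqe {
        struct example_resp *resp;   /* NULL if the command timed out */
    };

    static void report_cmd_error_sketch(struct device *dev,
                                        struct example_crsqe *crsqe)
    {
        if (!crsqe->resp) {
            /* Command timed out before HW replied; resp was never filled in. */
            dev_err(dev, "command timed out, no response from HW\n");
            return;
        }
        dev_err(dev, "command failed, response cookie 0x%x\n",
                crsqe->resp->cookie);
    }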
-
Committed by Devesh Sharma
With some FW versions, the RoCE driver also receives async notifications that were directed to the L2 driver. The RoCE driver does not handle these and prints a message to syslog. Drop these notifications silently.

Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Selvin Xavier
In the failure path, nq->bar_reg_iomem gets accessed without being initialized. Avoid this by calling bnxt_qplib_nq_stop_irq only if the initialization is complete.

Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Fixes: 1ac5a404 ("RDMA/bnxt_re: Add bnxt_re RoCE driver")
Fixes: 6e04b103 ("RDMA/bnxt_re: Fix broken RoCE driver due to recent L2 driver changes")
Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
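A sketch of the guard; the flag passed in stands for whatever state the driver actually tracks, and the bnxt_qplib_nq_stop_irq() signature is an assumption:

    static void nq_cleanup_sketch(struct bnxt_qplib_nq *nq, bool irq_initialized)
    {
        /* Skip the teardown if the IRQ/BAR setup never completed, so
         * nq->bar_reg_iomem is never touched before it was initialized. */
        if (irq_initialized)
            bnxt_qplib_nq_stop_irq(nq, true);
    }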
-
Committed by Selvin Xavier
This was reported by a smatch check: rcfw->creq_bar_reg_iomem is accessed in bnxt_qplib_rcfw_stop_irq, so checking this variable afterwards doesn't make sense. Also, rcfw->creq_bar_reg_iomem will never be NULL, so remove this check.

Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Fixes: 6e04b103 ("RDMA/bnxt_re: Fix broken RoCE driver due to recent L2 driver changes")
Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Selvin Xavier
The version macro is not required as the driver does not maintain a version. Remove the references to this macro too.

Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Selvin Xavier
Fix a possible recursive lock warning. It is a false warning, as the locks are part of two different HW queue data structures - cmdq and creq. A debug kernel throws the following warning and stack trace:

[  783.914967] ============================================
[  783.914970] WARNING: possible recursive locking detected
[  783.914973] 4.19.0-rc2+ #33 Not tainted
[  783.914976] --------------------------------------------
[  783.914979] swapper/2/0 is trying to acquire lock:
[  783.914982] 000000002aa3949d (&(&hwq->lock)->rlock){..-.}, at: bnxt_qplib_service_creq+0x232/0x350 [bnxt_re]
[  783.914999] but task is already holding lock:
[  783.915002] 00000000be73920d (&(&hwq->lock)->rlock){..-.}, at: bnxt_qplib_service_creq+0x2a/0x350 [bnxt_re]
[  783.915013] other info that might help us debug this:
[  783.915016] Possible unsafe locking scenario:
[  783.915019] CPU0
[  783.915021] ----
[  783.915034] lock(&(&hwq->lock)->rlock);
[  783.915035] lock(&(&hwq->lock)->rlock);
[  783.915037] *** DEADLOCK ***
[  783.915038] May be due to missing lock nesting notation
[  783.915039] 1 lock held by swapper/2/0:
[  783.915040] #0: 00000000be73920d (&(&hwq->lock)->rlock){..-.}, at: bnxt_qplib_service_creq+0x2a/0x350 [bnxt_re]
[  783.915044] stack backtrace:
[  783.915046] CPU: 2 PID: 0 Comm: swapper/2 Not tainted 4.19.0-rc2+ #33
[  783.915047] Hardware name: Dell Inc. PowerEdge R730/0599V5, BIOS 1.0.4 08/28/2014
[  783.915048] Call Trace:
[  783.915049] <IRQ>
[  783.915054] dump_stack+0x90/0xe3
[  783.915058] __lock_acquire+0x106c/0x1080
[  783.915061] ? sched_clock+0x5/0x10
[  783.915063] lock_acquire+0xbd/0x1a0
[  783.915065] ? bnxt_qplib_service_creq+0x232/0x350 [bnxt_re]
[  783.915069] _raw_spin_lock_irqsave+0x4a/0x90
[  783.915071] ? bnxt_qplib_service_creq+0x232/0x350 [bnxt_re]
[  783.915073] bnxt_qplib_service_creq+0x232/0x350 [bnxt_re]
[  783.915078] tasklet_action_common.isra.17+0x197/0x1b0
[  783.915081] __do_softirq+0xcb/0x3a6
[  783.915084] irq_exit+0xe9/0x100
[  783.915085] do_IRQ+0x6a/0x120
[  783.915087] common_interrupt+0xf/0xf
[  783.915088] </IRQ>

Use nested notation for the spin_lock to avoid this warning.

Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
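The nesting annotation amounts to taking the second lock with a lockdep subclass; a sketch, with field names assumed from the trace above:

    static void service_creq_sketch(struct bnxt_qplib_hwq *creq_hwq)
    {
        unsigned long flags;

        /* SINGLE_DEPTH_NESTING tells lockdep that this hwq->lock is a
         * different lock instance than the cmdq hwq->lock already held,
         * so the "recursive locking" report is suppressed. */
        spin_lock_irqsave_nested(&creq_hwq->lock, flags, SINGLE_DEPTH_NESTING);
        /* ... consume CREQ entries ... */
        spin_unlock_irqrestore(&creq_hwq->lock, flags);
    }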
-
Committed by Selvin Xavier
Add the missing initialization of cq_lock and qplib.flush_lock.

Fixes: 942c9b6c ("RDMA/bnxt_re: Avoid Hard lockup during error CQE processing")
Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
- 11 October 2018, 3 commits
-
-
Committed by Denis Drozdov
netdev has several interfaces that expect alloc_netdev_mqs to be called from the core code, with the driver only providing the arguments. This is incompatible with the rdma_netdev interface that returns the netdev directly. Thus, re-organize the API used by ipoib so that the verbs core code calls alloc_netdev_mqs for the driver. This is done by allowing drivers to provide the allocation parameters via a 'get_params' callback and then initializing an already-allocated netdev as a second step.

Fixes: cd565b4b ("IB/IPoIB: Support acceleration options callbacks")
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Denis Drozdov <denisd@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Committed by Christoph Hellwig
Only some of these were still used by the cxgb4 driver, and that despite the fact that the driver otherwise uses the generic DMA API.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
-
Committed by Valentine Fatiev
The function that puts the MR back in the cache also removes the DMA address from the HCA. Therefore we need to call this function before we remove the DMA mapping from the MMU. Otherwise the HCA may access memory that is no longer DMA mapped.

Call trace:
NMI: IOCK error (debug interrupt?) for reason 71 on CPU 0.
CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.19.0-rc6+ #4
Hardware name: HP ProLiant DL360p Gen8, BIOS P71 08/20/2012
RIP: 0010:intel_idle+0x73/0x120
Code: 80 5c 01 00 0f ae 38 0f ae f0 31 d2 65 48 8b 04 25 80 5c 01 00 48 89 d1 0f 60 02
RSP: 0018:ffffffff9a403e38 EFLAGS: 00000046
RAX: 0000000000000030 RBX: 0000000000000005 RCX: 0000000000000001
RDX: 0000000000000000 RSI: ffffffff9a5790c0 RDI: 0000000000000000
RBP: 0000000000000030 R08: 0000000000000000 R09: 0000000000007cf9
R10: 000000000000030a R11: 0000000000000018 R12: 0000000000000000
R13: ffffffff9a5792b8 R14: ffffffff9a5790c0 R15: 0000002b48471e4d
FS:  0000000000000000(0000) GS:ffff9c6caf400000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f5737185000 CR3: 0000000590c0a002 CR4: 00000000000606f0
Call Trace:
 cpuidle_enter_state+0x7e/0x2e0
 do_idle+0x1ed/0x290
 cpu_startup_entry+0x6f/0x80
 start_kernel+0x524/0x544
 ? set_init_arg+0x55/0x55
 secondary_startup_64+0xa4/0xb0
DMAR: DRHD: handling fault status reg 2
DMAR: [DMA Read] Request device [04:00.0] fault addr b34d2000 [fault reason 06] PTE Read access is not set
DMAR: [DMA Read] Request device [01:00.2] fault addr bff8b000 [fault reason 06] PTE Read access is not set

Fixes: f3f134f5 ("RDMA/mlx5: Fix crash while accessing garbage pointer and freed memory")
Signed-off-by: Valentine Fatiev <valentinef@mellanox.com>
Reviewed-by: Moni Shoua <monis@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
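A sketch of the required ordering; whether these are the exact calls touched by the patch is an assumption, the point is the order of the two steps:

    static void release_odp_mr_sketch(struct mlx5_ib_dev *dev,
                                      struct mlx5_ib_mr *mr)
    {
        /* 1) Return the MR to the cache first, so the HCA stops referencing
         *    the DMA addresses of the pages. */
        mlx5_mr_cache_free(dev, mr);

        /* 2) Only now is it safe to release the umem and drop the DMA
         *    mapping from the IOMMU/MMU. */
        ib_umem_release(mr->umem);
    }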
-
- 04 October 2018, 2 commits
-
-
This patch moves ruc_loopback() from hfi1 into rdmavt for code sharing with the qib driver.

Reviewed-by: Brian Welty <brian.welty@intel.com>
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Venkata Sandeep Dhanalakota <venkata.s.dhanalakota@intel.com>
Signed-off-by: Harish Chegondi <harish.chegondi@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Moving send completion code into rdmavt in order to have shared logic between the qib and hfi1 drivers.

Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Reviewed-by: Brian Welty <brian.welty@intel.com>
Signed-off-by: Venkata Sandeep Dhanalakota <venkata.s.dhanalakota@intel.com>
Signed-off-by: Harish Chegondi <harish.chegondi@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-