- 21 May 2015, 2 commits

Committed by Ira Weiny
Remove the query_protocol callback. Use the new core capability bits for:

    rdma_protocol_*
    rdma_cap_ib_mad
    rdma_cap_ib_smi
    rdma_cap_ib_cm
    rdma_cap_iw_cm
    rdma_cap_ib_sa
    rdma_cap_ib_mcast
    rdma_cap_af_ib
    rdma_cap_eth_ah

Signed-off-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
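Each of these rdma_cap_*() helpers reduces to a bit test against the per-port immutable core_cap_flags, so no driver callback is needed. A minimal sketch of the pattern; the flag value and the _sketch names are illustrative assumptions, not the kernel's definitions:

    #include <rdma/ib_verbs.h>

    /* Illustrative flag value; the real RDMA_CORE_CAP_* constants
     * live in the rdma headers. */
    #define RDMA_CORE_CAP_IB_MAD_SKETCH 0x00000001

    /* Sketch: a capability query is a simple bit test on immutable,
     * per-port data -- no per-driver callback required. */
    static inline bool rdma_cap_ib_mad_sketch(const struct ib_device *device,
                                              u8 port_num)
    {
        return device->port_immutable[port_num].core_cap_flags &
               RDMA_CORE_CAP_IB_MAD_SKETCH;
    }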

Committed by Ira Weiny
As of commit 5eb620c8 ("IB/core: Add helpers for uncached GID and P_Key searches"), pkey_tbl_len and gid_tbl_len are immutable data stored in the ib_device. The per-port core capability flags to be added later are also immutable data to be stored in the ib_device object.

In preparation for this, create a structure for per-port immutable data and place the pkey and gid table lengths within it. "get_port_immutable" is added as a mandatory device function to allow the drivers to fill in this data.

Signed-off-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
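A sketch of the shape this takes. The struct fields mirror the commit description, while the driver hook below (and its reuse of the existing query_port path) is an assumption for illustration:

    #include <rdma/ib_verbs.h>

    /* Per-port immutable data, per the commit description. */
    struct ib_port_immutable_sketch {
        int pkey_tbl_len;
        int gid_tbl_len;
        u32 core_cap_flags;  /* filled in by the later capability patches */
    };

    /* Sketch of a driver's mandatory get_port_immutable hook: derive
     * the immutable values once from the existing port query path. */
    static int qib_port_immutable_sketch(struct ib_device *ibdev, u8 port_num,
                                         struct ib_port_immutable_sketch *immutable)
    {
        struct ib_port_attr attr;
        int err;

        err = ibdev->query_port(ibdev, port_num, &attr);
        if (err)
            return err;

        immutable->pkey_tbl_len = attr.pkey_tbl_len;
        immutable->gid_tbl_len = attr.gid_tbl_len;
        return 0;
    }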

- 19 May 2015, 1 commit

Committed by Michael Wang
Add new callback query_protocol() and implement it for each HW.

Mapping list:

    driver    node-type  link-layer  transport  protocol
    nes       RNIC       ETH         IWARP      IWARP
    amso1100  RNIC       ETH         IWARP      IWARP
    cxgb3     RNIC       ETH         IWARP      IWARP
    cxgb4     RNIC       ETH         IWARP      IWARP
    usnic     USNIC_UDP  ETH         USNIC_UDP  USNIC_UDP
    ocrdma    IB_CA      ETH         IB         IBOE
    mlx4      IB_CA      IB/ETH      IB         IB/IBOE
    mlx5      IB_CA      IB          IB         IB
    ehca      IB_CA      IB          IB         IB
    ipath     IB_CA      IB          IB         IB
    mthca     IB_CA      IB          IB         IB
    qib       IB_CA      IB          IB         IB

Signed-off-by: Michael Wang <yun.wang@profitbricks.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Tested-by: Ira Weiny <ira.weiny@intel.com>
Reviewed-by: Sean Hefty <sean.hefty@intel.com>
Reviewed-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Tested-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
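For a single-protocol device like qib the new callback is a one-liner. A sketch consistent with the table above (enum spelling follows the commit's rdma_protocol_* convention; treat the exact identifiers as assumptions):

    #include <rdma/ib_verbs.h>

    /* Sketch: qib speaks InfiniBand only, so query_protocol() simply
     * returns the IB protocol constant regardless of port. */
    static enum rdma_protocol_type
    qib_query_protocol(struct ib_device *device, u8 port_num)
    {
        return RDMA_PROTOCOL_IB;
    }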

- 21 February 2015, 2 commits

Committed by Mike Marciniszyn
Upstream checkpatch now requires this.

Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>

Committed by Mike Marciniszyn
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>

- 18 February 2015, 1 commit

Committed by Mike Marciniszyn
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>

- 18 March 2014, 2 commits

Committed by Yann Droneaud
Commit af061a64 ("IB/qib: Use RCU for qpn lookup") added some code in qib_ib_rcv() which triggers a warning from coccicheck (coccinelle/spatch):

    $ make C=2 CHECK=scripts/coccicheck drivers/infiniband/hw/qib/
      CHECK   drivers/infiniband/hw/qib/qib_verbs.c
    drivers/infiniband/hw/qib/qib_verbs.c:679:5-32: code aligned with following code on line 681
      CC [M]  drivers/infiniband/hw/qib/qib_verbs.o

In fact, according to similar code in qib_kreceive(), the qib_ib_rcv() code is correct but improperly indented. This patch fixes the indentation of the misaligned portion.

Link: http://marc.info/?i=cover.1394485254.git.ydroneaud@opteya.com
Cc: Mike Marciniszyn <mike.marciniszyn@intel.com>
Cc: infinipath@intel.com
Cc: Julia Lawall <julia.lawall@lip6.fr>
Cc: cocci@systeme.lip6.fr
Signed-off-by: Yann Droneaud <ydroneaud@opteya.com>
Tested-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Acked-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>

Committed by Mike Marciniszyn
The counters unicast_xmit, unicast_rcv, multicast_xmit, and multicast_rcv are now maintained as percpu variables. The MAD code is modified to add a z_ latch so that the percpu counters monotonically increase, with appropriate adjustments in the reset and read logic to maintain the z_ latch.

This patch also corrects the fact that unicast_xmit wasn't handled at all for UC and RC QPs.

Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
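The z_ latch idea: the percpu counters only ever increase, so a "reset" just latches the current aggregate and later reads report the delta. A sketch of the pattern with illustrative names, not the driver's actual symbols:

    #include <linux/percpu.h>

    struct latched_counter {
        u64 __percpu *cntr;  /* monotonically increasing, per-CPU */
        u64 z;               /* aggregate latched at last reset */
    };

    static u64 counter_sum(struct latched_counter *c)
    {
        u64 sum = 0;
        int cpu;

        for_each_possible_cpu(cpu)
            sum += *per_cpu_ptr(c->cntr, cpu);
        return sum;
    }

    /* Read reports growth since the last reset. */
    static u64 counter_read(struct latched_counter *c)
    {
        return counter_sum(c) - c->z;
    }

    /* Reset never touches the per-CPU cells; it only moves the latch,
     * so concurrent increments are never lost. */
    static void counter_reset(struct latched_counter *c)
    {
        c->z = counter_sum(c);
    }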

- 22 June 2013, 1 commit

Committed by Mike Marciniszyn
This fix changes the opcode-relative receive counters to per-context counters. Profiling has shown that when multiple contexts are being used there is a lot of cache activity associated with these counters. The code formerly kept these counters per port, but only provided an interface to read them per HCA.

This patch converts the read of the counters to per HCA and adds debugfs hooks to be able to read the file as a sequence of opcodes.

Reviewed-by: Dean Luick <dean.luick@intel.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
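Such debugfs hooks typically follow the seq_file iterator pattern, one opcode per step. A hedged sketch; the table size and the qib_opcode_count() accessor are assumptions, not the driver's real symbols:

    #include <linux/debugfs.h>
    #include <linux/seq_file.h>

    #define NUM_OPCODES 128  /* illustrative opcode table size */

    /* Assumed accessor: sum one opcode's receive count across contexts. */
    extern u64 qib_opcode_count(int opcode);

    static void *opcode_seq_start(struct seq_file *s, loff_t *pos)
    {
        return (*pos < NUM_OPCODES) ? pos : NULL;
    }

    static void *opcode_seq_next(struct seq_file *s, void *v, loff_t *pos)
    {
        ++*pos;
        return (*pos < NUM_OPCODES) ? pos : NULL;
    }

    static void opcode_seq_stop(struct seq_file *s, void *v)
    {
    }

    static int opcode_seq_show(struct seq_file *s, void *v)
    {
        int opcode = *(loff_t *)v;

        seq_printf(s, "%02x %llu\n", opcode,
                   (unsigned long long)qib_opcode_count(opcode));
        return 0;
    }

    static const struct seq_operations opcode_seq_ops = {
        .start = opcode_seq_start,
        .next  = opcode_seq_next,
        .stop  = opcode_seq_stop,
        .show  = opcode_seq_show,
    };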

- 17 April 2013, 1 commit

Committed by Mike Marciniszyn
qib_verbs_register_sysfs() never cleans up after a failure. Additionally, the caller of qib_verbs_register_sysfs() doesn't return the correct "ret" value. This patch resolves both of those issues.

Reported-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Reviewed-by: Dean Luick <dean.luick@intel.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
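The usual shape of this class of fix: unwind whatever was created before the failing step and propagate ret. A sketch under assumed names (the attribute array and its size are illustrative):

    #include <linux/device.h>
    #include <rdma/ib_verbs.h>

    #define QIB_NUM_ATTRS 4  /* illustrative */

    /* Assumed: the driver's array of sysfs attributes. */
    extern struct device_attribute *qib_attributes[QIB_NUM_ATTRS];

    static int qib_verbs_register_sysfs_sketch(struct ib_device *ibdev)
    {
        int i, ret;

        for (i = 0; i < QIB_NUM_ATTRS; i++) {
            ret = device_create_file(&ibdev->dev, qib_attributes[i]);
            if (ret)
                goto bail;
        }
        return 0;

    bail:
        /* Remove the files created before the failure. */
        while (--i >= 0)
            device_remove_file(&ibdev->dev, qib_attributes[i]);
        return ret;
    }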

- 23 March 2013, 1 commit

Committed by Vinit Agnihotri
These changes modify the qib driver as part of the acquisition of the InfiniBand assets of QLogic.

Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Reviewed-by: Dean Luick <dean.luick@intel.com>
Signed-off-by: Vinit Agnihotri <vinit.abhay.agnihotri@intel.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>

- 01 October 2012, 1 commit

Committed by Dean Luick
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Dean Luick <dean.luick@intel.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>

- 20 July 2012, 1 commit

Committed by Mike Marciniszyn
Profiling has shown that sdma_lock is proving a bottleneck for performance. The situations include:

- RDMA reads when krcvqs > 1
- post sends from multiple threads

For RDMA reads, the current global qib_wq mechanism runs on all CPUs and contends for the sdma_lock when multiple RDMA read requests are fielded on different CPUs. For post sends, the direct call to qib_do_send() from multiple threads causes the contention.

Since the sdma mechanism is per port, this fix converts the existing workqueue to a per-port single-thread workqueue to reduce the lock contention in the RDMA read case, and for any other case where the QP is scheduled via the workqueue mechanism from more than one CPU.

For the post send case, this patch modifies the post send code to test for a non-empty sdma engine. If the sdma is not idle, the (now single-thread) workqueue will be used to trigger the send engine instead of the direct call to qib_do_send().

Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
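Setting up the per-port single-thread workqueue is the core of the change. A sketch, with a minimal stand-in for the driver's per-port structure (field names are assumptions):

    #include <linux/kernel.h>
    #include <linux/workqueue.h>

    /* Minimal stand-in for the driver's per-port data. */
    struct qib_pportdata_sketch {
        struct workqueue_struct *qib_wq;
    };

    /* Sketch: one single-threaded workqueue per port, so all QP work
     * items for a port serialize on one kthread instead of contending
     * for sdma_lock from every CPU. */
    static int qib_create_port_wq(struct qib_pportdata_sketch *ppd,
                                  int unit, int port)
    {
        char wq_name[16];

        snprintf(wq_name, sizeof(wq_name), "qib%d_%d", unit, port);
        ppd->qib_wq = create_singlethread_workqueue(wq_name);
        if (!ppd->qib_wq)
            return -ENOMEM;
        return 0;
    }

The post-send path then becomes, roughly: if the sdma descriptor queue is non-empty, queue the QP's send work on ppd->qib_wq instead of calling qib_do_send() directly.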

- 18 July 2012, 1 commit

Committed by Mike Marciniszyn
Commit af061a64 ("IB/qib: Use RCU for qpn lookup") introduced sparse warnings. This patch corrects those issues.

Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>

- 09 July 2012, 2 commits

Committed by Mike Marciniszyn
Profiling indicates that MR validation locking is expensive. The MR table is largely read-only and is a suitable candidate for RCU locking. The patch uses RCU locking during validation to eliminate one lock/unlock during that validation.

Reviewed-by: Mike Heinz <michael.william.heinz@intel.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
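The validation fast path this enables looks roughly like the following sketch, with a simplified table layout (the real lk_table differs in detail):

    #include <linux/atomic.h>
    #include <linux/rcupdate.h>

    struct mregion_sketch {
        u32 lkey;
        atomic_t refcount;
    };

    struct lkey_table_sketch {
        u32 shift;                            /* log2 of table size */
        struct mregion_sketch __rcu **table;
    };

    /* Sketch: validate an lkey against the read-mostly MR table under
     * rcu_read_lock() instead of taking a spinlock per lookup. */
    static struct mregion_sketch *mr_validate(struct lkey_table_sketch *rkt,
                                              u32 lkey)
    {
        struct mregion_sketch *mr;

        rcu_read_lock();
        mr = rcu_dereference(rkt->table[lkey >> (32 - rkt->shift)]);
        if (!mr || mr->lkey != lkey) {
            rcu_read_unlock();
            return NULL;
        }
        atomic_inc(&mr->refcount);  /* pin before leaving the RCU section */
        rcu_read_unlock();
        return mr;
    }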

Committed by Mike Marciniszyn
A timing issue can occur where qib_mr_dereg() can return -EBUSY if the MR use count is not zero. This can occur if the MR is de-registered while RDMA read response packets are being progressed from the SDMA ring. The suspicion is that the peer sent an RDMA read request whose response data has already been copied across to the peer; the peer sees the completion of its request and then tells the responder that the MR is no longer needed. The responder tries to de-register the MR while some responses remain in the SDMA ring, still holding the MR use count.

The code now uses a get/put paradigm to track MR use counts and coordinates with the MR de-registration process using a completion when the count has reached zero. A timeout on the delay is in place to catch other EBUSY issues.

The reference count protocol is as follows:

- The return to the user counts as 1.
- A reference from the lk_table or the qib_ibdev counts as 1.
- Transient I/O operations increase/decrease as necessary.

A lot of code duplication has been folded into the new routines init_qib_mregion() and deinit_qib_mregion(). Additionally, explicit initialization of fields to zero is now handled by kzalloc(). Also, duplicated 'while.*num_sge' code that decrements reference counts has been consolidated into qib_put_ss().

Reviewed-by: Ramkrishna Vepa <ramkrishna.vepa@intel.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
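The get/put-plus-completion protocol described above, as a sketch (routine names follow the commit text; the struct layout and timeout value are assumptions):

    #include <linux/atomic.h>
    #include <linux/completion.h>
    #include <linux/jiffies.h>

    struct qib_mregion_sketch {
        atomic_t refcount;
        struct completion comp;  /* fired when refcount reaches zero */
    };

    static inline void qib_get_mr(struct qib_mregion_sketch *mr)
    {
        atomic_inc(&mr->refcount);
    }

    static inline void qib_put_mr(struct qib_mregion_sketch *mr)
    {
        if (unlikely(atomic_dec_and_test(&mr->refcount)))
            complete(&mr->comp);
    }

    /* Deregistration drops its own reference, then waits, with a
     * timeout to surface genuinely stuck references as -EBUSY. */
    static int qib_dereg_wait(struct qib_mregion_sketch *mr)
    {
        qib_put_mr(mr);
        if (!wait_for_completion_timeout(&mr->comp, 5 * HZ))
            return -EBUSY;
        return 0;
    }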

- 04 January 2012, 1 commit

Committed by Mike Marciniszyn
The current code locks the QP s_lock, followed by the pending_lock, presumably to protect against the allocation failing. This patch only locks the pending_lock, assuming that the empty case is an exception; in that case the pending_lock is dropped and the original code path is executed. This saves a lock of s_lock in the normal case.

The observation is that the sdma descriptors will deplete at twice the rate of txreqs, so this should be rare.

Signed-off-by: Mike Marciniszyn <mike.marciniszyn@qlogic.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
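The resulting fast path, sketched below. Structure and list-member names are assumptions, and __get_txreq stands in for the original slow path that also takes s_lock:

    #include <linux/list.h>
    #include <linux/spinlock.h>

    /* Slow path: the original code, which also takes the QP s_lock. */
    struct qib_verbs_txreq *__get_txreq(struct qib_ibdev *dev,
                                        struct qib_qp *qp);

    static struct qib_verbs_txreq *get_txreq(struct qib_ibdev *dev,
                                             struct qib_qp *qp)
    {
        struct qib_verbs_txreq *tx = NULL;
        unsigned long flags;

        /* Fast path: only the pending_lock, on the assumption that the
         * free list is almost never empty. */
        spin_lock_irqsave(&dev->pending_lock, flags);
        if (likely(!list_empty(&dev->txreq_free))) {
            struct list_head *l = dev->txreq_free.next;

            list_del(l);
            tx = list_entry(l, struct qib_verbs_txreq, txreq.list);
        }
        spin_unlock_irqrestore(&dev->pending_lock, flags);

        if (unlikely(!tx))  /* rare: free list was empty */
            tx = __get_txreq(dev, qp);
        return tx;
    }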

- 01 November 2011, 1 commit

Committed by Paul Gortmaker
They had been getting it implicitly via device.h, but we can't rely on that in the future due to a pending cleanup, so fix it now.

Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>

- 22 October 2011, 1 commit

Committed by Mike Marciniszyn
The heavyweight spinlock in qib_lookup_qpn() is replaced with RCU. The hash list itself is now accessed via jhash functions instead of mod. The changes should benefit multiple receive contexts on different processors by not contending for the lock just to read the hash structures.

The patch also adds a lookaside_qp (pointer) and a lookaside_qpn in the context. The interrupt handler will test the current packet's qpn against lookaside_qpn if the lookaside_qp pointer is non-NULL. The pointer is NULL'ed when the interrupt handler exits.

Signed-off-by: Mike Marciniszyn <mike.marciniszyn@qlogic.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
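The lookup then becomes an RCU-protected hash walk. A sketch (field names are assumptions; jhash_1word() is the real kernel helper):

    #include <linux/jhash.h>
    #include <linux/rcupdate.h>

    /* Sketch: hash the QPN with jhash instead of a modulo, then walk
     * the bucket under rcu_read_lock() rather than a spinlock. */
    static struct qib_qp *qib_lookup_qpn_sketch(struct qib_ibdev *dev, u32 qpn)
    {
        struct qib_qp *qp;
        unsigned n = jhash_1word(qpn, dev->qp_rnd) & (dev->qp_table_size - 1);

        rcu_read_lock();
        for (qp = rcu_dereference(dev->qp_table[n]); qp;
             qp = rcu_dereference(qp->next))
            if (qp->ibqp.qp_num == qpn) {
                atomic_inc(&qp->refcount);  /* pin before unlock */
                break;
            }
        rcu_read_unlock();
        return qp;
    }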

- 04 August 2010, 1 commit

Committed by Ralph Campbell
When transitioning a QP to the error state, in-progress RWQEs need to be marked complete. This also involves releasing the reference count on the memory regions referenced in the SGEs. The locking in the receive packet processing wasn't sufficient to prevent qib_error_qp() from modifying the r_sge state at the same time, leading to kernel panics.

Signed-off-by: Ralph Campbell <ralph.campbell@qlogic.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

- 24 May 2010, 1 commit

Committed by Ralph Campbell
Add a low-level IB driver for QLogic PCIe adapters.

Signed-off-by: Ralph Campbell <ralph.campbell@qlogic.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>