- June 2, 2014: 2 commits
Committed by David S. Miller

This reverts commit 70a640d0.

Signed-off-by: David S. Miller <davem@davemloft.net>
Committed by Yuval Atias

The “affinity hint” mechanism is used by the user-space daemon irqbalance to indicate a preferred CPU mask for IRQs, and irqbalance can use this hint to balance the IRQs among the CPUs in the mask. We wish the HCA to preferentially map the IRQs it uses to NUMA cores close to it. To accomplish this, we use cpumask_set_cpu_local_first(), which sets the affinity hint according to the following policy: first it maps IRQs to “close” NUMA cores; once these are exhausted, the remaining IRQs are mapped to “far” NUMA cores.

Signed-off-by: Yuval Atias <yuvala@mellanox.com>
Signed-off-by: Amir Vadai <amirv@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
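For reference, a minimal sketch of how a driver might apply such hints with cpumask_set_cpu_local_first() and irq_set_affinity_hint(); the my_ring structure and its fields are illustrative, not the actual mlx4 ones, and affinity_mask is assumed to have been allocated with zalloc_cpumask_var():

#include <linux/cpumask.h>
#include <linux/interrupt.h>

struct my_ring {				/* illustrative per-ring state */
	int irq;
	cpumask_var_t affinity_mask;
};

static int set_ring_affinity_hints(struct my_ring *rings, int n_rings,
				   int numa_node)
{
	int i, err;

	for (i = 0; i < n_rings; i++) {
		/* pick the i-th CPU, "close" NUMA cores before "far" ones */
		err = cpumask_set_cpu_local_first(i, numa_node,
						  rings[i].affinity_mask);
		if (err)
			return err;
		/* irqbalance reads the hint via /proc/irq/<n>/affinity_hint */
		irq_set_affinity_hint(rings[i].irq, rings[i].affinity_mask);
	}
	return 0;
}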
- May 17, 2014: 1 commit
Committed by Matan Barak

When we receive a netdev event indicating a netdev change and/or a netdev address change, we must change the MAC index used by the proxy QP1 (in the QP context); otherwise RoCE CM packets sent by the VF will not carry the same source MAC address as the non-CM packets. We use the UPDATE_QP command to perform this change. In order to avoid modifying a QP context based on a netdev event while the driver attempts to destroy this QP (e.g. when either the mlx4_ib or ib_mad module is unloaded), we use mutex locking in both flows. Since the relevant mlx4 proxy GSI QP is created indirectly by the mad module when it creates its GSI QP, mlx4 did not need to keep track of that QP prior to this change. Now that QP modifications are needed for this QP from within the driver, we add a reference to it.

Signed-off-by: Matan Barak <matanb@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
- May 16, 2014: 3 commits
Committed by Sagi Grimberg

When the target is in its stop stage, the iSER transport initiates RDMA disconnects. The iSER initiator may wish to establish a new connection over the still-existing network portal; in this case the iSER transport should not accept and resume new RDMA connections. To allow it to learn that, an enabled flag is added to iscsi_np, which the iSER transport can check when deciding whether to accept and resume a new connection request. The iscsi_np is enabled after successful transport setup and disabled before the iscsi_np login threads are cleaned up.

Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Cc: stable@vger.kernel.org # 3.10+
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Committed by Sagi Grimberg

The RDMA CM and iSCSI target flows are asynchronous and completely uncorrelated, so relying on iscsi_accept_np being called after the CM connection-request event and waiting for it is a mistake. When attempting to log in to a few targets this flow is racy and unpredictable, and parallel login to dozens of targets will race and hang every time. The correct synchronization mechanism in this case is pending on a semaphore rather than a wait_for_event. We keep the wait interruptible for the iscsi_np cleanup stage. (Squash patch to remove dead code into parent - nab)

Reported-by: Slava Shwartsman <valyushash@gmail.com>
Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Cc: stable@vger.kernel.org # 3.10+
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
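A compact sketch of the semaphore pattern described above, with illustrative names (np_like and its fields are not the real isert structures): the CM event handler queues the request and up()s the semaphore, so an event that arrives before anyone waits is never lost, while the accept path pends interruptibly so the cleanup stage can break the wait:

#include <linux/semaphore.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/errno.h>

struct np_like {
	struct semaphore sem;		/* counts queued connect requests */
	struct list_head accept_list;
	spinlock_t lock;
};

static void np_like_init(struct np_like *np)
{
	sema_init(&np->sem, 0);		/* start with nothing pending */
	INIT_LIST_HEAD(&np->accept_list);
	spin_lock_init(&np->lock);
}

/* RDMA CM side: record the connect request, then signal. */
static void queue_connect_request(struct np_like *np, struct list_head *node)
{
	spin_lock_bh(&np->lock);
	list_add_tail(node, &np->accept_list);
	spin_unlock_bh(&np->lock);
	up(&np->sem);
}

/* Accept side: pend until a request exists; interruptible so the
 * iscsi_np cleanup stage can kick the login thread out of the wait. */
static int wait_connect_request(struct np_like *np)
{
	if (down_interruptible(&np->sem))
		return -ERESTARTSYS;
	return 0;
}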
Committed by Sagi Grimberg

The call should be list_add_tail($new, $head), not the other way around.

Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Cc: stable@vger.kernel.org # 3.10+
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
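To illustrate the bug class (types are illustrative): list_add_tail() takes the new entry first and the list head second, and swapping the arguments splices the head into the entry instead of the entry into the list:

#include <linux/list.h>

struct queued_cmd {
	struct list_head node;
};

static void queue_cmd(struct queued_cmd *cmd, struct list_head *queue)
{
	list_add_tail(&cmd->node, queue);	/* correct: (new, head) */
	/* list_add_tail(queue, &cmd->node) would corrupt both lists */
}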
- May 14, 2014: 1 commit
Committed by Wilfried Klaebe

net: get rid of SET_ETHTOOL_OPS

Dave Miller mentioned he'd like to see SET_ETHTOOL_OPS gone. This does that. Mostly done via coccinelle script:

@@
struct ethtool_ops *ops;
struct net_device *dev;
@@
- SET_ETHTOOL_OPS(dev, ops);
+ dev->ethtool_ops = ops;

Compile tested only, but I'd seriously wonder if this broke anything.

Suggested-by: Dave Miller <davem@davemloft.net>
Signed-off-by: Wilfried Klaebe <w-lkml@lebenslange-mailadresse.de>
Acked-by: Felipe Balbi <balbi@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
- April 29, 2014: 4 commits
Committed by Hariprasad S

Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Committed by Steve Wise

The whole DB drop avoidance mechanism is for T4 only, so we cannot allow it to be enabled on T5 devices.

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Committed by Steve Wise

This is required to work around a T5 HW issue.

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Committed by Steve Wise

In cases where the CM calls c4iw_modify_rc_qp() with the endpoint mutex held, it must be called with internal == 1. rx_data() and process_mpa_reply() are not doing this. This causes a deadlock, because c4iw_modify_rc_qp() might call c4iw_ep_disconnect() in some !internal cases, and c4iw_ep_disconnect() acquires the endpoint mutex. The design intended the disconnect to happen only for !internal calls.

Change rx_data(), in the FPDU_MODE case, to call c4iw_modify_rc_qp() with internal == 1, and then disconnect only after releasing the mutex.

Change process_mpa_reply() to call c4iw_modify_rc_qp(TERMINATE) with internal == 1 and set a new attr flag telling it to send a TERMINATE message. Previously this was implied by !internal.

Change process_mpa_reply() to return whether the caller should disconnect after releasing the endpoint mutex. Now rx_data() will do the disconnect in the cases where process_mpa_reply() wants to disconnect after the TERMINATE is sent.

Change c4iw_modify_rc_qp() RTS->TERM to disconnect only if !internal, and to send a TERMINATE message if attrs->send_term is 1.

Change abort_connection() to not acquire the ep mutex when setting the state, and make all calls to abort_connection() happen with the mutex held.

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
- April 12, 2014: 11 commits
Committed by Mike Marciniszyn

The code was incorrectly using sg_dma_address() and sg_dma_len() instead of ib_sg_dma_address() and ib_sg_dma_len(). This prevents srpt from functioning with the Intel HCA and indeed will corrupt memory badly.

Cc: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Tested-by: Vinod Kumar <vinod.kumar@intel.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Cc: stable@vger.kernel.org # 3.3+
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
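The distinction, sketched with a hypothetical helper: the ib_sg_dma_* accessors dispatch through the device's DMA mapping ops, which HCAs such as this one override, while the bare sg_dma_* macros read scatterlist fields that such a driver never populates:

#include <rdma/ib_verbs.h>

/* Build an ib_sge from an SG entry already mapped for this device. */
static void fill_sge(struct ib_device *dev, struct scatterlist *sg,
		     u32 lkey, struct ib_sge *sge)
{
	sge->addr   = ib_sg_dma_address(dev, sg); /* not sg_dma_address(sg) */
	sge->length = ib_sg_dma_len(dev, sg);     /* not sg_dma_len(sg) */
	sge->lkey   = lkey;
}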
Committed by Steve Wise

We need to take the endpoint reference before calling rdma_fini(), which might fail and cause us to never take the reference.

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Committed by Steve Wise

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Committed by Steve Wise

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Committed by Steve Wise

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Committed by Hariprasad Shenai

Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Committed by Steve Wise

The max depth of a fastreg MR depends on whether the device supports DSGL or not, so compute it dynamically based on the device support and the use_dsgl module option.

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Committed by Steve Wise

There is a race when moving a QP from RTS to CLOSING where an SQ work request could be posted after the FW receives the RDMA_RI/FINI WR. That SQ work request will never get processed and should be completed with FLUSHED status. Function c4iw_flush_sq(), however, was dropping the oldest SQ work request when in the CLOSING or IDLE states, instead of completing the pending work request. If that oldest pending work request was actually complete and has a CQE in the CQ, then when that CQE is processed in poll_cq we'll BUG_ON() due to the inconsistent SQ/CQ state. This is a very small timing hole and has only been hit once so far.

The fix is two-fold:

1) c4iw_flush_sq() must always flush all non-completed WRs with FLUSHED status, regardless of the QP state.

2) In c4iw_modify_rc_qp(), always set the "in error" bit on the queue before moving the state out of RTS. This ensures that the state transition will not happen while another thread is in post_rc_send(), because set_state() and post_rc_send() both acquire the qp spinlock. Also, once we transition the state out of RTS, subsequent calls to post_rc_send() will fail because the "in error" bit is set.

I don't think this fully closes the race where the FW can get a FINI followed by an SQ work request being posted (because they are posted to different EQs), but fix #1 will handle the issue by flushing the SQ work request.

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
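A sketch of the locking shape behind fix #2, with illustrative names in place of the real c4iw structures: because both paths take the same spinlock, a post either completes before the state transition or observes the in-error flag and fails:

#include <linux/spinlock.h>
#include <linux/errno.h>

struct sketch_qp {
	spinlock_t lock;
	int state;
	int in_error;
};

static void set_state_out_of_rts(struct sketch_qp *qp, int new_state)
{
	unsigned long flags;

	spin_lock_irqsave(&qp->lock, flags);
	qp->in_error = 1;		/* set *before* leaving RTS */
	qp->state = new_state;		/* e.g. CLOSING */
	spin_unlock_irqrestore(&qp->lock, flags);
}

static int post_send(struct sketch_qp *qp)
{
	unsigned long flags;
	int ret = 0;

	spin_lock_irqsave(&qp->lock, flags);
	if (qp->in_error)
		ret = -EINVAL;		/* late posts fail instead of racing */
	/* else: build and ring the work request here */
	spin_unlock_irqrestore(&qp->lock, flags);
	return ret;
}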
Committed by Steve Wise

Some HW platforms can reorder read operations, so we must rmb() after we see a valid gen bit in a CQE, but before we read any other fields from the CQE.

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
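The pattern, in a hedged sketch; the CQE layout and gen-bit accessor are illustrative:

#include <linux/types.h>
#include <linux/errno.h>
#include <asm/barrier.h>

struct sketch_cqe {
	u32 header;			/* top bit: generation */
	u32 payload[15];
};

#define CQE_GEN(c)	((c)->header >> 31)

static int read_valid_cqe(struct sketch_cqe *cqe, u32 expected_gen,
			  struct sketch_cqe *out)
{
	if (CQE_GEN(cqe) != expected_gen)
		return -EAGAIN;		/* HW has not published this CQE yet */

	rmb();	/* order the gen-bit read before the payload reads */
	*out = *cqe;
	return 0;
}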
Committed by Steve Wise

1) Timed-out endpoint processing can be starved. If there are continual CPL messages flowing into the driver, the endpoint timeout processing can be starved. This condition exposed the other bugs below. Solution: in process_work(), call process_timedout_eps() after each CPL is processed.

2) Connection events can be processed even though the endpoint is on the timeout list. If the endpoint is scheduled for timeout processing, then we must ignore MPA Start Requests and Replies. Solution: change stop_ep_timer() to return 1 if the ep has already been queued for timeout processing. All callers of stop_ep_timer() need to check this and act accordingly. There are just a few cases where the caller needs to do something different when stop_ep_timer() returns 1: in process_mpa_reply(), ignore the reply and let process_timeout() abort the connection; in process_mpa_request(), ignore the request and let process_timeout() abort the connection. It is OK for callers of stop_ep_timer() to abort the connection, since that will leave the state in ABORTING or DEAD, and process_timeout() now ignores timeouts when the ep is in those states.

3) Double insertion on the timeout list. Since the endpoint timers are used for connection setup and teardown, we need to guard against the possibility that an endpoint is already on the timeout list. This is a rare condition, seen only under heavy load and in the presence of the above two bugs. Solution: in ep_timeout(), don't queue the endpoint if it is already on the queue.

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
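A sketch of the stop_ep_timer() contract from fix #2; the flag, field names, and exact test are illustrative guesses at the shape, not the driver's actual bookkeeping:

#include <linux/timer.h>
#include <linux/bitops.h>

#define EP_TIMEOUT_QUEUED	0	/* illustrative flag bit */

struct sketch_ep {
	unsigned long flags;
	struct timer_list timer;
};

/* Returns 1 if the ep was already queued for timeout processing; the
 * caller must then ignore the MPA start/reply and let process_timeout()
 * abort the connection. */
static int stop_ep_timer(struct sketch_ep *ep)
{
	if (test_bit(EP_TIMEOUT_QUEUED, &ep->flags))
		return 1;
	del_timer_sync(&ep->timer);
	return 0;
}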
Committed by Steve Wise

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
[ Fix cast from u64* to integer. - Roland ]
Signed-off-by: Roland Dreier <roland@purestorage.com>
- April 11, 2014: 3 commits
Committed by Eli Cohen

Add support for the block multicast loopback QP creation flag, along with the proper firmware API for that.

Signed-off-by: Eli Cohen <eli@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Committed by Alexander Gordeev

As a result of the deprecation of the MSI-X/MSI enablement functions pci_enable_msix() and pci_enable_msi_block(), all drivers using these two interfaces need to be updated to use the new pci_enable_msi_range() or pci_enable_msi_exact() and pci_enable_msix_range() or pci_enable_msix_exact() interfaces.

Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Committed by Alexander Gordeev

As a result of the deprecation of the MSI-X/MSI enablement functions pci_enable_msix() and pci_enable_msi_block(), all drivers using these two interfaces need to be updated to use the new pci_enable_msi_range() and pci_enable_msix_range() interfaces.

Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Acked-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
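The shape of the conversion, as a hedged sketch (the wrapper and its names are illustrative): pci_enable_msix() returned 0 on success, a positive "retry with this many" count, or a negative errno, forcing a retry loop in every driver, while pci_enable_msix_range() negotiates within a range and returns the vector count actually allocated or a negative errno:

#include <linux/pci.h>

static int enable_device_msix(struct pci_dev *pdev,
			      struct msix_entry *entries, int nvec)
{
	/* accept anything from one vector up to the requested count */
	return pci_enable_msix_range(pdev, entries, 1, nvec);
}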
- April 7, 2014: 13 commits
Committed by Nicholas Bellinger

In order to support local WRITE_INSERT + READ_STRIP operations for non-PI-enabled fabrics, the fabric driver needs to be able to signal which protection offload operations are supported. This is done at session initialization time, so the modes can be signaled by individual se_wwn + se_portal_group endpoints, as well as optionally across different transports on the same endpoint.

For iser-target, set TARGET_PROT_ALL if the underlying ib_device has already signaled PI offload support, and allow this to be exposed via a new iscsit_transport->iscsit_get_sup_prot_ops() callback. For loopback, set TARGET_PROT_ALL to signal SCSI initiator mode operation. For all other drivers, set TARGET_PROT_NORMAL to disable fabric-level PI.

Cc: Martin K. Petersen <martin.petersen@oracle.com>
Cc: Sagi Grimberg <sagig@mellanox.com>
Cc: Or Gerlitz <ogerlitz@mellanox.com>
Cc: Quinn Tran <quinn.tran@qlogic.com>
Cc: Giridhar Malavali <giridhar.malavali@qlogic.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Committed by Sagi Grimberg

Fastreg is mandatory for signature, so if the device doesn't support it we don't need to use it.

Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Committed by Nicholas Bellinger

This patch fixes a bug where outstanding RDMA_READs with WRITE_PENDING status require an extra target_put_sess_cmd() in the isert_put_cmd() code when called from the isert_cq_tx_comp_err() + isert_cq_drain_comp_llist() context during session shutdown.

The extra kref put is required so that transport_generic_free_cmd() invokes the last target_put_sess_cmd() -> target_release_cmd_kref(), which will complete(&se_cmd->cmd_wait_comp) the outstanding se_cmd descriptor with WRITE_PENDING status and awake the completion in target_wait_for_sess_cmds() to invoke TFO->release_cmd(). The bug was manifesting itself in target_wait_for_sess_cmds(), where a se_cmd descriptor with WRITE_PENDING status would end up sleeping indefinitely.

Acked-by: Sagi Grimberg <sagig@mellanox.com>
Cc: Or Gerlitz <ogerlitz@mellanox.com>
Cc: <stable@vger.kernel.org> #3.10+
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
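The underlying pattern, reduced to a sketch with illustrative names: only the final kref_put() runs the release callback, and the release fires the completion the shutdown path sleeps on, so one missing put on an error path leaves the waiter asleep forever:

#include <linux/kref.h>
#include <linux/completion.h>

struct sketch_cmd {
	struct kref kref;
	struct completion wait_comp;
};

static void sketch_cmd_release(struct kref *kref)
{
	struct sketch_cmd *cmd = container_of(kref, struct sketch_cmd, kref);

	complete(&cmd->wait_comp);	/* wakes the shutdown waiter */
}

/* Every path that is done with the command must drop its reference;
 * the bug fixed here was one path that didn't. */
static void sketch_cmd_put(struct sketch_cmd *cmd)
{
	kref_put(&cmd->kref, sketch_cmd_release);
}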
Committed by Nicholas Bellinger

Now that TASK_ABORTED status is not generated for all cases by TMR ABORT_TASK + LUN_RESET, a new TFO->aborted_task() caller is necessary in order to give fabric drivers a chance to unmap hardware / software resources before the se_cmd descriptor is released via the normal TFO->release_cmd() codepath.

This patch adds TFO->aborted_task() in core_tmr_abort_task() in place of the original transport_send_task_abort(), and also updates all fabric drivers to implement this caller.

The fabric drivers that include changes to perform cleanup via ->aborted_task() are:
- iscsi-target
- iser-target
- srpt
- tcm_qla2xxx

The fabric drivers that currently set ->aborted_task() to NOPs are:
- loopback
- tcm_fc
- usb-gadget
- sbp-target
- vhost-scsi

For the latter five, there appears to be no additional cleanup required before invoking TFO->release_cmd() to release the se_cmd descriptor.

v2 changes:
- Move ->aborted_task() call into transport_cmd_finish_abort (Alex)

Cc: Alex Leung <amleung21@yahoo.com>
Cc: Mark Rustad <mark.d.rustad@intel.com>
Cc: Roland Dreier <roland@kernel.org>
Cc: Vu Pham <vu@mellanox.com>
Cc: Chris Boot <bootc@bootc.net>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Giridhar Malavali <giridhar.malavali@qlogic.com>
Cc: Saurav Kashyap <saurav.kashyap@qlogic.com>
Cc: Quinn Tran <quinn.tran@qlogic.com>
Cc: Sagi Grimberg <sagig@mellanox.com>
Cc: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Committed by Nicholas Bellinger

This patch changes isert_conn_create_fastreg_pool() to follow the logic in iscsi_target_locate_portal() for determining how many FRMR descriptors to allocate, based upon the number of possible per-session command slots that are available. This addresses an OOPS in isert_reg_rdma() where, due to the use of ISCSI_DEF_XMIT_CMDS_MAX, a bogus fast_reg_descriptor could end up being returned once the number of active tags exceeded the original hardcoded max.

Note this also includes moving isert_conn_create_fastreg_pool() from isert_connect_request() to isert_put_login_tx() before posting the final Login Response PDU, in order to determine the se_nacl->queue_depth (i.e. the number of tags) per session that the target will be enforcing.

v2 changes:
- Move isert_conn->conn_fr_pool list_head init into isert_connect_request()
v3 changes:
- Drop unnecessary list_empty() check in isert_reg_rdma() (Sagi)

Cc: Sagi Grimberg <sagig@mellanox.com>
Cc: Or Gerlitz <ogerlitz@mellanox.com>
Cc: <stable@vger.kernel.org> #3.12+
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Committed by Sagi Grimberg

If a data-integrity error is detected during data transfer, we must fail the command with CHECK_CONDITION and not execute it.

Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Reported-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Committed by Sagi Grimberg

Remove code duplication from the RDMA_READ and RDMA_WRITE completions, which do basically the same check.

Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Committed by Sagi Grimberg

If protection is involved, the iSER target must wait for the RDMA_READ completion before sending the SCSI response. So we must account for that when calculating post_send_buf_count additions, and also when processing good/error completions.

Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Committed by Sagi Grimberg

As the REG_SIG_MR work request and its LOCAL_INVALIDATE are not accounted for in post_send_buf_count, we must color these with ISER_FASTREG_LI_WRID in order to process their error completions when the QP flushes.

Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Committed by Sagi Grimberg

In case the target core passed a transport T10 protection operation:

1. Register the data buffer (data memory region)
2. Register the protection buffer if it exists (prot memory region)
3. Register the signature region (signature memory region) using the IB_WR_REG_SIG_MR work request
4. Execute the RDMA
5. Upon RDMA completion, check the signature status: if it succeeded, send a good SCSI response; if it failed, send a bad SCSI response with the appropriate sense buffer

(Fix up compile error in isert_reg_sig_mr, and fix up incorrect se_cmd->prot_type -> TARGET_PROT_NORMAL comparison - nab)
(Fix failed sector assignment in isert_completion_rdma_* - Sagi + nab)
(Fix enum assignments for protection type - Sagi)
(Fix division on 32-bit in isert_completion_rdma_* - Sagi + Fengguang)
(Fix context change for v3.14-rc6 code - nab)
(Fix iscsit_build_rsp_pdu inc_statsn flag usage - nab)

Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
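Step 5 can be sketched with the verbs status query from the signature API introduced alongside this series; the 512-byte sector size and the error code here are assumptions, not the driver's actual choices:

#include <rdma/ib_verbs.h>
#include <linux/errno.h>
#include <asm/div64.h>

/* After the RDMA completes, ask the signature MR whether the HCA saw a
 * guard/ref-tag/app-tag mismatch; on failure report the bad sector so
 * the sense buffer can carry it. */
static int check_signature(struct ib_mr *sig_mr, u64 *bad_sector)
{
	struct ib_mr_status mr_status;
	int ret;

	ret = ib_check_mr_status(sig_mr, IB_MR_CHECK_SIG_STATUS, &mr_status);
	if (ret)
		return ret;

	if (mr_status.fail_status & IB_MR_CHECK_SIG_STATUS) {
		u64 off = mr_status.sig_err.sig_err_offset;

		do_div(off, 512);	/* assumed sector size */
		*bad_sector = off;
		return -EILSEQ;		/* caller sends CHECK_CONDITION */
	}
	return 0;
}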
Committed by Sagi Grimberg

In the case of protected transactions, we will need to check the protection status of the transaction before sending the SCSI response, so be ready for RDMA_WRITE completions. Currently we don't ask for these completions, but for T10-PI we will.

Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Committed by Sagi Grimberg

Introduce pi_context to hold the relevant RDMA protection resources. We eliminate the data_key_valid boolean and replace it with an indicators container that tracks:
- Is the descriptor protected (registered via a signature MR)
- Is the data_mr key valid (can spare a LOCAL_INV WR)
- Is the prot_mr key valid (can spare a LOCAL_INV WR)
- Is the sig_mr key valid (can spare a LOCAL_INV WR)

Upon connection establishment, check whether the network portal is T10-PI enabled and, if necessary, allocate T10-PI resources: allocate signature-enabled memory regions and mark the connection queue-pair as signature enabled.

(Fix context change for v3.14-rc6 code - nab)

Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Committed by Sagi Grimberg

Export the map/unmap of the data buffer into a routine that may be used in various places in the code, and keep the mapping data in a designated descriptor. Also, let isert_fast_reg_mr decide whether to use the global MR or do a fast registration. This commit does not change any functionality.

(Fix context change for v3.14-rc6 code - nab)

Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
- April 3, 2014: 2 commits
Committed by Selvin Xavier

Unregister the inet notifier during ocrdma unload to avoid a panic after driver unload.

Signed-off-by: Selvin Xavier <selvin.xavier@emulex.com>
Signed-off-by: Devesh Sharma <devesh.sharma@emulex.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Committed by Roland Dreier

We should cast pointers to and from unsigned long to turn them into ints.

Signed-off-by: Roland Dreier <roland@purestorage.com>
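The idiom, for reference (function names illustrative; this assumes the value actually fits in 32 bits, as in the driver's case): casting through unsigned long keeps each cast pointer-width on both 32- and 64-bit builds, which is what silences the compiler warnings:

#include <linux/types.h>

static u32 ptr_to_id(void *p)
{
	return (u32)(unsigned long)p;		/* pointer -> int */
}

static void *id_to_ptr(u32 id)
{
	return (void *)(unsigned long)id;	/* int -> pointer */
}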