- 23 February 2018, 2 commits
-
-
Authored by James Smart
Up until now, all SLI-4 devices had the same doorbells at the same bar locations. With newer hardware, there are now independent EQ and CQ doorbells and the bar locations differ. Prepare the code for new hardware by splitting the EQ/CQ doorbell handling into separate components. The components can be set based on if_type.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
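To make the separation concrete, here is a minimal standalone sketch of the approach, with hypothetical structure names, BAR numbers, and offsets (not the actual lpfc layout): per-if_type doorbell descriptors are chosen once at setup so the ring routines never branch on hardware type.

#include <stdio.h>
#include <stdint.h>

/* Hypothetical doorbell descriptor: which BAR and offset to write for EQ/CQ. */
struct doorbell_desc {
    int      bar;     /* BAR index the doorbell lives in */
    uint32_t offset;  /* byte offset within that BAR     */
};

struct hw_queues {
    struct doorbell_desc eq_db;
    struct doorbell_desc cq_db;
};

/* Select doorbell locations based on the interface type reported by the HW. */
static void setup_doorbells(struct hw_queues *q, int if_type)
{
    if (if_type >= 6) {                      /* newer hardware: independent doorbells */
        q->eq_db = (struct doorbell_desc){ .bar = 1, .offset = 0x120 };
        q->cq_db = (struct doorbell_desc){ .bar = 1, .offset = 0x160 };
    } else {                                 /* legacy: shared EQ/CQ doorbell */
        q->eq_db = (struct doorbell_desc){ .bar = 2, .offset = 0x040 };
        q->cq_db = q->eq_db;
    }
}

int main(void)
{
    struct hw_queues q;

    setup_doorbells(&q, 6);
    printf("EQ db: bar %d off 0x%x, CQ db: bar %d off 0x%x\n",
           q.eq_db.bar, (unsigned)q.eq_db.offset,
           q.cq_db.bar, (unsigned)q.cq_db.offset);
    return 0;
}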
-
Authored by James Smart
Up until now, an SLI-4 device had no variance in the way it handled its EQs and CQs. With newer hardware, there are now differences in doorbells and some differences in how entries are marked valid. Prepare the code for new hardware by creating an SLI4-based callout table that can be set based on if_type.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
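A minimal sketch of the callout-table idea, with hypothetical names and if_type values (not the actual lpfc interfaces): a per-generation ops structure is bound once based on if_type, and the queue handling code calls through it.

#include <stdio.h>
#include <stdbool.h>

/* Hypothetical per-hardware-generation queue operations. */
struct sli4_queue_ops {
    bool (*eqe_valid)(unsigned int eqe, unsigned int phase);
    void (*ring_eq_db)(int eq_id, int popped);
};

static bool eqe_valid_v1(unsigned int eqe, unsigned int phase)
{
    (void)phase;                 /* legacy HW: a plain valid bit */
    return eqe & 0x1;
}
static void ring_eq_db_v1(int eq_id, int popped) { printf("v1 EQ %d: pop %d\n", eq_id, popped); }

static bool eqe_valid_v2(unsigned int eqe, unsigned int phase)
{
    return (eqe & 0x1) == phase; /* newer HW: validity follows a phase bit */
}
static void ring_eq_db_v2(int eq_id, int popped) { printf("v2 EQ %d: pop %d\n", eq_id, popped); }

static const struct sli4_queue_ops ops_v1 = { eqe_valid_v1, ring_eq_db_v1 };
static const struct sli4_queue_ops ops_v2 = { eqe_valid_v2, ring_eq_db_v2 };

/* Bound once at attach time based on if_type; callers never branch again. */
static const struct sli4_queue_ops *select_ops(int if_type)
{
    return if_type >= 6 ? &ops_v2 : &ops_v1;
}

int main(void)
{
    const struct sli4_queue_ops *ops = select_ops(6);

    if (ops->eqe_valid(0x1, 1))
        ops->ring_eq_db(0, 4);
    return 0;
}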
-
- 13 February 2018, 19 commits
-
-
Authored by James Smart
Updated copyright notices in the files modified by the 11.4.0.7 patch set.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Authored by James Smart
Update the driver version to 11.4.0.7
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Authored by James Smart
In a test doing large numbers of cable swaps on the target, the nvme controllers wouldn't reconnect. During the cable swaps, the target's n_port_id would change. This information was passed to the nvme-fc transport in the new remoteport registration. However, the nvme-fc transport didn't update the n_port_id value in the remoteport struct when it reused an existing structure. Later, when a new association was attempted on the remoteport, the driver's NVME LS routine would use the stale n_port_id from the remoteport struct to address the LS. As the device is no longer at that address, the LS would go into never never land. Separately, the nvme-fc transport will be corrected to update the n_port_id value on a re-registration. However, for now, there's no reason to use the transport's values. The private pointer points to the driver's node structure, and the node structure is up to date. Therefore, revise the LS routine to use the driver's data structures for the LS. Augmented the debug message for better debugging in the future. Also removed a duplicate if check that seems to have slipped in.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Authored by James Smart
Currently, write underruns (a mismatch between the amount transferred and the SCSI status and its residual) detected by the adapter are not being flagged as an error. It's expected that the target controls the data transfer and would appropriately set the RSP values. Only read underruns are treated as errors. Revise the SCSI error handling to treat write underruns as errors as well.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
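Conceptually, the added check mirrors the existing read-underrun check; a hedged standalone sketch with made-up field names:

#include <stdio.h>
#include <stdbool.h>

/* Hypothetical completion fields: bytes the host asked for vs. bytes moved,
 * and whether the target reported a residual that accounts for the gap. */
static bool underrun_is_error(unsigned long requested, unsigned long transferred,
                              unsigned long reported_resid)
{
    if (transferred >= requested)
        return false;                                    /* no underrun at all */
    return (requested - transferred) != reported_resid;  /* unexplained gap */
}

int main(void)
{
    /* 4096 bytes requested, 2048 moved, target reported no residual: error. */
    printf("write underrun error: %d\n",
           underrun_is_error(4096, 2048, 0));
    return 0;
}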
-
Authored by James Smart
The driver was inappropriately pulling in the nvme host's nvme.h header. What it really needed was the standard <linux/nvme.h> header.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Authored by James Smart
When using the special option to suppress the response iu, ensure the adapter fully supports the feature by checking feature flags from the adapter and validating the support when formatting the WQE.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Authored by James Smart
During SCSI error handling escalation to host reset, the SCSI I/Os were moved off the txcmplq, but the individual io's ON_CMPLQ flag wasn't cleared. Thus, a background thread saw the io and attempted to access it as if it were still on the txcmplq. Clear the flag upon removal.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
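The fix pattern in miniature, with hypothetical list and flag names: whenever an element is pulled off the completion list, its "on the list" flag must be cleared at the same point, so background threads don't treat it as still queued.

#include <stdio.h>
#include <stddef.h>

#define ON_CMPLQ  0x1   /* hypothetical "I am on the completion queue" flag */

struct io_buf {
    unsigned int flags;
    struct io_buf *next;
};

/* Remove the head of a singly linked completion list. */
static struct io_buf *cmplq_remove_head(struct io_buf **head)
{
    struct io_buf *io = *head;

    if (io) {
        *head = io->next;
        io->next = NULL;
        io->flags &= ~ON_CMPLQ;   /* the missing step: clear the flag on removal */
    }
    return io;
}

int main(void)
{
    struct io_buf a = { .flags = ON_CMPLQ, .next = NULL };
    struct io_buf *head = &a;

    cmplq_remove_head(&head);
    printf("flags after removal: 0x%x\n", a.flags);
    return 0;
}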
-
Authored by James Smart
Revise the NVME PRLI to indicate CONF support.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Authored by James Smart
The driver ignored checks on whether the link should be kept administratively down after a link bounce. Correct the checks.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Authored by James Smart
During link bounce testing in a point-to-point topology, the host may enter a soft lockup on the lpfc_worker thread:
Call Trace:
 lpfc_work_done+0x1f3/0x1390 [lpfc]
 lpfc_do_work+0x16f/0x180 [lpfc]
 kthread+0xc7/0xe0
 ret_from_fork+0x3f/0x70
The driver was simultaneously setting a combination of flags that caused lpfc_do_work() to effectively spin between slow-path work and new event data, causing the lockup. Ensure, in the typical wq completions, that only the new event data flags are set if the slow-path flag is already running; the slow path will eventually reschedule the wq handling.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Authored by James Smart
Make the attribute writeable. Remove the ramp-up logic as it's unnecessary; simply set the depth. Add a debug message for the case where the depth is changed, possibly reducing the limit while the outstanding count has yet to catch up with it.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Authored by James Smart
When the nvme target's deferred receive logic waits for exchange resources, the corresponding receive buffer is not replenished with the hardware. This can result in a lack of asynchronous receive buffer resources in the hardware, resulting in a "2885 Port Status Event: ... error 1=0x52004a01 ..." message. Correct by replenishing the buffer whenever the deferred logic kicks in. Update corresponding debug messages and statistics as well.
[mkp: applied by hand]
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Authored by James Smart
A stress test repeatedly resetting the adapter while performing io would eventually report I/O failures and missing nvme namespaces. The driver was setting the nvmefc_fcp_req->private pointer to NULL in the IO completion routine before upcalling done(). If the transport was also running an abort for that IO, the driver would fail the abort with message 6140. Failing the abort is not allowed by the nvme-fc transport, as it mandates that the io must be returned back to the transport. As that does not happen, the transport controller delete has an outstanding reference and can't complete teardown. The NULLing of the private pointer should be done only when the io is considered complete, and it's complete when the adapter returns the exchange with the "exchange busy" flag clear. Move the NULLing of the structure to the done case. This leaves the io context set while the exchange is busy, until the subsequent XRI_ABORTED completion that returns the exchange is received.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
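The ordering the patch enforces, sketched with hypothetical names: the request's private context is detached only when the hardware has actually returned the exchange; if the exchange is still busy, the context stays live until the later XRI_ABORTED completion.

#include <stdio.h>
#include <stddef.h>
#include <stdbool.h>

struct fcp_req { void *private; };        /* transport-visible request        */
struct io_ctx  { struct fcp_req *req; };  /* driver-side context for the io   */

static void transport_done(struct fcp_req *req)
{
    printf("done() upcalled, private=%p\n", req->private);
}

/* Completion handler: exchange_busy mirrors the adapter's "exchange busy" flag. */
static void io_complete(struct io_ctx *ctx, bool exchange_busy)
{
    struct fcp_req *req = ctx->req;

    if (!exchange_busy)
        req->private = NULL;   /* io truly complete: safe to detach context   */
    /* else: keep the association; the XRI_ABORTED completion will finish it  */

    transport_done(req);
}

int main(void)
{
    struct io_ctx ctx;
    struct fcp_req req = { .private = &ctx };

    ctx.req = &req;
    io_complete(&ctx, true);
    printf("private still set: %s\n", req.private ? "yes" : "no");
    return 0;
}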
-
Authored by James Smart
The lpfc driver does not discover a target when the topology changes from switched fabric to direct connect. The target rejects the PRLI from the initiator in direct connect because the driver is using the old S_ID from the switched topology. The driver was inappropriately clearing the VP bit to register the VPI, which is what is associated with the S_ID. Fix by leaving the VP bit set (it was set earlier) and, as the VFI is being re-registered, setting the UPDT bit.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Authored by James Smart
I/O conditions on the nvme target may have the driver submitting to a full hardware wq. The hardware wq is a shared resource among all nvme controllers. When the driver hit a full wq, it failed the io back to the nvme-fc transport, which then escalated it into errors. Correct by maintaining a sideband queue within the driver that is added to when the WQ-full condition is hit, and drained from as soon as new WQ space opens up.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
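A minimal sketch of the sideband-queue idea, with made-up structures and sizes: a request that finds the hardware WQ full is parked on a driver-private list instead of being failed, and completions that free WQ slots drain that list first.

#include <stdio.h>

#define WQ_DEPTH 4

struct wq {
    int in_flight;                 /* entries currently occupying the HW WQ  */
    int deferred_head, deferred_tail;
    int deferred[64];              /* sideband queue of request ids          */
};

/* Try to post; on WQ-full, park the request instead of failing it. */
static void submit(struct wq *wq, int req_id)
{
    if (wq->in_flight < WQ_DEPTH) {
        wq->in_flight++;
        printf("posted %d to HW WQ\n", req_id);
    } else {
        wq->deferred[wq->deferred_tail++ % 64] = req_id;
        printf("WQ full, deferred %d\n", req_id);
    }
}

/* A completion frees a slot; drain the sideband queue before anything else. */
static void complete_one(struct wq *wq)
{
    wq->in_flight--;
    if (wq->deferred_head != wq->deferred_tail) {
        int req_id = wq->deferred[wq->deferred_head++ % 64];

        wq->in_flight++;
        printf("drained deferred %d into freed slot\n", req_id);
    }
}

int main(void)
{
    struct wq wq = { 0 };

    for (int i = 0; i < 6; i++)
        submit(&wq, i);
    complete_one(&wq);
    return 0;
}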
-
Authored by James Smart
Existing code was using the wrong field for the completion status when deciding whether to increment abort statistics.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Authored by James Smart
Ensure nvme localports/targetports are torn down before dismantling the adapter SLI interface on driver detachment. This keeps the interfaces live while nvme may still be making callbacks to abort io.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Authored by James Smart
Increased CQ and WQ sizes for SCSI FCP, matching those used for NVMe development.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Authored by James Smart
The driver controls when the hardware sends completions that communicate consumption of elements from the WQ. This is done by setting a WQEC bit on a WQE. The current driver sets it on every Nth WQE posting. However, the driver isn't clearing the bit if the WQE is reused. Thus, if the queue depth isn't evenly divisible by N, with enough time, it can be set on every element, creating a lot of overhead and risking CQ full conditions. Correct by clearing the bit when not setting it on an Nth element.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
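The core of the fix in a small standalone sketch (names and interval made up): because WQE slots are reused in place, the completion-request bit has to be explicitly cleared on non-Nth postings, not just set on every Nth one.

#include <stdio.h>

#define WQEC_BIT        0x1u   /* "generate a completion for this WQE" flag */
#define WQEC_INTERVAL   8      /* request a completion every Nth posting    */

struct wqe { unsigned int word; };

static void post_wqe(struct wqe *slot, unsigned long posted_count)
{
    if (posted_count % WQEC_INTERVAL == 0)
        slot->word |= WQEC_BIT;
    else
        slot->word &= ~WQEC_BIT;   /* the fix: clear it, the slot is reused */
}

int main(void)
{
    struct wqe ring[4] = { { 0 } };

    /* Reuse the same slots over many postings; without the else-clause the
     * bit would eventually stick on every slot of the ring. */
    for (unsigned long n = 1; n <= 16; n++)
        post_wqe(&ring[n % 4], n);

    for (int i = 0; i < 4; i++)
        printf("slot %d WQEC=%u\n", i, ring[i].word & WQEC_BIT);
    return 0;
}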
-
- 09 January 2018, 3 commits
-
-
Authored by Joe Perches
Convert DEVICE_ATTR uses to DEVICE_ATTR_WO where possible. Done with perl script:
$ git grep -w --name-only DEVICE_ATTR | \
  xargs perl -i -e 'local $/; while (<>) { s/\bDEVICE_ATTR\s*\(\s*(\w+)\s*,\s*\(?(?:\s*S_IWUSR\s*|\s*0200\s*)\)?\s*,\s*NULL\s*,\s*\1_store\s*\)/DEVICE_ATTR_WO(\1)/g; print;}'
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Authored by Joe Perches
Convert DEVICE_ATTR uses to DEVICE_ATTR_RO where possible. Done with perl script:
$ git grep -w --name-only DEVICE_ATTR | \
  xargs perl -i -e 'local $/; while (<>) { s/\bDEVICE_ATTR\s*\(\s*(\w+)\s*,\s*\(?(?:\s*S_IRUGO\s*|\s*0444\s*)\)?\s*,\s*\1_show\s*,\s*NULL\s*\)/DEVICE_ATTR_RO(\1)/g; print;}'
Signed-off-by: Joe Perches <joe@perches.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Robert Jarzmik <robert.jarzmik@free.fr>
Acked-by: Sagi Grimberg <sagi@grimberg.me>
Acked-by: Zhang Rui <rui.zhang@intel.com>
Acked-by: Harald Freudenberger <freude@linux.vnet.ibm.com>
Acked-by: Jani Nikula <jani.nikula@intel.com>
Acked-by: Corey Minyard <cminyard@mvista.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Authored by Joe Perches
Convert DEVICE_ATTR uses to DEVICE_ATTR_RW where possible. Done with perl script:
$ git grep -w --name-only DEVICE_ATTR | \
  xargs perl -i -e 'local $/; while (<>) { s/\bDEVICE_ATTR\s*\(\s*(\w+)\s*,\s*\(?(\s*S_IRUGO\s*\|\s*S_IWUSR|\s*S_IWUSR\s*\|\s*S_IRUGO\s*|\s*0644\s*)\)?\s*,\s*\1_show\s*,\s*\1_store\s*\)/DEVICE_ATTR_RW(\1)/g; print;}'
Signed-off-by: Joe Perches <joe@perches.com>
Acked-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Acked-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Acked-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Acked-by: Zhang Rui <rui.zhang@intel.com>
Acked-by: Jarkko Nikula <jarkko.nikula@bitmer.com>
Acked-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
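For context, this is the shape of the substitution on a single attribute (kernel-only code shown for illustration, not standalone buildable; foo is a placeholder name): DEVICE_ATTR_RW(name) requires the callbacks to be named name_show and name_store, which is why the regex anchors on \1_show and \1_store.

/* Before: explicit permissions and callback names. */
static DEVICE_ATTR(foo, S_IRUGO | S_IWUSR, foo_show, foo_store);

/* After: the helper macro derives permissions and both callbacks from the name. */
static DEVICE_ATTR_RW(foo);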
-
- 04 January 2018, 2 commits
-
-
Authored by Colin Ian King
Several statements are indented too far; fix them.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Authored by Colin Ian King
localport is being dereferenced to assign lport, and then immediately afterwards localport is being sanity checked to see if it is null. Fix this by dereferencing localport only after it has been null checked. Detected by CoverityScan, CID#1463038 ("Dereference before null check")
Fixes: 3a8cefbfc5ee ("scsi: lpfc: Beef up stat counters for debug")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
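The defect and the fix reduced to a few lines, with hypothetical types standing in for the lpfc structures: the load through the pointer has to move below the NULL check.

#include <stdio.h>
#include <stddef.h>

struct lport     { int stats; };
struct localport { struct lport *private; };

/* Buggy shape: the load through 'localport' happens before the NULL check. */
static struct lport *get_lport_buggy(struct localport *localport)
{
    struct lport *lport = localport->private;   /* dereference ...                */

    if (!localport)                             /* ... so this check comes too late */
        return NULL;
    return lport;
}

/* Fixed shape: dereference only after the pointer has been validated. */
static struct lport *get_lport_fixed(struct localport *localport)
{
    struct lport *lport;

    if (!localport)
        return NULL;
    lport = localport->private;
    return lport;
}

int main(void)
{
    struct lport lport = { .stats = 42 };
    struct localport localport = { .private = &lport };

    printf("%d %d\n", get_lport_buggy(&localport)->stats,
           get_lport_fixed(&localport)->stats);
    return 0;
}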
-
- 21 December 2017, 10 commits
-
-
Authored by James Smart
A prior patch mixed up which argument in the macro was which, so the min value was placed as the "default" argument and the default value was placed as the "min" argument. Thus, when the default was applied, it looked like the default was smaller than the allowed min. Swap argument positions to correct this.
[mkp: fixed checkpatch warning]
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
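The failure mode illustrated with a made-up parameter macro and values: if the min and default are passed in each other's positions, range validation sees a default below the minimum and rejects it.

#include <stdio.h>

/* Hypothetical parameter-definition macro: (name, default, min, max). */
#define DEF_PARAM(name, def, min, max) \
    static const int name##_def = (def), name##_min = (min), name##_max = (max);

/* Arguments accidentally swapped: 64 was intended as min, 2048 as default. */
DEF_PARAM(queue_depth, 64, 2048, 4096)

int main(void)
{
    if (queue_depth_def < queue_depth_min || queue_depth_def > queue_depth_max)
        printf("default %d rejected: outside [%d, %d]\n",
               queue_depth_def, queue_depth_min, queue_depth_max);
    return 0;
}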
-
Authored by James Smart
Update the driver version to 11.4.0.6
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Authored by James Smart
If log verbose is not turned on, it's hard to tell when certain error paths get hit. Add stat counters and corresponding logic to debugfs/sysfs to aid understanding of which paths were traversed.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Authored by James Smart
When unregistering a remote port, the lpfc driver would eventually wait for the remoteport_unreg done callback. But the driver never completed the io aborts that would allow the connections to terminate, thus the unreg done callback was never issued. It turns out the coding style of the driver allowed the wait to occur on the same cpu that the deferred isr is called on. The blocking for the wait blocked the isr, and as the isr didn't run, the io aborts wouldn't finish. It also turns out there was never a good reason to block waiting for the unreg done in the first place: the driver can continue execution, and the ref counting within the driver will do the right thing. Resolve by removing the wait and patching up a few cases where the ref counting didn't look right, mainly cases where the remote port comes back before the aborts have completed and the unreg done has been called. Additionally, a few places which used pointer values to guide driver actions weren't protected by lock, so correct those.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Authored by James Smart
In the lpfc discovery engine, when acting as an nvme target, the driver was performing mailbox io with the adapter for port login when an NVME PRLI was received from the host. Rather than queue the PRLI and eventually get back to sending a response after the mailbox traffic, the driver rejected the io with an error response. It turns out this particular initiator didn't like the rejection values (unable to process command/command in progress), so it never attempted a retry of the PRLI. Thus the host never established nvme connectivity with the lpfc target. By changing the rejection values (to Logical Busy/nothing more), the initiator accepted the response and would retry the PRLI, resulting in nvme connectivity.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Authored by James Smart
When enabled for both SCSI and NVME support, and connected pt2pt to a SCSI-only target, the driver's nodelist entry for the remote port is left in PRLI_ISSUE state and no SCSI LUNs are discovered. It works fine if only configured for SCSI support. The error was due to some of the PRLI points still reflecting the need to send only one PRLI. On a lot of fabric configs, targets were NVME-only, which meant the fabric-reported protocol attributes were only telling the driver one protocol or the other, so things worked fine. With pt2pt, the driver must send a PRLI for both protocols as there are no hints on what the target supports. Thus pt2pt targets were hitting the multiple-PRLI issues. Complete the dual PRLI support: track explicitly whether scsi (fcp) or nvme PRLIs have been sent, and accurately track the protocol support detected on each node as reported by the fabric or probed by PRLI traffic.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Authored by James Smart
Increased the sizes of the SCSI WQs and CQs so that SCSI operation is similar to that used by NVME. However, the size increase is restricted to those newer adapters that can support the larger WQE size and thus the bigger queue sizes.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Authored by James Smart
Handling a received PRLI incorrectly can cause the ndlp to end up in the wrong state, or cause the driver to ACC a PRLI when it should send LS_RJT. The cause was the driver not properly looking at the PRLI type and not taking the multiple-protocol support into consideration. Resolved by adding checks in the various PRLI receive points to validate the PRLI type and reject it if not valid for the enabled protocols and mode (host vs target).
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Authored by James Smart
The driver is all set to handle the defer_rcv api for the nvmet_fc transport, yet it didn't properly recognize the return status when the defer_rcv occurred. The driver treated it simply as an error and aborted the io. Several residual issues occurred at that point. Finish the defer_rcv support: recognize the return status when the io request is being handled in a deferred style. This stops the rogue aborts. Also replenish the async cmd rcv buffer in the deferred receive path if needed.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Authored by James Smart
NVME targets appear to randomly disconnect from the initiator when running heavy IO. The error is due to the host's aggregate (across all controllers) io load being beyond the maximum exchange count for nvme on the adapter. The driver was properly returning a resource-busy status, but the io load was so great that heartbeat commands would be bounced and not have a successful retry within the fuzz amount for the nvme heartbeat (yes, a very high io load!). Thus the target was terminating the controller due to a keep-alive failure. Resolve by reserving a few exchanges (by counters) which can be used when the adapter is out of normal exchanges and the command is an NVME heartbeat command. As counters are used, while the reserved command is outstanding, as soon as any other exchange completes, the counters are adjusted and the reserved count is replenished. The heartbeat completes execution in a normal fashion.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
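A counter-based sketch of the reservation scheme, with hypothetical names and pool sizes: a small number of exchanges are held back for keep-alive commands, ordinary io that finds the normal pool empty is still bounced with a busy status, and completions top the reserved count back up first.

#include <stdio.h>
#include <stdbool.h>

#define TOTAL_XCHG     8
#define RESERVED_XCHG  2

struct xchg_pool {
    int free;       /* exchanges available to any command       */
    int reserved;   /* exchanges held back for keep-alives only */
};

static bool alloc_xchg(struct xchg_pool *p, bool is_keepalive)
{
    if (p->free > 0) {
        p->free--;
        return true;
    }
    if (is_keepalive && p->reserved > 0) {   /* dip into the reserve */
        p->reserved--;
        return true;
    }
    return false;                            /* normal io: report busy */
}

static void free_xchg(struct xchg_pool *p)
{
    if (p->reserved < RESERVED_XCHG)
        p->reserved++;                       /* replenish the reserve first */
    else
        p->free++;
}

int main(void)
{
    struct xchg_pool pool = { TOTAL_XCHG - RESERVED_XCHG, RESERVED_XCHG };

    for (int i = 0; i < TOTAL_XCHG; i++)     /* saturate the normal pool */
        alloc_xchg(&pool, false);
    printf("io under load: %d, keep-alive: %d\n",
           alloc_xchg(&pool, false), alloc_xchg(&pool, true));
    free_xchg(&pool);                        /* a completion refills the reserve */
    return 0;
}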
-
- 05 December 2017, 4 commits
-
-
Authored by James Smart
Update the driver version to 11.4.0.5
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Authored by James Smart
The logic for sg_seg_cnt is a bit convoluted. This patch tries to clean up a couple of areas, especially around the +2 and +1 logic. This patch:
- Cleans up the lpfc_sg_seg_cnt attribute to specify a real minimum rather than making the minimum be whatever the default is.
- Removes the hardcoding of the +2 (for the number of elements we use in an sgl for the cmd iu and rsp iu) and +1 (an additional entry to compensate for nvme's reduction of io size based on a possible partial page) logic in sg list initialization. In the case where the +1 logic is referenced in host and target io checks, use the values set in the transport template, as that value was properly set.
There can certainly be more done in this area, and it will be addressed in a combined host/target driver effort.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
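As a worked illustration of the +2/+1 arithmetic (constants hypothetical, see the cleanup described above): the SGL sized for an io needs the configured data segments plus two fixed entries for the command and response IUs, plus one spare entry to compensate for a possible partial first page.

#include <stdio.h>

#define CMD_RSP_ENTRIES   2   /* one SGE each for the command IU and response IU */
#define PARTIAL_PAGE_PAD  1   /* spare data SGE for an unaligned first page       */

static int sgl_entries_needed(int data_sg_seg_cnt)
{
    return data_sg_seg_cnt + CMD_RSP_ENTRIES + PARTIAL_PAGE_PAD;
}

int main(void)
{
    /* e.g. a configured sg_seg_cnt of 64 data segments */
    printf("SGL entries for 64 data segments: %d\n", sgl_entries_needed(64));
    return 0;
}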
-
Authored by James Smart
During driver unload, the driver may crash due to NULL pointers. The NULL pointers were due to the driver not protecting itself sufficiently during some of the teardown paths. Additionally, the driver was not waiting for and cleaning up nvme io resources. As such, the driver wasn't making the callbacks to the transport, stalling the transport's association teardown. This patch waits for io cleanup before tearing down and adds checks for possible NULL pointers.
Cc: <stable@vger.kernel.org> # 4.12+
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Authored by James Smart
When the driver is unloading, the nvme transport could be in the process of submitting new requests, sending abort requests to terminate associations, or making LS-related requests. The driver's abort and request entry points are currently ignorant of the unloading state and start the requests even though the infrastructure to complete them is being torn down. Change the entry points for new requests to check whether the driver is unloading and, if so, reject the requests. The abort routines check for unloading and, if so, noop the request. An abort is noop'd because the teardown paths are already aborting/terminating the io outstanding at the time the teardown initiated.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
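The entry-point guards in outline, with a hypothetical flag and return values: new submissions are rejected while unloading, and aborts become no-ops because the teardown path is already terminating the outstanding io.

#include <stdio.h>
#include <stdbool.h>

static bool unloading;   /* set once driver detach begins */

static int nvme_fcp_io_req(int req_id)
{
    if (unloading)
        return -19;              /* e.g. -ENODEV: reject new work            */
    printf("submitted io %d\n", req_id);
    return 0;
}

static void nvme_fcp_abort(int req_id)
{
    if (unloading)
        return;                  /* noop: teardown is already aborting it    */
    printf("issued abort for io %d\n", req_id);
}

int main(void)
{
    nvme_fcp_io_req(1);
    unloading = true;
    printf("submit while unloading: %d\n", nvme_fcp_io_req(2));
    nvme_fcp_abort(2);
    return 0;
}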
-