- 21 Jun 2019, 1 commit
-
-
Submitted by James Smart

This patch updates RSCN receive processing to check whether the remote port is an NVME port and, if so, invoke the nvme_fc callback to rescan the remote port. The rescan will generate a discovery udev event.

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Arun Easi <aeasi@marvell.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
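As a rough sketch of the shape such a hook can take — nvme_fc_rescan_remoteport() is the real nvme_fc transport entry point, while the node structure and flag below are illustrative assumptions, not lpfc's actual layout:

    #include <linux/nvme-fc-driver.h>

    #define EXAMPLE_FC4_NVME 0x2    /* assumed flag marking an NVME-capable node */

    struct example_node {           /* stand-in for the driver's discovery node */
            u32 fc4_type;
            struct nvme_fc_remote_port *remoteport;
    };

    /* On RSCN receipt, ask the nvme_fc transport to rescan the remote
     * port; the transport then emits the discovery udev event.
     */
    static void example_handle_rscn(struct example_node *ndlp)
    {
            if ((ndlp->fc4_type & EXAMPLE_FC4_NVME) && ndlp->remoteport)
                    nvme_fc_rescan_remoteport(ndlp->remoteport);
    }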
-
- 09 Apr 2019, 2 commits
-
-
Submitted by Bart Van Assche

This patch avoids that the following compiler warning is reported with CONFIG_NVME_FC=n:

    drivers/scsi/lpfc/lpfc_nvme.c:2140:1: warning: 'lpfc_nvme_lport_unreg_wait' defined but not used [-Wunused-function]
     lpfc_nvme_lport_unreg_wait(struct lpfc_vport *vport,
     ^~~~~~~~~~~~~~~~~~~~~~~~~~

Fixes: 3999df75 ("scsi: lpfc: Declare local functions static")
Cc: James Smart <james.smart@broadcom.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Acked-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Submitted by Gustavo A. R. Silva

In preparation for enabling -Wimplicit-fallthrough, mark switch cases where we are expecting to fall through.

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
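A small illustration of the annotation style this series applied — at the time the marker was a comment that GCC's -Wimplicit-fallthrough recognizes (the fallthrough; pseudo-keyword came later); the switch itself is hypothetical:

    static int example_classify(int code)
    {
            int severity = 0;

            switch (code) {
            case 1:
                    severity++;
                    /* fall through */
            case 2:
                    severity++;
                    break;
            default:
                    break;
            }
            return severity;
    }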
-
- 04 Apr 2019, 3 commits
-
-
Submitted by Bart Van Assche

This patch avoids that a kernel warning appears when smp_processor_id() is called with preempt debugging enabled.

Cc: James Smart <james.smart@broadcom.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Acked-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
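For context, a sketch of the two usual remedies when smp_processor_id() is reached from preemptible code — this shows the general kernel idiom, not necessarily the exact change made in lpfc:

    #include <linux/smp.h>

    static void example_touch_percpu_stat(int *per_cpu_slot)
    {
            int cpu;

            /* Remedy 1: pin the task while the cpu number is used. */
            cpu = get_cpu();                /* disables preemption */
            per_cpu_slot[cpu]++;            /* cannot migrate here */
            put_cpu();

            /* Remedy 2: if the number is only an advisory hint (e.g. for
             * statistics), the raw variant skips the DEBUG_PREEMPT check.
             */
            cpu = raw_smp_processor_id();
            per_cpu_slot[cpu]++;            /* may race after migration; fine for stats */
    }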
-
Submitted by Bart Van Assche

This patch avoids that the compiler warns about a missing fall-through annotation when building with W=1.

Cc: James Smart <james.smart@broadcom.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Acked-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Submitted by Bart Van Assche

This patch avoids that the compiler complains about missing declarations when building with W=1.

Cc: James Smart <james.smart@broadcom.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Acked-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
- 26 Mar 2019, 1 commit
-
-
Submitted by Arnd Bergmann

clang -Wuninitialized incorrectly sees a variable being used without initialization:

    drivers/scsi/lpfc/lpfc_nvme.c:2102:37: error: variable 'localport' is uninitialized when used here [-Werror,-Wuninitialized]
            lport = (struct lpfc_nvme_lport *)localport->private;
                                              ^~~~~~~~~
    drivers/scsi/lpfc/lpfc_nvme.c:2059:38: note: initialize the variable 'localport' to silence this warning
            struct nvme_fc_local_port *localport;
                                                 ^
                                                  = NULL
    1 error generated.

This is clearly dead code, as the condition leading up to it is always false when CONFIG_NVME_FC is disabled, and the variable is always initialized when nvme_fc_register_localport() was called successfully. Change the preprocessor conditional to the equivalent C construct, which makes the code more readable and gets rid of the warning.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
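The pattern being described, as a hedged sketch with placeholder names: replacing an #if with IS_ENABLED() keeps the dead branch visible to the compiler, so its flow analysis can connect the guard to the use.

    #include <linux/kconfig.h>

    struct example_localport { void *private; };

    static void *example_get_private(struct example_localport *localport)
    {
            void *lport = NULL;

            /* IS_ENABLED() evaluates to a constant 0/1, so with
             * CONFIG_NVME_FC=n the whole branch is provably dead and is
             * discarded — yet it is still parsed, which is what lets the
             * uninitialized-use analysis see the guard.
             */
            if (IS_ENABLED(CONFIG_NVME_FC))
                    lport = localport->private;

            return lport;
    }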
-
- 20 Mar 2019, 1 commit
-
-
Submitted by James Smart

Update the 6072 log message string to print all 32 bits of the extended status.

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
- 07 Mar 2019, 1 commit
-
-
Submitted by Arnd Bergmann

The newly introduced 'cpu' variable is only used inside of an optional block, so we get a warning without CONFIG_SCSI_LPFC_DEBUG_FS:

    drivers/scsi/lpfc/lpfc_nvme.c: In function 'lpfc_nvme_io_cmd_wqe_cmpl':
    drivers/scsi/lpfc/lpfc_nvme.c:968:30: error: unused variable 'cpu' [-Werror=unused-variable]
      uint32_t code, status, idx, cpu;

Move the declaration into the same block to avoid the warning.

Fixes: 63df6d63 ("scsi: lpfc: Adapt cpucheck debugfs logic to Hardware Queues")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
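A sketch of the fix pattern under assumed names — the declaration moves into the only block that uses it:

    #include <linux/printk.h>
    #include <linux/smp.h>

    static void example_io_cmpl(u32 status)
    {
    #ifdef CONFIG_SCSI_LPFC_DEBUG_FS
            /* 'cpu' now lives inside the conditional block, so a build
             * without CONFIG_SCSI_LPFC_DEBUG_FS never declares it and
             * -Wunused-variable cannot fire.
             */
            u32 cpu = raw_smp_processor_id();

            pr_debug("io completion on cpu %u, status 0x%x\n", cpu, status);
    #endif
            /* normal completion handling continues here */
    }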
-
- 06 Feb 2019, 13 commits
-
-
Submitted by James Smart

For files modified as part of the 12.2.0.0 patches, update the copyright to 2019.

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Submitted by James Smart

A scsi host lock is taken on every io completion to check whether the abort handler is waiting on the io completion. This is an expensive lock to take on all completions when an abort is rarely in progress.

Replace the scsi host lock with a command-specific lock. Synchronize the completion and abort paths with the new per-command lock, and ensure all flag changes and nulling of context pointers are done under the lock. When adding the lock to task management aborts, it became apparent that other synchronization locks were missing; that synchronization was added to match the normal paths.

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
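A hedged sketch of the per-command synchronization idea — completion and abort both take the command's own lock instead of a host-wide one, so unrelated completions no longer contend; all field names are assumptions:

    #include <linux/completion.h>
    #include <linux/spinlock.h>

    struct example_cmd {
            spinlock_t buf_lock;            /* guards the fields below */
            bool abort_waiting;             /* abort handler is waiting */
            struct completion *abort_done;  /* set by the abort path */
            void *context;                  /* nulled under the lock */
    };

    /* Completion path: flag checks and context nulling happen under the
     * command's own lock, not the scsi host lock.
     */
    static void example_io_complete(struct example_cmd *cmd)
    {
            struct completion *done = NULL;
            unsigned long flags;

            spin_lock_irqsave(&cmd->buf_lock, flags);
            if (cmd->abort_waiting)
                    done = cmd->abort_done;
            cmd->context = NULL;
            spin_unlock_irqrestore(&cmd->buf_lock, flags);

            if (done)
                    complete(done);
    }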
-
Submitted by James Smart

So far, MSIX vector allocation assumed it would be 1:1 with hardware queues. However, there are several reasons why fewer MSIX vectors may be allocated than hardware queues, such as the platform being out of vectors or adapter limits being less than the cpu count.

This patch reworks the MSIX/EQ relationships with the per-cpu hardware queues so they can function independently. MSIX vectors will be equitably split between cpu sockets/cores, and the per-cpu hardware queues will then be mapped to the vectors most efficient for them.

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
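A rough sketch of the allocation side, assuming nothing about lpfc's actual structures — the platform may grant fewer vectors than the hardware queue count, and the driver must then share vectors among queues:

    #include <linux/pci.h>

    /* Ask for one vector per desired hardware queue, but accept fewer.
     * Returns the number granted; a simple mapping would assign queue i
     * to vector i % nvec.
     */
    static int example_alloc_vectors(struct pci_dev *pdev, int want_hdwq)
    {
            int nvec = pci_alloc_irq_vectors(pdev, 1, want_hdwq,
                                             PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);

            if (nvec < 0)
                    return nvec;    /* not even one vector available */

            return nvec;            /* may be < want_hdwq: queues share vectors */
    }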
-
Submitted by James Smart

The default behavior is to use the information from the upper IO stacks to select the hardware queue for IO submission, which typically has good cpu affinity. However, on some variants of the upstream kernel, the driver has found the queuing information to be suboptimal for FCP, or IO completion locked onto particular cpus.

For command submission situations, the lpfc_fcp_io_sched module parameter can be set to specify a hardware queue selection policy that overrides the os stack information. For IO completion situations, rather than queuing cq processing based on the cpu servicing the interrupting event, schedule the cq processing on the cpu associated with the hardware queue's cq.

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Submitted by James Smart

The XRI get/put lists were partitioned per hardware queue. However, the adapter rarely had sufficient resources to give a large number of resources per queue. As such, it became common for a cpu to encounter a lack of XRI resources and request the upper io stack to retry after returning a BUSY condition. This occurred even though other cpus were idle and not using their resources.

Create as efficient a scheme as possible to move resources to the cpus that need them. Each cpu maintains a small private pool which it allocates from for io. There is a watermark that the cpu attempts to keep in the private pool. The private pool, when empty, pulls from a global pool on the cpu. When the cpu's global pool is empty, it will pull from other cpus' global pools. As there are many cpu global pools (one per cpu or hardware queue count), and as each cpu selects which cpu to pull from at different rates and at different times, this creates a randomizing effect that minimizes the number of cpus that will contend with each other when they steal XRIs from another cpu's global pool.

On io completion, a cpu will push the XRI back onto its private pool. A watermark level is maintained for the private pool such that, when it is exceeded, XRIs are moved to the cpu global pool so that other cpus may allocate them.

On NVME, as heartbeat commands are critical to get placed on the wire, a single expedite pool is maintained. When a heartbeat is to be sent, it will allocate an XRI from the expedite pool rather than the normal cpu private/global pools. On any io completion, if a reduction in the expedite pool is seen, it will be replenished before the XRI is placed on the cpu private pool.

Statistics are added to aid understanding of the XRI levels on each cpu and their behaviors.

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
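A hedged sketch of the pool hierarchy described above — all names, the watermark value, and the locking granularity are illustrative. The get path (not shown) mirrors this in reverse: private pool first, then the cpu's own global pool, then stealing from another cpu's global pool.

    #include <linux/list.h>
    #include <linux/percpu.h>
    #include <linux/spinlock.h>

    #define EXAMPLE_PVT_WATERMARK 16        /* assumed private-pool target level */

    struct example_pool {
            spinlock_t lock;
            struct list_head bufs;
            u32 count;
    };

    struct example_cpu_pools {
            struct example_pool pvt;        /* normally touched only by the owner cpu */
            struct example_pool glb;        /* other cpus may steal from here */
    };

    static struct example_cpu_pools __percpu *example_pools;

    /* io completion: return the XRI to this cpu's private pool, spilling
     * to the cpu's global pool once the private watermark is exceeded so
     * idle cpus' resources stay visible to busy ones.
     */
    static void example_put_xri(struct list_head *xri)
    {
            struct example_cpu_pools *p = get_cpu_ptr(example_pools);
            struct example_pool *dst = &p->pvt;

            spin_lock(&dst->lock);
            if (dst->count >= EXAMPLE_PVT_WATERMARK) {
                    spin_unlock(&dst->lock);
                    dst = &p->glb;
                    spin_lock(&dst->lock);
            }
            list_add(xri, &dst->bufs);
            dst->count++;
            spin_unlock(&dst->lock);
            put_cpu_ptr(example_pools);
    }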
-
Submitted by James Smart

The SLI4 nvme functions are passing the SLI3 ring number when posting a wqe to hardware. This should be indicating the hardware queue to use, not the ring number. Replace the ring number with the hardware queue that should be used.

Note: SCSI avoided this issue as it utilized an older lpfc_issue_iocb routine that properly adapts.

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Submitted by James Smart

Many io statistics were being sampled and saved using adapter-based data structures. This was creating a lot of contention and cache thrashing in the I/O path.

Move the statistics to the hardware queue data structures. Given the per-queue data structures, use of atomic types is lessened. Add new sysfs and debugfs stat routines to collate the per-hardware-queue values and report them at an adapter level.

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
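The shape of the change, sketched with placeholder names — per-queue plain counters replace adapter-wide atomics, and the report path sums them:

    #include <linux/compiler.h>
    #include <linux/types.h>

    struct example_hdwq_stat {
            u64 io_cmpls;           /* updated only from this queue's context */
            u64 io_errors;
    };

    struct example_hba {
            u32 nr_hdwq;
            struct example_hdwq_stat *stat; /* one slot per hardware queue */
    };

    /* sysfs/debugfs report path: collate per-queue values into an
     * adapter-level total instead of maintaining one contended atomic.
     */
    static u64 example_total_cmpls(struct example_hba *hba)
    {
            u64 total = 0;
            u32 i;

            for (i = 0; i < hba->nr_hdwq; i++)
                    total += READ_ONCE(hba->stat[i].io_cmpls);
            return total;
    }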
-
Submitted by James Smart

Similar to the io execution path that reports cpu context information, the debugfs routines for cpu information need to be aligned with the new hardware queue implementation. Convert the debugfs and nvme cpucheck statistics to report information per hardware queue.

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Submitted by James Smart

Once the IO buffer allocations were made shared, there was a single XRI buffer list shared by all hardware queues. A single list isn't great for performance when shared across the per-cpu hardware queues.

Create a separate XRI IO buffer get/put list for each hardware queue. As SGLs and associated IO buffers get allocated/posted to the firmware, round-robin their assignment across all available hardware queues so that there is an equitable assignment.

Modify the SCSI and NVME IO submit code paths to use the hardware queue logic for XRI allocation. Add a debugfs interface to display hardware queue statistics. Add a new empty_io_bufs counter to track if a cpu runs out of XRIs. Replace common_ variables/names with io_ to make their meanings clearer.

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Submitted by James Smart

Currently, both nvme and fcp each have their own concept of an io_channel, which is a combination wq/cq with an associated msix. Different cpus would share an io_channel.

The driver is now moving to per-cpu wq/cq pairs and msix vectors. The driver will still use separate wq/cq pairs per protocol on each cpu, but the protocols will share the msix vector.

Given the elimination of the nvme and fcp io_channels, those module parameters are removed. A new parameter, lpfc_hdw_queue, is added which allows the per-cpu wq/cq pair allocation to be overridden and set to a lesser value. If lpfc_hdw_queue is zero, the number of pairs allocated will be based on the number of cpus. If non-zero, the parameter specifies the number of queues to allocate. At this time, the maximum non-zero value is 64.

To manage this new paradigm, a new hardware queue structure is created to track queue activity and relationships. As the MSIX vector allocation must be known before setting up the relationships, msix allocation now occurs before the queue data structures are allocated. If the number of vectors allocated is less than the desired hardware queues, the hardware queue count will be reduced to the number of vectors.

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
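A minimal sketch of the sizing rule just described — the parameter name and helper are placeholders, not lpfc's actual code:

    #include <linux/cpumask.h>
    #include <linux/kernel.h>
    #include <linux/module.h>

    static int example_hdw_queue;   /* 0 = one wq/cq pair per cpu (illustrative) */
    module_param(example_hdw_queue, int, 0444);

    static int example_nr_hdwq(int granted_vectors)
    {
            int n = example_hdw_queue ? example_hdw_queue : num_present_cpus();

            n = min(n, 64);                 /* stated maximum for a non-zero value */
            return min(n, granted_vectors); /* shrink to what MSIX actually granted */
    }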
-
Submitted by James Smart

Currently, both NVME and SCSI get their IO buffers from separate pools. XRIs are associated 1:1 with IO buffers, so XRIs are also split between protocols.

Eliminate the independent pools and use a single pool. Each buffer structure now has a common section and a protocol section. Per-protocol routines for SGL initialization are removed and replaced by common routines. Initialization of the buffers is only done on the common area. All other fields, which are protocol specific, are initialized when the buffer is allocated for use in the per-protocol allocation routine.

In the past, the SCSI side allocated IO buffers as part of slave_alloc calls until the maximum XRIs for SCSI was reached. As all XRIs are now common and may be used for either protocol, allocation for everything is done as part of adapter initialization, and the scsi side takes no action in slave_alloc.

As XRIs are no longer split, the lpfc_xri_split module parameter is removed.

Adapters based on SLI3 will continue to use the older scsi_buf_list_get/put routines. All SLI4 adapters utilize the new IO buffer scheme.

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
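The buffer layout idea, sketched with invented field names — a common section initialized once, and a per-protocol section set up only when the buffer is checked out for that protocol:

    #include <linux/list.h>
    #include <linux/types.h>

    struct example_io_buf {
            /* common section: initialized once at adapter init */
            struct list_head list;
            u16 xri;                /* XRIs are 1:1 with buffers, no protocol split */
            dma_addr_t sgl_dma;

            /* protocol section: filled in by the per-protocol allocation
             * routine when the buffer is handed out for use.
             */
            union {
                    struct { u32 scsi_tag; } scsi;
                    struct { u16 nvme_cid; } nvme;
            } u;
    };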
-
Submitted by James Smart

lpfc_nvme_prep_io_cmd() checks for a null pnode, but the caller lpfc_nvme_fcp_io_submit() has already ensured it's non-null. Remove the pnode null check.

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Submitted by James Smart

An hba-wide lock is taken in the nvme io completion routine. The lock covers nulling of the nrport pointer in the cmd structure. The nrport member isn't necessary: after extracting the pointer from the command, the pointer was dereferenced to get the fc discovery node pointer, but the fc discovery node pointer is already in the command structure, so the dereference was unnecessary. Eliminate the nrport structure member and its use, which also eliminates the port-wide lock.

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
- 23 Jan 2019, 1 commit
-
-
Submitted by Ewan D. Milne

We cannot wait on a completion object in the lpfc_nvme_lport structure in the _destroy_localport() code path, because the NVMe/FC transport will free that structure immediately after the .localport_delete() callback. This results in a use-after-free, and a hang if slub_debug=FZPU is enabled.

Fix this by putting the completion on the stack.

Signed-off-by: Ewan D. Milne <emilne@redhat.com>
Acked-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
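A sketch of the fix pattern — the waiter owns the completion on its own stack, so the transport freeing the surrounding structure right after .localport_delete() can no longer free memory that is still being waited on. DECLARE_COMPLETION_ONSTACK() is the real kernel helper; the structures are placeholders:

    #include <linux/completion.h>

    struct example_lport_priv {
            struct completion *unreg_done;  /* points into the waiter's stack */
    };

    static void example_localport_delete(struct example_lport_priv *priv)
    {
            complete(priv->unreg_done);
            /* the transport frees the structure containing priv after this */
    }

    static void example_destroy_localport(struct example_lport_priv *priv)
    {
            DECLARE_COMPLETION_ONSTACK(unreg_done);

            priv->unreg_done = &unreg_done;
            /* ... unregister the localport with the transport here ... */
            wait_for_completion(&unreg_done);
            /* safe: the completion lives on this stack frame, not in priv */
    }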
-
- 08 Dec 2018, 1 commit
-
-
Submitted by James Smart

The driver is setting bits in word 10 of the SLI4 ABORT WQE (the wqid). The field was a carry-over from a prior SLI revision. The field does not exist in SLI4, and setting it may overlap with a future definition of the WQE. Remove the setting of the WQID in the ABORT WQE. Also clean up the WQE field settings: initialize the WQE to zero, and don't bother setting individual fields to zero.

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
- 21 Sep 2018, 1 commit
-
-
Submitted by James Smart

The driver currently uses the ndlp to get the local rport, which is then used to get the nvme transport remoteport pointer. There can be cases where a stale remoteport pointer is obtained, as synchronization isn't done across the different dereferences. Correct this by using locks to synchronize the dereferences.

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
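A sketch of the locking shape — take a lock across the ndlp -> rport -> remoteport chain so a concurrent unregister cannot invalidate the intermediate object between the two loads; structure names are placeholders:

    #include <linux/nvme-fc-driver.h>
    #include <linux/spinlock.h>

    struct example_rport { struct nvme_fc_remote_port *remoteport; };
    struct example_node  { struct example_rport *nrport; };

    static struct nvme_fc_remote_port *
    example_get_remoteport(spinlock_t *host_lock, struct example_node *ndlp)
    {
            struct nvme_fc_remote_port *remoteport = NULL;
            unsigned long flags;

            spin_lock_irqsave(host_lock, flags);
            if (ndlp->nrport)
                    remoteport = ndlp->nrport->remoteport;
            spin_unlock_irqrestore(host_lock, flags);

            return remoteport;
    }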
-
- 17 Sep 2018, 1 commit
-
-
Submitted by YueHaibing

Fixes the gcc '-Wunused-but-set-variable' warning:

    drivers/scsi/lpfc/lpfc_nvme.c: In function 'lpfc_new_nvme_buf':
    drivers/scsi/lpfc/lpfc_nvme.c:2238:24: warning: variable 'sgl_size' set but not used [-Wunused-but-set-variable]
      int bcnt, num_posted, sgl_size;
                            ^

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
- 12 Sep 2018, 2 commits
-
-
Submitted by James Smart

Message 6408 is displayed for each entry in an array, but the cpu and queue numbers were incorrect for the entry. Message 6001 includes an extraneous character. Resolve both issues.

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Submitted by James Smart

The driver allocates a sg list per io structure based on a fixed maximum size. When it registers with the protocol transports and indicates the max sg list size it supports, the driver manipulates the fixed value to report a lesser amount so that it has reserved space for sg elements that are used for DIF.

The driver initialization path sets the cfg_sg_seg_cnt field to the manipulated value for scsi. NVME initialization ran afterward and capped its maximum by the manipulated value for SCSI. This erroneously made NVME report the SCSI reduced-for-DIF value, which reduced the max io size for nvme and wasted sg elements.

Rework the driver so that cfg_sg_seg_cnt becomes the overall maximum size, and allow the max size to be tunable. A separate (new) scsi sg count is then set up with the scsi-modified reduced value. NVME then initializes based off the overall maximum.

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
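Illustrative sizing only, not lpfc's real math — one overall maximum that both protocols start from, and a scsi-only value reduced to reserve DIF entries, so nvme no longer inherits the reduction:

    #include <linux/types.h>

    #define EXAMPLE_DIF_RESERVED 2  /* assumed sg entries reserved for DIF */

    static void example_size_sg(u32 cfg_sg_seg_cnt,
                                u32 *scsi_sg_cnt, u32 *nvme_sg_cnt)
    {
            *scsi_sg_cnt = cfg_sg_seg_cnt - EXAMPLE_DIF_RESERVED;
            *nvme_sg_cnt = cfg_sg_seg_cnt;  /* full maximum, no DIF reservation */
    }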
-
- 03 Aug 2018, 2 commits
-
-
Submitted by James Smart

Performance is affected when the target queue depth is tracked. An atomic counter is incremented on the submission path, which competes with it being decremented on the completion path. In addition, multiple CPUs can simultaneously be manipulating this counter for the same ndlp.

Reduce the overhead by only performing the target increment/decrement when the target queue depth is less than the overall adapter depth, i.e. when the count is actually meaningful.

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
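A sketch of the optimization with assumed names — the contended atomic is only touched when the target depth can actually gate anything:

    #include <linux/atomic.h>
    #include <linux/types.h>

    struct example_target {
            u32 cmd_qdepth;         /* target's configured depth */
            atomic_t cmd_pending;   /* the contended counter */
    };

    static void example_submit(struct example_target *tgt, u32 adapter_depth)
    {
            /* Only track when the target depth is below the adapter depth,
             * i.e. when the count could actually limit submissions.
             */
            if (tgt->cmd_qdepth < adapter_depth)
                    atomic_inc(&tgt->cmd_pending);
            /* the completion path mirrors this check before atomic_dec() */
    }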
-
Submitted by James Smart

During remote port loss fault testing, the driver crashed with the following trace:

    general protection fault: 0000 [#1] SMP
    RIP: ... lpfc_nvme_register_port+0x250/0x480 [lpfc]
    Call Trace:
     lpfc_nlp_state_cleanup+0x1b3/0x7a0 [lpfc]
     lpfc_nlp_set_state+0xa6/0x1d0 [lpfc]
     lpfc_cmpl_prli_prli_issue+0x213/0x440
     lpfc_disc_state_machine+0x7e/0x1e0 [lpfc]
     lpfc_cmpl_els_prli+0x18a/0x200 [lpfc]
     lpfc_sli_sp_handle_rspiocb+0x3b5/0x6f0 [lpfc]
     lpfc_sli_handle_slow_ring_event_s4+0x161/0x240 [lpfc]
     lpfc_work_done+0x948/0x14c0 [lpfc]
     lpfc_do_work+0x16f/0x180 [lpfc]
     kthread+0xc9/0xe0
     ret_from_fork+0x55/0x80

After registering a new remoteport, the driver was pulling an ndlp pointer from the lpfc rport associated with the private area of the newly registered remoteport. The private area is uninitialized, so it's garbage. Correct by pulling the lpfc rport pointer from the entering ndlp, then the ndlp value from that rport. Note that the entering ndlp may be replaced by rport->ndlp due to an address-change swap.

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
- 11 Jul 2018, 2 commits
-
-
Submitted by James Smart

The PBDE optimizations aren't supported in all firmware revs. Make the optimizations configurable in case there's a side effect on old firmware.

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Submitted by James Smart

The system crashes when the lpfc module is unloaded after taking the port offline. The nvme queue pointers were freed during port offline, but were later accessed in the pci remove path. Validate the pointers in the pci remove path before accessing them.

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
- 29 May 2018, 2 commits
-
-
Submitted by James Smart

modprobe -r lpfc produces the following:

    Call Trace:
     __blk_mq_run_hw_queue+0xa2/0xb0
     __blk_mq_delay_run_hw_queue+0x9d/0xb0
     ? blk_mq_hctx_has_pending+0x32/0x80
     blk_mq_run_hw_queue+0x50/0xd0
     blk_mq_sched_insert_request+0x110/0x1b0
     blk_execute_rq_nowait+0x76/0x180
     nvme_keep_alive_work+0x8a/0xd0 [nvme_core]
     process_one_work+0x17f/0x440
     worker_thread+0x126/0x3c0
     ? manage_workers.isra.24+0x2a0/0x2a0
     kthread+0xd1/0xe0
     ? insert_kthread_work+0x40/0x40
     ret_from_fork_nospec_begin+0x21/0x21
     ? insert_kthread_work+0x40/0x40

However, rmmod lpfc would run correctly.

When an nvme remoteport is unregistered with the host nvme transport, it needs to set the remoteport->dev_loss_tmo value to 0 to indicate an immediate termination of device loss and prevent any further keep-alives to that rport. The driver was never setting dev_loss_tmo, causing the nvme transport to continue to send the keep-alive.

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
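The two transport calls below are real nvme_fc APIs; the surrounding function is a placeholder for the driver's unregister path:

    #include <linux/nvme-fc-driver.h>

    static void example_unregister_rport(struct nvme_fc_remote_port *remoteport)
    {
            /* Tell the transport the device is gone now: dev_loss_tmo = 0
             * stops any further keep-alives to this rport immediately.
             */
            nvme_fc_set_remoteport_devloss(remoteport, 0);
            nvme_fc_unregister_remoteport(remoteport);
    }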
-
Submitted by James Smart

Under large configurations, the driver would start to log message 6065 - NVME out of buffers (exchanges). The driver is using the ndlp cmd_qdepth value when determining the max outstanding ios for an adapter. This value, by default, is set to 65536, which exceeds the maximum exchange count supported on an adapter. The ndlp cmd_qdepth has no relevance here; the outstanding io count should be capped at the max exchange count, with IO requests beyond that level getting bounced back with an EBUSY status so that they are retried by the block layer.

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
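A sketch of the gating logic with illustrative names — submissions are capped at the adapter's real exchange capacity rather than ndlp cmd_qdepth, and the overflow bounces back as -EBUSY:

    #include <linux/atomic.h>
    #include <linux/errno.h>

    static int example_submit_io(atomic_t *outstanding, u32 max_exchanges)
    {
            if ((u32)atomic_inc_return(outstanding) > max_exchanges) {
                    atomic_dec(outstanding);
                    return -EBUSY;  /* the block layer retries later */
            }
            /* ... build and post the WQE here ... */
            return 0;
    }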
-
- 08 May 2018, 3 commits
-
-
Submitted by James Smart

Fix small formatting and wording nits in the Broadcom copyright header.

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Submitted by James Smart

Fix up log messages and add an fcp error stat counter in the IO submit code path to make diagnosing problems easier.

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Submitted by James Smart

I/O submission paths in the lpfc nvme path are rejecting the io with an error code that reflects back to the caller as a hard io failure. Many of these conditions are transient and would likely resolve if retried. Correct by returning -EBUSY, which the FC transport triggers off of to return busy status codes to the blk-mq layer.

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
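A sketch of the error translation — which conditions count as transient is an assumption here; the point is that -EBUSY is what the FC transport turns into a busy status for blk-mq, so the io is requeued instead of failed hard:

    #include <linux/errno.h>

    static int example_map_submit_err(int rc)
    {
            switch (rc) {
            case -ENOMEM:           /* e.g. out of WQEs/XRIs right now */
            case -EAGAIN:
                    return -EBUSY;  /* blk-mq will requeue and retry */
            default:
                    return rc;      /* genuinely hard failure */
            }
    }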
-
- 19 Apr 2018, 3 commits
-
-
Submitted by James Smart

Code referencing local port structures didn't accommodate cases where the localport may not be registered yet. Add NULL pointer checks to the logic.

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
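A sketch of the guard shape with placeholder names — any reporting path that walks from the vport to the localport must tolerate a localport that was never registered:

    #include <linux/kernel.h>

    struct example_vport { void *localport; };

    static int example_show_info(struct example_vport *vport,
                                 char *buf, size_t len)
    {
            if (!vport->localport)
                    return scnprintf(buf, len, "NVME lport not registered\n");

            /* safe to dereference the localport past this point */
            return scnprintf(buf, len, "NVME lport registered\n");
    }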
-
Submitted by James Smart

In tests adding and removing a remote port, calls to nvme_info would eventually show fewer target ports discovered than were present in the san. Additionally, the following error messages were seen:

    6031 RemotePort Registration failed err: -116, DID x471301

There is a race condition between the driver and the nvme transport on remote port unregister vs. the confirmed deletion. It's possible that the driver may rediscover the remote port and re-register it before a prior unregister's delete callback was made (as it rebound to the prior remoteport structure). However, the driver was coded to expect the callback before seeing the remote port again and thus making a new registration. This logic left the driver holding an invalid remoteport pointer.

Correct by tracking when the driver is waiting for the delete callback. In cases where the ndlp remoteport pointer is updated, it is only cleared when the wait has not been superseded by a prior registration.

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Submitted by James Smart

During target-side port faults, the driver would not recover all target port logins. This resulted in a loss of nvme device discovery.

The driver is coded to wait for all GID_FT requests to complete before restarting discovery. A fault was seen where the outstanding GID_FT counts were not properly decremented, so discovery would never start. Another fault was found in the clearing of the gidft_inp counter, which would be skipped in this condition. A third fault was found in lpfc_nvme_register_port, which would remove a reference on the ndlp and thereby allow a node swap on a port address change to prematurely remove the reference and release the ndlp.

The following changes are made:
- Correct the decrementing of the outstanding GID_FT counters.
- In RSCN handling, no longer zero the counter before calling to issue another GID_FT.
- No longer remove the reference on the ndlp when the ndlp->nrport value is not yet NULL.

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-