1. 25 Aug 2021 (6 commits)
  2. 27 Jul 2021 (2 commits)
  3. 19 Jul 2021 (2 commits)
  4. 10 Jun 2021 (1 commit)
  5. 13 Apr 2021 (2 commits)
  6. 17 Nov 2020 (2 commits)
  7. 27 Oct 2020 (1 commit)
  8. 01 Sep 2020 (1 commit)
  9. 03 Jul 2020 (2 commits)
  10. 08 May 2020 (1 commit)
  11. 18 Feb 2020 (1 commit)
    • scsi: lpfc: add RDF registration and Link Integrity FPIN logging · df3fe766
      James Smart committed
      This patch modifies lpfc to register for Link Integrity events via the use
      of an RDF ELS and to perform Link Integrity FPIN logging.
      
      Specifically, the driver was modified to:
      
       - Format and issue the RDF ELS immediately following SCR registration.
         This registers the ability of the driver to receive FPIN ELS.
      
        - Add decoding of the FPIN ELS into the received descriptors, with
          logging of the Link Integrity event information. After decoding, the ELS
          is delivered to the SCSI FC transport to be delivered to any user-space
          applications.
      
        - To aid in logging, simple helpers were added to create enum-to-name
          string lookup functions that utilize the initialization helpers from the
          fc_els.h header.
      
        - Note: base header definitions for the ELS's don't populate the
          descriptor payloads. As such, lpfc creates its own version of the
          structures, using the base definitions (mostly headers) and additionally
          declaring the descriptors that will complete the population of the ELS.
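The enum-to-name lookup helpers mentioned above can be sketched with the usual X-macro pattern, where one initializer list expands once into an enum and once into a name table so the two cannot drift apart. All identifiers below are illustrative, not the actual lpfc or fc_els.h names:

```c
#include <stddef.h>

/* Hypothetical event list; real values live in fc_els.h. The single
 * FPIN_LI_EVENT_TYPES list drives both the enum and the name table. */
#define FPIN_LI_EVENT_TYPES \
	FPIN_LI_EVT(LINK_FAILURE,   0x1) \
	FPIN_LI_EVT(LOSS_OF_SYNC,   0x2) \
	FPIN_LI_EVT(LOSS_OF_SIGNAL, 0x3)

enum fpin_li_event_type {
#define FPIN_LI_EVT(name, val)	FPIN_LI_##name = (val),
	FPIN_LI_EVENT_TYPES
#undef FPIN_LI_EVT
};

struct evt_name {
	int evt;
	const char *name;
};

static const struct evt_name fpin_li_names[] = {
#define FPIN_LI_EVT(name, val)	{ (val), #name },
	FPIN_LI_EVENT_TYPES
#undef FPIN_LI_EVT
};

/* Enum-to-name lookup used when logging a Link Integrity event. */
static const char *fpin_li_event_name(int evt)
{
	size_t i;

	for (i = 0; i < sizeof(fpin_li_names) / sizeof(fpin_li_names[0]); i++)
		if (fpin_li_names[i].evt == evt)
			return fpin_li_names[i].name;
	return "UNKNOWN";
}
```

Because the stringified enum names come from the same list as the values, adding a new event type keeps logging correct with no extra maintenance.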
      
       Link: https://lore.kernel.org/r/20200210173155.547-3-jsmart2021@gmail.com
       Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
       Signed-off-by: James Smart <jsmart2021@gmail.com>
       Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
      df3fe766
  12. 22 Dec 2019 (1 commit)
  13. 06 Nov 2019 (1 commit)
  14. 29 Oct 2019 (1 commit)
  15. 25 Oct 2019 (4 commits)
  16. 01 Oct 2019 (2 commits)
  17. 20 Aug 2019 (3 commits)
    • scsi: lpfc: Add NVMe sequence level error recovery support · 0d8af096
      James Smart committed
       FC-NVMe-2 added support for sequence level error recovery (SLER) in the
       FC-NVMe protocol. This allows detection of errors and lost frames and
       immediate retransmission of data, avoiding the exchange termination that
       escalates into NVMe over FC connection and association failures: a
       significant RAS improvement.
      
       The driver is modified to indicate support for SLER in the NVMe PRLI it
       issues and to check for support in the PRLI response. When both sides
       support it, the driver will set a bit in the WQE to enable the recovery
       behavior on the exchange. The adapter will take care of all detection and
       retransmission.
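The negotiation described above reduces to a simple both-sides check. A minimal sketch, with invented bit positions and struct names (the real PRLI service-parameter and WQE layouts are defined by FC-NVMe-2 and the lpfc headers):

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical bit positions, for illustration only. */
#define PRLI_SLER_SUPPORT	(1u << 5)   /* SLER bit in PRLI service params */
#define WQE_SLER_ENABLE		(1u << 12)  /* recovery-enable bit in the WQE  */

struct nvme_prli_params {
	uint32_t word3;		/* service-parameter word carrying the SLER bit */
};

/* Enable sequence level error recovery on the exchange WQE only when both
 * the PRLI we issued and the PRLI response advertise SLER support. */
static uint32_t wqe_flags_for_exchange(const struct nvme_prli_params *local,
				       const struct nvme_prli_params *remote,
				       uint32_t wqe_flags)
{
	bool both_support = (local->word3 & PRLI_SLER_SUPPORT) &&
			    (remote->word3 & PRLI_SLER_SUPPORT);

	if (both_support)
		wqe_flags |= WQE_SLER_ENABLE;
	return wqe_flags;
}
```

Once the bit is set, the adapter firmware handles detection and retransmission; the driver's only job is this per-exchange opt-in.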
       Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
       Signed-off-by: James Smart <jsmart2021@gmail.com>
       Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
      0d8af096
    • scsi: lpfc: Support dynamic unbounded SGL lists on G7 hardware. · d79c9e9d
      James Smart committed
       Typical SLI-4 hardware supports up to two 4KB pages registered per XRI to
       contain the exchange's scatter/gather list. This caps the number of
       scatter/gather elements that can be in the SGL; there is no extension
       mechanism to grow the list beyond the two pages.
      
       The G7 hardware adds an SGE type that allows the SGL to be vectored to a
       different scatter/gather list segment, and that segment can contain an SGE
       pointing to yet another segment, and so on. The initial segment must still
       be pre-registered for the XRI, but it can now be much smaller (256 bytes)
       because it can be grown dynamically. This much smaller allocation can
       handle the SG list for most normal I/O, and the dynamic aspect allows it
       to support many megabytes if needed.
      
       The implementation creates a pool of such segments, initially sized to
       hold the small initial segment for each XRI. If an I/O requires additional
       segments, they are allocated from the pool; if the pool has no more
       segments, it is grown to meet the new demand. After the I/O completes, the
       additional segments are returned to the pool for use by other I/Os. Once
       allocated, the additional segments are not released, on the assumption
       that "if needed once, it will be needed again". Pools are kept on a
       per-hardware-queue basis, which is typically 1:1 with a CPU but may be
       shared by multiple CPUs.
      
       The switch to the smaller initial allocation significantly reduces the
       memory footprint of the driver, which now only grows if large I/Os are
       issued. With the several thousand XRIs on an adapter, the 8KB-to-256B
       reduction can conserve 32MB or more.
      
       It has been observed with per-CPU resource pools that a resource allocated
       on CPU A may be put back on CPU B. While the get routines are distributed
       evenly, only a limited subset of CPUs may be handling the put routines,
       which can put a strain on lpfc_put_cmd_rsp_buf_per_cpu because all the
       resources are being put back on that limited subset of CPUs.
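The grow-only pool behavior described above can be modeled in a few lines. This is a userspace sketch of the idea only: the real driver keeps one pool per hardware queue, uses DMA-able memory, and takes locks; every name here is illustrative.

```c
#include <stdlib.h>

struct sgl_segment {
	struct sgl_segment *next;
	char data[256];			/* small initial SGL segment */
};

struct sgl_pool {
	struct sgl_segment *free_list;	/* pooled, reusable segments */
	int total;			/* segments ever allocated: never shrinks */
};

/* Get a segment: reuse from the pool if possible, otherwise grow it. */
static struct sgl_segment *sgl_pool_get(struct sgl_pool *p)
{
	struct sgl_segment *s = p->free_list;

	if (s) {			/* fast path: reuse a pooled segment */
		p->free_list = s->next;
		return s;
	}
	s = malloc(sizeof(*s));		/* pool empty: grow on demand */
	if (s)
		p->total++;
	return s;
}

/* Put a segment back: "if needed once, it will be needed again",
 * so the segment is retained in the pool rather than freed. */
static void sgl_pool_put(struct sgl_pool *p, struct sgl_segment *s)
{
	s->next = p->free_list;
	p->free_list = s;
}
```

The design trades a little retained memory for allocation-free fast paths on subsequent large I/Os, which is why segments are never returned to the system.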
       Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
       Signed-off-by: James Smart <jsmart2021@gmail.com>
       Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
      d79c9e9d
    • scsi: lpfc: Add MDS driver loopback diagnostics support · e62245d9
      James Smart committed
       Added code to support driver loopback with MDS diagnostics. This style of
       diagnostics passes frames from the fabric to the driver, which then echoes
       them back out the link. SEND_FRAME WQEs are used to transmit the frames;
       the SOF and EOF field location definitions were added for use by
       SEND_FRAME.

       Also ensure that enable_mds_diags is a read/write parameter.
       Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
       Signed-off-by: James Smart <jsmart2021@gmail.com>
       Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
      e62245d9
  18. 04 Apr 2019 (1 commit)
  19. 20 Mar 2019 (1 commit)
  20. 06 Feb 2019 (4 commits)
    • scsi: lpfc: Update 12.2.0.0 file copyrights to 2019 · 0d041215
      James Smart committed
      For files modified as part of 12.2.0.0 patches, update copyright to 2019
       Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
       Signed-off-by: James Smart <jsmart2021@gmail.com>
       Reviewed-by: Hannes Reinecke <hare@suse.com>
       Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
      0d041215
    • scsi: lpfc: Rework EQ/CQ processing to address interrupt coalescing · 32517fc0
      James Smart committed
       When driving high IOP counts, auto_imax coalescing kicks in and drives
       performance down to extremely low IOPS levels.
      
      There are two issues:
      
        1) auto_imax is enabled by default. When IOPS get high, the auto
           algorithm divides the IOPS by the hdwq count and uses that value to
           calculate EQ_Delay. The EQ_Delay is set uniformly on all EQs whether
           they have load or not, and is only manipulated every 5s (a long
           time). Thus there were large 5s swings of no interrupt delay
           followed by large/maximum delay, before repeating.
      
        2) When processing a CQ, the driver got mixed up on the rate at which
           to ring the doorbell to keep the chip apprised of eqe or cqe
           consumption, as well as how long to sit in the thread and process
           queue entries. The driver capped its work at 64 entries (very
           small) and then exited/rearmed the CQ. Thus, on heavy loads,
           additional overhead was taken to exit and re-enter the interrupt
           handler; worse, in the large/maximum coalescing windows, it could
           be a while before getting back to servicing.
      
      The issues are corrected by the following:
      
       - A change in defaults. Auto_imax is turned OFF and fcp_imax is set
         to 0. Thus all interrupts are immediate.
      
        - Cleanup of field names and their meanings. Existing names were
          non-intuitive or duplicated one another.
      
       - Added max_proc_limit field, to control the length of time the
         handlers would service completions.
      
       - Reworked EQ handling:
          Added common routine that walks eq, applying notify interval and max
            processing limits. Use queue_claimed to claim ownership of the queue
            while processing. Always rearm the queue whenever the common routine
            is called.
          Rework queue element processing, namely to eliminate hba_index vs
            host_index. Only one index is necessary. The queue entry can be
            marked invalid and the host_index updated immediately after eqe
            processing.
          After rework, xx_release routines are now DB write functions. Renamed
            the routines as such.
          Moved lpfc_sli4_eq_flush(), which does similar action, to same area.
          Replaced the 2 individual loops that walk an eq with a call to the
            common routine.
          Slightly revised lpfc_sli4_hba_handle_eqe() calling syntax.
          Added per-cpu counters to detect interrupt rates and scale
            interrupt coalescing values.
      
       - Reworked CQ handling:
          Added common routine that walks cq, applying notify interval and max
            processing limits. Use queue_claimed to claim ownership of the queue
            while processing. Always rearm the queue whenever the common routine
            is called.
          Rework queue element processing, namely to eliminate hba_index vs
            host_index. Only one index is necessary. The queue entry can be
            marked invalid and the host_index updated immediately after cqe
            processing.
          After rework, xx_release routines are now DB write functions.  Renamed
            the routines as such.
          Replaced the 3 individual loops that walk a cq with a call to the
            common routine.
           Redefined lpfc_sli4_sp_handle_mcqe() to the common handler definition
             with a queue reference. Added an increment for mbox completion to
             the handler.
      
        - Added a new module/sysfs attribute, lpfc_cq_max_proc_limit, to allow
          dynamic changing of the CQ max_proc_limit value being used.
      
       Although this leaves an EQ as an immediate interrupt, that interrupt will
       only occur if a CQ bound to it is in an armed state and has CQEs to
       process. By staying in the CQ processing routine longer, high loads will
       avoid generating more interrupts, as the CQs will only rearm as the
       processing thread exits. The immediate interrupt is also beneficial to
       idle or lightly loaded CQs, as they get serviced immediately without
       being penalized by sharing an EQ with a more loaded CQ.
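The common queue-walk described in the bullets above can be modeled simply: consume entries until the queue is empty or max_proc_limit is reached, ring the doorbell every notify_interval consumed entries, and always finish with a doorbell write that notifies the remainder and rearms. This is a userspace sketch of the control flow only; field and function names are illustrative, not the lpfc ones.

```c
struct mod_queue {
	int pending;		/* valid, unprocessed queue entries */
	int notify_interval;	/* ring doorbell after this many consumed */
	int max_proc_limit;	/* cap on entries handled per invocation */
	int db_writes;		/* doorbell writes (counted for the model) */
};

/* Walk the queue, applying the notify interval and max processing limit.
 * Returns the number of entries consumed. */
static int queue_process(struct mod_queue *q)
{
	int consumed = 0, to_notify = 0;

	while (consumed < q->max_proc_limit && q->pending > 0) {
		q->pending--;		/* handle one entry, advance host_index */
		consumed++;
		if (++to_notify >= q->notify_interval) {
			q->db_writes++;	/* tell the chip what was consumed */
			to_notify = 0;
		}
	}
	q->db_writes++;	/* final write notifies any remainder and rearms */
	return consumed;
}
```

Raising max_proc_limit lets a loaded CQ stay in this loop instead of bouncing through the interrupt handler, which is exactly the overhead the rework removes.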
       Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
       Signed-off-by: James Smart <jsmart2021@gmail.com>
       Reviewed-by: Hannes Reinecke <hare@suse.com>
       Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
      32517fc0
    • scsi: lpfc: Support non-uniform allocation of MSIX vectors to hardware queues · 6a828b0f
      James Smart committed
       So far, MSIX vector allocation assumed it would be 1:1 with hardware
       queues. However, fewer MSIX vectors may be allocated than hardware queues
       for several reasons, such as the platform running out of vectors or the
       adapter limit being less than the CPU count.
      
       This patch reworks the MSIX/EQ relationships with the per-CPU hardware
       queues so they can function independently. MSIX vectors will be equitably
       split between CPU sockets/cores, and then the per-CPU hardware queues
       will be mapped to the vectors most efficient for them.
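When vectors are scarcer than queues, the simplest equitable split is round-robin, so each vector serves roughly the same number of queues. A minimal sketch of that idea only; the real driver additionally weighs CPU socket/core topology when choosing the mapping, and the function name is invented:

```c
/* Map num_hwq hardware queues onto num_vectors MSIX vectors round-robin,
 * so no vector serves more than one queue extra compared to any other. */
static void map_hwq_to_vector(int num_hwq, int num_vectors, int *hwq_to_vec)
{
	int i;

	for (i = 0; i < num_hwq; i++)
		hwq_to_vec[i] = i % num_vectors;
}
```

With, say, 8 hardware queues on 3 vectors, the vectors end up serving 3, 3, and 2 queues, which keeps interrupt load balanced even when the 1:1 assumption no longer holds.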
       Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
       Signed-off-by: James Smart <jsmart2021@gmail.com>
       Reviewed-by: Hannes Reinecke <hare@suse.com>
       Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
      6a828b0f
    • scsi: lpfc: Allow override of hardware queue selection policies · 45aa312e
      James Smart committed
       Default behavior is to use the information from the upper IO stacks to
       select the hardware queue for IO submission, which typically has good CPU
       affinity.
      
       However, on some variants of the upstream kernel, the driver has found
       the queuing information to be suboptimal for FCP, or IO completion to be
       locked on particular CPUs.
      
       For command submission situations, the lpfc_fcp_io_sched module parameter
       can be set to specify a hardware queue selection policy that overrides
       the OS stack information.
      
       For IO completion situations, rather than queuing CQ processing based on
       the CPU servicing the interrupting event, schedule the CQ processing on
       the CPU associated with the hardware queue's CQ.
       Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
       Signed-off-by: James Smart <jsmart2021@gmail.com>
       Reviewed-by: Hannes Reinecke <hare@suse.com>
       Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
      45aa312e
  21. 20 Dec 2018 (1 commit)
    • scsi: lpfc: Adding ability to reset chip via pci bus reset · 5021267a
      James Smart committed
       This patch adds a "pci_bus_reset" option to the board_mode sysfs
       attribute. This option uses the pci_reset_bus() API to reset the PCIe
       link the adapter is on, which resets the chip/adapter. Prior to issuing
       this option, all functions on the same chip must be placed in the
       offline state by the admin. After the reset, all of the instances may be
       brought online again.
      
       The primary purpose of this functionality is to support cases where a
       firmware update requires a chip reset but the admin does not want to
       reboot the machine in order to instantiate the new firmware.
      
      Sanity checks take place prior to the reset to ensure the adapter is the
      sole entity on the PCIe bus and that all functions are in the offline
      state.
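The sanity checks described above reduce to two conditions over the functions visible on the bus: every function belongs to this adapter (sole entity), and every one is offline. A userspace model of that gate only; the real code walks the PCI bus and then calls pci_reset_bus(), and the struct and function names here are invented:

```c
#include <stdbool.h>

struct pci_func_state {
	bool on_our_chip;	/* function belongs to this adapter's chip */
	bool offline;		/* admin has placed the function offline */
};

/* Allow the PCIe bus reset only if the adapter is the sole entity on the
 * bus and every one of its functions is already in the offline state. */
static bool bus_reset_allowed(const struct pci_func_state *funcs, int n)
{
	int i;

	for (i = 0; i < n; i++) {
		if (!funcs[i].on_our_chip)
			return false;	/* a foreign device shares the bus */
		if (!funcs[i].offline)
			return false;	/* some function is still online */
	}
	return true;
}
```

Refusing the reset when a foreign device shares the bus is what makes the operation safe: pci_reset_bus() takes down the whole link, not just this adapter.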
       Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
       Signed-off-by: James Smart <jsmart2021@gmail.com>
       Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
      5021267a