1. 05 December 2017 (2 commits)
    • scsi: lpfc: small sg cnt cleanup · 81e6a637
      Committed by James Smart
      The logic for sg_seg_cnt is a bit convoluted. This patch tries to clean
      up a couple of areas, especially around the +2 and +1 logic.
      
      This patch:
      
      - Cleans up the lpfc_sg_seg_cnt attribute to specify a real minimum
        rather than making the minimum be whatever the default is.
      
      - Removes the hardcoded +2 (the number of sgl elements used for the
        cmd iu and rsp iu) and +1 (an additional entry to compensate for
        nvme's reduction of io size due to a possible partial page) logic
        from sg list initialization. Where the +1 logic is referenced in
        host and target io checks, use the values set in the transport
        template, as those were properly set.
      
      There can certainly be more done in this area, and it will be
      addressed in a combined host/target driver effort. A sketch of the
      sizing arithmetic follows this entry.
      Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
      Signed-off-by: James Smart <james.smart@broadcom.com>
      Reviewed-by: Hannes Reinecke <hare@suse.com>
      Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
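      A minimal user-space sketch of the sizing arithmetic described
      above. Only the meaning of the +2 (cmd iu and rsp iu entries) and
      +1 (partial-page pad) comes from the commit; the constant names
      and values are invented for illustration:

        #include <stdio.h>

        #define LPFC_MIN_SG_SEG_CNT      32   /* hypothetical real minimum */
        #define LPFC_DEFAULT_SG_SEG_CNT  64   /* hypothetical default */

        /* the extra entries formerly hardcoded as "+2" and "+1" */
        #define LPFC_IU_SGL_ENTRIES      2    /* one each for cmd iu and rsp iu */
        #define LPFC_PARTIAL_PAGE_PAD    1    /* offsets nvme's partial-page cut */

        static int lpfc_total_sgl_entries(int sg_seg_cnt)
        {
            /* enforce a real minimum instead of clamping to the default */
            if (sg_seg_cnt < LPFC_MIN_SG_SEG_CNT)
                sg_seg_cnt = LPFC_MIN_SG_SEG_CNT;
            return sg_seg_cnt + LPFC_IU_SGL_ENTRIES + LPFC_PARTIAL_PAGE_PAD;
        }

        int main(void)
        {
            printf("sgl entries: %d\n",
                   lpfc_total_sgl_entries(LPFC_DEFAULT_SG_SEG_CNT));
            return 0;
        }

      Naming the two pads keeps the "+2 and +1" intent readable at every
      use site instead of leaving bare magic numbers.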
    • scsi: lpfc: Adjust default value of lpfc_nvmet_mrq · bcb24f65
      Committed by James Smart
      The current default for async hw receive queues is 1, which presents
      issues under heavy load, as the number of queues influences the
      available async receive buffer limits.
      
      Raise the default to either the current hw limit (16) or the number
      of hw queues configured (the io channel value).
      
      Revise the attribute definition for mrq to better reflect what we do
      for hw queues: 0 means default to the optimal value (the number of
      cpus), while a non-zero value specifies an explicit limit. Before
      this change, mrq=0 meant target mode was disabled. As 0 now has a
      different meaning, rework the if tests to use the better
      nvmet_support check (see the sketch after this entry).
      Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
      Signed-off-by: James Smart <james.smart@broadcom.com>
      Reviewed-by: Hannes Reinecke <hare@suse.com>
      Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
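      A hedged sketch of the revised default selection. The function and
      macro names are invented, and capping the io channel count at the
      hw limit of 16 is an assumption consistent with the text:

        #include <stdio.h>

        #define LPFC_NVMET_MRQ_MAX 16   /* hw limit cited above; name assumed */

        /* mrq == 0 now means "pick the optimal default", not "target off" */
        static int lpfc_pick_mrq(int requested_mrq, int io_channels,
                                 int nvmet_support)
        {
            if (!nvmet_support)
                return 0;               /* target mode off: checked separately */
            if (requested_mrq)
                return requested_mrq;   /* explicit admin-specified limit */
            return io_channels < LPFC_NVMET_MRQ_MAX ?
                   io_channels : LPFC_NVMET_MRQ_MAX;
        }

        int main(void)
        {
            printf("default mrq, 8 io channels: %d\n", lpfc_pick_mrq(0, 8, 1));
            return 0;
        }

      The key design point is that target-mode enablement is decided by
      the nvmet_support flag, freeing 0 to mean "auto".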
  2. 11 November 2017 (1 commit)
  3. 03 October 2017 (2 commits)
  4. 25 August 2017 (2 commits)
  5. 10 August 2017 (1 commit)
    • lpfc: support nvmet_fc defer_rcv callback · 50738420
      Committed by James Smart
      Currently, calls to nvmet_fc_rcv_fcp_req() always copy the FC-NVME
      cmd iu to a temporary buffer before returning, allowing the driver
      to immediately repost the buffer to the hardware.
      
      To address timing conditions on queue element structures vs async
      command reception, the nvmet_fc transport may occasionally need to
      hold on to the command iu buffer for a short period. In such cases,
      nvmet_fc_rcv_fcp_req() returns a special code (-EOVERFLOW), and the
      LLDD must wait until the new defer_rcv lldd callback is invoked
      before recycling the buffer back to the hw.
      
      This patch adds support for the new nvmet_fc transport defer_rcv
      callback and recognition of the new error code when passing commands
      to the transport (modeled in the sketch after this entry).
      Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
      Signed-off-by: James Smart <james.smart@broadcom.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
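      A user-space model of the deferred-receive handshake.
      nvmet_fc_rcv_fcp_req() and the -EOVERFLOW return come from the
      commit text; everything else is a stand-in:

        #include <errno.h>
        #include <stdbool.h>
        #include <stdio.h>

        struct rcv_buf { bool held_by_transport; };

        static void repost_to_hw(struct rcv_buf *buf)
        {
            buf->held_by_transport = false;
            printf("buffer recycled to hw\n");
        }

        /* stand-in for nvmet_fc_rcv_fcp_req(): may keep the cmd iu buffer */
        static int transport_rcv(struct rcv_buf *buf, bool transport_busy)
        {
            if (transport_busy) {
                buf->held_by_transport = true;
                return -EOVERFLOW;    /* buffer held; defer_rcv will follow */
            }
            return 0;                 /* cmd iu copied; buffer free right away */
        }

        /* defer_rcv callback: the transport is done with the buffer */
        static void lldd_defer_rcv(struct rcv_buf *buf)
        {
            repost_to_hw(buf);
        }

        static void lldd_handle_cmd(struct rcv_buf *buf, bool busy)
        {
            if (transport_rcv(buf, busy) == -EOVERFLOW)
                return;               /* must NOT repost yet */
            repost_to_hw(buf);
        }

        int main(void)
        {
            struct rcv_buf buf = { false };
            lldd_handle_cmd(&buf, true);   /* transport holds the buffer... */
            lldd_defer_rcv(&buf);          /* ...and releases it later */
            return 0;
        }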
  6. 20 June 2017 (2 commits)
  7. 13 June 2017 (6 commits)
  8. 17 May 2017 (4 commits)
  9. 24 April 2017 (2 commits)
    • Add Fabric assigned WWN support. · aeb3c817
      Committed by James Smart
      Add support for fabric-assigned WWPN and WWNN.
      
      Firmware sends the first FLOGI to the fabric with vendor version
      changes. On link up, the driver receives updated service parameters
      with the FAWWN-assigned port name. The driver then sends a second
      FLOGI with the updated fawwpn and modifies vport->fc_portname in the
      driver (a small model follows this entry).
      
      Note:
      Soft wwpn will not be allowed when fawwpn is enabled.
      Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
      Signed-off-by: James Smart <james.smart@broadcom.com>
      Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
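      A small model of the flow above. Only vport->fc_portname is named
      in the commit; the other types, functions, and wwpn values are
      illustrative:

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        /* only the fc_portname field is named in the commit text */
        struct vport { uint64_t fc_portname; bool fawwpn_enabled; };

        /* on link up: adopt the fabric-assigned wwpn from the updated
         * service parameters before the driver sends its second FLOGI */
        static void adopt_fawwpn(struct vport *vp, uint64_t fabric_wwpn)
        {
            if (!fabric_wwpn)
                return;
            vp->fawwpn_enabled = true;
            vp->fc_portname = fabric_wwpn;
        }

        /* soft wwpn writes are rejected while fawwpn is in effect */
        static int set_soft_wwpn(struct vport *vp, uint64_t wwpn)
        {
            if (vp->fawwpn_enabled)
                return -1;
            vp->fc_portname = wwpn;
            return 0;
        }

        int main(void)
        {
            struct vport vp = { 0x20000000c9000001ULL, false };
            adopt_fawwpn(&vp, 0x20000000c9abcdefULL);
            printf("soft wwpn allowed: %s\n",
                   set_soft_wwpn(&vp, 0x2ULL) ? "no" : "yes");
            return 0;
        }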
    • Fix nvme initiator handling when not enabled. · 4410a67a
      Committed by James Smart
      Fix nvme initiator handling when CONFIG_LPFC_NVME_INITIATOR is not enabled.
      
      With the updated nvme upstream driver sources, loading the driver
      with nvme enabled results in this Oops:
      
       BUG: unable to handle kernel NULL pointer dereference at 0000000000000018
       IP: lpfc_nvme_update_localport+0x23/0xd0 [lpfc]
       PGD 0
       Oops: 0000 [#1] SMP
       CPU: 0 PID: 10256 Comm: lpfc_worker_0 Tainted
       Hardware name: ...
       task: ffff881028191c40 task.stack: ffff880ffdf00000
       RIP: 0010:lpfc_nvme_update_localport+0x23/0xd0 [lpfc]
       RSP: 0018:ffff880ffdf03c20 EFLAGS: 00010202
      
      Cause: As the initiator driver completes discovery at different stages,
      it calls lpfc_nvme_update_localport to hint that the DID and role may
      have changed.  In the implementation of lpfc_nvme_update_localport, the
      driver was not validating the localport or the lport during the
      execution of the update_localport routine.  With the recent upstream
      additions to the driver, the create_localport routine didn't run, so
      the localport was NULL, causing the page-fault Oops.
      
      Fix: Add the CONFIG_LPFC_NVME_INITIATOR preprocessor inclusions to
      lpfc_nvme_update_localport to turn off all routine processing when
      the running kernel does not have NVME configured.  Add NULL pointer
      checks on the localport and lport in lpfc_nvme_update_localport;
      if either is NULL, log a message and just exit (see the sketch
      after this entry).  Also fix one alignment issue, and replace the
      ifdef with the IS_ENABLED macro.
      Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
      Signed-off-by: James Smart <james.smart@broadcom.com>
      Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
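      A user-space sketch of the guarded update path. The kernel code
      uses IS_ENABLED(CONFIG_LPFC_NVME_INITIATOR); here that check is
      modeled with a plain macro, and the structure layout is invented:

        #include <stdio.h>

        /* models IS_ENABLED(CONFIG_LPFC_NVME_INITIATOR) from the kernel */
        #ifdef CONFIG_LPFC_NVME_INITIATOR
        #define NVME_CONFIGURED 1
        #else
        #define NVME_CONFIGURED 0
        #endif

        struct lport { int did, role; };             /* layout invented */
        struct localport { struct lport *private; };
        struct vport { struct localport *localport; };

        static void update_localport(struct vport *vp)
        {
            if (!NVME_CONFIGURED)
                return;               /* kernel built without NVME support */
            if (!vp->localport) {
                fprintf(stderr, "localport NULL, skipping update\n");
                return;               /* create_localport never ran */
            }
            if (!vp->localport->private) {
                fprintf(stderr, "lport NULL, skipping update\n");
                return;
            }
            /* safe to refresh the DID and role here */
        }

        int main(void)
        {
            struct vport vp = { NULL };   /* the case that oopsed before */
            update_localport(&vp);        /* now logs and exits cleanly */
            return 0;
        }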
  10. 19 April 2017 (1 commit)
  11. 16 March 2017 (1 commit)
    • scsi: lpfc: Finalize Kconfig options for nvme · 7d708033
      Committed by James Smart
      Reviewing the result of what was just added for Kconfig, we made a poor
      choice. It worked well for full kernel builds, but not so much for how
      it would be deployed on a distro.
      
      Here's the final result:
      - lpfc will compile in NVME initiator and/or NVME target support based
        on whether the kernel has the corresponding subsystem support.
        Kconfig is not used to drive this specifically for lpfc.
      - There is a module parameter, lpfc_enable_fc4_type, that indicates
        whether the ports will do FCP only or FCP & NVME (NVME only is not
        yet possible due to a dependency on the fc transport). As FCP &
        NVME divvy up exchange resources, and given NVME will not often be
        used initially, the default is changed to FCP only (a sketch
        follows this entry).
      Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
      Signed-off-by: James Smart <james.smart@broadcom.com>
      Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
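      A sketch of the parameter's effect. The name lpfc_enable_fc4_type
      comes from the commit; the flag values and the even exchange split
      are illustrative only:

        #include <stdio.h>

        enum lpfc_fc4_type {
            LPFC_FC4_FCP      = 1,   /* the new default: FCP only */
            LPFC_FC4_FCP_NVME = 3,   /* FCP + NVME share exchange resources */
            /* NVME only is not selectable: fc transport still required */
        };

        static enum lpfc_fc4_type lpfc_enable_fc4_type = LPFC_FC4_FCP;

        /* FCP & NVME divvy up the exchange pool; 50/50 is illustrative */
        static void split_exchanges(int total, int *fcp_xri, int *nvme_xri)
        {
            if (lpfc_enable_fc4_type == LPFC_FC4_FCP_NVME) {
                *fcp_xri  = total / 2;
                *nvme_xri = total - *fcp_xri;
            } else {
                *fcp_xri  = total;
                *nvme_xri = 0;
            }
        }

        int main(void)
        {
            int fcp, nvme;
            split_exchanges(1024, &fcp, &nvme);
            printf("fcp=%d nvme=%d\n", fcp, nvme);
            return 0;
        }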
  12. 07 March 2017 (2 commits)
    • scsi: lpfc: Rename LPFC_MAX_EQ_DELAY to LPFC_MAX_EQ_DELAY_EQID_CNT · 43140ca6
      Committed by James Smart
      Without a priori understanding of what the define is, the name gives
      a very different impression of what it is (a max delay value
      for an EQ).  Rename the define so it reflects what it is: the number
      of EQ IDs that can be set in one instance of the MODIFY_EQ_DELAY
      mbx command (see the snippet after this entry).
      Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
      Signed-off-by: James Smart <james.smart@broadcom.com>
      Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
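      The rename in miniature; both macro names come from the commit,
      while the value 8 is illustrative:

        /* Before: reads like a maximum delay value for an EQ */
        #define LPFC_MAX_EQ_DELAY          8

        /* After: states what it counts, i.e. how many EQ IDs one
         * MODIFY_EQ_DELAY mailbox command can carry */
        #define LPFC_MAX_EQ_DELAY_EQID_CNT 8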
    • scsi: lpfc: Fix eh_deadline setting for sli3 adapters. · 96418b5e
      Committed by James Smart
      A previous change unilaterally removed the hba reset entry point
      from the sli3 host template. This was done to keep tape devices
      that were being used for backups from being disrupted. Why was this
      done? When there was a non-responding device on the fabric, the
      error escalation policy would escalate to the reset handler. When
      the reset handler was called, it would reset the adapter, dropping
      the link, thus logging out and terminating all i/o's on any target.
      If there was a tape device on the same adapter that wasn't in
      error, it would kill the tape i/o's, effectively killing the
      tape device state.  With the reset entry point removed, the adapter
      reset was avoided, and with it the fabric logout, allowing the other
      devices to continue to operate unaffected. A hack, yes. Hint: we
      really need a transport I_T nexus reset callback added to the eh
      process (in between the SCSI target reset and hba reset points), so
      an fc logout could occur to just the one bad target and stop the
      error escalation process.
      
      This patch commonizes the approach so it can be used for sli3 and sli4
      adapters, but mandates that the admin, via a module parameter,
      specifically identify the adapters for which the resets are to be
      removed. Additionally, bus_reset, which sends Target Reset TMFs to all
      targets, is also removed from the template, as it has the same effect
      as the adapter reset (see the sketch after this entry).
      Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
      Signed-off-by: James Smart <james.smart@broadcom.com>
      Reviewed-by: Laurence Oberman <loberman@redhat.com>
      Tested-by: Laurence Oberman <loberman@redhat.com>
      Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
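      A user-space model of removing the reset entry points for
      admin-selected adapters. The parameter name, wwpn matching, and
      template struct below are illustrative; only the idea of NULLing
      both reset handlers per the admin's list comes from the commit:

        #include <stdio.h>
        #include <string.h>

        struct host_template {
            int (*eh_host_reset_handler)(void *);
            int (*eh_bus_reset_handler)(void *);
        };

        static int dummy_reset(void *host) { (void)host; return 0; }

        /* admin-supplied list of adapter wwpns; name is hypothetical */
        static const char *lpfc_no_hba_reset = "0x20000000c9abcdef";

        static int reset_disabled_for(const char *wwpn)
        {
            return strstr(lpfc_no_hba_reset, wwpn) != NULL;
        }

        static void lpfc_setup_template(struct host_template *t,
                                        const char *wwpn)
        {
            if (reset_disabled_for(wwpn)) {
                t->eh_host_reset_handler = NULL; /* no adapter-reset escalation */
                t->eh_bus_reset_handler  = NULL; /* bus reset: same blast radius */
            }
        }

        int main(void)
        {
            struct host_template t = { dummy_reset, dummy_reset };
            lpfc_setup_template(&t, "0x20000000c9abcdef");
            printf("hba reset %s\n",
                   t.eh_host_reset_handler ? "kept" : "removed");
            return 0;
        }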
  13. 28 February 2017 (1 commit)
  14. 23 February 2017 (5 commits)
  15. 21 January 2017 (1 commit)
  16. 06 January 2017 (1 commit)
  17. 05 January 2017 (4 commits)
  18. 09 November 2016 (2 commits)