1. 28 Aug 2018, 1 commit
  2. 03 Aug 2018, 2 commits
  3. 11 Jul 2018, 2 commits
  4. 27 Jun 2018, 1 commit
  5. 29 May 2018, 1 commit
  6. 08 May 2018, 2 commits
  7. 19 Apr 2018, 4 commits
  8. 13 Mar 2018, 1 commit
  9. 23 Feb 2018, 3 commits
  10. 13 Feb 2018, 4 commits
  11. 09 Jan 2018, 3 commits
  12. 04 Jan 2018, 1 commit
  13. 21 Dec 2017, 2 commits
  14. 05 Dec 2017, 2 commits
    • scsi: lpfc: small sg cnt cleanup · 81e6a637
      Committed by James Smart
      The logic for sg_seg_cnt is a bit convoluted. This patch cleans up a
      couple of areas, especially around the +2 and +1 logic.
      
      This patch:
      
      - Cleans up the lpfc_sg_seg_cnt attribute to specify a real minimum
        rather than making the minimum be whatever the default is.
      
      - Removes the hardcoded +2 (the number of elements used in an sgl for
        the cmd iu and rsp iu) and +1 (an additional entry to compensate for
        nvme's reduction of io size based on a possible partial page) logic
        from sg list initialization. Where the +1 logic is referenced in host
        and target io checks, the values set in the transport template are
        used instead, as that value was set properly (see the sketch below).
      
      There is certainly more that can be done in this area, and it will be
      addressed in a combined host/target driver effort.
      Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
      Signed-off-by: James Smart <james.smart@broadcom.com>
      Reviewed-by: Hannes Reinecke <hare@suse.com>
      Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
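      A minimal sketch of the element accounting this commit describes. The
      names (SGE_CMD_RSP_ENTRIES, SGE_PARTIAL_PAGE_PAD, total_sgl_entries)
      are placeholders, not the driver's actual macros:

          /* Illustrative only -- not the actual lpfc code. */
          #define SGE_CMD_RSP_ENTRIES   2  /* one sgl element each for cmd iu and rsp iu */
          #define SGE_PARTIAL_PAGE_PAD  1  /* spare element: nvme may trim io size for a
                                            * possible partial first page */

          /*
           * The sgl posted to hw must hold the data segments plus the fixed
           * overhead entries; only the data segment count is reported to the
           * transport template, which is what the io size checks consult.
           */
          static unsigned int total_sgl_entries(unsigned int data_sg_cnt)
          {
                  return data_sg_cnt + SGE_CMD_RSP_ENTRIES + SGE_PARTIAL_PAGE_PAD;
          }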
    • scsi: lpfc: Adjust default value of lpfc_nvmet_mrq · bcb24f65
      Committed by James Smart
      The current default for async hw receive queues is 1, which presents
      issues under heavy load, as the number of queues influences the
      available async receive buffer limits.
      
      Raise the default to either the current hw limit (16) or the number of
      hw queues configured (the io channel value).
      
      Revise the attribute definition for mrq to better reflect what we do
      for hw queues, e.g. 0 means default to the optimal value (# of cpus)
      and a non-zero value specifies a specific limit. Before this change,
      mrq=0 meant target mode was disabled. As 0 now has a different meaning,
      rework the if tests to use the better nvmet_support check (see the
      sketch below).
      Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
      Signed-off-by: James Smart <james.smart@broadcom.com>
      Reviewed-by: Hannes Reinecke <hare@suse.com>
      Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
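      A minimal sketch of the revised defaulting described above, assuming
      "optimal" resolves to the smaller of the configured io channel count
      and the hw limit. The field and function names (cfg_nvmet_mrq,
      cfg_nvme_io_channel, nvmet_support, lpfc_nvmet_mrq_resolve) follow the
      commit text but are otherwise assumptions:

          #define LPFC_NVMET_MRQ_HW_LIMIT  16   /* current hw limit (assumed name) */

          static void lpfc_nvmet_mrq_resolve(struct lpfc_hba *phba)
          {
                  /* Target mode is now gated on nvmet_support, not on mrq != 0. */
                  if (!phba->nvmet_support)
                          return;

                  /* 0 means "pick the optimal value"; a non-zero value is an
                   * explicit limit supplied by the administrator.
                   */
                  if (phba->cfg_nvmet_mrq == 0)
                          phba->cfg_nvmet_mrq =
                                  min_t(u32, phba->cfg_nvme_io_channel,
                                        LPFC_NVMET_MRQ_HW_LIMIT);
          }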
  15. 11 Nov 2017, 1 commit
  16. 03 Oct 2017, 2 commits
  17. 25 Aug 2017, 2 commits
  18. 10 Aug 2017, 1 commit
    • lpfc: support nvmet_fc defer_rcv callback · 50738420
      Committed by James Smart
      Currently, calls to nvmet_fc_rcv_fcp_req() always copied the
      FC-NVME cmd iu to a temporary buffer before returning, allowing
      the driver to immediately repost the buffer to the hardware.
      
      To address timing conditions between queue element structures and
      async command reception, the nvmet_fc transport may occasionally need
      to hold on to the command iu buffer for a short period. In these
      cases, nvmet_fc_rcv_fcp_req() returns a special return code
      (-EOVERFLOW), and the LLDD must wait until the new defer_rcv LLDD
      callback is invoked before recycling the buffer back to the hw.
      
      This patch adds support for the new nvmet_fc transport defer_rcv
      callback and recognition of the new return code when passing commands
      to the transport (see the sketch below).
      Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
      Signed-off-by: James Smart <james.smart@broadcom.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
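      A sketch of the LLDD-side pattern described above. nvmet_fc_rcv_fcp_req()
      and the defer_rcv template callback are the nvmet_fc interfaces named in
      the commit; the lldd_rcv_ctx structure and repost_rcv_buffer() helper are
      placeholders for the driver's own receive buffer management:

          /* Called when a new FC-NVME cmd iu arrives in a hw receive buffer. */
          static void lldd_handle_cmd_iu(struct nvmet_fc_target_port *tgtport,
                                         struct nvmefc_tgt_fcp_req *fcpreq,
                                         void *cmd_iu, u32 cmd_iu_len,
                                         struct lldd_rcv_ctx *ctx)
          {
                  int rc = nvmet_fc_rcv_fcp_req(tgtport, fcpreq,
                                                cmd_iu, cmd_iu_len);

                  if (rc == -EOVERFLOW) {
                          /* The transport is still using the cmd iu buffer;
                           * hold it until the defer_rcv() callback fires,
                           * then repost it to the hardware.
                           */
                          ctx->deferred = true;
                          return;
                  }

                  /* Normal path: the iu was copied (or the cmd was rejected),
                   * so the receive buffer can go straight back to the hw.
                   */
                  repost_rcv_buffer(ctx);
          }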
  19. 20 Jun 2017, 2 commits
  20. 13 Jun 2017, 3 commits