1. 22 Dec 2020 (1 commit)
  2. 01 Dec 2020 (1 commit)
  3. 07 Oct 2020 (1 commit)
  4. 06 Oct 2020 (1 commit)
  5. 03 Sep 2020 (2 commits)
  6. 21 Jan 2020 (2 commits)
  7. 25 Oct 2019 (14 commits)
  8. 11 Sep 2019 (5 commits)
  9. 08 Aug 2019 (4 commits)
  10. 21 Jun 2019 (1 commit)
  11. 19 Jun 2019 (1 commit)
  12. 31 May 2019 (1 commit)
  13. 13 Apr 2019 (1 commit)
    • scsi: hisi_sas: Fix for setting the PHY linkrate when disconnected · c63b88cc
      Committed by John Garry
      In commit efdcad62 ("scsi: hisi_sas: Set PHY linkrate when
      disconnected"), we use the sas_phy_data.enable flag to track whether the
      PHY was enabled, so that we know whether to set the PHY's negotiated
      linkrate to SAS_LINK_RATE_UNKNOWN or SAS_PHY_DISABLED.

      However, sas_phy_data.enable is not a proper choice, since it is only
      set when libsas attempts to enable or disable the PHY; hence, it may not
      even have an initial value.

      To solve this, introduce hisi_sas_phy.enable to track whether the PHY is
      enabled, so that the negotiated linkrate can be set properly when the
      PHY goes down (see the sketch after this entry).
      Signed-off-by: John Garry <john.garry@huawei.com>
      Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
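      A minimal C sketch of the idea described above, not the driver's actual
      code: the struct and helper names here are assumptions; what is taken
      from the commit is that the driver keeps its own per-PHY enable flag and
      uses it, rather than sas_phy_data.enable, to pick the negotiated
      linkrate when the PHY goes down.

      ```c
      #include <scsi/scsi_transport_sas.h>

      /* Assumed stand-in for the per-PHY state; the real patch adds an
       * 'enable' field to the driver's struct hisi_sas_phy. */
      struct sketch_hisi_sas_phy {
              int enable;  /* set when the PHY is enabled, cleared on disable */
      };

      /* On phy-down, derive the negotiated linkrate from the driver's own
       * flag; sas_phy_data.enable may never have been initialised, as it is
       * written only when libsas explicitly enables or disables the PHY. */
      static void sketch_phy_down(struct sketch_hisi_sas_phy *phy,
                                  struct sas_phy *sphy)
      {
              if (phy->enable)
                      sphy->negotiated_linkrate = SAS_LINK_RATE_UNKNOWN;
              else
                      sphy->negotiated_linkrate = SAS_PHY_DISABLED;
      }
      ```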
  14. 07 Mar 2019 (2 commits)
  15. 09 Feb 2019 (3 commits)
    • scsi: hisi_sas: Use pci_irq_get_affinity() for v3 hw as experimental · 4fefe5bb
      Committed by Xiang Chen
      For auto-controlled irq affinity mode, choose the delivery queue (dq)
      for an IO according to the current CPU (see the sketch after this
      entry).

      This reduces the performance regression seen when fio and the CQ
      interrupts are processed on different NUMA nodes.

      For user-controlled irq affinity mode, keep the behaviour as before.

      To realize this, we also need to separate the usage of the dq lock and
      the sas_dev lock.

      We mark the feature as experimental due to the ongoing discussion on
      managed MSI IRQs during hotplug:
      https://marc.info/?l=linux-scsi&m=154876335707751&w=2

      We are almost at the point where we can expose multiple queues to the
      upper layer for SCSI MQ, but we still need to sort out the per-HBA tags
      performance issue.
      Signed-off-by: Xiang Chen <chenxiang66@hisilicon.com>
      Signed-off-by: John Garry <john.garry@huawei.com>
      Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
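      A minimal C sketch of the queue selection described above, not the
      driver's actual code: the reply_map array, the helper names, and the
      irq_offset parameter are assumptions. pci_irq_get_affinity() reports
      which CPUs each completion queue's MSI vector serves, so a CPU-to-queue
      reverse map lets submission follow the current CPU.

      ```c
      #include <linux/pci.h>
      #include <linux/cpumask.h>
      #include <linux/smp.h>

      /* Build a CPU -> delivery-queue map from the managed-IRQ affinity
       * masks; 'irq_offset' skips any non-queue vectors at the start of the
       * MSI range. */
      static void sketch_init_reply_map(struct pci_dev *pdev,
                                        unsigned int *reply_map,
                                        unsigned int nr_queues,
                                        unsigned int irq_offset)
      {
              unsigned int queue;
              int cpu;

              for (queue = 0; queue < nr_queues; queue++) {
                      const struct cpumask *mask;

                      mask = pci_irq_get_affinity(pdev, queue + irq_offset);
                      if (!mask)
                              return;
                      for_each_cpu(cpu, mask)
                              reply_map[cpu] = queue;
              }
      }

      /* At submission time, deliver the IO on the queue mapped to this CPU. */
      static unsigned int sketch_select_dq(const unsigned int *reply_map)
      {
              return reply_map[raw_smp_processor_id()];
      }
      ```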
    • scsi: hisi_sas: Issue internal abort on all relevant queues · 795f25a3
      Committed by John Garry
      To support queues mapped to CPUs, it must be ensured that issuing an
      internal abort is safe, i.e. that the internal abort for a single IO or
      for a device is processed only after all of the relevant command(s) it
      is attempting to abort have been processed by the controller.

      Currently we solve this by delivering all commands for any given device
      on a single queue: commands issued on the same queue are processed in
      order, so an internal abort can never race against the command(s) it is
      trying to abort.

      With queues mapped to CPUs, the queue chosen for a command is the one
      associated with the current CPU, so this is no longer safe for internal
      aborts, since there is no guarantee that the commands for the affected
      device were issued on the same queue.

      To solve this, we take a bludgeoning approach and issue a separate
      internal abort on every queue relevant to the command or device, which
      guarantees that at least one of these internal aborts is received last
      by the controller (see the sketch after this entry).

      So, for aborting a single command, we simply force the internal abort to
      be issued on the same queue as the command we are trying to abort.

      For aborting all commands associated with a device, we issue a separate
      internal abort on each relevant queue. Issuing multiple internal aborts
      in this fashion has no side effects.
      Signed-off-by: John Garry <john.garry@huawei.com>
      Signed-off-by: Xiang Chen <chenxiang66@hisilicon.com>
      Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
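      A minimal C sketch of the fan-out described above, not the driver's
      actual code: the per-queue abort helper is an assumed stand-in for the
      existing internal-abort path. A device-wide abort is issued once on
      every queue, so at least one of the aborts is necessarily processed
      after all outstanding commands for that device; a single-command abort
      simply reuses the queue the command was issued on.

      ```c
      struct hisi_sas_device;  /* opaque here; stands for the driver's device struct */

      /* Assumed helper: issue one internal abort for 'sas_dev' on delivery
       * queue 'dq_id', standing in for the existing internal-abort path. */
      int sketch_internal_abort_on_queue(struct hisi_sas_device *sas_dev,
                                         unsigned int dq_id);

      /* Device-wide abort: fan out one internal abort per delivery queue, so
       * at least one abort completes after every outstanding command. */
      static int sketch_internal_abort_dev(struct hisi_sas_device *sas_dev,
                                           unsigned int nr_queues)
      {
              unsigned int dq_id;
              int rc;

              for (dq_id = 0; dq_id < nr_queues; dq_id++) {
                      rc = sketch_internal_abort_on_queue(sas_dev, dq_id);
                      if (rc)
                              return rc;
              }
              return 0;
      }
      ```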
    • scsi: hisi_sas: change queue depth from 512 to 4096 · 1273d65f
      Committed by Xiang Chen
      When IOs are sent to many disks from a single queue, the queue may fill
      up. To avoid this, change the queue depth from 512 to 4096, which is the
      maximum number of outstanding IOs for v3 hw (see the sketch after this
      entry).
      Signed-off-by: Xiang Chen <chenxiang66@hisilicon.com>
      Signed-off-by: John Garry <john.garry@huawei.com>
      Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
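      A minimal C sketch of the effect of the change described above, not the
      actual patch: the macro and function names are assumptions. The SCSI
      host's can_queue bounds how many commands the midlayer may keep
      outstanding across all disks behind the HBA, so it is raised to the
      v3 hardware's limit.

      ```c
      #include <scsi/scsi_host.h>

      /* Assumed macro name; 4096 is the v3 hw limit named in the commit
       * message (the old value was 512). */
      #define SKETCH_V3_HW_MAX_COMMANDS 4096

      /* can_queue limits how many commands may be outstanding on the host at
       * once, shared across all attached disks. */
      static void sketch_set_host_depth(struct Scsi_Host *shost)
      {
              shost->can_queue = SKETCH_V3_HW_MAX_COMMANDS;
      }
      ```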