1. 26 March 2020, 2 commits
    • nvme-pci: Remove tag from process cq · bf392a5d
      Authored by Keith Busch
      The only user of tagged completions was timeout handling, and that
      user really only cares whether the timed-out command has completed,
      which we can safely check within the timeout handler itself.

      Remove the tag check to simplify completion handling; a sketch of
      the resulting timeout-handler check follows this entry.
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Keith Busch <kbusch@kernel.org>
      bf392a5d
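      A minimal sketch of the simplification described above, assuming the
      post-patch nvme_poll_irqdisable() (which no longer takes a tag) and
      blk-mq's blk_mq_request_completed(); the function name, the nvmeq
      parameter and the simplified return paths are illustrative, not the
      verbatim patch:

      	/* Reap the CQ without naming a tag, then ask blk-mq whether the
      	 * timed-out request completed in the meantime. */
      	static enum blk_eh_timer_return
      	nvme_timeout_sketch(struct request *req, struct nvme_queue *nvmeq)
      	{
      		/* Did we miss an interrupt? Process any pending completions. */
      		nvme_poll_irqdisable(nvmeq);

      		/* blk-mq tracks completion state per request, so only a
      		 * yes/no answer for this one request is needed. */
      		if (blk_mq_request_completed(req))
      			return BLK_EH_DONE;	/* already finished, nothing to abort */

      		/* ...otherwise fall through to the usual abort/reset path... */
      		return BLK_EH_RESET_TIMER;
      	}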
    • nvme-pci: slimmer CQ head update · e2a366a4
      Authored by Alexey Dobriyan
      Update the CQ head with a pre-increment operator. This saves the
      subtraction of 1 and a few registers.

      Also update the phase with "^= 1", which generates only one
      read-modify-write instruction (see the annotated disassembly below
      and the C sketch after this entry).
      
      	ffffffff815ba150 <nvme_update_cq_head>:
      	ffffffff815ba150:       0f b7 47 70             movzx  eax,WORD PTR [rdi+0x70]
      	ffffffff815ba154:       83 c0 01                add    eax,0x1
      	ffffffff815ba157:       66 89 47 70             mov    WORD PTR [rdi+0x70],ax
      	ffffffff815ba15b:       66 3b 47 68             cmp    ax,WORD PTR [rdi+0x68]
      	ffffffff815ba15f:       74 01                   je     ffffffff815ba162 <nvme_update_cq_head+0x12>
      	ffffffff815ba161:       c3                      ret
      	ffffffff815ba162:       31 c0                   xor    eax,eax
      	ffffffff815ba164:       80 77 74 01      ===>   xor    BYTE PTR [rdi+0x74],0x1
      	ffffffff815ba168:       66 89 47 70             mov    WORD PTR [rdi+0x70],ax
      	ffffffff815ba16c:       c3                      ret
      
      	add/remove: 0/0 grow/shrink: 0/3 up/down: 0/-119 (-119)
      	Function                                     old     new   delta
      	nvme_poll                                    690     678     -12
      	nvme_dev_disable                            1230    1177     -53
      	nvme_irq                                     613     559     -54
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      e2a366a4
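      For reference, a C sketch of the helper consistent with the
      disassembly and size data above (pre-increment of cq_head, wrap to 0
      at q_depth, single-RMW "^= 1" on cq_phase); field names follow the
      driver's struct nvme_queue:

      	static inline void nvme_update_cq_head(struct nvme_queue *nvmeq)
      	{
      		/* Comparing ++cq_head against q_depth avoids the old
      		 * "q_depth - 1" subtraction. */
      		if (++nvmeq->cq_head == nvmeq->q_depth) {
      			nvmeq->cq_head = 0;	/* xor eax,eax in the listing */
      			nvmeq->cq_phase ^= 1;	/* the single RMW xor byte op */
      		}
      	}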
  2. 28 February 2020, 1 commit
  3. 19 February 2020, 2 commits
  4. 15 February 2020, 1 commit
  5. 04 February 2020, 1 commit
  6. 07 December 2019, 3 commits
  7. 03 December 2019, 1 commit
  8. 27 November 2019, 1 commit
  9. 22 November 2019, 1 commit
  10. 05 November 2019, 3 commits
  11. 18 October 2019, 1 commit
  12. 14 October 2019, 3 commits
  13. 05 October 2019, 1 commit
    • nvme: retain split access workaround for capability reads · 3a8ecc93
      Authored by Ard Biesheuvel
      Commit 7fd8930f ("nvme: add a common helper to read Identify
      Controller data") has re-introduced an issue that we attempted to
      work around in the past, in commit a310acd7 ("NVMe: use split
      lo_hi_{read,write}q").
      
      The problem is that some PCIe NVMe controllers do not implement 64-bit
      outbound accesses correctly, which is why the commit above switched
      to using lo_hi_[read|write]q for all 64-bit BAR accesses occurring in
      the code.
      
      In the meantime, the NVMe subsystem has been refactored, and now calls
      into the PCIe support layer for NVMe via a .reg_read64() method, which
      fails to use lo_hi_readq(), and thus reintroduces the problem that the
      workaround above aimed to address.
      
      Given that, at the moment, .reg_read64() is only used to read the
      capability register [which is known to tolerate split reads], let's
      switch .reg_read64() to lo_hi_readq() as well (sketched after this entry).
      
      This fixes a boot issue on some ARM boxes with NVMe behind a Synopsys
      DesignWare PCIe host controller.
      
      Fixes: 7fd8930f ("nvme: add a common helper to read Identify Controller data")
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      3a8ecc93
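      A sketch of the fix described above: route the transport's
      .reg_read64() through lo_hi_readq() (from
      <linux/io-64-nonatomic-lo-hi.h>), which splits the access into two
      32-bit reads, low dword first. The body is illustrative of that
      shape rather than a verbatim copy of the patch:

      	static int nvme_pci_reg_read64(struct nvme_ctrl *ctrl, u32 off, u64 *val)
      	{
      		/* Two 32-bit BAR reads (low, then high) instead of one
      		 * readq(), so controllers that mishandle 64-bit outbound
      		 * accesses still return a sane capability value. */
      		*val = lo_hi_readq(to_nvme_dev(ctrl)->bar + off);
      		return 0;
      	}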
  14. 26 September 2019, 2 commits
  15. 12 September 2019, 1 commit
  16. 30 August 2019, 8 commits
  17. 21 August 2019, 1 commit
  18. 16 August 2019, 2 commits
  19. 12 August 2019, 1 commit
    • nvme-pci: Allow PCI bus-level PM to be used if ASPM is disabled · 4eaefe8c
      Authored by Rafael J. Wysocki
      One of the modifications made by commit d916b1be ("nvme-pci: use
      host managed power state for suspend") was adding a pci_save_state()
      call to nvme_suspend() so as to instruct the PCI bus type to leave
      devices handled by the nvme driver in D0 during suspend-to-idle.
      That was done with the assumption that ASPM would transition the
      device's PCIe link into a low-power state when the device became
      inactive.  However, if ASPM is disabled for the device, its PCIe
      link will stay in L0 and in that case commit d916b1be is likely
      to cause the energy used by the system while suspended to increase.
      
      Namely, if the device in question works in accordance with the PCIe
      specification, putting it into D3hot causes its PCIe link to go to
      L1 or L2/L3 Ready, which is lower-power than L0.  Since the energy
      used by the system while suspended depends on the state of its PCIe
      link (as a general rule, the lower-power the state of the link, the
      less energy the system will use), putting the device into D3hot
      during suspend-to-idle should be more energy-efficient than leaving
      it in D0 with ASPM disabled.
      
      For this reason, avoid leaving NVMe devices with ASPM disabled in D0
      during suspend-to-idle.  Instead, shut them down entirely and let
      the PCI bus type put them into D3 (a sketch of the resulting check
      follows this entry).
      
      Fixes: d916b1be ("nvme-pci: use host managed power state for suspend")
      Link: https://lore.kernel.org/linux-pm/2763495.NmdaWeg79L@kreacher/T/#t
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Reviewed-by: Keith Busch <keith.busch@intel.com>
      4eaefe8c
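      A hedged sketch of the resulting decision in nvme_suspend(): when
      ASPM is not enabled (or host managed power states cannot be used),
      take the full-shutdown path so the PCI core puts the device into D3;
      otherwise keep it in D0 as before. Helper names such as
      pcie_aspm_enabled() and nvme_disable_prepare_reset() come from the
      surrounding driver and companion patches; the real function is
      simplified here:

      	static int nvme_suspend_sketch(struct device *dev)
      	{
      		struct pci_dev *pdev = to_pci_dev(dev);
      		struct nvme_dev *ndev = pci_get_drvdata(pdev);
      		struct nvme_ctrl *ctrl = &ndev->ctrl;

      		if (pm_suspend_via_firmware() || !ctrl->npss ||
      		    !pcie_aspm_enabled(pdev) ||
      		    (ctrl->quirks & NVME_QUIRK_SIMPLE_SUSPEND))
      			/* No ASPM (or no usable power states): shut down fully
      			 * and let the PCI bus type move the device to D3. */
      			return nvme_disable_prepare_reset(ndev, true);

      		/* ASPM is active: stay in D0 and rely on host managed power
      		 * states plus ASPM link states while suspended. */
      		pci_save_state(pdev);
      		return 0;	/* simplified; the real code also sets an NVMe power state */
      	}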
  20. 05 August 2019, 1 commit
  21. 01 August 2019, 1 commit
  22. 23 July 2019, 2 commits
    • Revert "nvme-pci: don't create a read hctx mapping without read queues" · 8fe34be1
      Authored by yangerkun
      This reverts commit 0298d543.
      
      With the reverted commit applied, setting 'poll_queues > hard queues'
      leads to 'nr_read_queues = 0' in nvme_calc_irq_sets. The poll_queues
      setting can then fail, since dev->tagset.nr_maps equals 2 and
      nvme_pci_map_queues will not create a map for the poll queues.
      Signed-off-by: yangerkun <yangerkun@huawei.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      8fe34be1
    • nvme: ignore subnqn for ADATA SX6000LNP · 08b903b5
      Authored by Misha Nasledov
      The ADATA SX6000LNP NVMe SSDs report the same subnqn; because of
      this, a system with more than one of these SSDs has only one usable
      device.
      
      [ 0.942706] nvme nvme1: ignoring ctrl due to duplicate subnqn (nqn.2018-05.com.example:nvme:nvm-subsystem-OUI00E04C).
      [ 0.943017] nvme nvme1: Removing after probe failure status: -22
      
      02:00.0 Non-Volatile memory controller [0108]: Realtek Semiconductor Co., Ltd. Device [10ec:5762] (rev 01)
      71:00.0 Non-Volatile memory controller [0108]: Realtek Semiconductor Co., Ltd. Device [10ec:5762] (rev 01)
      
      There are no firmware updates available from the vendor, unfortunately.
      Applying the NVME_QUIRK_IGNORE_DEV_SUBNQN quirk for these SSDs
      resolves the issue; with the quirk applied (see the sketch after
      this entry), they all work:
      
      /dev/nvme0n1     2J1120050420         ADATA SX6000LNP [...]
      /dev/nvme1n1     2J1120050540         ADATA SX6000LNP [...]
      Signed-off-by: Misha Nasledov <misha@nasledov.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      08b903b5
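      For reference, roughly the kind of nvme_id_table entry in
      drivers/nvme/host/pci.c that this change implies for the Realtek
      10ec:5762 controller shown in the lspci output (a sketch, not the
      verbatim hunk):

      	{ PCI_DEVICE(0x10ec, 0x5762),	/* ADATA SX6000LNP (Realtek-based) */
      		.driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN, },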