1. 07 December 2019, 2 commits
  2. 03 December 2019, 1 commit
  3. 27 November 2019, 1 commit
  4. 22 November 2019, 1 commit
  5. 05 November 2019, 3 commits
  6. 18 October 2019, 1 commit
  7. 14 October 2019, 3 commits
  8. 05 October 2019, 1 commit
    • nvme: retain split access workaround for capability reads · 3a8ecc93
      Committed by Ard Biesheuvel
      Commit 7fd8930f
      
        "nvme: add a common helper to read Identify Controller data"
      
      has re-introduced an issue that we have attempted to work around in the
      past, in commit a310acd7 ("NVMe: use split lo_hi_{read,write}q").
      
      The problem is that some PCIe NVMe controllers do not implement 64-bit
      outbound accesses correctly, which is why the commit above switched
      to using lo_hi_[read|write]q for all 64-bit BAR accesses occurring in
      the code.
      
      In the meantime, the NVMe subsystem has been refactored, and now calls
      into the PCIe support layer for NVMe via a .reg_read64() method, which
      fails to use lo_hi_readq(), and thus reintroduces the problem that the
      workaround above aimed to address.
      
      Given that, at the moment, .reg_read64() is only used to read the
      capability register [which is known to tolerate split reads], let's
      switch .reg_read64() to lo_hi_readq() as well.
      
      This fixes a boot issue on some ARM boxes with NVMe behind a Synopsys
      DesignWare PCIe host controller.
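      
      For reference, below is a minimal, self-contained sketch of a "lo/hi"
      split 64-bit read of the kind lo_hi_readq() performs (plain pointers
      stand in for the kernel's readl()-based MMIO accessors, and the CAP
      value is made up for the demo):
      
          #include <stdint.h>
          #include <stdio.h>
      
          /* Read a 64-bit register as two 32-bit accesses: low word at
           * offset 0x0 first, then high word at offset 0x4, merged into
           * a single 64-bit value. */
          static uint64_t lo_hi_read64(const volatile uint32_t *reg)
          {
                  uint32_t lo = reg[0];
                  uint32_t hi = reg[1];
      
                  return ((uint64_t)hi << 32) | lo;
          }
      
          int main(void)
          {
                  /* Stand-in for the controller's 64-bit CAP register. */
                  uint32_t cap[2] = { 0x3c03ffff, 0x00000020 };
      
                  printf("CAP = 0x%016llx\n",
                         (unsigned long long)lo_hi_read64(cap));
                  return 0;
          }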
      
      Fixes: 7fd8930f ("nvme: add a common helper to read Identify Controller data")
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
  9. 26 September 2019, 2 commits
  10. 12 September 2019, 1 commit
  11. 30 August 2019, 8 commits
  12. 21 August 2019, 1 commit
  13. 16 August 2019, 2 commits
  14. 12 August 2019, 1 commit
    • nvme-pci: Allow PCI bus-level PM to be used if ASPM is disabled · 4eaefe8c
      Committed by Rafael J. Wysocki
      One of the modifications made by commit d916b1be ("nvme-pci: use
      host managed power state for suspend") was adding a pci_save_state()
      call to nvme_suspend() so as to instruct the PCI bus type to leave
      devices handled by the nvme driver in D0 during suspend-to-idle.
      That was done with the assumption that ASPM would transition the
      device's PCIe link into a low-power state when the device became
      inactive.  However, if ASPM is disabled for the device, its PCIe
      link will stay in L0 and in that case commit d916b1be is likely
      to cause the energy used by the system while suspended to increase.
      
      Namely, if the device in question works in accordance with the PCIe
      specification, putting it into D3hot causes its PCIe link to go to
      L1 or L2/L3 Ready, which is lower-power than L0.  Since the energy
      used by the system while suspended depends on the state of its PCIe
      link (as a general rule, the lower-power the state of the link, the
      less energy the system will use), putting the device into D3hot
      during suspend-to-idle should be more energy-efficient than leaving
      it in D0 with ASPM disabled.
      
      For this reason, avoid leaving NVMe devices with disabled ASPM in D0
      during suspend-to-idle.  Instead, shut them down entirely and let
      the PCI bus type put them into D3.
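      
      A sketch of the resulting check in the driver's suspend path (an
      illustration only, assuming a pcie_aspm_enabled()-style helper; the
      name nvme_host_managed_suspend() is hypothetical and stands in for
      the existing host managed power state handling):
      
          /* If ASPM cannot idle the link while the device stays in D0,
           * prefer a full shutdown so the PCI bus type puts the device
           * into D3 during suspend-to-idle. */
          static int nvme_suspend_sketch(struct device *dev)
          {
                  struct pci_dev *pdev = to_pci_dev(dev);
                  struct nvme_dev *ndev = pci_get_drvdata(pdev);
      
                  if (!pcie_aspm_enabled(pdev))
                          return nvme_disable_prepare_reset(ndev, true);
      
                  /* ASPM is enabled: keep the device in D0 and use the
                   * host managed power state for suspend. */
                  return nvme_host_managed_suspend(ndev);  /* hypothetical */
          }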
      
      Fixes: d916b1be ("nvme-pci: use host managed power state for suspend")
      Link: https://lore.kernel.org/linux-pm/2763495.NmdaWeg79L@kreacher/T/#t
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Reviewed-by: Keith Busch <keith.busch@intel.com>
  15. 05 August 2019, 1 commit
  16. 01 August 2019, 1 commit
  17. 23 July 2019, 2 commits
    • Revert "nvme-pci: don't create a read hctx mapping without read queues" · 8fe34be1
      Committed by yangerkun
      This reverts commit 0298d543.
      
      With that patch applied, setting 'poll_queues > hard queues' leads to
      'nr_read_queues = 0' in nvme_calc_irq_sets. The poll_queues setting then
      fails, since dev->tagset.nr_maps equals 2 and nvme_pci_map_queues does
      not set up a map for the poll queues.
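      
      A self-contained illustration of why nr_maps == 2 breaks polling (the
      enum mirrors blk-mq's hctx_type ordering, and the loop models a
      map_queues callback that only fills maps with index below nr_maps):
      
          #include <stdio.h>
      
          /* blk-mq hardware-context map types, in their defined order. */
          enum hctx_type { HCTX_TYPE_DEFAULT, HCTX_TYPE_READ, HCTX_TYPE_POLL };
      
          int main(void)
          {
                  /* poll_queues > hard queues -> nr_read_queues == 0, and
                   * the reverted patch then sets nr_maps to 2 instead of 3. */
                  int nr_maps = 2;
      
                  for (int i = 0; i < nr_maps; i++)
                          printf("map %d is set up\n", i);
      
                  if (HCTX_TYPE_POLL >= nr_maps)
                          printf("map %d (poll) is never reached, so the "
                                 "requested poll queues are never mapped\n",
                                 HCTX_TYPE_POLL);
                  return 0;
          }
      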
      Signed-off-by: yangerkun <yangerkun@huawei.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • nvme: ignore subnqn for ADATA SX6000LNP · 08b903b5
      Committed by Misha Nasledov
      The ADATA SX6000LNP NVMe SSDs all report the same subnqn; because of this,
      a system with more than one of these SSDs will have only one of them usable.
      
      [ 0.942706] nvme nvme1: ignoring ctrl due to duplicate subnqn (nqn.2018-05.com.example:nvme:nvm-subsystem-OUI00E04C).
      [ 0.943017] nvme nvme1: Removing after probe failure status: -22
      
      02:00.0 Non-Volatile memory controller [0108]: Realtek Semiconductor Co., Ltd. Device [10ec:5762] (rev 01)
      71:00.0 Non-Volatile memory controller [0108]: Realtek Semiconductor Co., Ltd. Device [10ec:5762] (rev 01)
      
      There are no firmware updates available from the vendor, unfortunately.
      Applying the NVME_QUIRK_IGNORE_DEV_SUBNQN quirk for these SSDs resolves
      the issue, and they all work after this patch:
      
      /dev/nvme0n1     2J1120050420         ADATA SX6000LNP [...]
      /dev/nvme1n1     2J1120050540         ADATA SX6000LNP [...]
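      
      The fix boils down to a quirk keyed on the PCI IDs shown above; a
      sketch of what such an entry in the driver's PCI ID table looks like
      (placement and surrounding entries assumed, not copied verbatim):
      
          static const struct pci_device_id nvme_id_table_sketch[] = {
                  /* ... existing entries ... */
                  { PCI_DEVICE(0x10ec, 0x5762),   /* ADATA SX6000LNP */
                          .driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN, },
                  { 0, }
          };
      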
      Signed-off-by: Misha Nasledov <misha@nasledov.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
  18. 10 July 2019, 5 commits
  19. 21 June 2019, 3 commits