1. 29 Mar, 2022 1 commit
    • nvme: allow duplicate NSIDs for private namespaces · 5974ea7c
      Authored by Sungup Moon
       An NVMe subsystem with multiple controllers can have private namespaces
      that use the same NSID under some conditions:
      
       "If Namespace Management, ANA Reporting, or NVM Sets are supported, the
        NSIDs shall be unique within the NVM subsystem. If the Namespace
        Management, ANA Reporting, and NVM Sets are not supported, then NSIDs:
         a) for shared namespace shall be unique; and
         b) for private namespace are not required to be unique."
      
      Reference: Section 6.1.6 NSID and Namespace Usage; NVM Express 1.4c spec.
      
      Make sure this specific setup is supported in Linux.
      
      Fixes: 9ad1927a ("nvme: always search for namespace head")
       Signed-off-by: Sungup Moon <sungup.moon@samsung.com>
      [hch: refactored and fixed the controller vs subsystem based naming
            conflict]
       Signed-off-by: Christoph Hellwig <hch@lst.de>
       Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      5974ea7c
  2. 23 Mar, 2022 1 commit
    • nvme: fix the read-only state for zoned namespaces with unsupposed features · 726be2c7
      Authored by Pankaj Raghav
      commit 2f4c9ba2 ("nvme: export zoned namespaces without Zone Append
      support read-only") marks zoned namespaces without append support
      read-only.  It does iso by setting NVME_NS_FORCE_RO in ns->flags in
      nvme_update_zone_info and checking for that flag later in
      nvme_update_disk_info to mark the disk as read-only.
      
      But commit 73d90386 ("nvme: cleanup zone information initialization")
      rearranged nvme_update_disk_info to be called before
      nvme_update_zone_info and thus not marking the disk as read-only.
      The call order cannot be just reverted because nvme_update_zone_info sets
      certain queue parameters such as zone_write_granularity that depend on the
      prior call to nvme_update_disk_info.
      
       Remove the call to set_disk_ro in nvme_update_disk_info, and call
       set_disk_ro after nvme_update_zone_info and nvme_update_disk_info to set
       the permission for ZNS drives correctly.  The same applies to the
       multipath disk path.
      
      Fixes: 73d90386 ("nvme: cleanup zone information initialization")
       Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
       Signed-off-by: Christoph Hellwig <hch@lst.de>
      726be2c7
  3. 16 Mar, 2022 3 commits
  4. 28 Feb, 2022 12 commits
  5. 23 Dec, 2021 3 commits
  6. 08 Dec, 2021 1 commit
    • nvme: fix use after free when disconnecting a reconnecting ctrl · 8b77fa6f
      Authored by Ruozhu Li
      A crash happens when trying to disconnect a reconnecting ctrl:
      
        1) The network was cut off when the connection was just established;
           the scan work hung there waiting for some I/Os to complete.  Those
           I/Os were retried because we returned BLK_STS_RESOURCE to the
           block layer while reconnecting.
        2) After a while, I tried to disconnect this connection.  This
           procedure also hung because it tried to obtain ctrl->scan_lock.
           Note that by this point the controller state had already been
           switched to NVME_CTRL_DELETING.
        3) In nvme_check_ready(), we always return true when ctrl->state is
           NVME_CTRL_DELETING, so the retried I/Os were issued to the bottom
           device, which had already been freed.
      
       To fix this, when ctrl->state is NVME_CTRL_DELETING, issue commands to
       the bottom device only when the queue state is live.  If not, return a
       host path error to the block layer.
       Signed-off-by: Ruozhu Li <liruozhu@huawei.com>
       Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
       Signed-off-by: Christoph Hellwig <hch@lst.de>
      8b77fa6f
  7. 06 Dec, 2021 2 commits
  8. 29 Nov, 2021 1 commit
  9. 24 Nov, 2021 2 commits
  10. 09 Nov, 2021 1 commit
  11. 21 Oct, 2021 3 commits
  12. 20 Oct, 2021 6 commits
  13. 19 Oct, 2021 1 commit
  14. 18 Oct, 2021 2 commits
  15. 14 Oct, 2021 1 commit