1. 20 May 2022, 1 commit
  2. 11 May 2022, 1 commit
  3. 15 Apr 2022, 1 commit
  4. 29 Mar 2022, 2 commits
    • nvme-multipath: fix hang when disk goes live over reconnect · a4a6f3c8
      Anton Eidelman authored
      nvme_mpath_init_identify(), invoked from nvme_init_identify(), fetches a
      fresh ANA log from the ctrl.  This is essential to have up-to-date path
      states for both existing namespaces and for those that scan_work may
      discover once the ctrl is up.
      
      This happens in the following cases:
        1) A new ctrl is being connected.
        2) An existing ctrl is successfully reconnected.
        3) An existing ctrl is being reset.
      
      While in (1) ctrl->namespaces is empty, in (2) and (3) namespaces may
      already exist, and nvme_read_ana_log() may call nvme_update_ns_ana_state().
      
      This results in a hang when the ANA state of an existing namespace changes
      and makes the disk live: nvme_mpath_set_live() issues I/O to the namespace
      through the ctrl, which does NOT have I/O queues yet.
      
      See sample hang below.
      
      Solution:
      - nvme_update_ns_ana_state() calls set_live only if the ctrl is live.
      - The nvme_read_ana_log() call from nvme_mpath_init_identify()
        therefore only fetches and parses the ANA log;
        any errors in this process fail the ctrl setup as appropriate.
      - A separate function, nvme_mpath_update(), is called in
        nvme_start_ctrl(); it parses the ANA log without fetching it.
        At this point the ctrl is live, so disks can be set live normally.
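      
      As a rough illustration of this split (a standalone sketch, not the
      actual kernel patch; the types and helpers below are simplified
      stand-ins for the nvme_* functions named above), the idea is:
      
          #include <stdbool.h>
          #include <stdio.h>
          
          enum ctrl_state { CTRL_CONNECTING, CTRL_LIVE };
          enum ana_state  { ANA_INACCESSIBLE, ANA_OPTIMIZED };
          
          struct ns {
              enum ana_state ana_state;
              bool disk_live;
          };
          
          struct ctrl {
              enum ctrl_state state;
              struct ns *namespaces;
              int nr_namespaces;
          };
          
          /* Apply one parsed ANA log entry; only touch the disk when the
           * ctrl can already do I/O (stands in for nvme_mpath_set_live()). */
          static void update_ns_ana_state(struct ctrl *ctrl, struct ns *ns,
                                          enum ana_state new_state)
          {
              ns->ana_state = new_state;
              if (new_state == ANA_OPTIMIZED && ctrl->state == CTRL_LIVE)
                  ns->disk_live = true;
          }
          
          /* Walk an already-fetched ANA log and apply it to all namespaces;
           * models the role of nvme_mpath_update() run from nvme_start_ctrl(). */
          static void mpath_update(struct ctrl *ctrl, const enum ana_state *log)
          {
              for (int i = 0; i < ctrl->nr_namespaces; i++)
                  update_ns_ana_state(ctrl, &ctrl->namespaces[i], log[i]);
          }
          
          int main(void)
          {
              struct ns nslist[1] = { { ANA_INACCESSIBLE, false } };
              struct ctrl ctrl = { CTRL_CONNECTING, nslist, 1 };
              enum ana_state log[1] = { ANA_OPTIMIZED };
          
              mpath_update(&ctrl, log);   /* during identify: parse only, no set_live */
              ctrl.state = CTRL_LIVE;     /* nvme_start_ctrl() has completed */
              mpath_update(&ctrl, log);   /* now the disk may safely go live */
              printf("disk live: %d\n", nslist[0].disk_live);
              return 0;
          }
      
      Because the parse-only pass never calls set_live on a ctrl without I/O
      queues, the identify path cannot hang the way the trace below shows.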
      
      Sample failure:
          nvme nvme0: starting error recovery
          nvme nvme0: Reconnecting in 10 seconds...
          block nvme0n6: no usable path - requeuing I/O
          INFO: task kworker/u8:3:312 blocked for more than 122 seconds.
                Tainted: G            E     5.14.5-1.el7.elrepo.x86_64 #1
          Workqueue: nvme-wq nvme_tcp_reconnect_ctrl_work [nvme_tcp]
          Call Trace:
           __schedule+0x2a2/0x7e0
           schedule+0x4e/0xb0
           io_schedule+0x16/0x40
           wait_on_page_bit_common+0x15c/0x3e0
           do_read_cache_page+0x1e0/0x410
           read_cache_page+0x12/0x20
           read_part_sector+0x46/0x100
           read_lba+0x121/0x240
           efi_partition+0x1d2/0x6a0
           bdev_disk_changed.part.0+0x1df/0x430
           bdev_disk_changed+0x18/0x20
           blkdev_get_whole+0x77/0xe0
           blkdev_get_by_dev+0xd2/0x3a0
           __device_add_disk+0x1ed/0x310
           device_add_disk+0x13/0x20
           nvme_mpath_set_live+0x138/0x1b0 [nvme_core]
           nvme_update_ns_ana_state+0x2b/0x30 [nvme_core]
           nvme_update_ana_state+0xca/0xe0 [nvme_core]
           nvme_parse_ana_log+0xac/0x170 [nvme_core]
           nvme_read_ana_log+0x7d/0xe0 [nvme_core]
           nvme_mpath_init_identify+0x105/0x150 [nvme_core]
           nvme_init_identify+0x2df/0x4d0 [nvme_core]
           nvme_init_ctrl_finish+0x8d/0x3b0 [nvme_core]
           nvme_tcp_setup_ctrl+0x337/0x390 [nvme_tcp]
           nvme_tcp_reconnect_ctrl_work+0x24/0x40 [nvme_tcp]
           process_one_work+0x1bd/0x360
           worker_thread+0x50/0x3d0
      Signed-off-by: Anton Eidelman <anton@lightbitslabs.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • nvme: allow duplicate NSIDs for private namespaces · 5974ea7c
      Sungup Moon authored
      An NVMe subsystem with multiple controllers can have private namespaces
      that use the same NSID under some conditions:
      
       "If Namespace Management, ANA Reporting, or NVM Sets are supported, the
        NSIDs shall be unique within the NVM subsystem. If the Namespace
        Management, ANA Reporting, and NVM Sets are not supported, then NSIDs:
         a) for shared namespace shall be unique; and
         b) for private namespace are not required to be unique."
      
      Reference: Section 6.1.6 NSID and Namespace Usage; NVM Express 1.4c spec.
      
      Make sure this specific setup is supported in Linux.
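      
      A minimal standalone sketch of the idea (not the actual refactor of the
      kernel's namespace-head lookup; the names and types here are
      illustrative only): a lookup keyed on NSID may only reuse an existing
      head for shared namespaces, so private namespaces that happen to reuse
      an NSID always get a head of their own.
      
          #include <stdbool.h>
          #include <stddef.h>
          #include <stdio.h>
          
          struct ns_head {
              unsigned int nsid;
              bool shared;
              struct ns_head *next;
          };
          
          /* Return an existing head only for a shared namespace with this NSID;
           * a private namespace never matches and gets a fresh head. */
          static struct ns_head *find_shared_head(struct ns_head *list,
                                                  unsigned int nsid, bool is_shared)
          {
              if (!is_shared)
                  return NULL;
              for (struct ns_head *h = list; h; h = h->next)
                  if (h->shared && h->nsid == nsid)
                      return h;
              return NULL;
          }
          
          int main(void)
          {
              struct ns_head shared = { .nsid = 1, .shared = true, .next = NULL };
          
              /* A shared namespace with NSID 1 attaches to the existing head... */
              printf("shared match:  %p\n", (void *)find_shared_head(&shared, 1, true));
              /* ...while a private namespace with the same NSID does not. */
              printf("private match: %p\n", (void *)find_shared_head(&shared, 1, false));
              return 0;
          }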
      
      Fixes: 9ad1927a ("nvme: always search for namespace head")
      Signed-off-by: Sungup Moon <sungup.moon@samsung.com>
      [hch: refactored and fixed the controller vs subsystem based naming
            conflict]
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
  5. 16 Mar 2022, 2 commits
  6. 08 Mar 2022, 2 commits
  7. 28 Feb 2022, 4 commits
  8. 23 Dec 2021, 2 commits
  9. 08 Dec 2021, 1 commit
    • nvme: fix use after free when disconnecting a reconnecting ctrl · 8b77fa6f
      Ruozhu Li authored
      A crash happens when trying to disconnect a reconnecting ctrl:
      
       1) The network was cut off when the connection had just been
          established; scan work hung there waiting for some I/Os to
          complete.  Those I/Os were retried because we return
          BLK_STS_RESOURCE to the block layer while reconnecting.
       2) After a while, I tried to disconnect this connection.  This
          procedure also hung because it tried to obtain ctrl->scan_lock.
          Note that by now we have switched the controller state to
          NVME_CTRL_DELETING.
       3) In nvme_check_ready(), we always return true when ctrl->state is
          NVME_CTRL_DELETING, so those retried I/Os were issued to the
          bottom device, which was already freed.
      
      To fix this, when ctrl->state is NVME_CTRL_DELETING, issue a command to
      the bottom device only when the queue state is live.  If not, return a
      host path error to the block layer.
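      
      A rough standalone model of that check (not the kernel's
      nvme_check_ready() itself; the states and return values below are
      simplified for illustration):
      
          #include <stdbool.h>
          #include <stdio.h>
          
          enum ctrl_state  { CTRL_LIVE, CTRL_CONNECTING, CTRL_DELETING };
          enum disposition { ISSUE_TO_DEVICE, HOST_PATH_ERROR, REQUEUE };
          
          /* Decide what to do with a command given ctrl state and queue state. */
          static enum disposition check_ready(enum ctrl_state state, bool queue_live)
          {
              switch (state) {
              case CTRL_LIVE:
                  return ISSUE_TO_DEVICE;
              case CTRL_DELETING:
                  /* The fix: while deleting, only go down to the transport if
                   * the queue is still live; otherwise fail with a host path
                   * error instead of touching freed queues. */
                  return queue_live ? ISSUE_TO_DEVICE : HOST_PATH_ERROR;
              default:
                  /* Reconnecting: let the block layer retry the command later. */
                  return REQUEUE;
              }
          }
          
          int main(void)
          {
              printf("%d\n", check_ready(CTRL_DELETING, false)); /* HOST_PATH_ERROR */
              printf("%d\n", check_ready(CTRL_DELETING, true));  /* ISSUE_TO_DEVICE */
              return 0;
          }
      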
      Signed-off-by: Ruozhu Li <liruozhu@huawei.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
  10. 21 Oct 2021, 1 commit
  11. 20 Oct 2021, 2 commits
  12. 19 Oct 2021, 1 commit
  13. 28 Sep 2021, 1 commit
  14. 06 Sep 2021, 2 commits
  15. 17 Aug 2021, 2 commits
  16. 16 Aug 2021, 1 commit
  17. 15 Aug 2021, 1 commit
  18. 21 Jul 2021, 1 commit
  19. 01 Jul 2021, 2 commits
  20. 17 Jun 2021, 1 commit
  21. 03 Jun 2021, 2 commits
  22. 12 May 2021, 1 commit
  23. 04 May 2021, 2 commits
  24. 22 Apr 2021, 2 commits
  25. 15 Apr 2021, 2 commits