1. 06 Sep 2021, 1 commit
    • nvme-multipath: revalidate paths during rescan · e7d65803
      Authored by Hannes Reinecke
      When triggering a rescan due to a namespace resize we will be
      receiving AENs on every controller, triggering a rescan of all
      attached namespaces. If multipath is active, only the current path and
      the ns_head disk will be updated; the other paths will still refer to
      the old size until AENs for the remaining controllers are received.
      
      If I/O comes in before that it might be routed to one of the old
      paths, triggering an I/O failure with 'access beyond end of device'.
      With this patch the old paths are skipped from multipath path
      selection until the controller serving these paths has been rescanned.
      Signed-off-by: Hannes Reinecke <hare@suse.de>
      [dwagner: - introduce NVME_NS_READY flag instead of NVME_NS_INVALIDATE
                - use 'revalidate' instead of 'invalidate', which
                  follows the zoned device code path
                - clear NVME_NS_READY before clearing current_path]
      Signed-off-by: Daniel Wagner <dwagner@suse.de>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      e7d65803
  2. 17 Aug 2021, 2 commits
  3. 16 Aug 2021, 1 commit
  4. 15 Aug 2021, 1 commit
  5. 21 Jul 2021, 1 commit
  6. 01 Jul 2021, 2 commits
  7. 17 Jun 2021, 1 commit
  8. 03 Jun 2021, 2 commits
  9. 12 May 2021, 1 commit
  10. 04 May 2021, 2 commits
  11. 22 Apr 2021, 2 commits
  12. 15 Apr 2021, 5 commits
  13. 06 Apr 2021, 1 commit
    • nvme: implement non-mdts command limits · 5befc7c2
      Authored by Keith Busch
      Commands that access LBA contents without a data transfer between the
      host and the device historically have not had a spec-defined upper
      limit. The driver set the queue constraints for such commands to the
      max data transfer size just to be safe, but this artificial constraint
      frequently limits devices below their capabilities.
      
      The NVMe Workgroup ratified TP4040, which defines how a controller may
      advertise its non-MDTS limits. Use these if provided and default to
      the current constraints if not. Since the Dataset Management command
      limits are defined in logical blocks, but there is no namespace to tell
      us the logical block size, the code defaults to the safe 512b size.
      Signed-off-by: Keith Busch <kbusch@kernel.org>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      5befc7c2
  14. 03 Apr 2021, 3 commits
  15. 10 Feb 2021, 3 commits
  16. 02 Feb 2021, 1 commit
  17. 06 Jan 2021, 2 commits
  18. 02 Dec 2020, 5 commits
    • nvme: export zoned namespaces without Zone Append support read-only · 2f4c9ba2
      Authored by Javier González
      Allow ZNS NVMe SSDs to present a read-only namespace when append is not
      supported, instead of rejecting the namespace directly.
      
      This allows (i) the namespace to be used in read-only mode, which is not
      a problem as the append command only affects the write path, and (ii) to
      use standard management tools such as nvme-cli to choose a different
      format or firmware slot that is compatible with the Linux zoned block
      device.
      Signed-off-by: Javier González <javier.gonz@samsung.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      2f4c9ba2
    • nvme-fabrics: reject I/O to offline device · 8c4dfea9
      Authored by Victor Gladkov
      Commands get stuck while the host NVMe-oF controller is in the
      reconnect state. The controller enters the reconnect state when it
      loses the connection with the target. It tries to reconnect every
      10 seconds (default) until a successful reconnect or until the
      reconnect time-out is reached. The default reconnect time-out is
      10 minutes.
      
      Applications are expecting commands to complete with success or error
      within a certain timeout (30 seconds by default).  The NVMe host is
      enforcing that timeout while it is connected, but during reconnect the
      timeout is not enforced and commands may get stuck for a long period or
      even forever.
      
      To fix this long delay due to the default timeout, introduce a new
      "fast_io_fail_tmo" session parameter. The timeout is measured in
      seconds from the start of the controller reconnect, and any command
      beyond that timeout is rejected. The new parameter value may be
      passed during 'connect'. The default value of -1 means no timeout
      (matching the current behavior).
      Signed-off-by: Victor Gladkov <victor.gladkov@kioxia.com>
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Reviewed-by: Hannes Reinecke <hare@suse.de>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Reviewed-by: Chao Leng <lengchao@huawei.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      8c4dfea9
    • nvme: split nvme_alloc_request() · 39dfe844
      Authored by Chaitanya Kulkarni
      Right now nvme_alloc_request() allocates a request from the block
      layer based on the value of the qid: when qid is set to NVME_QID_ANY
      it uses blk_mq_alloc_request(), otherwise blk_mq_alloc_request_hctx().
      
      nvme_alloc_request() is called from different contexts; the only
      place where it uses a non-NVME_QID_ANY value is for fabrics connect
      commands :-
      
      nvme_submit_sync_cmd()		NVME_QID_ANY
      nvme_features()			NVME_QID_ANY
      nvme_sec_submit()		NVME_QID_ANY
      nvmf_reg_read32()		NVME_QID_ANY
      nvmf_reg_read64()		NVME_QID_ANY
      nvmf_reg_write32()		NVME_QID_ANY
      nvmf_connect_admin_queue()	NVME_QID_ANY
      nvme_submit_user_cmd()		NVME_QID_ANY
      	nvme_alloc_request()
      nvme_keep_alive()		NVME_QID_ANY
      	nvme_alloc_request()
      nvme_timeout()			NVME_QID_ANY
      	nvme_alloc_request()
      nvme_delete_queue()		NVME_QID_ANY
      	nvme_alloc_request()
      nvmet_passthru_execute_cmd()	NVME_QID_ANY
      	nvme_alloc_request()
      nvmf_connect_io_queue() 	QID
      	__nvme_submit_sync_cmd()
      		nvme_alloc_request()
      
      With passthru, nvme_alloc_request() now falls into the I/O fast path,
      where blk_mq_alloc_request_hctx() never gets called, yet the qid
      check adds an additional branch to that fast path.
      
      Split the nvme_alloc_request() into nvme_alloc_request() and
      nvme_alloc_request_qid().
      
      Replace each call of nvme_alloc_request() with the NVME_QID_ANY
      parameter with a call to the newly added nvme_alloc_request() that
      takes no qid.
      
      Replace the call of nvme_alloc_request() with a QID parameter with a
      call to either the newly added nvme_alloc_request() or
      nvme_alloc_request_qid(), based on the qid value set in
      __nvme_submit_sync_cmd().
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      39dfe844
    • nvme: use consistent macro name for timeout · dc96f938
      Authored by Chaitanya Kulkarni
      This is purely a cleanup patch: add the NVME prefix to ADMIN_TIMEOUT
      to make it consistent with NVME_IO_TIMEOUT.
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      dc96f938
    • nvme: simplify nvme_req_qid() · 84115d6d
      Authored by Baolin Wang
      Use the request's '->mq_hctx->queue_num' directly to simplify the
      nvme_req_qid() function.
      Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      84115d6d
  19. 14 Nov 2020, 1 commit
  20. 03 Nov 2020, 1 commit
  21. 22 Oct 2020, 1 commit
  22. 07 Oct 2020, 1 commit