1. 01 Jul 2021, 1 commit
  2. 17 Jun 2021, 1 commit
  3. 03 Jun 2021, 2 commits
  4. 12 May 2021, 1 commit
  5. 04 May 2021, 2 commits
  6. 22 Apr 2021, 2 commits
  7. 15 Apr 2021, 5 commits
  8. 06 Apr 2021, 1 commit
    • nvme: implement non-mdts command limits · 5befc7c2
      Authored by Keith Busch
      Commands that access LBA contents without a data transfer between the
      host and the controller historically have not had a spec-defined upper
      limit. The driver set the queue constraints for such commands to the
      max data transfer size just to be safe, but this artificial constraint
      frequently limits devices below their capabilities.
      
      TP4040, ratified by the NVMe Workgroup, defines how a controller may
      advertise its non-MDTS limits. Use these if provided and default to
      the current constraints if not. Since the Dataset Management command
      limits are defined in logical blocks, but there is no namespace to tell
      us the logical block size, the code defaults to the safe 512-byte size.
      Signed-off-by: Keith Busch <kbusch@kernel.org>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      5befc7c2
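      Below is a minimal C sketch of the idea described in the commit above:
      a Dataset Management limit reported in logical blocks is converted to
      512-byte sectors for the block layer, falling back to a 512-byte
      logical block size when no namespace is attached. The function and
      parameter names (nvme_apply_dsm_limit, dmr_limit_lbs) are illustrative
      assumptions, not the actual driver symbols.

      /* Illustrative sketch only -- names are assumptions, not driver symbols. */
      #include <linux/blkdev.h>

      #define NVME_DEFAULT_LB_SHIFT 9 /* assume 512-byte blocks without a namespace */

      static void nvme_apply_dsm_limit(struct request_queue *q,
                                       u32 dmr_limit_lbs, unsigned int lba_shift)
      {
              unsigned int max_sectors;

              /* Controller advertises no non-MDTS limit: keep current constraints. */
              if (!dmr_limit_lbs)
                      return;

              if (!lba_shift)
                      lba_shift = NVME_DEFAULT_LB_SHIFT;

              /* Convert logical blocks to 512-byte sectors for the block layer. */
              max_sectors = dmr_limit_lbs << (lba_shift - SECTOR_SHIFT);
              blk_queue_max_discard_sectors(q, max_sectors);
      }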
  9. 03 Apr 2021, 3 commits
  10. 10 Feb 2021, 3 commits
  11. 02 Feb 2021, 1 commit
  12. 06 Jan 2021, 2 commits
  13. 02 Dec 2020, 5 commits
    • nvme: export zoned namespaces without Zone Append support read-only · 2f4c9ba2
      Authored by Javier González
      Allow ZNS NVMe SSDs to present a read-only namespace when append is not
      supported, instead of rejecting the namespace directly.
      
      This allows (i) using the namespace in read-only mode, which is not a
      problem since the Zone Append command only affects the write path, and
      (ii) using standard management tools such as nvme-cli to choose a
      different format or firmware slot that is compatible with the Linux
      zoned block device.
      Signed-off-by: Javier González <javier.gonz@samsung.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      2f4c9ba2
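      A minimal sketch of the behavior described in the commit above,
      assuming a hypothetical supports_zone_append flag and helper name:
      when Zone Append is unsupported, the zoned namespace's disk is marked
      read-only instead of the namespace being rejected.

      /* Illustrative sketch only -- flag and helper names are assumptions. */
      #include <linux/blkdev.h>

      static int nvme_update_zoned_ns(struct gendisk *disk,
                                      bool supports_zone_append)
      {
              if (!supports_zone_append) {
                      /*
                       * Zone Append only matters on the write path; reads and
                       * management tools such as nvme-cli keep working, so
                       * expose the namespace read-only instead of rejecting it.
                       */
                      set_disk_ro(disk, true);
              }
              return 0;
      }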
    • nvme-fabrics: reject I/O to offline device · 8c4dfea9
      Authored by Victor Gladkov
      Commands get stuck while the host NVMe-oF controller is in the
      reconnect state.  The controller enters the reconnect state when it
      loses the connection with the target.  It tries to reconnect every
      10 seconds (default) until a successful reconnect or until the
      reconnect timeout is reached.  The default reconnect timeout is
      10 minutes.
      
      Applications expect commands to complete with success or error within
      a certain timeout (30 seconds by default).  The NVMe host enforces
      that timeout while it is connected, but during reconnect the timeout
      is not enforced and commands may get stuck for a long period or even
      forever.
      
      To fix this long delay due to the default timeout, introduce a new
      "fast_io_fail_tmo" session parameter.  The timeout is measured in
      seconds from the start of the controller reconnect, and any command
      outstanding beyond that timeout is rejected.  The new parameter value
      may be passed during 'connect'.  The default value of -1 means no
      timeout (matching the current behavior).
      Signed-off-by: Victor Gladkov <victor.gladkov@kioxia.com>
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Reviewed-by: Hannes Reinecke <hare@suse.de>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Reviewed-by: Chao Leng <lengchao@huawei.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      8c4dfea9
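      The sketch below illustrates the fast_io_fail_tmo idea from the commit
      above: once the controller has been reconnecting longer than the
      configured number of seconds, new I/O is failed immediately instead of
      waiting for ctrl_loss_tmo.  The structure and helper names
      (fabrics_ctrl, reconnect_start, fabrics_should_failfast) and the
      specific status codes are illustrative assumptions, not the driver's
      actual symbols.

      /* Illustrative sketch only -- names are assumptions, not driver symbols. */
      #include <linux/jiffies.h>
      #include <linux/blk_types.h>

      struct fabrics_ctrl {
              int fast_io_fail_tmo;          /* seconds; -1 = no timeout (old behavior) */
              unsigned long reconnect_start; /* jiffies when reconnect began */
      };

      static bool fabrics_should_failfast(const struct fabrics_ctrl *ctrl)
      {
              if (ctrl->fast_io_fail_tmo < 0)
                      return false;
              return time_after(jiffies,
                                ctrl->reconnect_start + ctrl->fast_io_fail_tmo * HZ);
      }

      static blk_status_t fabrics_handle_io_while_reconnecting(
                      const struct fabrics_ctrl *ctrl)
      {
              /* Past the fast_io_fail window: fail the command promptly. */
              if (fabrics_should_failfast(ctrl))
                      return BLK_STS_TRANSPORT;
              /* Otherwise hold the command until the reconnect resolves. */
              return BLK_STS_RESOURCE;
      }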
    • nvme: split nvme_alloc_request() · 39dfe844
      Authored by Chaitanya Kulkarni
      Right now nvme_alloc_request() allocates a request from the block
      layer based on the value of the qid.  When qid is set to NVME_QID_ANY
      it uses blk_mq_alloc_request(), otherwise blk_mq_alloc_request_hctx().
      
      The function nvme_alloc_request() is called from different contexts.
      The only place where it uses a non-NVME_QID_ANY value is for fabrics
      connect commands:
      
      nvme_submit_sync_cmd()		NVME_QID_ANY
      nvme_features()			NVME_QID_ANY
      nvme_sec_submit()		NVME_QID_ANY
      nvmf_reg_read32()		NVME_QID_ANY
      nvmf_reg_read64()		NVME_QID_ANY
      nvmf_reg_write32()		NVME_QID_ANY
      nvmf_connect_admin_queue()	NVME_QID_ANY
      nvme_submit_user_cmd()		NVME_QID_ANY
      	nvme_alloc_request()
      nvme_keep_alive()		NVME_QID_ANY
      	nvme_alloc_request()
      nvme_timeout()			NVME_QID_ANY
      	nvme_alloc_request()
      nvme_delete_queue()		NVME_QID_ANY
      	nvme_alloc_request()
      nvmet_passthru_execute_cmd()	NVME_QID_ANY
      	nvme_alloc_request()
      nvmf_connect_io_queue() 	QID
      	__nvme_submit_sync_cmd()
      		nvme_alloc_request()
      
      With passthru, nvme_alloc_request() now falls into the I/O fast path,
      where blk_mq_alloc_request_hctx() never gets called, yet the qid check
      still adds an extra branch to the fast path.
      
      Split nvme_alloc_request() into nvme_alloc_request() and
      nvme_alloc_request_qid().
      
      Replace each call to nvme_alloc_request() with the NVME_QID_ANY
      parameter with a call to the newly added nvme_alloc_request() that
      takes no qid argument.
      
      Replace the call to nvme_alloc_request() with a QID parameter with a
      call to either nvme_alloc_request() or the newly added
      nvme_alloc_request_qid(), based on the qid value set in
      __nvme_submit_sync_cmd().
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      39dfe844
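      A sketch of the split described above: the plain variant stays on the
      blk_mq_alloc_request() path with no qid branch, and only the fabrics
      connect path uses the _qid() variant.  nvme_req_op() and
      nvme_init_request() stand in here for the shared request setup and are
      assumptions, not necessarily the helpers used by the actual patch.

      /* Illustrative sketch; assumes the driver's nvme.h for struct nvme_command. */
      #include <linux/blk-mq.h>

      struct request *nvme_alloc_request(struct request_queue *q,
                      struct nvme_command *cmd, blk_mq_req_flags_t flags)
      {
              struct request *req;

              /* NVME_QID_ANY path: plain allocation, no qid branch in the fast path. */
              req = blk_mq_alloc_request(q, nvme_req_op(cmd), flags);
              if (!IS_ERR(req))
                      nvme_init_request(req, cmd);
              return req;
      }

      struct request *nvme_alloc_request_qid(struct request_queue *q,
                      struct nvme_command *cmd, blk_mq_req_flags_t flags, int qid)
      {
              struct request *req;

              /* Fabrics connect path: pin the request to the hw queue for qid. */
              req = blk_mq_alloc_request_hctx(q, nvme_req_op(cmd), flags,
                              qid ? qid - 1 : 0);
              if (!IS_ERR(req))
                      nvme_init_request(req, cmd);
              return req;
      }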
    • nvme: use consistent macro name for timeout · dc96f938
      Authored by Chaitanya Kulkarni
      This is purely a cleanup patch: add the NVME prefix to ADMIN_TIMEOUT
      to make it consistent with NVME_IO_TIMEOUT.
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      dc96f938
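      For illustration, the rename amounts to something like the following;
      the 60-second expansion shown is an assumption, not taken from this
      commit.

      /* Before (expansion assumed for illustration): */
      #define ADMIN_TIMEOUT        (60 * HZ)
      /* After, consistent with NVME_IO_TIMEOUT: */
      #define NVME_ADMIN_TIMEOUT   (60 * HZ)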
    • nvme: simplify nvme_req_qid() · 84115d6d
      Authored by Baolin Wang
      Use the request's '->mq_hctx->queue_num' directly to simplify the
      nvme_req_qid() function.
      Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      84115d6d
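      A sketch of what the simplified helper might look like, assuming the
      existing convention that the admin queue reports qid 0 and I/O queues
      are numbered from 1; the queuedata check is an assumption here.

      /* Illustrative sketch only. */
      static inline u16 nvme_req_qid(struct request *req)
      {
              /* Admin queue has no queuedata and is reported as qid 0. */
              if (!req->q->queuedata)
                      return 0;

              /* I/O queues: hardware context numbers are 0-based, qids start at 1. */
              return req->mq_hctx->queue_num + 1;
      }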
  14. 14 Nov 2020, 1 commit
  15. 03 Nov 2020, 1 commit
  16. 22 Oct 2020, 1 commit
  17. 07 Oct 2020, 2 commits
  18. 27 Sep 2020, 2 commits
  19. 22 Sep 2020, 1 commit
  20. 09 Sep 2020, 1 commit
    • nvme: Revert: Fix controller creation races with teardown flow · b63de840
      Authored by James Smart
      The indicated patch introduced a barrier in the sysfs_delete attribute
      for the controller that rejects the request if the controller isn't
      created.  "Created" is defined as at least one call to nvme_start_ctrl().
      
      This is problematic in error-injection testing.  If an error occurs on
      the initial attempt to create an association and the controller enters
      reconnect attempts, the admin cannot delete the controller until
      either a successful association is created or ctrl_loss_tmo times out.
      
      This issue is particularly hurtful when the "admin" is nvme-cli,
      performing a connection to a discovery controller initiated via
      auto-connect scripts.  With the FC transport, if the first connection
      attempt fails, the controller enters a normal reconnect state but
      returns control to the cli thread that created the controller.
      In this scenario, the cli attempts to read the discovery log via ioctl,
      which fails, so the cli sees an empty log and proceeds to delete the
      discovery controller.  The delete is rejected and the controller is
      left live.  If the discovery controller reconnect then succeeds, there
      is no action left to delete it, and it sits live doing nothing.
      
      Cc: <stable@vger.kernel.org> # v5.7+
      Fixes: ce151813 ("nvme: Fix controller creation races with teardown flow")
      Signed-off-by: James Smart <james.smart@broadcom.com>
      CC: Israel Rukshin <israelr@mellanox.com>
      CC: Max Gurtovoy <maxg@mellanox.com>
      CC: Christoph Hellwig <hch@lst.de>
      CC: Keith Busch <kbusch@kernel.org>
      CC: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      b63de840
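      For context, the barrier being reverted looked roughly like the sketch
      below: a created flag set on the first nvme_start_ctrl() call gates
      the sysfs delete.  The 'created' field name and the -EBUSY return are
      shown as assumptions; this assumes the driver's nvme.h context.

      /* Illustrative sketch of the reverted barrier, not the exact code. */
      static ssize_t nvme_sysfs_delete(struct device *dev,
                      struct device_attribute *attr, const char *buf, size_t count)
      {
              struct nvme_ctrl *ctrl = dev_get_drvdata(dev);

              /* Reverted check: refuse deletion until nvme_start_ctrl() has run once. */
              if (!ctrl->created)
                      return -EBUSY;

              if (device_remove_file_self(dev, attr))
                      nvme_delete_ctrl_sync(ctrl);
              return count;
      }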
  21. 02 Sep 2020, 2 commits