1. 13 Apr 2021 (1 commit)
  2. 26 Jan 2021 (1 commit)
  3. 25 Jan 2021 (3 commits)
  4. 02 Dec 2020 (2 commits)
    • nvme: split nvme_alloc_request() · 39dfe844
      Chaitanya Kulkarni authored
      Right now nvme_alloc_request() allocates a request from the block layer
      based on the value of qid: when qid is NVME_QID_ANY it uses
      blk_mq_alloc_request(), otherwise blk_mq_alloc_request_hctx().
      
      The function nvme_alloc_request() is called from different contexts; the
      only callers that use a non-NVME_QID_ANY value are the fabrics connect
      commands:
      
      nvme_submit_sync_cmd()		NVME_QID_ANY
      nvme_features()			NVME_QID_ANY
      nvme_sec_submit()		NVME_QID_ANY
      nvmf_reg_read32()		NVME_QID_ANY
      nvmf_reg_read64()		NVME_QID_ANY
      nvmf_reg_write32()		NVME_QID_ANY
      nvmf_connect_admin_queue()	NVME_QID_ANY
      nvme_submit_user_cmd()		NVME_QID_ANY
      	nvme_alloc_request()
      nvme_keep_alive()		NVME_QID_ANY
      	nvme_alloc_request()
      nvme_timeout()			NVME_QID_ANY
      	nvme_alloc_request()
      nvme_delete_queue()		NVME_QID_ANY
      	nvme_alloc_request()
      nvmet_passthru_execute_cmd()	NVME_QID_ANY
      	nvme_alloc_request()
      nvmf_connect_io_queue() 	QID
      	__nvme_submit_sync_cmd()
      		nvme_alloc_request()
      
      With passthru, nvme_alloc_request() now falls into the I/O fast path,
      where blk_mq_alloc_request_hctx() is never called, yet the qid check
      adds an extra branch to that fast path.
      
      Split the nvme_alloc_request() into nvme_alloc_request() and
      nvme_alloc_request_qid().
      
      Replace each call of nvme_alloc_request() with the NVME_QID_ANY param
      with a call to the newly added nvme_alloc_request() that takes no qid.

      Replace the call of nvme_alloc_request() with a QID param with calls to
      the newly added nvme_alloc_request() or nvme_alloc_request_qid(), based
      on the qid value set in __nvme_submit_sync_cmd().
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • nvme: centralize setting the timeout in nvme_alloc_request · 0d2e7c84
      Chaitanya Kulkarni authored
      The function nvme_alloc_request() is called from different contexts (I/O
      and Admin queue), and callers do not account for the I/O timeout when
      calling from the I/O queue context.
      
      Update nvme_alloc_request() to set the default I/O and Admin timeout
      value based on whether the queuedata is set or not.
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
  5. 08 Jul 2020 (1 commit)
  6. 27 May 2020 (2 commits)
  7. 06 Aug 2019 (2 commits)
  8. 21 Jun 2019 (1 commit)
  9. 07 May 2019 (1 commit)
  10. 20 Feb 2019 (1 commit)
  11. 13 Dec 2018 (1 commit)
  12. 12 Dec 2018 (4 commits)
  13. 09 Oct 2018 (3 commits)
    • lightnvm: do no update csecs and sos on 1.2 · 6fd05cad
      Javier González authored
      1.2 devices expose their data and metadata sizes through the separate
      identify command. Make sure that the NVMe LBA format does not override
      these values.
      Signed-off-by: Javier González <javier@cnexlabs.com>
      Signed-off-by: Matias Bjørling <mb@lightnvm.io>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • lightnvm: use internal allocation for chunk log page · 090ee26f
      Javier González authored
      The lightnvm subsystem provides helpers to retrieve chunk metadata,
      where the target needs to provide a buffer to store the metadata. An
      implicit assumption is that this buffer is contiguous and can be used to
      retrieve the data from the device. If the device exposes too many
      chunks, then kmalloc might fail, thus failing instance creation.
      
      This patch removes this assumption by implementing an internal buffer in
      the lightnvm subsystem to retrieve chunk metadata. Targets can then
      use virtual memory allocations. Since this is a target API change, adapt
      pblk accordingly.
      Signed-off-by: Javier González <javier@cnexlabs.com>
      Reviewed-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
      Signed-off-by: Matias Bjørling <mb@lightnvm.io>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • lightnvm: move bad block and chunk state logic to core · aff3fb18
      Matias Bjørling authored
      pblk implements two data paths for recovering line state: one for 1.2
      and another for 2.0. Instead of having pblk implement these, combine
      them in the core to reduce complexity and make them available to other
      targets.
      
      The new interface will adhere to the 2.0 chunk definition,
      including managing open chunks with an active write pointer. To provide
      this interface, a 1.2 device recovers the state of the chunks by
      manually detecting whether each chunk is free, open, closed, or offline,
      and if open, scanning the flash pages sequentially to find the next writable
      page. This process takes on average ~10 seconds on a device with 64 dies,
      1024 blocks and 60us read access time. The process can be parallelized
      but is left out for maintenance simplicity, as the 1.2 specification is
      deprecated. For 2.0 devices, the logic is maintained internally in the
      drive and retrieved through the 2.0 interface.
      Signed-off-by: Matias Bjørling <mb@lightnvm.io>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  14. 28 Sep 2018 (1 commit)
  15. 06 Aug 2018 (1 commit)
  16. 28 Jul 2018 (1 commit)
  17. 13 Jul 2018 (2 commits)
  18. 30 Mar 2018 (12 commits)