1. 02 Feb 2021, 7 commits
  2. 26 Jan 2021, 1 commit
  3. 25 Jan 2021, 4 commits
  4. 21 Jan 2021, 2 commits
  5. 19 Jan 2021, 5 commits
    • nvmet: set right status on error in id-ns handler · bffcd507
      Committed by Chaitanya Kulkarni
      The function nvmet_execute_identify_ns() doesn't set the status if the
      call to nvmet_find_namespace() fails; in that case the status of the
      request ends up being whatever nvmet_copy_sgl() returned.
      
      Set the status to NVME_SC_INVALID_NS and adjust the code so that the
      request carries the right status when nvmet_find_namespace() fails (the
      error path is sketched after this entry).
      
      Without this patch :-
      NVME Identify Namespace 3:
      nsze    : 0
      ncap    : 0
      nuse    : 0
      nsfeat  : 0
      nlbaf   : 0
      flbas   : 0
      mc      : 0
      dpc     : 0
      dps     : 0
      nmic    : 0
      rescap  : 0
      fpi     : 0
      dlfeat  : 0
      nawun   : 0
      nawupf  : 0
      nacwu   : 0
      nabsn   : 0
      nabo    : 0
      nabspf  : 0
      noiob   : 0
      nvmcap  : 0
      mssrl   : 0
      mcl     : 0
      msrc    : 0
      nsattr	: 0
      nvmsetid: 0
      anagrpid: 0
      endgid  : 0
      nguid   : 00000000000000000000000000000000
      eui64   : 0000000000000000
      lbaf  0 : ms:0   lbads:0  rp:0 (in use)
      
      With this patch series :-
      feb3b88b501e (HEAD -> nvme-5.11) nvmet: remove extra variable in identify ns
      6302aa67210a nvmet: remove extra variable in id-desclist
      ed57951da453 nvmet: remove extra variable in smart log nsid
      be384b8c24dc nvmet: set right status on error in id-ns handler
      
      NVMe status: INVALID_NS: The namespace or the format of that namespace is invalid(0xb)
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
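      A minimal sketch of the fixed error path, assuming the nvmet target
      helpers keep their usual signatures (nvmet_find_namespace(),
      nvmet_copy_to_sgl(), nvmet_req_complete()); this illustrates the change
      rather than reproducing the verbatim patch:

      /* Sketch only: simplified identify-ns handler showing where the status
       * is now set; the point is the handling of a failed namespace lookup. */
      static void nvmet_execute_identify_ns(struct nvmet_req *req)
      {
              struct nvme_id_ns *id;
              struct nvmet_ns *ns;
              u16 status = 0;

              ns = nvmet_find_namespace(req->sq->ctrl, req->cmd->identify.nsid);
              if (!ns) {
                      /* before the fix, status stayed 0 here and the request
                       * ended up with whatever the SGL copy of the zeroed
                       * buffer returned */
                      status = NVME_SC_INVALID_NS | NVME_SC_DNR;
                      goto out;
              }

              id = kzalloc(sizeof(*id), GFP_KERNEL);
              if (!id) {
                      status = NVME_SC_INTERNAL;
                      goto out_put_ns;
              }

              /* ... fill *id from ns ... */

              status = nvmet_copy_to_sgl(req, 0, id, sizeof(*id));
              kfree(id);
      out_put_ns:
              nvmet_put_namespace(ns);
      out:
              nvmet_req_complete(req, status);
      }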
    • nvme-pci: allow use of cmb on v1.4 controllers · 20d3bb92
      Committed by Klaus Jensen
      Since NVMe v1.4, the Controller Memory Buffer must be explicitly enabled
      by the host (the relevant register bits are sketched after this entry).
      Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
      [hch: avoid a local variable and add a comment]
      Signed-off-by: Christoph Hellwig <hch@lst.de>
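      For context, a minimal sketch of what "explicitly enabled" means here,
      assuming the NVMe 1.4 register layout (CMBMSC at offset 0x50, CRE in
      bit 0, CMSE in bit 1); the constant and function names are illustrative,
      not necessarily the driver's:

      #define NVME_REG_CMBMSC        0x50            /* CMB Memory Space Control */
      #define NVME_CMBMSC_CRE        (1 << 0)        /* Capabilities Registers Enabled */
      #define NVME_CMBMSC_CMSE       (1 << 1)        /* Controller Memory Space Enable */

      /* On a v1.4 controller, CMBLOC/CMBSZ read as zero until the host sets
       * CRE, so this must happen before the driver tries to size and map
       * the CMB. */
      static void nvme_cmb_enable_regs(void __iomem *bar)
      {
              writel(NVME_CMBMSC_CRE, bar + NVME_REG_CMBMSC);
      }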
    • nvme-tcp: avoid request double completion for concurrent nvme_tcp_timeout · 9ebbfe49
      Committed by Chao Leng
      Each namespace has its own request queue. When requests take a long time
      to complete, several request queues may hold timed-out requests at the
      same time, so nvme_tcp_timeout can run concurrently. Requests from
      different request queues may be mapped onto the same TCP queue, so
      multiple nvme_tcp_timeout instances may call nvme_tcp_stop_queue at the
      same time. The first caller clears NVME_TCP_Q_LIVE and goes on to stop
      the TCP queue (cancel io_work), but the others see that NVME_TCP_Q_LIVE
      is already cleared and complete their requests directly. Completing a
      request before io_work is fully cancelled can lead to a use-after-free.
      Add a mutex to serialize nvme_tcp_stop_queue (the pattern is sketched
      after this entry).
      Signed-off-by: Chao Leng <lengchao@huawei.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
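      A minimal sketch of the serialization pattern, assuming a queue_lock
      mutex embedded in the per-queue structure (field and helper names are
      illustrative, not the exact upstream code):

      #include <linux/bitops.h>
      #include <linux/mutex.h>

      #define NVME_TCP_Q_LIVE        1       /* bit number in queue->flags */

      struct nvme_tcp_queue {
              unsigned long   flags;          /* NVME_TCP_Q_LIVE lives here */
              struct mutex    queue_lock;     /* serializes concurrent stoppers */
              /* ... */
      };

      static void __nvme_tcp_stop_queue(struct nvme_tcp_queue *queue)
      {
              /* shut down the socket and cancel io_work; no request on this
               * queue may be completed until this has finished */
      }

      static void nvme_tcp_stop_queue(struct nvme_tcp_queue *queue)
      {
              mutex_lock(&queue->queue_lock);
              if (test_and_clear_bit(NVME_TCP_Q_LIVE, &queue->flags))
                      __nvme_tcp_stop_queue(queue);
              mutex_unlock(&queue->queue_lock);
              /* every caller returns only after io_work is fully cancelled,
               * so completing a timed-out request afterwards is safe */
      }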
    • nvme-rdma: avoid request double completion for concurrent nvme_rdma_timeout · 7674073b
      Committed by Chao Leng
      A crash happens when request completion is delayed (by fault injection)
      for a long time, nearly 30 seconds. Each namespace has its own request
      queue; with completions delayed that long, several request queues may
      hold timed-out requests at the same time, so nvme_rdma_timeout can run
      concurrently. Requests from different request queues may be mapped onto
      the same RDMA queue, so multiple nvme_rdma_timeout instances may call
      nvme_rdma_stop_queue at the same time. The first caller clears
      NVME_RDMA_Q_LIVE and goes on to stop the RDMA queue (drain the QP), but
      the others see that NVME_RDMA_Q_LIVE is already cleared and complete
      their requests directly. Completing a request before the QP is fully
      drained can lead to a use-after-free.
      
      Add a mutex to serialize nvme_rdma_stop_queue (the timeout-handler side
      is sketched after this entry).
      Signed-off-by: Chao Leng <lengchao@huawei.com>
      Tested-by: Israel Rukshin <israelr@nvidia.com>
      Reviewed-by: Israel Rukshin <israelr@nvidia.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
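      The timeout-handler side of the same race, sketched under the assumption
      that nvme_rdma_stop_queue() is serialized as described above;
      nvme_rdma_req_queue() is a hypothetical helper standing in for the
      request-to-queue lookup:

      /* Sketch: with stop_queue serialized, it returns only after the QP is
       * drained, so any completion produced by the drain is already visible
       * and blk_mq_request_completed() prevents completing the request a
       * second time. */
      static void nvme_rdma_complete_timed_out(struct request *rq)
      {
              struct nvme_rdma_queue *queue = nvme_rdma_req_queue(rq);  /* hypothetical helper */

              nvme_rdma_stop_queue(queue);            /* QP fully drained on return */
              if (!blk_mq_request_completed(rq)) {
                      nvme_req(rq)->status = NVME_SC_HOST_ABORTED_CMD;
                      blk_mq_complete_request(rq);
              }
      }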
    • nvme: check the PRINFO bit before deciding the host buffer length · 4d6b1c95
      Committed by Revanth Rajashekar
      According to NVMe spec v1.4, section 8.3.1, the PRINFO bit and the
      metadata size play a vital role in determining the host buffer size.
      
      If the PRINFO bit is set and MS == 8, the host doesn't add the metadata
      buffer; the controller adds it instead (the sizing rule is sketched
      after this entry).
      Signed-off-by: Revanth Rajashekar <revanth.rajashekar@intel.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
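      A small sketch of the sizing rule the fix implements; the function name
      and the pract flag are illustrative rather than the driver's exact code:

      #include <stdbool.h>
      #include <stdint.h>

      /* Per NVMe 1.4, section 8.3.1: when the controller generates/strips the
       * protection information (PRACT set in PRINFO) and the metadata is
       * exactly the 8-byte PI, the host buffer carries data only; otherwise
       * the metadata travels inline with the data. */
      static uint64_t host_buffer_len(uint64_t nlb, uint32_t lba_size,
                                      uint16_t ms, bool pract)
      {
              if (pract && ms == 8)
                      return nlb * lba_size;                  /* controller adds/strips PI */
              return nlb * (uint64_t)(lba_size + ms);         /* host supplies metadata inline */
      }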
  6. 15 Jan 2021, 4 commits
  7. 06 Jan 2021, 8 commits
  8. 08 Dec 2020, 1 commit
  9. 05 Dec 2020, 1 commit
  10. 02 Dec 2020, 7 commits