1. 28 July 2018, 3 commits
  2. 23 July 2018, 2 commits
    • nvmet-rdma: support max(16KB, PAGE_SIZE) inline data · 0d5ee2b2
      Committed by Steve Wise
      This patch enables inline data sizes using up to 4 recv SGEs, capping
      the size at 16KB or at least one page.  So on a 4K-page system up to
      16KB is supported, and on a 64K-page system one 64KB page is supported.
      
      We avoid order > 0 page allocations for the inline buffers by using
      multiple recv SGEs, one per page.  If the device cannot support the
      configured inline data size because it lacks enough recv SGEs, a
      warning is logged and the inline size is reduced (a userspace sketch
      of this sizing logic follows this entry).
      
      Add a new configfs port attribute, param_inline_data_size, to allow
      configuring the size of inline data for a given nvmf port.  The
      maximum allowed size is still enforced by nvmet-rdma via
      NVMET_RDMA_MAX_INLINE_DATA_SIZE, which is now max(16KB, PAGE_SIZE),
      and the default, if not specified via configfs, is still PAGE_SIZE.
      This preserves the existing behavior but allows larger inline sizes
      on small-page systems.  If the configured inline data size exceeds
      NVMET_RDMA_MAX_INLINE_DATA_SIZE, a warning is logged and the size is
      reduced.  Setting param_inline_data_size to 0 disables inline data
      for that nvmf port.
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
      Signed-off-by: Steve Wise <swise@opengridcomputing.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      0d5ee2b2
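      Below is a minimal userspace sketch of the sizing logic described
      above: cap the requested inline size at max(16KB, PAGE_SIZE), split
      it into one-page recv SGEs, and reduce the size if the device cannot
      supply enough SGEs.  The names, the 4K page size and the SGE limit
      are assumptions for illustration, not the actual nvmet-rdma code.

        /* illustrative only -- models the capping/splitting idea */
        #include <stdio.h>

        #define PAGE_SIZE       4096u   /* assume a 4K-page system */
        #define MAX_INLINE_DATA (16384u > PAGE_SIZE ? 16384u : PAGE_SIZE)

        static unsigned int cap_inline_size(unsigned int requested,
                                            unsigned int device_max_sge)
        {
                unsigned int size = requested;
                unsigned int sges;

                if (size > MAX_INLINE_DATA) {
                        fprintf(stderr, "warning: capping %u -> %u\n",
                                size, MAX_INLINE_DATA);
                        size = MAX_INLINE_DATA;
                }

                /* one recv SGE per page avoids order > 0 allocations */
                sges = (size + PAGE_SIZE - 1) / PAGE_SIZE;
                if (sges > device_max_sge) {
                        size = device_max_sge * PAGE_SIZE;
                        fprintf(stderr, "warning: only %u SGEs, "
                                "reducing inline size to %u\n",
                                device_max_sge, size);
                }
                return size;
        }

        int main(void)
        {
                printf("%u\n", cap_inline_size(16384u, 2)); /* -> 8192  */
                printf("%u\n", cap_inline_size(65536u, 4)); /* -> 16384 */
                return 0;
        }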
    • nvmet: add buffered I/O support for file backed ns · 55eb942e
      Committed by Chaitanya Kulkarni
      Add a new "buffered_io" attribute, which disabled direct I/O and thus
      enables page cache based caching when enabled.   The attribute can only
      be changed when the namespace is disabled as the file has to be reopend
      for the change to take effect.
      
      The possibly blocking read/write are deferred to a newly introduced
      global workqueue.
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      55eb942e
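      A minimal userspace sketch of the buffered_io idea follows: the
      toggle selects between an O_DIRECT open and a buffered (page cache)
      open, and it may only be flipped while the namespace is disabled
      because the file has to be reopened.  The struct and helper names
      are hypothetical, not the nvmet file backend.

        #define _GNU_SOURCE             /* for O_DIRECT */
        #include <fcntl.h>
        #include <stdbool.h>
        #include <stdio.h>
        #include <unistd.h>

        struct example_ns {
                const char *path;
                bool enabled;           /* attribute is locked while true */
                bool buffered_io;       /* true: page cache, false: O_DIRECT */
                int fd;
        };

        static int example_set_buffered_io(struct example_ns *ns, bool val)
        {
                if (ns->enabled)
                        return -1;      /* needs a reopen, reject while enabled */
                ns->buffered_io = val;
                return 0;
        }

        static int example_ns_open(struct example_ns *ns)
        {
                int flags = O_RDWR | O_CREAT;

                if (!ns->buffered_io)
                        flags |= O_DIRECT;      /* bypass the page cache */
                ns->fd = open(ns->path, flags, 0644);
                return ns->fd < 0 ? -1 : 0;
        }

        int main(void)
        {
                struct example_ns ns = { .path = "/tmp/ns1.img" };

                example_set_buffered_io(&ns, true);
                if (example_ns_open(&ns) == 0) {
                        printf("opened %s %s\n", ns.path, ns.buffered_io ?
                               "buffered" : "with O_DIRECT");
                        close(ns.fd);
                }
                return 0;
        }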
  3. 20 June 2018, 1 commit
  4. 01 June 2018, 4 commits
  5. 25 May 2018, 3 commits
  6. 26 March 2018, 1 commit
  7. 01 March 2018, 1 commit
  8. 16 January 2018, 1 commit
  9. 08 January 2018, 2 commits
  10. 11 November 2017, 1 commit
  11. 20 October 2017, 1 commit
  12. 19 October 2017, 1 commit
  13. 26 September 2017, 1 commit
  14. 25 September 2017, 1 commit
  15. 29 August 2017, 2 commits
  16. 20 July 2017, 1 commit
  17. 15 June 2017, 1 commit
    • nvmet: implement namespace identify descriptor list · 637dc0f3
      Committed by Johannes Thumshirn
      An NVMe Identify Namespace command with a CNS value of 3 expects a
      list of Namespace Identification Descriptor structures to be returned
      to the host for the namespace requested in the identify command.
      
      Each Namespace Identification Descriptor consists of the type of the
      namespace identifier, the length of the identifier, and the
      identifier itself.
      
      Valid types are NGUID and UUID, which we have saved in our nvme_ns
      structure if they have been configured via configfs.  If no value has
      been assigned to either of these, we return "invalid opcode" back to
      the host to maintain backward compatibility with older implementations
      without Namespace Identification Descriptor list support (a sketch of
      the descriptor layout follows this entry).
      
      Also, as the Namespace Identification Descriptor list is the only
      mandatory feature change between NVMe 1.2.1 and 1.3, we can bump the
      advertised version as well.
      Signed-off-by: Johannes Thumshirn <jthumshirn@suse.de>
      Reviewed-by: Hannes Reinecke <hare@suse.com>
      Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      637dc0f3
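      A small userspace sketch of the list returned for CNS=3: each
      descriptor carries the identifier type (NIDT), its length (NIDL) and
      the identifier itself, and the zero-filled remainder of the 4KB
      buffer terminates the list.  The helper names are invented for the
      example; only the field layout follows the NVMe spec.

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        #define NIDT_NGUID 0x02         /* 16-byte globally unique identifier */
        #define NIDT_UUID  0x03         /* 16-byte UUID */

        /* 4-byte header (type, length, 2 reserved bytes), then the ID */
        static size_t append_desc(uint8_t *buf, size_t off, uint8_t type,
                                  const uint8_t *id, uint8_t len)
        {
                buf[off + 0] = type;    /* NIDT */
                buf[off + 1] = len;     /* NIDL */
                memcpy(buf + off + 4, id, len);
                return off + 4 + len;
        }

        int main(void)
        {
                uint8_t list[4096] = { 0 };     /* zeroes terminate the list */
                uint8_t nguid[16] = { 0xde, 0xad, 0xbe, 0xef }; /* example */
                uint8_t uuid[16]  = { 0x12, 0x34, 0x56, 0x78 }; /* example */
                size_t off = 0;

                off = append_desc(list, off, NIDT_NGUID, nguid, sizeof(nguid));
                off = append_desc(list, off, NIDT_UUID, uuid, sizeof(uuid));
                printf("descriptor list uses %zu of %zu bytes\n",
                       off, sizeof(list));
                return 0;
        }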
  18. 21 May 2017, 1 commit
  19. 04 April 2017, 3 commits
  20. 17 March 2017, 1 commit
  21. 02 March 2017, 1 commit
  22. 23 February 2017, 1 commit
  23. 26 January 2017, 3 commits
  24. 06 December 2016, 1 commit
    • nvmet: Fix possible infinite loop triggered on hot namespace removal · e4fcf07c
      Committed by Solganik Alexander
      When removing a namespace we delete it from the subsystem namespaces
      list with list_del_init, which lets us know whether it is enabled or
      not.
      
      The problem is that list_del_init reinitializes the entry's next
      pointer and so does not respect the RCU list traversal we do on the
      I/O path to locate a namespace.  Instead we need to use list_del_rcu,
      which may run concurrently with the _rcu list-traversal primitives
      (it keeps the next pointer intact) and guarantees forward progress
      for concurrent nvmet_find_namespace callers.
      
      With that change we can no longer rely on ns->dev_link to know
      whether the namespace is enabled, so add an enabled indicator to
      nvmet_ns for that purpose (a sketch of the idea follows this entry).
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Solganik Alexander <sashas@lightbitslabs.com>
      Cc: <stable@vger.kernel.org> # v4.8+
      e4fcf07c
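      A minimal userspace sketch of the idea behind the fix: an RCU-style
      removal leaves the removed entry's next pointer intact, so a reader
      that is standing on that entry can still walk to the rest of the
      list, and an explicit enabled flag replaces the old "is it still
      linked?" test.  This only models the concept; it is not the kernel's
      list.h or the nvmet code.

        #include <stdbool.h>
        #include <stdio.h>

        struct node {
                int nsid;
                bool enabled;           /* replaces the ns->dev_link check */
                struct node *next;
        };

        /* RCU-style removal: unlink the entry but keep its next intact */
        static void remove_rcu_style(struct node **head, struct node *victim)
        {
                struct node **pp = head;

                while (*pp && *pp != victim)
                        pp = &(*pp)->next;
                if (*pp)
                        *pp = victim->next;     /* next still points onward */
                victim->enabled = false;        /* readers check this flag */
        }

        int main(void)
        {
                struct node c = { .nsid = 3, .enabled = true };
                struct node b = { .nsid = 2, .enabled = true, .next = &c };
                struct node a = { .nsid = 1, .enabled = true, .next = &b };
                struct node *head = &a;

                remove_rcu_style(&head, &b);

                /* a reader already standing on 'b' still reaches 'c' */
                for (struct node *n = &b; n; n = n->next)
                        if (n->enabled)
                                printf("visible namespace %d\n", n->nsid);
                return 0;
        }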
  25. 14 November 2016, 1 commit
  26. 11 November 2016, 1 commit
    • nvme: introduce struct nvme_request · d49187e9
      Committed by Christoph Hellwig
      This adds a shared per-request structure for all NVMe I/O.  The
      structure is embedded as the first member of every NVMe transport
      driver's request private data and allows common functionality to be
      implemented across the drivers (a layout sketch follows this entry).
      
      The first use is to replace the current abuse of the SCSI command
      passthrough fields in struct request for NVMe command passthrough,
      but it will grow more fields to allow implementing things like
      common abort handlers in the future.
      
      The passthrough commands are handled by having a pointer to the SQE
      (struct nvme_command) in struct nvme_request, and the union of the
      possible result fields, which had to be turned from an anonymous
      into a named union for that purpose.  This avoids having to pass
      a reference to a full CQE around and thus makes checking the result
      a lot more lightweight.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      d49187e9
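      A minimal userspace sketch of the layout described above: a shared
      request structure is embedded as the first member of a transport's
      private request data and holds a pointer to the SQE plus a named
      result union, so common code can operate on any transport's request.
      The type and field names are simplified assumptions, not the real
      struct nvme_request.

        #include <stdint.h>
        #include <stdio.h>

        struct example_nvme_command {
                uint8_t opcode;
                uint32_t nsid;
        };

        struct example_nvme_request {
                struct example_nvme_command *cmd;       /* pointer to the SQE */
                union {                                 /* named result union */
                        uint16_t u16;
                        uint32_t u32;
                        uint64_t u64;
                } result;
                uint16_t status;
        };

        /* a transport embeds the shared struct as its *first* member */
        struct example_rdma_request {
                struct example_nvme_request req;        /* must stay first */
                int queue_id;                           /* transport-specific */
        };

        /* common code only touches the shared part */
        static void complete_request(struct example_nvme_request *req,
                                     uint16_t status, uint64_t result)
        {
                req->status = status;
                req->result.u64 = result;
        }

        int main(void)
        {
                struct example_nvme_command cmd = { .opcode = 0x06, .nsid = 1 };
                struct example_rdma_request rdma = {
                        .req = { .cmd = &cmd }, .queue_id = 2,
                };

                /* valid because the shared struct is the first member */
                complete_request((struct example_nvme_request *)&rdma, 0, 0x1234);
                printf("status=%u result=0x%llx\n", rdma.req.status,
                       (unsigned long long)rdma.req.result.u64);
                return 0;
        }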