1. 17 Jun, 2021 3 commits
  2. 03 Jun, 2021 1 commit
  3. 12 May, 2021 1 commit
    • nvmet: fix inline bio check for bdev-ns · 608a9690
      Committed by Chaitanya Kulkarni
      When handling rw commands, for the inline bio case we only consider the
      transfer size.  This works well when req->sg_cnt fits into
      req->inline_bvec, but it results in a warning in __bio_add_page() when
      req->sg_cnt > NVMET_MAX_INLINE_BIOVEC.
      
      Consider an I/O of size 32768 whose first page is not aligned to the
      page boundary; the I/O is then split in the following manner :-
      
      [ 2206.256140] nvmet: sg->length 3440 sg->offset 656
      [ 2206.256144] nvmet: sg->length 4096 sg->offset 0
      [ 2206.256148] nvmet: sg->length 4096 sg->offset 0
      [ 2206.256152] nvmet: sg->length 4096 sg->offset 0
      [ 2206.256155] nvmet: sg->length 4096 sg->offset 0
      [ 2206.256159] nvmet: sg->length 4096 sg->offset 0
      [ 2206.256163] nvmet: sg->length 4096 sg->offset 0
      [ 2206.256166] nvmet: sg->length 4096 sg->offset 0
      [ 2206.256170] nvmet: sg->length 656 sg->offset 0
      
      Now req->transfer_len == NVMET_MAX_INLINE_DATA_LEN (i.e. 32768), but
      req->sg_cnt is 9, which is greater than NVMET_MAX_INLINE_BIOVEC (8).
      This results in the following warning :-
      
      nvmet_bdev_execute_rw()
      	bio_add_page()
      		__bio_add_page()
      			WARN_ON_ONCE(bio_full(bio, len));
      
      This scenario is very hard to reproduce: on the nvme-loop transport it
      only occurs with rw commands issued through the passthru IOCTL interface
      from a host application whose data buffer is allocated with malloc()
      rather than posix_memalign().
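
      A minimal sketch of the kind of check that avoids this, assuming a
      helper along the following lines (the helper name and its exact
      placement are illustrative, not quoted from the patch):

      	static inline bool nvmet_use_inline_bvec(struct nvmet_req *req)
      	{
      		/* use the inline bvec only when BOTH the transfer length
      		 * and the SG entry count fit within the inline limits
      		 */
      		return req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN &&
      		       req->sg_cnt <= NVMET_MAX_INLINE_BIOVEC;
      	}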
      
      Fixes: 73383adf ("nvmet: don't split large I/Os unconditionally")
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
  4. 27 Feb, 2021 1 commit
  5. 10 Feb, 2021 1 commit
  6. 02 Feb, 2021 1 commit
  7. 28 Jan, 2021 1 commit
  8. 24 Aug, 2020 1 commit
  9. 27 May, 2020 4 commits
  10. 22 May, 2020 1 commit
  11. 10 May, 2020 1 commit
  12. 04 Feb, 2020 1 commit
  13. 05 Nov, 2019 4 commits
  14. 24 Sep, 2019 1 commit
  15. 10 Jul, 2019 1 commit
  16. 05 Jun, 2019 1 commit
    • nvmet: fix data_len to 0 for bdev-backed write_zeroes · 3562f5d9
      Committed by Minwoo Im
      The WRITE ZEROES command has no data transfer, so we need to initialize
      req->data_len (struct nvmet_req) to 0.  While req->transfer_len is
      initialized in nvmet_req_init(), data_len is never initialized, which
      can randomly cause the command to fail with status
      NVME_SC_SGL_INVALID_DATA | NVME_SC_DNR, because nvmet_req_execute()
      checks:
      
      	if (unlikely(req->data_len != req->transfer_len)) {
      		req->error_loc = offsetof(struct nvme_common_command, dptr);
      		nvmet_req_complete(req, NVME_SC_SGL_INVALID_DATA | NVME_SC_DNR);
      	} else
      		req->execute(req);
      
      This patch prevents req->data_len from being left with a random value by
      initializing it to 0 when the command is prepared in
      nvmet_bdev_parse_io_cmd().

      Note that nvmet_file_parse_io_cmd(), which handles file-backed I/O,
      already initializes data_len to 0.
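
      The change amounts to a single assignment in the write-zeroes branch of
      nvmet_bdev_parse_io_cmd(); roughly (a sketch, not the verbatim diff):

      	case nvme_cmd_write_zeroes:
      		req->execute = nvmet_bdev_execute_write_zeroes;
      		req->data_len = 0;	/* WRITE ZEROES transfers no data */
      		return 0;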
      
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Sagi Grimberg <sagi@grimberg.me>
      Cc: Chaitanya Kulkarni <Chaitanya.Kulkarni@wdc.com>
      Signed-off-by: Minwoo Im <minwoo.im.dev@gmail.com>
      Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
  17. 05 Apr, 2019 1 commit
  18. 14 Mar, 2019 1 commit
  19. 20 Feb, 2019 1 commit
  20. 13 Dec, 2018 2 commits
  21. 26 Nov, 2018 1 commit
  22. 18 Oct, 2018 1 commit
    • nvmet: Optionally use PCI P2P memory · c6925093
      Committed by Logan Gunthorpe
      Create a configfs attribute in each nvme-fabrics namespace to enable P2P
      memory use.  The attribute may be enabled (with a boolean) or a specific
      P2P device may be given (with the device's PCI name).
      
      When enabled, the namespace will ensure the underlying block device
      supports P2P and is compatible with any specified P2P device.  If no device
      was specified it will ensure there is compatible P2P memory somewhere in
      the system.  Enabling a namespace with P2P memory will fail with EINVAL
      (and an appropriate dmesg error) if any of these conditions are not met.
      
      Once a controller is set up on a specific port, the P2P device to use for
      each namespace will be found and stored in a radix tree by namespace ID.
      When memory is allocated for a request, the tree is used to look up the P2P
      device to allocate memory against.  If no device is in the tree (because no
      appropriate device was found), or if allocation of P2P memory fails, we
      fall back to using regular memory.
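
      A sketch of the allocation-time lookup and fallback described above
      (illustrative only; the surrounding nvmet structures are simplified and
      the function name is hypothetical):

      	static int nvmet_req_alloc_sgl_sketch(struct nvmet_req *req)
      	{
      		struct pci_dev *p2p_dev = NULL;

      		if (req->sq->ctrl && req->ns)
      			/* P2P device chosen per namespace at controller setup */
      			p2p_dev = radix_tree_lookup(&req->sq->ctrl->p2p_ns_map,
      						    req->ns->nsid);
      		if (p2p_dev) {
      			req->sg = pci_p2pmem_alloc_sgl(p2p_dev, &req->sg_cnt,
      						       req->transfer_len);
      			if (req->sg)
      				return 0;	/* got P2P memory */
      		}

      		/* no suitable device, or P2P allocation failed: regular memory */
      		req->sg = sgl_alloc(req->transfer_len, GFP_KERNEL, &req->sg_cnt);
      		return req->sg ? 0 : -ENOMEM;
      	}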
      Signed-off-by: Stephen Bates <sbates@raithlin.com>
      Signed-off-by: Steve Wise <swise@opengridcomputing.com>
      [hch: partial rewrite of the initial code]
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
  23. 02 Oct, 2018 1 commit
  24. 08 Aug, 2018 1 commit
    • nvmet: add ns write protect support · dedf0be5
      Committed by Chaitanya Kulkarni
      This patch implements the Namespace Write Protect feature described in
      "NVMe TP 4005a Namespace Write Protect".  In this version we implement the
      No Write Protect and Write Protect states for the target ns, which can be
      toggled by Set Features commands from the host side.

      For the write-protect state transition we need to flush the ns specified
      as part of the command, so we also add helpers for carrying out
      synchronous flush operations.
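
      A sketch of the state toggle performed when the host sends the
      write-protect Set Features command (constant names are from
      include/linux/nvme.h; the surrounding handler and the flush-helper name
      are assumptions, not the verbatim patch):

      	/* write_protect: value taken from Set Features CDW11 */
      	switch (write_protect) {
      	case NVME_NS_NO_WRITE_PROTECT:
      		req->ns->readonly = false;
      		status = 0;
      		break;
      	case NVME_NS_WRITE_PROTECT:
      		req->ns->readonly = true;
      		/* hypothetical helper wrapping the synchronous flush
      		 * that this patch adds */
      		status = nvmet_write_protect_flush_sync(req);
      		break;
      	default:
      		status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
      		break;
      	}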
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      [hch: fixed an incorrect endianness conversion, minor cleanups]
      Signed-off-by: Christoph Hellwig <hch@lst.de>
  25. 25 May, 2018 2 commits
  26. 12 Apr, 2018 1 commit
  27. 14 Feb, 2018 1 commit
  28. 11 Nov, 2017 1 commit
  29. 04 Nov, 2017 1 commit
  30. 24 Aug, 2017 1 commit
    • block: replace bi_bdev with a gendisk pointer and partitions index · 74d46992
      Committed by Christoph Hellwig
      This way we don't need a block_device structure to submit I/O.  The
      block_device has different lifetime rules from the gendisk and
      request_queue and is usually only available when the block device node
      is open.  Other callers need to explicitly create one (e.g. the lightnvm
      passthrough code, or the new nvme multipathing code).
      
      For the actual I/O path all that we need is the gendisk, which exists
      once per block device.  But given that the block layer also does
      partition remapping we additionally need a partition index, which is
      used for said remapping in generic_make_request.
      
      Note that all the block drivers generally want request_queue or
      sometimes the gendisk, so this removes a layer of indirection all
      over the stack.
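
      The shape of the change, sketched (simplified, not the full diff): bio
      carries a gendisk pointer and a partition index, and callers typically
      assign them through a helper along these lines:

      	struct bio {
      		struct gendisk	*bi_disk;	/* replaces bi_bdev */
      		u8		bi_partno;	/* partition index, used for
      						 * remapping in generic_make_request */
      		/* ... rest of struct bio unchanged ... */
      	};

      	#define bio_set_dev(bio, bdev)				\
      	do {							\
      		(bio)->bi_disk = (bdev)->bd_disk;		\
      		(bio)->bi_partno = (bdev)->bd_partno;		\
      	} while (0)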
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>