1. 23 May 2017, 1 commit
  2. 02 May 2017, 1 commit
  3. 21 Apr 2017, 1 commit
    •
      nvme: split nvme status from block req->errors · 27fa9bc5
      Authored by Christoph Hellwig
      We want our own clearly defined error field for NVMe passthrough commands,
      and the request errors field is going away in its current form.
      
      Just store the status and result field in the nvme_request field from
      hardirq completion context (using a new helper) and then generate a
      Linux errno for the block layer only when we actually need it.
      
      Because we can't overload the status value with a negative error code
      for cancelled commands, we now have a flags field in struct
      nvme_request that contains a bit for this condition.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      27fa9bc5
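      The scheme this commit describes can be sketched in a few lines of plain C. This is an illustrative userspace model, not the kernel code: the struct, helper names, and errno choices are assumptions made for the example. The raw NVMe status and result are stashed at completion time, a dedicated flags bit marks cancellation (since the status word can't carry a negative errno), and a Linux errno is derived only on demand.

      ```c
      #include <assert.h>
      #include <stdint.h>

      /* Hypothetical sketch of the commit's idea; names are illustrative. */
      #define NVME_REQ_CANCELLED (1u << 0)  /* status can't hold a negative errno */

      struct nvme_request_sketch {
          uint16_t status;   /* raw NVMe status code word */
          uint64_t result;   /* command-specific result */
          uint8_t  flags;
      };

      /* helper called from (hard-irq) completion context: just store, no translation */
      static void nvme_end_request_sketch(struct nvme_request_sketch *rq,
                                          uint16_t status, uint64_t result)
      {
          rq->status = status;
          rq->result = result;
      }

      /* generate a Linux errno for the block layer only when actually needed */
      static int nvme_req_to_errno(const struct nvme_request_sketch *rq)
      {
          if (rq->flags & NVME_REQ_CANCELLED)
              return -4;                     /* -EINTR, an assumed mapping */
          return rq->status ? -5 : 0;        /* -EIO for any NVMe error */
      }

      int main(void)
      {
          struct nvme_request_sketch rq = { 0 };
          nvme_end_request_sketch(&rq, 0, 0x1234);
          assert(nvme_req_to_errno(&rq) == 0);

          rq.flags |= NVME_REQ_CANCELLED;    /* cancelled: flag bit, not status */
          assert(nvme_req_to_errno(&rq) == -4);
          return 0;
      }
      ```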
  4. 10 Apr 2017, 1 commit
  5. 04 Apr 2017, 8 commits
  6. 31 Mar 2017, 1 commit
  7. 22 Mar 2017, 1 commit
  8. 28 Feb 2017, 1 commit
  9. 23 Feb 2017, 3 commits
  10. 01 Feb 2017, 2 commits
  11. 25 Jan 2017, 1 commit
  12. 14 Jan 2017, 1 commit
  13. 12 Jan 2017, 1 commit
  14. 15 Dec 2016, 1 commit
  15. 09 Dec 2016, 1 commit
    •
      block: improve handling of the magic discard payload · f9d03f96
      Authored by Christoph Hellwig
      Instead of allocating a single unused biovec for discard requests, send
      them down without any payload.  Instead we allow the driver to add a
      "special" payload using a biovec embedded into struct request (unioned
      over other fields never used while in the driver), and overloading
      the number of segments for this case.
      
      This has a couple of advantages:
      
       - we don't have to allocate the bio_vec
       - the amount of special casing for discard requests in the block
         layer is significantly reduced
       - using this same scheme for other request types is trivial,
         which will be important for implementing the new WRITE_ZEROES
         op on devices where it actually requires a payload (e.g. SCSI)
       - we can get rid of playing games with the request length, as
         we'll never touch it and completions will work just fine
       - it will allow us to support ranged discard operations in the
         future by merging non-contiguous discard bios into a single
         request
       - last but not least it removes a lot of code
      
      This patch is the common base for my WIP series for ranged discards and
      to remove discard_zeroes_data in favor of always using
      REQ_OP_WRITE_ZEROES, so it would be good to get it in quickly.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      f9d03f96
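      The "special payload" layout can be sketched as follows. This is a simplified userspace model under assumed names (the real struct request differs): a single bio_vec is embedded in the request itself, unioned over fields the driver never touches, a flag marks it, and the segment count is overloaded to 1 for this case, so no separate bio_vec allocation is needed.

      ```c
      #include <assert.h>
      #include <stdint.h>

      /* Illustrative sketch only; field and flag names are assumptions. */
      struct bio_vec_sketch {
          void    *bv_page;
          uint32_t bv_len;
          uint32_t bv_offset;
      };

      #define RQF_SPECIAL_PAYLOAD (1u << 0)

      struct request_sketch {
          unsigned int nr_phys_segments;  /* overloaded: 1 when special payload set */
          unsigned int rq_flags;
          union {
              struct {                    /* fields never used while in the driver */
                  void *completion_data;
                  void *elevator_private;
              };
              struct bio_vec_sketch special_vec;  /* the embedded payload */
          };
      };

      /* driver-side helper: attach a payload without allocating a bio_vec */
      static void rq_set_special_payload(struct request_sketch *rq,
                                         void *page, uint32_t len)
      {
          rq->special_vec.bv_page   = page;
          rq->special_vec.bv_len    = len;
          rq->special_vec.bv_offset = 0;
          rq->rq_flags |= RQF_SPECIAL_PAYLOAD;
          rq->nr_phys_segments = 1;       /* segment count overloaded for this case */
      }

      int main(void)
      {
          static char page[4096];
          struct request_sketch rq = { 0 };
          rq_set_special_payload(&rq, page, sizeof(page));
          assert(rq.rq_flags & RQF_SPECIAL_PAYLOAD);
          assert(rq.nr_phys_segments == 1);
          assert(rq.special_vec.bv_len == 4096);
          return 0;
      }
      ```

      The union is the key design point: the payload costs no extra memory because it reuses space that is dead while the request is in the driver.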
  16. 06 Dec 2016, 4 commits
  17. 16 Nov 2016, 1 commit
  18. 14 Nov 2016, 2 commits
    •
      nvme-rdma: stop and free io queues on connect failure · c8dbc37c
      Authored by Steve Wise
      While testing nvme-rdma with the spdk nvmf target over iw_cxgb4, I
      configured the target (mistakenly) to generate an error creating the
      NVMF IO queues.  This resulted in an "Invalid SQE Parameter" error sent back
      to the host on the first IO queue connect:
      
      [ 9610.928182] nvme nvme1: queue_size 128 > ctrl maxcmd 120, clamping down
      [ 9610.938745] nvme nvme1: creating 32 I/O queues.
      
      So nvmf_connect_io_queue() returns an error to
      nvmf_connect_io_queues(), and that is returned to
      nvme_rdma_create_io_queues().  In the error path,
      nvme_rdma_create_io_queues() frees the queue tagset memory _before_
      stopping and freeing the IB queues, which causes yet another
      touch-after-free crash due to SQ CQEs being flushed after the ib_cqe
      structs pointed-to by the flushed WRs have been freed (since they are
      part of the nvme_rdma_request struct).
      
      The fix is to stop and free the queues in nvmf_connect_io_queues()
      if there is an error connecting any of the queues.
      Signed-off-by: Steve Wise <swise@opengridcomputing.com>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      c8dbc37c
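      The ordering constraint behind this fix can be modeled abstractly. This is a hypothetical sketch, not the driver code: the tag set owns per-request memory (the ib_cqe structs inside each nvme_rdma_request) that flushed completions still reference, so it may only be freed after the queues have been stopped and freed.

      ```c
      #include <assert.h>

      /* Hypothetical model of the error-path ordering; names are illustrative. */
      struct ctrl_sketch {
          int queues_freed;                /* 1 once no queue can flush CQEs */
          int tagset_freed_after_queues;   /* records whether the free was safe */
      };

      static void stop_and_free_queues(struct ctrl_sketch *c)
      {
          c->queues_freed = 1;             /* after this, no flushed WR completions */
      }

      static void free_tagset(struct ctrl_sketch *c)
      {
          /* safe only once queues are gone: completions point into this memory */
          c->tagset_freed_after_queues = c->queues_freed;
      }

      /* the fixed error path: queues first, then the memory they point into */
      static void connect_io_queues_error_path(struct ctrl_sketch *c)
      {
          stop_and_free_queues(c);
          free_tagset(c);
      }

      int main(void)
      {
          struct ctrl_sketch c = { 0 };
          connect_io_queues_error_path(&c);
          assert(c.tagset_freed_after_queues == 1);  /* no touch-after-free */
          return 0;
      }
      ```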
    •
      nvme-rdma: reject non-connect commands before the queue is live · 553cd9ef
      Authored by Christoph Hellwig
      If we reconnect we might have commands queued up that get resent as
      soon as the queue is restarted.  But until the connect command has
      succeeded we can't send other commands.  Add a new flag that marks a
      queue as live when connect finishes, and delay any non-connect
      command until the queue is live based on it.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reported-by: Steve Wise <swise@opengridcomputing.com>
      Tested-by: Steve Wise <swise@opengridcomputing.com>
      [sagig: fixes admin queue LIVE setting]
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      553cd9ef
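      The gating this commit adds can be sketched as a small state machine. This is an assumed model, not the driver's queue_rq path: a per-queue live flag starts false after (re)connect, only the fabrics Connect command may pass while it is false, everything else is asked to be requeued, and the flag flips once connect completes.

      ```c
      #include <assert.h>
      #include <stdbool.h>

      /* Illustrative sketch; enum and struct names are assumptions. */
      enum cmd_kind { CMD_CONNECT, CMD_IO };
      enum queue_rc { QUEUE_OK, QUEUE_REQUEUE };

      struct rdma_queue_sketch {
          bool live;   /* set when the connect command finishes */
      };

      static enum queue_rc queue_rq_sketch(struct rdma_queue_sketch *q,
                                           enum cmd_kind kind)
      {
          if (!q->live && kind != CMD_CONNECT)
              return QUEUE_REQUEUE;        /* delay until the queue is live */
          if (kind == CMD_CONNECT)
              q->live = true;              /* connect succeeded (simplified) */
          return QUEUE_OK;
      }

      int main(void)
      {
          struct rdma_queue_sketch q = { .live = false };
          assert(queue_rq_sketch(&q, CMD_IO) == QUEUE_REQUEUE);  /* not live yet */
          assert(queue_rq_sketch(&q, CMD_CONNECT) == QUEUE_OK);  /* connect passes */
          assert(queue_rq_sketch(&q, CMD_IO) == QUEUE_OK);       /* now live */
          return 0;
      }
      ```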
  19. 11 Nov 2016, 2 commits
    •
      nvme: don't pass the full CQE to nvme_complete_async_event · 7bf58533
      Authored by Christoph Hellwig
      We only need the status and result fields, and passing them explicitly
      makes life a lot easier for the Fibre Channel transport which doesn't
      have a full CQE for the fast path case.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      7bf58533
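      The interface change amounts to narrowing a parameter. A hedged sketch, with assumed names and types: the handler takes just the status and result words instead of a pointer to a full CQE, so a transport that never materializes a complete CQE on its fast path (as the commit notes for Fibre Channel) can still call it.

      ```c
      #include <assert.h>
      #include <stdint.h>

      /* Illustrative sketch; the function name and behavior are assumptions. */
      static uint32_t last_aen_result;

      /* takes only what it needs: no full CQE required */
      static void complete_async_event_sketch(uint16_t status, uint32_t result)
      {
          if (status != 0)
              return;                      /* ignore failed AEN completions */
          last_aen_result = result;
      }

      int main(void)
      {
          complete_async_event_sketch(0, 0x010102);  /* status + result only */
          assert(last_aen_result == 0x010102);
          complete_async_event_sketch(1, 0xdead);    /* error status: ignored */
          assert(last_aen_result == 0x010102);
          return 0;
      }
      ```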
    •
      nvme: introduce struct nvme_request · d49187e9
      Authored by Christoph Hellwig
      This adds a shared per-request structure for all NVMe I/O.  This
      structure is embedded as the first member in all NVMe transport
      drivers' request private data and allows implementing common
      functionality between the drivers.
      
      The first use is to replace the current abuse of the SCSI command
      passthrough fields in struct request for the NVMe command passthrough,
      but it will grow more fields to allow implementing things
      like common abort handlers in the future.
      
      The passthrough commands are handled by having a pointer to the SQE
      (struct nvme_command) in struct nvme_request, and the union of the
      possible result fields, which had to be turned from an anonymous
      into a named union for that purpose.  This avoids having to pass
      a reference to a full CQE around and thus makes checking the result
      a lot more lightweight.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      d49187e9
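      The "first member" layout rule can be demonstrated concretely. A sketch under assumed names (the real structs carry more fields): because the shared struct sits at offset 0 of every transport's per-request private data, common code can treat the private-data pointer as a pointer to the shared struct, which is what makes helpers shared across drivers possible.

      ```c
      #include <assert.h>
      #include <stddef.h>
      #include <stdint.h>

      /* Illustrative sketch; names and fields are simplified assumptions. */
      struct nvme_request_sketch {
          const void *cmd;       /* pointer to the SQE (struct nvme_command) */
          uint64_t    result;    /* the (named) result union in the real code */
      };

      struct rdma_request_sketch {           /* one transport's private data */
          struct nvme_request_sketch req;    /* must remain the first member */
          int transport_specific_state;
      };

      /* common helper, valid for every transport because of the layout rule */
      static struct nvme_request_sketch *nvme_req_sketch(void *pdu)
      {
          return (struct nvme_request_sketch *)pdu;
      }

      int main(void)
      {
          struct rdma_request_sketch rdma_rq = { .req = { .cmd = 0, .result = 7 } };
          /* offset 0 is what makes the cast in nvme_req_sketch() legal */
          assert(offsetof(struct rdma_request_sketch, req) == 0);
          assert(nvme_req_sketch(&rdma_rq)->result == 7);
          return 0;
      }
      ```

      Keeping a pointer to the SQE plus the small result union in this struct is what lets callers check a completion without carrying a whole CQE around.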
  20. 24 Sep 2016, 2 commits
  21. 23 Sep 2016, 1 commit
  22. 15 Sep 2016, 1 commit
  23. 13 Sep 2016, 2 commits