1. 04 Apr, 2017 (1 commit)
  2. 31 Mar, 2017 (1 commit)
  3. 28 Feb, 2017 (1 commit)
  4. 23 Feb, 2017 (3 commits)
  5. 01 Feb, 2017 (2 commits)
  6. 25 Jan, 2017 (1 commit)
  7. 14 Jan, 2017 (1 commit)
  8. 12 Jan, 2017 (1 commit)
  9. 15 Dec, 2016 (1 commit)
  10. 09 Dec, 2016 (1 commit)
    • block: improve handling of the magic discard payload · f9d03f96
      Committed by Christoph Hellwig
      Instead of allocating a single unused biovec for discard requests, send
      them down without any payload.  The driver can instead add a "special"
      payload using a biovec embedded into struct request (unioned over other
      fields never used while the request is in the driver), with the number
      of segments overloaded for this case.
      
      This has several advantages:
      
       - we don't have to allocate the bio_vec
       - the amount of special casing for discard requests in the block
         layer is significantly reduced
       - using this same scheme for other request types is trivial,
         which will be important for implementing the new WRITE_ZEROES
         op on devices where it actually requires a payload (e.g. SCSI)
       - we can get rid of playing games with the request length, as
         we'll never touch it and completions will work just fine
       - it will allow us to support ranged discard operations in the
         future by merging non-contiguous discard bios into a single
         request
       - last but not least it removes a lot of code
      
      This patch is the common base for my WIP series for ranged discards and
      for removing discard_zeroes_data in favor of always using
      REQ_OP_WRITE_ZEROES, so it would be good to get it in quickly.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@fb.com>
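      The mechanism, condensed into a sketch (the flag, field, and helper
      names follow the patch description; details are illustrative):

          #include <linux/blkdev.h>

          /* Driver side: attach a driver-provided page as the "special"
           * payload of an otherwise payload-free discard request. */
          static void driver_add_discard_payload(struct request *rq,
                                                 struct page *page,
                                                 unsigned int len)
          {
                  rq->special_vec.bv_page   = page;
                  rq->special_vec.bv_offset = 0;
                  rq->special_vec.bv_len    = len;
                  rq->rq_flags |= RQF_SPECIAL_PAYLOAD;
          }

          /* Block layer side: the special payload is reported as exactly
           * one segment, so drivers map it like any other request. */
          static unsigned short rq_nr_phys_segments(struct request *rq)
          {
                  if (rq->rq_flags & RQF_SPECIAL_PAYLOAD)
                          return 1;
                  return rq->nr_phys_segments;
          }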
  11. 06 Dec, 2016 (4 commits)
  12. 16 Nov, 2016 (1 commit)
  13. 14 Nov, 2016 (2 commits)
    • nvme-rdma: stop and free io queues on connect failure · c8dbc37c
      Committed by Steve Wise
      While testing nvme-rdma with the spdk nvmf target over iw_cxgb4, I
      configured the target (mistakenly) to generate an error creating the
      NVMF IO queues.  This resulted in an "Invalid SQE Parameter" error sent
      back to the host on the first IO queue connect:
      
      [ 9610.928182] nvme nvme1: queue_size 128 > ctrl maxcmd 120, clamping down
      [ 9610.938745] nvme nvme1: creating 32 I/O queues.
      
      So nvmf_connect_io_queue() returns an error to
      nvme_rdma_connect_io_queues(), and that is returned to
      nvme_rdma_create_io_queues().  In the error path,
      nvme_rdma_create_io_queues() frees the queue tagset memory _before_
      stopping and freeing the IB queues, which causes yet another
      touch-after-free crash: SQ CQEs are flushed after the ib_cqe
      structs pointed to by the flushed WRs have been freed (they are
      part of the nvme_rdma_request struct).
      
      The fix is to stop and free the queues in nvme_rdma_connect_io_queues()
      if there is an error connecting any of the queues.
      Signed-off-by: Steve Wise <swise@opengridcomputing.com>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
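      A sketch of the fix (function names approximate the nvme-rdma driver
      of this era; assume nvme_rdma_free_io_queues() stops and frees the
      IB queues):

          static int nvme_rdma_connect_io_queues(struct nvme_rdma_ctrl *ctrl)
          {
                  int i, ret = 0;

                  for (i = 1; i < ctrl->queue_count; i++) {
                          ret = nvmf_connect_io_queue(&ctrl->ctrl, i);
                          if (ret) {
                                  dev_info(ctrl->ctrl.device,
                                           "failed to connect i/o queue: %d\n",
                                           ret);
                                  goto out_free_queues;
                          }
                  }

                  return 0;

          out_free_queues:
                  /* Unwind while the tagset (and the ib_cqe structs inside
                   * the per-request data) is still allocated. */
                  nvme_rdma_free_io_queues(ctrl);
                  return ret;
          }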
    • nvme-rdma: reject non-connect commands before the queue is live · 553cd9ef
      Committed by Christoph Hellwig
      If we reconnect we might have commands queued up that get resent as
      soon as the queue is restarted.  But until the Connect command succeeds
      we can't send other commands.  Add a new flag that marks a queue as
      live when Connect finishes, and delay any non-Connect command until
      the queue is live.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reported-by: Steve Wise <swise@opengridcomputing.com>
      Tested-by: Steve Wise <swise@opengridcomputing.com>
      [sagig: fixes admin queue LIVE setting]
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
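      The gating logic boils down to something like this (a sketch; the
      flag name and helper are assumptions based on the description):

          static bool nvme_rdma_queue_is_ready(struct nvme_rdma_queue *queue,
                                               struct nvme_command *cmd)
          {
                  if (test_bit(NVME_RDMA_Q_LIVE, &queue->flags))
                          return true;

                  /* Until Connect completes, only the fabrics Connect
                   * command may pass; everything else is held back. */
                  return cmd->common.opcode == nvme_fabrics_command &&
                         cmd->fabrics.fctype == nvme_fabrics_type_connect;
          }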
  14. 11 Nov, 2016 (2 commits)
    • nvme: don't pass the full CQE to nvme_complete_async_event · 7bf58533
      Committed by Christoph Hellwig
      We only need the status and result fields, and passing them explicitly
      makes life a lot easier for the Fibre Channel transport, which doesn't
      have a full CQE for the fast-path case.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
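      The interface change, sketched (union nvme_result is the named union
      introduced by the commit below):

          /* Before: the handler needed the full completion queue entry. */
          void nvme_complete_async_event(struct nvme_ctrl *ctrl,
                                         struct nvme_completion *cqe);

          /* After: only what is actually consumed is passed, so transports
           * without a full CQE on the fast path (e.g. FC) can call it. */
          void nvme_complete_async_event(struct nvme_ctrl *ctrl, __le16 status,
                                         union nvme_result *res);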
    • nvme: introduce struct nvme_request · d49187e9
      Committed by Christoph Hellwig
      This adds a shared per-request structure for all NVMe I/O.  This
      structure is embedded as the first member in the request private data
      of every NVMe transport driver, and allows common functionality to be
      implemented across the drivers.
      
      The first use is to replace the current abuse of the SCSI command
      passthrough fields in struct request for the NVMe command passthrough,
      but it will grow more fields in the future to allow implementing things
      like common abort handlers.
      
      The passthrough commands are handled by having a pointer to the SQE
      (struct nvme_command) in struct nvme_request, and the union of the
      possible result fields, which had to be turned from an anonymous
      into a named union for that purpose.  This avoids having to pass
      a reference to a full CQE around and thus makes checking the result
      a lot more lightweight.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
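      A sketch of the shared structure and the accessor implied by the
      first-member layout:

          struct nvme_request {
                  struct nvme_command     *cmd;    /* pointer to the SQE */
                  union nvme_result       result;  /* named, was anonymous */
          };

          /* Valid for every transport because struct nvme_request is the
           * first member of each driver's per-request private data. */
          static inline struct nvme_request *nvme_req(struct request *req)
          {
                  return blk_mq_rq_to_pdu(req);
          }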
  15. 24 Sep, 2016 (2 commits)
  16. 23 Sep, 2016 (1 commit)
  17. 15 Sep, 2016 (1 commit)
  18. 13 Sep, 2016 (3 commits)
    • nvme-rdma: fix null pointer dereference on req->mr · 1bda18de
      Committed by Colin Ian King
      If there is an error on req->mr, req->mr is set to NULL; however, the
      following statement sets req->mr->need_inval, causing a NULL pointer
      dereference.  Fix this by bailing out to label 'out' to return
      immediately and hence skip over the offending dereference.
      
      Fixes: f5b7b559 ("nvme-rdma: Get rid of duplicate variable")
      Signed-off-by: Colin Ian King <colin.king@canonical.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
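      The shape of the bug and the fix, condensed into a sketch (the
      function and the error condition are stand-ins; only the control
      flow matters):

          static int handle_rkey_invalidation(struct nvme_rdma_request *req,
                                              bool failed)
          {
                  int ret = 0;

                  if (failed) {
                          req->mr = NULL;         /* MR no longer usable */
                          ret = -EIO;
                          goto out;               /* the fix: bail out ... */
                  }
                  req->mr->need_inval = false;    /* ... before this line,
                                                   * which used to run with
                                                   * req->mr == NULL */
          out:
                  return ret;
          }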
    • nvme-rdma: use ib_client API to detect device removal · e87a911f
      Committed by Steve Wise
      Change nvme-rdma to use the IB Client API to detect device removal.
      This has the wonderful benefit of being able to blow away all the
      ib/rdma_cm resources for the device being removed.  No craziness about
      not destroying the cm_id handling the event.  No deadlocks due to broken
      iw_cm/rdma_cm/iwarp dependencies.  And no need to have a bound cm_id
      around during controller recovery/reconnect to catch device removal
      events.
      
      We don't use the device_add aspect of the ib_client service since we only
      want to create resources for an IB device if we have a target utilizing
      that device.
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Steve Wise <swise@opengridcomputing.com>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
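      A sketch of the registration (ib_register_client() and struct
      ib_client are the real verbs API; the callback body is elided):

          static void nvme_rdma_remove_one(struct ib_device *ib_device,
                                           void *client_data)
          {
                  /* Tear down all controllers and ib/rdma_cm resources
                   * created on this device. */
          }

          static struct ib_client nvme_rdma_ib_client = {
                  .name   = "nvme_rdma",
                  .remove = nvme_rdma_remove_one,
                  /* no .add callback: resources are created on demand,
                   * not when a device appears */
          };

          static int __init nvme_rdma_init_module(void)
          {
                  return ib_register_client(&nvme_rdma_ib_client);
          }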
    • nvme-rdma: add DELETING queue flag · e89ca58f
      Committed by Sagi Grimberg
      When we get a surprise disconnect from the target we queue a periodic
      reconnect (which is the sane thing to do...).
      
      We only move the queues out of CONNECTED when we retry the reconnect
      (after 10 seconds in the default case), but we stop the blk queues
      immediately so we are not bothered with traffic from then on.  If
      delete() kicks in during this period, the queues are still in the
      CONNECTED state.
      
      Part of the delete sequence is trying to issue a ctrl shutdown if the
      admin queue is CONNECTED (which it is!).  This request is issued but
      gets stuck in blk-mq waiting for the queues to start again, which
      might be what is preventing forward progress...
      
      The patch separates the queue flags into CONNECTED and DELETING.  Now
      we move out of CONNECTED as soon as error recovery kicks in (before
      stopping the queues), and DELETING is set when we start the queue
      deletion.
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
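      A sketch of the split state (bit positions and helper names are
      illustrative):

          enum nvme_rdma_queue_flags {
                  NVME_RDMA_Q_CONNECTED   = 0,
                  NVME_RDMA_Q_DELETING    = 1,
          };

          /* Error recovery: leave CONNECTED before the blk queues are
           * stopped, so delete() won't issue a ctrl shutdown that can
           * never make progress. */
          static void queue_error_recovery(struct nvme_rdma_queue *queue)
          {
                  clear_bit(NVME_RDMA_Q_CONNECTED, &queue->flags);
          }

          /* Deletion: returns false if a deletion is already running. */
          static bool queue_delete_start(struct nvme_rdma_queue *queue)
          {
                  return !test_and_set_bit(NVME_RDMA_Q_DELETING,
                                           &queue->flags);
          }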
  19. 04 Sep, 2016 (2 commits)
  20. 28 Aug, 2016 (2 commits)
  21. 18 Aug, 2016 (2 commits)
    • nvme-rdma: fix sqsize/hsqsize per spec · c5af8654
      Committed by Jay Freyensee
      Per the NVMe-over-Fabrics 1.0 spec, sqsize is represented as
      a 0-based value.
      
      Also per the spec, the RDMA binding values shall be set
      to sqsize, which makes hsqsize a 0-based value as well.
      
      Thus, the sqsize during NVMf connect() is now:
      
      [root@fedora23-fabrics-host1 for-48]# dmesg
      [  318.720645] nvme_fabrics: nvmf_connect_admin_queue(): sqsize for
      admin queue: 31
      [  318.720884] nvme nvme0: creating 16 I/O queues.
      [  318.810114] nvme_fabrics: nvmf_connect_io_queue(): sqsize for i/o
      queue: 127
      
      Finally, the current interpretation of the spec implies hrqsize is
      1-based, so set it appropriately.
      Reported-by: Daniel Verkamp <daniel.verkamp@intel.com>
      Signed-off-by: Jay Freyensee <james_p_freyensee@linux.intel.com>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
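      In code the rule is a one-liner each way; a sketch of how the values
      relate (struct and field names follow the description above):

          /* sqsize is 0-based per NVMe-oF 1.0: a queue of depth N is
           * advertised as N - 1, e.g. 32 -> 31 (admin), 128 -> 127 (I/O). */
          ctrl->ctrl.sqsize = opts->queue_size - 1;

          /* RDMA binding: hsqsize mirrors the 0-based sqsize, while
           * hrqsize is treated as 1-based under the current reading. */
          priv.hsqsize = cpu_to_le16(queue->ctrl->ctrl.sqsize);
          priv.hrqsize = cpu_to_le16(queue->queue_size);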
    • fabrics: define admin sqsize min default, per spec · f994d9dc
      Committed by Jay Freyensee
      Upon admin queue connect(), the RDMA QP was being set up based on
      NVMF_AQ_DEPTH.  However, the fabrics layer was using the sqsize field
      value set for the I/O queues for the admin queue as well, which threw
      the nvme layer and the rdma layer out of whack:
      
      [root@fedora23-fabrics-host1 nvmf]# dmesg
      [ 3507.798642] nvme_fabrics: nvmf_connect_admin_queue():admin sqsize
      being sent is: 128
      [ 3507.798858] nvme nvme0: creating 16 I/O queues.
      [ 3507.896407] nvme nvme0: new ctrl: NQN "nullside-nqn", addr
      192.168.1.3:4420
      
      Thus, to have a different admin queue value, we use NVMF_AQ_DEPTH,
      the minimum depth specified in the NVMe-over-Fabrics 1.0 spec, for
      both connect() and the RDMA private data (and in that RDMA private
      data we treat hrqsize as a 1-based value, per the current
      understanding of the fabrics spec).
      Reported-by: Daniel Verkamp <daniel.verkamp@intel.com>
      Signed-off-by: Jay Freyensee <james_p_freyensee@linux.intel.com>
      Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
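      A sketch of the admin connect path after the change (condensed; the
      real Connect command also carries identification data):

          #define NVMF_AQ_DEPTH   32      /* spec minimum admin depth */

          static void nvmf_init_admin_connect(struct nvme_command *cmd)
          {
                  cmd->connect.opcode = nvme_fabrics_command;
                  cmd->connect.fctype = nvme_fabrics_type_connect;
                  cmd->connect.qid    = 0;        /* admin queue */
                  /* sent 0-based, independent of the I/O queue_size */
                  cmd->connect.sqsize = cpu_to_le16(NVMF_AQ_DEPTH - 1);
          }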
  22. 16 Aug, 2016 (1 commit)
  23. 04 Aug, 2016 (2 commits)
  24. 03 Aug, 2016 (2 commits)