1. 21 Dec 2016 (6 commits)
  2. 19 Dec 2016 (1 commit)
  3. 15 Dec 2016 (1 commit)
  4. 14 Dec 2016 (2 commits)
    • nvme/pci: Log PCI_STATUS when the controller dies · d2a61918
      Committed by Andy Lutomirski
      When debugging nvme controller crashes, it's nice to know whether
      the controller died cleanly (so the failure is reflected only in CSTS),
      whether it died and left an error in PCI_STATUS, or whether it died so
      badly that it stopped responding to PCI configuration space reads.
      
      I've seen a failure that gives 0xffff in PCI_STATUS on a Samsung
      "SM951 NVMe SAMSUNG 256GB" with firmware "BXW75D0Q".
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Reviewed-by: Keith Busch <keith.busch@intel.com>
      
      Fixed up white space and hunk reject.
      Signed-off-by: Jens Axboe <axboe@fb.com>
      d2a61918
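
      A minimal sketch of the kind of diagnostic this enables, assuming a
      hypothetical helper (the function name and its call site are not from the
      patch; pci_read_config_word() and PCI_STATUS are the standard kernel PCI
      interfaces): read PCI_STATUS from config space when the controller is
      dead, and distinguish a clean CSTS-only failure, a latched PCI error, and
      a device that no longer answers config reads (which shows up as all-ones).

      #include <linux/pci.h>

      /* Hedged sketch, not the patch itself. */
      static void demo_nvme_warn_dead_ctrl(struct pci_dev *pdev, u32 csts)
      {
              u16 pci_status;

              if (pci_read_config_word(pdev, PCI_STATUS, &pci_status)) {
                      dev_warn(&pdev->dev,
                               "controller is down; csts=0x%x, PCI_STATUS read failed\n",
                               csts);
              } else if (pci_status == 0xffff) {
                      /* All-ones usually means the device no longer answers config reads. */
                      dev_warn(&pdev->dev,
                               "controller is down and not responding to config reads; csts=0x%x\n",
                               csts);
              } else {
                      dev_warn(&pdev->dev,
                               "controller is down; csts=0x%x, PCI_STATUS=0x%x\n",
                               csts, pci_status);
              }
      }
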
    • Revert "nvme: add support for the Write Zeroes command" · cdb98c26
      Committed by Linus Torvalds
      This reverts commit 6d31e3ba.
      
      This causes bootup problems for me both on my laptop and my desktop.
      What they have in common is that they have NVMe disks with dm-crypt, but
      it's not the same controller, so it's not controller-specific.
      
      Jens does not see it on his machine (also NVMe), so it's presumably
      something that triggers just on bootup.  Possibly related to dm-crypt
      and the fact that I mark my luks volume with "allow-discards" in
      /etc/crypttab.
      
      It's 100% repeatable for me, which made it fairly straightforward to
      bisect the problem to this commit. Small mercies.
      
      So we don't know what the reason is yet, but the revert is needed to get
      things going again.
      Acked-by: Jens Axboe <axboe@fb.com>
      Cc: Chaitanya Kulkarni <chaitanya.kulkarni@hgst.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cdb98c26
  5. 09 Dec 2016 (1 commit)
    • block: improve handling of the magic discard payload · f9d03f96
      Committed by Christoph Hellwig
      Instead of allocating a single unused biovec for discard requests, send
      them down without any payload.  The driver may instead add a "special"
      payload using a biovec embedded into struct request (unioned over other
      fields never used while the request is in the driver), overloading the
      number of segments for this case.
      
      This has a couple of advantages:
      
       - we don't have to allocate the bio_vec
       - the amount of special casing for discard requests in the block
         layer is significantly reduced
       - using this same scheme for other request types is trivial,
         which will be important for implementing the new WRITE_ZEROES
         op on devices where it actually requires a payload (e.g. SCSI)
       - we can get rid of playing games with the request length, as
         we'll never touch it and completions will work just fine
       - it will allow us to support ranged discard operations in the
         future by merging non-contiguous discard bios into a single
         request
       - last but not least it removes a lot of code
      
      This patch is the common base for my WIP series for ranged discards and
      for removing discard_zeroes_data in favor of always using
      REQ_OP_WRITE_ZEROES, so it would be good to get it in quickly.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      f9d03f96
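
      As a rough illustration of how a driver consumes this scheme (a sketch
      only: the surrounding function is made up, while the special_vec field
      and RQF_SPECIAL_PAYLOAD flag follow the naming this patch introduces),
      the request either carries exactly one embedded biovec or is mapped as
      usual:

      #include <linux/blkdev.h>
      #include <linux/scatterlist.h>

      /* Sketch of a driver-side mapping helper; not taken from a real driver. */
      static int demo_map_request(struct request *req, struct scatterlist *sg)
      {
              if (req->rq_flags & RQF_SPECIAL_PAYLOAD) {
                      /* Discard & friends: one biovec embedded in the request itself. */
                      sg_set_page(sg, req->special_vec.bv_page,
                                  req->special_vec.bv_len,
                                  req->special_vec.bv_offset);
                      sg_mark_end(sg);
                      return 1;
              }

              /* Everything else maps its bios as before. */
              return blk_rq_map_sg(req->q, req, sg);
      }
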
  6. 06 Dec 2016 (10 commits)
  7. 01 Dec 2016 (1 commit)
  8. 30 Nov 2016 (5 commits)
  9. 17 Nov 2016 (1 commit)
  10. 16 Nov 2016 (1 commit)
  11. 14 Nov 2016 (2 commits)
    • nvme-rdma: stop and free io queues on connect failure · c8dbc37c
      Committed by Steve Wise
      While testing nvme-rdma with the spdk nvmf target over iw_cxgb4, I
      configured the target (mistakenly) to generate an error when creating the
      NVMF IO queues.  This resulted in an "Invalid SQE Parameter" error being
      sent back to the host on the first IO queue connect:
      
      [ 9610.928182] nvme nvme1: queue_size 128 > ctrl maxcmd 120, clamping down
      [ 9610.938745] nvme nvme1: creating 32 I/O queues.
      
      So nvmf_connect_io_queue() returns an error, and that error is propagated
      up to nvme_rdma_create_io_queues().  In the error path,
      nvme_rdma_create_io_queues() frees the queue tagset memory _before_
      stopping and freeing the IB queues, which causes yet another
      touch-after-free crash due to SQ CQEs being flushed after the ib_cqe
      structs pointed-to by the flushed WRs have been freed (since they are
      part of the nvme_rdma_request struct).
      
      The fix is to stop and free the queues in nvmf_connect_io_queues()
      if there is an error connecting any of the queues.
      Signed-off-by: Steve Wise <swise@opengridcomputing.com>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      c8dbc37c
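
      A condensed sketch of the ordering this fix enforces (all names here are
      placeholders, not the actual nvme-rdma routines): on a connect error,
      already-created queues are stopped and freed before returning, so nothing
      frees the tagset while IB completions can still reference it.

      struct demo_ctrl;

      static int demo_connect_one_queue(struct demo_ctrl *ctrl, int idx);
      static void demo_stop_and_free_queue(struct demo_ctrl *ctrl, int idx);

      static int demo_connect_io_queues(struct demo_ctrl *ctrl, int nr_io_queues)
      {
              int i, ret;

              for (i = 1; i <= nr_io_queues; i++) {
                      ret = demo_connect_one_queue(ctrl, i);
                      if (ret)
                              goto out_free_queues;
              }
              return 0;

      out_free_queues:
              /* Tear down the RDMA queues first; only then may the tagset be freed. */
              for (i--; i >= 1; i--)
                      demo_stop_and_free_queue(ctrl, i);
              return ret;
      }
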
    • nvme-rdma: reject non-connect commands before the queue is live · 553cd9ef
      Committed by Christoph Hellwig
      If we reconnect we might have commands queued up that get resent as soon
      as the queue is restarted.  But until the connect command has succeeded we
      can't send other commands.  Add a new flag that marks a queue as live when
      connect finishes, and delay any non-connect command until the queue is
      live based on that flag.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reported-by: Steve Wise <swise@opengridcomputing.com>
      Tested-by: Steve Wise <swise@opengridcomputing.com>
      [sagig: fixes admin queue LIVE setting]
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      553cd9ef
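
      A sketch of the gating described above (the flag bit and queue struct are
      illustrative; the Fabrics opcode and fctype values come from
      include/linux/nvme.h): until the queue is marked live, only a Fabrics
      Connect command is allowed through.

      #include <linux/bitops.h>
      #include <linux/nvme.h>

      enum { DEMO_Q_LIVE = 0 };

      struct demo_queue {
              unsigned long flags;
      };

      static bool demo_queue_is_ready(struct demo_queue *queue,
                                      struct nvme_command *cmd)
      {
              if (test_bit(DEMO_Q_LIVE, &queue->flags))
                      return true;

              /* Before connect completes, only the Fabrics Connect SQE may pass. */
              return cmd->common.opcode == nvme_fabrics_command &&
                     cmd->fabrics.fctype == nvme_fabrics_type_connect;
      }
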
  12. 12 Nov 2016 (1 commit)
  13. 11 Nov 2016 (2 commits)
    • nvme: don't pass the full CQE to nvme_complete_async_event · 7bf58533
      Committed by Christoph Hellwig
      We only need the status and result fields, and passing them explicitly
      makes life a lot easier for the Fibre Channel transport which doesn't
      have a full CQE for the fast path case.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      7bf58533
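
      The shape of the resulting interface, sketched under the assumption that
      the prototype matches the commit text (only the status and the result
      union cross the boundary, not the whole CQE); the demo_ names are not
      from the patch:

      #include <linux/nvme.h>

      struct nvme_ctrl;   /* driver-internal controller struct, opaque here */

      /* Assumed prototype: only the two fields the core needs are passed. */
      void demo_complete_async_event(struct nvme_ctrl *ctrl, __le16 status,
                                     union nvme_result *res);

      static void demo_handle_cqe(struct nvme_ctrl *ctrl,
                                  struct nvme_completion *cqe)
      {
              demo_complete_async_event(ctrl, cqe->status, &cqe->result);
      }
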
    • nvme: introduce struct nvme_request · d49187e9
      Committed by Christoph Hellwig
      This adds a shared per-request structure for all NVMe I/O.  This structure
      is embedded as the first member of every NVMe transport driver's request
      private data and allows common functionality to be implemented across the
      drivers.
      
      The first use is to replace the current abuse of the SCSI command
      passthrough fields in struct request for the NVMe command passthrough,
      but it will grow more fields in the future to allow implementing things
      like common abort handlers.
      
      The passthrough commands are handled by having a pointer to the SQE
      (struct nvme_command) in struct nvme_request, and the union of the
      possible result fields, which had to be turned from an anonymous
      into a named union for that purpose.  This avoids having to pass
      a reference to a full CQE around and thus makes checking the result
      a lot more lightweight.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      d49187e9
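
      A sketch of the layout the commit describes (structs prefixed demo_ are
      illustrative; blk_mq_rq_to_pdu() is the standard blk-mq accessor): the
      shared structure sits first in each transport's per-request private data,
      so common code can reach it from any struct request.

      #include <linux/blk-mq.h>
      #include <linux/nvme.h>

      struct demo_nvme_request {
              struct nvme_command *cmd;   /* pointer to the SQE for passthrough */
              union nvme_result result;   /* named union instead of a full CQE */
      };

      /* Each transport embeds it as the first member of its private data... */
      struct demo_rdma_request {
              struct demo_nvme_request req;   /* must remain the first member */
              /* transport-specific fields (MRs, SGLs, ...) follow */
      };

      /* ...so common code can recover it from any request. */
      static inline struct demo_nvme_request *demo_nvme_req(struct request *rq)
      {
              return blk_mq_rq_to_pdu(rq);
      }
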
  14. 03 Nov 2016 (4 commits)
  15. 28 Oct 2016 (1 commit)
  16. 20 Oct 2016 (1 commit)