1. 01 Jun 2018, 1 commit
  2. 25 May 2018, 4 commits
  3. 03 May 2018, 1 commit
  4. 12 Apr 2018, 2 commits
    • nvme: expand nvmf_check_if_ready checks · bb06ec31
      Committed by James Smart
      The nvmf_check_if_ready() checks that were added are very simplistic.
      As such, the routine allows a lot of cases to fail ios during windows
      of reset or re-connection. In cases where no multi-path options are
      present, the error goes back to the caller, i.e. the filesystem
      or application. Not good.
      
      The common routine was rewritten and calling syntax slightly expanded
      so that per-transport is_ready routines don't need to be present.
      The transports now call the routine directly. The routine is now a
      fabrics routine rather than an inline function.
      
      The routine now looks at controller state to decide the action to
      take. Some states mandate io failure. Others define the condition where
      a command can be accepted.  When the decision is unclear, a generic
      queue-or-reject check is made to look for failfast or multipath ios and
      only fails the io if it is so marked. Otherwise, the io will be queued
      and wait for the controller state to resolve.
      
      Admin commands issued via ioctl share a live admin queue with commands
      from the transport for controller init. The ioctls could be intermixed
      with the initialization commands. It's possible for the ioctl cmd to
      be issued prior to the controller being enabled. To block this, the
      ioctl admin commands need to be distinguished from admin commands used
      for controller init. Added a USERCMD nvme_req(req)->rq_flags bit to
      reflect this division and set it on ioctl requests.  As the
      nvmf_check_if_ready() routine is called prior to nvme_setup_cmd(),
      ensure that commands allocated by the ioctl path (actually anything
      in core.c) prep the nvme_req(req) before starting the io. This will
      preserve the USERCMD flag during execution and/or retry.
      Signed-off-by: James Smart <james.smart@broadcom.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
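      A minimal sketch of the state-driven queue-or-reject decision described above, under
      simplified assumptions: the enum, helper name, and boolean parameters below are
      hypothetical stand-ins, not the actual fabrics routine or its request flags.

      #include <linux/types.h>
      #include <linux/blk_types.h>

      /* Hypothetical controller states mirroring the message, not the kernel enum. */
      enum ex_ctrl_state { EX_LIVE, EX_NEW, EX_CONNECTING, EX_DELETING, EX_DEAD };

      static blk_status_t ex_check_if_ready(enum ex_ctrl_state state,
                                            bool queue_is_live,
                                            bool is_user_admin_cmd,
                                            bool is_failfast_or_mpath)
      {
              switch (state) {
              case EX_LIVE:
                      return BLK_STS_OK;        /* controller up: accept everything */
              case EX_DELETING:
              case EX_DEAD:
                      return BLK_STS_IOERR;     /* going away: fail immediately */
              default:
                      break;                    /* NEW/CONNECTING: decide below */
              }

              /*
               * During init the admin queue is live before the controller is
               * enabled, so user-issued (ioctl) admin commands, marked by the
               * USERCMD-style bit, must be held back until then.
               */
              if (queue_is_live && !is_user_admin_cmd)
                      return BLK_STS_OK;

              /* Failfast/multipath ios fail so an upper layer can retry elsewhere. */
              if (is_failfast_or_mpath)
                      return BLK_STS_IOERR;

              return BLK_STS_RESOURCE;          /* requeue until the state resolves */
      }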
    • nvme: don't send keep-alives to the discovery controller · 74c6c715
      Committed by Johannes Thumshirn
      NVMe over Fabrics 1.0 Section 5.2 "Discovery Controller Properties and
      Command Support" Figure 31 "Discovery Controller – Admin Commands"
      explicitly lists all commands except "Get Log Page" and "Identify" as
      reserved, but NetApp reports that the Linux host is sending Keep Alive
      commands to the discovery controller, which is a violation of the
      spec.
      
      We're already checking for discovery controllers when configuring the
      keep-alive timeout, but when creating a discovery controller we're not
      hard-wiring the keep-alive timeout to 0 and thus remain on
      NVME_DEFAULT_KATO for the discovery controller.
      
      This can be easily reproduced by issuing a direct connect to the
      discovery subsystem using:
      'nvme connect [...] --nqn=nqn.2014-08.org.nvmexpress.discovery'
      Signed-off-by: Johannes Thumshirn <jthumshirn@suse.de>
      Fixes: 07bfcd09 ("nvme-fabrics: add a generic NVMe over Fabrics library")
      Reported-by: Martin George <marting@netapp.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
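      The fix amounts to forcing the keep-alive timeout to zero whenever the connect
      target is the well-known discovery subsystem NQN quoted above. A hedged sketch of
      that check follows; the struct and helper names are illustrative, not the actual
      fabrics option-parsing code.

      #include <linux/string.h>

      #define EX_DISCOVERY_NQN "nqn.2014-08.org.nvmexpress.discovery"

      struct ex_ctrl_options {            /* stand-in for the fabrics options struct */
              const char   *subsysnqn;
              unsigned int kato;          /* keep-alive timeout; 0 disables keep-alive */
      };

      static void ex_fixup_discovery_kato(struct ex_ctrl_options *opts)
      {
              /* Discovery controllers must never be sent Keep Alive commands. */
              if (!strcmp(opts->subsysnqn, EX_DISCOVERY_NQN))
                      opts->kato = 0;
      }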
  5. 09 Mar 2018, 1 commit
  6. 22 Feb 2018, 1 commit
  7. 26 Jan 2018, 1 commit
  8. 16 Jan 2018, 1 commit
  9. 08 Jan 2018, 2 commits
  10. 11 Nov 2017, 1 commit
  11. 01 Nov 2017, 1 commit
  12. 27 Oct 2017, 1 commit
  13. 04 Oct 2017, 1 commit
  14. 25 Sep 2017, 1 commit
  15. 01 Sep 2017, 1 commit
  16. 30 Aug 2017, 1 commit
    • nvme-fabrics: Convert nvmf_transports_mutex to an rwsem · 489beb91
      Committed by Roland Dreier
      The mutex protects against the list of transports changing while a
      controller is being created, but using a plain old mutex means that it
      also serializes controller creation.  This unnecessarily slows down
      creating multiple controllers - for example for the RDMA transport,
      creating a controller involves establishing one connection for every IO
      queue, which involves even more network/software round trips, so the
      delay can become significant.
      
      The simplest way to fix this is to change the mutex to an rwsem and only
      hold it for writing when the list is being mutated.  Since we can take
      the rwsem for reading while creating a controller, we can create multiple
      controllers in parallel.
      Signed-off-by: Roland Dreier <roland@purestorage.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
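      A minimal sketch of the locking split described above, using the kernel's rwsem
      primitives: registration takes the semaphore for writing because the list is
      mutated, while lookup (held across controller creation, which is where the
      parallelism pays off) takes it for reading. Everything except the rwsem and list
      APIs themselves is an illustrative name.

      #include <linux/rwsem.h>
      #include <linux/list.h>
      #include <linux/string.h>

      static DECLARE_RWSEM(ex_transports_rwsem);   /* replaces the plain mutex */
      static LIST_HEAD(ex_transports);

      struct ex_transport {
              struct list_head entry;
              const char *name;
      };

      static void ex_register_transport(struct ex_transport *t)
      {
              down_write(&ex_transports_rwsem);    /* exclusive: list is mutated */
              list_add_tail(&t->entry, &ex_transports);
              up_write(&ex_transports_rwsem);
      }

      static struct ex_transport *ex_lookup_transport(const char *name)
      {
              struct ex_transport *t, *found = NULL;

              down_read(&ex_transports_rwsem);     /* shared: lookups run in parallel */
              list_for_each_entry(t, &ex_transports, entry) {
                      if (!strcmp(t->name, name)) {
                              found = t;
                              break;
                      }
              }
              up_read(&ex_transports_rwsem);
              return found;
      }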
  17. 29 Aug 2017, 1 commit
  18. 18 Aug 2017, 1 commit
  19. 28 Jun 2017, 4 commits
  20. 15 Jun 2017, 2 commits
  21. 05 Jun 2017, 1 commit
  22. 04 Apr 2017, 1 commit
    • nvme-fabrics: Allow ctrl loss timeout configuration · 42a45274
      Committed by Sagi Grimberg
      When a host senses that its controller session is damaged,
      it tries to re-establish it periodically (reconnecting every
      reconnect_delay). It may very well be that the controller
      is gone and never coming back; in this case the host will
      try to reconnect forever.

      Add a ctrl_loss_tmo to bound the number of reconnect attempts
      to a specific controller (defaulting to a reasonable 10 minutes).
      The timeout configuration is translated into a number of
      reconnect attempts rather than a schedule of its own: it is
      simply divided by reconnect_delay. This is useful to prevent
      racing flows of remove and reconnect, and it doesn't really
      matter if we remove slightly sooner than what the user requested.
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Jens Axboe <axboe@fb.com>
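      The translation described above is just an integer division of the loss timeout by
      the reconnect delay; a sketch of that conversion follows. The names and the default
      delay are assumptions, except the 10-minute figure taken from the message.

      #include <linux/kernel.h>
      #include <linux/types.h>

      #define EX_DEF_RECONNECT_DELAY  10     /* seconds between attempts (illustrative) */
      #define EX_DEF_CTRL_LOSS_TMO    600    /* "a reasonable 10 minutes" */

      /* ctrl_loss_tmo is expressed as a number of reconnect attempts, not a timer. */
      static int ex_max_reconnects(int ctrl_loss_tmo, int reconnect_delay)
      {
              if (ctrl_loss_tmo < 0)
                      return -1;             /* negative: retry forever */
              return DIV_ROUND_UP(ctrl_loss_tmo, reconnect_delay);
      }

      static bool ex_should_reconnect(int attempts_so_far, int max_reconnects)
      {
              return max_reconnects < 0 || attempts_so_far < max_reconnects;
      }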
  23. 23 Feb 2017, 1 commit
  24. 06 Dec 2016, 3 commits
  25. 11 Nov 2016, 1 commit
    • nvme: introduce struct nvme_request · d49187e9
      Committed by Christoph Hellwig
      This adds a shared per-request structure for all NVMe I/O.  This structure
      is embedded as the first member in all NVMe transport drivers' request
      private data and allows common functionality to be implemented across the
      drivers.
      
      The first use is to replace the current abuse of the SCSI command
      passthrough fields in struct request for the NVMe command passthrough,
      but it will grow a few more fields to allow implementing things
      like common abort handlers in the future.
      
      The passthrough commands are handled by having a pointer to the SQE
      (struct nvme_command) in struct nvme_request, and the union of the
      possible result fields, which had to be turned from an anonymous
      into a named union for that purpose.  This avoids having to pass
      a reference to a full CQE around and thus makes checking the result
      a lot more lightweight.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
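      A minimal sketch of the embedding pattern described above: the shared per-request
      structure sits as the first member of a transport's per-request private data, so
      common code can reach it from a struct request via the blk-mq PDU without knowing
      which transport owns it. Struct and helper names below are simplified stand-ins,
      not the upstream definitions.

      #include <linux/blk-mq.h>
      #include <linux/nvme.h>

      /* Shared per-request data, simplified: a pointer to the SQE plus the result. */
      struct ex_nvme_request {
              struct nvme_command  *cmd;     /* the submission queue entry */
              union nvme_result    result;   /* result field taken from the CQE */
      };

      /* A transport's private data embeds the shared part as its FIRST member ... */
      struct ex_rdma_request {
              struct ex_nvme_request req;
              /* transport-specific fields (MRs, SGLs, ...) would follow here */
      };

      /* ... so common code reaches it from any request without transport knowledge. */
      static inline struct ex_nvme_request *ex_nvme_req(struct request *rq)
      {
              return blk_mq_rq_to_pdu(rq);
      }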
  26. 24 Sep 2016, 2 commits
  27. 19 Aug 2016, 2 commits