1. 17 Oct 2018, 3 commits
  2. 02 Oct 2018, 2 commits
  3. 24 Jul 2018, 1 commit
    • nvme: if_ready checks to fail io to deleting controller · 6cdefc6e
      Authored by James Smart
      The revised if_ready checks skipped over the case of returning an error
      when the controller is being deleted.  Instead they returned BUSY, which
      caused the ios to be retried, which in turn caused the ns delete to hang
      waiting for the ios to drain.
      
      Stack trace of hang looks like:
       kworker/u64:2   D    0    74      2 0x80000000
       Workqueue: nvme-delete-wq nvme_delete_ctrl_work [nvme_core]
       Call Trace:
        ? __schedule+0x26d/0x820
        schedule+0x32/0x80
        blk_mq_freeze_queue_wait+0x36/0x80
        ? remove_wait_queue+0x60/0x60
        blk_cleanup_queue+0x72/0x160
        nvme_ns_remove+0x106/0x140 [nvme_core]
        nvme_remove_namespaces+0x7e/0xa0 [nvme_core]
        nvme_delete_ctrl_work+0x4d/0x80 [nvme_core]
        process_one_work+0x160/0x350
        worker_thread+0x1c3/0x3d0
        kthread+0xf5/0x130
        ? process_one_work+0x350/0x350
        ? kthread_bind+0x10/0x10
        ret_from_fork+0x1f/0x30
      
      Extend nvmf_fail_nonready_command() to supply the controller pointer so
      that the controller state can be looked at. Fail any io to a controller
      that is deleting.
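      
      A minimal user-space sketch of the decision described above (not the
      kernel code itself): with the controller pointer available, a deleting
      controller fails the io outright instead of returning a retryable busy
      status.  The state and helper names below are simplified assumptions.
      
       #include <stdio.h>
       
       enum ctrl_state { CTRL_LIVE, CTRL_CONNECTING, CTRL_DELETING };
       enum io_status  { IO_REQUEUE, IO_FAIL };
       
       struct ctrl { enum ctrl_state state; };
       
       /* Toy stand-in for nvmf_fail_nonready_command(): a DELETING
        * controller fails the request so blk_cleanup_queue() can drain,
        * instead of requeueing it forever. */
       static enum io_status fail_nonready_command(const struct ctrl *c)
       {
           if (c->state == CTRL_DELETING)
               return IO_FAIL;        /* let the ns delete make progress */
           return IO_REQUEUE;         /* transient condition: retry later */
       }
       
       int main(void)
       {
           struct ctrl c = { .state = CTRL_DELETING };
           printf("deleting controller -> %s\n",
                  fail_nonready_command(&c) == IO_FAIL ? "fail io" : "requeue");
           return 0;
       }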
      
      Fixes: 3bc32bb1 ("nvme-fabrics: refactor queue ready check")
      Fixes: 35897b92 ("nvme-fabrics: fix and refine state checks in __nvmf_check_ready")
      Signed-off-by: James Smart <james.smart@broadcom.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Tested-by: Ewan D. Milne <emilne@redhat.com>
      Reviewed-by: Ewan D. Milne <emilne@redhat.com>
  4. 23 Jul 2018, 1 commit
  5. 21 Jun 2018, 1 commit
  6. 15 Jun 2018, 1 commit
  7. 14 Jun 2018, 3 commits
    • nvme-fc: fix nulling of queue data on reconnect · 3e493c00
      Authored by James Smart
      The reconnect path is calling the init routines to clear a queue
      structure. But the queue structure has state that perhaps needs
      to persist as long as the controller is live.
      
      Remove the nvme_fc_init_queue() calls on reconnect.
      The nvme_fc_free_queue() calls will clear state bits and reset
      any relevant queue state for a new connection.
      Signed-off-by: James Smart <james.smart@broadcom.com>
      Reviewed-by: Hannes Reinecke <hare@suse.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • nvme-fc: remove reinit_request routine · 587331f7
      Authored by James Smart
      The reinit_request routine is not necessary. Remove support for the
      op callback.
      
      As all that nvme_reinit_tagset() does is iterate and call the
      reinit routine, it too has no purpose. Remove the call.
      Signed-off-by: James Smart <james.smart@broadcom.com>
      Reviewed-by: Hannes Reinecke <hare@suse.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • nvme-fc: change controllers first connect to use reconnect path · 4c984154
      Authored by James Smart
      Current code follows the framework that has been in the transports
      from the beginning, where the initial link-side controller connect occurs
      as part of "creating the controller". Thus that first connect fully
      talks to the controller and obtains values that can then be used
      for blk-mq setup, etc. It also means that everything about the
      controller is fully known before the "create controller" call returns.
      
      This has several weaknesses:
      - The initial create_ctrl call made by the cli will block for a long
        time as wire transactions are performed synchronously. This delay
        becomes longer if errors occur or connectivity is lost and retries
        need to be performed.
      - Code wise, it means there is a separate connect path for initial
        controller connect vs the (same) steps used in the reconnect path.
      - And as there are separate paths, there is separate error
        handling and retry logic. It also plays havoc with the NEW state
        (which should be transitioned out of after a successful initial
        connect) vs the RESETTING and CONNECTING (reconnect) states that
        want to be transitioned to on error.
      - As there are separate paths, recovering from errors and disruptions
        requires separate recovery/retry paths as well, which can severely
        convolute the controller state.
      
      This patch reworks the fc transport to use the same connect paths
      for the initial connection as it uses for reconnect. This makes a
      single path for error recovery and handling.
      
      This patch:
      - Removes the driving of the initial connect and replaces it with
        a state transition to CONNECTING and initiating the reconnect
        thread (a sketch of this sequence follows the list). A dummy state
        transition of RESETTING had to be traversed as a direct transition
        of NEW->CONNECTING is not allowed. Given that the controller is
        "new", the RESETTING transition is a simple no-op. Once in the
        reconnecting thread, the normal behaviors of ctrl_loss_tmo
        (max_retries * connect_delay) and dev_loss_tmo will apply before
        the controller is torn down.
      - Only if the state transitions couldn't be traversed and the
        reconnect thread not scheduled, will the controller be torn down
        while in create_ctrl.
      - The prior code used the controller state of NEW to indicate
        whether request queues had been initialized or not. For the admin
        queue, the request queue is always created, so there's no need to
        check a state. For IO queues, change to tracking whether a successful
        io request queue create has occurred (e.g. 1st successful connect).
      - The initial controller id is initialized to the dynamic controller
        id used in the initial connect message. It will be overwritten by
        the real controller id once the controller is connected on the wire.
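      
      A toy model in plain C (not the transport code) of the transition
      sequence described in the first item above: the NEW -> CONNECTING step
      is refused by the state machine, so a no-op RESETTING hop is taken
      first and the reconnect work is then scheduled.  The table of allowed
      transitions here is a simplified assumption.
      
       #include <stdio.h>
       #include <stdbool.h>
       
       enum ctrl_state { ST_NEW, ST_RESETTING, ST_CONNECTING, ST_LIVE };
       
       /* Simplified transition table: NEW -> CONNECTING is not legal,
        * mirroring the dummy RESETTING hop the patch describes. */
       static bool change_state(enum ctrl_state *cur, enum ctrl_state next)
       {
           bool ok = (*cur == ST_NEW        && next == ST_RESETTING)  ||
                     (*cur == ST_RESETTING  && next == ST_CONNECTING) ||
                     (*cur == ST_CONNECTING && next == ST_LIVE);
           if (ok)
               *cur = next;
           return ok;
       }
       
       int main(void)
       {
           enum ctrl_state st = ST_NEW;
       
           if (!change_state(&st, ST_CONNECTING))
               printf("NEW -> CONNECTING refused, going via RESETTING\n");
       
           if (change_state(&st, ST_RESETTING) && change_state(&st, ST_CONNECTING))
               printf("now CONNECTING: schedule the reconnect work\n");
           /* from here ctrl_loss_tmo / dev_loss_tmo handling applies,
            * exactly as it does on a normal reconnect */
           return 0;
       }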
      Signed-off-by: James Smart <james.smart@broadcom.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
  8. 31 May 2018, 1 commit
  9. 25 May 2018, 1 commit
  10. 27 Apr 2018, 1 commit
  11. 12 Apr 2018, 1 commit
    • nvme: expand nvmf_check_if_ready checks · bb06ec31
      Authored by James Smart
      The nvmf_check_if_ready() checks that were added are very simplistic.
      As such, the routine allows a lot of cases to fail ios during windows
      of reset or re-connection. In cases where there are no multi-path
      options present, the error goes back to the caller - the filesystem
      or application. Not good.
      
      The common routine was rewritten and calling syntax slightly expanded
      so that per-transport is_ready routines don't need to be present.
      The transports now call the routine directly. The routine is now a
      fabrics routine rather than an inline function.
      
      The routine now looks at controller state to decide the action to
      take. Some states mandate io failure. Others define the condition where
      a command can be accepted.  When the decision is unclear, a generic
      queue-or-reject check is made to look for failfast or multipath ios and
      only fails the io if it is so marked. Otherwise, the io will be queued
      and wait for the controller state to resolve.
      
      Admin commands issued via ioctl share a live admin queue with commands
      from the transport for controller init. The ioctls could be intermixed
      with the initialization commands. It's possible for the ioctl cmd to
      be issued prior to the controller being enabled. To block this, the
      ioctl admin commands need to be distinguished from admin commands used
      for controller init. Added a USERCMD nvme_req(req)->rq_flags bit to
      reflect this division and set it on ioctl requests.  As the
      nvmf_check_if_ready() routine is called prior to nvme_setup_cmd(),
      ensure that commands allocated by the ioctl path (actually anything
      in core.c) prep the nvme_req(req) before starting the io. This will
      preserve the USERCMD flag during execution and/or retry.
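      
      A small user-space sketch of the decision flow described above (not the
      fabrics code itself): terminal states reject the io, a live controller
      accepts it, and otherwise a generic queue-or-reject choice is made based
      on failfast/multipath marking, with the USERCMD flag telling ioctl admin
      commands apart from connect-time ones.  The exact conditions below are
      simplified assumptions.
      
       #include <stdio.h>
       #include <stdbool.h>
       
       enum ctrl_state { ST_LIVE, ST_RESETTING, ST_CONNECTING, ST_DELETING };
       enum verdict    { ACCEPT, QUEUE, REJECT };
       
       struct req {
           bool user_cmd;    /* USERCMD: admin command issued via ioctl  */
           bool failfast;    /* failfast-marked io                       */
           bool multipath;   /* io owned by the multipath layer          */
       };
       
       /* Toy version of the expanded ready check. */
       static enum verdict check_if_ready(enum ctrl_state st, const struct req *rq)
       {
           if (st == ST_DELETING)
               return REJECT;                      /* state mandates failure   */
           if (st == ST_LIVE)
               return ACCEPT;
           if (st == ST_CONNECTING && !rq->user_cmd)
               return ACCEPT;                      /* controller-init admin io */
           return (rq->failfast || rq->multipath) ? REJECT : QUEUE;
       }
       
       int main(void)
       {
           struct req ioctl_cmd = { .user_cmd = true };
           printf("ioctl admin cmd while CONNECTING -> %s\n",
                  check_if_ready(ST_CONNECTING, &ioctl_cmd) == QUEUE ?
                  "queued until state resolves" : "accepted/rejected");
           return 0;
       }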
      Signed-off-by: James Smart <james.smart@broadcom.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  12. 26 Mar 2018, 5 commits
  13. 09 Mar 2018, 1 commit
    • nvme_fc: rework sqsize handling · d157e534
      Authored by James Smart
      Corrected four outstanding issues in the transport around sqsize.
      
      1: Create Connection LS is sending the 1's-based sqsize, should be
      sending the 0's-based value.
      
      2: allocation of the hw queue is using the 0's-based size. It should be
      using the 1's-based value.
      
      3: normalization of ctrl.sqsize by MQES is using MQES+1 (1's-based
      value). It should be MQES (0's-based value).
      
      4: Missing clause to ensure queue_count not larger than ctrl->sqsize.
      
      Corrected by:
      Clean up routines that pass queue size around. The queue size value is
      the actual count (1's-based) value and determined from ctrl->sqsize + 1.
      
      Routines that send 0's-based value adapt from queue size.
      
      Set ctrl->sqsize properly for MQES.
      
      Added clause to ensure queue_count not larger than ctrl->sqsize + 1.
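      
      A toy arithmetic sketch of the 0's-based vs 1's-based handling above;
      the numbers are illustrative, not taken from real hardware.
      
       #include <stdio.h>
       
       int main(void)
       {
           unsigned int mqes   = 127;     /* CAP.MQES, 0's-based               */
           unsigned int sqsize = 191;     /* requested ctrl->sqsize, 0's-based */
       
           if (sqsize > mqes)             /* normalize against MQES itself,    */
               sqsize = mqes;             /* not MQES + 1                      */
       
           unsigned int queue_size = sqsize + 1;     /* 1's-based: hw queue alloc  */
           unsigned int ls_sqsize  = queue_size - 1; /* 0's-based: Create Conn. LS */
       
           printf("allocate %u entries, send sqsize=%u on the wire\n",
                  queue_size, ls_sqsize);
           return 0;
       }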
      Signed-off-by: James Smart <james.smart@broadcom.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Keith Busch <keith.busch@intel.com>
  14. 11 Feb 2018, 2 commits
    • nvme_fc: cleanup io completion · c3aedd22
      Authored by James Smart
      There was some old code that dealt with complete_rq being called
      prior to the lldd returning the io completion. This is garbage code.
      The complete_rq routine was being called after eh_timeouts were
      called and it was due to eh_timeouts not being handled properly.
      The timeouts were fixed in prior patches so that in general, a
      timeout will initiate an abort and the reset timer restarted as
      the abort operation will take care of completing things. Given the
      reset timer restarted, the erroneous complete_rq calls were eliminated.
      
      So remove the work that was synchronizing complete_rq with io
      completion.
      Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
      Signed-off-by: James Smart <james.smart@broadcom.com>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
    • nvme_fc: correct abort race condition on resets · 3efd6e8e
      Authored by James Smart
      During reset handling, there is live io completing while the reset
      is taking place. The reset path attempts to abort all outstanding io,
      counting the number of ios that were reset. It then waits for those
      ios to be reclaimed from the lldd before continuing.
      
      The transport's logic on io state and flag setting was poor, allowing
      ios to complete simultaneous to the abort request. The completed ios
      were counted, but as the completion had already occurred, the
      completion never reduced the count. As the count never zeros, the
      reset/delete never completes.
      
      Tighten it up by unconditionally changing the op state to completed
      when the io done handler is called.  The reset/abort path now changes
      the op state to aborted, but the abort only continues if the op
      state was live previously. If complete, the abort is backed out.
      Thus proper counting of io aborts and their completions is working
      again.
      
      Also removed the TERMIO state on the op as it's redundant with the
      op's aborted state.
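      
      A toy model in plain C of the race fix described above (not the
      transport code): the done handler unconditionally marks the op
      complete, while the abort path only proceeds if it can atomically move
      the op from live to aborted; otherwise the abort is backed out and the
      op is not counted.  State names and helpers here are assumptions.
      
       #include <stdio.h>
       #include <stdatomic.h>
       
       enum { OP_LIVE, OP_COMPLETE, OP_ABORTED };
       
       /* Abort only wins if the op was still live. */
       static int try_abort(_Atomic int *state)
       {
           int expected = OP_LIVE;
           return atomic_compare_exchange_strong(state, &expected, OP_ABORTED);
       }
       
       int main(void)
       {
           _Atomic int op = OP_LIVE;
       
           atomic_store(&op, OP_COMPLETE);  /* io done handler ran first */
       
           if (try_abort(&op))
               printf("op aborted: counted, reset waits for its completion\n");
           else
               printf("op already complete: abort backed out, not counted\n");
           return 0;
       }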
      Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
      Signed-off-by: James Smart <james.smart@broadcom.com>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
  15. 09 Feb 2018, 1 commit
  16. 31 Jan 2018, 1 commit
    • blk-mq: introduce BLK_STS_DEV_RESOURCE · 86ff7c2a
      Authored by Ming Lei
      This status is returned from the driver to the block layer if a
      device-related resource is unavailable, but the driver can guarantee
      that IO dispatch will be triggered again in the future when the
      resource becomes available.
      
      Convert some drivers to return BLK_STS_DEV_RESOURCE.  Also, if a driver
      returns BLK_STS_RESOURCE and SCHED_RESTART is set, rerun the queue after
      a delay (BLK_MQ_DELAY_QUEUE) to avoid IO stalls.  BLK_MQ_DELAY_QUEUE is
      3 ms because both scsi-mq and nvmefc are using that magic value.
      A toy sketch of this decision follows the list below.
      
      If a driver can make sure there is in-flight IO, it is safe to return
      BLK_STS_DEV_RESOURCE because:
      
      1) If all in-flight IOs complete before examining SCHED_RESTART in
      blk_mq_dispatch_rq_list(), SCHED_RESTART must be cleared, so queue
      is run immediately in this case by blk_mq_dispatch_rq_list();
      
      2) if there is any in-flight IO after/when examining SCHED_RESTART
      in blk_mq_dispatch_rq_list():
      - if SCHED_RESTART isn't set, queue is run immediately as handled in 1)
      - otherwise, this request will be dispatched after any in-flight IO is
        completed via blk_mq_sched_restart()
      
      3) if SCHED_RESTART is set concurrently in context because of
      BLK_STS_RESOURCE, blk_mq_delay_run_hw_queue() will cover the above two
      cases and make sure IO hang can be avoided.
      
      One invariant is that queue will be rerun if SCHED_RESTART is set.
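      
      A small user-space sketch (not the block-layer code) of the ->queue_rq()
      decision described above: a driver that knows in-flight IO will re-trigger
      dispatch may return BLK_STS_DEV_RESOURCE, otherwise it returns
      BLK_STS_RESOURCE and relies on the delayed queue re-run.  The helper and
      its parameters are assumptions for illustration.
      
       #include <stdio.h>
       #include <stdbool.h>
       
       enum blk_status { BLK_STS_OK, BLK_STS_RESOURCE, BLK_STS_DEV_RESOURCE };
       
       /* Toy queue_rq(): choose between the two resource statuses. */
       static enum blk_status queue_rq(bool resource_free, bool io_in_flight)
       {
           if (resource_free)
               return BLK_STS_OK;
           /* in-flight IO guarantees a future dispatch via SCHED_RESTART */
           return io_in_flight ? BLK_STS_DEV_RESOURCE : BLK_STS_RESOURCE;
       }
       
       int main(void)
       {
           printf("no resource, io in flight   -> %d (DEV_RESOURCE)\n",
                  queue_rq(false, true));
           printf("no resource, nothing queued -> %d (RESOURCE + delayed rerun)\n",
                  queue_rq(false, false));
           return 0;
       }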
      Suggested-by: Jens Axboe <axboe@kernel.dk>
      Tested-by: Laurence Oberman <loberman@redhat.com>
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  17. 18 Jan 2018, 2 commits
    • nvme-fc: correct hang in nvme_ns_remove() · 0fd997d3
      Authored by James Smart
      When connectivity is lost to a device, the association is terminated
      and the blk-mq queues are quiesced/stopped. When connectivity is
      re-established, they are resumed.
      
      If connectivity is lost for a sufficient amount of time that the
      controller is then deleted, the delete path starts tearing down queues,
      and eventually calling nvme_ns_remove(). It appears that pending
      commands may cause blk_cleanup_queue() to never complete and the
      teardown stalls.
      
      Correct by starting the ns queues after transitioning to a DELETING
      state, allowing pending commands to be flushed with io failures. Thus
      the delete path is clear when reached.
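      
      A toy sequence in plain C of the ordering fix described above; the
      function names are placeholders, not the driver's, and only illustrate
      why restarting the ns queues after the DELETING transition lets
      blk_cleanup_queue() finish.
      
       #include <stdio.h>
       
       static void set_state_deleting(void) { printf("state = DELETING\n"); }
       static void start_ns_queues(void)    { printf("ns queues restarted\n"); }
       static void fail_pending_io(void)    { printf("pending io fail fast\n"); }
       static void cleanup_queues(void)     { printf("blk_cleanup_queue() returns\n"); }
       
       int main(void)
       {
           set_state_deleting();  /* 1: controller marked as deleting            */
           start_ns_queues();     /* 2: pending commands can now be dispatched   */
           fail_pending_io();     /* 3: they see DELETING and complete in error  */
           cleanup_queues();      /* 4: teardown no longer stalls                */
           return 0;
       }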
      Signed-off-by: James Smart <james.smart@broadcom.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • nvme-fc: fix rogue admin cmds stalling teardown · d625d05e
      Authored by James Smart
      When connectivity is lost to a device, the association is terminated
      and the blk-mq queues are quiesced/stopped. When connectivity is
      re-established, they are resumed.
      
      If an admin command is received while connectivity is lost, the ioctl
      queues the command on the admin_q and the command stalls (the thread
      issuing the ioctl hangs/waits). If the connectivity is lost long
      enough such that the controller is then deleted, the delete code
      makes its calls to initiate the delete, which then expects the core
      layer to call the transport when all references are removed and the
      controller can be freed.  Unfortunately, nothing in this path dequeued
      the admin command, so a reference sits outstanding and things stop,
      hanging the delete indefinitely.
      
      Correct by unquiescing the admin queue in the delete association. This
      means any admin command (which should only be from an ioctl) issued
      after connectivity is lost will detect the controller is in a
      reconnecting state and will (fast) fail the command. Thus, a pending
      reference can no longer be created.  Once connectivity is re-established,
      a new ioctl/admin command would see proper device state and function again.
      Signed-off-by: James Smart <james.smart@broadcom.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
  18. 08 Jan 2018, 1 commit
  19. 15 Dec 2017, 1 commit
  20. 25 Nov 2017, 1 commit
  21. 20 Nov 2017, 1 commit
  22. 11 Nov 2017, 5 commits
  23. 01 Nov 2017, 3 commits