1. 10 Jul, 2019 (3 commits)
  2. 13 May, 2019 (1 commit)
  3. 01 May, 2019 (2 commits)
  4. 29 Mar, 2019 (1 commit)
  5. 20 Feb, 2019 (2 commits)
  6. 24 Jan, 2019 (1 commit)
  7. 10 Jan, 2019 (1 commit)
  8. 08 Dec, 2018 (1 commit)
  9. 05 Dec, 2018 (1 commit)
  10. 26 Nov, 2018 (1 commit)
    • block: make blk_poll() take a parameter on whether to spin or not · 0a1b8b87
      Jens Axboe authored
      blk_poll() has always kept spinning until it found an IO. This is
      fine for SYNC polling, since we need to find one request we have
      pending, but in preparation for ASYNC polling it can be beneficial
      to just check if we have any entries available or not.
      
      Existing callers are converted to pass in 'spin == true', to retain
      the old behavior (see the sketch after this entry).
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
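      A minimal sketch of the change described above, assuming the post-change
      prototype is int blk_poll(struct request_queue *q, blk_qc_t cookie, bool spin);
      the caller below is illustrative only, not code from the tree:

        #include <linux/blkdev.h>
        #include <linux/sched.h>

        /*
         * Illustrative sync caller: passing spin == true keeps the old
         * "spin until a completion is found" behavior, as the converted
         * callers do.  'done' is a hypothetical flag set by the bio's
         * end_io handler.
         */
        static void example_sync_poll_wait(struct request_queue *q,
                                           blk_qc_t cookie, bool *done)
        {
                for (;;) {
                        set_current_state(TASK_UNINTERRUPTIBLE);
                        if (READ_ONCE(*done))
                                break;
                        if (blk_poll(q, cookie, true) <= 0)
                                io_schedule();
                }
                __set_current_state(TASK_RUNNING);
        }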
  11. 19 Nov, 2018 (1 commit)
    • block: have ->poll_fn() return number of entries polled · 85f4d4b6
      Jens Axboe authored
      We currently only really support sync poll, i.e. poll with one IO in flight.
      This prepares us for supporting async poll.
      
      Note that the returned value isn't necessarily 100% accurate. If poll
      races with IRQ completion, we assume that the fact that the task is now
      runnable means we found at least one entry. In reality it could be more
      than 1, or not even 1. This is fine; the caller will just need to take
      this into account (see the sketch after this entry).
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
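      A hedged sketch of what a driver-side ->poll_fn can look like after this
      change, assuming the queue-level callback keeps the shape
      int (*poll_fn)(struct request_queue *, blk_qc_t); all names below are
      hypothetical, not identifiers from the tree:

        #include <linux/blkdev.h>

        /*
         * Hypothetical helper: walk the completion queue selected by
         * 'cookie' and return how many entries were reaped.  A real
         * driver would process its hardware CQEs here.
         */
        static int example_reap_completions(struct request_queue *q,
                                            blk_qc_t cookie)
        {
                return 0;       /* placeholder */
        }

        /*
         * The callback now returns an entry count; 0 means "found
         * nothing".  As the commit message notes, the count may be
         * approximate when polling races with an IRQ completion.
         */
        static int example_poll_fn(struct request_queue *q, blk_qc_t cookie)
        {
                return example_reap_completions(q, cookie);
        }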
  12. 16 Nov, 2018 (1 commit)
  13. 09 Nov, 2018 (1 commit)
  14. 17 Oct, 2018 (1 commit)
  15. 02 Oct, 2018 (2 commits)
    • nvme: take node locality into account when selecting a path · f3334447
      Christoph Hellwig authored
      Make current_path an array with an entry for every possible node, and
      cache the best path on a per-node basis.  Take the node distance into
      account when selecting it.  This is primarily useful for dual-ported PCIe
      devices that are connected to PCIe root ports on different sockets (a
      sketch of the per-node selection follows this entry).
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Keith Busch <keith.busch@intel.com>
      Reviewed-by: Hannes Reinecke <hare@suse.com>
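      A sketch of the idea only, under the assumption that the head structure
      gains a per-node current_path cache; the structure and helper names below
      are illustrative, not the driver's actual identifiers:

        #include <linux/kernel.h>
        #include <linux/numa.h>
        #include <linux/rculist.h>
        #include <linux/topology.h>     /* numa_node_id(), node_distance() */

        struct example_ns {
                struct list_head        siblings;   /* link in the head's path list */
                int                     ctrl_node;  /* NUMA node of the controller */
        };

        struct example_ns_head {
                struct list_head        paths;
                /* one cached "best path" per possible NUMA node */
                struct example_ns __rcu *current_path[MAX_NUMNODES];
        };

        /* Pick the path whose controller is closest to the local node and
         * cache it for that node. */
        static struct example_ns *example_find_path(struct example_ns_head *head)
        {
                struct example_ns *ns, *found = NULL;
                int node = numa_node_id();
                int best = INT_MAX;

                list_for_each_entry_rcu(ns, &head->paths, siblings) {
                        int distance = node_distance(node, ns->ctrl_node);

                        if (distance < best) {
                                best = distance;
                                found = ns;
                        }
                }
                if (found)
                        rcu_assign_pointer(head->current_path[node], found);
                return found;
        }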
    • nvme: call nvme_complete_rq when nvmf_check_ready fails for mpath I/O · 783f4a44
      James Smart authored
      When an I/O is rejected by nvmf_check_ready() due to validation of the
      controller state, nvmf_fail_nonready_command() will normally return
      BLK_STS_RESOURCE to requeue and retry.  However, if the controller is
      dying or the I/O is marked for NVMe multipath, the I/O is failed so that
      the controller can terminate or so that the I/O can be issued on a
      different path.  Unfortunately, as this reject point is before the
      transport has accepted the command, blk-mq ends up completing the I/O
      and never calls nvme_complete_rq(), which is where multipath may preserve
      or re-route the I/O.  The end result is that the device user sees an
      EIO error.
      
      Example: single-path connectivity, the controller is under load, and a
      reset is induced.  An I/O is received:
      
        a) while the reset state has been set but the queues have yet to be
           stopped; or
        b) after queues are started (at end of reset) but before the reconnect
           has completed.
      
      The I/O finishes with an EIO status.
      
      This patch makes the following changes (see the sketch after this entry):
      
        - Adds the HOST_PATH_ERROR pathing status from TP4028.
        - Modifies the reject point so that the request appears to queue
          successfully, but is actually completed with the new pathing status
          and a call to nvme_complete_rq().
        - nvme_complete_rq() recognizes the new status, avoids resetting the
          controller (a reset was likely already done in order to get this new
          status), and calls the multipath code to clear the current path that
          errored.  This allows the next command (a retry or a new command) to
          select a new path if there is one.
      Signed-off-by: James Smart <jsmart2021@gmail.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
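      A hedged sketch of the changed reject point, reconstructed from the
      description above (the reject helper in the tree is
      nvmf_fail_nonready_command(); the exact conditions and code may differ):

        #include <linux/blk-mq.h>
        #include "nvme.h"       /* NVMe host driver private header */

        /*
         * Sketch: if the controller is going away or the request is marked
         * for multipath, report BLK_STS_OK so blk-mq believes the request
         * queued, but complete it with the new TP4028 path error status via
         * nvme_complete_rq(), which lets multipath clear the failed path and
         * retry elsewhere.  Otherwise return BLK_STS_RESOURCE to requeue.
         */
        static blk_status_t example_fail_nonready_command(struct nvme_ctrl *ctrl,
                                                          struct request *rq)
        {
                if (ctrl->state != NVME_CTRL_DELETING &&
                    ctrl->state != NVME_CTRL_DEAD &&
                    !(rq->cmd_flags & REQ_NVME_MPATH))
                        return BLK_STS_RESOURCE;

                nvme_req(rq)->status = NVME_SC_HOST_PATH_ERROR;
                blk_mq_start_request(rq);
                nvme_complete_rq(rq);
                return BLK_STS_OK;
        }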
  16. 28 Sep, 2018 (2 commits)
  17. 26 Sep, 2018 (1 commit)
  18. 07 Aug, 2018 (1 commit)
  19. 28 Jul, 2018 (2 commits)
  20. 11 Jun, 2018 (1 commit)
  21. 03 May, 2018 (2 commits)
  22. 26 Mar, 2018 (1 commit)
  23. 09 Mar, 2018 (1 commit)
  24. 07 Mar, 2018 (1 commit)
  25. 01 Mar, 2018 (1 commit)
  26. 28 Feb, 2018 (1 commit)
    • nvme-multipath: fix sysfs dangerously created links · 9bd82b1a
      Baegjae Sung authored
      If multipathing is enabled, each NVMe subsystem creates a head
      namespace (e.g., nvme0n1) and multiple private namespaces
      (e.g., nvme0c0n1 and nvme0c1n1) in sysfs. When the links for the
      private namespaces are created, the head namespace's links are used,
      so the namespace creation order must be followed (e.g., nvme0n1 ->
      nvme0c1n1). If the order is not followed, the sysfs links will be
      incomplete or a kernel panic will occur.
      
      The kernel panic was:
        kernel BUG at fs/sysfs/symlink.c:27!
        Call Trace:
          nvme_mpath_add_disk_links+0x5d/0x80 [nvme_core]
          nvme_validate_ns+0x5c2/0x850 [nvme_core]
          nvme_scan_work+0x1af/0x2d0 [nvme_core]
      
      Correct order:
      Context A     Context B
      nvme0n1
      nvme0c0n1     nvme0c1n1
      
      Incorrect order:
      Context A     Context B
                    nvme0c1n1
      nvme0n1
      nvme0c0n1
      
      nvme_mpath_add_disk() (which creates the head namespace) is called
      just before nvme_mpath_add_disk_links() (which creates the private
      namespaces). In nvme_mpath_add_disk(), the first context acquires
      the subsystem lock and creates the head namespace; the other
      contexts, once they acquire the lock, see GENHD_FL_UP set on the
      head namespace and do nothing (see the sketch after this entry).
      We verified the code with and without multipathing using dual-port
      NVMe SSDs from three vendors.
      Signed-off-by: Baegjae Sung <baegjae@gmail.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Keith Busch <keith.busch@intel.com>
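      A minimal sketch of the serialization described above; only the
      subsystem-lock / GENHD_FL_UP pattern is taken from the message, and the
      structure and function names are illustrative:

        #include <linux/genhd.h>
        #include <linux/mutex.h>

        struct example_head {
                struct mutex    *subsys_lock;   /* the owning subsystem's lock */
                struct gendisk  *disk;          /* shared head disk, e.g. nvme0n1 */
                struct device   *parent;        /* device the head disk hangs off */
        };

        /*
         * The first scanning context to take the subsystem lock registers
         * the head disk; later contexts see GENHD_FL_UP already set and
         * skip it.  This guarantees the head exists before any per-path
         * sysfs links that point at it are created.
         */
        static void example_mpath_add_disk(struct example_head *head)
        {
                mutex_lock(head->subsys_lock);
                if (!(head->disk->flags & GENHD_FL_UP))
                        device_add_disk(head->parent, head->disk);
                mutex_unlock(head->subsys_lock);
        }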
  27. 11 Jan, 2018 (2 commits)
  28. 20 Nov, 2017 (1 commit)
  29. 11 Nov, 2017 (3 commits)