1. 20 October 2017, 5 commits
    • nvme-fc: correct io timeout behavior · 134aedc9
      James Smart authored
      The transport io timeout behavior wasn't quite correct. It ignored
      that the io error handler is supposed to be synchronous, so it possibly
      allowed the blk request to be restarted while the associated io was
      still aborting. Timeouts on reserved commands, those used for
      association create, never timed out, so they hung forever.
      
      To correct:
      If an io times out while a remoteport is not connected, just
      restart the io timer. The lack of connectivity will simultaneously
      be resetting the controller, so the reset path will abort and terminate
      the io.
      
      If an io times out while it is marked for transport abort, just
      reset the io timer. The abort process is underway and will complete
      the io.
      
      Otherwise, if an io times out, abort the io. If the abort was
      unsuccessful (unlikely), give up and return not handled.
      
      If the abort was successful, as the abort process is underway it will
      terminate the io, so rather than synchronously waiting, just restart
      the io timer.
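      
      For illustration only, here is a minimal C sketch of the timeout
      decisions above, written as a blk-mq style timeout handler. The
      helper name, the return enum and the boolean parameters are
      assumptions for this sketch, not the actual driver code:
      
        /* illustrative sketch, not drivers/nvme/host/fc.c */
        enum eh_ret { EH_NOT_HANDLED, EH_RESET_TIMER };
        
        static enum eh_ret fc_io_timeout(int remoteport_connected,
                                         int marked_for_abort,
                                         int abort_succeeded)
        {
                /* no connectivity: the controller reset path will abort
                 * and terminate the io, so just restart the timer */
                if (!remoteport_connected)
                        return EH_RESET_TIMER;
        
                /* a transport abort is already underway; it completes the io */
                if (marked_for_abort)
                        return EH_RESET_TIMER;
        
                /* otherwise abort the io; if that fails, give up */
                if (!abort_succeeded)
                        return EH_NOT_HANDLED;
        
                /* the abort will terminate the io, so rearm rather than wait */
                return EH_RESET_TIMER;
        }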
      Signed-off-by: James Smart <james.smart@broadcom.com>
      Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      134aedc9
    • nvme-fc: correct io termination handling · 0a02e39f
      James Smart authored
      The io completion handling for i/o's that are failing due to
      a transport error or association termination had issues, causing
      io failures (DNR set, so retries didn't kick in) or long stalls.
      
      Change the io completion handler for the following items:
      
      When an io has been completed due to a transport abort (based on an
      exchange error) or when marked as aborted as part of an association
      termination (FCOP_FLAGS_TERMIO), set the NVME completion status to
      NVME_SC_ABORTED. By default, do not set DNR on the status so that a
      retry can be attempted after association recreate.
      
      In cases where an io is failed (non-successful nvme status including
      aborted), if the controller is being deleted (blk_queue_dying) or
      the io was part of the ios used for association creation (ctrl state
      is NEW or RECONNECTING), then additionally set the DNR bit so the io
      will not be retried. If the failed io was part of association creation,
      the failure will tear down the partially completed association and
      typically restart a new reconnect attempt (another create association
      later).
      
      Rearranged code flow to remove a largely unneeded local variable.
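      
      A minimal sketch of the DNR policy described above; the helper and
      its boolean inputs are assumptions for illustration, not the
      driver's completion handler:
      
        #include <stdbool.h>
        
        /* aborted ios default to "retry after association recreate";
         * DNR is only set when a retry could never succeed */
        static bool set_dnr_on_failure(bool io_failed, bool ctrl_deleting,
                                       bool assoc_create_io)
        {
                return io_failed && (ctrl_deleting || assoc_create_io);
        }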
      Signed-off-by: James Smart <james.smart@broadcom.com>
      Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      0a02e39f
    • nvme-pci: add SGL support · a7a7cbe3
      Chaitanya Kulkarni authored
      This adds SGL support to the NVMe PCIe driver, based on an earlier patch
      from Rajiv Shanmugam Madeswaran <smrajiv15 at gmail.com>. This patch
      refactors the original code and adds a new module parameter, sgl_threshold,
      to determine whether to use SGLs or PRPs for IOs.
      
      The use of SGLs is controlled by the sgl_threshold module parameter,
      which allows SGLs to be used conditionally when the average request
      segment size (avg_seg_size) is greater than sgl_threshold. In the original
      patch, the decision to use SGLs depended only on the IO size; with the
      new approach we consider not only the IO size but also the number of
      physical segments present in the IO.
      
      We calculate avg_seg_size based on request payload bytes and number
      of physical segments present in the request.
      
      For example:
      
      1. blk_rq_nr_phys_segments = 2 blk_rq_payload_bytes = 8k
      avg_seg_size = 4K use sgl if avg_seg_size >= sgl_threshold.
      
      2. blk_rq_nr_phys_segments = 2 blk_rq_payload_bytes = 64k
      avg_seg_size = 32K use sgl if avg_seg_size >= sgl_threshold.
      
      3. blk_rq_nr_phys_segments = 16 blk_rq_payload_bytes = 64k
      avg_seg_size = 4K use sgl if avg_seg_size >= sgl_threshold.
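      
      The examples above reduce to a simple threshold check. A
      self-contained sketch (the helper name is made up, and the real
      driver additionally checks that the controller supports SGLs):
      
        /* illustrative SGL-vs-PRP decision based on average segment size */
        static int use_sgl_for_io(unsigned int payload_bytes,
                                  unsigned int nr_phys_segments,
                                  unsigned int sgl_threshold)
        {
                unsigned int avg_seg_size;
        
                /* sgl_threshold == 0 disables SGL use entirely */
                if (!sgl_threshold || !nr_phys_segments)
                        return 0;
        
                avg_seg_size = payload_bytes / nr_phys_segments;
        
                /* e.g. 64k payload in 16 segments -> 4k average */
                return avg_seg_size >= sgl_threshold;
        }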
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Reviewed-by: Keith Busch <keith.busch@intel.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      a7a7cbe3
    • nvme: use ida_simple_{get,remove} for the controller instance · 9843f685
      Christoph Hellwig authored
      Switch to the ida_simple_* helpers instead of opencoding them.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Keith Busch <keith.busch@intel.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
      9843f685
    • nvmet: Change max_nsid in subsystem due to ns_disable if needed · ba2dec35
      Roy Shterman authored
      When we disable a namespace whose nsid equals the subsystem's
      max_nsid, we need to search for the next largest nsid in this
      subsystem. If the subsystem has no more namespaces, we set max_nsid
      to 0; otherwise we take the nsid from the last namespace in the
      namespaces list, because the list is kept sorted while inserting.
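      
      In rough C terms (an illustrative sketch, not the nvmet code), the
      new max_nsid is picked like this:
      
        /* nsids[] is kept sorted while inserting, as noted above */
        static unsigned int pick_new_max_nsid(const unsigned int *nsids,
                                              unsigned int count)
        {
                if (!count)
                        return 0;          /* no namespaces left */
                return nsids[count - 1];   /* largest remaining nsid */
        }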
      Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Roy Shterman <roys@lightbitslabs.com>
      Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
      [hch: slight refactor]
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      ba2dec35
  2. 19 October 2017, 15 commits
  3. 16 October 2017, 1 commit
  4. 05 October 2017, 1 commit
  5. 04 October 2017, 6 commits
  6. 03 October 2017, 12 commits
    • block: move __elv_next_request to blk-core.c · 9c988374
      Christoph Hellwig authored
      No need to have this helper inline in a header.  Also drop the __ prefix.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      9c988374
    • block, bfq: decrease burst size when queues in burst exit · 7cb04004
      Paolo Valente authored
      If many queues belonging to the same group happen to be created
      shortly after each other, then the concurrent processes associated
      with these queues typically have a common goal, and they get it done
      as soon as possible if not hampered by device idling.  Examples are
      processes spawned by git grep, or by systemd during boot. As for
      device idling, this mechanism is currently necessary for weight
      raising to succeed in its goal: privileging I/O.  In view of these
      facts, BFQ does not provide the above queues with either weight
      raising or device idling.
      
      On the other hand, a burst of queue creations may be caused also by
      the start-up of a complex application. In this case, these queues need
      usually to be served one after the other, and as quickly as possible,
      to maximise responsiveness. Therefore, in this case the best strategy
      is to weight-raise all the queues created during the burst, i.e., the
      exact opposite of the strategy for the above case.
      
      To distinguish between the two cases, BFQ uses an empirical burst-size
      threshold, found through extensive tests and monitoring of daily
      usage. Only large bursts, i.e., bursts with a size above this
      threshold, are considered as generated by a high number of parallel
      processes. In this respect, upstart-based boot proved to be rather
      hard to detect as generating a large burst of queue creations, because
      with upstart most of the queues created in a burst exit *before* the
      next queues in the same burst are created. To address this issue, I
      changed the burst-detection mechanism so as to not decrease the size
      of the current burst even if one of the queues in the burst is
      eliminated.
      
      Unfortunately, this missing decrease causes false positives on very
      fast systems: on the start-up of a complex application, such as
      libreoffice writer, so many queues are created, served and exited
      shortly after each other, that a large burst of queue creations is
      wrongly detected as occurring. These false positives just disappear if
      the size of a burst is decreased when one of the queues in the burst
      exits. This commit restores the missing burst-size decrease, relying
      on the fact that upstart is apparently unlikely to be used on systems
      running this and future versions of the kernel.
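      
      Schematically (an illustrative sketch, not the bfq code; names and
      the threshold are placeholders), the bookkeeping looks like this:
      
        struct burst {
                unsigned int size;        /* queues created in this burst */
                unsigned int threshold;   /* empirical "large burst" limit */
        };
        
        static void burst_queue_created(struct burst *b)
        {
                b->size++;
        }
        
        /* the decrease this commit restores */
        static void burst_queue_exited(struct burst *b)
        {
                if (b->size)
                        b->size--;
        }
        
        static int burst_is_large(const struct burst *b)
        {
                return b->size > b->threshold;
        }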
      Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
      Signed-off-by: Mauro Andreolini <mauro.andreolini@unimore.it>
      Signed-off-by: Angelo Ruocco <angeloruocco90@gmail.com>
      Tested-by: Mirko Montanari <mirkomontanari91@gmail.com>
      Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
      Tested-by: Lee Tibbert <lee.tibbert@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      7cb04004
    • block, bfq: let early-merged queues be weight-raised on split too · 894df937
      Paolo Valente authored
      A just-created bfq_queue, say Q, may happen to be merged with another
      bfq_queue on the very first invocation of the function
      __bfq_insert_request. In such a case, even if Q would clearly deserve
      interactive weight raising (as it has just been created), the function
      bfq_add_request is not invoked for Q, and thus weight raising is not
      activated for Q. As a consequence, when the state of Q
      is saved for a possible future restore, after a split of Q from the
      other bfq_queue(s), such a state happens to be (unjustly)
      non-weight-raised. Then the bfq_queue will not enjoy any weight
      raising on the split, even though it should still be in an interactive
      weight-raising period when the split occurs.
      
      This commit solves this problem as follows, for a just-created
      bfq_queue that is being early-merged: it stores directly, in the saved
      state of the bfq_queue, the weight-raising state that would have been
      assigned to the bfq_queue if not early-merged.
      Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
      Tested-by: Angelo Ruocco <angeloruocco90@gmail.com>
      Tested-by: Mirko Montanari <mirkomontanari91@gmail.com>
      Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
      Tested-by: Lee Tibbert <lee.tibbert@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      894df937
    • block, bfq: check and switch back to interactive wr also on queue split · 3e2bdd6d
      Paolo Valente authored
      As already explained in the message of commit "block, bfq: fix
      wrong init of saved start time for weight raising", if a soft
      real-time weight-raising period happens to be nested in a larger
      interactive weight-raising period, then BFQ restores the interactive
      weight raising at the end of the soft real-time weight raising. In
      particular, BFQ checks whether the latter has ended only on request
      dispatches.
      
      Unfortunately, the above scheme fails to restore interactive weight
      raising in the following corner case: if a bfq_queue, say Q,
      1) Is merged with another bfq_queue while it is in a nested soft
      real-time weight-raising period. The weight-raising state of Q is
      then saved, and not considered any longer until a split occurs.
      2) Is split from the other bfq_queue(s) at a time instant when its
      soft real-time weight raising is already finished.
      On the split, while resuming the previous, soft real-time
      weight-raised state of the bfq_queue Q, BFQ checks whether the
      current soft real-time weight-raising period is actually over. If so,
      BFQ switches weight raising off for Q, *without* checking whether the
      soft real-time period was actually nested in a not-yet-finished
      interactive weight-raising period.
      
      This commit addresses this issue by adding the above missing check in
      bfq_queue splits, and restoring interactive weight raising if needed.
      Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
      Tested-by: Angelo Ruocco <angeloruocco90@gmail.com>
      Tested-by: Mirko Montanari <mirkomontanari91@gmail.com>
      Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
      Tested-by: Lee Tibbert <lee.tibbert@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      3e2bdd6d
    • block, bfq: fix wrong init of saved start time for weight raising · 4baa8bb1
      Paolo Valente authored
      This commit fixes a bug that causes bfq to fail to guarantee a high
      responsiveness on some drives, if there is heavy random read+write I/O
      in the background. More precisely, such a failure allowed this bug to
      be found [1], but the bug may well cause other yet unreported
      anomalies.
      
      BFQ raises the weight of the bfq_queues associated with soft real-time
      applications, to privilege the I/O, and thus reduce latency, for these
      applications. This mechanism is named soft-real-time weight raising in
      BFQ. A soft real-time period may happen to be nested into an
      interactive weight raising period, i.e., it may happen that, when a
      bfq_queue switches to a soft real-time weight-raised state, the
      bfq_queue is already being weight-raised because deemed interactive
      too. In this case, BFQ saves, in a special variable
      wr_start_at_switch_to_srt, the time instant when the interactive
      weight-raising period started for the bfq_queue, i.e., the time
      instant when BFQ started to deem the bfq_queue interactive. This value
      is then used to check whether the interactive weight-raising period
      would still be in progress when the soft real-time weight-raising
      period ends.  If so, interactive weight raising is restored for the
      bfq_queue. This restore is useful, in particular, because it prevents
      bfq_queues from losing their interactive weight raising prematurely,
      as a consequence of spurious, short-lived soft real-time
      weight-raising periods caused by wrong detections as soft real-time.
      
      If, instead, a bfq_queue switches to soft-real-time weight raising
      while it *is not* already in an interactive weight-raising period,
      then the variable wr_start_at_switch_to_srt has no meaning during the
      following soft real-time weight-raising period. Unfortunately, the
      handling of this case is wrong in BFQ: not only is the variable not
      flagged as meaningless, but it is also set to the time when
      the switch to soft real-time weight raising occurs. This may cause an
      interactive weight-raising period to be considered mistakenly as still
      in progress, and thus a spurious interactive weight-raising period to
      start for the bfq_queue, at the end of the soft-real-time
      weight-raising period. In particular the spurious interactive
      weight-raising period will be considered as still in progress, if the
      soft-real-time weight-raising period does not last very long. The
      bfq_queue will then be wrongly privileged and, if I/O bound, will
      unjustly steal bandwidth from truly interactive or soft real-time
      bfq_queues, harming responsiveness and low latency.
      
      This commit fixes this issue by just setting wr_start_at_switch_to_srt
      to minus infinity (farthest past time instant according to jiffies
      macros): when the soft-real-time weight-raising period ends, certainly
      no interactive weight-raising period will be considered as still in
      progress.
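      
      Roughly (an illustrative sketch, not the bfq source), the fix
      amounts to initializing the saved value like this:
      
        #include <limits.h>
        
        /* a jiffies value so far in the past that any interactive
         * weight-raising period measured from it is long over */
        static unsigned long smallest_from_now(unsigned long now_jiffies)
        {
                return now_jiffies - ((LONG_MAX >> 1) - 1);
        }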
      
      [1] Background I/O Type: Random - Background I/O mix: Reads and writes
      - Application to start: LibreOffice Writer in
      http://www.phoronix.com/scan.php?page=news_item&px=Linux-4.13-IO-Laptop
      Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
      Signed-off-by: Angelo Ruocco <angeloruocco90@gmail.com>
      Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
      Tested-by: Lee Tibbert <lee.tibbert@gmail.com>
      Tested-by: Mirko Montanari <mirkomontanari91@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      4baa8bb1
    • writeback: only allow one inflight and pending full flush · aac8d41c
      Jens Axboe authored
      When someone calls wakeup_flusher_threads() or
      wakeup_flusher_threads_bdi(), they schedule writeback of all dirty
      pages in the system (or on that bdi). If we are tight on memory, we
      can get tons of these queued from kswapd/vmscan. This causes (at
      least) two problems:
      
      1) We consume a ton of memory just allocating writeback work items.
         We've seen as much as 600 million of these writeback work items
         pending. That's a lot of memory to pointlessly hold hostage,
         while the box is under memory pressure.
      
      2) We spend so much time processing these work items, that we
         introduce a softlockup in writeback processing. This is because
         each of the writeback work items doesn't end up doing any work (it's
         hard when you have millions of identical ones coming in to the
         flush machinery), so we just sit in a tight loop pulling work
         items and deleting/freeing them.
      
      Fix this by adding a 'start_all' bit to the writeback structure, and
      set that when someone attempts to flush all dirty pages. The bit is
      cleared when we start writeback on that work item. If the bit is
      already set when we attempt to queue !nr_pages writeback, then we
      simply ignore it.
      
      This provides us one full flush in flight, with one pending as well,
      and makes for more efficient handling of this type of writeback.
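      
      A minimal sketch of that gating; the structure and helper names are
      assumptions for illustration, not the fs-writeback code:
      
        #include <stdbool.h>
        
        struct wb_ctx {
                bool start_all;   /* a full flush is already queued */
        };
        
        /* return true if a new full-flush work item should be queued */
        static bool may_queue_full_flush(struct wb_ctx *wb)
        {
                if (wb->start_all)
                        return false;     /* one already pending: ignore */
                wb->start_all = true;     /* cleared when writeback starts */
                return true;
        }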
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Tested-by: Chris Mason <clm@fb.com>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      aac8d41c
    • writeback: move nr_pages == 0 logic to one location · e8e8a0c6
      Jens Axboe authored
      Now that we have no external callers of wb_start_writeback(), we
      can shuffle the passing in of 'nr_pages'. Everybody passes in 0
      at this point, so just kill the argument and move the dirty
      count retrieval to that function.
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Tested-by: Chris Mason <clm@fb.com>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      e8e8a0c6
    • writeback: make wb_start_writeback() static · 9dfb176f
      Jens Axboe authored
      We don't have any callers outside of fs-writeback.c anymore,
      make it private.
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Tested-by: Chris Mason <clm@fb.com>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      9dfb176f
    • writeback: pass in '0' for nr_pages writeback in laptop mode · 0ab29fd0
      Jens Axboe authored
      Laptop mode really wants to write back the number of dirty
      pages and inodes. Instead of calculating this in the caller,
      just pass in 0 and let wakeup_flusher_threads() handle it.
      
      Use the new wakeup_flusher_threads_bdi() instead of rolling
      our own.
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Tested-by: Chris Mason <clm@fb.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      0ab29fd0
    • writeback: provide a wakeup_flusher_threads_bdi() · 595043e5
      Jens Axboe authored
      Similar to wakeup_flusher_threads(), except that we only wake
      up the flusher threads on the specified backing device.
      
      No functional changes in this patch.
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Tested-by: Chris Mason <clm@fb.com>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      595043e5
    • writeback: remove 'range_cyclic' argument for wb_start_writeback() · 47410d88
      Jens Axboe authored
      All the callers pass in 'true' for range_cyclic, so kill the
      argument.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      47410d88
    • writeback: switch wakeup_flusher_threads() to cyclic writeback · d31cd9d3
      Jens Axboe authored
      We're writing back the full range of dirty pages on the devices;
      there's no point in making this special and not doing normal
      range-cyclic writeback.
      Reviewed-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      d31cd9d3