1. 18 April 2018, 1 commit
  2. 27 March 2018, 1 commit
  3. 08 February 2018, 1 commit
    • block, bfq: add requeue-request hook · a7877390
      Authored by Paolo Valente
      Commit 'a6a252e6 ("blk-mq-sched: decide how to handle flush rq via
      RQF_FLUSH_SEQ")' makes all non-flush re-prepared requests for a device
      be re-inserted into the active I/O scheduler for that device. As a
      consequence, I/O schedulers may get the same request inserted again,
      even several times, without a finish_request invoked on that request
      before each re-insertion.
      
      This fact is the cause of the failure reported in [1]. For an I/O
      scheduler, every re-insertion of the same re-prepared request is
      equivalent to the insertion of a new request. For schedulers like
      mq-deadline or kyber, this fact causes no harm. In contrast, it
      confuses a stateful scheduler like BFQ, which keeps state for an I/O
      request, until the finish_request hook is invoked on the request. In
      particular, BFQ may get stuck, waiting forever for the number of
      request dispatches, of the same request, to be balanced by an equal
      number of request completions (while there will be one completion for
      that request). In this state, BFQ may refuse to serve I/O requests
      from other bfq_queues. The hang reported in [1] then follows.
      
      However, the above re-prepared requests undergo a requeue, thus the
      requeue_request hook of the active elevator is invoked for these
      requests, if set. This commit then addresses the above issue by
      properly implementing the hook requeue_request in BFQ.
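
      In code terms, one way to satisfy the hook is to route requeues and
      completions through a single teardown helper, so the per-request
      scheduler state is released before any re-insertion. A hedged sketch
      follows, assuming the elevator_mq_ops layout of this kernel series
      (helper body condensed; rq->elv.priv is where bfq keeps per-request
      state):

      static void bfq_finish_requeue_request(struct request *rq)
      {
              struct bfq_queue *bfqq = rq->elv.priv[1]; /* per-rq bfq state */

              if (!bfqq)
                      return; /* rq never went through bfq: nothing to undo */

              /* ...balance dispatch/completion accounting held for rq... */
      }

      static struct elevator_type iosched_bfq_mq = {
              .ops.mq = {
                      .requeue_request = bfq_finish_requeue_request,
                      .finish_request  = bfq_finish_requeue_request,
                      /* ...remaining hooks... */
              },
      };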
      
      [1] https://marc.info/?l=linux-block&m=151211117608676

      Reported-by: Ivan Kozik <ivan@ludios.org>
      Reported-by: Alban Browaeys <alban.browaeys@gmail.com>
      Tested-by: Mike Galbraith <efault@gmx.de>
      Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
      Signed-off-by: Serena Ziviani <ziviani.serena@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  4. 18 January 2018, 2 commits
    • block, bfq: limit sectors served with interactive weight raising · 8a8747dc
      Authored by Paolo Valente
      To maximise responsiveness, BFQ raises the weight, and performs device
      idling, for bfq_queues associated with processes deemed as
      interactive. In particular, weight raising has a maximum duration,
      equal to the time needed to start a large application. If a
      weight-raised process goes on doing I/O beyond this maximum duration,
      it loses weight-raising.
      
      This mechanism is evidently vulnerable to the following false
      positives: I/O-bound applications that will go on doing I/O for much
      longer than the duration of weight-raising. These applications have
      basically no benefit from being weight-raised at the beginning of
      their I/O. On the opposite end, while being weight-raised, these
      applications
      a) unjustly steal throughput from applications that may truly need
      low latency;
      b) make BFQ uselessly perform device idling; device idling results
      in loss of device throughput with most flash-based storage, and may
      increase latencies when used purposelessly.
      
      This commit adds a countermeasure to reduce both the above
      problems. To introduce this countermeasure, we provide the following
      extra piece of information (full details in the comments added by this
      commit). During the start-up of the large application used as a
      reference to set the duration of weight-raising, involved processes
      transfer at most ~110K sectors each. Accordingly, a process initially
      deemed as interactive has no right to be weight-raised any longer,
      once it has transferred 110K sectors or more.
      
      Based on this consideration, this commit early-ends weight-raising
      for a bfq_queue if the latter happens to have received an amount of
      service at least equal to 110K sectors (actually, a little bit more,
      to keep a safety margin). I/O-bound applications that reach a high
      throughput, such as file copy, get to this threshold much before the
      allowed weight-raising period finishes. Thus this early ending of
      weight-raising reduces the amount of time during which these
      applications cause the problems described above.
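
      A sketch of the resulting check (the constant below, ~110K sectors
      plus a safety margin, the field names, and the bfq_bfqq_end_wr()
      helper are illustrative here):

      static const unsigned long max_service_from_wr = 120000; /* sectors */

      /* end interactive weight raising early once the queue has been
       * served for more sectors than the reference start-up transfers */
      if (bfqq->wr_coeff > 1 &&
          bfqq->service_from_wr > max_service_from_wr)
              bfq_bfqq_end_wr(bfqq);
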
      Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
      Tested-by: Holger Hoffstätte <holger@applied-asynchrony.com>
      Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • block, bfq: limit tags for writes and async I/O · a52a69ea
      Authored by Paolo Valente
      Asynchronous I/O can easily starve synchronous I/O (both sync reads
      and sync writes), by consuming all request tags. Similarly, storms of
      synchronous writes, such as those that sync(2) may trigger, can starve
      synchronous reads. In their turn, these two problems may also cause
      BFQ to lose control on latency for interactive and soft real-time
      applications. For example, on a PLEXTOR PX-256M5S SSD, LibreOffice
      Writer takes 0.6 seconds to start if the device is idle, but it takes
      more than 45 seconds (!) if there are sequential writes in the
      background.
      
      This commit addresses this issue by limiting the maximum percentage of
      tags that asynchronous I/O requests and synchronous write requests can
      consume. In particular, this commit grants a higher threshold to
      synchronous writes, to prevent the latter from being starved by
      asynchronous I/O.
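
      The natural attachment point for such a cap, in this kernel series,
      is the elevator's .limit_depth hook. A hedged sketch, with
      illustrative fractions (the real commit derives its thresholds
      differently):

      static void bfq_limit_depth(unsigned int op, struct blk_mq_alloc_data *data)
      {
              unsigned int depth = data->q->nr_requests;

              if (op_is_sync(op) && !op_is_write(op))
                      return;                 /* sync reads keep full depth */

              /* grant sync writes a higher share than async I/O */
              data->shallow_depth =
                      op_is_sync(op) ? (depth * 3) / 4 : depth / 4;
      }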
      
      According to the above test, LibreOffice Writer now starts in about
      1.2 seconds on average, regardless of the background workload, and
      apart from some rare outlier. To check this improvement, run, e.g.,
      sudo ./comm_startup_lat.sh bfq 5 5 seq 10 "lowriter --terminate_after_init"
      for the comm_startup_lat benchmark in the S suite [1].
      
      [1] https://github.com/Algodev-github/S

      Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
      Tested-by: Holger Hoffstätte <holger@applied-asynchrony.com>
      Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  5. 10 January 2018, 2 commits
  6. 09 January 2018, 1 commit
  7. 06 January 2018, 7 commits
    • block, bfq: remove batches of confusing ifdefs · 9b25bd03
      Authored by Paolo Valente
      Commit a33801e8 ("block, bfq: move debug blkio stats behind
      CONFIG_DEBUG_BLK_CGROUP") introduced two batches of confusing ifdefs:
      one reported in [1], plus a similar one in another function. This
      commit removes both batches, in the way suggested in [1].
      
      [1] https://www.spinics.net/lists/linux-block/msg20043.html
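
      The consolidation pattern is roughly the following (helper named
      after the commit's intent, signature simplified): each stats-update
      helper is defined once, with an empty variant when
      CONFIG_DEBUG_BLK_CGROUP is off, so call sites need no ifdefs at all.

      #ifdef CONFIG_DEBUG_BLK_CGROUP
      static void bfq_update_dispatch_stats(struct request_queue *q,
                                            struct request *rq)
      {
              /* take the needed locks and update the debug blkg stats */
      }
      #else
      static inline void bfq_update_dispatch_stats(struct request_queue *q,
                                                   struct request *rq)
      {
      }
      #endif /* CONFIG_DEBUG_BLK_CGROUP */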
      
      Fixes: a33801e8 ("block, bfq: move debug blkio stats behind CONFIG_DEBUG_BLK_CGROUP")
      Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
      Tested-by: Luca Miccio <lucmiccio@gmail.com>
      Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • block, bfq: consider also past I/O in soft real-time detection · a34b0244
      Authored by Paolo Valente
      BFQ privileges the I/O of soft real-time applications, such as video
      players, to guarantee these applications a high bandwidth and a low
      latency. In this respect, it is not easy to correctly detect when an
      application is soft real-time. A particularly nasty false positive is
      that of an I/O-bound application that occasionally happens to meet all
      requirements to be deemed as soft real-time. After being detected as
      soft real-time, such an application monopolizes the device. Fortunately,
      BFQ will realize soon that the application is actually not soft
      real-time and suspend every privilege. Yet, the application may happen
      again to be wrongly detected as soft real-time, and so on.
      
      As highlighted by our tests, this problem causes BFQ to occasionally
      fail to guarantee a high responsiveness, in the presence of heavy
      background I/O workloads. The reason is that the background workload
      happens to be detected as soft real-time, more or less frequently,
      during the execution of the interactive task under test. To give an
      idea, because of this problem, LibreOffice Writer occasionally takes 8
      seconds, instead of 3, to start up, if there are sequential reads and
      writes in the background, on a Kingston SSDNow V300.
      
      This commit addresses this issue by leveraging the following facts.
      
      The reason why some applications are detected as soft real-time despite
      all BFQ checks to avoid false positives, is simply that, during high
      CPU or storage-device load, I/O-bound applications may happen to do
      I/O slowly enough to meet all soft real-time requirements, and pass
      all BFQ extra checks. Yet, this happens only for limited time periods:
      slow-speed time intervals are usually interspersed between other time
      intervals during which these applications do I/O at a very high speed.
      To exploit these facts, this commit introduces a little change, in the
      detection of soft real-time behavior, to systematically consider also
      the recent past: the higher the speed was in the recent past, the
      later next I/O should arrive for the application to be considered as
      soft real-time. At the beginning of a slow-speed interval, the minimum
      arrival time allowed for the next I/O usually happens to still be so
      high as to fall *after* the end of the slow-speed period itself. As a
      consequence, the application does not risk being deemed soft
      real-time during the slow-speed interval. Then, during the next
      high-speed interval, the application cannot, evidently, be deemed as
      soft real-time (exactly because of its speed), and so on.
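
      A hedged sketch of the rule (field names follow bfq's, the formula is
      simplified): the earliest instant at which the next request may
      arrive, for the queue to still qualify as soft real-time, grows with
      the service received while backlogged, and never moves backwards.

      static unsigned long bfq_bfqq_softrt_next_start(struct bfq_data *bfqd,
                                                      struct bfq_queue *bfqq)
      {
              /* a fast recent past pushes the deadline well into a later
               * slow period, keeping the queue disqualified throughout */
              return max(bfqq->soft_rt_next_start,
                         bfqq->last_idle_bklogged +
                         HZ * bfqq->service_from_backlogged /
                         bfqd->bfq_wr_max_softrt_rate);
      }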
      
      This extra filtering proved to be rather effective: in the above test,
      the frequency of false positives became so low that the start-up time
      was 3 seconds in all iterations (apart from occasional outliers,
      caused by page-cache-management issues, which are out of the scope of
      this commit, and cannot be solved by an I/O scheduler).
      Tested-by: Lee Tibbert <lee.tibbert@gmail.com>
      Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
      Signed-off-by: Angelo Ruocco <angeloruocco90@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • block, bfq: remove superfluous check in queue-merging setup · 4403e4e4
      Authored by Angelo Ruocco
      When two or more processes do I/O in such a way that their requests are
      sequential with respect to one another, BFQ merges the bfq_queues associated
      with the processes. This way the overall I/O pattern becomes sequential,
      and thus there is a boost in throughput.
      These cooperating processes usually start or restart to do I/O shortly
      after each other. So, in order to avoid merging non-cooperating processes,
      BFQ ensures that none of these queues has been in weight raising for too
      long.
      
      In this respect, from commit "block, bfq-sq, bfq-mq: let a queue be merged
      only shortly after being created", BFQ checks whether any queue (and not
      only weight-raised ones) has been doing I/O continuously for too long to be
      merged.
      
      This new additional check makes the first one useless: a queue doing
      I/O for long enough, if weight-raised, is also a queue that has been in
      weight raising for too long to be merged. Accordingly, this commit
      removes the first check.
      Signed-off-by: Angelo Ruocco <angeloruocco90@gmail.com>
      Signed-off-by: Paolo Valente <paolo.valente@linaro.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • block, bfq: let a queue be merged only shortly after starting I/O · 7b8fa3b9
      Authored by Paolo Valente
      In BFQ and CFQ, two processes are said to be cooperating if they do
      I/O in such a way that the union of their I/O requests yields a
      sequential I/O pattern. To get such a sequential I/O pattern out of
      the non-sequential pattern of each cooperating process, BFQ and CFQ
      merge the queues associated with these processes. In more detail,
      cooperating processes, and thus their associated queues, usually
      start, or restart, to do I/O shortly after each other. This is the
      case, e.g., for the I/O threads of KVM/QEMU and of the dump
      utility. Based on this assumption, this commit allows a bfq_queue to
      be merged only during a short time interval (100ms) after it starts,
      or re-starts, to do I/O.  This filtering provides two important
      benefits.
      
      First, it greatly reduces the probability that two non-cooperating
      processes have their queues merged by mistake, if they just happen to
      do I/O close to each other for a short time interval. These spurious
      merges cause loss of service guarantees. A low-weight bfq_queue may
      unjustly get more than its expected share of the throughput: if such a
      low-weight queue is merged with a high-weight queue, then the I/O for
      the low-weight queue is served as if the queue had a high weight. This
      may damage other high-weight queues unexpectedly.  For instance,
      because of this issue, lxterminal occasionally took 7.5 seconds to
      start, instead of 6.5 seconds, when some sequential readers and
      writers did I/O in the background on a FUJITSU MHX2300BT HDD.  The
      reason is that the bfq_queues associated with some of the readers or
      the writers were merged with the high-weight queues of some processes
      that had to do some urgent but little I/O. The readers then exploited
      the inherited high weight for all or most of their I/O, during the
      start-up of the terminal. The filtering introduced by this commit
      eliminated any outlier caused by spurious queue merges in our start-up
      time tests.
      
      This filtering also provides a little boost of the throughput
      sustainable by BFQ: 3-4%, depending on the CPU. The reason is that,
      once a bfq_queue cannot be merged any longer, this commit makes BFQ
      stop updating the data needed to handle merging for the queue.
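
      A minimal sketch of the filter (bfq_merge_time_limit matching the
      100ms of the text; field names mirror the description and may differ
      from the exact patch):

      static const unsigned long bfq_merge_time_limit = HZ / 10; /* 100 ms */

      /* a queue is too old to be merged once it has been doing I/O for
       * longer than the window after its first request (re)arrived */
      static bool bfq_too_late_for_merging(struct bfq_queue *bfqq)
      {
              return bfqq->service_from_backlogged > 0 &&
                     time_is_before_jiffies(bfqq->first_IO_time +
                                            bfq_merge_time_limit);
      }
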
      Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
      Signed-off-by: Angelo Ruocco <angeloruocco90@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • block, bfq: check low_latency flag in bfq_bfqq_save_state() · 1be6e8a9
      Authored by Angelo Ruocco
      A just-created bfq_queue will certainly be deemed as interactive on
      the arrival of its first I/O request, if the low_latency flag is
      set. Yet, if the queue is merged with another queue on the arrival of
      its first I/O request, it will not have the chance to be flagged as
      interactive. Nevertheless, if the queue is then split soon enough, it
      has to be flagged as interactive after the split.
      
      To handle this early-merge scenario correctly, BFQ saves the state of
      the queue, on the merge, as if the latter had already been deemed
      interactive. So, if the queue is split soon, it will get
      weight-raised, because the previous state of the queue is resumed on
      the split.
      
      Unfortunately, in the act of saving the state of the newly-created
      queue, BFQ doesn't check whether the low_latency flag is set, and this
      causes early-merged queues to be then weight-raised, on queue splits,
      even if low_latency is off. This commit addresses this problem by
      adding the missing check.
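
      A sketch of the added guard in bfq_bfqq_save_state() (surrounding
      logic condensed; the decisive part is the low_latency test):

      if (unlikely(bfq_bfqq_just_created(bfqq) &&
                   !bfq_bfqq_in_large_burst(bfqq) &&
                   bfqq->bfqd->low_latency)) {
              /* save the state as if the queue were already deemed
               * interactive, so a soon-split queue gets weight-raised */
      } else {
              /* save the queue's current, possibly non-raised, state */
      }
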
      Signed-off-by: Angelo Ruocco <angeloruocco90@gmail.com>
      Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • block, bfq: add missing rq_pos_tree update on rq removal · 05e90283
      Authored by Paolo Valente
      If two processes do I/O close to each other, then BFQ merges the
      bfq_queues associated with these processes, to get a more sequential
      I/O, and thus a higher throughput.  In this respect, to detect whether
      two processes are doing I/O close to each other, BFQ keeps a list of
      the head-of-line I/O requests of all active bfq_queues.  The list is
      ordered by initial sectors, and implemented through a red-black tree
      (rq_pos_tree).
      
      Unfortunately, the update of the rq_pos_tree was incomplete, because
      the tree was not updated on the removal of the head-of-line I/O
      request of a bfq_queue, in case the queue did not remain empty. This
      commit adds the missing update.
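
      A hedged sketch of the missing update (helper names follow bfq's
      style; bodies condensed): when the removed request was the queue's
      head-of-line request and the queue stays non-empty, the queue must be
      re-keyed in the rq_pos_tree under its new head request's sector.

      static void bfq_remove_request(struct request_queue *q, struct request *rq)
      {
              struct bfq_queue *bfqq = RQ_BFQQ(rq);
              struct bfq_data *bfqd = bfqq->bfqd;

              if (bfqq->next_rq == rq)
                      bfqq->next_rq = bfq_find_next_rq(bfqd, bfqq, rq);

              /* ...unlink rq from the queue's lists... */

              /* the missing piece: re-position the queue in the tree */
              if (bfqq->next_rq)
                      bfq_pos_tree_add_move(bfqd, bfqq);
      }
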
      Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
      Signed-off-by: Angelo Ruocco <angeloruocco90@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • block, bfq: increase threshold to deem I/O as random · f0ba5ea2
      Authored by Paolo Valente
      If two processes do I/O close to each other, i.e., are cooperating
      processes in BFQ's (and CFQ's) nomenclature, then BFQ merges their
      associated bfq_queues, so as to get sequential I/O from the union of
      the I/O requests of the processes, and thus reach a higher
      throughput. A merged queue is then split if its I/O stops being
      sequential. In this respect, BFQ deems the I/O of a bfq_queue as
      (mostly) sequential only if less than 4 I/O requests are random, out
      of the last 32 requests inserted into the queue.
      
      Unfortunately, extensive testing (with the interleaved_io benchmark of
      the S suite [1], and with real applications spawning cooperating
      processes) has clearly shown that, with such a low threshold, only a
      rather low I/O throughput may be reached when several cooperating
      processes do I/O. In particular, the outcome of each test run was
      bimodal: if queue merging occurred and was stable during the test,
      then the throughput was close to the peak rate of the storage device,
      otherwise the throughput was arbitrarily low (usually around 1/10 of
      the peak rate with a rotational device). The probability of getting the
      unlucky outcomes grew with the number of cooperating processes: it was
      already significant with 5 processes, and close to one with 7 or more
      processes.
      
      The cause of the low throughput in the unlucky runs was that the
      merged queues containing the I/O of these cooperating processes were
      soon split, because they contained more random I/O requests than those
      tolerated by the 4/32 threshold, but
      - that I/O would have however allowed the storage device to reach
        peak throughput or almost peak throughput;
      - in contrast, the I/O of these processes, if served individually
        (from separate queues), yielded a rather low throughput.
      
      So we repeated our tests with increasing values of the threshold,
      until we found the minimum value (19) for which we obtained maximum
      throughput, reliably, with at least up to 9 cooperating
      processes. Then we checked that the use of that higher threshold value
      did not cause any regression for any other benchmark in the suite [1].
      This commit raises the threshold to such a higher value.
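
      The mechanism is a 32-bit sliding window, one bit per recent request,
      set when the request was seeky; the change is then a one-line
      threshold bump (sketch, matching the description above):

      /* the queue's I/O counts as (mostly) random only if more than 19
       * of its last 32 requests were seeky; the bound used to be 4 */
      #define BFQQ_SEEKY(bfqq)    (hweight32((bfqq)->seek_history) > 19)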
      
      [1] https://github.com/Algodev-github/S

      Signed-off-by: Angelo Ruocco <angeloruocco90@gmail.com>
      Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  8. 15 November 2017, 3 commits
    • block, bfq: move debug blkio stats behind CONFIG_DEBUG_BLK_CGROUP · a33801e8
      Authored by Luca Miccio
      BFQ currently creates, and updates, its own instance of the whole
      set of blkio statistics that cfq creates. Yet, from the comments
      of Tejun Heo in [1], it turned out that most of these statistics
      are meant/useful only for debugging. This commit makes BFQ create
      the latter, debugging statistics only if the option
      CONFIG_DEBUG_BLK_CGROUP is set.
      
      By doing so, this commit also enables BFQ to enjoy a high performance
      boost. The reason is that, if CONFIG_DEBUG_BLK_CGROUP is not set, then
      BFQ has to update far fewer statistics, and, in particular, not the
      heaviest to update.  To give an idea of the benefits, if
      CONFIG_DEBUG_BLK_CGROUP is not set, then, on an Intel i7-4850HQ, and
      with 8 threads doing random I/O in parallel on null_blk (configured
      with 0 latency), the throughput of BFQ grows from 310 to 400 KIOPS
      (+30%). We have measured similar or even much higher boosts with other
      CPUs: e.g., +45% with an ARM Cortex-A53 octa-core. Our results have
      been obtained and can be reproduced very easily with the script in [1].
      
      [1] https://www.spinics.net/lists/linux-block/msg18943.html

      Suggested-by: Tejun Heo <tj@kernel.org>
      Suggested-by: Ulf Hansson <ulf.hansson@linaro.org>
      Tested-by: Lee Tibbert <lee.tibbert@gmail.com>
      Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
      Signed-off-by: Luca Miccio <lucmiccio@gmail.com>
      Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • block, bfq: update blkio stats outside the scheduler lock · 24bfd19b
      Authored by Paolo Valente
      bfq invokes various blkg_*stats_* functions to update the statistics
      contained in the special files blkio.bfq.* in the blkio controller
      groups, i.e., the I/O accounting related to the proportional-share
      policy provided by bfq. The execution of these functions takes a
      considerable percentage, about 40%, of the total per-request execution
      time of bfq (i.e., of the sum of the execution time of all the bfq
      functions that have to be executed to process an I/O request from its
      creation to its destruction).  This reduces the request-processing
      rate sustainable by bfq noticeably, even on a multicore CPU. In fact,
      the bfq functions that invoke blkg_*stats_* functions cannot be
      executed in parallel with the rest of the code of bfq, because both
      are executed under the same per-device scheduler lock.
      
      To reduce this slowdown, this commit moves, wherever possible, the
      invocation of these functions (more precisely, of the bfq functions
      that invoke blkg_*stats_* functions) outside the critical sections
      protected by the scheduler lock.
      
      With this change, and with all blkio.bfq.* statistics enabled, the
      throughput grows, e.g., from 250 to 310 KIOPS (+25%) on an Intel
      i7-4850HQ, in case of 8 threads doing random I/O in parallel on
      null_blk, with the latter configured with 0 latency. We obtained the
      same or higher throughput boosts, up to +30%, with other processors
      (some figures are reported in the documentation). For our tests, we
      used the script [1], with which our results can be easily reproduced.
      
      NOTE. This commit still protects the invocation of blkg_*stats_*
      functions with the request_queue lock, because the group these
      functions are invoked on may otherwise disappear before or while these
      functions are executed.  Fortunately, tests without even this lock
      show, by difference, that the serialization caused by this lock has
      little impact (at most ~5% of throughput reduction).
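
      The shape of the change, as a hedged sketch from the dispatch path
      (variable handling condensed; bfq_update_dispatch_stats() stands for
      the moved blkg_*stats_* invocations):

      spin_lock_irq(&bfqd->lock);
      rq = __bfq_dispatch_request(hctx);
      in_serv_queue = bfqd->in_service_queue;
      waiting_rq = in_serv_queue && bfq_bfqq_wait_request(in_serv_queue);
      spin_unlock_irq(&bfqd->lock);

      /* stats are updated after dropping the scheduler lock; the
       * request_queue lock still pins the group they are charged to */
      bfq_update_dispatch_stats(hctx->queue, rq, in_serv_queue, waiting_rq);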
      
      [1] https://github.com/Algodev-github/IOSpeed

      Tested-by: Lee Tibbert <lee.tibbert@gmail.com>
      Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
      Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
      Signed-off-by: Luca Miccio <lucmiccio@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • block, bfq: add missing invocations of bfqg_stats_update_io_add/remove · 614822f8
      Authored by Luca Miccio
      bfqg_stats_update_io_add and bfqg_stats_update_io_remove are to be
      invoked, respectively, when an I/O request enters and when an I/O
      request exits the scheduler. Unfortunately, bfq does not fully comply
      with this scheme, because it does not invoke these functions for
      requests that are inserted into or extracted from its priority
      dispatch list. This commit fixes this mistake.
      Tested-by: Lee Tibbert <lee.tibbert@gmail.com>
      Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
      Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
      Signed-off-by: Luca Miccio <lucmiccio@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  9. 09 October 2017, 2 commits
    • block, bfq: fix unbalanced decrements of burst size · 99fead8d
      Authored by Paolo Valente
      The commit "block, bfq: decrease burst size when queues in burst
      exit" introduced the decrement of burst_size on the removal of a
      bfq_queue from the burst list. Unfortunately, this decrement can
      happen to be performed even when burst size is already equal to 0,
      because of unbalanced decrements. A description follows of the cause
      of these unbalanced decrements, namely a wrong assumption, and of the
      way this wrong assumption leads to unbalanced decrements.
      
      The wrong assumption is that a bfq_queue can exit only if the process
      associated with the bfq_queue has exited. This is false, because a
      bfq_queue, say Q, may exit also as a consequence of a merge with
      another bfq_queue. In this case, Q exits because the I/O of its
      associated process has been redirected to another bfq_queue.
      
      The decrement unbalance occurs because Q may then be re-created after
      a split, and added back to the current burst list, *without*
      incrementing burst_size. burst_size is not incremented because Q is
      not a new bfq_queue added to the burst list, but a bfq_queue only
      temporarily removed from the list, and, before the commit "bfq-sq,
      bfq-mq: decrease burst size when queues in burst exit", burst_size was
      not decremented when Q was removed.
      
      This commit addresses this issue by just checking whether the exiting
      bfq_queue is a merged bfq_queue, and, in that case, not decrementing
      burst_size. Unfortunately, this still leaves room for unbalanced
      decrements, in the following rarer case: on a split, the bfq_queue
      happens to be inserted into a different burst list than that it was
      removed from when merged. If this happens, the number of elements in
      the new burst list becomes higher than burst_size (by one). When the
      bfq_queue then exits, it is of course not in a merged state any
      longer, thus burst_size is decremented, which results in an unbalanced
      decrement.  To handle this sporadic, unlucky case in a simple way,
      this commit also checks that burst_size is larger than 0 before
      decrementing it.
      
      Finally, this commit removes a useless, extra check: the check that
      the bfq_queue is sync, performed before checking whether the bfq_queue
      is in the burst list. This extra check is redundant, because only sync
      bfq_queues can be inserted into the burst list.
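
      A sketch of the resulting exit-path logic (in bfq, a merged queue can
      be recognized by its NULL bic; the rest is condensed):

      if (!hlist_unhashed(&bfqq->burst_list_node)) {
              hlist_del_init(&bfqq->burst_list_node);
              /* decrement only for non-merged queues, and only if
               * there is actually something to decrement */
              if (bfqq->bic && bfqq->bfqd->burst_size > 0)
                      bfqq->bfqd->burst_size--;
      }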
      
      Fixes: 7cb04004 ("block, bfq: decrease burst size when queues in burst exit")
      Reported-by: Philip Müller <philm@manjaro.org>
      Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
      Signed-off-by: Angelo Ruocco <angeloruocco90@gmail.com>
      Tested-by: Philip Müller <philm@manjaro.org>
      Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
      Tested-by: Lee Tibbert <lee.tibbert@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • block,bfq: Disable writeback throttling · b5dc5d4d
      Authored by Luca Miccio
      Similarly to CFQ, BFQ has its write-throttling heuristics, and it
      is better not to combine them with further write-throttling
      heuristics of a different nature.
      So this commit disables write-back throttling for a device if BFQ
      is used as I/O scheduler for that device.
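
      In code terms this is a one-liner at scheduler-init time; a sketch,
      assuming the blk-wbt helper available in this kernel series:

      static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
      {
              /* ...allocate and set up the scheduler data... */

              /* BFQ throttles writes itself: switch off the generic
               * writeback-throttling heuristics for this queue */
              wbt_disable_default(q);
              return 0;
      }
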
      Signed-off-by: Luca Miccio <lucmiccio@gmail.com>
      Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
      Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
      Tested-by: Lee Tibbert <lee.tibbert@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  10. 03 October 2017, 4 commits
    • block, bfq: decrease burst size when queues in burst exit · 7cb04004
      Authored by Paolo Valente
      If many queues belonging to the same group happen to be created
      shortly after each other, then the concurrent processes associated
      with these queues have typically a common goal, and they get it done
      as soon as possible if not hampered by device idling.  Examples are
      processes spawned by git grep, or by systemd during boot. As for
      device idling, this mechanism is currently necessary for weight
      raising to succeed in its goal: privileging I/O.  In view of these
      facts, BFQ does not provide the above queues with either weight
      raising or device idling.
      
      On the other hand, a burst of queue creations may be caused also by
      the start-up of a complex application. In this case, these queues need
      usually to be served one after the other, and as quickly as possible,
      to maximise responsiveness. Therefore, in this case the best strategy
      is to weight-raise all the queues created during the burst, i.e., the
      exact opposite of the strategy for the above case.
      
      To distinguish between the two cases, BFQ uses an empirical burst-size
      threshold, found through extensive tests and monitoring of daily
      usage. Only large bursts, i.e., burst with a size above this
      threshold, are considered as generated by a high number of parallel
      processes. In this respect, upstart-based boot proved to be rather
      hard to detect as generating a large burst of queue creations, because
      with upstart most of the queues created in a burst exit *before* the
      next queues in the same burst are created. To address this issue, I
      changed the burst-detection mechanism so as to not decrease the size
      of the current burst even if one of the queues in the burst is
      eliminated.
      
      Unfortunately, this missing decrease causes false positives on very
      fast systems: on the start-up of a complex application, such as
      libreoffice writer, so many queues are created, served and exited
      shortly after each other, that a large burst of queue creations is
      wrongly detected as occurring. These false positives just disappear if
      the size of a burst is decreased when one of the queues in the burst
      exits. This commit restores the missing burst-size decrease, relying
      on the fact that upstart is apparently unlikely to be used on systems
      running this and future versions of the kernel.
      Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
      Signed-off-by: Mauro Andreolini <mauro.andreolini@unimore.it>
      Signed-off-by: Angelo Ruocco <angeloruocco90@gmail.com>
      Tested-by: Mirko Montanari <mirkomontanari91@gmail.com>
      Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
      Tested-by: Lee Tibbert <lee.tibbert@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • block, bfq: let early-merged queues be weight-raised on split too · 894df937
      Authored by Paolo Valente
      A just-created bfq_queue, say Q, may happen to be merged with another
      bfq_queue on the very first invocation of the function
      __bfq_insert_request. In such a case, even if Q would clearly deserve
      interactive weight raising (as it has just been created), the function
      bfq_add_request, which would activate weight raising for Q, is not
      invoked for Q. As a consequence, when the state of Q
      is saved for a possible future restore, after a split of Q from the
      other bfq_queue(s), such a state happens to be (unjustly)
      non-weight-raised. Then the bfq_queue will not enjoy any weight
      raising on the split, even if it should still be in an interactive
      weight-raising period when the split occurs.
      
      This commit solves this problem as follows, for a just-created
      bfq_queue that is being early-merged: it stores directly, in the saved
      state of the bfq_queue, the weight-raising state that would have been
      assigned to the bfq_queue if not early-merged.
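
      A hedged sketch of the stored state (bic is the per-process bfq I/O
      context; the saved_* fields follow bfq's naming, details condensed):

      /* while saving the state of a just-created, early-merged queue,
       * record the interactive weight-raising parameters the queue would
       * have been given by bfq_add_request() had it not been merged */
      if (unlikely(bfq_bfqq_just_created(bfqq))) {
              bic->saved_wr_coeff = bfqq->bfqd->bfq_wr_coeff;
              bic->saved_wr_cur_max_time = bfq_wr_duration(bfqq->bfqd);
              bic->saved_last_wr_start_finish = jiffies;
      }
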
      Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
      Tested-by: Angelo Ruocco <angeloruocco90@gmail.com>
      Tested-by: Mirko Montanari <mirkomontanari91@gmail.com>
      Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
      Tested-by: Lee Tibbert <lee.tibbert@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • block, bfq: check and switch back to interactive wr also on queue split · 3e2bdd6d
      Authored by Paolo Valente
      As already explained in the message of commit "block, bfq: fix
      wrong init of saved start time for weight raising", if a soft
      real-time weight-raising period happens to be nested in a larger
      interactive weight-raising period, then BFQ restores the interactive
      weight raising at the end of the soft real-time weight raising. In
      particular, BFQ checks whether the latter has ended only on request
      dispatches.
      
      Unfortunately, the above scheme fails to restore interactive weight
      raising in the following corner case: if a bfq_queue, say Q,
      1) Is merged with another bfq_queue while it is in a nested soft
      real-time weight-raising period. The weight-raising state of Q is
      then saved, and not considered any longer until a split occurs.
      2) Is split from the other bfq_queue(s) at a time instant when its
      soft real-time weight raising is already finished.
      On the split, while resuming the previous, soft real-time
      weight-raised state of the bfq_queue Q, BFQ checks whether the
      current soft real-time weight-raising period is actually over. If so,
      BFQ switches weight raising off for Q, *without* checking whether the
      soft real-time period was actually nested in a non-yet-finished
      interactive weight-raising period.
      
      This commit addresses this issue by adding the above missing check in
      bfq_queue splits, and restoring interactive weight raising if needed.
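
      A condensed, hedged sketch of the added logic in the split/resume
      path (names follow bfq's conventions):

      /* the resumed soft-rt weight-raising period is over: before
       * switching weight raising off, check whether the surrounding
       * interactive period would still be in progress */
      if (bfqq->wr_cur_max_time == bfqd->bfq_wr_rt_max_time &&
          time_is_after_eq_jiffies(bfqq->wr_start_at_switch_to_srt +
                                   bfq_wr_duration(bfqd)))
              switch_back_to_interactive_wr(bfqq, bfqd);
      else
              bfqq->wr_coeff = 1; /* weight raising really over */
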
      Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
      Tested-by: Angelo Ruocco <angeloruocco90@gmail.com>
      Tested-by: Mirko Montanari <mirkomontanari91@gmail.com>
      Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
      Tested-by: Lee Tibbert <lee.tibbert@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • block, bfq: fix wrong init of saved start time for weight raising · 4baa8bb1
      Authored by Paolo Valente
      This commit fixes a bug that causes bfq to fail to guarantee a high
      responsiveness on some drives, if there is heavy random read+write I/O
      in the background. More precisely, such a failure allowed this bug to
      be found [1], but the bug may well cause other yet unreported
      anomalies.
      
      BFQ raises the weight of the bfq_queues associated with soft real-time
      applications, to privilege the I/O, and thus reduce latency, for these
      applications. This mechanism is named soft-real-time weight raising in
      BFQ. A soft real-time period may happen to be nested into an
      interactive weight raising period, i.e., it may happen that, when a
      bfq_queue switches to a soft real-time weight-raised state, the
      bfq_queue is already being weight-raised because deemed interactive
      too. In this case, BFQ saves in a special variable
      wr_start_at_switch_to_srt, the time instant when the interactive
      weight-raising period started for the bfq_queue, i.e., the time
      instant when BFQ started to deem the bfq_queue interactive. This value
      is then used to check whether the interactive weight-raising period
      would still be in progress when the soft real-time weight-raising
      period ends.  If so, interactive weight raising is restored for the
      bfq_queue. This restore is useful, in particular, because it prevents
      bfq_queues from losing their interactive weight raising prematurely,
      as a consequence of spurious, short-lived soft real-time
      weight-raising periods caused by wrong detections as soft real-time.
      
      If, instead, a bfq_queue switches to soft-real-time weight raising
      while it *is not* already in an interactive weight-raising period,
      then the variable wr_start_at_switch_to_srt has no meaning during the
      following soft real-time weight-raising period. Unfortunately the
      handling of this case is wrong in BFQ: not only is the variable not
      flagged somehow as meaningless, but it is also set to the time when
      the switch to soft real-time weight-raising occurs. This may cause an
      interactive weight-raising period to be considered mistakenly as still
      in progress, and thus a spurious interactive weight-raising period to
      start for the bfq_queue, at the end of the soft-real-time
      weight-raising period. In particular the spurious interactive
      weight-raising period will be considered as still in progress, if the
      soft-real-time weight-raising period does not last very long. The
      bfq_queue will then be wrongly privileged and, if I/O bound, will
      unjustly steal bandwidth from truly interactive or soft real-time
      bfq_queues, harming responsiveness and low latency.
      
      This commit fixes this issue by just setting wr_start_at_switch_to_srt
      to minus infinity (farthest past time instant according to jiffies
      macros): when the soft-real-time weight-raising period ends, certainly
      no interactive weight-raising period will be considered as still in
      progress.
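
      In jiffies terms, "minus infinity" can be expressed with
      MAX_JIFFY_OFFSET; a sketch of the fix (helper named as in the bfq
      source):

      /* the farthest representable past instant: any "is the interactive
       * period still in progress?" test against it certainly fails */
      static unsigned long bfq_smallest_from_now(void)
      {
              return jiffies - MAX_JIFFY_OFFSET;
      }

      /* on a switch to soft-rt weight raising with no interactive
       * weight-raising period in progress: */
      bfqq->wr_start_at_switch_to_srt = bfq_smallest_from_now();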
      
      [1] Background I/O Type: Random - Background I/O mix: Reads and writes
      - Application to start: LibreOffice Writer in
      http://www.phoronix.com/scan.php?page=news_item&px=Linux-4.13-IO-Laptop

      Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
      Signed-off-by: Angelo Ruocco <angeloruocco90@gmail.com>
      Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
      Tested-by: Lee Tibbert <lee.tibbert@gmail.com>
      Tested-by: Mirko Montanari <mirkomontanari91@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  11. 02 September 2017, 4 commits
  12. 31 August 2017, 1 commit
    • block, bfq: make lookup_next_entity push up vtime on expirations · 80294c3b
      Authored by Paolo Valente
      To provide a very smooth service, bfq starts to serve a bfq_queue
      only if the queue is 'eligible', i.e., if the same queue would
      have started to be served in the ideal, perfectly fair system that
      bfq simulates internally. This is obtained by associating each
      queue with a virtual start time, and by computing a special system
      virtual time quantity: a queue is eligible only if the system
      virtual time has reached the virtual start time of the
      queue. Finally, bfq guarantees that, when a new queue must be set
      in service, there is always at least one eligible entity for each
      active parent entity in the scheduler. To provide this guarantee,
      the function __bfq_lookup_next_entity pushes up, for each parent
      entity on which it is invoked, the system virtual time to the
      minimum among the virtual start times of the entities in the
      active tree for the parent entity (more precisely, the push up
      occurs if the system virtual time happens to be lower than all
      such virtual start times).
      
      There is however a circumstance in which __bfq_lookup_next_entity
      cannot push up the system virtual time for a parent entity, even
      if the system virtual time is lower than the virtual start times
      of all the child entities in the active tree. It happens if one of
      the child entities is in service. In fact, in such a case, there
      is already an eligible entity, the in-service one, even if it may
      not be present in the active tree (because in-service entities
      may be removed from the active tree).
      
      Unfortunately, in the last re-design of the
      hierarchical-scheduling engine, the reset of the pointer to the
      in-service entity for a given parent entity--reset to be done as a
      consequence of the expiration of the in-service entity--always
      happens after the function __bfq_lookup_next_entity has been
      invoked. This causes the function to think that there is still an
      entity in service for the parent entity, and then that the system
      virtual time cannot be pushed up, even if actually such a
      no-more-in-service entity has already been properly reinserted
      into the active tree (or in some other tree if no more
      active). Yet, the system virtual time *had* to be pushed up, to be
      ready to correctly choose the next queue to serve. Because of the
      lack of this push up, bfq may wrongly set in service a queue that
      had been speculatively pre-computed as the possible
      next-in-service queue, but that would no longer be the one to serve
      after the expiration and the reinsertion into the active trees of
      the previously in-service entities.
      
      This commit addresses this issue by making
      __bfq_lookup_next_entity properly push up the system virtual time
      if an expiration is occurring.
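
      A hedged sketch of the fix's shape (signatures simplified): an
      expiration flag is threaded down to the lookup, so that an expiring
      in-service entity no longer blocks the push-up.

      /* caller: an expiring entity no longer counts as in service */
      entity = __bfq_lookup_next_entity(st,
                                        sd->in_service_entity &&
                                        !expiration);

      /* inside __bfq_lookup_next_entity(): with no in-service entity
       * left, the virtual time may be pushed up to the smallest virtual
       * start time in the active tree */
      if (!in_service)
              bfq_update_vtime(st, bfq_calc_vtime_jump(st));
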
      Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
      Tested-by: Lee Tibbert <lee.tibbert@gmail.com>
      Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  13. 30 August 2017, 1 commit
  14. 29 August 2017, 1 commit
  15. 24 August 2017, 1 commit
  16. 11 August 2017, 2 commits
    • block, bfq: boost throughput with flash-based non-queueing devices · edaf9428
      Authored by Paolo Valente
      When a queue associated with a process remains empty, there are cases
      where throughput gets boosted if the device is idled to await the
      arrival of a new I/O request for that queue. Currently, BFQ assumes
      that one of these cases is when the device has no internal queueing
      (regardless of the properties of the I/O being served). Unfortunately,
      this condition has proved to be too general. So, this commit refines it
      as "the device has no internal queueing and is rotational".
      
      This refinement provides a significant throughput boost with random
      I/O, on flash-based storage without internal queueing. For example, on
      a HiKey board, throughput increases by up to 125%, growing, e.g., from
      6.9MB/s to 15.6MB/s with two or three random readers in parallel.
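
      A sketch of the refined condition (hw_tag is bfq's runtime estimate
      of internal queueing; blk_queue_nonrot() is the standard rotational
      test; the full expression in bfq contains further terms):

      /* idling is worth it, throughput-wise, only on rotational devices
       * without internal queueing; on flash without queueing, serving
       * other queues instead of idling pays off */
      bool rot_without_queueing =
              !blk_queue_nonrot(bfqd->queue) && !bfqd->hw_tag;
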
      Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
      Signed-off-by: Luca Miccio <lucmiccio@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • block,bfq: refactor device-idling logic · d5be3fef
      Authored by Paolo Valente
      The logic that decides whether to idle the device is scattered across
      three functions. Almost all of the logic is in the function
      bfq_bfqq_may_idle, but (1) part of the decision is made in
      bfq_update_idle_window, and (2) the function bfq_bfqq_must_idle may
      switch off idling regardless of the output of bfq_bfqq_may_idle. In
      addition, both bfq_update_idle_window and bfq_bfqq_must_idle make
      their decisions as a function of parameters that are used, for similar
      purposes, also in bfq_bfqq_may_idle. This commit addresses these
      issues by moving all the logic into bfq_bfqq_may_idle.
      Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  17. 12 July 2017, 1 commit
    • bfq: dispatch request to prevent queue stalling after the request completion · 3f7cb4f4
      Authored by Hou Tao
      There are mq devices (e.g., virtio-blk, nbd and loopback) which don't
      invoke blk_mq_run_hw_queues() after the completion of a request.
      If bfq is enabled on these devices and the slice_idle attribute or
      strict_guarantees attribute is set to zero, it is possible that
      after a request completion the remaining requests of a busy bfq queue
      will stall in the bfq scheduler until a new request arrives.
      
      To fix the scheduler latency problem, we need to check whether or not
      all issued requests have completed, and dispatch more requests to the
      driver if there is no request in the driver.
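
      A hedged sketch of the check on the completion path (rq_in_driver
      and queued are bfq_data counters; the call site is condensed):

      /* after a request completes: if the driver has nothing in flight
       * but bfq still holds queued requests, kick the hardware queues,
       * since some drivers never re-run them after a completion */
      if (bfqd->rq_in_driver == 0 && bfqd->queued > 0)
              blk_mq_run_hw_queues(bfqd->queue, true);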
      
      The problem can be reproduced by running the following script
      on a virtio-blk device with nr_hw_queues as 1:
      
      #!/bin/sh
      
      dev=vdb
      # mount point for dev
      mp=/tmp/mnt
      cd $mp
      
      job=strict.job
      cat <<EOF > $job
      [global]
      direct=1
      bs=4k
      size=256M
      rw=write
      ioengine=libaio
      iodepth=128
      runtime=5
      time_based
      
      [1]
      filename=1.data
      
      [2]
      new_group
      filename=2.data
      EOF
      
      echo bfq > /sys/block/$dev/queue/scheduler
      echo 1 > /sys/block/$dev/queue/iosched/strict_guarantees
      fio $job
      Signed-off-by: Hou Tao <houtao1@huawei.com>
      Reviewed-by: Paolo Valente <paolo.valente@linaro.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  18. 04 July 2017, 1 commit
    • block, bfq: don't change ioprio class for a bfq_queue on a service tree · 431b17f9
      Authored by Paolo Valente
      On each deactivation or re-scheduling (after being served) of a
      bfq_queue, BFQ invokes the function __bfq_entity_update_weight_prio(),
      to perform pending updates of ioprio, weight and ioprio class for the
      bfq_queue. BFQ also invokes this function on I/O-request dispatches,
      to raise or lower weights more quickly when needed, thereby improving
      latency. However, the entity representing the bfq_queue may be on the
      active (sub)tree of a service tree when this happens, and, although
      with a very low probability, the bfq_queue may happen to also have a
      pending change of its ioprio class. If both conditions hold when
      __bfq_entity_update_weight_prio() is invoked, then the entity moves to
      a sort of hybrid state: the new service tree for the entity, as
      returned by bfq_entity_service_tree(), differs from the service tree on
      which the entity still is. The functions that handle activations and
      deactivations of entities do not cope with such a hybrid state (and
      would need to become more complex to cope).
      
      This commit addresses this issue by just making
      __bfq_entity_update_weight_prio() not perform a possible pending
      change of ioprio class, when invoked on an I/O-request dispatch for a
      bfq_queue. Such a change is thus postponed to when
      __bfq_entity_update_weight_prio() is invoked on deactivation or
      re-scheduling of the bfq_queue.
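
      The fix can be expressed as an extra parameter on the update helper;
      a condensed sketch (structure follows the description above):

      struct bfq_service_tree *
      __bfq_entity_update_weight_prio(struct bfq_service_tree *old_st,
                                      struct bfq_entity *entity,
                                      bool update_class_too)
      {
              struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);

              /* ...apply pending weight and ioprio changes... */

              /* an ioprio-class change moves the entity to another
               * service tree, so apply it only when the caller says it
               * is safe, i.e., on deactivation or re-scheduling */
              if (bfqq && update_class_too)
                      bfqq->ioprio_class = bfqq->new_ioprio_class;

              return bfq_entity_service_tree(entity);
      }
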
      Reported-by: Marco Piazza <mpiazza@gmail.com>
      Reported-by: Laurentiu Nicola <lnicola@dend.ro>
      Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
      Tested-by: Marco Piazza <mpiazza@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  19. 28 June 2017, 1 commit
    • block, bfq: update wr_busy_queues if needed on a queue split · 13c931bd
      Authored by Paolo Valente
      This commit fixes a bug triggered by a non-trivial sequence of
      events. These events are briefly described in the next two
      paragraphs. The impatient, or those who are familiar with queue
      merging and splitting, can jump directly to the last paragraph.
      
      On each I/O-request arrival for a shared bfq_queue, i.e., for a
      bfq_queue that is the result of the merge of two or more bfq_queues,
      BFQ checks whether the shared bfq_queue has become seeky (i.e., if too
      many random I/O requests have arrived for the bfq_queue; if the device
      is non-rotational, then random requests must also be small for the
      bfq_queue to be tagged as seeky). If the shared bfq_queue is actually
      detected as seeky, then a split occurs: the bfq I/O context of the
      process that has issued the request is redirected from the shared
      bfq_queue to a new non-shared bfq_queue. As a degenerate case, if the
      shared bfq_queue actually happens to be shared only by one process
      (because of previous splits), then no new bfq_queue is created: the
      state of the shared bfq_queue is just changed from shared to non
      shared.
      
      Regardless of whether a brand new non-shared bfq_queue is created, or
      the pre-existing shared bfq_queue is just turned into a non-shared
      bfq_queue, several parameters of the non-shared bfq_queue are set
      (restored) to the original values they had when the bfq_queue
      associated with the bfq I/O context of the process (that has just
      issued an I/O request) was merged with the shared bfq_queue. One of
      these parameters is the weight-raising state.
      
      If, on the split of a shared bfq_queue,
      1) a pre-existing shared bfq_queue is turned into a non-shared
      bfq_queue;
      2) the previously shared bfq_queue happens to be busy;
      3) the weight-raising state of the previously shared bfq_queue happens
      to change;
      the number of weight-raised busy queues changes. The field
      wr_busy_queues must then be updated accordingly, but such an update
      was missing. This commit adds the missing update.
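
      A hedged sketch of the added bookkeeping in the state-resume path on
      a split (field names are bfq's; structure condensed):

      unsigned int old_wr_coeff = bfqq->wr_coeff;

      /* ...restore the saved, pre-merge state, which may change
       * bfqq->wr_coeff... */

      /* keep the count of weight-raised busy queues consistent when a
       * busy, previously shared queue changes weight-raising state */
      if (bfq_bfqq_busy(bfqq)) {
              if (old_wr_coeff == 1 && bfqq->wr_coeff > 1)
                      bfqd->wr_busy_queues++;
              else if (old_wr_coeff > 1 && bfqq->wr_coeff == 1)
                      bfqd->wr_busy_queues--;
      }
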
      Reported-by: Luca Miccio <lucmiccio@gmail.com>
      Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  20. 19 June 2017, 3 commits