1. 09 Nov 2009, 1 commit
    • cfq-iosched: fix next_rq computation · cf7c25cf
      Authored by Corrado Zoccolo
      Cfq has a bug in the computation of next_rq that affects transitions
      between multiple sequential request streams in a single queue
      (e.g. two sequential buffered writers of the same priority),
      causing the two streams to alternate for a transient period, as the
      following blktrace excerpt shows:
      
        8,0    1    18737     0.260400660  5312  D   W 141653311 + 256
        8,0    1    20839     0.273239461  5400  D   W 141653567 + 256
        8,0    1    20841     0.276343885  5394  D   W 142803919 + 256
        8,0    1    20843     0.279490878  5394  D   W 141668927 + 256
        8,0    1    20845     0.292459993  5400  D   W 142804175 + 256
        8,0    1    20847     0.295537247  5400  D   W 141668671 + 256
        8,0    1    20849     0.298656337  5400  D   W 142804431 + 256
        8,0    1    20851     0.311481148  5394  D   W 141668415 + 256
        8,0    1    20853     0.314421305  5394  D   W 142804687 + 256
        8,0    1    20855     0.318960112  5400  D   W 142804943 + 256
      
      The fix makes sure that next_rq is computed from the last
      dispatched request and is not affected by merging:
      
        8,0    1    37776     4.305161306     0  D   W 141738087 + 256
        8,0    1    37778     4.308298091     0  D   W 141738343 + 256
        8,0    1    37780     4.312885190     0  D   W 141738599 + 256
        8,0    1    37782     4.315933291     0  D   W 141738855 + 256
        8,0    1    37784     4.319064459     0  D   W 141739111 + 256
        8,0    1    37786     4.331918431  5672  D   W 142803007 + 256
        8,0    1    37788     4.334930332  5672  D   W 142803263 + 256
        8,0    1    37790     4.337902723  5672  D   W 142803519 + 256
        8,0    1    37792     4.342359774  5672  D   W 142803775 + 256
        8,0    1    37794     4.345318286     0  D   W 142804031 + 256
      Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      cf7c25cf
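      A minimal user-space C sketch of the idea (the types, names and
      numbers are illustrative, not the actual cfq symbols): the reference
      point for choosing next_rq is the end of the last dispatched
      request, so a merge that grows a still-queued request cannot move
      that reference point.

        #include <stdio.h>

        typedef unsigned long long sector_t;

        struct rq {
            sector_t sector;    /* start sector of the request */
            unsigned int len;   /* length in sectors           */
        };

        /* distance of a candidate from the reference position */
        static sector_t dist(sector_t ref, const struct rq *r)
        {
            return r->sector >= ref ? r->sector - ref : ref - r->sector;
        }

        /* pick the candidate closest to where the head was left */
        static const struct rq *pick_next(sector_t last_end,
                                          const struct rq *a,
                                          const struct rq *b)
        {
            if (!a) return b;
            if (!b) return a;
            return dist(last_end, a) <= dist(last_end, b) ? a : b;
        }

        int main(void)
        {
            /* reference point: end of the last dispatched request ... */
            struct rq last    = { 141653311, 256 };
            struct rq stream1 = { 141653567, 256 };  /* continues stream 1 */
            struct rq stream2 = { 142803919, 256 };  /* far-away stream 2  */
            sector_t last_end = last.sector + last.len;

            /* ... so the sequential continuation wins, no alternation */
            printf("next_rq starts at %llu\n",
                   pick_next(last_end, &stream1, &stream2)->sector);
            return 0;
        }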
  2. 04 Nov 2009, 5 commits
  3. 02 Nov 2009, 1 commit
  4. 28 Oct 2009, 6 commits
    • cfq-iosched: fix style issue in cfq_get_avg_queues() · 5869619c
      Authored by Jens Axboe
      Line breaks and bad brace placement.
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      5869619c
    • cfq-iosched: fairness for sync no-idle queues · 718eee05
      Authored by Corrado Zoccolo
      Currently no-idle queues in cfq are not serviced fairly:
      even if they can only dispatch a small number of requests at a time,
      they have to compete with idling queues to be serviced, experiencing
      large latencies.
      
      Note, instead, that no-idle queues are the ones that would benefit
      most from low latency; in fact, they are any of:
      * processes with large think times (e.g. interactive ones like file
        managers)
      * seeky processes (e.g. programs faulting in their code at startup)
      * queues marked as no-idle by upper layers, to improve the latency
        of those requests.
      
      This patch improves fairness and latency for those queues by:
      * separating sync idle, sync no-idle and async queues into separate
        service_trees for each priority
      * servicing all no-idle queues together
      * idling when the last no-idle queue has been serviced, to
        anticipate more no-idle work
      * computing the timeslices allotted to the idle and no-idle
        service_trees proportionally to the number of processes in each
        set.
      
      Servicing all no-idle queues together should have a performance boost
      for NCQ-capable drives, without compromising fairness.
      Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      718eee05
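      A minimal user-space C sketch of the proportional split described
      above (struct, field and function names are illustrative, not the
      cfq ones): a workload window is divided between the idle and
      no-idle service trees according to how many queues sit on each
      tree.

        #include <stdio.h>

        struct service_tree {
            unsigned int count;   /* queues currently on this tree */
        };

        /* share of 'total_slice_ms' given to one tree */
        static unsigned int tree_slice(unsigned int total_slice_ms,
                                       const struct service_tree *st,
                                       unsigned int total_queues)
        {
            if (!total_queues)
                return 0;
            return total_slice_ms * st->count / total_queues;
        }

        int main(void)
        {
            struct service_tree sync_idle   = { .count = 3 }; /* sequential readers  */
            struct service_tree sync_noidle = { .count = 1 }; /* seeky / interactive */
            unsigned int total = sync_idle.count + sync_noidle.count;

            printf("idle: %u ms, no-idle: %u ms of a 300 ms window\n",
                   tree_slice(300, &sync_idle, total),
                   tree_slice(300, &sync_noidle, total));
            return 0;
        }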
    • cfq-iosched: enable idling for last queue on priority class · a6d44e98
      Authored by Corrado Zoccolo
      cfq can disable idling for queues in various circumstances.
      When workloads of different priorities are competing, if the higher
      priority queue has idling disabled, lower priority queues may steal
      its disk share. For example, in a scenario with an RT process
      performing seeky reads vs. a BE process performing sequential reads,
      on NCQ-enabled hardware with low_latency unset, the RT process will
      dispatch only its few pending requests once per full slice of
      service given to the BE process.
      
      The patch solves this issue by always idling on the last queue of a
      given priority class (above the idle class). If the same process,
      or one that can preempt it (i.e. at the same priority or higher),
      submits a new request within the idle window, the lower priority
      queue won't dispatch, saving disk bandwidth for the higher priority
      ones.
      
      Note: this doesn't touch the non_rotational + NCQ case (no hardware
      to test if this is a benefit in that case).
      Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      a6d44e98
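      A minimal user-space C sketch of the decision (names and the enum
      are illustrative, not the cfq ones): even a queue whose idle window
      is normally disabled is waited on if it is the last busy queue of a
      priority class above the idle class.

        #include <stdbool.h>
        #include <stdio.h>

        enum prio_class { CLASS_RT, CLASS_BE, CLASS_IDLE };

        struct queue {
            enum prio_class pclass;
            bool idle_window;   /* the usual per-queue idling decision */
        };

        static bool should_idle(const struct queue *q, unsigned int busy_in_class)
        {
            if (q->pclass == CLASS_IDLE)
                return false;        /* never idle for the idle class        */
            if (busy_in_class == 1)
                return true;         /* last queue of its class: always idle */
            return q->idle_window;   /* otherwise the usual per-queue rule   */
        }

        int main(void)
        {
            /* a seeky RT reader with its idle window disabled ... */
            struct queue rt_seeky = { CLASS_RT, false };

            /* ... still gets the idle wait because it is the only RT queue,
             * so a BE sequential reader cannot steal its disk share */
            printf("idle? %s\n", should_idle(&rt_seeky, 1) ? "yes" : "no");
            return 0;
        }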
    • cfq-iosched: reimplement priorities using different service trees · c0324a02
      Authored by Corrado Zoccolo
      We use different service trees for different priority classes.
      This allows a simplification of the service tree insertion code,
      which no longer has to consider priority while walking the tree.
      Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      c0324a02
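      A minimal user-space C sketch of the structure (illustrative names,
      not the cfq ones): the priority class selects which tree a queue
      goes on, so the tree itself only ever orders entries by key.

        #include <stdio.h>

        enum prio_class { CLASS_RT, CLASS_BE, CLASS_IDLE, CLASS_NR };

        struct service_tree {
            unsigned int count;   /* stand-in for the rbtree of queues */
        };

        struct sched_data {
            struct service_tree trees[CLASS_NR];   /* one tree per class */
        };

        /* class picks the tree; no priority comparison during insertion */
        static struct service_tree *tree_for(struct sched_data *sd,
                                             enum prio_class c)
        {
            return &sd->trees[c];
        }

        int main(void)
        {
            struct sched_data sd = { 0 };

            tree_for(&sd, CLASS_RT)->count++;
            tree_for(&sd, CLASS_BE)->count++;
            printf("RT tree: %u queue(s), BE tree: %u queue(s)\n",
                   sd.trees[CLASS_RT].count, sd.trees[CLASS_BE].count);
            return 0;
        }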
    • cfq-iosched: preparation to handle multiple service trees · aa6f6a3d
      Authored by Corrado Zoccolo
      We embed a pointer to the service tree in each queue, to handle multiple
      service trees easily.
      Service trees are enriched with a counter.
      cfq_add_rq_rb is invoked after putting the rq in the fifo, to ensure
      that all fields in rq are properly initialized.
      Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      aa6f6a3d
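      A minimal user-space C sketch of the bookkeeping (illustrative
      names, not the cfq ones): each queue carries a pointer to the
      service tree it belongs to, and the tree keeps a counter of its
      queues, so add/remove never have to search for the right tree.

        #include <stdio.h>

        struct service_tree {
            unsigned int count;   /* number of queues on this tree */
        };

        struct queue {
            struct service_tree *st;   /* tree this queue is (or will be) on */
        };

        static void queue_add(struct queue *q)    { q->st->count++; }
        static void queue_remove(struct queue *q) { q->st->count--; }

        int main(void)
        {
            struct service_tree sync_tree = { 0 };
            struct queue q = { .st = &sync_tree };

            queue_add(&q);
            printf("queues on tree: %u\n", sync_tree.count);   /* 1 */
            queue_remove(&q);
            printf("queues on tree: %u\n", sync_tree.count);   /* 0 */
            return 0;
        }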
    • cfq-iosched: adapt slice to number of processes doing I/O · 5db5d642
      Authored by Corrado Zoccolo
      When the number of processes performing I/O concurrently increases,
      a fixed time slice per process will cause large latencies.
      
      When low_latency mode is enabled, this patch scales the time slice
      assigned to each process according to a 300ms target latency.

      In order to keep fairness among processes, the number of active
      processes is computed using a special form of running average that
      follows sudden increases quickly (to keep latency low) and
      decreases slowly (to preserve fairness in spite of rapid drops in
      this value).
      
      To safeguard sequential bandwidth, we impose a minimum time slice
      (computed using 2*cfq_slice_idle as base, adjusted according to priority
      and async-ness).
      Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      5db5d642
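      A minimal user-space C sketch of the slice scaling (the constants
      and names are illustrative, not the actual cfq tunables): the base
      slice is shrunk so that all busy queues together fit a ~300 ms
      latency target, but never below a floor of twice the idle window.

        #include <stdio.h>

        #define TARGET_LATENCY_MS 300
        #define SLICE_IDLE_MS       8   /* illustrative idle-window length */

        static unsigned int scaled_slice(unsigned int base_slice_ms,
                                         unsigned int busy_queues)
        {
            unsigned int low_slice = 2 * SLICE_IDLE_MS;          /* minimum slice */
            unsigned int fair = TARGET_LATENCY_MS / busy_queues; /* equal share   */

            if (fair >= base_slice_ms)
                return base_slice_ms;                     /* few queues: full slice */
            return fair > low_slice ? fair : low_slice;   /* else scale, clamp      */
        }

        int main(void)
        {
            printf("2 busy queues:  %u ms\n", scaled_slice(100, 2));  /* 100 */
            printf("8 busy queues:  %u ms\n", scaled_slice(100, 8));  /* 37  */
            printf("40 busy queues: %u ms\n", scaled_slice(100, 40)); /* 16  */
            return 0;
        }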
  5. 27 Oct 2009, 1 commit
  6. 26 Oct 2009, 4 commits
  7. 24 Oct 2009, 1 commit
  8. 12 Oct 2009, 1 commit
  9. 09 Oct 2009, 1 commit
    • elv_iosched_store(): fix strstrip() misuse · 8c279598
      Authored by KOSAKI Motohiro
      elv_iosched_store() ignores the return value of strstrip(), which
      leads to slightly inconsistent behavior.
      
      This patch fixes it.
      
       <before>
       ====================================
       # cd /sys/block/{blockdev}/queue
      
       case1:
       # echo "anticipatory" > scheduler
       # cat scheduler
       noop [anticipatory] deadline cfq
      
       case2:
       # echo "anticipatory " > scheduler
       # cat scheduler
       noop [anticipatory] deadline cfq
      
       case3:
       # echo " anticipatory" > scheduler
       bash: echo: write error: Invalid argument
      
       <after>
       ====================================
       # cd /sys/block/{blockdev}/queue
      
       case1:
       # echo "anticipatory" > scheduler
       # cat scheduler
       noop [anticipatory] deadline cfq
      
       case2:
       # echo "anticipatory " > scheduler
       # cat scheduler
       noop [anticipatory] deadline cfq
      
       case3:
       # echo " anticipatory" > scheduler
       noop [anticipatory] deadline cfq
      
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Jens Axboe <jens.axboe@oracle.com>
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      8c279598
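      A minimal user-space model of the strstrip() contract involved here
      (my_strstrip is a stand-in, not the kernel function): trailing
      whitespace is trimmed in place, but leading whitespace is only
      skipped via the returned pointer, so ignoring that return value
      reproduces the "case3" failure shown above.

        #include <ctype.h>
        #include <stdio.h>
        #include <string.h>

        static char *my_strstrip(char *s)
        {
            size_t len = strlen(s);

            while (len && isspace((unsigned char)s[len - 1]))   /* trim trailing */
                s[--len] = '\0';
            while (isspace((unsigned char)*s))                  /* skip leading  */
                s++;
            return s;
        }

        int main(void)
        {
            char buf[] = " anticipatory";

            /* buggy pattern: call for the side effect only, keep using 'buf' */
            my_strstrip(buf);
            printf("ignored return: '%s'\n", buf);              /* ' anticipatory' */

            /* fixed pattern: use the returned pointer */
            printf("used return:    '%s'\n", my_strstrip(buf)); /* 'anticipatory'  */
            return 0;
        }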
  10. 08 Oct 2009, 3 commits
  11. 07 Oct 2009, 4 commits
  12. 05 Oct 2009, 5 commits
    • block: get rid of kblock_schedule_delayed_work() · 23e018a1
      Authored by Jens Axboe
      It was briefly introduced to allow CFQ to do delayed scheduling,
      but we ended up removing that feature again. So let's kill the
      function and its export, and just switch CFQ back to the normal
      work schedule, since it now passes a '0' delay from all call
      sites.
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      23e018a1
    • cfq-iosched: fix possible problem with jiffies wraparound · 48e025e6
      Authored by Corrado Zoccolo
      The RR service tree is indexed by a key that is relative to the
      current jiffies. This can cause problems on jiffies wraparound.

      The patch fixes it by using a time_before() comparison and by
      changing the add_front path to use a relative number as well.
      Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      48e025e6
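      A minimal user-space model of why the wrapping comparison matters
      (the macro here mirrors the kernel's time_before() idea but is not
      the kernel header): a plain '<' on an unsigned counter gives the
      wrong answer across a wraparound, while the signed-difference form
      does not.

        #include <stdio.h>

        typedef unsigned long jiffies_t;

        /* true if a is before b, even across a wraparound */
        #define my_time_before(a, b)  ((long)((a) - (b)) < 0)

        int main(void)
        {
            jiffies_t just_before_wrap = (jiffies_t)-10;   /* about to wrap   */
            jiffies_t just_after_wrap  = 5;                /* already wrapped */

            printf("plain '<'       : %d\n",
                   just_before_wrap < just_after_wrap);                /* 0: wrong */
            printf("my_time_before(): %d\n",
                   my_time_before(just_before_wrap, just_after_wrap)); /* 1: right */
            return 0;
        }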
    • cfq-iosched: fix issue with rq-rq merging and fifo list ordering · 30996f40
      Authored by Jens Axboe
      cfq uses rq->start_time as the fifo indicator, but that field may
      get modified before cfq does its fifo list adjustment when a
      request gets merged with another request. This can cause the fifo
      list to become unordered.
      Reported-by: Corrado Zoccolo <czoccolo@gmail.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      30996f40
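      A minimal user-space C sketch of one way to keep the FIFO stable
      (illustrative field and function names, not necessarily the exact
      kernel fix): the FIFO deadline is recorded in a field that merging
      never rewrites, so a later change to start_time cannot reorder the
      list.

        #include <stdio.h>

        struct request {
            unsigned long start_time;   /* may be rewritten on a merge       */
            unsigned long fifo_time;    /* set once when queued, drives FIFO */
        };

        static void fifo_add(struct request *rq, unsigned long now,
                             unsigned long fifo_expire)
        {
            rq->fifo_time = now + fifo_expire;   /* fixed at insertion time */
        }

        static int fifo_expired(const struct request *rq, unsigned long now)
        {
            return (long)(now - rq->fifo_time) >= 0;
        }

        int main(void)
        {
            struct request rq = { .start_time = 1000 };

            fifo_add(&rq, 1000, 125);
            rq.start_time = 900;    /* a merge rewrites start_time ... */

            /* ... but the FIFO decision is unaffected */
            printf("expired at 1100? %d\n", fifo_expired(&rq, 1100));   /* 0 */
            printf("expired at 1200? %d\n", fifo_expired(&rq, 1200));   /* 1 */
            return 0;
        }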
    • Revert "Seperate read and write statistics of in_flight requests" · 0f78ab98
      Authored by Jens Axboe
      This reverts commit a9327cac.
      
      Corrado Zoccolo <czoccolo@gmail.com> reports:
      
      "with 2.6.32-rc1 I started getting the following strange output from
      "iostat -kx 2":
      Linux 2.6.31bisect (et2) 	04/10/2009 	_i686_	(2 CPU)
      
      avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                10,70    0,00    3,16   15,75    0,00   70,38
      
      Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
      sda              18,22     0,00    0,67    0,01    14,77     0,02    43,94     0,01   10,53 39043915,03 2629219,87
      sdb              60,89     9,68   50,79    3,04  1724,43    50,52    65,95     0,70   13,06 488437,47 2629219,87
      
      avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                 2,72    0,00    0,74    0,00    0,00   96,53
      
      Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
      sda               0,00     0,00    0,00    0,00     0,00     0,00     0,00     0,00    0,00   0,00 100,00
      sdb               0,00     0,00    0,00    0,00     0,00     0,00     0,00     0,00    0,00   0,00 100,00
      
      avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                 6,68    0,00    0,99    0,00    0,00   92,33
      
      Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
      sda               0,00     0,00    0,00    0,00     0,00     0,00     0,00     0,00    0,00   0,00 100,00
      sdb               0,00     0,00    0,00    0,00     0,00     0,00     0,00     0,00    0,00   0,00 100,00
      
      avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                 4,40    0,00    0,73    1,47    0,00   93,40
      
      Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
      sda               0,00     0,00    0,00    0,00     0,00     0,00     0,00     0,00    0,00   0,00 100,00
      sdb               0,00     4,00    0,00    3,00     0,00    28,00    18,67     0,06   19,50 333,33 100,00
      
      Global values for service time and utilization are garbage. For
      interval values, utilization is always 100%, and service time is
      higher than normal.
      
      I bisected it down to:
      [a9327cac] Seperate read and write
      statistics of in_flight requests
      and verified that reverting just that commit indeed solves the issue
      on 2.6.32-rc1."
      
      So until this is debugged, revert the bad commit.
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      0f78ab98
    • cfq-iosched: don't delay async queue if it hasn't dispatched at all · e00c54c3
      Authored by Jens Axboe
      We cannot delay the first dispatch of the async queue if it hasn't
      dispatched at all, since that could present a local-user DoS attack
      vector: an app that just does slow, timed sync reads while filling
      memory could keep async writeback from ever running.
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      e00c54c3
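      A minimal user-space C sketch of the resulting rule (illustrative
      names, not the cfq ones): an async queue may only be held back in
      favour of sync I/O once it has actually dispatched something; a
      queue that has never run is dispatched without delay.

        #include <stdbool.h>
        #include <stdio.h>

        struct async_queue {
            unsigned long dispatched;   /* requests dispatched so far */
        };

        static bool may_delay_async(const struct async_queue *q,
                                    bool sync_io_pending)
        {
            if (!q->dispatched)
                return false;        /* never delay a queue that hasn't run yet */
            return sync_io_pending;  /* otherwise defer to pending sync I/O     */
        }

        int main(void)
        {
            struct async_queue fresh  = { 0 };
            struct async_queue active = { 42 };

            printf("fresh queue delayed?  %d\n", may_delay_async(&fresh, true));  /* 0 */
            printf("active queue delayed? %d\n", may_delay_async(&active, true)); /* 1 */
            return 0;
        }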
  13. 04 Oct 2009, 3 commits
  14. 03 Oct 2009, 4 commits