1. 11 Nov, 2009 (1 commit)
  2. 09 Nov, 2009 (1 commit)
    • cfq-iosched: fix next_rq computation · cf7c25cf
      Committed by Corrado Zoccolo
      CFQ has a bug in the computation of next_rq that affects transitions
      between multiple sequential request streams in a single queue
      (e.g. two sequential buffered writers of the same priority),
      causing the two streams to alternate for a transient period,
      as the blktrace output below shows.
      
        8,0    1    18737     0.260400660  5312  D   W 141653311 + 256
        8,0    1    20839     0.273239461  5400  D   W 141653567 + 256
        8,0    1    20841     0.276343885  5394  D   W 142803919 + 256
        8,0    1    20843     0.279490878  5394  D   W 141668927 + 256
        8,0    1    20845     0.292459993  5400  D   W 142804175 + 256
        8,0    1    20847     0.295537247  5400  D   W 141668671 + 256
        8,0    1    20849     0.298656337  5400  D   W 142804431 + 256
        8,0    1    20851     0.311481148  5394  D   W 141668415 + 256
        8,0    1    20853     0.314421305  5394  D   W 142804687 + 256
        8,0    1    20855     0.318960112  5400  D   W 142804943 + 256
      
      The fix makes sure that next_rq is computed from the last
      dispatched request, and is not affected by merging.
      
        8,0    1    37776     4.305161306     0  D   W 141738087 + 256
        8,0    1    37778     4.308298091     0  D   W 141738343 + 256
        8,0    1    37780     4.312885190     0  D   W 141738599 + 256
        8,0    1    37782     4.315933291     0  D   W 141738855 + 256
        8,0    1    37784     4.319064459     0  D   W 141739111 + 256
        8,0    1    37786     4.331918431  5672  D   W 142803007 + 256
        8,0    1    37788     4.334930332  5672  D   W 142803263 + 256
        8,0    1    37790     4.337902723  5672  D   W 142803519 + 256
        8,0    1    37792     4.342359774  5672  D   W 142803775 + 256
        8,0    1    37794     4.345318286     0  D   W 142804031 + 256
      Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      cf7c25cf
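The selection logic this commit describes can be sketched as follows; `pick_next_rq` and `last_position` are illustrative names, not the actual cfq-iosched symbols, and the closest-sector policy is a simplification for illustration:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical sketch: the next request is chosen relative to the
 * sector at which the last DISPATCHED request left the disk head
 * (last_position), so a request that later grew by merging cannot
 * skew the choice. */
struct rq { long sector; };

struct rq *pick_next_rq(struct rq *pending, int n, long last_position)
{
    struct rq *best = NULL;
    long best_dist = 0;

    for (int i = 0; i < n; i++) {
        long d = labs(pending[i].sector - last_position);
        if (!best || d < best_dist) {
            best = &pending[i];
            best_dist = d;
        }
    }
    return best;
}
```

With two pending streams like those in the trace above, the stream closest to the last dispatched sector keeps being served until it runs dry, instead of alternating with the other stream.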
  3. 04 Nov, 2009 (4 commits)
  4. 02 Nov, 2009 (1 commit)
  5. 28 Oct, 2009 (6 commits)
    • cfq-iosched: fix style issue in cfq_get_avg_queues() · 5869619c
      Committed by Jens Axboe
      Line breaks and bad brace placement.
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      5869619c
    • cfq-iosched: fairness for sync no-idle queues · 718eee05
      Committed by Corrado Zoccolo
      Currently, no-idle queues in cfq are not serviced fairly:
      even though they can only dispatch a small number of requests at a
      time, they have to compete with idling queues to be serviced, and
      so experience large latencies.
      
      Notice, instead, that no-idle queues are the ones that would
      benefit most from low latency; in fact they are any of:
      * processes with large think times (e.g. interactive ones like file
        managers)
      * seeky processes (e.g. programs faulting in their code at startup)
      * or queues marked as no-idle by upper levels, to improve the
        latency of those requests.
      
      This patch improves fairness and latency for those queues by:
      * separating sync idle, sync no-idle and async queues into separate
        service_trees for each priority
      * servicing all no-idle queues together
      * idling when the last no-idle queue has been serviced, to
        anticipate more no-idle work
      * computing the timeslices allotted to the idle and no-idle
        service_trees proportionally to the number of processes in each set.
      
      Servicing all no-idle queues together should have a performance boost
      for NCQ-capable drives, without compromising fairness.
      Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      718eee05
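The proportional timeslice split in the last bullet can be sketched as below; `workload_slice` is a made-up name and the integer arithmetic is only illustrative:

```c
#include <assert.h>

/* Give each service tree a share of the total slice proportional to
 * how many queues it currently holds, so no-idle queues as a group
 * get disk time in line with their population. */
int workload_slice(int total_slice, int queues_in_tree, int total_queues)
{
    if (total_queues == 0)
        return 0;
    return total_slice * queues_in_tree / total_queues;
}
```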
    • cfq-iosched: enable idling for last queue on priority class · a6d44e98
      Committed by Corrado Zoccolo
      cfq can disable idling for queues in various circumstances.
      When workloads of different priorities are competing, if the higher
      priority queue has idling disabled, lower priority queues may steal
      its disk share. For example, in a scenario with an RT process
      performing seeky reads vs. a BE process performing sequential reads,
      on NCQ-enabled hardware with low_latency unset,
      the RT process will dispatch only its few pending requests once per
      full slice of service given to the BE process.
      
      The patch solves this issue by always idling on the last
      queue of a given priority class better than idle. If the same
      process, or one that can preempt it (i.e. one at the same priority
      or higher), submits a new request within the idle window, the lower
      priority queue won't dispatch, saving the disk bandwidth for higher
      priority ones.
      
      Note: this doesn't touch the non_rotational + NCQ case (no hardware
      to test if this is a benefit in that case).
      Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      a6d44e98
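The idling condition can be sketched as a simple predicate; the enum and function names are illustrative, not the kernel's:

```c
#include <assert.h>

enum ioprio_class { CLASS_RT, CLASS_BE, CLASS_IDLE };

/* Arm the idle window after the last busy queue of a class better
 * than idle, so a lower-priority queue cannot dispatch while a
 * same-or-higher-priority request may still arrive. */
int idle_on_last_queue(enum ioprio_class cls, int busy_queues_in_class)
{
    return cls != CLASS_IDLE && busy_queues_in_class == 1;
}
```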
    • cfq-iosched: reimplement priorities using different service trees · c0324a02
      Committed by Corrado Zoccolo
      We use different service trees for different priority classes.
      This allows a simplification of the service tree insertion code,
      which no longer has to consider priority while walking the tree.
      Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      c0324a02
    • cfq-iosched: preparation to handle multiple service trees · aa6f6a3d
      Committed by Corrado Zoccolo
      We embed a pointer to the service tree in each queue, to handle multiple
      service trees easily.
      Service trees are enriched with a counter.
      cfq_add_rq_rb is invoked after putting the rq in the fifo, to ensure
      that all fields in rq are properly initialized.
      Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      aa6f6a3d
    • cfq-iosched: adapt slice to number of processes doing I/O · 5db5d642
      Committed by Corrado Zoccolo
      When the number of processes performing I/O concurrently increases,
      a fixed time slice per process will cause large latencies.
      
      This patch, when low_latency mode is enabled, scales the time slice
      assigned to each process according to a 300 ms target latency.
      
      In order to keep fairness among processes:
      * The number of active processes is computed using a special form of
      running average that quickly follows sudden increases (to keep latency
      low) and decreases slowly (to preserve fairness despite rapid drops
      in this value).
      
      To safeguard sequential bandwidth, we impose a minimum time slice
      (computed using 2*cfq_slice_idle as the base, adjusted according to
      priority and async-ness).
      Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      5db5d642
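The slice scaling and the asymmetric running average described above might look roughly like this; all constants (a 100 ms base slice, an 8 ms cfq_slice_idle) and all names are assumptions for illustration, not the kernel's actual values:

```c
#include <assert.h>

#define TARGET_LATENCY_MS 300
#define BASE_SLICE_MS     100  /* assumed default per-queue slice */
#define SLICE_IDLE_MS       8  /* assumed cfq_slice_idle */

/* Shrink the per-queue slice so all active queues fit within the
 * target latency, but never go below the 2*cfq_slice_idle floor
 * that safeguards sequential bandwidth. */
int scaled_slice(int nr_active)
{
    int slice = BASE_SLICE_MS;
    int fair_share = TARGET_LATENCY_MS / (nr_active ? nr_active : 1);
    int floor = 2 * SLICE_IDLE_MS;

    if (fair_share < slice)
        slice = fair_share;
    if (slice < floor)
        slice = floor;
    return slice;
}

/* Running average that follows increases immediately (to keep
 * latency low) and decays slowly (for fairness), as the commit
 * message describes. */
int update_rq_avg(int avg, int sample)
{
    if (sample > avg)
        return sample;              /* jump up at once */
    return (avg * 7 + sample) / 8;  /* decay slowly */
}
```

With one active queue the full base slice survives; with fifty queues the fair share (6 ms) would collapse latency-free throughput, so the floor of 16 ms takes over.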
  6. 27 Oct, 2009 (1 commit)
  7. 26 Oct, 2009 (4 commits)
  8. 08 Oct, 2009 (3 commits)
  9. 07 Oct, 2009 (2 commits)
  10. 05 Oct, 2009 (4 commits)
  11. 04 Oct, 2009 (2 commits)
  12. 03 Oct, 2009 (3 commits)
  13. 14 Sep, 2009 (1 commit)
  14. 11 Sep, 2009 (5 commits)
  15. 11 Jul, 2009 (1 commit)
    • cfq-iosched: reset oom_cfqq in cfq_set_request() · 32f2e807
      Committed by Vivek Goyal
      In case memory is scarce, we now fall back to oom_cfqq. Once memory
      is available again, we should allocate a new cfqq and stop using
      oom_cfqq for that particular io context.
      
      When a new request comes in, check whether we are using oom_cfqq,
      and if so, try to allocate a new cfqq.
      
      The patch was tested by forcing the use of oom_cfqq; upon the next
      request, the thread noticed it was using oom_cfqq and allocated a
      new cfqq.
      Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      32f2e807
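The described flow can be sketched as follows; the structure, the allocator callback, and every name here are illustrative stand-ins, not the kernel's actual interfaces:

```c
#include <assert.h>
#include <stddef.h>

struct cfqq { int dummy; };

static struct cfqq oom_cfqq;           /* shared out-of-memory fallback */
static struct cfqq real_cfqq;          /* stands in for a fresh allocation */

static struct cfqq *alloc_ok(void)   { return &real_cfqq; }
static struct cfqq *alloc_fail(void) { return NULL; }

/* On each new request: if this io context is still on the oom_cfqq
 * fallback, retry a real allocation; keep the fallback only if the
 * allocation fails again. */
struct cfqq *set_request(struct cfqq *cur, struct cfqq *(*alloc)(void))
{
    if (cur == NULL || cur == &oom_cfqq) {
        struct cfqq *fresh = alloc();
        return fresh ? fresh : &oom_cfqq;
    }
    return cur;
}
```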
  16. 01 Jul, 2009 (1 commit)