1. 09 Dec 2009 (1 commit)
  2. 08 Dec 2009 (1 commit)
  3. 06 Dec 2009 (1 commit)
  4. 04 Dec 2009 (23 commits)
  5. 03 Dec 2009 (1 commit)
    • cfq-iosched: no dispatch limit for single queue · 474b18cc
      Committed by Shaohua Li
      Since commit 2f5cb738, each queue can send up to 4 * 4 requests if only
      one queue exists. It is unclear why we have such a limit: devices that
      support tagged queuing can accept more requests (AHCI, for example, can
      queue 31). Tests (direct AIO random read) show the limit reduces disk
      throughput by about 4%.
      On the other hand, since we dispatch one request at a time, if another
      queue appears while the current one has sent more than cfq_quantum
      requests, the current queue will stop sending requests soon after one
      more request, so there should be no large added latency.
      Signed-off-by: Shaohua Li <shaohua.li@intel.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
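The dispatch-cap change described above can be sketched as follows. This is an illustrative stand-in, not the cfq-iosched source: `CFQ_QUANTUM`, `max_dispatch`, and the `before_patch` switch are hypothetical names; the real tunable `cfq_quantum` defaults to 4.

```c
#include <limits.h>

/* Illustrative sketch of the dispatch cap discussed in the commit.
 * cfq_quantum defaults to 4; all names here are hypothetical. */
#define CFQ_QUANTUM 4

/* Before the patch, a lone queue was capped at 4 * cfq_quantum (16)
 * requests; after it, a lone queue gets no artificial cap, so an
 * AHCI device can fill its 31-deep tag queue. */
static int max_dispatch(int busy_queues, int before_patch)
{
    if (busy_queues > 1)
        return CFQ_QUANTUM;     /* disk is shared: keep the quantum */
    return before_patch ? 4 * CFQ_QUANTUM : INT_MAX;
}
```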
  6. 30 Nov 2009 (1 commit)
  7. 26 Nov 2009 (6 commits)
    • cfq-iosched: fix corner cases in idling logic · 8e550632
      Committed by Corrado Zoccolo
      Idling logic was disabled in some corner cases, leading to an unfair
       share for no-idle queues.
       * The idle timer was not armed if there were other requests in the
         driver. Unfortunately, those requests could come from other
         workloads, or from queues for which we don't enable idling. So we
         now check only pending requests from the active queue.
       * The rq_noidle check on a no-idle queue could disable the end-of-tree
         idle if the last completed request was rq_noidle. Now we disable
         that idle only if all the queues served in the no-idle tree had
         rq_noidle requests.
      Reported-by: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>
      Acked-by: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
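The two corrected conditions can be sketched roughly as below. Struct and function names are hypothetical stand-ins for the cfq-iosched code, not its actual API.

```c
#include <stdbool.h>

struct cfq_queue_sketch {
    int dispatched;   /* requests from THIS queue still in the driver */
};

/* Fix 1: arm the idle timer based only on the active queue's own
 * pending requests, not on a global in-driver count that may include
 * other workloads or non-idling queues. */
static bool may_arm_idle_timer(const struct cfq_queue_sketch *active,
                               int rq_in_driver)
{
    (void)rq_in_driver;              /* the old code checked this instead */
    return active->dispatched == 0;
}

/* Fix 2: skip the end-of-tree idle only when every queue served in the
 * no-idle tree issued rq_noidle requests, not just the last one. */
static bool skip_end_of_tree_idle(int queues_served, int queues_with_noidle)
{
    return queues_served == queues_with_noidle;
}
```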
    • cfq-iosched: idling on deep seeky sync queues · 76280aff
      Committed by Corrado Zoccolo
      Seeky sync queues with large depth can gain an unfairly large share of
       disk time, at the expense of other seeky queues. This patch ensures
       that idling is enabled for queues with an I/O depth of at least 4 and
       a small think time. The decision to enable idling is sticky, until an
       idle window times out without seeing a new request.

      The reasoning behind the decision is that if an application is using a
      large I/O depth, it is already optimized to fully utilize the
      hardware, and therefore we reserve a slice of exclusive use for it.
      Reported-by: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
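The sticky "deep queue" decision can be sketched as below. `DEEP_DEPTH`, the `deep` flag, and both function names are illustrative assumptions, not the real cfq-iosched flags.

```c
#include <stdbool.h>

#define DEEP_DEPTH 4    /* illustrative threshold from the commit text */

struct seeky_queue {
    bool deep;          /* sticky: set once depth reaches DEEP_DEPTH */
};

/* Enable idling for deep, small-think-time queues; the decision sticks
 * across shallow periods until an idle window times out. */
static bool should_idle(struct seeky_queue *q, int in_flight,
                        bool small_think_time)
{
    if (in_flight >= DEEP_DEPTH)
        q->deep = true;
    return q->deep && small_think_time;
}

/* Only an idle window expiring with no new request resets the flag. */
static void idle_window_timed_out(struct seeky_queue *q)
{
    q->deep = false;
}
```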
    • cfq-iosched: fix no-idle preemption logic · e4a22919
      Committed by Corrado Zoccolo
      An incoming no-idle queue should preempt the active no-idle queue
       only if the active queue is idling because its service tree is empty.
       The previous code was buggy in two ways:
       * it relied on the service_tree field being set on the active queue,
         while it is not set when the code is idling for a new request;
       * it didn't check for the service-tree-empty condition, so it could
         lead to LIFO behaviour if multiple queues with depth > 1 were
         preempting each other on a non-NCQ device.
      Reported-by: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>
      Acked-by: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
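The corrected preemption test can be sketched as below; the enum and parameter names are hypothetical, and the point is that both conditions are checked explicitly instead of inferring state from an unset service_tree field.

```c
#include <stdbool.h>

enum wl_type { IDLE_WL, NOIDLE_WL };   /* illustrative workload types */

/* A new no-idle queue preempts the active no-idle queue only while the
 * active one is idling because its service tree ran empty. */
static bool should_preempt(enum wl_type active_wl, enum wl_type new_wl,
                           bool active_tree_empty)
{
    return active_wl == NOIDLE_WL && new_wl == NOIDLE_WL &&
           active_tree_empty;
}
```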
    • cfq-iosched: fix ncq detection code · e459dd08
      Committed by Corrado Zoccolo
      CFQ's detection of queueing devices initially assumes a queueing
      device and checks whether the queue depth reaches a certain threshold.
      It then reconsiders this choice periodically.

      Unfortunately, if the device is considered non-queueing, CFQ will
      force a unit queue depth for some workloads, thus defeating the
      detection logic. This leads to poor performance on queueing hardware,
      since the idle window remains enabled.

      Given this premise, switching to hw_tag = 0 after we have proved at
      least once that the device is NCQ-capable is not a good choice.

      The new detection code starts in an indeterminate state, in which CFQ
      behaves as if hw_tag = 1; then, if over a long observation period we
      never see large depth, we switch to hw_tag = 0, otherwise we stick
      with hw_tag = 1 without reconsidering it again.
      Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
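The one-way, three-state detection can be sketched as the small state machine below. The enum, depth threshold, and observation window are illustrative assumptions, not the actual cfq-iosched sampling code.

```c
/* Sketch of the detection described in the commit: start indeterminate
 * (behave as hw_tag = 1); after a long observation window without ever
 * seeing large depth, latch hw_tag = 0; once latched either way, the
 * decision is final. All names and thresholds are illustrative. */
enum hw_tag_state { HW_TAG_UNKNOWN = -1, HW_TAG_OFF = 0, HW_TAG_ON = 1 };

static enum hw_tag_state update_hw_tag(enum hw_tag_state s,
                                       int max_depth_seen,
                                       int samples, int window)
{
    if (s != HW_TAG_UNKNOWN)
        return s;                 /* never reconsidered */
    if (max_depth_seen > 4)       /* saw real queueing: device is NCQ */
        return HW_TAG_ON;
    if (samples >= window)        /* long period, depth stayed shallow */
        return HW_TAG_OFF;
    return HW_TAG_UNKNOWN;        /* keep observing, behave as hw_tag=1 */
}
```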
    • cfq-iosched: cleanup unreachable code · c16632ba
      Committed by Corrado Zoccolo
      cfq_should_idle returns false for no-idle queues that are not the
      last, so the control flow never reaches the removed code in a state
      that satisfies the if condition.
      The unreachable code was added to emulate previous cfq behaviour for
      non-NCQ rotational devices. My tests show that even without it,
      performance and fairness are comparable with the previous cfq, thanks
      to the fact that all seeky queues are grouped together, and that we
      idle at the end of the tree.
      Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>
      Acked-by: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
    • cfq: Make use of service count to estimate the rb_key offset · 3586e917
      Committed by Gui Jianfeng
      At the moment, cfq queues for different workloads are put into
      different service trees, but CFQ still uses "busy_queues" to estimate
      the rb_key offset when inserting a cfq queue into a service tree.
      That isn't appropriate; the estimate should use the count of queues
      on the target service tree instead. This patch is for the for-2.6.33
      branch.
      Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
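The change amounts to swapping which count scales the offset. A minimal sketch, with hypothetical names and a simplified slice-offset formula (the real rb_key arithmetic also involves slice lengths and priorities):

```c
/* Illustrative sketch: scale the rb_key offset by the number of queues
 * already on the target service tree, not by the global busy_queues
 * count, which also includes queues on unrelated trees. */
static unsigned long rb_key_offset(unsigned long slice,
                                   int tree_count,       /* was busy_queues */
                                   unsigned long now_jiffies)
{
    return now_jiffies + slice * tree_count;
}
```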
  8. 11 Nov 2009 (1 commit)
  9. 09 Nov 2009 (1 commit)
    • cfq-iosched: fix next_rq computation · cf7c25cf
      Committed by Corrado Zoccolo
      CFQ has a bug in the computation of next_rq that affects transitions
      between multiple sequential request streams in a single queue
      (e.g. two sequential buffered writers of the same priority),
      causing alternation between the two streams for a transient period.
      
        8,0    1    18737     0.260400660  5312  D   W 141653311 + 256
        8,0    1    20839     0.273239461  5400  D   W 141653567 + 256
        8,0    1    20841     0.276343885  5394  D   W 142803919 + 256
        8,0    1    20843     0.279490878  5394  D   W 141668927 + 256
        8,0    1    20845     0.292459993  5400  D   W 142804175 + 256
        8,0    1    20847     0.295537247  5400  D   W 141668671 + 256
        8,0    1    20849     0.298656337  5400  D   W 142804431 + 256
        8,0    1    20851     0.311481148  5394  D   W 141668415 + 256
        8,0    1    20853     0.314421305  5394  D   W 142804687 + 256
        8,0    1    20855     0.318960112  5400  D   W 142804943 + 256
      
      The fix makes sure that next_rq is computed from the last
      dispatched request, and is not affected by merging.
      
        8,0    1    37776     4.305161306     0  D   W 141738087 + 256
        8,0    1    37778     4.308298091     0  D   W 141738343 + 256
        8,0    1    37780     4.312885190     0  D   W 141738599 + 256
        8,0    1    37782     4.315933291     0  D   W 141738855 + 256
        8,0    1    37784     4.319064459     0  D   W 141739111 + 256
        8,0    1    37786     4.331918431  5672  D   W 142803007 + 256
        8,0    1    37788     4.334930332  5672  D   W 142803263 + 256
        8,0    1    37790     4.337902723  5672  D   W 142803519 + 256
        8,0    1    37792     4.342359774  5672  D   W 142803775 + 256
        8,0    1    37794     4.345318286     0  D   W 142804031 + 256
      Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
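The next_rq choice anchored at the last dispatched position can be sketched as below. Types and the selection helper are hypothetical simplifications: the real code picks between the requests before and after the current head position in the queue's rb-tree.

```c
#include <stddef.h>

struct request_sketch { unsigned long sector; };

/* Pick whichever candidate is closer to the position of the request we
 * actually last dispatched; a merge that grows some other pending
 * request can then no longer drag next_rq back to the other stream. */
static struct request_sketch *choose_next(unsigned long last_sector,
                                          struct request_sketch *a,
                                          struct request_sketch *b)
{
    if (!a)
        return b;
    if (!b)
        return a;
    unsigned long da = a->sector > last_sector ? a->sector - last_sector
                                               : last_sector - a->sector;
    unsigned long db = b->sector > last_sector ? b->sector - last_sector
                                               : last_sector - b->sector;
    return da <= db ? a : b;
}
```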
  10. 04 Nov 2009 (4 commits)