1. 25 Apr 2007, 1 commit
    • cfq-iosched: fix alias + front merge bug · 5044eed4
      Committed by Jens Axboe
      There's a really rare and obscure bug in CFQ, that causes a crash in
      cfq_dispatch_insert() due to rq == NULL.  One example of the resulting
      oops is seen here:
      
      	http://lkml.org/lkml/2007/4/15/41
      
      Neil correctly diagnosed how this can happen: if two concurrent
      requests arrive with the exact same sector number (due to direct IO,
      or aliasing between MD and raw device access), the alias handling
      will add the request to the sort list, but next_rq remains NULL.
      
      Read the more complete analysis at:
      
      	http://lkml.org/lkml/2007/4/25/57
      
      This looks like it requires md to trigger, even though it should
      potentially be possible to do with O_DIRECT (at least if you edit the
      kernel and doctor some of the unplug calls).
      
      The fix is to move the ->next_rq update to when we add a request to
      the rbtree. Then a request can no longer exist in the rbtree without
      having ->next_rq correctly updated.
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5044eed4
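The fix described above can be illustrated with a minimal userspace sketch. The struct names echo CFQ but are simplified stand-ins (a sorted singly linked list replaces the real rbtree, and the fields are invented for this example); the point it demonstrates is the commit's: update the queue's next_rq at insertion time, so even an alias insert (equal sector numbers) leaves a valid dispatch candidate behind.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, simplified model of a CFQ per-process queue. */
struct request {
    unsigned long sector;
    struct request *next;      /* sort-list link (an rbtree in real CFQ) */
};

struct cfq_queue {
    struct request *sort_list; /* requests sorted by sector */
    struct request *next_rq;   /* next request to dispatch */
};

/* Insert rq in sector order; aliases (equal sectors) go after the
 * existing entry.  The essence of the fix: next_rq is refreshed on
 * every add, so it can never be left NULL while the sort list is
 * non-empty -- the condition that crashed cfq_dispatch_insert(). */
static void cfq_add_rq(struct cfq_queue *cfqq, struct request *rq)
{
    struct request **p = &cfqq->sort_list;

    while (*p && (*p)->sector <= rq->sector)
        p = &(*p)->next;
    rq->next = *p;
    *p = rq;

    if (!cfqq->next_rq || rq->sector < cfqq->next_rq->sector)
        cfqq->next_rq = rq;
}
```

Inserting two aliased requests (same sector, as with MD plus raw device access) now leaves next_rq pointing at one of them instead of NULL.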
  2. 21 Apr 2007, 1 commit
    • cfq-iosched: fix sequential write regression · a9938006
      Committed by Jens Axboe
      We have a 10-15% performance regression for sequential writes on TCQ/NCQ
      enabled drives in 2.6.21-rcX after the CFQ update went in.  It has been
      reported by Valerie Clement <valerie.clement@bull.net> and the Intel
      testing folks.  The regression is because of CFQ's now more aggressive
      queue control, limiting the depth available to the device.
      
      This patch fixes that regression by allowing a greater depth when
      only one queue is busy.  It has been tested not to impact
      sync-vs-async workloads too much - we still do a lot better than
      2.6.20.
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a9938006
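The heuristic this commit describes (relax the dispatch-depth limit when a single queue owns the device) can be sketched as follows. The function name and the depth constants are made up for illustration, not the values CFQ actually uses:

```c
/* Hypothetical depth limits; the real CFQ numbers differ. */
#define CFQ_FAIR_DEPTH   4   /* cap when several queues compete */
#define CFQ_SOLO_DEPTH  32   /* let TCQ/NCQ drives fill up */

/* May this queue dispatch another request to the device?
 * With only one busy queue there is no fairness to protect,
 * so a greater depth is allowed -- the fix for the sequential
 * write regression on TCQ/NCQ drives. */
static int cfq_may_dispatch(int busy_queues, int dispatched)
{
    int max_depth = (busy_queues == 1) ? CFQ_SOLO_DEPTH
                                       : CFQ_FAIR_DEPTH;
    return dispatched < max_depth;
}
```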
  3. 12 Feb 2007, 11 commits
  4. 03 Jan 2007, 1 commit
    • [PATCH] cfq-iosched: merging problem · ec8acb69
      Committed by Jens Axboe
      Two issues:
      
      - The final return 1 should be a return 0, otherwise comparing cfqq is
        a noop.
      
      - bio_sync() only checks the sync flag, while rq_is_sync() checks both
        for READ and sync. The latter is what we want. Expand the bio check
        to include reads, and relax the restriction to allow merging of async
        io into sync requests.
      
      In the future we want to clean up the SYNC logic; right now it means
      both a sync request (such as a READ or an O_DIRECT WRITE) and
      unplug-on-issue.  Leave that for later.
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      ec8acb69
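The distinction the commit draws between bio_sync() (SYNC flag only) and rq_is_sync() (READ or SYNC) can be shown with a small sketch. The flag names follow the kernel's spirit, but the bit values and the function body here are invented for illustration:

```c
#include <stdbool.h>

/* Illustrative flag bits; not the kernel's actual values. */
#define REQ_RW    (1u << 0)   /* set for writes, clear for reads */
#define REQ_SYNC  (1u << 1)   /* explicitly synchronous, e.g. O_DIRECT */

/* The semantics the commit wants: every READ counts as sync,
 * and a write counts as sync only when the SYNC flag is set.
 * bio_sync() checked only the SYNC bit, missing plain reads. */
static bool rq_is_sync(unsigned int flags)
{
    return !(flags & REQ_RW) || (flags & REQ_SYNC);
}
```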
  5. 23 Dec 2006, 1 commit
  6. 20 Dec 2006, 1 commit
    • [PATCH] cfq-iosched: don't allow sync merges across queues · da775265
      Committed by Jens Axboe
      Currently we allow any merge, even if the io originates from different
      processes. This can cause really bad starvation and unfairness, if those
      ios happen to be synchronous (reads or direct writes).
      
      So add an allow_merge hook to the io scheduler ops, so an io
      scheduler can help decide whether a bio/process combination may be
      merged with an existing request.
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      da775265
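The shape of the hook can be sketched like this. The struct layouts and the merge policy body are simplified stand-ins (real CFQ compares the per-process cfq_queue objects, not pids directly); what the sketch shows is the new veto point: the elevator asks the scheduler before merging a bio into an existing request.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical, heavily simplified request/bio models. */
struct bio     { int pid; bool sync; };
struct request { int pid; bool sync; };

struct elevator_ops {
    /* New hook: may this bio be merged into this request? */
    bool (*allow_merge)(struct request *rq, struct bio *bio);
};

/* CFQ's policy in miniature: async io may merge freely, but sync io
 * (reads, direct writes) only merges within the same process, to
 * avoid the starvation and unfairness described above. */
static bool cfq_allow_merge(struct request *rq, struct bio *bio)
{
    if (!rq->sync && !bio->sync)
        return true;
    return rq->pid == bio->pid;
}

static struct elevator_ops cfq_ops = { .allow_merge = cfq_allow_merge };
```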
  7. 13 Dec 2006, 1 commit
  8. 08 Dec 2006, 1 commit
  9. 01 Dec 2006, 1 commit
  10. 22 Nov 2006, 1 commit
    • WorkStruct: Pass the work_struct pointer instead of context data · 65f27f38
      Committed by David Howells
      Pass the work_struct pointer to the work function rather than context data.
      The work function can use container_of() to work out the data.
      
      For the cases where the container of the work_struct may go away the moment the
      pending bit is cleared, it is made possible to defer the release of the
      structure by deferring the clearing of the pending bit.
      
      To make this work, an extra flag is introduced into the management side of the
      work_struct.  This governs auto-release of the structure upon execution.
      
      Ordinarily, the work queue executor would release the work_struct for further
      scheduling or deallocation by clearing the pending bit prior to jumping to the
      work function.  This means that, unless the driver makes some guarantee itself
      that the work_struct won't go away, the work function may not access anything
      else in the work_struct or its container lest they be deallocated.  This is a
      problem if the auxiliary data is taken away (as done by the last patch).
      
      However, if the pending bit is *not* cleared before jumping to the work
      function, then the work function *may* access the work_struct and its container
      with no problems.  But then the work function must itself release the
      work_struct by calling work_release().
      
      In most cases, automatic release is fine, so this is the default.  Special
      initiators exist for the non-auto-release case (ending in _NAR).
      Signed-off-by: David Howells <dhowells@redhat.com>
      65f27f38
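The new calling convention (pass the work_struct pointer, recover the container with container_of()) can be modeled in userspace in a few lines. The device struct and work function here are hypothetical; container_of() is reimplemented locally so the sketch stands alone:

```c
#include <stddef.h>

/* Local reimplementation of the kernel's container_of(): walk back
 * from a member pointer to the enclosing structure. */
#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

/* After this commit the work function receives the work_struct
 * itself rather than an opaque void *data cookie. */
struct work_struct {
    void (*func)(struct work_struct *work);
};

/* Hypothetical driver state with an embedded work item, as is
 * typical in kernel code. */
struct my_device {
    int irq_count;
    struct work_struct work;
};

static void my_work_fn(struct work_struct *work)
{
    /* Recover the container from the member pointer. */
    struct my_device *dev = container_of(work, struct my_device, work);
    dev->irq_count++;
}
```

Because the container is derived from the work_struct address, no separate context pointer needs to be stored, which is what lets the auxiliary data field be removed.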
  11. 01 Nov 2006, 1 commit
  12. 31 Oct 2006, 2 commits
  13. 01 Oct 2006, 17 commits