  1. 28 March 2006, 2 commits
    • [BLOCK] cfq-iosched: seek and async performance fixes · 206dc69b
      Committed by Jens Axboe
      Detect whether a given process is seeky and, if so, (mostly) disable the
      idle window for it. We still allow just a little idle time, just enough
      to let that process submit a new request. That is needed to maintain
      fairness across priority groups.
      
      In some cases we could set up several async queues. This is not optimal
      from a performance point of view, since we want all async io in one
      queue so that good sorting can be performed on it. It also hurt sync
      queues, as async io got too much slice time.
      Signed-off-by: Jens Axboe <axboe@suse.de>
      206dc69b
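
      The seek detection described above can be pictured with a small,
      self-contained sketch. This is a user-space model written only for
      illustration, not the kernel code: the struct, the 7/8 weighting and
      the threshold value are assumptions made for the example.

      #include <stdint.h>
      #include <stdio.h>

      #define SEEK_THRESHOLD_SECTORS 1024ULL   /* assumed cut-off for "seeky" */

      struct proc_io_state {
              uint64_t last_sector;   /* end of the previous request */
              uint64_t seek_mean;     /* smoothed mean seek distance */
      };

      static void account_request(struct proc_io_state *st, uint64_t sector)
      {
              uint64_t dist = sector > st->last_sector ?
                              sector - st->last_sector : st->last_sector - sector;

              /* exponentially weighted mean: old value weighted 7/8, new sample 1/8 */
              st->seek_mean = (st->seek_mean * 7 + dist) / 8;
              st->last_sector = sector;
      }

      static int process_is_seeky(const struct proc_io_state *st)
      {
              return st->seek_mean > SEEK_THRESHOLD_SECTORS;
      }

      int main(void)
      {
              struct proc_io_state st = { 0, 0 };
              /* a random-ish access pattern quickly pushes the mean up */
              uint64_t pattern[] = { 100, 50000, 120, 90000, 200, 300 };

              for (size_t i = 0; i < sizeof(pattern) / sizeof(pattern[0]); i++)
                      account_request(&st, pattern[i]);

              printf("seek_mean=%llu seeky=%d\n",
                     (unsigned long long)st.seek_mean, process_is_seeky(&st));
              return 0;
      }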
    • [PATCH] [BLOCK] cfq-iosched: change cfq io context linking from list to tree · e2d74ac0
      Committed by Jens Axboe
      On setups with many disks, we spend a considerable amount of time
      looking up the process-disk mapping on each queued io. In testing with
      a NULL-based block driver, this cost a 40-50% reduction in throughput
      for 1000 disks.
      Signed-off-by: Jens Axboe <axboe@suse.de>
      e2d74ac0
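
      The cost being attacked is the per-io lookup of a process's context for
      the disk it is issuing to. A minimal sketch of the complexity argument,
      using a sorted array and bsearch in place of the kernel's tree (the
      struct and names are illustrative, not the cfq data structures):

      #include <stdlib.h>

      struct cfq_ctx {
              unsigned long queue_key;   /* identifies the disk's request queue */
              /* ... per-queue scheduling state would live here ... */
      };

      static int ctx_cmp(const void *a, const void *b)
      {
              const struct cfq_ctx *x = a, *y = b;

              return (x->queue_key > y->queue_key) - (x->queue_key < y->queue_key);
      }

      /* O(log n) lookup, versus O(n) when walking a linked list of contexts;
       * with many disks per process that difference dominates. */
      static struct cfq_ctx *ctx_find(struct cfq_ctx *ctxs, size_t n,
                                      unsigned long queue_key)
      {
              struct cfq_ctx key = { .queue_key = queue_key };

              return bsearch(&key, ctxs, n, sizeof(*ctxs), ctx_cmp);
      }

      int main(void)
      {
              struct cfq_ctx ctxs[] = { { 1 }, { 4 }, { 9 } };   /* sorted by key */

              return ctx_find(ctxs, 3, 4) ? 0 : 1;
      }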
  2. 24 March 2006, 1 commit
  3. 19 March 2006, 3 commits
  4. 24 January 2006, 1 commit
  5. 09 January 2006, 2 commits
  6. 06 January 2006, 3 commits
  7. 16 December 2005, 1 commit
    • [SCSI] separate max_sectors from max_hw_sectors · defd94b7
      Committed by Mike Christie
      - export __blk_put_request and blk_execute_rq_nowait,
      needed for async REQ_BLOCK_PC requests
      - separate max_hw_sectors and max_sectors for the block/scsi_ioctl.c and
      SG_IO bio.c helpers, per Jens's last comments. Since block/scsi_ioctl.c SG_IO was
      already testing against max_sectors, and SCSI-ml was setting max_sectors and
      max_hw_sectors to the same value, this does not change any SCSI SG_IO behavior. It only
      prepares ll_rw_blk.c, scsi_ioctl.c and bio.c for when SCSI-ml begins to set
      a valid max_hw_sectors for all LLDs. Today, if an LLD does not set it,
      SCSI-ml sets it to a safe default, and some LLDs set it to an artificially low
      value to overcome memory and feedback issues.
      
      Note: Since we now cap max_sectors to BLK_DEF_MAX_SECTORS, which is 1024,
      drivers that used to call blk_queue_max_sectors with a large value of
      max_sectors will now see the fs requests capped to BLK_DEF_MAX_SECTORS.
      Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
      Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
      defd94b7
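
      The split being prepared for can be sketched as below. The struct and
      helper are simplified stand-ins rather than the kernel API, though the
      1024-sector default matches the note above: a driver advertising, say,
      8192 sectors keeps 8192 for SG_IO passthrough while ordinary fs
      requests are capped at 1024.

      #define BLK_DEF_MAX_SECTORS 1024u

      struct queue_limits_sketch {
              unsigned int max_sectors;      /* limit applied to fs requests */
              unsigned int max_hw_sectors;   /* limit for SG_IO/REQ_BLOCK_PC passthrough */
      };

      static void set_max_sectors(struct queue_limits_sketch *q,
                                  unsigned int driver_max)
      {
              q->max_hw_sectors = driver_max;
              q->max_sectors = driver_max < BLK_DEF_MAX_SECTORS ?
                               driver_max : BLK_DEF_MAX_SECTORS;
      }

      int main(void)
      {
              struct queue_limits_sketch q;

              set_max_sectors(&q, 8192);   /* LLD advertises a large hardware limit */
              return (q.max_sectors == BLK_DEF_MAX_SECTORS &&
                      q.max_hw_sectors == 8192) ? 0 : 1;
      }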
  8. 15 December 2005, 2 commits
  9. 12 November 2005, 1 commit
  10. 28 October 2005, 7 commits
  11. 11 September 2005, 1 commit
  12. 06 August 2005, 1 commit
    • [PATCH] blk: fix tag shrinking (revive real_max_size) · ba025082
      Committed by Tejun Heo
      My patch in commit fa72b903 incorrectly
      removed blk_queue_tag->real_max_depth.
      
      The original resize implementation was incorrect in the following
      respects:
      
       * actual allocation size of tag_index was shorter than real_max_size,
         but assumed to be of the same size, possibly causing memory access
         beyond the allocated area.
       * bits in tag_map between max_depth and real_max_depth were
         initialized to 1's, making the tags permanently reserved.
      
      In an attempt to fix the above two bugs, I had removed the allocation
      optimization in init_tag_map and real_max_size.  Tag map/index were
      allocated and freed immediately during resize.
      
      Unfortunately, I wasn't considering that the tag map/index can be resized
      dynamically with tags beyond new_depth still active.  This led to accessing
      a freed area after shrinking tags, and to the following bug report
      thread on linux-scsi.
      
         http://marc.theaimsgroup.com/?l=linux-scsi&m=112319898111885&w=2
      
      To fix the problem, I've revived real_max_depth without allocation
      optimization in init_tag_map, and Andrew Vasquez confirmed that the
      problem was fixed.  As Jens is not going to be available for a week, he
      asked me to make sure that this patch reaches you.
      
         http://marc.theaimsgroup.com/?l=linux-scsi&m=112325778530886&w=2
      
      Also, a comment was added to make it clear that real_max_size is needed
      for dynamic shrinking.
      Signed-off-by: Tejun Heo <htejun@gmail.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      ba025082
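
      A compact model of why real_max_depth has to survive a shrink is shown
      below. The structure and helper are illustrative user-space stand-ins,
      not the block-layer tag code: the point is only that tags handed out
      above the new depth must stay addressable until they complete.

      #include <stdlib.h>
      #include <string.h>

      struct tag_map_sketch {
              unsigned char *in_use;     /* one byte per tag, for simplicity */
              int max_depth;             /* new tags come from [0, max_depth) */
              int real_max_depth;        /* actual allocated size */
      };

      static int tag_map_resize(struct tag_map_sketch *t, int new_depth)
      {
              if (new_depth <= t->real_max_depth) {
                      /* Shrinking: only lower the soft limit.  Outstanding tags
                       * in [new_depth, real_max_depth) remain addressable. */
                      t->max_depth = new_depth;
                      return 0;
              }
              /* Growing: allocate a bigger map and copy the old state over. */
              unsigned char *bigger = calloc(new_depth, 1);
              if (!bigger)
                      return -1;
              memcpy(bigger, t->in_use, t->real_max_depth);
              free(t->in_use);
              t->in_use = bigger;
              t->max_depth = t->real_max_depth = new_depth;
              return 0;
      }

      int main(void)
      {
              struct tag_map_sketch t = { .in_use = calloc(8, 1),
                                          .max_depth = 8, .real_max_depth = 8 };

              if (!t.in_use)
                      return 1;
              t.in_use[6] = 1;          /* tag 6 is outstanding */
              tag_map_resize(&t, 4);    /* shrink while tag 6 is still in flight */
              return t.in_use[6] ? 0 : 1;   /* still addressable, no freed access */
      }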
  13. 29 June 2005, 1 commit
    • [PATCH] blk: light iocontext ops · fb3cc432
      Committed by Nick Piggin
      get_io_context needlessly turned off interrupts and checked for racing io
      context creations.  Neither is needed, because the io context can
      only be created while in the process context of the current process.
      
      Also, the function is split in two.  The light version, current_io_context,
      does not elevate the reference count, but can be used while in process
      context, because the process itself holds a reference.
      Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Jens Axboe <axboe@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      fb3cc432
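
      The two-level API can be sketched in user space as below. The names echo
      the commit, but the bodies are purely illustrative; in particular the
      slot double pointer stands in for current->io_context and is an
      assumption of the sketch.

      #include <stdlib.h>

      struct io_context_sketch {
              int refcount;
              /* ... per-process io scheduling state would live here ... */
      };

      /* Light variant: valid only while running as the owning process.  The
       * process's own long-lived reference keeps the context alive, so no
       * extra reference is taken, and no irq-disabled race check is needed
       * since only the process itself can create its context. */
      static struct io_context_sketch *
      current_io_context_sketch(struct io_context_sketch **slot)
      {
              if (!*slot) {
                      *slot = calloc(1, sizeof(**slot));
                      if (*slot)
                              (*slot)->refcount = 1;
              }
              return *slot;
      }

      /* Heavier variant for callers that keep the pointer beyond process
       * context: bump the reference count so the context cannot go away. */
      static struct io_context_sketch *
      get_io_context_sketch(struct io_context_sketch **slot)
      {
              struct io_context_sketch *ioc = current_io_context_sketch(slot);

              if (ioc)
                      ioc->refcount++;
              return ioc;
      }

      int main(void)
      {
              struct io_context_sketch *task_ioc = NULL;   /* like current->io_context */
              struct io_context_sketch *held = get_io_context_sketch(&task_ioc);

              return (held && held->refcount == 2) ? 0 : 1;
      }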
  14. 28 June 2005, 1 commit
    • [PATCH] Update cfq io scheduler to time sliced design · 22e2c507
      Committed by Jens Axboe
      This updates the CFQ io scheduler to the new time sliced design (cfq
      v3).  It provides full process fairness, while giving excellent
      aggregate system throughput even for many competing processes.  It
      supports io priorities, either inherited from the cpu nice value or set
      directly with the ioprio_get/set syscalls.  The latter closely mimic
      set/getpriority.
      
      This import is based on my latest from -mm.
      Signed-off-by: Jens Axboe <axboe@suse.de>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      22e2c507
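
      The ioprio_get/ioprio_set syscalls mentioned above can be exercised from
      user space roughly as follows.  There was no glibc wrapper for them at
      the time, so raw syscall(2) is used; the IOPRIO_* constants are spelled
      out locally for the sketch (on later kernels they come from
      <linux/ioprio.h>), and the priority level chosen is arbitrary.

      #include <stdio.h>
      #include <sys/syscall.h>
      #include <unistd.h>

      #define IOPRIO_CLASS_SHIFT              13
      #define IOPRIO_PRIO_VALUE(cls, data)    (((cls) << IOPRIO_CLASS_SHIFT) | (data))
      #define IOPRIO_CLASS_BE                 2   /* best-effort class, like cpu nice */
      #define IOPRIO_WHO_PROCESS              1

      int main(void)
      {
              /* Best-effort class, priority level 4 (levels run 0..7). */
              int prio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, 4);

              if (syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0, prio) < 0)
                      perror("ioprio_set");

              long cur = syscall(SYS_ioprio_get, IOPRIO_WHO_PROCESS, 0);
              printf("io priority value: %ld\n", cur);
              return 0;
      }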
  15. 26 June 2005, 1 commit
  16. 24 June 2005, 3 commits
  17. 20 June 2005, 4 commits
  18. 21 May 2005, 1 commit
  19. 17 April 2005, 2 commits
    • [PATCH] fix NMI lockup with CFQ scheduler · 152587de
      The current problem seen is that the queue lock is actually in the
      SCSI device structure, so when that structure is freed on device
      release, we go boom if the queue tries to access the lock again.
      
      The fix here is to move the lock from the scsi_device to the queue.
      Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
      152587de
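
      The lifetime hazard being fixed can be illustrated with a tiny
      pthread-based sketch (illustrative only, not the SCSI/block code): if
      the queue merely points at a lock embedded in another object, freeing
      that object leaves a dangling lock pointer, so here the queue owns a
      lock whose lifetime matches its own.

      #include <pthread.h>

      struct request_queue_sketch {
              pthread_mutex_t queue_lock;   /* owned by the queue itself */
              pthread_mutex_t *lock;        /* what the rest of the code uses */
      };

      static void queue_init(struct request_queue_sketch *q)
      {
              /* Point at our own lock rather than one inside a device
               * structure that may be freed before the queue is. */
              pthread_mutex_init(&q->queue_lock, NULL);
              q->lock = &q->queue_lock;
      }

      int main(void)
      {
              struct request_queue_sketch q;

              queue_init(&q);
              pthread_mutex_lock(q.lock);
              pthread_mutex_unlock(q.lock);
              return 0;
      }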
    • Linux-2.6.12-rc2 · 1da177e4
      Committed by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!
      1da177e4