1. 10 May 2007: 15 commits
  2. 09 May 2007: 1 commit
  3. 08 May 2007: 1 commit
  4. 30 April 2007: 1 commit
    • [BLOCK] Don't pin lots of memory in mempools · 5972511b
      Committed by Jens Axboe
      Currently we scale the mempool sizes depending on the memory installed
      in the machine, except for the bio pool itself, which sits at a fixed
      256-entry pre-allocation.
      
      There's really no point in "optimizing" this OOM path; we just need
      enough preallocated entries to make progress. A single unit is enough,
      so let's scale it down to 2 just to be on the safe side (a sketch of
      the idea follows this entry).
      
      This patch saves ~150 KB of pinned kernel memory on a 32-bit box.
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      5972511b
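      A minimal sketch of the idea in kernel-style C: a mempool only needs a
      couple of pre-allocated elements to guarantee forward progress under
      memory pressure, because freed elements go back into the pool. The
      names demo_pool and demo_obj_size are made up for this example and are
      not part of the actual fs/bio.c change.

        #include <linux/mempool.h>
        #include <linux/module.h>

        static mempool_t *demo_pool;
        static const size_t demo_obj_size = 128;

        static int __init demo_init(void)
        {
                /*
                 * Pin only 2 elements.  When a regular allocation fails,
                 * mempool_alloc() hands out one of these reserves and the
                 * caller makes (slow) progress; a larger reserve would just
                 * keep memory pinned without improving the OOM path.
                 */
                demo_pool = mempool_create_kmalloc_pool(2, demo_obj_size);
                return demo_pool ? 0 : -ENOMEM;
        }

        static void __exit demo_exit(void)
        {
                mempool_destroy(demo_pool);
        }

        module_init(demo_init);
        module_exit(demo_exit);
        MODULE_LICENSE("GPL");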
  5. 13 April 2007: 1 commit
  6. 05 April 2007: 1 commit
  7. 28 March 2007: 3 commits
  8. 17 March 2007: 1 commit
    • [PATCH] fix read past end of array in md/linear.c · bed31ed9
      Committed by Andy Isaacson
      When iterating through an array, one must be careful to test one's own
      index variable rather than another similarly named one (a sketch of the
      bug pattern follows this entry).
      
      The loop will read off the end of conf->disks[] in the following
      (pathological) case:
      
        % dd bs=1 seek=840716287 if=/dev/zero of=d1 count=1
        % for i in 2 3 4; do dd if=/dev/zero of=d$i bs=1k count=$(($i+150)); done
        % ./vmlinux ubd0=root ubd1=d1 ubd2=d2 ubd3=d3 ubd4=d4
        # mdadm -C /dev/md0 --level=linear --raid-devices=4 /dev/ubd[1234]
      
      Adding some printks, I saw this:
      
        [42949374.960000] hash_spacing = 821120
        [42949374.960000] cnt          = 4
        [42949374.960000] min_spacing  = 801
        [42949374.960000] j=0 size=820928 sz=820928
        [42949374.960000] i=0 sz=820928 hash_spacing=820928
        [42949374.960000] j=1 size=64 sz=64
        [42949374.960000] j=2 size=64 sz=128
        [42949374.960000] j=3 size=64 sz=192
        [42949374.960000] j=4 size=1515870810 sz=1515871002
      
      Cc: Gautham R Shenoy <ego@in.ibm.com>
      Acked-by: Neil Brown <neilb@cse.unsw.edu.au>
      Cc: <stable@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      bed31ed9
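      A self-contained illustration of the bug class described above (the
      array contents mirror the printk values; the code is a simplified
      stand-in, not the literal md/linear.c hunk): the inner loop's condition
      tests the outer index i instead of its own index j, so j can walk past
      the end of the array.

        #include <stdio.h>

        #define CNT 4

        int main(void)
        {
                long size[CNT] = {820928, 64, 64, 64};
                long min_spacing = 801;

                for (int i = 0; i < CNT - 1; i++) {
                        long sz = 0;
                        int j;

                        /*
                         * Buggy form: "i < CNT - 1" never changes inside this
                         * loop, so only "sz < min_spacing" can stop it and j
                         * may index past size[CNT - 1] (hence the garbage
                         * j=4 line in the printks above):
                         *
                         *   for (j = i; i < CNT - 1 && sz < min_spacing; j++)
                         *           sz += size[j];
                         */

                        /* Fixed form: bound the loop on the index it increments. */
                        for (j = i; j < CNT - 1 && sz < min_spacing; j++)
                                sz += size[j];

                        printf("i=%d sz=%ld\n", i, sz);
                }
                return 0;
        }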
  9. 05 March 2007: 1 commit
    • [PATCH] md: fix for raid6 reshape · 6d3baf2e
      Committed by NeilBrown
      A recent patch for raid6 reshape was missing a change that showed up in
      subsequent review.
      
      Many places in the raid5 code used "conf->raid_disks - 1" to mean
      "number of data disks".  With raid6 that had to be changed to
      "conf->raid_disks - conf->max_degraded" or similar.  One place was
      missed (see the sketch after this entry).
      
      This bug means that if a raid6 reshape were aborted in the middle, the
      recorded position would be wrong.  On restart it would either fail (as
      the position wasn't on an appropriate boundary) or would leave a section
      of the array unreshaped, causing data corruption.
      Signed-off-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6d3baf2e
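      A small stand-alone sketch of the invariant described above (field and
      function names are illustrative, not the literal raid5.c code): with
      raid6 the number of data disks is raid_disks - max_degraded, so any
      leftover "raid_disks - 1" miscounts by one, which is how the recorded
      reshape position can end up on the wrong boundary.

        #include <stdio.h>

        struct conf {
                int raid_disks;
                int max_degraded;   /* 1 for raid5, 2 for raid6 */
        };

        static int data_disks(const struct conf *c)
        {
                /* Correct for both levels; "raid_disks - 1" is only right
                 * for raid5, where max_degraded happens to be 1. */
                return c->raid_disks - c->max_degraded;
        }

        int main(void)
        {
                struct conf raid6 = { .raid_disks = 6, .max_degraded = 2 };

                /* 4 disks carry data here; a checkpoint computed from the
                 * old "raid_disks - 1" (5) would not land on the boundary. */
                printf("raid6 data disks: %d\n", data_disks(&raid6));
                return 0;
        }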
  10. 02 March 2007: 6 commits
  11. 15 February 2007: 2 commits
  12. 13 February 2007: 1 commit
  13. 12 February 2007: 1 commit
  14. 10 February 2007: 2 commits
    • [PATCH] md: avoid possible BUG_ON in md bitmap handling · da6e1a32
      Committed by Neil Brown
      md/bitmap tracks how many active write requests are pending on blocks
      associated with each bit in the bitmap, so that it knows when it can clear
      the bit (when count hits zero).
      
      The counter has 14 bits of space, so if there are ever more than 16383, we
      cannot cope.
      
      Currently the code just calls BUG_ON, as "all" drivers have request-queue
      limits much smaller than this.
      
      However, it seems that some don't.  Apparently some multipath
      configurations can allow more than 16383 concurrent write requests.
      
      So, in this unlikely situation, instead of calling BUG_ON we now wait
      for the count to drop down a bit.  This requires a new wait_queue_head,
      some waiting code, and a wakeup call (a sketch of the pattern follows
      this entry).
      
      Tested by limiting the counter to 20 instead of 16383 (writes go a lot slower
      in that case...).
      Signed-off-by: Neil Brown <neilb@suse.de>
      Cc: <stable@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      da6e1a32
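      A kernel-style sketch of the wait/wake pattern described above (the
      names and locking are simplified for illustration and are not the
      literal md/bitmap.c change): the writer sleeps while the counter is
      saturated instead of hitting BUG_ON, and the completion path wakes it.

        #include <linux/sched.h>
        #include <linux/spinlock.h>
        #include <linux/wait.h>

        #define COUNTER_MAX 16383

        static DEFINE_SPINLOCK(counter_lock);
        static DECLARE_WAIT_QUEUE_HEAD(overflow_wait);
        static unsigned int pending_writes;

        /* Called for each write touching the block: sleep rather than BUG()
         * when the 14-bit counter is already at its maximum. */
        static void pending_inc(void)
        {
                spin_lock_irq(&counter_lock);
                while (pending_writes >= COUNTER_MAX) {
                        DEFINE_WAIT(wait);

                        prepare_to_wait(&overflow_wait, &wait,
                                        TASK_UNINTERRUPTIBLE);
                        spin_unlock_irq(&counter_lock);
                        schedule();
                        finish_wait(&overflow_wait, &wait);
                        spin_lock_irq(&counter_lock);
                }
                pending_writes++;
                spin_unlock_irq(&counter_lock);
        }

        /* Called on write completion: the wakeup lets a stalled writer in. */
        static void pending_dec(void)
        {
                spin_lock_irq(&counter_lock);
                pending_writes--;
                spin_unlock_irq(&counter_lock);
                wake_up(&overflow_wait);
        }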
    • [PATCH] md: fix various bugs with aligned reads in RAID5 · 387bb173
      Committed by Neil Brown
      It is possible for raid5 to be sent a bio that is too big for an
      underlying device.  So if it is a READ that we pass straight down to a
      device, it will fail and confuse RAID5.
      
      So in 'chunk_aligned_read' we check that the bio fits within the
      parameters for the target device and, if it doesn't fit, fall back on
      reading through the stripe cache and making lots of one-page requests
      (see the sketch after this entry).
      
      Note that this is the earliest time we can check against the device because
      earlier we don't have a lock on the device, so it could change underneath
      us.
      
      Also, the code for handling a retry through the cache when a read fails has
      not been tested and was badly broken.  This patch fixes that code.
      Signed-off-by: Neil Brown <neilb@suse.de>
      Cc: "Kai" <epimetreus@fastmail.fm>
      Cc: <stable@suse.de>
      Cc: <org@suse.de>
      Cc: Jens Axboe <jens.axboe@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      387bb173
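      A rough sketch of the check described above (the helper names and the
      exact set of limits tested are illustrative, not the literal raid5.c
      code): before sending an aligned READ straight to the underlying
      device, verify it fits that device's queue limits; if not, return 0 so
      the caller falls back to the stripe cache and one-page requests.

        #include <linux/bio.h>
        #include <linux/blkdev.h>

        /* Returns non-zero if the bio can be sent to the device as-is. */
        static int demo_bio_fits_device(struct bio *bio, struct request_queue *q)
        {
                if (bio_sectors(bio) > queue_max_sectors(q))
                        return 0;
                if (bio->bi_vcnt > queue_max_segments(q))
                        return 0;
                return 1;
        }

        /* Caller-side shape of chunk_aligned_read(): 1 means the read was
         * passed straight down, 0 means "retry through the stripe cache". */
        static int demo_chunk_aligned_read(struct bio *bio,
                                           struct request_queue *dev_q)
        {
                if (!demo_bio_fits_device(bio, dev_q))
                        return 0;   /* too big in some way: use the cache */

                /* ... remap and submit the bio to the lower device here ... */
                return 1;
        }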
  15. 27 January 2007: 3 commits