1. 21 Sep 2011 (1 commit)
    • md: Avoid waking up a thread after it has been freed. · 01f96c0a
      Authored by NeilBrown
      Two related problems:
      
      1/ some error paths call "md_unregister_thread(mddev->thread)"
         without subsequently clearing ->thread.  A subsequent call
         to mddev_unlock will try to wake the thread, and crash.
      
      2/ Most calls to md_wakeup_thread are protected against the thread
         disappearing either by:
            - holding the ->mutex
            - having an active request, so something else must be keeping
              the array active.
         However mddev_unlock calls md_wakeup_thread after dropping the
         mutex and without any certainty of an active request, so the
         ->thread could theoretically disappear.
         So we need a spinlock to provide some protection.
      
      So change md_unregister_thread to take a pointer to the thread
      pointer, and ensure that it always does the required locking and
      clears the pointer properly; a minimal sketch of the pattern follows
      this entry.
      Reported-by: N"Moshe Melnikov" <moshe@zadarastorage.com>
      Signed-off-by: NNeilBrown <neilb@suse.de>
      cc: stable@kernel.org
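
      A minimal user-space sketch of the pattern this fix introduces (not the
      kernel code: the worker struct, wake_worker() and unregister_worker()
      are made-up stand-ins for md_thread, md_wakeup_thread() and the
      reworked md_unregister_thread()): unregister takes a pointer to the
      thread pointer and clears it under a lock, and every wakeup re-checks
      the pointer under that same lock, so a wakeup can never touch a thread
      that has already been freed.

        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>

        struct worker {
            pthread_t tid;
            int stop;
        };

        /* Shared pointer (analogue of mddev->thread) and the lock guarding it. */
        static struct worker *the_worker;
        static pthread_mutex_t worker_lock = PTHREAD_MUTEX_INITIALIZER;

        static void *worker_fn(void *arg)
        {
            struct worker *w = arg;

            while (!__atomic_load_n(&w->stop, __ATOMIC_ACQUIRE))
                ;   /* the real thread would sleep here until woken */
            return NULL;
        }

        /* Analogue of md_wakeup_thread(): re-reads the pointer under the lock,
         * so it can never poke a worker that unregister has already freed. */
        static void wake_worker(void)
        {
            pthread_mutex_lock(&worker_lock);
            if (the_worker)
                printf("waking worker\n");
            pthread_mutex_unlock(&worker_lock);
        }

        /* Analogue of the reworked md_unregister_thread(&mddev->thread):
         * clear the shared pointer under the lock before stopping and freeing. */
        static void unregister_worker(struct worker **wp)
        {
            struct worker *w;

            pthread_mutex_lock(&worker_lock);
            w = *wp;
            *wp = NULL;
            pthread_mutex_unlock(&worker_lock);

            if (!w)
                return;
            __atomic_store_n(&w->stop, 1, __ATOMIC_RELEASE);
            pthread_join(w->tid, NULL);
            free(w);
        }

        int main(void)
        {
            the_worker = calloc(1, sizeof(*the_worker));
            pthread_create(&the_worker->tid, NULL, worker_fn, the_worker);

            unregister_worker(&the_worker);  /* also clears the_worker */
            wake_worker();                   /* sees NULL: no use-after-free */
            return 0;
        }
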
  2. 10 Sep 2011 (4 commits)
    • md: Fix handling for devices from 2TB to 4TB in 0.90 metadata. · 27a7b260
      Authored by NeilBrown
      0.90 metadata uses an unsigned 32bit number to count the number of
      kilobytes used from each device.
      This should allow up to 4TB per device.
      However we multiply this by 2 (to get sectors) before casting to a
      larger type, so sizes above 2TB get truncated; a short arithmetic
      sketch of the truncation follows this entry.
      
      Also we allow rdev->sectors to be larger than 4TB, so it is possible
      for the array to be resized larger than the metadata can handle.
      So make sure rdev->sectors never exceeds 4TB when 0.90 metadata is
      in use.
      
      Also the sanity check at the end of super_90_load should include
      level 1, as it uses ->size too. (RAID0 and Linear don't use ->size
      at all.)
      Reported-by: Pim Zandbergen <P.Zandbergen@macroscoop.nl>
      Cc: stable@kernel.org
      Signed-off-by: NeilBrown <neilb@suse.de>
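
      A short arithmetic sketch of the truncation (ordinary user-space C, not
      the kernel code; size_kib stands in for the 32-bit per-device kilobyte
      count from the 0.90 superblock): doubling the value to get 512-byte
      sectors before widening it to 64 bits wraps for anything above 2TB,
      while widening first does not.

        #include <inttypes.h>
        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            /* 3TB worth of kilobytes: fits comfortably in 32 bits. */
            uint32_t size_kib = 3u * 1024 * 1024 * 1024;

            /* Buggy shape: the doubling happens in 32-bit arithmetic and
             * wraps, then the already-truncated value is widened. */
            uint64_t sectors_bad = (uint64_t)(size_kib * 2);

            /* Fixed shape: widen first, then double. */
            uint64_t sectors_good = (uint64_t)size_kib * 2;

            printf("buggy: %" PRIu64 " sectors (~%.1f TB)\n",
                   sectors_bad, sectors_bad / (2.0 * 1024 * 1024 * 1024));
            printf("fixed: %" PRIu64 " sectors (~%.1f TB)\n",
                   sectors_good, sectors_good / (2.0 * 1024 * 1024 * 1024));
            return 0;
        }
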
    • md/raid1,10: Remove use-after-free bug in make_request. · 079fa166
      Authored by NeilBrown
      A single request to RAID1 or RAID10 might result in multiple
      requests if there are known bad blocks that need to be avoided.
      
      To detect if we need to submit another write request we test:
       	if (sectors_handled < (bio->bi_size >> 9)) {
      
      However this is after we call **_write_done() so the 'bio' no longer
      belongs to us - the writes could have completed and the bio freed.
      
      So move the **_write_done call until after the test against
      bio->bi_size; a small sketch of the corrected ordering follows this
      entry.
      
      This addresses https://bugzilla.kernel.org/show_bug.cgi?id=41862
      Reported-by: Bruno Wolff III <bruno@wolff.to>
      Tested-by: Bruno Wolff III <bruno@wolff.to>
      Signed-off-by: NeilBrown <neilb@suse.de>
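
      A minimal user-space sketch of the corrected ordering, with simplified
      stand-in types (this struct bio only carries bi_size and a reference
      count, and write_done() stands in for the raid1/raid10 *_write_done()
      helpers): everything that still needs the bio, including the test
      against bi_size, has to happen before we drop our reference.

        #include <stdio.h>
        #include <stdlib.h>

        /* Toy stand-in for the kernel's bio: just a size and a ref count. */
        struct bio {
            unsigned int bi_size;   /* bytes still covered by this request */
            int refs;
        };

        static void bio_put(struct bio *bio)
        {
            if (--bio->refs == 0) {
                printf("bio freed\n");
                free(bio);
            }
        }

        /* Stand-in for the raid1/raid10 *_write_done() helpers: may drop the
         * last reference, after which the bio must not be touched. */
        static void write_done(struct bio *bio)
        {
            bio_put(bio);
        }

        static void make_request(struct bio *bio, unsigned int sectors_handled)
        {
            /* Fixed ordering: the test against bio->bi_size (and any
             * resubmission of the remaining sectors) happens while we still
             * own a reference.  The buggy ordering called write_done(bio)
             * first, by which time the bio could already have been freed. */
            if (sectors_handled < (bio->bi_size >> 9))
                printf("submit another request for the remaining sectors\n");

            write_done(bio);
        }

        int main(void)
        {
            struct bio *bio = malloc(sizeof(*bio));

            bio->bi_size = 8 * 512;     /* 8 sectors */
            bio->refs = 1;
            make_request(bio, 4);       /* only 4 of the 8 sectors handled */
            return 0;
        }
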
    • md/raid10: unify handling of write completion. · 19d5f834
      Authored by NeilBrown
      A write can complete at two different places:
      1/ when the last member-device write completes, through
         raid10_end_write_request
      2/ in make_request() when we remove the initial bias from ->remaining.
      
      These two should do exactly the same thing and the comment says they
      do, but they don't.
      
      So factor the correct code out into a function and call it in both
      places; a small sketch of the shape follows this entry.  This makes
      the code much more similar to RAID1.
      
      The difference only matters when there is an error, and errors
      usually take a while to appear, so it is unlikely that one will
      already have occurred by the time make_request is completing, so
      this is unlikely to cause real problems.
      Signed-off-by: NeilBrown <neilb@suse.de>
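
      A minimal user-space sketch of the shape of the refactor, with
      simplified stand-in types and names (the one_write_done() helper here
      is modelled on the one the patch introduces; the real bookkeeping is
      far richer): both completion paths only drop a reference, and the
      shared end-of-write work lives in a single function.

        #include <stdio.h>
        #include <stdlib.h>

        /* Toy stand-in for the kernel's r10bio. */
        struct r10bio {
            int remaining;  /* outstanding member-device writes, plus one bias */
            int error;
        };

        /* The single place that finishes a write once every reference is
         * gone, so both completion paths behave identically. */
        static void one_write_done(struct r10bio *r10_bio)
        {
            if (--r10_bio->remaining == 0) {
                if (r10_bio->error)
                    printf("write finished with an error: record bad blocks\n");
                else
                    printf("write finished cleanly: report success\n");
                free(r10_bio);
            }
        }

        /* Path 1: a member-device write completes (possibly the last one). */
        static void end_write_request(struct r10bio *r10_bio, int error)
        {
            if (error)
                r10_bio->error = 1;
            one_write_done(r10_bio);
        }

        /* Path 2: make_request() drops the initial bias on ->remaining. */
        static void make_request_finish(struct r10bio *r10_bio)
        {
            one_write_done(r10_bio);
        }

        int main(void)
        {
            struct r10bio *r10_bio = calloc(1, sizeof(*r10_bio));

            r10_bio->remaining = 2 + 1;     /* two member writes plus the bias */
            end_write_request(r10_bio, 0);
            end_write_request(r10_bio, 0);
            make_request_finish(r10_bio);   /* drops the bias, finishes it */
            return 0;
        }
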
    • Avoid dereferencing a 'request_queue' after last close. · 94007751
      Authored by NeilBrown
      On the last close of an 'md' device which has been stopped, the device
      is destroyed and in particular the request_queue is freed.  The free
      is done in a separate thread so it might happen a short time later.
      
      __blkdev_put calls bdev_inode_switch_bdi *after* ->release has been
      called.
      
      Since commit f758eeab,
      bdev_inode_switch_bdi dereferences the 'old' bdi, which lives
      inside a request_queue, to get a spin lock.  This causes the last
      close on an md device to sometimes take a spin_lock which lives in
      freed memory, and that results in an oops.
      
      So move the call to bdev_inode_switch_bdi to before the call to
      ->release; a small sketch of the ordering follows this entry.
      
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Acked-by: Wu Fengguang <fengguang.wu@intel.com>
      Cc: stable@kernel.org
      Signed-off-by: NeilBrown <neilb@suse.de>
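
      A minimal user-space sketch of the ordering change, with made-up
      stand-ins for the block-layer structures (bdi, request_queue and
      md_release() here are simplifications, not the real types): the inode
      has to be switched away from the queue-owned bdi while that bdi, and
      the lock inside it, still exist, which means before ->release is
      allowed to tear the queue down.

        #include <stdio.h>
        #include <stdlib.h>

        /* Toy stand-ins: the bdi lives inside the request_queue and dies
         * with it. */
        struct bdi { int lock_taken; };

        struct request_queue {
            struct bdi backing_dev_info;
        };

        static struct bdi default_bdi;      /* always valid, never freed */

        /* Stand-in for bdev_inode_switch_bdi(): it must take a lock that
         * lives in the *old* bdi, so that bdi has to still exist here. */
        static void bdev_inode_switch_bdi(struct bdi **inode_bdi, struct bdi *dst)
        {
            (*inode_bdi)->lock_taken++;
            *inode_bdi = dst;
        }

        /* Stand-in for the md ->release path: on last close the queue (and
         * the bdi embedded in it) goes away.  In the kernel this happens in
         * another thread a short time later, which only hides the window. */
        static void md_release(struct request_queue **q)
        {
            free(*q);
            *q = NULL;
        }

        int main(void)
        {
            struct request_queue *q = calloc(1, sizeof(*q));
            struct bdi *inode_bdi = &q->backing_dev_info;

            /* Fixed order: detach from the queue-owned bdi, then let
             * ->release run.  The buggy order released first, so the switch
             * would have taken a lock sitting in freed memory. */
            bdev_inode_switch_bdi(&inode_bdi, &default_bdi);
            md_release(&q);

            printf("inode now points at the default bdi\n");
            return 0;
        }
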
  3. 31 Aug 2011 (1 commit)
    • md/raid5: fix a hang on device failure. · 43220aa0
      Authored by NeilBrown
      Waiting for a 'blocked' rdev to become unblocked in the raid5d thread
      cannot work with internal metadata, as it is the raid5d thread itself
      which will clear the blocked flag.
      This wasn't a problem in 3.0 and earlier, as the blocked flag was only
      set when external metadata was used.
      However we now set it always, so we need to be more careful; a generic
      sketch of the deadlock shape follows this entry.
      Signed-off-by: NeilBrown <neilb@suse.de>
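
      A generic sketch of the deadlock shape, not the raid5 code (a bounded
      loop stands in for the indefinite wait so the sketch terminates): when
      the wait sits in the same thread whose later work would clear the
      flag, the condition being waited for can never become true.

        #include <stdbool.h>
        #include <stdio.h>

        static bool rdev_blocked = true;

        static void daemon_loop(void)
        {
            int spins = 0;

            /* Step 1 of the loop: wait for Blocked to clear.  Bounded here
             * so the sketch terminates; the real wait would be indefinite. */
            while (rdev_blocked && spins < 3) {
                printf("daemon waiting for Blocked to clear (spin %d)\n", spins);
                spins++;
            }

            if (rdev_blocked) {
                printf("deadlock: the clearing code below never gets to run\n");
                return;
            }

            /* Step 2 of the same loop: the only code that clears Blocked. */
            rdev_blocked = false;
        }

        int main(void)
        {
            daemon_loop();
            return 0;
        }
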
  4. 30 Aug 2011 (1 commit)
  5. 25 Aug 2011 (4 commits)
  6. 24 Aug 2011 (7 commits)
  7. 23 Aug 2011 (8 commits)
  8. 22 Aug 2011 (7 commits)
  9. 21 Aug 2011 (4 commits)
  10. 20 Aug 2011 (3 commits)