1. 03 Apr, 2012 · 1 commit
  2. 19 Mar, 2012 · 5 commits
    • md/raid10 - support resizing some RAID10 arrays. · 006a09a0
      NeilBrown committed
      'resizing' an array in this context means making use of extra
      space that has become available in component devices, not adding new
      devices.
      It also includes shrinking the array to take up less space on the
      component devices.
      
      This is not supported for arrays with a 'far' layout.  However,
      for 'near' and 'offset' layout arrays, adding and removing space at
      the end of the devices is easy to support, and this patch provides
      that support.
      Signed-off-by: NeilBrown <neilb@suse.de>
      006a09a0
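
      A minimal sketch of the size computation this implies, assuming
      hypothetical names (r10_geom, array_sectors_for, new_dev_sectors);
      it illustrates the idea, not the actual patch:

          #include <stdint.h>

          typedef uint64_t sector_t;

          struct r10_geom {
                  int raid_disks;     /* number of member devices */
                  int near_copies;    /* data copies in 'near'/'offset' layouts */
                  int far_copies;     /* > 1 means a 'far' layout */
          };

          /* For 'near' and 'offset' layouts the usable array size scales
           * directly with the space available per component device, so a
           * resize only needs to recompute it; 'far' layouts are rejected. */
          static sector_t array_sectors_for(const struct r10_geom *geo,
                                            sector_t new_dev_sectors)
          {
                  if (geo->far_copies > 1)
                          return 0;   /* resizing 'far' layouts is unsupported */

                  return new_dev_sectors * geo->raid_disks / geo->near_copies;
          }
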
    • md/raid10: handle merge_bvec_fn in member devices. · 050b6615
      NeilBrown committed
      Currently we don't honour merge_bvec_fn in member devices, so if there
      is one, we force all requests to be single-page at most.
      This is not ideal.
      
      So enhance the raid10 merge_bvec_fn to check that function in children
      as well.
      
      This introduces a small problem.  There is no locking around calls
      to ->merge_bvec_fn and subsequent calls to ->make_request.  So a
      device added between these could end up getting a request which
      violates its merge_bvec_fn.
      
      Currently the best we can do is synchronize_sched().  This will work
      providing no preemption happens.  If there is preemption, we just
      have to hope that new devices are largely consistent with old devices.
      Signed-off-by: NeilBrown <neilb@suse.de>
      050b6615
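
      A kernel-style sketch of the enhanced merge function; it loops over
      every member for brevity, whereas the real code remaps the request to
      the specific member devices first, so treat this as an approximation
      rather than the patch itself:

          static int raid10_mergeable_bvec(struct request_queue *q,
                                           struct bvec_merge_data *bvm,
                                           struct bio_vec *biovec)
          {
                  struct mddev *mddev = q->queuedata;
                  struct md_rdev *rdev;
                  /* start from raid10's own chunk-boundary limit,
                   * computed exactly as before this change */
                  int max = biovec->bv_len;

                  /* new behaviour: honour each member's merge_bvec_fn
                   * instead of forcing single-page requests */
                  rdev_for_each(rdev, mddev) {
                          struct request_queue *child = bdev_get_queue(rdev->bdev);

                          if (child->merge_bvec_fn) {
                                  /* the real code adjusts bvm->bi_sector to the
                                   * member device's address space before asking */
                                  int child_max = child->merge_bvec_fn(child, bvm, biovec);

                                  if (child_max < max)
                                          max = child_max;
                          }
                  }
                  return max;
          }
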
    • md: tidy up rdev_for_each usage. · dafb20fa
      NeilBrown committed
      md.h has an 'rdev_for_each()' macro for iterating the rdevs in an
      mddev.  However it uses the 'safe' version of list_for_each_entry,
      and so requires the extra variable, but doesn't include 'safe' in the
      name, which would be useful documentation.
      
      Consequently some places use this safe version without needing it, and
      many use an explicit list_for_each_entry.
      
      So:
       - rename rdev_for_each to rdev_for_each_safe
       - create a new rdev_for_each which uses the plain
         list_for_each_entry,
       - use the 'safe' version only where needed, and convert all other
         list_for_each_entry calls to use rdev_for_each.
      Signed-off-by: NeilBrown <neilb@suse.de>
      dafb20fa
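
      The resulting pair of macros in md.h looks roughly like this, iterating
      the mddev's 'disks' list:

          /* plain iterator: entries must not be removed while iterating */
          #define rdev_for_each(rdev, mddev) \
                  list_for_each_entry(rdev, &((mddev)->disks), same_set)

          /* 'safe' iterator: needs the extra 'tmp' variable, but the current
           * entry may be removed from the list during the walk */
          #define rdev_for_each_safe(rdev, tmp, mddev) \
                  list_for_each_entry_safe(rdev, tmp, &((mddev)->disks), same_set)
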
    • md/raid1,raid10: avoid deadlock during resync/recovery. · d6b42dcb
      NeilBrown committed
      If RAID1 or RAID10 is used under LVM or some other stacking
      block device, it is possible to enter a deadlock during
      resync or recovery.
      This can happen if the upper-level block device creates
      two requests to the RAID1 or RAID10.  The first request gets
      processed, blocks recovery and queues requests for the underlying
      devices in current->bio_list.  A resync request then starts
      which will wait for those requests and block new IO.
      
      But then the second request to the RAID1/10 will be attempted
      and it cannot progress until the resync request completes,
      which cannot progress until the underlying device requests complete,
      which are on a queue behind that second request.
      
      So allow that second request to proceed even though there is
      a resync request about to start.
      
      This is suitable for any -stable kernel.
      
      Cc: stable@vger.kernel.org
      Reported-by: Ray Morris <support@bettercgi.com>
      Tested-by: Ray Morris <support@bettercgi.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
      d6b42dcb
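
      A sketch of how the barrier wait can express that exception,
      approximating the raid1/raid10 wait_barrier() change (the exact
      wait_event_lock_irq() form varies between kernel versions):

          static void wait_barrier(struct r10conf *conf)
          {
                  spin_lock_irq(&conf->resync_lock);
                  if (conf->barrier) {
                          conf->nr_waiting++;
                          /* Let a request through even while a resync barrier
                           * is up if this thread already has bios queued on
                           * current->bio_list: the resync cannot start until
                           * those complete, so sleeping here would deadlock. */
                          wait_event_lock_irq(conf->wait_barrier,
                                              !conf->barrier ||
                                              (conf->nr_pending &&
                                               current->bio_list &&
                                               !bio_list_empty(current->bio_list)),
                                              conf->resync_lock);
                          conf->nr_waiting--;
                  }
                  conf->nr_pending++;
                  spin_unlock_irq(&conf->resync_lock);
          }
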
    • md: allow re-add to failed arrays. · dc10c643
      NeilBrown committed
      When an array has failed (some data is inaccessible), there is no
      point in attempting to add a spare, as it could not possibly be recovered.
      
      However there may be value in re-adding a recently removed device.
      e.g. if there is a write-intent-bitmap and it is clear, then access
      to the data could be restored by this action.
      
      So don't reject a re-add to a failed array for RAID10 and RAID5 (the
      only array types that check for a failed array).
      Signed-off-by: NeilBrown <neilb@suse.de>
      dc10c643
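
      An illustrative helper capturing the policy; the name may_add_disk and
      the use of rdev->saved_raid_disk to distinguish a re-add from a fresh
      spare are assumptions of this sketch, not taken from the patch:

          /* Hypothetical sketch, not the actual diff. */
          static bool may_add_disk(struct md_rdev *rdev, bool array_has_failed)
          {
                  if (!array_has_failed)
                          return true;    /* normal spare addition */

                  /* a fresh spare can never be recovered onto a failed array */
                  if (rdev->saved_raid_disk < 0)
                          return false;

                  /* but re-adding a recently removed member may restore access,
                   * e.g. when a clean write-intent bitmap shows nothing changed */
                  return true;
          }
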
  3. 13 Mar, 2012 · 1 commit
  4. 06 Mar, 2012 · 1 commit
  5. 14 Feb, 2012 · 1 commit
    • md/raid10: fix handling of error on last working device in array. · fae8cc5e
      NeilBrown committed
      If we get a read error on the last working device in a RAID10 which
      contains the target block, then we don't fail the device (which is
      good) but we don't abort retries, which is wrong.
      We end up in an infinite loop retrying the read on the one device.
      
      This patch fixes the problem in two places:
      1/ in raid10_end_read_request we don't even ask for a retry if this
         was the last usable device.  This is efficient but a little racy
         and will sometimes retry when it should not.
      
      2/ in handle_read_error we are careful to exclude any device from
         retry which we tried to mark as faulty (that might have failed if
         it was the last device).  This is race-free but less efficient.
      Signed-off-by: NeilBrown <neilb@suse.de>
      fae8cc5e
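
      A sketch of the first change; enough() and the surrounding function
      names follow raid10.c, but the snippet is an approximation rather than
      the verbatim patch:

          /* in raid10_end_read_request(), on a failed read */
          if (enough(conf, rdev->raid_disk)) {
                  /* another working copy of the block exists, so a retry
                   * (possibly on a different device) can still succeed */
                  reschedule_retry(r10_bio);
          } else {
                  /* last usable device: complete the request with an error
                   * instead of retrying the same device forever */
                  raid_end_bio_io(r10_bio);
          }
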
  6. 23 Dec, 2011 · 11 commits
  7. 01 Nov, 2011 · 1 commit
  8. 31 Oct, 2011 · 1 commit
    • md/raid10: Fix bug when activating a hot-spare. · 7fcc7c8a
      NeilBrown committed
      This is a fairly serious bug in RAID10.
      
      When a RAID10 array is degraded and a hot-spare is activated, the
      spare does not take up the empty slot, but rather replaces the first
      working device.
      This is likely to make the array non-functional.   It would normally
      be possible to recover the data, but that would need care and is not
      guaranteed.
      
      This bug was introduced in commit
         2bb77736
      which first appeared in 3.1.
      
      Cc: stable@kernel.org
      Signed-off-by: NeilBrown <neilb@suse.de>
      7fcc7c8a
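
      Conceptually the spare-activation loop must settle only on an empty
      slot, along these lines (a simplified sketch of raid10_add_disk(), not
      the literal fix; struct and field names are approximate):

          for (mirror = 0; mirror < conf->raid_disks; mirror++) {
                  struct mirror_info *p = &conf->mirrors[mirror];

                  if (p->rdev)
                          /* slot already holds a (possibly working) device;
                           * the bug was replacing such a device */
                          continue;

                  /* empty slot found: recover the hot-spare into it */
                  rdev->raid_disk = mirror;
                  rcu_assign_pointer(p->rdev, rdev);
                  break;
          }
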
  9. 26 Oct, 2011 · 1 commit
    • md: Fix some bugs in recovery_disabled handling. · d890fa2b
      NeilBrown committed
      In 3.0 we changed the way recovery_disabled was handled so that instead
      of testing against zero, we test an mddev-> value against a conf->
      value.
      Two problems:
        1/ one place in raid1 was missed and still sets it to '1'.
        2/ We didn't explicitly set the conf-> value at array creation
           time.
           It defaulted to '0' just like the mddev value does, so they
           could appear equal and thus disable recovery.
           This did not affect normal 'md' as it calls bind_rdev_to_array
           which changes the mddev value.  However the dmraid interface
           doesn't call this and so doesn't change ->recovery_disabled; so at
           array start all recovery is incorrectly disabled.
      
      So initialise the 'conf' value to one less than the mddev value, so
      the two will only be the same when explicitly set that way.
      Reported-by: Jonathan Brassow <jbrassow@redhat.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
      d890fa2b
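
      The second fix amounts to a single assignment in the raid1/raid10
      setup_conf() paths, roughly:

          /* Start the per-conf counter one below the mddev value, so the two
           * only become equal after an explicit "disable recovery" event,
           * even when bind_rdev_to_array() never runs (the dm-raid case). */
          conf->recovery_disabled = mddev->recovery_disabled - 1;
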
  10. 11 Oct, 2011 · 8 commits
  11. 21 Sep, 2011 · 1 commit
    • md: Avoid waking up a thread after it has been freed. · 01f96c0a
      NeilBrown committed
      Two related problems:
      
      1/ some error paths call "md_unregister_thread(mddev->thread)"
         without subsequently clearing ->thread.  A subsequent call
         to mddev_unlock will try to wake the thread, and crash.
      
      2/ Most calls to md_wakeup_thread are protected against the thread
         disappearing, either by:
            - holding the ->mutex
            - having an active request, so something else must be keeping
              the array active.
         However mddev_unlock calls md_wakeup_thread after dropping the
         mutex and without any certainty of an active request, so the
         ->thread could theoretically disappear.
         So we need a spinlock to provide some protection.
      
      So change md_unregister_thread to take a pointer to the thread
      pointer, and ensure that it always does the required locking, and
      clears the pointer properly.
      Reported-by: "Moshe Melnikov" <moshe@zadarastorage.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
      Cc: stable@kernel.org
      01f96c0a
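
      The reworked helper looks approximately like this (a sketch based on
      the description above; using md's pers_lock spinlock here is an
      assumption of the sketch):

          void md_unregister_thread(struct md_thread **threadp)
          {
                  struct md_thread *thread = *threadp;

                  if (!thread)
                          return;

                  /* Clear the caller's pointer under the lock so that a
                   * concurrent md_wakeup_thread() from mddev_unlock() either
                   * sees a valid thread or sees NULL, never a freed one. */
                  spin_lock(&pers_lock);
                  *threadp = NULL;
                  spin_unlock(&pers_lock);

                  kthread_stop(thread->tsk);
                  kfree(thread);
          }
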
  12. 12 Sep, 2011 · 1 commit
  13. 10 Sep, 2011 · 2 commits
    • md/raid1,10: Remove use-after-free bug in make_request. · 079fa166
      NeilBrown committed
      A single request to RAID1 or RAID10 might result in multiple
      requests if there are known bad blocks that need to be avoided.
      
      To detect if we need to submit another write request we test:
       	if (sectors_handled < (bio->bi_size >> 9)) {
      
      However this is after we call **_write_done() so the 'bio' no longer
      belongs to us - the writes could have completed and the bio freed.
      
      So move the **_write_done call until after the test against
      bio->bi_size.
      
      This addresses https://bugzilla.kernel.org/show_bug.cgi?id=41862
      Reported-by: Bruno Wolff III <bruno@wolff.to>
      Tested-by: Bruno Wolff III <bruno@wolff.to>
      Signed-off-by: NeilBrown <neilb@suse.de>
      079fa166
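
      In outline the fixed ordering is as follows, with raid_write_done()
      standing in for the raid1/raid10 *_write_done() helpers (a sketch, not
      the exact diff):

          if (sectors_handled < (bio->bi_size >> 9)) {
                  /* bi_size was read while the bio was still ours; only now
                   * is it safe to drop our bias on ->remaining, which may
                   * free the bio once the member writes have completed */
                  raid_write_done(r10_bio);
                  /* ...allocate and set up another r10_bio here... */
                  goto retry_write;
          }
          raid_write_done(r10_bio);
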
    • md/raid10: unify handling of write completion. · 19d5f834
      NeilBrown committed
      A write can complete at two different places:
      1/ when the last member-device write completes, through
         raid10_end_write_request
      2/ in make_request() when we remove the initial bias from ->remaining.
      
      These two should do exactly the same thing and the comment says they
      do, but they don't.
      
      So factor the correct code out into a function and call it in both
      places.  This makes the code much more similar to RAID1.
      
      The difference is only significant if there is an error, and errors
      usually take a while to surface, so it is unlikely that one will have
      occurred already when make_request is completing; this is therefore
      unlikely to cause real problems.
      Signed-off-by: NeilBrown <neilb@suse.de>
      19d5f834
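
      The factored-out helper is roughly of this shape (a sketch following
      the description; the error branch is simplified compared with the real
      raid10 code):

          static void one_write_done(struct r10bio *r10_bio)
          {
                  /* called both from raid10_end_write_request() and from
                   * make_request() when it drops its initial bias, so the
                   * completion logic lives in exactly one place */
                  if (atomic_dec_and_test(&r10_bio->remaining)) {
                          if (test_bit(R10BIO_WriteError, &r10_bio->state))
                                  reschedule_retry(r10_bio); /* defer error handling */
                          else
                                  raid_end_bio_io(r10_bio);  /* plain success path */
                  }
          }
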
  14. 28 Jul, 2011 · 5 commits
    • md/raid10: handle further errors during fix_read_error better. · 58c54fcc
      NeilBrown committed
      If we find more read/write errors we should record a bad block before
      failing the device.
      Signed-off-by: NeilBrown <neilb@suse.de>
      58c54fcc
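
      The pattern is roughly the following, assuming rdev_set_badblocks()
      returns non-zero on success as in this sketch:

          /* a further failure in fix_read_error(): first try to record a
           * bad block; only if even that fails is the whole device failed */
          if (!rdev_set_badblocks(rdev, sect, s, 0))
                  md_error(mddev, rdev);
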
    • md/raid10: Handle read errors during recovery better. · 5e570289
      NeilBrown committed
      Currently when we get a read error during recovery, we simply abort
      the recovery.
      
      Instead, repeat the read in page-sized blocks.
      On successful reads, write to the target.
      On read errors, record a bad block on the destination,
      and only if that fails do we abort the recovery.
      
      As we now retry reads we need to know where we read from.  This was in
      bi_sector but that can be changed during a read attempt.
      So store the correct from_addr and to_addr in the r10_bio for later
      access.
      
      
      Signed-off-by: NeilBrown <neilb@suse.de>
      5e570289
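
      Conceptually the retry loop looks like this (a simplified sketch;
      variable names such as from_rdev, to_rdev and abort_recovery are
      illustrative):

          /* re-read the failed recovery range in page-sized pieces */
          for (sect = 0; sect < nr_sectors; sect += s) {
                  s = min_t(int, nr_sectors - sect, PAGE_SIZE >> 9);

                  if (sync_page_io(from_rdev, from_addr + sect, s << 9,
                                   page, READ, false)) {
                          /* this piece re-read fine: copy it to the target */
                          sync_page_io(to_rdev, to_addr + sect, s << 9,
                                       page, WRITE, false);
                  } else if (!rdev_set_badblocks(to_rdev, to_addr + sect, s, 0)) {
                          /* could not even record a bad block on the
                           * destination: only now is the recovery aborted */
                          abort_recovery = 1;
                          break;
                  }
          }
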
    • md/raid10: simplify read error handling during recovery. · e684e41d
      NeilBrown committed
      If a read error is detected during recovery the code currently
      fails the read device.
      This isn't really necessary: recovery_request_write will signal
      a write error to end_sync_write, which will record a write
      error on the destination device, and that will either record a bad
      block there or kick the device from the array.
      
      So just remove this call to md_error.
      Signed-off-by: NeilBrown <neilb@suse.de>
      e684e41d
    • md/raid10: record bad blocks due to write errors during resync/recovery. · 1a0b7cd8
      NeilBrown committed
      If we get a write error during resync/recovery don't fail the device
      but instead record a bad block.  If that fails we can then fail the
      device.
      Signed-off-by: NeilBrown <neilb@suse.de>
      1a0b7cd8
    • md/raid10: attempt to fix read errors during resync/check · f84ee364
      NeilBrown committed
      We already attempt to fix read errors found during normal IO
      and a 'repair' process.
      It is best to try to repair them whenever they are found,
      so move a test so that during sync and check a read error will
      be corrected by over-writing with good data.
      
      If both (all) devices have known bad blocks in the sync section we
      won't try to fix them, even though the bad blocks might not overlap.
      should be considered later.
      
      Also if we hit a read error during recovery we don't try to fix it.
      It would only be possible to fix if there were at least three copies
      of data, which is not very common with RAID10.  But it should still
      be considered later.
      Signed-off-by: NeilBrown <neilb@suse.de>
      f84ee364