1. 23 December 2011, 8 commits
  2. 01 November 2011, 1 commit
  3. 31 October 2011, 1 commit
    • md/raid10: Fix bug when activating a hot-spare. · 7fcc7c8a
      Committed by NeilBrown
      This is a fairly serious bug in RAID10.
      
      When a RAID10 array is degraded and a hot-spare is activated, the
      spare does not take up the empty slot, but rather replaces the first
      working device.
      This is likely to make the array non-functional.   It would normally
      be possible to recover the data, but that would need care and is not
      guaranteed.
      
      This bug was introduced in commit
         2bb77736
      which first appeared in 3.1.
      
      Cc: stable@kernel.org
      Signed-off-by: NeilBrown <neilb@suse.de>
  4. 26 October 2011, 1 commit
    • md: Fix some bugs in recovery_disabled handling. · d890fa2b
      Committed by NeilBrown
      In 3.0 we changed the way recovery_disabled was handled so that instead
      of testing against zero, we test an mddev-> value against a conf->
      value.
      Two problems:
        1/ one place in raid1 was missed and still sets the value to '1'.
        2/ We didn't explicitly set the conf-> value at array creation
           time.
           It defaulted to '0' just like the mddev value does, so they
           could appear equal and thus disable recovery.
           This did not affect normal 'md' as it calls bind_rdev_to_array
           which changes the mddev value.  However the dmraid interface
           doesn't call this and so doesn't change ->recovery_disabled; so at
           array start all recovery is incorrectly disabled.

      So initialise the 'conf' value to one less than the mddev value, so
      the two will only be equal when explicitly set that way.
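      The fix is essentially a one-line initialisation at array creation
      time; a minimal sketch, assuming the conf/mddev field names used by
      the md driver of that era:

        /* make the conf value differ from the mddev value until
         * recovery is explicitly disabled for the array */
        conf->recovery_disabled = mddev->recovery_disabled - 1;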
      Reported-by: Jonathan Brassow <jbrassow@redhat.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
  5. 11 October 2011, 8 commits
  6. 21 September 2011, 1 commit
    • md: Avoid waking up a thread after it has been freed. · 01f96c0a
      Committed by NeilBrown
      Two related problems:
      
      1/ some error paths call "md_unregister_thread(mddev->thread)"
         without subsequently clearing ->thread.  A subsequent call
         to mddev_unlock will try to wake the thread, and crash.
      
      2/ Most calls to md_wakeup_thread are protected against the thread
         disappearing, either by:
            - holding the ->mutex
            - having an active request, so something else must be keeping
              the array active.
         However mddev_unlock calls md_wakeup_thread after dropping the
         mutex and without any certainty of an active request, so the
         ->thread could theoretically disappear.
         So we need a spinlock to provide some protection.
      
      So change md_unregister_thread to take a pointer to the thread
      pointer, and ensure that it always does the required locking, and
      clears the pointer properly.
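      The resulting function looks roughly like this (a sketch of the
      approach described above, using md's pers_lock spinlock; not
      necessarily the exact patch):

        void md_unregister_thread(struct md_thread **threadp)
        {
                struct md_thread *thread = *threadp;

                if (!thread)
                        return;
                /* Locking ensures that mddev_unlock does not wake up
                 * a thread that no longer exists */
                spin_lock(&pers_lock);
                *threadp = NULL;
                spin_unlock(&pers_lock);

                kthread_stop(thread->tsk);
                kfree(thread);
        }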
      Reported-by: "Moshe Melnikov" <moshe@zadarastorage.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
      cc: stable@kernel.org
  7. 12 September 2011, 1 commit
  8. 10 September 2011, 2 commits
    • md/raid1,10: Remove use-after-free bug in make_request. · 079fa166
      Committed by NeilBrown
      A single request to RAID1 or RAID10 might result in multiple
      requests if there are known bad blocks that need to be avoided.
      
      To detect if we need to submit another write request we test:
       	if (sectors_handled < (bio->bi_size >> 9)) {
      
      However this is after we call **_write_done() so the 'bio' no longer
      belongs to us - the writes could have completed and the bio freed.
      
      So move the **_write_done call until after the test against
      bio->bi_size.
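      In outline the fix is just a reordering; a sketch using one name to
      stand in for the **_write_done pair:

        /* before (buggy): completing the r1_bio may free 'bio' */
        one_write_done(r1_bio);
        if (sectors_handled < (bio->bi_size >> 9))
                goto retry_write;               /* use-after-free */

        /* after (fixed): test bi_size first, then complete */
        if (sectors_handled < (bio->bi_size >> 9)) {
                one_write_done(r1_bio);
                /* set up a new r1_bio for the rest of the bio ... */
                goto retry_write;
        }
        one_write_done(r1_bio);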
      
      This addresses https://bugzilla.kernel.org/show_bug.cgi?id=41862
      Reported-by: Bruno Wolff III <bruno@wolff.to>
      Tested-by: Bruno Wolff III <bruno@wolff.to>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid10: unify handling of write completion. · 19d5f834
      Committed by NeilBrown
      A write can complete at two different places:
      1/ when the last member-device write completes, through
         raid10_end_write_request
      2/ in make_request() when we remove the initial bias from ->remaining.
      
      These two should do exactly the same thing and the comment says they
      do, but they don't.
      
      So factor the correct code out into a function and call it in both
      places.  This makes the code much more similar to RAID1.
      
      The difference only matters when there is an error, and since
      errors usually take a while to appear, it is unlikely that one
      will already have occurred by the time make_request is completing;
      so this is unlikely to cause real problems.
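      A sketch of the factored-out helper (simplified; the real raid10
      function also deals with bitmap accounting and deferred bad-block
      updates):

        static void one_write_done(struct r10bio *r10_bio)
        {
                if (atomic_dec_and_test(&r10_bio->remaining)) {
                        if (test_bit(R10BIO_WriteError, &r10_bio->state))
                                /* let raid10d handle the failure */
                                reschedule_retry(r10_bio);
                        else
                                /* complete the original bio */
                                raid_end_bio_io(r10_bio);
                }
        }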
      Signed-off-by: NeilBrown <neilb@suse.de>
  9. 28 July 2011, 17 commits
    • md/raid10: handle further errors during fix_read_error better. · 58c54fcc
      Committed by NeilBrown
      If we find more read/write errors we should record a bad block before
      failing the device.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid10: Handle read errors during recovery better. · 5e570289
      Committed by NeilBrown
      Currently when we get a read error during recovery, we simply abort
      the recovery.
      
      Instead, repeat the read in page-sized blocks.
      On successful reads, write to the target.
      On read errors, record a bad block on the destination,
      and only if that fails do we abort the recovery.
      
      As we now retry reads we need to know where we read from.  This was in
      bi_sector but that can be changed during a read attempt.
      So store the correct from_addr and to_addr in the r10_bio for later
      access.
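      The retry loop looks roughly like this (a sketch assuming the
      surrounding recovery context; sync_page_io is md's synchronous
      member-device IO helper):

        /* re-attempt the failed recovery read in page-sized chunks */
        while (sectors) {
                int s = min(sectors, (int)(PAGE_SIZE >> 9));

                if (sync_page_io(from_rdev, from_addr, s << 9,
                                 page, READ, false))
                        /* good data: write it to the rebuild target */
                        sync_page_io(to_rdev, to_addr, s << 9,
                                     page, WRITE, false);
                else if (!rdev_set_badblocks(to_rdev, to_addr, s, 0)) {
                        /* cannot even record a bad block on the
                         * destination: abort the recovery */
                        set_bit(MD_RECOVERY_INTR, &mddev->recovery);
                        break;
                }
                from_addr += s;
                to_addr += s;
                sectors -= s;
        }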
      
      
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid10: simplify read error handling during recovery. · e684e41d
      Committed by NeilBrown
      If a read error is detected during recovery the code currently
      fails the device the read came from.
      This isn't really necessary.  recovery_request_write will signal
      a write error to end_sync_write, which will record a write error
      on the destination device, and that will either record a bad
      block there or kick the device from the array.

      So just remove the call to md_error.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid10: record bad blocks due to write errors during resync/recovery. · 1a0b7cd8
      Committed by NeilBrown
      If we get a write error during resync/recovery don't fail the device
      but instead record a bad block.  If that fails we can then fail the
      device.
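      In outline (a sketch; the real code flags the failure in the
      completion handler and performs the update later in process
      context):

        /* write error during resync/recovery: record, don't fail */
        if (!rdev_set_badblocks(rdev, r10_bio->devs[slot].addr,
                                r10_bio->sectors, 0))
                /* the bad-block log is full or disabled, so now we
                 * really do have to fail the device */
                md_error(mddev, rdev);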
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid10: attempt to fix read errors during resync/check · f84ee364
      Committed by NeilBrown
      We already attempt to fix read errors found during normal IO
      and a 'repair' process.
      It is best to try to repair them whenever they are found,
      so move a test so that during sync and check a read error will
      be corrected by over-writing with good data.
      
      If both (all) devices have known bad blocks in the sync section we
      won't try to fix even though the bad blocks might not overlap.  That
      should be considered later.
      
      Also if we hit a read error during recovery we don't try to fix it.
      It would only be possible to fix if there were at least three copies
      of data, which is not very common with RAID10.  But it should still
      be considered later.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid10: Handle write errors by updating badblock log. · bd870a16
      Committed by NeilBrown
      When we get a write error (in the data area, not in metadata),
      update the badblock log rather than failing the whole device.

      As the write may well span many blocks, we try writing each
      block individually and only log the ones which fail.
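      Schematically (a sketch of the per-block narrowing;
      retry_one_block is a hypothetical helper standing in for the
      resubmission of a single block):

        /* a multi-block write failed: retry block by block and log
         * only the blocks that actually fail */
        while (sectors) {
                int s = min(sectors, block_sectors);

                if (!retry_one_block(rdev, sector, s, page))
                        if (!rdev_set_badblocks(rdev, sector, s, 0))
                                md_error(rdev->mddev, rdev);
                sector += s;
                sectors -= s;
        }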
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid10: clear bad-block record when write succeeds. · 749c55e9
      Committed by NeilBrown
      If we succeed in writing to a block that was recorded as
      being bad, we clear the bad-block record.
      
      This requires some delayed handling as the bad-block-list update has
      to happen in process-context.
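      In outline (a sketch; R10BIO_MadeGood is the flag this commit
      introduces so that raid10d can perform the update later in
      process context):

        /* interrupt context: a write over a recorded bad range
         * succeeded, so flag it and let raid10d update the list */
        if (is_badblock(rdev, r10_bio->devs[slot].addr, r10_bio->sectors,
                        &first_bad, &bad_sectors))
                set_bit(R10BIO_MadeGood, &r10_bio->state);

        /* later, in raid10d (process context): */
        if (test_bit(R10BIO_MadeGood, &r10_bio->state))
                rdev_clear_badblocks(rdev, r10_bio->devs[slot].addr,
                                     r10_bio->sectors);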
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid10: avoid writing to known bad blocks on known bad drives. · d4432c23
      Committed by NeilBrown
      Writing to known bad blocks on drives that have seen a write error
      is asking for trouble.  So try to avoid these blocks.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid10 record bad blocks as needed during recovery. · e875ecea
      Committed by NeilBrown
      When recovering one or more devices, if all the good devices have
      bad blocks we should record a bad block on the device being rebuilt.
      
      If this fails, we need to abort the recovery.
      
      To ensure we don't think that we aborted later than we actually did,
      we need to move the check for MD_RECOVERY_INTR earlier in md_do_sync,
      in particular before mddev->curr_resync is updated.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid10: avoid reading known bad blocks during resync/recovery. · 40c356ce
      Committed by NeilBrown
      During resync/recovery limit the size of the request to avoid
      reading into a bad block that does not start at-or-before the current
      read address.
      
      Similarly if there is a bad block at this address, don't allow the
      current request to extend beyond the end of that bad block.
      
      Now that we don't ever read from known bad blocks, it is safe to allow
      devices with those blocks into the array.
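      The clamping is built around md's is_badblock helper; roughly
      (sketch):

        /* limit a resync/recovery read so it never overlaps a known
         * bad block on this device */
        sector_t first_bad;
        int bad_sectors;
        int good = 1;

        if (is_badblock(rdev, sector, max_sectors,
                        &first_bad, &bad_sectors)) {
                if (first_bad <= sector)
                        /* the read address itself is bad, so we
                         * cannot read from this device here */
                        good = 0;
                else
                        /* stop just before the bad block */
                        max_sectors = first_bad - sector;
        }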
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid10 - avoid reading from known bad blocks - part 3 · 8dbed5ce
      Committed by NeilBrown
      When attempting to repair a read error, don't read from
      devices with a known bad block.
      
      As we are only reading PAGE_SIZE blocks, we don't try to
      narrow down to smaller regions in the hope that only part of this
      page is bad - it isn't worth the effort.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid10: avoid reading from known bad blocks - part 2 · 7399c31b
      Committed by NeilBrown
      When redirecting a read error to a different device, we must
      again avoid bad blocks and possibly split the request.
      
      Spin_lock typo fixed thanks to Dan Carpenter <error27@gmail.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid10: avoid reading from known bad blocks - part 1 · 856e08e2
      Committed by NeilBrown
      This patch just covers the basic read path:
       1/ read_balance needs to check for badblocks, and return not only
          the chosen slot, but also how many good blocks are available
          there.
       2/ read submission must be ready to issue multiple reads to
          different devices as different bad blocks on different devices
          could mean that a single large read cannot be served by any one
          device, but can still be served by the array.
          This requires keeping count of the number of outstanding requests
          per bio.  This count is stored in 'bi_phys_segments'
      
      On read error we currently just fail the request if another target
      cannot handle the whole request.  Next patch refines that a bit.
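      Inside make_request the read submission then takes roughly this
      shape (a sketch; per point 1, read_balance is assumed to return
      the chosen slot and fill in max_sectors with the size of the good
      range there):

        int sectors_handled = 0;

        read_again:
                disk = read_balance(conf, r10_bio, &max_sectors);
                if (disk < 0) {
                        /* no single device can serve this range */
                        raid_end_bio_io(r10_bio);
                        return;
                }
                /* ... submit a read of max_sectors to 'disk' ... */
                sectors_handled += max_sectors;
                if (sectors_handled < (bio->bi_size >> 9)) {
                        /* part of the bio remains: count one more
                         * outstanding sub-request, then go again */
                        bio->bi_phys_segments++; /* under device_lock */
                        goto read_again;
                }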
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid10: Split handle_read_error out from raid10d. · 560f8e55
      Committed by NeilBrown
      raid10d() is too big and is about to get bigger, so split
      handle_read_error() out as a separate function.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid10: simplify/reindent some loops. · 1294b9c9
      Committed by NeilBrown
      When a loop ends with a large if, it can be neater to change the
      if to invert the condition and just 'continue'.
      Then the body of the if can be indented to a lower level.
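      For example (generic illustration; item_is_active is a made-up
      predicate):

        /* before: the real work is buried inside the if */
        for (i = 0; i < n; i++) {
                if (item_is_active(&items[i])) {
                        /* ... long body ... */
                }
        }

        /* after: invert the condition and continue */
        for (i = 0; i < n; i++) {
                if (!item_is_active(&items[i]))
                        continue;
                /* ... same long body, one level shallower ... */
        }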
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: make it easier to wait for bad blocks to be acknowledged. · de393cde
      Committed by NeilBrown
      It is only safe to choose not to write to a bad block if that bad
      block is safely recorded in metadata - i.e. if it has been
      'acknowledged'.
      
      If it hasn't we need to wait for the acknowledgement.
      
      We support that using rdev->blocked wait and
      md_wait_for_blocked_rdev by introducing a new device flag
      'BlockedBadBlocks'.

      This flag is only advisory.
      It is cleared whenever we acknowledge a bad block, so that a waiter
      can re-check the particular bad blocks that it is interested in.
      
      It should be set by a caller when they find they need to wait.
      This (set after test) is inherently racy, but as
      md_wait_for_blocked_rdev already has a timeout, losing the race will
      have minimal impact.
      
      When we clear "Blocked" we also clear "BlockedBadBlocks", in case
      it was set incorrectly (see the race above).
      
      We also modify the way we manage 'Blocked' to fit better with the new
      handling of 'BlockedBadBlocks' and to make it consistent between
      externally managed and internally managed metadata.   This requires
      that each raidXd loop checks if the metadata needs to be written and
      triggers a write (md_check_recovery) if needed.  Otherwise a queued
      write request might cause raidXd to wait for the metadata to write,
      and only that thread can write it.
      
      Before writing metadata, we set FaultRecorded for all devices that
      are Faulty, then after writing the metadata we clear Blocked for any
      device for which the Fault was certainly Recorded.
      
      The 'faulty' device flag now appears in sysfs if the device is faulty
      *or* it has unacknowledged bad blocks.  So user-space which does not
      understand bad blocks can continue to function correctly.
      User space which does, should not assume a device is faulty until it
      sees the 'faulty' flag, and then sees the list of unacknowledged bad
      blocks is empty.
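      The waiter side follows the set-after-test pattern described
      above; a sketch:

        /* a writer found an unacknowledged bad block in its range:
         * flag the device and wait for the acknowledging metadata
         * write (md_wait_for_blocked_rdev has a timeout, so losing
         * the set-after-test race is harmless) */
        set_bit(BlockedBadBlocks, &rdev->flags);
        md_wait_for_blocked_rdev(rdev, mddev);
        goto retry_write;       /* re-check the bad-block list */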
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: don't allow arrays to contain devices with bad blocks. · 34b343cf
      Committed by NeilBrown
      As no personality understands bad block lists yet, we must
      reject any device that is known to contain bad blocks.
      As the personalities get taught, these tests can be removed.
      
      This only applies to raid1/raid5/raid10.
      For linear/raid0/multipath/faulty the whole concept of bad blocks
      doesn't mean anything so there is no point adding the checks.
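      The check itself is a few lines in each of those personalities'
      setup paths; a sketch:

        /* no bad-block support in this personality yet, so refuse
         * any device that already has recorded bad blocks */
        if (rdev->badblocks.count) {
                printk(KERN_ERR "md: cannot handle bad blocks yet\n");
                return -EINVAL;
        }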
      Signed-off-by: NeilBrown <neilb@suse.de>
      Reviewed-by: Namhyung Kim <namhyung@gmail.com>