1. 28 Jul 2011, 3 commits
    • md/raid1: avoid reading known bad blocks during resync · 06f60385
      NeilBrown authored
      When performing resync/etc, keep the size of the request
      small enough that it doesn't overlap any known bad blocks.
      Devices with badblocks at the start of the request are completely
      excluded.
      If there is nowhere to read from due to bad blocks, record
      a bad block on each target device.
      
      Now that we never read from known-bad-blocks we can allow devices with
      known-bad-blocks into a RAID1.
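
      The clamping works roughly as below.  This is a standalone sketch,
      not the kernel code: the bad-block table and the is_badblock()
      stand-in only mimic the contract of md's real lookup.

          #include <stdio.h>

          typedef unsigned long long sector_t;

          /* Illustrative bad-block table for one device: {start, length}. */
          static const sector_t badblocks[][2] = { {1000, 8}, {5000, 16} };

          /* Stand-in for md's is_badblock(): does [s, s+sectors) overlap a
           * bad range?  If so, report the first bad sector in the request. */
          static int is_badblock(sector_t s, int sectors, sector_t *first_bad)
          {
              for (unsigned i = 0; i < sizeof(badblocks)/sizeof(badblocks[0]); i++) {
                  sector_t start = badblocks[i][0], end = start + badblocks[i][1];
                  if (s < end && s + sectors > start) {
                      *first_bad = start > s ? start : s;
                      return 1;
                  }
              }
              return 0;
          }

          int main(void)
          {
              sector_t sector = 996;     /* where this resync read starts */
              int good_sectors = 64;     /* requested read length */
              sector_t first_bad;

              if (is_badblock(sector, good_sectors, &first_bad)) {
                  if (first_bad == sector)
                      good_sectors = 0;  /* bad block at the start: skip device */
                  else                   /* otherwise shrink the read */
                      good_sectors = (int)(first_bad - sector);
              }
              printf("read %d sectors at %llu\n", good_sectors, sector);
              return 0;
          }
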
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid1: avoid reading from known bad blocks. · d2eb35ac
      NeilBrown authored
      Now that we have a bad block list, we should not read from those
      blocks.
      There are several main parts to this:
        1/ read_balance needs to check for bad blocks, and return not only
           the chosen device, but also how many good blocks are available
           there.
        2/ fix_read_error needs to avoid trying to read from bad blocks.
        3/ read submission must be ready to issue multiple reads to
           different devices as different bad blocks on different devices
           could mean that a single large read cannot be served by any one
           device, but can still be served by the array.
           This requires keeping count of the number of outstanding requests
           per bio.  This count is stored in 'bi_phys_segments'.
        4/ retrying a read needs to also be ready to submit a smaller read
           and queue another request for the rest.
      
      This does not yet handle bad blocks when reading to perform resync,
      recovery, or check.
      
      'md_trim_bio' will also be used for RAID10, so put it in md.c and
      export it.
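
      The contract of md_trim_bio() is easy to show on a toy model.  The
      struct below is a stand-in for struct bio and only the sector
      arithmetic is mirrored; the real helper also has to fix up the
      bio_vec array.

          #include <stdio.h>

          typedef unsigned long long sector_t;

          struct toy_bio {            /* stand-in for struct bio */
              sector_t bi_sector;     /* first sector of the request */
              int sectors;            /* request length in sectors */
          };

          /* Mirror of md_trim_bio()'s contract: make 'bio' cover only
           * 'size' sectors starting 'offset' sectors into the request. */
          static void toy_trim_bio(struct toy_bio *bio, int offset, int size)
          {
              bio->bi_sector += offset;
              bio->sectors = size;
          }

          int main(void)
          {
              /* A 64-sector read at sector 100: the chosen device is only
               * good for 24 sectors, so submit that much now and retry the
               * rest (possibly on another device) as a second trimmed bio. */
              struct toy_bio now = { 100, 64 }, rest = { 100, 64 };

              toy_trim_bio(&now, 0, 24);
              toy_trim_bio(&rest, 24, 40);
              printf("submit %d sectors at %llu\n", now.sectors, now.bi_sector);
              printf("retry  %d sectors at %llu\n", rest.sectors, rest.bi_sector);
              return 0;
          }
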
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: don't allow arrays to contain devices with bad blocks. · 34b343cf
      NeilBrown authored
      As no personality understands bad block lists yet, we must
      reject any device that is known to contain bad blocks.
      As the personalities get taught, these tests can be removed.
      
      This only applies to raid1/raid5/raid10.
      For linear/raid0/multipath/faulty the whole concept of bad blocks
      doesn't mean anything so there is no point adding the checks.
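
      The guard itself is tiny; a sketch of its shape (the struct and
      field names are illustrative stand-ins, not md's exact ones):

          #include <errno.h>
          #include <stdio.h>

          struct toy_rdev {            /* stand-in for a member device */
              int badblocks_count;     /* entries in its bad block list */
          };

          /* A personality that doesn't understand bad blocks yet must
           * refuse any device that has some recorded. */
          static int validate_rdev(const struct toy_rdev *rdev)
          {
              if (rdev->badblocks_count)
                  return -EINVAL;
              return 0;
          }

          int main(void)
          {
              struct toy_rdev clean = { 0 }, dirty = { 3 };

              printf("clean: %d dirty: %d\n",
                     validate_rdev(&clean), validate_rdev(&dirty));
              return 0;
          }
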
      Signed-off-by: NeilBrown <neilb@suse.de>
      Reviewed-by: Namhyung Kim <namhyung@gmail.com>
  2. 27 Jul 2011, 5 commits
  3. 08 Jun 2011, 1 commit
  4. 11 May 2011, 6 commits
    • md: allow resync_start to be set while an array is active. · b098636c
      NeilBrown authored
      The sysfs attribute 'resync_start' (known internally as recovery_cp),
      records where a resync is up to.  A value of 0 means the array is
      not known to be in-sync at all.  A value of MaxSector means the array
      is believed to be fully in-sync.
      
      When the size of member devices of an array (RAID1, RAID4/5/6) is
      increased, the array can be increased to match.  This process sets
      resync_start to the old end-of-device offset so that the new part of
      the array gets resynced.
      
      However with RAID1 (and RAID6) a resync is not technically necessary
      and may be undesirable.  So it would be good if the implied resync
      after the array is resized could be avoided.
      
      So: change 'resync_start' so the value can be changed while the array
      is active, and as a precaution only allow it to be changed while
      resync/recovery is 'frozen'.  Changing it once resync has started is
      not going to be useful anyway.
      
      This allows the array to be resized without a resync by:
        write 'frozen' to 'sync_action'
        write new size to 'component_size' (this will set resync_start)
        write 'none' to 'resync_start'
        write 'idle' to 'sync_action'.
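
      A userspace sketch of that sequence (assumes an array at
      /sys/block/md0 and root privileges; the component_size value is an
      arbitrary illustrative number, in the units the attribute expects):

          #include <fcntl.h>
          #include <stdio.h>
          #include <string.h>
          #include <unistd.h>

          /* Write one md sysfs attribute of the assumed array md0. */
          static int md_attr_write(const char *attr, const char *val)
          {
              char path[128];
              int fd, ret;

              snprintf(path, sizeof(path), "/sys/block/md0/md/%s", attr);
              fd = open(path, O_WRONLY);
              if (fd < 0)
                  return -1;
              ret = write(fd, val, strlen(val)) < 0 ? -1 : 0;
              close(fd);
              return ret;
          }

          int main(void)
          {
              md_attr_write("sync_action", "frozen");     /* freeze resync */
              md_attr_write("component_size", "1048576"); /* illustrative */
              md_attr_write("resync_start", "none");      /* array in-sync */
              md_attr_write("sync_action", "idle");       /* resume */
              return 0;
          }
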
      
      Also slightly improve some tests on recovery_cp when resizing
      raid1/raid5.  Now that an arbitrary value can be set, we should be
      more careful in our tests.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid1: improve handling of pages allocated for write-behind. · af6d7b76
      NeilBrown authored
      The current handling and freeing of these pages is a bit fragile.
      We only keep the list of allocated pages in each bio, so we still
      need a valid bio when freeing the pages, which is a bit clumsy.
      
      So simply store the allocated page list in the r1_bio so it can easily
      be found and freed when we are finished with the r1_bio.
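
      A minimal userspace sketch of the ownership change: the page list
      lives in the r1_bio itself, so freeing no longer needs a still-valid
      bio.  Names loosely follow the commit; sizes are illustrative.

          #include <stdlib.h>

          struct toy_r1bio {
              int behind_page_count;
              void **behind_pages;    /* allocated once, freed with this */
          };

          static struct toy_r1bio *alloc_behind_pages(int npages)
          {
              struct toy_r1bio *r1 = calloc(1, sizeof(*r1));

              r1->behind_pages = calloc(npages, sizeof(void *));
              for (int i = 0; i < npages; i++)
                  r1->behind_pages[i] = malloc(4096); /* ~alloc_page() */
              r1->behind_page_count = npages;
              return r1;
          }

          static void free_r1bio(struct toy_r1bio *r1)
          {
              for (int i = 0; i < r1->behind_page_count; i++)
                  free(r1->behind_pages[i]);
              free(r1->behind_pages);
              free(r1);
          }

          int main(void)
          {
              struct toy_r1bio *r1 = alloc_behind_pages(16);
              /* ... write-behind I/O would use r1->behind_pages here ... */
              free_r1bio(r1);   /* no bio needed to free the pages */
              return 0;
          }
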
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid1: try fix_sync_read_error before process_checks. · 7ca78d57
      NeilBrown authored
      If we get a read error during resync/recovery we currently repeat with
      single-page reads to find out just where the error is, and possibly
      read each page from a different device.
      
      With check/repair we don't currently do that; we just fail.
      However it is possible that while all devices fail on the large 64K
      read, we might be able to satisfy each 4K from one device or another.
      
      So call fix_sync_read_error before process_checks to maximise the
      chance of finding good data and writing it out to the devices with
      read errors.
      
      For this to work, we need to set the 'uptodate' flags properly after
      fix_sync_read_error has succeeded.
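
      As a toy control-flow sketch of the new ordering (the helper names
      follow the commit, the bodies are stubs):

          #include <stdbool.h>
          #include <stdio.h>

          static bool fix_sync_read_error(void)
          {
              puts("re-read failed pages one by one, trying each device");
              return true;               /* true: every page recovered */
          }

          static void process_checks(void)
          {
              puts("compare copies, rewrite devices that mismatch");
          }

          int main(void)
          {
              bool uptodate = false;     /* the big read failed everywhere */

              /* Repair first, so process_checks sees the best possible
               * data; remember to mark the result up-to-date on success. */
              if (!uptodate && !fix_sync_read_error())
                  return 1;              /* nothing recoverable: give up */
              uptodate = true;

              process_checks();
              return 0;
          }
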
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid1: tidy up new functions: process_checks and fix_sync_read_error. · 78d7f5f7
      NeilBrown authored
      These changes are mostly cosmetic:
      
        1/ change mddev->raid_disks to conf->raid_disks because the latter is
         technically safer, though in current practice it doesn't matter in
         this particular context.
      2/ Rearrange two for / if loops to have an early 'continue' so the
         body of the 'if' doesn't need to be indented so much.
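
      Point 2/ is the classic early-continue transformation; a generic
      before/after rather than the raid1 code itself:

          #include <stdio.h>

          struct item { int active, value; };

          static void process(const struct item *it)
          {
              printf("item %d\n", it->value);
          }

          int main(void)
          {
              struct item items[] = { {1, 0}, {0, 1}, {1, 2} };
              int i;

              /* Before: the interesting work sits one level deep. */
              for (i = 0; i < 3; i++) {
                  if (items[i].active) {
                      process(&items[i]);
                  }
              }

              /* After: an early 'continue' skips the dull case and the
               * body no longer needs the extra indentation. */
              for (i = 0; i < 3; i++) {
                  if (!items[i].active)
                      continue;
                  process(&items[i]);
              }
              return 0;
          }
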
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid1: split out two sub-functions from sync_request_write · a68e5870
      NeilBrown authored
      sync_request_write is too big and too deep.
      So split out two self-contained bits of functionality into separate
      functions.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid1: clean up read_balance. · 76073054
      NeilBrown authored
      read_balance has two loops which both look for a 'best'
      device based on slightly different criteria.
      This is clumsy and makes it hard to add extra criteria.
      
      So replace it all with a single loop that combines everything.
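
      A toy version of the combined loop; the mirror fields and the plain
      seek-distance heuristic are simplified stand-ins for the criteria
      read_balance really weighs:

          #include <stdio.h>

          typedef unsigned long long sector_t;

          struct toy_mirror {
              int in_sync;              /* usable for reads? */
              sector_t head_position;   /* where the head last was */
          };

          /* One pass over all mirrors, applying every criterion as we go,
           * instead of two loops each with its own notion of "best". */
          static int toy_read_balance(const struct toy_mirror *m, int n,
                                      sector_t target)
          {
              sector_t best_dist = ~0ULL;
              int best = -1;

              for (int d = 0; d < n; d++) {
                  sector_t dist;

                  if (!m[d].in_sync)
                      continue;         /* criterion: must be in sync */
                  dist = m[d].head_position > target ?
                         m[d].head_position - target :
                         target - m[d].head_position;
                  if (dist < best_dist) {
                      best_dist = dist; /* criterion: shortest seek */
                      best = d;
                  }
              }
              return best;
          }

          int main(void)
          {
              struct toy_mirror m[] = { {1, 900}, {0, 1000}, {1, 4000} };

              printf("chose device %d\n", toy_read_balance(m, 3, 1000));
              return 0;
          }
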
      Signed-off-by: NeilBrown <neilb@suse.de>
  5. 18 Apr 2011, 2 commits
  6. 17 Mar 2011, 1 commit
  7. 10 Mar 2011, 1 commit
  8. 21 Feb 2011, 1 commit
    • md: avoid spinlock problem in blk_throtl_exit · da9cf505
      NeilBrown authored
      blk_throtl_exit assumes that ->queue_lock still exists,
      so make sure that it does.
      To do this, we stop redirecting ->queue_lock to conf->device_lock
      and leave it pointing where it is initialised - __queue_lock.
      
      As the blk_plug functions check the ->queue_lock is held, we now
      take that spin_lock explicitly around the plug functions.  We don't
      need the locking, just the warning removal.
      
      This is needed for any kernel with the blk_throtl code, which is
      2.6.37 and later.
      
      Cc: stable@kernel.org
      Signed-off-by: NeilBrown <neilb@suse.de>
  9. 14 Jan 2011, 2 commits
  10. 24 Nov 2010, 1 commit
    • md/raid1: really fix recovery looping when single good device fails. · 8f9e0ee3
      NeilBrown authored
      Commit 4044ba58 supposedly fixed a
      problem where if a raid1 with just one good device gets a read-error
      during recovery, the recovery would abort and immediately restart in
      an infinite loop.
      
      However it depended on raid1_remove_disk removing the spare device
      from the array.  But that does not happen in this case.  So add a test
      so that in the 'recovery_disabled' case, the device will be removed.
      
      This is suitable for any kernel since 2.6.29, which is when
      recovery_disabled was introduced.
      
      Cc: stable@kernel.org
      Reported-by: Sebastian Färber <faerber@gmail.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
  11. 29 Oct 2010, 3 commits
  12. 28 Oct 2010, 6 commits
  13. 07 Oct 2010, 2 commits
    • md/raid1: minor bio initialisation improvements. · db8d9d35
      NeilBrown authored
      
      When performing a resync we pre-allocate some bios and repeatedly use
      them.  This requires us to re-initialise them each time.
      One field (bi_comp_cpu) and some flags weren't being initialised
      reliably.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid1: avoid overflow in raid1 resync when bitmap is in use. · 7571ae88
      NeilBrown authored
      bitmap_start_sync returns - via a pass-by-reference variable - the
      number of sectors before we need to check with the bitmap again.
      Since commit ef425673 this number can be substantially larger;
      2^27 is a common value.
      
      Unfortunately it is an 'int' and so when raid1.c:sync_request shifts
      it 9 places to the left it becomes 0.  This results in a zero-length
      read which the scsi layer justifiably complains about.
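
      The arithmetic is easy to reproduce in isolation:

          #include <stdio.h>

          int main(void)
          {
              unsigned int sectors = 1u << 27;  /* a common value */

              /* 32-bit shift wraps: 2^27 << 9 = 2^36 = 0 (mod 2^32),
               * hence the zero-length read. */
              unsigned int wrapped = sectors << 9;

              /* The same shift done in a 64-bit type is fine, which is
               * what the move to sector_t will give us. */
              unsigned long long widened = (unsigned long long)sectors << 9;

              printf("32-bit: %u\n", wrapped);    /* prints 0 */
              printf("64-bit: %llu\n", widened);  /* prints 68719476736 */
              return 0;
          }
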
      
      This patch just removes the shift so the common case becomes safe
      with a trivially-correct change.
      
      In the next merge window we will convert this 'int' to a 'sector_t'.
      Reported-by: "George Spelvin" <linux@horizon.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
  14. 10 Sep 2010, 1 commit
    • md: implement REQ_FLUSH/FUA support · e9c7469b
      Tejun Heo authored
      This patch converts md to support REQ_FLUSH/FUA instead of the now
      deprecated REQ_HARDBARRIER.  In the core part (md.c), the following
      changes are notable.
      
      * Unlike REQ_HARDBARRIER, REQ_FLUSH/FUA don't interfere with
        processing of other requests and thus there is no reason to mark the
        queue congested while FLUSH/FUA is in progress.
      
      * REQ_FLUSH/FUA failures are final and their users don't need retry
        logic.  Retry logic is removed.
      
      * Preflush needs to be issued to all member devices but FUA writes can
        be handled the same way as other writes - their processing can be
        deferred to the request_queue of member devices.  md_barrier_request()
        is renamed to md_flush_request() and simplified accordingly.
      
      For linear, raid0 and multipath, the core changes are enough.  raid1,
      5 and 10 need the following conversions.
      
      * raid1: Handling of FLUSH/FUA bios can simply be deferred to the
        request_queues of member devices.  Barrier-related logic is removed.
      
      * raid5: Queue draining logic dropped.  FUA bit is propagated through
        biodrain and stripe reconstruction such that all the updated parts
        of the stripe are written out with FUA writes if any of the dirtying
        writes was FUA.  preread_active_stripes handling in make_request()
        is updated as suggested by Neil Brown.
      
      * raid10: FUA bit needs to be propagated to write clones (sketch below).
      
      linear, raid0, 1, 5 and 10 tested.
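
      The mirrored-personality side of this is small enough to sketch: a
      preflush is broadcast to every member, while the FUA bit simply
      rides along on each write clone.  The REQ_* bit values here are
      illustrative, not the kernel's:

          #include <stdio.h>

          #define REQ_FLUSH (1u << 0)   /* illustrative bit values */
          #define REQ_FUA   (1u << 1)
          #define REQ_WRITE (1u << 2)

          static void submit_to_member(int dev, unsigned int rw)
          {
              printf("dev %d:%s%s%s\n", dev,
                     rw & REQ_WRITE ? " WRITE" : "",
                     rw & REQ_FLUSH ? " FLUSH" : "",
                     rw & REQ_FUA   ? " FUA"   : "");
          }

          /* Preflush goes to all members; the write itself is deferred to
           * each member's request_queue with the FUA bit preserved. */
          static void make_request(unsigned int bio_rw, int nmembers)
          {
              int d;

              if (bio_rw & REQ_FLUSH)
                  for (d = 0; d < nmembers; d++)
                      submit_to_member(d, REQ_FLUSH);
              if (bio_rw & REQ_WRITE)
                  for (d = 0; d < nmembers; d++)
                      submit_to_member(d, bio_rw & (REQ_WRITE | REQ_FUA));
          }

          int main(void)
          {
              make_request(REQ_WRITE | REQ_FLUSH | REQ_FUA, 2);
              return 0;
          }
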
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
  15. 18 Aug 2010, 3 commits
  16. 08 Aug 2010, 1 commit
    • block: unify flags for struct bio and struct request · 7b6d91da
      Christoph Hellwig authored
      Remove the current bio flags and reuse the request flags for the bio, too.
      This makes it easier to trace the type of I/O from the filesystem
      down to the block driver.  There were two flags in the bio that were
      missing in the requests: BIO_RW_UNPLUG and BIO_RW_AHEAD.  Also I've
      renamed two request flags that had a superfluous RW in them.
      
      Note that the flags are in bio.h despite having the REQ_ name - as
      blkdev.h includes bio.h that is the only way to go for now.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
  17. 18 May 2010, 1 commit