1. 11 May 2011, 13 commits
    • md/raid10: make more use of 'slot' in raid10d. · a8830bca
      Committed by NeilBrown
      Now that we have a 'slot' variable, make better use of it to simplify
      some code a little.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid10: some tidying up in fix_read_error · 7c4e06ff
      Committed by NeilBrown
      Currently the rdev on which a read error happened could be removed
      before we perform the fix_read_error handling.  This requires extra
      tests for NULL.
      
      So delay the rdev_dec_pending call until after the call to
      fix_read_error so that we can be sure that the rdev still exists.
      
      This allows an 'if' clause to be removed so the body gets re-indented
      back one level.
      Signed-off-by: NeilBrown <neilb@suse.de>
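      The reordering described above can be sketched as a userspace toy model (not
      the actual raid10 code; the struct layout and the stand-in teardown are
      invented for the example, only the identifier names mirror the kernel's):

```c
#include <assert.h>

/* Toy model of the ordering change: hold the pending reference
 * across fix_read_error() and only drop it afterwards, so the
 * rdev cannot disappear while the error is being repaired. */
struct rdev { int nr_pending; int freed; };

static void rdev_dec_pending(struct rdev *r)
{
    if (--r->nr_pending == 0)
        r->freed = 1;            /* stands in for the real teardown */
}

static int fix_read_error(struct rdev *r)
{
    /* with the new ordering this reference is guaranteed to be held */
    return r->freed ? -1 : 0;
}

int handle_read_error(struct rdev *r)
{
    int ret = fix_read_error(r); /* the old code decremented first */
    rdev_dec_pending(r);         /* drop the reference only now */
    return ret;
}
```

      With the old order (decrement before fix) the rdev could already be torn
      down when fix_read_error ran, which is exactly what forced the NULL tests.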
    • md/raid1: improve handling of pages allocated for write-behind. · af6d7b76
      Committed by NeilBrown
      The current handling and freeing of these pages is a bit fragile.
      We only keep the list of allocated pages in each bio, so we need to
      still have a valid bio when freeing the pages, which is a bit clumsy.
      
      So simply store the allocated page list in the r1_bio so it can easily
      be found and freed when we are finished with the r1_bio.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid1: try fix_sync_read_error before process_checks. · 7ca78d57
      Committed by NeilBrown
      If we get a read error during resync/recovery we currently repeat with
      single-page reads to find out just where the error is, and possibly
      read each page from a different device.
      
      With check/repair we don't currently do that, we just fail.
      However it is possible that while all devices fail on the large 64K
      read, we might be able to satisfy each 4K from one device or another.
      
      So call fix_sync_read_error before process_checks to maximise the
      chance of finding good data and writing it out to the devices with
      read errors.
      
      For this to work, we need to set the 'uptodate' flags properly after
      fix_sync_read_error has succeeded.
      Signed-off-by: NeilBrown <neilb@suse.de>
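      The per-page retry idea can be sketched as follows (a simplified model,
      not the kernel function: the array sizes, the ok[][] availability matrix
      and the return convention are all invented for illustration):

```c
/* Toy model: even when every device fails the whole 64K read,
 * each 4K page may still be readable from *some* device.
 * ok[d][p] simulates whether device d can serve page p. */
#define NDEV  3
#define NPAGE 16

int fix_sync_read_error(int ok[NDEV][NPAGE], int uptodate[NPAGE])
{
    for (int p = 0; p < NPAGE; p++) {
        uptodate[p] = 0;
        for (int d = 0; d < NDEV; d++) {
            if (ok[d][p]) {          /* retry this page elsewhere */
                uptodate[p] = 1;
                break;
            }
        }
        if (!uptodate[p])
            return -1;               /* no device could serve page p */
    }
    return 0;                        /* every page recovered */
}
```

      Setting the uptodate flags as each page is recovered is what lets the
      subsequent process_checks step trust the repaired data.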
    • md/raid1: tidy up new functions: process_checks and fix_sync_read_error. · 78d7f5f7
      Committed by NeilBrown
      These changes are mostly cosmetic:
      
      1/ change mddev->raid_disks to conf->raid_disks because the latter is
         technically safer, though in current practice it doesn't matter in
         this particular context.
      2/ Rearrange two for / if loops to have an early 'continue' so the
         body of the 'if' doesn't need to be indented so much.
      Signed-off-by: NeilBrown <neilb@suse.de>
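      The "early continue" rearrangement from change 2/ is shown here on a
      neutral example (the even-sum functions are invented; only the loop
      shape mirrors the change): both loops do the same work, the second
      with one level less indentation.

```c
/* Before: the interesting work is wrapped in an 'if' body. */
int sum_even_nested(const int *v, int n)
{
    int s = 0;
    for (int i = 0; i < n; i++) {
        if (v[i] % 2 == 0) {
            s += v[i];
        }
    }
    return s;
}

/* After: invert the test and 'continue' early, so the body
 * sits one level further left with identical behaviour. */
int sum_even_flat(const int *v, int n)
{
    int s = 0;
    for (int i = 0; i < n; i++) {
        if (v[i] % 2 != 0)
            continue;
        s += v[i];
    }
    return s;
}
```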
    • md/raid1: split out two sub-functions from sync_request_write · a68e5870
      Committed by NeilBrown
      sync_request_write is too big and too deep.
      So split out two self-contained bits of functionality into separate
      functions.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: make error_handler functions more uniform and correct. · 6f8d0c77
      Committed by NeilBrown
      - there is no need to test_bit Faulty, as that was already done in
        md_error which is the only caller of these functions.
      - MD_CHANGE_DEVS should be set *after* faulty is set to ensure
        metadata is updated correctly.
      - spinlock should be held while updating ->degraded.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/multipath: discard ->working_disks in favour of ->degraded · 92f861a7
      Committed by NeilBrown
      conf->working_disks duplicates information already available
      in mddev->degraded.
      So remove working_disks.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid1: clean up read_balance. · 76073054
      Committed by NeilBrown
      read_balance has two loops which both look for a 'best'
      device based on slightly different criteria.
      This is clumsy and makes it hard to add extra criteria.
      
      So replace it all with a single loop that combines everything.
      Signed-off-by: NeilBrown <neilb@suse.de>
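      Folding two 'best device' searches into one loop can be sketched like
      this (the criteria shown, an idle disk winning outright and otherwise
      the smallest head distance, are a simplification of what read_balance
      actually weighs, and the struct is invented for the example):

```c
#include <limits.h>

struct disk { int in_sync; int pending; int dist; };

/* One pass combines both searches: skip unusable devices,
 * take an idle disk immediately, otherwise keep the device
 * with the smallest head distance seen so far. */
int read_balance(const struct disk *d, int n)
{
    int best = -1, best_dist = INT_MAX;

    for (int i = 0; i < n; i++) {
        if (!d[i].in_sync)
            continue;                 /* unusable device */
        if (d[i].pending == 0)
            return i;                 /* idle disk: take it at once */
        if (d[i].dist < best_dist) {  /* else prefer the closest head */
            best_dist = d[i].dist;
            best = i;
        }
    }
    return best;                      /* -1 if nothing was usable */
}
```

      A single loop like this makes it straightforward to slot in an extra
      criterion: it becomes one more comparison rather than a third loop.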
    • md: simplify raid10 read_balance · 56d99121
      Committed by NeilBrown
      raid10 read balance has two different loops for looking through
      possible devices to choose the best.
      Collapse those into one loop and generally make the code more
      readable.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/bitmap: fix saving of events_cleared and other state. · 8258c532
      Committed by NeilBrown
      If a bitmap is found to be 'stale' the events_cleared value
      is set to match 'events'.
      However if the array is degraded this does not get stored on disk.
      This can subsequently lead to incorrect behaviour.
      
      So change bitmap_update_sb to always update events_cleared in the
      superblock from the known events_cleared.
      For neatness also set ->state from ->flags.
      This requires updating ->state whenever we update ->flags, which makes
      sense anyway.
      
      This is suitable for any active -stable release.
      
      cc: stable@kernel.org
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: reject a re-add request that cannot be honoured. · bedd86b7
      Committed by NeilBrown
      The 'add_new_disk' ioctl can be used to add a device either as a
      spare, or as an active disk that just needs to be resynced based on
      write-intent-bitmap information (re-add).
      
      Currently if a re-add is requested but fails we add as a spare
      instead.  This makes it impossible for user-space to check for
      failure.
      
      So change to require that a re-add attempt will either succeed or
      completely fail.  User-space can then decide what to do next.
      Signed-off-by: NeilBrown <neilb@suse.de>
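      The behaviour change reads roughly like this sketch (the helper names,
      the bitmap_ok flag and the -EINVAL choice are all illustrative, not
      the kernel's actual code paths):

```c
#include <errno.h>

/* Simulated outcomes; in the kernel the re-add can fail for real. */
static int try_re_add(int bitmap_ok) { return bitmap_ok ? 0 : -1; }
static int add_as_spare(void)        { return 0; }

/* New behaviour: a re-add request either succeeds or fails
 * outright, so user-space can see the failure and decide what
 * to do next, instead of silently getting a spare. */
int add_new_disk(int re_add, int bitmap_ok)
{
    if (re_add) {
        if (try_re_add(bitmap_ok) != 0)
            return -EINVAL;   /* old code fell through to spare */
        return 0;
    }
    return add_as_spare();
}
```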
    • md: Fix race when creating a new md device. · b0140891
      Committed by NeilBrown
      There is a race when creating an md device by opening /dev/mdXX.
      
      If two processes do this at much the same time they will follow the
      call path
        __blkdev_get -> get_gendisk -> kobj_lookup
      
      The first will call
        -> md_probe -> md_alloc -> add_disk -> blk_register_region
      
      and the race happens when the second gets to kobj_lookup after
      add_disk has called blk_register_region but before it returns to
      md_alloc.
      
      In this case the second will not call md_probe (as the probe is
      already done) but will get a handle on the gendisk and return to
      __blkdev_get, which will then call md_open (via the ->open pointer).
      
      As mddev->gendisk hasn't been set yet, md_open will think something is
      wrong and return with -ERESTARTSYS.
      
      This can loop endlessly while the first thread makes no progress
      through add_disk.  Nothing is blocking it, but due to scheduler
      behaviour it doesn't get a turn.
      So this is essentially a live-lock.
      
      We fix this by simply moving the assignment to mddev->gendisk before
      the call to add_disk() so md_open doesn't get confused.
      Also move blk_queue_flush earlier because add_disk should be as late
      as possible.
      
      To make sure that md_open doesn't complete until md_alloc has done all
      that is needed, we take mddev->open_mutex during the last part of
      md_alloc.  md_open will wait for this.
      
      This can cause a lock-up on boot so Cc:ing for stable.
      For 2.6.36 and earlier a different patch will be needed as the
      'blk_queue_flush' call isn't there.
      Signed-off-by: NeilBrown <neilb@suse.de>
      Reported-by: Thomas Jarosch <thomas.jarosch@intra2net.com>
      Tested-by: Thomas Jarosch <thomas.jarosch@intra2net.com>
      Cc: stable@kernel.org
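      The core of the fix, publishing mddev->gendisk before the device
      becomes findable, can be modelled with a toy in which add_disk()
      invokes a callback to stand in for the racing md_open (everything
      here is simulated; the real race involves the block layer's
      kobj_lookup and blk_register_region):

```c
#include <stddef.h>

struct mddev { void *gendisk; };

static int opened_with_gendisk;

/* The racing opener: with the fixed ordering it must always
 * find gendisk already set, so it never bails with -ERESTARTSYS. */
static void md_open(struct mddev *m)
{
    opened_with_gendisk = (m->gendisk != NULL);
}

/* Once this runs, the device is visible and an open can race in,
 * modelled by calling the opener directly. */
static void add_disk(struct mddev *m)
{
    md_open(m);
}

void md_alloc(struct mddev *m)
{
    static int disk;
    m->gendisk = &disk;   /* the fix: assign before add_disk() */
    add_disk(m);
}
```

      With the old ordering the callback would have seen gendisk == NULL,
      which is the condition that sent the real md_open into its retry loop.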
  2. 10 May 2011, 25 commits
  3. 09 May 2011, 2 commits