1. 18 Apr, 2011 (3 commits)
    • md: incorporate new plugging into raid5. · 7c13edc8
      Committed by NeilBrown
      In raid5 plugging is used for 2 things:
       1/ collecting writes that require a bitmap update
       2/ collecting writes in the hope that we can create full
          stripes - or at least more-full.
      
      We now release these different sets of stripes when plug_cnt
      is zero.
      
      Also in make_request, we call mddev_check_plug to hopefully increase
      plug_cnt, and wake up the thread at the end if plugging wasn't
      achieved for some reason.
      Signed-off-by: NeilBrown <neilb@suse.de>
      7c13edc8
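      A self-contained C model of the release pattern described above (all
      names are invented for illustration; they are not raid5's
      identifiers): the two held-back sets are flushed only once the plug
      count reaches zero.

        /* Self-contained model only; not the raid5 code itself. */
        struct plug_model {
                int plug_cnt;               /* > 0 while plugging is in effect */
                int bitmap_writes_held;     /* set 1: writes needing a bitmap update */
                int partial_stripes_held;   /* set 2: writes hoping to form full stripes */
        };

        /* Release both held-back sets only when the plug count is zero. */
        static void model_unplug(struct plug_model *conf)
        {
                if (conf->plug_cnt > 0)
                        return;                     /* still plugged: keep batching */
                conf->bitmap_writes_held = 0;       /* flush set 1 here */
                conf->partial_stripes_held = 0;     /* release set 2 here */
        }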
    • md - remove old plugging code. · 482c0834
      Committed by NeilBrown
      md has some plugging infrastructure for RAID5 to use because the
      normal plugging infrastructure required a 'request_queue', and when
      called from dm, RAID5 doesn't have one of those available.
      
      This relied on the ->unplug_fn callback which doesn't exist any more.
      
      So remove all of that code, both in md and raid5.  Subsequent patches
      will restore the plugging functionality.
      Signed-off-by: NeilBrown <neilb@suse.de>
      482c0834
    • md: use new plugging interface for RAID IO. · e1dfa0a2
      Committed by NeilBrown
      md/raid submits a lot of IO from the various raid threads.
      So add start/finish plug calls to those threads so that some
      plugging happens.
      Signed-off-by: NeilBrown <neilb@suse.de>
      e1dfa0a2
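      The shape of the change, as a hedged sketch: only blk_start_plug()/
      blk_finish_plug() are the real block-layer calls this relies on; the
      surrounding raid-thread body is summarised as a callback.

        #include <linux/blkdev.h>

        /* Wrap a raid thread's submission work in an on-stack plug so the
         * bios it issues are batched before reaching the member devices. */
        static void raid_thread_body(void (*submit_pending_io)(void *data), void *data)
        {
                struct blk_plug plug;

                blk_start_plug(&plug);
                submit_pending_io(data);    /* e.g. the stripe/resync handling loop */
                blk_finish_plug(&plug);
        }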
  2. 10 Mar, 2011 (1 commit)
  3. 21 Feb, 2011 (1 commit)
    • md: avoid spinlock problem in blk_throtl_exit · da9cf505
      Committed by NeilBrown
      blk_throtl_exit assumes that ->queue_lock still exists,
      so make sure that it does.
      To do this, we stop redirecting ->queue_lock to conf->device_lock
      and leave it pointing where it is initialised - __queue_lock.
      
      As the blk_plug functions check the ->queue_lock is held, we now
      take that spin_lock explicitly around the plug functions.  We don't
      need the locking, just the warning removal.
      
      This is needed for any kernel with the blk_throtl code, which is
      2.6.37 and later.
      
      Cc: stable@kernel.org
      Signed-off-by: NeilBrown <neilb@suse.de>
      da9cf505
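      A rough sketch of the workaround against the pre-2.6.39 block API.
      The particular plug helper shown is illustrative; the point is only
      that ->queue_lock is held around it to silence the check, not
      because md needs the locking.

        unsigned long flags;
        struct request_queue *q = mddev->queue;

        /* Take ->queue_lock purely to satisfy the held-lock check inside
         * the old plug helpers; md itself does not need it here. */
        spin_lock_irqsave(q->queue_lock, flags);
        blk_plug_device(q);
        spin_unlock_irqrestore(q->queue_lock, flags);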
  4. 31 Jan, 2011 (3 commits)
    • md: don't abort checking spares as soon as one cannot be added. · 50da0840
      Committed by NeilBrown
      As spares can be added manually before a reshape starts, we need to
      find them all to mark some of them as in_sync.
      
      Previously we would abort looking for spares when we found an
      unallocated spare that could not be added to the array (implying there
      was no room for new spares).  However, already-added spares could be
      later in the list, so we need to keep searching.
      Signed-off-by: NeilBrown <neilb@suse.de>
      50da0840
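      A self-contained C model of the loop change (struct and field names
      are invented): skip an unusable unallocated spare and keep scanning
      instead of breaking out, so already-added spares later in the list
      are still found.

        #include <stdbool.h>

        /* Invented model: devices with raid_disk < 0 are unallocated spares;
         * already-added spares have a slot but are not yet in_sync. */
        struct rdev_model { int raid_disk; bool addable; bool in_sync; };

        static int mark_already_added_spares(struct rdev_model *devs, int n)
        {
                int marked = 0;

                for (int i = 0; i < n; i++) {
                        if (devs[i].raid_disk < 0 && !devs[i].addable)
                                continue;       /* was a 'break': stopped too early */
                        if (devs[i].raid_disk >= 0 && !devs[i].in_sync) {
                                devs[i].in_sync = true;   /* pre-added spare: mark it */
                                marked++;
                        }
                }
                return marked;
        }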
    • md: fix the test for finding spares in raid5_start_reshape. · 469518a3
      Committed by NeilBrown
      As spares can be added to the array before the reshape is started,
      we need to find and count them when checking there are enough.
      The array could have been degraded, so we need to check all devices,
      not just those outside the range of devices in the array before
      the reshape.

      So instead of checking the index, check the In_sync flag, as that
      reliably tells whether the device is a spare for this purpose.
      Signed-off-by: NeilBrown <neilb@suse.de>
      469518a3
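      A sketch of the resulting test (the Faulty check and the exact loop
      context are assumptions of this sketch): count spares by state
      rather than by slot index.

        /* Count usable spares: any device that is neither In_sync nor
         * Faulty can serve the reshape, whatever slot index it occupies. */
        list_for_each_entry(rdev, &mddev->disks, same_set)
                if (!test_bit(In_sync, &rdev->flags) &&
                    !test_bit(Faulty, &rdev->flags))
                        spares++;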
    • md: simplify some 'if' conditionals in raid5_start_reshape. · 87a8dec9
      Committed by NeilBrown
      There are two consecutive 'if' statements.
      
       if (mddev->delta_disks >= 0)
            ....
       if (mddev->delta_disks > 0)
      
      The code in the second is equally valid if delta_disks == 0, and these
      two statements are the only place that 'added_devices' is used.
      
      So make them a single if statement, make added_devices a local
      variable, and re-indent it all.
      
      No functional change.
      Signed-off-by: NeilBrown <neilb@suse.de>
      87a8dec9
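      Roughly what the merged form looks like; the spare-adding loop and
      the degraded-count update are summarised as comments rather than
      reproduced.

        if (mddev->delta_disks >= 0) {
                int added_devices = 0;          /* now a local variable */

                /* ... try to add each usable spare, bumping added_devices ... */

                /* formerly guarded by 'if (mddev->delta_disks > 0)'; the code
                 * here is equally valid when delta_disks == 0 */
                /* ... account for added_devices in mddev->degraded ... */
        }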
  5. 14 Jan, 2011 (4 commits)
  6. 28 Oct, 2010 (2 commits)
    • md: use separate bio pool for each md device. · a167f663
      Committed by NeilBrown
      bio_clone and bio_alloc allocate from a common bio pool.
      If an md device is stacked with other devices that use this pool, or under
      something like swap which uses the pool, then the multiple calls on
      the pool can cause deadlocks.
      
      So allocate a local bio pool for each md array and use that rather
      than the common pool.
      
      This pool is used both for regular IO and metadata updates.
      Signed-off-by: NeilBrown <neilb@suse.de>
      a167f663
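      A hedged sketch of the mechanism using the bioset API of that era;
      the call sites are illustrative and the field name bio_set is an
      assumption of this sketch.

        /* At array creation: give each md device its own bio pool. */
        mddev->bio_set = bioset_create(BIO_POOL_SIZE, 0);

        /* When submitting regular I/O or metadata updates, allocate from
         * the per-array pool instead of the shared fs_bio_set. */
        struct bio *bio = bio_alloc_bioset(GFP_NOIO, nr_iovecs, mddev->bio_set);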
    • md: use sector_t in bitmap_get_counter · 57dab0bd
      Committed by NeilBrown
      bitmap_get_counter returns the number of sectors covered
      by the counter in a pass-by-reference variable.
      In some cases this can be very large, so make it a sector_t
      for safety.
      Signed-off-by: NeilBrown <neilb@suse.de>
      57dab0bd
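      The shape of the interface change as a hedged prototype sketch
      (argument order is approximated): the pass-by-reference "sectors
      covered" value becomes a sector_t so a large span cannot overflow
      an int.

        /* before: 'int *blocks' could overflow for a very large counter */
        static bitmap_counter_t *bitmap_get_counter(struct bitmap *bitmap,
                                                    sector_t offset,
                                                    sector_t *blocks,   /* was int * */
                                                    int create);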
  7. 10 Sep, 2010 (1 commit)
    • md: implement REQ_FLUSH/FUA support · e9c7469b
      Committed by Tejun Heo
      This patch converts md to support REQ_FLUSH/FUA instead of the now
      deprecated REQ_HARDBARRIER.  In the core part (md.c), the following
      changes are notable.
      
      * Unlike REQ_HARDBARRIER, REQ_FLUSH/FUA don't interfere with
        processing of other requests and thus there is no reason to mark the
        queue congested while FLUSH/FUA is in progress.
      
      * REQ_FLUSH/FUA failures are final and their users don't need retry
        logic.  Retry logic is removed.
      
      * Preflush needs to be issued to all member devices but FUA writes can
        be handled the same way as other writes - their processing can be
        deferred to request_queue of member devices.  md_barrier_request()
        is renamed to md_flush_request() and simplified accordingly.
      
      For linear, raid0 and multipath, the core changes are enough.  raid1,
      5 and 10 need the following conversions.
      
      * raid1: Handling of FLUSH/FUA bios can simply be deferred to
        request_queues of member devices.  Barrier-related logic removed.
      
      * raid5: Queue draining logic dropped.  FUA bit is propagated through
        biodrain and stripe reconstruction such that all the updated parts
        of the stripe are written out with FUA writes if any of the dirtying
        writes was FUA.  preread_active_stripes handling in make_request()
        is updated as suggested by Neil Brown.
      
      * raid10: FUA bit needs to be propagated to write clones.
      
      linear, raid0, 1, 5 and 10 tested.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
      e9c7469b
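      A hedged sketch of the resulting dispatch in a personality's
      make_request path (flag names as in the block layer of that time;
      the non-flush branch is summarised in a comment): an empty
      REQ_FLUSH bio is handed to md_flush_request() for broadcast to the
      members, while REQ_FUA stays set on data bios and is honoured by
      the member queues.

        if (unlikely(bio->bi_rw & REQ_FLUSH)) {
                md_flush_request(mddev, bio);   /* preflush every member device */
                return 0;
        }
        /* Normal write path: cloned bios keep REQ_FUA set, so the member
         * request_queues perform the forced-unit-access write themselves. */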
  8. 18 Aug, 2010 (2 commits)
  9. 08 Aug, 2010 (1 commit)
    • block: unify flags for struct bio and struct request · 7b6d91da
      Committed by Christoph Hellwig
      Remove the current bio flags and reuse the request flags for the bio, too.
      This allows the type of I/O to be traced more easily from the filesystem
      down to the block driver.  There were two flags in the bio that were
      missing in the requests: BIO_RW_UNPLUG and BIO_RW_AHEAD.  Also I've
      renamed two request flags that had a superfluous RW in them.
      
      Note that the flags are in bio.h despite having the REQ_ name - as
      blkdev.h includes bio.h that is the only way to go for now.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
      7b6d91da
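      For example, after the unification a bio's bi_rw can be tested with
      the same REQ_* names that requests use (the helper below is invented
      for illustration):

        /* One set of flag names for both layers. */
        static inline bool bio_is_sync_write(struct bio *bio)
        {
                return (bio->bi_rw & REQ_WRITE) && (bio->bi_rw & REQ_SYNC);
        }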
  10. 26 Jul, 2010 (6 commits)
  11. 21 Jul, 2010 (2 commits)
  12. 24 Jun, 2010 (6 commits)
    • md/raid5: don't include 'spare' drives when reshaping to fewer devices. · 3424bf6a
      Committed by NeilBrown
      There are few situations where it would make any sense to add a spare
      when reducing the number of devices in an array, but it is
      conceivable:  A 6 drive RAID6 with two missing devices could be
      reshaped to a 5 drive RAID6, and a spare could become available
      just in time for the reshape, but not early enough to have been
      recovered first.  'freezing' recovery can make this easy to
      do without any races.
      
      However, doing such a thing is a bad idea.  md will not record the
      partially-recovered state of the 'spare', and when the reshape
      finishes it will think that the spare is still a spare.
      The easiest way to avoid this confusion is to simply disallow it.
      Signed-off-by: NeilBrown <neilb@suse.de>
      3424bf6a
    • md/raid5: add a missing 'continue' in a loop. · 2f115882
      Committed by NeilBrown
      As the comment says, the tail of this loop only applies to devices
      that are not fully in sync, so if In_sync was set, we should avoid
      the rest of the loop.
      
      This bug will hardly ever cause an actual problem.  The worst it
      can do is allow an array to be assembled that is dirty and degraded,
      which is not generally a good idea (without warning the sysadmin
      first).
      
      This will only happen if the array is RAID4 or a RAID5/6 in an
      intermediate state during a reshape and so has one drive that is
      all 'parity' - no data - while some other device has failed.
      
      This is certainly possible, but not at all common.
      Signed-off-by: NeilBrown <neilb@suse.de>
      2f115882
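      A self-contained C model of the fix (struct and counter are
      invented): once a device is known to be fully in sync, skip the tail
      of the loop, which only applies to devices that are not.

        #include <stdbool.h>

        struct dev_model { bool in_sync; };

        /* Count devices whose tail-of-loop handling (only relevant when
         * not fully in sync) would actually run. */
        static int count_needing_recovery(struct dev_model *devs, int n)
        {
                int count = 0;

                for (int i = 0; i < n; i++) {
                        if (devs[i].in_sync)
                                continue;       /* the missing 'continue' */
                        /* ...tail of the loop: only for not-in-sync devices... */
                        count++;
                }
                return count;
        }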
    • md/raid5: Allow recovered part of partially recovered devices to be in-sync · 415e72d0
      Committed by NeilBrown
      During a recovery or reshape the early part of some devices might be
      in-sync while the later parts are not.
      When we know we are looking at an early part, it is good to treat that
      part as in-sync for stripe calculations.
      
      This is particularly important for a reshape which suffers device
      failure.  Treating the data as in-sync can mean the difference between
      data-safety and data-loss.
      Signed-off-by: NeilBrown <neilb@suse.de>
      415e72d0
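      A sketch of the per-device test this adds; the identifiers follow
      raid5's naming, but the exact placement inside stripe handling is an
      assumption of this sketch.

        /* The device counts as in-sync for this stripe either if the whole
         * device is In_sync, or if the stripe lies entirely below the part
         * of the device that has already been recovered. */
        if (test_bit(In_sync, &rdev->flags) ||
            sh->sector + STRIPE_SECTORS <= rdev->recovery_offset)
                set_bit(R5_Insync, &dev->flags);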
    • md/raid5: More careful check for "has array failed". · 674806d6
      Committed by NeilBrown
      When we are reshaping an array, the device failure combinations
      that cause us to decide that the array has failed are more subtle.
      
      In particular, any 'spare' will be fully in-sync in the section
      of the array that has already been reshaped, thus failures that
      affect only that section are less critical.
      
      So encode this subtlety in a new function and call it as appropriate.
      
      The case that showed this problem was a 4 drive RAID5 to 8 drive RAID6
      conversion where the last two devices failed.
      This resulted in:
      
        good good good good incomplete good good failed failed
      
      while converting a 5-drive RAID6 to 8 drive RAID5
      The incomplete device causes the whole array to look bad,
      but as it was actually good for the section that had been
      converted to 8 drives, all the data was actually safe.
      Reported-by: Terry Morris <tbmorris@tbmorris.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
      674806d6
    • md: Don't update ->recovery_offset when reshaping an array to fewer devices. · 70fffd0b
      Committed by NeilBrown
      When an array is reshaped to have fewer devices, the reshape proceeds
      from the end of the devices to the beginning.
      
      If a device happens to be non-In_sync (which is possible but rare)
      we would normally update the ->recovery_offset as the reshape
      progresses. However that would be wrong as the recovery_offset records
      that the early part of the device is in_sync, while in fact it would
      only be the later part that is in_sync, and in any case the offset
      number would be measured from the wrong end of the device.
      
      Relatedly, if after a reshape a spare is discovered to not be
      recovered all the way to the end, do not allow spare_active
      to incorporate it in the array.
      
      This becomes relevant in the following sample scenario:
      
      A 4 drive RAID5 is converted to a 6 drive RAID6 in a combined
      operation.
      The RAID5->RAID6 conversion will cause a 5th drive to be included as a
      spare, then the 5-drive -> 6-drive reshape will effectively rebuild that
      spare as it progresses.  The 6th drive is treated as in_sync the whole
      time, as there is never a case in which we might consider reading from
      it but must not because there is no valid data.
      
      If we interrupt this reshape part-way through and reverse it to return
      to a 5-drive RAID6 (or even a 4-drive RAID5), we don't want to update
      the recovery_offset - as that would be wrong - and we don't want to
      include that spare as active in the 5-drive RAID6 when the reversed
      reshape completes, as it will be mostly out-of-sync still.
      Signed-off-by: NeilBrown <neilb@suse.de>
      70fffd0b
    • md/raid5: avoid oops when number of devices is reduced then increased. · e4e11e38
      Committed by NeilBrown
      The entries in the stripe_cache maintained by raid5 are enlarged
      when we increase the number of devices in the array, but not
      shrunk when we reduce the number of devices.
      So if entries are added after reducing the number of devices, we
      must ensure we initialise the whole entry, not just the part that
      is currently relevant.  Otherwise if we enlarge the array again,
      we will reference uninitialised values.
      
      As grow_buffers/shrink_buffers now want to use a count that is stored
      explicitly in the raid_conf, they should get it from there rather than
      being passed it as a parameter.
      Signed-off-by: NeilBrown <neilb@suse.de>
      e4e11e38
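      A simplified sketch of the second part (the allocation details are
      elided and the body is not the real code): grow_buffers()/
      shrink_buffers() take their count from the conf rather than a
      parameter, so every slot the cache was sized for gets initialised.

        static int grow_buffers(struct stripe_head *sh)
        {
                int num = sh->raid_conf->pool_size;   /* previously passed in */
                int i;

                for (i = 0; i < num; i++) {
                        /* ... allocate a page and attach it to sh->dev[i] ... */
                }
                return 0;
        }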
  13. 28 May, 2010 (1 commit)
  14. 18 May, 2010 (7 commits)