1. 19 July 2012, 3 commits
  2. 03 July 2012, 8 commits
    • md: fix up plugging (again). · b357f04a
      Committed by NeilBrown
      The value returned by "mddev_check_plug" is only valid until the
      next 'schedule' as that will unplug things.  This could happen at any
      call to mempool_alloc.
      So just calling mddev_check_plug at the start doesn't really make
      sense.
      
      So call it just before, or just after, queuing things for the thread.
      As the action that happens at unplug is to wake the thread, this makes
      lots of sense.
      If we cannot add a plug (which requires a small GFP_ATOMIC allocation)
      we wake the thread immediately.
      
      RAID5 is a bit different.  Requests are queued for the thread and the
      thread is woken by release_stripe.  So we don't need to wake the
      thread on failure.
      However the thread doesn't perform certain actions when there is any
      active plug, so it is important to install a plug before waking the
      thread.  So for RAID5 we install the plug *before* queuing the request
      and waking the thread.
      
      Without this patch it is possible for raid1 or raid10 to queue a
      request without then waking the thread, resulting in the array locking
      up.
      
      Also change raid10 to only flush_pending_write when there are no
      active plugs, just like raid1.
      
      This patch is suitable for 3.0 or later.  I plan to submit it to
      -stable, but I'd like to let it spend a few weeks in mainline
      first to be sure it is completely safe.
      Signed-off-by: NeilBrown <neilb@suse.de>
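      
      For illustration, a minimal user-space sketch of the ordering described
      above; the types and helpers here (try_install_plug, queue_for_thread,
      wake_thread) are stand-ins, not the real md/raid symbols:
      
          #include <stdbool.h>
      
          struct mddev { bool plug_installed; bool thread_woken; };
      
          /* May fail, like the small GFP_ATOMIC allocation in the kernel. */
          static bool try_install_plug(struct mddev *m) { m->plug_installed = true; return true; }
          static void queue_for_thread(struct mddev *m) { (void)m; /* add request to pending list */ }
          static void wake_thread(struct mddev *m) { m->thread_woken = true; }
      
          /* raid1/raid10 style: queue first, then plug-or-wake right away. */
          static void raid1_submit(struct mddev *m)
          {
                  queue_for_thread(m);
                  if (!try_install_plug(m))
                          wake_thread(m);   /* no plug means no unplug callback will wake us */
          }
      
          /* raid5 style: install the plug *before* queuing; the wakeup then
           * happens anyway, so no special handling of plug failure is needed. */
          static void raid5_submit(struct mddev *m)
          {
                  try_install_plug(m);
                  queue_for_thread(m);
                  wake_thread(m);           /* stands in for release_stripe waking the thread */
          }
      
          int main(void)
          {
                  struct mddev m = { false, false };
                  raid1_submit(&m);
                  raid5_submit(&m);
                  return 0;
          }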
    • raid5: delayed stripe fix · fab363b5
      Committed by Shaohua Li
      There is no locking around setting the STRIPE_DELAYED and
      STRIPE_PREREAD_ACTIVE bits, but the two bits are related.  A delayed
      stripe can be moved to the hold list only when the count of
      preread-active stripes is below IO_THRESHOLD.  If a stripe ends up with
      both bits set, it sits in the delayed list while the preread count
      stays non-zero, so it never leaves the delayed list.
      Signed-off-by: Shaohua Li <shli@fusionio.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
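      
      The stuck condition can be sketched like this (a simplified model, not
      raid5.c; the flag names and IO_THRESHOLD come from the text above, the
      value 1 is only a placeholder):
      
          #include <stdbool.h>
      
          #define IO_THRESHOLD            1   /* placeholder value for the sketch */
          #define STRIPE_DELAYED          (1UL << 0)
          #define STRIPE_PREREAD_ACTIVE   (1UL << 1)
      
          /* A delayed stripe may be moved to the hold list only while the count
           * of preread-active stripes is below IO_THRESHOLD. */
          static bool can_activate_delayed(unsigned long state, int preread_active_stripes)
          {
                  return (state & STRIPE_DELAYED) &&
                         preread_active_stripes < IO_THRESHOLD;
          }
      
          /* A stripe carrying both bits keeps the preread-active count elevated
           * by itself, so the condition above never becomes true for it and it
           * stays on the delayed list; the combination has to be avoided. */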
    • md/raid456: When read error cannot be recovered, record bad block · 2e8ac303
      Committed by majianpeng
      We may not be able to fix a bad block if:
       - the array is degraded
       - the over-write fails.
      
      In these cases we currently eject the device, but we should
      record a bad block if possible.
      Signed-off-by: majianpeng <majianpeng@gmail.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
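      
      The decision described above can be sketched roughly as follows (a
      simplified model; record_bad_block stands in for rdev_set_badblocks and
      eject_device for failing the whole device, neither is the raid5.c code):
      
          #include <stdbool.h>
      
          static bool record_bad_block(void) { return true; }   /* stand-in */
          static void eject_device(void) { }                    /* stand-in */
      
          /* A read error is unrecoverable when the array is degraded or when
           * the attempt to over-write the block with good data fails.  Prefer
           * marking the block bad over ejecting the whole device. */
          static void handle_unrecoverable_read(bool degraded, bool overwrite_failed)
          {
                  if (degraded || overwrite_failed) {
                          if (!record_bad_block())
                                  eject_device();   /* only if the block cannot be recorded */
                  }
          }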
    • md: make 'name' arg to md_register_thread non-optional. · 0232605d
      Committed by NeilBrown
      Having the 'name' arg optional and defaulting to the current
      personality name is not necessary and leads to errors: when
      changing the level of an array we can end up using the
      name of the old level instead of the new one.
      
      So make it non-optional and always explicitly pass the name
      of the level that the array will be.
      Reported-by: majianpeng <majianpeng@gmail.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
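      
      Roughly, the old fallback and the new convention look like this in plain
      C (a sketch only; the struct and helpers are approximations, and the
      only identifiers taken from above are md_register_thread and 'name'):
      
          struct md_personality { const char *name; };
          struct mddev { struct md_personality *pers; };
      
          /* Old behaviour (roughly): a NULL name fell back to the current
           * personality, i.e. the *old* level while a level change is underway. */
          static const char *thread_name_old(const struct mddev *mddev, const char *name)
          {
                  return name ? name : mddev->pers->name;
          }
      
          /* New behaviour: the caller always passes the name of the level the
           * array will be, so there is no fallback to get wrong. */
          static const char *thread_name_new(const char *name)
          {
                  return name;   /* must be non-NULL */
          }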
    • md/raid5: fix refcount problem when blocked_rdev is set. · 5f066c63
      Committed by NeilBrown
      commit 43220aa0
          md/raid5: fix a hang on device failure.
      
      fixed a hang, but introduced a refcounting imbalance: if the
      presence of bad blocks ever caused an rdev to be 'blocked' we
      would increment the refcount on the rdev and never decrement it.
      
      So add the needed rdev_dec_pending when md_wait_for_blocked_rdev
      is not called.
      Reported-by: majianpeng <majianpeng@gmail.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
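      
      The reference rule being restored can be sketched as follows (simplified
      stand-ins; wait_for_blocked_rdev and dec_pending model
      md_wait_for_blocked_rdev and rdev_dec_pending rather than quoting them):
      
          #include <stdbool.h>
      
          struct rdev { int nr_pending; };
      
          static void dec_pending(struct rdev *rdev) { rdev->nr_pending--; }
      
          static void wait_for_blocked_rdev(struct rdev *rdev)
          {
                  /* ... sleep until the rdev is unblocked ... */
                  dec_pending(rdev);   /* the wait itself drops the reference */
          }
      
          /* Whoever incremented nr_pending on seeing a blocked rdev must ensure
           * exactly one decrement follows: via the wait, or explicitly when the
           * wait is skipped. */
          static void finish_with_blocked_rdev(struct rdev *rdev, bool will_wait)
          {
                  if (will_wait)
                          wait_for_blocked_rdev(rdev);
                  else
                          dec_pending(rdev);
          }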
    • md/raid5: In ops_run_io, inc nr_pending before calling md_wait_for_blocked_rdev · 1850753d
      Committed by majianpeng
      In ops_run_io(), the call to md_wait_for_blocked_rdev will decrement
      nr_pending so we lose the reference we hold on the rdev.
      So atomic_inc it first to maintain the reference.
      
      This bug was introduced by commit  73e92e51
          md/raid5.  Don't write to known bad block on doubtful devices.
      
      which appeared in 3.0, so this patch is suitable for stable kernels
      since then.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: majianpeng <majianpeng@gmail.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
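      
      This is the mirror image of the previous sketch: because the wait
      consumes a reference, a caller that wants to keep its own reference
      across it must take an extra one first (simplified, not the ops_run_io
      code):
      
          #include <stdbool.h>
      
          struct rdev { int nr_pending; };
      
          static void wait_for_blocked_rdev(struct rdev *rdev)
          {
                  /* ... sleep until the rdev is unblocked ... */
                  rdev->nr_pending--;   /* consumes one reference on return */
          }
      
          static void ops_run_io_sketch(struct rdev *rdev, bool blocked)
          {
                  /* One reference is already held for the I/O about to be issued. */
                  if (blocked) {
                          rdev->nr_pending++;            /* extra ref for the wait to consume */
                          wait_for_blocked_rdev(rdev);   /* the original ref survives the wait */
                  }
                  /* ... issue the request using the still-held reference ... */
          }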
    • md/raid5: Do not add data_offset before call to is_badblock · 6c0544e2
      Committed by majianpeng
      In chunk_aligned_read() we are adding data_offset before calling
      is_badblock.  But is_badblock also adds data_offset, so that is bad.
      
      So move the addition of data_offset to after the call to
      is_badblock.
      
      This bug was introduced by commit 31c176ec
           md/raid5: avoid reading from known bad blocks.
      which first appeared in 3.0.  So this patch is suitable for any
      -stable kernel from 3.0.y onwards.  However it will need minor
      revision for most of those (as the comment didn't appear until
      recently).
      
      Cc: stable@vger.kernel.org
      Signed-off-by: majianpeng <majianpeng@gmail.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
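      
      The corrected ordering can be sketched like this (a simplified stand-in
      for chunk_aligned_read, not the raid5.c source; is_badblock here merely
      models the kernel helper, which, as noted above, adds data_offset
      internally):
      
          #include <stdbool.h>
      
          typedef unsigned long long sector_t;
      
          struct rdev { sector_t data_offset; };
      
          /* Models is_badblock(): takes a sector relative to the start of the
           * data area and offsets it itself, so callers must not add data_offset. */
          static bool is_badblock(struct rdev *rdev, sector_t sector)
          {
                  sector_t dev_sector = rdev->data_offset + sector;
                  (void)dev_sector;   /* look up dev_sector in the bad-block list */
                  return false;
          }
      
          static void chunk_aligned_read(struct rdev *rdev, sector_t sector)
          {
                  if (is_badblock(rdev, sector))   /* check with the raw sector */
                          return;                  /* fall back to the stripe path */
                  sector += rdev->data_offset;     /* only now adjust for the real I/O */
                  /* ... issue the aligned read at 'sector' ... */
          }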
    • md/raid5: prefer replacing failed devices over want-replacement devices. · 5cfb22a1
      Committed by NeilBrown
      If a RAID5 has both a failed device and a device marked as
      'WantReplacement', then we should preferentially replace the failed
      device.
      However the current code replaces whichever is found first.
      So split the search into two loops: check for failed/missing devices
      first, and only check for WantReplacement if nothing is failed or missing.
      Reported-by: majianpeng <majianpeng@gmail.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
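      
      The two-pass selection reads roughly like this (a sketch with made-up
      helper and field names, not the raid5.c slot-finding code):
      
          #include <stdbool.h>
          #include <stddef.h>
      
          struct slot { bool failed_or_missing; bool want_replacement; };
      
          static struct slot *pick_slot_to_replace(struct slot *slots, int n)
          {
                  int i;
                  /* First pass: a failed or missing device always wins. */
                  for (i = 0; i < n; i++)
                          if (slots[i].failed_or_missing)
                                  return &slots[i];
                  /* Second pass: only then consider WantReplacement devices. */
                  for (i = 0; i < n; i++)
                          if (slots[i].want_replacement)
                                  return &slots[i];
                  return NULL;
          }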
  3. 22 May 2012, 5 commits
  4. 21 May 2012, 4 commits
    • md/raid5: allow for change in data_offset while managing a reshape. · b5254dd5
      Committed by NeilBrown
      The important issue here is incorporating the difference in data_offset
      into the calculations that decide when we might need to over-write data
      that is still thought to be valid.
      
      To this end we find the minimum offset difference across all devices
      and add that where appropriate.
      Signed-off-by: NeilBrown <neilb@suse.de>
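      
      A sketch of the "minimum offset difference" step (illustrative only; the
      field names mirror data_offset/new_data_offset from these patches, but
      the loop is not the kernel code):
      
          typedef unsigned long long sector_t;
      
          struct dev { sector_t data_offset, new_data_offset; };
      
          /* The safety margin used in the reshape calculations is the smallest
           * offset change across all devices (simplified: assumes every device
           * moves its data start in the same direction). */
          static long long min_offset_diff(const struct dev *devs, int n)
          {
                  long long min = 0;
                  int i;
                  for (i = 0; i < n; i++) {
                          long long diff = (long long)devs[i].new_data_offset -
                                           (long long)devs[i].data_offset;
                          if (i == 0 || diff < min)
                                  min = diff;
                  }
                  return min;
          }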
    • md/raid5: Use correct data_offset for all IO. · 05616be5
      Committed by NeilBrown
      As there can now be two different data_offsets - an 'old' and
      a 'new' - we need to carefully choose between them.
      Signed-off-by: NeilBrown <neilb@suse.de>
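      
      One way to picture the choice (an illustrative sketch only, not the
      raid5.c logic; the helper and fields below are hypothetical): stripes
      the reshape has already passed use the new offset, the rest still use
      the old one.
      
          typedef unsigned long long sector_t;
      
          struct conf {
                  int reshape_backwards;       /* direction of the ongoing reshape */
                  sector_t reshape_progress;   /* boundary reached so far */
          };
      
          /* Hypothetical helper: returns nonzero when I/O for this stripe
           * should use new_data_offset rather than data_offset. */
          static int use_new_offset(const struct conf *conf, sector_t stripe_sector)
          {
                  if (conf->reshape_backwards)
                          return stripe_sector >= conf->reshape_progress;
                  return stripe_sector < conf->reshape_progress;
          }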
    • md: add possibility to change data-offset for devices. · c6563a8c
      Committed by NeilBrown
      When reshaping we can avoid costly intermediate backup by
      changing the 'start' address of the array on the device
      (if there is enough room).
      
      So as a first step, allow such a change to be requested
      through sysfs, and recorded in v1.x metadata.
      
      (As we didn't previously check that all 'pad' fields were zero,
       we need a new FEATURE flag for this.
       We also (belatedly) check that all remaining 'pad' fields are
       zero to avoid a repeat of this.)
      
      The new data offset must be requested separately for each device.
      This allows each to have a different change in the data offset.
      This is not likely to be used often but as data_offset can be
      set per-device, new_data_offset should be too.
      
      This patch also removes the 'acknowledged' arg to rdev_set_badblocks as
      it is never used and never will be.  At the same time we add a new
      arg ('in_new') which is currently always zero but will soon be used.
      
      When a reshape finishes we will need to update the data_offset
      and rdev->sectors.  So provide an exported function to do that.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: allow a reshape operation to be reversed. · 2c810cdd
      Committed by NeilBrown
      Currently a reshape operation always progresses from the start
      of the array to the end unless the number of devices is being
      reduced, in which case it progresses in the opposite direction.
      
      To reverse a partial reshape which changes the number of devices,
      you can stop the array and re-assemble with the raid-disks numbers
      reversed, and it will undo the reshape.
      
      However for a reshape that does not change the number of devices
      it is not possible to reverse the reshape in the middle - you have to
      wait until it completes.
      
      So add a 'reshape_direction' attribute which is either 'forwards' or
      'backwards' and can be explicitly set when delta_disks is zero.
      
      This will become more important when we allow the data_offset to
      change in a reshape.  Then the explicit statement of what direction is
      being used will be more useful.
      
      This can be enabled in raid5 trivially as it already supports
      reverse reshape and just needs to use a different trigger to request it.
      Signed-off-by: NeilBrown <neilb@suse.de>
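      
      The resulting direction rule can be sketched like this (illustrative
      only; delta_disks and the 'forwards'/'backwards' values come from the
      description above, the helper name is made up):
      
          #include <stdbool.h>
      
          /* Shrinking always reshapes backwards, growing always forwards; only
           * when delta_disks is zero does the explicit reshape_direction
           * setting decide. */
          static bool reshape_runs_backwards(int delta_disks, bool requested_backwards)
          {
                  if (delta_disks < 0)
                          return true;
                  if (delta_disks > 0)
                          return false;
                  return requested_backwards;
          }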
  5. 03 April 2012, 2 commits
  6. 19 March 2012, 2 commits
    • md: tidy up rdev_for_each usage. · dafb20fa
      Committed by NeilBrown
      md.h has an 'rdev_for_each()' macro for iterating the rdevs in an
      mddev.  However it uses the 'safe' version of list_for_each_entry,
      and so requires an extra variable, but doesn't include 'safe' in the
      name, which would be useful documentation.
      
      Consequently some places use this safe version without needing it, and
      many use an explicit list_for_each_entry instead.
      
      So:
       - rename rdev_for_each to rdev_for_each_safe
       - create a new rdev_for_each which uses the plain
         list_for_each_entry,
       - use the 'safe' version only where needed, and convert all other
         list_for_each_entry calls to use rdev_for_each.
      Signed-off-by: NeilBrown <neilb@suse.de>
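      
      The distinction being encoded in the names is the usual
      list_for_each_entry vs list_for_each_entry_safe one: the safe variant
      caches the next element so the current one may be removed or freed
      mid-walk.  A self-contained toy illustration (not the kernel's
      <linux/list.h>):
      
          #include <stdio.h>
          #include <stdlib.h>
      
          struct node { int nr; struct node *next; };
      
          /* Plain walk: fine as long as the loop never frees the current node. */
          #define for_each(n, head) \
                  for (n = (head); n != NULL; n = n->next)
      
          /* 'Safe' walk: remembers the next node first, so 'n' may be freed. */
          #define for_each_safe(n, tmp, head) \
                  for (n = (head); n != NULL && (tmp = n->next, 1); n = tmp)
      
          int main(void)
          {
                  struct node *head = NULL, *n, *tmp;
                  int i;
                  for (i = 0; i < 3; i++) {
                          n = malloc(sizeof(*n));
                          n->nr = i;
                          n->next = head;
                          head = n;
                  }
                  for_each(n, head)
                          printf("rdev %d\n", n->nr);   /* read-only: plain walk suffices */
                  for_each_safe(n, tmp, head)
                          free(n);                      /* freeing: the safe walk is required */
                  return 0;
          }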
    • md: allow re-add to failed arrays. · dc10c643
      Committed by NeilBrown
      When an array is failed (some data inaccessible) then there is no
      point attempting to add a spare as it could not possibly be recovered.
      
      However there may be value in re-adding a recently removed device:
      e.g. if there is a write-intent bitmap and it is clear, then access
      to the data could be restored by this action.
      
      So don't reject a re-add to a failed array for RAID10 and RAID5 (the
      only array types that check for a failed array).
      Signed-off-by: NeilBrown <neilb@suse.de>
  7. 13 March 2012, 3 commits
  8. 23 December 2011, 13 commits