1. 03 Jul 2012, 4 commits
  2. 22 May 2012, 5 commits
  3. 21 May 2012, 4 commits
    • md/raid5: allow for change in data_offset while managing a reshape. · b5254dd5
      Committed by NeilBrown
      The important issue here is incorporating the difference in data_offset
      into calculations concerning when we might need to over-write data
      that is still thought to be valid.
      
      To this end we find the minimum offset difference across all devices
      and add that where appropriate.
      Signed-off-by: NeilBrown <neilb@suse.de>
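      A rough user-space model of that calculation (illustrative only; the
      struct and function names here are made up, not the kernel code):

              /* Find the smallest (new - old) data_offset difference across
               * all member devices.  Reshape progress checks can then add
               * this most conservative value before deciding whether a write
               * might land on data that is still valid at the old offset. */
              struct dev { long long data_offset, new_data_offset; };

              static long long min_offset_diff(const struct dev *devs, int n)
              {
                      long long min = 0;

                      for (int i = 0; i < n; i++) {
                              long long diff = devs[i].new_data_offset -
                                               devs[i].data_offset;
                              if (i == 0 || diff < min)
                                      min = diff;
                      }
                      return min;
              }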
    • md/raid5: Use correct data_offset for all IO. · 05616be5
      Committed by NeilBrown
      As there can now be two different data_offsets - an 'old' and
      a 'new' - we need to carefully choose between them.
      Signed-off-by: NeilBrown <neilb@suse.de>
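      A minimal sketch of the choice being made (hypothetical helper, not the
      raid5 code): the caller has to know whether the target region is still
      in the pre-reshape layout.

              struct offsets { long long data_offset, new_data_offset; };

              /* 'previous' means the region has not been reshaped yet, so the
               * old layout and therefore the old data_offset still apply. */
              static inline long long io_data_offset(const struct offsets *o,
                                                     int previous)
              {
                      return previous ? o->data_offset : o->new_data_offset;
              }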
    • md: add possibility to change data-offset for devices. · c6563a8c
      Committed by NeilBrown
      When reshaping we can avoid costly intermediate backup by
      changing the 'start' address of the array on the device
      (if there is enough room).
      
      So as a first step, allow such a change to be requested
      through sysfs, and recorded in v1.x metadata.
      
      (As we didn't previously check that all 'pad' fields were zero,
       we need a new FEATURE flag for this.
       We also (belatedly) check that all remaining 'pad' fields are
       zero, to avoid a repeat of this.)
      
      The new data offset must be requested separately for each device.
      This allows each to have a different change in the data offset.
      This is not likely to be used often but as data_offset can be
      set per-device, new_data_offset should be too.
      
      This patch also removes the 'acknowledged' arg to rdev_set_badblocks as
      it is never used and never will be.  At the same time we add a new
      arg ('in_new') which is currently always zero but will be used soon.
      
      When a reshape finishes we will need to update the data_offset
      and rdev->sectors.  So provide an exported function to do that.
      Signed-off-by: NeilBrown <neilb@suse.de>
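      The request itself is made through a per-device sysfs attribute, as the
      commit text says.  The finish-up step in the last paragraph can be
      modelled roughly like this (names are illustrative, not the exported
      kernel symbol):

              /* When the reshape completes, make the 'new' offset current and
               * adjust each device's usable size by however far the data area
               * moved (a larger offset means fewer usable sectors). */
              struct dev { long long data_offset, new_data_offset, sectors; };

              static void finish_reshape(struct dev *devs, int n)
              {
                      for (int i = 0; i < n; i++) {
                              devs[i].sectors += devs[i].data_offset -
                                                 devs[i].new_data_offset;
                              devs[i].data_offset = devs[i].new_data_offset;
                      }
              }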
    • md: allow a reshape operation to be reversed. · 2c810cdd
      Committed by NeilBrown
      Currently a reshape operation always progresses from the start
      of the array to the end unless the number of devices is being
      reduced, in which case it progresses in the opposite direction.
      
      To reverse a partial reshape which changes the number of devices,
      you can stop the array and re-assemble with the raid-disks number
      reversed, and the reshape will be undone.
      
      However for a reshape that does not change the number of devices
      it is not possible to reverse the reshape in the middle - you have to
      wait until it completes.
      
      So add a 'reshape_direction' attribute which is either 'forwards' or
      'backwards' and can be explicitly set when delta_disks is zero.
      
      This will become more important when we allow the data_offset to
      change in a reshape.  Then the explicit statement of what direction is
      being used will be more useful.
      
      This can be enabled in raid5 trivially as it already supports
      reverse reshape and just needs to use a different trigger to request it.
      Signed-off-by: NeilBrown <neilb@suse.de>
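      One possible reading of the rule, as a small sketch (hypothetical
      helper, not the sysfs store routine): an explicit choice only matters
      when delta_disks is zero, otherwise the disk-count change implies the
      direction.

              /* Returns 1 if the reshape should run backwards (end to start). */
              static int reshape_backwards(int delta_disks, int requested_backwards)
              {
                      if (delta_disks != 0)
                              return delta_disks < 0;  /* shrinking implies backwards */
                      return requested_backwards;      /* honour the explicit request */
              }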
  4. 03 Apr 2012, 2 commits
  5. 19 Mar 2012, 2 commits
    • md: tidy up rdev_for_each usage. · dafb20fa
      Committed by NeilBrown
      md.h has an 'rdev_for_each()' macro for iterating the rdevs in an
      mddev.  However it uses the 'safe' version of list_for_each_entry,
      and so requires the extra variable, but doesn't include 'safe' in the
      name, which would be useful documentation.
      
      Consequently some places use this safe version without needing it, and
      many use an explicit list_for_each_entry.
      
      So:
       - rename rdev_for_each to rdev_for_each_safe
       - create a new rdev_for_each which uses the plain
         list_for_each_entry,
       - use the 'safe' version only where needed, and convert all other
         list_for_each_entry calls to use rdev_for_each.
      Signed-off-by: NeilBrown <neilb@suse.de>
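      After the tidy-up the two macros look roughly like this (paraphrased
      from md.h of that era; 'same_set' is the list linkage in struct md_rdev):

              /* Plain iterator: fine as long as nothing is removed from the
               * list while iterating. */
              #define rdev_for_each(rdev, mddev) \
                      list_for_each_entry(rdev, &((mddev)->disks), same_set)

              /* 'safe' variant: 'tmp' caches the next entry, so the current
               * rdev may be unlinked inside the loop body. */
              #define rdev_for_each_safe(rdev, tmp, mddev) \
                      list_for_each_entry_safe(rdev, tmp, &((mddev)->disks), same_set)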
    • md: allow re-add to failed arrays. · dc10c643
      Committed by NeilBrown
      When an array is failed (some data inaccessible) then there is no
      point attempting to add a spare as it could not possibly be recovered.
      
      However, there may be value in re-adding a recently removed device.
      e.g. if there is a write-intent-bitmap and it is clear, then access
      to the data could be restored by this action.
      
      So don't reject a re-add to a failed array for RAID10 and RAID5 (the
      only array types that check for a failed array).
      Signed-off-by: NeilBrown <neilb@suse.de>
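      A loose sketch of the resulting policy (hypothetical check, not the
      actual hot-add code): a brand-new spare is still refused on a failed
      array, but a device that was recently a member - it still remembers its
      old slot - may be re-added.

              static int may_add_disk(int array_failed, int saved_raid_disk)
              {
                      if (!array_failed)
                              return 1;               /* normal hot-add */
                      /* re-add only: the device must know which slot it held */
                      return saved_raid_disk >= 0;
              }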
  6. 13 Mar 2012, 3 commits
  7. 23 Dec 2011, 13 commits
  8. 09 Dec 2011, 1 commit
  9. 08 Dec 2011, 1 commit
    • md/raid5: never wait for bad-block acks on failed device. · 9283d8c5
      Committed by NeilBrown
      Once a device is failed we really want to completely ignore it.
      It should go away soon anyway.
      
      In particular the presence of bad blocks on it should not cause us to
      block as we won't be trying to write there anyway.
      
      So as soon as we can check if a device is Faulty, do so and pretend
      that it is already gone if it is Faulty.
      Signed-off-by: NeilBrown <neilb@suse.de>
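      In the driver the check is against the Faulty bit in the device flags
      (test_bit(Faulty, &rdev->flags)); as a stand-alone predicate the rule
      looks roughly like this (hypothetical, for illustration only):

              /* Should a stripe wait for a bad-block acknowledgement on this
               * device?  Never once the device is Faulty: it is as good as
               * gone and no write will be sent to it anyway. */
              static int must_wait_for_bb_ack(int faulty, int unacked_bad_blocks)
              {
                      if (faulty)
                              return 0;
                      return unacked_bad_blocks != 0;
              }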
  10. 08 Nov 2011, 2 commits
  11. 01 Nov 2011, 1 commit
  12. 26 Oct 2011, 2 commits
    • md: Fix some bugs in recovery_disabled handling. · d890fa2b
      Committed by NeilBrown
      In 3.0 we changed the way recovery_disabled was handled so that instead
      of testing against zero, we test an mddev-> value against a conf->
      value.
      Two problems:
        1/ one place in raid1 was missed and still sets it to '1'.
        2/ We didn't explicitly set the conf-> value at array creation
           time.
           It defaulted to '0' just like the mddev value does so they
           could appear equal and thus disable recovery.
           This did not affect normal 'md' as it calls bind_rdev_to_array
           which changes the mddev value.  However the dmraid interface
           doesn't call this and so doesn't change ->recovery_disabled; so at
           array start all recovery is incorrectly disabled.
      
      So initialise the 'conf' value to one less than the mddev value, so
      they will only be the same when explicitly set that way.
      Reported-by: Jonathan Brassow <jbrassow@redhat.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
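      The initialisation rule from the last paragraph, as a one-line sketch
      (conf and mddev stand for the per-personality and per-array structures
      named above):

              /* At array creation: start one below the mddev value so the two
               * only become equal after an explicit "disable recovery" event,
               * even when bind_rdev_to_array() never runs (the dmraid case). */
              conf->recovery_disabled = mddev->recovery_disabled - 1;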
    • md/raid5: fix bug that could result in reads from a failed device. · 355840e7
      Committed by NeilBrown
      This bug was introduced in 415e72d0
      which was in 2.6.36.
      
      There is a small window of time between when a device fails and when
      it is removed from the array.  During this time we might still read
      from it, but we won't write to it - so it is possible that we could
      read stale data.
      
      We didn't need the test of 'Faulty' before because the test on
      In_sync was sufficient.  Since we started allowing reads from the early
      part of non-In_sync devices, we need a test on Faulty too.
      
      This is suitable for any kernel from 2.6.36 onwards, though the patch
      might need a bit of tweaking in 3.0 and earlier.
      
      Cc: stable@kernel.org
      Signed-off-by: NeilBrown <neilb@suse.de>
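      A loose sketch of the tightened read check (hypothetical predicate; the
      driver tests the In_sync and Faulty bits on the device flags): a read
      may use a device only if it is fully In_sync or recovery has already
      passed the target sector, and in either case it must not be Faulty.

              static int can_read_from(int in_sync, int faulty,
                                       long long sector, long long recovery_offset)
              {
                      if (faulty)
                              return 0;       /* the fix: never read from a failed device */
                      return in_sync || sector < recovery_offset;
              }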