1. 25 Dec 2009, 6 commits
  2. 16 Dec 2009, 5 commits
  3. 15 Dec 2009, 7 commits
  4. 14 Dec 2009, 22 commits
    • md: add 'recovery_start' per-device sysfs attribute · 06e3c817
      Authored by Dan Williams
      Enable external metadata arrays to manage rebuild checkpointing via an
      md/dev-XXX/recovery_start attribute, which reflects rdev->recovery_offset.
      
      Also update resync_start_store to allow 'none' to be written, for
      consistency.
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
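      A minimal userspace sketch, not taken from the patch, of how an external
      metadata manager might record a rebuild checkpoint through the new
      attribute.  The /sys path, device name and sector value are illustrative.

      #include <stdio.h>

      int main(void)
      {
          /* Per-device attribute added by this patch; "md0" and "dev-sdb1"
           * are example names, not fixed paths. */
          const char *path = "/sys/block/md0/md/dev-sdb1/recovery_start";
          FILE *f = fopen(path, "w");

          if (!f) {
              perror("recovery_start");
              return 1;
          }
          /* Checkpoint in 512-byte sectors.  Writing 'none' (the value the
           * companion resync_start change also accepts) presumably marks the
           * device as fully recovered. */
          fprintf(f, "%llu\n", 2097152ULL);
          return fclose(f);
      }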
    • md: rcu_read_lock() walk of mddev->disks in md_do_sync() · 4e59ca7d
      Authored by Dan Williams
      Other walks of this list are either under rcu_read_lock() or the list
      mutation lock (mddev_lock()).  This protects against the improbable case of a
      disk being removed from the array at the start of md_do_sync().
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
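      A toy kernel-module sketch, assuming nothing about md internals beyond
      what the message states, illustrating the rcu_read_lock()-protected list
      walk pattern being applied to mddev->disks: readers traverse under
      rcu_read_lock(), writers serialise on a mutation lock.

      #include <linux/module.h>
      #include <linux/list.h>
      #include <linux/rcupdate.h>
      #include <linux/slab.h>
      #include <linux/spinlock.h>

      struct toy_disk {
          int id;
          struct list_head entry;
          struct rcu_head rcu;
      };

      static LIST_HEAD(toy_disks);
      static DEFINE_SPINLOCK(toy_lock);          /* writer-side "mutation lock" */

      static void toy_walk(void)
      {
          struct toy_disk *d;

          rcu_read_lock();                       /* cheap, non-blocking reader */
          list_for_each_entry_rcu(d, &toy_disks, entry)
              pr_info("toy disk %d\n", d->id);
          rcu_read_unlock();
      }

      static int __init toy_init(void)
      {
          struct toy_disk *d = kzalloc(sizeof(*d), GFP_KERNEL);

          if (!d)
              return -ENOMEM;
          d->id = 1;

          spin_lock(&toy_lock);                  /* writers hold the lock */
          list_add_rcu(&d->entry, &toy_disks);
          spin_unlock(&toy_lock);

          toy_walk();
          return 0;
      }

      static void __exit toy_exit(void)
      {
          struct toy_disk *d, *tmp;

          spin_lock(&toy_lock);
          list_for_each_entry_safe(d, tmp, &toy_disks, entry) {
              list_del_rcu(&d->entry);
              kfree_rcu(d, rcu);                 /* free after a grace period */
          }
          spin_unlock(&toy_lock);
      }

      module_init(toy_init);
      module_exit(toy_exit);
      MODULE_LICENSE("GPL");
      MODULE_DESCRIPTION("Illustration of an RCU-protected list walk");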
    • md: integrate spares into array at earliest opportunity. · 93be75ff
      Authored by NeilBrown
      As v1.x metadata can record that a member of the array is
      not completely recovered, it makes sense to record that a
      spare has become a regular member of the array at the earliest
      opportunity.
      So remove the tests on "recovery_offset > 0" in super_1_sync
      as they really aren't needed, and schedule a metadata update
      immediately after adding spares to a degraded array.
      
      This means that if a crash happens immediately after a recovery
      starts, the new device will be included in the array and recovery will
      continue from wherever it was up to.  Previously this didn't happen
      unless recovery was at least 1/16 of the way through.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: move compat_ioctl handling into md.c · aa98aa31
      Authored by Arnd Bergmann
      The RAID ioctls are only implemented in md.c, so the
      handling for them should also be moved there from
      fs/compat_ioctl.c.
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Cc: Neil Brown <neilb@suse.de>
      Cc: Andre Noll <maan@systemlinux.org>
      Cc: linux-raid@vger.kernel.org
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: revise Kconfig help for MD_MULTIPATH · 93bd89a6
      Authored by NeilBrown
      Make it clear in the config message that MD_MULTIPATH is not under
      active development.
      
      Cc: Oren Held <orenhe@il.ibm.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: add MODULE_DESCRIPTION for all md related modules. · 0efb9e61
      Authored by NeilBrown
      Suggested by Oren Held <orenhe@il.ibm.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
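      For illustration, the kind of boilerplate the patch adds: a
      MODULE_DESCRIPTION() string lets `modinfo` report what a module does.  The
      strings below are examples, not the ones used in the md modules.

      #include <linux/module.h>

      static int __init demo_init(void) { return 0; }
      static void __exit demo_exit(void) { }

      module_init(demo_init);
      module_exit(demo_exit);

      MODULE_DESCRIPTION("Example module description reported by modinfo");
      MODULE_LICENSE("GPL");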
    • raid: improve MD/raid10 handling of correctable read errors. · 1e50915f
      Authored by Robert Becker
      We've noticed severe lasting performance degradation of our raid
      arrays when we have drives that yield large numbers of media errors.
      The raid10 module will queue each failed read for retry, and also
      will attempt to call fix_read_error() to perform the read recovery.
      Read recovery is performed while the array is frozen, so repeated
      recovery attempts can degrade the performance of the array for
      extended periods of time.
      
      With this patch I propose adding a per md device max number of
      corrected read attempts.  Each rdev will maintain a count of
      read correction attempts in the rdev->read_errors field (not
      used currently for raid10). When we enter fix_read_error()
      we'll check to see when the last read error occurred, and
      divide the read error count by 2 for every hour since the
      last read error. If at that point our read error count
      exceeds the read error threshold, we'll fail the raid device.
      
      In addition, this patch adds sysfs nodes (get/set) for
      the per-md max_read_errors attribute and the rdev->read_errors
      attribute, and adds some printk's to indicate when
      fix_read_error fails to repair an rdev.
      
      For testing I used debugfs->fail_make_request to inject
      IO errors to the rdev while doing IO to the raid array.
      Signed-off-by: Robert Becker <Rob.Becker@riverbed.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
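      A userspace sketch of the decay arithmetic described above; it is not the
      md code, and the function and field names are invented for illustration.
      The stored count is halved for every full hour since the previous error,
      then the new error is counted and compared against the configured maximum.

      #include <stdio.h>

      struct rdev_errors {
          unsigned int read_errors;      /* corrected-read-error count */
          long last_read_error_sec;      /* time of the previous read error */
      };

      /* Returns 1 if the device should be failed, 0 otherwise. */
      static int note_read_error(struct rdev_errors *e, long now_sec,
                                 unsigned int max_read_errors)
      {
          long hours = (now_sec - e->last_read_error_sec) / 3600;

          if (hours >= 32)               /* avoid an out-of-range shift */
              e->read_errors = 0;
          else if (hours > 0)
              e->read_errors >>= hours;  /* halve once per elapsed hour */

          e->last_read_error_sec = now_sec;
          e->read_errors++;

          return e->read_errors > max_read_errors;
      }

      int main(void)
      {
          struct rdev_errors e = { .read_errors = 40, .last_read_error_sec = 0 };

          /* Two hours later: 40 decays to 10, the new error makes 11, which is
           * still below a threshold of 20, so the device is not failed. */
          printf("fail device? %d\n", note_read_error(&e, 2 * 3600, 20));
          return 0;
      }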
    • md/raid10: print more useful messages on device failure. · 67b8dc4b
      Authored by Robert Becker
      When we get a read error on a device in a RAID10, and attempting to
      repair the error fails, print more useful messages about why it
      failed.
      Signed-off-by: Robert Becker <Rob.Becker@riverbed.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/bitmap: update dirty flag when bitmap bits are explicitly set. · ffa23322
      Authored by NeilBrown
      There is a sysfs file which allows bits in the write-intent
      bitmap to be explicitly set, indicating that the block is thought
      to be 'dirty'.
      When this happens we should really set recovery_cp backwards
      to include the block to reflect this dirtiness.
      
      In particular, a 'resync' process will refuse to start if
      recovery_cp is beyond the end of the array, so this is needed
      to allow a resync to be triggered.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: Support write-intent bitmaps with externally managed metadata. · ece5cff0
      Authored by NeilBrown
      In this case, the metadata needs to not be in the same
      sector as the bitmap.
      md will not read/write any bitmap metadata.  Config must be
      done via sysfs and when a recovery makes the array non-degraded
      again, writing 'true' to 'bitmap/can_clear' will allow bits in
      the bitmap to be cleared again.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/bitmap: move setting of daemon_lastrun out of bitmap_read_sb · 624ce4f5
      Authored by NeilBrown
      Setting daemon_lastrun really has nothing to do with reading
      the bitmap superblock, it just happens to be needed at the same time.
      bitmap_read_sb is about to become optional, so move that code out
      to after the call to bitmap_read_sb.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: support updating bitmap parameters via sysfs. · 43a70507
      Authored by NeilBrown
      A new attribute directory 'bitmap' in 'md' is created which
      contains files for configuring the bitmap.
      'location' identifies where the bitmap is, either 'none',
      or 'file' or 'sector offset from metadata'.
      Writing 'location' can create or remove a bitmap.
      Adding a 'file' bitmap this way is not yet supported.
      'chunksize' and 'time_base' must be set before 'location'
      can be set.
      
      'chunksize' can be set before creating a bitmap, but is
      currently always overridden by the bitmap superblock.
      
      'time_base' and 'backlog' can be updated at any time.
      Signed-off-by: NeilBrown <neilb@suse.de>
      Reviewed-by: Andre Noll <maan@systemlinux.org>
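      A hedged userspace sketch of the ordering described above: 'chunksize' and
      'time_base' are written before 'location'.  The array name, the values and
      the "+8" offset syntax are illustrative assumptions, not taken from the
      patch; only the attribute names come from the message.

      #include <stdio.h>

      static int write_attr(const char *attr, const char *val)
      {
          char path[128];
          FILE *f;

          snprintf(path, sizeof(path), "/sys/block/md0/md/bitmap/%s", attr);
          f = fopen(path, "w");
          if (!f) {
              perror(path);
              return -1;
          }
          fprintf(f, "%s\n", val);
          return fclose(f);
      }

      int main(void)
      {
          /* chunksize and time_base must be set before location ... */
          write_attr("chunksize", "65536");
          write_attr("time_base", "5");
          /* ... then writing a sector offset relative to the metadata creates
           * the bitmap; per the message, writing 'none' would remove it. */
          write_attr("location", "+8");
          return 0;
      }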
    • md: factor out parsing of fixed-point numbers · 72e02075
      Authored by NeilBrown
      safe_delay_store can parse fixed point numbers (for fractions
      of a second).  We will want to do that for another sysfs
      file soon, so factor out the code.
      Signed-off-by: NeilBrown <neilb@suse.de>
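      A sketch of fixed-point parsing in the spirit of safe_delay_store, written
      as standalone userspace C rather than as the kernel helper itself: a
      string such as "1.25" becomes an integer scaled by 10^scale (scale=3
      gives milliseconds).  The function name and interface are invented here.

      #include <ctype.h>
      #include <stdio.h>

      static int parse_scaled(const char *s, unsigned long *res, int scale)
      {
          unsigned long value = 0;
          int frac_digits = -1;                   /* -1: no '.' seen yet */

          for (; *s && *s != '\n'; s++) {
              if (*s == '.' && frac_digits < 0) {
                  frac_digits = 0;
                  continue;
              }
              if (!isdigit((unsigned char)*s))
                  return -1;
              if (frac_digits >= 0) {
                  if (frac_digits >= scale)
                      continue;                   /* drop digits beyond scale */
                  frac_digits++;
              }
              value = value * 10 + (unsigned long)(*s - '0');
          }
          /* Pad so the result is the parsed number scaled by 10^scale. */
          for (frac_digits = frac_digits < 0 ? 0 : frac_digits;
               frac_digits < scale; frac_digits++)
              value *= 10;
          *res = value;
          return 0;
      }

      int main(void)
      {
          unsigned long ms;

          if (parse_scaled("1.25", &ms, 3) == 0)
              printf("%lu ms\n", ms);             /* prints "1250 ms" */
          return 0;
      }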
    • md: support bitmap offset appropriate for external-metadata arrays. · f6af949c
      Authored by NeilBrown
      For md arrays where metadata is managed externally, the kernel does not
      know about a superblock, so the superblock offset is 0.
      If we want to have a write-intent bitmap near the end of the
      devices of such an array, we should support a sector_t sized offset.
      The offset may need to be negative (when the bitmap is before
      the metadata), so use loff_t instead.
      
      Also add a sanity check that the bitmap does not overlap with the data.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: remove needless setting of thread->timeout in raid10_quiesce · 9cd30fdc
      Authored by NeilBrown
      As bitmap_create and bitmap_destroy already set thread->timeout
      as appropriate, there is no need to do it in raid10_quiesce.
      There is a possible need to wake the thread after the timeout
      has been set low, but it is better to do that where the timeout
      is actually set low, in bitmap_create.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: change daemon_sleep to be in 'jiffies' rather than 'seconds'. · 1b04be96
      Authored by NeilBrown
      This removes a lot of multiplications by HZ.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: move offset, daemon_sleep and chunksize out of bitmap structure · 42a04b50
      Authored by NeilBrown
      ... and into bitmap_info.  These are all configuration parameters
      that need to be set before the bitmap is created.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: collect bitmap-specific fields into one structure. · c3d9714e
      Authored by NeilBrown
      In preparation for making bitmap fields configurable via sysfs,
      start tidying up by making a single structure to contain the
      configuration fields.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid1: add takeover support for raid5->raid1 · 709ae487
      Authored by NeilBrown
      A 2-device raid5 array can now be converted to raid1.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: add honouring of suspend_{lo,hi} to raid1. · 6eef4b21
      Authored by NeilBrown
      This will allow us to stop writeout to portions of the array
      while they are resynced by someone else - e.g. another node in
      a cluster.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid5: don't complete make_request on barrier until writes are scheduled · 729a1866
      Authored by NeilBrown
      The post-barrier-flush is sent by md as soon as make_request on the
      barrier write completes.  For raid5, the data might not be in the
      per-device queues yet.  So for barrier requests, wait for any
      pre-reading to be done so that the request will be in the per-device
      queues.
      
      We use the 'preread_active' count to check that nothing is still in
      the preread phase, and delay the decrement of this count until after
      write requests have been submitted to the underlying devices.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: support barrier requests on all personalities. · a2826aa9
      Authored by NeilBrown
      Previously barriers were only supported on RAID1.  This is because
      other levels require synchronisation across all devices and so need
      a different approach.
      Here is that approach.
      
      When a barrier arrives, we send a zero-length barrier to every active
      device.  When that completes - and if the original request was not
      empty - we submit the barrier request itself (with the barrier flag
      cleared) and then submit a fresh load of zero length barriers.
      
      The barrier request itself is asynchronous, but any subsequent
      request will block until the barrier completes.
      
      The reason for clearing the barrier flag is that a barrier request is
      allowed to fail.  If we pass a non-empty barrier through a striping
      raid level it is conceivable that part of it could succeed and part
      could fail.  That would be way too hard to deal with.
      So if the first run of zero length barriers succeed, we assume all is
      sufficiently well that we send the request and ignore errors in the
      second run of barriers.
      
      RAID5 needs extra care as write requests may not have been submitted
      to the underlying devices yet.  So we flush the stripe cache before
      proceeding with the barrier.
      
      Note that the second set of zero-length barriers are submitted
      immediately after the original request is submitted.  Thus when
      a personality finds mddev->barrier to be set during make_request,
      it should not return from make_request until the corresponding
      per-device request(s) have been queued.
      
      That will be done in later patches.
      Signed-off-by: NeilBrown <neilb@suse.de>
      Reviewed-by: Andre Noll <maan@systemlinux.org>
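      A rough restatement of the sequence above as plain C with stub helpers;
      the types and function names are invented for this sketch and it is not
      the md implementation.

      #include <stdbool.h>
      #include <stdio.h>

      struct request { bool barrier; bool empty; };

      /* Stub: submit a zero-length barrier to every active member and wait. */
      static bool flush_all_members(void)
      {
          puts("zero-length barrier to each active device; wait for completion");
          return true;                  /* pretend every device succeeded */
      }

      static void submit_payload(struct request *rq)
      {
          printf("submit payload, barrier flag cleared: %s\n",
                 rq->barrier ? "no" : "yes");
      }

      static void handle_barrier(struct request *rq)
      {
          /* 1. First round of zero-length barriers establishes ordering and
           *    lets the barrier fail cleanly if a device cannot honour it. */
          if (!flush_all_members())
              return;

          /* 2. The payload, if any, goes out with the barrier flag cleared so
           *    a striped level cannot end up with a partly failed barrier. */
          if (!rq->empty) {
              rq->barrier = false;
              submit_payload(rq);
          }

          /* 3. Second round of zero-length barriers; errors here are ignored
           *    because the first round already showed the devices cope. */
          flush_all_members();
      }

      int main(void)
      {
          struct request rq = { .barrier = true, .empty = false };

          handle_barrier(&rq);
          return 0;
      }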