  1. 24 January 2013 (1 commit)
    • DM-RAID: Fix RAID10's check for sufficient redundancy · 55ebbb59
      Authored by Jonathan Brassow
      Before attempting to activate a RAID array, it is checked for sufficient
      redundancy.  That is, we make sure that there are not too many failed
      devices - or devices specified for rebuild - to undermine our ability to
      activate the array.  The current code performs this check twice - once to
      ensure there were not too many devices specified for rebuild by the user
      ('validate_rebuild_devices') and again after possibly experiencing a failure
      to read the superblock ('analyse_superblocks').  Neither of these checks is
      sufficient.  The first check is done properly but with insufficient
      information about the possible failure state of the devices to make a good
      determination if the array can be activated.  The second check is simply
      done wrong in the case of RAID10 because it doesn't account for the
      independence of the stripes (i.e. mirror sets).  The solution is to use the
      properly written check ('validate_rebuild_devices'), but perform the check
      after the superblocks have been read and we know which devices have failed.
      This gives us one check instead of two and performs it in a location where
      it can be done right.
      
      Only RAID10 was affected and it was affected in the following ways:
      - the code did not properly catch the condition where a user specified
        a device for rebuild that already had a failed device in the same mirror
        set.  (This condition would, however, be caught at a deeper level in MD.)
      - the code triggered a false positive and denied activation when devices in
        independent mirror sets had failed, counting the failures as though they
        were all in the same set (see the sketch below).
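
      For illustration only, here is a minimal user-space sketch of the per-mirror-set
      accounting described above.  The function and variable names are hypothetical,
      not the driver's own, and RAID10's "near" copy layout is assumed:

        #include <stdbool.h>
        #include <stdio.h>

        /*
         * In RAID10 "near" layout, each chunk is mirrored across 'copies'
         * consecutive devices.  The array can be activated as long as no
         * mirror set has lost all of its copies.
         */
        static bool enough_redundancy(const bool *dev_usable,
                                      int raid_disks, int copies)
        {
            int unusable_in_set = 0;

            for (int i = 0; i < raid_disks * copies; i++) {
                if (!(i % copies))              /* start of a new mirror set */
                    unusable_in_set = 0;
                if (!dev_usable[i % raid_disks] &&
                    ++unusable_in_set >= copies)
                    return false;               /* a whole mirror set is gone */
            }
            return true;
        }

        int main(void)
        {
            /* 4 devices, 2 copies: devices {0,1} and {2,3} form mirror sets. */
            bool usable[] = { true, false, false, true };

            /* One failure in each independent set: still activatable. */
            printf("activatable: %s\n",
                   enough_redundancy(usable, 4, 2) ? "yes" : "no");
            return 0;
        }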
      
      The most likely place this error was introduced (or this patch should have
      been included) is in commit 4ec1e369 - first introduced in v3.7-rc1.
      Consequently this fix should also go in v3.7.y, however there is a
      small conflict on the .version in raid_target, so I'll submit a
      separate patch to -stable.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
  2. 22 December 2012 (2 commits)
  3. 11 October 2012 (4 commits)
  4. 01 August 2012 (1 commit)
  5. 27 July 2012 (4 commits)
  6. 22 May 2012 (4 commits)
    • DM RAID: Use md_error() in place of simply setting Faulty bit · c32fb9e7
      Authored by Jonathan Brassow
      When encountering an error while reading the superblock, call md_error().
      
      We are currently setting the 'Faulty' bit on one of the array devices when an
      error is encountered while reading the superblock of a dm-raid array.  We should
      be calling md_error(), as it handles the error more completely.
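
      As a hedged, stand-alone analogue of the point being made (route a failure
      through the subsystem's single error handler rather than flipping one status
      bit), consider the sketch below; the structures and names are invented for
      illustration, and md_error() itself does considerably more than this mock:

        #include <stdbool.h>
        #include <stdio.h>

        struct disk {
            bool faulty;
            bool in_sync;
        };

        struct array {
            struct disk disks[4];
            bool degraded;
            bool recovery_needed;
        };

        /* Analogue of md_error(): one place keeps every piece of state consistent. */
        static void array_disk_error(struct array *a, int idx)
        {
            a->disks[idx].faulty  = true;   /* the one bit set by the old code path */
            a->disks[idx].in_sync = false;
            a->degraded = true;
            a->recovery_needed = true;      /* e.g. wake a recovery thread */
        }

        int main(void)
        {
            struct array a = { .degraded = false };

            /* Instead of "a.disks[1].faulty = true;" call the handler: */
            array_disk_error(&a, 1);
            printf("degraded=%d recovery_needed=%d\n",
                   a.degraded, a.recovery_needed);
            return 0;
        }
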
      Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • DM RAID: Record and handle missing devices · 81f382f9
      Authored by Jonathan Brassow
      Missing dm-raid devices should be recorded in the superblock
      
      When specifying the devices that compose a DM RAID array, it is possible to denote
      failed or missing devices with '-'s.  When this occurs, we must record this in the
      superblock.  We do this by checking if the array position's data device is missing
      and then forcing MD to record the superblock by setting 'MD_CHANGE_DEVS' in
      'raid_resume'.  If we do not cause the superblock to be rewritten by the resume
      function, it is possible for a stale superblock to be written by an outgoing,
      inactive table (during 'raid_dtr').
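
      A hedged, stand-alone sketch of the resume-time check described above; the
      structures are simplified stand-ins, and 'superblock_dirty' plays the role of
      MD_CHANGE_DEVS:

        #include <stdbool.h>
        #include <stddef.h>
        #include <stdio.h>

        struct raid_dev {
            const char *data_dev;       /* NULL when the table position was "- -" */
        };

        struct raid_set {
            struct raid_dev dev[4];
            int nr_devs;
            bool superblock_dirty;      /* stand-in for MD_CHANGE_DEVS */
        };

        /* Analogue of raid_resume(): note missing devices before writes restart. */
        static void raid_resume(struct raid_set *rs)
        {
            for (int i = 0; i < rs->nr_devs; i++)
                if (!rs->dev[i].data_dev)
                    rs->superblock_dirty = true;    /* force a superblock rewrite */
        }

        int main(void)
        {
            struct raid_set rs = {
                .dev = { { "sda" }, { NULL }, { "sdc" }, { "sdd" } },
                .nr_devs = 4,
            };

            raid_resume(&rs);
            printf("superblock rewrite needed: %d\n", rs.superblock_dirty);
            return 0;
        }
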
      Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • DM RAID: Set recovery flags on resume · 47525e59
      Authored by Jonathan Brassow
      Properly initialize MD recovery flags when resuming device-mapper devices.
      
      When a device-mapper device is suspended, all I/O must stop.  This is done by
      calling 'md_stop_writes' and 'mddev_suspend'.  These calls, in turn, manipulate
      the recovery flags - including setting 'MD_RECOVERY_FROZEN'.  The DM device
      may have been suspended while recovery was not yet complete, so the process
      needs to pick-up where it left off.  Since 'mddev_resume' does not unset
      'MD_RECOVERY_FROZEN' and set 'MD_RECOVERY_NEEDED', we must do it ourselves.
      'MD_RECOVERY_NEEDED' can safely be set in 'mddev_resume', but 'MD_RECOVERY_FROZEN'
      must be set outside of 'mddev_resume' due to how MD handles RAID reshaping.
      (e.g.  It is possible for a user to delay reshaping a RAID5->RAID6 by purposefully
      setting 'MD_RECOVERY_FROZEN'.  Clearing it in 'mddev_resume' would override the
      desired behavior.)
      
      Because 'mddev_resume' already unconditionally calls 'md_wakeup_thread(mddev->thread)'
      there is no need to make this call from 'raid_resume' since it calls 'mddev_resume'.
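
      A hedged sketch of the flag handling as described here, with the MD recovery
      flags modeled as bits in a plain word; the split (FROZEN cleared by the
      target's own resume, NEEDED set by the generic resume) follows the reasoning
      above rather than reproducing the exact kernel code:

        #include <stdio.h>

        #define RECOVERY_FROZEN  (1u << 0)      /* stand-in for MD_RECOVERY_FROZEN */
        #define RECOVERY_NEEDED  (1u << 1)      /* stand-in for MD_RECOVERY_NEEDED */

        struct mddev {
            unsigned int recovery;
        };

        /* Analogue of suspend (md_stop_writes/mddev_suspend): freezes recovery. */
        static void dm_suspend(struct mddev *m)
        {
            m->recovery |= RECOVERY_FROZEN;
        }

        /* Generic resume: may safely flag that recovery is needed. */
        static void mddev_resume(struct mddev *m)
        {
            m->recovery |= RECOVERY_NEEDED;
        }

        /*
         * Analogue of raid_resume(): only the dm-raid target clears FROZEN,
         * because other users (e.g. a deliberately frozen reshape) rely on
         * mddev_resume() leaving that bit alone.
         */
        static void raid_resume(struct mddev *m)
        {
            m->recovery &= ~RECOVERY_FROZEN;
            mddev_resume(m);
        }

        int main(void)
        {
            struct mddev m = { 0 };

            dm_suspend(&m);
            raid_resume(&m);
            printf("frozen=%d needed=%d\n",
                   !!(m.recovery & RECOVERY_FROZEN),
                   !!(m.recovery & RECOVERY_NEEDED));
            return 0;
        }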
      
      Also clean up where level_store calls mddev_resume(): it currently
      duplicates some of the functions of that call. - NB
      Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: dm-raid should call helper function to clear rdev. · 545c8795
      Authored by NeilBrown
      dm-raid currently open-codes the freeing of some members of
      an rdev.  It is more maintainable to have it call common code
      from md.c which does this for all call-sites.
      
      So replace free_disk_sb with md_rdev_clear, export it, and use it in
      dm-raid.c
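
      The refactor can be pictured with a small stand-alone sketch; 'rdev_clear'
      below is an illustrative stand-in for the shared helper, not the kernel's
      md_rdev_clear() itself:

        #include <stdlib.h>
        #include <string.h>

        struct rdev {
            void *sb_page;          /* cached superblock page, if one was read */
            int   sb_loaded;
        };

        /* One shared helper instead of open-coded cleanup at every call site. */
        static void rdev_clear(struct rdev *rdev)
        {
            free(rdev->sb_page);
            rdev->sb_page = NULL;
            rdev->sb_loaded = 0;
        }

        int main(void)
        {
            struct rdev r = { .sb_page = malloc(4096), .sb_loaded = 1 };

            if (r.sb_page)
                memset(r.sb_page, 0, 4096);
            rdev_clear(&r);         /* both teardown paths would call this */
            return 0;
        }
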
      Signed-off-by: NeilBrown <neilb@suse.de>
  7. 24 April 2012 (1 commit)
  8. 29 March 2012 (1 commit)
    • dm raid: handle failed devices during start up · 0447568f
      Authored by Jonathan E Brassow
      The dm-raid code currently fails to create a RAID array if any of the
      superblocks cannot be read.  This was an oversight as there is already
      code to handle this case if the values ('- -') were provided for the
      failed array position.
      
      With this patch, if a superblock cannot be read, the array position's
      fields are initialized as though '- -' was set in the table.  That is,
      the device is failed and the position should not be used, but if there
      is sufficient redundancy, the array should still be activated.
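
      A hedged, stand-alone sketch of that fallback; the field and function names
      are illustrative stand-ins for the dm-raid superblock-analysis path:

        #include <stdbool.h>
        #include <stdio.h>

        struct raid_dev {
            bool meta_missing;      /* table gave "-" for the metadata device */
            bool data_missing;      /* table gave "-" for the data device */
            bool faulty;            /* position must not be used */
        };

        /* Stand-in for reading one device's superblock; may fail at run time. */
        static bool read_superblock(int idx)
        {
            return idx != 2;        /* pretend device 2 is unreadable */
        }

        static void analyse_superblocks(struct raid_dev *devs, int n)
        {
            for (int i = 0; i < n; i++) {
                if (devs[i].meta_missing || devs[i].data_missing)
                    continue;                   /* already marked via "- -" */
                if (!read_superblock(i))
                    devs[i].faulty = true;      /* treat exactly like "- -" */
            }
        }

        int main(void)
        {
            struct raid_dev devs[4] = { { false, false, false } };

            analyse_superblocks(devs, 4);
            printf("device 2 failed: %d\n", devs[2].faulty);
            return 0;
        }
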
      Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
  9. 19 March 2012 (1 commit)
    • md: tidy up rdev_for_each usage. · dafb20fa
      Authored by NeilBrown
      md.h has an 'rdev_for_each()' macro for iterating the rdevs in an
      mddev.  However it uses the 'safe' version of list_for_each_entry,
      and so requires the extra variable, but doesn't include 'safe' in the
      name, which is useful documentation.
      
      Consequently some places use this safe version without needing it, and
      many use an explicit list_for_each_entry (the difference between the two
      iteration styles is sketched after the list below).
      
      So:
       - rename rdev_for_each to rdev_for_each_safe
       - create a new rdev_for_each which uses the plain
         list_for_each_entry,
       - use the 'safe' version only where needed, and convert all other
         list_for_each_entry calls to use rdev_for_each.
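
      Since both macros wrap the generic list iterators, the distinction is easy to
      show with a stand-alone analogue (a toy singly linked list, not the kernel's
      list_head); the "safe" form caches the next pointer so the current node may
      be freed mid-walk:

        #include <stdio.h>
        #include <stdlib.h>

        struct node {
            int value;
            struct node *next;
        };

        /* Analogue of rdev_for_each(): fine for read-only walks. */
        #define for_each(pos, head) \
            for ((pos) = (head); (pos); (pos) = (pos)->next)

        /* Analogue of rdev_for_each_safe(): survives deleting 'pos'. */
        #define for_each_safe(pos, tmp, head) \
            for ((pos) = (head), (tmp) = (pos) ? (pos)->next : NULL; \
                 (pos); \
                 (pos) = (tmp), (tmp) = (pos) ? (pos)->next : NULL)

        int main(void)
        {
            struct node *head = NULL, *pos, *tmp;

            for (int i = 3; i > 0; i--) {           /* build 1 -> 2 -> 3 */
                struct node *n = malloc(sizeof(*n));
                n->value = i;
                n->next = head;
                head = n;
            }

            for_each(pos, head)                     /* plain iteration */
                printf("%d\n", pos->value);

            for_each_safe(pos, tmp, head)           /* safe to free as we go */
                free(pos);
            return 0;
        }
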
      Signed-off-by: NeilBrown <neilb@suse.de>
  10. 08 March 2012 (2 commits)
  11. 31 January 2012 (1 commit)
    • Prevent DM RAID from loading bitmap twice. · 34f8ac6d
      Authored by Jonathan Brassow
      The life cycle of a device-mapper target is:
      1) create
      2) resume
      3) suspend
      *) possibly repeat from 2
      4) destroy
      
      The dm-raid target is unconditionally calling MD's bitmap_load function upon
      every resume.  If steps 2 & 3 above are repeated, bitmap_load is called
      multiple times.  It is written to be called only once; otherwise, it allocates
      new memory for the bitmap (without freeing the old) and increments the number
      of pages it thinks it has without zeroing first.  This ultimately leads to
      accesses beyond the allocated memory and to leaked memory.
      
      Simply avoiding the bitmap_load call upon resume is not sufficient.  If the
      target was suspended while the initial recovery was only partially complete,
      it needs to be restarted when the target is resumed.  This is why
      'md_wakeup_thread' is called before issuing the 'mddev_resume'.
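
      A hedged sketch of the resulting resume logic, using stand-in names for the
      target's private state and for the MD calls involved:

        #include <stdbool.h>
        #include <stdio.h>

        struct raid_set {
            bool bitmap_loaded;         /* set after the first resume */
        };

        static void bitmap_load(void)   { puts("bitmap loaded"); }
        static void wake_recovery(void) { puts("recovery thread kicked"); }
        static void mddev_resume(void)  { puts("array resumed"); }

        /*
         * Analogue of raid_resume(): load the bitmap only on the first resume,
         * but always restart any recovery that a suspend may have interrupted.
         */
        static void raid_resume(struct raid_set *rs)
        {
            if (!rs->bitmap_loaded) {
                bitmap_load();
                rs->bitmap_loaded = true;
            }
            wake_recovery();            /* cf. md_wakeup_thread before resuming */
            mddev_resume();
        }

        int main(void)
        {
            struct raid_set rs = { .bitmap_loaded = false };

            raid_resume(&rs);           /* first resume: loads the bitmap */
            raid_resume(&rs);           /* later resumes: bitmap load skipped */
            return 0;
        }
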
      Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
  12. 01 November 2011 (2 commits)
  13. 11 October 2011 (3 commits)
  14. 26 September 2011 (1 commit)
  15. 02 August 2011 (6 commits)
  16. 18 April 2011 (1 commit)
  17. 10 March 2011 (1 commit)
  18. 14 January 2011 (1 commit)
    • dm: raid456 basic support · 9d09e663
      Authored by NeilBrown
      This patch is the skeleton for the DM target that will be
      the bridge from DM to MD (initially RAID456 and later RAID1).  It
      provides a way to use device-mapper interfaces to the MD RAID456
      drivers.
      
      As with all device-mapper targets, the nominal public interfaces are the
      constructor (CTR) tables and the status outputs (both STATUSTYPE_INFO
      and STATUSTYPE_TABLE).  The CTR table looks like the following:
      
      1: <s> <l> raid \
      2:	<raid_type> <#raid_params> <raid_params> \
      3:	<#raid_devs> <meta_dev1> <dev1> .. <meta_devN> <devN>
      
      Line 1 contains the standard first three arguments to any device-mapper
      target - the start, length, and target type fields.  The target type in
      this case is "raid".
      
      Line 2 contains the arguments that define the particular raid
      type/personality/level, the required arguments for that raid type, and
      any optional arguments.  Possible raid types include: raid4, raid5_la,
      raid5_ls, raid5_rs, raid6_zr, raid6_nr, and raid6_nc.  (again, raid1 is
      planned for the future.)  The list of required and optional parameters
      is the same for all the current raid types.  The required parameters are
      positional, while the optional parameters are given as key/value pairs.
      The possible parameters are as follows:
       <chunk_size>		Chunk size in sectors.
       [[no]sync]		Force/Prevent RAID initialization
       [rebuild <idx>]	Rebuild the drive indicated by the index
       [daemon_sleep <ms>]	Time between bitmap daemon work to clear bits
       [min_recovery_rate <kB/sec/disk>]	Throttle RAID initialization
       [max_recovery_rate <kB/sec/disk>]	Throttle RAID initialization
       [max_write_behind <value>]		See '--write-behind=' (man mdadm)
       [stripe_cache <sectors>]		Stripe cache size for higher RAIDs
      
      Line 3 contains the list of devices that compose the array in
      metadata/data device pairs.  If the metadata is stored separately, a '-'
      is given for the metadata device position.  If a drive has failed or is
      missing at creation time, a '-' can be given for both the metadata and
      data drives for a given position.
      
      Examples:
      # RAID4 - 4 data drives, 1 parity
      # No metadata devices specified to hold superblock/bitmap info
      # Chunk size of 1MiB
      # (Lines separated for easy reading)
      0 1960893648 raid \
      	raid4 1 2048 \
      	5 - 8:17 - 8:33 - 8:49 - 8:65 - 8:81
      
      # RAID4 - 4 data drives, 1 parity (no metadata devices)
      # Chunk size of 1MiB, force RAID initialization,
      #	min recovery rate at 20 kiB/sec/disk
      0 1960893648 raid \
              raid4 4 2048 min_recovery_rate 20 sync \
              5 - 8:17 - 8:33 - 8:49 - 8:65 - 8:81
      
      Performing a 'dmsetup table' should display the CTR table used to
      construct the mapping (with possible reordering of optional
      parameters).
      
      Performing a 'dmsetup status' will yield information on the state and
      health of the array.  The output is as follows:
      1: <s> <l> raid \
      2:	<raid_type> <#devices> <1 health char for each dev> <resync_ratio>
      
      Line 1 is standard DM output.  Line 2 is best shown by example:
      	0 1960893648 raid raid4 5 AAAAA 2/490221568
      Here we can see that the RAID type is raid4, that there are 5 devices, all of
      which are 'A'live, and that the array's recovery is 2/490221568 complete.
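
      For clarity, the status line can be picked apart with a few sscanf()
      conversions; this is only an illustration of the documented format, not code
      taken from the target:

        #include <stdio.h>

        int main(void)
        {
            const char *status = "0 1960893648 raid raid4 5 AAAAA 2/490221568";
            unsigned long long start, len, done, total;
            char raid_type[16], health[64];
            int ndevs;

            /* <start> <len> raid <raid_type> <#devices> <health> <cur>/<total> */
            if (sscanf(status, "%llu %llu %*s %15s %d %63s %llu/%llu",
                       &start, &len, raid_type, &ndevs,
                       health, &done, &total) == 7)
                printf("%s at %llu (+%llu): %d devices (%s), recovery %llu/%llu\n",
                       raid_type, start, len, ndevs, health, done, total);
            return 0;
        }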
      
      Cc: linux-raid@vger.kernel.org
      Signed-off-by: NeilBrown <neilb@suse.de>
      Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>