1. 08 Aug 2010 (1 commit)
2. 26 Jul 2010 (8 commits)
3. 21 Jul 2010 (1 commit)
4. 24 Jun 2010 (3 commits)
• md: Don't update ->recovery_offset when reshaping an array to fewer devices. · 70fffd0b
  Authored by NeilBrown
      When an array is reshaped to have fewer devices, the reshape proceeds
      from the end of the devices to the beginning.
      
      If a device happens to be non-In_sync (which is possible but rare)
      we would normally update the ->recovery_offset as the reshape
progresses. However that would be wrong, as the recovery_offset records
      that the early part of the device is in_sync, while in fact it would
      only be the later part that is in_sync, and in any case the offset
      number would be measured from the wrong end of the device.
      
Relatedly, if after a reshape a spare is discovered to not be
recovered all the way to the end, do not allow spare_active
to incorporate it in the array.
      
      This becomes relevant in the following sample scenario:
      
      A 4 drive RAID5 is converted to a 6 drive RAID6 in a combined
      operation.
The RAID5->RAID6 conversion will cause a 5th drive to be included as a
spare, then the 5-drive -> 6-drive reshape will effectively rebuild that
      spare as it progresses.  The 6th drive is treated as in_sync the whole
      time as there is never any case that we might consider reading from
      it, but must not because there is no valid data.
      
If we interrupt this reshape part-way through and reverse it to return
to a 5-drive RAID6 (or even a 4-drive RAID5), we don't want to update
the recovery_offset - as that would be wrong - and we don't want to
include that spare as active in the 5-drive RAID6 when the reversed
reshape completes, as it will still be mostly out-of-sync.
Signed-off-by: NeilBrown <neilb@suse.de>
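As an illustrative sketch only (struct rdev and the helper below are hypothetical stand-ins; the real patch works on mdk_rdev_t inside md's reshape code), the rule amounts to:

    #include <stdbool.h>

    typedef unsigned long long sector_t;

    /* Hypothetical stand-in for the mdk_rdev_t fields this sketch needs. */
    struct rdev {
        bool in_sync;             /* device fully in sync? */
        sector_t recovery_offset; /* sectors in sync, measured from the start */
    };

    /*
     * Only advance ->recovery_offset when the reshape is not reducing the
     * device count (delta_disks >= 0).  A shrinking reshape runs from the
     * end of each device toward the beginning, so the freshly written part
     * is the tail, and an offset measured from the start would be wrong.
     */
    static void maybe_update_recovery_offset(struct rdev *rdev, int delta_disks,
                                             sector_t reshape_progress)
    {
        if (!rdev->in_sync && delta_disks >= 0)
            rdev->recovery_offset = reshape_progress;
    }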
• md: fix handling of array level takeover that re-arranges devices. · e93f68a1
  Authored by NeilBrown
      Most array level changes leave the list of devices largely unchanged,
      possibly causing one at the end to become redundant.
      However conversions between RAID0 and RAID10 need to renumber
      all devices (except 0).
      
      This renumbering is currently being done in the ->run method when the
      new personality takes over.  However this is too late as the common
      code in md.c might already have invalidated some of the devices if
they had a ->raid_disk number that appeared too high.
      
      Moving it into the ->takeover method is too early as the array is
      still active at that time and wrong ->raid_disk numbers could cause
      confusion.
      
      So add a ->new_raid_disk field to mdk_rdev_s and use it to communicate
      the new raid_disk number.
      Now the common code knows exactly which devices need to be renumbered,
      and which can be invalidated, and can do it all at a convenient time
when the array is suspended.
It can also update some symlinks in sysfs which previously were not
being updated correctly.
Reported-by: Maciej Trela <maciej.trela@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
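A rough sketch of the mechanism with trimmed, hypothetical types (only the ->new_raid_disk field is named by the commit itself):

    /* Hypothetical, trimmed stand-in for the device structure. */
    struct rdev {
        int raid_disk;     /* current slot in the array */
        int new_raid_disk; /* slot the new personality wants (set in ->takeover) */
    };

    /*
     * Applied by common code while the array is suspended: every device is
     * renumbered in one pass, so no half-renumbered state is ever visible,
     * and sysfs symlinks can be refreshed at the same point.
     */
    static void apply_takeover_numbering(struct rdev *rdevs, int nr_devs)
    {
        for (int i = 0; i < nr_devs; i++)
            rdevs[i].raid_disk = rdevs[i].new_raid_disk;
    }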
• Restore partition detection of newly created md arrays. · f3b99be1
  Authored by NeilBrown
Commit b821eaa5 broke partition
      detection for md arrays.
      
The logic was almost right.  However if revalidate_disk is called
when the device is not yet open, bdev->bd_disk won't be set, so the
flush_disk() call will not set bd_invalidated.
      
      So when md_open is called we still need to ensure that
      ->bd_invalidated gets set.  This is easily done with a call to
      check_disk_size_change in the place where the offending commit removed
      check_disk_change.  At the important times, the size will have changed
      from 0 to non-zero, so check_disk_size_change will set bd_invalidated.
Tested-by: Duncan <1i5t5.duncan@cox.net>
Reported-by: Duncan <1i5t5.duncan@cox.net>
Signed-off-by: NeilBrown <neilb@suse.de>
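A hedged, standalone sketch of the fix's shape (check_disk_size_change is the helper the message names; struct bdev here is a simplified stand-in, not the kernel structure):

    #include <stdbool.h>

    typedef unsigned long long sector_t;

    /* Simplified stand-in for the block-device fields involved. */
    struct bdev {
        sector_t size;
        bool bd_invalidated; /* set -> partition table is rescanned on open */
    };

    /*
     * On md_open, re-check the device size.  For a newly created array
     * the size changes from 0 to non-zero at exactly this point, so
     * bd_invalidated gets set and the open path rescans the partitions.
     */
    static void check_size_on_open(struct bdev *bdev, sector_t current_size)
    {
        if (bdev->size != current_size) {
            bdev->size = current_size;
            bdev->bd_invalidated = true;
        }
    }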
5. 22 May 2010 (1 commit)
• sysfs: Implement sysfs tagged directory support. · 3ff195b0
  Authored by Eric W. Biederman
      The problem.  When implementing a network namespace I need to be able
      to have multiple network devices with the same name.  Currently this
      is a problem for /sys/class/net/*, /sys/devices/virtual/net/*, and
      potentially a few other directories of the form /sys/ ... /net/*.
      
      What this patch does is to add an additional tag field to the
      sysfs dirent structure.  For directories that should show different
      contents depending on the context such as /sys/class/net/, and
      /sys/devices/virtual/net/ this tag field is used to specify the
      context in which those directories should be visible.  Effectively
      this is the same as creating multiple distinct directories with
      the same name but internally to sysfs the result is nicer.
      
I am calling this concept, where a single directory appears as multiple
distinct directories at the same path in the filesystem, "tagged directories".
      
      For the networking namespace the set of directories whose contents I need
      to filter with tags can depend on the presence or absence of hotplug
      hardware or which modules are currently loaded.  Which means I need
      a simple race free way to setup those directories as tagged.
      
To achieve a race-free design all tagged directories are created
      and managed by sysfs itself.
      
      Users of this interface:
      - define a type in the sysfs_tag_type enumeration.
- call sysfs_register_ns_types with the type and its operations
- call sysfs_exit_ns when an individual tag is no longer valid
      
      - Implement mount_ns() which returns the ns of the calling process
        so we can attach it to a sysfs superblock.
- Implement ktype.namespace() which returns the ns of a sysfs kobject.
      
      Everything else is left up to sysfs and the driver layer.
      
For the network namespace mount_ns and namespace() are essentially
one-line functions, and look likely to remain so.
      
Tags are currently represented as const void * pointers, as that is
generic, provides enough information for equality comparisons, and is
trivial to create for current users, since it is just the
existing namespace pointer.
      
      The work needed in sysfs is more extensive.  At each directory
or symlink creation I need to check if the directory it is being
      created in is a tagged directory and if so generate the appropriate
      tag to place on the sysfs_dirent.  Likewise at each symlink or
      directory removal I need to check if the sysfs directory it is
      being removed from is a tagged directory and if so figure out
      which tag goes along with the name I am deleting.
      
Currently only directories which hold kobjects, and
symlinks, are supported.  There is not enough information
in the current file attribute interfaces to give us anything
to discriminate on, which makes tagging files useless, and there are
no potential users, which makes it an uninteresting problem
to solve.
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Benjamin Thery <benjamin.thery@bull.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
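A hedged sketch of what lookup inside a tagged directory amounts to (sysfs_dirent is real, but the fields and helper below are simplified stand-ins):

    #include <stddef.h>
    #include <string.h>

    /* Simplified stand-in for struct sysfs_dirent. */
    struct sysfs_dirent {
        const char *name;
        const void *tag;            /* e.g. a network namespace; NULL if untagged */
        struct sysfs_dirent *next;  /* sibling list */
    };

    /*
     * Inside a tagged directory a name is only unique per tag, so lookup
     * must match both the name and the tag of the viewing context (for
     * /sys/class/net/ the tag is the caller's network namespace).
     */
    static struct sysfs_dirent *tagged_find(struct sysfs_dirent *children,
                                            const char *name, const void *tag)
    {
        for (struct sysfs_dirent *sd = children; sd; sd = sd->next)
            if (sd->tag == tag && strcmp(sd->name, name) == 0)
                return sd;
        return NULL;
    }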
6. 18 May 2010 (22 commits)
7. 17 May 2010 (2 commits)
• md: manage redundancy group in sysfs when changing level. · a64c876f
  Authored by NeilBrown
      Some levels expect the 'redundancy group' to be present,
      others don't.
So when we change the level of an array we might need to
      add or remove this group.
      
      This requires fixing up the current practice of overloading ->private
      to indicate (when ->pers == NULL) that something needs to be removed.
      So create a new ->to_remove to fill that role.
      
      When changing levels, we may need to add or remove attributes.  When
      changing RAID5 -> RAID6, we both add and remove the same thing.  It is
      important to catch this and optimise it out as the removal is delayed
      until a lock is released, so trying to add immediately would cause
      problems.
      
      Cc: stable@kernel.org
Signed-off-by: NeilBrown <neilb@suse.de>
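A loose sketch of the add/remove optimisation, with hypothetical names (->to_remove is the real new field; the group type and helper are stand-ins):

    #include <stddef.h>

    struct attr_group;  /* opaque stand-in for a sysfs attribute group */

    struct mddev {
        const struct attr_group *to_remove; /* removal deferred until unlock */
    };

    /*
     * When the new level wants the same group that is pending removal
     * (e.g. RAID5 -> RAID6, both have a redundancy group), cancel the
     * deferred removal instead of adding on top of it: the removal only
     * happens after a lock is dropped, so an immediate add would collide.
     */
    static void set_redundancy_group(struct mddev *mddev,
                                     const struct attr_group *wanted)
    {
        if (mddev->to_remove == wanted) {
            /* Re-adding what is pending removal: just cancel the removal. */
            mddev->to_remove = NULL;
            return;
        }
        /* Otherwise the group would be created in sysfs here (omitted). */
    }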
• md: remove unneeded sysfs files more promptly · b6eb127d
  Authored by NeilBrown
      When an array is stopped we need to remove some
      sysfs files which are dependent on the type of array.
      
      We need to delay that deletion as deleting them while holding
      reconfig_mutex can lead to deadlocks.
      
      We currently delay them until the array is completely destroyed.
      However it is possible to deactivate and then reactivate the array.
      It is also possible to need to remove sysfs files when changing level,
      which can potentially happen several times before an array is
      destroyed.
      
      So we need to delete these files more promptly: as soon as
      reconfig_mutex is dropped.
      
      We need to ensure this happens before do_md_run can restart the array,
      so we use open_mutex for some extra locking.  This is not deadlock
      prone.
      
      Cc: stable@kernel.org
Signed-off-by: NeilBrown <neilb@suse.de>
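A sketch of the unlock path under stated assumptions (all names below are stand-ins, and pthread mutexes substitute for kernel mutexes; the real ordering lives in md.c):

    #include <pthread.h>
    #include <stddef.h>

    struct attr_group;  /* opaque stand-in for the files to delete */

    struct mddev {
        pthread_mutex_t reconfig_mutex;
        pthread_mutex_t open_mutex;   /* also taken by the open/run paths */
        const struct attr_group *to_remove;
    };

    /*
     * Unlock wrapper: pending sysfs removals are flushed as soon as
     * reconfig_mutex is dropped, not when the array is finally destroyed.
     * Taking open_mutex here keeps do_md_run from restarting the array
     * before the stale files are gone.
     */
    static void mddev_unlock_sketch(struct mddev *mddev)
    {
        const struct attr_group *doomed = mddev->to_remove;
        mddev->to_remove = NULL;
        pthread_mutex_unlock(&mddev->reconfig_mutex);

        if (doomed) {
            pthread_mutex_lock(&mddev->open_mutex);
            /* remove 'doomed' from sysfs here (omitted in this sketch) */
            pthread_mutex_unlock(&mddev->open_mutex);
        }
    }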
8. 12 May 2010 (1 commit)
• md: set mddev readonly flag on blkdev BLKROSET ioctl · e2218350
  Authored by Dan Williams
When the user sets the block device to read-write then the mddev should
follow suit.  Otherwise, the BUG_ON in md_write_start() will trigger.
      
      The reverse direction, setting mddev->ro to match a set readonly
      request, can be ignored because the blkdev level readonly flag precludes
the need to have mddev->ro set correctly.  Never mind the fact that
      setting mddev->ro to 1 may fail if the array is in use.
      
      Cc: <stable@kernel.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
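A minimal sketch of the asymmetry described above (the struct and helper are stand-ins for md's ioctl handling):

    #include <stdbool.h>

    struct mddev { int ro; };  /* 0 = read-write, 1 = read-only */

    /*
     * BLKROSET handling: clearing readonly must be mirrored into
     * mddev->ro, or md_write_start()'s BUG_ON fires on the first write.
     * Setting readonly is ignored here because the blkdev-level flag
     * already blocks writes, and forcing mddev->ro to 1 may fail on an
     * in-use array.
     */
    static void md_blkroset(struct mddev *mddev, bool readonly)
    {
        if (!readonly)
            mddev->ro = 0;
    }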
9. 07 May 2010 (1 commit)
• md: restore ability of spare drives to spin down. · 1176568d
  Authored by NeilBrown
Some time ago we stopped the clean/active metadata updates
from being written to a 'spare' device in most cases so that
it could spin down and stay spun down.  Device failure/removal
etc. are still recorded on spares.
      
      However commit 51d5668c broke this 50% of the time,
      depending on whether the event count is even or odd.
      The change log entry said:
      
         This means that the alignment between 'odd/even' and
          'clean/dirty' might take a little longer to attain,
      
however the code makes no attempt to create that alignment, so it
      could take arbitrarily long.
      
      So when we find that clean/dirty is not aligned with odd/even,
      force a second metadata-update immediately.  There are already cases
      where a second metadata-update is needed immediately (e.g. when a
      device fails during the metadata update).  We just piggy-back on that.
Reported-by: Joe Bryant <tenminjoe@yahoo.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Cc: stable@kernel.org
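A small sketch of the condition this boils down to (hypothetical helper; the clean <=> even-events direction is assumed here purely for illustration):

    #include <stdbool.h>

    /*
     * md tries to keep the clean/dirty state aligned with the parity of
     * the event count so that clean<->active flips can avoid touching
     * spares.  If a metadata update leaves state and parity misaligned,
     * force a second update immediately, piggy-backing on the existing
     * "repeat the update" path used when a device fails mid-update.
     */
    static bool needs_immediate_second_update(unsigned long long events,
                                              bool clean)
    {
        bool events_even = (events % 2 == 0);
        return clean != events_even;   /* misaligned: update again now */
    }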