1. 28 June 2008, 4 commits
    • Don't try to make md arrays dirty if that is not meaningful. · 1a0fd497
      Committed by Neil Brown
      Array personalities such as 'raid0' and 'linear' have no redundancy,
      and so marking them as 'clean' or 'dirty' is not meaningful.
      So always allow write requests without requiring a superblock update.
      
      Such array types are detected by ->sync_request being NULL.  If it is
      not possible to send a sync request, we don't need a 'dirty' flag, because
      all a dirty flag does is trigger some sync_requests (see the sketch below).
      Signed-off-by: Neil Brown <neilb@suse.de>
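
      A minimal sketch of the idea, using field and function names from the
      md.c of that era (illustrative shape only, not the literal diff):

          /* In md_write_start(): personalities with no redundancy (raid0,
           * linear) register no ->sync_request method, so the clean/dirty
           * distinction buys them nothing. */
          void md_write_start(mddev_t *mddev, struct bio *bi)
          {
                  if (bio_data_dir(bi) != WRITE)
                          return;

                  /* No resync possible => no reason to ever mark 'dirty'. */
                  if (mddev->pers->sync_request == NULL)
                          return;

                  /* ... otherwise fall through to the usual in_sync
                   * accounting and superblock-dirty handshake ... */
          }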
    • Close race in md_probe · f48ed538
      Committed by Neil Brown
      There is a possible race in md_probe.  If two threads call md_probe
      for the same device, then one could exit (having checked that
      ->gendisk exists) before the other has called kobject_init_and_add,
      thus returning an incomplete kobj which will cause problems when
      we try to add children to it.
      
      So extend the range of protection of disks_mutex slightly to
      avoid this possibility, as sketched below.
      Signed-off-by: Neil Brown <neilb@suse.de>
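
      A sketch of the shape of the fix (names follow md.c of the period;
      allocation details are elided and illustrative):

          /* Holding disks_mutex across kobject_init_and_add() means a
           * second md_probe() caller can only observe ->gendisk != NULL
           * once ->kobj is fully initialised. */
          static struct kobject *md_probe_sketch(dev_t dev, int *part, void *data)
          {
                  mddev_t *mddev = mddev_find(dev);
                  struct gendisk *disk;

                  mutex_lock(&disks_mutex);
                  if (mddev->gendisk == NULL) {
                          disk = alloc_disk(1 << MdpMinorShift);
                          /* ... fill in disk fields, add_disk(disk) ... */
                          mddev->gendisk = disk;
                          kobject_init_and_add(&mddev->kobj, &md_ktype,
                                               &disk->dev.kobj, "%s", "md");
                  }
                  mutex_unlock(&disks_mutex);  /* previously dropped before
                                                * the kobject was set up */
                  return NULL;
          }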
    • Allow setting start point for requested check/repair · 5e96ee65
      Committed by Neil Brown
      This makes it possible to resync just a small part of an array.
      For example, if a drive reports that it has questionable sectors,
      a 'repair' of just the region covering those sectors will
      cause them to be read and, if there is an error, re-written
      with correct data (a hypothetical usage example follows below).
      Signed-off-by: Neil Brown <neilb@suse.de>
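
      A hypothetical user-space usage example.  The attribute names
      (sync_min, sync_action) and the device path are assumptions about the
      interface this commit introduces, not confirmed by the log itself:

          #include <stdio.h>

          /* Write a single value to a sysfs attribute. */
          static int sysfs_write(const char *path, const char *val)
          {
                  FILE *f = fopen(path, "w");

                  if (f == NULL)
                          return -1;
                  fprintf(f, "%s\n", val);
                  return fclose(f);
          }

          int main(void)
          {
                  /* Repair only the region starting at sector 1000000,
                   * instead of scrubbing the whole of md0. */
                  sysfs_write("/sys/block/md0/md/sync_min", "1000000");
                  sysfs_write("/sys/block/md0/md/sync_action", "repair");
                  return 0;
          }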
    • Fix error paths if md_probe fails. · 9bbbca3a
      Committed by Neil Brown
      md_probe can fail (e.g. alloc_disk could fail) without
      returning an error (as it always returns NULL).
      So when we call mddev_find immediately afterwards, we need
      to check that md_probe actually succeeded.  This means checking
      that mddev->gendisk is non-NULL, as sketched below.
      
      Cc: <stable@kernel.org>
      Cc: Dave Jones <davej@redhat.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
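
      A sketch of the caller-side check (the function name and error codes
      are illustrative; md_probe() itself always returns NULL):

          static int md_open_sketch(dev_t dev)
          {
                  mddev_t *mddev;

                  md_probe(dev, NULL, NULL);
                  mddev = mddev_find(dev);
                  if (mddev == NULL)
                          return -ENODEV;
                  if (mddev->gendisk == NULL) {
                          /* md_probe failed silently, e.g. alloc_disk()
                           * returned NULL */
                          mddev_put(mddev);
                          return -ENOMEM;
                  }
                  return 0;
          }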
  2. 07 June 2008, 1 commit
  3. 25 May 2008, 5 commits
    • md: restart recovery cleanly after device failure. · dfc70645
      Committed by NeilBrown
      When we get any IO error during a recovery (rebuilding a spare), we abort
      the recovery and restart it.
      
      For RAID6 (and multi-drive RAID1) it may not be best to restart at the
      beginning: when multiple failures can be tolerated, the recovery may be
      able to continue and re-doing all that has already been done doesn't make
      sense.
      
      We already have the infrastructure to record where a recovery is up to
      and restart from there, but it is not being used properly.
      This is because:
        - We sometimes abort with MD_RECOVERY_ERR rather than just MD_RECOVERY_INTR,
          which causes the recovery not to be checkpointed.
        - We remove spares and then re-add them, which loses important state
          information.
      
      The distinction between MD_RECOVERY_ERR and MD_RECOVERY_INTR really isn't
      needed.  If there is an error, the relevant drive will be marked as
      Faulty, and that is enough to ensure correct handling of the error.  So we
      first remove MD_RECOVERY_ERR, changing some of the uses of it to
      MD_RECOVERY_INTR.
      
      Then we cause the attempt to remove a non-faulty device from an array to
      fail (unless recovery is impossible as the array is too degraded).  Then
      when remove_and_add_spares attempts to remove the devices on which
      recovery can continue, it will fail, they will remain in place, and
      recovery will continue on them as desired (see the sketch below).
      
      Issue:  If we are halfway through rebuilding a spare and another drive
      fails, and a new spare is immediately available,  do we want to:
       1/ complete the current rebuild, then go back and rebuild the new spare, or
       2/ restart the rebuild from the start and rebuild both devices in
          parallel.
      
      Both options can be argued for.  The code currently takes option 2 as
        a/ this requires the least code change
        b/ this results in a minimally-degraded array in minimal time.
      
      Cc: "Eivind Sarto" <ivan@kasenna.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
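
      A sketch of the personality-side guard described above (conf_t,
      mirrors and degraded follow raid1.c of the era; treat the exact
      condition as illustrative):

          /* ->hot_remove_disk(): refuse to drop a still-working device
           * while recovery could continue on it, so remove_and_add_spares()
           * leaves it in place and the recovery checkpoint survives. */
          static int hot_remove_disk_sketch(mddev_t *mddev, int number)
          {
                  conf_t *conf = mddev->private;
                  mdk_rdev_t *rdev = conf->mirrors[number].rdev;

                  if (rdev != NULL &&
                      !test_bit(Faulty, &rdev->flags) &&
                      mddev->degraded < conf->raid_disks)
                          return -EBUSY;  /* recovery can still proceed here */

                  /* ... otherwise perform the actual removal ... */
                  return 0;
          }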
    • md: allow parallel resync of md-devices. · 90b08710
      Committed by Bernd Schubert
      In some configurations, a raid6 resync can be limited by CPU speed
      (calculating P and Q and moving data) rather than by device speed.  In
      these cases there is nothing to be gained by serialising resyncs of arrays
      that share a device, and doing them in parallel can provide benefit.
      So add a sysfs tunable to flag an array as being allowed to resync in
      parallel with other arrays that use (a different part of) the same device
      (see the sketch below).
      Signed-off-by: Bernd Schubert <bs@q-leap.de>
      Signed-off-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
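
      A sketch of the decision the tunable controls.  The parallel_resync
      field name and whether one or both arrays must opt in are assumptions;
      match_mddev_units() is md.c's existing shared-device test:

          /* In md_do_sync(): must this resync wait for another one? */
          static int must_serialise(mddev_t *mddev, mddev_t *mddev2)
          {
                  if (mddev2 == mddev)
                          return 0;  /* ourselves */
                  if (!mddev2->curr_resync)
                          return 0;  /* the other array is not resyncing */
                  if (mddev->parallel_resync)
                          return 0;  /* user allowed parallel resync */
                  return match_mddev_units(mddev, mddev2);  /* share a device? */
          }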
    • md: notify userspace on 'stop' events · 4f54b0e9
      Committed by Dan Williams
      This additional notification to 'array_state' is needed to allow the
      monitor application to learn about stop events via sysfs.  The
      sysfs_notify("sync_action") call that comes at the end of do_md_stop()
      (via md_new_event) is insufficient since the 'sync_action' attribute has
      been removed by this point (see the sketch below).
      
      (Seems like a sysfs-notify-on-removal patch is a better fix.  Currently
      removal updates the event count but does not wake up waiters)
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
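
      A sketch of the added notification (sysfs_notify() is the kernel's
      real API; the surrounding teardown is elided):

          static int do_md_stop_sketch(mddev_t *mddev, int mode)
          {
                  /* ... stop the array, remove sysfs attributes ... */

                  md_new_event(mddev);              /* existing event bump   */
                  sysfs_notify(&mddev->kobj, NULL,  /* new: wake pollers of  */
                               "array_state");      /* the array_state file  */
                  return 0;
          }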
    • md: notify userspace on 'write-pending' changes to array_state · 09a44cc1
      Committed by NeilBrown
      When an array enters write pending, 'array_state' changes, so we must be
      sure to sysfs_notify.
      
      Also, when waiting for user-space to acknowledge 'write-pending' by
      marking the metadata as dirty, we don't want to wait for MD_CHANGE_DEVS to
      be cleared, as that might not happen.  So explicitly test for the bits that
      we are really interested in (see the sketch below).
      Signed-off-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
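
      A sketch of the bit-specific wait (flag names follow md.h of the era;
      the in_sync handling is simplified and omits locking):

          void md_write_start_sketch(mddev_t *mddev)
          {
                  if (mddev->in_sync) {
                          mddev->in_sync = 0;
                          set_bit(MD_CHANGE_CLEAN, &mddev->flags);
                          md_wakeup_thread(mddev->thread);
                          /* 'array_state' just became 'write-pending' */
                          sysfs_notify(&mddev->kobj, NULL, "array_state");
                  }
                  /* Wait only on the bits we care about, not flags == 0:
                   * MD_CHANGE_DEVS may legitimately remain set. */
                  wait_event(mddev->sb_wait,
                             !test_bit(MD_CHANGE_CLEAN, &mddev->flags) &&
                             !test_bit(MD_CHANGE_PENDING, &mddev->flags));
          }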
    • md: kill file_path wrapper · 6bcfd601
      Committed by Christoph Hellwig
      Kill the trivial and rather pointless file_path wrapper around d_path
      (see the sketch below).
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
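
      A sketch of the call-site change (the helper name is hypothetical;
      d_path()'s three-argument form is the signature of that era):

          /* Callers that used file_path() now call d_path() directly. */
          static char *path_of_sketch(struct file *file, char *buf, int count)
          {
                  /* was: file_path(file, buf, count) */
                  return d_path(&file->f_path, buf, count);
          }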
  4. 30 April 2008, 8 commits
  5. 29 April 2008, 2 commits
  6. 28 April 2008, 1 commit
  7. 22 April 2008, 1 commit
  8. 20 March 2008, 1 commit
  9. 11 March 2008, 1 commit
  10. 05 March 2008, 4 commits
  11. 15 February 2008, 1 commit
  12. 07 February 2008, 11 commits