1. 28 Jun 2008, 21 commits
    • md: use stripe_head_state in ops_run_io() · c4e5ac0a
      Committed by Dan Williams
      From: Dan Williams <dan.j.williams@intel.com>
      
      In handle_stripe after taking sh->lock we sample some bits into 's' (struct
      stripe_head_state):
      
      	s.syncing = test_bit(STRIPE_SYNCING, &sh->state);
      	s.expanding = test_bit(STRIPE_EXPAND_SOURCE, &sh->state);
      	s.expanded = test_bit(STRIPE_EXPAND_READY, &sh->state);
      
      Use these values from 's' in ops_run_io() rather than re-sampling the bits.
      This ensures a consistent snapshot (as seen under sh->lock) is used.
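
      A minimal user-space sketch of the pattern (stand-in types and names,
      not the actual raid5 code): the flag bits are sampled once under the
      lock into a local snapshot, and the later helpers consume that snapshot
      instead of re-testing the live flag word.

        #include <pthread.h>
        #include <stdbool.h>

        enum { SYNCING, EXPAND_SOURCE, EXPAND_READY };

        struct stripe {                         /* stand-in for struct stripe_head */
                pthread_mutex_t lock;
                unsigned long   state;          /* live flag word, may change */
        };

        struct stripe_state {                   /* stand-in for stripe_head_state */
                bool syncing, expanding, expanded;
        };

        static bool test_flag(unsigned long word, int bit)
        {
                return (word >> bit) & 1;
        }

        static void run_io(struct stripe *sh, const struct stripe_state *s)
        {
                /* every decision here sees the snapshot taken under the lock */
                (void)sh; (void)s;
        }

        void handle_stripe(struct stripe *sh)
        {
                struct stripe_state s;

                pthread_mutex_lock(&sh->lock);
                s.syncing   = test_flag(sh->state, SYNCING);
                s.expanding = test_flag(sh->state, EXPAND_SOURCE);
                s.expanded  = test_flag(sh->state, EXPAND_READY);
                pthread_mutex_unlock(&sh->lock);

                run_io(sh, &s);                 /* no re-sampling of sh->state */
        }
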
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
      c4e5ac0a
    • md: kill STRIPE_OP_IO flag · 2b7497f0
      Committed by Dan Williams
      From: Dan Williams <dan.j.williams@intel.com>
      
      The R5_Want{Read,Write} flags already gate i/o.  So, this flag is
      superfluous and we can unconditionally call ops_run_io().
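
      Roughly the idea, with illustrative names rather than the real
      R5_Wantread/R5_Wantwrite bits: each device carries its own
      want-read/want-write flag, so the I/O pass can be invoked
      unconditionally and simply skips devices with neither flag set.

        #include <stdbool.h>
        #include <stddef.h>

        struct dev { bool want_read, want_write; };     /* illustrative only */

        static void submit_read(struct dev *d)  { (void)d; /* issue a read */ }
        static void submit_write(struct dev *d) { (void)d; /* issue a write */ }

        /* Safe to call on every pass: devices with neither flag are skipped,
         * so no separate "an op is pending" flag is needed.                 */
        static void ops_run_io(struct dev *devs, size_t n)
        {
                for (size_t i = 0; i < n; i++) {
                        if (devs[i].want_read)
                                submit_read(&devs[i]);
                        else if (devs[i].want_write)
                                submit_write(&devs[i]);
                }
        }
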
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
      2b7497f0
    • md: kill STRIPE_OP_MOD_DMA in raid5 offload · b203886e
      Committed by Dan Williams
      From: Dan Williams <dan.j.williams@intel.com>
      
      This micro-optimization allowed the raid code to skip a re-read of the
      parity block after checking parity.  It took advantage of the fact that
      xor-offload-engines have their own internal result buffer and can check
      parity without writing to memory.  Remove it for the following reasons:
      
      1/ It is a layering violation for MD to need to manage the DMA and
         non-DMA paths within async_xor_zero_sum
      2/ Bad precedent to toggle the 'ops' flags outside the lock
      3/ Hard to realize a performance gain as reads will not need an updated
         parity block and writes will dirty it anyway.
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
      b203886e
    • Support changing rdev size on running arrays. · 0cd17fec
      Committed by Chris Webb
      From: Chris Webb <chris@arachsys.com>
      
      Allow /sys/block/mdX/md/rdY/size to change on running arrays, moving the
      superblock if necessary for this metadata version. We prevent the available
      space from shrinking to less than the used size, and allow it to be set to zero
      to fill all the available space on the underlying device.
      Signed-off-by: Chris Webb <chris@arachsys.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
      0cd17fec
    • Make sure all changes to md/dev-XX/state are notified · 52664732
      Committed by Neil Brown
      The important state change happens during an interrupt
      in md_error.  So just set a flag there and call sysfs_notify
      later in process context.
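
      A compilable sketch of the deferral pattern with user-space stand-ins
      (the real code sets an mddev flag in md_error and lets the md thread
      call sysfs_notify): interrupt context only records a flag, and the
      notification itself runs later from process context.

        #include <stdatomic.h>
        #include <stdbool.h>

        static atomic_bool notify_pending;      /* stand-in for the mddev flag */

        /* May run in interrupt context: must not sleep, so just record it. */
        void on_device_error(void)
        {
                atomic_store(&notify_pending, true);
        }

        /* Runs later in process context (e.g. the array's worker thread). */
        void worker_tick(void)
        {
                if (atomic_exchange(&notify_pending, false)) {
                        /* the sysfs_notify() call would happen here */
                }
        }
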
      Signed-off-by: Neil Brown <neilb@suse.de>
      52664732
    • Make sure all changes to md/degraded are notified. · a99ac971
      Committed by Neil Brown
      When a device fails, when a spare is activated, when
      an array is reshaped, or when an array is started,
      the extent to which the array is degraded can change.
      Signed-off-by: Neil Brown <neilb@suse.de>
      a99ac971
    • Make sure all changes to md/sync_action are notified. · 72a23c21
      Committed by Neil Brown
      When the 'resync' thread starts or stops, when we explicitly
      set sync_action, or when we determine that there is definitely nothing
      to do, we notify sync_action.
      
      To stop "sync_action" from occasionally showing the wrong value,
      we introduce a new flag - MD_RECOVERY_RECOVER - to say that a
      recovery is probably needed or happening, and we make sure
      that we set MD_RECOVERY_RUNNING before clearing MD_RECOVERY_NEEDED.
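
      A hedged sketch of that ordering with stand-in flags (not the actual
      MD_RECOVERY_* bit handling): by setting 'running' before clearing
      'needed', there is never a moment during the hand-off when both flags
      are clear, which is what could make a status reader briefly report the
      wrong value.

        #include <stdatomic.h>
        #include <stdbool.h>

        static atomic_bool running, needed;     /* stand-ins for the two flags */

        bool try_start_recovery(void)
        {
                if (!atomic_load(&needed))
                        return false;
                atomic_store(&running, true);   /* mark RUNNING first...     */
                atomic_store(&needed, false);   /* ...then clear NEEDED      */
                return true;
        }

        /* A reader (think of the sync_action file) reports "idle" only when
         * there really is nothing pending or running.                       */
        const char *status(void)
        {
                if (atomic_load(&running) || atomic_load(&needed))
                        return "recover";
                return "idle";
        }
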
      Signed-off-by: Neil Brown <neilb@suse.de>
      72a23c21
    • Make sure all changes to md/array_state are notified. · 0fd62b86
      Committed by Neil Brown
      Changes in md/array_state could be of interest to a monitoring
      program.  So make sure all changes trigger a notification.
      
      Exceptions:
         changing active_idle to active is not reported because it
            is frequent and not interesting.
         changing active to active_idle is only reported on arrays
            with externally managed metadata, as it is not interesting
            otherwise.
      Signed-off-by: Neil Brown <neilb@suse.de>
      0fd62b86
    • Don't reject HOT_REMOVE_DISK request for an array that is not yet started. · c7d0c941
      Committed by Neil Brown
      There is really no need for this test here, and there are valid
      cases for selectively removing devices from an array that
      is not actually active.
      Signed-off-by: Neil Brown <neilb@suse.de>
      c7d0c941
    • rationalise return value for ->hot_add_disk method. · 199050ea
      Committed by Neil Brown
      For all array types but linear, ->hot_add_disk returns 1 on
      success, 0 on failure.
      For linear, it returns 0 on success and -errno on failure.
      
      This doesn't cause a functional problem because the ->hot_add_disk
      function of linear is used quite differently to the others.
      However it is confusing.
      
      So convert all to return 0 for success or -errno on failure
      and fix call sites to match.
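
      For illustration, the convention being adopted (hypothetical signature,
      not the exact kernel prototype): the hook returns 0 on success or a
      negative errno, so call sites can propagate the error directly instead
      of translating a boolean.

        #include <errno.h>
        #include <stddef.h>

        struct mddev;                           /* opaque stand-ins */
        struct rdev;

        /* hypothetical hook following the 0 / -errno convention */
        static int hot_add_disk(struct mddev *m, struct rdev *r)
        {
                if (m == NULL || r == NULL)
                        return -EINVAL;
                /* ... attach the device to the array ... */
                return 0;
        }

        static int add_spare(struct mddev *m, struct rdev *r)
        {
                int err = hot_add_disk(m, r);
                if (err)
                        return err;             /* propagate -errno unchanged */
                return 0;
        }
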
      Signed-off-by: Neil Brown <neilb@suse.de>
      199050ea
    • Support adding a spare to a live md array with external metadata. · 6c2fce2e
      Committed by Neil Brown
      i.e. extend the 'md/dev-XXX/slot' attribute so that you can
      tell a device to fill a vacant slot in an md array.
      Signed-off-by: Neil Brown <neilb@suse.de>
      6c2fce2e
    • Enable setting of 'offset' and 'size' of a hot-added spare. · 8ed0a521
      Committed by Neil Brown
      offset_store and rdev_size_store allow control of the region of a
      device which is to be used in an md/raid array.
      They only allow these values to be set when an array is being assembled,
      as changing them on an active array could be dangerous.
      However when adding a spare device to an array, we might need to
      set the offset and size before starting recovery.  So allow
      these values to be set also if "->raid_disk < 0" which indicates that
      the device is still a spare.
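
      Sketch of the relaxed guard with stand-in fields: the change is still
      refused for a device that already occupies a slot in a running array,
      but is allowed while raid_disk is negative, i.e. while the device is
      still a spare.

        #include <errno.h>
        #include <stdbool.h>

        struct rdev  { int raid_disk; };        /* < 0 means "still a spare" */
        struct array { bool running; };

        static int store_rdev_offset(struct array *a, struct rdev *r, long long off)
        {
                if (a->running && r->raid_disk >= 0)
                        return -EBUSY;          /* active member: too dangerous */
                /* either the array is not started, or this is a spare: accept */
                (void)off;
                return 0;
        }
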
      Signed-off-by: Neil Brown <neilb@suse.de>
      8ed0a521
    • Don't try to make md arrays dirty if that is not meaningful. · 1a0fd497
      Committed by Neil Brown
      Array personalities such as 'raid0' and 'linear' have no redundancy,
      and so marking them as 'clean' or 'dirty' is not meaningful.
      So always allow write requests without requiring a superblock update.
      
      Such array types are detected by ->sync_request being NULL.  If it is
      not possible to send a sync request we don't need a 'dirty' flag because
      all a dirty flag does is trigger some sync_requests.
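
      A small sketch of that test (illustrative structures): a personality
      that provides no ->sync_request hook has nothing to resync, so a write
      never needs to wait for a clean-to-dirty superblock update.

        #include <stdbool.h>
        #include <stddef.h>

        struct personality {
                void (*sync_request)(void);     /* NULL for raid0/linear-like types */
        };

        struct array {
                struct personality *pers;
                bool in_sync;                   /* on-disk "clean" state */
        };

        /* Does this write have to wait for a superblock "dirty" update first? */
        static bool write_needs_dirty_transition(const struct array *a)
        {
                if (a->pers->sync_request == NULL)
                        return false;           /* no redundancy: dirty/clean is meaningless */
                return a->in_sync;              /* redundant array currently marked clean */
        }
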
      Signed-off-by: Neil Brown <neilb@suse.de>
      1a0fd497
    • Close race in md_probe · f48ed538
      Committed by Neil Brown
      There is a possible race in md_probe.  If two threads call md_probe
      for the same device, then one could exit (having checked that
      ->gendisk exists) before the other has called kobject_init_and_add,
      thus returning an incomplete kobj which will cause problems when
      we try to add children to it.
      
      So extend the range of protection of disks_mutex slightly to
      avoid this possibility.
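
      The race is the usual check-then-create window; a user-space sketch of
      the fix under assumed names (not the real md_probe code) keeps the
      mutex held across both the existence check and the full initialisation,
      so a second caller can never observe a half-constructed object.

        #include <pthread.h>
        #include <stdlib.h>

        struct disk { int ready; };

        static pthread_mutex_t disks_mutex = PTHREAD_MUTEX_INITIALIZER;
        static struct disk *gendisk;            /* created on first probe */

        struct disk *probe(void)
        {
                pthread_mutex_lock(&disks_mutex);
                if (gendisk == NULL) {
                        struct disk *d = calloc(1, sizeof(*d));
                        if (d) {
                                /* full initialisation (md's kobject_init_and_add
                                 * step) happens here, still under the mutex     */
                                d->ready = 1;
                                gendisk = d;
                        }
                }
                pthread_mutex_unlock(&disks_mutex);
                return gendisk;
        }
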
      Signed-off-by: Neil Brown <neilb@suse.de>
      f48ed538
    • Allow setting start point for requested check/repair · 5e96ee65
      Committed by Neil Brown
      This makes it possible to just resync a small part of an array.
      e.g. if a drive reports that it has questionable sectors,
      a 'repair' of just the region covering those sectors will
      cause them to be read and, if there is an error, re-written
      with correct data.
      Signed-off-by: Neil Brown <neilb@suse.de>
      5e96ee65
    • Improve setting of "events_cleared" for write-intent bitmaps. · a0da84f3
      Committed by Neil Brown
      When an array is degraded, bits in the write-intent bitmap are not
      cleared, so that if the missing device is re-added, it can be synced
      by only updating those parts of the device that have changed since
      it was removed.
      
      To enable this, an 'events_cleared' value is stored. It is the event
      counter for the array the last time that any bits were cleared.
      
      Sometimes - if a device disappears from an array while it is 'clean' -
      the events_cleared value gets updated incorrectly (there are subtle
      ordering issues between updating events in the main metadata and the
      bitmap metadata) resulting in the missing device appearing to require
      a full resync when it is re-added.
      
      With this patch, we update events_cleared precisely when we are about
      to clear a bit in the bitmap.  We record events_cleared when we clear
      the bit internally, and copy that to the superblock which is written
      out before the bit on storage.  This makes it more "obviously correct".
      
      We also need to update events_cleared when the event_count is going
      backwards (as happens on a dirty->clean transition of a non-degraded
      array).
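
      A rough sketch of the intended ordering, with invented names (the real
      bitmap code is more involved): the event count is recorded at the
      moment a bit is about to be cleared, staged into the bitmap superblock,
      and the superblock is written out before the cleared bit reaches
      storage.

        struct bitmap {
                unsigned long long events_cleared;      /* in-memory value       */
                unsigned long long sb_events_cleared;   /* staged for superblock */
        };

        static void write_superblock(struct bitmap *b) { (void)b; /* flush sb */ }
        static void write_cleared_bit(void)            { /* then the bit itself */ }

        void clear_bit_on_disk(struct bitmap *b, unsigned long long array_events)
        {
                b->events_cleared    = array_events;    /* record at clear time  */
                b->sb_events_cleared = b->events_cleared;
                write_superblock(b);                    /* superblock goes first */
                write_cleared_bit();                    /* bit is cleared after  */
        }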
      
      Thanks to Mike Snitzer for identifying this problem and testing early
      "fixes".
      
      Cc:  "Mike Snitzer" <snitzer@gmail.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
      a0da84f3
    • use bio_endio instead of a call to bi_end_io · 0e13fe23
      Committed by Neil Brown
      Turn calls to bi->bi_end_io() into bio_endio(). Apparently bio_endio does
      exactly the same error processing as is hardcoded at these places.
      
      bio_endio() avoids recursion (or will soon), so it should be used.
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
      0e13fe23
    • linear: correct disk numbering error check · 13864515
      Committed by Nikanth Karthikesan
      From: "Nikanth Karthikesan" <knikanth@novell.com>
      
      Correct disk numbering problem check.
      Signed-off-by: Nikanth Karthikesan <knikanth@suse.de>
      Signed-off-by: Neil Brown <neilb@suse.de>
      13864515
    • Fix error paths if md_probe fails. · 9bbbca3a
      Committed by Neil Brown
      md_probe can fail (e.g. alloc_disk could fail) without
      returning an error (as it always returns NULL).
      So when we call mddev_find immediately afterwards, we need
      to check that md_probe actually succeeded.  This means checking
      that mddev->gendisk is non-NULL.
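
      A sketch of the resulting defensive check with stub helpers (assumed
      names): because the probe step reports failure only by leaving gendisk
      unset, the caller tests that pointer rather than a return value.

        #include <stddef.h>

        struct disk  { int dummy; };
        struct mddev { struct disk *gendisk; };

        /* stubs standing in for md_probe()/mddev_find(); the probe can fail
         * silently, leaving gendisk NULL rather than returning an error     */
        static struct mddev units[4];
        static void probe_unit(unsigned int u)          { (void)u; }
        static struct mddev *find_unit(unsigned int u)  { return u < 4 ? &units[u] : NULL; }

        struct mddev *get_probed_unit(unsigned int unit)
        {
                probe_unit(unit);                       /* no error code to check */
                struct mddev *m = find_unit(unit);
                if (m == NULL || m->gendisk == NULL)    /* so check the result    */
                        return NULL;
                return m;
        }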
      
      cc: <stable@kernel.org>
      Cc: Dave Jones <davej@redhat.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
      9bbbca3a
    • Don't acknowledge that stripe-expand is complete until it really is. · efe31143
      Committed by Neil Brown
      We shouldn't acknowledge that a stripe has been expanded (When
      reshaping a raid5 by adding a device) until the moved data has
      actually been written out.  However we are currently
      acknowledging (by calling md_done_sync) when the POST_XOR
      is complete and before the write.
      
      So track in s.locked whether there are pending writes, and don't
      call md_done_sync yet if there are.
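
      Sketch of the accounting idea with invented names: the number of blocks
      still locked for write-out is tracked, and the sync chunk is only
      reported complete once that count has drained to zero.

        #include <stdbool.h>

        struct stripe_state {
                bool expand_ready;      /* post-xor for the expand has finished */
                int  locked;            /* writes still outstanding             */
        };

        static void report_sync_done(void) { /* md_done_sync() equivalent */ }

        /* Called whenever progress is made on the stripe. */
        void maybe_finish_expand(const struct stripe_state *s)
        {
                if (s->expand_ready && s->locked == 0)
                        report_sync_done();     /* only after all writes landed */
        }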
      
      Note: we also set R5_LOCKED on devices which we are about to
      read from.  This probably isn't technically necessary, but is
      usually done when writing a block, and justifies the use of
      s.locked here.
      
      This bug can lead to a crash if an array is stopped while a reshape
      is in progress.
      
      Cc: <stable@kernel.org>
      Signed-off-by: Neil Brown <neilb@suse.de>
      efe31143
    • Ensure interrupted recovery completed properly (v1 metadata plus bitmap) · 8c2e870a
      Committed by Neil Brown
      If, while assembling an array, we find a device which is not fully
      in-sync with the array, it is important to set the "fullsync" flags.
      This is an exact analog to the setting of this flag in hot_add_disk
      methods.
      
      Currently, only v1.x metadata supports having devices in an array
      which are not fully in-sync (it keeps track of how in-sync they are).
      The 'fullsync' flag only makes a difference when a write-intent bitmap
      is being used.  In this case it tells recovery to ignore the bitmap
      and recover all blocks.
      
      This fix is already in place for raid1, but not raid5/6 or raid10.
      
      So without this fix, a raid1 or raid4/5/6 array with version 1.x
      metadata and a write-intent bitmap, that is stopped in the middle
      of a recovery, will appear to complete the recovery instantly
      after it is reassembled, but the recovery will not be correct.
      
      If you might have an array like that, issuing
         echo repair > /sys/block/mdXX/md/sync_action
      
      will make sure recovery completes properly.
      
      Cc: <stable@kernel.org>
      Signed-off-by: Neil Brown <neilb@suse.de>
      8c2e870a
  2. 25 Jun 2008, 17 commits
  3. 24 Jun 2008, 2 commits