1. 09 Oct 2008, 4 commits
    • block: move stats from disk to part0 · 074a7aca
      Committed by Tejun Heo
      Move stats related fields - stamp, in_flight, dkstats - from disk to
      part0 and unify stat handling such that...
      
      * part_stat_*() now updates part0 together if the specified partition
        is not part0, i.e. part_stat_*() are now essentially all_stat_*().
      
      * {disk|all}_stat_*() are gone.
      
      * part_round_stats() is updated similarly.  It handles part0 stats
        automatically and disk_round_stats() is killed.
      
      * part_{inc|dec}_in_flight() is implemented which automatically updates
        part0 stats for parts other than part0.
      
      * disk_map_sector_rcu() is updated to return part0 if no part matches.
        Combined with the above changes, this makes NULL special case
        handling in callers unnecessary.
      
      * Separate stats show code paths for disk are collapsed into part
        stats show code paths.
      
      * disk_stat_lock/unlock() are renamed to part_stat_lock/unlock().
      
      While at it, reposition stat handling macros a bit and add missing
      parentheses around macro parameters.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
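      As a standalone sketch of the unified accounting (simplified structs and a
      hypothetical part_stat_inc_ios() helper, not the kernel's actual macros):
      updating any partition's counter now also feeds the whole-disk counter
      kept in part0.

        #include <stdio.h>

        struct part_stats { unsigned long ios; };

        struct disk {
                struct part_stats part0;      /* whole-disk stats live here now */
                struct part_stats parts[16];  /* per-partition stats, partno >= 1 */
        };

        /* partno == 0 means "the whole disk"; any real partition folds its
         * update into part0 too, so part0 always carries the disk totals. */
        static void part_stat_inc_ios(struct disk *d, int partno)
        {
                if (partno)
                        d->parts[partno].ios++;
                d->part0.ios++;
        }

        int main(void)
        {
                static struct disk d;  /* zero-initialised */
                part_stat_inc_ios(&d, 2);
                printf("part2=%lu disk=%lu\n", d.parts[2].ios, d.part0.ios);
                return 0;
        }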
    • block: fix diskstats access · c9959059
      Committed by Tejun Heo
      There are two variants of stat functions - ones prefixed with double
      underbars which don't care about preemption and ones without which
      disable preemption before manipulating per-cpu counters.  It's unclear
      whether the underbarred ones assume that preemption is disabled on
      entry as some callers don't do that.
      
      This patch unifies diskstats access by implementing disk_stat_lock()
      and disk_stat_unlock() which take care of both RCU (for partition
      access) and preemption (for per-cpu counter access).  diskstats access
      should always be enclosed between the two functions.  As such, there's
      no need for the versions which disable preemption.  They're removed
      and the double-underbar ones are renamed to drop the underbars.  As an
      extra argument is added, there's no danger of using the old version
      unconverted.
      
      disk_stat_lock() uses get_cpu() and returns the cpu index, and all
      diskstat functions which access per-cpu counters now take a @cpu
      argument to help RT.
      
      This change adds RCU or preemption operations at some places but also
      collapses several preemption ops into one at others.  Overall, the
      performance difference should be negligible as all involved ops are
      very lightweight per-cpu ones.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
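      The calling convention can be pictured with a small userspace model; a
      pthread mutex and sched_getcpu() stand in for RCU and get_cpu() here,
      purely for illustration:

        #define _GNU_SOURCE
        #include <pthread.h>
        #include <sched.h>

        #define NR_CPUS 64

        static pthread_mutex_t stats_mutex = PTHREAD_MUTEX_INITIALIZER;
        static unsigned long per_cpu_ios[NR_CPUS];

        static int disk_stat_lock(void)          /* models rcu + preempt-off */
        {
                pthread_mutex_lock(&stats_mutex);
                return sched_getcpu() % NR_CPUS; /* cpu index for the accessors */
        }

        static void disk_stat_unlock(void)
        {
                pthread_mutex_unlock(&stats_mutex);
        }

        void account_one_io(void)
        {
                int cpu = disk_stat_lock();
                per_cpu_ios[cpu]++;  /* all per-cpu access sits between the pair */
                disk_stat_unlock();
        }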
    • block: make bi_phys_segments an unsigned int instead of short · 5b99c2ff
      Committed by Jens Axboe
      raid5 can overflow with more than 255 stripes, and we can increase it
      to an int for free on both 32 and 64-bit archs due to the padding.
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
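      The "for free" part is just structure padding; a quick standalone check
      (illustrative field layout, not the real struct bio):

        #include <stdio.h>

        struct seg_short { unsigned short nsegs; void *next; };  /* 2 bytes + pad */
        struct seg_int   { unsigned int   nsegs; void *next; };  /* pad absorbed */

        int main(void)
        {
                /* prints two equal sizes on common 32-bit and 64-bit ABIs */
                printf("%zu %zu\n", sizeof(struct seg_short), sizeof(struct seg_int));
                return 0;
        }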
    • block: raid fixups for removal of bi_hw_segments · 960e739d
      Committed by Jens Axboe
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
  2. 05 Aug 2008, 2 commits
    • Don't let a blocked_rdev interfere with read request in raid5/6 · ac4090d2
      Committed by NeilBrown
      When we have externally managed metadata, we need to mark a failed
      device as 'Blocked' and not allow any writes until that device
      has been marked as faulty in the metadata and the Blocked flag has
      been removed.
      
      However it is perfectly OK to allow read requests when there is a
      Blocked device, and with a readonly array, there may not be any
      metadata-handler watching for blocked devices.
      
      So in raid5/raid6, only allow a Blocked device to interfere with
      write requests or resync.  Read requests go through untouched.
      
      raid1 and raid10 already differentiate between read and write
      properly.
      Signed-off-by: NeilBrown <neilb@suse.de>
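      The rule reduces to a small predicate, sketched here with made-up names
      rather than the raid5 code:

        /* A Blocked rdev holds off writes and resync; reads never wait on it. */
        static int must_wait_for_blocked(int rdev_blocked, int is_write, int is_resync)
        {
                return rdev_blocked && (is_write || is_resync);
        }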
    • Fail safely when trying to grow an array with a write-intent bitmap. · dba034ee
      Committed by NeilBrown
      We cannot currently change the size of a write-intent bitmap.
      So if we change the size of an array which has such a bitmap, it
      tries to set bits beyond the end of the bitmap.
      
      For now, simply reject any request to change the size of an array
      which has a bitmap.  mdadm can remove the bitmap and add a new one
      after the array has changed size.
      Signed-off-by: NeilBrown <neilb@suse.de>
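      The fail-safe is an early rejection, roughly of this shape (simplified
      types, and the exact error code in the patch may differ):

        #include <errno.h>

        struct toy_mddev { void *bitmap; long long sectors; };

        static int update_size(struct toy_mddev *mddev, long long sectors)
        {
                if (mddev->bitmap)
                        return -EBUSY;  /* bitmap cannot grow, so refuse the resize */
                mddev->sectors = sectors;
                return 0;
        }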
  3. 29 Jul 2008, 1 commit
  4. 24 Jul 2008, 2 commits
    • md: fix merge error · 23397883
      Committed by Dan Williams
      The original STRIPE_OP_IO removal patch had the following hunk:
      
      -               for (i = conf->raid_disks; i--; ) {
      +               for (i = conf->raid_disks; i--; )
                              set_bit(R5_Wantwrite, &sh->dev[i].flags);
      -                       if (!test_and_set_bit(STRIPE_OP_IO, &sh->ops.pending))
      -                               sh->ops.count++;
      -               }
      
      However it appears the hunk became broken after merging (with the braces
      dropped, only the first set_bit remains in the loop body and the following
      statements run just once, unconditionally):
      -               for (i = conf->raid_disks; i--; ) {
      +               for (i = conf->raid_disks; i--; )
                              set_bit(R5_Wantwrite, &sh->dev[i].flags);
                              set_bit(R5_LOCKED, &dev->flags);
                              s.locked++;
      -                       if (!test_and_set_bit(STRIPE_OP_IO, &sh->ops.pending))
      -                               sh->ops.count++;
      -               }
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    • md: move async_tx_issue_pending_all outside spin_lock_irq · c9f21aaf
      Committed by Dan Williams
      Some dma drivers need to call spin_lock_bh in their device_issue_pending
      routines.  This change avoids:
      
      WARNING: at kernel/softirq.c:136 local_bh_enable_ip+0x3a/0x85()
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
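      The shape of the fix (a fragment, not compilable on its own): issue the
      pending dma work only after the irq-disabling lock has been dropped,
      because device_issue_pending may itself take a BH-disabling lock:

        spin_lock_irq(&conf->device_lock);
        /* ... stripe handling that may schedule dma operations ... */
        spin_unlock_irq(&conf->device_lock);

        async_tx_issue_pending_all();  /* safe: irqs are enabled again here */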
  5. 21 Jul 2008, 1 commit
  6. 10 Jul 2008, 1 commit
  7. 03 Jul 2008, 1 commit
    • Add bvec_merge_data to handle stacked devices and ->merge_bvec() · cc371e66
      Committed by Alasdair G Kergon
      When devices are stacked, one device's merge_bvec_fn may need to perform
      the mapping and then call one or more functions for its underlying devices.
      
      The following bio fields are used:
        bio->bi_sector
        bio->bi_bdev
        bio->bi_size
        bio->bi_rw  using bio_data_dir()
      
      This patch creates a new struct bvec_merge_data holding a copy of those
      fields to avoid having to change them directly in the struct bio when
      going down the stack only to have to change them back again on the way
      back up.  (And then when the bio gets mapped for real, the whole
      exercise gets repeated, but that's a problem for another day...)
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      Cc: Neil Brown <neilb@suse.de>
      Cc: Milan Broz <mbroz@redhat.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
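      A sketch of the new struct, mirroring the four fields listed above (types
      simplified so the snippet stands alone; see the patch for the kernel
      definition):

        typedef unsigned long long sector_t;  /* stand-in for the kernel type */
        struct block_device;                  /* opaque here */

        struct bvec_merge_data {
                struct block_device *bi_bdev;
                sector_t             bi_sector;
                unsigned int         bi_size;
                unsigned long        bi_rw;
        };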
  8. 01 Jul 2008, 1 commit
    • md: resolve external metadata handling deadlock in md_allow_write · b5470dc5
      Committed by Dan Williams
      md_allow_write() marks the metadata dirty while holding mddev->lock and then
      waits for the write to complete.  For externally managed metadata this causes a
      deadlock as userspace needs to take the lock to communicate that the metadata
      update has completed.
      
      Change md_allow_write() in the 'external' case to start the 'mark active'
      operation and then return -EAGAIN.  The expected side effects while waiting
      for userspace to write 'active' to 'array_state' are: reshape is held off
      (the code currently handles -ENOMEM), some 'stripe_cache_size' change
      requests fail, some GET_BITMAP_FILE ioctl requests fall back to GFP_NOIO,
      and updates to 'raid_disks' fail.  Except for 'stripe_cache_size' changes,
      these failures can be mitigated by coordinating with mdmon.
      
      md_write_start() still prevents writes from occurring until the metadata
      handler has had a chance to take action as it unconditionally waits for
      MD_CHANGE_CLEAN to be cleared.
      
      [neilb@suse.de: return -EAGAIN, try GFP_NOIO]
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
  9. 28 Jun 2008, 16 commits
    • md: rationalize raid5 function names · 1fe797e6
      Committed by Dan Williams
      From: Dan Williams <dan.j.williams@intel.com>
      
      Commit a4456856 refactored some of the deep code paths in raid5.c into separate
      functions.  The names chosen at the time do not consistently indicate what is
      going to happen to the stripe.  So, update the names, and since a stripe is a
      cache element, use cache semantics like fill, dirty, and clean.
      
      (also, fix up the indentation in fetch_block5)
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
    • md: handle operation chaining in raid5_run_ops · 7b3a871e
      Committed by Dan Williams
      From: Dan Williams <dan.j.williams@intel.com>
      
      Neil said:
      > At the end of ops_run_compute5 you have:
      >         /* ack now if postxor is not set to be run */
      >         if (tx && !test_bit(STRIPE_OP_POSTXOR, &s->ops_run))
      >                 async_tx_ack(tx);
      >
      > It looks odd having that test there.  Would it fit in raid5_run_ops
      > better?
      
      The intended global interpretation is that raid5_run_ops can build a chain
      of xor and memcpy operations.  When MD registers the compute-xor it tells
      async_tx to keep the operation handle around so that another item in the
      dependency chain can be submitted. If we are just computing a block to
      satisfy a read then we can terminate the chain immediately.  raid5_run_ops
      gives a better context for this test since it cares about the entire chain.
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
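      In other words, raid5_run_ops sees the whole chain and can decide whether
      the compute descriptor will be consumed; only when nothing chains on it is
      it acked immediately.  A fragment with simplified signatures, based on the
      snippet quoted above:

        struct dma_async_tx_descriptor *tx = NULL;

        if (test_bit(STRIPE_OP_COMPUTE_BLK, &ops_request))
                tx = ops_run_compute5(sh, ops_request);

        if (test_bit(STRIPE_OP_POSTXOR, &ops_request))
                tx = ops_run_postxor(sh, tx);  /* dependent op consumes the handle */
        else if (tx)
                async_tx_ack(tx);              /* read-only compute: end the chain */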
    • md: replace R5_WantPrexor with R5_WantDrain, add 'prexor' reconstruct_states · d8ee0728
      Committed by Dan Williams
      From: Dan Williams <dan.j.williams@intel.com>
      
      Currently ops_run_biodrain and other locations have extra logic to determine
      which blocks are processed in the prexor and non-prexor cases.  This can be
      eliminated if handle_write_operations5 flags the blocks to be processed in all
      cases via R5_Wantdrain.  The presence of the prexor operation is tracked in
      sh->reconstruct_state.
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
    • md: replace STRIPE_OP_{BIODRAIN,PREXOR,POSTXOR} with 'reconstruct_states' · 600aa109
      Committed by Dan Williams
      From: Dan Williams <dan.j.williams@intel.com>
      
      Track the state of reconstruct operations (recalculating the parity block,
      usually due to incoming writes, or as part of array expansion).  This reduces
      the scope of the STRIPE_OP_{BIODRAIN,PREXOR,POSTXOR} flags to only tracking
      whether a reconstruct operation has been requested via the ops_request field
      of struct stripe_head_state.
      
      This is the final step in the removal of ops.{pending,ack,complete,count}, i.e.
      the STRIPE_OP_{BIODRAIN,PREXOR,POSTXOR} flags only request an operation and do
      not track the state of the operation.
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
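      The new tracking is a single enum stepped through in order, roughly of this
      shape (state names abridged; the patch defines the authoritative set):

        enum reconstruct_states {
                reconstruct_state_idle = 0,
                reconstruct_state_run,        /* parity calculation in flight */
                reconstruct_state_drain_run,  /* biodrain plus parity in flight */
                reconstruct_state_result,     /* done; fold back under sh->lock */
        };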
    • md: replace STRIPE_OP_COMPUTE_BLK with STRIPE_COMPUTE_RUN · 976ea8d4
      Committed by Dan Williams
      From: Dan Williams <dan.j.williams@intel.com>
      
      Track the state of compute operations (recalculating a block from all the other
      blocks in a stripe) with a state flag.  Reduces the scope of the
      STRIPE_OP_COMPUTE_BLK flag to only tracking whether a compute operation has
      been requested via the ops_request field of struct stripe_head_state.
      
      Note, the compute operation that is performed in the course of doing a 'repair'
      operation (check the parity block, recalculate it and write it back if the
      check result is not zero) is tracked separately with the 'check_state'
      variable.  Compute operations are held off while a 'check' is in progress,
      and by moving this check out to handle_issuing_new_read_requests5, the
      helper routine __handle_issuing_new_read_requests5 can be simplified.
      
      This is another step towards the removal of ops.{pending,ack,complete,count},
      i.e. STRIPE_OP_COMPUTE_BLK only requests an operation and does not track the
      state of the operation.
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
    • md: replace STRIPE_OP_BIOFILL with STRIPE_BIOFILL_RUN · 83de75cc
      Committed by Dan Williams
      From: Dan Williams <dan.j.williams@intel.com>
      
      Track the state of read operations (copying data from the stripe cache to bio
      buffers outside the lock) with a state flag.  Reduce the scope of the
      STRIPE_OP_BIOFILL flag to only tracking whether a biofill operation has been
      requested via the ops_request field of struct stripe_head_state.
      
      This is another step towards the removal of ops.{pending,ack,complete,count},
      i.e. STRIPE_OP_BIOFILL only requests an operation and does not track the state
      of the operation.
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
    • md: replace STRIPE_OP_CHECK with 'check_states' · ecc65c9b
      Committed by Dan Williams
      From: Dan Williams <dan.j.williams@intel.com>
      
      The STRIPE_OP_* flags record the state of stripe operations which are
      performed outside the stripe lock.  Their use in indicating which
      operations need to be run is straightforward; however, interpolating what
      the next state of the stripe should be based on a given combination of
      these flags is not straightforward, and has led to bugs.  An easier-to-read
      implementation with minimal degrees of freedom is needed.
      
      Towards this goal, this patch introduces explicit states to replace what was
      previously interpolated from the STRIPE_OP_* flags.  For now this only converts
      the handle_parity_checks5 path, removing a user of the
      ops.{pending,ack,complete,count} fields of struct stripe_operations.
      
      This conversion also found a remaining issue with the current code.  There is
      a small window for a drive to fail between when we schedule a repair and when
      the parity calculation for that repair completes.  When this happens we will
      writeback to 'failed_num' when we really want to write back to 'pd_idx'.
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
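      The replacement is an explicit state machine, roughly as below (abridged;
      names as in the raid5 headers of that era):

        enum check_states {
                check_state_idle = 0,
                check_state_run,            /* xor-sum of all blocks in flight */
                check_state_check_result,   /* sum available: zero means in sync */
                check_state_compute_run,    /* repair: recompute the parity block */
                check_state_compute_result,
        };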
    • md: unify raid5/6 i/o submission · f0e43bcd
      Committed by Dan Williams
      From: Dan Williams <dan.j.williams@intel.com>
      
      Let the raid6 path call ops_run_io to get pending i/o submitted.
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
    • md: use stripe_head_state in ops_run_io() · c4e5ac0a
      Committed by Dan Williams
      From: Dan Williams <dan.j.williams@intel.com>
      
      In handle_stripe after taking sh->lock we sample some bits into 's' (struct
      stripe_head_state):
      
      	s.syncing = test_bit(STRIPE_SYNCING, &sh->state);
      	s.expanding = test_bit(STRIPE_EXPAND_SOURCE, &sh->state);
      	s.expanded = test_bit(STRIPE_EXPAND_READY, &sh->state);
      
      Use these values from 's' in ops_run_io() rather than re-sampling the bits.
      This ensures a consistent snapshot (as seen under sh->lock) is used.
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
    • md: kill STRIPE_OP_IO flag · 2b7497f0
      Committed by Dan Williams
      From: Dan Williams <dan.j.williams@intel.com>
      
      The R5_Want{Read,Write} flags already gate i/o.  So, this flag is
      superfluous and we can unconditionally call ops_run_io().
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
    • md: kill STRIPE_OP_MOD_DMA in raid5 offload · b203886e
      Committed by Dan Williams
      From: Dan Williams <dan.j.williams@intel.com>
      
      This micro-optimization allowed the raid code to skip a re-read of the
      parity block after checking parity.  It took advantage of the fact that
      xor-offload-engines have their own internal result buffer and can check
      parity without writing to memory.  Remove it for the following reasons:
      
      1/ It is a layering violation for MD to need to manage the DMA and
         non-DMA paths within async_xor_zero_sum
      2/ Bad precedent to toggle the 'ops' flags outside the lock
      3/ Hard to realize a performance gain as reads will not need an updated
         parity block and writes will dirty it anyway.
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
    • rationalise return value for ->hot_add_disk method. · 199050ea
      Committed by Neil Brown
      For all array types but linear, ->hot_add_disk returns 1 on
      success, 0 on failure.
      For linear, it returns 0 on success and -errno on failure.
      
      This doesn't cause a functional problem because the ->hot_add_disk
      function of linear is used quite differently from the others.
      However, it is confusing.
      
      So convert all to return 0 for success or -errno on failure
      and fix call sites to match.
      Signed-off-by: Neil Brown <neilb@suse.de>
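      The converted convention as a skeleton; no_free_slot() and the struct names
      are made up for illustration:

        static int hot_add_disk(struct mddev *mddev, struct md_rdev *rdev)
        {
                if (no_free_slot(mddev))
                        return -EBUSY;  /* -errno on failure, for every personality */
                /* ... bind rdev into the array ... */
                return 0;               /* 0 on success */
        }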
    • Support adding a spare to a live md array with external metadata. · 6c2fce2e
      Committed by Neil Brown
      i.e. extend the 'md/dev-XXX/slot' attribute so that you can
      tell a device to fill a vacant slot in an md array.
      Signed-off-by: Neil Brown <neilb@suse.de>
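      A hypothetical invocation of the extended attribute (device name and slot
      number made up):

         echo 2 > /sys/block/md0/md/dev-sdc1/slot

      asks sdc1 to take over vacant slot 2 of the live array.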
    • use bio_endio instead of a call to bi_end_io · 0e13fe23
      Committed by Neil Brown
      Turn calls to bi->bi_end_io() into bio_endio(). Apparently bio_endio does
      exactly the same error processing as is hardcoded at these places.
      
      bio_endio() avoids recursion (or will soon), so it should be used.
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
    • Don't acknowledge that stripe-expand is complete until it really is. · efe31143
      Committed by Neil Brown
      We shouldn't acknowledge that a stripe has been expanded (when
      reshaping a raid5 by adding a device) until the moved data has
      actually been written out.  However we are currently
      acknowledging (by calling md_done_sync) when the POST_XOR
      is complete and before the write.
      
      So track in s.locked whether there are pending writes, and don't
      call md_done_sync yet if there are.
      
      Note: we also set R5_LOCKED on devices which we are about to
      read from.  This probably isn't technically necessary, but is
      usually done when writing a block, and justifies the use of
      s.locked here.
      
      This bug can lead to a crash if an array is stopped while a reshape
      is in progress.
      
      Cc: <stable@kernel.org>
      Signed-off-by: NNeil Brown <neilb@suse.de>
      efe31143
    • Ensure interrupted recovery completed properly (v1 metadata plus bitmap) · 8c2e870a
      Committed by Neil Brown
      If, while assembling an array, we find a device which is not fully
      in-sync with the array, it is important to set the "fullsync" flag.
      This is an exact analog to the setting of this flag in hot_add_disk
      methods.
      
      Currently, only v1.x metadata supports having devices in an array
      which are not fully in-sync (it keeps track of how in-sync they are).
      The 'fullsync' flag only makes a difference when a write-intent bitmap
      is being used.  In this case it tells recovery to ignore the bitmap
      and recover all blocks.
      
      This fix is already in place for raid1, but not raid5/6 or raid10.
      
      So without this fix, a raid10 or raid4/5/6 array with version 1.x
      metadata and a write-intent bitmap, that is stopped in the middle
      of a recovery, will appear to complete the recovery instantly
      after it is reassembled, but the recovery will not be correct.
      
      If you might have an array like that, issuing
         echo repair > /sys/block/mdXX/md/sync_action
      
      will make sure recovery completes properly.
      
      Cc: <stable@kernel.org>
      Signed-off-by: NNeil Brown <neilb@suse.de>
      8c2e870a
  10. 07 Jun 2008, 2 commits
    • md: do not compute parity unless it is on a failed drive · c337869d
      Committed by Dan Williams
      If a block is computed (rather than read) then a check/repair operation
      may be led to believe that the data on disk is correct, when in fact it
      isn't.  So only compute blocks for failed devices.
      
      This issue has been around since at least 2.6.12, but has become harder to
      hit in recent kernels since most reads bypass the cache.
      
      echo repair > /sys/block/mdN/md/sync_action will set the parity blocks to the
      correct state.
      
      Cc: <stable@kernel.org>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • md: fix prexor vs sync_request race · e0a115e5
      Committed by Dan Williams
      During the initial array synchronization process there is a window between
      when a prexor operation is scheduled to a specific stripe and when it
      completes for a sync_request to be scheduled to the same stripe.  When
      this happens the prexor completes and the stripe is unconditionally marked
      "insync", effectively canceling the sync_request for the stripe.  Prior to
      2.6.23 this was not a problem because the prexor operation was done under
      sh->lock.  The effect in older kernels was that the prexor would still
      erroneously mark the stripe "insync", but sync_request would be held off
      and re-mark the stripe as "!in_sync".
      
      Change the write completion logic to not mark the stripe "in_sync" if a
      prexor was performed.  The effect of the change is to sometimes not set
      STRIPE_INSYNC.  The worst this can do is cause the resync to stall waiting
      for STRIPE_INSYNC to be set.  If this were happening, then STRIPE_SYNCING
      would be set and handle_issuing_new_read_requests would cause all
      available blocks to eventually be read, at which point prexor would never
      be used on that stripe any more and STRIPE_INSYNC would eventually be set.
      
      echo repair > /sys/block/mdN/md/sync_action will correct arrays that may
      have lost this race.
      
      Cc: <stable@kernel.org>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  11. 25 May 2008, 2 commits
    • md: restart recovery cleanly after device failure. · dfc70645
      Committed by NeilBrown
      When we get any IO error during a recovery (rebuilding a spare), we abort
      the recovery and restart it.
      
      For RAID6 (and multi-drive RAID1) it may not be best to restart at the
      beginning: when multiple failures can be tolerated, the recovery may be
      able to continue and re-doing all that has already been done doesn't make
      sense.
      
      We already have the infrastructure to record where a recovery is up to
      and restart from there, but it is not being used properly.
      This is because:
        - We sometimes abort with MD_RECOVERY_ERR rather than just MD_RECOVERY_INTR,
          which causes the recovery not to be checkpointed.
        - We remove spares and then re-add them, which loses important state
          information.
      
      The distinction between MD_RECOVERY_ERR and MD_RECOVERY_INTR really isn't
      needed.  If there is an error, the relevant drive will be marked as
      Faulty, and that is enough to ensure correct handling of the error.  So we
      first remove MD_RECOVERY_ERR, changing some of the uses of it to
      MD_RECOVERY_INTR.
      
      Then we cause the attempt to remove a non-faulty device from an array to
      fail (unless recovery is impossible as the array is too degraded).  Then
      when remove_and_add_spares attempts to remove the devices on which
      recovery can continue, it will fail, they will remain in place, and
      recovery will continue on them as desired.
      
      Issue:  If we are halfway through rebuilding a spare and another drive
      fails, and a new spare is immediately available,  do we want to:
       1/ complete the current rebuild, then go back and rebuild the new spare or
       2/ restart the rebuild from the start and rebuild both devices in
          parallel.
      
      Both options can be argued for.  The code currently takes option 2 as
        a/ this requires least code change
        b/ this results in a minimally-degraded array in minimal time.
      
      Cc: "Eivind Sarto" <ivan@kasenna.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • md: raid5 rate limit error printk · 6be9d494
      Committed by Bernd Schubert
      Last night we had scsi problems and a hardware raid unit was offlined
      during heavy i/o.  While this happened we got, for about 3 minutes, a huge
      number of messages like these:
      
      Apr 12 03:36:07 pfs1n14 kernel: [197510.696595] raid5:md7: read error not correctable (sector 2993096568 on sdj2).
      
      I guess the high error rate is responsible for not scheduling other events
      - during this time the system was not pingable, and in the end other
      devices also ran into scsi command timeouts, causing problems on those
      unrelated devices as well.
      Signed-off-by: Bernd Schubert <bernd-schubert@gmx.de>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
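      The standard cure for this class of problem is to gate the message on
      printk_ratelimit(); a sketch, not the exact hunk:

        if (printk_ratelimit())
                printk(KERN_ALERT
                       "raid5:%s: read error not correctable (sector %llu on %s).\n",
                       mdname(conf->mddev), (unsigned long long)sector, bdn);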
  12. 15 May 2008, 1 commit
    • Remove blkdev warning triggered by using md · e7e72bf6
      Committed by Neil Brown
      As setting and clearing queue flags now requires that we hold a spinlock
      on the queue, and as blk_queue_stack_limits is called without that lock,
      get the lock inside blk_queue_stack_limits.
      
      For blk_queue_stack_limits to be able to find the right lock, each md
      personality needs to set q->queue_lock to point to the appropriate lock.
      Those personalities which didn't previously use a spin_lock use
      q->__queue_lock.  So always initialise that lock when it is allocated.
      
      With this in place, setting/clearing of the QUEUE_FLAG_PLUGGED bit will no
      longer cause warnings as it will be clear that the proper lock is held.
      
      Thanks to Dan Williams for review and fixing the silly bugs.
      Signed-off-by: NeilBrown <neilb@suse.de>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Jens Axboe <jens.axboe@oracle.com>
      Cc: Alistair John Strachan <alistair@devzero.co.uk>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
      Cc: Jacek Luczak <difrost.kernel@gmail.com>
      Cc: Prakash Punnoor <prakash@punnoor.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
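      Inside blk_queue_stack_limits the flag manipulation then becomes
      lock-protected, roughly (a fragment based on the description above):

        unsigned long flags;

        spin_lock_irqsave(t->queue_lock, flags);  /* valid now that md sets it */
        queue_flag_clear(QUEUE_FLAG_CLUSTER, t);
        spin_unlock_irqrestore(t->queue_lock, flags);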
  13. 13 May 2008, 1 commit
  14. 30 Apr 2008, 1 commit
  15. 28 Apr 2008, 4 commits