1. 26 Jul 2010, 4 commits
  2. 21 Jul 2010, 1 commit
  3. 17 Feb 2010, 1 commit
    • percpu: add __percpu sparse annotations to what's left · a29d8b8e
      Committed by Tejun Heo
      Add __percpu sparse annotations to places which didn't make it in one
      of the previous patches.  All conversions are trivial.
      
      These annotations are to make sparse consider percpu variables to be
      in a different address space and warn if accessed without going
      through percpu accessors.  This patch doesn't affect normal builds.
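      
      For illustration only (not part of this patch), a minimal sketch of the
      kind of declaration and accessor usage the annotation covers; the
      variable and function names here are made up:
      
        #include <linux/errno.h>
        #include <linux/percpu.h>
      
        /* __percpu tells sparse this pointer lives in the percpu address space */
        static int __percpu *example_counters;
      
        static int example_init(void)
        {
            example_counters = alloc_percpu(int);
            if (!example_counters)
                return -ENOMEM;
            /* accesses must go through the percpu accessors, or sparse warns */
            this_cpu_inc(*example_counters);
            return 0;
        }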
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Borislav Petkov <borislav.petkov@amd.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Huang Ying <ying.huang@intel.com>
      Cc: Len Brown <lenb@kernel.org>
      Cc: Neil Brown <neilb@suse.de>
      a29d8b8e
  4. 16 Oct 2009, 2 commits
    • md: fix problems with RAID6 calculations for DDF. · e4424fee
      Committed by NeilBrown
      Signed-off-by: NeilBrown <neilb@suse.de>
      e4424fee
    • md/raid456: downlevel multicore operations to raid_run_ops · 417b8d4a
      Committed by Dan Williams
      The percpu conversion allowed a straightforward handoff of stripe
      processing to the async subsystem that initially showed some modest gains
      (+4%).  However, this model is too simplistic and leads to stripes
      bouncing between raid5d and the async thread pool for every invocation
      of handle_stripe().  As reported by Holger, this can fall into a
      pathological situation severely impacting throughput (6x performance
      loss).
      
      By downleveling the parallelism to raid_run_ops the pathological
      stripe_head bouncing is eliminated.  This version still exhibits an
      average 11% throughput loss for:
      
      	mdadm --create /dev/md0 /dev/sd[b-q] -n 16 -l 6
      	echo 1024 > /sys/block/md0/md/stripe_cache_size
      	dd if=/dev/zero of=/dev/md0 bs=1024k count=2048
      
      ...but the results are at least stable and can be used as a base for
      further multicore experimentation.
      Reported-by: Holger Kiehl <Holger.Kiehl@dwd.de>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
      417b8d4a
  5. 30 Aug 2009, 4 commits
    • md/raid6: asynchronous raid6 operations · ac6b53b6
      Committed by Dan Williams
      [ Based on an original patch by Yuri Tikhonov ]
      
      The raid_run_ops routine uses the asynchronous offload api and
      the stripe_operations member of a stripe_head to carry out xor+pq+copy
      operations asynchronously, outside the lock.
      
      The operations performed by RAID-6 are the same as in the RAID-5 case,
      except that STRIPE_OP_PREXOR operations are not supported.  All the
      others are supported:
      STRIPE_OP_BIOFILL
       - copy data into request buffers to satisfy a read request
      STRIPE_OP_COMPUTE_BLK
       - generate missing blocks (1 or 2) in the cache from the other blocks
      STRIPE_OP_BIODRAIN
       - copy data out of request buffers to satisfy a write request
      STRIPE_OP_RECONSTRUCT
       - recalculate parity for new data that has entered the cache
      STRIPE_OP_CHECK
       - verify that the parity is correct
      
      The flow is the same as in the RAID-5 case, and reuses some routines, namely:
      1/ ops_complete_postxor (renamed to ops_complete_reconstruct)
      2/ ops_complete_compute (updated to set up to 2 targets uptodate)
      3/ ops_run_check (renamed to ops_run_check_p for xor parity checks)
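      
      For illustration only (not taken from the patch itself), a minimal sketch
      of submitting P/Q generation through the async offload api; the wrapper
      function name is hypothetical, while async_gen_syndrome() and
      init_async_submit() are the async_tx entry points:
      
        #include <linux/async_tx.h>
      
        /* Sketch: kick off asynchronous P/Q generation for one stripe.
         * 'blocks' holds the data pages followed by the P and Q destination
         * pages; 'scribble' is preallocated dma address conversion space. */
        static void example_run_reconstruct6(struct page **blocks, int disks,
                                             size_t len, addr_conv_t *scribble)
        {
            struct async_submit_ctl submit;
      
            init_async_submit(&submit, ASYNC_TX_ACK, NULL /* no dependency */,
                              NULL /* completion callback */, NULL, scribble);
            async_gen_syndrome(blocks, 0, disks, len, &submit);
        }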
      
      [neilb@suse.de: fixes to get it to pass mdadm regression suite]
      Reviewed-by: Andre Noll <maan@systemlinux.org>
      Signed-off-by: Yuri Tikhonov <yur@emcraft.com>
      Signed-off-by: Ilya Yanok <yanok@emcraft.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      ac6b53b6
    • async_tx: add sum check flags · ad283ea4
      Committed by Dan Williams
      Replace the flat zero_sum_result with a collection of flags to contain
      the P (xor) zero-sum result and the soon-to-be-utilized Q (raid6
      Reed-Solomon syndrome) zero-sum result.  Use the SUM_CHECK_ namespace
      instead of DMA_ since these flags will be used on platforms without
      dma zero-sum engines.
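      
      A rough sketch of how a caller might consume the new flags (the function
      names here are illustrative; async_xor_val() is the xor-validate entry
      point in async_tx and SUM_CHECK_P_RESULT is the P bit described above):
      
        #include <linux/async_tx.h>
        #include <linux/kernel.h>
      
        /* completion callback: inspect the P zero-sum result */
        static void example_check_done(void *data)
        {
            enum sum_check_flags *result = data;
      
            /* SUM_CHECK_P_RESULT set means the xor was non-zero, i.e. the
             * parity block does not match the data */
            if (*result & SUM_CHECK_P_RESULT)
                pr_debug("parity mismatch detected\n");
        }
      
        static void example_check_parity(struct page *dest, struct page **srcs,
                                         int src_cnt, size_t len,
                                         enum sum_check_flags *result)
        {
            struct async_submit_ctl submit;
      
            init_async_submit(&submit, ASYNC_TX_ACK, NULL,
                              example_check_done, result, NULL);
            async_xor_val(dest, srcs, 0, src_cnt, len, result, &submit);
        }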
      Reviewed-by: Andre Noll <maan@systemlinux.org>
      Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      ad283ea4
    • md/raid5,6: add percpu scribble region for buffer lists · d6f38f31
      Committed by Dan Williams
      Use percpu memory rather than stack for storing the buffer lists used in
      parity calculations.  Include space for dma address conversions and pass
      that to async_tx via the async_submit_ctl.scribble pointer.
      
      [ Impact: move memory pressure from stack to heap ]
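      
      A sketch of what such a per-cpu scribble allocation can look like (the
      names are illustrative, not the actual raid5.c symbols; error handling
      is omitted):
      
        #include <linux/async_tx.h>
        #include <linux/percpu.h>
        #include <linux/slab.h>
      
        struct scribble_pcpu_example {
            void *space;    /* buffer list + dma address conversions */
        };
      
        static struct scribble_pcpu_example __percpu *
        example_alloc_scribble(int disks)
        {
            struct scribble_pcpu_example __percpu *pcpu;
            int cpu;
      
            pcpu = alloc_percpu(struct scribble_pcpu_example);
            if (!pcpu)
                return NULL;
            for_each_possible_cpu(cpu) {
                /* room for (disks + 2) page pointers plus the matching
                 * dma address conversions */
                size_t sz = sizeof(struct page *) * (disks + 2) +
                            sizeof(addr_conv_t) * (disks + 2);
      
                per_cpu_ptr(pcpu, cpu)->space = kmalloc(sz, GFP_KERNEL);
            }
            return pcpu;
        }
      
      The per-cpu space would then be handed to init_async_submit() as the
      scribble argument for operations run on that cpu.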
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      d6f38f31
    • md/raid6: move the spare page to a percpu allocation · 36d1c647
      Committed by Dan Williams
      In preparation for asynchronous handling of raid6 operations, move the
      spare page to a percpu allocation to allow multiple simultaneous
      synchronous raid6 recovery operations.
      
      Make this allocation cpu hotplug aware to maximize allocation
      efficiency.
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      36d1c647
  6. 18 Jun 2009, 1 commit
  7. 16 Jun 2009, 1 commit
    • md: remove mddev_to_conf "helper" macro · 070ec55d
      Committed by NeilBrown
      Having a macro just to cast a void* isn't really helpful.
      I would much rather see that we are simply dereferencing ->private
      than have to know what the macro does.
      
      So open-code the macro everywhere and remove the pointless cast.
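      
      The change amounts to the following (sketched; variable names vary by
      call site):
      
        /* before: a macro that only hides a cast */
        #define mddev_to_conf(mddev) ((raid5_conf_t *) mddev->private)
        raid5_conf_t *conf = mddev_to_conf(mddev);
      
        /* after: open-coded dereference at each call site */
        raid5_conf_t *conf = mddev->private;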
      Signed-off-by: NeilBrown <neilb@suse.de>
      070ec55d
  8. 31 Mar 2009, 13 commits
    • md/raid5 revise rules for when to update metadata during reshape · c8f517c4
      Committed by NeilBrown
      We currently update the metadata:
       1/ every 3 megabytes
       2/ When the place we will write new-layout data to is recorded in
          the metadata as still containing old-layout data.
      
      Rule one exists to avoid having to re-do too much reshaping in the
      face of a crash/restart.  So it should really be time based rather
      than size based.  So change it to "every 10 seconds".
      
      Rule two turns out to be too harsh when restriping an array
      'in-place', as in that case the metadata must be updated for every
      stripe.
      For the in-place update, it can only possibly be safe from a crash if
      some user-space program takes a backup of e.g. every few hundred
      stripes before allowing them to be reshaped.  In that case, the
      constant metadata updates are pointless.
      So only update the metadata if the new metadata will report that the
      end of the 'old-layout' data is beyond where we are currently
      writing 'new-layout' data.
      Signed-off-by: NeilBrown <neilb@suse.de>
      c8f517c4
    • md/raid5: prepare for allowing reshape to change layout · e183eaed
      Committed by NeilBrown
      Add prev_algo to raid5_conf_t along the same lines as prev_chunk
      and previous_raid_disks.
      Signed-off-by: NeilBrown <neilb@suse.de>
      e183eaed
    • md/raid5: prepare for allowing reshape to change chunksize. · 784052ec
      Committed by NeilBrown
      Add "prev_chunk" to raid5_conf_t, similar to "previous_raid_disks", to
      remember what the chunk size was before the reshape that is currently
      underway.
      
      This seems like duplication with "chunk_size" and "new_chunk" in
      mddev_t, and to some extent it is, but there are differences.
      The values in mddev_t are always defined and often the same.
      The prev* values are only defined if a reshape is underway.
      
      Also (and more significantly) the raid5_conf_t values will be changed
      at the same time (inside an appropriate lock) that the reshape is
      started by setting reshape_position.  In contrast, the new_chunk value
      is set when the sysfs file is written, which could be well before the
      reshape starts.
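      
      The bookkeeping described above looks roughly like this (an illustrative
      fragment, not the full raid5_conf_t):
      
        #include <linux/types.h>
      
        struct raid5_conf_fragment {
            int      raid_disks;
            int      previous_raid_disks;  /* disk count before the reshape */
            int      chunk_size;
            int      prev_chunk;           /* chunk size before the reshape */
            sector_t reshape_position;     /* setting this (under the lock)
                                            * marks the reshape as started */
        };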
      Signed-off-by: NeilBrown <neilb@suse.de>
      784052ec
    • md/raid5: clearly differentiate 'before' and 'after' stripes during reshape. · 86b42c71
      Committed by NeilBrown
      During a raid5 reshape, we have some stripes in the cache that are
      'before' the reshape (and are still to be processed) and some that are
      'after'.  They are currently differentiated by having different
      ->disks values, as the only reshape currently supported involves changing
      the number of disks.
      
      However we will soon support reshapes that do not change the number
      of disks (chunk parity or chunk size).  So make the difference more
      explicit with a 'generation' number.
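      
      Sketch of the idea (field names illustrative):
      
        /* each stripe records the reshape generation it was created under,
         * instead of the "before"/"after" state being inferred from ->disks */
        struct stripe_head_fragment {
            short generation;   /* compared against conf->generation */
            int   disks;
        };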
      Signed-off-by: NeilBrown <neilb@suse.de>
      86b42c71
    • md/raid5: change reshape-progress measurement to cope with reshaping backwards. · fef9c61f
      Committed by NeilBrown
      When reducing the number of devices in a raid4/5/6, the reshape
      process has to start at the end of the array and work down to the
      beginning.  So we need to handle expand_progress and expand_lo
      differently.
      
      This patch renames "expand_progress" and "expand_lo" to avoid the
      implication that anything is getting bigger (expand->reshape), and at
      every place they are used we make sure that they are used the right
      way depending on whether delta_disks is positive or negative.
      Signed-off-by: NeilBrown <neilb@suse.de>
      fef9c61f
    • md/raid5: drop qd_idx from r6_state · 34e04e87
      Committed by NeilBrown
      We now have this value in stripe_head so we don't need to duplicate
      it.
      Signed-off-by: NeilBrown <neilb@suse.de>
      34e04e87
    • md/raid6: move raid6 data processing to raid6_pq.ko · f701d589
      Committed by Dan Williams
      Move the raid6 data processing routines into a standalone module
      (raid6_pq) to prepare them to be called from async_tx wrappers and other
      non-md drivers/modules.  This precludes a circular dependency of raid456
      needing the async modules for data processing while those modules in
      turn depend on raid456 for the base level synchronous raid6 routines.
      
      To support this move:
      1/ The exportable definitions in raid6.h move to include/linux/raid/pq.h
      2/ The raid6_call, recovery calls, and table symbols are exported
      3/ Extra #ifdef __KERNEL__ statements to enable the userspace raid6test to
         compile
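      
      A sketch of what a non-md caller of the standalone module might look
      like (the wrapper function is hypothetical; raid6_call and its
      gen_syndrome method are part of the exported interface described above):
      
        #include <linux/raid/pq.h>
      
        /* Generate P and Q over 'data_disks' data blocks of 'bytes' each.
         * ptrs[] holds the data block pointers followed by P and Q. */
        static void example_gen_pq(int data_disks, size_t bytes, void **ptrs)
        {
            /* raid6_call is bound by raid6_pq at module init to the fastest
             * implementation found (sse, altivec, generic int, ...) */
            raid6_call.gen_syndrome(data_disks + 2, bytes, ptrs);
        }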
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
      f701d589
    • md/raid5: refactor raid5 "run" · 91adb564
      Committed by NeilBrown
      .. so that the code to create the private data structures is separate.
      This will help with future code to change the level of an active
      array.
      Signed-off-by: NeilBrown <neilb@suse.de>
      91adb564
    • md/raid5: finish support for DDF/raid6 · 67cc2b81
      Committed by NeilBrown
      DDF requires RAID6 calculations over different devices in a different
      order.
      For md/raid6, we calculate over just the data devices, starting
      immediately after the 'Q' block.
      For ddf/raid6 we calculate over all devices, using zeros in place of
      the P and Q blocks.
      
      This requires unfortunately complex loops...
      Signed-off-by: NeilBrown <neilb@suse.de>
      67cc2b81
    • md/raid5: Add support for new layouts for raid5 and raid6. · 99c0fb5f
      Committed by NeilBrown
      DDF uses different layouts for P and Q blocks than current md/raid6
      so add those that are missing.
      Also add support for RAID6 layouts that are identical to various
      raid5 layouts with the simple addition of one device to hold all of
      the 'Q' blocks.
      Finally add 'raid5' layouts to match raid4.
      These last two will allow online level conversion.
      
      Note that this does not provide correct support for DDF/raid6 yet
      as the order in which data blocks are summed to produce the Q block
      is significant and different between current md code and DDF
      requirements.
      Signed-off-by: NeilBrown <neilb@suse.de>
      99c0fb5f
    • md/raid6: remove expectation that Q device is immediately after P device. · d0dabf7e
      Committed by NeilBrown
      
      Code currently assumes that the devices in a raid6 stripe are
        0 1 ... N-1 P Q
      in some rotated order.  We will shortly add new layouts in which
      this strict pattern is broken.
      So remove this expectation.  We still assume that the data disks
      are roughly in-order.  However P and Q can be inserted anywhere within
      that order.
      Signed-off-by: NeilBrown <neilb@suse.de>
      d0dabf7e
    • md: move lots of #include lines out of .h files and into .c · bff61975
      Committed by NeilBrown
      This makes the includes more explicit, and is preparation for moving
      md_k.h to drivers/md/md.h
      
      Remove include/raid/md.h as its only remaining use was to #include
      other files.
      Signed-off-by: NeilBrown <neilb@suse.de>
      bff61975
    • md: move headers out of include/linux/raid/ · ef740c37
      Committed by Christoph Hellwig
      Move the headers with the local structures for the disciplines and
      bitmap.h into drivers/md/ so that they are more easily grepable for
      hacking and not far away.  md.h is left where it is for now as there
      are some uses from the outside.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: NeilBrown <neilb@suse.de>
      ef740c37
  9. 28 Jun 2008, 5 commits
    • md: replace R5_WantPrexor with R5_WantDrain, add 'prexor' reconstruct_states · d8ee0728
      Committed by Dan Williams
      From: Dan Williams <dan.j.williams@intel.com>
      
      Currently ops_run_biodrain and other locations have extra logic to determine
      which blocks are processed in the prexor and non-prexor cases.  This can be
      eliminated if handle_write_operations5 flags the blocks to be processed in all
      cases via R5_Wantdrain.  The presence of the prexor operation is tracked in
      sh->reconstruct_state.
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
      d8ee0728
    • md: replace STRIPE_OP_{BIODRAIN,PREXOR,POSTXOR} with 'reconstruct_states' · 600aa109
      Committed by Dan Williams
      From: Dan Williams <dan.j.williams@intel.com>
      
      Track the state of reconstruct operations (recalculating the parity block
      usually due to incoming writes, or as part of array expansion).  This reduces the
      scope of the STRIPE_OP_{BIODRAIN,PREXOR,POSTXOR} flags to only tracking whether
      a reconstruct operation has been requested via the ops_request field of struct
      stripe_head_state.
      
      This is the final step in the removal of ops.{pending,ack,complete,count}, i.e.
      the STRIPE_OP_{BIODRAIN,PREXOR,POSTXOR} flags only request an operation and do
      not track the state of the operation.
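      
      What such an explicit state looks like, roughly (a sketch; the exact
      enumerators in raid5.h may differ):
      
        enum reconstruct_states_example {
            reconstruct_state_idle = 0,
            reconstruct_state_run,      /* operation requested / in flight */
            reconstruct_state_result,   /* parity computed, ready to finish */
        };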
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
      600aa109
    • md: replace STRIPE_OP_CHECK with 'check_states' · ecc65c9b
      Committed by Dan Williams
      From: Dan Williams <dan.j.williams@intel.com>
      
      The STRIPE_OP_* flags record the state of stripe operations which are
      performed outside the stripe lock.  Their use in indicating which
      operations need to be run is straightforward; however, interpolating what
      the next state of the stripe should be based on a given combination of
      these flags is not straightforward, and has led to bugs.  An easier to read
      implementation with minimal degrees of freedom is needed.
      
      Towards this goal, this patch introduces explicit states to replace what was
      previously interpolated from the STRIPE_OP_* flags.  For now this only converts
      the handle_parity_checks5 path, removing a user of the
      ops.{pending,ack,complete,count} fields of struct stripe_operations.
      
      This conversion also found a remaining issue with the current code.  There is
      a small window for a drive to fail between when we schedule a repair and when
      the parity calculation for that repair completes.  When this happens we will
      writeback to 'failed_num' when we really want to write back to 'pd_idx'.
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
      ecc65c9b
    • md: kill STRIPE_OP_IO flag · 2b7497f0
      Committed by Dan Williams
      From: Dan Williams <dan.j.williams@intel.com>
      
      The R5_Want{Read,Write} flags already gate i/o.  So, this flag is
      superfluous and we can unconditionally call ops_run_io().
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
      2b7497f0
    • md: kill STRIPE_OP_MOD_DMA in raid5 offload · b203886e
      Committed by Dan Williams
      From: Dan Williams <dan.j.williams@intel.com>
      
      This micro-optimization allowed the raid code to skip a re-read of the
      parity block after checking parity.  It took advantage of the fact that
      xor-offload-engines have their own internal result buffer and can check
      parity without writing to memory.  Remove it for the following reasons:
      
      1/ It is a layering violation for MD to need to manage the DMA and
         non-DMA paths within async_xor_zero_sum
      2/ Bad precedent to toggle the 'ops' flags outside the lock
       3/ Hard to realize a performance gain as reads will not need an updated
          parity block and writes will dirty it anyway.
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
      b203886e
  10. 28 Apr 2008, 1 commit
    • md: introduce get_priority_stripe() to improve raid456 write performance · 8b3e6cdc
      Committed by Dan Williams
      Improve write performance by preventing the delayed_list from dumping all its
      stripes onto the handle_list in one shot.  Delayed stripes are now further
      delayed by being held on the 'hold_list'.  The 'hold_list' is bypassed when:
      
        * a STRIPE_IO_STARTED stripe is found at the head of 'handle_list'
        * 'handle_list' is empty and i/o is being done to satisfy full stripe-width
          write requests
        * 'bypass_count' is less than 'bypass_threshold'.  By default the threshold
          is 1, i.e. every other stripe handled is a preread stripe provided the
          top two conditions are false.
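      
      A greatly simplified sketch of the selection rule implied by the list
      above (the real get_priority_stripe() tracks more state; the
      STRIPE_IO_STARTED and full-stripe-width conditions are omitted here and
      the struct names are illustrative):
      
        #include <linux/list.h>
        #include <linux/types.h>
      
        struct stripe_head_example {
            struct list_head lru;
            /* ... */
        };
      
        struct conf_example {
            struct list_head handle_list;  /* stripes ready for handling */
            struct list_head hold_list;    /* delayed (preread) stripes */
            int bypass_count;
            int bypass_threshold;          /* default 1 */
        };
      
        static struct stripe_head_example *
        example_next_stripe(struct conf_example *conf)
        {
            bool prefer_hold = conf->bypass_count >= conf->bypass_threshold &&
                               !list_empty(&conf->hold_list);
      
            if (!prefer_hold && !list_empty(&conf->handle_list)) {
                conf->bypass_count++;      /* hold_list bypassed again */
                return list_first_entry(&conf->handle_list,
                                        struct stripe_head_example, lru);
            }
            if (!list_empty(&conf->hold_list)) {
                conf->bypass_count = 0;
                return list_first_entry(&conf->hold_list,
                                        struct stripe_head_example, lru);
            }
            return NULL;
        }
      
      With the default threshold of 1 this alternates between the two lists
      whenever both are non-empty, matching the "every other stripe handled is
      a preread stripe" behaviour described above.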
      
      Benchmark data:
      System: 2x Xeon 5150, 4x SATA, mem=1GB
      Baseline: 2.6.24-rc7
      Configuration: mdadm --create /dev/md0 /dev/sd[b-e] -n 4 -l 5 --assume-clean
      Test1: dd if=/dev/zero of=/dev/md0 bs=1024k count=2048
        * patched:  +33% (stripe_cache_size = 256), +25% (stripe_cache_size = 512)
      
      Test2: tiobench --size 2048 --numruns 5 --block 4096 --block 131072 (XFS)
        * patched: +13%
        * patched + preread_bypass_threshold = 0: +37%
      
      Changes since v1:
      * reduce bypass_threshold from (chunk_size / sectors_per_chunk) to (1) and
        make it configurable.  This defaults to fairness and modest performance
        gains out of the box.
      Changes since v2:
       * [neilb@suse.de]: kill STRIPE_PRIO_HI and preread_needed as they are not
         necessary; the important change was clearing STRIPE_DELAYED in
         add_stripe_bio, and this has been moved out to make_request for the hang
         fix.
      * [neilb@suse.de]: simplify get_priority_stripe
      * [dan.j.williams@intel.com]: reset the bypass_count when ->hold_list is
        sampled empty (+11%)
       * [dan.j.williams@intel.com]: decrement the bypass_count at the detection
         of stripes being naturally promoted off of hold_list (+2%).  Note that
         resetting bypass_count instead of decrementing on these events yields +4%,
         but that is probably too aggressive.
      Changes since v3:
      * cosmetic fixups
      Tested-by: James W. Laferriere <babydr@baby-dragons.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8b3e6cdc
  11. 13 Jul 2007, 4 commits
    • md: handle_stripe5 - add request/completion logic for async read ops · b5e98d65
      Committed by Dan Williams
      When a read bio is attached to the stripe and the corresponding block is
      marked R5_UPTODATE, then a read (biofill) operation is scheduled to copy
      the data from the stripe cache to the bio buffer.  handle_stripe flags the
      blocks to be operated on with the R5_Wantfill flag.  If new read requests
      arrive while raid5_run_ops is running they will not be handled until
      handle_stripe is scheduled to run again.
      
      Changelog:
      * cleanup to_read and to_fill accounting
      * do not fail reads that have reached the cache
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Acked-By: NeilBrown <neilb@suse.de>
      b5e98d65
    • md: handle_stripe5 - add request/completion logic for async compute ops · f38e1219
      Committed by Dan Williams
      handle_stripe will compute a block when a backing disk has failed, or when
      it determines it can save a disk read by computing the block from all the
      other up-to-date blocks.
      
      Previously a block would be computed under the lock and subsequent logic in
      handle_stripe could use the newly up-to-date block.  With the raid5_run_ops
      implementation the compute operation is carried out at a later time outside
      the lock.  To preserve the old functionality we take advantage of the
      dependency chain feature of async_tx to flag the block as R5_Wantcompute
      and then let other parts of handle_stripe operate on the block as if it
      were up-to-date.  raid5_run_ops guarantees that the block will be ready
      before it is used in another operation.
      
      However, this only works in cases where the compute and the dependent
      operation are scheduled at the same time.  If a previous call to
      handle_stripe sets the R5_Wantcompute flag there is no facility to pass the
      async_tx dependency chain across successive calls to raid5_run_ops.  The
      req_compute variable protects against this case.
      
      Changelog:
      * remove the req_compute BUG_ON
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Acked-By: NeilBrown <neilb@suse.de>
      f38e1219
    • md: raid5_run_ops - run stripe operations outside sh->lock · 91c00924
      Committed by Dan Williams
      When the raid acceleration work was proposed, Neil laid out the following
      attack plan:
      
      1/ move the xor and copy operations outside spin_lock(&sh->lock)
      2/ find/implement an asynchronous offload api
      
      The raid5_run_ops routine uses the asynchronous offload api (async_tx) and
      the stripe_operations member of a stripe_head to carry out xor+copy
      operations asynchronously, outside the lock.
      
      To perform operations outside the lock a new set of state flags is needed
      to track new requests, in-flight requests, and completed requests.  In this
      new model handle_stripe is tasked with scanning the stripe_head for work,
      updating the stripe_operations structure, and finally dropping the lock and
      calling raid5_run_ops for processing.  The following flags outline the
      requests that handle_stripe can make of raid5_run_ops:
      
      STRIPE_OP_BIOFILL
       - copy data into request buffers to satisfy a read request
      STRIPE_OP_COMPUTE_BLK
       - generate a missing block in the cache from the other blocks
      STRIPE_OP_PREXOR
       - subtract existing data as part of the read-modify-write process
      STRIPE_OP_BIODRAIN
       - copy data out of request buffers to satisfy a write request
      STRIPE_OP_POSTXOR
       - recalculate parity for new data that has entered the cache
      STRIPE_OP_CHECK
       - verify that the parity is correct
      STRIPE_OP_IO
       - submit i/o to the member disks (note this was already performed outside
         the stripe lock, but it made sense to add it as an operation type)
      
      The flow is:
      1/ handle_stripe sets STRIPE_OP_* in sh->ops.pending
      2/ raid5_run_ops reads sh->ops.pending, sets sh->ops.ack, and submits the
         operation to the async_tx api
      3/ async_tx triggers the completion callback routine to set
         sh->ops.complete and release the stripe
      4/ handle_stripe runs again to finish the operation and optionally submit
         new operations that were previously blocked
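      
      The handshake in steps 1-4 can be sketched as follows (illustrative only,
      with a made-up op number standing in for a STRIPE_OP_* flag; this is not
      the raid5.c implementation):
      
        #include <linux/bitops.h>
      
        #define EXAMPLE_OP_BIOFILL  0
      
        struct stripe_ops_example {
            unsigned long pending;   /* step 1: requested by handle_stripe */
            unsigned long ack;       /* step 2: submitted by raid5_run_ops */
            unsigned long complete;  /* step 3: completion callback has run */
        };
      
        static void example_request(struct stripe_ops_example *ops)
        {
            set_bit(EXAMPLE_OP_BIOFILL, &ops->pending);         /* step 1 */
        }
      
        static void example_run_ops(struct stripe_ops_example *ops)
        {
            if (test_bit(EXAMPLE_OP_BIOFILL, &ops->pending) &&
                !test_bit(EXAMPLE_OP_BIOFILL, &ops->ack)) {
                set_bit(EXAMPLE_OP_BIOFILL, &ops->ack);         /* step 2 */
                /* init_async_submit() + async_memcpy()/async_xor() here */
            }
        }
      
        /* async_tx completion callback (step 3); the stripe is then released
         * so handle_stripe can run again and finish the op (step 4) */
        static void example_complete(void *data)
        {
            struct stripe_ops_example *ops = data;
      
            set_bit(EXAMPLE_OP_BIOFILL, &ops->complete);
        }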
      
      Note this patch just defines raid5_run_ops, subsequent commits (one per
      major operation type) modify handle_stripe to take advantage of this
      routine.
      
      Changelog:
      * removed ops_complete_biodrain in favor of ops_complete_postxor and
        ops_complete_write.
      * removed the raid5_run_ops workqueue
      * call bi_end_io for reads in ops_complete_biofill, saves a call to
        handle_stripe
      * explicitly handle the 2-disk raid5 case (xor becomes memcpy), Neil Brown
      * fix race between async engines and bi_end_io call for reads, Neil Brown
      * remove unnecessary spin_lock from ops_complete_biofill
      * remove test_and_set/test_and_clear BUG_ONs, Neil Brown
      * remove explicit interrupt handling for channel switching, this feature
        was absorbed (i.e. it is now implicit) by the async_tx api
      * use return_io in ops_complete_biofill
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Acked-By: NeilBrown <neilb@suse.de>
      91c00924
    • raid5: refactor handle_stripe5 and handle_stripe6 (v3) · a4456856
      Committed by Dan Williams
      handle_stripe5 and handle_stripe6 have very deep logic paths handling the
      various states of a stripe_head.  By introducing the 'stripe_head_state'
      and 'r6_state' objects, large portions of the logic can be moved to
      sub-routines.
      
      'struct stripe_head_state' consumes all of the automatic variables that previously
      stood alone in handle_stripe5,6.  'struct r6_state' contains the handle_stripe6
      specific variables like p_failed and q_failed.
      
      One of the nice side effects of the 'stripe_head_state' change is that it
      allows for further reductions in code duplication between raid5 and raid6.
      The following new routines are shared between raid5 and raid6:
      
      	handle_completed_write_requests
      	handle_requests_to_failed_array
      	handle_stripe_expansion
      
      Changes:
      * v2: fixed 'conf->raid_disk-1' for the raid6 'handle_stripe_expansion' path
      * v3: removed the unused 'dirty' field from struct stripe_head_state
      * v3: coalesced open coded bi_end_io routines into return_io()
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Acked-By: NeilBrown <neilb@suse.de>
      a4456856
  12. 11 Dec 2006, 1 commit
    • [PATCH] md: allow reads that have bypassed the cache to be retried on failure · 46031f9a
      Committed by Raz Ben-Jehuda(caro)
      If a bypass-the-cache read fails, we simply try again through the cache.
      If it fails again, it will trigger the normal recovery procedures.
      
      update 1:
      
      From: NeilBrown <neilb@suse.de>
      
      1/
        chunk_aligned_read and retry_aligned_read assume that
            data_disks == raid_disks - 1
        which is not true for raid6.
        So when an aligned read request bypasses the cache, we can get the wrong data.
      
      2/ The cloned bio is being used-after-free in raid5_align_endio
         (to test BIO_UPTODATE).
      
      3/ We forgot to add rdev->data_offset when submitting
         a bio for aligned-read
      
      4/ clone_bio calls blk_recount_segments and then we change bi_bdev,
         so we need to invalidate the segment counts.
      
      5/ We don't de-reference the rdev when the read completes.
          This means we need to record the rdev so that it is still
         available in the end_io routine.  Fortunately
         bi_next in the original bio is unused at this point so
         we can stuff it in there.
      
      6/ We leak a cloned bio if the target rdev is not usable.
      
      From: NeilBrown <neilb@suse.de>
      
      update 2:
      
      1/ When aligned requests fail (read error) they need to be retried
         via the normal method (stripe cache).  As we cannot be sure that
         we can process a single read in one go (we may not be able to
          allocate all the stripes needed) we store a bio-being-retried
          and a list of bios-that-still-need-to-be-retried.
          When we find a bio that needs to be retried, we should add it to
          the list, not to the single bio...
      
      2/ We were never incrementing 'scnt' when resubmitting failed
         aligned requests.
      
      [akpm@osdl.org: build fix]
      Signed-off-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      46031f9a
  13. 08 Dec 2006, 1 commit
  14. 03 Oct 2006, 1 commit