1. 13 Jul 2007, 1 commit
    • raid5: refactor handle_stripe5 and handle_stripe6 (v3) · a4456856
      Committed by Dan Williams
      handle_stripe5 and handle_stripe6 have very deep logic paths handling the
      various states of a stripe_head.  By introducing the 'stripe_head_state'
      and 'r6_state' objects, large portions of the logic can be moved to
      sub-routines.
      
      'struct stripe_head_state' consumes all of the automatic variables that previously
      stood alone in handle_stripe5,6.  'struct r6_state' contains the handle_stripe6
      specific variables like p_failed and q_failed.
      
      One of the nice side effects of the 'stripe_head_state' change is that it
      allows for further reductions in code duplication between raid5 and raid6.
      The following new routines are shared between raid5 and raid6:
      
      	handle_completed_write_requests
      	handle_requests_to_failed_array
      	handle_stripe_expansion
      
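The two state objects can be sketched in plain C. This is a hypothetical userspace approximation whose field names follow the commit text, not the exact kernel layout; handle_stripe5/6 would tally per-device flags into a stripe_head_state and hand it to the shared helpers listed above.

```c
/* Illustrative sketch of the two state objects; field names are
 * assumptions based on the commit description. */
struct stripe_head_state {
	int syncing, expanding, expanded;
	int locked, uptodate, to_read, to_write, written;
	int failed;        /* how many devices in this stripe have failed */
	int failed_num;    /* index of the failed device, if any */
};

struct r6_state {
	int p_failed, q_failed;  /* raid6-only P/Q failure flags */
	int failed_num[2];       /* raid6 tolerates up to two failed devices */
};
```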
      Changes:
      * v2: fixed 'conf->raid_disk-1' for the raid6 'handle_stripe_expansion' path
      * v3: removed the unused 'dirty' field from struct stripe_head_state
      * v3: coalesced open coded bi_end_io routines into return_io()
       Signed-off-by: Dan Williams <dan.j.williams@intel.com>
       Acked-by: NeilBrown <neilb@suse.de>
  2. 11 Dec 2006, 1 commit
    • [PATCH] md: allow reads that have bypassed the cache to be retried on failure · 46031f9a
      Committed by Raz Ben-Jehuda (caro)
      If a bypass-the-cache read fails, we simply try again through the cache.  If
       it fails again it will trigger normal recovery procedures.
      
      update 1:
      
      From: NeilBrown <neilb@suse.de>
      
      1/
        chunk_aligned_read and retry_aligned_read assume that
            data_disks == raid_disks - 1
        which is not true for raid6.
        So when an aligned read request bypasses the cache, we can get the wrong data.
      
      2/ The cloned bio is being used-after-free in raid5_align_endio
         (to test BIO_UPTODATE).
      
      3/ We forgot to add rdev->data_offset when submitting
         a bio for aligned-read
      
      4/ clone_bio calls blk_recount_segments and then we change bi_bdev,
         so we need to invalidate the segment counts.
      
      5/ We don't de-reference the rdev when the read completes.
          This means we need to record the rdev so it is still
         available in the end_io routine.  Fortunately
         bi_next in the original bio is unused at this point so
         we can stuff it in there.
      
      6/ We leak a cloned bio if the target rdev is not usable.
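Items 3 and 5 above can be sketched together. This uses simplified stand-in types (not the kernel's struct bio or rdev): the sector remap adds rdev->data_offset, and the rdev pointer is stashed in the otherwise-unused bi_next so the end_io routine can find and release it.

```c
/* Hypothetical stand-in types; field names mirror the commit text. */
struct rdev { long data_offset; int nr_pending; };
struct bio  { long bi_sector; void *bi_next; };

static void submit_aligned_read(struct bio *align_bi, struct rdev *rdev,
                                long array_sector)
{
	rdev->nr_pending++;                     /* item 5: hold a reference */
	align_bi->bi_sector = array_sector + rdev->data_offset;  /* item 3 */
	align_bi->bi_next = rdev;               /* recovered in end_io */
}

static void aligned_read_endio(struct bio *bi)
{
	struct rdev *rdev = bi->bi_next;        /* item 5: retrieve the rdev */
	rdev->nr_pending--;                     /* drop the reference */
}
```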
      
      From: NeilBrown <neilb@suse.de>
      
      update 2:
      
      1/ When aligned requests fail (read error) they need to be retried
         via the normal method (stripe cache).  As we cannot be sure that
         we can process a single read in one go (we may not be able to
         allocate all the stripes needed) we store a bio-being-retried
          and a list of bios-that-still-need-to-be-retried.
          When we find a bio that needs to be retried, we add it to
          the list, not to the single bio.
      
      2/ We were never incrementing 'scnt' when resubmitting failed
         aligned requests.
      
      [akpm@osdl.org: build fix]
       Signed-off-by: Neil Brown <neilb@suse.de>
       Signed-off-by: Andrew Morton <akpm@osdl.org>
       Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  3. 08 Dec 2006, 1 commit
  4. 03 Oct 2006, 2 commits
  5. 27 Jun 2006, 1 commit
  6. 28 Mar 2006, 6 commits
    • [PATCH] md: Only checkpoint expansion progress occasionally · b578d55f
      Committed by NeilBrown
      Instead of checkpointing at each stripe, only checkpoint when a new write
      would overwrite uncheckpointed data.  Block any write to the uncheckpointed
      area.  Arbitrarily checkpoint at least every 3Meg.
       Signed-off-by: Neil Brown <neilb@suse.de>
       Signed-off-by: Andrew Morton <akpm@osdl.org>
       Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] md: Checkpoint and allow restart of raid5 reshape · f6705578
      Committed by NeilBrown
      We allow the superblock to record an 'old' and a 'new' geometry, and a
      position where any conversion is up to.  The geometry allows for changing
      chunksize, layout and level as well as number of devices.
      
       When using a version-0.90 superblock, we convert the version to 0.91 while the
       conversion is happening so that an old kernel will refuse to assemble the
       array.  For version-1, we use a feature bit for the same effect.
      
      When starting an array we check for an incomplete reshape and restart the
      reshape process if needed.  If the reshape stopped at an awkward time (like
      when updating the first stripe) we refuse to assemble the array, and let
      user-space worry about it.
       Signed-off-by: Neil Brown <neilb@suse.de>
       Signed-off-by: Andrew Morton <akpm@osdl.org>
       Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] md: Core of raid5 resize process · ccfcc3c1
      Committed by NeilBrown
      This patch provides the core of the resize/expand process.
      
      sync_request notices if a 'reshape' is happening and acts accordingly.
      
       It allocates new stripe_heads for the next chunk-wide-stripe in the target
      geometry, marking them STRIPE_EXPANDING.
      
      Then it finds which stripe heads in the old geometry can provide data needed
       by these and marks them STRIPE_EXPAND_SOURCE.  This causes handle_stripe to
      read all blocks on those stripes.
      
      Once all blocks on a STRIPE_EXPAND_SOURCE stripe_head are read, any that are
      needed are copied into the corresponding STRIPE_EXPANDING stripe_head.  Once a
       STRIPE_EXPANDING stripe_head is full, it is marked STRIPE_EXPAND_READY and then
      is written out and released.
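The per-stripe flow above can be summarized as a small state machine. The flag names come from the commit; modelling them as a linear progression is a simplification for illustration only.

```c
enum expand_state {
	STRIPE_EXPAND_SOURCE,  /* old geometry: read all blocks in */
	STRIPE_EXPANDING,      /* new geometry: being filled from sources */
	STRIPE_EXPAND_READY,   /* fully populated: write out and release */
};

/* Once a STRIPE_EXPANDING stripe_head is full, it becomes READY. */
static enum expand_state on_stripe_full(enum expand_state s)
{
	return s == STRIPE_EXPANDING ? STRIPE_EXPAND_READY : s;
}
```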
       Signed-off-by: Neil Brown <neilb@suse.de>
       Signed-off-by: Andrew Morton <akpm@osdl.org>
       Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] md: Infrastructure to allow normal IO to continue while array is expanding · 7ecaa1e6
      Committed by NeilBrown
      We need to allow that different stripes are of different effective sizes, and
      use the appropriate size.  Also, when a stripe is being expanded, we must
      block any IO attempts until the stripe is stable again.
      
      Key elements in this change are:
       - each stripe_head gets a 'disk' field which is part of the key,
         thus there can sometimes be two stripe heads of the same area of
         the array, but covering different numbers of devices.  One of these
         will be marked STRIPE_EXPANDING and so won't accept new requests.
       - conf->expand_progress tracks how the expansion is progressing and
         is used to determine whether the target part of the array has been
         expanded yet or not.
       Signed-off-by: Neil Brown <neilb@suse.de>
       Signed-off-by: Andrew Morton <akpm@osdl.org>
       Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] md: Allow stripes to be expanded in preparation for expanding an array · ad01c9e3
      Committed by NeilBrown
      Before a RAID-5 can be expanded, we need to be able to expand the stripe-cache
      data structure.
      
      This requires allocating new stripes in a new kmem_cache.  If this succeeds,
      we copy cache pages over and release the old stripes and kmem_cache.
      
       We then allocate new pages.  If that fails, we leave the stripe cache at its
      new size.  It isn't worth the effort to shrink it back again.
      
       Unfortunately this means we need two kmem_cache names because, for a short
       period of time, we have two kmem_caches.  So they are raid5/%s and
       raid5/%s-alt.
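The alternating-name scheme can be sketched in userspace C. The helper name and the active-index flip are illustrative assumptions; only the two format strings come from the commit.

```c
#include <stdio.h>
#include <string.h>

static char cache_name[2][32];

/* While resizing, the old and new kmem_caches coexist briefly, so the
 * name in use alternates between "raid5/%s" and "raid5/%s-alt". */
static const char *resize_cache_name(const char *mdname, int *active)
{
	snprintf(cache_name[0], sizeof(cache_name[0]), "raid5/%s", mdname);
	snprintf(cache_name[1], sizeof(cache_name[1]), "raid5/%s-alt", mdname);
	*active ^= 1;                    /* flip on every resize */
	return cache_name[*active];
}
```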
       Signed-off-by: Neil Brown <neilb@suse.de>
       Signed-off-by: Andrew Morton <akpm@osdl.org>
       Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] md: Split disks array out of raid5 conf structure so it is easier to grow · b55e6bfc
      Committed by NeilBrown
      The remainder of this batch implements raid5 reshaping.  Currently the only
       shape change that is supported is adding a device, but it is envisioned that
      changing the chunksize and layout will also be supported, as well as changing
      the level (e.g.  1->5, 5->6).
      
      The reshape process naturally has to move all of the data in the array, and so
      should be used with caution.  It is believed to work, and some testing does
      support this, but wider testing would be great for increasing my confidence.
      
      You will need a version of mdadm newer than 2.3.1 to make use of raid5 growth.
        This is because mdadm needs to take a copy of a 'critical section' at the
       start of the array in case there is a crash at an awkward moment.  On restart,
      mdadm will restore the critical section and allow reshape to continue.
      
      I hope to release a 2.4-pre by early next week - it still needs a little more
      polishing.
      
      This patch:
      
      Previously the array of disk information was included in the raid5 'conf'
      structure which was allocated to an appropriate size.  This makes it awkward
      to change the size of that array.  So we split it off into a separate
      kmalloced array which will require a little extra indexing, but is much easier
      to grow.
       Signed-off-by: Neil Brown <neilb@suse.de>
       Signed-off-by: Andrew Morton <akpm@osdl.org>
       Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  7. 07 Jan 2006, 3 commits
    • [PATCH] md: tidy up raid5/6 hash table code · fccddba0
      Committed by NeilBrown
      - replace open-coded hash chain with hlist macros
      
      - Fix hash-table size at one page - it is already quite generous, so there
        will never be a need to use multiple pages, so no need for __get_free_pages
      
      No functional change.
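The hlist pattern the commit switches to can be re-created minimally in userspace (modelled on the kernel's list.h hlists). The point of the shape is that the bucket head is a single pointer, so a fixed page of buckets holds twice as many chains as doubly-linked heads would.

```c
struct hlist_head { struct hlist_node *first; };
struct hlist_node { struct hlist_node *next, **pprev; };

static void hlist_add_head(struct hlist_node *n, struct hlist_head *h)
{
	n->next = h->first;
	if (h->first)
		h->first->pprev = &n->next;
	h->first = n;
	n->pprev = &h->first;
}

static void hlist_del(struct hlist_node *n)
{
	*n->pprev = n->next;          /* unlink without knowing the head */
	if (n->next)
		n->next->pprev = n->pprev;
}
```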
       Signed-off-by: Neil Brown <neilb@suse.de>
       Signed-off-by: Andrew Morton <akpm@osdl.org>
       Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] md: fix up some rdev rcu locking in raid5/6 · 9910f16a
      Committed by NeilBrown
       There is this "FIXME" comment with a typo in it!!  that has been annoying me
       for days, so I just had to remove it.
      
      conf->disks[i].rdev should only be accessed if
        - we know we hold a reference or
        - the mddev->reconfig_sem is down or
        - we have a rcu_readlock
      
      handle_stripe was referencing rdev in three places without any of these.  For
      the first two, get an rcu_readlock.  For the last, the same access
      (md_sync_acct call) is made a little later after the rdev has been claimed
       under an rcu_readlock, if R5_Syncio is set.  So just use that access...
      However R5_Syncio isn't really needed as the 'syncing' variable contains the
      same information.  So use that instead.
      
      Issues, comment, and fix are identical in raid5 and raid6.
       Signed-off-by: Neil Brown <neilb@suse.de>
       Signed-off-by: Andrew Morton <akpm@osdl.org>
       Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] md: fix raid6 resync check/repair code · ca65b73b
      Committed by NeilBrown
      raid6 currently does not check the P/Q syndromes when doing a resync, it just
      calculates the correct value and writes it.  Doing the check can reduce writes
      (often to 0) for a resync, and it is needed to properly implement the
      
        echo check > sync_action
      
      operation.
      
      This patch implements the appropriate checks and tidies up some related code.
      
      It also allows raid6 user-requested resync to bypass the intent bitmap.
       Signed-off-by: Neil Brown <neilb@suse.de>
       Signed-off-by: Andrew Morton <akpm@osdl.org>
       Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  8. 09 Nov 2005, 3 commits
  9. 10 Sep 2005, 1 commit
    • [PATCH] md: add write-intent-bitmap support to raid5 · 72626685
      Committed by NeilBrown
       The most awkward part of this is delaying write requests until bitmap updates have
      been flushed.
      
      To achieve this, we have a sequence number (seq_flush) which is incremented
      each time the raid5 is unplugged.
      
      If the raid thread notices that this has changed, it flushes bitmap changes,
       and assigns the value of seq_flush to seq_write.
      
      When a write request arrives, it is given the number from seq_write, and that
      write request may not complete until seq_flush is larger than the saved seq
      number.
      
      We have a new queue for storing stripes which are waiting for a bitmap flush
      and an extra flag for stripes to record if the write was 'degraded' and so
       should not clear the bit in the bitmap.
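The seq_flush/seq_write handshake can be sketched as a single-threaded illustration. The variable names come from the commit; the helpers, the locking, and the actual bitmap-flush call are omitted or hypothetical.

```c
static int seq_flush, seq_write;

static void raid5_unplug(void)
{
	seq_flush++;                  /* bumped each time the raid5 is unplugged */
}

static void raid5d_check_bitmap(void)
{
	if (seq_flush != seq_write) {
		/* flush pending bitmap changes here, then record the flush */
		seq_write = seq_flush;
	}
}

static int stamp_write(void)
{
	return seq_write;             /* an arriving write saves this number */
}

static int write_may_complete(int saved_seq)
{
	/* per the commit: complete only once seq_flush exceeds the stamp */
	return seq_flush > saved_seq;
}
```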
       Signed-off-by: Neil Brown <neilb@cse.unsw.edu.au>
       Signed-off-by: Andrew Morton <akpm@osdl.org>
       Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  10. 17 Apr 2005, 1 commit
    • Linux-2.6.12-rc2 · 1da177e4
      Committed by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!