1. 14 Oct 2014, 1 commit
2. 29 May 2014, 1 commit
• raid5: add an option to avoid copy data from bio to stripe cache · d592a996
  Shaohua Li authored
      The stripe cache has two goals:
1. cache data, so that the next time the data can be found in the stripe cache,
disk access can be avoided.
2. stable data. Data is copied from the bio to the stripe cache and parity is
calculated from the copy. Data written to disk comes from the stripe cache, so
if the upper layer changes the bio data, the data written to disk isn't affected.

In my environment, I can guarantee 2 will not happen. BDI_CAP_STABLE_WRITES can
guarantee 2 as well. As for 1, it's not common either: the block plug mechanism
already dispatches a bunch of sequential small requests together, and since I'm
using SSDs I'm using a small chunk size, so it's rare that the stripe cache is
really useful.
      
So I'd like to avoid the copy from bio to stripe cache, and it's very helpful
for performance. In my 1M randwrite tests, avoiding the copy increases
performance by more than 30%.

Of course, this shouldn't be enabled by default. Enabling BDI_CAP_STABLE_WRITES
has been reported to harm some workloads before, so I added an option to
control it.
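A minimal sketch of the idea, not the actual patch (the struct, the skip_copy
flag and the field names are illustrative): with the option off, write data is
copied into the stripe cache page as before; with it on, the stripe just
remembers the caller's page and parity/writeout use it directly, relying on
the upper layer to keep it stable.

    #include <linux/mm.h>
    #include <linux/string.h>
    #include <linux/types.h>

    /* Illustrative stand-in for the per-device part of a stripe_head. */
    struct stripe_dev {
        struct page *page;      /* stripe cache page (owned by raid5)     */
        struct page *orig_page; /* caller's page when the copy is skipped */
    };

    static void stage_write_data(struct stripe_dev *dev, struct page *bio_page,
                                 unsigned int offset, unsigned int len,
                                 bool skip_copy)
    {
        if (skip_copy) {
            /* Requires stable pages from the upper layer: parity is
             * computed from, and the write issued from, bio_page itself. */
            dev->orig_page = bio_page;
        } else {
            /* Default behaviour: make a stable copy in the stripe cache. */
            memcpy(page_address(dev->page) + offset,
                   page_address(bio_page) + offset, len);
        }
    }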
      
      Neilb:
        changed BUG_ON to WARN_ON
        Removed some assignments from raid5_build_block which are now not needed.
Signed-off-by: Shaohua Li <shli@fusionio.com>
Signed-off-by: NeilBrown <neilb@suse.de>
      d592a996
3. 19 Nov 2013, 1 commit
4. 14 Nov 2013, 1 commit
• raid5: relieve lock contention in get_active_stripe() · 566c09c5
  Shaohua Li authored
get_active_stripe() is the last place we have lock contention. It has two
paths: one where the stripe isn't found and a new stripe is allocated, and
the other where the stripe is found.
      
      The first path basically calls __find_stripe and init_stripe. It accesses
      conf->generation, conf->previous_raid_disks, conf->raid_disks,
      conf->prev_chunk_sectors, conf->chunk_sectors, conf->max_degraded,
conf->prev_algo, conf->algorithm, the stripe_hashtbl and inactive_list. Apart
from stripe_hashtbl and inactive_list, the other fields change very rarely.
      
With this patch, we split inactive_list and add new hash locks. Each free
stripe belongs to a specific inactive list, determined by the stripe's
lock_hash. Note that even a stripe without a sector assigned has a lock_hash
assigned. A stripe's inactive list is protected by a hash lock, which is also
determined by its lock_hash. The lock_hash is derived from the current
stripe_hashtbl hash, which guarantees any stripe_hashtbl list will be assigned
to a specific lock_hash, so we can use the new hash lock to protect the
stripe_hashtbl list too. The goal of the new hash locks is that only they are
needed in the first path of get_active_stripe(). Since we have several hash
locks, lock contention is relieved significantly.
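A minimal sketch of the locking scheme (the constants and helper names are
illustrative; the real code differs in detail): the lock index is derived from
the same bits as the stripe hash, so each hash chain and its inactive list
always map to one lock, and rare configuration changes take every hash lock.

    #include <linux/spinlock.h>
    #include <linux/types.h>

    #define NR_STRIPE_HASH_LOCKS    8
    #define STRIPE_HASH_LOCKS_MASK  (NR_STRIPE_HASH_LOCKS - 1)
    #define STRIPE_SHIFT            3   /* illustrative: 4KiB stripes */

    /* Every stripe, even one without a sector yet, gets a lock_hash; a whole
     * stripe_hashtbl chain maps to exactly one of the 8 locks. */
    static inline int stripe_hash_locks_hash(sector_t sect)
    {
        return (sect >> STRIPE_SHIFT) & STRIPE_HASH_LOCKS_MASK;
    }

    /* Rare shape/configuration changes must take device_lock plus all hash
     * locks; taking them in index order keeps the ordering consistent. */
    static void lock_all_hash_locks_irq(spinlock_t *hash_locks)
    {
        int i;

        spin_lock_irq(&hash_locks[0]);
        for (i = 1; i < NR_STRIPE_HASH_LOCKS; i++)
            spin_lock_nest_lock(&hash_locks[i], &hash_locks[0]);
    }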
      
The first path of get_active_stripe() also accesses the other fields; since
they change rarely, changing them now requires taking conf->device_lock and
all hash locks. For such a slow path, this isn't a problem.
      
If we need to take both device_lock and a hash lock, we always take the hash
lock first. The tricky part is release_stripe and friends, which need to take
device_lock first. Neil's suggestion is to put inactive stripes on a temporary
list and re-add them to inactive_list after device_lock is released. In this
way, we add stripes to the temporary list with device_lock held and remove
stripes from the list with the hash lock held. We don't allow concurrent
access to the temporary list, which means we need to allocate a temporary
list for every participant of release_stripe.
      
One downside is that free stripes are maintained in their inactive list and
can't move between the lists. By default we have 256 stripes in total and 8
lists, so each list will have 32 stripes. It's possible that one list has a
free stripe while another hasn't. The chance should be rare because stripe
allocation is evenly distributed. And we can always allocate more stripes for
the cache; several megabytes of memory isn't a big deal.
      
This completely removes the lock contention of the first path of
get_active_stripe(). It slows down the second code path a little because we
now need to take two locks, but since the hash lock isn't contended, the
overhead should be quite small (a few atomic instructions). The second path
of get_active_stripe() (basically sequential writes or big-request-size
randwrite) still has lock contention.
Signed-off-by: Shaohua Li <shli@fusionio.com>
Signed-off-by: NeilBrown <neilb@suse.de>
      566c09c5
5. 14 Oct 2013, 1 commit
6. 02 Sep 2013, 1 commit
• raid5: only wakeup necessary threads · bfc90cb0
  Shaohua Li authored
If there are not enough stripes to handle, we'd better not always queue all
available work_structs. If one worker can only handle a few stripes or even
none, it will hurt request merging and create lock contention.

With this patch, the number of running work_structs depends on the number of
pending stripes. Note: some of the statistics used in the patch are accessed
without locking protection. This shouldn't matter; we just do our best to
avoid queueing unnecessary work_structs.
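A sketch of the wakeup decision (the struct layout and batch constant are
illustrative): wake only as many workers as the pending-stripe count
justifies; the counter is read locklessly, so a stale value just means a few
too many or too few wakeups.

    #include <linux/workqueue.h>
    #include <linux/atomic.h>
    #include <linux/kernel.h>

    #define MAX_STRIPE_BATCH 8              /* illustrative batch size */

    struct r5worker { struct work_struct work; };

    struct r5worker_group {                 /* illustrative layout */
        struct r5worker *workers;
        int worker_cnt;
        atomic_t pending_stripes;
    };

    static void wakeup_some_workers(struct r5worker_group *group,
                                    struct workqueue_struct *wq, int cpu)
    {
        int pending = atomic_read(&group->pending_stripes);
        int cnt = min(pending / MAX_STRIPE_BATCH + 1, group->worker_cnt);
        int i;

        for (i = 0; i < cnt; i++)
            queue_work_on(cpu, wq, &group->workers[i].work);
    }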
Signed-off-by: Shaohua Li <shli@fusionio.com>
Signed-off-by: NeilBrown <neilb@suse.de>
      bfc90cb0
7. 28 Aug 2013, 3 commits
• md/raid5: use seqcount to protect access to shape in make_request. · c46501b2
  NeilBrown authored
make_request() accesses various shape parameters (raid_disks, chunk_size
etc.) which might be changed by raid5_start_reshape().

If the latter is called at an awkward time during the former, the wrong
stripe_head might be used.

So introduce a 'seqcount' and, after finding a stripe_head, make sure
there is no reason to expect that we got the wrong one.
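A sketch of the seqcount pattern (the counter is a standalone variable here
for illustration; in the driver it would live in the r5conf, and the writer
is already serialized by the reshape path):

    #include <linux/seqlock.h>

    static seqcount_t gen_lock;     /* seqcount_init(&gen_lock) at array setup */

    /* Writer side, e.g. raid5_start_reshape(): bump the count around the
     * shape update; writers are assumed to be serialized elsewhere. */
    static void update_shape(void)
    {
        write_seqcount_begin(&gen_lock);
        /* ... update raid_disks, chunk_sectors, algorithm ... */
        write_seqcount_end(&gen_lock);
    }

    /* Reader side, e.g. make_request(): retry the stripe mapping if the
     * shape changed underneath us. */
    static void map_bio_to_stripe(void)
    {
        unsigned int seq;

        do {
            seq = read_seqcount_begin(&gen_lock);
            /* ... compute geometry from the shape parameters and
             *     look up / allocate the stripe_head ... */
        } while (read_seqcount_retry(&gen_lock, seq));
    }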
Signed-off-by: NeilBrown <neilb@suse.de>
      c46501b2
• raid5: offload stripe handle to workqueue · 851c30c9
  Shaohua Li authored
      This is another attempt to create multiple threads to handle raid5 stripes.
      This time I use workqueue.
      
raid5 handles requests (especially writes) in stripe units. A stripe is
page-size aligned and spans all disks. When any disk sector is written, raid5
runs a state machine for the corresponding stripe, which includes reading some
disks of the stripe, calculating parity, and writing some disks of the stripe.
The state machine currently runs in the raid5d thread. Since there is only one
thread, it doesn't scale well for high speed storage. An obvious solution is
multi-threading.
      
To get better performance, we have some requirements:
a. Locality. A stripe corresponding to a request submitted from one cpu is
better handled by a thread on the local cpu or local node. The local cpu is
preferred, but it can sometimes be a bottleneck, for example when parity
calculation is too heavy; running on the local node has wider applicability.
b. Configurability. Different raid5 array setups might need different
configuration, especially the thread number. More threads don't always mean
better performance because of lock contention.
      
My original implementation created some kernel threads. There were interfaces
to control which cpus' stripes each thread should handle, and userspace could
set the affinity of the threads. This provides the biggest flexibility and
configurability, but it's hard to use, and a new thread pool implementation
is apparently disfavored.
      
The recent workqueue improvements are quite promising. An unbound workqueue
will be bound to a NUMA node. If WQ_SYSFS is set on the workqueue, there are
sysfs options for affinity setting; for example, we can include only one HT
sibling in the affinity. Work items are non-reentrant by default, and we can
control the number of running threads by limiting the number of dispatched
work_structs.
      
In this patch, I created several stripe worker groups. A group corresponds to
a NUMA node: stripes from cpus of one node are added to that group's list, and
workqueue threads of that node only handle stripes of that node's worker
group. In this way, stripe handling has NUMA node locality. And as I said, we
can control the thread number by limiting the number of dispatched
work_structs.
      
The work_struct callback function handles several stripes in one run (see the
sketch below). Typical workqueue usage is to run one unit of work per
work_struct; in the raid5 case, the unit would be a stripe. But we can't do
that:
a. Though handling a stripe doesn't need a lock, because of reference
accounting and because the stripe isn't on any list, queuing a work_struct for
each stripe would make the workqueue lock very heavily contended.
b. blk_start_plug()/blk_finish_plug() should surround stripe handling, as we
might dispatch requests. If each work_struct only handled one stripe, such a
block plug would be meaningless.
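A sketch of the batched work callback (the workqueue flags follow the
description above; MAX_STRIPE_BATCH and handle_one_group_stripe() are
illustrative helpers): each callback drains a bounded batch of stripes from
its NUMA-node group under one block plug, so dispatched requests can still
be merged.

    #include <linux/workqueue.h>
    #include <linux/blkdev.h>
    #include <linux/init.h>
    #include <linux/errno.h>
    #include <linux/types.h>

    #define MAX_STRIPE_BATCH 8          /* illustrative batch size */

    bool handle_one_group_stripe(void); /* illustrative: pop and handle one
                                         * stripe from this node's group,
                                         * return false when the list is empty */

    static struct workqueue_struct *raid5_wq;

    static int __init raid5_wq_init(void)
    {
        /* unbound so workers follow the NUMA node; WQ_SYSFS for affinity knobs */
        raid5_wq = alloc_workqueue("raid5wq", WQ_UNBOUND | WQ_SYSFS, 0);
        return raid5_wq ? 0 : -ENOMEM;
    }

    static void raid5_do_work(struct work_struct *work)
    {
        struct blk_plug plug;
        int handled = 0;

        /* one plug around the whole batch so dispatched requests can merge */
        blk_start_plug(&plug);
        while (handled < MAX_STRIPE_BATCH && handle_one_group_stripe())
            handled++;
        blk_finish_plug(&plug);
    }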
      
This implementation can't do very fine grained configuration, but NUMA binding
is the most popular usage model and should be enough for most workloads.

Note: since we have only one stripe queue, switching to multi-threading might
decrease the request size dispatched down to the low level layer. The impact
depends on thread number, raid configuration and workload, so multi-threaded
raid5 might not be appropriate for all setups.
      
      Changes V1 -> V2:
      1. remove WQ_NON_REENTRANT
      2. disabling multi-threading by default
      3. Add more descriptions in changelog
Signed-off-by: Shaohua Li <shli@fusionio.com>
Signed-off-by: NeilBrown <neilb@suse.de>
      851c30c9
• raid5: make release_stripe lockless · 773ca82f
  Shaohua Li authored
release_stripe still has big lock contention. With this patch we just add the
stripe to an llist without taking device_lock, and let the raid5d thread do
the real stripe release, since it must hold device_lock anyway. In this way,
release_stripe doesn't hold any locks.

The side effect is that the order in which stripes are released changes. That
doesn't sound like a big deal: stripes are never handled in order, and the
block layer can already do nice request merging, which means order isn't that
important.
      
I kept the unplug release batch, which is unnecessary with this patch from a
lock contention avoidance point of view; in fact, if we deleted it, the
stripe_head release_list and lru could share storage. But the unplug release
batch is also helpful for request merging. We could probably delay waking up
raid5d until unplug, but I'm still worried about the case where raid5d is
already running.
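A sketch of the lock-free handoff (field names such as released_stripes and
release_list, and the locked helper __release_stripe(), follow the description
above but are illustrative here): the hot path only does an llist push; raid5d
later grabs the whole list atomically and does the real release under
device_lock, which it has to take anyway.

    #include <linux/llist.h>
    #include <linux/spinlock.h>

    /* hot path: lock-free push; kick raid5d only when the list was empty */
    static void release_stripe_fast(struct r5conf *conf, struct stripe_head *sh)
    {
        if (llist_add(&sh->release_list, &conf->released_stripes))
            md_wakeup_thread(conf->mddev->thread);
    }

    /* raid5d: take the whole list in one shot, then do the real release
     * under device_lock, which raid5d holds anyway */
    static void flush_released_stripes(struct r5conf *conf)
    {
        struct llist_node *node = llist_del_all(&conf->released_stripes);

        spin_lock_irq(&conf->device_lock);
        while (node) {
            struct stripe_head *sh =
                llist_entry(node, struct stripe_head, release_list);

            node = llist_next(node);
            __release_stripe(conf, sh);     /* existing locked helper */
        }
        spin_unlock_irq(&conf->device_lock);
    }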
Signed-off-by: Shaohua Li <shli@fusionio.com>
Signed-off-by: NeilBrown <neilb@suse.de>
      773ca82f
8. 25 Jul 2013, 1 commit
• md/raid5: fix interaction of 'replace' and 'recovery'. · f94c0b66
  NeilBrown authored
      If a device in a RAID4/5/6 is being replaced while another is being
      recovered, then the writes to the replacement device currently don't
      happen, resulting in corruption when the replacement completes and the
      new drive takes over.
      
      This is because the replacement writes are only triggered when
      's.replacing' is set and not when the similar 's.sync' is set (which
      is the case during resync and recovery - it means all devices need to
      be read).
      
      So schedule those writes when s.replacing is set as well.
      
      In this case we cannot use "STRIPE_INSYNC" to record that the
      replacement has happened as that is needed for recording that any
      parity calculation is complete.  So introduce STRIPE_REPLACED to
      record if the replacement has happened.
      
For safety we should also check that STRIPE_COMPUTE_RUN is not set.
This has a similar effect to the "s.locked == 0" test.  The latter
ensures that no IO has been flagged but not started.  The former
checks that no parity calculation has been flagged but not started.
We must wait for both of these to complete before triggering the
'replace'.
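A condensed sketch of that test (the flag and field names follow the
changelog; the helper itself is illustrative, not the literal diff): only once
nothing is in flight, no parity computation is pending and the replacement
hasn't already been done would the caller set R5_WantReplace on the devices
flagged R5_NeedReplace and then mark the stripe STRIPE_REPLACED.

    #include <linux/bitops.h>
    #include <linux/types.h>

    /* Illustrative helper: would be called from handle_stripe() with the
     * usual stripe_head (sh) and stripe_head_state (s) in hand. */
    static bool ready_to_write_replacement(struct stripe_head *sh,
                                           struct stripe_head_state *s)
    {
        return (s->replacing || s->syncing) &&  /* recovery behaves like sync */
               s->locked == 0 &&                /* no IO flagged but not started */
               !test_bit(STRIPE_COMPUTE_RUN, &sh->state) &&
               !test_bit(STRIPE_REPLACED, &sh->state);
    }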
      
      Add a similar test to the subsequent check for "are we finished yet".
      This possibly isn't needed (is subsumed in the STRIPE_INSYNC test),
      but it makes it more obvious that the REPLACE will happen before we
      think we are finished.
      
      Finally if a NeedReplace device is not UPTODATE then that is an
      error.  We really must trigger a warning.
      
      This bug was introduced in commit 9a3e1101
      (md/raid5:  detect and handle replacements during recovery.)
      which introduced replacement for raid5.
      That was in 3.3-rc3, so any stable kernel since then would benefit
      from this fix.
      
      Cc: stable@vger.kernel.org (3.3+)
Reported-by: qindehua <13691222965@163.com>
Tested-by: qindehua <qindehua@163.com>
Signed-off-by: NeilBrown <neilb@suse.de>
      f94c0b66
9. 20 Mar 2013, 2 commits
• md: remove CONFIG_MULTICORE_RAID456 entirely · 238f5908
  Paul Bolle authored
One instance of this Kconfig macro remained after commit
51acbcec ("md: remove
CONFIG_MULTICORE_RAID456"). Remove that one too. And, while we're at it,
also remove it from the defconfig files that carry it.
Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: NeilBrown <neilb@suse.de>
      238f5908
• md/raid5: ensure sync and DISCARD don't happen at the same time. · f8dfcffd
  NeilBrown authored
      A number of problems can occur due to races between
      resync/recovery and discard.
      
      - if sync_request calls handle_stripe() while a discard is
        happening on the stripe, it might call handle_stripe_clean_event
        before all of the individual discard requests have completed
        (so some devices are still locked, but not all).
        Since commit ca64cae9
           md/raid5: Make sure we clear R5_Discard when discard is finished.
        this will cause R5_Discard to be cleared for the parity device,
        so handle_stripe_clean_event() will not be called when the other
        devices do become unlocked, so their ->written will not be cleared.
        This ultimately leads to a WARN_ON in init_stripe and a lock-up.
      
      - If handle_stripe_clean_event() does clear R5_UPTODATE at an awkward
        time for resync, it can lead to s->uptodate being less than disks
        in handle_parity_checks5(), which triggers a BUG (because it is
        one).
      
So:
 - keep R5_Discard on the parity device until all other devices have
   completed their discard request
 - make sure we don't try to have a 'discard' and a 'sync' action at
   the same time.
   This involves a new stripe flag so we know when a 'discard' is
   happening, and the use of R5_Overlap on the parity disk so that, when a
   discard is wanted while a sync is active, we know to wake up the
   discard at the appropriate time (a minimal sketch of this exclusion
   follows).
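A minimal sketch of the exclusion (the flag name STRIPE_DISCARD and the helper
shape are illustrative): the discard path marks the stripe, and a sync
arriving while it is marked parks itself by setting R5_Overlap on the parity
disk, so it is woken once the last per-device discard completes and the flag
is cleared.

    #include <linux/bitops.h>
    #include <linux/types.h>

    static bool stripe_can_start_sync(struct stripe_head *sh, int pd_idx)
    {
        if (test_bit(STRIPE_DISCARD, &sh->state)) {
            /* a discard owns this stripe: queue behind it and wait to be
             * woken when the discard finishes */
            set_bit(R5_Overlap, &sh->dev[pd_idx].flags);
            return false;
        }
        return true;
    }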
      
      Discard support for RAID5 was added in 3.7, so this is suitable for
      any -stable kernel since 3.7.
      
      Cc: stable@vger.kernel.org (v3.7+)
Reported-by: Jes Sorensen <Jes.Sorensen@redhat.com>
Tested-by: Jes Sorensen <Jes.Sorensen@redhat.com>
Signed-off-by: NeilBrown <neilb@suse.de>
      f8dfcffd
10. 11 Oct 2012, 1 commit
• MD: raid5 trim support · 620125f2
  Shaohua Li authored
      
Discard for raid4/5/6 has limitations. If the discard request size is
small, we do the discard for one disk, but we need to calculate parity and
write the parity disk.  To correctly calculate parity, zero_after_discard
must be guaranteed. Even if it is, we do a discard for one disk but write
other disks, which makes the parity disks wear out fast. This doesn't make
sense. So an efficient discard for raid4/5/6 should discard all data disks
and parity disks, which requires the write pattern to be
(A, A+chunk_size, A+chunk_size*2...). If A's size is smaller than chunk_size,
such a pattern is almost impossible in practice. So in this patch, I only
handle the case where A's size equals chunk_size. That is, a discard request
should be aligned to the stripe size and its size should be a multiple of the
stripe size.
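A sketch of how that alignment requirement can be expressed as queue discard
limits (simplified; as noted below, the real patch also checks alignment
explicitly in make_request because the block layer can still pass down
misaligned discards):

    #include <linux/blkdev.h>

    static void raid5_set_discard_limits(struct request_queue *q,
                                         unsigned int chunk_sectors,
                                         int data_disks)
    {
        /* one full stripe = chunk_size on every data disk, in bytes */
        unsigned int stripe_bytes = (chunk_sectors << 9) * data_disks;

        q->limits.discard_alignment = stripe_bytes;
        q->limits.discard_granularity = stripe_bytes;
    }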
      
Since we can only handle requests with a specific alignment and size (or
the part of a request that fits whole stripes), we can't guarantee
zero_after_discard even if zero_after_discard is true in the low level
drives.

The block layer doesn't send down correctly aligned requests even when the
correct discard alignment is set, so I must filter them out.
      
For raid4/5/6 parity calculation, if data is 0, parity is 0. So if
zero_after_discard is true for all disks, data is consistent after a
discard.  Otherwise, data might be lost. Consider this scenario: discard a
stripe, write data to one disk and write the parity disk. The stripe could
still be inconsistent at that point, depending on whether data from the
other data disks or from the parity disk is used to calculate the new
parity. If a disk is broken, we can't restore it. So in this patch, we only
enable discard support if all disks have zero_after_discard.
      
If the discard fails on one disk, we face a similar inconsistency issue as
above. The patch makes discard follow the same path as a normal write
request: if a discard fails, a resync will be scheduled to make the data
consistent. Extra writes aren't good, but data consistency is important.
      
If a subsequent read/write request hits the raid5 cache of a discarded
stripe, the discarded dev page should be zero filled so that the data is
consistent. This patch always zeroes the dev pages for a discarded stripe.
This isn't optimal because a discard request doesn't need such a payload;
the next patch will avoid it.
Signed-off-by: Shaohua Li <shli@fusionio.com>
Signed-off-by: NeilBrown <neilb@suse.de>
      620125f2
11. 02 Aug 2012, 1 commit
• raid5: make_request use batch stripe release · 8811b596
  Shaohua Li authored
make_request() does a stripe release for every stripe, and the stripe usually
has count 1, which makes the previous release_stripe() optimization not work.
In my tests, this release_stripe() becomes the heaviest place to take
conf->device_lock after the previous patches are applied.

This patch batches stripe release: all the stripes are released at unplug
time. The STRIPE_ON_UNPLUG_LIST bit protects against concurrent access to
the stripe lru.
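A sketch of the batching (types and the release_stripe_batch() helper are
illustrative; blk_check_plugged() is the block-layer hook for per-plug
callbacks): each stripe is parked on a per-plug list instead of being
released immediately, and the whole batch is released when the plug is
flushed.

    #include <linux/blkdev.h>
    #include <linux/list.h>
    #include <linux/slab.h>
    #include <linux/bitops.h>
    #include <linux/kernel.h>

    struct raid5_plug_cb {
        struct blk_plug_cb  cb;
        struct list_head    list;   /* stripes gathered under this plug */
    };

    void release_stripe(struct stripe_head *sh);        /* existing driver function */
    void release_stripe_batch(struct list_head *head);  /* illustrative: release all,
                                                         * taking device_lock once */

    static void raid5_unplug(struct blk_plug_cb *blk_cb, bool from_schedule)
    {
        struct raid5_plug_cb *cb = container_of(blk_cb, struct raid5_plug_cb, cb);

        release_stripe_batch(&cb->list);
        kfree(cb);
    }

    static void defer_release(struct mddev *mddev, struct stripe_head *sh)
    {
        struct blk_plug_cb *blk_cb =
            blk_check_plugged(raid5_unplug, mddev, sizeof(struct raid5_plug_cb));
        struct raid5_plug_cb *cb;

        if (!blk_cb) {              /* no plug in progress: release now */
            release_stripe(sh);
            return;
        }
        cb = container_of(blk_cb, struct raid5_plug_cb, cb);
        if (!cb->list.next)         /* callback freshly allocated (zeroed) */
            INIT_LIST_HEAD(&cb->list);
        if (!test_and_set_bit(STRIPE_ON_UNPLUG_LIST, &sh->state))
            list_add_tail(&sh->lru, &cb->list);
        else
            release_stripe(sh);
    }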
Signed-off-by: Shaohua Li <shli@fusionio.com>
Signed-off-by: NeilBrown <neilb@suse.de>
      8811b596
12. 31 Jul 2012, 1 commit
13. 19 Jul 2012, 1 commit
• raid5: add a per-stripe lock · b17459c0
  Shaohua Li authored
Add a per-stripe lock to protect stripe-specific data. The purpose is to
reduce lock contention on conf->device_lock.

The stripe ->toread and ->towrite are protected by the per-stripe lock.
Access to the stripe's bio lists is always serialized by this lock, so adding
a bio to the lists (add_stripe_bio()) and removing a bio from the lists (like
ops_run_biofill()) do not race.
      
If the bios in the ->read, ->written ... lists are not shared by multiple
stripes, we don't need any lock to protect ->read and ->written, because
STRIPE_ACTIVE will protect them. If the bios are shared, there are two
protections:
1. bi_phys_segments acts as a reference count
2. traversing the list uses r5_next_bio, which ensures the traversal never
accesses a bio not belonging to the stripe
      
      Let's have an example:
      |  stripe1 |  stripe2    |  stripe3  |
      ...bio1......|bio2|bio3|....bio4.....
      
stripe2 has 4 bios; when it's finished, it will decrement bi_phys_segments for
all of them, but only call end_bio for bio2 and bio3. bio1->bi_next still
points to bio2, but this doesn't matter: when stripe1 is finished, it will not
touch bio2 because of the r5_next_bio check. Later, stripe1 will call end_bio
for bio1 and stripe3 will call end_bio for bio4.
      
Before add_stripe_bio() adds a bio to a stripe, we have already incremented
the bio's bi_phys_segments, so we don't need to worry about other stripes
releasing the bio.
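A sketch of the two protections (the STRIPE_SECTORS value and helper shapes
are illustrative; the driver packs more than a plain count into
bi_phys_segments):

    #include <linux/bio.h>
    #include <linux/types.h>

    #define STRIPE_SECTORS  8   /* illustrative: one 4KiB page per disk */

    /* 1. bi_phys_segments doubles as a reference count: each stripe the bio
     *    is added to takes a reference, and only the stripe dropping the
     *    last one may complete the bio.  Caller holds the serializing lock. */
    static bool stripe_bio_put(struct bio *bio)
    {
        return --bio->bi_phys_segments == 0;
    }

    /* 2. List traversal never walks past the current stripe: the next bio is
     *    followed only while it still falls inside this stripe's sector
     *    range, so a stripe never touches a neighbour's bios. */
    static struct bio *r5_next_bio(struct bio *bio, sector_t stripe_sector)
    {
        if (bio_end_sector(bio) < stripe_sector + STRIPE_SECTORS)
            return bio->bi_next;
        return NULL;
    }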
Signed-off-by: Shaohua Li <shli@fusionio.com>
Signed-off-by: NeilBrown <neilb@suse.de>
      b17459c0
14. 22 May 2012, 1 commit
15. 21 May 2012, 1 commit
16. 23 Dec 2011, 4 commits
• md/raid5: detect and handle replacements during recovery. · 9a3e1101
  NeilBrown authored
      During recovery we want to write to the replacement but not
      the original.  So we have two new flags
       - R5_NeedReplace if this stripe has a replacement that needs to
         be written at some stage
       - R5_WantReplace if NeedReplace, and the data is available, and
         a 'sync' has been requested on this stripe.
      
      We also distinguish between 'sync and replace' which need to read
      all other devices, and 'replace' which only needs to read the
      devices being replaced.
      
      Note that during resync we always write to any replacement device.
      It might not need to be written to, but as we don't read to compare,
      we have to write to be sure.
Signed-off-by: NeilBrown <neilb@suse.de>
      9a3e1101
• md/raid5: writes should get directed to replacement as well as original. · 977df362
  NeilBrown authored
      When writing, we need to submit two writes, one to the original, and
      one to the replacement - if there is a replacement.
      
      If the write to the replacement results in a write error, we just fail
      the device.  We only try to record write errors to the original.
      
      When writing for recovery, we shouldn't write to the original.  This
      will be addressed in a subsequent patch that generally addresses
      recovery.
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
      977df362
• md/raid5: raid5.h cleanup · ede7ee8b
  NeilBrown authored
      Remove some #defines that are no longer used, and replace some
      others with an enum.
      And remove an unused field.
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
      ede7ee8b
• md/raid5: allow each slot to have an extra replacement device · 671488cc
  NeilBrown authored
      Just enhance data structures to record a second device per slot to be
      used as a 'replacement' device, replacing the original.
      We also have a second bio in each slot in each stripe_head.  This will
      only be used when writing to the array - we need to write to both the
      original and the replacement at the same time, so will need two bios.
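A sketch of the shape of the data-structure change (fields trimmed to the
ones relevant here, layout simplified): each slot gains a second rdev pointer
for the replacement, and each per-device part of the stripe_head gains a
second bio so the original and the replacement can be written in the same
batch.

    #include <linux/bio.h>
    #include <linux/types.h>

    struct md_rdev;         /* declared in the md core */

    struct disk_info {
        struct md_rdev *rdev, *replacement;
    };

    /* per-device part of a stripe_head (simplified) */
    struct r5dev {
        struct bio      req, rreq;      /* write to original / to replacement */
        struct bio_vec  vec, rvec;
        struct page     *page;
        unsigned long   flags;
        sector_t        sector;
    };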
      
      For now, only try using the replacement drive for aligned-reads.
      In this case, we prefer the replacement if it has been recovered far
      enough, otherwise use the original.
      
      This includes a small enhancement.  Previously we would only do
      aligned reads if the target device was fully recovered.  Now we also
      do them if it has recovered far enough.
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
      671488cc
17. 11 Oct 2011, 4 commits
18. 28 Jul 2011, 3 commits
19. 26 Jul 2011, 4 commits
• md/raid5: add some more fields to stripe_head_state · c5709ef6
  NeilBrown authored
      Adding these three fields will allow more common code to be moved
      to handle_stripe()
      
      struct field rearrangement by Namhyung Kim.
Signed-off-by: NeilBrown <neilb@suse.de>
Reviewed-by: Namhyung Kim <namhyung@gmail.com>
      c5709ef6
• md/raid5: unify stripe_head_state and r6_state · f2b3b44d
  NeilBrown authored
      'struct stripe_head_state' stores state about the 'current' stripe
      that is passed around while handling the stripe.
      For RAID6 there is an extension structure: r6_state, which is also
      passed around.
      There is no value in keeping these separate, so move the fields from
      the latter into the former.
      
This means that all code now needs to treat s->failed_num as a small
array, but this is a small cost.
Signed-off-by: NeilBrown <neilb@suse.de>
Reviewed-by: Namhyung Kim <namhyung@gmail.com>
      f2b3b44d
• md/raid5: replace sh->lock with an 'active' flag. · c4c1663b
  NeilBrown authored
      sh->lock is now mainly used to ensure that two threads aren't running
      in the locked part of handle_stripe[56] at the same time.
      
      That can more neatly be achieved with an 'active' flag which we set
      while running handle_stripe.  If we find the flag is set, we simply
      requeue the stripe for later by setting STRIPE_HANDLE.
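A sketch of the pattern (flag names as described above, body elided): one
'active' bit takes over sh->lock's job of keeping two threads out of
handle_stripe() for the same stripe.

    #include <linux/bitops.h>

    static void handle_stripe(struct stripe_head *sh)
    {
        if (test_and_set_bit_lock(STRIPE_ACTIVE, &sh->state)) {
            /* someone else is handling this stripe: just ask for
             * another pass later */
            set_bit(STRIPE_HANDLE, &sh->state);
            return;
        }

        /* ... analyse the stripe state and run the state machine ... */

        clear_bit_unlock(STRIPE_ACTIVE, &sh->state);
    }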
      
      For safety we take ->device_lock while examining the state of the
      stripe and creating a summary in 'stripe_head_state / r6_state'.
      This possibly isn't needed but as shared fields like ->toread,
      ->towrite are checked it is safer for now at least.
      
      We leave the label after the old 'unlock' called "unlock" because it
      will disappear in a few patches, so renaming seems pointless.
      
      This leaves the stripe 'locked' for longer as we clear STRIPE_ACTIVE
      later, but that is not a problem.
Signed-off-by: NeilBrown <neilb@suse.de>
Reviewed-by: Namhyung Kim <namhyung@gmail.com>
      c4c1663b
• md/raid5: Remove use of sh->lock in sync_request · 83206d66
  NeilBrown authored
      This is the start of a series of patches to remove sh->lock.
      
      sync_request takes sh->lock before setting STRIPE_SYNCING to ensure
      there is no race with testing it in handle_stripe[56].
      
      Instead, use a new flag STRIPE_SYNC_REQUESTED and test it early
      in handle_stripe[56] (after getting the same lock) and perform the
      same set/clear operations if it was set.
Signed-off-by: NeilBrown <neilb@suse.de>
Reviewed-by: Namhyung Kim <namhyung@gmail.com>
      83206d66
20. 18 Apr 2011, 1 commit
• md - remove old plugging code. · 482c0834
  NeilBrown authored
      md has some plugging infrastructure for RAID5 to use because the
      normal plugging infrastructure required a 'request_queue', and when
      called from dm, RAID5 doesn't have one of those available.
      
      This relied on the ->unplug_fn callback which doesn't exist any more.
      
So remove all of that code, both in md and raid5.  Subsequent patches
will restore the plugging functionality.
Signed-off-by: NeilBrown <neilb@suse.de>
      482c0834
21. 10 Mar 2011, 1 commit
22. 10 Sep 2010, 1 commit
• md: implment REQ_FLUSH/FUA support · e9c7469b
  Tejun Heo authored
This patch converts md to support REQ_FLUSH/FUA instead of the now
deprecated REQ_HARDBARRIER.  In the core part (md.c), the following
changes are notable.
      
      * Unlike REQ_HARDBARRIER, REQ_FLUSH/FUA don't interfere with
        processing of other requests and thus there is no reason to mark the
        queue congested while FLUSH/FUA is in progress.
      
* REQ_FLUSH/FUA failures are final and their users don't need retry
  logic.  Retry logic is removed.
      
      * Preflush needs to be issued to all member devices but FUA writes can
        be handled the same way as other writes - their processing can be
        deferred to request_queue of member devices.  md_barrier_request()
        is renamed to md_flush_request() and simplified accordingly.
      
      For linear, raid0 and multipath, the core changes are enough.  raid1,
      5 and 10 need the following conversions.
      
      * raid1: Handling of FLUSH/FUA bio's can simply be deferred to
        request_queues of member devices.  Barrier related logic removed.
      
* raid5: Queue draining logic dropped.  The FUA bit is propagated through
  biodrain and stripe reconstruction such that all the updated parts
  of the stripe are written out with FUA writes if any of the dirtying
  writes was FUA (a short sketch follows below).  preread_active_stripes
  handling in make_request() is updated as suggested by Neil Brown.
      
      * raid10: FUA bit needs to be propagated to write clones.
      
      linear, raid0, 1, 5 and 10 tested.
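A sketch of the raid5 FUA propagation (the helpers are illustrative; bi_rw is
the request-flag field of that era, and the per-device want-FUA flag stands in
for the driver's own): remember whether any bio drained into a stripe device
carried FUA, then issue every write generated for that device with REQ_FUA.

    #include <linux/bio.h>
    #include <linux/blk_types.h>
    #include <linux/bitops.h>
    #include <linux/fs.h>

    /* called while draining a write bio into a stripe device */
    static void note_fua_on_drain(struct r5dev *dev, struct bio *wbio)
    {
        if (wbio->bi_rw & REQ_FUA)
            set_bit(R5_WantFUA, &dev->flags);
    }

    /* used when building the write for that device during reconstruction */
    static unsigned long stripe_write_rw(struct r5dev *dev)
    {
        return WRITE | (test_bit(R5_WantFUA, &dev->flags) ? REQ_FUA : 0);
    }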
Signed-off-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Neil Brown <neilb@suse.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
      e9c7469b
23. 26 Jul 2010, 4 commits