1. 02 Oct 2015: 3 commits
  2. 01 Sep 2015: 9 commits
    • md/raid5: ensure device failure recorded before write request returns. · c3cce6cd
      Committed by NeilBrown
      When a write to one of the devices of a RAID5/6 fails, the failure is
      recorded in the metadata of the other devices so that after a restart
      the data on the failed drive won't be trusted even if that drive seems
      to be working again (maybe a cable was unplugged).
      
      Similarly when we record a bad-block in response to a write failure,
      we must not let the write complete until the bad-block update is safe.
      
      Currently there is no interlock between the write request completing
      and the metadata update.  So it is possible that the write will
      complete, the app will confirm success in some way, and then the
      machine will crash before the metadata update completes.
      
      This is an extremely small hole for a race to fit in, but it is
      theoretically possible and so should be closed.
      
      So:
       - set MD_CHANGE_PENDING when requesting a metadata update for a
         failed device, so we can know with certainty when it completes
       - queue requests that completed when MD_CHANGE_PENDING is set to
         only be processed after the metadata update completes
       - call raid_end_bio_io() on bios in that queue when the time comes.
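      
      A minimal sketch of that interlock, using generic md/bio primitives; the
      field names (e.g. 'return_bi') and call sites are illustrative, and plain
      bio_endio() stands in for the raid_end_bio_io() call mentioned above:
      
      /* completion path: don't finish the bio while the superblock update
       * for the failed device is still pending */
      if (test_bit(MD_CHANGE_PENDING, &conf->mddev->flags)) {
              spin_lock_irq(&conf->device_lock);
              bio_list_add(&conf->return_bi, bi);      /* park the bio */
              spin_unlock_irq(&conf->device_lock);
              md_wakeup_thread(conf->mddev->thread);
      } else {
              bio_endio(bi);                           /* safe to complete */
      }
      
      /* raid5d, once the metadata write has finished and MD_CHANGE_PENDING
       * has been cleared: drain the parked bios */
      while ((bi = bio_list_pop(&conf->return_bi)))
              bio_endio(bi);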
      Signed-off-by: NeilBrown <neilb@suse.com>
    • md/raid5: use bio_list for the list of bios to return. · 34a6f80e
      Committed by NeilBrown
      This will make it easier to splice two lists together, which will
      be needed in a future patch.
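      
      For reference, the bio_list API this moves to makes both queuing and
      splicing one-liners (a generic sketch, not the raid5 diff itself;
      'some_completed_bio' is a placeholder):
      
      struct bio_list return_bi, final;
      struct bio *bi;
      
      bio_list_init(&return_bi);
      bio_list_init(&final);
      bio_list_add(&return_bi, some_completed_bio);  /* append one bio */
      bio_list_merge(&final, &return_bi);            /* splice a list in O(1) */
      while ((bi = bio_list_pop(&final)))            /* drain and complete */
              bio_endio(bi);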
      Signed-off-by: NeilBrown <neilb@suse.com>
    • md/raid5: handle possible race as reshape completes. · 6cbd8148
      Committed by NeilBrown
      It is possible (though unlikely) for a reshape to be
      interrupted between the time that end_reshape is called
      and the time when raid5_finish_reshape is called.
      
      This can leave conf->reshape_progress set to MaxSector,
      but mddev->reshape_position not.
      
      This combination confuses reshape_request() when ->reshape_backwards is set.
      As conf->reshape_progress is so high, it seems the reshape hasn't
      really begun.  But assuming MaxSector is a valid address only
      leads to sorrow.
      
      So ensure reshape_position and reshape_progress both agree,
      and add an extra check in reshape_request() just in case they don't.
      Signed-off-by: NeilBrown <neilb@suse.com>
    • md: be careful when testing resync_max against curr_resync_completed. · c5e19d90
      Committed by NeilBrown
      While it generally shouldn't happen, it is not impossible for
      curr_resync_completed to exceed resync_max.
      This can particularly happen when reshaping RAID5 - the current
      status isn't copied to curr_resync_completed promptly, so when it
      is, it can exceed resync_max.
      This happens when the reshape is 'frozen', resync_max is set low,
      and reshape is re-enabled.
      
      Taking a difference between two unsigned numbers is always dangerous
      anyway, so add a test to behave correctly if
         curr_resync_completed > resync_max
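      
      A self-contained model of why the test matters (plain C, not the md code):
      an unchecked unsigned difference wraps around once curr_resync_completed
      has overtaken resync_max.
      
      #include <stdio.h>
      
      typedef unsigned long long sector_t;  /* stand-in for the kernel type */
      
      static sector_t sectors_left(sector_t curr_resync_completed,
                                   sector_t resync_max)
      {
              if (curr_resync_completed > resync_max)
                      return 0;  /* the added test: clamp instead of wrapping */
              return resync_max - curr_resync_completed;
      }
      
      int main(void)
      {
              printf("%llu\n", sectors_left(1000, 4096));  /* 3096         */
              printf("%llu\n", sectors_left(5000, 4096));  /* 0, not ~2^64 */
              return 0;
      }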
      Signed-off-by: NeilBrown <neilb@suse.com>
    • md/raid5: remove incorrect "min_t()" when calculating writepos. · c74c0d76
      Committed by NeilBrown
      This code is calculating:
        writepos, which is the furthest along address (device-space) that we
           *will* be writing to
        readpos, which is the earliest address that we *could* possibly read
           from, and
        safepos, which is the earliest address in the 'old' section that we
           might read from after a crash when the reshape position is
           recovered from metadata.
      
        The first is a precise calculation, so clipping at zero doesn't
        make sense.  As the reshape position is now guaranteed to always be
        a multiple of reshape_sectors and as we already BUG_ON when
        reshape_progress is zero, there is no point in this min_t() call.
      
        The readpos and safepos are worst case - actual value depends on
        precise geometry.  That worst case could be negative, which is only
        a problem because we are storing the value in an unsigned.
        So leave the min_t() for those.
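      
      Roughly what the backwards-reshape branch looks like after this change
      (a sketch, with the surrounding bookkeeping omitted):
      
      if (mddev->reshape_backwards) {
              /* writepos is exact and already known to be a non-zero
               * multiple of reshape_sectors, so no clamp is needed */
              writepos -= reshape_sectors;
              readpos  += reshape_sectors;
              safepos  += reshape_sectors;
      } else {
              writepos += reshape_sectors;
              /* readpos/safepos are worst-case values held in an unsigned
               * sector_t, so the clamp stays for them */
              readpos -= min_t(sector_t, reshape_sectors, readpos);
              safepos -= min_t(sector_t, reshape_sectors, safepos);
      }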
      Signed-off-by: NeilBrown <neilb@suse.com>
    • md/raid5: strengthen check on reshape_position at run. · 05256d98
      Committed by NeilBrown
      When reshaping, we work in units of the largest chunk size.
      If changing from a larger to a smaller chunk size, that means we
      reshape more than one stripe at a time.  So the required alignment
      of reshape_position needs to take into account both the old
      and new chunk size.
      
      This means that both 'here_new' and 'here_old' are calculated with
      respect to the same (maximum) chunk size, so testing if they are the
      same when delta_disks is zero becomes pointless.
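      
      A small self-contained model of the strengthened check (plain C; the real
      code works on positions derived from mddev fields, but the alignment rule
      is the same):
      
      #include <stdbool.h>
      #include <stdio.h>
      
      /* A reshape step covers a whole stripe of the *larger* chunk, so that is
       * what reshape_position must be aligned to (sizes in sectors, powers of
       * two as in md). */
      static bool reshape_pos_aligned(unsigned long long pos,
                                      unsigned old_chunk, unsigned new_chunk)
      {
              unsigned chunk = old_chunk > new_chunk ? old_chunk : new_chunk;
      
              return (pos & (chunk - 1)) == 0;
      }
      
      int main(void)
      {
              /* shrinking chunks from 1024 to 256 sectors: align to 1024 */
              printf("%d\n", reshape_pos_aligned(4096, 1024, 256)); /* 1 */
              printf("%d\n", reshape_pos_aligned(4352, 1024, 256)); /* 0 */
              return 0;
      }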
      Signed-off-by: NeilBrown <neilb@suse.com>
    • md/raid5: switch to use conf->chunk_sectors in place of mddev->chunk_sectors where possible · 3cb5edf4
      Committed by NeilBrown
      The chunk_sectors and new_chunk_sectors fields of mddev can be changed
      any time (via sysfs) that the reconfig mutex can be taken.  So raid5
      keeps internal copies in 'conf' which are stable except for a short
      locked moment when reshape stops/starts.
      
      So any access that does not hold reconfig_mutex should use the 'conf'
      values, not the 'mddev' values.
      Several don't.
      
      This could result in corruption if new values were written at awkward
      times.
      
      Also use min() or max() rather than open-coding.
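      
      Illustrative one-liner combining both points (read the stable 'conf'
      copies and use max() instead of open-coding); not the literal diff:
      
      /* largest chunk in play, whether or not a reshape is running */
      unsigned int chunk_sectors = max(conf->chunk_sectors,
                                       conf->prev_chunk_sectors);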
      Signed-off-by: NeilBrown <neilb@suse.com>
    • md/raid5: always set conf->prev_chunk_sectors and ->prev_algo · 5cac6bcb
      Committed by NeilBrown
      These aren't really needed when no reshape is happening,
      but it is safer to have them always set to a meaningful value.
      The next patch will use ->prev_chunk_sectors without checking
      if a reshape is happening (because that makes the code simpler),
      and this patch makes that safe.
      Signed-off-by: NeilBrown <neilb@suse.com>
    • md/raid5: consider updating reshape_position at start of reshape. · 92140480
      Committed by NeilBrown
      md/raid5 only updates ->reshape_position (which is stored in
      metadata and is authoritative) occasionally, but particularly
      when getting close to ->resync_max, as it must be correct
      when ->resync_max is reached.
      
      When mdadm tries to stop an array which is reshaping it will:
       - freeze the reshape,
       - set resync_max to where the reshape has reached.
       - unfreeze the reshape.
      When this happens, the reshape is aborted and then restarted.
      
      The restart doesn't check that resync_max is close, and so doesn't
      update ->reshape_position like it should.
      This results in the reshape stopping, but ->reshape_position being
      incorrect.
      
      So on that first call to reshape_request, make sure ->reshape_position
      is updated if needed.
      Signed-off-by: NeilBrown <neilb@suse.com>
  3. 14 Aug 2015: 3 commits
    • block: kill merge_bvec_fn() completely · 8ae12666
      Committed by Kent Overstreet
      As generic_make_request() is now able to handle arbitrarily sized bios,
      it's no longer necessary for each individual block driver to define its
      own ->merge_bvec_fn() callback. Remove every invocation completely.
      
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Lars Ellenberg <drbd-dev@lists.linbit.com>
      Cc: drbd-user@lists.linbit.com
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Yehuda Sadeh <yehuda@inktank.com>
      Cc: Sage Weil <sage@inktank.com>
      Cc: Alex Elder <elder@kernel.org>
      Cc: ceph-devel@vger.kernel.org
      Cc: Alasdair Kergon <agk@redhat.com>
      Cc: Mike Snitzer <snitzer@redhat.com>
      Cc: dm-devel@redhat.com
      Cc: Neil Brown <neilb@suse.de>
      Cc: linux-raid@vger.kernel.org
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
      Acked-by: NeilBrown <neilb@suse.de> (for the 'md' bits)
      Acked-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
      [dpark: also remove ->merge_bvec_fn() in dm-thin as well as
       dm-era-target, and resolve merge conflicts]
      Signed-off-by: Dongsu Park <dpark@posteo.net>
      Signed-off-by: Ming Lin <ming.l@ssi.samsung.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
    • md/raid5: get rid of bio_fits_rdev() · 7140aafc
      Committed by Kent Overstreet
      Remove bio_fits_rdev() as sufficient merge_bvec_fn() handling is now
      performed by blk_queue_split() in md_make_request().
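      
      For context, the split now happens once at the top of md's make_request
      path, roughly like this (a sketch of the 4.3-era call, not the full
      function):
      
      /* in md_make_request(), before handing the bio to the personality */
      blk_queue_split(q, &bio, q->bio_split);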
      
      Cc: Neil Brown <neilb@suse.de>
      Cc: linux-raid@vger.kernel.org
      Acked-by: NeilBrown <neilb@suse.de>
      Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
      [dpark: add more description in commit message]
      Signed-off-by: Dongsu Park <dpark@posteo.net>
      Signed-off-by: Ming Lin <ming.l@ssi.samsung.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
    • md/raid5: split bio for chunk_aligned_read · 7ef6b12a
      Committed by Ming Lin
      If a read request fits entirely in a chunk, it will be passed directly to the
      underlying device (provided it hasn't failed, of course).  If it doesn't fit,
      the slightly less efficient path that uses the stripe_cache is used.
      Requests that get to the stripe cache are always completely split up as
      necessary.
      
      So with RAID5, ripping out the merge_bvec_fn doesn't cause it to stop working,
      but could cause it to take the less efficient path more often.
      
      All that is needed to manage this is for 'chunk_aligned_read' to do some bio
      splitting, much like the RAID0 code does.
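      
      A sketch of that splitting, modelled on the RAID0 approach (simplified;
      the real change wraps the existing chunk_aligned_read() in a loop):
      
      sector_t sector = raid_bio->bi_iter.bi_sector;
      unsigned chunk_sects = mddev->chunk_sectors;
      unsigned sectors = chunk_sects - (sector & (chunk_sects - 1));
      
      if (sectors < bio_sectors(raid_bio)) {
              /* the front piece fits in one chunk: split it off and let the
               * remainder be resubmitted through the normal path */
              struct bio *split = bio_split(raid_bio, sectors, GFP_NOIO,
                                            fs_bio_set);
      
              bio_chain(split, raid_bio);
              generic_make_request(raid_bio);
              raid_bio = split;
      }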
      
      Cc: Neil Brown <neilb@suse.de>
      Cc: linux-raid@vger.kernel.org
      Acked-by: NeilBrown <neilb@suse.de>
      Signed-off-by: Ming Lin <ming.l@ssi.samsung.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
  4. 12 Aug 2015: 1 commit
    • block: don't access bio->bi_error after bio_put() · 9b81c842
      Committed by Sasha Levin
      Commit 4246a0b6 ("block: add a bi_error field to struct bio") has added a few
      dereferences of 'bio' after a call to bio_put(). This causes use-after-frees
      such as:
      
      [521120.719695] BUG: KASan: use after free in dio_bio_complete+0x2b3/0x320 at addr ffff880f36b38714
      [521120.720638] Read of size 4 by task mount.ocfs2/9644
      [521120.721212] =============================================================================
      [521120.722056] BUG kmalloc-256 (Not tainted): kasan: bad access detected
      [521120.722968] -----------------------------------------------------------------------------
      [521120.722968]
      [521120.723915] Disabling lock debugging due to kernel taint
      [521120.724539] INFO: Slab 0xffffea003cdace00 objects=32 used=25 fp=0xffff880f36b38600 flags=0x46fffff80004080
      [521120.726037] INFO: Object 0xffff880f36b38700 @offset=1792 fp=0xffff880f36b38800
      [521120.726037]
      [521120.726974] Bytes b4 ffff880f36b386f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
      [521120.727898] Object ffff880f36b38700: 00 88 b3 36 0f 88 ff ff 00 00 d8 de 0b 88 ff ff  ...6............
      [521120.728822] Object ffff880f36b38710: 02 00 00 f0 00 00 00 00 00 00 00 00 00 00 00 00  ................
      [521120.729705] Object ffff880f36b38720: 01 00 00 00 00 00 00 00 00 00 00 00 01 00 00 00  ................
      [521120.730623] Object ffff880f36b38730: 00 00 00 00 00 00 00 00 01 00 00 00 00 02 00 00  ................
      [521120.731621] Object ffff880f36b38740: 00 02 00 00 01 00 00 00 d0 f7 87 ad ff ff ff ff  ................
      [521120.732776] Object ffff880f36b38750: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
      [521120.733640] Object ffff880f36b38760: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
      [521120.734508] Object ffff880f36b38770: 01 00 03 00 01 00 00 00 88 87 b3 36 0f 88 ff ff  ...........6....
      [521120.735385] Object ffff880f36b38780: 00 73 22 ad 02 88 ff ff 40 13 e0 3c 00 ea ff ff  .s".....@..<....
      [521120.736667] Object ffff880f36b38790: 00 02 00 00 00 04 00 00 00 00 00 00 00 00 00 00  ................
      [521120.737596] Object ffff880f36b387a0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
      [521120.738524] Object ffff880f36b387b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
      [521120.739388] Object ffff880f36b387c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
      [521120.740277] Object ffff880f36b387d0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
      [521120.741187] Object ffff880f36b387e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
      [521120.742233] Object ffff880f36b387f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
      [521120.743229] CPU: 41 PID: 9644 Comm: mount.ocfs2 Tainted: G    B           4.2.0-rc6-next-20150810-sasha-00039-gf909086 #2420
      [521120.744274]  ffff880f36b38000 ffff880d89c8f638 ffffffffb6e9ba8a ffff880101c0e5c0
      [521120.745025]  ffff880d89c8f668 ffffffffad76a313 ffff880101c0e5c0 ffffea003cdace00
      [521120.745908]  ffff880f36b38700 ffff880f36b38798 ffff880d89c8f690 ffffffffad772854
      [521120.747063] Call Trace:
      [521120.747520] dump_stack (lib/dump_stack.c:52)
      [521120.748053] print_trailer (mm/slub.c:653)
      [521120.748582] object_err (mm/slub.c:660)
      [521120.749079] kasan_report_error (include/linux/kasan.h:20 mm/kasan/report.c:152 mm/kasan/report.c:194)
      [521120.750834] __asan_report_load4_noabort (mm/kasan/report.c:250)
      [521120.753580] dio_bio_complete (fs/direct-io.c:478)
      [521120.755752] do_blockdev_direct_IO (fs/direct-io.c:494 fs/direct-io.c:1291)
      [521120.759765] __blockdev_direct_IO (fs/direct-io.c:1322)
      [521120.761658] blkdev_direct_IO (fs/block_dev.c:162)
      [521120.762993] generic_file_read_iter (mm/filemap.c:1738)
      [521120.767405] blkdev_read_iter (fs/block_dev.c:1649)
      [521120.768556] __vfs_read (fs/read_write.c:423 fs/read_write.c:434)
      [521120.772126] vfs_read (fs/read_write.c:454)
      [521120.773118] SyS_pread64 (fs/read_write.c:607 fs/read_write.c:594)
      [521120.776062] entry_SYSCALL_64_fastpath (arch/x86/entry/entry_64.S:186)
      [521120.777375] Memory state around the buggy address:
      [521120.778118]  ffff880f36b38600: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
      [521120.779211]  ffff880f36b38680: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
      [521120.780315] >ffff880f36b38700: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
      [521120.781465]                          ^
      [521120.782083]  ffff880f36b38780: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
      [521120.783717]  ffff880f36b38800: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
      [521120.784818] ==================================================================
      
      This patch fixes a few of those places that I caught while auditing the patch, but the
      original patch should be audited further for more occurrences of this issue, since I'm
      not too familiar with the code.
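      
      The fix pattern is simply to read bi_error (or anything else in the bio)
      before dropping the reference; schematically:
      
      /* buggy: the bio may be freed by bio_put(), then dereferenced */
      bio_put(bio);
      return bio->bi_error;
      
      /* fixed: capture the error while the reference is still held */
      err = bio->bi_error;
      bio_put(bio);
      return err;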
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
  5. 03 Aug 2015: 1 commit
    • md/raid5: don't let shrink_slab shrink too far. · 49895bcc
      Committed by NeilBrown
      I have a report of drop_one_stripe() called from
      raid5_cache_scan() apparently finding ->max_nr_stripes == 0.
      
      This should not be allowed.
      
      So add a test to keep max_nr_stripes above min_nr_stripes.
      
      Also use a 'mask' rather than a 'mod' in drop_one_stripe
      to ensure 'hash' is valid even if max_nr_stripes does reach zero.
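      
      A self-contained illustration of the 'mask' vs 'mod' point (plain C;
      NR_STRIPE_HASH_LOCKS is 8 in raid5, used here only as an example):
      
      #include <stdio.h>
      
      #define NR_STRIPE_HASH_LOCKS   8
      #define STRIPE_HASH_LOCKS_MASK (NR_STRIPE_HASH_LOCKS - 1)
      
      int main(void)
      {
              int max_nr_stripes = 0;  /* the case from the bug report */
      
              /* signed 'mod' goes negative: an out-of-bounds hash index */
              printf("mod : %d\n", (max_nr_stripes - 1) % NR_STRIPE_HASH_LOCKS);
              /* 'mask' stays in 0..7 even when max_nr_stripes is zero */
              printf("mask: %d\n", (max_nr_stripes - 1) & STRIPE_HASH_LOCKS_MASK);
              return 0;
      }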
      
      
      Fixes: edbe83ab ("md/raid5: allow the stripe_cache to grow and shrink.")
      Cc: stable@vger.kernel.org (4.1 - please release with 2d5b569b)
      Reported-by: Tomas Papan <tomas.papan@gmail.com>
      Signed-off-by: NeilBrown <neilb@suse.com>
  6. 29 Jul 2015: 2 commits
    • block: manipulate bio->bi_flags through helpers · b7c44ed9
      Committed by Jens Axboe
      Some places use helpers now, others don't. We only have the 'is set'
      helper, add helpers for setting and clearing flags too.
      
      It was a bit of a mess of atomic vs non-atomic access. With
      BIO_UPTODATE gone, we don't have any risk of concurrent access to the
      flags. So relax the restriction and don't make any of them atomic. The
      flags that do have serialization issues (reffed and chained), we
      already handle those separately.
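      
      The helpers end up roughly like this (plain non-atomic read-modify-write,
      per the reasoning above; shown schematically rather than copied from the
      patch):
      
      static inline bool bio_flagged(struct bio *bio, unsigned int bit)
      {
              return (bio->bi_flags & (1U << bit)) != 0;
      }
      
      static inline void bio_set_flag(struct bio *bio, unsigned int bit)
      {
              bio->bi_flags |= (1U << bit);
      }
      
      static inline void bio_clear_flag(struct bio *bio, unsigned int bit)
      {
              bio->bi_flags &= ~(1U << bit);
      }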
      Signed-off-by: Jens Axboe <axboe@fb.com>
    • block: add a bi_error field to struct bio · 4246a0b6
      Committed by Christoph Hellwig
      Currently we have two different ways to signal an I/O error on a BIO:
      
       (1) by clearing the BIO_UPTODATE flag
       (2) by returning a Linux errno value to the bi_end_io callback
      
      The first one has the drawback of only communicating a single possible
      error (-EIO), and the second one has the drawback of not being persistent
      when bios are queued up, and are not passed along from child to parent
      bio in the ever more popular chaining scenario.  Having both mechanisms
      available has the additional drawback of utterly confusing driver authors
      and introducing bugs where various I/O submitters only deal with one of
      them, and the others have to add boilerplate code to deal with both kinds
      of error returns.
      
      So add a new bi_error field to store an errno value directly in struct
      bio and remove the existing mechanisms to clean all this up.
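      
      In driver terms the change looks roughly like this (schematic; the
      end_io names and handle_error() are placeholders):
      
      /* before: error reported through the callback argument (or by the
       * caller clearing BIO_UPTODATE) */
      static void my_end_io_old(struct bio *bio, int error)
      {
              if (error)
                      handle_error(error);
              bio_put(bio);
      }
      
      /* after: a single authoritative field on the bio itself */
      static void my_end_io(struct bio *bio)
      {
              if (bio->bi_error)
                      handle_error(bio->bi_error);
              bio_put(bio);
      }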
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Hannes Reinecke <hare@suse.de>
      Reviewed-by: NeilBrown <neilb@suse.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
  7. 24 Jul 2015: 1 commit
  8. 22 Jul 2015: 1 commit
    • md/raid5: avoid races when changing cache size. · 2d5b569b
      Committed by NeilBrown
      Cache size can grow or shrink due to various pressures at
      any time.  So when we resize the cache as part of a 'grow'
      operation (i.e. change the size to allow more devices) we need
      to block that automatic growing/shrinking.
      
      So introduce a mutex.  Auto grow/shrink uses mutex_trylock()
      and just doesn't bother if there is a blockage.
      Resizing the whole cache holds the mutex to ensure that
      the correct number of new stripes is allocated.
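      
      Schematically (the mutex name follows the commit's description; treat
      the details as illustrative):
      
      /* explicit resize (raid5 'grow'): must see a stable stripe count */
      mutex_lock(&conf->cache_size_mutex);
      /* ... allocate or free stripes until the requested size is reached ... */
      mutex_unlock(&conf->cache_size_mutex);
      
      /* shrinker-driven auto grow/shrink: just skip if a resize is running */
      if (mutex_trylock(&conf->cache_size_mutex)) {
              /* ... grow_one_stripe() / drop_one_stripe() as appropriate ... */
              mutex_unlock(&conf->cache_size_mutex);
      }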
      
      This bug can result in some stripes not being freed when an
      array is stopped.  This leads to the kmem_cache not being
      freed and a subsequent array can try to use the same kmem_cache
      and get confused.
      
      Fixes: edbe83ab ("md/raid5: allow the stripe_cache to grow and shrink.")
      Cc: stable@vger.kernel.org (4.1 - please delay until 2 weeks after release of 4.2)
      Signed-off-by: NeilBrown <neilb@suse.com>
  9. 17 Jun 2015: 3 commits
    • md/raid5: ignore released_stripes check · 713bc5c2
      Committed by Shaohua Li
      The conf->released_stripes list isn't always related to whether there
      are free stripes pending.  Active stripes can be in the list too,
      and even free stripes were active very recently.
      Signed-off-by: Shaohua Li <shli@fb.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid5: per hash value and exclusive wait_for_stripe · e9e4c377
      Committed by Yuanhan Liu
      I noticed heavy spin lock contention at get_active_stripe() with fsmark
      multiple thread write workloads.
      
      Here is where this hot contention comes from. We have a limited number of
      stripes, and it's a multiple-thread write workload. Hence, those stripes
      are taken quickly, which puts later processes to sleep waiting for free
      stripes. When enough stripes (>= 1/4 of the total) are released, all
      processes are woken and try to get the lock. But only one of them can
      take each hash lock, leaving the rest spinning while they wait to
      acquire it.
      
      Thus, it's ineffective to wake up all processes and let them battle for
      a lock that only one of them can hold at a time. Instead, we can make
      it an exclusive wake-up: wake up one process only. That avoids the heavy
      spin lock contention naturally.
      
      To do the exclusive wake-up, we have to split wait_for_stripe into
      multiple wait queues, one per hash value, just like the hash lock.
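      
      A sketch of the exclusive, per-hash wait using generic wait-queue
      primitives; the helper no_free_stripe() is a placeholder for the real
      availability check, not a function from the patch:
      
      wait_queue_head_t wait_for_stripe[NR_STRIPE_HASH_LOCKS];
      
      /* waiter side, in get_active_stripe(): queue exclusively so that a
       * wake_up() on this hash wakes exactly one sleeper */
      DEFINE_WAIT(w);
      prepare_to_wait_exclusive(&wait_for_stripe[hash], &w, TASK_UNINTERRUPTIBLE);
      if (no_free_stripe(conf, hash))
              schedule();
      finish_wait(&wait_for_stripe[hash], &w);
      
      /* release side: a stripe was freed on this hash, wake one waiter */
      wake_up(&wait_for_stripe[hash]);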
      
      Here are some test results I got with this patch applied (all tests run
      3 times):
      
      `fsmark.files_per_sec'
      =====================
      
      next-20150317                 this patch
      -------------------------     -------------------------
      metric_value     ±stddev      metric_value     ±stddev     change      testbox/benchmark/testcase-params
      -------------------------     -------------------------   --------     ------------------------------
            25.600     ±0.0              92.700     ±2.5          262.1%     ivb44/fsmark/1x-64t-4BRD_12G-RAID5-btrfs-4M-30G-fsyncBeforeClose
            25.600     ±0.0              77.800     ±0.6          203.9%     ivb44/fsmark/1x-64t-9BRD_6G-RAID5-btrfs-4M-30G-fsyncBeforeClose
            32.000     ±0.0              93.800     ±1.7          193.1%     ivb44/fsmark/1x-64t-4BRD_12G-RAID5-ext4-4M-30G-fsyncBeforeClose
            32.000     ±0.0              81.233     ±1.7          153.9%     ivb44/fsmark/1x-64t-9BRD_6G-RAID5-ext4-4M-30G-fsyncBeforeClose
            48.800     ±14.5             99.667     ±2.0          104.2%     ivb44/fsmark/1x-64t-4BRD_12G-RAID5-xfs-4M-30G-fsyncBeforeClose
             6.400     ±0.0              12.800     ±0.0          100.0%     ivb44/fsmark/1x-64t-3HDD-RAID5-btrfs-4M-40G-fsyncBeforeClose
            63.133     ±8.2              82.800     ±0.7           31.2%     ivb44/fsmark/1x-64t-9BRD_6G-RAID5-xfs-4M-30G-fsyncBeforeClose
           245.067     ±0.7             306.567     ±7.9           25.1%     ivb44/fsmark/1x-64t-4BRD_12G-RAID5-f2fs-4M-30G-fsyncBeforeClose
            17.533     ±0.3              21.000     ±0.8           19.8%     ivb44/fsmark/1x-1t-3HDD-RAID5-xfs-4M-40G-fsyncBeforeClose
           188.167     ±1.9             215.033     ±3.1           14.3%     ivb44/fsmark/1x-1t-4BRD_12G-RAID5-btrfs-4M-30G-NoSync
           254.500     ±1.8             290.733     ±2.4           14.2%     ivb44/fsmark/1x-1t-9BRD_6G-RAID5-btrfs-4M-30G-NoSync
      
      `time.system_time'
      =====================
      
      next-20150317                 this patch
      -------------------------    -------------------------
      metric_value     ±stddev     metric_value     ±stddev     change       testbox/benchmark/testcase-params
      -------------------------    -------------------------    --------     ------------------------------
          7235.603     ±1.2             185.163     ±1.9          -97.4%     ivb44/fsmark/1x-64t-4BRD_12G-RAID5-btrfs-4M-30G-fsyncBeforeClose
          7666.883     ±2.9             202.750     ±1.0          -97.4%     ivb44/fsmark/1x-64t-9BRD_6G-RAID5-btrfs-4M-30G-fsyncBeforeClose
         14567.893     ±0.7             421.230     ±0.4          -97.1%     ivb44/fsmark/1x-64t-3HDD-RAID5-btrfs-4M-40G-fsyncBeforeClose
          3697.667     ±14.0            148.190     ±1.7          -96.0%     ivb44/fsmark/1x-64t-4BRD_12G-RAID5-xfs-4M-30G-fsyncBeforeClose
          5572.867     ±3.8             310.717     ±1.4          -94.4%     ivb44/fsmark/1x-64t-9BRD_6G-RAID5-ext4-4M-30G-fsyncBeforeClose
          5565.050     ±0.5             313.277     ±1.5          -94.4%     ivb44/fsmark/1x-64t-4BRD_12G-RAID5-ext4-4M-30G-fsyncBeforeClose
          2420.707     ±17.1            171.043     ±2.7          -92.9%     ivb44/fsmark/1x-64t-9BRD_6G-RAID5-xfs-4M-30G-fsyncBeforeClose
          3743.300     ±4.6             379.827     ±3.5          -89.9%     ivb44/fsmark/1x-64t-3HDD-RAID5-ext4-4M-40G-fsyncBeforeClose
          3308.687     ±6.3             363.050     ±2.0          -89.0%     ivb44/fsmark/1x-64t-3HDD-RAID5-xfs-4M-40G-fsyncBeforeClose
      
      Where,
      
           1x: 'x' means iterations (loops), corresponding to the 'L' option of fsmark
      
           1t, 64t: 't' means threads
      
           4M: the size of a single file, corresponding to the '-s' option of fsmark
           40G, 30G, 120G: the total test size
      
           4BRD_12G: BRD is a ramdisk; '4' means 4 ramdisks and '12G' is the size of
                     each one, so 48G in total. The RAID was built on those ramdisks.
      
      As you can see, though there is not much performance gain for the hard disk
      workload, the system time drops heavily, by up to 97%. And as expected,
      the performance increases a lot, up to 260%, for fast devices (ram disk).
      
      v2: use bits instead of an array to note down which wait queues need to be woken up.
      Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid5: split wait_for_stripe and introduce wait_for_quiescent · b1b46486
      Committed by Yuanhan Liu
      I noticed heavy spin lock contention at get_active_stripe(), introduced
      at the wake-up stage, where a bunch of processes try to re-take the
      spin lock.
      
      After giving this issue some thought, I found the contention could be
      relieved (and even avoided) if we turn wait_for_stripe into one wait
      queue per hash lock and make the wake-up exclusive: wake up one
      process each time, which avoids the lock contention naturally.
      
      Before hacking on wait_for_stripe, I found it actually has 2
      usages: for the array to enter or leave the quiescent state, and also
      to wait for an available stripe in each of the hash lists.
      
      So this patch splits the first usage off into a separate wait_queue,
      wait_for_quiescent, and the next patch will turn the second usage into
      one waitqueue for each hash value, and make it exclusive, to relieve
      the lock contention.
      
      v2: wake_up(wait_for_quiescent) when (active_stripes == 0)
          Commit log refactor suggestion from Neil.
      Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
  10. 12 Jun 2015: 1 commit
    • md: make sure MD_RECOVERY_DONE is clear before starting recovery/resync · ea358cd0
      Committed by NeilBrown
      MD_RECOVERY_DONE is normally cleared by md_check_recovery after a
      resync etc finished.  However it is possible for raid5_start_reshape
      to race and start a reshape before MD_RECOVERY_DONE is cleared.  This
      can lead to multiple reshapes running at the same time, which isn't
      good.
      
      So make sure it is cleared before starting a reshape, and also clear
      it when reaping a thread, just to be safe.
      Signed-off-by: NeilBrown <neilb@suse.de>
  11. 28 May 2015: 9 commits
    • md/raid5: break stripe-batches when the array has failed. · 626f2092
      Committed by NeilBrown
      Once the array has too many failures, we need to break
      stripe-batches up so they can all be dealt with.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid5: call break_stripe_batch_list from handle_stripe_clean_event · 787b76fa
      Committed by NeilBrown
      Now that the code in break_stripe_batch_list() is nearly identical
      to the end of handle_stripe_clean_event, replace the latter
      with a function call.
      
      The only remaining difference of any interest is the masking that is
      applied to dev[i].flags copied from head_sh.
      R5_WriteError certainly isn't wanted as it is set per-stripe, not
      per-batch.  R5_Overlap isn't wanted as it is explicitly handled.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid5: be more selective about distributing flags across batch. · 1b956f7a
      Committed by NeilBrown
      When a batch of stripes is broken up, we keep some of the flags
      that were per-stripe, and copy other flags from the head to all
      others.
      
      This only happens while a stripe is being handled, so many of the
      flags are irrelevant.
      
      The "SYNC_FLAGS" (which I've renamed to make it clear there are
      several) and STRIPE_DEGRADED are set per-stripe and so need to be
      preserved.  STRIPE_INSYNC is the only flag that is set on the head
      that needs to be propagated to all others.
      
      For safety, add a WARN_ON if others are set, except:
       STRIPE_HANDLE - this is safe and per-stripe, and we are going to set
            it in several cases anyway
       STRIPE_INSYNC
       STRIPE_IO_STARTED - this is just a hint and doesn't hurt.
       STRIPE_ON_PLUG_LIST
       STRIPE_ON_RELEASE_LIST - It is pointless for a batched
                 stripe to be on one of these lists, but it can happen
                 and can be safely ignored.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid5: add handle_flags arg to break_stripe_batch_list. · 3960ce79
      Committed by NeilBrown
      When we break a stripe_batch_list we sometimes want to set
      STRIPE_HANDLE on the individual stripes, and sometimes not.
      
      So pass a 'handle_flags' arg.  If it is zero, always set STRIPE_HANDLE
      (on non-head stripes).  If not zero, only set it if any of the given
      flags are present.
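      
      The argument's effect, reduced to its core (a sketch of the per-member
      test inside break_stripe_batch_list(), not the whole function):
      
      /* for each stripe 'sh' in the batch other than the head: */
      if (handle_flags == 0 ||
          (sh->state & handle_flags))
              set_bit(STRIPE_HANDLE, &sh->state);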
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid5: duplicate some more handle_stripe_clean_event code in break_stripe_batch_list · fb642b92
      Committed by NeilBrown
      break_stripe_batch_list() didn't clear head_sh->batch_head.
      This was probably a bug.
      
      Also clear all R5_Overlap flags and if any were cleared, wake up
      'wait_for_overlap'.
      This isn't always necessary but the worst effect is a little
      extra checking for code that is waiting on wait_for_overlap.
      
      Also, don't use wake_up_nr() because that does the wrong thing
      if 'nr' is zero, and the number of flags cleared doesn't
      strongly correlate with the number of threads to wake.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid5: remove condition test from check_break_stripe_batch_list. · 4e3d62ff
      Committed by NeilBrown
      handle_stripe_clean_event() contains a chunk of code very
      similar to check_break_stripe_batch_list().
      If we make the latter more like the former, we can end up
      with just one copy of this code.
      
      This first step removes the condition (and the 'check_' part
      of the name).  This has the added advantage of making it clear
      what check is being performed at the point where the function is
      called.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid5: Ensure a batch member is not handled prematurely. · b15a9dbd
      Committed by NeilBrown
      If a stripe is a member of a batch, but not the head, it must
      not be handled separately from the rest of the batch.
      
      'clear_batch_ready()' handles this requirement to some
      extent but not completely.  If a member is passed to handle_stripe()
      a second time it returns '0' indicating the stripe can be handled,
      which is wrong.
      So add an extra test.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid5: close race between STRIPE_BIT_DELAY and batching. · d0852df5
      Committed by NeilBrown
      When we add a write to a stripe we need to make sure the bitmap
      bit is set.  While doing that the stripe is not locked so it could
      be added to a batch after which further changes to STRIPE_BIT_DELAY
      and ->bm_seq are ineffective.
      
      So we need to hold off adding to a stripe until bitmap_startwrite has
      completed at least once, and we need to avoid further changes to
      STRIPE_BIT_DELAY once the stripe has been added to a batch.
      
      If a bitmap_startwrite() completes after the stripe was added to a
      batch, it will not have set the bit, only incremented a counter, so no
      extra delay of the stripe is needed.
      Reported-by: Shaohua Li <shli@kernel.org>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid5: ensure whole batch is delayed for all required bitmap updates. · 2b6b2457
      Committed by NeilBrown
      When we add a stripe to a batch, we need to be sure that
      the head stripe will wait for the bitmap update required for the new
      stripe.
      Signed-off-by: NeilBrown <neilb@suse.de>
  12. 21 May 2015: 1 commit
  13. 08 May 2015: 5 commits
    • md/raid5: fix handling of degraded stripes in batches. · bb27051f
      Committed by NeilBrown
      There is no need for special handling of stripe-batches when the array
      is degraded.
      
      There may be if there is a failure in the batch, but STRIPE_DEGRADED
      does not imply an error.
      
      So don't set STRIPE_BATCH_ERR in ops_run_io just because the array is
      degraded.
      This actually causes a bug: the STRIPE_DEGRADED flag gets cleared in
      check_break_stripe_batch_list() and so the bitmap bit gets cleared
      when it shouldn't.
      
      So in check_break_stripe_batch_list(), split the batch up completely -
      again STRIPE_DEGRADED isn't meaningful.
      
      Also don't set STRIPE_BATCH_ERR when there is a write error to a
      replacement device.  This simply removes the replacement device and
      requires no extra handling.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid5: fix allocation of 'scribble' array. · 738a2738
      Committed by NeilBrown
      As the new 'scribble' array is sized based on chunk size,
      we need to make sure the size matches the largest of 'old'
      and 'new' chunk sizes when the array is undergoing reshape.
      
      We also potentially need to resize it even when not resizing
      the stripe cache, as chunk size can change without changing
      number of devices.
      
      So move the 'resize' code into a separate function, and
      consider old and new sizes when allocating.
      Signed-off-by: NeilBrown <neilb@suse.de>
      Fixes: 46d5b785 ("raid5: use flex_array for scribble data")
    • md/raid5: don't record new size if resize_stripes fails. · 6e9eac2d
      Committed by NeilBrown
      If any memory allocation in resize_stripes fails we will return
      -ENOMEM, but in some cases we update conf->pool_size anyway.
      
      This means that if we try again, the allocations will be assumed
      to be larger than they are, and badness results.
      
      So only update pool_size if there is no error.
      
      This bug was introduced in 2.6.17 and the patch is suitable for
      -stable.
      
      Fixes: ad01c9e3 ("[PATCH] md: Allow stripes to be expanded in preparation for expanding an array")
      Cc: stable@vger.kernel.org (v2.6.17+)
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid5: avoid reading parity blocks for full-stripe write to degraded array · 10d82c5f
      Committed by NeilBrown
      When performing a reconstruct write, we need to read all blocks
      that are not being over-written .. except the parity (P and Q) blocks.
      
      The code currently reads these (as they are not being over-written!)
      unnecessarily.
      Signed-off-by: NeilBrown <neilb@suse.de>
      Fixes: ea664c82 ("md/raid5: need_this_block: tidy/fix last condition.")
    • md/raid5: more incorrect BUG_ON in handle_stripe_fill. · b0c783b3
      Committed by NeilBrown
      It is not incorrect to call handle_stripe_fill() when
      a batch of full-stripe writes is active.
      It is, however, a BUG if fetch_block() then decides
      it needs to actually fetch anything.
      
      So move the 'BUG_ON' to where it belongs.
      Signed-off-by: NeilBrown <neilb@suse.de>
      Fixes: 59fc630b ("RAID5: batch adjacent full stripe write")