1. 28 Aug 2017, 8 commits
  2. 12 Aug 2017, 1 commit
  3. 08 Aug 2017, 4 commits
    • md/r5cache: fix io_unit handling in r5l_log_endio() · a9501d74
      Song Liu committed
      In r5l_log_endio(), once log->io_list_lock is released, the io unit
      may be accessed (or even freed) by other threads. Current code
      doesn't handle the io_unit properly, which leads to potential race
      conditions.
      
      This patch solves this race condition by:
      
      1. Add a pending_stripe count for flush_payload. Multiple flush_payloads
         are counted as only one pending_stripe. Flag has_flush_payload is
         added to show whether the io unit has flush_payload;
      2. In r5l_log_endio(), check flags has_null_flush and
         has_flush_payload with log->io_list_lock held. After the lock
         is released, this IO unit is only accessed when we know the
         pending_stripe counter cannot be zeroed by other threads.
      Signed-off-by: Song Liu <songliubraving@fb.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
      a9501d74
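      A small userspace analogue of the locking pattern described above (io_unit,
      pending_stripe and put_io_unit here are illustrative stand-ins, not the
      raid5-cache code): the flags are sampled while the list lock is held, and
      after the lock is dropped the unit is only reached through the reference
      this path still owns, so no other thread can free it first.

        #include <pthread.h>
        #include <stdatomic.h>
        #include <stdbool.h>
        #include <stdlib.h>

        /* Illustrative stand-ins, not the kernel's structures. */
        struct io_unit {
                atomic_int pending_stripe;      /* freed when this drops to 0 */
                bool has_null_flush;
                bool has_flush_payload;
        };

        static pthread_mutex_t io_list_lock = PTHREAD_MUTEX_INITIALIZER;

        static void put_io_unit(struct io_unit *io)
        {
                if (atomic_fetch_sub(&io->pending_stripe, 1) == 1)
                        free(io);               /* last reference gone */
        }

        /* Endio path: sample the flags under the lock, then stop touching io
         * except through the reference this path still owns. */
        static void log_endio(struct io_unit *io)
        {
                bool null_flush, flush_payload;

                pthread_mutex_lock(&io_list_lock);
                null_flush = io->has_null_flush;
                flush_payload = io->has_flush_payload;
                pthread_mutex_unlock(&io_list_lock);

                /* From here on, work only with the local copies. */
                (void)null_flush;
                (void)flush_payload;

                put_io_unit(io);                /* may be the final free */
        }

        int main(void)
        {
                struct io_unit *io = calloc(1, sizeof(*io));

                atomic_init(&io->pending_stripe, 1);    /* the reference endio owns */
                io->has_flush_payload = true;
                log_endio(io);
                return 0;
        }
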
    • md/r5cache: call mddev_lock/unlock() in r5c_journal_mode_set · b44886c5
      Song Liu committed
      In r5c_journal_mode_set(), it is necessary to call mddev_lock()
      before accessing conf and conf->log. Otherwise, the conf->log
      may change (and become NULL).
      
      Shaohua: fix unlock in failure cases
      Signed-off-by: Song Liu <songliubraving@fb.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
      b44886c5
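      A minimal sketch of the locking rule on stub types (r5conf, r5l_log and
      journal_mode_set are simplified stand-ins, not the md code): the
      possibly-vanishing conf->log pointer is only dereferenced with the lock
      held, and the failure path unlocks before returning, matching the
      "fix unlock in failure cases" note.

        #include <errno.h>
        #include <pthread.h>
        #include <stddef.h>
        #include <stdio.h>

        /* Illustrative stand-ins for conf and its journal log. */
        struct r5l_log { int journal_mode; };
        struct r5conf  { struct r5l_log *log; };

        static pthread_mutex_t mddev_lock = PTHREAD_MUTEX_INITIALIZER;

        /* conf->log may be torn down (set to NULL) by another thread, so it is
         * re-checked and dereferenced only under the lock; every exit path
         * drops the lock before returning. */
        static int journal_mode_set(struct r5conf *conf, int mode)
        {
                int ret = 0;

                pthread_mutex_lock(&mddev_lock);
                if (!conf->log) {                       /* log went away under us */
                        ret = -ENODEV;
                        goto out_unlock;
                }
                conf->log->journal_mode = mode;
        out_unlock:
                pthread_mutex_unlock(&mddev_lock);
                return ret;
        }

        int main(void)
        {
                struct r5l_log log = { 0 };
                struct r5conf conf = { .log = &log };

                printf("set: %d\n", journal_mode_set(&conf, 2));
                conf.log = NULL;                        /* simulate log teardown */
                printf("set after teardown: %d\n", journal_mode_set(&conf, 1));
                return 0;
        }
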
    • md: fix test in md_write_start() · 81fe48e9
      NeilBrown committed
      md_write_start() needs to clear the in_sync flag if it is set, or if
      there might be a race with set_in_sync() such that the latter will
      set it very soon.  In the latter case it is sufficient to take the
      spinlock to synchronize with set_in_sync(), and then set the flag
      if needed.
      
      The current test is incorrect.
      It should be:
        if "flag is set" or "race is possible"
      
      "flag is set" is trivially "mddev->in_sync".
      "race is possible" should be tested by "mddev->sync_checkers".
      
      If sync_checkers is 0, then there can be no race.  set_in_sync() will
      wait in percpu_ref_switch_to_atomic_sync() for an RCU grace period,
      and as md_write_start() holds the rcu_read_lock(), set_in_sync() will
      be sure to see the update to writes_pending.
      
      If sync_checkers is > 0, there could be a race.  If md_write_start()
      happened entirely between
      		if (!mddev->in_sync &&
      		    percpu_ref_is_zero(&mddev->writes_pending)) {
      and
      			mddev->in_sync = 1;
      in set_in_sync(), then it would not see that in_sync had been set,
      and set_in_sync() would not see that writes_pending had been
      incremented.
      
      This bug means that in_sync is sometimes not set when it should be.
      Consequently there is a small chance that the array will be marked as
      "clean" when in fact it is inconsistent.
      
      Fixes: 4ad23a97 ("MD: use per-cpu counter for writes_pending")
      cc: stable@vger.kernel.org (v4.12+)
      Signed-off-by: NeilBrown <neilb@suse.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
      81fe48e9
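      The corrected test spelled out above can be reconstructed as a sketch on
      stub fields (mddev_stub, write_start_check and sb_change_pending are
      hypothetical names, and the real md_write_start() does more bookkeeping
      than this): enter the locked section when the flag is set or a race is
      possible, then act on in_sync under the lock.

        #include <pthread.h>
        #include <stdbool.h>

        /* Reduced mddev with just the fields the test needs. */
        struct mddev_stub {
                pthread_spinlock_t lock;
                bool in_sync;
                int  sync_checkers;     /* readers inside set_in_sync()'s window */
                bool sb_change_pending;
        };

        /* Take the spinlock when "flag is set" (in_sync) OR "race is possible"
         * (sync_checkers > 0); the lock synchronizes with set_in_sync() before
         * in_sync is cleared for the incoming write. */
        static void write_start_check(struct mddev_stub *mddev)
        {
                if (mddev->in_sync || mddev->sync_checkers) {
                        pthread_spin_lock(&mddev->lock);
                        if (mddev->in_sync) {
                                mddev->in_sync = false;
                                mddev->sb_change_pending = true;  /* superblock needs rewriting */
                        }
                        pthread_spin_unlock(&mddev->lock);
                }
        }

        int main(void)
        {
                struct mddev_stub m = { .in_sync = true };

                pthread_spin_init(&m.lock, PTHREAD_PROCESS_PRIVATE);
                write_start_check(&m);          /* clears in_sync, marks sb dirty */
                pthread_spin_destroy(&m.lock);
                return m.in_sync;               /* 0 when the flag was cleared */
        }
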
    • md: always clear ->safemode when md_check_recovery gets the mddev lock. · 33182d15
      NeilBrown committed
      If ->safemode == 1, md_check_recovery() will try to get the mddev lock
      and perform various other checks.
      If mddev->in_sync is zero, it will call set_in_sync, and clear
      ->safemode.  However if mddev->in_sync is not zero, ->safemode will not
      be cleared.
      
      When md_check_recovery() drops the mddev lock, the thread is woken
      up again.  Normally it would just check if there was anything else to
      do, find nothing, and go to sleep.  However as ->safemode was not
      cleared, it will take the mddev lock again, then wake itself up
      when unlocking.
      
      This results in an infinite loop, repeatedly calling
      md_check_recovery(), which RCU or the soft-lockup detector
      will eventually complain about.
      
      Prior to commit 4ad23a97 ("MD: use per-cpu counter for
      writes_pending"), safemode would only be set to one when the
      writes_pending counter reached zero, and would be cleared again
      when writes_pending is incremented.  Since that patch, safemode
      is set more freely, but is not reliably cleared.
      
      So in md_check_recovery() clear ->safemode before checking ->in_sync.
      
      Fixes: 4ad23a97 ("MD: use per-cpu counter for writes_pending")
      Cc: stable@vger.kernel.org (4.12+)
      Reported-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Reported-by: David R <david@unsolicited.net>
      Signed-off-by: NeilBrown <neilb@suse.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
      33182d15
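      A toy version of the reordering on stub fields (mddev_stub and
      check_recovery_step are made-up names, and the real branch carries more
      conditions): ->safemode is cleared as soon as the request is acted on,
      whether or not in_sync has to change, so a left-over safemode can no
      longer re-wake the md thread indefinitely.

        #include <stdbool.h>
        #include <stdio.h>

        /* Reduced mddev with just the fields involved in the loop. */
        struct mddev_stub {
                int  safemode;          /* 1: request to mark the array clean */
                bool in_sync;
                bool sb_flags_pending;
        };

        static void check_recovery_step(struct mddev_stub *mddev)
        {
                bool want_in_sync = mddev->safemode == 1 && !mddev->in_sync;

                /* Clear ->safemode before the in_sync check decides anything,
                 * so it is cleared even when in_sync is already set. */
                if (mddev->safemode == 1)
                        mddev->safemode = 0;
                if (want_in_sync) {
                        mddev->in_sync = true;
                        mddev->sb_flags_pending = true;
                }
        }

        int main(void)
        {
                /* Pre-fix, this combination left safemode stuck at 1. */
                struct mddev_stub m = { .safemode = 1, .in_sync = true };

                check_recovery_step(&m);
                printf("safemode=%d in_sync=%d\n", m.safemode, (int)m.in_sync);
                return 0;
        }
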
  4. 27 Jul 2017, 3 commits
  5. 26 Jul 2017, 6 commits
  6. 25 Jul 2017, 3 commits
  7. 24 Jul 2017, 1 commit
  8. 22 Jul 2017, 5 commits
  9. 20 Jul 2017, 2 commits
  10. 13 Jul 2017, 1 commit
  11. 11 Jul 2017, 2 commits
    • Raid5 should update rdev->sectors after reshape · b5d27718
      Xiao Ni committed
      A raid5 md array can be created from disks without using their total size. For example,
      each device is 5G and only 3G of each is used to create the raid5 array. Then change the
      chunk size and wait for the reshape to finish. After the reshape finishes, stop the array
      and assemble it again; the assembly fails.
      mdadm -CR /dev/md0 -l5 -n3 /dev/loop[0-2] --size=3G --chunk=32 --assume-clean
      mdadm /dev/md0 --grow --chunk=64
      wait for the reshape to finish
      mdadm -S /dev/md0
      mdadm -As
      The error messages:
      [197519.814302] md: loop1 does not have a valid v1.2 superblock, not importing!
      [197519.821686] md: md_import_device returned -22
      
      After the reshape the data offset is changed; the reshape selects the backwards direction
      in this condition, and the new data offset is bigger than before. In super_1_load() the
      available space of the underlying device is compared with sb->data_size; with the larger
      data offset the available space no longer covers it, so super_1_load() returns -EINVAL.
      rdev->sectors is updated in md_finish_reshape(), and sb->data_size is then set in
      super_1_sync() based on rdev->sectors. So add a call to md_finish_reshape() in end_reshape().
      Signed-off-by: Xiao Ni <xni@redhat.com>
      Acked-by: Guoqing Jiang <gqjiang@suse.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Shaohua Li <shli@fb.com>
      b5d27718
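      The failing comparison can be illustrated with a toy recomputation; the
      sector counts below are hypothetical, and only the shape of the check,
      available space after data_offset versus sb->data_size, is taken from the
      description above.

        #include <stdint.h>
        #include <stdio.h>

        /* Hypothetical numbers in 512-byte sectors, chosen to match the
         * 5G devices / --size=3G example; real offsets will differ. */
        #define DEV_SECTORS      (5ULL * 2097152)       /* 5G loop device */
        #define SB_DATA_SIZE     (3ULL * 2097152)       /* --size=3G per device */
        #define OLD_DATA_OFFSET  2048ULL
        #define NEW_DATA_OFFSET  4196352ULL             /* grew after the reshape */

        /* Shape of the check that makes assembly fail: the space left after
         * data_offset must still cover the data_size recorded in the superblock. */
        static int load_check(uint64_t data_offset, uint64_t data_size)
        {
                uint64_t avail = DEV_SECTORS - data_offset;

                return avail < data_size ? -22 /* -EINVAL */ : 0;
        }

        int main(void)
        {
                printf("before reshape: %d\n", load_check(OLD_DATA_OFFSET, SB_DATA_SIZE));
                printf("after  reshape: %d\n", load_check(NEW_DATA_OFFSET, SB_DATA_SIZE));
                /* With the fix, end_reshape() calls md_finish_reshape(), so
                 * rdev->sectors is updated and super_1_sync() records a
                 * data_size the check can satisfy again. */
                return 0;
        }
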
    • md/bitmap: don't read page from device with Bitmap_sync · 4aaf7694
      Guoqing Jiang committed
      A device that owns the Bitmap_sync flag needs recovery to become
      in sync, and reading a bitmap page from this kind of device could
      return stale status.
      
      Also add comments for Bitmap_sync bit per the
      suggestion from Shaohua and Neil.
      
      Previous discussion can be found here:
      https://marc.info/?t=149760428900004&r=1&w=2
      Signed-off-by: Guoqing Jiang <gqjiang@suse.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
      4aaf7694
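      A small sketch of the selection rule the patch describes, on stub types
      (rdev_stub, pick_bitmap_source and the flag constants are illustrative,
      not the kernel's): a device still marked Bitmap_sync is skipped like a
      faulty one when choosing where to read bitmap pages from, since its copy
      may be stale.

        #include <stdbool.h>
        #include <stddef.h>

        /* Illustrative flags and rdev list, not the kernel's definitions. */
        enum rdev_flag { FAULTY = 1 << 0, IN_SYNC = 1 << 1, BITMAP_SYNC = 1 << 2 };

        struct rdev_stub {
                unsigned int flags;
                struct rdev_stub *next;
        };

        /* Pick a device to read a bitmap page from: anything still needing
         * recovery (BITMAP_SYNC) may hold stale bitmap state, so it is
         * skipped just like a faulty device. */
        static struct rdev_stub *pick_bitmap_source(struct rdev_stub *rdevs)
        {
                for (struct rdev_stub *rdev = rdevs; rdev; rdev = rdev->next) {
                        if (rdev->flags & (FAULTY | BITMAP_SYNC))
                                continue;       /* not a trustworthy copy */
                        return rdev;
                }
                return NULL;                    /* no usable device found */
        }

        int main(void)
        {
                struct rdev_stub healthy = { .flags = IN_SYNC, .next = NULL };
                struct rdev_stub recovering = { .flags = BITMAP_SYNC, .next = &healthy };

                return pick_bitmap_source(&recovering) == &healthy ? 0 : 1;
        }
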
  12. 06 Jul 2017, 1 commit
  13. 04 Jul 2017, 2 commits
  14. 30 Jun 2017, 1 commit