1. 02 December 2017, 3 commits
    • md: limit mdstat resync progress to max_sectors · d2e2ec82
      Committed by Nate Dailey
      There is a small window near the end of md_do_sync where mddev->curr_resync
      can be equal to MaxSector.
      
      If status_resync is called during this window, the resulting /proc/mdstat
      output contains a HUGE number of = signs due to the very large curr_resync:
      
      Personalities : [raid1]
      md123 : active raid1 sdd3[2] sdb3[0]
        204736 blocks super 1.0 [2/1] [U_]
        [=====================================================================
         ... (82 MB more) ...
         ================>]  recovery =429496729.3% (9223372036854775807/204736)
         finish=0.2min speed=12796K/sec
        bitmap: 0/1 pages [0KB], 65536KB chunk
      
      Modify status_resync to ensure the resync variable doesn't exceed
      the array's max_sectors (a simplified sketch of this clamping follows
      this entry).
      Signed-off-by: Nate Dailey <nate.dailey@stratus.com>
      Acked-by: Guoqing Jiang <gqjiang@suse.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
      d2e2ec82
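      A minimal user-space sketch of the clamping described above, assuming
      simplified stand-in names (report_resync_progress is hypothetical, not
      the exact md.c status_resync code):

        #include <stdio.h>

        /* Clamp the resync position before computing the percentage, so a
         * transient curr_resync == MaxSector cannot blow up the output. */
        static void report_resync_progress(unsigned long long curr_resync,
                                           unsigned long long max_sectors)
        {
                unsigned long long resync = curr_resync;

                if (resync > max_sectors)
                        resync = max_sectors;

                printf("recovery = %u.%u%% (%llu/%llu)\n",
                       (unsigned int)(resync * 100 / max_sectors),
                       (unsigned int)(resync * 1000 / max_sectors % 10),
                       resync, max_sectors);
        }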
    • md/r5cache: move mddev_lock() out of r5c_journal_mode_set() · ff35f58e
      Committed by Song Liu
      r5c_journal_mode_set() is called by r5c_journal_mode_store() and by
      raid_ctr() in dm-raid. We don't need mddev_lock() when calling from
      raid_ctr(). This patch fixes this by moving the mddev_lock() call into
      r5c_journal_mode_store() (see the sketch after this entry).
      
      Cc: stable@vger.kernel.org (v4.13+)
      Signed-off-by: Song Liu <songliubraving@fb.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
      ff35f58e
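      A self-contained sketch of the locking split described above; struct
      mddev, the pthread-based lock, and the journal_mode field are stand-ins
      for illustration only, not the kernel's data structures:

        #include <pthread.h>

        struct mddev {
                pthread_mutex_t lock;
                int journal_mode;
        };

        static int mddev_lock(struct mddev *mddev)
        {
                return pthread_mutex_lock(&mddev->lock);
        }

        static void mddev_unlock(struct mddev *mddev)
        {
                pthread_mutex_unlock(&mddev->lock);
        }

        /* The helper no longer takes the lock itself, so a caller such as
         * dm-raid's raid_ctr() can use it without mddev_lock(). */
        static int r5c_journal_mode_set(struct mddev *mddev, int mode)
        {
                mddev->journal_mode = mode;
                return 0;
        }

        /* The sysfs store path wraps the helper in the lock instead. */
        static int r5c_journal_mode_store(struct mddev *mddev, int mode)
        {
                int ret = mddev_lock(mddev);

                if (ret)
                        return ret;
                ret = r5c_journal_mode_set(mddev, mode);
                mddev_unlock(mddev);
                return ret;
        }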
    • md/raid5: correct degraded calculation in raid5_error · aff69d89
      Committed by bingjingc
      When a disk failure occurs on a disk newly added for reshape,
      mddev->degraded is not calculated correctly, because the Faulty bit of
      the failed device is not yet set when raid5_calc_degraded(conf) runs.
      
      mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/loop[012]
      mdadm /dev/md0 -a /dev/loop3
      mdadm /dev/md0 --grow -n4
      mdadm /dev/md0 -f /dev/loop3 # simulating disk failure
      
      cat /sys/block/md0/md/degraded # it outputs 0, but it should be 1.
      
      However, mdadm -D /dev/md0 will show that the array is degraded. It's a
      bug. It can be fixed by updating the state raid5_calc_degraded() depends
      on (setting the device's Faulty bit) before calling it; a simplified
      sketch follows this entry.
      Reported-by: Roy Chung <roychung@synology.com>
      Reviewed-by: Alex Wu <alexwu@synology.com>
      Signed-off-by: BingJing Chang <bingjingc@synology.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
      aff69d89
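      A simplified sketch of the ordering fix described above; the structures
      and function names (member_disk, array_conf, calc_degraded,
      handle_disk_error) are illustrative stand-ins, not the raid5.c code:

        #include <stdbool.h>

        struct member_disk {
                bool in_sync;
                bool faulty;
        };

        struct array_conf {
                int raid_disks;
                struct member_disk disks[8];
        };

        /* A device counts toward "degraded" once it is faulty or not yet
         * in sync. */
        static int calc_degraded(struct array_conf *conf)
        {
                int degraded = 0;

                for (int i = 0; i < conf->raid_disks; i++)
                        if (conf->disks[i].faulty || !conf->disks[i].in_sync)
                                degraded++;
                return degraded;
        }

        /* Error path: set the Faulty bit *before* recomputing the degraded
         * count, otherwise the just-failed disk is not counted. */
        static int handle_disk_error(struct array_conf *conf, int idx)
        {
                conf->disks[idx].faulty = true;    /* must happen first */
                return calc_degraded(conf);
        }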
  2. 30 November 2017, 21 commits
  3. 29 November 2017, 16 commits