1. 25 Aug, 2016 (4 commits)
  2. 18 Aug, 2016 (2 commits)
  3. 17 Aug, 2016 (1 commit)
  4. 06 Aug, 2016 (2 commits)
    • md: Prevent IO hold during accessing to faulty raid5 array · 11367799
      Committed by Alexey Obitotskiy
      After the array enters a faulty state (e.g. the number of failed
      drives exceeds what the raid5 level can tolerate), error flags are
      set (one of these flags is MD_CHANGE_PENDING). For arrays with
      internal metadata, MD_CHANGE_PENDING is cleared in md_update_sb(),
      but not for arrays with external metadata. While MD_CHANGE_PENDING
      is set, all new or unfinished IOs to the array cannot complete and
      are held in the pending state. In some cases this can lead to a
      deadlock.
      
      For example, with a faulty array (2 of 4 drives failed), udev
      handles the array state change and starts blkid (or some other
      userspace application that reads from or writes to the array),
      which cannot finish its reads because the IO is held. At the same
      time we cannot get exclusive access to the array (to stop it, in
      our case) because that external application is still using it.
      
      The fix makes it possible to return IO with errors immediately, so
      the external application can finish working with the array and let
      other applications obtain exclusive access to perform the required
      management actions (see the sketch below the sign-offs).
      Signed-off-by: Alexey Obitotskiy <aleksey.obitotskiy@intel.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
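      A hedged illustration of the idea (not the actual upstream hunk; the
      helper name, its exact condition, and its call site are assumptions):
      when the array has more failed drives than raid5 can tolerate, the
      metadata is externally managed, and MD_CHANGE_PENDING is still set,
      complete the bio with an error instead of holding it.

          #include <linux/bio.h>
          #include "md.h"        /* struct mddev, MD_CHANGE_PENDING (pre-4.12 name) */
          #include "raid5.h"     /* struct r5conf */

          /* Hypothetical helper, ~4.7-era API (bio->bi_error, mddev->flags). */
          static bool raid5_should_fail_bio(struct r5conf *conf, struct bio *bi)
          {
                  struct mddev *mddev = conf->mddev;

                  if (mddev->degraded > conf->max_degraded &&   /* array has already failed */
                      mddev->external &&                        /* flag not cleared by md_update_sb */
                      test_bit(MD_CHANGE_PENDING, &mddev->flags)) {
                          bi->bi_error = -EIO;                  /* fail the request right away */
                          bio_endio(bi);
                          return true;
                  }
                  return false;
          }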
    • MD: hold mddev lock to change bitmap location · d9dd26b2
      Committed by Shaohua Li
      Changing the bitmap location changes a lot of things, so hold the
      mddev lock to avoid races. As a result, .quiesce is now called with
      the mddev lock held as well (see the sketch below the sign-offs).
      Acked-by: NeilBrown <neilb@suse.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
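      A minimal sketch of the locking described above (the parsing and
      quiesce details of the real location_store() in drivers/md/bitmap.c
      are elided, and its error handling may differ): the sysfs store that
      moves the bitmap holds the mddev reconfiguration lock for its whole
      duration.

          #include "md.h"        /* struct mddev, mddev_lock()/mddev_unlock() */

          static ssize_t location_store(struct mddev *mddev,
                                        const char *buf, size_t len)
          {
                  int rv;

                  rv = mddev_lock(mddev);         /* takes mddev->reconfig_mutex */
                  if (rv)
                          return rv;
                  /* ... parse buf, quiesce the array, move or remove the bitmap ... */
                  mddev_unlock(mddev);
                  return len;
          }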
  5. 02 Aug, 2016 (1 commit)
    • raid5: fix incorrectly counter of conf->empty_inactive_list_nr · ff00d3b4
      Committed by ZhengYuan Liu
      The counter conf->empty_inactive_list_nr is only used to determine
      whether the raid5 array is congested, which is handled in
      raid5_congested(). It is incremented in get_free_stripe() when
      conf->inactive_list becomes empty and decremented in
      release_inactive_stripe_list() when temp_inactive_list is spliced
      back onto conf->inactive_list. However, this goes wrong when
      raid5_get_active_stripe() or stripe_add_to_batch_list() is called,
      because these two functions may call list_del_init(&sh->lru) to
      remove sh from "conf->inactive_list + hash", which can leave
      "conf->inactive_list + hash" empty when atomic_inc_not_zero(&sh->count)
      returns false. So a check should be added at these two points to
      increment empty_inactive_list_nr accordingly (see the sketch below
      the sign-offs); otherwise the counter can go negative, which would
      affect async readahead from the VFS.
      Signed-off-by: ZhengYuan Liu <liuzhengyuan@kylinos.cn>
      Signed-off-by: Shaohua Li <shli@fb.com>
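      A hedged sketch of the bookkeeping being described (the helper is
      hypothetical; the actual change is inlined in raid5_get_active_stripe()
      and stripe_add_to_batch_list()): after list_del_init() takes the stripe
      off its per-hash inactive list, bump the counter if that list has just
      become empty.

          #include "raid5.h"     /* struct r5conf, struct stripe_head */

          /* Hypothetical helper illustrating the accounting, not the upstream hunk. */
          static void detach_from_inactive_list(struct r5conf *conf,
                                                struct stripe_head *sh, int hash)
          {
                  if (!list_empty(&sh->lru)) {
                          list_del_init(&sh->lru);   /* may empty conf->inactive_list + hash */
                          if (list_empty(conf->inactive_list + hash))
                                  atomic_inc(&conf->empty_inactive_list_nr);
                  }
          }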
  6. 31 Jul, 2016 (1 commit)
  7. 29 Jul, 2016 (1 commit)
  8. 21 Jul, 2016 (10 commits)
  9. 20 Jul, 2016 (4 commits)
  10. 19 Jul, 2016 (14 commits)