1. 19 March 2018 (2 commits)
    • bcache: quit dc->writeback_thread when BCACHE_DEV_DETACHING is set · fadd94e0
      Committed by Coly Li
      In the patch "bcache: fix cached_dev->count usage for bch_cache_set_error()",
      cached_dev_get() is called when creating dc->writeback_thread, and
      cached_dev_put() is called when exiting dc->writeback_thread. This
      modification works well unless people detach the bcache device manually by
          'echo 1 > /sys/block/bcache<N>/bcache/detach'
      because this sysfs interface only calls bch_cached_dev_detach(), which
      wakes up dc->writeback_thread but does not stop it. The reason is that
      before the patch "bcache: fix cached_dev->count usage for
      bch_cache_set_error()", inside bch_writeback_thread() cached_dev_put() was
      called when the cache became clean after writeback, and in
      cached_dev_make_request() cached_dev_get() was called when a new write
      request turned the cache from clean to dirty. Since dc->count is no longer
      touched at these locations, the refcount dc->count cannot be dropped after
      the cache becomes clean, and cached_dev_detach_finish() won't be called to
      detach the bcache device.
      
      This patch fixes the issue by checking whether BCACHE_DEV_DETACHING is
      set inside bch_writeback_thread(). If this bit is set and the cache is
      clean (no existing writeback_keys), break the while-loop, call
      cached_dev_put() and quit the writeback thread.
      
      Please note that if the cache is still dirty, the writeback thread should
      continue to perform writeback even when BCACHE_DEV_DETACHING is set; this
      is the original design of manual detach.
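
      As a rough illustration, the described control flow looks like the sketch
      below (paraphrased from this commit message, not the verbatim patch;
      refill_dirty() and dc->writeback_keys are existing bcache names, but the
      surrounding code is heavily simplified):

      	static int bch_writeback_thread(void *arg)
      	{
      		struct cached_dev *dc = arg;
      		bool searched_full_index;

      		while (!kthread_should_stop()) {
      			/* sleep/wake handling (discussed below) omitted */

      			searched_full_index = refill_dirty(dc);
      			if (searched_full_index &&
      			    RB_EMPTY_ROOT(&dc->writeback_keys.keys)) {
      				atomic_set(&dc->has_dirty, 0);
      				/* cache is clean: honor a pending manual detach */
      				if (test_bit(BCACHE_DEV_DETACHING,
      					     &dc->disk.flags))
      					break;
      			}
      			/* ... write back dirty keys ... */
      		}

      		cached_dev_put(dc);	/* ref taken at thread creation */
      		return 0;
      	}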
      
      It is safe to do the following check without locking; let me explain why:
      +	if (!test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags) &&
      +	    (!atomic_read(&dc->has_dirty) || !dc->writeback_running)) {
      
      If the kernel thread does not sleep and continues to run because the
      conditions are not yet updated on the running CPU core, it just consumes
      a few more CPU cycles and does no harm. This should-sleep-but-run case is
      safe here. We only need to focus on the should-run-but-sleep case, where
      the writeback thread goes to sleep by mistake while it should continue to
      run.
      1. First of all, no matter whether the writeback thread is sleeping or
         not, kthread_stop() from cached_dev_detach_finish() will wake it up
         and terminate it by making kthread_should_stop() return true. And in
         normal run time, the BCACHE_DEV_DETACHING bit is always cleared, so
         the condition
      	!test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags)
         is always true and can be treated as a constant.
      2. If either of the following conditions is true, the writeback thread
         should go to sleep:
         "!atomic_read(&dc->has_dirty)" or "!dc->writeback_running"
         Each of them independently controls whether the writeback thread
         should sleep; let's analyse them one by one.
      2.1 Condition "!atomic_read(&dc->has_dirty)"
         If dc->has_dirty is set from 0 to 1 on another CPU core, bcache will
         call bch_writeback_queue() immediately, or call bch_writeback_add()
         which indirectly calls bch_writeback_queue() too. In
         bch_writeback_queue(), wake_up_process(dc->writeback_thread) is
         called. It sets the writeback thread's task state to TASK_RUNNING
         with an implied memory barrier, and then tries to wake up the
         writeback thread.
         In the writeback thread, the task state is set to TASK_INTERRUPTIBLE
         before doing the condition check. If the other CPU core sets the
         TASK_RUNNING state after the writeback thread has set
         TASK_INTERRUPTIBLE, the writeback thread will be scheduled to run very
         soon because its state is no longer TASK_INTERRUPTIBLE. If the other
         CPU core sets the TASK_RUNNING state before the writeback thread sets
         TASK_INTERRUPTIBLE, the implicit memory barrier of wake_up_process()
         makes sure the modification of dc->has_dirty on the other CPU core is
         observed on the CPU core of the writeback thread. Therefore the
         condition check will correctly evaluate to false, and the thread
         continues the writeback code without sleeping.
      2.2 Condition "!dc->writeback_running"
         dc->writeback_running can be changed via a sysfs file; every time it
         is modified, a bch_writeback_queue() call always follows, so the
         change is always observed on the CPU core of the writeback thread. If
         dc->writeback_running is changed from 0 to 1 on another CPU core,
         this condition check will observe the modification and allow the
         writeback thread to continue to run without sleeping.
      Now we can see that even without locking protection, this
      multiple-condition check is safe here; no deadlock or process hang will
      happen. A minimal sketch of the pattern follows.
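
      The sketch below condenses the analysis above (simplified, not the
      verbatim patch): the sleeper publishes its intent to sleep before
      checking the conditions, and the waker relies on the full memory barrier
      implied by wake_up_process().

      	/* sleeper side, inside the while-loop of bch_writeback_thread() */
      	set_current_state(TASK_INTERRUPTIBLE);	/* publish intent to sleep */
      	if (!test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags) &&
      	    (!atomic_read(&dc->has_dirty) || !dc->writeback_running)) {
      		schedule();		/* really sleep; re-check after wakeup */
      		continue;
      	}
      	set_current_state(TASK_RUNNING);	/* conditions say: keep running */

      	/* waker side, e.g. bch_writeback_queue() on another CPU core */
      	wake_up_process(dc->writeback_thread);	/* implies a full barrier, so
      						 * prior stores to dc->has_dirty
      						 * or dc->writeback_running are
      						 * visible to the re-check */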
      
      I composed a separate patch because the patch "bcache: fix
      cached_dev->count usage for bch_cache_set_error()" already has a
      "Reviewed-by:" from Hannes Reinecke. Also, this fix is not trivial and
      deserves a separate patch.
      Signed-off-by: Coly Li <colyli@suse.de>
      Reviewed-by: Michael Lyle <mlyle@lyle.org>
      Cc: Hannes Reinecke <hare@suse.com>
      Cc: Junhui Tang <tang.junhui@zte.com.cn>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • bcache: fix cached_dev->count usage for bch_cache_set_error() · 804f3c69
      Committed by Coly Li
      When bcache metadata I/O fails, bcache will call bch_cache_set_error()
      to retire the whole cache set. The expected behavior when retiring a
      cache set is to unregister the cache set, unregister all backing devices
      attached to it, then remove the sysfs entries of the cache set and all
      attached backing devices, and finally release the memory of the structs
      cache_set, cache, cached_dev and bcache_device.
      
      In my testing, when a journal I/O failure is triggered by a disconnected
      cache device, sometimes the cache set cannot be retired: its sysfs
      entry /sys/fs/bcache/<uuid> still exists and the backing device still
      references it. This is not the expected behavior.
      
      When metadata I/O fails, the call sequence to retire the whole cache set
      is:
              bch_cache_set_error()
              bch_cache_set_unregister()
              bch_cache_set_stop()
              __cache_set_unregister()     <- called as a callback by
                                              closure_queue(&c->caching)
              cache_set_flush()            <- called as a callback when the
                                              refcount of cache_set->caching is 0
              cache_set_free()             <- called as a callback when the
                                              refcount of cache_set->cl is 0
              bch_cache_set_release()      <- called as a callback when the
                                              refcount of cache_set->kobj is 0
      
      I find that if the kernel thread bch_writeback_thread() quits its
      while-loop when kthread_should_stop() is true and searched_full_index is
      false, the closure callback cache_set_flush() set by continue_at() will
      never be called. The result is that bcache fails to retire the whole
      cache set.
      
      cache_set_flush() will be called when the refcount of closure c->caching
      is 0, and in bcache_device_detach() the refcount of closure c->caching
      is dropped to 0 by closure_put(). In the metadata error code path,
      bcache_device_detach() is called by cached_dev_detach_finish(), a
      callback routine invoked when cached_dev->count reaches 0. This refcount
      is decreased by cached_dev_put().
      
      The above dependency indicates that cache_set_flush() will be called
      when the refcount of cache_set->cl is 0, and the refcount of
      cache_set->cl only drops to 0 when cached_dev->count is 0.
      
      The reason why cached_dev->count is sometimes not 0 (when metadata I/O
      fails and bch_cache_set_error() is called) is that in
      bch_writeback_thread() the refcount of cached_dev is not decreased
      properly.
      
      In bch_writeback_thread(), cached_dev_put() is called only when
      searched_full_index is true and cached_dev->writeback_keys is empty,
      i.e. there is no dirty data on the cache. Most of the time this is
      correct, but when bch_writeback_thread() quits the while-loop while the
      cache is still dirty, the current code forgets to call cached_dev_put()
      before the kernel thread exits. This is why sometimes cache_set_flush()
      is not executed and the cache set fails to be retired.
      
      The reason to call cached_dev_put() in bch_writeback_rate() is that when
      the cache device changes from clean to dirty, cached_dev_get() is
      called, to make sure that during writeback operations neither the
      backing nor the cache device will be released.
      
      Adding the following code in bch_writeback_thread() does not work:
         static int bch_writeback_thread(void *arg)
              }

      +       if (atomic_read(&dc->has_dirty))
      +               cached_dev_put(dc);
      +
              return 0;
       }
      because the writeback kernel thread can be woken up and restarted via
      the sysfs entry:
              echo 1 > /sys/block/bcache<N>/bcache/writeback_running
      It is difficult to check whether the backing device is dirty without
      races and extra locking, so the above modification would introduce a
      potential refcount underflow in some conditions.
      
      The correct fix is to take the cached dev refcount when creating the
      kernel thread, and to put it before the kernel thread exits. Then bcache
      does not need to take a cached dev refcount when the cache turns from
      clean to dirty, or to put one when the cache turns from dirty to clean.
      The writeback kernel thread is always safe to reference data structures
      from the cache set, cache and cached device (because a refcount of the
      cached device is already taken for it), and no matter whether the kernel
      thread is stopped by I/O errors or a system reboot, cached_dev->count
      can always be used correctly. A sketch of this lifetime is shown below.
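
      A minimal sketch of the described refcount lifetime (paraphrased from
      this commit message, not the verbatim patch; the creation-side code is
      assumed to live in bch_cached_dev_writeback_start()):

      	/* at thread creation time */
      	cached_dev_get(dc);		/* pin dc for the thread's lifetime */
      	dc->writeback_thread = kthread_create(bch_writeback_thread, dc,
      					      "bcache_writeback");
      	if (IS_ERR(dc->writeback_thread)) {
      		cached_dev_put(dc);	/* creation failed, drop the ref */
      		return PTR_ERR(dc->writeback_thread);
      	}

      	/* at the very end of bch_writeback_thread() */
      	cached_dev_put(dc);		/* the single, unconditional put */
      	return 0;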
      
      The patch is simple, but understanding how it works is quite complicated.
      
      Changelog:
      v2: set dc->writeback_thread to NULL in this patch, as suggested by Hannes.
      v1: initial version for review.
      Signed-off-by: Coly Li <colyli@suse.de>
      Reviewed-by: Hannes Reinecke <hare@suse.com>
      Reviewed-by: Michael Lyle <mlyle@lyle.org>
      Cc: Michael Lyle <mlyle@lyle.org>
      Cc: Junhui Tang <tang.junhui@zte.com.cn>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  2. 18 March 2018 (3 commits)
  3. 17 March 2018 (2 commits)
    • blk-throttle: fix race between blkcg_bio_issue_check() and cgroup_rmdir() · 4c699480
      Committed by Joseph Qi
      We've triggered a WARNING in blk_throtl_bio() when throttling writeback
      I/O, which complains that blkg->refcnt is already 0 when calling
      blkg_get(), and then the kernel crashes with an invalid page request.
      After investigating this issue, we found it is caused by a race between
      blkcg_bio_issue_check() and cgroup_rmdir(), described below:
      
      writeback kworker               cgroup_rmdir
                                        cgroup_destroy_locked
                                          kill_css
                                            css_killed_ref_fn
                                              css_killed_work_fn
                                                offline_css
                                                  blkcg_css_offline
        blkcg_bio_issue_check
          rcu_read_lock
          blkg_lookup
                                                    spin_trylock(q->queue_lock)
                                                    blkg_destroy
                                                    spin_unlock(q->queue_lock)
          blk_throtl_bio
          spin_lock_irq(q->queue_lock)
          ...
          spin_unlock_irq(q->queue_lock)
        rcu_read_unlock
      
      Since RCU can only prevent the blkg from being freed while it is being
      used, blkg->refcnt can be decreased to 0 during blkg_destroy(), which
      schedules the blkg release. Then the blkg_get() in blk_throtl_bio()
      triggers the WARNING, and the corresponding blkg_put() schedules the
      blkg release again, which results in a double free.
      This race was introduced by commit ae118896 ("blkcg: consolidate blkg
      creation in blkcg_bio_issue_check()"). Before that commit, the code
      would look up first and then look up/create again with queue_lock held.
      Since reviving that logic would be a bit drastic, fix it by only
      offlining the pd during blkcg_css_offline(), and move the rest of the
      destruction (especially blkg_put()) into blkcg_css_free(), which should
      be the right way as discussed. A rough sketch of this split follows.
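
      The sketch below shows the shape of the described split (heavily
      simplified; blkg_pd_offline() and blkcg_destroy_all_blkgs() are
      paraphrased helper names, not verbatim kernel code):

      	static void blkcg_css_offline(struct cgroup_subsys_state *css)
      	{
      		struct blkcg *blkcg = css_to_blkcg(css);
      		struct blkcg_gq *blkg;

      		/* only take policy data offline; the blkgs stay alive, so a
      		 * concurrent blkcg_bio_issue_check() may still blkg_get() them */
      		spin_lock_irq(&blkcg->lock);
      		hlist_for_each_entry(blkg, &blkcg->blkg_list, blkcg_node)
      			blkg_pd_offline(blkg);	/* no blkg_put() here */
      		spin_unlock_irq(&blkcg->lock);
      	}

      	static void blkcg_css_free(struct cgroup_subsys_state *css)
      	{
      		struct blkcg *blkcg = css_to_blkcg(css);

      		/* all css references are gone, so nothing can race with the
      		 * real destruction and the final blkg_put() anymore */
      		blkcg_destroy_all_blkgs(blkcg);
      		kfree(blkcg);
      	}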
      
      Fixes: ae118896 ("blkcg: consolidate blkg creation in blkcg_bio_issue_check()")
      Reported-by: Jiufei Xue <jiufei.xue@linux.alibaba.com>
      Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • block: sed-opal: fix u64 short atom length · 5f990d31
      Committed by Jonas Rabenstein
      The length must be given in bytes, not as 4-bit tuples. A sketch of the
      corrected computation follows.
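
      A minimal sketch of the corrected computation (based on the one-line
      summary above, not the verbatim diff): the short atom header carries the
      payload length in bytes, so the count of significant bits must be
      rounded up to bytes rather than to 4-bit tuples.

      	static size_t short_atom_payload_bytes(u64 number)
      	{
      		int msb = fls64(number);	/* highest set bit, 1-based */

      		/* round up to bytes; rounding to 4-bit tuples
      		 * (DIV_ROUND_UP(msb, 4)) doubles the claimed length */
      		return DIV_ROUND_UP(msb, 8);
      	}
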
      Reviewed-by: Scott Bauer <scott.bauer@intel.com>
      Signed-off-by: Jonas Rabenstein <jonas.rabenstein@studium.uni-erlangen.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  4. 14 March 2018 (3 commits)
  5. 13 March 2018 (2 commits)
  6. 10 March 2018 (1 commit)
  7. 09 March 2018 (15 commits)
  8. 07 March 2018 (2 commits)
  9. 01 March 2018 (10 commits)