1. 15 Jan 2018, 1 commit
  2. 11 Jan 2018, 6 commits
  3. 10 Jan 2018, 2 commits
  4. 09 Jan 2018, 13 commits
    • bcache: fix writeback target calc on large devices · 616486ab
      Michael Lyle committed
      Bcache needs to scale the dirty data in the cache over the multiple
      backing disks in order to calculate writeback rates for each.
      The previous code did this by multiplying the target number of dirty
      sectors by the backing device size, and expected it to fit into a
      uint64_t; this blows up on relatively small backing devices.
      
      The new approach figures out the bdev's share in 16384ths of the overall
      cached data.  This is chosen to cope well when bdevs drastically vary in
      size and to ensure that bcache can cross the petabyte boundary for each
      backing device.
      
      This has been improved based on Tang Junhui's feedback to ensure that
      every device gets a share of dirty data, no matter how small it is
      compared to the total backing pool.
      
      The existing mechanism is very limited; this is purely a bug fix to
      remove limits on volume size.  However, a further change is still needed
      to make this "fair" across many volumes where some are idle.
      Reported-by: Jack Douglas <jack@douglastechnology.co.uk>
      Signed-off-by: Michael Lyle <mlyle@lyle.org>
      Reviewed-by: Tang Junhui <tang.junhui@zte.com.cn>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
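      A minimal user-space sketch of the arithmetic described above; WRITEBACK_SHARE_SHIFT
      and the helper name are illustrative assumptions, not the exact bcache symbols:

      /* Sketch only: scale the global dirty-data target by this bdev's
       * share of the total backing pool, expressed in 16384ths. */
      #include <stdint.h>

      #define WRITEBACK_SHARE_SHIFT 14                /* 2^14 == 16384 */

      static uint64_t bdev_dirty_target(uint64_t bdev_sectors,
                                        uint64_t total_backing_sectors,
                                        uint64_t cache_dirty_target)
      {
              /* this device's share of the pool, in 0..16384 */
              uint64_t share = (bdev_sectors << WRITEBACK_SHARE_SHIFT) /
                               total_backing_sectors;

              /* every device gets at least a minimal share, however small */
              if (!share)
                      share = 1;

              /* multiplying by a value <= 16384 and shifting back keeps the
               * intermediate product well inside 64 bits even for petabyte
               * scale devices, unlike multiplying by the raw device size */
              return (cache_dirty_target * share) >> WRITEBACK_SHARE_SHIFT;
      }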
    • bcache: fix misleading error message in bch_count_io_errors() · 5138ac67
      Coly Li committed
      Bcache only does recoverable I/O for read operations by calling
      cached_dev_read_error(). For write operations there is no I/O recovery for
      failed requests.
      
      But in bch_count_io_errors(), for both read and write I/Os, pr_err() always
      prints "IO error on %, recovering" before the error counter reaches the
      I/O error limit. For write requests this message is misleading, because
      there is no I/O recovery at all.
      
      This patch adds an 'is_read' parameter to bch_count_io_errors(), and only
      prints "recovering" via pr_err() when the bio direction is READ.
      Signed-off-by: Coly Li <colyli@suse.de>
      Reviewed-by: Michael Lyle <mlyle@lyle.org>
      Reviewed-by: Tang Junhui <tang.junhui@zte.com.cn>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
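      A hedged sketch of the message change only; the error-counter bookkeeping
      of the real bch_count_io_errors() is omitted and the parameter set shown
      is an assumption:

      #include <linux/printk.h>

      /* Sketch: mention recovery only when the failed bio was a read,
       * since write requests get no I/O recovery at all. */
      static void report_io_error(const char *devname, int is_read)
      {
              pr_err("%s: IO error on %s%s", devname,
                     is_read ? "read" : "write",
                     is_read ? ", recovering." : ".");
      }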
    • bcache: reduce cache_set devices iteration by devices_max_used · 2831231d
      Coly Li committed
      The devices member of struct cache_set is used to reference all bcache
      devices attached to the cache set. It is an array of pointers whose size
      is indicated by the nr_uuids member of struct cache_set.
      
      nr_uuids is calculated in drivers/md/bcache/super.c:bch_cache_set_alloc() as
      	bucket_bytes(c) / sizeof(struct uuid_entry)
      The bucket size is determined by the user-space tool "make-bcache"; by default
      it is 1024 sectors (defined in bcache-tools/make-bcache.c:main()), so the
      default nr_uuids value from the above calculation is 4096.
      
      Every time the bcache code iterates over the bcache devices of a cache set,
      all 4096 pointers are checked even if only one bcache device is attached to
      the cache set; that is a waste of time and unnecessary.
      
      This patch adds a member devices_max_used to struct cache_set. Its value
      is 1 + the maximum used index of devices[] in the cache set. When iterating
      over all valid bcache devices of a cache set, using c->devices_max_used as
      the for-loop bound avoids a lot of useless checking.
      
      Personally, my motivation for this patch is not performance; I use it in
      bcache debugging, where it helps me narrow down the scope when checking the
      valid bcache devices of a cache set.
      Signed-off-by: Coly Li <colyli@suse.de>
      Reviewed-by: Michael Lyle <mlyle@lyle.org>
      Reviewed-by: Tang Junhui <tang.junhui@zte.com.cn>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
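      A hedged sketch of the iteration pattern; the types below are simplified
      stand-ins for struct cache_set, not the real definitions:

      struct bcache_device;                           /* opaque here */

      struct toy_cache_set {
              struct bcache_device **devices;         /* nr_uuids slots (4096 by default) */
              unsigned int devices_max_used;          /* 1 + highest index ever used */
      };

      /* Visit only the slots up to the high-water mark instead of all
       * nr_uuids entries. */
      static void for_each_attached_device(struct toy_cache_set *c,
                                           void (*fn)(struct bcache_device *))
      {
              unsigned int i;

              for (i = 0; i < c->devices_max_used; i++)
                      if (c->devices[i])              /* skip unused slots */
                              fn(c->devices[i]);
      }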
    • bcache: fix unmatched generic_end_io_acct() & generic_start_io_acct() · b40503ea
      Zhai Zhaoxuan committed
      The functions cached_dev_make_request() and flash_dev_make_request() call
      generic_start_io_acct() with (struct bcache_device)->disk when they start a
      closure. Then the function bio_complete() calls generic_end_io_acct() with
      (struct search)->orig_bio->bi_disk when the closure is done.
      Since bi_disk is not the bcache device, generic_end_io_acct() is called on
      the wrong device queue.
      
      This causes the "inflight" counter (in struct hd_struct) to keep increasing
      without ever decreasing.
      
      This patch fixes the problem by calling generic_end_io_acct() with
      (struct bcache_device)->disk.
      Signed-off-by: Zhai Zhaoxuan <kxuanobj@gmail.com>
      Reviewed-by: Michael Lyle <mlyle@lyle.org>
      Reviewed-by: Coly Li <colyli@suse.de>
      Reviewed-by: Tang Junhui <tang.junhui@zte.com.cn>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
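      A hedged sketch of the pairing requirement; the argument lists follow the
      4.15-era generic_start_io_acct()/generic_end_io_acct() helpers, and d, bio
      and start_time are assumed locals:

      /* Both halves of the accounting must reference the same partition,
       * here the bcache device's own disk, so the per-partition "inflight"
       * counter is decremented exactly where it was incremented. */
      generic_start_io_acct(d->disk->queue, bio_data_dir(bio),
                            bio_sectors(bio), &d->disk->part0);
      /* ... the request runs to completion ... */
      generic_end_io_acct(d->disk->queue, bio_data_dir(bio),
                          &d->disk->part0, start_time);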
    • bcache: mark closure_sync() __sched · ce439bf7
      Kent Overstreet committed
      [edit by mlyle: include sched/debug.h to get __sched]
      Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
      Signed-off-by: Michael Lyle <mlyle@lyle.org>
      Reviewed-by: Michael Lyle <mlyle@lyle.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • bcache: Fix, improve efficiency of closure_sync() · e4bf7919
      Kent Overstreet committed
      Eliminates cases where sync can race and fail to complete / get stuck.
      Removes many status flags and simplifies entering-and-exiting closure
      sleeping behaviors.
      
      [mlyle: fixed conflicts due to changed return behavior in mainline.
      extended commit comment, and squashed down two commits that were mostly
      contradictory to get to this state.  Changed __set_current_state to
      set_current_state per Jens review comment]
      Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
      Signed-off-by: Michael Lyle <mlyle@lyle.org>
      Reviewed-by: Michael Lyle <mlyle@lyle.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
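      The race class being removed is the classic lost wakeup between testing a
      condition and going to sleep; a generic, hedged sketch of the safe pattern
      (not the actual closure code, and closure_is_done() is a hypothetical
      helper) looks like this:

      /* Publish "about to sleep" before re-checking the condition, so a
       * wakeup arriving in between is not lost. */
      while (1) {
              set_current_state(TASK_UNINTERRUPTIBLE);
              if (closure_is_done(cl))        /* hypothetical condition */
                      break;
              schedule();
      }
      __set_current_state(TASK_RUNNING);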
    • bcache: allow quick writeback when backing idle · b1092c9a
      Michael Lyle committed
      If the control system would wait for at least half a second, and there have
      been no requests hitting the backing disk for a while, use an alternate mode
      where we have at most one contiguous set of writebacks in flight at a time
      (but don't otherwise delay).  If front-end IO appears, it will
      still be quick, as it will only have to contend with one real operation
      in flight.  But otherwise, we'll be sending data to the backing disk as
      quickly as it can accept it (with one op at a time).
      Signed-off-by: Michael Lyle <mlyle@lyle.org>
      Reviewed-by: Tang Junhui <tang.junhui@zte.com.cn>
      Acked-by: Coly Li <colyli@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
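      A rough, hedged sketch of the decision described above; the variable names
      and the exact threshold handling are assumptions, not the patch itself:

      /* If the controller would delay at least half a second anyway and the
       * backing disk has been idle, drop the delay but keep only one
       * contiguous writeback operation in flight at a time. */
      if (delay >= HZ / 2 && backing_idle) {
              delay = 0;              /* send writeback as fast as accepted */
              in_flight_limit = 1;    /* ...but one contiguous op at a time */
      }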
    • bcache: writeback: properly order backing device IO · 6e6ccc67
      Michael Lyle committed
      Writeback keys are presently iterated and dispatched for writeback in
      order of the logical block address on the backing device.  Multiple keys may
      be read from the cache device in parallel and then written back (especially
      when the I/O is contiguous).
      
      However, there was no guarantee with the existing code that the writes would
      be issued in LBA order, as the reads from the cache device are often
      reordered.  In turn, when writing back quickly, the backing disk often has
      to seek backwards; this slows writeback and increases utilization.
      
      This patch introduces an ordering mechanism that guarantees that the
      original order of issue is maintained for the write portion of the I/O.
      Performance for writeback is significantly improved when there are
      multiple contiguous keys or high writeback rates.
      Signed-off-by: Michael Lyle <mlyle@lyle.org>
      Reviewed-by: Tang Junhui <tang.junhui@zte.com.cn>
      Tested-by: Tang Junhui <tang.junhui@zte.com.cn>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
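      A hedged sketch of one way to impose that ordering, using a FIFO of
      in-flight writeback ios; this mirrors the idea rather than the exact
      structures in the patch:

      #include <linux/list.h>

      /* Writeback ios are queued in issue (LBA) order; after its read from
       * the cache completes, an io submits its backing-device write only
       * once it has reached the head of the queue. */
      struct wb_io {
              struct list_head list;                  /* position in issue order */
              /* ... bio, key, state ... */
      };

      static bool may_write_now(struct list_head *inflight, struct wb_io *io)
      {
              return inflight->next == &io->list;     /* oldest entry only */
      }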
    • bcache: fix wrong return value in bch_debug_init() · 539d39eb
      Tang Junhui committed
      In bch_debug_init(), ret is always 0, so the return value is useless. Change
      it to return 0 on success after calling debugfs_create_dir(), and a non-zero
      value otherwise.
      Signed-off-by: Tang Junhui <tang.junhui@zte.com.cn>
      Reviewed-by: Michael Lyle <mlyle@lyle.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
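      A minimal sketch of the corrected return path, with the function's real
      parameters omitted and the errno chosen only for illustration:

      static struct dentry *bcache_debug;

      int bch_debug_init(void)
      {
              bcache_debug = debugfs_create_dir("bcache", NULL);

              /* NULL and ERR_PTR() both mean the directory was not created */
              return IS_ERR_OR_NULL(bcache_debug) ? -ENOMEM : 0;
      }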
    • bcache: segregate flash only volume write streams · 4eca1cb2
      Tang Junhui committed
      In a scenario where there are some flash-only volumes and some cached
      devices, when many tasks issue requests to these devices in writeback mode,
      the write IOs may fall into the same bucket, as below:
      | cached data | flash data | cached data | cached data | flash data |
      Then, after writeback of these cached devices, the bucket would look like
      this:
      | free | flash data | free | free | flash data |
      
      So there is a lot of free space in this bucket, but since data from
      flash-only volumes still exists, the bucket cannot be reclaimed, which
      wastes bucket space.
      
      In this patch, we segregate flash-only volume write streams from cached
      devices, so data from flash-only volumes and cached devices is stored in
      different buckets.
      
      Compared to the v1 patch, this patch does not add an additional open bucket
      list; it makes a best effort to segregate flash-only volume write streams
      from cached devices. Sectors of flash-only volumes may still be mixed with
      dirty sectors of cached devices, but the number is very small.
      
      [mlyle: fixed commit log formatting, permissions, line endings]
      Signed-off-by: Tang Junhui <tang.junhui@zte.com.cn>
      Reviewed-by: Michael Lyle <mlyle@lyle.org>
      Signed-off-by: Michael Lyle <mlyle@lyle.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
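      A hedged sketch of the idea: an open bucket is only reused for writes of
      the same kind, so flash-only data does not end up pinning buckets that
      mostly hold writeback data. The names below are assumptions, not the actual
      allocator code:

      struct open_bucket_slot {
              bool used_for_flash_only;   /* what this bucket last received */
              /* ... bucket pointer, last key, sectors free ... */
      };

      /* Pick an open bucket whose stream matches the incoming write. */
      static bool bucket_matches_stream(const struct open_bucket_slot *b,
                                        bool write_is_flash_only)
      {
              return b->used_for_flash_only == write_is_flash_only;
      }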
    • bcache: Use PTR_ERR_OR_ZERO() · 9d134117
      Vasyl Gomonovych committed
      Fix ptr_ret.cocci warnings:
      drivers/md/bcache/btree.c:1800:1-3: WARNING: PTR_ERR_OR_ZERO can be used
      
      Use PTR_ERR_OR_ZERO rather than if(IS_ERR(...)) + PTR_ERR
      
      Generated by: scripts/coccinelle/api/ptr_ret.cocci
      Signed-off-by: Vasyl Gomonovych <gomonovych@gmail.com>
      Reviewed-by: Michael Lyle <mlyle@lyle.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
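      The transformation the Coccinelle script suggests, shown on a generic
      pointer k rather than the exact btree.c hunk:

      #include <linux/err.h>

      /* before: open-coded check */
      if (IS_ERR(k))
              return PTR_ERR(k);
      return 0;

      /* after: equivalent and shorter */
      return PTR_ERR_OR_ZERO(k);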
    • bcache: stop writeback thread after detaching · 8d29c442
      Tang Junhui committed
      Currently, when a cached device is detached from the cache, the writeback
      thread is not stopped and the writeback_rate_update work is not canceled.
      For example, after the following command:
      echo 1 >/sys/block/sdb/bcache/detach
      you can still see the writeback thread. If you then attach the device to the
      cache again, bcache will create another writeback thread; for example, after
      the command below:
      echo  ba0fb5cd-658a-4533-9806-6ce166d883b9 > /sys/block/sdb/bcache/attach
      you will see two writeback threads.
      This patch stops the writeback thread and cancels the writeback_rate_update
      work when a cached device is detached from the cache.
      
      Compared with patch v1, this v2 patch moves the code down into the register
      lock for safety against any future changes, as Coly and Mike suggested.
      
      [edit by mlyle: commit log spelling/formatting]
      Signed-off-by: Tang Junhui <tang.junhui@zte.com.cn>
      Reviewed-by: Michael Lyle <mlyle@lyle.org>
      Signed-off-by: Michael Lyle <mlyle@lyle.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
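      A hedged sketch of the teardown added on detach; locking and ordering
      details are elided, and the field names simply follow the struct cached_dev
      members the message refers to:

      /* Make sure the periodic rate-update work cannot run again, then stop
       * the per-device writeback kthread. */
      cancel_delayed_work_sync(&dc->writeback_rate_update);

      if (!IS_ERR_OR_NULL(dc->writeback_thread)) {
              kthread_stop(dc->writeback_thread);
              dc->writeback_thread = NULL;
      }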
    • bcache: ret IOERR when read meets metadata error · b221fc13
      Rui Hua committed
      The read request might hit an error when searching the btree, but the error
      was not handled in cache_lookup(), so this kind of metadata failure would
      not go into cached_dev_read_error(); in the end, the upper layer would
      receive bi_status=0.  In this patch we detect a metadata error by the return
      value of bch_btree_map_keys(); there are two potential paths that give rise
      to the error:
      
      1. Because the btree is not entirely cached in memory, we may get an error
         when reading a btree node from the cache device (see bch_btree_node_get());
         the likely errno is -EIO or -ENOMEM.
      
      2. When a read miss happens, bch_btree_insert_check_key() will be called to
         insert a "replace_key" into the btree (see cached_dev_cache_miss(); this
         is just preparatory work before inserting the missed data into the cache
         device). A failure can also happen in this situation; the likely errno is
         -ENOMEM.
      
      bch_btree_map_keys() will return MAP_DONE in the normal scenario, but we
      will get either -EIO or -ENOMEM in the two cases above. If this happens, we
      should NOT recover data from the backing device (when the cache device is
      dirty), because we don't know whether the bkeys covered by the read request
      are all clean. When it happens, s->iop.status still has its initial value
      (0) before we submit s->bio.bio; we set it to BLK_STS_IOERR so that the
      request goes into cached_dev_read_error() and can finally be passed to the
      upper layer, or recovered by rereading from the backing device.
      
      [edit by mlyle: patch formatting, word-wrap, comment spelling,
      commit log format]
      Signed-off-by: Hua Rui <huarui.dev@gmail.com>
      Reviewed-by: Michael Lyle <mlyle@lyle.org>
      Signed-off-by: Michael Lyle <mlyle@lyle.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
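      A hedged sketch of the check in cache_lookup(); arguments are elided and
      only the return-value handling described above is shown:

      /* A negative return from bch_btree_map_keys() means the metadata
       * lookup itself failed (-EIO / -ENOMEM from the two paths above);
       * mark the request so it takes the cached_dev_read_error() path
       * instead of completing with bi_status == 0. */
      ret = bch_btree_map_keys(/* ... actual arguments elided ... */);
      if (ret < 0)
              s->iop.status = BLK_STS_IOERR;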
  5. 08 Jan 2018, 17 commits
  6. 07 Jan 2018, 1 commit