1. 03 October 2020, 6 commits
  2. 25 July 2020, 3 commits
    • bcache: handle cache prio_buckets and disk_buckets properly for bucket size > 8MB · c954ac8d
      Committed by Coly Li
      Similar to c->uuids, struct cache's prio_buckets and disk_buckets can
      also fail memory allocation during cache registration if the bucket
      size is larger than 8MB.

      ca->prio_buckets can be stored on the cache device in multiple
      buckets. Its in-memory space is requested through the kzalloc()
      interface, but is normally served by alloc_pages() because the size
      exceeds KMALLOC_MAX_CACHE_SIZE.

      So the allocation of ca->prio_buckets is subject to the MAX_ORDER
      restriction too. If the bucket size is larger than 8MB, by default the
      page allocator will fail because the page order exceeds 11 (the
      default MAX_ORDER value). ca->prio_buckets should therefore use
      meta_bucket_bytes() and meta_bucket_pages() to decide its memory size,
      and alloc_meta_bucket_pages() to allocate the pages, to avoid the
      allocation failure during cache set registration when the bucket size
      is larger than 8MB.

      ca->disk_buckets is a memory buffer of a single bucket size. It is
      used to iterate over the buckets of ca->prio_buckets: a bio is
      composed from the memory of ca->disk_buckets, and that memory is then
      written to the cache disk one bucket at a time for each bucket of
      ca->prio_buckets. The in-memory size of ca->disk_buckets should be
      exactly meta_bucket_pages() pages, which is the size in which
      ca->prio_buckets is stored into each on-disk bucket.

      This patch fixes the above issues and handles the cache's prio_buckets
      and disk_buckets properly for bucket sizes larger than 8MB.
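      As a rough sketch (not the literal patch; error handling is
      simplified), the registration-time allocation moves to the capped
      meta-bucket size:

      	/* allocate the single-bucket staging buffer with the capped
      	 * meta-bucket size instead of the raw bucket size */
      	ca->disk_buckets = alloc_meta_bucket_pages(GFP_KERNEL, &ca->sb);
      	if (!ca->disk_buckets)
      		return -ENOMEM;

      	/* the prio read/write paths then move meta_bucket_bytes(&ca->sb)
      	 * bytes per bucket instead of bucket_bytes(ca) */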
      Signed-off-by: Coly Li <colyli@suse.de>
      Reviewed-by: Hannes Reinecke <hare@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      c954ac8d
    • bcache: introduce meta_bucket_pages() related helper routines · de1fafab
      Committed by Coly Li
      Currently the in-memory meta data like c->uuids or c->disk_buckets
      are allocated by alloc_bucket_pages(). The macro alloc_bucket_pages()
      calls __get_free_pages() to allocate contiguous pages with an order
      given by ilog2(bucket_pages(c)),
       #define alloc_bucket_pages(gfp, c)                      \
           ((void *) __get_free_pages(__GFP_ZERO|gfp, ilog2(bucket_pages(c))))
      
      The maximum order is defined as MAX_ORDER; the default value is 11
      (and can be overridden by CONFIG_FORCE_MAX_ZONEORDER). In the bcache
      code the maximum bucket size width is 16 bits, restricted both by the
      KEY_SIZE width and by the bucket_size field of struct cache_sb_disk.
      The maximum power-of-2 value that fits in 16 bits is (1<<15), in units
      of sectors (512 bytes). This means the maximum bucket size is (1<<24)
      bytes, a.k.a. 4096 pages.

      When the bucket size is set to the maximum permitted value, ilog2(4096)
      is 12, which exceeds the default maximum order __get_free_pages() can
      accept. The failed page allocation then fails the cache set
      registration procedure and prints a kernel oops message for the
      excessive page order.
      
      This patch introduces the meta_bucket_pages(), meta_bucket_bytes(),
      and alloc_meta_bucket_pages() helper routines. meta_bucket_pages()
      returns the maximum number of pages that can be allocated for a meta
      data bucket, meta_bucket_bytes() returns the corresponding maximum
      bytes, and alloc_meta_bucket_pages() performs the page allocation for
      a meta bucket. Because meta_bucket_pages() chooses the smaller value
      between the bucket size and MAX_ORDER_NR_PAGES, it still works when
      MAX_ORDER is overridden by CONFIG_FORCE_MAX_ZONEORDER.
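      A sketch of what these helpers might look like, assuming bucket_size
      is recorded in 512-byte sectors (the exact in-tree definitions may
      differ slightly):

      	static inline unsigned int meta_bucket_pages(struct cache_sb *sb)
      	{
      		return min_t(unsigned int, sb->bucket_size / PAGE_SECTORS,
      			     MAX_ORDER_NR_PAGES);
      	}

      	static inline unsigned int meta_bucket_bytes(struct cache_sb *sb)
      	{
      		return meta_bucket_pages(sb) << PAGE_SHIFT;
      	}

      	#define alloc_meta_bucket_pages(gfp, sb)			\
      		((void *) __get_free_pages(__GFP_ZERO|(gfp),		\
      					   ilog2(meta_bucket_pages(sb))))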
      
      Following patches will use these helper routines to decide the maximum
      number of pages that can be allocated for the different meta data
      buckets. If the bucket size is larger than meta_bucket_bytes(), bcache
      registration can still succeed; the space beyond meta_bucket_bytes()
      inside the bucket is simply wasted. Compared with bcache failing
      outright for large bucket sizes, wasting some space in meta data
      buckets is acceptable at this moment.
      Signed-off-by: Coly Li <colyli@suse.de>
      Reviewed-by: Hannes Reinecke <hare@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      de1fafab
    • bcache: fix overflow in offset_to_stripe() · 7a148126
      Committed by Coly Li
      offset_to_stripe() returns the stripe number (in type unsigned int)
      from an offset (in type uint64_t) by the following calculation,
      	do_div(offset, d->stripe_size);
      For a large-capacity backing device (e.g. 18TB) with a small stripe
      size (e.g. 4KB), the result is 4831838208, which exceeds UINT_MAX. Due
      to the overflow, the value the caller actually receives is 536870912.
      
      Indeed, in bcache_device_init() bcache_device->nr_stripes is limited
      to the range [1, INT_MAX]. Therefore all valid stripe numbers in
      bcache are in the range [0, bcache_device->nr_stripes - 1].
      
      This patch adds an upper limit check in offset_to_stripe(): the
      maximum valid stripe number must be less than
      bcache_device->nr_stripes. If the stripe number calculated by do_div()
      is equal to or larger than bcache_device->nr_stripes, -EINVAL is
      returned. (Normally nr_stripes is less than INT_MAX; exceeding the
      upper limit does not necessarily mean an overflow, therefore
      -EOVERFLOW is not used as the error code.)
      
      This patch also changes the type of struct bcache_device's nr_stripes
      from 'unsigned int' to 'int', and the return type of
      offset_to_stripe() from 'unsigned int' to 'int', to match their actual
      data ranges.

      All locations where bcache_device->nr_stripes and offset_to_stripe()
      are referenced are also updated for the above type change.
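      A sketch of the checked helper (close to the behavior described above,
      but simplified):

      	static inline int offset_to_stripe(struct bcache_device *d,
      					   uint64_t offset)
      	{
      		do_div(offset, d->stripe_size);

      		/* d->nr_stripes is in range [1, INT_MAX] */
      		if (unlikely(offset >= d->nr_stripes)) {
      			pr_err("Invalid stripe %llu (>= nr_stripes %d).\n",
      			       (uint64_t)offset, d->nr_stripes);
      			return -EINVAL;
      		}

      		return offset;
      	}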
      Reported-and-tested-by: Ken Raeburn <raeburn@redhat.com>
      Signed-off-by: Coly Li <colyli@suse.de>
      Cc: stable@vger.kernel.org
      Link: https://bugzilla.redhat.com/show_bug.cgi?id=1783075
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      7a148126
  3. 01 July 2020, 1 commit
  4. 27 May 2020, 1 commit
    • bcache: Convert pr_<level> uses to a more typical style · 46f5aa88
      Committed by Joe Perches
      Remove the trailing newline from the define of pr_fmt and add newlines
      to the uses.
      
      Miscellanea:
      
      o Convert bch_bkey_dump from multiple uses of pr_err to pr_cont,
        as the earlier conversion was done inappropriately, causing multiple
        lines to be emitted where only a single output line was desired
      o Use vsprintf extension %pV in bch_cache_set_error to avoid multiple
        line output where only a single line output was desired
      o Coalesce formats
      
      Fixes: 6ae63e35 ("bcache: replace printk() by pr_*() routines")
      Signed-off-by: Joe Perches <joe@perches.com>
      Signed-off-by: Coly Li <colyli@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      46f5aa88
  5. 01 February 2020, 1 commit
    • bcache: add readahead cache policy options via sysfs interface · 038ba8cc
      Committed by Coly Li
      Back in 2007 high-performance SSDs were still expensive. In order to
      save more space for the real workload and meta data, readahead I/Os
      for non-meta data were bypassed and not cached on the SSD.

      Nowadays SSD prices have dropped a lot and larger SSDs are available
      at comfortable prices, so it is unnecessary to always bypass normal
      readahead I/Os just to save SSD space.
      
      This patch adds options for readahead data cache policies via sysfs
      file /sys/block/bcache<N>/readahead_cache_policy, the options are,
      - "all": cache all readahead data I/Os.
      - "meta-only": only cache meta data, and bypass other regular I/Os.
      
      If users want bcache to keep caching only readahead requests for meta
      data and bypassing regular data readahead, they should write
      "meta-only" to this sysfs file. By default, bcache now goes back to
      caching all readahead requests.
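      A hypothetical sketch of the bypass decision in the request path (the
      flag and field names here are illustrative, not a quote from the
      patch):

      	/* bypass a readahead bio unless it is for meta data or the
      	 * user selected the "all" policy */
      	if ((bio->bi_opf & REQ_RAHEAD) &&
      	    !(bio->bi_opf & (REQ_META|REQ_PRIO)) &&
      	    dc->cache_readahead_policy != CACHE_READA_ALL)
      		goto skip;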
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Coly Li <colyli@suse.de>
      Acked-by: Eric Wheeler <bcache@linux.ewheeler.net>
      Cc: Michael Lyle <mlyle@lyle.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      038ba8cc
  6. 24 January 2020, 1 commit
  7. 14 November 2019, 3 commits
    • bcache: add idle_max_writeback_rate sysfs interface · c5fcdedc
      Committed by Coly Li
      In writeback mode, if there is no regular I/O request for a while, the
      writeback rate will be set to the maximum value (1TB/s for now). This
      is good for most storage workloads, but there are still people who do
      not want the maximum writeback rate during I/O idle time.
      
      This patch adds a sysfs interface file idle_max_writeback_rate to
      permit people to disable the maximum writeback rate. The minimum
      writeback rate can then be advised via writeback_rate_minimum in the
      bcache device's sysfs interface.
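      A minimal sketch of the opt-out gate, assuming a boolean
      idle_max_writeback_rate_enabled member on struct cache_set backing the
      sysfs file:

      	static bool set_at_max_writeback_rate(struct cache_set *c,
      					      struct cached_dev *dc)
      	{
      		/* don't boost the rate if it was disabled via sysfs */
      		if (!c->idle_max_writeback_rate_enabled)
      			return false;

      		/* ... existing idle-detection logic continues here ... */
      		return true;
      	}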
      Reported-by: Christian Balzer <chibi@gol.com>
      Signed-off-by: Coly Li <colyli@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      c5fcdedc
    • bcache: fix deadlock in bcache_allocator · 84c529ae
      Committed by Andrea Righi
      bcache_allocator can call the following:
      
       bch_allocator_thread()
        -> bch_prio_write()
           -> bch_bucket_alloc()
              -> wait on &ca->set->bucket_wait
      
      But the wake up event on bucket_wait is supposed to come from
      bch_allocator_thread() itself => deadlock:
      
      [ 1158.490744] INFO: task bcache_allocato:15861 blocked for more than 10 seconds.
      [ 1158.495929]       Not tainted 5.3.0-050300rc3-generic #201908042232
      [ 1158.500653] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
      [ 1158.504413] bcache_allocato D    0 15861      2 0x80004000
      [ 1158.504419] Call Trace:
      [ 1158.504429]  __schedule+0x2a8/0x670
      [ 1158.504432]  schedule+0x2d/0x90
      [ 1158.504448]  bch_bucket_alloc+0xe5/0x370 [bcache]
      [ 1158.504453]  ? wait_woken+0x80/0x80
      [ 1158.504466]  bch_prio_write+0x1dc/0x390 [bcache]
      [ 1158.504476]  bch_allocator_thread+0x233/0x490 [bcache]
      [ 1158.504491]  kthread+0x121/0x140
      [ 1158.504503]  ? invalidate_buckets+0x890/0x890 [bcache]
      [ 1158.504506]  ? kthread_park+0xb0/0xb0
      [ 1158.504510]  ret_from_fork+0x35/0x40
      
      Fix by making the call to bch_prio_write() non-blocking, so that
      bch_allocator_thread() never waits on itself.
      
      Moreover, make sure to wake up the garbage collector thread when
      bch_prio_write() is failing to allocate buckets.
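      A sketch of the reworked caller (simplified; names follow the
      description above):

      	/* allocator thread: never wait on bucket_wait, since we are
      	 * the one responsible for waking it up */
      	if (bch_prio_write(ca, false) < 0) {
      		ca->invalidate_needs_gc = 1;
      		wake_up_gc(ca->set);	/* let gc free some buckets */
      	}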
      
      BugLink: https://bugs.launchpad.net/bugs/1784665
      BugLink: https://bugs.launchpad.net/bugs/1796292
      Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
      Signed-off-by: Coly Li <colyli@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      84c529ae
    • bcache: fix a lost wake-up problem caused by mca_cannibalize_lock · 34cf78bf
      Committed by Guoju Fang
      This patch fixes a lost wake-up problem caused by the race between
      mca_cannibalize_lock and bch_cannibalize_unlock.

      Consider two processes, A and B. Process A is executing
      mca_cannibalize_lock, while process B holds c->btree_cache_alloc_lock
      and is executing bch_cannibalize_unlock. The race window opens after
      process A's cmpxchg fails but before it calls prepare_to_wait: in that
      window process B executes wake_up, and only afterwards does process A
      call prepare_to_wait and set its state to TASK_INTERRUPTIBLE. Process
      A then goes to sleep, and nobody will ever wake it up. This problem
      may cause the bcache device to hang.
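      A sketch of the fixed lock routine, which closes the window by making
      the busy check and prepare_to_wait() atomic under a spinlock
      (simplified from the description above):

      	static int mca_cannibalize_lock(struct cache_set *c,
      					struct btree_op *op)
      	{
      		spin_lock(&c->btree_cannibalize_lock);
      		if (likely(c->btree_cache_alloc_lock == NULL)) {
      			c->btree_cache_alloc_lock = current;
      		} else if (c->btree_cache_alloc_lock != current) {
      			/* registered as a waiter before dropping the
      			 * spinlock, so the wake-up cannot be missed */
      			if (op)
      				prepare_to_wait(&c->btree_cache_wait,
      						&op->wait,
      						TASK_UNINTERRUPTIBLE);
      			spin_unlock(&c->btree_cannibalize_lock);
      			return -EINTR;
      		}
      		spin_unlock(&c->btree_cannibalize_lock);
      		return 0;
      	}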
      Signed-off-by: Guoju Fang <fangguoju@gmail.com>
      Signed-off-by: Coly Li <colyli@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      34cf78bf
  8. 28 June 2019, 4 commits
  9. 13 December 2018, 2 commits
    • bcache: option to automatically run gc thread after writeback · 7a671d8e
      Committed by Coly Li
      The option gc_after_writeback is disabled by default, because garbage
      collection discards SSD data, which drops cached data.
      
      Echo 1 into /sys/fs/bcache/<UUID>/internal/gc_after_writeback will
      enable this option, which wakes up gc thread when writeback accomplished
      and all cached data is clean.
      
      This option is helpful for people who care more about write
      performance. Under a heavy write workload, all cached data becoming
      clean only happens when the writeback thread cleans everything during
      I/O idle time. In such a situation, a subsequent gc run may help to
      shrink the bcache B+ tree and discard more clean data, which may be
      helpful for future write requests.
      
      If you are not sure whether this is helpful for your own workload,
      please leave it as disabled by default.
      Signed-off-by: Coly Li <colyli@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      7a671d8e
    • bcache: add comment for cache_set->fill_iter · d2f96f48
      Committed by Shenghui Wang
      We have the following define for btree iterator:
      	struct btree_iter {
      		size_t size, used;
      	#ifdef CONFIG_BCACHE_DEBUG
      		struct btree_keys *b;
      	#endif
      		struct btree_iter_set {
      			struct bkey *k, *end;
      		} data[MAX_BSETS];
      	};
      
      We can see that the length of the data[] field is the static
      MAX_BSETS, which is currently defined as 4.

      But a btree node on disk could have too many bsets for an iterator to
      fit on the stack - maybe far more than MAX_BSETS - so we have to
      dynamically allocate space to hold more btree_iter_sets.
      
      bch_cache_set_alloc() will make sure the pool cache_set->fill_iter can
      allocate an iterator with enough room to host
      	(sb.bucket_size / sb.block_size)
      btree_iter_sets, which is more than the static MAX_BSETS.
      
      bch_btree_node_read_done() will use that pool to allocate one iterator, to
      host many bsets in one btree node.
      
      Add more comment around cache_set->fill_iter to make code less confusing.
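      A sketch of the sizing logic (simplified from what
      bch_cache_set_alloc() does):

      	/* room for one btree_iter_set per block in a bucket, which
      	 * bounds the number of bsets a btree node can hold on disk */
      	iter_size = (sb->bucket_size / sb->block_size + 1) *
      		    sizeof(struct btree_iter_set);

      	if (mempool_init_kmalloc_pool(&c->fill_iter, 1, iter_size))
      		goto err;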
      Signed-off-by: Shenghui Wang <shhuiw@foxmail.com>
      Signed-off-by: Coly Li <colyli@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      d2f96f48
  10. 08 October 2018, 1 commit
  11. 27 September 2018, 1 commit
    • bcache: add separate workqueue for journal_write to avoid deadlock · 0f843e65
      Committed by Guoju Fang
      After a write to the SSD completes, bcache schedules the journal_write
      work on system_wq, which is a public system workqueue without the
      WQ_MEM_RECLAIM flag. system_wq is also a bound workqueue, and there
      may be no idle kworker on the current processor. Creating a new
      kworker may unfortunately need to reclaim memory first, by shrinking
      caches and slabs used by the vfs, which depends on the bcache device.
      That is a deadlock.

      This patch creates a new workqueue for journal_write with the
      WQ_MEM_RECLAIM flag. Its rescuer thread will kick in to avoid the
      deadlock.
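      A sketch of the setup (the workqueue name here is illustrative):

      	/* a dedicated workqueue whose WQ_MEM_RECLAIM rescuer can make
      	 * progress even when no new kworker can be created */
      	bch_journal_wq = alloc_workqueue("bcache_journal",
      					 WQ_MEM_RECLAIM, 0);
      	if (!bch_journal_wq)
      		goto err;

      	/* journal completions are then queued here instead of on
      	 * system_wq */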
      Signed-off-by: Guoju Fang <fangguoju@gmail.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Coly Li <colyli@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      0f843e65
  12. 12 August 2018, 6 commits
  13. 09 August 2018, 3 commits
    • bcache: set max writeback rate when I/O request is idle · ea8c5356
      Committed by Coly Li
      Commit b1092c9a ("bcache: allow quick writeback when backing idle")
      allows the writeback rate to be faster if there is no I/O request on a
      bcache device. It works well if there is only one bcache device
      attached to the cache set. If there are many bcache devices attached
      to a cache set, it may introduce a performance regression because the
      multiple faster writeback threads of the idle bcache devices will
      compete for the btree-level locks with the bcache devices that do have
      I/O requests coming.
      
      This patch fixes the above issue by only permitting fast writeback
      when all bcache devices attached to the cache set are idle. And if one
      of the bcache devices has new I/O requests coming, all writeback
      throughput is minimized immediately and the PI controller
      __update_writeback_rate() decides the upcoming writeback rate for each
      bcache device.
      
      Also, when all bcache devices are idle, limiting the writeback rate to
      a small number wastes throughput, especially when the backing devices
      are slower non-rotational devices (e.g. SATA SSD). This patch sets a
      max writeback rate for each backing device if the whole cache set is
      idle. A faster writeback rate in idle time means new I/Os may have
      more available space for dirty data, and people may observe better
      write performance then.
      
      Please note that bcache may change its cache mode at run time, and
      this patch still works if the cache mode is switched away from
      writeback mode while there is still dirty data on the cache.
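      An illustrative sketch of the whole-set idle test (field names follow
      the patch description; details such as the exact threshold are
      assumptions of this sketch):

      	static bool set_at_max_writeback_rate(struct cache_set *c,
      					      struct cached_dev *dc)
      	{
      		/* bumped by the per-device update timers, reset to 0 by
      		 * every incoming regular I/O on any attached device */
      		if (atomic_inc_return(&c->idle_counter) <
      		    dc->writeback_rate_update_seconds * 6)
      			return false;

      		atomic_set(&c->at_max_writeback_rate, 1);
      		dc->writeback_rate.rate = INT_MAX;
      		return true;
      	}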
      
      Fixes: b1092c9a ("bcache: allow quick writeback when backing idle")
      Cc: stable@vger.kernel.org #4.16+
      Signed-off-by: Coly Li <colyli@suse.de>
      Tested-by: Kai Krakow <kai@kaishome.de>
      Tested-by: Stefan Priebe <s.priebe@profihost.ag>
      Cc: Michael Lyle <mlyle@lyle.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      ea8c5356
    • bcache: fix mistaken code comments in bcache.h · cb329dec
      Committed by Coly Li
      This patch updates the code comments in struct cache with the correct
      array names, to make the code more comprehensible.
      Signed-off-by: Coly Li <colyli@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      cb329dec
    • bcache: do not check return value of debugfs_create_dir() · 78ac2107
      Committed by Coly Li
      Greg KH suggests that normal code should not care about debugfs.
      Therefore, no matter whether debugfs_create_dir() succeeds or fails,
      it is unnecessary to check its return value.

      Two functions, bch_debug_init() and closure_debug_init(), call
      debugfs_create_dir() and check the return value. This patch changes
      these two functions from int to void type, and ignores the return
      value of debugfs_create_dir().
      
      This patch does not fix an exact bug; it just makes things work as
      they should.
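      The resulting shape of the init routine (a sketch; the directory name
      follows the driver's convention):

      	void bch_debug_init(void)
      	{
      		/* no error check: debugfs failures must not stop us */
      		bcache_debug = debugfs_create_dir("bcache", NULL);
      	}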
      Signed-off-by: Coly Li <colyli@suse.de>
      Suggested-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: stable@vger.kernel.org
      Cc: Kai Krakow <kai@kaishome.de>
      Cc: Kent Overstreet <kent.overstreet@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      78ac2107
  14. 27 July 2018, 2 commits
    • bcache: finish incremental GC · 5c25c4fc
      Committed by Tang Junhui
      In the GC thread we record the latest GC key in gc_done, which is
      expected to be used for incremental GC, but the current code does not
      realize it. When GC runs, front-side I/O is blocked until the GC is
      over, which can be a long time if there are a lot of btree nodes.

      This patch realizes incremental GC. The main idea is that when there
      are front-side I/Os, after GC'ing some number of nodes (100), we stop
      GC, release the lock on the btree node, and go process the front-side
      I/Os for some time (100 ms), then go back to GC again.

      With this patch, I/Os are no longer blocked for the whole duration of
      GC, and the obvious I/O drop-to-zero problem is gone.
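      A sketch of the stop/resume check inside the btree traversal (the
      constants mirror the description above; the surrounding logic is
      simplified):

      	#define MIN_GC_NODES	100
      	#define GC_SLEEP_MS	100

      	if (atomic_read(&c->search_inflight) &&
      	    gc->nodes >= gc->nodes_pre + MIN_GC_NODES) {
      		gc->nodes_pre = gc->nodes;	/* remember progress */
      		ret = -EAGAIN;	/* drop locks, let front I/Os run */
      		break;
      	}

      	/* the caller sleeps GC_SLEEP_MS on -EAGAIN and then resumes
      	 * the traversal from gc_done */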
      
      Patch v2: Rename some variable and macro names as Coly suggested.
      Signed-off-by: Tang Junhui <tang.junhui@zte.com.cn>
      Signed-off-by: Coly Li <colyli@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      5c25c4fc
    • bcache: simplify the calculation of the total amount of flash dirty data · 99a27d59
      Committed by Tang Junhui
      Currently we calculate the total amount of dirty data on flash-only
      devices by adding up the dirty data of each flash-only device while
      holding the register lock. This is very inefficient.

      In this patch, we add a member flash_dev_dirty_sectors to struct
      cache_set to record the total amount of flash-only device dirty data
      in real time, so we no longer need to calculate the total amount of
      dirty data.
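      A sketch of the incremental bookkeeping (simplified; the UUID check
      follows the existing flash-only convention):

      	/* in struct cache_set */
      	atomic_long_t		flash_dev_dirty_sectors;

      	/* in bcache_dev_sectors_dirty_add(); nr_sectors may be
      	 * negative when dirty data is cleaned */
      	if (UUID_FLASH_ONLY(&c->uuids[inode]))
      		atomic_long_add(nr_sectors,
      				&c->flash_dev_dirty_sectors);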
      Signed-off-by: Tang Junhui <tang.junhui@zte.com.cn>
      Signed-off-by: Coly Li <colyli@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      99a27d59
  15. 31 May 2018, 1 commit
  16. 29 May 2018, 2 commits
    • bcache: Move couple of string arrays to sysfs.c · 04cbc211
      Committed by Andy Shevchenko
      There are a couple of string arrays that are used exclusively in
      sysfs.c. Move them there and make them static.

      Besides the above, this will allow further clean-up.
      
      No functional change intended.
      Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Signed-off-by: Coly Li <colyli@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      04cbc211
    • bcache: stop bcache device when backing device is offline · 0f0709e6
      Committed by Coly Li
      Currently bcache does not handle backing device failure: if a backing
      device is offline and disconnected from the system, its bcache device
      can still be accessible. If the bcache device is in writeback mode,
      I/O requests can even succeed if they hit the cache device. That is to
      say, when and how bcache handles an offline backing device is
      undefined.
      
      This patch tries to handle backing device offline in a rather simple
      way,
      - Add a cached_dev->status_update_thread kernel thread to update the
        backing device status every second.
      - Add cached_dev->offline_seconds to record how many seconds the
        backing device has been observed to be offline. If the backing
        device is offline for BACKING_DEV_OFFLINE_TIMEOUT (30) seconds, set
        dc->io_disable to 1 and call bcache_device_stop() to stop the bcache
        device linked to the offline backing device.
      
      Now if a backing device is offline for BACKING_DEV_OFFLINE_TIMEOUT
      seconds, its bcache device will be removed; user-space applications
      writing to it will then get errors immediately, and can handle the
      device failure in time.
      
      This patch is quite simple and does not handle more complicated
      situations. Once the bcache device is stopped, users need to recover
      the backing device, then register and attach it manually.
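      An illustrative sketch of the polling thread (simplified; the offline
      test via blk_queue_dying() is an assumption of this sketch):

      	static int cached_dev_status_update(void *arg)
      	{
      		struct cached_dev *dc = arg;

      		while (!kthread_should_stop() && !dc->io_disable) {
      			/* a dying request_queue means the backing
      			 * device has gone away */
      			if (blk_queue_dying(bdev_get_queue(dc->bdev)))
      				dc->offline_seconds++;
      			else
      				dc->offline_seconds = 0;

      			if (dc->offline_seconds >=
      			    BACKING_DEV_OFFLINE_TIMEOUT) {
      				dc->io_disable = true;
      				bcache_device_stop(&dc->disk);
      				break;
      			}
      			schedule_timeout_interruptible(HZ);
      		}
      		return 0;
      	}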
      
      Changelog:
      v3: call wait_for_kthread_stop() before the kernel thread exits.
      v2: remove "bcache: " prefix when calling pr_warn().
      v1: initial version.
      Signed-off-by: Coly Li <colyli@suse.de>
      Reviewed-by: Hannes Reinecke <hare@suse.com>
      Cc: Michael Lyle <mlyle@lyle.org>
      Cc: Junhui Tang <tang.junhui@zte.com.cn>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      0f0709e6
  17. 03 May 2018, 1 commit
    • bcache: store disk name in struct cache and struct cached_dev · 6e916a7e
      Committed by Coly Li
      The current code uses bdevname() or bio_devname() to reference the
      gendisk disk name when bcache needs to display disk names in kernel
      messages. This was safe before the bcache device failure handling
      patch set was merged, because when devices failed, a deadlock
      prevented bcache from printing error messages with the gendisk disk
      name. But after the failure handling patch set was merged, that
      deadlock is fixed, so it is possible that the gendisk structure
      bdev->bd_disk is already released when bdevname() is called to
      reference bdev->bd_disk->disk_name[]. This is why I received bug
      reports of NULL pointer dereference panics.

      This patch stores the gendisk disk name in a buffer inside struct
      cache and struct cached_dev, so printing the name of an offline device
      no longer references bdev->bd_disk. The patch also avoids the extra
      function calls to bdevname() and bio_devname().
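      A sketch of the idea (the field name is illustrative):

      	/* in struct cached_dev */
      	char			backing_dev_name[BDEVNAME_SIZE];

      	/* in register_bdev(), while bdev is known to be alive */
      	bdevname(bdev, dc->backing_dev_name);

      	/* error paths then print dc->backing_dev_name instead of
      	 * calling bdevname() on a possibly released bdev->bd_disk */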
      
      Changelog:
      v3, add Reviewed-by from Hannes.
      v2, call bdevname() earlier in register_bdev()
      v1, first version with suggestions from Junhui Tang.
      
      Fixes: c7b7bd07 ("bcache: add io_disable to struct cached_dev")
      Fixes: 5138ac67 ("bcache: fix misleading error message in bch_count_io_errors()")
      Signed-off-by: Coly Li <colyli@suse.de>
      Reviewed-by: Hannes Reinecke <hare@suse.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      6e916a7e
  18. 19 March 2018, 1 commit
    • bcache: add io_disable to struct cached_dev · c7b7bd07
      Committed by Coly Li
      If a bcache device is configured in writeback mode, the current code
      does not handle write I/O errors on the backing device properly.

      In writeback mode, a write request is written to the cache device
      first and later flushed to the backing device. If an I/O error occurs
      while writing from the cache device to the backing device, the bcache
      code just ignores the error, and the upper-layer code is NOT notified
      that the backing device is broken.
      
      This patch tries to handle backing device failure in the same way
      cache device failure is handled,
      - Add an error counter 'io_errors' and an error limit 'error_limit' to
        struct cached_dev. Add another io_disable flag to struct cached_dev
        to disable I/Os on the problematic backing device.
      - When an I/O error happens on the backing device, increase the
        io_errors counter. If io_errors reaches error_limit, set
        cached_dev->io_disable to true and stop the bcache device.
      
      The result is, if the backing device is broken or disconnected, and
      I/O errors reach its error limit, the backing device will be disabled
      and the associated bcache device will be removed from the system.
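      An illustrative sketch of the error accounting (the helper name and
      message text are assumptions of this sketch):

      	void bch_count_backing_io_errors(struct cached_dev *dc,
      					 struct bio *bio)
      	{
      		unsigned int errors;

      		errors = atomic_add_return(1, &dc->io_errors);
      		if (errors < dc->error_limit)
      			pr_err("IO error on backing device, io_errors=%u\n",
      			       errors);
      		else
      			bch_cached_dev_error(dc); /* disable and stop */
      	}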
      
      Changelog:
      v2: remove "bcache: " prefix in pr_error(), and use correct name string to
          print out bcache device gendisk name.
      v1: indeed this is new added in v2 patch set.
      Signed-off-by: Coly Li <colyli@suse.de>
      Reviewed-by: Hannes Reinecke <hare@suse.com>
      Reviewed-by: Michael Lyle <mlyle@lyle.org>
      Cc: Michael Lyle <mlyle@lyle.org>
      Cc: Junhui Tang <tang.junhui@zte.com.cn>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      c7b7bd07