1. 18 March 2022, 4 commits
    • block: cancel all throttled bios in del_gendisk() · 8f9e7b65
      Yu Kuai authored
      Throttled bios can't be issued after del_gendisk() is done, so it's
      better to cancel them immediately rather than waiting for throttling
      to finish.
      
      For example, if a user thread is throttled at a low bps while issuing
      a large IO and the device is deleted, the user thread will wait a long
      time for the IO to return. A sketch of the cancellation path follows
      this entry.
      Signed-off-by: Yu Kuai <yukuai3@huawei.com>
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Link: https://lore.kernel.org/r/20220318130144.1066064-4-ming.lei@redhat.com
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      8f9e7b65
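
      A minimal sketch of the idea, simplified from the patch (the helper
      name and the THROTL_TG_CANCELING flag follow this commit, but the
      internals are condensed): mark every throttle group as canceling and
      kick its pending timer, so queued bios are dispatched immediately
      instead of waiting out their bandwidth budget.

        /* Sketch, not the verbatim patch: called from del_gendisk(). */
        void blk_throtl_cancel_bios(struct request_queue *q)
        {
                struct cgroup_subsys_state *pos_css;
                struct blkcg_gq *blkg;

                spin_lock_irq(&q->queue_lock);
                rcu_read_lock();
                blkg_for_each_descendant_post(blkg, pos_css, q->root_blkg) {
                        struct throtl_grp *tg = blkg_to_tg(blkg);

                        /* Tell the pending timer to dispatch unthrottled. */
                        tg->flags |= THROTL_TG_CANCELING;
                        throtl_schedule_pending_timer(&tg->service_queue,
                                                      jiffies + 1);
                }
                rcu_read_unlock();
                spin_unlock_irq(&q->queue_lock);
        }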
    • block: let blkcg_gq grab request queue's refcnt · 0a9a25ca
      Ming Lei authored
      Throughout the whole lifetime of a blkcg_gq instance, ->q is
      referenced: for example, ->pd_free_fn() is called in blkg_free(), and
      throtl_pd_free() may still touch the request queue via
      &tg->service_queue.pending_timer, which is handled by
      throtl_pending_timer_fn(). So it is reasonable for the blkcg_gq
      instance to grab a reference on the request queue.
      
      Previously blkcg_exit_queue() was called from blk_release_queue(),
      which made the use-after-free hard to avoid. But now that commit
      1059699f ("block: move blkcg initialization/destroy into disk
      allocation/release handler") has been merged into for-5.18/block, the
      issue becomes simple to fix by just grabbing a reference on the
      request queue (see the sketch after this entry).
      Reported-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Link: https://lore.kernel.org/r/20220318130144.1066064-3-ming.lei@redhat.com
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      0a9a25ca
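
      In outline (a sketch following the commit description, not the
      verbatim patch): blkg_alloc() takes a reference on the queue which
      blkg_free() drops, so ->q stays valid for as long as the blkg exists.

        /* Sketch: pin the request queue for the blkg's lifetime. */
        static struct blkcg_gq *blkg_alloc(struct blkcg *blkcg,
                                           struct request_queue *q,
                                           gfp_t gfp_mask)
        {
                struct blkcg_gq *blkg;

                blkg = kzalloc_node(sizeof(*blkg), gfp_mask, q->node);
                if (!blkg)
                        return NULL;
                if (!blk_get_queue(q))          /* grab q->refcnt */
                        goto err_free;
                blkg->q = q;
                /* ... per-policy data is allocated as before ... */
                return blkg;

        err_free:
                kfree(blkg);
                return NULL;
        }

        /* ... and blkg_free() ends with blk_put_queue(blkg->q). */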
    • block: avoid use-after-free on throttle data · ee37eddb
      Ming Lei authored
      In throtl_pending_timer_fn(), the request queue is retrieved from the
      throttle data. The tg's pending timer is deleted synchronously when
      the associated blkg is released, and by that time the throttle data
      may already have been freed, since commit 1059699f ("block: move
      blkcg initialization/destroy into disk allocation/release handler")
      moved the freeing of q->td from blk_release_queue() to
      disk_release(). A use-after-free on q->td can therefore be triggered
      in throtl_pending_timer_fn().
      
      Fix the issue by:

      - doing nothing when the disk has been released and there is no bio
        to dispatch;

      - retrieving the request queue from the blkg instead of from the
        throttle data for non-top-level pending timers (see the sketch
        after this entry).
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Link: https://lore.kernel.org/r/20220318130144.1066064-2-ming.lei@redhat.com
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      ee37eddb
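
      A sketch of the reworked timer, simplified from the patch (helper
      names as in blk-throttle.c): derive the queue from the blkg when the
      timer belongs to a throttle group, and return early once the disk is
      gone and the blkgs have been destroyed.

        static void throtl_pending_timer_fn(struct timer_list *t)
        {
                struct throtl_service_queue *sq = from_timer(sq, t,
                                                             pending_timer);
                struct throtl_grp *tg = sq_to_tg(sq);
                struct request_queue *q;

                /*
                 * q->td may already be freed for a non-top-level timer, so
                 * take the queue from the blkg, not from the throttle data.
                 */
                if (tg)
                        q = tg->pd.blkg->q;
                else
                        q = sq_to_td(sq)->queue;

                spin_lock_irq(&q->queue_lock);
                /* Disk released and nothing to dispatch: do nothing. */
                if (!q->root_blkg)
                        goto out_unlock;

                /* ... dispatch throttled bios as before ... */
        out_unlock:
                spin_unlock_irq(&q->queue_lock);
        }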
    • block: limit request dispatch loop duration · 572299f0
      Shin'ichiro Kawasaki authored
      When IO requests are made continuously and the target block device
      handles requests faster than they arrive, the request dispatch loop
      keeps repeating for a very long time, more than a minute, to dispatch
      the arriving requests. Since the loop runs as a workqueue worker
      task, the very long loop duration triggers the workqueue watchdog
      timeout and a BUG [1].
      
      To avoid such long loop durations, break the loop periodically. While
      opportunities to dispatch requests still exist, check need_resched();
      if it returns true, the dispatch loop has already consumed its time
      slice, so reschedule the dispatch work and break the loop. Under
      heavy IO load, need_resched() may not return true for 20-30 seconds.
      To cover that case, also track the time spent in the dispatch loop
      with jiffies: if more than one second has elapsed, reschedule the
      dispatch work and break the loop. A sketch of the bounded loop
      follows this entry.
      
      [1]
      
      [  609.691437] BUG: workqueue lockup - pool cpus=10 node=1 flags=0x0 nice=-20 stuck for 35s!
      [  609.701820] Showing busy workqueues and worker pools:
      [  609.707915] workqueue events: flags=0x0
      [  609.712615]   pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
      [  609.712626]     pending: drm_fb_helper_damage_work [drm_kms_helper]
      [  609.712687] workqueue events_freezable: flags=0x4
      [  609.732943]   pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
      [  609.732952]     pending: pci_pme_list_scan
      [  609.732968] workqueue events_power_efficient: flags=0x80
      [  609.751947]   pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
      [  609.751955]     pending: neigh_managed_work
      [  609.752018] workqueue kblockd: flags=0x18
      [  609.769480]   pwq 21: cpus=10 node=1 flags=0x0 nice=-20 active=3/256 refcnt=4
      [  609.769488]     in-flight: 1020:blk_mq_run_work_fn
      [  609.769498]     pending: blk_mq_timeout_work, blk_mq_run_work_fn
      [  609.769744] pool 21: cpus=10 node=1 flags=0x0 nice=-20 hung=35s workers=2 idle: 67
      [  639.899730] BUG: workqueue lockup - pool cpus=10 node=1 flags=0x0 nice=-20 stuck for 66s!
      [  639.909513] Showing busy workqueues and worker pools:
      [  639.915404] workqueue events: flags=0x0
      [  639.920197]   pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
      [  639.920215]     pending: drm_fb_helper_damage_work [drm_kms_helper]
      [  639.920365] workqueue kblockd: flags=0x18
      [  639.939932]   pwq 21: cpus=10 node=1 flags=0x0 nice=-20 active=3/256 refcnt=4
      [  639.939942]     in-flight: 1020:blk_mq_run_work_fn
      [  639.939955]     pending: blk_mq_timeout_work, blk_mq_run_work_fn
      [  639.940212] pool 21: cpus=10 node=1 flags=0x0 nice=-20 hung=66s workers=2 idle: 67
      
      Fixes: 6e6fcbc2 ("blk-mq: support batching dispatch in case of io")
      Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
      Cc: stable@vger.kernel.org # v5.10+
      Link: https://lore.kernel.org/linux-block/20220310091649.zypaem5lkyfadymg@shindev/
      Link: https://lore.kernel.org/r/20220318022641.133484-1-shinichiro.kawasaki@wdc.com
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      572299f0
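
      A sketch of the bounded loop (close to the patch, though details may
      differ): keep dispatching while progress is being made, but once the
      worker needs rescheduling or one second has elapsed, requeue the
      dispatch work and return.

        static int blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
        {
                unsigned long end = jiffies + HZ;       /* 1 second budget */
                int ret;

                do {
                        ret = __blk_mq_do_dispatch_sched(hctx);
                        if (ret != 1)
                                break;          /* no more progress */
                        if (need_resched() || time_is_before_jiffies(end)) {
                                /* Yield: requeue the dispatch work. */
                                blk_mq_delay_run_hw_queue(hctx, 0);
                                break;
                        }
                } while (1);

                return ret;
        }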
  2. 16 March 2022, 2 commits
  3. 15 March 2022, 2 commits
    • block: don't merge across cgroup boundaries if blkcg is enabled · 6b2b0459
      Tejun Heo authored
      blk-iocost and iolatency are cgroup-aware rq-qos policies, but they
      did not disable merges across different cgroups. This can obviously
      lead to accounting and control errors, but more importantly to
      priority inversions: e.g. an IO which belongs to a higher-priority
      cgroup or IO class may end up being throttled incorrectly because it
      gets merged into an IO issued from a low-priority cgroup.
      
      Fix it by adding blk_cgroup_mergeable(), which is called from the
      merge paths and rejects cross-cgroup and cross-issue_as_root merges
      (a sketch follows this entry).
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Fixes: d7067512 ("block: introduce blk-iolatency io controller")
      Cc: stable@vger.kernel.org # v4.19+
      Cc: Josef Bacik <jbacik@fb.com>
      Link: https://lore.kernel.org/r/Yi/eE/6zFNyWJ+qd@slm.duckdns.org
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      6b2b0459
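
      A sketch of the helper (assuming the bi_blkg pointer and the
      bio_issue_as_root_blkg() accessor from blk-cgroup.h): two IOs are
      mergeable only if they belong to the same blkg and agree on
      issue_as_root.

        /* Sketch: reject cross-cgroup and cross-issue_as_root merges. */
        static inline bool blk_cgroup_mergeable(struct request *rq,
                                                struct bio *bio)
        {
                return rq->bio->bi_blkg == bio->bi_blkg &&
                        bio_issue_as_root_blkg(rq->bio) ==
                        bio_issue_as_root_blkg(bio);
        }

      The merge paths then refuse the merge whenever this returns false.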
    • block: fix rq-qos breakage from skipping rq_qos_done_bio() · aa1b46dc
      Tejun Heo authored
      a647a524 ("block: don't call rq_qos_ops->done_bio if the bio isn't
      tracked") made bio_endio() skip rq_qos_done_bio() if BIO_TRACKED is not set.
      While this fixed a potential oops, it also broke blk-iocost by skipping the
      done_bio callback for merged bios.
      
      Before, whether a bio went through rq_qos_throttle() or
      rq_qos_merge(), rq_qos_done_bio() would be called on it at
      completion, with BIO_TRACKED distinguishing the former from the
      latter. After that commit, rq_qos_done_bio() is no longer called for
      bios which went through rq_qos_merge(). This royally confuses
      blk-iocost, as the merged bios never finish and are considered
      perpetually in-flight.
      
      One reliably reproducible failure mode is an intermediate cgroup
      getting stuck active, preventing its children from being activated
      due to the leaf-only rule, leading to loss of control. The following
      is from the resctl-bench protection scenario, which emulates
      isolating a web-server-like workload from a memory bomb, run on an
      iocost configuration that should yield a reasonable level of
      protection.
      
        # cat /sys/block/nvme2n1/device/model
        Samsung SSD 970 PRO 512GB
        # cat /sys/fs/cgroup/io.cost.model
        259:0 ctrl=user model=linear rbps=834913556 rseqiops=93622 rrandiops=102913 wbps=618985353 wseqiops=72325 wrandiops=71025
        # cat /sys/fs/cgroup/io.cost.qos
        259:0 enable=1 ctrl=user rpct=95.00 rlat=18776 wpct=95.00 wlat=8897 min=60.00 max=100.00
        # resctl-bench -m 29.6G -r out.json run protection::scenario=mem-hog,loops=1
        ...
        Memory Hog Summary
        ==================
      
        IO Latency: R p50=242u:336u/2.5m p90=794u:1.4m/7.5m p99=2.7m:8.0m/62.5m max=8.0m:36.4m/350m
                    W p50=221u:323u/1.5m p90=709u:1.2m/5.5m p99=1.5m:2.5m/9.5m max=6.9m:35.9m/350m
      
        Isolation and Request Latency Impact Distributions:
      
                      min   p01   p05   p10   p25   p50   p75   p90   p95   p99   max  mean stdev
        isol%       15.90 15.90 15.90 40.05 57.24 59.07 60.01 74.63 74.63 90.35 90.35 58.12 15.82
        lat-imp%        0     0     0     0     0  4.55 14.68 15.54 233.5 548.1 548.1 53.88 143.6
      
        Result: isol=58.12:15.82% lat_imp=53.88%:143.6 work_csv=100.0% missing=3.96%
      
      The isolation result of 58.12% is close to what this device would show
      without any IO control.
      
      Fix it by introducing a new flag, BIO_QOS_MERGED, to mark merged
      bios, and by calling rq_qos_done_bio() on them too. For consistency
      and clarity, rename BIO_TRACKED to BIO_QOS_THROTTLED. The flag checks
      are moved into rq_qos_done_bio() so that they sit next to the code
      paths that set the flags (see the sketch after this entry).
      
      With the patch applied, the same benchmark shows:
      
        # resctl-bench -m 29.6G -r out.json run protection::scenario=mem-hog,loops=1
        ...
        Memory Hog Summary
        ==================
      
        IO Latency: R p50=123u:84.4u/985u p90=322u:256u/2.5m p99=1.6m:1.4m/9.5m max=11.1m:36.0m/350m
                    W p50=429u:274u/995u p90=1.7m:1.3m/4.5m p99=3.4m:2.7m/11.5m max=7.9m:5.9m/26.5m
      
        Isolation and Request Latency Impact Distributions:
      
                      min   p01   p05   p10   p25   p50   p75   p90   p95   p99   max  mean stdev
        isol%       84.91 84.91 89.51 90.73 92.31 94.49 96.36 98.04 98.71 100.0 100.0 94.42  2.81
        lat-imp%        0     0     0     0     0  2.81  5.73 11.11 13.92 17.53 22.61  4.10  4.68
      
        Result: isol=94.42:2.81% lat_imp=4.10%:4.68 work_csv=58.34% missing=0%
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Fixes: a647a524 ("block: don't call rq_qos_ops->done_bio if the bio isn't tracked")
      Cc: stable@vger.kernel.org # v5.15+
      Cc: Ming Lei <ming.lei@redhat.com>
      Cc: Yu Kuai <yukuai3@huawei.com>
      Reviewed-by: Ming Lei <ming.lei@redhat.com>
      Link: https://lore.kernel.org/r/Yi7rdrzQEHjJLGKB@slm.duckdns.org
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      aa1b46dc
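
      A sketch of the reworked completion hook (simplified):
      rq_qos_throttle() sets BIO_QOS_THROTTLED, rq_qos_merge() sets
      BIO_QOS_MERGED, and both now funnel through the done path.

        /* Sketch: both flags route the bio back through rq-qos. */
        static inline void rq_qos_done_bio(struct bio *bio)
        {
                if (bio->bi_bdev &&
                    (bio_flagged(bio, BIO_QOS_THROTTLED) ||
                     bio_flagged(bio, BIO_QOS_MERGED))) {
                        struct request_queue *q = bdev_get_queue(bio->bi_bdev);

                        if (q->rq_qos)
                                __rq_qos_done_bio(q->rq_qos, bio);
                }
        }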
  4. 12 March 2022, 2 commits
  5. 09 March 2022, 21 commits
  6. 07 March 2022, 9 commits