Commit 06ff79d5 authored by Yu Kuai, committed by Yongqiang Liu

blk-throtl: fix race in io dispatching

hulk inclusion
category: bugfix
bugzilla: 186449, https://gitee.com/openeuler/kernel/issues/I4YSPC
CVE: NA

--------------------------------

If io is throttled, it will be issued later by blk_throtl_dispatch_work_fn()
or blk_throtl_drain(), both of which fetch the io with throtl_pop_queued().
throtl_pop_queued() should be protected by 'queue_lock', as
blk_throtl_dispatch_work_fn() does. However, it is not protected in
blk_throtl_drain(), which may lead to concurrent bio_list_pop() on the same
list and may end up crashing the kernel (a userspace sketch of this race
follows the commit metadata below).

Fix the problem by protecting throtl_pop_queued() with 'queue_lock' in
blk_throtl_drain(): the throttled bios are collected into an on-stack bio
list under the lock and submitted after it is dropped (a sketch of this
pattern follows the diff).
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Jason Yan <yanaijie@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
Parent f91c9577
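
For illustration, a minimal userspace sketch of the race described above, assuming two consumers popping from one shared singly linked list where only one of them takes the lock; this roughly mirrors blk_throtl_dispatch_work_fn() (locked) versus the old blk_throtl_drain() (unlocked). The list, lock, and function names below are hypothetical and this is not kernel code.

/* race_sketch.c - hypothetical userspace analogue, not kernel code */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct node {
        struct node *next;
};

static struct node *head;                       /* plays the role of the queued bio list */
static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;  /* plays the role of queue_lock */

/* Pop one node; the caller decides whether list_lock is held. */
static struct node *list_pop(void)
{
        struct node *n = head;

        if (n)
                head = n->next;         /* two racing callers can both observe the same head */
        return n;
}

/* Analogue of blk_throtl_dispatch_work_fn(): pops under the lock. */
static void *locked_consumer(void *arg)
{
        struct node *n;

        (void)arg;
        for (;;) {
                pthread_mutex_lock(&list_lock);
                n = list_pop();
                pthread_mutex_unlock(&list_lock);
                if (!n)
                        break;
                free(n);                /* may double-free if the unlocked path got the same node */
        }
        return NULL;
}

/* Analogue of the old blk_throtl_drain(): pops without the lock (the bug). */
static void *unlocked_consumer(void *arg)
{
        struct node *n;

        (void)arg;
        while ((n = list_pop()))
                free(n);
        return NULL;
}

int main(void)
{
        pthread_t a, b;
        int i;

        for (i = 0; i < 1000000; i++) {
                struct node *n = malloc(sizeof(*n));

                n->next = head;
                head = n;
        }

        pthread_create(&a, NULL, locked_consumer, NULL);
        pthread_create(&b, NULL, unlocked_consumer, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        puts("done (under the race this can double-free or crash)");
        return 0;
}
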
@@ -2422,8 +2422,11 @@ void blk_throtl_drain(struct request_queue *q)
 	struct blkcg_gq *blkg;
 	struct cgroup_subsys_state *pos_css;
 	struct bio *bio;
+	struct bio_list bio_list_on_stack;
 	int rw;
 
+	bio_list_init(&bio_list_on_stack);
+
 	queue_lockdep_assert_held(q);
 	rcu_read_lock();
 
@@ -2440,12 +2443,16 @@ void blk_throtl_drain(struct request_queue *q)
 	tg_drain_bios(&td->service_queue);
 
 	rcu_read_unlock();
-	spin_unlock_irq(q->queue_lock);
 
 	/* all bios now should be in td->service_queue, issue them */
 	for (rw = READ; rw <= WRITE; rw++)
 		while ((bio = throtl_pop_queued(&td->service_queue.queued[rw],
 						NULL)))
-			generic_make_request(bio);
+			bio_list_add(&bio_list_on_stack, bio);
+	spin_unlock_irq(q->queue_lock);
+
+	if (!bio_list_empty(&bio_list_on_stack))
+		while ((bio = bio_list_pop(&bio_list_on_stack)))
+			generic_make_request(bio);
 
 	spin_lock_irq(q->queue_lock);
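
For comparison, a minimal userspace sketch of the pattern the diff applies, again with hypothetical names rather than kernel code: drain the shared list into an on-stack list while the lock is held, then process the private copy after dropping the lock (on_stack below plays the role of bio_list_on_stack, and the printf() stands in for generic_make_request()).

/* fix_sketch.c - hypothetical userspace analogue of the fixed pattern */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct node {
        struct node *next;
        int val;
};

static struct node *shared_head;                /* shared, lock-protected list */
static pthread_mutex_t shared_lock = PTHREAD_MUTEX_INITIALIZER;

static void drain_and_process(void)
{
        struct node *on_stack = NULL;           /* plays the role of bio_list_on_stack */
        struct node *n;

        /* Step 1: move every node off the shared list while holding the lock. */
        pthread_mutex_lock(&shared_lock);
        while ((n = shared_head)) {
                shared_head = n->next;
                n->next = on_stack;
                on_stack = n;
        }
        pthread_mutex_unlock(&shared_lock);

        /* Step 2: process the private copy with the lock dropped,
         * analogous to calling generic_make_request() on each popped bio.
         */
        while ((n = on_stack)) {
                on_stack = n->next;
                printf("processing %d\n", n->val);
                free(n);
        }
}

int main(void)
{
        int i;

        for (i = 0; i < 4; i++) {
                struct node *n = malloc(sizeof(*n));

                n->val = i;
                n->next = shared_head;
                shared_head = n;
        }
        drain_and_process();
        return 0;
}

Only the list manipulation moves under 'queue_lock'; the actual submission via generic_make_request() stays outside the lock, as in the code before the fix, so io is not submitted with the queue lock held.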