Commit bbfc3c5d authored by Tahsin Erdogan, committed by Jens Axboe

block: queue lock must be acquired when iterating over rls

blk_set_queue_dying() does not acquire queue lock before it calls
blk_queue_for_each_rl(). This allows a racing blkg_destroy() to
remove blkg->q_node from the linked list and have
blk_queue_for_each_rl() loop infinitely over the removed blkg->q_node
list node.
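
For readers unfamiliar with the pattern, here is a minimal userspace sketch (plain C with a pthread mutex; all names are hypothetical and not part of the kernel patch) of the same class of bug and fix: a walker that iterates a list without holding the lock the remover holds can follow a stale pointer, whereas taking the lock around the traversal, as the patch does with q->queue_lock, makes it safe.

/*
 * Illustrative userspace sketch, not kernel code: the list walker must
 * hold the same lock the remover holds while unlinking nodes, otherwise
 * the walker can keep chasing a pointer of an already-removed node,
 * analogous to blk_queue_for_each_rl() racing with blkg_destroy().
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct node {
	struct node *next;
	int value;
};

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
static struct node *head;

/* Rough analogue of blkg_destroy(): unlink one node under the lock. */
static void remove_node(struct node *victim)
{
	pthread_mutex_lock(&list_lock);
	for (struct node **pp = &head; *pp; pp = &(*pp)->next) {
		if (*pp == victim) {
			*pp = victim->next;
			break;
		}
	}
	pthread_mutex_unlock(&list_lock);
	free(victim);
}

/* Rough analogue of the fixed blk_set_queue_dying(): walk under the lock. */
static void walk_nodes(void)
{
	pthread_mutex_lock(&list_lock);      /* plays the role of spin_lock_irq() */
	for (struct node *n = head; n; n = n->next)
		printf("node %d\n", n->value);
	pthread_mutex_unlock(&list_lock);    /* plays the role of spin_unlock_irq() */
}

int main(void)
{
	/* Build a tiny list: 2 -> 1 -> 0. */
	for (int i = 0; i < 3; i++) {
		struct node *n = malloc(sizeof(*n));
		n->value = i;
		n->next = head;
		head = n;
	}

	walk_nodes();
	remove_node(head);   /* runs concurrently in the real race; serialized here */
	walk_nodes();
	return 0;
}
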
Signed-off-by: Tahsin Erdogan <tahsin@google.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Parent 5fad1b64
@@ -527,12 +527,14 @@ void blk_set_queue_dying(struct request_queue *q)
 	else {
 		struct request_list *rl;
 
+		spin_lock_irq(q->queue_lock);
 		blk_queue_for_each_rl(rl, q) {
 			if (rl->rq_pool) {
 				wake_up(&rl->wait[BLK_RW_SYNC]);
 				wake_up(&rl->wait[BLK_RW_ASYNC]);
 			}
 		}
+		spin_unlock_irq(q->queue_lock);
 	}
 }
 EXPORT_SYMBOL_GPL(blk_set_queue_dying);