Commit 15096078 authored by Ye Bin, committed by Zheng Zengkai

block: avoid quiesce while elevator init

hulk inclusion
category: bugfix
bugzilla: 185781 https://gitee.com/openeuler/kernel/issues/I4DDEL

-----------------------------------------------

blk_mq_quiesce_queue() in elevator_init_mq() waits for an RCU grace period to
ensure no I/O is in flight while blk_mq_init_sched() runs. This wait is paid
once per device, so with many devices it slows boot noticeably.
To address this issue, follow Ming Lei's suggestion:
"We are called before adding disk, when there isn't any FS I/O, so freezing
queue plus canceling dispatch work is enough to drain any dispatch activities
originated from passthrough requests, then no need to quiesce queue which may
add long boot latency, especially when lots of disks are involved."
Signed-off-by: Ye Bin <yebin10@huawei.com>
Reviewed-by: Jason Yan <yanaijie@huawei.com>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Parent 7c24452c
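The boot-latency effect the patch removes can be sketched with a toy model (an illustration only: the per-wait millisecond costs below are assumed, not measured, and `elevator_init_total_ms` is a hypothetical helper, not kernel code):

```python
# Toy model of the latency described above: each device probed at boot
# pays the per-device cost of elevator_init_mq() once.
GRACE_PERIOD_MS = 25   # assumed cost of one RCU grace-period wait (old quiesce path)
CANCEL_WORK_MS = 1     # assumed cost of canceling idle dispatch work (new path)
NUM_DISKS = 100

def elevator_init_total_ms(num_disks: int, per_disk_ms: int) -> int:
    """Total elevator-init latency when disks are probed sequentially."""
    return num_disks * per_disk_ms

with_quiesce = elevator_init_total_ms(NUM_DISKS, GRACE_PERIOD_MS)
without_quiesce = elevator_init_total_ms(NUM_DISKS, CANCEL_WORK_MS)
print(with_quiesce, without_quiesce)  # the gap grows linearly with disk count
```

The point is only that the grace-period wait multiplies with the number of disks, which is why avoiding it matters on systems with many devices.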
block/blk-mq.c
@@ -3990,6 +3990,19 @@ unsigned int blk_mq_rq_cpu(struct request *rq)
 }
 EXPORT_SYMBOL(blk_mq_rq_cpu);
 
+void blk_mq_cancel_work_sync(struct request_queue *q)
+{
+	if (queue_is_mq(q)) {
+		struct blk_mq_hw_ctx *hctx;
+		int i;
+
+		cancel_delayed_work_sync(&q->requeue_work);
+
+		queue_for_each_hw_ctx(q, hctx, i)
+			cancel_delayed_work_sync(&hctx->run_work);
+	}
+}
+
 static int __init blk_mq_init(void)
 {
 	int i;
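The helper's contract — on return, pending deferred work is canceled and guaranteed not to be running — can be mimicked in userspace with a hypothetical analogue (`threading.Timer` standing in for delayed work; the `HwQueue` class and its methods are invented for illustration, not kernel API):

```python
import threading

class HwQueue:
    """Userspace stand-in for a blk-mq hardware queue context."""
    def __init__(self):
        self.run_work = None   # analogue of hctx->run_work (delayed work item)
        self.dispatched = 0

    def schedule_run(self, delay_s: float):
        # Analogue of queuing delayed dispatch work.
        self.run_work = threading.Timer(delay_s, self._run)
        self.run_work.start()

    def _run(self):
        self.dispatched += 1

    def cancel_work_sync(self):
        # Analogue of cancel_delayed_work_sync(): cancel the pending work
        # and wait until it is guaranteed not to be executing.
        if self.run_work is not None:
            self.run_work.cancel()
            self.run_work.join()

q = HwQueue()
q.schedule_run(5.0)   # dispatch work queued for the future
q.cancel_work_sync()  # returns promptly; the work never runs
```

Unlike waiting out an RCU grace period, canceling an idle work item returns almost immediately, which is the source of the boot-time win.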
block/blk-mq.h
@@ -129,6 +129,7 @@ extern int blk_mq_sysfs_register(struct request_queue *q);
 extern void blk_mq_sysfs_unregister(struct request_queue *q);
 extern void blk_mq_hctx_kobj_init(struct blk_mq_hw_ctx *hctx);
 
+void blk_mq_cancel_work_sync(struct request_queue *q);
 void blk_mq_release(struct request_queue *q);
 
 static inline struct blk_mq_ctx *__blk_mq_get_ctx(struct request_queue *q,
block/elevator.c
@@ -684,12 +684,18 @@ void elevator_init_mq(struct request_queue *q)
 	if (!e)
 		return;
 
+	/*
+	 * We are called before adding disk, when there isn't any FS I/O,
+	 * so freezing queue plus canceling dispatch work is enough to
+	 * drain any dispatch activities originated from passthrough
+	 * requests, then no need to quiesce queue which may add long boot
+	 * latency, especially when lots of disks are involved.
+	 */
 	blk_mq_freeze_queue(q);
-	blk_mq_quiesce_queue(q);
+	blk_mq_cancel_work_sync(q);
 
 	err = blk_mq_init_sched(q, e);
 
-	blk_mq_unquiesce_queue(q);
 	blk_mq_unfreeze_queue(q);
 
 	if (err) {