Commit f809fef1 authored by Jens Axboe, committed by Joseph Qi

io_uring: fix sporadic double CQE entry for close

to #26323588

commit 1a417f4e618e05fba29ba222f1e8555c302376ce upstream.

We punt close to async for the final fput(), but even in that case we
log the completion before the fput() happens. We rely on the request
not having a files table assigned to detect what the final async close
should do.
However, if we punt the async queue to __io_queue_sqe(), we'll get
->files assigned and this makes io_close_finish() think it should both
close the filp again (which does no harm) AND log a new CQE event for
this request. This causes duplicate CQEs.

Queue the request up for async manually so we don't grab files
needlessly and trigger this condition.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
Acked-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
Parent e0280e19
@@ -2840,16 +2840,13 @@ static void io_close_finish(struct io_wq_work **workptr)
 		int ret;
 
 		ret = filp_close(req->close.put_file, req->work.files);
-		if (ret < 0) {
+		if (ret < 0)
 			req_set_fail_links(req);
-		}
 		io_cqring_add_event(req, ret);
 	}
 
 	fput(req->close.put_file);
 
-	/* we bypassed the re-issue, drop the submission reference */
-	io_put_req(req);
 	io_put_req_find_next(req, &nxt);
 	if (nxt)
 		io_wq_assign_next(workptr, nxt);
@@ -2891,7 +2888,13 @@ static int io_close(struct io_kiocb *req, struct io_kiocb **nxt,
 eagain:
 	req->work.func = io_close_finish;
-	return -EAGAIN;
+	/*
+	 * Do manual async queue here to avoid grabbing files - we don't
+	 * need the files, and it'll cause io_close_finish() to close
+	 * the file again and cause a double CQE entry for this request
+	 */
+	io_queue_async_work(req);
+	return 0;
 }
 
 static int io_prep_sfr(struct io_kiocb *req, const struct io_uring_sqe *sqe)