Commit bcb4e11e authored by Jens Axboe, committed by Cheng Jian

io_uring: fix sporadic double CQE entry for close

mainline inclusion
from mainline-5.6-rc1
commit 1a417f4e
category: feature
bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=27
CVE: NA
---------------------------

We punt close to async for the final fput(), but even in that case we log
the completion before that final fput() has run. We rely on the request not
having a files table assigned to detect what the final async close should do.
However, if we punt the async queue to __io_queue_sqe(), we'll get
->files assigned and this makes io_close_finish() think it should both
close the filp again (which does no harm) AND log a new CQE event for
this request. This causes duplicate CQEs.

Queue the request up for async manually so we don't grab files
needlessly and trigger this condition.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: yangerkun <yangerkun@huawei.com>
Reviewed-by: zhangyi (F) <yi.zhang@huawei.com>
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
Parent c810acfa
@@ -2795,16 +2795,13 @@ static void io_close_finish(struct io_wq_work **workptr)
 		int ret;
 
 		ret = filp_close(req->close.put_file, req->work.files);
-		if (ret < 0) {
+		if (ret < 0)
 			req_set_fail_links(req);
-		}
 		io_cqring_add_event(req, ret);
 	}
 
 	fput(req->close.put_file);
-	/* we bypassed the re-issue, drop the submission reference */
-	io_put_req(req);
-
 	io_put_req_find_next(req, &nxt);
 	if (nxt)
 		io_wq_assign_next(workptr, nxt);
@@ -2846,7 +2843,13 @@ static int io_close(struct io_kiocb *req, struct io_kiocb **nxt,
 
 eagain:
 	req->work.func = io_close_finish;
-	return -EAGAIN;
+	/*
+	 * Do manual async queue here to avoid grabbing files - we don't
+	 * need the files, and it'll cause io_close_finish() to close
+	 * the file again and cause a double CQE entry for this request
+	 */
+	io_queue_async_work(req);
+	return 0;
 }
 
 static int io_prep_sfr(struct io_kiocb *req, const struct io_uring_sqe *sqe)