Commit 4a07723f authored by Pavel Begunkov, committed by Jens Axboe

io_uring: limit the number of cancellation buckets

Don't allocate too many hash/cancellation buckets: clamp the count to 8 bits,
i.e. 256 buckets * 64B = 16KB. We don't usually have that many requests, and
256 buckets should be enough, especially since the hash search is only done
in the cancellation path.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/b9620c8072ba61a2d50eba894b89bd93a94a9abd.1655371007.git.asml.silence@gmail.com
Reviewed-by: Hao Xu <howeyxu@tencent.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Parent 4dfab8ab
...
@@ -254,12 +254,12 @@ static __cold struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
 	/*
 	 * Use 5 bits less than the max cq entries, that should give us around
-	 * 32 entries per hash list if totally full and uniformly spread.
+	 * 32 entries per hash list if totally full and uniformly spread, but
+	 * don't keep too many buckets to not overconsume memory.
 	 */
-	hash_bits = ilog2(p->cq_entries);
-	hash_bits -= 5;
-	if (hash_bits <= 0)
-		hash_bits = 1;
+	hash_bits = ilog2(p->cq_entries) - 5;
+	hash_bits = clamp(hash_bits, 1, 8);
 	ctx->cancel_hash_bits = hash_bits;
 	ctx->cancel_hash =
 		kmalloc((1U << hash_bits) * sizeof(struct io_hash_bucket),
...
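
For illustration, the following is a minimal userspace sketch (not the kernel code itself) of the sizing rule the patch introduces. ilog2_u32() and clamp_int() are hypothetical stand-ins for the kernel's ilog2() and clamp() helpers, and the 64-byte figure is the per-bucket size the commit message assumes for struct io_hash_bucket; the example CQ sizes are arbitrary.

/* Sketch of the bucket sizing introduced by this patch (userspace, assumed helpers). */
#include <stdio.h>

static int ilog2_u32(unsigned int v)		/* floor(log2(v)) for v > 0 */
{
	int bits = -1;

	while (v) {
		v >>= 1;
		bits++;
	}
	return bits;
}

static int clamp_int(int val, int lo, int hi)	/* equivalent of the kernel's clamp() */
{
	return val < lo ? lo : (val > hi ? hi : val);
}

int main(void)
{
	const unsigned int cq_sizes[] = { 16, 1024, 4096, 32768, 262144 };
	const unsigned int bucket_size = 64;	/* assumed sizeof(struct io_hash_bucket) */

	for (unsigned int i = 0; i < sizeof(cq_sizes) / sizeof(cq_sizes[0]); i++) {
		unsigned int cq_entries = cq_sizes[i];
		/* Same derivation as the patched io_ring_ctx_alloc(): 5 bits below cq size... */
		int hash_bits = ilog2_u32(cq_entries) - 5;
		/* ...now clamped to [1, 8], i.e. between 2 and 256 buckets. */
		hash_bits = clamp_int(hash_bits, 1, 8);

		printf("cq_entries=%6u -> hash_bits=%d, buckets=%u, table=%u bytes\n",
		       cq_entries, hash_bits, 1U << hash_bits,
		       (1U << hash_bits) * bucket_size);
	}
	return 0;
}

With the upper clamp in place the table tops out at 256 buckets (16KB under the assumed 64B bucket) no matter how large the CQ ring is, while the lower bound still guarantees at least two buckets for tiny rings, matching the behaviour of the old "if (hash_bits <= 0) hash_bits = 1" check.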