Commit c7b28555 authored by Al Viro, committed by Linus Torvalds

aio: fix the "too late munmap()" race

Current code has put_ioctx() called asynchronously from aio_fput_routine();
that's done *after* we have killed the request that used to pin ioctx,
so there's nothing to stop io_destroy() waiting in wait_for_all_aios()
from progressing.  As a result, we can end up with the async call of
put_ioctx() being the last one, possibly happening during exit_mmap()
or elf_core_dump(), neither of which expects a stray munmap() being
done to it...

We do need to prevent _freeing_ the ioctx until aio_fput_routine() is
done with it, but that's all we care about - neither io_destroy() nor
exit_aio() will progress past wait_for_all_aios() until aio_fput_routine()
does really_put_req(), so the ioctx teardown won't be done until then
and we don't care about the contents of ioctx past that point.

Since actual freeing of these suckers is RCU-delayed, we don't need to
bump the ioctx refcount when a request goes on the list for async removal.
All we need is rcu_read_lock() held just over the ->ctx_lock-protected
area in aio_fput_routine().
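
To illustrate, here is a minimal kernel-style sketch of the lifetime
pattern the fix relies on; the type and function names (struct obj,
obj_release(), obj_touch()) are hypothetical, not the actual fs/aio.c
code.  Because the object is freed via call_rcu(), a reader that wraps
its last accesses in rcu_read_lock()/rcu_read_unlock() needs no
reference of its own - the memory cannot be reclaimed until the
read-side critical section ends:

#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/rcupdate.h>

struct obj {
	spinlock_t	lock;
	struct rcu_head	rcu_head;
};

static void obj_rcu_free(struct rcu_head *head)
{
	kfree(container_of(head, struct obj, rcu_head));
}

/* Drop the last reference: freeing is deferred past a grace period. */
static void obj_release(struct obj *o)
{
	call_rcu(&o->rcu_head, obj_rcu_free);
}

/* Reader: no refcount taken, just an RCU read-side critical section. */
static void obj_touch(struct obj *o)
{
	rcu_read_lock();
	spin_lock_irq(&o->lock);
	/* last use of *o; obj_release() may already have run */
	spin_unlock_irq(&o->lock);
	rcu_read_unlock();	/* only after this may obj_rcu_free() run */
}

This is why the patch can drop the get_ioctx()/put_ioctx() pair in the
fput path entirely: rcu_read_lock() alone keeps the ioctx memory valid
across the ->ctx_lock-protected region in aio_fput_routine().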
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
Acked-by: Benjamin LaHaise <bcrl@kvack.org>
Cc: stable@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Parent 86b62a2c
@@ -228,12 +228,6 @@ static void __put_ioctx(struct kioctx *ctx)
 	call_rcu(&ctx->rcu_head, ctx_rcu_free);
 }
 
-static inline void get_ioctx(struct kioctx *kioctx)
-{
-	BUG_ON(atomic_read(&kioctx->users) <= 0);
-	atomic_inc(&kioctx->users);
-}
-
 static inline int try_get_ioctx(struct kioctx *kioctx)
 {
 	return atomic_inc_not_zero(&kioctx->users);
@@ -609,11 +603,16 @@ static void aio_fput_routine(struct work_struct *data)
 		fput(req->ki_filp);
 
 		/* Link the iocb into the context's free list */
+		rcu_read_lock();
 		spin_lock_irq(&ctx->ctx_lock);
 		really_put_req(ctx, req);
+		/*
+		 * at that point ctx might've been killed, but actual
+		 * freeing is RCU'd
+		 */
 		spin_unlock_irq(&ctx->ctx_lock);
-		put_ioctx(ctx);
+		rcu_read_unlock();
 
 		spin_lock_irq(&fput_lock);
 	}
 	spin_unlock_irq(&fput_lock);
@@ -644,7 +643,6 @@ static int __aio_put_req(struct kioctx *ctx, struct kiocb *req)
 	 * this function will be executed w/out any aio kthread wakeup.
 	 */
 	if (unlikely(!fput_atomic(req->ki_filp))) {
-		get_ioctx(ctx);
 		spin_lock(&fput_lock);
 		list_add(&req->ki_list, &fput_head);
 		spin_unlock(&fput_lock);
...