    aio: percpu reqs_available · e1bdd5f2
    Committed by Kent Overstreet
    See the previous patch ("aio: reqs_active -> reqs_available") for why we
    want to do this - this basically implements a per cpu allocator for
    reqs_available that doesn't actually allocate anything.
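    
    A minimal sketch of that scheme, in kernel style (the struct, field and
    helper names below - kioctx_cpu, req_batch, get_reqs_available(),
    put_reqs_available() - are illustrative for this sketch, not quoted from
    the patch text): each CPU keeps a small private cache of free request
    slots and only touches the shared atomic counter in batches, so the fast
    path never contends on a shared cacheline.
    
    #include <linux/atomic.h>
    #include <linux/percpu.h>
    #include <linux/preempt.h>
    
    struct kioctx_cpu {
            unsigned        reqs_available;     /* slots cached on this CPU */
    };
    
    struct kioctx {
            atomic_t        reqs_available;     /* shared pool of free slots */
            unsigned        req_batch;          /* slots moved per refill/drain */
            struct kioctx_cpu __percpu *cpu;    /* from alloc_percpu() */
            /* ... */
    };
    
    /* Return nr completed slots; spill back to the shared pool in batches. */
    static void put_reqs_available(struct kioctx *ctx, unsigned nr)
    {
            struct kioctx_cpu *kcpu;
    
            preempt_disable();
            kcpu = this_cpu_ptr(ctx->cpu);
    
            kcpu->reqs_available += nr;
            while (kcpu->reqs_available >= ctx->req_batch * 2) {
                    kcpu->reqs_available -= ctx->req_batch;
                    atomic_add(ctx->req_batch, &ctx->reqs_available);
            }
            preempt_enable();
    }
    
    /* Take one slot, refilling the local cache from the shared pool if empty. */
    static bool get_reqs_available(struct kioctx *ctx)
    {
            struct kioctx_cpu *kcpu;
            bool ret = false;
    
            preempt_disable();
            kcpu = this_cpu_ptr(ctx->cpu);
    
            if (!kcpu->reqs_available) {
                    int old, avail = atomic_read(&ctx->reqs_available);
    
                    do {
                            if (avail < ctx->req_batch)
                                    goto out;
    
                            old = avail;
                            avail = atomic_cmpxchg(&ctx->reqs_available,
                                                   avail, avail - ctx->req_batch);
                    } while (avail != old);
    
                    kcpu->reqs_available += ctx->req_batch;
            }
    
            ret = true;
            kcpu->reqs_available--;
    out:
            preempt_enable();
            return ret;
    }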
    
    Note that we need to increase the size of the ringbuffer we allocate,
    since a single thread won't necessarily be able to use all the
    reqs_available slots - some (up to about half) might be on other per cpu
    lists, unavailable for the current thread.
    
    We size the ringbuffer based on the nr_events userspace passed to
    io_setup(), so this is a slight behaviour change - but nr_events wasn't
    being used as a hard limit before; it was already being rounded up to the
    next page, so this doesn't change the actual semantics.
    Signed-off-by: Kent Overstreet <koverstreet@google.com>
    Cc: Zach Brown <zab@redhat.com>
    Cc: Felipe Balbi <balbi@ti.com>
    Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Cc: Mark Fasheh <mfasheh@suse.com>
    Cc: Joel Becker <jlbec@evilplan.org>
    Cc: Rusty Russell <rusty@rustcorp.com.au>
    Cc: Jens Axboe <axboe@kernel.dk>
    Cc: Asai Thambi S P <asamymuthupa@micron.com>
    Cc: Selvan Mani <smani@micron.com>
    Cc: Sam Bradshaw <sbradshaw@micron.com>
    Cc: Jeff Moyer <jmoyer@redhat.com>
    Cc: Al Viro <viro@zeniv.linux.org.uk>
    Cc: Benjamin LaHaise <bcrl@kvack.org>
    Reviewed-by: N"Theodore Ts'o" <tytso@mit.edu>
    Signed-off-by: NAndrew Morton <akpm@linux-foundation.org>
    Signed-off-by: NBenjamin LaHaise <bcrl@kvack.org>