1. 17 January 2020, 40 commits
    • arch: add io_uring syscalls everywhere · cb6fa366
      Authored by Arnd Bergmann
      Cherry-pick from commit 39036cd2727395c3369b1051005da74059a85317
      upstream.
      
      Add the io_uring system calls to all architectures.
      
      These system calls are designed to handle both native and compat tasks,
      so all entries are the same across architectures; only arm-compat and
      the generic table still use an old format.
      
      Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
      Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com> (s390)
      Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
      Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
      Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
    • io_uring: drop io_file_put() 'file' argument · 52460705
      Authored by Jens Axboe
      commit 3d6770fbd9353988839611bab107e4e891506aad upstream.
      
      Since the fget/fput handling was reworked in commit 09bb839434bd, we
      never call io_file_put() with state == NULL (and hence file != NULL)
      anymore. Remove that case.
      Reported-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
      Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
      Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
    • io_uring: only test SQPOLL cpu after we've verified it · e738eb4e
      Authored by Jens Axboe
      commit 917257daa0fea7a007102691c0e27d9216a96768 upstream.
      
      We currently call cpu_possible() even if we don't use the CPU. Move the
      test under the SQ_AFF branch, which is the only place where we'll use
      the value. Do the cpu_possible() test AFTER we've limited it to a max
      of NR_CPUS. This avoids triggering the following warning:
      
      WARNING: CPU: 1 PID: 7600 at include/linux/cpumask.h:121 cpu_max_bits_warn
      
      if CONFIG_DEBUG_PER_CPU_MAPS is enabled.
      
      While in there, also move the SQ thread idle period assignment inside
      SETUP_SQPOLL, as we don't use it otherwise either.
      
      Reported-by: syzbot+cd714a07c6de2bc34293@syzkaller.appspotmail.com
      Fixes: 6c271ce2f1d5 ("io_uring: add submission polling")
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
      Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
      Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
    • io_uring: park SQPOLL thread if it's percpu · 35fea020
      Authored by Jens Axboe
      commit 06058632464845abb1af91521122fd04dd3daaec upstream.
      
      kthread expects this, or we can throw a warning on exit:
      
      WARNING: CPU: 0 PID: 7822 at kernel/kthread.c:399
      __kthread_bind_mask+0x3b/0xc0 kernel/kthread.c:399
      Kernel panic - not syncing: panic_on_warn set ...
      CPU: 0 PID: 7822 Comm: syz-executor030 Not tainted 5.1.0-rc4-next-20190412
      Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
      Google 01/01/2011
      Call Trace:
        __dump_stack lib/dump_stack.c:77 [inline]
        dump_stack+0x172/0x1f0 lib/dump_stack.c:113
        panic+0x2cb/0x72b kernel/panic.c:214
        __warn.cold+0x20/0x46 kernel/panic.c:576
        report_bug+0x263/0x2b0 lib/bug.c:186
        fixup_bug arch/x86/kernel/traps.c:179 [inline]
        fixup_bug arch/x86/kernel/traps.c:174 [inline]
        do_error_trap+0x11b/0x200 arch/x86/kernel/traps.c:272
        do_invalid_op+0x37/0x50 arch/x86/kernel/traps.c:291
        invalid_op+0x14/0x20 arch/x86/entry/entry_64.S:973
      RIP: 0010:__kthread_bind_mask+0x3b/0xc0 kernel/kthread.c:399
      Code: 48 89 fb e8 f7 ab 24 00 4c 89 e6 48 89 df e8 ac e1 02 00 31 ff 49 89
      c4 48 89 c6 e8 7f ad 24 00 4d 85 e4 75 15 e8 d5 ab 24 00 <0f> 0b e8 ce ab
      24 00 5b 41 5c 41 5d 41 5e 5d c3 e8 c0 ab 24 00 4c
      RSP: 0018:ffff8880a89bfbb8 EFLAGS: 00010293
      RAX: ffff88808ca7a280 RBX: ffff8880a98e4380 RCX: ffffffff814bdd11
      RDX: 0000000000000000 RSI: ffffffff814bdd1b RDI: 0000000000000007
      RBP: ffff8880a89bfbd8 R08: ffff88808ca7a280 R09: 0000000000000000
      R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
      R13: ffffffff87691148 R14: ffff8880a98e43a0 R15: ffffffff81c91e10
        __kthread_bind kernel/kthread.c:412 [inline]
        kthread_unpark+0x123/0x160 kernel/kthread.c:480
        kthread_stop+0xfa/0x6c0 kernel/kthread.c:556
        io_sq_thread_stop fs/io_uring.c:2057 [inline]
        io_sq_thread_stop fs/io_uring.c:2052 [inline]
        io_finish_async+0xab/0x180 fs/io_uring.c:2064
        io_ring_ctx_free fs/io_uring.c:2534 [inline]
        io_ring_ctx_wait_and_kill+0x133/0x510 fs/io_uring.c:2591
        io_uring_release+0x42/0x50 fs/io_uring.c:2599
        __fput+0x2e5/0x8d0 fs/file_table.c:278
        ____fput+0x16/0x20 fs/file_table.c:309
        task_work_run+0x14a/0x1c0 kernel/task_work.c:113
        exit_task_work include/linux/task_work.h:22 [inline]
        do_exit+0x90a/0x2fa0 kernel/exit.c:876
        do_group_exit+0x135/0x370 kernel/exit.c:980
        __do_sys_exit_group kernel/exit.c:991 [inline]
        __se_sys_exit_group kernel/exit.c:989 [inline]
        __x64_sys_exit_group+0x44/0x50 kernel/exit.c:989
        do_syscall_64+0x103/0x610 arch/x86/entry/common.c:290
        entry_SYSCALL_64_after_hwframe+0x49/0xbe
      
      Reported-by: syzbot+6d4a92619eb0ad08602b@syzkaller.appspotmail.com
      Fixes: 6c271ce2f1d5 ("io_uring: add submission polling")
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
      Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
      Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
    • io_uring: restrict IORING_SETUP_SQPOLL to root · 29745a0d
      Authored by Jens Axboe
      commit 3ec482d15cb986bf08b923f9193eeddb3b9ca69f upstream.
      
      This option spawns a kernel side thread that will poll for submissions
      (and completions, if IORING_SETUP_IOPOLL is set). As this allows a user
      to potentially use more cycles outside of the normal hierarchy,
      restrict the use of this feature to root.
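      
      A minimal sketch of the check (userspace model: capable(CAP_SYS_ADMIN)
      is replaced by a plain flag and EPERM by its numeric value; the
      IORING_SETUP_* bits match the uapi values):

```c
#include <stdbool.h>

#define IORING_SETUP_IOPOLL (1U << 0)
#define IORING_SETUP_SQPOLL (1U << 1)
#define MY_EPERM 1

/* gate SQPOLL on privilege: the polling kthread consumes cycles
 * outside the submitting task's normal accounting */
static int check_setup_flags(unsigned int flags, bool cap_sys_admin)
{
    if ((flags & IORING_SETUP_SQPOLL) && !cap_sys_admin)
        return -MY_EPERM;
    return 0;
}
```
      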
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
      Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
      Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
    • tools/io_uring: remove IOCQE_FLAG_CACHEHIT · 204fcca3
      Authored by Jens Axboe
      commit 704236672edacf353c362bab70c3d3eda7bb4a51 upstream.
      
      This ended up not being included in the mainline version of io_uring,
      so drop it from the test app as well.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
      Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
      Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
    • io_uring: fix double free in case of fileset registration failure · b7e56f26
      Authored by Jens Axboe
      commit 25adf50fe25d506d3fc12070a5ff4be858a1ac1b upstream.
      
      Will Deacon reported the following KASAN complaint:
      
      [  149.890370] ==================================================================
      [  149.891266] BUG: KASAN: double-free or invalid-free in io_sqe_files_unregister+0xa8/0x140
      [  149.892218]
      [  149.892411] CPU: 113 PID: 3974 Comm: io_uring_regist Tainted: G    B             5.1.0-rc3-00012-g40b114779944 #3
      [  149.893623] Hardware name: linux,dummy-virt (DT)
      [  149.894169] Call trace:
      [  149.894539]  dump_backtrace+0x0/0x228
      [  149.895172]  show_stack+0x14/0x20
      [  149.895747]  dump_stack+0xe8/0x124
      [  149.896335]  print_address_description+0x60/0x258
      [  149.897148]  kasan_report_invalid_free+0x78/0xb8
      [  149.897936]  __kasan_slab_free+0x1fc/0x228
      [  149.898641]  kasan_slab_free+0x10/0x18
      [  149.899283]  kfree+0x70/0x1f8
      [  149.899798]  io_sqe_files_unregister+0xa8/0x140
      [  149.900574]  io_ring_ctx_wait_and_kill+0x190/0x3c0
      [  149.901402]  io_uring_release+0x2c/0x48
      [  149.902068]  __fput+0x18c/0x510
      [  149.902612]  ____fput+0xc/0x18
      [  149.903146]  task_work_run+0xf0/0x148
      [  149.903778]  do_notify_resume+0x554/0x748
      [  149.904467]  work_pending+0x8/0x10
      [  149.905060]
      [  149.905331] Allocated by task 3974:
      [  149.905934]  __kasan_kmalloc.isra.0.part.1+0x48/0xf8
      [  149.906786]  __kasan_kmalloc.isra.0+0xb8/0xd8
      [  149.907531]  kasan_kmalloc+0xc/0x18
      [  149.908134]  __kmalloc+0x168/0x248
      [  149.908724]  __arm64_sys_io_uring_register+0x2b8/0x15a8
      [  149.909622]  el0_svc_common+0x100/0x258
      [  149.910281]  el0_svc_handler+0x48/0xc0
      [  149.910928]  el0_svc+0x8/0xc
      [  149.911425]
      [  149.911696] Freed by task 3974:
      [  149.912242]  __kasan_slab_free+0x114/0x228
      [  149.912955]  kasan_slab_free+0x10/0x18
      [  149.913602]  kfree+0x70/0x1f8
      [  149.914118]  __arm64_sys_io_uring_register+0xc2c/0x15a8
      [  149.915009]  el0_svc_common+0x100/0x258
      [  149.915670]  el0_svc_handler+0x48/0xc0
      [  149.916317]  el0_svc+0x8/0xc
      [  149.916817]
      [  149.917101] The buggy address belongs to the object at ffff8004ce07ed00
      [  149.917101]  which belongs to the cache kmalloc-128 of size 128
      [  149.919197] The buggy address is located 0 bytes inside of
      [  149.919197]  128-byte region [ffff8004ce07ed00, ffff8004ce07ed80)
      [  149.921142] The buggy address belongs to the page:
      [  149.921953] page:ffff7e0013381f00 count:1 mapcount:0 mapping:ffff800503417c00 index:0x0 compound_mapcount: 0
      [  149.923595] flags: 0x1ffff00000010200(slab|head)
      [  149.924388] raw: 1ffff00000010200 dead000000000100 dead000000000200 ffff800503417c00
      [  149.925706] raw: 0000000000000000 0000000080400040 00000001ffffffff 0000000000000000
      [  149.927011] page dumped because: kasan: bad access detected
      [  149.927956]
      [  149.928224] Memory state around the buggy address:
      [  149.929054]  ffff8004ce07ec00: 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc fc
      [  149.930274]  ffff8004ce07ec80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
      [  149.931494] >ffff8004ce07ed00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
      [  149.932712]                    ^
      [  149.933281]  ffff8004ce07ed80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
      [  149.934508]  ffff8004ce07ee00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
      [  149.935725] ==================================================================
      
      which is due to a failure in registering a fileset. This frees the
      ctx->user_files pointer, but doesn't clear it. When the io_uring
      instance is later freed through the normal channels, we free this
      pointer again. At this point it's invalid.
      
      Ensure we clear the pointer when we free it for the error case.
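      
      The pattern behind the fix is the classic free-and-clear; a userspace
      model (struct and field names borrowed from the report, free() standing
      in for kfree()):

```c
#include <stdlib.h>

struct ctx_sketch {
    void *user_files;
};

/* error-path teardown: clearing the pointer after freeing makes the
 * later, normal teardown's free a harmless free(NULL) */
static void files_unregister(struct ctx_sketch *ctx)
{
    free(ctx->user_files);
    ctx->user_files = NULL;   /* the one-line fix */
}
```

      Calling files_unregister() twice is now safe; without the clear, the
      second call double-frees exactly as KASAN reported.
      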
      Reported-by: Will Deacon <will.deacon@arm.com>
      Tested-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
      Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
      Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
    • tools headers: Update x86's syscall_64.tbl and uapi/asm-generic/unistd · 00400e96
      Authored by Arnaldo Carvalho de Melo
      commit 8142bd82a59e452fefea7b21113101d6a87d9fa8 upstream.
      
      To pick up the changes introduced in the following csets:
      
        2b188cc1bb85 ("Add io_uring IO interface")
        edafccee56ff ("io_uring: add support for pre-mapped user IO buffers")
        3eb39f47934f ("signal: add pidfd_send_signal() syscall")
      
      This makes 'perf trace' aware of these new syscalls, so that one can
      use them like 'perf trace -e io_uring*,*signal' to do a system wide
      strace-like session looking at those syscalls, for instance.
      
      For example:
      
        # perf trace -s io_uring-cp ~acme/isos/RHEL-x86_64-dvd1.iso ~/bla
      
         Summary of events:
      
         io_uring-cp (383), 1208866 events, 100.0%
      
           syscall         calls   total    min     avg     max   stddev
                                   (msec) (msec)  (msec)  (msec)     (%)
           -------------- ------ -------- ------ ------- -------  ------
           io_uring_enter 605780 2955.615  0.000   0.005  33.804   1.94%
           openat              4  459.446  0.004 114.861 459.435 100.00%
           munmap              4    0.073  0.009   0.018   0.042  44.03%
           mmap               10    0.054  0.002   0.005   0.026  43.24%
           brk                28    0.038  0.001   0.001   0.003   7.51%
           io_uring_setup      1    0.030  0.030   0.030   0.030   0.00%
           mprotect            4    0.014  0.002   0.004   0.005  14.32%
           close               5    0.012  0.001   0.002   0.004  28.87%
           fstat               3    0.006  0.001   0.002   0.003  35.83%
           read                4    0.004  0.001   0.001   0.002  13.58%
           access              1    0.003  0.003   0.003   0.003   0.00%
           lseek               3    0.002  0.001   0.001   0.001   9.00%
           arch_prctl          2    0.002  0.001   0.001   0.001   0.69%
           execve              1    0.000  0.000   0.000   0.000   0.00%
        #
        # perf trace -e io_uring* -s io_uring-cp ~acme/isos/RHEL-x86_64-dvd1.iso ~/bla
      
         Summary of events:
      
         io_uring-cp (390), 1191250 events, 100.0%
      
           syscall         calls   total    min    avg    max  stddev
                                   (msec) (msec) (msec) (msec)    (%)
           -------------- ------ -------- ------ ------ ------ ------
           io_uring_enter 597093 2706.060  0.001  0.005 14.761  1.10%
           io_uring_setup      1    0.038  0.038  0.038  0.038  0.00%
        #
      
      More work is needed to make the tools/perf/examples/bpf/augmented_raw_syscalls.c
      BPF program copy the 'struct io_uring_params' arguments to perf's ring
      buffer, so that 'perf trace' can use the BTF info put in place by pahole's
      conversion of the kernel DWARF and then auto-beautify those arguments.
      
      This patch produces the expected change in the generated syscalls table
      for x86_64:
      
        --- /tmp/build/perf/arch/x86/include/generated/asm/syscalls_64.c.before	2019-03-26 13:37:46.679057774 -0300
        +++ /tmp/build/perf/arch/x86/include/generated/asm/syscalls_64.c	2019-03-26 13:38:12.755990383 -0300
        @@ -334,5 +334,9 @@ static const char *syscalltbl_x86_64[] =
         	[332] = "statx",
         	[333] = "io_pgetevents",
         	[334] = "rseq",
        +	[424] = "pidfd_send_signal",
        +	[425] = "io_uring_setup",
        +	[426] = "io_uring_enter",
        +	[427] = "io_uring_register",
         };
        -#define SYSCALLTBL_x86_64_MAX_ID 334
        +#define SYSCALLTBL_x86_64_MAX_ID 427
      
      This silences these perf build warnings:
      
        Warning: Kernel ABI header at 'tools/include/uapi/asm-generic/unistd.h' differs from latest version at 'include/uapi/asm-generic/unistd.h'
        diff -u tools/include/uapi/asm-generic/unistd.h include/uapi/asm-generic/unistd.h
        Warning: Kernel ABI header at 'tools/perf/arch/x86/entry/syscalls/syscall_64.tbl' differs from latest version at 'arch/x86/entry/syscalls/syscall_64.tbl'
        diff -u tools/perf/arch/x86/entry/syscalls/syscall_64.tbl arch/x86/entry/syscalls/syscall_64.tbl
      
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
      Cc: Christian Brauner <christian@brauner.io>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Martin KaFai Lau <kafai@fb.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Song Liu <songliubraving@fb.com>
      Cc: Yonghong Song <yhs@fb.com>
      Link: https://lkml.kernel.org/n/tip-p0ars3otuc52x5iznf21shhw@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
      Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
      Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
    • io_uring: offload write to async worker in case of -EAGAIN · 691ad7b2
      Authored by Roman Penyaev
      commit 9bf7933fc3f306bc4ce74ad734f690a71670178a upstream.
      
      In case of direct write -EAGAIN will be returned if page cache was
      previously populated.  To avoid immediate completion of a request
      with -EAGAIN error write has to be offloaded to the async worker,
      like io_read() does.
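      
      The control flow can be modeled like this (userspace sketch: io_write(),
      the punt to the worker, and the EAGAIN value are all stand-ins for the
      kernel pieces):

```c
#include <string.h>

#define MY_EAGAIN 11

/* stand-in for io_write() hitting populated page cache on a direct
 * write from the nonblocking submission path */
static int io_write_sketch(int cache_populated)
{
    return cache_populated ? -MY_EAGAIN : 0;
}

/* after the fix: -EAGAIN no longer completes the request with an
 * error, it requeues the write to the async worker like io_read() */
static const char *submit_write(int cache_populated)
{
    int ret = io_write_sketch(cache_populated);
    if (ret == -MY_EAGAIN)
        return "punted to async worker";
    return ret ? "completed with error" : "completed inline";
}
```
      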
      Signed-off-by: Roman Penyaev <rpenyaev@suse.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: linux-block@vger.kernel.org
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
      Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
      Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
    • io_uring: fix big-endian compat signal mask handling · d895ee01
      Authored by Arnd Bergmann
      commit 9e75ad5d8f399a21c86271571aa630dd080223e2 upstream.
      
      On big-endian architectures, the signal masks are different
      between 32-bit and 64-bit tasks, so we have to use a different
      function for reading them from user space.
      
      io_cqring_wait() initially got this wrong and always interpreted
      the mask as a native structure. This is OK on x86 and most arm64,
      but not on s390, ppc64be, mips64be, sparc64 and parisc.
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
      Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
      Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
    • block: add BIO_NO_PAGE_REF flag · c0d2a0b9
      Authored by Jens Axboe
      commit 399254aaf4892113c806816f7e64cf40c804d46d upstream.
      
      If bio_iov_iter_get_pages() is called on an iov_iter that is flagged
      with NO_REF, then we don't need to add a page reference for the pages
      that we add.
      
      Add BIO_NO_PAGE_REF to track this in the bio, so IO completion knows
      not to drop a reference to these pages.
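      
      The completion-side gist, as a userspace model (the flag bit value,
      struct layout and the put_page() accounting are stand-ins, not the
      real block-layer definitions):

```c
#define BIO_NO_PAGE_REF_SKETCH (1U << 0)

struct bio_sketch {
    unsigned int flags;
    int pages_put;     /* counts stand-in put_page() calls */
};

/* on IO completion, only drop page references the bio actually owns */
static void bio_release_pages_sketch(struct bio_sketch *bio, int nr_pages)
{
    if (bio->flags & BIO_NO_PAGE_REF_SKETCH)
        return;                    /* caller holds its own references */
    bio->pages_put += nr_pages;    /* stand-in for put_page() per page */
}
```
      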
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
      Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
      Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
    • iov_iter: add ITER_BVEC_FLAG_NO_REF flag · 209cc5b5
      Authored by Jens Axboe
      commit 875f1d0769cdcfe1596ff0ca609b453359e42ec9 upstream.
      
      For ITER_BVEC, if we're holding on to kernel pages, the caller
      doesn't need to grab a reference to the bvec pages, and drop that
      same reference on IO completion. This is essentially safe for any
      ITER_BVEC, but some use cases end up reusing pages and unconditionally
      dropping a page reference on completion. An example of that is
      sendfile(2), that ends up being a splice_in + splice_out on the
      pipe pages.
      
      Add a flag that tells us it's fine to not grab a page reference
      to the bvec pages, since that caller knows not to drop a reference
      when it's done with the pages.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
      Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
      Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
    • io_uring: mark me as the maintainer · b8b92e8f
      Authored by Jens Axboe
      commit bf33a7699e992b12d4c7d39dc3f0b61f6b26c5c2 upstream.
      
      And mark io_uring as maintained in general.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
      Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
      Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
    • io_uring: retry bulk slab allocs as single allocs · ef1ee13b
      Authored by Jens Axboe
      commit fd6fab2cb78d3b6023c26ec53e0aa6f0b477d2f7 upstream.
      
      I've seen cases where bulk alloc fails, since the bulk alloc API
      is all-or-nothing - either we get the number we ask for, or it
      returns 0 as the number of entries.
      
      If we fail a batch bulk alloc, retry a "normal" kmem_cache_alloc()
      and just use that instead of failing with -EAGAIN.
      
      While in there, ensure we use GFP_KERNEL. That was an oversight in
      the original code, when we switched away from GFP_ATOMIC.
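      
      The fallback logic, as a userspace model (alloc_bulk_sketch() imitates
      kmem_cache_alloc_bulk()'s all-or-nothing behavior, malloc() stands in
      for kmem_cache_alloc(GFP_KERNEL), and 'available' simulates allocator
      pressure):

```c
#include <stdlib.h>

#define MY_EAGAIN 11

/* stand-in for kmem_cache_alloc_bulk(): grants all 'want' entries
 * or returns 0 */
static int alloc_bulk_sketch(int want, int available, void **out)
{
    if (available < want)
        return 0;
    for (int i = 0; i < want; i++)
        out[i] = malloc(16);
    return want;
}

/* the fix: on a failed batch, retry as a single allocation instead
 * of failing the submission with -EAGAIN */
static int get_reqs_sketch(int want, int available, void **out)
{
    int got = alloc_bulk_sketch(want, available, out);
    if (got == 0) {
        out[0] = malloc(16);
        if (!out[0])
            return -MY_EAGAIN;   /* only if even the single alloc fails */
        got = 1;
    }
    return got;
}
```
      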
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
      Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
      Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
    • io_uring: fix poll races · 845a60d5
      Authored by Jens Axboe
      commit 8c838788775a593527803786d376393b7c28f589 upstream.
      
      This is a straight port of Al's fix for the aio poll implementation,
      since the io_uring version is heavily based on that. The below
      description is almost straight from that patch, just modified to
      fit the io_uring situation.
      
      io_poll() has to cope with several unpleasant problems:
      	* requests that might stay around indefinitely need to
      be made visible for io_cancel(2); that must not be done to
      a request already completed, though.
      	* in cases when ->poll() has placed us on a waitqueue,
      wakeup might have happened (and request completed) before ->poll()
      returns.
      	* worse, in some early wakeup cases request might end
      up re-added into the queue later - we can't treat "woken up and
      currently not in the queue" as "it's not going to stick around
      indefinitely"
      	* ... moreover, ->poll() might have decided not to
      put it on any queues to start with, and that needs to be distinguished
      from the previous case
      	* ->poll() might have tried to put us on more than one queue.
      Only the first will succeed for io poll, so we might end up missing
      wakeups.  OTOH, we might very well notice that only after the
      wakeup hits and request gets completed (all before ->poll() gets
      around to the second poll_wait()).  In that case it's too late to
      decide that we have an error.
      
      req->woken was an attempt to deal with that.  Unfortunately, it was
      broken.  What we need to keep track of is not that wakeup has happened -
      the thing might come back after that.  It's that async reference is
      already gone and won't come back, so we can't (and needn't) put the
      request on the list of cancellables.
      
      The easiest case is "request hadn't been put on any waitqueues"; we
      can tell by seeing NULL apt.head, and in that case there won't be
      anything async.  We should either complete the request ourselves
      (if vfs_poll() reports anything of interest) or return an error.
      
      In all other cases we get exclusion with wakeups by grabbing the
      queue lock.
      
      If request is currently on queue and we have something interesting
      from vfs_poll(), we can steal it and complete the request ourselves.
      
      If it's on queue and vfs_poll() has not reported anything interesting,
      we either put it on the cancellable list, or, if we know that it
      hadn't been put on all queues ->poll() wanted it on, we steal it and
      return an error.
      
      If it's _not_ on queue, it's either been already dealt with (in which
      case we do nothing), or there's io_poll_complete_work() about to be
      executed.  In that case we either put it on the cancellable list,
      or, if we know it hadn't been put on all queues ->poll() wanted it on,
      simulate what cancel would've done.
      
      Fixes: 221c5eb23382 ("io_uring: add support for IORING_OP_POLL")
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
      Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
    • io_uring: fix fget/fput handling · 1a18e019
      Authored by Jens Axboe
      commit 09bb839434bd845c01da3d159b0c126fe7fa90da upstream.
      
      This isn't a straight port of commit 84c4e1f89fef for aio.c, since
      io_uring doesn't use files in exactly the same way. But it's pretty
      close. See the commit message for that commit.
      
      This essentially fixes a use-after-free with the poll command
      handling, but it takes cue from Linus's approach to just simplifying
      the file handling. We move the setup of the file into a higher level
      location, so the individual commands don't have to deal with it. And
      then we release the reference when we free the associated io_kiocb.
      
      Fixes: 221c5eb23382 ("io_uring: add support for IORING_OP_POLL")
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
      Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
      Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
    • io_uring: add prepped flag · 2904810b
      Authored by Jens Axboe
      commit d530a402a114efcf6d2b88d7f628856dade5b90b upstream.
      
      We currently use the fact that if ->ki_filp is already set, then we've
      done the prep. In preparation for moving the file assignment earlier,
      use a separate flag to tell whether the request has been prepped for
      IO or not.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
      Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
      Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
    • io_uring: make io_read/write return an integer · b586a15b
      Authored by Jens Axboe
      commit e0c5c576d5074b5bb7b1b4b59848c25ceb521331 upstream.
      
      The callers all convert to an integer, and we only return 0/-ERROR
      anyway.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
      Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
      Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
    • io_uring: use regular request ref counts · 3a3cfffc
      Authored by Jens Axboe
      commit e65ef56db4945fb18a0d522e056c02ddf939e644 upstream.
      
      Get rid of the special casing of "normal" requests not having
      any references to the io_kiocb. We initialize the ref count to 2,
      one for the submission side, and one for the completion side.
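      
      The uniform scheme can be modeled with a plain counter (userspace
      sketch; in the kernel this is a real refcount and the free is
      kmem_cache_free() of the io_kiocb):

```c
#include <stdbool.h>

struct req_sketch {
    int refs;
    bool freed;
};

/* every request starts with two references: one dropped when the
 * submission side is done with it, one when completion posts the CQE */
static void req_init(struct req_sketch *req)
{
    req->refs = 2;
    req->freed = false;
}

static void req_put(struct req_sketch *req)
{
    if (--req->refs == 0)
        req->freed = true;   /* stand-in for freeing the request */
}
```

      Whichever side drops its reference last frees the request, so neither
      path needs special casing.
      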
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
      Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
      Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
    • io_uring: add a few test tools · b0034373
      Authored by Jens Axboe
      commit 21b4aa5d20fd07207e73270cadffed5c63fb4343 upstream.
      
      This adds two test programs in tools/io_uring/ that demonstrate both
      the raw io_uring API (and all features) through a small benchmark
      app, io_uring-bench, and the liburing exposed API in a simplified
      cp(1) implementation through io_uring-cp.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
      Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
      Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
    • io_uring: allow workqueue item to handle multiple buffered requests · af01af1b
      Authored by Jens Axboe
      commit 31b515106428b9717d2b6475b6f6182cf231b1e6 upstream.
      
      Right now we punt any buffered request that ends up triggering an
      -EAGAIN to an async workqueue. This works fine in terms of providing
      async execution of them, but it also can create quite a lot of work
      queue items. For sequentially buffered IO, it's advantageous to
      serialize the issue of them. For reads, the first one will trigger a
      read-ahead, and subsequent requests merely end up waiting on later pages
      to complete. For writes, devices usually respond better to streamed
      sequential writes.
      
      Add state to track the last buffered request we punted to a work queue,
      and if the next one is sequential to the previous, attempt to get the
      previous work item to handle it. We limit the number of sequential
      add-ons to a multiple (8) of the max read-ahead size of the file.
      This should be a good number for both reads and writes, as it defines the
      max IO size the device can do directly.
      
      This drastically cuts down on the number of context switches we need to
      handle buffered sequential IO, and a basic test case of copying a big
      file with io_uring sees a 5x speedup.
      Reviewed-by: Hannes Reinecke <hare@suse.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
      Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
      Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
    • io_uring: add support for IORING_OP_POLL · 51de0e8f
      Authored by Jens Axboe
      commit 221c5eb2338232f7340386de1c43decc32682e58 upstream.
      
      This is basically a direct port of bfe4037e, which implements a
      one-shot poll command through aio. Description below is based on that
      commit as well. However, instead of adding a POLL command and relying
      on io_cancel(2) to remove it, we mimic the epoll(2) interface of
      having a command to add a poll notification, IORING_OP_POLL_ADD,
      and one to remove it again, IORING_OP_POLL_REMOVE.
      
      To poll for a file descriptor the application should submit an sqe of
      type IORING_OP_POLL_ADD. It will poll the fd for the events specified
      in the poll_events field.
      
      Unlike poll or epoll without EPOLLONESHOT this interface always works in
      one shot mode, that is once the sqe is completed, it will have to be
      resubmitted.
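      
      From userspace, arming a one-shot poll looks roughly like this. This is
      a sketch with a deliberately simplified sqe struct and a stand-in opcode
      constant; the real layout and helpers live in <linux/io_uring.h> and
      liburing.

```c
#include <poll.h>
#include <string.h>

/* simplified stand-in for struct io_uring_sqe, keeping only the
 * fields the poll command uses */
struct sqe_sketch {
    unsigned char  opcode;
    int            fd;
    unsigned short poll_events;
};

#define OP_POLL_ADD_SKETCH 6   /* stand-in opcode value */

/* one-shot: once the CQE for this sqe arrives, polling has stopped
 * and the application must submit a fresh POLL_ADD to keep watching */
static void prep_poll_add(struct sqe_sketch *sqe, int fd,
                          unsigned short events)
{
    memset(sqe, 0, sizeof(*sqe));
    sqe->opcode = OP_POLL_ADD_SKETCH;
    sqe->fd = fd;
    sqe->poll_events = events;
}
```
      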
      Reviewed-by: Hannes Reinecke <hare@suse.com>
      Based-on-code-from: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
      Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
      Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
    • io_uring: add io_kiocb ref count · 739ec170
      Authored by Jens Axboe
      commit c16361c1d805b6ea50c3c1fc5c314e944c71a984 upstream.
      
      We'll use this for the POLL implementation. Regular requests will
      NOT be using references, so initialize it to 0. Any real use of
      the io_kiocb ref will initialize it to at least 2.
      Reviewed-by: Hannes Reinecke <hare@suse.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
      Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
      Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
    • io_uring: add submission polling · aa124ba8
      Authored by Jens Axboe
      commit 6c271ce2f1d572f7fa225700a13cfe7ced492434 upstream.
      
      This enables an application to do IO, without ever entering the kernel.
      By using the SQ ring to fill in new sqes and watching for completions
      on the CQ ring, we can submit and reap IOs without doing a single system
      call. The kernel side thread will poll for new submissions, and in case
      of HIPRI/polled IO, it'll also poll for completions.
      
By default, we allow 1 second of active spinning. This can be changed
by passing in a different grace period at io_uring_register(2) time.
      If the thread exceeds this idle time without having any work to do, it
      will set:
      
      sq_ring->flags |= IORING_SQ_NEED_WAKEUP.
      
The application will have to call io_uring_enter() to start things back
up again. If IO is kept busy, that will never be needed. Basically, an
application that has this feature enabled will guard its
io_uring_enter(2) call with:
      
      read_barrier();
      if (*sq_ring->flags & IORING_SQ_NEED_WAKEUP)
      	io_uring_enter(fd, 0, 0, IORING_ENTER_SQ_WAKEUP);
      
      instead of calling it unconditionally.
      
      It's mandatory to use fixed files with this feature. Failure to do so
      will result in the application getting an -EBADF CQ entry when
      submitting IO.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
      aa124ba8
    • J
      io_uring: add file set registration · 7bfbdad6
Committed by Jens Axboe
      commit 6b06314c47e141031be043539900d80d2c7ba10f upstream.
      
      We normally have to fget/fput for each IO we do on a file. Even with
      the batching we do, the cost of the atomic inc/dec of the file usage
      count adds up.
      
      This adds IORING_REGISTER_FILES, and IORING_UNREGISTER_FILES opcodes
      for the io_uring_register(2) system call. The arguments passed in must
      be an array of __s32 holding file descriptors, and nr_args should hold
      the number of file descriptors the application wishes to pin for the
      duration of the io_uring instance (or until IORING_UNREGISTER_FILES is
      called).
      
      When used, the application must set IOSQE_FIXED_FILE in the sqe->flags
      member. Then, instead of setting sqe->fd to the real fd, it sets sqe->fd
      to the index in the array passed in to IORING_REGISTER_FILES.
      
      Files are automatically unregistered when the io_uring instance is torn
      down. An application need only unregister if it wishes to register a new
      set of fds.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
      7bfbdad6
    • J
      net: split out functions related to registering inflight socket files · 586b37da
Committed by Jens Axboe
      commit f4e65870e5cede5ca1ec0006b6c9803994e5f7b8 upstream.
      
      We need this functionality for the io_uring file registration, but
we cannot rely on it since CONFIG_UNIX can be modular. Move the helpers
to a separate file that is always built into the kernel if CONFIG_UNIX
is m/y.
      
      No functional changes in this patch, just moving code around.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
      586b37da
    • J
      io_uring: add support for pre-mapped user IO buffers · a078ed69
Committed by Jens Axboe
      commit edafccee56ff31678a091ddb7219aba9b28bc3cb upstream.
      
      If we have fixed user buffers, we can map them into the kernel when we
      setup the io_uring. That avoids the need to do get_user_pages() for
      each and every IO.
      
      To utilize this feature, the application must call io_uring_register()
      after having setup an io_uring instance, passing in
      IORING_REGISTER_BUFFERS as the opcode. The argument must be a pointer to
      an iovec array, and the nr_args should contain how many iovecs the
      application wishes to map.
      
      If successful, these buffers are now mapped into the kernel, eligible
      for IO. To use these fixed buffers, the application must use the
      IORING_OP_READ_FIXED and IORING_OP_WRITE_FIXED opcodes, and then
set sqe->index to the desired buffer index. sqe->addr..sqe->addr+sqe->len
must point to somewhere inside the indexed buffer.
      
      The application may register buffers throughout the lifetime of the
      io_uring instance. It can call io_uring_register() with
      IORING_UNREGISTER_BUFFERS as the opcode to unregister the current set of
      buffers, and then register a new set. The application need not
      unregister buffers explicitly before shutting down the io_uring
      instance.
      
It's perfectly valid to set up a larger buffer, and then sometimes only
use parts of it for an IO. As long as the range is within the originally
mapped region, it will work just fine.
      
      For now, buffers must not be file backed. If file backed buffers are
      passed in, the registration will fail with -1/EOPNOTSUPP. This
      restriction may be relaxed in the future.
      
      RLIMIT_MEMLOCK is used to check how much memory we can pin. A somewhat
      arbitrary 1G per buffer size is also imposed.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
      a078ed69
    • J
      block: implement bio helper to add iter bvec pages to bio · b1d06bf8
Committed by Jens Axboe
      commit 6d0c48aede85e38316d0251564cab39cbc2422f6 upstream.
      
      For an ITER_BVEC, we can just iterate the iov and add the pages
      to the bio directly. For now, we grab a reference to those pages,
      and release them normally on IO completion. This isn't really needed
      for the normal case of O_DIRECT from/to a file, but some of the more
      esoteric use cases (like splice(2)) will unconditionally put the
      pipe buffer pages when the buffers are released. Until we can manage
      that case properly, ITER_BVEC pages are treated like normal pages
      in terms of reference counting.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
      b1d06bf8
    • J
      io_uring: batch io_kiocb allocation · f25b8cbf
Committed by Jens Axboe
      commit 2579f913d41a086563bb81762c519f3d62ddee37 upstream.
      
      Similarly to how we use the state->ios_left to know how many references
      to get to a file, we can use it to allocate the io_kiocb's we need in
      bulk.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
      f25b8cbf
    • J
      io_uring: use fget/fput_many() for file references · cc2a32d5
Committed by Jens Axboe
      commit 9a56a2323dbbd8ed7f380a5af7ae3ff82caa55a6 upstream.
      
      Add a separate io_submit_state structure, to cache some of the things
      we need for IO submission.
      
One such example is file reference batching, via io_submit_state. We get
as many references as the number of sqes we are submitting, and drop
unused ones if we end up switching files. The assumption here is that
we're usually only dealing with one fd, and if there are multiple,
hopefully they are at least somewhat ordered. This could trivially be
extended to cover multiple fds, if needed.
      
      On the completion side we do the same thing, except this is trivially
      done just locally in io_iopoll_reap().
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
      cc2a32d5
    • J
      fs: add fget_many() and fput_many() · dc7be5b8
Committed by Jens Axboe
      commit 091141a42e15fe47ada737f3996b317072afcefb upstream.
      
Some use cases repeatedly get and put references to the same file, but
the only exposed interface does these one at a time. As each of
these entails an atomic inc or dec on a shared structure, that cost can
add up.
      
      Add fget_many(), which works just like fget(), except it takes an
      argument for how many references to get on the file. Ditto fput_many(),
      which can drop an arbitrary number of references to a file.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
      dc7be5b8
    • J
      io_uring: support for IO polling · c3440f68
Committed by Jens Axboe
      commit def596e9557c91d9846fc4d84d26f2c564644416 upstream.
      
      Add support for a polled io_uring instance. When a read or write is
      submitted to a polled io_uring, the application must poll for
      completions on the CQ ring through io_uring_enter(2). Polled IO may not
      generate IRQ completions, hence they need to be actively found by the
      application itself.
      
      To use polling, io_uring_setup() must be used with the
      IORING_SETUP_IOPOLL flag being set. It is illegal to mix and match
      polled and non-polled IO on an io_uring.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
      c3440f68
    • C
      io_uring: add fsync support · cb0d3740
Committed by Christoph Hellwig
      commit c992fe2925d776be066d9f6cc13f9ea11d78b657 upstream.
      
Add a new fsync opcode, which either syncs a range if one is passed,
or the whole file if the offset and length fields are both cleared
to zero.  A flag is provided to use fdatasync semantics, that is, only
force out the metadata required to retrieve the file data, but not
other metadata such as timestamps.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
      cb0d3740
    • J
      Add io_uring IO interface · 209d771f
Committed by Jens Axboe
      commit 2b188cc1bb857a9d4701ae59aa7768b5124e262e upstream.
      
      The submission queue (SQ) and completion queue (CQ) rings are shared
      between the application and the kernel. This eliminates the need to
      copy data back and forth to submit and complete IO.
      
      IO submissions use the io_uring_sqe data structure, and completions
      are generated in the form of io_uring_cqe data structures. The SQ
      ring is an index into the io_uring_sqe array, which makes it possible
      to submit a batch of IOs without them being contiguous in the ring.
      The CQ ring is always contiguous, as completion events are inherently
      unordered, and hence any io_uring_cqe entry can point back to an
      arbitrary submission.
      
      Two new system calls are added for this:
      
      io_uring_setup(entries, params)
      	Sets up an io_uring instance for doing async IO. On success,
      	returns a file descriptor that the application can mmap to
      	gain access to the SQ ring, CQ ring, and io_uring_sqes.
      
      io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
      	Initiates IO against the rings mapped to this fd, or waits for
      	them to complete, or both. The behavior is controlled by the
      	parameters passed in. If 'to_submit' is non-zero, then we'll
      	try and submit new IO. If IORING_ENTER_GETEVENTS is set, the
      	kernel will wait for 'min_complete' events, if they aren't
      	already available. It's valid to set IORING_ENTER_GETEVENTS
      	and 'min_complete' == 0 at the same time, this allows the
      	kernel to return already completed events without waiting
      	for them. This is useful only for polling, as for IRQ
      	driven IO, the application can just check the CQ ring
      	without entering the kernel.
      
      With this setup, it's possible to do async IO with a single system
      call. Future developments will enable polled IO with this interface,
      and polled submission as well. The latter will enable an application
      to do IO without doing ANY system calls at all.
      
      For IRQ driven IO, an application only needs to enter the kernel for
      completions if it wants to wait for them to occur.
      
      Each io_uring is backed by a workqueue, to support buffered async IO
      as well. We will only punt to an async context if the command would
      need to wait for IO on the device side. Any data that can be accessed
      directly in the page cache is done inline. This avoids the slowness
      issue of usual threadpools, since cached data is accessed as quickly
      as a sync interface.
      
Sample application: http://git.kernel.dk/cgit/fio/plain/t/io_uring.c
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
      209d771f
    • J
      xfs: Fix stale data exposure when readahead races with hole punch · a9abd3d6
Committed by Jan Kara
      commit 40144e49ff84c3bd6bd091b58115257670be8803 upstream.
      
Hole punching currently evicts pages from the page cache and then goes
on to remove blocks from the inode. This happens under both
XFS_IOLOCK_EXCL and XFS_MMAPLOCK_EXCL, which provides appropriate
serialization with racing reads or page faults. However, there is
currently nothing that prevents readahead triggered by fadvise() or
madvise() from racing with the hole punch and instantiating a page
cache page after hole punching has evicted the page cache in
xfs_flush_unmap_range() but before it has removed blocks from the
inode. This page cache page will be mapping a soon-to-be-freed block,
and that can lead to returning stale data to userspace or even
filesystem corruption.
      
      Fix the problem by protecting handling of readahead requests by
      XFS_IOLOCK_SHARED similarly as we protect reads.
      
      CC: stable@vger.kernel.org
Link: https://lore.kernel.org/linux-fsdevel/CAOQ4uxjQNmxqmtA_VbYW0Su9rKRk2zobJmahcyeaEVOFKVQ5dw@mail.gmail.com/
Reported-by: Amir Goldstein <amir73il@gmail.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
Reviewed-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
      a9abd3d6
    • J
      fs: Export generic_fadvise() · 00451d52
Committed by Jan Kara
      commit cf1ea0592dbf109e7e7935b7d5b1a47a1ba04174 upstream.
      
      Filesystems will need to call this function from their fadvise handlers.
      
      CC: stable@vger.kernel.org
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
Reviewed-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
      00451d52
    • R
      xfs: fix missed wakeup on l_flush_wait · d09feb42
Committed by Rik van Riel
      commit cdea5459ce263fbc963657a7736762ae897a8ae6 upstream.
      
The code in xlog_wait uses the spinlock to make adding the task to
the wait queue and setting the task state to UNINTERRUPTIBLE atomic
with respect to the waker.
      
      Doing the wakeup after releasing the spinlock opens up the following
      race condition:
      
Task 1					Task 2
add task to wait queue
					wake up task
set task state to UNINTERRUPTIBLE
      
This issue was found through code inspection as a result of kworkers
being observed stuck in the UNINTERRUPTIBLE state with an empty
wait queue. It is rare and largely unreproducible.
      
      Simply moving the spin_unlock to after the wake_up_all results
      in the waker not being able to see a task on the waitqueue before
      it has set its state to UNINTERRUPTIBLE.
      
This bug dates back to the conversion of this code to generic
waitqueue infrastructure from a counting semaphore back in 2008,
which didn't place the wakeups consistently w.r.t. the relevant
spin locks.
      
      [dchinner: Also fix a similar issue in the shutdown path on
      xc_commit_wait. Update commit log with more details of the issue.]
      
      Fixes: d748c623 ("[XFS] Convert l_flushsema to a sv_t")
Reported-by: Chris Mason <clm@fb.com>
Signed-off-by: Rik van Riel <riel@surriel.com>
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
Reviewed-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
      d09feb42
    • T
      fs: xfs: xfs_log: Don't use KM_MAYFAIL at xfs_log_reserve(). · d62252e6
Committed by Tetsuo Handa
      commit 294fc7a4c8ec42b3053b1d2e87b0dafef80a76b8 upstream.
      
      When the system is close-to-OOM, fsync() may fail due to -ENOMEM because
      xfs_log_reserve() is using KM_MAYFAIL. It is a bad thing to fail writeback
      operation due to user-triggerable OOM condition. Since we are not using
      KM_MAYFAIL at xfs_trans_alloc() before calling xfs_log_reserve(), let's
      use the same flags at xfs_log_reserve().
      
        oom-torture: page allocation failure: order:0, mode:0x46c40(GFP_NOFS|__GFP_NOWARN|__GFP_RETRY_MAYFAIL|__GFP_COMP), nodemask=(null)
        CPU: 7 PID: 1662 Comm: oom-torture Kdump: loaded Not tainted 5.3.0-rc2+ #925
        Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00
        Call Trace:
         dump_stack+0x67/0x95
         warn_alloc+0xa9/0x140
         __alloc_pages_slowpath+0x9a8/0xbce
         __alloc_pages_nodemask+0x372/0x3b0
         alloc_slab_page+0x3a/0x8d0
         new_slab+0x330/0x420
         ___slab_alloc.constprop.94+0x879/0xb00
         __slab_alloc.isra.89.constprop.93+0x43/0x6f
         kmem_cache_alloc+0x331/0x390
         kmem_zone_alloc+0x9f/0x110 [xfs]
         kmem_zone_alloc+0x9f/0x110 [xfs]
         xlog_ticket_alloc+0x33/0xd0 [xfs]
         xfs_log_reserve+0xb4/0x410 [xfs]
         xfs_trans_reserve+0x1d1/0x2b0 [xfs]
         xfs_trans_alloc+0xc9/0x250 [xfs]
         xfs_setfilesize_trans_alloc.isra.27+0x44/0xc0 [xfs]
         xfs_submit_ioend.isra.28+0xa5/0x180 [xfs]
         xfs_vm_writepages+0x76/0xa0 [xfs]
         do_writepages+0x17/0x80
         __filemap_fdatawrite_range+0xc1/0xf0
         file_write_and_wait_range+0x53/0xa0
         xfs_file_fsync+0x87/0x290 [xfs]
         vfs_fsync_range+0x37/0x80
         do_fsync+0x38/0x60
         __x64_sys_fsync+0xf/0x20
         do_syscall_64+0x4a/0x1c0
         entry_SYSCALL_64_after_hwframe+0x49/0xbe
      
      Fixes: eb01c9cd ("[XFS] Remove the xlog_ticket allocator")
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
Reviewed-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
      d62252e6
    • D
      xfs: fix off-by-one error in rtbitmap cross-reference · a7b23da6
Committed by Darrick J. Wong
      commit 87c9607df2ff73290dcfe08d22f34687ce0142ce upstream.
      
      Fix an off-by-one error in the realtime bitmap "is used" cross-reference
      helper function if the realtime extent size is a single block.
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
Reviewed-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
      a7b23da6
    • D
      xfs: unlock inode when xfs_ioctl_setattr_get_trans can't get transaction · 38862a70
Committed by Darrick J. Wong
      commit 3de5eab3fde1e379be65973a69ded29da3802133 upstream.
      
      We passed an inode into xfs_ioctl_setattr_get_trans with join_flags
      indicating which locks are held on that inode.  If we can't allocate a
      transaction then we need to unlock the inode before we bail out, like
      all the other error paths do.
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
Reviewed-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
      38862a70