1. 09 Apr 2020 (1 commit)
  2. 08 Apr 2020 (6 commits)
  3. 07 Apr 2020 (2 commits)
    • io_uring: initialize fixed_file_data lock · f7fe9346
      Committed by Xiaoguang Wang
      syzbot reports the warning below:
      INFO: trying to register non-static key.
      the code is fine but needs lockdep annotation.
      turning off the locking correctness validator.
      CPU: 1 PID: 7099 Comm: syz-executor897 Not tainted 5.6.0-next-20200406-syzkaller #0
      Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
      Call Trace:
       __dump_stack lib/dump_stack.c:77 [inline]
       dump_stack+0x188/0x20d lib/dump_stack.c:118
       assign_lock_key kernel/locking/lockdep.c:913 [inline]
       register_lock_class+0x1664/0x1760 kernel/locking/lockdep.c:1225
       __lock_acquire+0x104/0x4e00 kernel/locking/lockdep.c:4223
       lock_acquire+0x1f2/0x8f0 kernel/locking/lockdep.c:4923
       __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
       _raw_spin_lock_irqsave+0x8c/0xbf kernel/locking/spinlock.c:159
       io_sqe_files_register fs/io_uring.c:6599 [inline]
       __io_uring_register+0x1fe8/0x2f00 fs/io_uring.c:8001
       __do_sys_io_uring_register fs/io_uring.c:8081 [inline]
       __se_sys_io_uring_register fs/io_uring.c:8063 [inline]
       __x64_sys_io_uring_register+0x192/0x560 fs/io_uring.c:8063
       do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:295
       entry_SYSCALL_64_after_hwframe+0x49/0xb3
      RIP: 0033:0x440289
      Code: 18 89 d0 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 00 48 89 f8 48 89 f7
      48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff
      ff 0f 83 fb 13 fc ff c3 66 2e 0f 1f 84 00 00 00 00
      RSP: 002b:00007ffff1bbf558 EFLAGS: 00000246 ORIG_RAX: 00000000000001ab
      RAX: ffffffffffffffda RBX: 00000000004002c8 RCX: 0000000000440289
      RDX: 0000000020000280 RSI: 0000000000000002 RDI: 0000000000000003
      RBP: 00000000006ca018 R08: 0000000000000000 R09: 00000000004002c8
      R10: 0000000000000001 R11: 0000000000000246 R12: 0000000000401b10
      R13: 0000000000401ba0 R14: 0000000000000000 R15: 0000000000000000
      
      Initialize struct fixed_file_data's lock to fix this issue.
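
      A minimal sketch of the kind of change described, assuming the
      allocation site in io_sqe_files_register() (illustrative, not the
      verbatim patch):

      	file_data = kzalloc(sizeof(*file_data), GFP_KERNEL);
      	if (!file_data)
      		return -ENOMEM;
      	/* init the lock before any path can take it with
      	 * spin_lock_irqsave(), or lockdep sees a non-static key */
      	spin_lock_init(&file_data->lock);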
      
      Reported-by: syzbot+e6eeca4a035da76b3065@syzkaller.appspotmail.com
      Fixes: 05589553 ("io_uring: refactor file register/unregister/update handling")
      Signed-off-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: remove redundant variable pointer nxt and io_wq_assign_next call · 211fea18
      Committed by Colin Ian King
      An earlier commit, "io_uring: remove @nxt from handlers", removed the
      setting of the pointer nxt, so it is now always NULL; the non-NULL
      check and the call to io_wq_assign_next() are redundant and can be
      removed.
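
      The removed pattern looked roughly like this (a sketch based on the
      description above):

      	struct io_kiocb *nxt = NULL;
      	...
      	/* nxt is never set anymore, so this branch is dead code */
      	if (nxt)
      		io_wq_assign_next(workptr, nxt);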
      
      Addresses-Coverity: ("'Constant' variable guard")
      Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Colin Ian King <colin.king@canonical.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  4. 06 Apr 2020 (1 commit)
  5. 04 Apr 2020 (5 commits)
  6. 01 Apr 2020 (1 commit)
  7. 31 Mar 2020 (1 commit)
    • io_uring: refactor file register/unregister/update handling · 05589553
      Committed by Xiaoguang Wang
      While diving into the io_uring fileset register/unregister/update
      code, we found a bug in the fileset update handling. The update path
      uses a percpu_ref variable to check whether we can put a previously
      registered file: only when the refcount of the percpu_ref reaches
      zero can we safely put these files. But this doesn't work well. If
      applications issue requests continually, the percpu_ref never gets a
      chance to reach zero and stays in atomic mode forever, which defeats
      the gains of the fileset register/unregister/update feature, whose
      purpose is to reduce the atomic operation overhead of fput/fget.
      
      To fix this issue, when an application performs IORING_REGISTER_FILES
      or IORING_REGISTER_FILES_UPDATE, we allocate a new percpu_ref and
      kill the old one; new requests use the new percpu_ref. Once all
      previous requests complete, the old percpu_refs are dropped and the
      registered files can be put safely.
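
      A sketch of the scheme using the generic percpu_ref API (the node and
      callback names here are illustrative, not the exact ones used):

      	/* set up a fresh ref that new requests will take */
      	err = percpu_ref_init(&new_node->refs, io_file_data_ref_zero,
      			      0, GFP_KERNEL);
      	if (err)
      		return err;
      	/* publish the new ref (under the table lock) */
      	data->cur_refs = &new_node->refs;
      	/* kill the old ref; once in-flight requests drop it to zero,
      	 * its release callback puts the old files */
      	percpu_ref_kill(&old_node->refs);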
      
      Link: https://lore.kernel.org/io-uring/5a8dac33-4ca2-4847-b091-f7dcd3ad0ff3@linux.alibaba.com/T/#t
      Signed-off-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  8. 27 Mar 2020 (1 commit)
  9. 25 Mar 2020 (1 commit)
  10. 23 Mar 2020 (3 commits)
    • io-uring: drop 'free_pfile' in struct io_file_put · a5318d3c
      Committed by Hillf Danton
      Sync removal of a file is only used in case of a GFP_KERNEL kmalloc
      failure, at the cost of io_file_put::done and a work flush, while a
      glitch like that can be handled at the call site without too much
      pain.

      That said, what is proposed is to drop the sync removal of files, and
      the kink in the neck along with it.
      Signed-off-by: Hillf Danton <hdanton@sina.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io-uring: drop completion when removing file · 4afdb733
      Committed by Hillf Danton
      A task-hung case was reported by syzbot:
      
      INFO: task syz-executor975:9880 blocked for more than 143 seconds.
            Not tainted 5.6.0-rc6-syzkaller #0
      "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
      syz-executor975 D27576  9880   9878 0x80004000
      Call Trace:
       schedule+0xd0/0x2a0 kernel/sched/core.c:4154
       schedule_timeout+0x6db/0xba0 kernel/time/timer.c:1871
       do_wait_for_common kernel/sched/completion.c:83 [inline]
       __wait_for_common kernel/sched/completion.c:104 [inline]
       wait_for_common kernel/sched/completion.c:115 [inline]
       wait_for_completion+0x26a/0x3c0 kernel/sched/completion.c:136
       io_queue_file_removal+0x1af/0x1e0 fs/io_uring.c:5826
       __io_sqe_files_update.isra.0+0x3a1/0xb00 fs/io_uring.c:5867
       io_sqe_files_update fs/io_uring.c:5918 [inline]
       __io_uring_register+0x377/0x2c00 fs/io_uring.c:7131
       __do_sys_io_uring_register fs/io_uring.c:7202 [inline]
       __se_sys_io_uring_register fs/io_uring.c:7184 [inline]
       __x64_sys_io_uring_register+0x192/0x560 fs/io_uring.c:7184
       do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:294
       entry_SYSCALL_64_after_hwframe+0x49/0xbe
      
      and bisect pointed to 05f3fb3c ("io_uring: avoid ring quiesce for
      fixed file set unregister and update").
      
      It comes down to ordering: we wait for the work to be done before
      flushing it, while nobody is likely going to wake us up.

      We can drop that on-stack completion, since flushing the work is
      itself the synchronous operation we need, and nothing more is left
      behind it.

      To that end, io_file_put::done is re-used to indicate whether the
      struct can be freed in workqueue worker context.
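
      A simplified before/after of the removal path described (illustrative
      only, not the verbatim patch):

      	/* before: block on an on-stack completion that the workqueue
      	 * may never get to signal */
      	llist_add(&pfile->llist, &data->put_llist);
      	wait_for_completion(&done);	/* the reported hang */
      	flush_work(&data->ref_work);

      	/* after: no completion; flush_work() alone is the sync point,
      	 * and ->free_pfile tells the worker whether to kfree(pfile) */
      	llist_add(&pfile->llist, &data->put_llist);
      	flush_work(&data->ref_work);
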
      Reported-and-Inspired-by: syzbot <syzbot+538d1957ce178382a394@syzkaller.appspotmail.com>
      Signed-off-by: Hillf Danton <hdanton@sina.com>
      
      Rename ->done to ->free_pfile
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: Fix ->data corruption on re-enqueue · 18a542ff
      Committed by Pavel Begunkov
      work->data and work->list share a union. io_wq_assign_next() sets
      ->data if a req has a linked_timeout, but io-wq may then want to use
      work->list, e.g. to re-enqueue a request, corrupting ->data.

      ->data is not necessary; just remove it and extract the
      linked_timeout through the request's link_list.
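
      The aliasing at the root of the bug, in outline:

      	struct io_wq_work {
      		union {
      			struct io_wq_work_node list;	/* io-wq queueing */
      			void *data;	/* stashed linked timeout */
      		};
      		/* ... */
      	};

      Writing work->list on re-enqueue overwrites whatever was stored in
      work->data, hence pulling the linked timeout off the request's
      link_list instead.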
      
      Fixes: 60cf46ae ("io-wq: hash dependent work")
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  11. 21 Mar 2020 (1 commit)
    • io_uring: honor original task RLIMIT_FSIZE · 4ed734b0
      Committed by Jens Axboe
      With the previous fixes for number of files open checking, I added some
      debug code to see if we had other spots where we're checking rlimit()
      against the async io-wq workers. The only one I found was file size
      checking, which we should also honor.
      
      During write and fallocate prep, store the max file size and override
      that for the current task if we're in io-wq worker context.
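
      In outline, matching the description above (io_write_actual() is a
      hypothetical stand-in for the issue path):

      	/* at prep time, capture the submitting task's limit */
      	req->fsize = rlimit(RLIMIT_FSIZE);

      	/* in io-wq worker context, apply it around the actual IO */
      	current->signal->rlim[RLIMIT_FSIZE].rlim_cur = req->fsize;
      	ret2 = io_write_actual(req);
      	current->signal->rlim[RLIMIT_FSIZE].rlim_cur = RLIM_INFINITY;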
      
      Cc: stable@vger.kernel.org # 5.1+
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  12. 20 Mar 2020 (2 commits)
  13. 15 Mar 2020 (2 commits)
  14. 12 Mar 2020 (1 commit)
  15. 11 Mar 2020 (1 commit)
    • io_uring: io_uring_enter(2) don't poll while SETUP_IOPOLL|SETUP_SQPOLL enabled · 32b2244a
      Committed by Xiaoguang Wang
      When SETUP_IOPOLL and SETUP_SQPOLL are both enabled, applications
      don't need to poll for io completion events themselves; they can rely
      on io_sq_thread to do the polling, which reduces cpu usage and
      uring_lock contention.
      
      I modified the fio io_uring engine code a bit to evaluate the performance:
      static int fio_ioring_getevents(struct thread_data *td, unsigned int min,
                              continue;
                      }
      
      -               if (!o->sqpoll_thread) {
      +               if (o->sqpoll_thread && o->hipri) {
                              r = io_uring_enter(ld, 0, actual_min,
                                                      IORING_ENTER_GETEVENTS);
                              if (r < 0) {
      
      and use "fio  -name=fiotest -filename=/dev/nvme0n1 -iodepth=$depth -thread
      -rw=read -ioengine=io_uring  -hipri=1 -sqthread_poll=1  -direct=1 -bs=4k
      -size=10G -numjobs=1  -time_based -runtime=120"
      
      original codes
      --------------------------------------------------------------------
      iodepth       |        4 |        8 |       16 |       32 |       64
      bw            | 1133MB/s | 1519MB/s | 2090MB/s | 2710MB/s | 3012MB/s
      fio cpu usage |     100% |     100% |     100% |     100% |     100%
      --------------------------------------------------------------------
      
      with patch
      --------------------------------------------------------------------
      iodepth       |        4 |        8 |       16 |       32 |       64
      bw            | 1196MB/s | 1721MB/s | 2351MB/s | 2977MB/s | 3357MB/s
      fio cpu usage |    63.8% |    74.4% |    81.1% |    83.7% |    82.4%
      --------------------------------------------------------------------
      bw improve    |     5.5% |    13.2% |    12.3% |     9.8% |    11.5%
      --------------------------------------------------------------------
      
      From the above test results, we can see that bandwidth improves by
      about 5.5%~13%, and the fio process's cpu usage also drops
      considerably. Note this won't improve io_sq_thread's cpu usage when
      SETUP_IOPOLL|SETUP_SQPOLL are both enabled; in this case, io_sq_thread
      always runs at 100% cpu. I think this patch will be friendly to
      applications which often use io_uring_wait_cqe() or similar from
      liburing.
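
      The enter-side logic then looks roughly like this (a sketch of the
      described behavior):

      	if (flags & IORING_ENTER_GETEVENTS) {
      		unsigned nr_events = 0;

      		min_complete = min(min_complete, ctx->cq_entries);
      		/* with SQPOLL, io_sq_thread already reaps IOPOLL
      		 * completions; only poll here without SQPOLL */
      		if ((ctx->flags & IORING_SETUP_IOPOLL) &&
      		    !(ctx->flags & IORING_SETUP_SQPOLL))
      			ret = io_iopoll_check(ctx, &nr_events, min_complete);
      		else
      			ret = io_cqring_wait(ctx, min_complete, sig, sigsz);
      	}
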
      Signed-off-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  16. 10 Mar 2020 (7 commits)
    • io_uring: Fix unused function warnings · 469956e8
      Committed by YueHaibing
      If CONFIG_NET is not set, gcc warns:
      
      fs/io_uring.c:3110:12: warning: io_setup_async_msg defined but not used [-Wunused-function]
       static int io_setup_async_msg(struct io_kiocb *req,
                  ^~~~~~~~~~~~~~~~~~
      
      There are many functions wrapped by CONFIG_NET; move them together to
      simplify the code, which also fixes this warning.
      Reported-by: Hulk Robot <hulkci@huawei.com>
      Signed-off-by: YueHaibing <yuehaibing@huawei.com>
      
      Minor tweaks.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: add end-of-bits marker and build time verify it · 84557871
      Committed by Jens Axboe
      It's not easy to tell if we're going over the number of bits we can
      shove into req->flags, so add an end-of-bits marker and a
      BUILD_BUG_ON() check for it.
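
      The shape of the check, sketched (the enum values are illustrative):

      	enum {
      		REQ_F_FIXED_FILE_BIT,
      		/* ... all other flag bits ... */
      		__REQ_F_LAST_BIT,	/* end-of-bits marker, not a flag */
      	};

      	/* fails the build if the flag bits outgrow req->flags */
      	BUILD_BUG_ON(__REQ_F_LAST_BIT >= 8 * sizeof(int));
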
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: provide means of removing buffers · 067524e9
      Committed by Jens Axboe
      We have IORING_OP_PROVIDE_BUFFERS, but the only way to remove buffers
      is to trigger IO on them. The usual way to shrink a buffer pool would
      be to simply not replenish the buffers when IO completes, and free
      them instead. But it may be nice to have a way to manually remove a
      number of buffers from a given group, and IORING_OP_REMOVE_BUFFERS
      provides that functionality.
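
      Illustrative SQE setup for the new opcode, assuming the field mapping
      described for it (count in ->fd, group in ->buf_group):

      	sqe->opcode = IORING_OP_REMOVE_BUFFERS;
      	sqe->fd = nr_bufs;		/* how many buffers to remove */
      	sqe->buf_group = group_id;	/* group to remove them from */
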
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: add IOSQE_BUFFER_SELECT support for IORING_OP_RECVMSG · 52de1fe1
      Committed by Jens Axboe
      Like IORING_OP_READV, this is limited to supporting just a single
      segment in the iovec passed in.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: add IOSQE_BUFFER_SELECT support for IORING_OP_READV · 4d954c25
      Committed by Jens Axboe
      This adds support for the vectored read. This is limited to supporting
      just 1 segment in the iov, and is provided just for convenience for
      applications that use IORING_OP_READV already.
      
      The iov helpers will be used for IORING_OP_RECVMSG as well.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: support buffer selection for OP_READ and OP_RECV · bcda7baa
      Committed by Jens Axboe
      If a server process has tons of pending socket connections, generally
      it uses epoll to wait for activity. When the socket is ready for reading
      (or writing), the task can select a buffer and issue a recv/send on the
      given fd.
      
      Now that we have fast (non-async thread) support, a task can have tons
      of reads or writes pending. But that means it needs buffers to back
      that data, and if the number of connections is high enough,
      preallocating them for all possible connections is unfeasible.
      
      With IORING_OP_PROVIDE_BUFFERS, an application can register buffers to
      use for any request. The request then sets IOSQE_BUFFER_SELECT in the
      sqe, and a given group ID in sqe->buf_group. When the fd becomes ready,
      a free buffer from the specified group is selected. If none are
      available, the request is terminated with -ENOBUFS. If successful, the
      CQE on completion will contain the buffer ID chosen in the cqe->flags
      member, encoded as:
      
      	(buffer_id << IORING_CQE_BUFFER_SHIFT) | IORING_CQE_F_BUFFER;
      
      Once a buffer has been consumed by a request, it is no longer available
      and must be registered again with IORING_OP_PROVIDE_BUFFERS.
      
      Requests need to support this feature. For now, IORING_OP_READ and
      IORING_OP_RECV support it. This is checked on SQE submission; a CQE
      with res == -EOPNOTSUPP will be posted if it is attempted on an
      unsupported request.
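
      On the completion side, an application would decode this roughly as:

      	if (cqe->flags & IORING_CQE_F_BUFFER)
      		buf_id = cqe->flags >> IORING_CQE_BUFFER_SHIFT;
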
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: add IORING_OP_PROVIDE_BUFFERS · ddf0322d
      Committed by Jens Axboe
      IORING_OP_PROVIDE_BUFFERS uses the buffer registration infrastructure
      to support passing in an addr/len that is associated with a buffer ID
      and buffer group ID. The group ID is used to index and look up the
      buffers, while the buffer ID can be used to notify the application
      which buffer in the group was used. The addr passed in is the starting
      buffer address, and length is the length of each buffer. A number of
      buffers to add can be specified, in which case addr is incremented by
      length for each addition, and each added buffer increments the
      specified buffer ID.
      
      No validation is done of the buffer ID. If the application provides
      buffers within the same group with identical buffer IDs, then it'll have
      a hard time telling which buffer ID was used. The only restriction is
      that the buffer ID can be a max of 16-bits in size, so USHRT_MAX is the
      maximum ID that can be used.
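
      Illustrative SQE setup, assuming the field mapping of this opcode
      (count in ->fd, group in ->buf_group, starting buffer ID in ->off):

      	sqe->opcode = IORING_OP_PROVIDE_BUFFERS;
      	sqe->addr = (unsigned long) base;	/* first buffer address */
      	sqe->len = buf_len;			/* length of each buffer */
      	sqe->fd = nr_bufs;			/* number of buffers */
      	sqe->buf_group = group_id;
      	sqe->off = start_bid;			/* first buffer ID */
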
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  17. 09 Mar 2020 (1 commit)
    • io_uring: ensure RCU callback ordering with rcu_barrier() · 805b13ad
      Committed by Jens Axboe
      After more careful studying, Paul informs me that we cannot rely on
      the ordering of RCU callbacks in the way that the tagged commit did.
      The current construct looks like this:
      
      	void C(struct rcu_head *rhp)
      	{
      		do_something(rhp);
      		call_rcu(&p->rh, B);
      	}
      
      	call_rcu(&p->rh, A);
      	call_rcu(&p->rh, C);
      
      and we're relying on ordering between A and B, which isn't guaranteed.
      Make this explicit instead, and have a work item issue the rcu_barrier()
      to ensure that A has run before we manually execute B.
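
      The fixed construct, sketched (function and field names are
      illustrative):

      	static void io_file_ref_exit_and_free(struct work_struct *work)
      	{
      		struct fixed_file_data *data;

      		data = container_of(work, struct fixed_file_data, ref_work);
      		rcu_barrier();	/* A is guaranteed to have run */
      		percpu_ref_exit(&data->refs);	/* the old "B" step */
      		kfree(data);
      	}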
      
      While thorough testing never showed this issue, it's dependent on the
      per-cpu load in terms of RCU callbacks. The updated method simplifies
      the code as well, and eliminates the need to maintain an rcu_head in
      the fileset data.
      
      Fixes: c1e2148f ("io_uring: free fixed_file_data after RCU grace period")
      Reported-by: Paul E. McKenney <paulmck@kernel.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  18. 07 Mar 2020 (2 commits)
    • io_uring: fix lockup with timeouts · f0e20b89
      Committed by Pavel Begunkov
      There is a recipe to deadlock the kernel: submit a timeout sqe with a
      linked_timeout (e.g.  test_single_link_timeout_ception() from liburing),
      and SIGKILL the process.
      
      Then, io_kill_timeouts() takes @ctx->completion_lock, but the timeout
      isn't flagged with REQ_F_COMP_LOCKED, and will try to double grab it
      during io_put_free() to cancel the linked timeout. Probably, the same
      can happen with another io_kill_timeout() call site, that is
      io_commit_cqring().
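
      The recursion, in outline:

      	io_kill_timeouts(ctx)
      	  -> spin_lock_irq(&ctx->completion_lock)
      	  -> io_kill_timeout(req)	/* lacks REQ_F_COMP_LOCKED */
      	    -> final put tries to cancel the linked timeout
      	      -> takes ctx->completion_lock again => deadlock
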
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: free fixed_file_data after RCU grace period · c1e2148f
      Committed by Jens Axboe
      The percpu refcount protects this structure, and we can have an atomic
      switch in progress when exiting. This makes it unsafe to just free the
      struct normally, and can trigger the following KASAN warning:
      
      BUG: KASAN: use-after-free in percpu_ref_switch_to_atomic_rcu+0xfa/0x1b0
      Read of size 1 at addr ffff888181a19a30 by task swapper/0/0
      
      CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.6.0-rc4+ #5747
      Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1ubuntu1 04/01/2014
      Call Trace:
       <IRQ>
       dump_stack+0x76/0xa0
       print_address_description.constprop.0+0x3b/0x60
       ? percpu_ref_switch_to_atomic_rcu+0xfa/0x1b0
       ? percpu_ref_switch_to_atomic_rcu+0xfa/0x1b0
       __kasan_report.cold+0x1a/0x3d
       ? percpu_ref_switch_to_atomic_rcu+0xfa/0x1b0
       percpu_ref_switch_to_atomic_rcu+0xfa/0x1b0
       rcu_core+0x370/0x830
       ? percpu_ref_exit+0x50/0x50
       ? rcu_note_context_switch+0x7b0/0x7b0
       ? run_rebalance_domains+0x11d/0x140
       __do_softirq+0x10a/0x3e9
       irq_exit+0xd5/0xe0
       smp_apic_timer_interrupt+0x86/0x200
       apic_timer_interrupt+0xf/0x20
       </IRQ>
      RIP: 0010:default_idle+0x26/0x1f0
      
      Fix this by punting the final exit and free of the struct to RCU; then
      we know that it's safe to do so. Jann suggested the approach of using
      a double rcu callback to achieve this. It's important that we do a
      nested call_rcu() callback, as otherwise the free could be ordered
      before the atomic switch, even if the latter was already queued.
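
      The double-callback construct, sketched (names illustrative):

      	static void io_ring_file_free(struct rcu_head *rcu)
      	{
      		struct fixed_file_data *data =
      			container_of(rcu, struct fixed_file_data, rcu);

      		percpu_ref_exit(&data->refs);
      		kfree(data);
      	}

      	static void io_ring_file_ref_switch(struct rcu_head *rcu)
      	{
      		struct fixed_file_data *data =
      			container_of(rcu, struct fixed_file_data, rcu);

      		/* queued from inside the first callback, so it cannot
      		 * run before the atomic-switch callback completes */
      		call_rcu(&data->rcu, io_ring_file_free);
      	}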
      
      Reported-by: syzbot+e017e49c39ab484ac87a@syzkaller.appspotmail.com
      Suggested-by: Jann Horn <jannh@google.com>
      Reviewed-by: Paul E. McKenney <paulmck@kernel.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  19. 05 Mar 2020 (1 commit)