1. 12 Nov 2021, 1 commit
    • io-wq: serialize hash clear with wakeup · d3e3c102
      Committed by Jens Axboe
      We need to ensure that we serialize the stalled and hash bits with the
      wait_queue wait handler, or we could be racing with someone modifying
      the hashed state after we find it busy, but before we then give up and
      wait for it to be cleared. This can cause random delays or stalls when
      handling buffered writes for many files, where some of these files cause
      hash collisions between the worker threads.
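      The fix makes the "is this hash busy?" check and the registration of
      the waiter happen under the same lock that the clearing side takes.
      A minimal userspace sketch of that pattern, using a pthread mutex and
      condition variable in place of the kernel waitqueue (all names here
      are invented for the model; this is not the io-wq code):

      ```c
      #include <pthread.h>
      #include <stdbool.h>
      #include <unistd.h>

      struct hash_state {
          pthread_mutex_t lock;
          pthread_cond_t cleared;
          bool hash_busy;          /* models the per-hash "stalled" state */
      };

      /* Fixed ordering: the busy check and the decision to sleep happen
       * under the same lock the clearer takes, so a clear that races with
       * us cannot be missed between "found busy" and "went to sleep". */
      static void wait_for_hash_clear(struct hash_state *hs)
      {
          pthread_mutex_lock(&hs->lock);
          while (hs->hash_busy)
              pthread_cond_wait(&hs->cleared, &hs->lock);
          pthread_mutex_unlock(&hs->lock);
      }

      static void clear_hash(struct hash_state *hs)
      {
          pthread_mutex_lock(&hs->lock);
          hs->hash_busy = false;
          pthread_cond_broadcast(&hs->cleared);
          pthread_mutex_unlock(&hs->lock);
      }

      static void *clearer(void *arg)
      {
          usleep(10000);           /* clear shortly after the waiter starts */
          clear_hash(arg);
          return NULL;
      }

      /* Returns 0 when the waiter observed the clear. */
      int hash_clear_demo(void)
      {
          struct hash_state hs = {
              .lock = PTHREAD_MUTEX_INITIALIZER,
              .cleared = PTHREAD_COND_INITIALIZER,
              .hash_busy = true,
          };
          pthread_t t;

          if (pthread_create(&t, NULL, clearer, &hs))
              return -1;
          wait_for_hash_clear(&hs);   /* would hang if the clear were missed */
          pthread_join(t, NULL);
          return hs.hash_busy ? -1 : 0;
      }
      ```

      If the check were done outside the lock and the sleep armed afterwards,
      the clear could slip into that window and the waiter would stall, which
      is exactly the random delays described above.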
      
      Cc: stable@vger.kernel.org
      Reported-by: Daniel Black <daniel@mariadb.org>
      Fixes: e941894e ("io-wq: make buffered file write hashed work map per-ctx")
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  2. 03 Nov 2021, 1 commit
  3. 29 Oct 2021, 1 commit
    • io-wq: remove worker to owner tw dependency · 1d5f5ea7
      Committed by Pavel Begunkov
      INFO: task iou-wrk-6609:6612 blocked for more than 143 seconds.
            Not tainted 5.15.0-rc5-syzkaller #0
      "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
      task:iou-wrk-6609    state:D stack:27944 pid: 6612 ppid:  6526 flags:0x00004006
      Call Trace:
       context_switch kernel/sched/core.c:4940 [inline]
       __schedule+0xb44/0x5960 kernel/sched/core.c:6287
       schedule+0xd3/0x270 kernel/sched/core.c:6366
       schedule_timeout+0x1db/0x2a0 kernel/time/timer.c:1857
       do_wait_for_common kernel/sched/completion.c:85 [inline]
       __wait_for_common kernel/sched/completion.c:106 [inline]
       wait_for_common kernel/sched/completion.c:117 [inline]
       wait_for_completion+0x176/0x280 kernel/sched/completion.c:138
       io_worker_exit fs/io-wq.c:183 [inline]
       io_wqe_worker+0x66d/0xc40 fs/io-wq.c:597
       ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
      
      io-wq worker may submit a task_work to the master task and upon
      io_worker_exit() wait for the tw to get executed. The problem appears
      when the master task is waiting in coredump.c:
      
      468                     freezer_do_not_count();
      469                     wait_for_completion(&core_state->startup);
      470                     freezer_count();
      
      Apparently there is some dependency on the child threads that gets
      everything stuck. Work around it by cancelling the task_work callback
      that causes it before going into the io_worker_exit() wait.
      
      p.s. probably a better option is to not submit tw elevating the refcount
      in the first place, but let's leave this exercise for the future.
      
      Cc: stable@vger.kernel.org
      Reported-and-tested-by: syzbot+27d62ee6f256b186883e@syzkaller.appspotmail.com
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Link: https://lore.kernel.org/r/142a716f4ed936feae868959059154362bfa8c19.1635509451.git.asml.silence@gmail.com
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  4. 23 Oct 2021, 1 commit
  5. 20 Oct 2021, 1 commit
  6. 19 Oct 2021, 1 commit
  7. 28 Sep 2021, 1 commit
  8. 25 Sep 2021, 1 commit
  9. 20 Sep 2021, 1 commit
    • audit,io_uring,io-wq: add some basic audit support to io_uring · 5bd2182d
      Committed by Paul Moore
      This patch adds basic auditing to io_uring operations, regardless of
      their context.  This is accomplished by allocating audit_context
      structures for the io-wq worker and io_uring SQPOLL kernel threads
      as well as explicitly auditing the io_uring operations in
      io_issue_sqe().  Individual io_uring operations can bypass auditing
      through the "audit_skip" field in the struct io_op_def definition for
      the operation; although great care must be taken so that security
      relevant io_uring operations do not bypass auditing; please contact
      the audit mailing list (see the MAINTAINERS file) with any questions.
      
      The io_uring operations are audited using a new AUDIT_URINGOP record,
      an example is shown below:
      
        type=UNKNOWN[1336] msg=audit(1631800225.981:37289):
          uring_op=19 success=yes exit=0 items=0 ppid=15454 pid=15681
          uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0
          subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
          key=(null)
      
      Thanks to Richard Guy Briggs for review and feedback.
      Signed-off-by: Paul Moore <paul@paul-moore.com>
  10. 14 Sep 2021, 1 commit
  11. 13 Sep 2021, 2 commits
  12. 09 Sep 2021, 2 commits
    • io-wq: fix memory leak in create_io_worker() · 66e70be7
      Committed by Qiang.zhang
      BUG: memory leak
      unreferenced object 0xffff888126fcd6c0 (size 192):
        comm "syz-executor.1", pid 11934, jiffies 4294983026 (age 15.690s)
        backtrace:
          [<ffffffff81632c91>] kmalloc_node include/linux/slab.h:609 [inline]
          [<ffffffff81632c91>] kzalloc_node include/linux/slab.h:732 [inline]
          [<ffffffff81632c91>] create_io_worker+0x41/0x1e0 fs/io-wq.c:739
          [<ffffffff8163311e>] io_wqe_create_worker fs/io-wq.c:267 [inline]
          [<ffffffff8163311e>] io_wqe_enqueue+0x1fe/0x330 fs/io-wq.c:866
          [<ffffffff81620b64>] io_queue_async_work+0xc4/0x200 fs/io_uring.c:1473
          [<ffffffff8162c59c>] __io_queue_sqe+0x34c/0x510 fs/io_uring.c:6933
          [<ffffffff8162c7ab>] io_req_task_submit+0x4b/0xa0 fs/io_uring.c:2233
          [<ffffffff8162cb48>] io_async_task_func+0x108/0x1c0 fs/io_uring.c:5462
          [<ffffffff816259e3>] tctx_task_work+0x1b3/0x3a0 fs/io_uring.c:2158
          [<ffffffff81269b43>] task_work_run+0x73/0xb0 kernel/task_work.c:164
          [<ffffffff812dcdd1>] tracehook_notify_signal include/linux/tracehook.h:212 [inline]
          [<ffffffff812dcdd1>] handle_signal_work kernel/entry/common.c:146 [inline]
          [<ffffffff812dcdd1>] exit_to_user_mode_loop kernel/entry/common.c:172 [inline]
          [<ffffffff812dcdd1>] exit_to_user_mode_prepare+0x151/0x180 kernel/entry/common.c:209
          [<ffffffff843ff25d>] __syscall_exit_to_user_mode_work kernel/entry/common.c:291 [inline]
          [<ffffffff843ff25d>] syscall_exit_to_user_mode+0x1d/0x40 kernel/entry/common.c:302
          [<ffffffff843fa4a2>] do_syscall_64+0x42/0xb0 arch/x86/entry/common.c:86
          [<ffffffff84600068>] entry_SYSCALL_64_after_hwframe+0x44/0xae
      
      When create_io_thread() returns an error and we don't retry, the worker
      object needs to be freed.
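      The shape of the fixed error path can be sketched in plain userspace C
      (fake_create_io_thread and the counter are invented for this model;
      the real fix frees the kzalloc'd worker in create_io_worker() when
      create_io_thread() fails):

      ```c
      #include <stdlib.h>
      #include <stdbool.h>

      static int live_workers;   /* stands in for the leak the commit fixes */

      /* Hypothetical stand-in for create_io_thread(): fails on demand. */
      static int fake_create_io_thread(bool should_fail)
      {
          return should_fail ? -1 : 0;
      }

      int create_worker_model(bool thread_creation_fails)
      {
          /* kzalloc_node() in the real code; size 192 matches the report */
          void *worker = calloc(1, 192);
          if (!worker)
              return -1;
          live_workers++;

          if (fake_create_io_thread(thread_creation_fails) < 0) {
              /* the fix: release the allocation instead of leaking it */
              free(worker);
              live_workers--;
              return -1;
          }

          /* normal teardown, so the demo itself does not leak */
          free(worker);
          live_workers--;
          return 0;
      }
      ```

      With the buggy code the early return skipped the free(), leaving one
      unreferenced 192-byte object per failed creation, which is what the
      kmemleak backtrace above shows.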
      
      Reported-by: syzbot+65454c239241d3d647da@syzkaller.appspotmail.com
      Signed-off-by: Qiang.zhang <qiang.zhang@windriver.com>
      Link: https://lore.kernel.org/r/20210909115822.181188-1-qiang.zhang@windriver.com
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io-wq: fix silly logic error in io_task_work_match() · 3b33e3f4
      Committed by Jens Axboe
      We check for the func with an OR condition, which means it always ends
      up being false and we never match the task_work we want to cancel. In
      the unexpected case that we do exit with that pending, we can trigger
      a hang waiting for a worker to exit, but it was never created. syzbot
      reports that as such:
      
      INFO: task syz-executor687:8514 blocked for more than 143 seconds.
            Not tainted 5.14.0-syzkaller #0
      "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
      task:syz-executor687 state:D stack:27296 pid: 8514 ppid:  8479 flags:0x00024004
      Call Trace:
       context_switch kernel/sched/core.c:4940 [inline]
       __schedule+0x940/0x26f0 kernel/sched/core.c:6287
       schedule+0xd3/0x270 kernel/sched/core.c:6366
       schedule_timeout+0x1db/0x2a0 kernel/time/timer.c:1857
       do_wait_for_common kernel/sched/completion.c:85 [inline]
       __wait_for_common kernel/sched/completion.c:106 [inline]
       wait_for_common kernel/sched/completion.c:117 [inline]
       wait_for_completion+0x176/0x280 kernel/sched/completion.c:138
       io_wq_exit_workers fs/io-wq.c:1162 [inline]
       io_wq_put_and_exit+0x40c/0xc70 fs/io-wq.c:1197
       io_uring_clean_tctx fs/io_uring.c:9607 [inline]
       io_uring_cancel_generic+0x5fe/0x740 fs/io_uring.c:9687
       io_uring_files_cancel include/linux/io_uring.h:16 [inline]
       do_exit+0x265/0x2a30 kernel/exit.c:780
       do_group_exit+0x125/0x310 kernel/exit.c:922
       get_signal+0x47f/0x2160 kernel/signal.c:2868
       arch_do_signal_or_restart+0x2a9/0x1c40 arch/x86/kernel/signal.c:865
       handle_signal_work kernel/entry/common.c:148 [inline]
       exit_to_user_mode_loop kernel/entry/common.c:172 [inline]
       exit_to_user_mode_prepare+0x17d/0x290 kernel/entry/common.c:209
       __syscall_exit_to_user_mode_work kernel/entry/common.c:291 [inline]
       syscall_exit_to_user_mode+0x19/0x60 kernel/entry/common.c:302
       do_syscall_64+0x42/0xb0 arch/x86/entry/common.c:86
       entry_SYSCALL_64_after_hwframe+0x44/0xae
      RIP: 0033:0x445cd9
      RSP: 002b:00007fc657f4b308 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
      RAX: 0000000000000001 RBX: 00000000004cb448 RCX: 0000000000445cd9
      RDX: 00000000000f4240 RSI: 0000000000000081 RDI: 00000000004cb44c
      RBP: 00000000004cb440 R08: 000000000000000e R09: 0000000000000000
      R10: 0000000000000000 R11: 0000000000000246 R12: 000000000049b154
      R13: 0000000000000003 R14: 00007fc657f4b400 R15: 0000000000022000
      
      While in there, also decrement accr->nr_workers. This isn't strictly
      needed as we're exiting, but let's make sure the accounting matches up.
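      The logic error is the classic inverted De Morgan: for two distinct
      callbacks A and B, the predicate (f != A || f != B) is true for every
      f, so the matcher never fires. A standalone sketch (callback names
      mirror the kernel's, but this is an illustrative model, not the io-wq
      code):

      ```c
      #include <stdbool.h>

      typedef void (*task_work_func_t)(void);

      static void create_worker_cb(void)   {}
      static void create_worker_cont(void) {}
      static void unrelated_cb(void)       {}

      /* Buggy match: (f != A || f != B) holds for every f when A != B,
       * so this always bails out and never cancels anything. */
      static bool io_task_work_match_buggy(task_work_func_t f)
      {
          if (f != create_worker_cb || f != create_worker_cont)
              return false;
          return true;
      }

      /* Fixed match: bail out only when f differs from BOTH callbacks. */
      static bool io_task_work_match_fixed(task_work_func_t f)
      {
          if (f != create_worker_cb && f != create_worker_cont)
              return false;
          return true;
      }
      ```

      With the buggy version the pending worker-creation task_work is never
      matched and cancelled, so exit waits forever for a worker that was
      never created, matching the hung-task report above.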
      
      Fixes: 3146cba9 ("io-wq: make worker creation resilient against signals")
      Reported-by: syzbot+f62d3e0a4ea4f38f5326@syzkaller.appspotmail.com
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  13. 08 Sep 2021, 1 commit
    • io-wq: fix cancellation on create-worker failure · 713b9825
      Committed by Pavel Begunkov
      WARNING: CPU: 0 PID: 10392 at fs/io_uring.c:1151 req_ref_put_and_test
      fs/io_uring.c:1151 [inline]
      WARNING: CPU: 0 PID: 10392 at fs/io_uring.c:1151 req_ref_put_and_test
      fs/io_uring.c:1146 [inline]
      WARNING: CPU: 0 PID: 10392 at fs/io_uring.c:1151
      io_req_complete_post+0xf5b/0x1190 fs/io_uring.c:1794
      Modules linked in:
      Call Trace:
       tctx_task_work+0x1e5/0x570 fs/io_uring.c:2158
       task_work_run+0xe0/0x1a0 kernel/task_work.c:164
       tracehook_notify_signal include/linux/tracehook.h:212 [inline]
       handle_signal_work kernel/entry/common.c:146 [inline]
       exit_to_user_mode_loop kernel/entry/common.c:172 [inline]
       exit_to_user_mode_prepare+0x232/0x2a0 kernel/entry/common.c:209
       __syscall_exit_to_user_mode_work kernel/entry/common.c:291 [inline]
       syscall_exit_to_user_mode+0x19/0x60 kernel/entry/common.c:302
       do_syscall_64+0x42/0xb0 arch/x86/entry/common.c:86
       entry_SYSCALL_64_after_hwframe+0x44/0xae
      
      When io_wqe_enqueue() -> io_wqe_create_worker() fails, we can't just
      call io_run_cancel() to clean up the request: it's already enqueued via
      io_wqe_insert_work() and will be executed by some other worker during
      cancellation (e.g. in io_wq_put_and_exit()).
      Reported-by: Hao Sun <sunhao.th@gmail.com>
      Fixes: 3146cba9 ("io-wq: make worker creation resilient against signals")
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Link: https://lore.kernel.org/r/93b9de0fcf657affab0acfd675d4abcd273ee863.1631092071.git.asml.silence@gmail.com
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  14. 03 Sep 2021, 2 commits
    • io-wq: make worker creation resilient against signals · 3146cba9
      Committed by Jens Axboe
      If a task is queueing async work and also handling signals, then we can
      run into the case where create_io_thread() is interrupted and returns
      failure because of that. If this happens for creating the first worker
      in a group, then that worker will never get created and we can hang the
      ring.
      
      If we do get a fork failure, retry from task_work. With signals we have
      to be a bit careful as we cannot simply queue as task_work, as we'll
      still have signals pending at that point. Punt over a normal workqueue
      first and then create from task_work after that.
      
      Lastly, ensure that we handle fatal worker creation failures. Worker
      creation failures are normally not fatal; only if we fail to create one
      in an empty worker group can we not make progress. Right now that is
      ignored; ensure that we handle it and run cancel on the work item.
      
      There are two paths that create new workers - one is the "existing worker
      going to sleep", and the other is "no workers found for this work, create
      one". The former is never fatal, as workers do exist in the group. Only
      the latter needs to be carefully handled.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io-wq: get rid of FIXED worker flag · 05c5f4ee
      Committed by Jens Axboe
      It makes the logic easier to follow if we just get rid of the fixed worker
      flag, and simply ensure that we never exit the last worker in the group.
      This also means that no particular worker is special.
      
      Just track the last timeout state, and if we have hit it and no work
      is pending, check if there are other workers. If yes, then we can exit
      this one safely.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  15. 02 Sep 2021, 2 commits
    • io-wq: only exit on fatal signals · 15e20db2
      Committed by Jens Axboe
      If the application uses io_uring and also relies heavily on signals
      for communication, that can cause io-wq workers to spuriously exit
      just because the parent has a signal pending. Just ignore signals
      unless they are fatal.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io-wq: split bounded and unbounded work into separate lists · f95dc207
      Committed by Jens Axboe
      We've got a few issues that all boil down to the fact that we have one
      list of pending work items, yet two different types of workers to
      serve them. This causes some oddities around workers switching type and
      even hashed work vs regular work on the same bounded list.
      
      Just separate them out cleanly, similarly to how we already do
      accounting of what is running. That provides a clean separation and
      removes some corner cases that can cause stalls when handling IO
      that is punted to io-wq.
      
      Fixes: ecc53c48 ("io-wq: check max_worker limits if a worker transitions bound state")
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  16. 01 Sep 2021, 3 commits
    • io-wq: fix queue stalling race · 0242f642
      Committed by Jens Axboe
      We need to set the stalled bit early, before we drop the lock for adding
      us to the stall hash queue. If not, then we can race with new work being
      queued between adding us to the stall hash and io_worker_handle_work()
      marking us stalled.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io-wq: ensure that hash wait lock is IRQ disabling · 08bdbd39
      Committed by Jens Axboe
      A previous commit removed the IRQ safety of the worker and wqe locks,
      but that left one spot of the hash wait lock now being done without
      already having IRQs disabled.
      
      Ensure that we use the right locking variant for the hashed waitqueue
      lock.
      
      Fixes: a9a4aa9f ("io-wq: wqe and worker locks no longer need to be IRQ safe")
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io-wq: fix race between adding work and activating a free worker · 94ffb0a2
      Committed by Jens Axboe
      The attempt to find and activate a free worker for new work is currently
      combined with creating a new one if we don't find one, but that opens
      io-wq up to a race where the worker that is found and activated can
      put itself to sleep without knowing that it has been selected to perform
      this new work.
      
      Fix this by moving the activation into where we add the new work item;
      then we can retain it within the wqe->lock scope and eliminate the race
      with the worker itself checking inside the lock, but sleeping outside of
      it.
      
      Cc: stable@vger.kernel.org
      Reported-by: Andres Freund <andres@anarazel.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  17. 30 Aug 2021, 3 commits
    • io-wq: fix wakeup race when adding new work · 87df7fb9
      Committed by Jens Axboe
      When new work is added, io_wqe_enqueue() checks if we need to wake or
      create a new worker. But that check is done outside the lock that
      otherwise synchronizes us with a worker going to sleep, so we can end
      up in the following situation:
      
      CPU0				CPU1
      lock
      insert work
      unlock
      atomic_read(nr_running) != 0
      				lock
      				atomic_dec(nr_running)
      no wakeup needed
      
      Hold the wqe lock around the "need to wakeup" check. Then we can also get
      rid of the temporary work_flags variable, as we know the work will remain
      valid as long as we hold the lock.
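      A minimal userspace model of the fixed ordering (struct and function
      names are invented for the sketch; a pthread mutex stands in for
      wqe->lock): both the enqueue-side "does anyone need waking?" check and
      the worker's nr_running decrement happen inside the same lock, so the
      CPU0/CPU1 interleaving above is no longer possible.

      ```c
      #include <pthread.h>

      struct wqe_model {
          pthread_mutex_t lock;   /* stands in for wqe->lock */
          int nr_running;         /* workers currently awake */
          int queued;             /* pending work items */
          int wakeups;            /* wakeups issued */
      };

      /* Fixed io_wqe_enqueue(): insert AND check under the same lock. */
      void enqueue_model(struct wqe_model *wqe)
      {
          pthread_mutex_lock(&wqe->lock);
          wqe->queued++;
          /* a sleeper cannot slip in between the insert and this check */
          if (wqe->nr_running == 0)
              wqe->wakeups++;     /* models activating a free worker */
          pthread_mutex_unlock(&wqe->lock);
      }

      /* Worker going to sleep decrements nr_running under the same lock. */
      void worker_sleep_model(struct wqe_model *wqe)
      {
          pthread_mutex_lock(&wqe->lock);
          wqe->nr_running--;
          pthread_mutex_unlock(&wqe->lock);
      }
      ```

      In the buggy version the nr_running check ran after unlock, so the
      decrement could land in between and the work sat queued with no awake
      worker and no wakeup issued.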
      
      Cc: stable@vger.kernel.org
      Reported-by: Andres Freund <andres@anarazel.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io-wq: wqe and worker locks no longer need to be IRQ safe · a9a4aa9f
      Committed by Jens Axboe
      io_uring no longer queues async work off completion handlers that run in
      hard or soft interrupt context, and that use case was the only reason that
      io-wq had to use IRQ safe locks for wqe and worker locks.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io-wq: check max_worker limits if a worker transitions bound state · ecc53c48
      Committed by Jens Axboe
      For the two places where new workers are created, we diligently check if
      we are allowed to create a new worker. If we're currently at the limit
      of how many workers of a given type we can have, then we don't create
      any new ones.
      
      If you have a mixed workload with various types of bound and unbounded
      work, then it can happen that a worker finishes one type of work and
      is then transitioned to the other type. For this case, we don't check
      if we are actually allowed to do so. This can cause io-wq to temporarily
      exceed the allowed number of workers for a given type.
      
      When retrieving work, check that the types match. If they don't, check
      if we are allowed to transition to the other type. If not, then don't
      handle the new work.
      
      Cc: stable@vger.kernel.org
      Reported-by: Johannes Lundberg <johalun0@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  18. 29 Aug 2021, 1 commit
    • io-wq: provide a way to limit max number of workers · 2e480058
      Committed by Jens Axboe
      io-wq divides work into two categories:
      
      1) Work that completes in a bounded time, like reading from a regular file
         or a block device. This type of work is limited based on the size of
         the SQ ring.
      
      2) Work that may never complete, which we call unbounded work. The
         number of workers here is limited only by RLIMIT_NPROC.
      
      For various use cases, it's handy to have the kernel limit the maximum
      number of workers for both categories. Provide a way to do so with a
      new IORING_REGISTER_IOWQ_MAX_WORKERS operation.
      
      IORING_REGISTER_IOWQ_MAX_WORKERS takes an array of two integers and sets
      the max worker count to what is being passed in for each category. The
      old values are returned into that same array. If 0 is being passed in for
      either category, it simply returns the current value.
      
      The value is capped at RLIMIT_NPROC. This actually isn't that important
      as it's more of a hint, if we're exceeding the value then our attempt
      to fork a new worker will fail. This happens naturally already if more
      than one node is in the system, as these values are per-node internally
      for io-wq.
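      The in/out array semantics described above can be modeled in plain C.
      This is an illustrative model of the documented behaviour, not the
      kernel implementation (in userspace the operation is reached via
      io_uring_register(), and liburing later wrapped it as
      io_uring_register_iowq_max_workers()):

      ```c
      #include <stddef.h>

      enum { IO_WQ_BOUND, IO_WQ_UNBOUND, IO_WQ_ACCT_NR };

      /* For each of the two slots: a non-zero input sets a new cap,
       * clamped to an RLIMIT_NPROC-like limit; a zero input only reads.
       * The old value is written back into the same array either way. */
      void iowq_max_workers_model(unsigned int cur[IO_WQ_ACCT_NR],
                                  unsigned int rlimit_nproc,
                                  unsigned int vals[IO_WQ_ACCT_NR])
      {
          for (size_t i = 0; i < IO_WQ_ACCT_NR; i++) {
              unsigned int old = cur[i];

              if (vals[i]) {
                  unsigned int v = vals[i];

                  if (v > rlimit_nproc)   /* capped at RLIMIT_NPROC */
                      v = rlimit_nproc;
                  cur[i] = v;
              }
              vals[i] = old;   /* old values returned in the same array */
          }
      }
      ```

      Passing {0, 0} is therefore a pure query, which is how an application
      can read the current limits without changing them.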
      Reported-by: Johannes Lundberg <johalun0@gmail.com>
      Link: https://github.com/axboe/liburing/issues/420
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  19. 24 Aug 2021, 2 commits
    • io-wq: move nr_running and worker_refs out of wqe->lock protection · 79dca184
      Committed by Hao Xu
      We don't need to protect nr_running and worker_refs with wqe->lock, so
      narrow the raw_spin_lock_irq/raw_spin_unlock_irq critical section.
      Signed-off-by: Hao Xu <haoxu@linux.alibaba.com>
      Link: https://lore.kernel.org/r/20210810125554.99229-1-haoxu@linux.alibaba.com
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io-wq: remove GFP_ATOMIC allocation off schedule out path · d3e9f732
      Committed by Jens Axboe
      Daniel reports that the v5.14-rc4-rt4 kernel throws a BUG when running
      stress-ng:
      
      | [   90.202543] BUG: sleeping function called from invalid context at kernel/locking/spinlock_rt.c:35
      | [   90.202549] in_atomic(): 1, irqs_disabled(): 1, non_block: 0, pid: 2047, name: iou-wrk-2041
      | [   90.202555] CPU: 5 PID: 2047 Comm: iou-wrk-2041 Tainted: G        W         5.14.0-rc4-rt4+ #89
      | [   90.202559] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.14.0-2 04/01/2014
      | [   90.202561] Call Trace:
      | [   90.202577]  dump_stack_lvl+0x34/0x44
      | [   90.202584]  ___might_sleep.cold+0x87/0x94
      | [   90.202588]  rt_spin_lock+0x19/0x70
      | [   90.202593]  ___slab_alloc+0xcb/0x7d0
      | [   90.202598]  ? newidle_balance.constprop.0+0xf5/0x3b0
      | [   90.202603]  ? dequeue_entity+0xc3/0x290
      | [   90.202605]  ? io_wqe_dec_running.isra.0+0x98/0xe0
      | [   90.202610]  ? pick_next_task_fair+0xb9/0x330
      | [   90.202612]  ? __schedule+0x670/0x1410
      | [   90.202615]  ? io_wqe_dec_running.isra.0+0x98/0xe0
      | [   90.202618]  kmem_cache_alloc_trace+0x79/0x1f0
      | [   90.202621]  io_wqe_dec_running.isra.0+0x98/0xe0
      | [   90.202625]  io_wq_worker_sleeping+0x37/0x50
      | [   90.202628]  schedule+0x30/0xd0
      | [   90.202630]  schedule_timeout+0x8f/0x1a0
      | [   90.202634]  ? __bpf_trace_tick_stop+0x10/0x10
      | [   90.202637]  io_wqe_worker+0xfd/0x320
      | [   90.202641]  ? finish_task_switch.isra.0+0xd3/0x290
      | [   90.202644]  ? io_worker_handle_work+0x670/0x670
      | [   90.202646]  ? io_worker_handle_work+0x670/0x670
      | [   90.202649]  ret_from_fork+0x22/0x30
      
      which is due to the RT kernel not liking a GFP_ATOMIC allocation inside
      a raw spinlock. Besides that not working on RT, doing any kind of
      allocation from inside schedule() is kind of nasty and should be avoided
      if at all possible.
      
      This particular path happens when an io-wq worker goes to sleep, and we
      need a new worker to handle pending work. We currently allocate a small
      data item to hold the information we need to create a new worker, but we
      can instead include this data in the io_worker struct itself and just
      protect it with a single bit lock. We only really need one per worker
      anyway, as we will have run pending work between two sleep cycles.
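      The "single bit lock" idea can be sketched with a C11 atomic flag
      (struct and field names here are invented for the model; the real code
      uses the kernel's test_and_set_bit() on a flag in struct io_worker):
      claiming the embedded record is one test-and-set, which never sleeps
      and never allocates, so it is safe from the schedule-out path.

      ```c
      #include <stdatomic.h>
      #include <stdbool.h>

      struct worker_model {
          atomic_flag create_in_use;   /* the "bit lock" */
          int create_index;            /* embedded data, valid while claimed */
      };

      /* Returns true if we claimed the embedded record and may schedule
       * worker creation; false if a creation is already in flight. */
      bool claim_create_slot(struct worker_model *w, int index)
      {
          if (atomic_flag_test_and_set(&w->create_in_use))
              return false;            /* already claimed, nothing to do */
          w->create_index = index;     /* safe: we own the slot now */
          return true;
      }

      void release_create_slot(struct worker_model *w)
      {
          atomic_flag_clear(&w->create_in_use);
      }
      ```

      One slot per worker suffices because, as noted above, a worker runs
      its pending work between two sleep cycles, so at most one creation
      request per worker can be outstanding.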
      
      Link: https://lore.kernel.org/lkml/20210804082418.fbibprcwtzyt5qax@beryllium.lan/
      Reported-by: Daniel Wagner <dwagner@suse.de>
      Tested-by: Daniel Wagner <dwagner@suse.de>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  20. 10 Aug 2021, 2 commits
  21. 06 Aug 2021, 2 commits
    • io-wq: fix lack of acct->nr_workers < acct->max_workers judgement · 21698274
      Committed by Hao Xu
      This check should be made before we create an io-worker.
      
      Fixes: 685fe7fe ("io-wq: eliminate the need for a manager thread")
      Signed-off-by: Hao Xu <haoxu@linux.alibaba.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io-wq: fix no lock protection of acct->nr_worker · 3d4e4fac
      Committed by Hao Xu
      There is an acct->nr_worker access without lock protection. Consider
      this case: two callers call io_wqe_wake_worker(), one from the original
      context and the other from an io-worker (by calling
      io_wqe_enqueue(wqe, linked)), on two CPUs in parallel; this may cause
      nr_worker to become larger than max_worker.
      Let's fix it by adding a lock for it, and let's do nr_workers++ before
      create_io_worker. There may be an edge case where the first caller fails
      to create an io-worker, but the second caller doesn't know it and then
      quits creating io-workers as well:
      
      say nr_worker = max_worker - 1
              cpu 0                        cpu 1
         io_wqe_wake_worker()          io_wqe_wake_worker()
            nr_worker < max_worker
            nr_worker++
            create_io_worker()         nr_worker == max_worker
               failed                  return
            return
      
      But the chance of this case is very slim.
      
      Fixes: 685fe7fe ("io-wq: eliminate the need for a manager thread")
      Signed-off-by: Hao Xu <haoxu@linux.alibaba.com>
      [axboe: fix unconditional create_io_worker() call]
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  22. 05 Aug 2021, 1 commit
  23. 24 Jul 2021, 1 commit
    • io_uring: explicitly catch any illegal async queue attempt · 991468dc
      Committed by Jens Axboe
      Catch an illegal case to queue async from an unrelated task that got
      the ring fd passed to it. This should not be possible to hit, but
      better be proactive and catch it explicitly. io-wq is extended to
      check for early IO_WQ_WORK_CANCEL being set on a work item as well,
      so it can run the request through the normal cancelation path.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  24. 18 Jun 2021, 3 commits
  25. 16 Jun 2021, 2 commits
  26. 14 Jun 2021, 1 commit