1. 19 Oct 2021, 16 commits
  2. 18 Oct 2021, 3 commits
  3. 14 Oct 2021, 1 commit
  4. 02 Oct 2021, 1 commit
  5. 25 Sep 2021, 8 commits
  6. 15 Sep 2021, 3 commits
  7. 14 Sep 2021, 3 commits
  8. 13 Sep 2021, 1 commit
  9. 10 Sep 2021, 1 commit
  10. 09 Sep 2021, 3 commits
    • io_uring: fail links of cancelled timeouts · 2ae2eb9d
      By Pavel Begunkov
      When we cancel a timeout, we should mark it with REQ_F_FAIL so that
      its linked requests are cancelled as well rather than queued for
      further execution.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Link: https://lore.kernel.org/r/fff625b44eeced3a5cae79f60e6acf3fbdf8f990.1631192135.git.asml.silence@gmail.com
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
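      The fix amounts to flagging the cancelled timeout so the generic
      link-failing path takes over. As a rough illustration of those
      semantics (not the kernel code itself; the struct and helper names
      below are hypothetical stand-ins for io_uring's request and flag
      machinery), a minimal C sketch:

          #include <stdio.h>

          #define REQ_F_FAIL (1u << 0)   /* analogous to io_uring's REQ_F_FAIL */

          struct req {
                  const char   *name;
                  unsigned int  flags;
                  struct req   *link;    /* next request in the linked chain */
          };

          /* Cancelling a timeout marks it failed instead of just removing it. */
          static void cancel_timeout(struct req *req)
          {
                  req->flags |= REQ_F_FAIL;
          }

          /* Completion path: links of a failed request are cancelled, not queued. */
          static void handle_links(struct req *req)
          {
                  for (struct req *nxt = req->link; nxt; nxt = nxt->link) {
                          if (req->flags & REQ_F_FAIL)
                                  printf("%s: cancelled\n", nxt->name);
                          else
                                  printf("%s: queued for execution\n", nxt->name);
                  }
          }

          int main(void)
          {
                  struct req write_req = { "linked write", 0, NULL };
                  struct req timeout   = { "timeout", 0, &write_req };

                  cancel_timeout(&timeout); /* without REQ_F_FAIL, the write would run */
                  handle_links(&timeout);   /* prints "linked write: cancelled" */
                  return 0;
          }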
    • io_uring: drop ctx->uring_lock before acquiring sqd->lock · 009ad9f0
      By Jens Axboe
      The SQPOLL thread dictates the lock order, and we hold ctx->uring_lock
      across all the registration opcodes. We also hold a reference to the
      ctx, and we already drop the lock elsewhere to quiesce, so it's fine
      to drop the ctx lock temporarily in order to grab sqd->lock. This
      fixes the following lockdep splat:
      
      ======================================================
      WARNING: possible circular locking dependency detected
      5.14.0-syzkaller #0 Not tainted
      ------------------------------------------------------
      syz-executor.5/25433 is trying to acquire lock:
      ffff888023426870 (&sqd->lock){+.+.}-{3:3}, at: io_register_iowq_max_workers fs/io_uring.c:10551 [inline]
      ffff888023426870 (&sqd->lock){+.+.}-{3:3}, at: __io_uring_register fs/io_uring.c:10757 [inline]
      ffff888023426870 (&sqd->lock){+.+.}-{3:3}, at: __do_sys_io_uring_register+0x10aa/0x2e70 fs/io_uring.c:10792
      
      but task is already holding lock:
      ffff8880885b40a8 (&ctx->uring_lock){+.+.}-{3:3}, at: __do_sys_io_uring_register+0x2e1/0x2e70 fs/io_uring.c:10791
      
      which lock already depends on the new lock.
      
      the existing dependency chain (in reverse order) is:
      
      -> #1 (&ctx->uring_lock){+.+.}-{3:3}:
             __mutex_lock_common kernel/locking/mutex.c:596 [inline]
             __mutex_lock+0x131/0x12f0 kernel/locking/mutex.c:729
             __io_sq_thread fs/io_uring.c:7291 [inline]
             io_sq_thread+0x65a/0x1370 fs/io_uring.c:7368
             ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
      
      -> #0 (&sqd->lock){+.+.}-{3:3}:
             check_prev_add kernel/locking/lockdep.c:3051 [inline]
             check_prevs_add kernel/locking/lockdep.c:3174 [inline]
             validate_chain kernel/locking/lockdep.c:3789 [inline]
             __lock_acquire+0x2a07/0x54a0 kernel/locking/lockdep.c:5015
             lock_acquire kernel/locking/lockdep.c:5625 [inline]
             lock_acquire+0x1ab/0x510 kernel/locking/lockdep.c:5590
             __mutex_lock_common kernel/locking/mutex.c:596 [inline]
             __mutex_lock+0x131/0x12f0 kernel/locking/mutex.c:729
             io_register_iowq_max_workers fs/io_uring.c:10551 [inline]
             __io_uring_register fs/io_uring.c:10757 [inline]
             __do_sys_io_uring_register+0x10aa/0x2e70 fs/io_uring.c:10792
             do_syscall_x64 arch/x86/entry/common.c:50 [inline]
             do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
             entry_SYSCALL_64_after_hwframe+0x44/0xae
      
      other info that might help us debug this:
      
       Possible unsafe locking scenario:
      
             CPU0                    CPU1
             ----                    ----
        lock(&ctx->uring_lock);
                                     lock(&sqd->lock);
                                     lock(&ctx->uring_lock);
        lock(&sqd->lock);
      
       *** DEADLOCK ***
      
      Fixes: 2e480058 ("io-wq: provide a way to limit max number of workers")
      Reported-by: syzbot+97fa56483f69d677969f@syzkaller.appspotmail.com
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
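      The splat is a classic ABBA inversion: the registration path takes
      ctx->uring_lock then sqd->lock, while the SQPOLL thread takes them in
      the opposite order. The fix keeps a single global order by releasing
      the ctx lock before taking sqd->lock. A self-contained pthread sketch
      of that discipline (illustrative names, not the kernel's locking code):

          #include <pthread.h>
          #include <stdio.h>

          static pthread_mutex_t uring_lock = PTHREAD_MUTEX_INITIALIZER;
          static pthread_mutex_t sqd_lock   = PTHREAD_MUTEX_INITIALIZER;

          /* SQPOLL-style thread: always sqd_lock -> uring_lock. */
          static void *sq_thread(void *arg)
          {
                  (void)arg;
                  pthread_mutex_lock(&sqd_lock);
                  pthread_mutex_lock(&uring_lock);
                  /* ... process the submission queue ... */
                  pthread_mutex_unlock(&uring_lock);
                  pthread_mutex_unlock(&sqd_lock);
                  return NULL;
          }

          /* Registration path, entered with uring_lock held. Taking sqd_lock
           * here directly would invert sq_thread()'s order (ABBA deadlock),
           * so drop uring_lock and retake both in sqd -> uring order. */
          static void register_iowq_max_workers(void)
          {
                  pthread_mutex_unlock(&uring_lock);
                  pthread_mutex_lock(&sqd_lock);
                  pthread_mutex_lock(&uring_lock);

                  puts("limits updated under both locks");

                  pthread_mutex_unlock(&sqd_lock);
                  /* return with uring_lock still held, as the caller expects */
          }

          int main(void)
          {
                  pthread_t t;

                  pthread_create(&t, NULL, sq_thread, NULL);

                  pthread_mutex_lock(&uring_lock);    /* registration entry */
                  register_iowq_max_workers();
                  pthread_mutex_unlock(&uring_lock);  /* registration exit */

                  pthread_join(t, NULL);
                  return 0;
          }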
    • io_uring: fix missing mb() before waitqueue_active · c57a91fb
      By Pavel Begunkov
      In the !SQPOLL case, io_cqring_ev_posted_iopoll() doesn't provide the
      memory barrier required by waitqueue_active(&ctx->poll_wait). There is
      a wq_has_sleeper(), which does smp_mb() inside, but it's called only
      for SQPOLL.
      
      Fixes: 5fd46178 ("io_uring: be smarter about waking multiple CQ ring waiters")
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Link: https://lore.kernel.org/r/2982e53bcea2274006ed435ee2a77197107d8a29.1631130542.git.asml.silence@gmail.com
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
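      The barrier pairs the waker's "post, then check for sleepers" with the
      waiter's "announce, then re-check" so neither side can miss the other.
      A userspace analogue using C11 atomics (the variables and helpers are
      illustrative stand-ins, not io_uring's actual wait machinery):

          #include <stdatomic.h>
          #include <stdbool.h>

          static atomic_bool cqe_posted;  /* the condition: a CQE is visible */
          static atomic_int  nr_waiters;  /* stand-in for waitqueue_active() */

          /* Waiter: announce ourselves, then re-check before sleeping. */
          static bool safe_to_sleep(void)
          {
                  atomic_fetch_add(&nr_waiters, 1);
                  atomic_thread_fence(memory_order_seq_cst);  /* pairs with waker */
                  return !atomic_load_explicit(&cqe_posted, memory_order_relaxed);
          }

          /* Waker: publish the CQE, then look for sleepers. Without the full
           * fence (smp_mb() in the kernel), the store and the load could be
           * reordered and a just-arrived waiter would sleep forever. */
          static bool need_wakeup(void)
          {
                  atomic_store_explicit(&cqe_posted, true, memory_order_relaxed);
                  atomic_thread_fence(memory_order_seq_cst);  /* the missing mb() */
                  return atomic_load_explicit(&nr_waiters, memory_order_relaxed) > 0;
          }

          int main(void)
          {
                  (void)safe_to_sleep();        /* a waiter arrives first...    */
                  return need_wakeup() ? 0 : 1; /* ...so the waker must wake it */
          }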