1. 06 Nov 2019 (4 commits)
  2. 04 Nov 2019 (1 commit)
  3. 02 Nov 2019 (1 commit)
  4. 01 Nov 2019 (2 commits)
    • io_uring: set -EINTR directly when a signal wakes up in io_cqring_wait · e9ffa5c2
      Jackie Liu authored
      We don't use -ERESTARTSYS to tell the application layer to restart the
      system call; we return -EINTR instead. We can set -EINTR directly when
      woken up by a signal, which saves an assignment and a comparison.
      Reviewed-by: Bob Liu <bob.liu@oracle.com>
      Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      e9ffa5c2
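      A minimal sketch of the resulting wait-loop shape (the loop and helper
      names here are illustrative assumptions, not the exact io_cqring_wait()
      code):

      	do {
      		prepare_to_wait(&ctx->wait, &iowq.wq, TASK_INTERRUPTIBLE);
      		if (io_should_wake(&iowq))
      			break;
      		schedule();
      		if (signal_pending(current)) {
      			ret = -EINTR;	/* set directly, no -ERESTARTSYS round trip */
      			break;
      		}
      	} while (1);
      	finish_wait(&ctx->wait, &iowq.wq);
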
    • io_uring: support for generic async request cancel · 62755e35
      Jens Axboe authored
      This adds support for IORING_OP_ASYNC_CANCEL, which will attempt to
      cancel requests that have been punted to async context and are now
      in-flight. This works for regular read/write requests to files, as
      long as they haven't been started yet. For socket-based IO (or things
      like accept4(2)), we can cancel work that is already running as well.
      
      To cancel a request, the sqe must have ->addr set to the user_data of
      the request it wishes to cancel. If the request is cancelled
      successfully, the original request is completed with -ECANCELED
      and the cancel request is completed with a result of 0. If the
      request was already running, the original may or may not complete
      in error. The cancel request will complete with -EALREADY for that
      case. And finally, if the request to cancel wasn't found, the cancel
      request is completed with -ENOENT.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      62755e35
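      A hedged userspace sketch of the interface described above, using raw
      sqe fields plus liburing's get/submit helpers; the ring and
      target_user_data are assumed to exist:

      	struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);

      	memset(sqe, 0, sizeof(*sqe));
      	sqe->opcode = IORING_OP_ASYNC_CANCEL;
      	sqe->addr = (unsigned long) target_user_data;	/* user_data to cancel */
      	sqe->user_data = 0x1234;	/* identifies the cancel request itself */
      	io_uring_submit(&ring);
      	/* cancel cqe->res: 0, -EALREADY or -ENOENT, as described above */
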
  5. 30 Oct 2019 (18 commits)
    • io_uring: io_wq_create() returns an error pointer, not NULL · 975c99a5
      Jens Axboe authored
      syzbot reported an issue where we crash at setup time if failslab is
      used. The issue is that io_wq_create() returns an error pointer on
      failure, not NULL. Hence io_uring thought the io-wq was set up just
      fine, but in reality it's a garbage error pointer.
      
      Use IS_ERR() instead of a NULL check, and assign ret appropriately.
      
      Reported-by: syzbot+221cc24572a2fed23b6b@syzkaller.appspotmail.com
      Fixes: 561fb04a ("io_uring: replace workqueue usage with io-wq")
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      975c99a5
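      The underlying kernel idiom, sketched with illustrative surrounding
      code:

      	struct io_wq *wq;

      	wq = io_wq_create(concurrency, data);	/* arguments illustrative */
      	if (IS_ERR(wq)) {
      		ret = PTR_ERR(wq);	/* propagate the real error */
      		goto err;
      	}
      	/* a plain NULL check would let the garbage error pointer through */
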
    • io_uring: fix race with canceling timeouts · 842f9612
      Jens Axboe authored
      If we get -1 from hrtimer_try_to_cancel(), we know that the timer
      is running. Hence leave all completion to the timeout handler. If
      we don't, we can corrupt the list and miss a completion.
      
      Fixes: 11365043 ("io_uring: add support for canceling timeout requests")
      Reported-by: Hrvoje Zeba <zeba.hrvoje@gmail.com>
      Tested-by: Hrvoje Zeba <zeba.hrvoje@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      842f9612
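      A sketch of the rule the fix enforces; hrtimer_try_to_cancel() is the
      real kernel API, the surrounding names are assumed:

      	ret = hrtimer_try_to_cancel(&req->timeout.timer);
      	if (ret == -1) {
      		/* callback is running right now: it owns completion,
      		 * touching the list here would race with it */
      		return;
      	}
      	/* ret is 0 (was inactive) or 1 (cancelled): safe to unlink
      	 * from the list and complete with -ECANCELED here */
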
    • io_uring: support for larger fixed file sets · 65e19f54
      Jens Axboe authored
      There have been a few requests for supporting more fixed files than 1024.
      This isn't really tricky to do; we just need to split up the file table
      into multiple tables and index appropriately. As we do so, reduce the
      max single file table to 512. This enables us to always do single-page
      allocations for the tables, an improvement over the prior situation.
      
      This patch adds support for up to 64K files, which should be enough for
      everyone.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      65e19f54
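      A sketch of the two-level lookup this implies (structure and helper
      names are assumptions based on the description):

      	#define FILES_PER_TABLE 512	/* one page of file pointers */

      	static struct file *file_from_index(struct io_ring_ctx *ctx,
      					    unsigned index)
      	{
      		struct fixed_file_table *table;

      		table = &ctx->file_table[index >> 9];	/* which 512-slot table */
      		return table->files[index & 511];	/* slot within it */
      	}
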
    • io_uring: protect fixed file indexing with array_index_nospec() · b7620121
      Jens Axboe authored
      We index the file tables with a user given value. After we check
      it's within our limits, use array_index_nospec() to prevent any
      spectre attacks here.
      Suggested-by: Jann Horn <jannh@google.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      b7620121
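      The standard pattern, for reference; array_index_nospec() is the real
      helper from <linux/nospec.h>, the surrounding fields are illustrative:

      	if (index >= ctx->nr_user_files)
      		return -EBADF;
      	index = array_index_nospec(index, ctx->nr_user_files);
      	file = ctx->user_files[index];	/* index can no longer be
      					 * speculated out of bounds */
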
    • io_uring: add support for IORING_OP_ACCEPT · 17f2fe35
      Jens Axboe authored
      This allows an application to call accept4() in an async fashion. Like
      other opcodes, we first try a non-blocking accept, then punt to async
      context if we have to.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      17f2fe35
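      A hedged userspace sketch using liburing's prep helper (the ring and
      listen_fd are assumed to be set up already):

      	struct sockaddr_storage addr;
      	socklen_t addrlen = sizeof(addr);
      	struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);

      	io_uring_prep_accept(sqe, listen_fd, (struct sockaddr *) &addr,
      			     &addrlen, 0);
      	sqe->user_data = 42;
      	io_uring_submit(&ring);
      	/* on completion, cqe->res is the accepted fd or a negative errno */
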
    • io_uring: add support for async work inheriting files · fcb323cc
      Jens Axboe authored
      This is in preparation for adding opcodes that need to add new files
      to a process file table, e.g. system calls like open(2) or accept4(2).

      If an opcode needs this, it must set IO_WQ_WORK_NEEDS_FILES in the work
      item. If work that needs to get punted to async context has this
      set, the async worker will assume the original task's file table before
      executing the work.
      
      Note that opcodes that need access to the current files of an
      application cannot be done through IORING_SETUP_SQPOLL.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      fcb323cc
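      A heavily hedged sketch of the mechanism as described; field names and
      locking follow the commit text, not verified kernel code:

      	/* submission: flag work that must see the submitter's files */
      	if (opcode_needs_files(req))
      		req->work.flags |= IO_WQ_WORK_NEEDS_FILES;

      	/* worker: adopt the original task's file table before running */
      	if (work->flags & IO_WQ_WORK_NEEDS_FILES) {
      		task_lock(current);
      		current->files = work->files;
      		task_unlock(current);
      	}
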
    • io_uring: replace workqueue usage with io-wq · 561fb04a
      Jens Axboe authored
      Drop various work-arounds we have for workqueues:
      
      - We no longer need the async_list for tracking sequential IO.
      
      - We don't have to maintain our own mm tracking/setting.
      
      - We don't need a separate workqueue for buffered writes. This didn't
        even work that well to begin with, as it was suboptimal for multiple
        buffered writers on multiple files.
      
      - We can properly cancel pending interruptible work. This fixes
        deadlocks, particularly with socket IO, where we cannot cancel them
        when the io_uring is closed. Hence the ring will wait forever for
        these requests to complete, which may never happen. This is different
        from disk IO where we know requests will complete in a finite amount
        of time.
      
      - Due to being able to cancel interruptible work that is already
        running, we can implement file table support for work. We need that
        for supporting system calls that add to a process file table.
      
      - It gets us one step closer to adding async support for any system
        call.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      561fb04a
    • io-wq: small threadpool implementation for io_uring · 771b53d0
      Jens Axboe authored
      This adds support for io-wq, a smaller and specialized thread pool
      implementation. This is meant to replace workqueues for io_uring. Among
      the reasons for this addition are:
      
      - We can assign memory context more intelligently and persistently if
        we manage the lifetime of threads.
      
      - We can drop various work-arounds we have in io_uring, like the
        async_list.
      
      - We can implement hashed work insertion, to manage concurrency of
        buffered writes without needing a) an extra workqueue, or b)
        needlessly making the concurrency of said workqueue very low
        which hurts performance of multiple buffered file writers.
      
      - We can implement cancel through signals, for cancelling
        interruptible work like read/write (or send/recv) to/from sockets.
      
      - We need the above cancel for being able to assign and use file tables
        from a process.
      
      - We can implement a more thorough cancel operation in general.
      
      - We need it to move towards a syslet/threadlet model for even faster
        async execution. For that we need to take ownership of the used
        threads.
      
      This list is just off the top of my head. Performance should be the
      same, or better, at least that's what I've seen in my testing. io-wq
      supports basic NUMA functionality, setting up a pool per node.
      
      io-wq hooks into the scheduler's schedule in/out events just like
      workqueue does, and uses that to drive the need for more or fewer
      workers.
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      771b53d0
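      A sketch of how io_uring might hand work to io-wq per the list above;
      the enqueue helpers follow early io-wq naming but should be treated as
      assumptions:

      	struct io_wq_work work;

      	INIT_IO_WORK(&work, io_wq_submit_work);	/* fn that runs the request */
      	if (is_buffered_write(req))
      		/* hashed insertion serializes buffered writes per file */
      		io_wq_enqueue_hashed(ctx->io_wq, &work, req->file->f_inode);
      	else
      		io_wq_enqueue(ctx->io_wq, &work);	/* fully concurrent */
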
    • io_uring: Fix mm_fault with READ/WRITE_FIXED · 95a1b3ff
      Pavel Begunkov authored
      Commit fb5ccc98 ("io_uring: Fix broken links with offloading")
      introduced a potential performance regression with unconditionally
      taking mm even for READ/WRITE_FIXED operations.
      
      Bring that logic back. mm-faulted requests will go through the generic
      submission path, thus honoring links and drains, but will fail further
      on the req->has_user check.
      
      Fixes: fb5ccc98 ("io_uring: Fix broken links with offloading")
      Cc: stable@vger.kernel.org # v5.4
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      95a1b3ff
    • io_uring: remove index from sqe_submit · fa456228
      Pavel Begunkov authored
      submit->index is used only for a bounds check in the submission path
      (i.e. head < ctx->sq_entries). However, it will always be true, as
      1. it's already validated by io_get_sqring()
      2. ctx->sq_entries can't be changed in between, because of the held
      ctx->uring_lock and ctx->refs.
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      fa456228
    • io_uring: add set of tracing events · c826bd7a
      Dmitrii Dolgov authored
      To trace io_uring activity one can get information from workqueue and
      io trace events, but some parts can be hard to identify via that
      approach. Making what happens inside io_uring more transparent is
      important for reasoning about many aspects of it, hence introduce
      a set of tracing events.
      
      All such events could be roughly divided into two categories:
      
      * those that help to understand correctness (from both the kernel
        and the application point of view). E.g. a ring creation, file
        registration, or waiting for an available CQE. The proposed approach
        is to get a pointer to the original structure of interest (ring
        context, or request), and then find relevant events.
        io_uring_queue_async_work also exposes a pointer to work_struct, to
        be able to track down corresponding workqueue events.
      
      * those that provide performance-related information. Mostly these are
        events that change the flow of requests, e.g. whether an async work
        was queued, or delayed due to some dependencies. Another important
        case is how io_uring optimizations (e.g. registered files) are
        utilized.
      Signed-off-by: Dmitrii Dolgov <9erthalion6@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      c826bd7a
    • io_uring: add support for canceling timeout requests · 11365043
      Jens Axboe authored
      We might have cases where the need for a specific timeout is gone, add
      support for canceling an existing timeout operation. This works like the
      POLL_REMOVE command, where the application passes in the user_data of
      the timeout it wishes to cancel in the sqe->addr field.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      11365043
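      Userspace sketch; IORING_OP_TIMEOUT_REMOVE is the opcode this adds,
      while the ring and timeout_user_data are assumed:

      	struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);

      	memset(sqe, 0, sizeof(*sqe));
      	sqe->opcode = IORING_OP_TIMEOUT_REMOVE;
      	sqe->addr = (unsigned long) timeout_user_data;	/* timeout to cancel */
      	io_uring_submit(&ring);
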
    • io_uring: add support for absolute timeouts · a41525ab
      Jens Axboe authored
      This is a pretty trivial addition on top of the relative timeouts
      we have now, but it's handy for ensuring tighter timing for those
      that are building scheduling primitives on top of io_uring.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      a41525ab
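      Sketch of arming an absolute timeout with liburing's prep helper;
      deadline_sec and the ring are assumptions, and the clock used for
      absolute timeouts is hedged here:

      	struct __kernel_timespec ts = { .tv_sec = deadline_sec, .tv_nsec = 0 };
      	struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);

      	/* count 0: a pure timeout, not tied to a completion count */
      	io_uring_prep_timeout(sqe, &ts, 0, IORING_TIMEOUT_ABS);
      	io_uring_submit(&ring);
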
    • io_uring: replace s->needs_lock with s->in_async · ba5290cc
      Jackie Liu authored
      No functional change, just a code cleanup: use s->in_async so the code
      knows in which context it is running.
      Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      ba5290cc
    • io_uring: allow application controlled CQ ring size · 33a107f0
      Jens Axboe authored
      We currently size the CQ ring as twice the SQ ring, to allow some
      flexibility in not overflowing the CQ ring. This is done because the
      SQE lifetime is different from that of the IO request itself; the SQE
      is consumed as soon as the kernel has seen the entry.
      
      Certain applications don't need a huge SQ ring size, since they just
      submit IO in batches. But they may have a lot of requests pending, and
      hence need a big CQ ring to hold them all. By allowing the application
      to control the CQ ring size multiplier, we can cater to those
      applications more efficiently.
      
      If an application wants to define its own CQ ring size, it must set
      IORING_SETUP_CQSIZE in the setup flags, and fill out
      io_uring_params->cq_entries. The value must be a power of two.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      33a107f0
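      Setup sketch; io_uring_setup(2) has no glibc wrapper, hence the raw
      syscall, and the sizes are illustrative:

      	struct io_uring_params p;
      	int ring_fd;

      	memset(&p, 0, sizeof(p));
      	p.flags = IORING_SETUP_CQSIZE;
      	p.cq_entries = 4096;	/* must be a power of two */
      	ring_fd = syscall(__NR_io_uring_setup, 128, &p);	/* small SQ, big CQ */
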
    • io_uring: add support for IORING_REGISTER_FILES_UPDATE · c3a31e60
      Jens Axboe authored
      Allows the application to remove/replace/add files to/from a file set.
      Passes in a struct:
      
      struct io_uring_files_update {
      	__u32 offset;
      	__s32 *fds;
      };
      
      that holds an array of fds, size of array passed in through the usual
      nr_args part of the io_uring_register() system call. The logic is as
      follows:
      
      1) If ->fds[i] is -1, the existing file at i + ->offset is removed from
         the set.
      2) If ->fds[i] is a valid fd, the existing file at i + ->offset is
         replaced with ->fds[i].
      
      For case #2, if the existing file is currently empty (fd == -1), the
      new fd is simply added to the array.
      Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      c3a31e60
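      Usage sketch of the update call, following the struct shown above; the
      raw io_uring_register(2) syscall is used and the fd values are
      illustrative:

      	struct io_uring_files_update up;
      	__s32 fds[2];

      	fds[0] = new_fd;	/* replace (or fill) the slot at offset */
      	fds[1] = -1;		/* remove the file at offset + 1 */
      	up.offset = 10;
      	up.fds = fds;
      	syscall(__NR_io_uring_register, ring_fd,
      		IORING_REGISTER_FILES_UPDATE, &up, 2);	/* nr_args = 2 fds */
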
    • io_uring: allow sparse fixed file sets · 08a45173
      Jens Axboe authored
      This is in preparation for allowing updates to fixed file sets without
      requiring a full unregister+register.
      Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      08a45173
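      Combined with the update opcode above, this allows registering a
      mostly-empty set and filling it later; a hedged sketch:

      	__s32 fds[64];
      	int i;

      	for (i = 0; i < 64; i++)
      		fds[i] = -1;	/* -1 marks a sparse (empty) slot */
      	fds[0] = first_fd;	/* only slot 0 populated up front */
      	syscall(__NR_io_uring_register, ring_fd, IORING_REGISTER_FILES,
      		fds, 64);
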
    • io_uring: run dependent links inline if possible · ba816ad6
      Jens Axboe authored
      Currently any dependent link is executed from a new workqueue context,
      which means that we'll be doing a context switch per link in the chain.
      If we are running the completion of the current request from our async
      workqueue and find that the next request is a link, then run it directly
      from the workqueue context instead of forcing another switch.
      
      This improves the performance of linked SQEs, and reduces the CPU
      overhead.
      Reviewed-by: Jackie Liu <liuyun01@kylinos.cn>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      ba816ad6
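      A rough sketch of the control-flow change (helper names assumed):

      	/* async worker completion path after this change */
      	do {
      		issue_request(req);		/* run to completion */
      		req = take_next_link(req);	/* NULL if no dependent link */
      	} while (req);				/* no requeue, no context switch */
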
  6. 28 Oct 2019 (2 commits)
  7. 26 Oct 2019 (2 commits)
  8. 25 Oct 2019 (6 commits)
    • io_uring: Fix race for sqes with userspace · 935d1e45
      Pavel Begunkov authored
      io_ring_submit() finalises with
      1. io_commit_sqring(), which releases sqes to userspace
      2. then a call to io_queue_link_head(), accessing the released head's sqe
      
      Reorder them.
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      935d1e45
    • io_uring: Fix broken links with offloading · fb5ccc98
      Pavel Begunkov authored
      io_sq_thread() processes sqes by 8 without considering links. As a
      result, links will be randomly subdivided.

      The easiest way to fix it is to call io_get_sqring() inside
      io_submit_sqes(), as io_ring_submit() does.

      Downsides:
      1. This removes the optimisation of not grabbing mm_struct for fixed files
      2. It submits all sqes in one go, without finer-grained scheduling
      interleaved with cq processing.
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      fb5ccc98
    • io_uring: Fix corrupted user_data · 84d55dc5
      Pavel Begunkov authored
      There is a bug where failed linked requests are returned not with the
      specified @user_data, but with garbage from the kernel stack.

      The reason is that io_fail_links() uses req->user_data, which is
      uninitialised when called from io_queue_sqe() on the fail path.
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      84d55dc5
    • cifs: Fix cifsInodeInfo lock_sem deadlock when reconnect occurs · d46b0da7
      Dave Wysochanski authored
      There's a deadlock that is possible and can easily be seen with
      a test where multiple readers open/read/close the same file
      and a disruption occurs causing reconnect.  The deadlock is due
      to a reader thread inside cifs_strict_readv calling down_read and
      obtaining lock_sem, and then after reconnect inside
      cifs_reopen_file calling down_read a second time.  If in
      between the two down_read calls, a down_write comes from
      another process, deadlock occurs.
      
              CPU0                    CPU1
              ----                    ----
      cifs_strict_readv()
       down_read(&cifsi->lock_sem);
                                     _cifsFileInfo_put
                                        OR
                                     cifs_new_fileinfo
                                      down_write(&cifsi->lock_sem);
      cifs_reopen_file()
       down_read(&cifsi->lock_sem);
      
      Fix the above by changing all down_write(lock_sem) calls to a
      down_write_trylock(lock_sem)/msleep() loop, which in turn
      makes the second down_read call benign, since it will never
      block behind the writer while holding lock_sem.
      Signed-off-by: Dave Wysochanski <dwysocha@redhat.com>
      Suggested-by: Ronnie Sahlberg <lsahlber@redhat.com>
      Reviewed-by: Ronnie Sahlberg <lsahlber@redhat.com>
      Reviewed-by: Pavel Shilovsky <pshilov@microsoft.com>
      d46b0da7
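      The writer-side loop described above, sketched; the cifs patch wraps
      this in a small helper, whose exact name is hedged here:

      	static void cifs_down_write(struct rw_semaphore *sem)
      	{
      		/* never sit queued on the semaphore, so a reader taking
      		 * lock_sem a second time cannot deadlock behind us */
      		while (!down_write_trylock(sem))
      			msleep(10);
      	}
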
    • CIFS: Fix use after free of file info structures · 1a67c415
      Pavel Shilovsky authored
      Currently the code assumes that if a file info entry belongs
      to lists of open file handles of an inode and a tcon then
      it has a non-zero reference count. The recent changes broke that
      assumption when putting the last reference of the file info.
      There may be a situation when a file is being deleted but
      nothing prevents another thread from referencing it again
      and starting to use it. This happens because we do not hold
      the inode list lock while checking the number of references
      of the file info structure. Fix this by doing the proper
      locking when doing the check.
      
      Fixes: 487317c9 ("cifs: add spinlock for the openFileList to cifsInodeInfo")
      Fixes: cb248819 ("cifs: use cifsInodeInfo->open_file_lock while iterating to avoid a panic")
      Cc: Stable <stable@vger.kernel.org>
      Reviewed-by: Ronnie Sahlberg <lsahlber@redhat.com>
      Signed-off-by: Pavel Shilovsky <pshilov@microsoft.com>
      Signed-off-by: Steve French <stfrench@microsoft.com>
      1a67c415
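      The shape of the fix, sketched; lock and field names are illustrative,
      following the commit text:

      	spin_lock(&cifs_inode->open_file_lock);
      	if (--cifs_file->count > 0) {
      		/* re-referenced; not the last put after all */
      		spin_unlock(&cifs_inode->open_file_lock);
      		return;
      	}
      	/* last reference: unlink while still holding the list lock,
      	 * so no other thread can find and reuse the entry */
      	list_del(&cifs_file->flist);
      	list_del(&cifs_file->tlist);
      	spin_unlock(&cifs_inode->open_file_lock);
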
    • CIFS: Fix retry mid list corruption on reconnects · abe57073
      Pavel Shilovsky authored
      When the client hits reconnect it iterates over the mid
      pending queue, marking entries for retry and moving them
      to a temporary list to issue callbacks later without holding
      GlobalMid_Lock. At the same time there is no guarantee that
      mids can't be removed from the temporary list or even
      freed completely by another thread. This may cause temporary
      list corruption:
      
      [  430.454897] list_del corruption. prev->next should be ffff98d3a8f316c0, but was 2e885cb266355469
      [  430.464668] ------------[ cut here ]------------
      [  430.466569] kernel BUG at lib/list_debug.c:51!
      [  430.468476] invalid opcode: 0000 [#1] SMP PTI
      [  430.470286] CPU: 0 PID: 13267 Comm: cifsd Kdump: loaded Not tainted 5.4.0-rc3+ #19
      [  430.473472] Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
      [  430.475872] RIP: 0010:__list_del_entry_valid.cold+0x31/0x55
      ...
      [  430.510426] Call Trace:
      [  430.511500]  cifs_reconnect+0x25e/0x610 [cifs]
      [  430.513350]  cifs_readv_from_socket+0x220/0x250 [cifs]
      [  430.515464]  cifs_read_from_socket+0x4a/0x70 [cifs]
      [  430.517452]  ? try_to_wake_up+0x212/0x650
      [  430.519122]  ? cifs_small_buf_get+0x16/0x30 [cifs]
      [  430.521086]  ? allocate_buffers+0x66/0x120 [cifs]
      [  430.523019]  cifs_demultiplex_thread+0xdc/0xc30 [cifs]
      [  430.525116]  kthread+0xfb/0x130
      [  430.526421]  ? cifs_handle_standard+0x190/0x190 [cifs]
      [  430.528514]  ? kthread_park+0x90/0x90
      [  430.530019]  ret_from_fork+0x35/0x40
      
      Fix this by obtaining extra references for mids being retried
      and marking them as MID_DELETED, which indicates that such a mid
      has been dequeued from the pending list.
      
      Also move mid cleanup logic from DeleteMidQEntry to
      _cifs_mid_q_entry_release which is called when the last reference
      to a particular mid is put. This allows us to avoid any use-after-free
      of response buffers.
      
      The patch needs to be backported to stable kernels. A stable tag
      is not mentioned below because the patch doesn't apply cleanly
      to any actively maintained stable kernel.
      Reviewed-by: Ronnie Sahlberg <lsahlber@redhat.com>
      Reviewed-and-tested-by: David Wysochanski <dwysocha@redhat.com>
      Signed-off-by: Pavel Shilovsky <pshilov@microsoft.com>
      Signed-off-by: Steve French <stfrench@microsoft.com>
      abe57073
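      A sketch of the reconnect path after the fix, following the commit
      text; structure and field names are hedged against the real cifs code:

      	spin_lock(&GlobalMid_Lock);
      	list_for_each_entry_safe(mid, tmp, &server->pending_mid_q, qhead) {
      		kref_get(&mid->refcount);	/* extra ref for the temp list */
      		mid->mid_state = MID_RETRY_NEEDED;
      		list_move(&mid->qhead, &retry_list);
      		mid->mid_flags |= MID_DELETED;	/* off the pending list now */
      	}
      	spin_unlock(&GlobalMid_Lock);
      	/* issue callbacks without GlobalMid_Lock, then drop the extra refs */
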
  9. 24 Oct 2019 (4 commits)