1. 15 March 2019, 11 commits
  2. 13 March 2019, 4 commits
  3. 09 March 2019, 1 commit
    • nfsd: allow nfsv3 readdir request to be larger. · f875a792
      Committed by NeilBrown
      nfsd currently reports the NFSv3 dtpref FSINFO parameter
      to be PAGE_SIZE, so NFS clients will typically ask for one
      page of directory entries at a time.  This is needlessly restrictive
      as nfsd can handle larger replies easily.
      
      Also, a READDIR request (but not a READDIRPLUS request) has the count
      size clipped to PAGE_SIZE, which is again unnecessary.
      
      This patch lifts these limits so that larger readdir requests can be
      used.
      Signed-off-by: NeilBrown <neilb@suse.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
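      An illustrative C sketch of the change described above, assuming the
      nfsd v3 structures from fs/nfsd/xdr3.h and the svc_max_payload()
      helper; it is not the literal diff, only the idea of advertising and
      clamping to the transport's maximum payload rather than PAGE_SIZE:

      #include <linux/kernel.h>       /* min_t() */
      #include <linux/sunrpc/svc.h>   /* svc_max_payload() */

      /* FSINFO: advertise a larger preferred directory read size. */
      static void sketch_fsinfo_dtpref(struct svc_rqst *rqstp,
                                       struct nfsd3_fsinfores *resp)
      {
          u32 max_blocksize = svc_max_payload(rqstp);

          resp->f_dtpref = max_blocksize;     /* was: PAGE_SIZE */
      }

      /* READDIR: clamp the count to the payload limit, not one page. */
      static void sketch_readdir_clamp(struct svc_rqst *rqstp,
                                       struct nfsd3_readdirargs *argp)
      {
          argp->count = min_t(u32, argp->count, svc_max_payload(rqstp));
      }

      With dtpref raised, NFSv3 clients will typically request directory
      entries in chunks up to the transport payload size rather than one
      page at a time.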
  4. 08 March 2019, 15 commits
  5. 07 March 2019, 3 commits
    • fs: cifs: Kconfig: pedantic formatting · 50cfad78
      Committed by Enrico Weigelt, metux IT consult
      Formatting of Kconfig files doesn't look so pretty, so just
      take a damp cloth and clean it up.
      Signed-off-by: Enrico Weigelt, metux IT consult <info@metux.net>
      Signed-off-by: Steve French <stfrench@microsoft.com>
    • io_uring: allow workqueue item to handle multiple buffered requests · 31b51510
      Committed by Jens Axboe
      Right now we punt any buffered request that ends up triggering an
      -EAGAIN to an async workqueue. This works fine in terms of providing
      async execution of them, but it also can create quite a lot of work
      queue items. For sequentially buffered IO, it's advantageous to
      serialize the issue of them. For reads, the first one will trigger a
      read-ahead, and subsequent requests merely end up waiting on later pages
      to complete. For writes, devices usually respond better to streamed
      sequential writes.
      
      Add state to track the last buffered request we punted to a work queue,
      and if the next one is sequential to the previous, attempt to get the
      previous work item to handle it. We limit the number of sequential
      add-ons to a multiple (8) of the max read-ahead size of the file.
      This should be a good number for both reads and writes, as it defines the
      max IO size the device can do directly.
      
      This drastically cuts down on the number of context switches we need to
      handle buffered sequential IO, and a basic test case of copying a big
      file with io_uring sees a 5x speedup.
      Reviewed-by: Hannes Reinecke <hare@suse.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
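      A hedged C sketch of the sequential-merge idea above. The struct and
      field names (seq_work_list, io_end, pages_left, req_stub) are
      illustrative stand-ins, not the ones added by the patch; the point is
      remembering where the last punted buffered request ended and letting
      the existing work item absorb the next request when it starts exactly
      there:

      #include <linux/fs.h>
      #include <linux/list.h>
      #include <linux/spinlock.h>

      struct req_stub {                    /* stand-in for an io_uring request */
          struct file       *file;
          loff_t            offset;
          size_t            len;
          unsigned long     nr_pages;
          struct list_head  list;
      };

      struct seq_work_list {
          spinlock_t        lock;
          struct file       *file;         /* file of the last punted request */
          loff_t            io_end;        /* offset where that request ends */
          unsigned long     pages_left;    /* budget: 8 * read-ahead size */
          struct list_head  work;          /* requests owned by the worker */
      };

      /*
       * Called when a buffered request gets -EAGAIN. If it continues exactly
       * where the previously punted request left off and the budget allows,
       * hand it to the worker that is already running instead of queueing a
       * new work item. (The real code must also handle the race where the
       * worker finishes just as we append; that is omitted here.)
       */
      static bool try_merge_sequential(struct seq_work_list *sl,
                                       struct req_stub *req)
      {
          bool merged = false;

          spin_lock(&sl->lock);
          if (sl->file == req->file && sl->io_end == req->offset &&
              sl->pages_left >= req->nr_pages) {
              sl->io_end += req->len;
              sl->pages_left -= req->nr_pages;
              list_add_tail(&req->list, &sl->work);
              merged = true;
          }
          spin_unlock(&sl->lock);
          return merged;
      }

      When the merge fails, the request is punted to a fresh workqueue item
      as before.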
    • io_uring: add support for IORING_OP_POLL · 221c5eb2
      Committed by Jens Axboe
      This is basically a direct port of bfe4037e, which implements a
      one-shot poll command through aio. Description below is based on that
      commit as well. However, instead of adding a POLL command and relying
      on io_cancel(2) to remove it, we mimic the epoll(2) interface of
      having a command to add a poll notification, IORING_OP_POLL_ADD,
      and one to remove it again, IORING_OP_POLL_REMOVE.
      
      To poll for a file descriptor the application should submit an sqe of
      type IORING_OP_POLL_ADD. It will poll the fd for the events specified in the
      poll_events field.
      
      Unlike poll or epoll without EPOLLONESHOT, this interface always works in
      one-shot mode: once the sqe is completed, it will have to be
      resubmitted.
      Reviewed-by: Hannes Reinecke <hare@suse.com>
      Based-on-code-from: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
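      A minimal userspace sketch of driving the new commands, assuming the
      raw struct io_uring_sqe layout from <linux/io_uring.h>; ring setup,
      submission and completion handling are omitted:

      #include <string.h>
      #include <poll.h>
      #include <linux/io_uring.h>

      /* One-shot poll: ask for a readability notification on fd. */
      static void prep_poll_add(struct io_uring_sqe *sqe, int fd, __u64 user_data)
      {
          memset(sqe, 0, sizeof(*sqe));
          sqe->opcode = IORING_OP_POLL_ADD;
          sqe->fd = fd;
          sqe->poll_events = POLLIN;      /* events to wait for */
          sqe->user_data = user_data;     /* echoed back in the CQE */
      }

      /* Cancel a pending poll, identified by the user_data it was added with. */
      static void prep_poll_remove(struct io_uring_sqe *sqe, __u64 target_user_data)
      {
          memset(sqe, 0, sizeof(*sqe));
          sqe->opcode = IORING_OP_POLL_REMOVE;
          sqe->addr = target_user_data;
      }

      Because the interface is one-shot, once the completion for the poll
      arrives the application must submit a new IORING_OP_POLL_ADD to keep
      watching the descriptor.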
  6. 06 March 2019, 6 commits