1. 27 Aug 2020, 1 commit
    • xfs: finish dfops on every insert range shift iteration · 9c516e0e
      Authored by Brian Foster
      The recent change to make insert range an atomic operation used the
      incorrect transaction rolling mechanism. The explicit transaction
      roll does not finish deferred operations. This means that intents
      for rmapbt updates caused by extent shifts are not logged until the
      final transaction commits. Thus if a crash occurs during an insert
      range, log recovery might leave the rmapbt in an inconsistent state.
      This was discovered by repeated runs of generic/455.
      
      Update insert range to finish dfops on every shift iteration. This
      is similar to collapse range and ensures that intents are logged
      with the transactions that make associated changes.
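
      The difference between a bare transaction roll and finishing dfops per
      iteration can be sketched with a toy model. This is plain C with invented
      names, not the real XFS API; it only illustrates why unfinished deferred
      work stays unlogged across iterations:

```c
/* Toy model (invented names, not the real XFS API): defer_finish()
 * logs queued intents, while a plain transaction roll does not. */
#include <assert.h>

struct mock_trans {
    int pending; /* intents queued but not yet logged */
    int logged;  /* intents durably in the log */
};

/* an extent shift queues an rmapbt-update intent */
static void defer_add(struct mock_trans *tp) { tp->pending++; }

/* bare roll: keeps the transaction alive, logs nothing deferred */
static void trans_roll(struct mock_trans *tp) { (void)tp; }

/* finish dfops: log and drain every pending intent */
static void defer_finish(struct mock_trans *tp)
{
    tp->logged += tp->pending;
    tp->pending = 0;
}

/* insert-range style loop over n extent shifts; returns how many
 * intents would still be unlogged if a crash happened right after */
static int shift_extents(struct mock_trans *tp, int n, int finish_each)
{
    for (int i = 0; i < n; i++) {
        defer_add(tp);
        if (finish_each)
            defer_finish(tp); /* fixed behaviour: intents logged per step */
        else
            trans_roll(tp);   /* buggy behaviour: intents pile up */
    }
    return tp->pending;
}
```

      With finish_each set, no iteration leaves intents unlogged, so a crash at
      any point is recoverable; without it, all n intents are exposed until the
      final commit, which is the inconsistency generic/455 tripped over.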
      
      Fixes: dd87f87d ("xfs: rework insert range into an atomic operation")
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
  2. 16 Aug 2020, 2 commits
    • io_uring: short circuit -EAGAIN for blocking read attempt · f91daf56
      Authored by Jens Axboe
      One case was missed in the short IO retry handling, and that's hitting
      -EAGAIN on a blocking read attempt (e.g. from io-wq context). This is a
      problem for sockets that are marked non-blocking when created: they
      don't carry any REQ_F_NOWAIT information to help us terminate them
      instead of retrying them forever.
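
      The decision can be sketched as follows. Names are invented and the real
      io_uring code is considerably more involved; the point is only that
      -EAGAIN justifies a retry solely when the attempt was issued non-blocking:

```c
/* Hypothetical sketch: if a blocking (io-wq) attempt still returns
 * -EAGAIN, the fd itself must be non-blocking (e.g. a socket created
 * non-blocking), so retrying would loop forever. */
#include <assert.h>

#define EAGAIN 11

enum read_action { RETURN_TO_APP, PUNT_AND_RETRY };

static enum read_action on_read_result(long res, int force_nonblock)
{
    if (res == -EAGAIN && force_nonblock)
        return PUNT_AND_RETRY; /* a later blocking attempt can progress */
    return RETURN_TO_APP;      /* done, failed, or blocking already tried */
}
```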
      
      Fixes: 227c0c96 ("io_uring: internally retry short reads")
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: sanitize double poll handling · d4e7cd36
      Authored by Jens Axboe
      There's a bit of confusion around the matching pairs of poll vs double
      poll, depending on whether the request is a pure poll (IORING_OP_POLL_ADD)
      or a poll-driven retry.
      
      Add io_poll_get_double() that returns the double poll waitqueue, if any,
      and io_poll_get_single() that returns the original poll waitqueue. With
      that, remove the argument to io_poll_remove_double().
      
      Finally ensure that wait->private is cleared once the double poll handler
      has run, so that remove knows it's already been seen.
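
      A minimal sketch of that accessor pairing, with invented types rather
      than the real io_uring structures: one helper returns the original poll
      entry, the other the optional double-poll entry, and clearing the
      private pointer after the double handler runs lets removal see it
      already fired:

```c
/* Hypothetical sketch of the single/double poll accessor pattern. */
#include <assert.h>
#include <stddef.h>

struct poll_entry { void *private; };

struct request {
    struct poll_entry poll;         /* original poll waitqueue entry */
    struct poll_entry *double_poll; /* second entry, if armed */
};

static struct poll_entry *poll_get_single(struct request *req)
{
    return &req->poll;
}

static struct poll_entry *poll_get_double(struct request *req)
{
    return req->double_poll;
}

/* double-poll wake handler: after running, mark the entry as seen */
static void double_poll_wake(struct request *req)
{
    struct poll_entry *p = poll_get_double(req);
    if (p)
        p->private = NULL; /* removal now knows the handler ran */
}

/* removal needs no argument to pick the entry, and already-seen
 * entries are skipped rather than torn down twice */
static int poll_remove_double(struct request *req)
{
    struct poll_entry *p = poll_get_double(req);
    if (!p || !p->private)
        return 0;
    p->private = NULL;
    return 1;
}
```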
      
      Cc: stable@vger.kernel.org # v5.8
      Reported-by: syzbot+7f617d4a9369028b8a2c@syzkaller.appspotmail.com
      Fixes: 18bceab1 ("io_uring: allow POLL_ADD with double poll_wait() users")
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  3. 15 Aug 2020, 2 commits
  4. 14 Aug 2020, 3 commits
    • SMB3: Fix mkdir when idsfromsid configured on mount · c8c412f9
      Authored by Steve French
      mkdir uses a compounded create operation, which was not setting the
      security descriptor when creating a directory. Fix this so that mkdir
      now sets the mode and owner info properly when idsfromsid and
      modefromsid are configured on the mount.
      Signed-off-by: Steve French <stfrench@microsoft.com>
      CC: Stable <stable@vger.kernel.org> # v5.8
      Reviewed-by: Paulo Alcantara (SUSE) <pc@cjr.nz>
      Reviewed-by: Pavel Shilovsky <pshilov@microsoft.com>
    • io_uring: internally retry short reads · 227c0c96
      Authored by Jens Axboe
      We've had a few cases of applications not handling short reads properly,
      which is understandable since short reads aren't really expected if the
      application isn't doing non-blocking IO.
      
      Now that we retain the iov_iter over retries, we can implement internal
      retry pretty trivially. This ensures that we don't return a short read,
      even for buffered reads on page cache conflicts.
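
      The shape of such a retry loop can be illustrated with a toy model
      (invented names, nothing like the real io_read()): because the iterator
      state survives between attempts, a short read is simply re-issued for
      the remainder instead of being returned to the application. demo_step
      stands in for one buffered-read attempt, capped at 3 bytes per call
      against a 10-byte "file":

```c
/* Illustrative sketch of internal short-read retry. */
#include <assert.h>
#include <stddef.h>

struct simple_iter { char *buf; size_t pos, len; };

typedef long (*read_step)(struct simple_iter *it);

/* retry loop: keep issuing reads until the request is satisfied,
 * EOF is hit, or an error occurs before any data was transferred */
static long read_full(struct simple_iter *it, read_step step)
{
    while (it->pos < it->len) {
        long n = step(it);
        if (n < 0)
            return it->pos ? (long)it->pos : n;
        if (n == 0)
            break; /* EOF */
        it->pos += n; /* retained state: next attempt continues here */
    }
    return (long)it->pos;
}

/* mock backend: a 10-byte file serving at most 3 bytes per call */
static long demo_step(struct simple_iter *it)
{
    size_t file_left = it->pos < 10 ? 10 - it->pos : 0;
    size_t want = it->len - it->pos;
    size_t n = want < 3 ? want : 3;
    if (n > file_left)
        n = file_left;
    for (size_t i = 0; i < n; i++)
        it->buf[it->pos + i] = 'x';
    return (long)n;
}

static long demo_read(size_t len)
{
    char buf[32];
    struct simple_iter it = { buf, 0, len };
    return read_full(&it, demo_step);
}
```

      A 10-byte request completes in four attempts here, yet the caller sees
      a single full-length result, which is the application-visible effect
      the commit is after.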
      
      Clean up the deep nesting and hard-to-read structure of io_read() as
      well; it's now much more straightforward to read and understand. A few
      comments explaining the logic were added too.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: retain iov_iter state over io_read/io_write calls · ff6165b2
      Authored by Jens Axboe
      Instead of maintaining (and setting/remembering) iov_iter size and
      segment counts, just put the iov_iter in the async part of the IO
      structure.
      
      This is mostly a preparation patch for doing appropriate internal retries
      for short reads, but it also cleans up the state handling nicely and
      simplifies it quite a bit.
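
      The design change can be sketched with invented types (not io_uring's
      real structures): store the whole iterator in the request's async data,
      so a retry resumes from saved state rather than rebuilding the iterator
      from remembered size/segment counts:

```c
/* Hypothetical sketch of retaining iterator state in async data. */
#include <assert.h>
#include <stddef.h>

struct mini_iter { size_t count; size_t consumed; };

struct async_data { struct mini_iter iter; };

struct io_req {
    struct async_data *async;     /* set once the IO may need a retry */
    struct mini_iter inline_iter; /* state of the first attempt */
};

/* stash the live iterator, full state and all, for a later retry */
static void req_save_iter(struct io_req *req, struct async_data *ad)
{
    ad->iter = req->inline_iter;
    req->async = ad;
}

/* the retry path picks up exactly where the first attempt stopped */
static struct mini_iter *req_iter(struct io_req *req)
{
    return req->async ? &req->async->iter : &req->inline_iter;
}

/* round-trip demo: save mid-IO state, then read it back for retry */
static size_t demo_saved_consumed(void)
{
    struct async_data ad;
    struct io_req req = { NULL, { 100, 40 } };
    req_save_iter(&req, &ad);
    return req_iter(&req)->consumed;
}
```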
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  5. 13 Aug 2020, 25 commits
  6. 12 Aug 2020, 7 commits