1. 16 Feb 2023, 1 commit
  2. 15 Nov 2022, 1 commit
  3. 09 Aug 2022, 1 commit
    • new iov_iter flavour - ITER_UBUF · fcb14cb1
      Authored by Al Viro
      Equivalent of single-segment iovec.  Initialized by iov_iter_ubuf(),
      checked for by iter_is_ubuf(), otherwise behaves like ITER_IOVEC
      ones.
      
      We are going to expose things like ->write_iter() et al. to those
      in subsequent commits.
      
      New predicate (user_backed_iter()) that is true for ITER_IOVEC and
      ITER_UBUF; places like direct-IO handling should use it to check
      whether pages we modify after getting them from iov_iter_get_pages()
      need to be dirtied.
      
      DO NOT assume that replacing iter_is_iovec() with user_backed_iter()
      will solve all problems - there's code that uses iter_is_iovec() to
      decide how to poke around in iov_iter guts and for that the predicate
      replacement obviously won't suffice.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
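      A minimal sketch of how these pieces fit together (the helper and
      its context are assumptions for illustration, not part of the
      commit): iov_iter_ubuf() builds the single-segment iterator, and
      user_backed_iter() is what a direct-IO completion path would consult
      before dirtying the pages it filled.

      #include <linux/uio.h>

      /*
       * Hypothetical helper, not kernel code.  ITER_DEST is the modern
       * spelling of the READ direction (data lands in the user buffer).
       */
      static bool example_needs_dirtying(void __user *buf, size_t len)
      {
              struct iov_iter iter;

              /* Equivalent of a single-segment iovec. */
              iov_iter_ubuf(&iter, ITER_DEST, buf, len);

              /* True for ITER_UBUF and ITER_IOVEC alike: pages obtained
               * via iov_iter_get_pages() and then modified live in user
               * address space, so they must be dirtied afterwards. */
              return user_backed_iter(&iter);
      }

      Where code really must distinguish the two user-backed flavours,
      iter_is_ubuf() remains the dedicated check.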
  4. 15 Jul 2022, 1 commit
  5. 27 Jun 2022, 1 commit
  6. 11 Jun 2022, 2 commits
  7. 16 May 2022, 2 commits
  8. 03 May 2022, 1 commit
  9. 18 Apr 2022, 1 commit
  10. 08 Mar 2022, 1 commit
  11. 09 Feb 2022, 1 commit
  12. 02 Feb 2022, 1 commit
  13. 04 Dec 2021, 1 commit
  14. 26 Oct 2021, 1 commit
  15. 24 Oct 2021, 3 commits
  16. 19 Oct 2021, 1 commit
  17. 18 Oct 2021, 3 commits
  18. 17 Aug 2021, 1 commit
  19. 04 Aug 2021, 1 commit
  20. 01 May 2021, 1 commit
  21. 11 Mar 2021, 1 commit
  22. 09 Feb 2021, 1 commit
  23. 25 Jan 2021, 1 commit
  24. 24 Jan 2021, 3 commits
  25. 28 Sep 2020, 2 commits
  26. 10 Sep 2020, 2 commits
  27. 06 Aug 2020, 2 commits
    • iomap: fall back to buffered writes for invalidation failures · 60263d58
      Authored by Christoph Hellwig
      Failing to invalidate the page cache means data is incoherent, which
      is a very bad state for the system.  Always fall back to buffered I/O
      through the page cache if we can't invalidate mappings.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Acked-by: Bob Peterson <rpeterso@redhat.com>
      Acked-by: Damien Le Moal <damien.lemoal@wdc.com>
      Reviewed-by: Theodore Ts'o <tytso@mit.edu> # for ext4
      Reviewed-by: Andreas Gruenbacher <agruenba@redhat.com> # for gfs2
      Reviewed-by: Ritesh Harjani <riteshh@linux.ibm.com>
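      A simplified sketch of the fallback pattern (hypothetical helper;
      the real change lives in the iomap direct-IO write path): when the
      cached range can't be invalidated, -ENOTBLK tells the filesystem to
      redo the write through the page cache instead of proceeding with an
      incoherent cache.

      #include <linux/fs.h>
      #include <linux/pagemap.h>
      #include <linux/uio.h>

      /* Hedged sketch, not the commit's code. */
      static int example_dio_invalidate(struct address_space *mapping,
                                        struct iov_iter *iter,
                                        loff_t pos, size_t len)
      {
              int ret = invalidate_inode_pages2_range(mapping,
                              pos >> PAGE_SHIFT,
                              (pos + len - 1) >> PAGE_SHIFT);

              /* A failed invalidation on a write no longer proceeds;
               * the error bubbles up and the write is retried buffered. */
              if (ret && iov_iter_rw(iter) == WRITE)
                      return -ENOTBLK;
              return 0;
      }

      The per-filesystem review tags above reflect that each caller of the
      iomap direct-IO path has to handle that fallback return.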
    • iomap: Only invalidate page cache pages on direct IO writes · 54752de9
      Authored by Dave Chinner
      The historic requirement for XFS to invalidate cached pages on
      direct IO reads has been lost in the twisty pages of history - it was
      inherited from Irix, which implemented page cache invalidation on
      read as a method of working around problems synchronising page
      cache state with uncached IO.
      
      XFS has carried this ever since. In the initial linux ports it was
      necessary to get mmap and DIO to play "ok" together and not
      immediately corrupt data. This was the state of play until the linux
      kernel had infrastructure to track unwritten extents and synchronise
      page faults with allocations and unwritten extent conversions
      (->page_mkwrite infrastructure). IOWs, the page cache invalidation
      on DIO read was necessary to prevent trivial data corruptions. This
      didn't solve all the problems, though.
      
      There were performance problems if we didn't invalidate the entire
      page cache over the file on read - we couldn't easily determine if
      the cached pages were over the range of the IO, and invalidation
      required taking a serialising lock (i_mutex) on the inode. This
      serialising lock was an issue for XFS, as it was the only exclusive
      lock in the direct IO read path.
      
      Hence if there were any cached pages, we'd just invalidate the
      entire file in one go so that subsequent IOs didn't need to take the
      serialising lock. This was a problem that prevented ranged
      invalidation from being particularly useful for avoiding the
      remaining coherency issues. This was solved with the conversion of
      i_mutex to i_rwsem and the conversion of the XFS inode IO lock to
      use i_rwsem. Hence we could now just do ranged invalidation and the
      performance problem went away.
      
      However, page cache invalidation was still needed to serialise
      sub-page/sub-block zeroing via direct IO against buffered IO because
      bufferhead state attached to the cached page could get out of whack
      when direct IOs were issued.  We've removed bufferheads from the
      XFS code, and we don't carry any extent state on the cached pages
      anymore, and so this problem has gone away, too.
      
      IOWs, it would appear that we don't have any good reason to be
      invalidating the page cache on DIO reads anymore. Hence remove the
      invalidation on read: it is unnecessary overhead, it is no longer
      needed to maintain coherency between mmap/buffered access and
      direct IO, and it stops direct IO reads from being used to
      intentionally invalidate the page cache of a file.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
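      Sketched net effect (shape assumed; variables as in the previous
      sketch): invalidation becomes conditional on the IO being a write,
      and direct reads leave cached pages untouched.

      /* Hedged sketch: only direct IO writes invalidate the page cache
       * now; the read side skips the call entirely. */
      if (iov_iter_rw(iter) == WRITE)
              ret = invalidate_inode_pages2_range(mapping,
                              pos >> PAGE_SHIFT,
                              (pos + len - 1) >> PAGE_SHIFT);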
  28. 25 May 2020, 2 commits