1. 10 Dec, 2020: 1 commit
  2. 05 Nov, 2020: 1 commit
      xfs: flush new eof page on truncate to avoid post-eof corruption · 869ae85d
      Authored by Brian Foster
      It is possible to expose non-zeroed post-EOF data in XFS if the new
      EOF page is dirty, backed by an unwritten block and the truncate
      happens to race with writeback. iomap_truncate_page() will not zero
      the post-EOF portion of the page if the underlying block is
      unwritten. The subsequent call to truncate_setsize() will, but
      doesn't dirty the page. Therefore, if writeback happens to complete
      after iomap_truncate_page() (so it still sees the unwritten block)
      but before truncate_setsize(), the cached page becomes inconsistent
      with the on-disk block. A mapped read after the associated page is
      reclaimed or invalidated exposes non-zero post-EOF data.
      
      For example, consider the following sequence when run on a kernel
      modified to explicitly flush the new EOF page within the race
      window:
      
      $ xfs_io -fc "falloc 0 4k" -c fsync /mnt/file
      $ xfs_io -c "pwrite 0 4k" -c "truncate 1k" /mnt/file
        ...
      $ xfs_io -c "mmap 0 4k" -c "mread -v 1k 8" /mnt/file
      00000400:  00 00 00 00 00 00 00 00  ........
      $ umount /mnt/; mount <dev> /mnt/
      $ xfs_io -c "mmap 0 4k" -c "mread -v 1k 8" /mnt/file
      00000400:  cd cd cd cd cd cd cd cd  ........
      
      Update xfs_setattr_size() to explicitly flush the new EOF page prior
      to the page truncate to ensure iomap has the latest state of the
      underlying block.
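
      The ordering can be illustrated with a toy userspace model (an illustration only, not XFS code): one 8-byte "block" that starts unwritten, its page-cache copy, and its on-disk copy. The names loosely mirror the kernel functions discussed above; the model itself is an assumption-laden simplification.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

enum blk_state { UNWRITTEN, WRITTEN };

struct model {
	enum blk_state blk;
	unsigned char page[8];	/* page cache contents */
	unsigned char disk[8];	/* on-disk block contents */
	bool dirty;
};

static void buffered_write(struct model *m)
{
	memset(m->page, 0xcd, sizeof(m->page));	/* dirty data over an unwritten block */
	m->dirty = true;
}

static void model_iomap_truncate_page(struct model *m, int newsize)
{
	/* iomap skips zeroing when the backing block is unwritten */
	if (m->blk == WRITTEN) {
		memset(m->page + newsize, 0, sizeof(m->page) - newsize);
		m->dirty = true;
	}
}

static void model_writeback(struct model *m)
{
	if (!m->dirty)
		return;
	memcpy(m->disk, m->page, sizeof(m->disk));
	m->blk = WRITTEN;	/* writeback converts the unwritten block */
	m->dirty = false;
}

static void model_truncate_setsize(struct model *m, int newsize)
{
	/* zeroes the cached page past EOF but does not re-dirty it */
	memset(m->page + newsize, 0, sizeof(m->page) - newsize);
}

/* Post-EOF byte seen after the page is reclaimed, i.e. re-read from disk. */
static unsigned char run_truncate(bool flush_first)
{
	struct model m = { .blk = UNWRITTEN };

	buffered_write(&m);
	if (flush_first)
		model_writeback(&m);	/* the fix: flush before truncating the page */
	model_iomap_truncate_page(&m, 4);
	model_writeback(&m);		/* writeback racing inside the window */
	model_truncate_setsize(&m, 4);
	model_writeback(&m);		/* no-op: the page was never re-dirtied */
	return m.disk[5];
}
```

      Without the up-front flush the stale 0xcd byte survives on disk past the new EOF; with it, the zeroing done by the truncate path reaches the block before the page goes clean.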
      
      Fixes: 68a9f5e7 ("xfs: implement iomap based buffered write path")
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
  3. 26 Sep, 2020: 1 commit
  4. 10 Jun, 2020: 1 commit
  5. 04 Jun, 2020: 1 commit
  6. 30 May, 2020: 3 commits
  7. 20 May, 2020: 1 commit
  8. 05 May, 2020: 4 commits
  9. 19 Mar, 2020: 1 commit
  10. 03 Mar, 2020: 4 commits
  11. 10 Jan, 2020: 1 commit
  12. 14 Nov, 2019: 3 commits
  13. 11 Nov, 2019: 2 commits
  14. 05 Nov, 2019: 1 commit
  15. 30 Oct, 2019: 5 commits
  16. 28 Oct, 2019: 1 commit
  17. 22 Oct, 2019: 2 commits
  18. 23 Aug, 2019: 1 commit
      xfs: fix missing ILOCK unlock when xfs_setattr_nonsize fails due to EDQUOT · 1fb254aa
      Authored by Darrick J. Wong
      Benjamin Moody reported to Debian that XFS partially wedges when a chgrp
      fails on account of being out of disk quota.  I ran his reproducer
      script:
      
      # adduser dummy
      # adduser dummy plugdev
      
      # dd if=/dev/zero bs=1M count=100 of=test.img
      # mkfs.xfs test.img
      # mount -t xfs -o gquota test.img /mnt
      # mkdir -p /mnt/dummy
      # chown -c dummy /mnt/dummy
      # xfs_quota -xc 'limit -g bsoft=100k bhard=100k plugdev' /mnt
      
      (and then as user dummy)
      
      $ dd if=/dev/urandom bs=1M count=50 of=/mnt/dummy/foo
      $ chgrp plugdev /mnt/dummy/foo
      
      and saw:
      
      ================================================
      WARNING: lock held when returning to user space!
      5.3.0-rc5 #rc5 Tainted: G        W
      ------------------------------------------------
      chgrp/47006 is leaving the kernel with locks still held!
      1 lock held by chgrp/47006:
       #0: 000000006664ea2d (&xfs_nondir_ilock_class){++++}, at: xfs_ilock+0xd2/0x290 [xfs]
      
      ...which is clearly caused by xfs_setattr_nonsize failing to unlock the
      ILOCK after the xfs_qm_vop_chown_reserve call fails.  Add the missing
      unlock.
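
      The bug pattern can be sketched in plain userspace C (an analogue, not the XFS code): an error path must release the lock it entered with, exactly like the ILOCK in xfs_setattr_nonsize. Here chown_reserve() is a hypothetical stand-in for xfs_qm_vop_chown_reserve(), and the lock is modelled as a flag.

```c
#include <assert.h>
#include <errno.h>

static int ilock_held;	/* stand-in for the inode ILOCK */

static void model_ilock(void)   { assert(!ilock_held); ilock_held = 1; }
static void model_iunlock(void) { assert(ilock_held);  ilock_held = 0; }

/* hypothetical stand-in for xfs_qm_vop_chown_reserve() */
static int chown_reserve(int over_quota)
{
	return over_quota ? -EDQUOT : 0;
}

static int setattr_nonsize(int over_quota)
{
	int error;

	model_ilock();
	error = chown_reserve(over_quota);
	if (error)
		goto out_unlock;	/* the buggy version returned here, lock still held */
	/* ... change ownership, commit the transaction ... */
out_unlock:
	model_iunlock();
	return error;
}
```

      The fixed flow guarantees the lock is dropped on both the success and the EDQUOT paths; the buggy version returned straight out of the error branch, which is what lockdep flagged above.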
      
      Reported-by: benjamin.moody@gmail.com
      Fixes: 253f4911 ("xfs: better xfs_trans_alloc interface")
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Tested-by: Salvatore Bonaccorso <carnil@debian.org>
  19. 29 Jun, 2019: 1 commit
  20. 02 Mar, 2019: 1 commit
  21. 15 Feb, 2019: 1 commit
      xfs: don't ever put nlink > 0 inodes on the unlinked list · c4a6bf7f
      Authored by Darrick J. Wong
      When XFS creates an O_TMPFILE file, the inode is created with nlink = 1,
      put on the unlinked list, and then the VFS sets nlink = 0 in d_tmpfile.
      If we crash before anything logs the inode (it's dirty incore but the
      vfs doesn't tell us it's dirty so we never log that change), the iunlink
      processing part of recovery will then explode with a pile of:
      
      XFS: Assertion failed: VFS_I(ip)->i_nlink == 0, file:
      fs/xfs/xfs_log_recover.c, line: 5072
      
      Worse yet, since nlink is nonzero, the inodes also don't get cleaned up
      and they just leak until the next xfs_repair run.
      
      Therefore, change xfs_iunlink to require that inodes being put on the
      unlinked list have nlink == 0, change the tmpfile callers to instantiate
      nodes that way, and set the nlink to 1 just prior to calling d_tmpfile.
      Fix the comment for xfs_iunlink while we're at it.
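
      A minimal model of the fixed ordering (an illustration, not the kernel code): the unlinked list only accepts nlink == 0 inodes, and the tmpfile path raises nlink to 1 only after the inode is already on the list, so the VFS decrement in d_tmpfile brings it back to zero.

```c
#include <assert.h>

struct inode {
	int nlink;
	int on_unlinked_list;
};

/* the tightened xfs_iunlink contract */
static int iunlink(struct inode *ip)
{
	if (ip->nlink != 0)
		return -1;	/* the real code asserts here */
	ip->on_unlinked_list = 1;
	return 0;
}

static void vfs_d_tmpfile(struct inode *ip)
{
	ip->nlink--;	/* the VFS drops the link count */
}

static struct inode create_tmpfile(void)
{
	struct inode ip = { .nlink = 0 };	/* instantiate with nlink == 0 */

	assert(iunlink(&ip) == 0);	/* consistent even if we crash right here */
	ip.nlink = 1;			/* raised just before d_tmpfile... */
	vfs_d_tmpfile(&ip);		/* ...which returns it to zero */
	return ip;
}
```

      With this ordering a crash at any point leaves an inode whose nlink matches what iunlink recovery expects, instead of the nlink == 1 inode that tripped the assertion above.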
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
  22. 29 Sep, 2018: 1 commit
  23. 04 Aug, 2018: 1 commit
  24. 27 Jul, 2018: 1 commit