  1. 10 Jan 2020, 3 commits
  2. 14 Nov 2019, 2 commits
  3. 08 Nov 2019, 1 commit
  4. 01 Nov 2019, 1 commit
    • xfs: properly serialise fallocate against AIO+DIO · 249bd908
      Authored by Dave Chinner
      AIO+DIO can extend the file size on IO completion, and it holds
      no inode locks while the IO is in flight. Therefore, a race
      condition exists in file size updates if we do something like this:
      
      aio-thread			fallocate-thread
      
      lock inode
      submit IO beyond inode->i_size
      unlock inode
      .....
      				lock inode
      				break layouts
      				if (off + len > inode->i_size)
      					new_size = off + len
      				.....
      				inode_dio_wait()
      				<blocks>
      .....
      completes
      inode->i_size updated
      inode_dio_done()
      ....
      				<wakes>
                                <does stuff no longer beyond EOF>
      				if (new_size)
      					xfs_vn_setattr(inode, new_size)
      
      
      Yup, that attempt to extend the file size in the fallocate code
      turns into a truncate - it removes whatever the aio write
      allocated and put to disk, and reduces the inode size back down to
      where the fallocate operation ends.

      Fundamentally, xfs_file_fallocate() is not compatible with racing
      AIO+DIO completions, so we need to move the inode_dio_wait() call
      up to where we lock the inode and break the layouts (a userspace
      sketch of this ordering follows this commit entry).
      
      Secondly, storing the inode size and then using it unchecked without
      holding the ILOCK is not safe; we can only do such a thing if we've
      locked out and drained all IO and other modification operations,
      which we don't do initially in xfs_file_fallocate.
      
      It should be noted that some of the fallocate operations are
      compound operations - they are made up of multiple manipulations
      that may zero data, and so we may need to flush and invalidate the
      file multiple times during an operation. However, we only need to
      lock out IO and other space manipulation operations once, as that
      lockout is maintained until the entire fallocate operation has been
      completed.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      249bd908
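
      A minimal userspace model of the ordering fix described above, for
      illustration only: the fake inode, aio_thread(), fallocate_thread()
      and the timings are hypothetical stand-ins, not the XFS patch. It
      only shows why i_size must be sampled after the in-flight DIO has
      drained, while the lock is held. Build with: cc -pthread race.c

      #include <pthread.h>
      #include <stdio.h>
      #include <unistd.h>

      static struct {
          pthread_mutex_t lock;     /* stands in for the inode lock */
          pthread_cond_t  dio_done; /* signalled by "IO completion" */
          long            i_size;
          int             dio_in_flight;
      } ino = {
          .lock     = PTHREAD_MUTEX_INITIALIZER,
          .dio_done = PTHREAD_COND_INITIALIZER,
          .i_size   = 4096,         /* small file to start with */
      };

      static void *aio_thread(void *arg)
      {
          (void)arg;
          /* submit: DIO beyond i_size goes in flight, inode not locked */
          pthread_mutex_lock(&ino.lock);
          ino.dio_in_flight++;
          pthread_mutex_unlock(&ino.lock);

          usleep(100 * 1000);       /* the IO is "in flight" */

          /* completion: extend i_size, then drop the in-flight count */
          pthread_mutex_lock(&ino.lock);
          ino.i_size = 1 << 20;     /* the write ended at 1MiB */
          ino.dio_in_flight--;
          pthread_cond_signal(&ino.dio_done);
          pthread_mutex_unlock(&ino.lock);
          return NULL;
      }

      static void *fallocate_thread(void *arg)
      {
          long off = 0, len = 64 * 1024;  /* ends well below 1MiB */

          (void)arg;
          pthread_mutex_lock(&ino.lock);  /* "lock inode, break layouts" */

          /* the fix: drain in-flight DIO before looking at i_size */
          while (ino.dio_in_flight > 0)
              pthread_cond_wait(&ino.dio_done, &ino.lock);

          /* this now sees the size the completion wrote, so the 64k
           * fallocate no longer shrinks the file back from 1MiB */
          if (off + len > ino.i_size)
              ino.i_size = off + len;

          pthread_mutex_unlock(&ino.lock);
          return NULL;
      }

      int main(void)
      {
          pthread_t a, f;

          pthread_create(&a, NULL, aio_thread, NULL);
          usleep(20 * 1000);        /* let the DIO get in flight first */
          pthread_create(&f, NULL, fallocate_thread, NULL);
          pthread_join(a, NULL);
          pthread_join(f, NULL);

          /* prints 1048576; sampling i_size before the wait (the old
           * ordering) would have cut the file back to 65536 */
          printf("final i_size = %ld\n", ino.i_size);
          return 0;
      }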
  5. 29 Oct 2019, 1 commit
  6. 28 Oct 2019, 4 commits
  7. 04 Sep 2019, 1 commit
  8. 31 Aug 2019, 3 commits
    • xfs: fix the dax supported check in xfs_ioctl_setattr_dax_invalidate · adcb0ca2
      Authored by Christoph Hellwig
      Setting the DAX flag on the directory of a file system that is not on a
      DAX capable device makes as little sense as setting it on a regular file
      on the same file system.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      adcb0ca2
    • xfs: allocate xattr buffer on demand · ddbca70c
      Authored by Dave Chinner
      When doing file lookups and checking for permissions, we end up in
      xfs_get_acl() to see if there are any ACLs on the inode. This
      requires an xattr lookup, and to do that we have to supply a buffer
      large enough to hold a maximum-sized xattr.
      
      On workloads where we are accessing a wide range of cache-cold files
      under memory pressure (e.g. NFS fileservers) we end up spending a
      lot of time allocating the buffer. The buffer is 64k in length, so
      it is a contiguous multi-page allocation, and if that then fails we
      fall back to vmalloc(). Hence the allocation here is /expensive/
      when we are looking up hundreds of thousands of files a second.
      
      Initial numbers from a bpf trace show average time in xfs_get_acl()
      is ~32us, with ~19us of that in the memory allocation. Note these
      are average times, so they are going to be affected by the worst
      case allocations more than the common fast case...
      
      To avoid this, we could just do a "null" lookup to see if the ACL
      xattr exists and then only do the allocation if it exists. This,
      however, optimises the path for the "no ACL present" case at the
      expense of the "ACL present" case, i.e. we can halve the time in
      xfs_get_acl() for the no-ACL case (i.e. down to ~10-15us), but that
      then increases the ACL case by 30% (i.e. up to 40-45us).
      
      To solve this and speed up both cases, drive the xattr buffer
      allocation into the attribute code once we know what the actual
      xattr length is. For the no-xattr case, we avoid the allocation
      completely, speeding up that case. For the common ACL case, we'll
      end up with a fast heap allocation (because it'll be smaller than a
      page), and only for the rarer "we have a remote xattr" will we have
      a multi-page allocation occur. Hence the common ACL case will be
      much faster, too (a userspace sketch of this size-then-allocate
      pattern follows this commit entry).
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      ddbca70c
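
      The same size-then-allocate idea has a simple userspace analogue on
      Linux: getxattr(2) called with a zero-length buffer reports the
      value length, so the buffer can be sized exactly instead of being
      reserved at the 64k worst case up front. The sketch below is
      illustrative only; get_xattr_on_demand() is a made-up helper name
      and this is not the kernel change, which lives in the XFS attr code.

      #include <errno.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>
      #include <sys/xattr.h>

      /* Fetch an xattr, allocating only once its size is known.  Returns
       * a malloc()ed buffer (caller frees) or NULL on error/absence. */
      static void *get_xattr_on_demand(const char *path, const char *name,
                                       ssize_t *lenp)
      {
          /* size probe: a zero-length buffer returns the value length */
          ssize_t len = getxattr(path, name, NULL, 0);
          if (len < 0)
              return NULL;              /* e.g. ENODATA: no such attr */

          void *buf = malloc(len ? len : 1);
          if (!buf)
              return NULL;

          /* real fetch into an exactly-sized buffer; in real code the
           * value can grow between the two calls, so a retry loop on
           * ERANGE would be needed */
          len = getxattr(path, name, buf, (size_t)len);
          if (len < 0) {
              free(buf);
              return NULL;
          }
          *lenp = len;
          return buf;
      }

      int main(int argc, char **argv)
      {
          if (argc != 3) {
              fprintf(stderr, "usage: %s <file> <xattr-name>\n", argv[0]);
              return 1;
          }

          ssize_t len = 0;
          void *val = get_xattr_on_demand(argv[1], argv[2], &len);
          if (!val) {
              fprintf(stderr, "%s: %s\n", argv[2], strerror(errno));
              return 1;
          }
          printf("%s: %zd bytes\n", argv[2], len);
          free(val);
          return 0;
      }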
    • kill the last users of user_{path,lpath,path_dir}() · ce6595a2
      Authored by Al Viro
      old wrappers with few callers remaining; put them out of their misery...
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      ce6595a2
  9. 30 Aug 2019, 1 commit
  10. 27 Aug 2019, 1 commit
  11. 07 Jul 2019, 1 commit
  12. 04 Jul 2019, 8 commits
  13. 03 Jul 2019, 3 commits
  14. 01 Jul 2019, 3 commits
  15. 29 Jun 2019, 1 commit
  16. 02 May 2019, 1 commit
  17. 23 Apr 2019, 1 commit
  18. 15 Apr 2019, 3 commits
  19. 06 Nov 2018, 1 commit