1. 07 July 2020, 17 commits
    • xfs: add an inode item lock · 1319ebef
      Committed by Dave Chinner
      The inode log item is kind of special in that it can be aggregating
      new changes in memory at the same time that existing changes are
      being written back to disk. This means there are fields in the log
      item that are accessed concurrently from contexts that don't share
      any locking at all.
      
      e.g. ili_last_fields is set at flush time under both the ILOCK_EXCL
      and the flush lock, cleared under only the flush lock at IO
      completion time, and read under the ILOCK_EXCL when the inode is
      logged. Hence there is no actual serialisation between reading the
      field while logging the inode in transactions and clearing the
      field at IO completion.
      
      We currently get away with this by the fact that we are only
      clearing fields in IO completion, and nothing bad happens if we
      accidentally log more of the inode than we actually modify. Worst
      case is we consume a tiny bit more memory and log bandwidth.
      
      However, if we want to do more complex state manipulations on the
      log item that requires updates at all three of these potential
      locations, we need some mechanism for serialising those
      operations. To do this, introduce a spinlock into the log item to
      serialise its internal state (a minimal sketch of the idea follows
      this entry).
      
      This could be done via the xfs_inode i_flags_lock, but this then
      leads to potential lock inversion issues where inode flag updates
      need to occur inside locks that best nest inside the inode log item
      locks (e.g. marking inodes stale during inode cluster freeing).
      Using a separate spinlock avoids these sorts of problems and
      simplifies future code.
      
      This does not touch the use of ili_fields in the item formatting
      code - that is entirely protected by the ILOCK_EXCL at this point in
      time, so it remains untouched.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      1319ebef
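      A minimal user-space sketch of the pattern described above: fields that
      are updated from both the logging and IO-completion contexts get their
      own dedicated spinlock. Only the field names ili_fields/ili_last_fields
      and the idea of an ili_lock come from the commit text; the struct layout
      and helper names are hypothetical, and pthread primitives stand in for
      the kernel's spinlock_t.

      #include <pthread.h>

      struct inode_log_item {
          pthread_spinlock_t ili_lock;        /* serialises the fields below */
          unsigned int       ili_fields;      /* fields dirtied since last flush */
          unsigned int       ili_last_fields; /* fields currently under writeback */
      };

      void item_init(struct inode_log_item *iip)
      {
          pthread_spin_init(&iip->ili_lock, PTHREAD_PROCESS_PRIVATE);
          iip->ili_fields = 0;
          iip->ili_last_fields = 0;
      }

      /* Logging context: reading ili_last_fields here and clearing it in
       * flush_done() now happen under the same lock. */
      void log_inode_fields(struct inode_log_item *iip, unsigned int flags)
      {
          pthread_spin_lock(&iip->ili_lock);
          iip->ili_fields |= flags | iip->ili_last_fields;
          pthread_spin_unlock(&iip->ili_lock);
      }

      /* IO completion context: the flushed fields have reached the disk. */
      void flush_done(struct inode_log_item *iip)
      {
          pthread_spin_lock(&iip->ili_lock);
          iip->ili_last_fields = 0;
          pthread_spin_unlock(&iip->ili_lock);
      }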
    • xfs: remove logged flag from inode log item · 1dfde687
      Committed by Dave Chinner
      This was used to track if the item had logged fields being flushed
      to disk. We log everything in the inode these days, so this logic is
      no longer needed. Remove it.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      1dfde687
    • xfs: Don't allow logging of XFS_ISTALE inodes · 96355d5a
      Committed by Dave Chinner
      In tracking down a problem in this patchset, I discovered we are
      reclaiming dirty stale inodes. This wasn't discovered until inodes
      were always attached to the cluster buffer and then the rcu callback
      that freed inodes was assert failing because the inode still had an
      active pointer to the cluster buffer after it had been reclaimed.
      
      Debugging the issue indicated that this was a pre-existing issue
      resulting from the way the inodes are handled in xfs_inactive_ifree.
      When we free a cluster buffer from xfs_ifree_cluster, all the inodes
      in cache are marked XFS_ISTALE. Those that are clean have nothing
      else done to them and so eventually get cleaned up by background
      reclaim. i.e. it is assumed we'll never dirty/relog an inode marked
      XFS_ISTALE.
      
      On journal commit, dirty stale inodes are handled by both the
      buffer and inode log items: they are run through xfs_istale_done()
      and removed from the AIL (buffer log item commit), or the inode log
      item simply unpins them because the buffer log item will clean
      them. What happens to any specific inode depends entirely on which
      log item wins the commit race, but the result is the same: stale
      inodes end up clean, not attached to the cluster buffer, and not in
      the AIL. Hence inode reclaim can just free these inodes without
      further care.
      
      However, if the stale inode is relogged, it gets dirtied again and
      relogged into the CIL. Most of the time this isn't an issue, because
      relogging simply changes the inode's location in the current
      checkpoint. Problems arise, however, when the CIL checkpoints
      between two transactions in the xfs_inactive_ifree() deferops
      processing. This results in the XFS_ISTALE inode being redirtied
      and inserted into the CIL without any of the other stale cluster
      buffer infrastructure being in place.
      
      Hence on journal commit, it simply gets unpinned, so it remains
      dirty in memory. Everything in inode writeback avoids XFS_ISTALE
      inodes so it can't be written back, and it is not tracked in the AIL
      so there's not even a trigger to attempt to clean the inode. Hence
      the inode just sits dirty in memory until inode reclaim comes along,
      sees that it is XFS_ISTALE, and goes to reclaim it. Reclaiming a
      dirty inode in this way caused use-after-free bugs, list corruption
      and other nasty issues later in this patchset.
      
      Hence this patch addresses a violation of the "never log XFS_ISTALE
      inodes" rule caused by the deferops processing rolling a transaction
      and relogging a stale inode in xfs_inactive_ifree(). It also adds a
      bunch of asserts to catch this problem in debug kernels so that
      we don't reintroduce it in the future (a sketch of such a guard
      follows this entry).
      
      Reproducer for this issue was generic/558 on a v4 filesystem.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      96355d5a
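      A hedged sketch, not the kernel patch itself, of the rule this commit
      enforces: never dirty an inode in a transaction once it has been marked
      stale. The flag name XFS_ISTALE comes from the commit text; the struct
      and function below are hypothetical stand-ins.

      #include <assert.h>

      #define XFS_ISTALE (1u << 0)   /* inode was staled by a cluster free */

      struct inode_stub {
          unsigned int i_flags;      /* inode state flags */
          unsigned int ili_fields;   /* fields dirtied in the log item */
      };

      void trans_log_inode(struct inode_stub *ip, unsigned int flags)
      {
          /* A stale inode is about to be freed; relogging it leaves a dirty
           * inode that nothing will ever write back or safely reclaim. */
          assert(!(ip->i_flags & XFS_ISTALE));
          ip->ili_fields |= flags;
      }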
    • xfs: remove useless definitions in xfs_linux.h · 0d5a5714
      Committed by Yafang Shao
      Remove current_pid(), current_test_flags() and
      current_clear_flags_nested(), because they are useless.
      Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      0d5a5714
    • xfs: use MMAPLOCK around filemap_map_pages() · cd647d56
      Committed by Dave Chinner
      The page fault-around path, ->map_pages, is implemented in XFS via
      filemap_map_pages(). This function checks that pages found by page
      cache lookups have not raced with truncate-based invalidation by
      verifying that page->mapping is correct and page->index is within EOF.
      
      However, we've known for a long time that this is not sufficient to
      protect against races with invalidations done by operations that do
      not change EOF. e.g. hole punching and other fallocate() based
      direct extent manipulations. The way we protect against these
      races is to wrap the page fault operations in an XFS_MMAPLOCK_SHARED
      lock so that they serialise against fallocate and truncate before
      calling into the filemap function that processes the fault.
      
      Do the same for XFS's ->map_pages implementation to close this
      potential data corruption issue.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Amir Goldstein <amir73il@gmail.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      cd647d56
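      A user-space analogue of the locking pattern described above, with a
      pthread rwlock standing in for the XFS MMAPLOCK and hypothetical
      function names: fault paths take the lock shared, extent-manipulating
      paths take it exclusive, so fault-around cannot race with hole punch.

      #include <pthread.h>

      struct file_ctx {
          pthread_rwlock_t mmaplock;   /* init with pthread_rwlock_init() */
      };

      void fault_around(struct file_ctx *f)
      {
          pthread_rwlock_rdlock(&f->mmaplock);   /* XFS_MMAPLOCK_SHARED */
          /* ... filemap_map_pages() equivalent: map already-cached pages ... */
          pthread_rwlock_unlock(&f->mmaplock);
      }

      void punch_hole(struct file_ctx *f)
      {
          pthread_rwlock_wrlock(&f->mmaplock);   /* XFS_MMAPLOCK_EXCL */
          /* ... invalidate the page cache and free the extents ... */
          pthread_rwlock_unlock(&f->mmaplock);
      }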
    • xfs: move helpers that lock and unlock two inodes against userspace IO · e2aaee9c
      Committed by Darrick J. Wong
      Move the double-inode locking helpers to xfs_inode.c since they're not
      specific to reflink.
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      e2aaee9c
    • xfs: refactor locking and unlocking two inodes against userspace IO · 10b4bd6c
      Committed by Darrick J. Wong
      Refactor the two functions that we use to lock and unlock two inodes to
      block userspace from initiating IO against a file, whether via system
      calls or mmap activity.
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      10b4bd6c
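      A minimal sketch of the idea behind the two-inode helpers, using a bare
      mutex and address ordering; the real helpers take i_rwsem and the XFS
      MMAPLOCK in a fixed order, so treat every name here as hypothetical.

      #include <pthread.h>
      #include <stdint.h>

      struct inode_stub {
          pthread_mutex_t i_lock;   /* stands in for the inode's IO locks */
      };

      void lock_two_inodes(struct inode_stub *a, struct inode_stub *b)
      {
          if (a == b) {             /* same file on both sides of the remap */
              pthread_mutex_lock(&a->i_lock);
              return;
          }
          if ((uintptr_t)a > (uintptr_t)b) {   /* stable lock ordering */
              struct inode_stub *tmp = a;
              a = b;
              b = tmp;
          }
          pthread_mutex_lock(&a->i_lock);
          pthread_mutex_lock(&b->i_lock);
      }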
    • xfs: fix xfs_reflink_remap_prep calling conventions · 451d34ee
      Committed by Darrick J. Wong
      Fix xfs_reflink_remap_prep so that its return value conventions
      match the rest of xfs.
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      451d34ee
    • xfs: reflink can skip remap existing mappings · 168eae80
      Committed by Darrick J. Wong
      If the source and destination mappings are identical, we can skip
      the remap step to save some time.
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      168eae80
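      A sketch of the "identical mappings" check under simplified, assumed
      types: if the destination already maps the same physical blocks with
      the same length and written state, there is nothing to remap.

      #include <stdbool.h>
      #include <stdint.h>

      struct map_rec {
          uint64_t br_startblock;   /* physical start block */
          uint64_t br_blockcount;   /* length in blocks */
          bool     br_written;      /* written vs unwritten/hole */
      };

      bool remap_can_skip(const struct map_rec *dmap, const struct map_rec *smap)
      {
          return dmap->br_startblock == smap->br_startblock &&
                 dmap->br_blockcount == smap->br_blockcount &&
                 dmap->br_written == smap->br_written;
      }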
    • xfs: only reserve quota blocks if we're mapping into a hole · 94b941fd
      Committed by Darrick J. Wong
      When logging quota block count updates during a reflink operation, we
      only log the /delta/ of the block count changes to the dquot.  Since we
      now know ahead of time the extent type of both dmap and smap (and that
      they have the same length), we know that we only need to reserve quota
      blocks for dmap's blockcount if we're mapping it into a hole.
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      94b941fd
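      A tiny sketch of the reservation rule stated above, with hypothetical
      names: replacing an existing mapping does not change the destination
      file's block count, so only a remap into a hole needs quota blocks
      reserved up front for dmap's block count.

      #include <stdbool.h>
      #include <stdint.h>

      uint64_t quota_blocks_for_remap(bool dmap_is_hole, uint64_t dmap_blockcount)
      {
          return dmap_is_hole ? dmap_blockcount : 0;
      }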
    • xfs: only reserve quota blocks for bmbt changes if we're changing the data fork · aa5d0ba0
      Committed by Darrick J. Wong
      Now that we've reworked xfs_reflink_remap_extent to remap only one
      extent per transaction, we actually know if the extent being removed is
      an allocated mapping.  This means that we now know ahead of time if
      we're going to be touching the data fork.
      
      Since we only need blocks for a bmbt split if we're going to update the
      data fork, we only need to get quota reservation if we know we're going
      to touch the data fork.
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      aa5d0ba0
    • xfs: redesign the reflink remap loop to fix blkres depletion crash · 00fd1d56
      Committed by Darrick J. Wong
      The existing reflink remapping loop has some structural problems that
      need addressing:
      
      The biggest problem is that we create one transaction for each extent in
      the source file without accounting for the number of mappings there are
      for the same range in the destination file.  In other words, we don't
      know the number of remap operations that will be necessary and we
      therefore cannot guess the block reservation required.  On highly
      fragmented filesystems (e.g. ones with active dedupe) we guess wrong,
      run out of block reservation, and fail.
      
      The second problem is that we don't actually use the bmap intents to
      their full potential -- instead of calling bunmapi directly and having
      to deal with its backwards operation, we could call the deferred ops
      xfs_bmap_unmap_extent and xfs_refcount_decrease_extent instead.  This
      makes the frontend loop much simpler.
      
      Solve all of these problems by refactoring the remapping loops so that
      we only perform one remapping operation per transaction, and each
      operation only tries to remap a single extent from source to dest.
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Reported-by: Edwin Török <edwin@etorok.net>
      Tested-by: Edwin Török <edwin@etorok.net>
      00fd1d56
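      A structural sketch of the reworked loop, not the kernel code: every
      iteration opens its own transaction, remaps at most one extent, and
      commits, so the per-transaction block reservation only ever has to
      cover one unmap plus one map. Every helper below is a hypothetical
      stub standing in for the real transaction and deferred-ops machinery.

      #include <stdint.h>

      struct map_rec { uint64_t off, len; };

      static void trans_begin(void) { /* allocate tx, fixed reservation */ }
      static void trans_commit(void) { /* commit tx, finish deferred ops */ }
      static void get_dest_mapping(uint64_t off, struct map_rec *dmap)
      {
          dmap->off = off;
          dmap->len = 1;            /* pretend one block per dest mapping */
      }
      static void defer_unmap(const struct map_rec *m) { (void)m; }
      static void defer_map(const struct map_rec *m)   { (void)m; }

      void remap_range(struct map_rec smap)
      {
          while (smap.len > 0) {
              struct map_rec dmap;

              trans_begin();
              get_dest_mapping(smap.off, &dmap);  /* one dest extent only */
              defer_unmap(&dmap);                 /* queue the unmap ... */
              defer_map(&smap);                   /* ... and the map */
              trans_commit();

              smap.off += dmap.len;               /* advance past this piece */
              smap.len -= dmap.len;
          }
      }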
    • xfs: rename xfs_bmap_is_real_extent to is_written_extent · 877f58f5
      Committed by Darrick J. Wong
      The name of this predicate is a little misleading -- it decides if the
      extent mapping is allocated and written.  Change the name to be more
      direct, as we're going to add a new predicate in the next patch.
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      877f58f5
    • xfs: fix reflink quota reservation accounting error · 83895227
      Committed by Darrick J. Wong
      Quota reservations are supposed to account for the blocks that might be
      allocated due to a bmap btree split.  Reflink doesn't do this, so fix
      this to make the quota accounting more accurate before we start
      rearranging things.
      
      Fixes: 862bb360 ("xfs: reflink extents from one file to another")
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      83895227
    • xfs: don't eat an EIO/ENOSPC writeback error when scrubbing data fork · eb0efe50
      Committed by Darrick J. Wong
      The data fork scrubber calls filemap_write_and_wait to flush dirty pages
      and delalloc reservations out to disk prior to checking the data fork's
      extent mappings.  Unfortunately, this means that scrub can consume the
      EIO/ENOSPC errors that would otherwise have stayed around in the address
      space until (we hope) the writer application calls fsync to persist data
      and collect errors.  The end result is that programs that wrote to a
      file might never see the error code and proceed as if nothing were
      wrong.
      
      xfs_scrub is not in a position to notify file writers about the
      writeback failure, and it's only here to check metadata, not file
      contents.  Therefore, if writeback fails, we should stuff the error code
      back into the address space so that an fsync by the writer application
      can pick that up.
      
      Fixes: 99d9d8d0 ("xfs: scrub inode block mappings")
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      eb0efe50
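      A conceptual sketch of the fix, assuming the generic kernel
      address_space helpers (filemap_write_and_wait(), mapping_set_error());
      the actual call site and mechanism in the scrubber may differ. The
      point is that a flush failure is put back so a later fsync() from the
      writing application still reports it.

      #include <linux/fs.h>
      #include <linux/pagemap.h>

      int scrub_flush_data(struct address_space *mapping)
      {
          int error = filemap_write_and_wait(mapping);

          if (error)
              /* Don't swallow the writeback failure: re-arm it in the
               * address space so the writer's fsync()/close() sees it. */
              mapping_set_error(mapping, error);
          return error;
      }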
    • xfs: preserve rmapbt swapext block reservation from freed blocks · f74681ba
      Committed by Brian Foster
      The rmapbt extent swap algorithm remaps individual extents between
      the source inode and the target to trigger reverse mapping metadata
      updates. If either inode straddles a format or other bmap allocation
      boundary, the individual unmap and map cycles can trigger repeated
      bmap block allocations and frees as the extent count bounces back
      and forth across the boundary. While net block usage is bound across
      the swap operation, this behavior can prematurely exhaust the
      transaction block reservation because it continuously drains as the
      transaction rolls. Each allocation accounts against the reservation
      and each free returns to global free space on transaction roll.
      
      The previous workaround to this problem attempted to detect this
      boundary condition and provide surplus block reservation to
      accommodate it. This is insufficient because more remaps can occur
      than the extent counts imply, for example when the start offset
      boundaries of the two inodes are not aligned.
      
      To address this problem more generically and dynamically, add a
      transaction accounting mode that returns freed blocks to the
      transaction reservation instead of the superblock counters on
      transaction roll and use it when the rmapbt based algorithm is
      active. This allows the chain of remap transactions to preserve the
      block reservation based on its own frees and prevents premature
      exhaustion regardless of the remap pattern. Note that this is only
      safe for superblocks with lazy sb accounting, but the latter is
      required for v5 supers and the rmap feature depends on v5.
      
      Fixes: b3fed434 ("xfs: account format bouncing into rmapbt swapext tx reservation")
      Root-caused-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      f74681ba
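      A toy model of the accounting mode described above, with hypothetical
      names: with the flag set, blocks freed inside the transaction refill
      the transaction's own block reservation instead of returning to the
      global free-space counter on roll, so a long remap chain cannot drain
      its reservation.

      #include <stdbool.h>
      #include <stdint.h>

      struct trans_stub {
          uint64_t t_blk_res;    /* blocks still reserved for this tx */
          bool     t_res_freed;  /* return frees to the reservation? */
      };

      void trans_free_block(struct trans_stub *tp, uint64_t *global_free)
      {
          if (tp->t_res_freed)
              tp->t_blk_res++;       /* keep it for later allocations */
          else
              (*global_free)++;      /* normal path: back to free space */
      }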
    • xfs: Couple of typo fixes in comments · 06734e3c
      Committed by Keyur Patel
      ./xfs/libxfs/xfs_inode_buf.c:56: unnecssary ==> unnecessary
      ./xfs/libxfs/xfs_inode_buf.c:59: behavour ==> behaviour
      ./xfs/libxfs/xfs_inode_buf.c:206: unitialized ==> uninitialized
      Signed-off-by: Keyur Patel <iamkeyur96@gmail.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      06734e3c
  2. 23 June 2020, 1 commit
    • xfs: fix use-after-free on CIL context on shutdown · c7f87f39
      Committed by Dave Chinner
      xlog_wait() on the CIL context can reference a freed context if the
      waiter doesn't get scheduled before the CIL context is freed. This
      can happen when a task is on the hard throttle and the CIL push
      aborts due to a shutdown. This was detected by generic/019:
      
      thread 1			thread 2
      
      __xfs_trans_commit
       xfs_log_commit_cil
        <CIL size over hard throttle limit>
        xlog_wait
         schedule
      				xlog_cil_push_work
      				wake_up_all
      				<shutdown aborts commit>
      				xlog_cil_committed
      				kmem_free
      
         remove_wait_queue
          spin_lock_irqsave --> UAF
      
      Fix it by moving the wait queue to the CIL rather than keeping it
      in the CIL context that gets freed on push completion. Because the
      wait queue is now independent of the CIL context and we might have
      multiple contexts in flight at once, only wake the waiters on the
      push throttle when the context we are pushing is over the hard
      throttle size threshold.
      
      Fixes: 0e7ab7ef ("xfs: Throttle commits on delayed background CIL push")
      Reported-by: Yu Kuai <yukuai3@huawei.com>
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      c7f87f39
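      A user-space pthread analogue of the fix, built on the simplified and
      assumed structures below: the wait channel belongs to the long-lived
      CIL object rather than the per-push context, so freeing the context
      cannot free memory a waiter sleeps on, and waiters are only woken when
      the pushed context was the one over the hard throttle.

      #include <pthread.h>
      #include <stdbool.h>
      #include <stdlib.h>

      struct cil_ctx {
          unsigned long space_used;
      };

      struct cil {
          pthread_mutex_t lock;        /* init with pthread_mutex_init() */
          pthread_cond_t  push_wait;   /* long-lived: outlives every context */
          struct cil_ctx *ctx;         /* current context, freed after push */
          unsigned long   hard_throttle;
      };

      /* Committer over the hard throttle: sleep on the CIL, never on the
       * context that the push worker is about to free. */
      void throttle_wait(struct cil *cil)
      {
          pthread_mutex_lock(&cil->lock);
          while (cil->ctx && cil->ctx->space_used > cil->hard_throttle)
              pthread_cond_wait(&cil->push_wait, &cil->lock);
          pthread_mutex_unlock(&cil->lock);
      }

      /* Push worker: detach and free the context, then wake waiters only if
       * this context was the one holding them over the throttle. */
      void push_done(struct cil *cil, struct cil_ctx *pushed)
      {
          bool was_throttled = pushed->space_used > cil->hard_throttle;

          pthread_mutex_lock(&cil->lock);
          if (cil->ctx == pushed)
              cil->ctx = NULL;
          pthread_mutex_unlock(&cil->lock);

          free(pushed);                /* safe: nobody sleeps on 'pushed' */

          if (was_throttled) {
              pthread_mutex_lock(&cil->lock);
              pthread_cond_broadcast(&cil->push_wait);
              pthread_mutex_unlock(&cil->lock);
          }
      }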
  3. 10 June 2020, 1 commit
  4. 09 June 2020, 1 commit
  5. 04 June 2020, 1 commit
  6. 03 June 2020, 3 commits
  7. 30 May 2020, 6 commits
  8. 27 May 2020, 10 commits