1. 09 Jun 2021, 2 commits
      xfs: selectively keep sick inodes in memory · 9492750a
      Committed by Darrick J. Wong
      It's important that the filesystem retain its memory of sick inodes for
      a little while after problems are found so that reports can be collected
      about what was wrong.  Don't let inode reclamation free sick inodes
      unless we're unmounting or the fs already went down.
      Signed-off-by: Darrick J. Wong <djwong@kernel.org>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Carlos Maiolino <cmaiolino@redhat.com>
      xfs: only reset incore inode health state flags when reclaiming an inode · 255794c7
      Committed by Darrick J. Wong
      While running some fuzz tests on inode metadata, I noticed that the
      filesystem health report (as provided by xfs_spaceman) failed to report
      the file corruption even when spaceman was run immediately after running
      xfs_scrub to detect the corruption.  That isn't the intended behavior;
      one ought to be able to run scrub to detect errors in the ondisk
      metadata and then access those reports for some time after the
      scrub.
      
      After running the same sequence through an instrumented kernel, I
      discovered the reason why -- scrub igets the file, scans it, marks it
      sick, and ireleases the inode.  When the VFS lets go of the incore
      inode, it moves to RECLAIMABLE state.  If spaceman igets the incore
      inode before it moves to RECLAIM state, iget reinitializes the VFS
      state, clears the sick and checked masks, and hands back the inode.  At
      this point, the caller has the exact same incore inode, but with all the
      health state erased.
      
      In other words, we're erasing the incore inode's health state flags when
      we've decided NOT to sever the link between the incore inode and the
      ondisk inode.  This is wrong, so we need to remove the lines that zero
      the fields from xfs_iget_cache_hit.
      
      As a precaution, we add the same lines into xfs_reclaim_inode just after
      we sever the link between incore and ondisk inode.  Strictly speaking
      this isn't necessary because once an inode has gone through reclaim it
      must go through xfs_inode_alloc (which also zeroes the state) and
      xfs_iget is careful to check for mismatches between the inode it pulls
      out of the radix tree and the one it wants.
      
      Fixes: 6772c1f1 ("xfs: track metadata health status")
      Signed-off-by: Darrick J. Wong <djwong@kernel.org>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Carlos Maiolino <cmaiolino@redhat.com>
  2. 04 Jun 2021, 14 commits
  3. 02 Jun 2021, 2 commits
  4. 08 Apr 2021, 9 commits
  5. 26 Mar 2021, 1 commit
  6. 04 Feb 2021, 12 commits