1. 13 Aug 2019 (2 commits)
    • xfs: don't crash on null attr fork xfs_bmapi_read · 8612de3f
      Darrick J. Wong committed
      Zorro Lang reported a crash in generic/475 if we try to inactivate a
      corrupt inode with a NULL attr fork (stack trace shortened somewhat):
      
      RIP: 0010:xfs_bmapi_read+0x311/0xb00 [xfs]
      RSP: 0018:ffff888047f9ed68 EFLAGS: 00010202
      RAX: dffffc0000000000 RBX: ffff888047f9f038 RCX: 1ffffffff5f99f51
      RDX: 0000000000000002 RSI: 0000000000000008 RDI: 0000000000000012
      RBP: ffff888002a41f00 R08: ffffed10005483f0 R09: ffffed10005483ef
      R10: ffffed10005483ef R11: ffff888002a41f7f R12: 0000000000000004
      R13: ffffe8fff53b5768 R14: 0000000000000005 R15: 0000000000000001
      FS:  00007f11d44b5b80(0000) GS:ffff888114200000(0000) knlGS:0000000000000000
      CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      CR2: 0000000000ef6000 CR3: 000000002e176003 CR4: 00000000001606e0
      Call Trace:
       xfs_dabuf_map.constprop.18+0x696/0xe50 [xfs]
       xfs_da_read_buf+0xf5/0x2c0 [xfs]
       xfs_da3_node_read+0x1d/0x230 [xfs]
       xfs_attr_inactive+0x3cc/0x5e0 [xfs]
       xfs_inactive+0x4c8/0x5b0 [xfs]
       xfs_fs_destroy_inode+0x31b/0x8e0 [xfs]
       destroy_inode+0xbc/0x190
       xfs_bulkstat_one_int+0xa8c/0x1200 [xfs]
       xfs_bulkstat_one+0x16/0x20 [xfs]
       xfs_bulkstat+0x6fa/0xf20 [xfs]
       xfs_ioc_bulkstat+0x182/0x2b0 [xfs]
       xfs_file_ioctl+0xee0/0x12a0 [xfs]
       do_vfs_ioctl+0x193/0x1000
       ksys_ioctl+0x60/0x90
       __x64_sys_ioctl+0x6f/0xb0
       do_syscall_64+0x9f/0x4d0
       entry_SYSCALL_64_after_hwframe+0x49/0xbe
      RIP: 0033:0x7f11d39a3e5b
      
      The "obvious" cause is that the attr ifork is null despite the inode
      claiming an attr fork having at least one extent, but it's not so
      obvious why we ended up with an inode in that state.
      Reported-by: Zorro Lang <zlang@redhat.com>
      Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=204031
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Bill O'Donnell <billodo@redhat.com>
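
      The fix implied by the commit title is to fail gracefully rather than
      dereference a missing in-core fork. Below is a minimal, self-contained
      sketch of that defensive pattern; it is not the actual kernel change,
      and the structure, function name, and EFSCORRUPTED value are simplified
      stand-ins.

      /* Sketch only: report corruption instead of crashing on a NULL fork. */
      #include <stdio.h>

      #define EFSCORRUPTED 117                /* stand-in "corrupted fs" errno */

      struct inode_stub {
              void *data_fork;
              void *attr_fork;                /* may be NULL even if extents are claimed */
              int   attr_extents;             /* extent count the on-disk inode claims */
      };

      static int bmapi_read_stub(struct inode_stub *ip, int want_attr_fork)
      {
              void *ifp = want_attr_fork ? ip->attr_fork : ip->data_fork;

              if (!ifp)
                      return -EFSCORRUPTED;   /* corrupt metadata: fail, don't crash */

              /* ... normal extent mapping would go here ... */
              return 0;
      }

      int main(void)
      {
              struct inode_stub bad = { .data_fork = (void *)&bad,
                                        .attr_fork = NULL, .attr_extents = 1 };

              if (bmapi_read_stub(&bad, 1) == -EFSCORRUPTED)
                      printf("corrupt inode detected, no crash\n");
              return 0;
      }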
    • xfs: remove more ondisk directory corruption asserts · 858b44dc
      Darrick J. Wong committed
      Continue our game of replacing ASSERTs for corrupt ondisk metadata with
      EFSCORRUPTED returns.
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Bill O'Donnell <billodo@redhat.com>
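
      The replacement pattern is mechanical: on-disk directory data is
      untrusted input, so a bad value should produce an -EFSCORRUPTED return
      rather than trip a debug-only ASSERT. A small illustrative before/after
      sketch follows; the function names and errno value are made up for the
      example, not the directory routines this commit actually touches.

      #include <assert.h>

      #define EFSCORRUPTED 117                /* stand-in "corrupted fs" errno */

      /* Before: a bad on-disk value takes down a debug kernel via ASSERT. */
      static int parse_dirent_old(unsigned int namelen)
      {
              assert(namelen > 0);            /* asserting on untrusted on-disk data */
              return 0;
      }

      /* After: the same condition is reported to the caller as an error. */
      static int parse_dirent_new(unsigned int namelen)
      {
              if (namelen == 0)
                      return -EFSCORRUPTED;   /* corruption: return, don't assert */
              return 0;
      }

      int main(void)
      {
              (void)parse_dirent_old(1);      /* valid input: assert passes */
              return parse_dirent_new(0) == -EFSCORRUPTED ? 0 : 1;
      }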
  2. 15 Jul 2019 (2 commits)
  3. 06 Jul 2019 (1 commit)
  4. 04 Jul 2019 (7 commits)
  5. 03 Jul 2019 (2 commits)
  6. 01 Jul 2019 (1 commit)
  7. 29 Jun 2019 (9 commits)
  8. 13 Jun 2019 (1 commit)
  9. 12 Jun 2019 (5 commits)
  10. 21 May 2019 (1 commit)
    • xfs: don't reserve per-AG space for an internal log · 5cd213b0
      Darrick J. Wong committed
      It turns out that the log can consume nearly all the space in an AG, and
      when this happens it's possible that there will be less free space
      in the AG than the reservation would try to hide.  On a debug kernel
      this can trigger an ASSERT in xfs/250:
      
      XFS: Assertion failed: xfs_perag_resv(pag, XFS_AG_RESV_METADATA)->ar_reserved + xfs_perag_resv(pag, XFS_AG_RESV_RMAPBT)->ar_reserved <= pag->pagf_freeblks + pag->pagf_flcount, file: fs/xfs/libxfs/xfs_ag_resv.c, line: 319
      
      The log is permanently allocated, so we know we're never going to have
      to expand the btrees to hold any records associated with the log space.
      We therefore can treat the space as if it doesn't exist.
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Eric Sandeen <sandeen@redhat.com>
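
      Put differently, when sizing the per-AG metadata reservation the blocks
      pinned by an internal log can simply be excluded, because they will
      never generate rmap or refcount btree records. The sketch below
      illustrates that adjustment with hypothetical structure and field
      names; it is not the real xfs_ag_resv code.

      #include <stdint.h>
      #include <stdio.h>

      /* Hypothetical per-AG geometry, for illustration only. */
      struct ag_geom {
              uint32_t ag_blocks;             /* total blocks in this AG */
              uint32_t log_blocks;            /* blocks pinned by an internal log, 0 if none */
      };

      /*
       * Blocks the metadata reservation should actually consider: the
       * internal log is permanently allocated and never generates btree
       * records, so treat that space as if it doesn't exist.
       */
      static uint32_t resv_usable_blocks(const struct ag_geom *geo)
      {
              return geo->ag_blocks - geo->log_blocks;
      }

      int main(void)
      {
              /* A small AG almost entirely consumed by the log. */
              struct ag_geom geo = { .ag_blocks = 16384, .log_blocks = 16000 };

              printf("blocks subject to reservation: %u\n", resv_usable_blocks(&geo));
              return 0;
      }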
  11. 02 May 2019 (1 commit)
  12. 30 Apr 2019 (2 commits)
    • xfs: add online scrub for superblock counters · 75efa57d
      Darrick J. Wong committed
      Teach online scrub how to check the filesystem summary counters.  We use
      the incore delalloc block counter along with the incore AG headers to
      compute expected values for fdblocks, icount, and ifree, and then check
      that the percpu counter is within a certain threshold of the expected
      value.  This is done to avoid having to freeze or otherwise lock the
      filesystem, which means that we're only checking that the counters are
      fairly close, not that they're exactly correct.
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
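
      Because the filesystem stays online while the counters are recomputed,
      the comparison has to tolerate some drift rather than insist on exact
      equality. The sketch below shows one way such a threshold check could
      look; the 1/64 error budget is an arbitrary illustrative choice, not
      the tolerance the kernel actually uses.

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      /*
       * Compare an in-core summary counter against the value recomputed from
       * the AG headers plus the incore delalloc count, allowing a small
       * relative error because the filesystem is not frozen during the scan.
       */
      static bool counter_close_enough(uint64_t observed, uint64_t expected)
      {
              uint64_t diff = observed > expected ? observed - expected
                                                  : expected - observed;

              /* Illustrative budget: within 1/64 (~1.6%) of the expected value. */
              return diff <= expected / 64;
      }

      int main(void)
      {
              printf("small drift ok:  %d\n", counter_close_enough(1000500, 1000000));
              printf("large drift bad: %d\n", counter_close_enough(2000000, 1000000));
              return 0;
      }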
    • xfs: always rejoin held resources during defer roll · 710d707d
      Darrick J. Wong committed
      During testing of xfs/141 on a V4 filesystem, I observed some
      inconsistent behavior with regards to resources that are held (i.e.
      remain locked) across a defer roll.  The transaction roll always gives
      the defer roll function a new transaction, even if committing the old
      transaction fails.  However, the defer roll function only rejoins the
      held resources if the transaction commit succeeded.  This means that
      callers of defer roll have to figure out whether the held resources are
      attached to the transaction being passed back.
      
      Worse yet, if the defer roll was part of a defer finish call, we have a
      third possibility: the defer finish could pass back a dirty transaction
      with dirty held resources and an error code.
      
      The only sane way to handle all of these scenarios is to require that
      the code that held the resource either cancel the transaction before
      unlocking and releasing the resources, or use functions that detach
      resources from a transaction properly (e.g.  xfs_trans_brelse) if they
      need to drop the reference before committing or cancelling the
      transaction.
      
      In order to make this so, change the defer roll code to join held
      resources to the new transaction unconditionally and fix all the bhold
      callers to release the held buffers correctly.
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
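
      The invariant established by this change is that held resources are
      always attached to whatever transaction the caller gets back, even when
      committing the old transaction failed, so the caller's error path can
      simply cancel that transaction. The sketch below models that control
      flow with placeholder types and helpers; it is not the real
      xfs_trans/xfs_defer API.

      #include <stdio.h>

      /* Placeholder types standing in for xfs_trans and a held buffer/inode. */
      struct txn { int id; };
      struct held_res { struct txn *owner; };

      /* Stub "roll": always hands back a new transaction, even on commit failure. */
      static int txn_roll(struct txn **tpp, int simulate_commit_error)
      {
              static struct txn next = { .id = 2 };

              *tpp = &next;
              return simulate_commit_error ? -5 /* EIO */ : 0;
      }

      static void txn_join(struct txn *tp, struct held_res *res)
      {
              res->owner = tp;                /* resource now belongs to this transaction */
      }

      /*
       * Roll and rejoin the held resource unconditionally: whether or not the
       * commit succeeded, the caller gets back a transaction that owns the
       * held resource, so the cleanup path is identical in both cases.
       */
      static int defer_roll(struct txn **tpp, struct held_res *held, int fail)
      {
              int error = txn_roll(tpp, fail);

              txn_join(*tpp, held);           /* rejoin even when error != 0 */
              return error;
      }

      int main(void)
      {
              struct txn first = { .id = 1 }, *tp = &first;
              struct held_res buf = { .owner = &first };
              int error = defer_roll(&tp, &buf, 1 /* force a commit failure */);

              printf("error=%d, held resource owned by txn %d\n", error, buf.owner->id);
              return 0;
      }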
  13. 27 Apr 2019 (1 commit)
  14. 23 Apr 2019 (2 commits)
  15. 15 Apr 2019 (3 commits)