1. 27 Aug, 2021 (2 commits)
  2. 22 Jun, 2021 (1 commit)
  3. 04 Jun, 2021 (1 commit)
    • xfs: refactor per-AG inode tagging functions · c076ae7a
      Darrick J. Wong authored
      In preparation for adding another incore inode tree tag, refactor the
      code that sets and clears tags from the per-AG inode tree and the tree
      of per-AG structures, and remove the open-coded versions used by the
      blockgc code.
      
      Note: For reclaim, we now rely more heavily on the radix tree tags,
      and less on the reclaimable inode count, than we used to.  The
      conversion should be fine, but the logic isn't 100% identical.
      Signed-off-by: Darrick J. Wong <djwong@kernel.org>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
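
      A minimal sketch of the consolidated helper shape this refactor describes,
      assuming the usual pag_ici_root/m_perag_tree radix trees; the function name
      and the exact locking details are illustrative, not the code added by the
      patch itself:

       /* Illustrative sketch only -- not the exact helper from this patch. */
       static void example_perag_set_inode_tag(struct xfs_perag *pag,
                                               xfs_agino_t agino, unsigned int tag)
       {
               struct xfs_mount *mp = pag->pag_mount;
               bool was_tagged;

               lockdep_assert_held(&pag->pag_ici_lock);

               /* Tag the inode in the per-AG inode cache radix tree. */
               was_tagged = radix_tree_tagged(&pag->pag_ici_root, tag);
               radix_tree_tag_set(&pag->pag_ici_root, agino, tag);
               if (was_tagged)
                       return;

               /* First tagged inode in this AG: propagate to the per-AG tree. */
               spin_lock(&mp->m_perag_lock);
               radix_tree_tag_set(&mp->m_perag_tree, pag->pag_agno, tag);
               spin_unlock(&mp->m_perag_lock);
       }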
  4. 02 Jun, 2021 (1 commit)
  5. 08 Apr, 2021 (1 commit)
  6. 26 Mar, 2021 (4 commits)
  7. 04 Feb, 2021 (6 commits)
  8. 28 Jan, 2021 (1 commit)
  9. 24 Jan, 2021 (1 commit)
  10. 23 Jan, 2021 (5 commits)
  11. 10 Dec, 2020 (4 commits)
  12. 09 Dec, 2020 (1 commit)
  13. 26 Sep, 2020 (1 commit)
  14. 19 Sep, 2020 (1 commit)
  15. 16 Sep, 2020 (5 commits)
  16. 07 Sep, 2020 (1 commit)
    • xfs: xfs_iflock is no longer a completion · 718ecc50
      Dave Chinner authored
      With the recent rework of the inode cluster flushing, we no longer
      ever wait on the inode flush "lock". It was never a lock in the
      first place, just a completion to allow callers to wait for inode IO
      to complete. We now never wait for flush completion as all inode
      flushing is non-blocking. Hence we can get rid of all the iflock
      infrastructure and instead just set and check a state flag.
      
      Rename the XFS_IFLOCK flag to XFS_IFLUSHING, convert all the
      xfs_iflock_nowait() calls to test-and-set operations on that flag,
      and replace all the xfs_ifunlock() calls with clear operations.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
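
      A rough sketch of the non-blocking pattern described above, assuming the
      existing xfs_iflags_test_and_set()/xfs_iflags_clear() helpers; the wrapper
      function names are illustrative:

       /* Illustrative sketch of the flag-based gate replacing the old iflock. */
       static bool example_inode_start_flush(struct xfs_inode *ip)
       {
               /* Only one flusher at a time; never wait for a flush in progress. */
               return !xfs_iflags_test_and_set(ip, XFS_IFLUSHING);
       }

       static void example_inode_finish_flush(struct xfs_inode *ip)
       {
               /* The flush I/O has completed: just clear the state flag. */
               xfs_iflags_clear(ip, XFS_IFLUSHING);
       }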
  17. 18 Jul, 2020 (1 commit)
    • xfs: preserve inode versioning across remounts · 4750a171
      Eric Sandeen authored
      The MS_I_VERSION mount flag is exposed via the VFS, as documented
      in the mount manpages etc; see the iversion and noiversion mount
      options in mount(8).
      
      As a result, mount -o remount looks for this option in /proc/mounts
      and will only send the I_VERSION flag back in during remount if it
      is present.  Since it's not there, a remount will /remove/ the
      I_VERSION flag at the vfs level, and iversion functionality is lost.
      
      xfs v5 superblocks intend to always have i_version enabled; it is
      set as a default at mount time, but is lost during remount for the
      reasons above.
      
      The generic fix would be to expose this documented option in
      /proc/mounts, but since that was rejected, fix it up again in the
      xfs remount path instead, so that at least xfs won't suffer from
      this misbehavior.
      Signed-off-by: Eric Sandeen <sandeen@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
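
      A hedged sketch of the shape of the xfs-side workaround: re-assert
      SB_I_VERSION in the new mount flags during remount so a v5 filesystem
      keeps iversion semantics.  The hook name and the feature check used here
      are illustrative, not the exact patch:

       /* Illustrative sketch only -- hook name and context simplified. */
       static int example_reconfigure(struct fs_context *fc)
       {
               struct xfs_mount *mp = XFS_M(fc->root->d_sb);

               /* v5 (CRC-enabled) superblocks always want i_version semantics. */
               if (xfs_sb_version_hascrc(&mp->m_sb))
                       fc->sb_flags |= SB_I_VERSION;

               return 0;
       }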
  18. 10 Jul, 2020 (1 commit)
    • xfs: Fix false positive lockdep warning with sb_internal & fs_reclaim · c3f2375b
      Waiman Long authored
      Depending on the workloads, the following circular locking dependency
      warning between sb_internal (a percpu rwsem) and fs_reclaim (a pseudo
      lock) may show up:
      
      ======================================================
      WARNING: possible circular locking dependency detected
      5.0.0-rc1+ #60 Tainted: G        W
      ------------------------------------------------------
      fsfreeze/4346 is trying to acquire lock:
      0000000026f1d784 (fs_reclaim){+.+.}, at:
      fs_reclaim_acquire.part.19+0x5/0x30
      
      but task is already holding lock:
      0000000072bfc54b (sb_internal){++++}, at: percpu_down_write+0xb4/0x650
      
      which lock already depends on the new lock.
        :
       Possible unsafe locking scenario:
      
             CPU0                    CPU1
             ----                    ----
        lock(sb_internal);
                                     lock(fs_reclaim);
                                     lock(sb_internal);
        lock(fs_reclaim);
      
       *** DEADLOCK ***
      
      4 locks held by fsfreeze/4346:
       #0: 00000000b478ef56 (sb_writers#8){++++}, at: percpu_down_write+0xb4/0x650
       #1: 000000001ec487a9 (&type->s_umount_key#28){++++}, at: freeze_super+0xda/0x290
       #2: 000000003edbd5a0 (sb_pagefaults){++++}, at: percpu_down_write+0xb4/0x650
       #3: 0000000072bfc54b (sb_internal){++++}, at: percpu_down_write+0xb4/0x650
      
      stack backtrace:
      Call Trace:
       dump_stack+0xe0/0x19a
       print_circular_bug.isra.10.cold.34+0x2f4/0x435
       check_prev_add.constprop.19+0xca1/0x15f0
       validate_chain.isra.14+0x11af/0x3b50
       __lock_acquire+0x728/0x1200
       lock_acquire+0x269/0x5a0
       fs_reclaim_acquire.part.19+0x29/0x30
       fs_reclaim_acquire+0x19/0x20
       kmem_cache_alloc+0x3e/0x3f0
       kmem_zone_alloc+0x79/0x150
       xfs_trans_alloc+0xfa/0x9d0
       xfs_sync_sb+0x86/0x170
       xfs_log_sbcount+0x10f/0x140
       xfs_quiesce_attr+0x134/0x270
       xfs_fs_freeze+0x4a/0x70
       freeze_super+0x1af/0x290
       do_vfs_ioctl+0xedc/0x16c0
       ksys_ioctl+0x41/0x80
       __x64_sys_ioctl+0x73/0xa9
       do_syscall_64+0x18f/0xd23
       entry_SYSCALL_64_after_hwframe+0x49/0xbe
      
      This is a false positive as all the dirty pages are flushed out before
      the filesystem can be frozen.
      
      One way to avoid this splat is to add GFP_NOFS to the affected allocation
      calls by using the memalloc_nofs_save()/memalloc_nofs_restore() pair.
      This shouldn't matter unless the system is really running out of memory.
      In that case, the filesystem freeze operation may fail where it would
      previously have succeeded.
      
      Without this patch, the command sequence below will show that the lock
      dependency chain sb_internal -> fs_reclaim exists.
      
       # fsfreeze -f /home
       # fsfreeze --unfreeze /home
       # grep -i fs_reclaim -C 3 /proc/lockdep_chains | grep -C 5 sb_internal
      
      After applying the patch, the sb_internal -> fs_reclaim lock dependency
      chain can no longer be found, and the locking dependency warning is no
      longer shown.
      Suggested-by: Dave Chinner <david@fromorbit.com>
      Signed-off-by: Waiman Long <longman@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
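
      The memalloc_nofs_save()/memalloc_nofs_restore() pair scopes GFP_NOFS over
      a region of code; a minimal sketch of wrapping the freeze path this way is
      shown below (the wrapper and the callee are illustrative, not the exact
      functions touched by the patch):

       /* Illustrative sketch: force GFP_NOFS for every allocation made while
        * quiescing the filesystem for freeze, so lockdep never records an
        * sb_internal -> fs_reclaim dependency from this path. */
       static int example_fs_freeze(struct super_block *sb)
       {
               unsigned int nofs_flag;
               int error;

               nofs_flag = memalloc_nofs_save();
               error = example_quiesce(XFS_M(sb));     /* illustrative callee */
               memalloc_nofs_restore(nofs_flag);

               return error;
       }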
  19. 07 Jul, 2020 (1 commit)
    • xfs: remove SYNC_WAIT from xfs_reclaim_inodes() · 4d0bab3a
      Dave Chinner authored
      Clean up xfs_reclaim_inodes() callers. Most callers want blocking
      behaviour, so just make the existing SYNC_WAIT behaviour the
      default.
      
      For the xfs_reclaim_worker(), just call xfs_reclaim_inodes_ag()
      directly because we just want optimistic clean inode reclaim to be
      done in the background.
      
      For xfs_quiesce_attr() we can just remove the inode reclaim calls as
      they are a historic relic that was required to flush dirty inodes
      that contained unlogged changes. We now log all changes to the
      inodes, so the sync AIL push from xfs_log_quiesce() called by
      xfs_quiesce_attr() will do all the required inode writeback for
      freeze.
      
      Since we now want to loop until all reclaimable inodes have been
      reclaimed, make xfs_reclaim_inodes() loop on the XFS_ICI_RECLAIM_TAG
      tag rather than having xfs_reclaim_inodes_ag() tell it that inodes
      were skipped. This is much more reliable and will always loop until
      all reclaimable inodes are reclaimed.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
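
      A hedged sketch of the blocking loop this message describes: keep pushing
      the AIL and running per-AG reclaim passes while any inode is still tagged
      XFS_ICI_RECLAIM_TAG.  The per-AG pass is shown as an illustrative helper,
      since the exact xfs_reclaim_inodes_ag() arguments are simplified away here:

       /* Illustrative sketch -- per-AG pass and its arguments simplified. */
       void example_reclaim_inodes(struct xfs_mount *mp)
       {
               /* Loop on the radix tree tag itself, not on a "skipped" hint. */
               while (radix_tree_tagged(&mp->m_perag_tree, XFS_ICI_RECLAIM_TAG)) {
                       xfs_ail_push_all_sync(mp->m_ail);  /* write back dirty inodes */
                       example_reclaim_inodes_ag(mp);     /* illustrative per-AG pass */
               }
       }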
  20. 30 May, 2020 (1 commit)