Commit 799ea9e9 authored by Darrick J. Wong

xfs: evict all inodes involved with log redo item

When we introduced the bmap redo log items, we set MS_ACTIVE on the
mountpoint and XFS_IRECOVERY on the inode to prevent unlinked inodes
from being truncated prematurely during log recovery.  This also had the
effect of putting linked inodes on the lru instead of evicting them.

Unfortunately, we neglected to find all those unreferenced lru inodes
and evict them after finishing log recovery, which means that we leak
them if anything goes wrong in the rest of xfs_mountfs, because the lru
is only cleaned out on unmount.

Therefore, evict unreferenced inodes in the lru list immediately
after clearing MS_ACTIVE.
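
In other words, the ordering that matters in xfs_log_mount_finish() after this
change is the following condensed sketch (all symbols are taken from the hunk
in the diff below; error handling and the read-only restore are omitted):

	mp->m_super->s_flags |= MS_ACTIVE;	/* let recovery leave unlinked inodes on the lru */
	error = xlog_recover_finish(mp->m_log);
	if (!error)
		xfs_log_work_queue(mp);
	mp->m_super->s_flags &= ~MS_ACTIVE;
	evict_inodes(mp->m_super);		/* reap the unreferenced lru inodes now, not at unmount */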

Fixes: 17c12bcd ("xfs: when replaying bmap operations, don't let unlinked inodes get reaped")
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Cc: viro@ZenIV.linux.org.uk
Reviewed-by: Brian Foster <bfoster@redhat.com>
Parent 2d32311c
@@ -637,6 +637,7 @@ void evict_inodes(struct super_block *sb)
 
 	dispose_list(&dispose);
 }
+EXPORT_SYMBOL_GPL(evict_inodes);
 
 /**
  * invalidate_inodes	- attempt to free all inodes on a superblock
...
@@ -132,7 +132,6 @@ static inline bool atime_needs_update_rcu(const struct path *path,
 extern void inode_io_list_del(struct inode *inode);
 
 extern long get_nr_dirty_inodes(void);
-extern void evict_inodes(struct super_block *);
 extern int invalidate_inodes(struct super_block *, bool);
 
 /*
...
@@ -761,12 +761,24 @@ xfs_log_mount_finish(
 	 * inodes.  Turn it off immediately after recovery finishes
 	 * so that we don't leak the quota inodes if subsequent mount
 	 * activities fail.
+	 *
+	 * We let all inodes involved in redo item processing end up on
+	 * the LRU instead of being evicted immediately so that if we do
+	 * something to an unlinked inode, the irele won't cause
+	 * premature truncation and freeing of the inode, which results
+	 * in log recovery failure.  We have to evict the unreferenced
+	 * lru inodes after clearing MS_ACTIVE because we don't
+	 * otherwise clean up the lru if there's a subsequent failure in
+	 * xfs_mountfs, which leads to us leaking the inodes if nothing
+	 * else (e.g. quotacheck) references the inodes before the
+	 * mount failure occurs.
 	 */
 	mp->m_super->s_flags |= MS_ACTIVE;
 	error = xlog_recover_finish(mp->m_log);
 	if (!error)
 		xfs_log_work_queue(mp);
 	mp->m_super->s_flags &= ~MS_ACTIVE;
+	evict_inodes(mp->m_super);
 
 	if (readonly)
 		mp->m_flags |= XFS_MOUNT_RDONLY;
...
@@ -2831,6 +2831,7 @@ static inline void lockdep_annotate_inode_mutex_key(struct inode *inode) { };
 #endif
 
 extern void unlock_new_inode(struct inode *);
 extern unsigned int get_next_ino(void);
+extern void evict_inodes(struct super_block *sb);
 extern void __iget(struct inode * inode);
 extern void iget_failed(struct inode *);
...