Commit 4c60658e authored by David Chinner, committed by Tim Shimmin

[XFS] Prevent a deadlock when xfslogd unpins inodes.

The previous fixes for the use-after-free in xfs_iunpin left a nasty log
deadlock when xfslogd unpinned the inode and dropped the last reference to
it. The ->clear_inode() method can issue transactions, and if the log was
full, the transaction could push on the log and get stuck trying to push
the inode it was currently unpinning.

To fix this, we give xfs_iunpin a guarantee that it will always see either
a valid xfs_inode <-> linux inode link or the XFS_IRECLAIMABLE flag set on
the inode. We then use log forces during lookup to ensure transactions are
completed before we recycle the inode. This ensures that xfs_iunpin will
never use the linux inode after it has been freed, and that any lookup of
an inode on the reclaim list will wait until it is safe to attach a new
linux inode to the xfs inode.

SGI-PV: 956832
SGI-Modid: xfs-linux-melb:xfs-kern:27359a
Signed-off-by: David Chinner <dgc@sgi.com>
Signed-off-by: Shailendra Tripathi <stripathi@agami.com>
Signed-off-by: Takenori Nagano <t-nagano@ah.jp.nec.com>
Signed-off-by: Tim Shimmin <tes@sgi.com>
Parent 7a18c386
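Before the diff, here is a minimal user-space model of the ordering rule the patch relies on: reclaim must set the reclaimable flag under i_flags_lock before it breaks the xfs_inode <-> linux inode link, and unpin may only touch the linux inode while holding that same lock and finding the flag clear. This is an illustrative sketch only, not the kernel code; every name in it (model_xfs_inode, model_iunpin, model_reclaim, and so on) is invented for this example.

/*
 * Minimal user-space model of the ordering rule described above.  This is
 * NOT the kernel code (that follows in the diff); all names here are
 * invented for illustration.  Build with: cc -pthread model.c
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct model_linux_inode {
	int dirty;				/* stands in for the dirty state sync flushes */
};

struct model_xfs_inode {
	pthread_mutex_t i_flags_lock;		/* stands in for ip->i_flags_lock */
	bool i_reclaimable;			/* stands in for XFS_IRECLAIMABLE */
	struct model_linux_inode *linux_inode;	/* the link that reclaim breaks */
};

/* Models the xfs_iunpin() side: only touch the linux inode if reclaim has not begun. */
static void model_iunpin(struct model_xfs_inode *ip)
{
	pthread_mutex_lock(&ip->i_flags_lock);
	if (!ip->i_reclaimable) {
		/*
		 * Safe: reclaim must set i_reclaimable under this lock
		 * before it breaks the link, so the pointer is still valid.
		 */
		ip->linux_inode->dirty = 1;	/* models mark_inode_dirty_sync() */
	}
	pthread_mutex_unlock(&ip->i_flags_lock);
}

/* Models the xfs_reclaim() side: set the flag first, then break the link, under the lock. */
static void model_reclaim(struct model_xfs_inode *ip)
{
	pthread_mutex_lock(&ip->i_flags_lock);
	ip->i_reclaimable = true;
	ip->linux_inode = NULL;			/* models breaking the vnode <-> inode link */
	pthread_mutex_unlock(&ip->i_flags_lock);
}

static void *unpin_thread(void *arg)
{
	model_iunpin(arg);
	return NULL;
}

int main(void)
{
	struct model_linux_inode li = { 0 };
	struct model_xfs_inode ip = { .i_reclaimable = false, .linux_inode = &li };
	pthread_t t;

	pthread_mutex_init(&ip.i_flags_lock, NULL);

	/* Race unpin against reclaim; the flag/lock ordering keeps unpin safe. */
	pthread_create(&t, NULL, unpin_thread, &ip);
	model_reclaim(&ip);
	pthread_join(t, NULL);

	printf("dirty=%d reclaimable=%d\n", li.dirty, (int)ip.i_reclaimable);
	pthread_mutex_destroy(&ip.i_flags_lock);
	return 0;
}

The actual change, in the hunks below, applies the same rule with XFS_IRECLAIMABLE, ip->i_flags_lock and atomic_dec_and_lock(), plus a log force on the lookup side while the inode is still pinned.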
@@ -237,6 +237,36 @@ xfs_iget_core(
 			goto again;
 		}
 
 		ASSERT(xfs_iflags_test(ip, XFS_IRECLAIMABLE));
+
+		/*
+		 * If lookup is racing with unlink, then we
+		 * should return an error immediately so we
+		 * don't remove it from the reclaim list and
+		 * potentially leak the inode.
+		 */
+		if ((ip->i_d.di_mode == 0) &&
+		    !(flags & XFS_IGET_CREATE)) {
+			read_unlock(&ih->ih_lock);
+			return ENOENT;
+		}
+
+		/*
+		 * There may be transactions sitting in the
+		 * incore log buffers or being flushed to disk
+		 * at this time.  We can't clear the
+		 * XFS_IRECLAIMABLE flag until these
+		 * transactions have hit the disk, otherwise we
+		 * will void the guarantee the flag provides
+		 * xfs_iunpin()
+		 */
+		if (xfs_ipincount(ip)) {
+			read_unlock(&ih->ih_lock);
+			xfs_log_force(mp, 0,
+				XFS_LOG_FORCE|XFS_LOG_SYNC);
+			XFS_STATS_INC(xs_ig_frecycle);
+			goto again;
+		}
+
 		vn_trace_exit(vp, "xfs_iget.alloc",
 			(inst_t *)__return_address);
@@ -2741,42 +2741,39 @@ xfs_iunpin(
 {
 	ASSERT(atomic_read(&ip->i_pincount) > 0);
 
-	if (atomic_dec_and_test(&ip->i_pincount)) {
+	if (atomic_dec_and_lock(&ip->i_pincount, &ip->i_flags_lock)) {
+
 		/*
-		 * If the inode is currently being reclaimed, the
-		 * linux inode _and_ the xfs vnode may have been
-		 * freed so we cannot reference either of them safely.
-		 * Hence we should not try to do anything to them
-		 * if the xfs inode is currently in the reclaim
-		 * path.
+		 * If the inode is currently being reclaimed, the link between
+		 * the bhv_vnode and the xfs_inode will be broken after the
+		 * XFS_IRECLAIM* flag is set. Hence, if these flags are not
+		 * set, then we can move forward and mark the linux inode dirty
+		 * knowing that it is still valid as it won't freed until after
+		 * the bhv_vnode<->xfs_inode link is broken in xfs_reclaim. The
+		 * i_flags_lock is used to synchronise the setting of the
+		 * XFS_IRECLAIM* flags and the breaking of the link, and so we
+		 * can execute atomically w.r.t to reclaim by holding this lock
+		 * here.
 		 *
-		 * However, we still need to issue the unpin wakeup
-		 * call as the inode reclaim may be blocked waiting for
-		 * the inode to become unpinned.
+		 * However, we still need to issue the unpin wakeup call as the
+		 * inode reclaim may be blocked waiting for the inode to become
+		 * unpinned.
 		 */
-		struct inode *inode = NULL;
-
-		spin_lock(&ip->i_flags_lock);
+
 		if (!__xfs_iflags_test(ip, XFS_IRECLAIM|XFS_IRECLAIMABLE)) {
 			bhv_vnode_t	*vp = XFS_ITOV_NULL(ip);
+			struct inode *inode = NULL;
+
+			BUG_ON(vp == NULL);
+			inode = vn_to_inode(vp);
+			BUG_ON(inode->i_state & I_CLEAR);
 
 			/* make sync come back and flush this inode */
-			if (vp) {
-				inode = vn_to_inode(vp);
-
-				if (!(inode->i_state &
-						(I_NEW|I_FREEING|I_CLEAR))) {
-					inode = igrab(inode);
-					if (inode)
-						mark_inode_dirty_sync(inode);
-				} else
-					inode = NULL;
-			}
+			if (!(inode->i_state & (I_NEW|I_FREEING)))
+				mark_inode_dirty_sync(inode);
 		}
 		spin_unlock(&ip->i_flags_lock);
 		wake_up(&ip->i_ipin_wait);
-		if (inode)
-			iput(inode);
 	}
 }
@@ -3827,11 +3827,16 @@ xfs_reclaim(
 	 */
 	xfs_synchronize_atime(ip);
 
-	/* If we have nothing to flush with this inode then complete the
-	 * teardown now, otherwise break the link between the xfs inode
-	 * and the linux inode and clean up the xfs inode later. This
-	 * avoids flushing the inode to disk during the delete operation
-	 * itself.
+	/*
+	 * If we have nothing to flush with this inode then complete the
+	 * teardown now, otherwise break the link between the xfs inode and the
+	 * linux inode and clean up the xfs inode later. This avoids flushing
+	 * the inode to disk during the delete operation itself.
+	 *
+	 * When breaking the link, we need to set the XFS_IRECLAIMABLE flag
+	 * first to ensure that xfs_iunpin() will never see an xfs inode
+	 * that has a linux inode being reclaimed. Synchronisation is provided
+	 * by the i_flags_lock.
 	 */
 	if (!ip->i_update_core && (ip->i_itemp == NULL)) {
 		xfs_ilock(ip, XFS_ILOCK_EXCL);
@@ -3840,11 +3845,13 @@ xfs_reclaim(
 	} else {
 		xfs_mount_t	*mp = ip->i_mount;
 
-		/* Protect sync from us */
+		/* Protect sync and unpin from us */
 		XFS_MOUNT_ILOCK(mp);
+		spin_lock(&ip->i_flags_lock);
+		__xfs_iflags_set(ip, XFS_IRECLAIMABLE);
 		vn_bhv_remove(VN_BHV_HEAD(vp), XFS_ITOBHV(ip));
+		spin_unlock(&ip->i_flags_lock);
 		list_add_tail(&ip->i_reclaim, &mp->m_del_inodes);
-		xfs_iflags_set(ip, XFS_IRECLAIMABLE);
 		XFS_MOUNT_IUNLOCK(mp);
 	}
 	return 0;