Commit 8375f922 authored by Brian Foster, committed by Ben Myers

xfs: re-enable xfsaild idle mode and fix associated races

The xfsaild idle mode logic currently leads to a couple of hangs:

1.) If xfsaild is scheduled back in during an incremental scan
    (i.e., tout != 0) and the target has been updated since
    the previous run, we can hit the new target and go into
    idle mode with a still-populated AIL.
2.) A wake up is only issued when the target is pushed forward.
    The wake up can race with xfsaild if it is currently in the
    process of entering idle mode, causing future wake up
    events to be lost.

These hangs have been reproduced and verified as fixed by
running xfstests 273 in a loop on a slightly modified upstream
kernel. The kernel is modified to re-enable idle mode as
previously implemented (when count == 0) and with a revert of
commit 670ce93f, which includes performance improvements that
make this harder to reproduce.

The solution, based on an algorithm outlined by Dave Chinner,
is to modify xfsaild to enter idle mode only when the AIL is
empty and the push target has not been moved forward since the
last push (see the condensed sketch below the commit metadata).
Signed-off-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ben Myers <bpm@sgi.com>
Parent 4f59af75
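
For context, the decision described above boils down to the classic "declare the sleep state, then test the wake condition" pattern. The sketch below is not the literal upstream function; it is an assumption-level condensation of the xfsaild() main loop using the names introduced by the patch (xa_target_prev, xfs_ail_min, xfsaild_push), with the freezer, shutdown, and TASK_KILLABLE handling omitted:

    static int
    xfsaild(
            void            *data)
    {
            struct xfs_ail  *ailp = data;
            long            tout = 0;  /* ms until next push, 0 = no deadline */

            while (!kthread_should_stop()) {
                    /*
                     * Declare the sleep intent first. A racing wake_up from
                     * xfs_ail_push() either flips us back to TASK_RUNNING or
                     * its target update is seen by the check below, so the
                     * wakeup cannot be lost (hang #2 above).
                     */
                    __set_current_state(TASK_INTERRUPTIBLE);

                    spin_lock(&ailp->xa_lock);
                    smp_rmb();      /* pairs with the target-update barrier */
                    if (!xfs_ail_min(ailp) &&
                        ailp->xa_target == ailp->xa_target_prev) {
                            /* AIL empty and target unchanged: safe to idle */
                            spin_unlock(&ailp->xa_lock);
                            schedule();
                            tout = 0;
                            continue;
                    }
                    spin_unlock(&ailp->xa_lock);

                    /* not idle: sleep only for the requested backoff, if any */
                    if (tout)
                            schedule_timeout(msecs_to_jiffies(tout));
                    __set_current_state(TASK_RUNNING);

                    /* xfsaild_push() samples xa_target and records it in
                     * xa_target_prev for the next idle check */
                    tout = xfsaild_push(ailp);
            }
            return 0;
    }

Hang #1 is avoided because idling now also requires the AIL to be empty, not merely that the last scan reached its target; hang #2 is avoided because the wake condition is only tested after the task state has been set.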
@@ -383,6 +383,12 @@ xfsaild_push(
 	}
 
 	spin_lock(&ailp->xa_lock);
+
+	/* barrier matches the xa_target update in xfs_ail_push() */
+	smp_rmb();
+	target = ailp->xa_target;
+	ailp->xa_target_prev = target;
+
 	lip = xfs_trans_ail_cursor_first(ailp, &cur, ailp->xa_last_pushed_lsn);
 	if (!lip) {
 		/*
@@ -397,7 +403,6 @@ xfsaild_push(
 	XFS_STATS_INC(xs_push_ail);
 
 	lsn = lip->li_lsn;
-	target = ailp->xa_target;
 	while ((XFS_LSN_CMP(lip->li_lsn, target) <= 0)) {
 		int	lock_result;
 
@@ -527,8 +532,32 @@ xfsaild(
 			__set_current_state(TASK_KILLABLE);
 		else
 			__set_current_state(TASK_INTERRUPTIBLE);
-		schedule_timeout(tout ?
-				 msecs_to_jiffies(tout) : MAX_SCHEDULE_TIMEOUT);
+
+		spin_lock(&ailp->xa_lock);
+
+		/*
+		 * Idle if the AIL is empty and we are not racing with a target
+		 * update. We check the AIL after we set the task to a sleep
+		 * state to guarantee that we either catch an xa_target update
+		 * or that a wake_up resets the state to TASK_RUNNING.
+		 * Otherwise, we run the risk of sleeping indefinitely.
+		 *
+		 * The barrier matches the xa_target update in xfs_ail_push().
+		 */
+		smp_rmb();
+		if (!xfs_ail_min(ailp) &&
+		    ailp->xa_target == ailp->xa_target_prev) {
+			spin_unlock(&ailp->xa_lock);
+			schedule();
+			tout = 0;
+			continue;
+		}
+		spin_unlock(&ailp->xa_lock);
+
+		if (tout)
+			schedule_timeout(msecs_to_jiffies(tout));
+
+		__set_current_state(TASK_RUNNING);
 
 		try_to_freeze();
 
@@ -67,6 +67,7 @@ struct xfs_ail {
 	struct task_struct	*xa_task;
 	struct list_head	xa_ail;
 	xfs_lsn_t		xa_target;
+	xfs_lsn_t		xa_target_prev;
 	struct list_head	xa_cursors;
 	spinlock_t		xa_lock;
 	xfs_lsn_t		xa_last_pushed_lsn;
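
The smp_rmb() calls added above pair with a write-side barrier on the path that advances the push target. That side is not part of this diff, so the following is an assumption-level sketch of it, modeled on the xfs_ail_push() of kernels from this era (the forced-shutdown check and the 32-bit LSN-copy helper are simplified away); the point is only that the new target must be published before the push thread is woken:

    void
    xfs_ail_push(
            struct xfs_ail  *ailp,
            xfs_lsn_t       threshold_lsn)
    {
            struct xfs_log_item     *lip = xfs_ail_min(ailp);

            /* nothing to do if the AIL is empty or the target already covers it */
            if (!lip || XFS_LSN_CMP(threshold_lsn, ailp->xa_target) <= 0)
                    return;

            /*
             * Publish the new target, then wake xfsaild. The pusher does
             * smp_rmb() before comparing xa_target with xa_target_prev, so a
             * task about to idle either sees the moved target or has its
             * sleep state reset by the wake-up; either way no wakeup is lost.
             */
            ailp->xa_target = threshold_lsn;  /* upstream uses xfs_trans_ail_copy_lsn() */
            smp_wmb();      /* pairs with the smp_rmb() on the xfsaild side */

            wake_up_process(ailp->xa_task);
    }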