Commit 90c711ab authored by Zygo Blaxell, committed by David Sterba

btrfs: avoid blocking open_ctree from cleaner_kthread

This fixes a problem introduced in commit 2f3165ec ("btrfs: don't force
mounts to wait for cleaner_kthread to delete one or more subvolumes").

open_ctree eventually calls btrfs_replay_log which in turn calls
btrfs_commit_super which tries to lock the cleaner_mutex, causing a
recursive mutex deadlock during mount.

Instead of playing whack-a-mole trying to keep up with all the
functions that may want to lock cleaner_mutex, put all the cleaner_mutex
lockers back where they were, and attack the problem more directly:
keep cleaner_kthread asleep until the filesystem is mounted.

When a filesystem is mounted read-only and later remounted read-write,
open_ctree does not set fs_info->open, and neither does anything else.
Set this flag in btrfs_remount so that neither btrfs_delete_unused_bgs
nor cleaner_kthread gets confused by the common case of a "/" filesystem
mounted read-only and then remounted read-write.
Signed-off-by: Zygo Blaxell <ce3g8jdj@umail.furryterror.org>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Parent 3b6571c1
@@ -1806,6 +1806,13 @@ static int cleaner_kthread(void *arg)
 		if (btrfs_need_cleaner_sleep(root))
 			goto sleep;
 
+		/*
+		 * Do not do anything if we might cause open_ctree() to block
+		 * before we have finished mounting the filesystem.
+		 */
+		if (!root->fs_info->open)
+			goto sleep;
+
 		if (!mutex_trylock(&root->fs_info->cleaner_mutex))
 			goto sleep;
@@ -2520,7 +2527,6 @@ int open_ctree(struct super_block *sb,
 	int num_backups_tried = 0;
 	int backup_index = 0;
 	int max_active;
-	bool cleaner_mutex_locked = false;
 
 	tree_root = fs_info->tree_root = btrfs_alloc_root(fs_info, GFP_KERNEL);
 	chunk_root = fs_info->chunk_root = btrfs_alloc_root(fs_info, GFP_KERNEL);
@@ -2999,13 +3005,6 @@ int open_ctree(struct super_block *sb,
 		goto fail_sysfs;
 	}
 
-	/*
-	 * Hold the cleaner_mutex thread here so that we don't block
-	 * for a long time on btrfs_recover_relocation. cleaner_kthread
-	 * will wait for us to finish mounting the filesystem.
-	 */
-	mutex_lock(&fs_info->cleaner_mutex);
-	cleaner_mutex_locked = true;
 	fs_info->cleaner_kthread = kthread_run(cleaner_kthread, tree_root,
 					       "btrfs-cleaner");
 	if (IS_ERR(fs_info->cleaner_kthread))
@@ -3065,8 +3064,10 @@ int open_ctree(struct super_block *sb,
 		ret = btrfs_cleanup_fs_roots(fs_info);
 		if (ret)
 			goto fail_qgroup;
-		/* We locked cleaner_mutex before creating cleaner_kthread. */
+
+		mutex_lock(&fs_info->cleaner_mutex);
 		ret = btrfs_recover_relocation(tree_root);
+		mutex_unlock(&fs_info->cleaner_mutex);
 		if (ret < 0) {
 			btrfs_warn(fs_info, "failed to recover relocation: %d",
 				   ret);
@@ -3074,8 +3075,6 @@ int open_ctree(struct super_block *sb,
 			goto fail_qgroup;
 		}
 	}
-	mutex_unlock(&fs_info->cleaner_mutex);
-	cleaner_mutex_locked = false;
 
 	location.objectid = BTRFS_FS_TREE_OBJECTID;
 	location.type = BTRFS_ROOT_ITEM_KEY;
@@ -3189,10 +3188,6 @@ int open_ctree(struct super_block *sb,
 	filemap_write_and_wait(fs_info->btree_inode->i_mapping);
 
 fail_sysfs:
-	if (cleaner_mutex_locked) {
-		mutex_unlock(&fs_info->cleaner_mutex);
-		cleaner_mutex_locked = false;
-	}
 	btrfs_sysfs_remove_mounted(fs_info);
 
 fail_fsdev_sysfs:
@@ -1807,6 +1807,8 @@ static int btrfs_remount(struct super_block *sb, int *flags, char *data)
 			}
 		}
 		sb->s_flags &= ~MS_RDONLY;
+
+		fs_info->open = 1;
 	}
 out:
 	wake_up_process(fs_info->transaction_kthread);