Commit d725e66c authored by Linus Torvalds

Revert "fsnotify: fix oops in fsnotify_clear_marks_by_group_flags()"

This reverts commit a2673b6e.

Kinglong Mee reports a memory leak with that patch, and Jan Kara confirms:

 "Thanks for report! You are right that my patch introduces a race
  between fsnotify kthread and fsnotify_destroy_group() which can result
  in leaking inotify event on group destruction.

  I haven't yet decided whether the right fix is not to queue events for
  dying notification group (as that is pointless anyway) or whether we
  should just fix the original problem differently...  Whenever I look
  at fsnotify code mark handling I get lost in the maze of locks, lists,
  and subtle differences between how different notification systems
  handle notification marks :( I'll think about it over night"

and after thinking about it, Jan says:

 "OK, I have looked into the code some more and I found another
  relatively simple way of fixing the original oops.  It will be IMHO
  better than trying to fixup this issue which has more potential for
  breakage.  I'll ask Linus to revert the fsnotify fix he already merged
  and send a new fix"
Reported-by: Kinglong Mee <kinglongmee@gmail.com>
Requested-by: Jan Kara <jack@suse.cz>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Parent 71ebd1af
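Before the diff, it may help to see the deferred-destruction pattern the race above revolves around: a destroyed mark is queued on a global destroy_list and a kthread (fsnotify_mark_destroy) frees it later. The following is a minimal userspace sketch of that hand-off, assuming plain POSIX threads; every name in it is illustrative and none of it is kernel code. The point of interest is the window in which the worker holds a private copy of the list: work done there on behalf of a group (a freeing_mark-style callback, a queued event) can race with that group being torn down, which is the kind of leak reported above.

/*
 * Userspace sketch (NOT kernel code) of the deferred-destruction pattern
 * used for fsnotify marks: callers queue marks on a global destroy_list
 * and a worker thread frees them later.  All names are illustrative.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct mark {
	struct mark *next;
	int id;
};

static struct mark *destroy_list;	/* analogue of the kernel's destroy_list */
static pthread_mutex_t destroy_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t destroy_waitq = PTHREAD_COND_INITIALIZER;
static int stopping;

/* analogue of fsnotify_destroy_mark(): defer the actual free to the worker */
static void destroy_mark(struct mark *m)
{
	pthread_mutex_lock(&destroy_lock);
	m->next = destroy_list;
	destroy_list = m;
	pthread_mutex_unlock(&destroy_lock);
	pthread_cond_signal(&destroy_waitq);
}

/* analogue of fsnotify_mark_destroy(): drain the global list, free marks */
static void *mark_destroy_worker(void *unused)
{
	(void)unused;
	for (;;) {
		struct mark *private_list, *m;
		int done;

		pthread_mutex_lock(&destroy_lock);
		while (!destroy_list && !stopping)
			pthread_cond_wait(&destroy_waitq, &destroy_lock);
		private_list = destroy_list;	/* take a private copy */
		destroy_list = NULL;
		done = stopping;
		pthread_mutex_unlock(&destroy_lock);

		while (private_list) {
			m = private_list;
			private_list = m->next;
			/*
			 * Window of interest: anything done here on behalf
			 * of the mark's group (a freeing_mark-style callback,
			 * a queued event) races with group teardown.
			 */
			printf("worker: freeing mark %d\n", m->id);
			free(m);
		}
		if (done)
			return NULL;
	}
}

int main(void)
{
	pthread_t worker;

	pthread_create(&worker, NULL, mark_destroy_worker, NULL);

	for (int i = 0; i < 4; i++) {
		struct mark *m = calloc(1, sizeof(*m));
		if (!m)
			break;
		m->id = i;
		destroy_mark(m);
	}

	pthread_mutex_lock(&destroy_lock);
	stopping = 1;
	pthread_mutex_unlock(&destroy_lock);
	pthread_cond_broadcast(&destroy_waitq);
	pthread_join(worker, NULL);
	return 0;
}

The revert below puts the freeing_mark call back into fsnotify_destroy_mark_locked(), under the caller's control, and drops it from the fsnotify_mark_destroy() kthread.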
@@ -152,15 +152,31 @@ void fsnotify_destroy_mark_locked(struct fsnotify_mark *mark,
		BUG();
	list_del_init(&mark->g_list);
	spin_unlock(&mark->lock);
	if (inode && (mark->flags & FSNOTIFY_MARK_FLAG_OBJECT_PINNED))
		iput(inode);
	/* release lock temporarily */
	mutex_unlock(&group->mark_mutex);
	spin_lock(&destroy_lock);
	list_add(&mark->g_list, &destroy_list);
	spin_unlock(&destroy_lock);
	wake_up(&destroy_waitq);
	/*
	 * We don't necessarily have a ref on mark from caller so the above destroy
	 * may have actually freed it, unless this group provides a 'freeing_mark'
	 * function which must be holding a reference.
	 */
	/*
	 * Some groups like to know that marks are being freed. This is a
	 * callback to the group function to let it know that this mark
	 * is being freed.
	 */
	if (group->ops->freeing_mark)
		group->ops->freeing_mark(mark, group);
	/*
	 * __fsnotify_update_child_dentry_flags(inode);
@@ -175,6 +191,8 @@ void fsnotify_destroy_mark_locked(struct fsnotify_mark *mark,
	 */
	atomic_dec(&group->num_marks);
	mutex_lock_nested(&group->mark_mutex, SINGLE_DEPTH_NESTING);
}
void fsnotify_destroy_mark(struct fsnotify_mark *mark,
@@ -187,10 +205,7 @@ void fsnotify_destroy_mark(struct fsnotify_mark *mark,
/*
 * Destroy all marks in the given list. The marks must be already detached from
 * the original inode / vfsmount. Note that we can race with
 * fsnotify_clear_marks_by_group_flags(). However we hold a reference to each
 * mark so they won't get freed from under us and nobody else touches our
 * free_list list_head.
 * the original inode / vfsmount.
 */
void fsnotify_destroy_marks(struct list_head *to_free)
{
@@ -391,7 +406,7 @@ struct fsnotify_mark *fsnotify_find_mark(struct hlist_head *head,
}
/*
 * Clear any marks in a group in which mark->flags & flags is true.
 * clear any marks in a group in which mark->flags & flags is true
 */
void fsnotify_clear_marks_by_group_flags(struct fsnotify_group *group,
					 unsigned int flags)
@@ -445,7 +460,6 @@ static int fsnotify_mark_destroy(void *ignored)
{
	struct fsnotify_mark *mark, *next;
	struct list_head private_destroy_list;
	struct fsnotify_group *group;
	for (;;) {
		spin_lock(&destroy_lock);
@@ -457,14 +471,6 @@ static int fsnotify_mark_destroy(void *ignored)
		list_for_each_entry_safe(mark, next, &private_destroy_list, g_list) {
			list_del_init(&mark->g_list);
			group = mark->group;
			/*
			 * Some groups like to know that marks are being freed.
			 * This is a callback to the group function to let it
			 * know that this mark is being freed.
			 */
			if (group && group->ops->freeing_mark)
				group->ops->freeing_mark(mark, group);
			fsnotify_put_mark(mark);
		}