Commit 498a6e6b authored by Piotr Jaroszynski, committed by Greg Kroah-Hartman

fs/iomap.c: get/put the page in iomap_page_create/release()

commit 61c6de667263184125d5ca75e894fcad632b0dd3 upstream.

migrate_page_move_mapping() expects pages with private data set to have
a page_count elevated by 1.  This is what used to happen for xfs through
the buffer_heads code before the switch to iomap in commit 82cb1417
("xfs: add support for sub-pagesize writeback without buffer_heads").
Not having the count elevated causes move_pages() to fail on memory
mapped files coming from xfs.
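
For context, the expectation comes from the refcount check inside migrate_page_move_mapping(); the sketch below is simplified and paraphrased, not the exact kernel code, and the helper name is made up for illustration:

#include <linux/mm.h>		/* page_count() */
#include <linux/page-flags.h>	/* page_has_private() */

/*
 * Illustrative sketch only: migrate_page_move_mapping() computes an
 * expected reference count for the page and bails out with -EAGAIN if
 * the actual count differs.  Pages with private data (PagePrivate set)
 * are expected to hold one extra reference, which is exactly the
 * reference the iomap code was not taking before this fix.
 */
static bool migrate_refs_look_right(struct page *page, int expected_count)
{
	/* private data is assumed to pin one additional reference */
	expected_count += page_has_private(page);

	return page_count(page) == expected_count;
}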

Make iomap compatible with the migrate_page_move_mapping() assumption by
elevating the page count as part of iomap_page_create() and lowering it
in iomap_page_release().

The missing reference causes the move_pages() syscall to misbehave on
memory mapped files from xfs.  It does not move any pages, which I
suppose is "just" a perf issue, but it also ends up returning a positive
number, which is out of spec for the syscall.  Talking to Michal Hocko,
it sounds like returning positive numbers might be a necessary update to
move_pages() anyway, though
(https://lkml.kernel.org/r/20181116114955.GJ14706@dhcp22.suse.cz).

I only hit this in tests that verify that move_pages() actually moved
the pages.  The test also got confused by the positive return from
move_pages() (it was treated as success because positive return values
were neither expected nor handled), making it a bit harder to track down
what was going on.
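
For illustration, a minimal user-space check along those lines might look like the sketch below (hypothetical test code, not the actual test; it uses the move_pages(2) wrapper from libnuma's <numaif.h>, so link with -lnuma, and it treats a positive return as "some pages were not moved" rather than as success):

#include <numaif.h>	/* move_pages(), MPOL_MF_MOVE */
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical helper: try to move `count` pages to `node` and report. */
static int try_move_pages(void **pages, unsigned long count, int node)
{
	int *nodes = malloc(count * sizeof(*nodes));
	int *status = malloc(count * sizeof(*status));
	unsigned long i;
	long rc;

	if (!nodes || !status)
		return -1;

	for (i = 0; i < count; i++) {
		nodes[i] = node;
		status[i] = -1;
	}

	/* pid 0 means the calling process. */
	rc = move_pages(0, count, pages, nodes, status, MPOL_MF_MOVE);

	if (rc < 0) {
		perror("move_pages");
	} else if (rc > 0) {
		/*
		 * Not an outright failure, but not a clean success either:
		 * some pages stayed where they were.  A test that only
		 * checks rc == 0 would wrongly treat this as success.
		 */
		fprintf(stderr, "%ld page(s) not moved\n", rc);
	}

	/* Per-page result: target node on success, negative errno otherwise. */
	for (i = 0; i < count; i++)
		if (status[i] < 0)
			fprintf(stderr, "page %lu: status %d\n", i, status[i]);

	free(nodes);
	free(status);
	return rc == 0 ? 0 : -1;
}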

Link: http://lkml.kernel.org/r/20181115184140.1388751-1-pjaroszynski@nvidia.com
Fixes: 82cb1417 ("xfs: add support for sub-pagesize writeback without buffer_heads")
Signed-off-by: Piotr Jaroszynski <pjaroszynski@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: William Kucharski <william.kucharski@oracle.com>
Cc: Darrick J. Wong <darrick.wong@oracle.com>
Cc: Brian Foster <bfoster@redhat.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Parent b11d5c02
--- a/fs/iomap.c
+++ b/fs/iomap.c
@@ -117,6 +117,12 @@ iomap_page_create(struct inode *inode, struct page *page)
 	atomic_set(&iop->read_count, 0);
 	atomic_set(&iop->write_count, 0);
 	bitmap_zero(iop->uptodate, PAGE_SIZE / SECTOR_SIZE);
+
+	/*
+	 * migrate_page_move_mapping() assumes that pages with private data have
+	 * their count elevated by 1.
+	 */
+	get_page(page);
 	set_page_private(page, (unsigned long)iop);
 	SetPagePrivate(page);
 	return iop;
@@ -133,6 +139,7 @@ iomap_page_release(struct page *page)
 	WARN_ON_ONCE(atomic_read(&iop->write_count));
 	ClearPagePrivate(page);
 	set_page_private(page, 0);
+	put_page(page);
 	kfree(iop);
 }