Commit 2457aec6 authored by Mel Gorman, committed by Linus Torvalds

mm: non-atomically mark page accessed during page cache allocation where possible

aops->write_begin may allocate a new page and make it visible only to have
mark_page_accessed() called almost immediately after.  Once the page is
visible, the atomic operations are necessary, which is noticeable overhead
when writing to an in-memory filesystem like tmpfs but should also be
noticeable with fast storage.  The objective of the patch is to initialise
the accessed information with non-atomic operations before the page is
visible.

The bulk of filesystems directly or indirectly use
grab_cache_page_write_begin or find_or_create_page for the initial
allocation of a page cache page.  This patch adds an init_page_accessed()
helper which behaves like the first call to mark_page_accessed() but may be
called before the page is visible and can therefore be done non-atomically.
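
For reference, the helper as added by this patch (in the mm/swap.c hunk
below) is nothing more than a plain, non-atomic set of the referenced flag
on a page that nobody else can see yet:

        /*
         * Used to mark_page_accessed(page) that is not visible yet and when it is
         * still safe to use non-atomic ops
         */
        void init_page_accessed(struct page *page)
        {
                if (!PageReferenced(page))
                        __SetPageReferenced(page);
        }

Because the page has not yet been inserted into the page cache or the LRU,
the plain __SetPageReferenced() cannot race with any other user of the page
flags, so no locked bit operation is needed.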

The primary APIs of concern in this case are the following, and they are
used by most filesystems.

	find_get_page
	find_lock_page
	find_or_create_page
	grab_cache_page_nowait
	grab_cache_page_write_begin

All of them are very similar in detail, so the patch creates a core helper,
pagecache_get_page(), which takes a flags parameter that affects its
behaviour, such as whether the page should be marked accessed or not.  The
old API is preserved but is basically a thin wrapper around this core
function.
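
As the include/linux/pagemap.h hunk below shows, each of the old entry
points then collapses to a one-line static inline around the new helper;
for example, find_or_create_page() becomes (quoted from the diff):

        static inline struct page *find_or_create_page(struct address_space *mapping,
                                        pgoff_t offset, gfp_t gfp_mask)
        {
                return pagecache_get_page(mapping, offset,
                                FGP_LOCK|FGP_ACCESSED|FGP_CREAT,
                                gfp_mask, gfp_mask & GFP_RECLAIM_MASK);
        }

find_get_page() passes no flags at all, find_lock_page() passes FGP_LOCK,
and grab_cache_page_nowait() passes FGP_LOCK|FGP_CREAT|FGP_NOFS|FGP_NOWAIT.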

Each of the filesystems is then updated to avoid calling
mark_page_accessed() when it is known that the VM interfaces have already
done the job.  There is a slight snag in that the timing of
mark_page_accessed() has now changed, so in rare cases it is possible a
page gets to the end of the LRU as PageReferenced whereas previously it
might have been repromoted.  This is expected to be rare, but it is worth
the filesystem people thinking about it in case they see a problem with the
timing change.  It is also the case that some filesystems may now be
marking pages accessed that previously were not, but it makes sense for
filesystems to have consistent behaviour in this regard.

The test case used to evaluate this is a simple dd of a large file done
multiple times with the file deleted on each iteration.  The size of the
file is 1/10th of physical memory to avoid dirty page balancing.  In the
async case it is possible that the workload completes without even hitting
the disk and will have variable results, but it highlights the impact of
mark_page_accessed() for async IO.  The sync results are expected to be
more stable.  The exception is tmpfs, where the normal case is for the "IO"
to not hit the disk.

The test machine was single socket and UMA to avoid any scheduling or NUMA
artifacts.  Throughput and wall times are presented for sync IO; only wall
times are shown for async, as the granularity reported by dd and the
variability make it unsuitable for comparison.  As async results were
variable due to writeback timings, I'm only reporting the maximum figures.
The sync results were stable enough to make the mean and stddev
uninteresting.

The performance results are reported based on a run with no profiling.
Profile data is based on a separate run with oprofile running.

async dd
                                    3.15.0-rc3            3.15.0-rc3
                                       vanilla           accessed-v2
ext3    Max      elapsed     13.9900 (  0.00%)     11.5900 ( 17.16%)
tmpfs   Max      elapsed      0.5100 (  0.00%)      0.4900 (  3.92%)
btrfs   Max      elapsed     12.8100 (  0.00%)     12.7800 (  0.23%)
ext4    Max      elapsed     18.6000 (  0.00%)     13.3400 ( 28.28%)
xfs     Max      elapsed     12.5600 (  0.00%)      2.0900 ( 83.36%)

The XFS figure is a bit strange, as it managed to avoid a worst case by
sheer luck, but the average figures looked reasonable.

          samples percentage  kernel image                      symbol
ext3       86107    0.9783  vmlinux-3.15.0-rc4-vanilla        mark_page_accessed
ext3       23833    0.2710  vmlinux-3.15.0-rc4-accessed-v3r25 mark_page_accessed
ext3        5036    0.0573  vmlinux-3.15.0-rc4-accessed-v3r25 init_page_accessed
ext4       64566    0.8961  vmlinux-3.15.0-rc4-vanilla        mark_page_accessed
ext4        5322    0.0713  vmlinux-3.15.0-rc4-accessed-v3r25 mark_page_accessed
ext4        2869    0.0384  vmlinux-3.15.0-rc4-accessed-v3r25 init_page_accessed
xfs        62126    1.7675  vmlinux-3.15.0-rc4-vanilla        mark_page_accessed
xfs         1904    0.0554  vmlinux-3.15.0-rc4-accessed-v3r25 init_page_accessed
xfs          103    0.0030  vmlinux-3.15.0-rc4-accessed-v3r25 mark_page_accessed
btrfs      10655    0.1338  vmlinux-3.15.0-rc4-vanilla        mark_page_accessed
btrfs       2020    0.0273  vmlinux-3.15.0-rc4-accessed-v3r25 init_page_accessed
btrfs        587    0.0079  vmlinux-3.15.0-rc4-accessed-v3r25 mark_page_accessed
tmpfs      59562    3.2628  vmlinux-3.15.0-rc4-vanilla        mark_page_accessed
tmpfs       1210    0.0696  vmlinux-3.15.0-rc4-accessed-v3r25 init_page_accessed
tmpfs         94    0.0054  vmlinux-3.15.0-rc4-accessed-v3r25 mark_page_accessed

[akpm@linux-foundation.org: don't run init_page_accessed() against an uninitialised pointer]
Signed-off-by: Mel Gorman <mgorman@suse.de>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Jan Kara <jack@suse.cz>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Hugh Dickins <hughd@google.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Tested-by: Prabhakar Lad <prabhakar.csengg@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Parent e7470ee8
@@ -4510,7 +4510,8 @@ static void check_buffer_tree_ref(struct extent_buffer *eb)
         spin_unlock(&eb->refs_lock);
 }
 
-static void mark_extent_buffer_accessed(struct extent_buffer *eb)
+static void mark_extent_buffer_accessed(struct extent_buffer *eb,
+                struct page *accessed)
 {
         unsigned long num_pages, i;
@@ -4519,6 +4520,7 @@ static void mark_extent_buffer_accessed(struct extent_buffer *eb)
         num_pages = num_extent_pages(eb->start, eb->len);
         for (i = 0; i < num_pages; i++) {
                 struct page *p = extent_buffer_page(eb, i);
-                mark_page_accessed(p);
+                if (p != accessed)
+                        mark_page_accessed(p);
         }
 }
@@ -4533,7 +4535,7 @@ struct extent_buffer *find_extent_buffer(struct btrfs_fs_info *fs_info,
                                start >> PAGE_CACHE_SHIFT);
         if (eb && atomic_inc_not_zero(&eb->refs)) {
                 rcu_read_unlock();
-                mark_extent_buffer_accessed(eb);
+                mark_extent_buffer_accessed(eb, NULL);
                 return eb;
         }
         rcu_read_unlock();
@@ -4581,7 +4583,7 @@ struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info,
                         spin_unlock(&mapping->private_lock);
                         unlock_page(p);
                         page_cache_release(p);
-                        mark_extent_buffer_accessed(exists);
+                        mark_extent_buffer_accessed(exists, p);
                         goto free_eb;
                 }
@@ -4596,7 +4598,6 @@ struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info,
                 attach_extent_buffer_page(eb, p);
                 spin_unlock(&mapping->private_lock);
                 WARN_ON(PageDirty(p));
-                mark_page_accessed(p);
                 eb->pages[i] = p;
                 if (!PageUptodate(p))
                         uptodate = 0;
......
@@ -470,11 +470,12 @@ static void btrfs_drop_pages(struct page **pages, size_t num_pages)
         for (i = 0; i < num_pages; i++) {
                 /* page checked is some magic around finding pages that
                  * have been modified without going through btrfs_set_page_dirty
-                 * clear it here
+                 * clear it here. There should be no need to mark the pages
+                 * accessed as prepare_pages should have marked them accessed
+                 * in prepare_pages via find_or_create_page()
                  */
                 ClearPageChecked(pages[i]);
                 unlock_page(pages[i]);
-                mark_page_accessed(pages[i]);
                 page_cache_release(pages[i]);
         }
 }
......
@@ -227,7 +227,7 @@ __find_get_block_slow(struct block_device *bdev, sector_t block)
         int all_mapped = 1;
 
         index = block >> (PAGE_CACHE_SHIFT - bd_inode->i_blkbits);
-        page = find_get_page(bd_mapping, index);
+        page = find_get_page_flags(bd_mapping, index, FGP_ACCESSED);
         if (!page)
                 goto out;
@@ -1366,12 +1366,13 @@ __find_get_block(struct block_device *bdev, sector_t block, unsigned size)
         struct buffer_head *bh = lookup_bh_lru(bdev, block, size);
 
         if (bh == NULL) {
+                /* __find_get_block_slow will mark the page accessed */
                 bh = __find_get_block_slow(bdev, block);
                 if (bh)
                         bh_lru_install(bh);
-        }
-        if (bh)
+        } else
                 touch_buffer(bh);
+
         return bh;
 }
 EXPORT_SYMBOL(__find_get_block);
......
@@ -1044,6 +1044,8 @@ int ext4_mb_init_group(struct super_block *sb, ext4_group_t group)
          * allocating. If we are looking at the buddy cache we would
          * have taken a reference using ext4_mb_load_buddy and that
          * would have pinned buddy page to page cache.
+         * The call to ext4_mb_get_buddy_page_lock will mark the
+         * page accessed.
          */
         ret = ext4_mb_get_buddy_page_lock(sb, group, &e4b);
         if (ret || !EXT4_MB_GRP_NEED_INIT(this_grp)) {
@@ -1062,7 +1064,6 @@ int ext4_mb_init_group(struct super_block *sb, ext4_group_t group)
                 ret = -EIO;
                 goto err;
         }
-        mark_page_accessed(page);
 
         if (e4b.bd_buddy_page == NULL) {
                 /*
@@ -1082,7 +1083,6 @@ int ext4_mb_init_group(struct super_block *sb, ext4_group_t group)
                 ret = -EIO;
                 goto err;
         }
-        mark_page_accessed(page);
 err:
         ext4_mb_put_buddy_page_lock(&e4b);
         return ret;
@@ -1141,7 +1141,7 @@ ext4_mb_load_buddy(struct super_block *sb, ext4_group_t group,
         /* we could use find_or_create_page(), but it locks page
          * what we'd like to avoid in fast path ... */
-        page = find_get_page(inode->i_mapping, pnum);
+        page = find_get_page_flags(inode->i_mapping, pnum, FGP_ACCESSED);
         if (page == NULL || !PageUptodate(page)) {
                 if (page)
                         /*
@@ -1176,15 +1176,16 @@ ext4_mb_load_buddy(struct super_block *sb, ext4_group_t group,
                 ret = -EIO;
                 goto err;
         }
+
+        /* Pages marked accessed already */
         e4b->bd_bitmap_page = page;
         e4b->bd_bitmap = page_address(page) + (poff * sb->s_blocksize);
-        mark_page_accessed(page);
 
         block++;
         pnum = block / blocks_per_page;
         poff = block % blocks_per_page;
 
-        page = find_get_page(inode->i_mapping, pnum);
+        page = find_get_page_flags(inode->i_mapping, pnum, FGP_ACCESSED);
         if (page == NULL || !PageUptodate(page)) {
                 if (page)
                         page_cache_release(page);
@@ -1209,9 +1210,10 @@ ext4_mb_load_buddy(struct super_block *sb, ext4_group_t group,
                 ret = -EIO;
                 goto err;
         }
+
+        /* Pages marked accessed already */
         e4b->bd_buddy_page = page;
         e4b->bd_buddy = page_address(page) + (poff * sb->s_blocksize);
-        mark_page_accessed(page);
 
         BUG_ON(e4b->bd_bitmap_page == NULL);
         BUG_ON(e4b->bd_buddy_page == NULL);
......
@@ -69,7 +69,6 @@ struct page *get_meta_page(struct f2fs_sb_info *sbi, pgoff_t index)
                 goto repeat;
         }
 out:
-        mark_page_accessed(page);
         return page;
 }
@@ -137,13 +136,11 @@ int ra_meta_pages(struct f2fs_sb_info *sbi, int start, int nrpages, int type)
                 if (!page)
                         continue;
                 if (PageUptodate(page)) {
-                        mark_page_accessed(page);
                         f2fs_put_page(page, 1);
                         continue;
                 }
 
                 f2fs_submit_page_mbio(sbi, page, blk_addr, &fio);
-                mark_page_accessed(page);
                 f2fs_put_page(page, 0);
         }
 out:
......
@@ -967,7 +967,6 @@ struct page *get_node_page(struct f2fs_sb_info *sbi, pgoff_t nid)
                 goto repeat;
         }
 got_it:
-        mark_page_accessed(page);
         return page;
 }
@@ -1022,7 +1021,6 @@ struct page *get_node_page_ra(struct page *parent, int start)
                 f2fs_put_page(page, 1);
                 return ERR_PTR(-EIO);
         }
-        mark_page_accessed(page);
         return page;
 }
......
@@ -1089,8 +1089,6 @@ static ssize_t fuse_fill_write_pages(struct fuse_req *req,
                 tmp = iov_iter_copy_from_user_atomic(page, ii, offset, bytes);
                 flush_dcache_page(page);
 
-                mark_page_accessed(page);
-
                 if (!tmp) {
                         unlock_page(page);
                         page_cache_release(page);
......
@@ -577,7 +577,6 @@ int gfs2_internal_read(struct gfs2_inode *ip, char *buf, loff_t *pos,
                 p = kmap_atomic(page);
                 memcpy(buf + copied, p + offset, amt);
                 kunmap_atomic(p);
-                mark_page_accessed(page);
                 page_cache_release(page);
                 copied += amt;
                 index++;
......
@@ -136,7 +136,8 @@ struct buffer_head *gfs2_getbuf(struct gfs2_glock *gl, u64 blkno, int create)
                         yield();
                 }
         } else {
-                page = find_lock_page(mapping, index);
+                page = find_get_page_flags(mapping, index,
+                                FGP_LOCK|FGP_ACCESSED);
                 if (!page)
                         return NULL;
         }
@@ -153,7 +154,6 @@ struct buffer_head *gfs2_getbuf(struct gfs2_glock *gl, u64 blkno, int create)
         map_bh(bh, sdp->sd_vfs, blkno);
 
         unlock_page(page);
-        mark_page_accessed(page);
         page_cache_release(page);
 
         return bh;
......
@@ -1748,7 +1748,6 @@ int ntfs_attr_make_non_resident(ntfs_inode *ni, const u32 data_size)
         if (page) {
                 set_page_dirty(page);
                 unlock_page(page);
-                mark_page_accessed(page);
                 page_cache_release(page);
         }
         ntfs_debug("Done.");
......
@@ -2060,7 +2060,6 @@ static ssize_t ntfs_file_buffered_write(struct kiocb *iocb,
                 }
                 do {
                         unlock_page(pages[--do_pages]);
-                        mark_page_accessed(pages[do_pages]);
                         page_cache_release(pages[do_pages]);
                 } while (do_pages);
                 if (unlikely(status))
......
@@ -198,6 +198,7 @@ struct page;    /* forward declaration */
 TESTPAGEFLAG(Locked, locked)
 PAGEFLAG(Error, error) TESTCLEARFLAG(Error, error)
 PAGEFLAG(Referenced, referenced) TESTCLEARFLAG(Referenced, referenced)
+        __SETPAGEFLAG(Referenced, referenced)
 PAGEFLAG(Dirty, dirty) TESTSCFLAG(Dirty, dirty) __CLEARPAGEFLAG(Dirty, dirty)
 PAGEFLAG(LRU, lru) __CLEARPAGEFLAG(LRU, lru)
 PAGEFLAG(Active, active) __CLEARPAGEFLAG(Active, active)
......
@@ -259,12 +259,109 @@ pgoff_t page_cache_next_hole(struct address_space *mapping,
 pgoff_t page_cache_prev_hole(struct address_space *mapping,
                              pgoff_t index, unsigned long max_scan);
 
+#define FGP_ACCESSED            0x00000001
+#define FGP_LOCK                0x00000002
+#define FGP_CREAT               0x00000004
+#define FGP_WRITE               0x00000008
+#define FGP_NOFS                0x00000010
+#define FGP_NOWAIT              0x00000020
+
+struct page *pagecache_get_page(struct address_space *mapping, pgoff_t offset,
+                int fgp_flags, gfp_t cache_gfp_mask, gfp_t radix_gfp_mask);
+
+/**
+ * find_get_page - find and get a page reference
+ * @mapping: the address_space to search
+ * @offset: the page index
+ *
+ * Looks up the page cache slot at @mapping & @offset.  If there is a
+ * page cache page, it is returned with an increased refcount.
+ *
+ * Otherwise, %NULL is returned.
+ */
+static inline struct page *find_get_page(struct address_space *mapping,
+                                        pgoff_t offset)
+{
+        return pagecache_get_page(mapping, offset, 0, 0, 0);
+}
+
+static inline struct page *find_get_page_flags(struct address_space *mapping,
+                                        pgoff_t offset, int fgp_flags)
+{
+        return pagecache_get_page(mapping, offset, fgp_flags, 0, 0);
+}
+
+/**
+ * find_lock_page - locate, pin and lock a pagecache page
+ * pagecache_get_page - find and get a page reference
+ * @mapping: the address_space to search
+ * @offset: the page index
+ *
+ * Looks up the page cache slot at @mapping & @offset.  If there is a
+ * page cache page, it is returned locked and with an increased
+ * refcount.
+ *
+ * Otherwise, %NULL is returned.
+ *
+ * find_lock_page() may sleep.
+ */
+static inline struct page *find_lock_page(struct address_space *mapping,
+                                        pgoff_t offset)
+{
+        return pagecache_get_page(mapping, offset, FGP_LOCK, 0, 0);
+}
+
+/**
+ * find_or_create_page - locate or add a pagecache page
+ * @mapping: the page's address_space
+ * @index: the page's index into the mapping
+ * @gfp_mask: page allocation mode
+ *
+ * Looks up the page cache slot at @mapping & @offset.  If there is a
+ * page cache page, it is returned locked and with an increased
+ * refcount.
+ *
+ * If the page is not present, a new page is allocated using @gfp_mask
+ * and added to the page cache and the VM's LRU list.  The page is
+ * returned locked and with an increased refcount.
+ *
+ * On memory exhaustion, %NULL is returned.
+ *
+ * find_or_create_page() may sleep, even if @gfp_flags specifies an
+ * atomic allocation!
+ */
+static inline struct page *find_or_create_page(struct address_space *mapping,
+                                        pgoff_t offset, gfp_t gfp_mask)
+{
+        return pagecache_get_page(mapping, offset,
+                                        FGP_LOCK|FGP_ACCESSED|FGP_CREAT,
+                                        gfp_mask, gfp_mask & GFP_RECLAIM_MASK);
+}
+
+/**
+ * grab_cache_page_nowait - returns locked page at given index in given cache
+ * @mapping: target address_space
+ * @index: the page index
+ *
+ * Same as grab_cache_page(), but do not wait if the page is unavailable.
+ * This is intended for speculative data generators, where the data can
+ * be regenerated if the page couldn't be grabbed.  This routine should
+ * be safe to call while holding the lock for another page.
+ *
+ * Clear __GFP_FS when allocating the page to avoid recursion into the fs
+ * and deadlock against the caller's locked page.
+ */
+static inline struct page *grab_cache_page_nowait(struct address_space *mapping,
+                                pgoff_t index)
+{
+        return pagecache_get_page(mapping, index,
+                        FGP_LOCK|FGP_CREAT|FGP_NOFS|FGP_NOWAIT,
+                        mapping_gfp_mask(mapping),
+                        GFP_NOFS);
+}
+
 struct page *find_get_entry(struct address_space *mapping, pgoff_t offset);
-struct page *find_get_page(struct address_space *mapping, pgoff_t offset);
 struct page *find_lock_entry(struct address_space *mapping, pgoff_t offset);
-struct page *find_lock_page(struct address_space *mapping, pgoff_t offset);
-struct page *find_or_create_page(struct address_space *mapping, pgoff_t index,
-                                 gfp_t gfp_mask);
 unsigned find_get_entries(struct address_space *mapping, pgoff_t start,
                           unsigned int nr_entries, struct page **entries,
                           pgoff_t *indices);
@@ -287,8 +384,6 @@ static inline struct page *grab_cache_page(struct address_space *mapping,
         return find_or_create_page(mapping, index, mapping_gfp_mask(mapping));
 }
 
-extern struct page * grab_cache_page_nowait(struct address_space *mapping,
-                                pgoff_t index);
-
 extern struct page * read_cache_page(struct address_space *mapping,
                                 pgoff_t index, filler_t *filler, void *data);
 extern struct page * read_cache_page_gfp(struct address_space *mapping,
......
@@ -311,6 +311,7 @@ extern void lru_add_page_tail(struct page *page, struct page *page_tail,
                          struct lruvec *lruvec, struct list_head *head);
 extern void activate_page(struct page *);
 extern void mark_page_accessed(struct page *);
+extern void init_page_accessed(struct page *page);
 extern void lru_add_drain(void);
 extern void lru_add_drain_cpu(int cpu);
 extern void lru_add_drain_all(void);
......
@@ -981,26 +981,6 @@ struct page *find_get_entry(struct address_space *mapping, pgoff_t offset)
 }
 EXPORT_SYMBOL(find_get_entry);
 
-/**
- * find_get_page - find and get a page reference
- * @mapping: the address_space to search
- * @offset: the page index
- *
- * Looks up the page cache slot at @mapping & @offset.  If there is a
- * page cache page, it is returned with an increased refcount.
- *
- * Otherwise, %NULL is returned.
- */
-struct page *find_get_page(struct address_space *mapping, pgoff_t offset)
-{
-        struct page *page = find_get_entry(mapping, offset);
-
-        if (radix_tree_exceptional_entry(page))
-                page = NULL;
-
-        return page;
-}
-EXPORT_SYMBOL(find_get_page);
-
 /**
  * find_lock_entry - locate, pin and lock a page cache entry
  * @mapping: the address_space to search
@@ -1038,66 +1018,84 @@ struct page *find_lock_entry(struct address_space *mapping, pgoff_t offset)
 EXPORT_SYMBOL(find_lock_entry);
 
-/**
- * find_lock_page - locate, pin and lock a pagecache page
- * @mapping: the address_space to search
- * @offset: the page index
- *
- * Looks up the page cache slot at @mapping & @offset.  If there is a
- * page cache page, it is returned locked and with an increased
- * refcount.
- *
- * Otherwise, %NULL is returned.
- *
- * find_lock_page() may sleep.
- */
-struct page *find_lock_page(struct address_space *mapping, pgoff_t offset)
-{
-        struct page *page = find_lock_entry(mapping, offset);
-
-        if (radix_tree_exceptional_entry(page))
-                page = NULL;
-
-        return page;
-}
-EXPORT_SYMBOL(find_lock_page);
-
-/**
- * find_or_create_page - locate or add a pagecache page
- * @mapping: the page's address_space
- * @index: the page's index into the mapping
- * @gfp_mask: page allocation mode
- *
- * Looks up the page cache slot at @mapping & @offset.  If there is a
- * page cache page, it is returned locked and with an increased
- * refcount.
- *
- * If the page is not present, a new page is allocated using @gfp_mask
- * and added to the page cache and the VM's LRU list.  The page is
- * returned locked and with an increased refcount.
- *
- * On memory exhaustion, %NULL is returned.
- *
- * find_or_create_page() may sleep, even if @gfp_flags specifies an
- * atomic allocation!
- */
-struct page *find_or_create_page(struct address_space *mapping,
-                pgoff_t index, gfp_t gfp_mask)
-{
-        struct page *page;
-        int err;
-repeat:
-        page = find_lock_page(mapping, index);
-        if (!page) {
-                page = __page_cache_alloc(gfp_mask);
-                if (!page)
-                        return NULL;
-                /*
-                 * We want a regular kernel memory (not highmem or DMA etc)
-                 * allocation for the radix tree nodes, but we need to honour
-                 * the context-specific requirements the caller has asked for.
-                 * GFP_RECLAIM_MASK collects those requirements.
-                 */
-                err = add_to_page_cache_lru(page, mapping, index,
-                        (gfp_mask & GFP_RECLAIM_MASK));
+/**
+ * pagecache_get_page - find and get a page reference
+ * @mapping: the address_space to search
+ * @offset: the page index
+ * @fgp_flags: PCG flags
+ * @gfp_mask: gfp mask to use if a page is to be allocated
+ *
+ * Looks up the page cache slot at @mapping & @offset.
+ *
+ * PCG flags modify how the page is returned
+ *
+ * FGP_ACCESSED: the page will be marked accessed
+ * FGP_LOCK: Page is return locked
+ * FGP_CREAT: If page is not present then a new page is allocated using
+ *              @gfp_mask and added to the page cache and the VM's LRU
+ *              list. The page is returned locked and with an increased
+ *              refcount. Otherwise, %NULL is returned.
+ *
+ * If FGP_LOCK or FGP_CREAT are specified then the function may sleep even
+ * if the GFP flags specified for FGP_CREAT are atomic.
+ *
+ * If there is a page cache page, it is returned with an increased refcount.
+ */
+struct page *pagecache_get_page(struct address_space *mapping, pgoff_t offset,
+        int fgp_flags, gfp_t cache_gfp_mask, gfp_t radix_gfp_mask)
+{
+        struct page *page;
+
+repeat:
+        page = find_get_entry(mapping, offset);
+        if (radix_tree_exceptional_entry(page))
+                page = NULL;
+        if (!page)
+                goto no_page;
+
+        if (fgp_flags & FGP_LOCK) {
+                if (fgp_flags & FGP_NOWAIT) {
+                        if (!trylock_page(page)) {
+                                page_cache_release(page);
+                                return NULL;
+                        }
+                } else {
+                        lock_page(page);
+                }
+
+                /* Has the page been truncated? */
+                if (unlikely(page->mapping != mapping)) {
+                        unlock_page(page);
+                        page_cache_release(page);
+                        goto repeat;
+                }
+                VM_BUG_ON_PAGE(page->index != offset, page);
+        }
+
+        if (page && (fgp_flags & FGP_ACCESSED))
+                mark_page_accessed(page);
+
+no_page:
+        if (!page && (fgp_flags & FGP_CREAT)) {
+                int err;
+                if ((fgp_flags & FGP_WRITE) && mapping_cap_account_dirty(mapping))
+                        cache_gfp_mask |= __GFP_WRITE;
+                if (fgp_flags & FGP_NOFS) {
+                        cache_gfp_mask &= ~__GFP_FS;
+                        radix_gfp_mask &= ~__GFP_FS;
+                }
+
+                page = __page_cache_alloc(cache_gfp_mask);
+                if (!page)
+                        return NULL;
+
+                if (WARN_ON_ONCE(!(fgp_flags & FGP_LOCK)))
+                        fgp_flags |= FGP_LOCK;
+
+                /* Init accessed so avoit atomic mark_page_accessed later */
+                if (fgp_flags & FGP_ACCESSED)
+                        init_page_accessed(page);
+
+                err = add_to_page_cache_lru(page, mapping, offset, radix_gfp_mask);
                 if (unlikely(err)) {
                         page_cache_release(page);
                         page = NULL;
@@ -1105,9 +1103,10 @@ struct page *find_or_create_page(struct address_space *mapping,
                                 goto repeat;
                 }
         }
+
         return page;
 }
-EXPORT_SYMBOL(find_or_create_page);
+EXPORT_SYMBOL(pagecache_get_page);
 
 /**
  * find_get_entries - gang pagecache lookup
@@ -1404,39 +1403,6 @@ unsigned find_get_pages_tag(struct address_space *mapping, pgoff_t *index,
 }
 EXPORT_SYMBOL(find_get_pages_tag);
 
-/**
- * grab_cache_page_nowait - returns locked page at given index in given cache
- * @mapping: target address_space
- * @index: the page index
- *
- * Same as grab_cache_page(), but do not wait if the page is unavailable.
- * This is intended for speculative data generators, where the data can
- * be regenerated if the page couldn't be grabbed.  This routine should
- * be safe to call while holding the lock for another page.
- *
- * Clear __GFP_FS when allocating the page to avoid recursion into the fs
- * and deadlock against the caller's locked page.
- */
-struct page *
-grab_cache_page_nowait(struct address_space *mapping, pgoff_t index)
-{
-        struct page *page = find_get_page(mapping, index);
-
-        if (page) {
-                if (trylock_page(page))
-                        return page;
-                page_cache_release(page);
-                return NULL;
-        }
-        page = __page_cache_alloc(mapping_gfp_mask(mapping) & ~__GFP_FS);
-        if (page && add_to_page_cache_lru(page, mapping, index, GFP_NOFS)) {
-                page_cache_release(page);
-                page = NULL;
-        }
-        return page;
-}
-EXPORT_SYMBOL(grab_cache_page_nowait);
-
 /*
  * CD/DVDs are error prone. When a medium error occurs, the driver may fail
  * a _large_ part of the i/o request. Imagine the worst scenario:
@@ -2406,7 +2372,6 @@ int pagecache_write_end(struct file *file, struct address_space *mapping,
 {
         const struct address_space_operations *aops = mapping->a_ops;
 
-        mark_page_accessed(page);
         return aops->write_end(file, mapping, pos, len, copied, page, fsdata);
 }
 EXPORT_SYMBOL(pagecache_write_end);
@@ -2488,34 +2453,18 @@ EXPORT_SYMBOL(generic_file_direct_write);
 struct page *grab_cache_page_write_begin(struct address_space *mapping,
                                         pgoff_t index, unsigned flags)
 {
-        int status;
-        gfp_t gfp_mask;
         struct page *page;
-        gfp_t gfp_notmask = 0;
+        int fgp_flags = FGP_LOCK|FGP_ACCESSED|FGP_WRITE|FGP_CREAT;
 
-        gfp_mask = mapping_gfp_mask(mapping);
-        if (mapping_cap_account_dirty(mapping))
-                gfp_mask |= __GFP_WRITE;
         if (flags & AOP_FLAG_NOFS)
-                gfp_notmask = __GFP_FS;
-repeat:
-        page = find_lock_page(mapping, index);
-        if (page)
-                goto found;
-
-        page = __page_cache_alloc(gfp_mask & ~gfp_notmask);
-        if (!page)
-                return NULL;
-        status = add_to_page_cache_lru(page, mapping, index,
-                                                GFP_KERNEL & ~gfp_notmask);
-        if (unlikely(status)) {
-                page_cache_release(page);
-                if (status == -EEXIST)
-                        goto repeat;
-                return NULL;
-        }
-found:
-        wait_for_stable_page(page);
+                fgp_flags |= FGP_NOFS;
+
+        page = pagecache_get_page(mapping, index, fgp_flags,
+                        mapping_gfp_mask(mapping),
+                        GFP_KERNEL);
+        if (page)
+                wait_for_stable_page(page);
+
         return page;
 }
 EXPORT_SYMBOL(grab_cache_page_write_begin);
@@ -2564,7 +2513,7 @@ ssize_t generic_perform_write(struct file *file,
                 status = a_ops->write_begin(file, mapping, pos, bytes, flags,
                                                 &page, &fsdata);
-                if (unlikely(status))
+                if (unlikely(status < 0))
                         break;
 
                 if (mapping_writably_mapped(mapping))
@@ -2573,7 +2522,6 @@ ssize_t generic_perform_write(struct file *file,
                 copied = iov_iter_copy_from_user_atomic(page, i, offset, bytes);
                 flush_dcache_page(page);
-                mark_page_accessed(page);
 
                 status = a_ops->write_end(file, mapping, pos, bytes, copied,
                                                 page, fsdata);
                 if (unlikely(status < 0))
......
@@ -1372,9 +1372,13 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
                         loff_t pos, unsigned len, unsigned flags,
                         struct page **pagep, void **fsdata)
 {
+        int ret;
         struct inode *inode = mapping->host;
         pgoff_t index = pos >> PAGE_CACHE_SHIFT;
-        return shmem_getpage(inode, index, pagep, SGP_WRITE, NULL);
+
+        ret = shmem_getpage(inode, index, pagep, SGP_WRITE, NULL);
+        if (ret == 0 && *pagep)
+                init_page_accessed(*pagep);
+        return ret;
 }
 
 static int
......
@@ -614,6 +614,17 @@ void mark_page_accessed(struct page *page)
 }
 EXPORT_SYMBOL(mark_page_accessed);
 
+/*
+ * Used to mark_page_accessed(page) that is not visible yet and when it is
+ * still safe to use non-atomic ops
+ */
+void init_page_accessed(struct page *page)
+{
+        if (!PageReferenced(page))
+                __SetPageReferenced(page);
+}
+EXPORT_SYMBOL(init_page_accessed);
+
 static void __lru_cache_add(struct page *page)
 {
         struct pagevec *pvec = &get_cpu_var(lru_add_pvec);
......