Commit 207d04ba, authored by Andi Kleen, committed by Linus Torvalds

readahead: reduce unnecessary mmap_miss increases

The original INT_MAX cap is too large; reduce it in order to:

- avoid unnecessarily dirtying/bouncing the cache line

- restore mmap read-around faster when the access pattern changes

Background: in the mosbench exim benchmark, which takes multi-threaded page
faults on a shared struct file, the ra->mmap_miss updates are found to cause
excessive cache line bouncing on tmpfs.  The ra state updates are needless
for tmpfs because it has readahead disabled entirely
(shmem_backing_dev_info.ra_pages == 0).
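
As a rough illustration of the read-around point (an editorial sketch, not part of the original patch): in this code path each fault that misses the page cache increments ra->mmap_miss up to the cap, faults that hit the page cache walk it back down, and mmap read-around is skipped while the counter exceeds MMAP_LOTSAMISS. The standalone userspace model below (written for illustration only; just MMAP_LOTSAMISS and the two cap values come from the kernel) shows why the smaller cap recovers much faster after a long random-access phase:

/* Standalone model of the ra->mmap_miss accounting, for illustration only. */
#include <limits.h>
#include <stdio.h>

#define MMAP_LOTSAMISS 100			/* read-around is skipped above this */

/* Hits needed before read-around resumes after 'misses' random faults. */
static long hits_to_recover(long misses, long cap)
{
	long mmap_miss = 0, hits = 0;

	while (misses--)			/* random-access phase: every fault misses */
		if (mmap_miss < cap)
			mmap_miss++;

	while (mmap_miss > MMAP_LOTSAMISS) {	/* sequential phase: every fault hits */
		mmap_miss--;
		hits++;
	}
	return hits;
}

int main(void)
{
	long misses = 10 * 1000 * 1000;		/* ten million faults on the mapping */

	printf("cap INT_MAX             -> %ld hits until read-around resumes\n",
	       hits_to_recover(misses, INT_MAX));
	printf("cap MMAP_LOTSAMISS * 10 -> %ld hits until read-around resumes\n",
	       hits_to_recover(misses, MMAP_LOTSAMISS * 10));
	return 0;
}

With the INT_MAX cap the model needs 9,999,900 hits before read-around resumes; with the MMAP_LOTSAMISS * 10 cap it needs only 900. The cap also means the counter, and with it the shared cache line, stops being written once it saturates, which is the cache-line-bouncing half of the rationale.
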
Tested-by: Tim Chen <tim.c.chen@intel.com>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Parent: 275b12bf
@@ -1566,7 +1566,8 @@ static void do_sync_mmap_readahead(struct vm_area_struct *vma,
 		return;
 	}
 
-	if (ra->mmap_miss < INT_MAX)
+	/* Avoid banging the cache line if not needed */
+	if (ra->mmap_miss < MMAP_LOTSAMISS * 10)
 		ra->mmap_miss++;
 
 	/*