    readahead: trigger mmap sequential readahead on PG_readahead · 2cbea1d3
    Committed by Wu Fengguang
    Previously, mmap sequential readahead was triggered by updating
    ra->prev_pos on each page fault and comparing it with the current page
    offset.
    
    This costs dirtying the cache line on each _minor_ page fault, even
    though a minor fault needs no I/O at all.
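
    A minimal userspace sketch of the old scheme (illustrative only: the
    struct and function names below are simplified stand-ins, not the
    actual filemap.c code):

        #include <stdbool.h>
        #include <stdio.h>

        /* Simplified stand-in for struct file_ra_state. */
        struct ra_state {
                unsigned long prev_pos;         /* last faulting page offset */
                unsigned long ra_pages;         /* readahead window size */
        };

        /*
         * Old scheme: every fault stores its offset in prev_pos, and the
         * next fault treats "offset == prev_pos + 1" as sequential access.
         * The unconditional store is what dirties the shared cache line,
         * even on minor faults that need no I/O.
         */
        static bool old_fault_is_sequential(struct ra_state *ra,
                                            unsigned long offset)
        {
                bool sequential = (offset == ra->prev_pos + 1);

                ra->prev_pos = offset;          /* written on every fault */
                return sequential;
        }

        int main(void)
        {
                struct ra_state ra = { .prev_pos = 0, .ra_pages = 32 };

                for (unsigned long off = 0; off < 4; off++)
                        printf("fault at %lu: sequential=%d\n", off,
                               old_fault_is_sequential(&ra, off));
                return 0;
        }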
    
    In the mosbench exim benchmark, which does multi-threaded page faults
    on a shared struct file, the ra->mmap_miss and ra->prev_pos updates were
    found to cause excessive cache line bouncing on tmpfs, where readahead
    is disabled entirely anyway (shmem_backing_dev_info.ra_pages == 0).
    
    So remove the ra->prev_pos recording, and instead use the PG_readahead
    tag to trigger the possible sequential readahead: readahead marks one
    page inside each new window, and a later fault on that marked page kicks
    off the next asynchronous readahead.  This is not only simpler, it also
    works more reliably and reduces cache line bouncing on concurrent page
    faults against a shared struct file.
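
    A similarly hedged sketch of the new scheme (again a simplified model
    with made-up names, not the kernel source): only the page carrying the
    readahead marker causes any work, and an ordinary minor fault writes no
    shared readahead state at all.

        #include <stdbool.h>
        #include <stdio.h>

        #define WINDOW 8UL                      /* pages read ahead per batch */

        /* Simplified stand-in for struct page with a PG_readahead-like bit. */
        struct page_model {
                bool uptodate;                  /* page already "in the cache" */
                bool readahead;                 /* stands in for PG_readahead */
        };

        /* Populate WINDOW pages from 'start' and mark the first one so a
         * later fault on it kicks off the next batch. */
        static void do_readahead(struct page_model *pages, unsigned long start)
        {
                for (unsigned long i = 0; i < WINDOW; i++)
                        pages[start + i].uptodate = true;
                pages[start].readahead = true;
        }

        /* Fault path: the marked page schedules more readahead; every
         * other (minor) fault leaves the readahead state untouched. */
        static void fault(struct page_model *pages, unsigned long offset)
        {
                if (pages[offset].readahead) {
                        pages[offset].readahead = false;        /* consume marker */
                        do_readahead(pages, offset + WINDOW);   /* next async batch */
                        printf("fault at %lu: triggered readahead\n", offset);
                } else {
                        printf("fault at %lu: minor fault, no shared writes\n", offset);
                }
        }

        int main(void)
        {
                struct page_model pages[64] = { { false, false } };

                do_readahead(pages, 0);                 /* initial window */
                for (unsigned long off = 0; off < 16; off++)
                        fault(pages, off);
                return 0;
        }

    Because the sequential hint now lives in a per-page flag rather than in
    the shared file_ra_state, concurrent faulting threads only write
    readahead state when they actually extend the window.
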
    Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
    Tested-by: Tim Chen <tim.c.chen@intel.com>
    Reported-by: Andi Kleen <ak@linux.intel.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>