commit 264e90cc — Author: Johannes Weiner — Committer: Linus Torvalds

mm: only count actual rotations as LRU reclaim cost

When shrinking the active file list we rotate referenced pages only when
they're in an executable mapping.  The others get deactivated.  When it
comes to balancing scan pressure, though, we count all referenced pages as
rotated, even the deactivated ones.  Yet they do not carry the same cost
to the system: the deactivated page *might* refault later on, but the
deactivation is tangible progress toward freeing pages; rotations on the
other hand cost time and effort without getting any closer to freeing
memory.

Don't treat both events as equal.  The following patch will hook up LRU
balancing to cache and anon refaults, which are a much more concrete cost
signal for reclaiming one list over the other.  Thus, remove the maybe-IO
cost bias from page references, and only note the CPU cost for actual
rotations that prevent the pages from getting reclaimed.
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Minchan Kim <minchan@kernel.org>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Rik van Riel <riel@surriel.com>
Link: http://lkml.kernel.org/r/20200520232525.798933-11-hannes@cmpxchg.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Parent fbbb602e
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2054,7 +2054,6 @@ static void shrink_active_list(unsigned long nr_to_scan,
 		if (page_referenced(page, 0, sc->target_mem_cgroup,
 				    &vm_flags)) {
-			nr_rotated += hpage_nr_pages(page);
 			/*
 			 * Identify referenced, file-backed active pages and
 			 * give them one more trip around the active list. So
@@ -2065,6 +2064,7 @@ static void shrink_active_list(unsigned long nr_to_scan,
 			 * so we ignore them here.
 			 */
 			if ((vm_flags & VM_EXEC) && page_is_file_lru(page)) {
+				nr_rotated += hpage_nr_pages(page);
 				list_add(&page->lru, &l_active);
 				continue;
 			}
@@ -2080,10 +2080,8 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	 */
 	spin_lock_irq(&pgdat->lru_lock);
 	/*
-	 * Count referenced pages from currently used mappings as rotated,
-	 * even though only some of them are actually re-activated. This
-	 * helps balance scan pressure between file and anonymous pages in
-	 * get_scan_count.
+	 * Rotating pages costs CPU without actually
+	 * progressing toward the reclaim goal.
 	 */
 	lru_note_cost(lruvec, file, nr_rotated);