Commit a9e70735 authored by Mel Gorman, committed by Shile Zhang

mm, compaction: keep cached migration PFNs synced for unusable pageblocks

to #26255339

commit 8854c55f54bcc104e3adae42abe16948286ec75c upstream

Migrate has separate cached PFNs for ASYNC and SYNC* migration on the
basis that some migrations will fail in ASYNC mode.  However, if the
cached PFNs match at the start of scanning and pageblocks are skipped
due to having no isolation candidates, then the sync state does not
matter.  This patch keeps matching cached PFNs in sync until a pageblock
with isolation candidates is found.

The actual benefit is marginal given that the sync scanner following the
async scanner will often skip a number of pageblocks but it's useless
work.  Any benefit depends heavily on whether the scanners restarted
recently.

Link: http://lkml.kernel.org/r/20190118175136.31341-16-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: David Rientjes <rientjes@google.com>
Cc: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
Reviewed-by: Xunlei Pang <xlpang@linux.alibaba.com>
Parent 767185c9
@@ -1971,6 +1971,7 @@ static enum compact_result compact_zone(struct compact_control *cc)
 	unsigned long end_pfn = zone_end_pfn(cc->zone);
 	unsigned long last_migrated_pfn;
 	const bool sync = cc->mode != MIGRATE_ASYNC;
+	bool update_cached;
 
 	cc->migratetype = gfpflags_to_migratetype(cc->gfp_mask);
 	ret = compaction_suitable(cc->zone, cc->order, cc->alloc_flags,
@@ -2018,6 +2019,17 @@ static enum compact_result compact_zone(struct compact_control *cc)
 
 	last_migrated_pfn = 0;
 
+	/*
+	 * Migrate has separate cached PFNs for ASYNC and SYNC* migration on
+	 * the basis that some migrations will fail in ASYNC mode. However,
+	 * if the cached PFNs match and pageblocks are skipped due to having
+	 * no isolation candidates, then the sync state does not matter.
+	 * Until a pageblock with isolation candidates is found, keep the
+	 * cached PFNs in sync to avoid revisiting the same blocks.
+	 */
+	update_cached = !sync &&
+		cc->zone->compact_cached_migrate_pfn[0] == cc->zone->compact_cached_migrate_pfn[1];
+
 	trace_mm_compaction_begin(start_pfn, cc->migrate_pfn,
 				cc->free_pfn, end_pfn, sync);
 
@@ -2049,6 +2061,11 @@ static enum compact_result compact_zone(struct compact_control *cc)
 			last_migrated_pfn = 0;
 			goto out;
 		case ISOLATE_NONE:
+			if (update_cached) {
+				cc->zone->compact_cached_migrate_pfn[1] =
+					cc->zone->compact_cached_migrate_pfn[0];
+			}
+
 			/*
 			 * We haven't isolated and migrated anything, but
 			 * there might still be unflushed migrations from
@@ -2056,6 +2073,7 @@ static enum compact_result compact_zone(struct compact_control *cc)
 			 */
 			goto check_drain;
 		case ISOLATE_SUCCESS:
+			update_cached = false;
 			last_migrated_pfn = start_pfn;
 			;
 		}