Commit 63de663e authored by Charan Teja Reddy, committed by Caspar Zhang

mm, page_alloc: skip ->watermark_boost for atomic order-0 allocations

to #29931646

commit f80b08fc44536a311a9f3182e50f318b79076425 upstream.

When boosting is enabled, the rate of atomic order-0 allocation failures is
observed to be high because free levels in the system are checked with the
->watermark_boost offset applied.  This is not a problem for sleepable
allocations, but for atomic allocations it looks like a regression.

This problem is seen frequently on an Android kernel running on Snapdragon
hardware with 4GB of RAM.  When no extfrag event has occurred in the system,
the ->watermark_boost factor is zero, so the watermark configuration is:

   _watermark = (
          [WMARK_MIN] = 1272, --> ~5MB
          [WMARK_LOW] = 9067, --> ~36MB
          [WMARK_HIGH] = 9385), --> ~38MB
   watermark_boost = 0

After launching some memory-hungry applications in Android that cause enough
extfrag events, ->watermark_boost is driven to its maximum, i.e. the default
boost factor raises it to 150% of the high watermark:

   _watermark = (
          [WMARK_MIN] = 1272, --> ~5MB
          [WMARK_LOW] = 9067, --> ~36MB
          [WMARK_HIGH] = 9385), --> ~38MB
   watermark_boost = 14077, -->~57MB

With the default system configuration, ~2MB of free memory is enough for an
atomic order-0 allocation to succeed.  But boosting raises the effective min
watermark to ~61MB, so for an atomic order-0 allocation to be successful the
system needs a minimum of ~23MB of free memory (from the zone_watermark_ok()
calculation, min = 3/4 * (min/2)).  Failures are nevertheless observed even
though the system has ~20MB of free memory.  In testing, this is reproducible
as early as the first 300 seconds after boot, and with lower-RAM
configurations (<2GB) as early as the first 150 seconds after boot.
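The arithmetic above can be sanity-checked with a small standalone sketch
(plain userspace C, not kernel code).  It assumes the usual atomic-allocation
reductions applied in __zone_watermark_ok(), where ALLOC_HIGH halves the mark
and ALLOC_HARDER removes another quarter of it (the min = 3/4 * (min/2) factor
mentioned above), and a 4KB page size:

/*
 * Standalone sketch, not kernel code: reproduces the watermark arithmetic
 * from the commit message.  Assumes ALLOC_HIGH (min -= min/2) and
 * ALLOC_HARDER (min -= min/4) as applied for atomic allocations, and
 * 4KB pages for the MB conversion.
 */
#include <stdio.h>

static long effective_min(long mark)
{
	long min = mark;

	min -= min / 2;		/* ALLOC_HIGH: atomic allocations get half   */
	min -= min / 4;		/* ALLOC_HARDER: and another quarter removed */
	return min;
}

int main(void)
{
	long wmark_min = 1272;	/* pages, ~5MB, from the dump above   */
	long boost = 14077;	/* pages, ~57MB, max watermark_boost  */

	printf("without boost: %ld pages (~%.1f MB) must stay free\n",
	       effective_min(wmark_min),
	       effective_min(wmark_min) * 4 / 1024.0);
	printf("with boost:    %ld pages (~%.1f MB) must stay free\n",
	       effective_min(wmark_min + boost),
	       effective_min(wmark_min + boost) * 4 / 1024.0);
	return 0;
}

This prints roughly 1.9MB without boost and roughly 22.5MB with the maximum
boost applied, matching the ~2MB and ~23MB figures above and showing why
~20MB of free memory is not enough.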

These failures can be avoided by excluding ->watermark_boost from the
watermark calculations for atomic order-0 allocations.

[akpm@linux-foundation.org: fix comment grammar, reflow comment]
[charante@codeaurora.org: fix suggested by Mel Gorman]
  Link: http://lkml.kernel.org/r/31556793-57b1-1c21-1a9d-22674d9bd938@codeaurora.org
Signed-off-by: Charan Teja Reddy <charante@codeaurora.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Vinayak Menon <vinmenon@codeaurora.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Link: http://lkml.kernel.org/r/1589882284-21010-1-git-send-email-charante@codeaurora.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Xu Yu <xuyu@linux.alibaba.com>
Reviewed-by: Xunlei Pang <xlpang@linux.alibaba.com>
Parent 550e512f
@@ -3513,7 +3513,8 @@ bool zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
 }
 
 static inline bool zone_watermark_fast(struct zone *z, unsigned int order,
-		unsigned long mark, int classzone_idx, unsigned int alloc_flags)
+		unsigned long mark, int classzone_idx,
+		unsigned int alloc_flags, gfp_t gfp_mask)
 {
 	long free_pages = zone_page_state(z, NR_FREE_PAGES);
 	long cma_pages = 0;
@@ -3534,8 +3535,23 @@ static inline bool zone_watermark_fast(struct zone *z, unsigned int order,
 	if (!order && (free_pages - cma_pages) > mark + z->lowmem_reserve[classzone_idx])
 		return true;
 
-	return __zone_watermark_ok(z, order, mark, classzone_idx, alloc_flags,
-					free_pages);
+	if (__zone_watermark_ok(z, order, mark, classzone_idx, alloc_flags,
+					free_pages))
+		return true;
+	/*
+	 * Ignore watermark boosting for GFP_ATOMIC order-0 allocations
+	 * when checking the min watermark. The min watermark is the
+	 * point where boosting is ignored so that kswapd is woken up
+	 * when below the low watermark.
+	 */
+	if (unlikely(!order && (gfp_mask & __GFP_ATOMIC) && z->watermark_boost
+		&& ((alloc_flags & ALLOC_WMARK_MASK) == WMARK_MIN))) {
+		mark = z->_watermark[WMARK_MIN];
+		return __zone_watermark_ok(z, order, mark, classzone_idx,
+					alloc_flags, free_pages);
+	}
+
+	return false;
 }
 
 bool zone_watermark_ok_safe(struct zone *z, unsigned int order,
@@ -3676,7 +3692,8 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 
 		mark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK);
 		if (!zone_watermark_fast(zone, order, mark,
-				       ac_classzone_idx(ac), alloc_flags)) {
+				       ac_classzone_idx(ac), alloc_flags,
+				       gfp_mask)) {
 			int ret;
 
 #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
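For context, a hypothetical caller hitting the affected path might look like
the sketch below (illustrative only, not part of this patch): an atomic,
order-0 allocation that eventually gets checked against the min watermark,
which after this change is no longer inflated by ->watermark_boost.

#include <linux/gfp.h>
#include <linux/mm.h>

/*
 * Hypothetical example, not part of the patch: a GFP_ATOMIC order-0
 * allocation of this kind is the case the new zone_watermark_fast()
 * branch targets once the min-watermark check is reached.
 */
static struct page *grab_atomic_page(void)
{
	return alloc_pages(GFP_ATOMIC, 0);
}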