Commit ef4aede3 authored by Rafael J. Wysocki

PM/Hibernate: Do not try to allocate too much memory too hard (rev. 2)

We want to avoid attempting to free too much memory too hard during
hibernation, so estimate the minimum size of the image to use as the
lower limit for preallocating memory.

The approach here is based on the (experimental) observation that we
can't free more page frames than the sum of:

* global_page_state(NR_SLAB_RECLAIMABLE)
* global_page_state(NR_ACTIVE_ANON)
* global_page_state(NR_INACTIVE_ANON)
* global_page_state(NR_ACTIVE_FILE)
* global_page_state(NR_INACTIVE_FILE)

minus

* global_page_state(NR_FILE_MAPPED)

Namely, if this number is subtracted from the number of saveable
pages in the system, we get a good estimate of the minimum reasonable
size of a hibernation image.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Acked-by: Wu Fengguang <fengguang.wu@intel.com>

parent 64a473cb
@@ -1203,6 +1203,36 @@ static void free_unnecessary_pages(void)
 	}
 }
 
+/**
+ * minimum_image_size - Estimate the minimum acceptable size of an image
+ * @saveable: Number of saveable pages in the system.
+ *
+ * We want to avoid attempting to free too much memory too hard, so estimate the
+ * minimum acceptable size of a hibernation image to use as the lower limit for
+ * preallocating memory.
+ *
+ * We assume that the minimum image size should be proportional to
+ *
+ * [number of saveable pages] - [number of pages that can be freed in theory]
+ *
+ * where the second term is the sum of (1) reclaimable slab pages, (2) active
+ * and (3) inactive anonymous pages, (4) active and (5) inactive file pages,
+ * minus mapped file pages.
+ */
+static unsigned long minimum_image_size(unsigned long saveable)
+{
+	unsigned long size;
+
+	size = global_page_state(NR_SLAB_RECLAIMABLE)
+		+ global_page_state(NR_ACTIVE_ANON)
+		+ global_page_state(NR_INACTIVE_ANON)
+		+ global_page_state(NR_ACTIVE_FILE)
+		+ global_page_state(NR_INACTIVE_FILE)
+		- global_page_state(NR_FILE_MAPPED);
+
+	return saveable <= size ? 0 : saveable - size;
+}
+
 /**
  * hibernate_preallocate_memory - Preallocate memory for hibernation image
  *
@@ -1220,8 +1250,8 @@ static void free_unnecessary_pages(void)
  *
  * If image_size is set below the number following from the above formula,
  * the preallocation of memory is continued until the total number of saveable
- * pages in the system is below the requested image size or it is impossible to
- * allocate more memory, whichever happens first.
+ * pages in the system is below the requested image size or the minimum
+ * acceptable image size returned by minimum_image_size(), whichever is greater.
  */
 int hibernate_preallocate_memory(void)
 {
@@ -1282,6 +1312,11 @@ int hibernate_preallocate_memory(void)
 		goto out;
 	}
 
+	/* Estimate the minimum size of the image. */
+	pages = minimum_image_size(saveable);
+	if (size < pages)
+		size = min_t(unsigned long, pages, max_size);
+
 	/*
 	 * Let the memory management subsystem know that we're going to need a
 	 * large number of page frames to allocate and make it free some memory.
@@ -1294,8 +1329,8 @@ int hibernate_preallocate_memory(void)
 	 * The number of saveable pages in memory was too high, so apply some
 	 * pressure to decrease it. First, make room for the largest possible
 	 * image and fail if that doesn't work. Next, try to decrease the size
-	 * of the image as much as indicated by image_size using allocations
-	 * from highmem and non-highmem zones separately.
+	 * of the image as much as indicated by 'size' using allocations from
+	 * highmem and non-highmem zones separately.
 	 */
 	pages_highmem = preallocate_image_highmem(highmem / 2);
 	alloc = (count - max_size) - pages_highmem;