Commit db6e3304 authored by Linus Torvalds

Merge branch 'akpm' (patches from Andrew Morton)

Merge more patches from Andrew Morton:
 "The rest of MM"

* emailed patches from Andrew Morton <akpm@linux-foundation.org>:
  mm: remove free_area_cache
  zswap: add documentation
  zswap: add to mm/
  zbud: add to mm/
Overview:
Zswap is a lightweight compressed cache for swap pages. It takes pages that are
in the process of being swapped out and attempts to compress them into a
dynamically allocated RAM-based memory pool. zswap basically trades CPU cycles
for potentially reduced swap I/O.  This trade-off can also result in a
significant performance improvement if reads from the compressed cache are
faster than reads from a swap device.
NOTE: Zswap is a new feature as of v3.11 and interacts heavily with memory
reclaim. This interaction has not been fully explored on the large set of
potential configurations and workloads that exist. For this reason, zswap
is a work in progress and should be considered experimental.
Some potential benefits:
* Desktop/laptop users with limited RAM capacities can mitigate the
    performance impact of swapping.
* Overcommitted guests that share a common I/O resource can
    dramatically reduce their swap I/O pressure, avoiding heavy-handed I/O
throttling by the hypervisor. This allows more work to get done with less
impact on the guest workload and guests sharing the I/O subsystem.
* Users with SSDs as swap devices can extend the life of the device by
    drastically reducing life-shortening writes.
Zswap evicts pages from compressed cache on an LRU basis to the backing swap
device when the compressed pool reaches its size limit. This requirement had
been identified in prior community discussions.
To enable zswap, the "enabled" attribute must be set to 1 at boot time. e.g.
zswap.enabled=1
Design:
Zswap receives pages for compression through the Frontswap API and is able to
evict pages from its own compressed pool on an LRU basis and write them back to
the backing swap device in the case that the compressed pool is full.
Zswap makes use of zbud for managing the compressed memory pool. Each
allocation in zbud is not directly accessible by address. Rather, a handle is
returned by the allocation routine and that handle must be mapped before being
accessed. The compressed memory pool grows on demand and shrinks as compressed
pages are freed. The pool is not preallocated.
When a swap page is passed from frontswap to zswap, zswap maintains a mapping
of the swap entry, a combination of the swap type and swap offset, to the zbud
handle that references that compressed swap page. This mapping is achieved
with a red-black tree per swap type. The swap offset is the search key for the
tree nodes.
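A minimal sketch of such a per-type tree (the structure and function names below are invented for illustration and are not the actual zswap internals) might look like:

#include <linux/rbtree.h>
#include <linux/types.h>

/* One entry per compressed swap page, keyed by swap offset. */
struct swap_tree_entry {
	struct rb_node rbnode;
	pgoff_t offset;         /* swap offset: the rbtree search key */
	unsigned int length;    /* compressed length in bytes */
	unsigned long handle;   /* zbud handle for the compressed data */
};

/* Look up the entry for a swap offset in one swap type's tree. */
static struct swap_tree_entry *swap_tree_search(struct rb_root *root,
						pgoff_t offset)
{
	struct rb_node *node = root->rb_node;
	struct swap_tree_entry *entry;

	while (node) {
		entry = rb_entry(node, struct swap_tree_entry, rbnode);
		if (offset < entry->offset)
			node = node->rb_left;
		else if (offset > entry->offset)
			node = node->rb_right;
		else
			return entry;
	}
	return NULL;
}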
During a page fault on a PTE that is a swap entry, frontswap calls the zswap
load function to decompress the page into the page allocated by the page fault
handler.
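A hedged sketch of that load path, reusing the illustrative swap_tree_entry helpers above and assuming a crypto_comp transform for decompression (again, not the actual zswap code):

#include <linux/crypto.h>
#include <linux/highmem.h>
#include <linux/zbud.h>

static int zswap_load_sketch(struct zbud_pool *pool, struct rb_root *root,
			     pgoff_t offset, struct crypto_comp *tfm,
			     struct page *page)
{
	struct swap_tree_entry *entry;
	unsigned int dlen = PAGE_SIZE;
	void *src, *dst;
	int ret;

	/* Find the compressed entry for this swap offset. */
	entry = swap_tree_search(root, offset);
	if (!entry)
		return -ENOENT;

	/* Map the opaque handle, decompress into the faulting page. */
	src = zbud_map(pool, entry->handle);
	dst = kmap_atomic(page);
	ret = crypto_comp_decompress(tfm, src, entry->length, dst, &dlen);
	kunmap_atomic(dst);
	zbud_unmap(pool, entry->handle);
	return ret;
}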
Once there are no PTEs referencing a swap page stored in zswap (i.e. the count
in the swap_map goes to 0) the swap code calls the zswap invalidate function,
via frontswap, to free the compressed entry.
Zswap seeks to be simple in its policies. Sysfs attributes allow for one
user-controlled policy:
* max_pool_percent - The maximum percentage of memory that the compressed
pool can occupy.
Zswap allows the compressor to be selected at kernel boot time by setting the
"compressor" attribute. The default compressor is lzo. e.g.
zswap.compressor=deflate
A debugfs interface is provided for various statistics about pool size, number
of pages stored, and various counters for the reasons pages are rejected.
......@@ -181,11 +181,9 @@ void arch_pick_mmap_layout(struct mm_struct *mm)
if (mmap_is_legacy()) {
mm->mmap_base = TASK_UNMAPPED_BASE + random_factor;
mm->get_unmapped_area = arch_get_unmapped_area;
mm->unmap_area = arch_unmap_area;
} else {
mm->mmap_base = mmap_base(random_factor);
mm->get_unmapped_area = arch_get_unmapped_area_topdown;
mm->unmap_area = arch_unmap_area_topdown;
}
}
......
......@@ -90,11 +90,9 @@ void arch_pick_mmap_layout(struct mm_struct *mm)
if (mmap_is_legacy()) {
mm->mmap_base = TASK_UNMAPPED_BASE;
mm->get_unmapped_area = arch_get_unmapped_area;
mm->unmap_area = arch_unmap_area;
} else {
mm->mmap_base = mmap_base();
mm->get_unmapped_area = arch_get_unmapped_area_topdown;
mm->unmap_area = arch_unmap_area_topdown;
}
}
EXPORT_SYMBOL_GPL(arch_pick_mmap_layout);
......
......@@ -158,11 +158,9 @@ void arch_pick_mmap_layout(struct mm_struct *mm)
if (mmap_is_legacy()) {
mm->mmap_base = TASK_UNMAPPED_BASE + random_factor;
mm->get_unmapped_area = arch_get_unmapped_area;
mm->unmap_area = arch_unmap_area;
} else {
mm->mmap_base = mmap_base(random_factor);
mm->get_unmapped_area = arch_get_unmapped_area_topdown;
mm->unmap_area = arch_unmap_area_topdown;
}
}
......
......@@ -92,10 +92,8 @@ void arch_pick_mmap_layout(struct mm_struct *mm)
if (mmap_is_legacy()) {
mm->mmap_base = TASK_UNMAPPED_BASE;
mm->get_unmapped_area = arch_get_unmapped_area;
mm->unmap_area = arch_unmap_area;
} else {
mm->mmap_base = mmap_base();
mm->get_unmapped_area = arch_get_unmapped_area_topdown;
mm->unmap_area = arch_unmap_area_topdown;
}
}
......@@ -91,11 +91,9 @@ void arch_pick_mmap_layout(struct mm_struct *mm)
if (mmap_is_legacy()) {
mm->mmap_base = TASK_UNMAPPED_BASE;
mm->get_unmapped_area = arch_get_unmapped_area;
mm->unmap_area = arch_unmap_area;
} else {
mm->mmap_base = mmap_base();
mm->get_unmapped_area = arch_get_unmapped_area_topdown;
mm->unmap_area = arch_unmap_area_topdown;
}
}
......@@ -176,11 +174,9 @@ void arch_pick_mmap_layout(struct mm_struct *mm)
if (mmap_is_legacy()) {
mm->mmap_base = TASK_UNMAPPED_BASE;
mm->get_unmapped_area = s390_get_unmapped_area;
mm->unmap_area = arch_unmap_area;
} else {
mm->mmap_base = mmap_base();
mm->get_unmapped_area = s390_get_unmapped_area_topdown;
mm->unmap_area = arch_unmap_area_topdown;
}
}
......
......@@ -290,7 +290,6 @@ void arch_pick_mmap_layout(struct mm_struct *mm)
sysctl_legacy_va_layout) {
mm->mmap_base = TASK_UNMAPPED_BASE + random_factor;
mm->get_unmapped_area = arch_get_unmapped_area;
mm->unmap_area = arch_unmap_area;
} else {
/* We know it's 32-bit */
unsigned long task_size = STACK_TOP32;
......@@ -302,7 +301,6 @@ void arch_pick_mmap_layout(struct mm_struct *mm)
mm->mmap_base = PAGE_ALIGN(task_size - gap - random_factor);
mm->get_unmapped_area = arch_get_unmapped_area_topdown;
mm->unmap_area = arch_unmap_area_topdown;
}
}
......
......@@ -66,10 +66,8 @@ void arch_pick_mmap_layout(struct mm_struct *mm)
if (!is_32bit || rlimit(RLIMIT_STACK) == RLIM_INFINITY) {
mm->mmap_base = TASK_UNMAPPED_BASE;
mm->get_unmapped_area = arch_get_unmapped_area;
mm->unmap_area = arch_unmap_area;
} else {
mm->mmap_base = mmap_base(mm);
mm->get_unmapped_area = arch_get_unmapped_area_topdown;
mm->unmap_area = arch_unmap_area_topdown;
}
}
......@@ -308,8 +308,6 @@ static int load_aout_binary(struct linux_binprm *bprm)
(current->mm->start_data = N_DATADDR(ex));
current->mm->brk = ex.a_bss +
(current->mm->start_brk = N_BSSADDR(ex));
current->mm->free_area_cache = TASK_UNMAPPED_BASE;
current->mm->cached_hole_size = 0;
retval = setup_arg_pages(bprm, IA32_STACK_TOP, EXSTACK_DEFAULT);
if (retval < 0) {
......
......@@ -115,10 +115,8 @@ void arch_pick_mmap_layout(struct mm_struct *mm)
if (mmap_is_legacy()) {
mm->mmap_base = mmap_legacy_base();
mm->get_unmapped_area = arch_get_unmapped_area;
mm->unmap_area = arch_unmap_area;
} else {
mm->mmap_base = mmap_base();
mm->get_unmapped_area = arch_get_unmapped_area_topdown;
mm->unmap_area = arch_unmap_area_topdown;
}
}
......@@ -255,8 +255,6 @@ static int load_aout_binary(struct linux_binprm * bprm)
(current->mm->start_data = N_DATADDR(ex));
current->mm->brk = ex.a_bss +
(current->mm->start_brk = N_BSSADDR(ex));
current->mm->free_area_cache = current->mm->mmap_base;
current->mm->cached_hole_size = 0;
retval = setup_arg_pages(bprm, STACK_TOP, EXSTACK_DEFAULT);
if (retval < 0) {
......
......@@ -738,8 +738,6 @@ static int load_elf_binary(struct linux_binprm *bprm)
/* Do this so that we can load the interpreter, if need be. We will
change some of these later */
current->mm->free_area_cache = current->mm->mmap_base;
current->mm->cached_hole_size = 0;
retval = setup_arg_pages(bprm, randomize_stack_top(STACK_TOP),
executable_stack);
if (retval < 0) {
......
......@@ -330,12 +330,9 @@ struct mm_struct {
unsigned long (*get_unmapped_area) (struct file *filp,
unsigned long addr, unsigned long len,
unsigned long pgoff, unsigned long flags);
void (*unmap_area) (struct mm_struct *mm, unsigned long addr);
#endif
unsigned long mmap_base; /* base of mmap area */
unsigned long task_size; /* size of task vm space */
unsigned long cached_hole_size; /* if non-zero, the largest hole below free_area_cache */
unsigned long free_area_cache; /* first hole of size cached_hole_size or larger */
unsigned long highest_vm_end; /* highest vma end address */
pgd_t * pgd;
atomic_t mm_users; /* How many users with user space? */
......
......@@ -322,8 +322,6 @@ extern unsigned long
arch_get_unmapped_area_topdown(struct file *filp, unsigned long addr,
unsigned long len, unsigned long pgoff,
unsigned long flags);
extern void arch_unmap_area(struct mm_struct *, unsigned long);
extern void arch_unmap_area_topdown(struct mm_struct *, unsigned long);
#else
static inline void arch_pick_mmap_layout(struct mm_struct *mm) {}
#endif
......
#ifndef _ZBUD_H_
#define _ZBUD_H_
#include <linux/types.h>
struct zbud_pool;
struct zbud_ops {
int (*evict)(struct zbud_pool *pool, unsigned long handle);
};
struct zbud_pool *zbud_create_pool(gfp_t gfp, struct zbud_ops *ops);
void zbud_destroy_pool(struct zbud_pool *pool);
int zbud_alloc(struct zbud_pool *pool, int size, gfp_t gfp,
unsigned long *handle);
void zbud_free(struct zbud_pool *pool, unsigned long handle);
int zbud_reclaim_page(struct zbud_pool *pool, unsigned int retries);
void *zbud_map(struct zbud_pool *pool, unsigned long handle);
void zbud_unmap(struct zbud_pool *pool, unsigned long handle);
u64 zbud_get_pool_size(struct zbud_pool *pool);
#endif /* _ZBUD_H_ */
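As a usage illustration of the API declared above (a hedged sketch, not zswap's actual store path, which also handles compression, per-type trees, and error accounting; passing NULL ops is assumed acceptable for a pool that is never asked to reclaim):

#include <linux/gfp.h>
#include <linux/kernel.h>
#include <linux/string.h>
#include <linux/zbud.h>

/* Store a small buffer in a zbud pool, then release it. */
static int zbud_store_sketch(const void *src, int len)
{
	struct zbud_pool *pool;
	unsigned long handle;
	void *addr;
	int ret;

	/* NULL ops: this pool will never be asked to evict/reclaim. */
	pool = zbud_create_pool(GFP_KERNEL, NULL);
	if (!pool)
		return -ENOMEM;

	ret = zbud_alloc(pool, len, GFP_KERNEL, &handle);
	if (ret)
		goto out_destroy;

	/* Handles are opaque: map to get a pointer, unmap when done. */
	addr = zbud_map(pool, handle);
	memcpy(addr, src, len);
	zbud_unmap(pool, handle);

	pr_info("pool holds %llu page(s)\n",
		(unsigned long long)zbud_get_pool_size(pool));

	zbud_free(pool, handle);
out_destroy:
	zbud_destroy_pool(pool);
	return ret;
}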
......@@ -365,8 +365,6 @@ static int dup_mmap(struct mm_struct *mm, struct mm_struct *oldmm)
mm->locked_vm = 0;
mm->mmap = NULL;
mm->mmap_cache = NULL;
mm->free_area_cache = oldmm->mmap_base;
mm->cached_hole_size = ~0UL;
mm->map_count = 0;
cpumask_clear(mm_cpumask(mm));
mm->mm_rb = RB_ROOT;
......@@ -540,8 +538,6 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p)
mm->nr_ptes = 0;
memset(&mm->rss_stat, 0, sizeof(mm->rss_stat));
spin_lock_init(&mm->page_table_lock);
mm->free_area_cache = TASK_UNMAPPED_BASE;
mm->cached_hole_size = ~0UL;
mm_init_aio(mm);
mm_init_owner(mm, p);
......
......@@ -478,6 +478,36 @@ config FRONTSWAP
If unsure, say Y to enable frontswap.
config ZBUD
tristate
default n
help
A special purpose allocator for storing compressed pages.
It is designed to store up to two compressed pages per physical
page. While this design limits storage density, it has simple and
deterministic reclaim properties that make it preferable to a higher
density approach when reclaim will be used.
config ZSWAP
bool "Compressed cache for swap pages (EXPERIMENTAL)"
depends on FRONTSWAP && CRYPTO=y
select CRYPTO_LZO
select ZBUD
default n
help
A lightweight compressed cache for swap pages. It takes
pages that are in the process of being swapped out and attempts to
compress them into a dynamically allocated RAM-based memory pool.
This can result in a significant I/O reduction on the swap device and,
in the case where decompressing from RAM is faster than swap device
reads, can also improve workload performance.
This is marked experimental because it is a new feature (as of
v3.11) that interacts heavily with memory reclaim. While these
interactions don't cause any known issues on simple memory setups,
they have not been fully explored on the large set of potential
configurations and workloads that exist.
config MEM_SOFT_DIRTY
bool "Track memory changes"
depends on CHECKPOINT_RESTORE && HAVE_ARCH_SOFT_DIRTY
......
......@@ -32,6 +32,7 @@ obj-$(CONFIG_HAVE_MEMBLOCK) += memblock.o
obj-$(CONFIG_BOUNCE) += bounce.o
obj-$(CONFIG_SWAP) += page_io.o swap_state.o swapfile.o
obj-$(CONFIG_FRONTSWAP) += frontswap.o
obj-$(CONFIG_ZSWAP) += zswap.o
obj-$(CONFIG_HAS_DMA) += dmapool.o
obj-$(CONFIG_HUGETLBFS) += hugetlb.o
obj-$(CONFIG_NUMA) += mempolicy.o
......@@ -58,3 +59,4 @@ obj-$(CONFIG_DEBUG_KMEMLEAK) += kmemleak.o
obj-$(CONFIG_DEBUG_KMEMLEAK_TEST) += kmemleak-test.o
obj-$(CONFIG_CLEANCACHE) += cleancache.o
obj-$(CONFIG_MEMORY_ISOLATION) += page_isolation.o
obj-$(CONFIG_ZBUD) += zbud.o
......@@ -1878,15 +1878,6 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
}
#endif
void arch_unmap_area(struct mm_struct *mm, unsigned long addr)
{
/*
* Is this a new hole at the lowest possible address?
*/
if (addr >= TASK_UNMAPPED_BASE && addr < mm->free_area_cache)
mm->free_area_cache = addr;
}
/*
* This mmap-allocator allocates new areas top-down from below the
* stack's low limit (the base):
......@@ -1943,19 +1934,6 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
}
#endif
void arch_unmap_area_topdown(struct mm_struct *mm, unsigned long addr)
{
/*
* Is this a new hole at the highest possible address?
*/
if (addr > mm->free_area_cache)
mm->free_area_cache = addr;
/* dont allow allocations above current base */
if (mm->free_area_cache > mm->mmap_base)
mm->free_area_cache = mm->mmap_base;
}
unsigned long
get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
unsigned long pgoff, unsigned long flags)
......@@ -2376,7 +2354,6 @@ detach_vmas_to_be_unmapped(struct mm_struct *mm, struct vm_area_struct *vma,
{
struct vm_area_struct **insertion_point;
struct vm_area_struct *tail_vma = NULL;
unsigned long addr;
insertion_point = (prev ? &prev->vm_next : &mm->mmap);
vma->vm_prev = NULL;
......@@ -2393,11 +2370,6 @@ detach_vmas_to_be_unmapped(struct mm_struct *mm, struct vm_area_struct *vma,
} else
mm->highest_vm_end = prev ? prev->vm_end : 0;
tail_vma->vm_next = NULL;
if (mm->unmap_area == arch_unmap_area)
addr = prev ? prev->vm_end : mm->mmap_base;
else
addr = vma ? vma->vm_start : mm->mmap_base;
mm->unmap_area(mm, addr);
mm->mmap_cache = NULL; /* Kill the cache. */
}
......
......@@ -1871,10 +1871,6 @@ unsigned long arch_get_unmapped_area(struct file *file, unsigned long addr,
return -ENOMEM;
}
void arch_unmap_area(struct mm_struct *mm, unsigned long addr)
{
}
void unmap_mapping_range(struct address_space *mapping,
loff_t const holebegin, loff_t const holelen,
int even_cows)
......
......@@ -295,7 +295,6 @@ void arch_pick_mmap_layout(struct mm_struct *mm)
{
mm->mmap_base = TASK_UNMAPPED_BASE;
mm->get_unmapped_area = arch_get_unmapped_area;
mm->unmap_area = arch_unmap_area;
}
#endif
......
/*
* zbud.c
*
* Copyright (C) 2013, Seth Jennings, IBM
*
* Concepts based on zcache internal zbud allocator by Dan Magenheimer.
*
 * zbud is a special purpose allocator for storing compressed pages. Contrary
* to what its name may suggest, zbud is not a buddy allocator, but rather an
* allocator that "buddies" two compressed pages together in a single memory
* page.
*
* While this design limits storage density, it has simple and deterministic
* reclaim properties that make it preferable to a higher density approach when
* reclaim will be used.
*
* zbud works by storing compressed pages, or "zpages", together in pairs in a
* single memory page called a "zbud page". The first buddy is "left
* justifed" at the beginning of the zbud page, and the last buddy is "right
* justified" at the end of the zbud page. The benefit is that if either
* buddy is freed, the freed buddy space, coalesced with whatever slack space
* that existed between the buddies, results in the largest possible free region
* within the zbud page.
*
* zbud also provides an attractive lower bound on density. The ratio of zpages
* to zbud pages can not be less than 1. This ensures that zbud can never "do
* harm" by using more pages to store zpages than the uncompressed zpages would
* have used on their own.
*
* zbud pages are divided into "chunks". The size of the chunks is fixed at
* compile time and determined by NCHUNKS_ORDER below. Dividing zbud pages
* into chunks allows organizing unbuddied zbud pages into a manageable number
* of unbuddied lists according to the number of free chunks available in the
* zbud page.
*
* The zbud API differs from that of conventional allocators in that the
* allocation function, zbud_alloc(), returns an opaque handle to the user,
* not a dereferenceable pointer. The user must map the handle using
* zbud_map() in order to get a usable pointer by which to access the
* allocation data and unmap the handle with zbud_unmap() when operations
* on the allocation data are complete.
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/atomic.h>
#include <linux/list.h>
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/preempt.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/zbud.h>
/*****************
* Structures
*****************/
/*
* NCHUNKS_ORDER determines the internal allocation granularity, effectively
* adjusting internal fragmentation. It also determines the number of
* freelists maintained in each pool. NCHUNKS_ORDER of 6 means that the
* allocation granularity will be in chunks of size PAGE_SIZE/64, and there
* will be 64 freelists per pool.
*/
#define NCHUNKS_ORDER 6
#define CHUNK_SHIFT (PAGE_SHIFT - NCHUNKS_ORDER)
#define CHUNK_SIZE (1 << CHUNK_SHIFT)
#define NCHUNKS (PAGE_SIZE >> CHUNK_SHIFT)
#define ZHDR_SIZE_ALIGNED CHUNK_SIZE
/**
* struct zbud_pool - stores metadata for each zbud pool
* @lock: protects all pool fields and first|last_chunk fields of any
* zbud page in the pool
* @unbuddied: array of lists tracking zbud pages that only contain one buddy;
 * the list each zbud page is added to depends on the size of
* its free region.
* @buddied: list tracking the zbud pages that contain two buddies;
* these zbud pages are full
* @lru: list tracking the zbud pages in LRU order by most recently
* added buddy.
* @pages_nr: number of zbud pages in the pool.
* @ops: pointer to a structure of user defined operations specified at
* pool creation time.
*
* This structure is allocated at pool creation time and maintains metadata
* pertaining to a particular zbud pool.
*/
struct zbud_pool {
spinlock_t lock;
struct list_head unbuddied[NCHUNKS];
struct list_head buddied;
struct list_head lru;
u64 pages_nr;
struct zbud_ops *ops;
};
/*
* struct zbud_header - zbud page metadata occupying the first chunk of each
* zbud page.
* @buddy: links the zbud page into the unbuddied/buddied lists in the pool
* @lru: links the zbud page into the lru list in the pool
* @first_chunks: the size of the first buddy in chunks, 0 if free
* @last_chunks: the size of the last buddy in chunks, 0 if free
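 * @under_reclaim: set while the zbud page is under reclaim; zbud_free() defers freeing it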
*/
struct zbud_header {
struct list_head buddy;
struct list_head lru;
unsigned int first_chunks;
unsigned int last_chunks;
bool under_reclaim;
};
/*****************
* Helpers
*****************/
/* Just to make the code easier to read */
enum buddy {
FIRST,
LAST
};
/* Converts an allocation size in bytes to size in zbud chunks */
static int size_to_chunks(int size)
{
return (size + CHUNK_SIZE - 1) >> CHUNK_SHIFT;
}
#define for_each_unbuddied_list(_iter, _begin) \
for ((_iter) = (_begin); (_iter) < NCHUNKS; (_iter)++)
/* Initializes the zbud header of a newly allocated zbud page */
static struct zbud_header *init_zbud_page(struct page *page)
{
struct zbud_header *zhdr = page_address(page);
zhdr->first_chunks = 0;
zhdr->last_chunks = 0;
INIT_LIST_HEAD(&zhdr->buddy);
INIT_LIST_HEAD(&zhdr->lru);
zhdr->under_reclaim = 0;
return zhdr;
}
/* Resets the struct page fields and frees the page */
static void free_zbud_page(struct zbud_header *zhdr)
{
__free_page(virt_to_page(zhdr));
}
/*
* Encodes the handle of a particular buddy within a zbud page
* Pool lock should be held as this function accesses first|last_chunks
*/
static unsigned long encode_handle(struct zbud_header *zhdr, enum buddy bud)
{
unsigned long handle;
/*
* For now, the encoded handle is actually just the pointer to the data
* but this might not always be the case. A little information hiding.
* Add CHUNK_SIZE to the handle if it is the first allocation to jump
* over the zbud header in the first chunk.
*/
handle = (unsigned long)zhdr;
if (bud == FIRST)
/* skip over zbud header */
handle += ZHDR_SIZE_ALIGNED;
else /* bud == LAST */
handle += PAGE_SIZE - (zhdr->last_chunks << CHUNK_SHIFT);
return handle;
}
/* Returns the zbud page where a given handle is stored */
static struct zbud_header *handle_to_zbud_header(unsigned long handle)
{
return (struct zbud_header *)(handle & PAGE_MASK);
}
/* Returns the number of free chunks in a zbud page */
static int num_free_chunks(struct zbud_header *zhdr)
{
/*
* Rather than branch for different situations, just use the fact that
* free buddies have a length of zero to simplify everything. -1 at the
* end for the zbud header.
*/
return NCHUNKS - zhdr->first_chunks - zhdr->last_chunks - 1;
}
/*****************
* API Functions
*****************/
/**
* zbud_create_pool() - create a new zbud pool
* @gfp: gfp flags when allocating the zbud pool structure
* @ops: user-defined operations for the zbud pool
*
* Return: pointer to the new zbud pool or NULL if the metadata allocation
* failed.
*/
struct zbud_pool *zbud_create_pool(gfp_t gfp, struct zbud_ops *ops)
{
struct zbud_pool *pool;
int i;
pool = kmalloc(sizeof(struct zbud_pool), gfp);
if (!pool)
return NULL;
spin_lock_init(&pool->lock);
for_each_unbuddied_list(i, 0)
INIT_LIST_HEAD(&pool->unbuddied[i]);
INIT_LIST_HEAD(&pool->buddied);
INIT_LIST_HEAD(&pool->lru);
pool->pages_nr = 0;
pool->ops = ops;
return pool;
}
/**
* zbud_destroy_pool() - destroys an existing zbud pool
* @pool: the zbud pool to be destroyed
*
* The pool should be emptied before this function is called.
*/
void zbud_destroy_pool(struct zbud_pool *pool)
{
kfree(pool);
}
/**
* zbud_alloc() - allocates a region of a given size
* @pool: zbud pool from which to allocate
* @size: size in bytes of the desired allocation
* @gfp: gfp flags used if the pool needs to grow
* @handle: handle of the new allocation
*
* This function will attempt to find a free region in the pool large enough to
* satisfy the allocation request. A search of the unbuddied lists is
* performed first. If no suitable free region is found, then a new page is
* allocated and added to the pool to satisfy the request.
*
* gfp should not set __GFP_HIGHMEM as highmem pages cannot be used
* as zbud pool pages.
*
 * Return: 0 if success and handle is set, otherwise -EINVAL if the size or
* gfp arguments are invalid or -ENOMEM if the pool was unable to allocate
* a new page.
*/
int zbud_alloc(struct zbud_pool *pool, int size, gfp_t gfp,
unsigned long *handle)
{
int chunks, i, freechunks;
struct zbud_header *zhdr = NULL;
enum buddy bud;
struct page *page;
if (size <= 0 || gfp & __GFP_HIGHMEM)
return -EINVAL;
if (size > PAGE_SIZE - ZHDR_SIZE_ALIGNED)
return -ENOSPC;
chunks = size_to_chunks(size);
spin_lock(&pool->lock);
/* First, try to find an unbuddied zbud page. */
zhdr = NULL;
for_each_unbuddied_list(i, chunks) {
if (!list_empty(&pool->unbuddied[i])) {
zhdr = list_first_entry(&pool->unbuddied[i],
struct zbud_header, buddy);
list_del(&zhdr->buddy);
if (zhdr->first_chunks == 0)
bud = FIRST;
else
bud = LAST;
goto found;
}
}
/* Couldn't find unbuddied zbud page, create new one */
spin_unlock(&pool->lock);
page = alloc_page(gfp);
if (!page)
return -ENOMEM;
spin_lock(&pool->lock);
pool->pages_nr++;
zhdr = init_zbud_page(page);
bud = FIRST;
found:
if (bud == FIRST)
zhdr->first_chunks = chunks;
else
zhdr->last_chunks = chunks;
if (zhdr->first_chunks == 0 || zhdr->last_chunks == 0) {
/* Add to unbuddied list */
freechunks = num_free_chunks(zhdr);
list_add(&zhdr->buddy, &pool->unbuddied[freechunks]);
} else {
/* Add to buddied list */
list_add(&zhdr->buddy, &pool->buddied);
}
/* Add/move zbud page to beginning of LRU */
if (!list_empty(&zhdr->lru))
list_del(&zhdr->lru);
list_add(&zhdr->lru, &pool->lru);
*handle = encode_handle(zhdr, bud);
spin_unlock(&pool->lock);
return 0;
}
/**
* zbud_free() - frees the allocation associated with the given handle
* @pool: pool in which the allocation resided
* @handle: handle associated with the allocation returned by zbud_alloc()
*
* In the case that the zbud page in which the allocation resides is under
 * reclaim, as indicated by the under_reclaim flag being set, this function
* only sets the first|last_chunks to 0. The page is actually freed
* once both buddies are evicted (see zbud_reclaim_page() below).
*/
void zbud_free(struct zbud_pool *pool, unsigned long handle)
{
struct zbud_header *zhdr;
int freechunks;
spin_lock(&pool->lock);
zhdr = handle_to_zbud_header(handle);
/* If first buddy, handle will be page aligned */
if ((handle - ZHDR_SIZE_ALIGNED) & ~PAGE_MASK)
zhdr->last_chunks = 0;
else
zhdr->first_chunks = 0;
if (zhdr->under_reclaim) {
/* zbud page is under reclaim, reclaim will free */
spin_unlock(&pool->lock);
return;
}
/* Remove from existing buddy list */
list_del(&zhdr->buddy);
if (zhdr->first_chunks == 0 && zhdr->last_chunks == 0) {
/* zbud page is empty, free */
list_del(&zhdr->lru);
free_zbud_page(zhdr);
pool->pages_nr--;
} else {
/* Add to unbuddied list */
freechunks = num_free_chunks(zhdr);
list_add(&zhdr->buddy, &pool->unbuddied[freechunks]);
}
spin_unlock(&pool->lock);
}
#define list_tail_entry(ptr, type, member) \
list_entry((ptr)->prev, type, member)
/**
* zbud_reclaim_page() - evicts allocations from a pool page and frees it
* @pool: pool from which a page will attempt to be evicted
 * @retries: number of pages on the LRU list for which eviction will
* be attempted before failing
*
* zbud reclaim is different from normal system reclaim in that the reclaim is
* done from the bottom, up. This is because only the bottom layer, zbud, has
* information on how the allocations are organized within each zbud page. This
* has the potential to create interesting locking situations between zbud and
* the user, however.
*
* To avoid these, this is how zbud_reclaim_page() should be called:
* The user detects a page should be reclaimed and calls zbud_reclaim_page().
* zbud_reclaim_page() will remove a zbud page from the pool LRU list and call
* the user-defined eviction handler with the pool and handle as arguments.
*
* If the handle can not be evicted, the eviction handler should return
* non-zero. zbud_reclaim_page() will add the zbud page back to the
* appropriate list and try the next zbud page on the LRU up to
* a user defined number of retries.
*
* If the handle is successfully evicted, the eviction handler should
* return 0 _and_ should have called zbud_free() on the handle. zbud_free()
* contains logic to delay freeing the page if the page is under reclaim,
 * as indicated by the under_reclaim flag being set on the zbud header.
*
* If all buddies in the zbud page are successfully evicted, then the
* zbud page can be freed.
*
* Returns: 0 if page is successfully freed, otherwise -EINVAL if there are
* no pages to evict or an eviction handler is not registered, -EAGAIN if
* the retry limit was hit.
*/
int zbud_reclaim_page(struct zbud_pool *pool, unsigned int retries)
{
int i, ret, freechunks;
struct zbud_header *zhdr;
unsigned long first_handle = 0, last_handle = 0;
spin_lock(&pool->lock);
if (!pool->ops || !pool->ops->evict || list_empty(&pool->lru) ||
retries == 0) {
spin_unlock(&pool->lock);
return -EINVAL;
}
for (i = 0; i < retries; i++) {
zhdr = list_tail_entry(&pool->lru, struct zbud_header, lru);
list_del(&zhdr->lru);
list_del(&zhdr->buddy);
/* Protect zbud page against free */
zhdr->under_reclaim = true;
/*
 * We need to encode the handles before unlocking, since we can
* race with free that will set (first|last)_chunks to 0
*/
first_handle = 0;
last_handle = 0;
if (zhdr->first_chunks)
first_handle = encode_handle(zhdr, FIRST);
if (zhdr->last_chunks)
last_handle = encode_handle(zhdr, LAST);
spin_unlock(&pool->lock);
/* Issue the eviction callback(s) */
if (first_handle) {
ret = pool->ops->evict(pool, first_handle);
if (ret)
goto next;
}
if (last_handle) {
ret = pool->ops->evict(pool, last_handle);
if (ret)
goto next;
}
next:
spin_lock(&pool->lock);
zhdr->under_reclaim = false;
if (zhdr->first_chunks == 0 && zhdr->last_chunks == 0) {
/*
* Both buddies are now free, free the zbud page and
* return success.
*/
free_zbud_page(zhdr);
pool->pages_nr--;
spin_unlock(&pool->lock);
return 0;
} else if (zhdr->first_chunks == 0 ||
zhdr->last_chunks == 0) {
/* add to unbuddied list */
freechunks = num_free_chunks(zhdr);
list_add(&zhdr->buddy, &pool->unbuddied[freechunks]);
} else {
/* add to buddied list */
list_add(&zhdr->buddy, &pool->buddied);
}
/* add to beginning of LRU */
list_add(&zhdr->lru, &pool->lru);
}
spin_unlock(&pool->lock);
return -EAGAIN;
}
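/*
 * Illustrative only, not part of zbud.c: one way a caller could wire up the
 * eviction protocol documented for zbud_reclaim_page().  The handler writes
 * the data back (elided here), frees the handle, and returns 0 so that
 * reclaim can free the underlying zbud page.  Names are invented.
 */
static int example_evict(struct zbud_pool *pool, unsigned long handle)
{
	void *data = zbud_map(pool, handle);

	/* ... write 'data' back to the backing swap device ... */
	zbud_unmap(pool, handle);
	zbud_free(pool, handle);
	return 0;
}

/* Passed to zbud_create_pool() so reclaim can call back into example_evict(). */
static struct zbud_ops example_ops = {
	.evict = example_evict,
};

/* Caller side: when the pool is over its limit, try up to 8 LRU pages. */
static int example_shrink(struct zbud_pool *pool)
{
	return zbud_reclaim_page(pool, 8);
}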
/**
* zbud_map() - maps the allocation associated with the given handle
* @pool: pool in which the allocation resides
* @handle: handle associated with the allocation to be mapped
*
 * While trivial for zbud, the mapping functions for other allocators
* implementing this allocation API could have more complex information encoded
* in the handle and could create temporary mappings to make the data
* accessible to the user.
*
* Returns: a pointer to the mapped allocation
*/
void *zbud_map(struct zbud_pool *pool, unsigned long handle)
{
return (void *)(handle);
}
/**
 * zbud_unmap() - unmaps the allocation associated with the given handle
* @pool: pool in which the allocation resides
* @handle: handle associated with the allocation to be unmapped
*/
void zbud_unmap(struct zbud_pool *pool, unsigned long handle)
{
}
/**
* zbud_get_pool_size() - gets the zbud pool size in pages
* @pool: pool whose size is being queried
*
* Returns: size in pages of the given pool. The pool lock need not be
* taken to access pages_nr.
*/
u64 zbud_get_pool_size(struct zbud_pool *pool)
{
return pool->pages_nr;
}
static int __init init_zbud(void)
{
/* Make sure the zbud header will fit in one chunk */
BUILD_BUG_ON(sizeof(struct zbud_header) > ZHDR_SIZE_ALIGNED);
pr_info("loaded\n");
return 0;
}
static void __exit exit_zbud(void)
{
pr_info("unloaded\n");
}
module_init(init_zbud);
module_exit(exit_zbud);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Seth Jennings <sjenning@linux.vnet.ibm.com>");
MODULE_DESCRIPTION("Buddy Allocator for Compressed Pages");
This diff has been collapsed.