Commit 9323b417 authored by Yu Zhao, committed by YuLinjia

mm: multi-gen LRU: minimal implementation

mainline inclusion
from mainline-v6.1-rc1
commit ac35a490
category: feature
bugzilla: https://gitee.com/openeuler/open-source-summer/issues/I55Z0L
CVE: NA
Reference: https://android-review.googlesource.com/c/kernel/common/+/2050911/10

----------------------------------------------------------------------

To avoid confusion, the terms "promotion" and "demotion" will be
applied to the multi-gen LRU, as a new convention; the terms
"activation" and "deactivation" will be applied to the active/inactive
LRU, as usual.

The aging produces young generations. Given an lruvec, it increments
max_seq when max_seq-min_seq+1 approaches MIN_NR_GENS. The aging
promotes hot pages to the youngest generation when it finds them
accessed through page tables; the demotion of cold pages happens
consequently when it increments max_seq. The aging has the complexity
O(nr_hot_pages), since it is only interested in hot pages. Promotion
in the aging path does not require any LRU list operations, only
updates of the gen counter and lrugen->nr_pages[]; demotion, unless it
happens as a result of incrementing max_seq, requires LRU list
operations, e.g., lru_deactivate_fn().
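
The following user-space sketch (illustration only, not kernel code;
the struct and helper names are hypothetical) models the two points
above: a sequence number indexes lrugen->lists[] modulo MAX_NR_GENS,
and the aging creates a new generation only when the number of
generations drops to MIN_NR_GENS:

    #include <stdbool.h>
    #include <stdio.h>

    #define MIN_NR_GENS 2U
    #define MAX_NR_GENS 4U

    /* toy model of the per-lruvec generation counters */
    struct toy_lruvec {
            unsigned long max_seq;  /* youngest generation */
            unsigned long min_seq;  /* oldest generation (one type only here) */
    };

    /* mirrors lru_gen_from_seq() from the patch: seq indexes lists[] modulo MAX_NR_GENS */
    static unsigned int gen_from_seq(unsigned long seq)
    {
            return seq % MAX_NR_GENS;
    }

    /* the aging creates a new generation only when few are left */
    static bool need_aging(const struct toy_lruvec *v)
    {
            return v->max_seq - v->min_seq + 1 <= MIN_NR_GENS;
    }

    int main(void)
    {
            struct toy_lruvec v = { .max_seq = 7, .min_seq = 6 };

            printf("max_seq %lu maps to gen %u\n", v.max_seq, gen_from_seq(v.max_seq));
            if (need_aging(&v))
                    v.max_seq++;    /* demotes every other page by one generation, no list ops */
            printf("after aging: max_seq %lu maps to gen %u\n",
                   v.max_seq, gen_from_seq(v.max_seq));
            return 0;
    }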

The eviction consumes old generations. Given an lruvec, it increments
min_seq when the lists indexed by min_seq%MAX_NR_GENS become empty. A
feedback loop modeled after the PID controller monitors refaults over
anon and file types and decides which type to evict when both types
are available from the same generation.
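
As a rough sketch of the decision this loop makes (a simplified
user-space model, not the kernel's PID-controller code; all names
below are hypothetical), the type whose evicted pages refault less
often is the better eviction candidate:

    #include <stdio.h>

    struct type_stats {
            unsigned long refaulted;        /* refaults observed */
            unsigned long evicted;          /* pages evicted */
            unsigned long protected_pages;  /* pages protected instead of evicted */
    };

    /* returns 0 to evict anon first, 1 to evict file first */
    static int pick_type_to_evict(const struct type_stats *anon,
                                  const struct type_stats *file)
    {
            /*
             * Compare refaulted/(evicted+protected) without dividing:
             * anon refaults more often than file iff
             * anon->refaulted * file_total > file->refaulted * anon_total.
             */
            unsigned long long anon_total = anon->evicted + anon->protected_pages + 1;
            unsigned long long file_total = file->evicted + file->protected_pages + 1;

            if ((unsigned long long)anon->refaulted * file_total >
                (unsigned long long)file->refaulted * anon_total)
                    return 1;       /* anon is hotter, evict file */
            return 0;               /* otherwise evict anon */
    }

    int main(void)
    {
            struct type_stats anon = { .refaulted = 500, .evicted = 1000 };
            struct type_stats file = { .refaulted = 50,  .evicted = 2000 };

            printf("evict %s first\n", pick_type_to_evict(&anon, &file) ? "file" : "anon");
            return 0;
    }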

Each generation is divided into multiple tiers. Tiers represent
different ranges of numbers of accesses through file descriptors. A
page accessed N times through file descriptors is in tier
order_base_2(N). Tiers do not have dedicated lrugen->lists[], only
bits in page->flags. In contrast to moving across generations, which
requires the LRU lock, moving across tiers only involves operations on
page->flags. The feedback loop also monitors refaults over all tiers
and decides which tiers (N>1) to protect, using the first tier (N=0,1)
as a baseline. The first tier contains single-use unmapped clean
pages, which are most likely the best choices for eviction. The
eviction moves a page to the next generation, i.e., min_seq+1, if the
feedback loop decides so. This approach has the following advantages
(a tier-mapping sketch follows this list):
1. It removes the cost of activation in the buffered access path by
   inferring whether pages accessed multiple times through file
   descriptors are statistically hot and thus worth protecting in the
   eviction path.
2. It takes pages accessed through page tables into account and avoids
   overprotecting pages accessed multiple times through file
   descriptors. (Pages accessed through page tables are in the first
   tier, since N=0.)
3. More tiers provide better protection for pages accessed more than
   twice through file descriptors, when under heavy buffered I/O
   workloads.
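
The tier-mapping sketch referenced above (a stand-alone user-space
model; this order_base_2() is a hypothetical reimplementation of the
kernel macro of the same name):

    #include <stdio.h>

    /* ceil(log2(n)) for n > 1, 0 for n <= 1, like the kernel's order_base_2() */
    static int order_base_2(unsigned int n)
    {
            int order = 0;

            while ((1U << order) < n)
                    order++;
            return order;
    }

    /* a page accessed N times through file descriptors sits in tier order_base_2(N) */
    static int tier_from_accesses(unsigned int n)
    {
            return n ? order_base_2(n) : 0;
    }

    int main(void)
    {
            /* prints: N=0,1 -> tier 0; N=2 -> 1; N=3,4 -> 2; N=5..8 -> 3 */
            for (unsigned int n = 0; n <= 8; n++)
                    printf("N=%u -> tier %d\n", n, tier_from_accesses(n));
            return 0;
    }

This matches lru_tier_from_refs() in the patch: for N>=2, PG_workingset
is set and the refs field in page->flags holds N-2, so the
lru_tier_from_refs(refs + workingset) call in the workingset code below
evaluates to order_base_2(N).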

Server benchmark results:
  Single workload:
    fio (buffered I/O): +[38, 40]%
                         IOPS         BW
      5.18-ed464352:     2547k        9989MiB/s
      patch1-6:          3540k        13.5GiB/s

  Single workload:
    memcached (anon): +[103, 107]%
                         Ops/sec      KB/sec
      5.18-ed464352:     469048.66    18243.91
      patch1-6:          964656.80    37520.88

  Configurations:
    CPU: two Xeon 6154
    Mem: total 256G

    Node 1 was only used as a ram disk to reduce the variance in the
    results.

    patch drivers/block/brd.c <<EOF
    99,100c99,100
    < 	gfp_flags = GFP_NOIO | __GFP_ZERO | __GFP_HIGHMEM;
    < 	page = alloc_page(gfp_flags);
    ---
    > 	gfp_flags = GFP_NOIO | __GFP_ZERO | __GFP_HIGHMEM | __GFP_THISNODE;
    > 	page = alloc_pages_node(1, gfp_flags, 0);
    EOF

    cat >>/etc/systemd/system.conf <<EOF
    CPUAffinity=numa
    NUMAPolicy=bind
    NUMAMask=0
    EOF

    cat >>/etc/memcached.conf <<EOF
    -m 184320
    -s /var/run/memcached/memcached.sock
    -a 0766
    -t 36
    -B binary
    EOF

    cat fio.sh
    modprobe brd rd_nr=1 rd_size=113246208
    swapoff -a
    mkfs.ext4 /dev/ram0
    mount -t ext4 /dev/ram0 /mnt

    mkdir /sys/fs/cgroup/user.slice/test
    echo 38654705664 >/sys/fs/cgroup/user.slice/test/memory.max
    echo $$ >/sys/fs/cgroup/user.slice/test/cgroup.procs
    fio -name=mglru --numjobs=72 --directory=/mnt --size=1408m \
      --buffered=1 --ioengine=io_uring --iodepth=128 \
      --iodepth_batch_submit=32 --iodepth_batch_complete=32 \
      --rw=randread --random_distribution=random --norandommap \
      --time_based --ramp_time=10m --runtime=5m --group_reporting

    cat memcached.sh
    modprobe brd rd_nr=1 rd_size=113246208
    swapoff -a
    mkswap /dev/ram0
    swapon /dev/ram0

    memtier_benchmark -S /var/run/memcached/memcached.sock \
      -P memcache_binary -n allkeys --key-minimum=1 \
      --key-maximum=65000000 --key-pattern=P:P -c 1 -t 36 \
      --ratio 1:0 --pipeline 8 -d 2000

    memtier_benchmark -S /var/run/memcached/memcached.sock \
      -P memcache_binary -n allkeys --key-minimum=1 \
      --key-maximum=65000000 --key-pattern=R:R -c 1 -t 36 \
      --ratio 0:1 --pipeline 8 --randomize --distinct-client-seed

Client benchmark results:
  kswapd profiles:
    5.18-ed464352
      39.56%  page_vma_mapped_walk
      19.32%  lzo1x_1_do_compress (real work)
       7.18%  do_raw_spin_lock
       4.23%  _raw_spin_unlock_irq
       2.26%  vma_interval_tree_subtree_search
       2.12%  vma_interval_tree_iter_next
       2.11%  folio_referenced_one
       1.90%  anon_vma_interval_tree_iter_first
       1.47%  ptep_clear_flush
       0.97%  __anon_vma_interval_tree_subtree_search

    patch1-6
      36.13%  lzo1x_1_do_compress (real work)
      19.16%  page_vma_mapped_walk
       6.55%  _raw_spin_unlock_irq
       4.02%  do_raw_spin_lock
       2.32%  anon_vma_interval_tree_iter_first
       2.11%  ptep_clear_flush
       1.76%  __zram_bvec_write
       1.64%  folio_referenced_one
       1.40%  memmove
       1.35%  obj_malloc

  Configurations:
    CPU: single Snapdragon 7c
    Mem: total 4G

    Chrome OS MemoryPressure [1]

[1] https://chromium.googlesource.com/chromiumos/platform/tast-tests/

Link: https://lore.kernel.org/r/20220309021230.721028-7-yuzhao@google.com/
Signed-off-by: Yu Zhao <yuzhao@google.com>
Acked-by: Brian Geffon <bgeffon@google.com>
Acked-by: Jan Alexander Steffens (heftig) <heftig@archlinux.org>
Acked-by: Oleksandr Natalenko <oleksandr@natalenko.name>
Acked-by: Steven Barrett <steven@liquorix.net>
Acked-by: Suleiman Souhlal <suleiman@google.com>
Tested-by: Daniel Byrne <djbyrne@mtu.edu>
Tested-by: Donald Carr <d@chaos-reins.com>
Tested-by: Holger Hoffstätte <holger@applied-asynchrony.com>
Tested-by: Konstantin Kharlamov <Hi-Angel@yandex.ru>
Tested-by: Shuang Zhai <szhai2@cs.rochester.edu>
Tested-by: Sofia Trinh <sofia.trinh@edi.works>
Tested-by: Vaibhav Jain <vaibhav@linux.ibm.com>
Bug: 227651406
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Change-Id: I3fe4850006d7984cd9f4fd46134b826609dc2f86
Signed-off-by: YuLinjia <3110442349@qq.com>
Parent dca02ff3
@@ -125,6 +125,19 @@ static inline int lru_gen_from_seq(unsigned long seq)
 	return seq % MAX_NR_GENS;
 }
 
+static inline int lru_hist_from_seq(unsigned long seq)
+{
+	return seq % NR_HIST_GENS;
+}
+
+static inline int lru_tier_from_refs(int refs)
+{
+	VM_BUG_ON(refs > BIT(LRU_REFS_WIDTH));
+
+	/* see the comment on MAX_NR_TIERS */
+	return order_base_2(refs + 1);
+}
+
 static inline bool lru_gen_is_active(struct lruvec *lruvec, int gen)
 {
 	unsigned long max_seq = lruvec->lrugen.max_seq;
@@ -170,6 +183,15 @@ static inline void lru_gen_update_size(struct lruvec *lruvec, struct page *page,
 		__update_lru_size(lruvec, lru, zone, -delta);
 		return;
 	}
+
+	/* promotion */
+	if (!lru_gen_is_active(lruvec, old_gen) && lru_gen_is_active(lruvec, new_gen)) {
+		__update_lru_size(lruvec, lru, zone, -delta);
+		__update_lru_size(lruvec, lru + LRU_ACTIVE, zone, delta);
+	}
+
+	/* demotion requires isolation, e.g., lru_deactivate_fn() */
+	VM_BUG_ON(lru_gen_is_active(lruvec, old_gen) && !lru_gen_is_active(lruvec, new_gen));
 }
 
 static inline bool lru_gen_add_page(struct lruvec *lruvec, struct page *page, bool reclaiming)
@@ -234,6 +256,8 @@ static inline bool lru_gen_del_page(struct lruvec *lruvec, struct page *page, bo
 	gen = ((new_flags & LRU_GEN_MASK) >> LRU_GEN_PGOFF) - 1;
 
 	new_flags &= ~LRU_GEN_MASK;
+	if (!(new_flags & BIT(PG_referenced)))
+		new_flags &= ~(LRU_REFS_MASK | LRU_REFS_FLAGS);
 	/* for shrink_page_list() */
 	if (reclaiming)
 		new_flags &= ~(BIT(PG_referenced) | BIT(PG_reclaim));
......
@@ -309,12 +309,34 @@ enum lruvec_flags {
 #define MIN_NR_GENS 2U
 #define MAX_NR_GENS 4U
 
+/*
+ * Each generation is divided into multiple tiers. Tiers represent different
+ * ranges of numbers of accesses through file descriptors. A page accessed N
+ * times through file descriptors is in tier order_base_2(N). A page in the
+ * first tier (N=0,1) is marked by PG_referenced unless it was faulted in
+ * through page tables or read ahead. A page in any other tier (N>1) is marked
+ * by PG_referenced and PG_workingset.
+ *
+ * In contrast to moving across generations, which requires the LRU lock,
+ * moving across tiers only requires operations on page->flags and therefore
+ * has a negligible cost in the buffered access path. In the eviction path,
+ * comparisons of refaulted/(evicted+protected) from the first tier and the
+ * rest infer whether pages accessed multiple times through file descriptors
+ * are statistically hot and thus worth protecting.
+ *
+ * MAX_NR_TIERS is set to 4 so that the multi-gen LRU can support twice the
+ * categories of the active/inactive LRU when keeping track of accesses through
+ * file descriptors. It requires MAX_NR_TIERS-2 additional bits in page->flags.
+ */
+#define MAX_NR_TIERS 4U
+
 #ifndef __GENERATING_BOUNDS_H
 
 struct lruvec;
 
 #define LRU_GEN_MASK ((BIT(LRU_GEN_WIDTH) - 1) << LRU_GEN_PGOFF)
 #define LRU_REFS_MASK ((BIT(LRU_REFS_WIDTH) - 1) << LRU_REFS_PGOFF)
+#define LRU_REFS_FLAGS (BIT(PG_referenced) | BIT(PG_workingset))
 
 #ifdef CONFIG_LRU_GEN
 
@@ -323,6 +345,16 @@ enum {
 	LRU_GEN_FILE,
 };
 
+#define MIN_LRU_BATCH BITS_PER_LONG
+#define MAX_LRU_BATCH (MIN_LRU_BATCH * 128)
+
+/* whether to keep historical stats from evicted generations */
+#ifdef CONFIG_LRU_GEN_STATS
+#define NR_HIST_GENS MAX_NR_GENS
+#else
+#define NR_HIST_GENS 1U
+#endif
+
 /*
  * The youngest generation number is stored in max_seq for both anon and file
  * types as they are aged on an equal footing. The oldest generation numbers are
@@ -342,6 +374,15 @@ struct lru_gen_struct {
 	struct list_head lists[MAX_NR_GENS][ANON_AND_FILE][MAX_NR_ZONES];
 	/* the sizes of the above lists */
 	unsigned long nr_pages[MAX_NR_GENS][ANON_AND_FILE][MAX_NR_ZONES];
+	/* the exponential moving average of refaulted */
+	unsigned long avg_refaulted[ANON_AND_FILE][MAX_NR_TIERS];
+	/* the exponential moving average of evicted+protected */
+	unsigned long avg_total[ANON_AND_FILE][MAX_NR_TIERS];
+	/* the first tier doesn't need protection, hence the minus one */
+	unsigned long protected[NR_HIST_GENS][ANON_AND_FILE][MAX_NR_TIERS - 1];
+	/* can be modified without holding the LRU lock */
+	atomic_long_t evicted[NR_HIST_GENS][ANON_AND_FILE][MAX_NR_TIERS];
+	atomic_long_t refaulted[NR_HIST_GENS][ANON_AND_FILE][MAX_NR_TIERS];
 };
 
 void lru_gen_init_lruvec(struct lruvec *lruvec);
......
@@ -24,7 +24,7 @@ int main(void)
 	DEFINE(SPINLOCK_SIZE, sizeof(spinlock_t));
 #ifdef CONFIG_LRU_GEN
 	DEFINE(LRU_GEN_WIDTH, order_base_2(MAX_NR_GENS + 1));
-	DEFINE(LRU_REFS_WIDTH, 0);
+	DEFINE(LRU_REFS_WIDTH, MAX_NR_TIERS - 2);
 #else
 	DEFINE(LRU_GEN_WIDTH, 0);
 	DEFINE(LRU_REFS_WIDTH, 0);
......
@@ -985,6 +985,7 @@ config CLEAR_FREELIST_PAGE
 
 source "mm/damon/Kconfig"
 
+# multi-gen LRU {
 config LRU_GEN
 	bool "Multi-Gen LRU"
 	depends on MMU
@@ -993,4 +994,14 @@ config LRU_GEN
 	help
 	  A high performance LRU implementation to overcommit memory.
 
+config LRU_GEN_STATS
+	bool "Full stats for debugging"
+	depends on LRU_GEN
+	help
+	  Do not enable this option unless you plan to look at historical stats
+	  from evicted generations for debugging purposes.
+
+	  This option has a per-memcg and per-node memory overhead.
+# }
+
 endmenu
@@ -401,6 +401,43 @@ static void __lru_cache_activate_page(struct page *page)
 	local_unlock(&lru_pvecs.lock);
 }
 
+#ifdef CONFIG_LRU_GEN
+static void page_inc_refs(struct page *page)
+{
+	unsigned long refs;
+	unsigned long old_flags, new_flags;
+
+	if (PageUnevictable(page))
+		return;
+
+	/* see the comment on MAX_NR_TIERS */
+	do {
+		new_flags = old_flags = READ_ONCE(page->flags);
+
+		if (!(new_flags & BIT(PG_referenced))) {
+			new_flags |= BIT(PG_referenced);
+			continue;
+		}
+
+		if (!(new_flags & BIT(PG_workingset))) {
+			new_flags |= BIT(PG_workingset);
+			continue;
+		}
+
+		refs = new_flags & LRU_REFS_MASK;
+		refs = min(refs + BIT(LRU_REFS_PGOFF), LRU_REFS_MASK);
+
+		new_flags &= ~LRU_REFS_MASK;
+		new_flags |= refs;
+	} while (new_flags != old_flags &&
+		 cmpxchg(&page->flags, old_flags, new_flags) != old_flags);
+}
+#else
+static void page_inc_refs(struct page *page)
+{
+}
+#endif /* CONFIG_LRU_GEN */
+
 /*
  * Mark a page as having seen activity.
  *
@@ -415,6 +452,11 @@ void mark_page_accessed(struct page *page)
 {
 	page = compound_head(page);
 
+	if (lru_gen_enabled()) {
+		page_inc_refs(page);
+		return;
+	}
+
 	if (!PageReferenced(page)) {
 		SetPageReferenced(page);
 	} else if (PageUnevictable(page)) {
......
(This diff has been collapsed.)
@@ -185,7 +185,6 @@ static unsigned int bucket_order __read_mostly;
 static void *pack_shadow(int memcgid, pg_data_t *pgdat, unsigned long eviction,
 			 bool workingset)
 {
-	eviction >>= bucket_order;
 	eviction &= EVICTION_MASK;
 	eviction = (eviction << MEM_CGROUP_ID_SHIFT) | memcgid;
 	eviction = (eviction << NODES_SHIFT) | pgdat->node_id;
@@ -210,10 +209,116 @@ static void unpack_shadow(void *shadow, int *memcgidp, pg_data_t **pgdat,
 	*memcgidp = memcgid;
 	*pgdat = NODE_DATA(nid);
-	*evictionp = entry << bucket_order;
+	*evictionp = entry;
 	*workingsetp = workingset;
 }
 
+#ifdef CONFIG_LRU_GEN
+
+static int page_lru_refs(struct page *page)
+{
+	unsigned long flags = READ_ONCE(page->flags);
+
+	BUILD_BUG_ON(LRU_GEN_WIDTH + LRU_REFS_WIDTH > BITS_PER_LONG - EVICTION_SHIFT);
+
+	/* see the comment on MAX_NR_TIERS */
+	return flags & BIT(PG_workingset) ? (flags & LRU_REFS_MASK) >> LRU_REFS_PGOFF : 0;
+}
+
+static void *lru_gen_eviction(struct page *page)
+{
+	int hist, tier;
+	unsigned long token;
+	unsigned long min_seq;
+	struct lruvec *lruvec;
+	struct lru_gen_struct *lrugen;
+	int type = page_is_file_lru(page);
+	int refs = page_lru_refs(page);
+	int delta = thp_nr_pages(page);
+	bool workingset = PageWorkingset(page);
+	struct mem_cgroup *memcg = page_memcg(page);
+	struct pglist_data *pgdat = page_pgdat(page);
+
+	lruvec = mem_cgroup_lruvec(memcg, pgdat);
+	lrugen = &lruvec->lrugen;
+	min_seq = READ_ONCE(lrugen->min_seq[type]);
+	token = (min_seq << LRU_REFS_WIDTH) | refs;
+
+	hist = lru_hist_from_seq(min_seq);
+	tier = lru_tier_from_refs(refs + workingset);
+	atomic_long_add(delta, &lrugen->evicted[hist][type][tier]);
+
+	return pack_shadow(mem_cgroup_id(memcg), pgdat, token, workingset);
+}
+
+static void lru_gen_refault(struct page *page, void *shadow)
+{
+	int hist, tier, refs;
+	int memcg_id;
+	bool workingset;
+	unsigned long token;
+	unsigned long min_seq;
+	struct lruvec *lruvec;
+	struct lru_gen_struct *lrugen;
+	struct mem_cgroup *memcg;
+	struct pglist_data *pgdat;
+	int type = page_is_file_lru(page);
+	int delta = thp_nr_pages(page);
+
+	unpack_shadow(shadow, &memcg_id, &pgdat, &token, &workingset);
+
+	refs = token & (BIT(LRU_REFS_WIDTH) - 1);
+	if (refs && !workingset)
+		return;
+
+	if (page_pgdat(page) != pgdat)
+		return;
+
+	rcu_read_lock();
+	memcg = page_memcg_rcu(page);
+	if (mem_cgroup_id(memcg) != memcg_id)
+		goto unlock;
+
+	token >>= LRU_REFS_WIDTH;
+	lruvec = mem_cgroup_lruvec(memcg, pgdat);
+	lrugen = &lruvec->lrugen;
+	min_seq = READ_ONCE(lrugen->min_seq[type]);
+	if (token != (min_seq & (EVICTION_MASK >> LRU_REFS_WIDTH)))
+		goto unlock;
+
+	hist = lru_hist_from_seq(min_seq);
+	tier = lru_tier_from_refs(refs + workingset);
+	atomic_long_add(delta, &lrugen->refaulted[hist][type][tier]);
+	mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + type, delta);
+
+	/*
+	 * Count the following two cases as stalls:
+	 * 1. For pages accessed through page tables, hotter pages pushed out
+	 *    hot pages which refaulted immediately.
+	 * 2. For pages accessed through file descriptors, numbers of accesses
+	 *    might have been beyond the limit.
+	 */
+	if (lru_gen_in_fault() || refs + workingset == BIT(LRU_REFS_WIDTH)) {
+		SetPageWorkingset(page);
+		mod_lruvec_state(lruvec, WORKINGSET_RESTORE_BASE + type, delta);
+	}
+unlock:
+	rcu_read_unlock();
+}
+
+#else
+
+static void *lru_gen_eviction(struct page *page)
+{
+	return NULL;
+}
+
+static void lru_gen_refault(struct page *page, void *shadow)
+{
+}
+
+#endif /* CONFIG_LRU_GEN */
+
 /**
  * workingset_age_nonresident - age non-resident entries as LRU ages
  * @lruvec: the lruvec that was aged
@@ -262,11 +367,15 @@ void *workingset_eviction(struct page *page, struct mem_cgroup *target_memcg)
 	VM_BUG_ON_PAGE(page_count(page), page);
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 
+	if (lru_gen_enabled())
+		return lru_gen_eviction(page);
+
 	lruvec = mem_cgroup_lruvec(target_memcg, pgdat);
 	workingset_age_nonresident(lruvec, thp_nr_pages(page));
 	/* XXX: target_memcg can be NULL, go through lruvec */
 	memcgid = mem_cgroup_id(lruvec_memcg(lruvec));
 	eviction = atomic_long_read(&lruvec->nonresident_age);
+	eviction >>= bucket_order;
 	return pack_shadow(memcgid, pgdat, eviction, PageWorkingset(page));
 }
@@ -294,7 +403,13 @@ void workingset_refault(struct page *page, void *shadow)
 	bool workingset;
 	int memcgid;
 
+	if (lru_gen_enabled()) {
+		lru_gen_refault(page, shadow);
+		return;
+	}
+
 	unpack_shadow(shadow, &memcgid, &pgdat, &eviction, &workingset);
+	eviction <<= bucket_order;
 
 	rcu_read_lock();
 	/*
......