commit 8a5b20ae, authored by Joonsoo Kim, committed by Linus Torvalds

slub: fix off by one in number of slab tests

min_partial is the minimum number of slabs cached on a node's partial list.
If nr_partial is less than it, we keep a newly empty slab on the node
partial list rather than freeing it.  But if nr_partial is equal to or
greater than it, we already have enough partial slabs and should free the
newly empty slab.  The current implementation missed the equal case, so if
min_partial is set to 0, at least one slab could remain cached.  This is a
critical problem for the kmemcg destruction logic, which doesn't work
properly if any slabs remain cached.  This patch fixes the problem.

Fixes: 91cb69620284 ("slub: make dead memcg caches discard free slabs immediately")
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Vladimir Davydov <vdavydov@parallels.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent dc78327c
@@ -1881,7 +1881,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
 		new.frozen = 0;

-		if (!new.inuse && n->nr_partial > s->min_partial)
+		if (!new.inuse && n->nr_partial >= s->min_partial)
 			m = M_FREE;
 		else if (new.freelist) {
 			m = M_PARTIAL;
@@ -1992,7 +1992,7 @@ static void unfreeze_partials(struct kmem_cache *s,
 			new.freelist, new.counters,
 			"unfreezing slab"));

-		if (unlikely(!new.inuse && n->nr_partial > s->min_partial)) {
+		if (unlikely(!new.inuse && n->nr_partial >= s->min_partial)) {
 			page->next = discard_page;
 			discard_page = page;
 		} else {
@@ -2620,7 +2620,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
 		return;
 	}

-	if (unlikely(!new.inuse && n->nr_partial > s->min_partial))
+	if (unlikely(!new.inuse && n->nr_partial >= s->min_partial))
 		goto slab_empty;

 	/*
...