Commit 8f1e33da authored by Christoph Lameter, committed by Pekka Enberg

slub: Switch per cpu partial page support off for debugging

Eric saw an issue with slab accounting during validation. It is not
possible to accurately determine how many per cpu partial slabs exist at
any given time, so switch per cpu partial pages off while debugging.
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
Parent dc47ce90
@@ -3028,7 +3028,9 @@ static int kmem_cache_open(struct kmem_cache *s,
	 * per node list when we run out of per cpu objects. We only fetch 50%
	 * to keep some capacity around for frees.
	 */
-	if (s->size >= PAGE_SIZE)
+	if (kmem_cache_debug(s))
+		s->cpu_partial = 0;
+	else if (s->size >= PAGE_SIZE)
		s->cpu_partial = 2;
	else if (s->size >= 1024)
		s->cpu_partial = 6;
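The hunk above changes the cpu_partial sizing ladder so that debug caches get no per cpu partial pages at all. A minimal standalone sketch of that decision logic is below; `cpu_partial_for` is a hypothetical helper (not kernel code), `PAGE_SIZE` is assumed to be 4096, and the hunk is truncated, so sizes below 1024 use a placeholder value rather than the kernel's actual ones.

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE 4096 /* assumption for illustration */

/* Hypothetical model of the post-commit cpu_partial policy:
 * debug caches keep zero per cpu partial pages so validation
 * can account for slabs accurately; otherwise larger objects
 * get a smaller per cpu partial limit. */
static unsigned int cpu_partial_for(size_t size, int debug)
{
	if (debug)
		return 0;		/* per cpu partial pages switched off */
	if (size >= PAGE_SIZE)
		return 2;
	if (size >= 1024)
		return 6;
	/* The hunk is truncated here; the kernel uses larger limits
	 * for smaller sizes. Placeholder value only. */
	return 6;
}
```

With debugging enabled, every cache size yields 0, which is what makes slab counts during validation deterministic.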