Commit 9f264904 authored by Alex Shi, committed by Pekka Enberg

slub: correct comments error for per cpu partial

Correct comment errors that mistake the number of per cpu partial objects
for a number of pages, which may mislead readers.
Signed-off-by: Alex Shi <alex.shi@intel.com>
Reviewed-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
Parent 12d79634
@@ -82,7 +82,7 @@ struct kmem_cache {
 	int size;		/* The size of an object including meta data */
 	int objsize;		/* The size of an object without meta data */
 	int offset;		/* Free pointer offset. */
-	int cpu_partial;	/* Number of per cpu partial pages to keep around */
+	int cpu_partial;	/* Number of per cpu partial objects to keep around */
 	struct kmem_cache_order_objects oo;

 	/* Allocation and freeing of slabs */
...
@@ -3084,7 +3084,7 @@ static int kmem_cache_open(struct kmem_cache *s,
  *
  * A) The number of objects from per cpu partial slabs dumped to the
  *    per node list when we reach the limit.
- * B) The number of objects in partial partial slabs to extract from the
+ * B) The number of objects in cpu partial slabs to extract from the
  *    per node list when we run out of per cpu objects. We only fetch 50%
  *    to keep some capacity around for frees.
  */
...