Commit 67b6c900 authored by Dave Hansen, committed by Pekka Enberg

mm: slub: work around unneeded lockdep warning

The slub code does some setup during early boot in
early_kmem_cache_node_alloc() with some local data.  There is no
possible way that another CPU can see this data, so the slub code
doesn't unnecessarily lock it.  However, some new lockdep asserts
check to make sure that add_partial() _always_ has the list_lock
held.

Just add the locking, even though it is technically unnecessary.
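
For context, the new asserts use the lockdep_assert_held() pattern. A minimal
sketch of the idea (simplified for illustration; the exact code in mm/slub.c
may differ):

	static inline void add_partial(struct kmem_cache_node *n,
				       struct page *page, int tail)
	{
		/*
		 * Lockdep complains whenever a caller reaches here
		 * without holding list_lock, even on paths that are
		 * provably single-threaded, such as early boot.
		 */
		lockdep_assert_held(&n->list_lock);
		/* ... partial-list manipulation elided ... */
	}

Lockdep cannot prove that the early-boot path runs on a single CPU, so
taking the (uncontended) lock around the call is the cheapest way to
satisfy the assert.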

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <linux@arm.linux.org.uk>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
Parent 433a91ff
@@ -2890,7 +2890,13 @@ static void early_kmem_cache_node_alloc(int node)
 	init_kmem_cache_node(n);
 	inc_slabs_node(kmem_cache_node, node, page->objects);
 
+	/*
+	 * the lock is for lockdep's sake, not for any actual
+	 * race protection
+	 */
+	spin_lock(&n->list_lock);
 	add_partial(n, page, DEACTIVATE_TO_HEAD);
+	spin_unlock(&n->list_lock);
 }
 
 static void free_kmem_cache_nodes(struct kmem_cache *s)