Commit eadac03e authored by Pranith Kumar, committed by Tejun Heo

percpu: Replace smp_read_barrier_depends() with lockless_dereference()

Recently lockless_dereference() was added, which can be used in place of hard-coding smp_read_barrier_depends(). The following patch makes the change.
Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Parent cceb9bd6
@@ -128,10 +128,8 @@ static inline void percpu_ref_kill(struct percpu_ref *ref)
 static inline bool __ref_is_percpu(struct percpu_ref *ref,
 				   unsigned long __percpu **percpu_countp)
 {
-	unsigned long percpu_ptr = ACCESS_ONCE(ref->percpu_count_ptr);
-
 	/* paired with smp_store_release() in percpu_ref_reinit() */
-	smp_read_barrier_depends();
+	unsigned long percpu_ptr = lockless_dereference(ref->percpu_count_ptr);
 
 	if (unlikely(percpu_ptr & __PERCPU_REF_ATOMIC))
 		return false;
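For context, here is a minimal sketch of what lockless_dereference() stands for. It is paraphrased from the kernel's definition of that era, not an exact copy (the real macro uses a differently named temporary), and it shows why the helper subsumes the open-coded ACCESS_ONCE() load plus smp_read_barrier_depends() pair that __ref_is_percpu() previously carried:

/*
 * Sketch (paraphrased, not the exact kernel definition) of the idea behind
 * lockless_dereference():
 *
 *   1. Load the pointer exactly once; ACCESS_ONCE() keeps the compiler from
 *      re-reading or tearing the load.
 *   2. Issue smp_read_barrier_depends() so that later dereferences through
 *      the loaded pointer are ordered after the load itself (a no-op on
 *      every architecture except Alpha).
 */
#define lockless_dereference(p) \
({ \
	typeof(p) _p1 = ACCESS_ONCE(p); \
	smp_read_barrier_depends(); /* order later uses of _p1 after the load */ \
	_p1; \
})

Folding the two steps into one helper keeps the load and its dependency barrier from drifting apart, and the existing comment noting the pairing with smp_store_release() in percpu_ref_reinit() now sits directly on the line that performs the load.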