Commit 919fc6e3 authored by Paul E. McKenney, committed by Ingo Molnar

powerpc: Full barrier for smp_mb__after_unlock_lock()

The powerpc lock acquisition sequence is as follows:

	lwarx; cmpwi; bne; stwcx.; lwsync;

Lock release is as follows:

	lwsync; stw;
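
Glossing each instruction (annotations added here for exposition; they are not part of the patch):

	# Acquisition:
	lwarx		# load word and reserve: begin an atomic update of the lock word
	cmpwi		# test whether the lock word is zero (lock free)
	bne		# branch back and spin if the lock is held
	stwcx.		# store conditional: claim the lock if the reservation still holds
	lwsync		# lightweight sync: acquire ordering for the critical section

	# Release:
	lwsync		# lightweight sync: release ordering for the critical section
	stw		# plain store that clears the lock word

Note that lwsync orders load-load, load-store, and store-store pairs, but not a
prior store against a later load; that gap is what the scenario below exploits.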

If CPU 0 does a store (say, x=1) then a lock release, and CPU 1
does a lock acquisition then a load (say, r1=y), then there is
no guarantee of a full memory barrier between the store to 'x'
and the load from 'y'. To see this, suppose that CPUs 0 and 1
are hardware threads in the same core that share a store buffer,
and that CPU 2 is in some other core, and that CPU 2 does the
following:

	y = 1; sync; r2 = x;

If 'x' and 'y' are both initially zero, then the lock
acquisition and release sequences above can result in r1 and r2
both being equal to zero, which could not happen if unlock+lock
were a full barrier.
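
Laid out as a litmus test (the lock and variable names are illustrative,
smp_mb__after_unlock_lock() is shown where this series places it, and
initially x == 0 && y == 0):

	/* CPU 0 */			/* CPU 1 */
	x = 1;				spin_lock(&lck);
	spin_unlock(&lck);		smp_mb__after_unlock_lock();
					r1 = y;

	/* CPU 2 */
	y = 1;
	smp_mb();	/* 'sync' on powerpc */
	r2 = x;

The store 'x = 1' can sit in the store buffer shared by CPUs 0 and 1,
invisible to CPU 2, while CPU 1's load of 'y' completes early; CPU 2 then
reads x == 0 even though its own sync ordered 'y = 1' before 'r2 = x'.
Once smp_mb__after_unlock_lock() expands to a full barrier, the outcome
r1 == 0 && r2 == 0 is forbidden.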

This commit therefore makes powerpc's
smp_mb__after_unlock_lock() be a full barrier.
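
A minimal sketch of the calling convention (lock names hypothetical; the
primitive was introduced primarily for RCU, which hands its node locks
between CPUs in exactly this way):

	spin_unlock(&a->lock);
	/* possibly on another CPU: */
	spin_lock(&b->lock);
	smp_mb__after_unlock_lock();	/* unlock+lock now implies smp_mb() */

On powerpc the macro now expands to smp_mb(), i.e. the sync instruction;
architectures where unlock followed by lock is already a full barrier can
leave it a no-op.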
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: linuxppc-dev@lists.ozlabs.org
Cc: <linux-arch@vger.kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/r/1386799151-2219-8-git-send-email-paulmck@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Parent 6303b9c8
@@ -28,6 +28,8 @@
 #include <asm/synch.h>
 #include <asm/ppc-opcode.h>
 
+#define smp_mb__after_unlock_lock()	smp_mb()  /* Full ordering for lock. */
+
 #define arch_spin_is_locked(x)		((x)->slock != 0)
 
 #ifdef CONFIG_PPC64