commit f428ebd1 authored by Peter Zijlstra, committed by Arnaldo Carvalho de Melo

perf tools: Fix AAAAARGH64 memory barriers

Someone got the load and store barriers mixed up for AAAAARGH64.  Turn
them the right side up.
Reported-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Fixes: a94d342b ("tools/perf: Add required memory barriers")
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Will Deacon <will.deacon@arm.com>
Link: http://lkml.kernel.org/r/20140124154002.GF31570@twins.programming.kicks-ass.net
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Parent 950b8354
@@ -100,8 +100,8 @@
 #ifdef __aarch64__
 #define mb() asm volatile("dmb ish" ::: "memory")
-#define wmb() asm volatile("dmb ishld" ::: "memory")
-#define rmb() asm volatile("dmb ishst" ::: "memory")
+#define wmb() asm volatile("dmb ishst" ::: "memory")
+#define rmb() asm volatile("dmb ishld" ::: "memory")
 #define cpu_relax() asm volatile("yield" ::: "memory")
 #endif
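For context (not part of the patch): below is a minimal sketch of how a perf-style mmap ring buffer pairs these barriers, assuming the corrected definitions above, where rmb() expands to "dmb ishld" (orders loads), wmb() to "dmb ishst" (orders stores), and mb() to the full "dmb ish". The struct layout and helper names are illustrative only, not the actual perf code.

#include <stdint.h>

/* Corrected AArch64 barriers, as in the hunk above (GCC inline asm). */
#define mb()  asm volatile("dmb ish"   ::: "memory")
#define wmb() asm volatile("dmb ishst" ::: "memory")
#define rmb() asm volatile("dmb ishld" ::: "memory")

/* Illustrative ring-buffer layout: the kernel advances head, userspace tail. */
struct ring {
	volatile uint64_t head;   /* written by the kernel (producer)  */
	volatile uint64_t tail;   /* written by userspace (consumer)   */
	char data[4096];
};

/* Consumer: read head, then the records it covers. */
static inline uint64_t ring_read_head(struct ring *r)
{
	uint64_t head = r->head;

	rmb();   /* order the head load before subsequent data loads */
	return head;
}

/* Consumer: publish the new tail only after the data has been consumed. */
static inline void ring_write_tail(struct ring *r, uint64_t tail)
{
	mb();    /* complete data accesses before the tail store becomes visible */
	r->tail = tail;
}

With the pre-patch definitions swapped, rmb() would emit "dmb ishst", which only orders stores, so the consumer's head load would not be ordered against the following data loads; that is exactly the bug this commit fixes.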