commit e4b522d7 authored by Duncan Sands, committed by Andi Kleen

[PATCH] x86-64: fix asm constraints in i386 atomic_add_return

Since v->counter is both read and written, it should be an output as well
as an input for the asm.  The current code only gets away with this because
counter is volatile.  Also, according to Documentation/atomic_ops.txt,
atomic_add_return should provide a memory barrier, in particular a compiler
barrier, so the asm should be marked as clobbering memory.

Test case:

#include <stdio.h>

typedef struct { int counter; } atomic_t; /* NB: no "volatile" */

#define ATOMIC_INIT(i)	{ (i) }

#define atomic_read(v)		((v)->counter)

static __inline__ int atomic_add_return(int i, atomic_t *v)
{
	int __i = i;

	__asm__ __volatile__(
		"lock; xaddl %0, %1;"
		:"=r"(i)
		:"m"(v->counter), "0"(i));
/*	__asm__ __volatile__(
		"lock; xaddl %0, %1"
		:"+r" (i), "+m" (v->counter)
		: : "memory"); */
	return i + __i;
}

int main (void) {
	atomic_t a = ATOMIC_INIT(0);
	int x;

	x = atomic_add_return (1, &a);
	if ((x!=1) || (atomic_read(&a)!=1))
		printf("fail: %i, %i\n", x, atomic_read(&a));
}
Signed-off-by: Duncan Sands <baldrick@free.fr>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Andi Kleen <ak@suse.de>
Acked-by: David Howells <dhowells@redhat.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
parent d263b213
@@ -187,9 +187,9 @@ static __inline__ int atomic_add_return(int i, atomic_t *v)
 	/* Modern 486+ processor */
 	__i = i;
 	__asm__ __volatile__(
-		LOCK_PREFIX "xaddl %0, %1;"
-		:"=r"(i)
-		:"m"(v->counter), "0"(i));
+		LOCK_PREFIX "xaddl %0, %1"
+		:"+r" (i), "+m" (v->counter)
+		: : "memory");
 	return i + __i;
 #ifdef CONFIG_M386