Commit 1077fa36, authored by Alexander Duyck, committed by David S. Miller

arch: Add lightweight memory barriers dma_rmb() and dma_wmb()

There are a number of situations where the mandatory barriers rmb() and
wmb() are used to order memory/memory operations in the device drivers
and those barriers are much heavier than they actually need to be.  For
example, in the case of PowerPC, wmb() calls the heavy-weight sync
instruction, when for coherent memory operations all that is really needed
is an lwsync or eieio instruction.

This commit adds coherent-only versions of the mandatory memory barriers
rmb() and wmb().  In most cases this should result in the barrier being the
same as the SMP barriers for the SMP case; however, in some cases we use a
barrier that is somewhere in between rmb() and smp_rmb().  For example, on
ARM the rmb barriers break down as follows:

  Barrier   Call     Explanation
  --------- -------- ----------------------------------
  rmb()     dsb()    Data synchronization barrier - system
  dma_rmb() dmb(osh) Data memory barrier - outer shareable
  smp_rmb() dmb(ish) Data memory barrier - inner shareable

These new barriers are not as safe as the standard rmb() and wmb().
Specifically they do not guarantee ordering between coherent and incoherent
memories.  The primary use case for these would be to enforce ordering of
reads and writes when accessing coherent memory that is shared between the
CPU and a device.
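
As a rough illustration of that use case, the sketch below shows an
RX-descriptor polling loop; the structure, flag, and helper names
(rx_desc, DESC_DONE, process_frame) are hypothetical, invented for this
example rather than taken from the patch:

	/* Hypothetical RX descriptor in coherent (DMA-consistent) memory. */
	struct rx_desc {
		u32 status;	/* device sets DESC_DONE once the buffer is filled */
		u32 len;	/* length of the received frame */
	};

	static int poll_one(struct rx_desc *desc, void *buf)
	{
		if (!(ACCESS_ONCE(desc->status) & DESC_DONE))
			return -EAGAIN;

		/*
		 * Order the status check against the payload reads below.
		 * A full rmb() would also be correct, but it is heavier
		 * than needed for CPU-vs-device ordering on coherent memory.
		 */
		dma_rmb();

		return process_frame(buf, desc->len);
	}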

It may also be noted that there is no dma_mb().  Most architectures don't
provide a good mechanism for performing a coherent-only full barrier without
resorting to the same mechanism used in mb().  As such there isn't much to
be gained in trying to define such a function.

Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Cc: Michael Ellerman <michael@ellerman.id.au>
Cc: Michael Neuling <mikey@neuling.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: David Miller <davem@davemloft.net>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Parent 8a449718
@@ -1633,6 +1633,48 @@ There are some more advanced barrier functions:
      operations" subsection for information on where to use these.
 
+ (*) dma_wmb();
+ (*) dma_rmb();
+
+     These are for use with consistent memory to guarantee the ordering
+     of writes or reads of shared memory accessible to both the CPU and a
+     DMA capable device.
+
+     For example, consider a device driver that shares memory with a device
+     and uses a descriptor status value to indicate if the descriptor belongs
+     to the device or the CPU, and a doorbell to notify it when new
+     descriptors are available:
+
+	if (desc->status != DEVICE_OWN) {
+		/* do not read data until we own descriptor */
+		dma_rmb();
+
+		/* read/modify data */
+		read_data = desc->data;
+		desc->data = write_data;
+
+		/* flush modifications before status update */
+		dma_wmb();
+
+		/* assign ownership */
+		desc->status = DEVICE_OWN;
+
+		/* force memory to sync before notifying device via MMIO */
+		wmb();
+
+		/* notify device of new descriptors */
+		writel(DESC_NOTIFY, doorbell);
+	}
+
+     The dma_rmb() allows us to guarantee the device has released ownership
+     before we read the data from the descriptor, and the dma_wmb() allows
+     us to guarantee the data is written to the descriptor before the device
+     can see it now has ownership.  The wmb() is needed to guarantee that the
+     cache coherent memory writes have completed before attempting a write to
+     the cache incoherent MMIO region.
+
+     See Documentation/DMA-API.txt for more information on consistent memory.
+
 
 MMIO WRITE BARRIER
 ------------------
@@ -43,10 +43,14 @@
 #define mb()		do { dsb(); outer_sync(); } while (0)
 #define rmb()		dsb()
 #define wmb()		do { dsb(st); outer_sync(); } while (0)
+#define dma_rmb()	dmb(osh)
+#define dma_wmb()	dmb(oshst)
 #else
 #define mb()		barrier()
 #define rmb()		barrier()
 #define wmb()		barrier()
+#define dma_rmb()	barrier()
+#define dma_wmb()	barrier()
 #endif
 
 #ifndef CONFIG_SMP
@@ -32,6 +32,9 @@
 #define rmb()		dsb(ld)
 #define wmb()		dsb(st)
 
+#define dma_rmb()	dmb(oshld)
+#define dma_wmb()	dmb(oshst)
+
 #ifndef CONFIG_SMP
 #define smp_mb()	barrier()
 #define smp_rmb()	barrier()
@@ -39,6 +39,9 @@
 #define rmb()		mb()
 #define wmb()		mb()
 
+#define dma_rmb()	mb()
+#define dma_wmb()	mb()
+
 #ifdef CONFIG_SMP
 # define smp_mb()	mb()
 #else
@@ -4,8 +4,6 @@
 #include <asm/metag_mem.h>
 
 #define nop()		asm volatile ("NOP")
-#define mb()		wmb()
-#define rmb()		barrier()
 
 #ifdef CONFIG_METAG_META21
@@ -41,11 +39,13 @@ static inline void wr_fence(void)
 
 #endif /* !CONFIG_METAG_META21 */
 
-static inline void wmb(void)
-{
-	/* flush writes through the write combiner */
-	wr_fence();
-}
+/* flush writes through the write combiner */
+#define mb()		wr_fence()
+#define rmb()		barrier()
+#define wmb()		mb()
+
+#define dma_rmb()	rmb()
+#define dma_wmb()	wmb()
 
 #ifndef CONFIG_SMP
 #define fence()		do { } while (0)
@@ -75,20 +75,21 @@
 #include <asm/wbflush.h>
 
-#define wmb()		fast_wmb()
-#define rmb()		fast_rmb()
 #define mb()		wbflush()
 #define iob()		wbflush()
 
 #else /* !CONFIG_CPU_HAS_WB */
 
-#define wmb()		fast_wmb()
-#define rmb()		fast_rmb()
 #define mb()		fast_mb()
 #define iob()		fast_iob()
 
 #endif /* !CONFIG_CPU_HAS_WB */
 
+#define wmb()		fast_wmb()
+#define rmb()		fast_rmb()
+#define dma_wmb()	fast_wmb()
+#define dma_rmb()	fast_rmb()
+
 #if defined(CONFIG_WEAK_ORDERING) && defined(CONFIG_SMP)
 # ifdef CONFIG_CPU_CAVIUM_OCTEON
 # define smp_mb()	__sync()
@@ -36,8 +36,6 @@
 #define set_mb(var, value)	do { var = value; mb(); } while (0)
 
-#ifdef CONFIG_SMP
-
 #ifdef __SUBARCH_HAS_LWSYNC
 #    define SMPWMB      LWSYNC
 #else
@@ -45,12 +43,17 @@
 #endif
 
 #define __lwsync()	__asm__ __volatile__ (stringify_in_c(LWSYNC) : : :"memory")
+#define dma_rmb()	__lwsync()
+#define dma_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
+
+#ifdef CONFIG_SMP
+#define smp_lwsync()	__lwsync()
 
 #define smp_mb()	mb()
 #define smp_rmb()	__lwsync()
 #define smp_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
 #else
-#define __lwsync()	barrier()
+#define smp_lwsync()	barrier()
 
 #define smp_mb()	barrier()
 #define smp_rmb()	barrier()
@@ -72,7 +75,7 @@
 #define smp_store_release(p, v)						\
 do {									\
 	compiletime_assert_atomic_type(*p);				\
-	__lwsync();							\
+	smp_lwsync();							\
 	ACCESS_ONCE(*p) = (v);						\
 } while (0)
@@ -80,7 +83,7 @@ do {									\
 ({									\
 	typeof(*p) ___p1 = ACCESS_ONCE(*p);				\
 	compiletime_assert_atomic_type(*p);				\
-	__lwsync();							\
+	smp_lwsync();							\
 	___p1;								\
 })
@@ -24,6 +24,8 @@
 #define rmb()				mb()
 #define wmb()				mb()
+#define dma_rmb()			rmb()
+#define dma_wmb()			wmb()
 #define smp_mb()			mb()
 #define smp_rmb()			rmb()
 #define smp_wmb()			wmb()
@@ -37,6 +37,9 @@ do { __asm__ __volatile__("ba,pt	%%xcc, 1f\n\t" \
 #define rmb()	__asm__ __volatile__("":::"memory")
 #define wmb()	__asm__ __volatile__("":::"memory")
 
+#define dma_rmb()	rmb()
+#define dma_wmb()	wmb()
+
 #define set_mb(__var, __value) \
 	do { __var = __value; membar_safe("#StoreLoad"); } while(0)
@@ -24,13 +24,16 @@
 #define wmb()	asm volatile("sfence" ::: "memory")
 #endif
 
-#ifdef CONFIG_SMP
-#define smp_mb()	mb()
 #ifdef CONFIG_X86_PPRO_FENCE
-# define smp_rmb()	rmb()
+#define dma_rmb()	rmb()
 #else
-# define smp_rmb()	barrier()
+#define dma_rmb()	barrier()
 #endif
+#define dma_wmb()	barrier()
+
+#ifdef CONFIG_SMP
+#define smp_mb()	mb()
+#define smp_rmb()	dma_rmb()
 #define smp_wmb()	barrier()
 #define set_mb(var, value) do { (void)xchg(&var, value); } while (0)
 #else /* !SMP */
...@@ -29,17 +29,18 @@ ...@@ -29,17 +29,18 @@
#endif /* CONFIG_X86_32 */ #endif /* CONFIG_X86_32 */
#ifdef CONFIG_SMP
#define smp_mb() mb()
#ifdef CONFIG_X86_PPRO_FENCE #ifdef CONFIG_X86_PPRO_FENCE
#define smp_rmb() rmb() #define dma_rmb() rmb()
#else /* CONFIG_X86_PPRO_FENCE */ #else /* CONFIG_X86_PPRO_FENCE */
#define smp_rmb() barrier() #define dma_rmb() barrier()
#endif /* CONFIG_X86_PPRO_FENCE */ #endif /* CONFIG_X86_PPRO_FENCE */
#define dma_wmb() barrier()
#define smp_wmb() barrier() #ifdef CONFIG_SMP
#define smp_mb() mb()
#define smp_rmb() dma_rmb()
#define smp_wmb() barrier()
#define set_mb(var, value) do { (void)xchg(&var, value); } while (0) #define set_mb(var, value) do { (void)xchg(&var, value); } while (0)
#else /* CONFIG_SMP */ #else /* CONFIG_SMP */
......
@@ -42,6 +42,14 @@
 #define wmb()	mb()
 #endif
 
+#ifndef dma_rmb
+#define dma_rmb()	rmb()
+#endif
+
+#ifndef dma_wmb
+#define dma_wmb()	wmb()
+#endif
+
 #ifndef read_barrier_depends
 #define read_barrier_depends()	do { } while (0)
 #endif
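
The asm-generic fallback above means an architecture only needs to define
dma_rmb() and dma_wmb() when it has something lighter than the full
barriers; everyone else silently inherits rmb()/wmb(), which are always at
least as strong.  A minimal sketch of how that composes, using a made-up
architecture "foo" rather than any real header:

	/* arch/foo/include/asm/barrier.h (hypothetical) */
	#define dma_rmb()	foo_light_rmb()	/* coherent-only read barrier */

	#include <asm-generic/barrier.h>

	/*
	 * asm-generic/barrier.h only fills in what is still undefined:
	 * dma_rmb() stays foo_light_rmb(), while dma_wmb(), which foo
	 * did not define, falls back to the full wmb().
	 */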