Commit f0a0e6f2 authored by Paul E. McKenney

rcu: Clarify memory-ordering properties of grace-period primitives

This commit explicitly states the memory-ordering properties of the
RCU grace-period primitives.  Although these properties were in some
sense implied by the fundamental property of RCU ("a grace period must
wait for all pre-existing RCU read-side critical sections to complete"),
stating it explicitly will be a great labor-saving device.
Reported-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Parent 67afeed2
@@ -90,6 +90,25 @@ extern void do_trace_rcu_torture_read(char *rcutorturename,
* that started after call_rcu() was invoked. RCU read-side critical
* sections are delimited by rcu_read_lock() and rcu_read_unlock(),
* and may be nested.
*
* Note that all CPUs must agree that the grace period extended beyond
* all pre-existing RCU read-side critical sections. On systems with more
* than one CPU, this means that when "func()" is invoked, each CPU is
* guaranteed to have executed a full memory barrier since the end of its
* last RCU read-side critical section whose beginning preceded the call
* to call_rcu(). It also means that each CPU executing an RCU read-side
* critical section that continues beyond the start of "func()" must have
* executed a memory barrier after the call_rcu() but before the beginning
* of that RCU read-side critical section. Note that these guarantees
* include CPUs that are offline, idle, or executing in user mode, as
* well as CPUs that are executing in the kernel.
*
* Furthermore, if CPU A invoked call_rcu() and CPU B invoked the
* resulting RCU callback function "func()", then both CPU A and CPU B are
* guaranteed to execute a full memory barrier during the time interval
* between the call to call_rcu() and the invocation of "func()" -- even
* if CPU A and CPU B are the same CPU (but again only if the system has
* more than one CPU).
*/
extern void call_rcu(struct rcu_head *head,
void (*func)(struct rcu_head *head));
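The guarantees documented above are what make the usual publish-then-defer-free pattern safe without any explicit barriers in the callback. The sketch below is editorial illustration rather than part of this patch; struct foo, gp, foo_lock, update_foo() and free_foo() are hypothetical names chosen for the example.

#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct foo {
	int a;
	struct rcu_head rh;
};

static struct foo __rcu *gp;		/* RCU-protected pointer */
static DEFINE_SPINLOCK(foo_lock);	/* serializes updaters */

static void free_foo(struct rcu_head *rh)
{
	/*
	 * By the time this callback runs, every CPU has executed a full
	 * memory barrier since the end of any read-side critical section
	 * that could still reference the old structure, so freeing it
	 * here cannot race with pre-existing readers.
	 */
	kfree(container_of(rh, struct foo, rh));
}

static void update_foo(int new_a)
{
	struct foo *newp = kmalloc(sizeof(*newp), GFP_KERNEL);
	struct foo *oldp;

	if (!newp)
		return;				/* error handling elided */
	newp->a = new_a;

	spin_lock(&foo_lock);
	oldp = rcu_dereference_protected(gp, lockdep_is_held(&foo_lock));
	rcu_assign_pointer(gp, newp);		/* publish the new version */
	spin_unlock(&foo_lock);

	if (oldp)
		call_rcu(&oldp->rh, free_foo);	/* free after a grace period */
}

A reader that fetched the old pointer under rcu_read_lock() may keep using it until rcu_read_unlock(); free_foo() will not be invoked until every such pre-existing reader has finished and the memory barriers described above have been executed.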
@@ -118,6 +137,9 @@ extern void call_rcu(struct rcu_head *head,
* OR
* - rcu_read_lock_bh() and rcu_read_unlock_bh(), if in process context.
* These may be nested.
*
* See the description of call_rcu() for more detailed information on
* memory ordering guarantees.
*/
extern void call_rcu_bh(struct rcu_head *head,
void (*func)(struct rcu_head *head));
@@ -137,6 +159,9 @@ extern void call_rcu_bh(struct rcu_head *head,
* OR
* anything that disables preemption.
* These may be nested.
*
* See the description of call_rcu() for more detailed information on
* memory ordering guarantees.
*/
extern void call_rcu_sched(struct rcu_head *head,
void (*func)(struct rcu_head *rcu));
...
@@ -2228,10 +2228,28 @@ static inline int rcu_blocking_is_gp(void)
* rcu_read_lock_sched().
*
* This means that all preempt_disable code sequences, including NMI and
* non-threaded hardware-interrupt handlers, in progress on entry will
* have completed before this primitive returns. However, this does not
* guarantee that softirq handlers will have completed, since in some
* kernels, these handlers can run in process context, and can block.
*
* Note that this guarantee implies further memory-ordering guarantees.
* On systems with more than one CPU, when synchronize_sched() returns,
* each CPU is guaranteed to have executed a full memory barrier since the
* end of its last RCU-sched read-side critical section whose beginning
* preceded the call to synchronize_sched(). In addition, each CPU having
* an RCU read-side critical section that extends beyond the return from
* synchronize_sched() is guaranteed to have executed a full memory barrier
* after the beginning of synchronize_sched() and before the beginning of
* that RCU read-side critical section. Note that these guarantees include
* CPUs that are offline, idle, or executing in user mode, as well as CPUs
* that are executing in the kernel.
*
* Furthermore, if CPU A invoked synchronize_sched(), which returned
* to its caller on CPU B, then both CPU A and CPU B are guaranteed
* to have executed a full memory barrier during the execution of
* synchronize_sched() -- even if CPU A and CPU B are the same CPU (but
* again only if the system has more than one CPU).
*
* This primitive provides the guarantees made by the (now removed)
* synchronize_kernel() API. In contrast, synchronize_rcu() only
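In practice, these guarantees let an updater unlink an object, wait with synchronize_sched(), and then reclaim it without any additional barriers, because the wait itself orders against every preempt-disabled reader. Below is a minimal editorial sketch, not part of this patch; struct bar, bar_list, read_first_bar() and remove_bar() are invented names, and updater locking is elided.

#include <linux/rculist.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct bar {
	struct list_head list;
	int value;
};

/* Reader side: disabling preemption acts as an RCU-sched critical section. */
static int read_first_bar(struct list_head *bar_list)
{
	struct bar *b;
	int val = -1;

	preempt_disable();
	list_for_each_entry_rcu(b, bar_list, list) {
		val = b->value;
		break;
	}
	preempt_enable();
	return val;
}

/* Updater side: unlink, wait for pre-existing readers, then free. */
static void remove_bar(struct bar *b)
{
	/* Caller is assumed to hold whatever lock serializes updaters. */
	list_del_rcu(&b->list);

	/*
	 * After this returns, every preempt-disabled region that might
	 * still have seen 'b' has completed, and the memory barriers
	 * described above have been executed on all CPUs.
	 */
	synchronize_sched();

	kfree(b);	/* safe: no pre-existing reader can still hold 'b' */
}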
@@ -2259,6 +2277,9 @@ EXPORT_SYMBOL_GPL(synchronize_sched);
* read-side critical sections have completed. RCU read-side critical
* sections are delimited by rcu_read_lock_bh() and rcu_read_unlock_bh(),
* and may be nested.
*
* See the description of synchronize_sched() for more detailed information
* on memory ordering guarantees.
*/
void synchronize_rcu_bh(void)
{
...
@@ -670,6 +670,9 @@ EXPORT_SYMBOL_GPL(kfree_call_rcu);
* concurrently with new RCU read-side critical sections that began while
* synchronize_rcu() was waiting. RCU read-side critical sections are
* delimited by rcu_read_lock() and rcu_read_unlock(), and may be nested.
*
* See the description of synchronize_sched() for more detailed information
* on memory ordering guarantees.
*/
void synchronize_rcu(void)
{
...
@@ -875,6 +878,11 @@ EXPORT_SYMBOL_GPL(synchronize_rcu_expedited);
/**
* rcu_barrier - Wait until all in-flight call_rcu() callbacks complete.
*
* Note that this primitive does not necessarily wait for an RCU grace period
* to complete. For example, if there are no RCU callbacks queued anywhere
* in the system, then rcu_barrier() is within its rights to return
* immediately, without waiting for anything, much less an RCU grace period.
*/
void rcu_barrier(void)
{
...
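The distinction above matters mostly on module-unload and shutdown paths, where rcu_barrier(), rather than a grace-period wait, is what keeps a queued callback from running after its code or data is gone. A hypothetical sketch of that pattern follows; my_cache and the unregistration step are placeholders.

#include <linux/module.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

static struct kmem_cache *my_cache;	/* hypothetical cache of RCU-freed objects */

static void __exit my_module_exit(void)
{
	/*
	 * 1. First stop everything that can still queue new call_rcu()
	 *    callbacks (unregister hooks, stop kthreads; details elided).
	 *
	 * 2. Then wait for every already-queued callback to be invoked.
	 *    If none are queued, rcu_barrier() may return immediately,
	 *    without waiting for a grace period.
	 */
	rcu_barrier();

	/* 3. Only now is it safe to tear down what the callbacks used. */
	kmem_cache_destroy(my_cache);
}
module_exit(my_module_exit);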