Commit 1d203884 authored by Zhihao Cheng, committed by Yang Yingliang

locking/percpu-rwsem: use this_cpu_{inc|dec}() for read_count

hulk inclusion
category: bugfix
bugzilla: NA
CVE: NA

---------------------------

The __this_cpu*() accessors are (in general) IRQ-unsafe which, given
that percpu-rwsem is a blocking primitive, should be just fine.

However, file_end_write() is used from IRQ context and will cause
load-store issues on architectures where the per-cpu accessors are not
natively irq-safe.
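
For illustration, a minimal sketch in plain C (not the kernel code; the
bare global read_count here is a stand-in for the per-cpu
*sem->read_count) of how the non-atomic fallback loses an update:

	static unsigned int read_count;	/* stand-in for *sem->read_count */

	static void reader_inc(void)	/* what the generic __this_cpu_inc() boils down to */
	{
		unsigned int v = read_count;	/* load */
		/*
		 * If an IRQ fires here and its handler reaches
		 * percpu_up_read() via file_end_write(), the handler's
		 * decrement of read_count happens now ...
		 */
		read_count = v + 1;		/* ... and this store overwrites it */
	}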

Fix it by using the IRQ-safe this_cpu_*() for operations on
read_count. This will generate more expensive code on a number of
platforms, which might cause a performance regression for some of the
other percpu-rwsem users.

If any such is reported, we can consider alternative solutions.
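
The cost comes from platforms without a native IRQ-safe per-cpu op,
where the generic this_cpu_*() fallback brackets the plain update with
an IRQ disable/enable pair. A rough sketch of that asm-generic pattern
(the macro name here is hypothetical):

	#define this_cpu_inc_sketch(pcp)				\
	do {								\
		unsigned long __flags;					\
		raw_local_irq_save(__flags);				\
		raw_cpu_add(pcp, 1);	/* plain load/add/store, now IRQ-safe */ \
		raw_local_irq_restore(__flags);				\
	} while (0)

On such platforms every percpu_down_read()/percpu_up_read() fast path
pays for the IRQ toggle, which is the potential regression noted above;
architectures with a single-instruction per-cpu increment (e.g. x86)
are unaffected.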

Fixes: 70fe2f48 ("aio: fix freeze protection of aio writes")
Signed-off-by: Hou Tao <houtao1@huawei.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200915140750.137881-1-houtao1@huawei.com
Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com>
Reviewed-by: Hou Tao <houtao1@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Parent aa531fe9
--- a/include/linux/percpu-rwsem.h
+++ b/include/linux/percpu-rwsem.h
@@ -44,7 +44,7 @@ static inline void percpu_down_read_preempt_disable(struct percpu_rw_semaphore *
 	 * and that one the synchronize_sched() is done, the writer will see
 	 * anything we did within this RCU-sched read-size critical section.
 	 */
-	__this_cpu_inc(*sem->read_count);
+	this_cpu_inc(*sem->read_count);
 	if (unlikely(!rcu_sync_is_idle(&sem->rss)))
 		__percpu_down_read(sem, false); /* Unconditional memory barrier */
 	barrier();
@@ -68,7 +68,7 @@ static inline int percpu_down_read_trylock(struct percpu_rw_semaphore *sem)
 	/*
 	 * Same as in percpu_down_read().
 	 */
-	__this_cpu_inc(*sem->read_count);
+	this_cpu_inc(*sem->read_count);
 	if (unlikely(!rcu_sync_is_idle(&sem->rss)))
 		ret = __percpu_down_read(sem, true); /* Unconditional memory barrier */
 	preempt_enable();
@@ -94,7 +94,7 @@ static inline void percpu_up_read_preempt_enable(struct percpu_rw_semaphore *sem
 	 * Same as in percpu_down_read().
 	 */
 	if (likely(rcu_sync_is_idle(&sem->rss)))
-		__this_cpu_dec(*sem->read_count);
+		this_cpu_dec(*sem->read_count);
 	else
 		__percpu_up_read(sem); /* Unconditional memory barrier */
 	preempt_enable();
--- a/kernel/locking/percpu-rwsem.c
+++ b/kernel/locking/percpu-rwsem.c
@@ -99,7 +99,7 @@ void __percpu_up_read(struct percpu_rw_semaphore *sem)
 	 * zero, as that is the only time it matters) they will also see our
 	 * critical section.
 	 */
-	__this_cpu_dec(*sem->read_count);
+	this_cpu_dec(*sem->read_count);
 	/* Prod writer to recheck readers_active */
 	rcuwait_wake_up(&sem->writer);