Commit 37e95624 authored by Jason Low, committed by Ingo Molnar

locking/rwsem: Allow conservative optimistic spinning when readers have lock

Commit 4fc828e2 ("locking/rwsem: Support optimistic spinning")
introduced a major performance regression for workloads such as
xfs_repair which mix read and write locking of the mmap_sem across
many threads. The result was xfs_repair ran 5x slower on 3.16-rc2
than on 3.15 and using 20x more system CPU time.

Perf profiles indicate in some workloads that significant time can
be spent spinning on !owner. This is because we don't set the lock
owner when reader(s) obtain the rwsem.

In this patch, we'll modify rwsem_can_spin_on_owner() such that we'll
return false if there is no lock owner. The rationale is that if we
just entered the slowpath, yet there is no lock owner, then there is
a possibility that a reader has the lock. To be conservative, we'll
avoid spinning in these situations.

This patch reduced the total run time of the xfs_repair workload from
about 4 minutes 24 seconds down to approximately 1 minute 26 seconds,
back to close to the same performance as on 3.15.

Retesting with AIM7, whose workloads were among those used to test the
original optimistic spinning code, confirmed that we still get big
performance gains with optimistic spinning, even with this additional
regression fix. Davidlohr found that while the 'custom' workload took
a performance hit of ~-14% to throughput for >300 users with this
additional patch, the overall gain with optimistic spinning is
still ~+45%. The 'disk' workload even improved by ~+15% at >1000 users.
Tested-by: Dave Chinner <dchinner@redhat.com>
Acked-by: Davidlohr Bueso <davidlohr@hp.com>
Signed-off-by: Jason Low <jason.low2@hp.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/1404532172.2572.30.camel@j-VirtualBox
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Parent 1795cd9b
@@ -285,10 +285,10 @@ static inline bool rwsem_try_write_lock_unqueued(struct rw_semaphore *sem)
 static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
 {
 	struct task_struct *owner;
-	bool on_cpu = true;
+	bool on_cpu = false;

 	if (need_resched())
-		return 0;
+		return false;

 	rcu_read_lock();
 	owner = ACCESS_ONCE(sem->owner);
@@ -297,9 +297,9 @@ static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
 	rcu_read_unlock();

 	/*
-	 * If sem->owner is not set, the rwsem owner may have
-	 * just acquired it and not set the owner yet or the rwsem
-	 * has been released.
+	 * If sem->owner is not set, yet we have just recently entered the
+	 * slowpath, then there is a possibility reader(s) may have the lock.
+	 * To be safe, avoid spinning in these situations.
 	 */
 	return on_cpu;
 }