1. 22 Apr, 2016 · 1 commit
    • locking/rwsem: Provide down_write_killable() · 916633a4
      Michal Hocko authored
      Now that all the architectures implement the necessary glue code,
      we can introduce down_write_killable(). The only difference w.r.t.
      regular down_write() is that the slow path waits in TASK_KILLABLE
      state and interruption by a fatal signal is reported as -EINTR to
      the caller (a usage sketch follows this entry).
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
      Cc: Signed-off-by: Jason Low <jason.low2@hp.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: linux-alpha@vger.kernel.org
      Cc: linux-arch@vger.kernel.org
      Cc: linux-ia64@vger.kernel.org
      Cc: linux-s390@vger.kernel.org
      Cc: linux-sh@vger.kernel.org
      Cc: linux-xtensa@linux-xtensa.org
      Cc: sparclinux@vger.kernel.org
      Link: http://lkml.kernel.org/r/1460041951-22347-12-git-send-email-mhocko@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
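      A minimal usage sketch (my_sem and do_exclusive_work() are
      illustrative names, not from the commit). The key point is that the
      return value must be checked: a non-zero result means a fatal signal
      arrived and the lock was NOT acquired.

      #include <linux/rwsem.h>

      static DECLARE_RWSEM(my_sem);

      static int do_exclusive_work(void)
      {
              if (down_write_killable(&my_sem))
                      return -EINTR;  /* killed while waiting; lock not held */

              /* ... exclusive critical section ... */

              up_write(&my_sem);
              return 0;
      }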
  2. 18 Feb, 2015 · 1 commit
    • locking/rwsem: Set lock ownership ASAP · 7a215f89
      Davidlohr Bueso authored
      In order to optimize the spinning step, we need to set the lock
      owner as soon as the lock is acquired, i.e. right after a successful
      counter cmpxchg operation. This is particularly useful because
      rwsems set the owner to nil for readers, so spinners have a greater
      chance of falling out of the spin. Currently we only set the owner
      much later in the game, at the more generic level -- latency can be
      especially bad when waiting for a node->next pointer while releasing
      the osq in up_write() calls.
      
      As such, update the owner inside rwsem_try_write_lock() (when the
      lock is obtained after blocking) and rwsem_try_write_lock_unqueued()
      (when the lock is obtained while spinning). This requires creating
      a new internal rwsem.h header to share the owner-related calls
      (a simplified toy model follows this entry).
      
      Also clean up some headers for mutex and rwsem.
      Suggested-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Jason Low <jason.low2@hp.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Link: http://lkml.kernel.org/r/1422609267-15102-4-git-send-email-dave@stgolabs.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
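      A simplified toy model of the idea (struct toy_rwsem, struct task
      and toy_try_write_lock() are invented for illustration; the kernel's
      real rwsem internals differ): publish the owner immediately after
      the counter cmpxchg succeeds, rather than later in the generic layer.

      #include <stdatomic.h>
      #include <stdbool.h>

      struct task;                            /* stand-in for task_struct */

      struct toy_rwsem {
              atomic_long count;              /* 0 = unlocked, 1 = write-locked */
              _Atomic(struct task *) owner;   /* polled by optimistic spinners */
      };

      static bool toy_try_write_lock(struct toy_rwsem *sem, struct task *me)
      {
              long unlocked = 0;

              if (atomic_compare_exchange_strong(&sem->count, &unlocked, 1)) {
                      /* Set ownership ASAP: without this, spinners could
                       * see a locked rwsem with a nil owner, mistake it
                       * for a reader-held lock, and stop spinning
                       * prematurely. */
                      atomic_store(&sem->owner, me);
                      return true;
              }
              return false;
      }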
  3. 16 Jul, 2014 · 1 commit
  4. 05 Jun, 2014 · 1 commit
    • locking/rwsem: Support optimistic spinning · 4fc828e2
      Davidlohr Bueso authored
      We have reached the point where our mutexes are quite finely tuned
      for a number of situations. This includes the use of heuristics
      and optimistic spinning, based on MCS locking techniques.
      
      Exclusive ownership of a read-write semaphore is, conceptually,
      just about the same as a mutex, making them close cousins. To
      this end we need to make them both perform similarly, and
      right now rwsems are simply not up to it. This was discovered
      both by reverting commit 4fc3f1d6 (mm/rmap, migration: Make
      rmap_walk_anon() and try_to_unmap_anon() more scalable) and,
      similarly, by converting some other mutexes (e.g. i_mmap_mutex) to
      rwsems. This creates a situation where users have to choose
      between an rwsem and a mutex taking this important performance
      difference into account. Specifically, the biggest difference
      between the two locks is that when we fail to acquire a mutex in
      the fastpath, optimistic spinning comes into play and we can avoid
      a large amount of unnecessary sleeping and the overhead of moving
      tasks in and out of the wait queue. Rwsems do not have such logic.
      
      This patch, based on work from Tim Chen and myself, adds support
      for write-side optimistic spinning when the lock is contended.
      It also includes support for the recently added cancelable MCS
      locking for adaptive spinning. Note that this is only applicable
      to the xadd method; the spinlock-based rwsem variant remains intact.
      
      Allowing optimistic spinning before putting the writer on the wait
      queue reduces wait-queue contention and provides a greater chance
      for the rwsem to be acquired. With these changes, rwsem is on par
      with mutex (a simplified sketch of the spin loop follows this
      entry). The performance benefits can be seen on a number of
      workloads. For instance, on an 8-socket, 80-core, 64-bit Westmere
      box, aim7 shows the following improvements in throughput:
      
       +--------------+---------------------+-----------------+
       |   Workload   | throughput-increase | number of users |
       +--------------+---------------------+-----------------+
       | alltests     | 20%                 | >1000           |
       | custom       | 27%, 60%            | 10-100, >1000   |
       | high_systime | 36%, 30%            | >100, >1000     |
       | shared       | 58%, 29%            | 10-100, >1000   |
       +--------------+---------------------+-----------------+
      
      There was also improvement on smaller systems, such as a quad-core
      x86-64 laptop running a 30GB PostgreSQL (pgbench) workload, with up
      to +60% in throughput for over 50 clients. Additionally, benefits
      were also noticed in exim (mail server) workloads. Furthermore, no
      performance regressions have been seen at all.
      
      Based-on-work-from: Tim Chen <tim.c.chen@linux.intel.com>
      Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
      [peterz: rej fixup due to comment patches, sched/rt.h header]
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Alex Shi <alex.shi@linaro.org>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Peter Hurley <peter@hurleysoftware.com>
      Cc: "Paul E.McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Jason Low <jason.low2@hp.com>
      Cc: Aswin Chandramouleeswaran <aswin@hp.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: "Scott J Norton" <scott.norton@hp.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Chris Mason <clm@fb.com>
      Cc: Josef Bacik <jbacik@fusionio.com>
      Link: http://lkml.kernel.org/r/1399055055.6275.15.camel@buesod1.americas.hpqcorp.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
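      A simplified sketch of the write-side spin loop (again using the
      invented toy_rwsem model; task_on_cpu() and cpu_relax_hint() are
      hypothetical stand-ins for the kernel's owner-running check and
      cpu_relax()): spin while a running writer owns the lock, and fall
      back to the wait queue otherwise.

      #include <stdatomic.h>
      #include <stdbool.h>
      #include <stddef.h>

      struct task;                            /* stand-in for task_struct */

      struct toy_rwsem {
              atomic_long count;              /* 0 = unlocked, 1 = write-locked */
              _Atomic(struct task *) owner;   /* NULL when free or reader-held */
      };

      /* Stubs for illustration only. */
      static bool task_on_cpu(struct task *t) { (void)t; return true; }
      static void cpu_relax_hint(void) { /* e.g. PAUSE on x86 */ }

      static bool toy_rwsem_spin_for_write(struct toy_rwsem *sem,
                                           struct task *me)
      {
              struct task *owner;

              /* Spin only while a running writer holds the lock: a running
               * owner is likely to release soon. A locked rwsem with a
               * NULL owner is reader-held, so spinning on it is pointless. */
              while ((owner = atomic_load(&sem->owner)) != NULL) {
                      if (!task_on_cpu(owner))
                              return false;   /* owner preempted: go queue */
                      cpu_relax_hint();
              }

              /* Owner cleared; one last trylock before queueing. */
              long unlocked = 0;
              if (atomic_compare_exchange_strong(&sem->count, &unlocked, 1)) {
                      atomic_store(&sem->owner, me);
                      return true;
              }
              return false;
      }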
  5. 06 Nov, 2013 · 1 commit
  6. 24 Mar, 2013 · 1 commit
  7. 12 Jan, 2013 · 1 commit
  8. 29 Mar, 2012 · 1 commit
  9. 31 Oct, 2011 · 1 commit
  10. 27 Jul, 2011 · 1 commit
  11. 21 Jul, 2011 · 1 commit
  12. 18 Dec, 2007 · 1 commit
  13. 20 Jul, 2007 · 1 commit
  14. 09 May, 2007 · 1 commit
  15. 04 Jul, 2006 · 2 commits