1. 03 June 2016 (2 commits)
    • locking/rwsem: Rework zeroing reader waiter->task · e3851390
      Committed by Davidlohr Bueso
      Readers that are awoken will expect a nil ->task indicating
      that a wakeup has occurred. Because of the way readers are
      implemented, there's a small chance that the waiter will never
      block in the slowpath (rwsem_down_read_failed), so some form of
      reference counting is required to avoid the following scenario:
      
      rwsem_down_read_failed()		rwsem_wake()
        get_task_struct();
        spin_lock_irq(&wait_lock);
        list_add_tail(&waiter.list)
        spin_unlock_irq(&wait_lock);
      					  raw_spin_lock_irqsave(&wait_lock)
      					  __rwsem_do_wake()
        while (1) {
          set_task_state(TASK_UNINTERRUPTIBLE);
      					    waiter->task = NULL
          if (!waiter.task) // true
            break;
          schedule() // never reached
      
         __set_task_state(TASK_RUNNING);
       do_exit();
      					    wake_up_process(tsk); // boom
      
      ... and therefore race with do_exit() when the caller returns.
      
      There is also a mismatch between the smp_mb() and its documentation,
      in that the serialization is done between reading the task and the
      nil store. Furthermore, in addition to having the overlapping of
      loads and stores to waiter->task guaranteed to be ordered within
      that CPU, both wake_up_process() originally and now wake_q_add()
      already imply barriers upon successful calls, which serves the
      comment.
      
      Now, as an alternative to perhaps inverting the checks in the blocker
      side (which has its own penalty in that schedule is unavoidable),
      with lockless wakeups this situation is naturally addressed and we
      can just use the refcount held by wake_q_add(), instead of doing so
      explicitly. Of course, we must guarantee that the nil store is done
      as the _last_ operation in that the task must already be marked for
      deletion to not fall into the race above. Spurious wakeups are also
      handled transparently in that the task's reference is only removed
      when wake_up_q() is actually called _after_ the nil store.
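      
      A minimal sketch of that waker-side ordering (based on the wake_q
      API of that era; a condensed illustration, not the verbatim patch):
      
        struct task_struct *tsk = waiter->task;
        
        /* Grab a reference on tsk and queue it for a deferred wakeup. */
        wake_q_add(wake_q, tsk);
        
        /*
         * The nil store must be the _last_ operation: once the waiter
         * observes ->task == NULL it may return, take the lock and even
         * exit. The reference held by wake_q_add() keeps the eventual
         * wake_up_q() from racing with do_exit().
         */
        smp_store_release(&waiter->task, NULL);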
      Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Waiman.Long@hpe.com
      Cc: dave@stgolabs.net
      Cc: jason.low2@hp.com
      Cc: peter@hurleysoftware.com
      Link: http://lkml.kernel.org/r/1463165787-25937-3-git-send-email-dave@stgolabs.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      e3851390
    • locking/rwsem: Enable lockless waiter wakeup(s) · 133e89ef
      Committed by Davidlohr Bueso
      As wake_qs gain users, we can teach rwsems about them such that
      waiters can be awoken without the wait_lock. This applies to both
      readers and writers, the former being the ideal candidate as we can
      batch the wakeups, shortening the critical region that much more --
      e.g. a writer task blocking a bunch of tasks waiting to service
      page faults (mmap_sem readers).
      
      In general applying wake_qs to rwsem (xadd) is not difficult as
      the wait_lock is intended to be released soon _anyways_, with
      the exception being when the writer slowpath proactively wakes up
      any queued readers if it sees that the lock is owned by a reader,
      in which case we simply do the wakeups with the lock held (see the
      comment in __rwsem_down_write_failed_common()).
      
      Similar to other locking primitives, delaying the waiter being
      awoken does allow, at least in theory, the lock to be stolen in
      the case of writers; however, no harm was seen in this (in fact
      lock stealing tends to be a _good_ thing in most workloads), and
      this is a tiny window anyways.
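      
      The general shape of the change, sketched below for rwsem_wake()
      (a simplification; the actual waiter selection happens inside
      __rwsem_do_wake(), and names follow the wake_q API of that era):
      
        struct rw_semaphore *rwsem_wake(struct rw_semaphore *sem)
        {
        	unsigned long flags;
        	WAKE_Q(wake_q);		/* on-stack wake queue */
        
        	raw_spin_lock_irqsave(&sem->wait_lock, flags);
        	if (!list_empty(&sem->wait_list))
        		sem = __rwsem_do_wake(sem, RWSEM_WAKE_ANY, &wake_q);
        	raw_spin_unlock_irqrestore(&sem->wait_lock, flags);
        
        	/* Do the actual wakeups only after dropping the wait_lock. */
        	wake_up_q(&wake_q);
        
        	return sem;
        }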
      
      Some page-fault (pft) and mmap_sem intensive benchmarks show some
      pretty constant reduction in systime (by up to ~8 and ~10%) on a
      2-socket, 12-core AMD box. In addition, results on an 8-core
      Westmere doing page allocations (page_test) are shown below.
      
      aim9:
      	 4.6-rc6				4.6-rc6
      						rwsemv2
      Min      page_test   378167.89 (  0.00%)   382613.33 (  1.18%)
      Min      exec_test      499.00 (  0.00%)      502.67 (  0.74%)
      Min      fork_test     3395.47 (  0.00%)     3537.64 (  4.19%)
      Hmean    page_test   395433.06 (  0.00%)   414693.68 (  4.87%)
      Hmean    exec_test      499.67 (  0.00%)      505.30 (  1.13%)
      Hmean    fork_test     3504.22 (  0.00%)     3594.95 (  2.59%)
      Stddev   page_test    17426.57 (  0.00%)    26649.92 (-52.93%)
      Stddev   exec_test        0.47 (  0.00%)        1.41 (-199.05%)
      Stddev   fork_test       63.74 (  0.00%)       32.59 ( 48.86%)
      Max      page_test   429873.33 (  0.00%)   456960.00 (  6.30%)
      Max      exec_test      500.33 (  0.00%)      507.66 (  1.47%)
      Max      fork_test     3653.33 (  0.00%)     3650.90 ( -0.07%)
      
      	     4.6-rc6     4.6-rc6
      			 rwsemv2
      User            1.12        0.04
      System          0.23        0.04
      Elapsed       727.27      721.98
      Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Waiman.Long@hpe.com
      Cc: dave@stgolabs.net
      Cc: jason.low2@hp.com
      Cc: peter@hurleysoftware.com
      Link: http://lkml.kernel.org/r/1463165787-25937-2-git-send-email-dave@stgolabs.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      133e89ef
  2. 16 May 2016 (1 commit)
    • locking/rwsem: Fix down_write_killable() · 04cafed7
      Committed by Peter Zijlstra
      The new signal_pending exit path in __rwsem_down_write_failed_common()
      was reported by Tetsuo Handa as breaking his kernel.
      
      Upon inspection it was found that there are two things wrong with it:
      
       - it forgets to remove WAITING_BIAS if it leaves the list empty, or
       - it forgets to wake further waiters that were blocked on the now
         removed waiter.
      
      Especially the first issue causes new lock attempts to block and stall
      indefinitely, as the code assumes that pending waiters mean there is
      an owner that will wake when it releases the lock.
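      
      The fix therefore makes the signal_pending exit path clean up the
      wait list properly; roughly (a condensed sketch of the out_nolock
      path, not the exact diff):
      
        out_nolock:
        	__set_current_state(TASK_RUNNING);
        	raw_spin_lock_irq(&sem->wait_lock);
        	list_del(&waiter.list);
        	if (list_empty(&sem->wait_list))
        		/* We were the last waiter: drop the waiting bias. */
        		rwsem_atomic_update(-RWSEM_WAITING_BIAS, sem);
        	else
        		/* Wake whoever was queued behind the removed waiter. */
        		__rwsem_do_wake(sem, RWSEM_WAKE_ANY);
        	raw_spin_unlock_irq(&sem->wait_lock);
        
        	return ERR_PTR(-EINTR);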
      Reported-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Tested-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Tested-by: Michal Hocko <mhocko@kernel.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: Waiman Long <Waiman.Long@hpe.com>
      Link: http://lkml.kernel.org/r/20160512115745.GP3192@twins.programming.kicks-ass.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      04cafed7
  3. 13 April 2016 (1 commit)
    • locking/rwsem: Introduce basis for down_write_killable() · d4799608
      Committed by Michal Hocko
      Introduce a generic implementation necessary for down_write_killable().
      
      This is a trivial extension of the already existing down_write() call,
      adding a variant that can be interrupted by SIGKILL. This patch doesn't
      provide down_write_killable() yet because architectures have to provide
      the necessary pieces first.
      
      rwsem_down_write_failed() which is a generic slow path for the
      write lock is extended to take a task state and renamed to
      __rwsem_down_write_failed_common(). The return value is either a valid
      semaphore pointer or ERR_PTR(-EINTR).
      
      rwsem_down_write_failed_killable() is exported as a new way to wait for
      the lock and be killable.
      
      For the rwsem-spinlock implementation, the current __down_write() is
      updated in a similar way to __rwsem_down_write_failed_common(), except
      it doesn't need new exports, just a visible __down_write_killable().
      
      Architectures which are not using the generic rwsem implementation are
      supposed to provide their __down_write_killable() implementation and
      use rwsem_down_write_failed_killable() for the slow path.
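      
      The resulting split is roughly the following (a sketch of the xadd
      slow-path entry points as this patch shapes them):
      
        __visible struct rw_semaphore * __sched
        rwsem_down_write_failed(struct rw_semaphore *sem)
        {
        	return __rwsem_down_write_failed_common(sem, TASK_UNINTERRUPTIBLE);
        }
        EXPORT_SYMBOL(rwsem_down_write_failed);
        
        __visible struct rw_semaphore * __sched
        rwsem_down_write_failed_killable(struct rw_semaphore *sem)
        {
        	return __rwsem_down_write_failed_common(sem, TASK_KILLABLE);
        }
        EXPORT_SYMBOL(rwsem_down_write_failed_killable);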
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
      Cc: Signed-off-by: Jason Low <jason.low2@hp.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: linux-alpha@vger.kernel.org
      Cc: linux-arch@vger.kernel.org
      Cc: linux-ia64@vger.kernel.org
      Cc: linux-s390@vger.kernel.org
      Cc: linux-sh@vger.kernel.org
      Cc: linux-xtensa@linux-xtensa.org
      Cc: sparclinux@vger.kernel.org
      Link: http://lkml.kernel.org/r/1460041951-22347-7-git-send-email-mhocko@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d4799608
  4. 06 October 2015 (1 commit)
  5. 08 May 2015 (1 commit)
  6. 07 March 2015 (1 commit)
    • locking/rwsem: Fix lock optimistic spinning when owner is not running · 9198f6ed
      Committed by Jason Low
      Ming reported soft lockups occurring when running xfstest due to
      the following tip:locking/core commit:
      
        b3fd4f03 ("locking/rwsem: Avoid deceiving lock spinners")
      
      When doing optimistic spinning in rwsem, threads should stop
      spinning when the lock owner is not running. While a thread is
      spinning on owner, if the owner reschedules, owner->on_cpu
      returns false and we stop spinning.
      
      However, this commit essentially caused the check to get
      ignored because when we break out of the spin loop due to
      !on_cpu, we continue spinning if sem->owner != NULL.
      
      This patch fixes this by making sure we stop spinning if the
      owner is not running. Furthermore, just like with mutexes,
      refactor the code such that we don't have separate checks for
      owner_running(). This makes it more straightforward in terms of why
      we exit the spin-on-owner loop, and also avoids needing to "guess"
      why we broke out of the loop, which makes the code more readable.
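      
      After the change, the spin-on-owner loop bails out for either reason
      in one place, along these lines (a condensed sketch of the reworked
      helper; not the exact diff):
      
        static noinline
        bool rwsem_spin_on_owner(struct rw_semaphore *sem, struct task_struct *owner)
        {
        	long count;
        
        	rcu_read_lock();
        	while (sem->owner == owner) {
        		/* Re-check ->on_cpu only after sem->owner is confirmed. */
        		barrier();
        
        		/* Stop spinning on need_resched() or a not-running owner. */
        		if (!owner->on_cpu || need_resched()) {
        			rcu_read_unlock();
        			return false;
        		}
        
        		cpu_relax_lowlatency();
        	}
        	rcu_read_unlock();
        
        	if (READ_ONCE(sem->owner))
        		return true;	/* new owner, keep spinning */
        
        	/*
        	 * No owner set: the lock may be free or held by readers.
        	 * Check the counter before deciding to spin some more.
        	 */
        	count = READ_ONCE(sem->count);
        	return (count == 0 || count == RWSEM_WAITING_BIAS);
        }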
      Reported-and-tested-by: Ming Lei <ming.lei@canonical.com>
      Signed-off-by: Jason Low <jason.low2@hp.com>
      Acked-by: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Dave Jones <davej@codemonkey.org.uk>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Link: http://lkml.kernel.org/r/1425714331.2475.388.camel@j-VirtualBox
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      9198f6ed
  7. 24 February 2015 (1 commit)
  8. 18 February 2015 (4 commits)
  9. 04 February 2015 (1 commit)
  10. 03 October 2014 (1 commit)
  11. 16 September 2014 (1 commit)
  12. 17 July 2014 (1 commit)
    • arch, locking: Ciao arch_mutex_cpu_relax() · 3a6bfbc9
      Committed by Davidlohr Bueso
      The arch_mutex_cpu_relax() function, introduced by 34b133f8, is
      hacky and ugly. It was added a few years ago to address the fact
      that common cpu_relax() calls include yielding on s390, and thus
      impact the optimistic spinning functionality of mutexes. Nowadays
      we use this function well beyond mutexes: rwsem, qrwlock, mcs and
      lockref. Since the macro that defines the call is in the mutex header,
      any users must include mutex.h and the naming is misleading as well.
      
      This patch (i) renames the call to cpu_relax_lowlatency  ("relax, but
      only if you can do it with very low latency") and (ii) defines it in
      each arch's asm/processor.h local header, just like for regular cpu_relax
      functions. On all archs, except s390, cpu_relax_lowlatency is simply cpu_relax,
      and thus we can take it out of mutex.h. While this can seem redundant,
      I believe it is a good choice as it allows us to move out arch specific
      logic from generic locking primitives and enables future(?) archs to
      transparently define it, similarly to System Z.
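      
      Concretely, each architecture ends up with a one-line define in its
      asm/processor.h; along the lines of (s390 being the notable exception,
      where a plain compiler barrier avoids the yielding cpu_relax()):
      
        /* Most architectures (asm/processor.h): */
        #define cpu_relax_lowlatency()	cpu_relax()
        
        /* s390, whose cpu_relax() may yield to the hypervisor: */
        #define cpu_relax_lowlatency()	barrier()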
      Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Anton Blanchard <anton@samba.org>
      Cc: Aurelien Jacquiot <a-jacquiot@ti.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Bharat Bhushan <r65777@freescale.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chen Liqin <liqin.linux@gmail.com>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: David Howells <dhowells@redhat.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
      Cc: Dominik Dingel <dingel@linux.vnet.ibm.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
      Cc: Haavard Skinnemoen <hskinnemoen@gmail.com>
      Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Hirokazu Takata <takata@linux-m32r.org>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: James E.J. Bottomley <jejb@parisc-linux.org>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: Jason Wang <jasowang@redhat.com>
      Cc: Jesper Nilsson <jesper.nilsson@axis.com>
      Cc: Joe Perches <joe@perches.com>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: Joseph Myers <joseph@codesourcery.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Koichi Yasutake <yasutake.koichi@jp.panasonic.com>
      Cc: Lennox Wu <lennox.wu@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Neuling <mikey@neuling.org>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Mikael Starvik <starvik@axis.com>
      Cc: Nicolas Pitre <nico@linaro.org>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Paul Burton <paul.burton@imgtec.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Qais Yousef <qais.yousef@imgtec.com>
      Cc: Qiaowei Ren <qiaowei.ren@intel.com>
      Cc: Rafael Wysocki <rafael.j.wysocki@intel.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Steven Miao <realmz6@gmail.com>
      Cc: Steven Rostedt <srostedt@redhat.com>
      Cc: Stratos Karafotis <stratosk@semaphore.gr>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vasily Kulikov <segoon@openwall.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Vineet Gupta <Vineet.Gupta1@synopsys.com>
      Cc: Waiman Long <Waiman.Long@hp.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Wolfram Sang <wsa@the-dreams.de>
      Cc: adi-buildroot-devel@lists.sourceforge.net
      Cc: linux390@de.ibm.com
      Cc: linux-alpha@vger.kernel.org
      Cc: linux-am33-list@redhat.com
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: linux-c6x-dev@linux-c6x.org
      Cc: linux-cris-kernel@axis.com
      Cc: linux-hexagon@vger.kernel.org
      Cc: linux-ia64@vger.kernel.org
      Cc: linux@lists.openrisc.net
      Cc: linux-m32r-ja@ml.linux-m32r.org
      Cc: linux-m32r@ml.linux-m32r.org
      Cc: linux-m68k@lists.linux-m68k.org
      Cc: linux-metag@vger.kernel.org
      Cc: linux-mips@linux-mips.org
      Cc: linux-parisc@vger.kernel.org
      Cc: linuxppc-dev@lists.ozlabs.org
      Cc: linux-s390@vger.kernel.org
      Cc: linux-sh@vger.kernel.org
      Cc: linux-xtensa@linux-xtensa.org
      Cc: sparclinux@vger.kernel.org
      Link: http://lkml.kernel.org/r/1404079773.2619.4.camel@buesod1.americas.hpqcorp.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      3a6bfbc9
  13. 16 July 2014 (4 commits)
    • locking/rwsem: Add CONFIG_RWSEM_SPIN_ON_OWNER · 5db6c6fe
      Committed by Davidlohr Bueso
      Just like with mutexes (CONFIG_MUTEX_SPIN_ON_OWNER),
      encapsulate the dependencies for rwsem optimistic spinning.
      No logical changes here as it continues to depend on both
      SMP and the XADD algorithm variant.
      Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
      Acked-by: Jason Low <jason.low2@hp.com>
      [ Also make it depend on ARCH_SUPPORTS_ATOMIC_RMW. ]
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/1405112406-13052-2-git-send-email-davidlohr@hp.com
      Cc: aswin@hp.com
      Cc: Chris Mason <clm@fb.com>
      Cc: Davidlohr Bueso <davidlohr@hp.com>
      Cc: Josef Bacik <jbacik@fusionio.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Waiman Long <Waiman.Long@hp.com>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      5db6c6fe
    • locking/spinlocks/mcs: Introduce and use init macro and function for osq locks · 4d9d951e
      Committed by Jason Low
      Currently, we initialize the osq lock by directly setting the lock's
      values. It would be preferable if we used an init macro to do the
      initialization, like we do with other locks.
      
      This patch introduces and uses a macro and function for initializing the osq lock.
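      
      A sketch of the introduced initializer pair (static and runtime),
      assuming the atomic_t-based optimistic_spin_queue described in the
      following entry:
      
        #define OSQ_LOCK_UNLOCKED	{ ATOMIC_INIT(OSQ_UNLOCKED_VAL) }
        
        static inline void osq_lock_init(struct optimistic_spin_queue *lock)
        {
        	atomic_set(&lock->tail, OSQ_UNLOCKED_VAL);
        }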
      Signed-off-by: Jason Low <jason.low2@hp.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Scott Norton <scott.norton@hp.com>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Waiman Long <waiman.long@hp.com>
      Cc: Davidlohr Bueso <davidlohr@hp.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Aswin Chandramouleeswaran <aswin@hp.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Chris Mason <clm@fb.com>
      Cc: Josef Bacik <jbacik@fusionio.com>
      Link: http://lkml.kernel.org/r/1405358872-3732-4-git-send-email-jason.low2@hp.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      4d9d951e
    • locking/spinlocks/mcs: Convert osq lock to atomic_t to reduce overhead · 90631822
      Committed by Jason Low
      The cancellable MCS spinlock is currently used to queue threads that are
      doing optimistic spinning. It uses per-cpu nodes, where a thread obtaining
      the lock would access and queue the local node corresponding to the CPU that
      it's running on. Currently, the cancellable MCS lock is implemented by using
      pointers to these nodes.
      
      In this patch, instead of operating on pointers to the per-cpu nodes, we
      store the CPU numbers to which the per-cpu nodes correspond in an atomic_t.
      A similar concept is used with the qspinlock.
      
      By operating on the CPU # of the nodes using atomic_t instead of pointers
      to those nodes, this can reduce the overhead of the cancellable MCS spinlock
      by 32 bits (on 64 bit systems).
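      
      A sketch of the resulting structure and CPU-number encoding (0 is
      reserved to mean "unlocked", so stored values are cpu_nr + 1):
      
        struct optimistic_spin_queue {
        	/*
        	 * Encoded CPU # of the tail node in the queue;
        	 * OSQ_UNLOCKED_VAL (0) means the queue is empty.
        	 */
        	atomic_t tail;
        };
        
        static inline int encode_cpu(int cpu_nr)
        {
        	return cpu_nr + 1;
        }
        
        static inline struct optimistic_spin_node *decode_cpu(int encoded_cpu_val)
        {
        	return per_cpu_ptr(&osq_node, encoded_cpu_val - 1);
        }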
      Signed-off-by: Jason Low <jason.low2@hp.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Scott Norton <scott.norton@hp.com>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Waiman Long <waiman.long@hp.com>
      Cc: Davidlohr Bueso <davidlohr@hp.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Aswin Chandramouleeswaran <aswin@hp.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Chris Mason <clm@fb.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Josef Bacik <jbacik@fusionio.com>
      Link: http://lkml.kernel.org/r/1405358872-3732-3-git-send-email-jason.low2@hp.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      90631822
    • locking/rwsem: Allow conservative optimistic spinning when readers have lock · 37e95624
      Committed by Jason Low
      Commit 4fc828e2 ("locking/rwsem: Support optimistic spinning")
      introduced a major performance regression for workloads such as
      xfs_repair which mix read and write locking of the mmap_sem across
      many threads. The result was xfs_repair ran 5x slower on 3.16-rc2
      than on 3.15 and using 20x more system CPU time.
      
      Perf profiles indicate in some workloads that significant time can
      be spent spinning on !owner. This is because we don't set the lock
      owner when reader(s) obtain the rwsem.
      
      In this patch, we'll modify rwsem_can_spin_on_owner() such that we'll
      return false if there is no lock owner. The rationale is that if we
      just entered the slowpath, yet there is no lock owner, then there is
      a possibility that a reader has the lock. To be conservative, we'll
      avoid spinning in these situations.
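      
      The conservative check amounts to something like the following (a
      sketch; the point is that it defaults to "don't spin" when no owner
      is recorded):
      
        static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
        {
        	struct task_struct *owner;
        	bool on_cpu = false;	/* default: a reader may hold the lock */
        
        	if (need_resched())
        		return false;
        
        	rcu_read_lock();
        	owner = ACCESS_ONCE(sem->owner);
        	if (owner)
        		on_cpu = owner->on_cpu;
        	rcu_read_unlock();
        
        	/*
        	 * No owner recorded right after entering the slowpath:
        	 * reader(s) may have the lock, so don't bother spinning.
        	 */
        	return on_cpu;
        }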
      
      This patch reduced the total run time of the xfs_repair workload from
      about 4 minutes 24 seconds down to approximately 1 minute 26 seconds,
      back to close to the same performance as on 3.15.
      
      Retesting of AIM7, which were some of the workloads used to test the
      original optimistic spinning code, confirmed that we still get big
      performance gains with optimistic spinning, even with this additional
      regression fix. Davidlohr found that while the 'custom' workload took
      a performance hit of ~-14% to throughput for >300 users with this
      additional patch, the overall gain with optimistic spinning is
      still ~+45%. The 'disk' workload even improved by ~+15% at >1000 users.
      Tested-by: Dave Chinner <dchinner@redhat.com>
      Acked-by: Davidlohr Bueso <davidlohr@hp.com>
      Signed-off-by: Jason Low <jason.low2@hp.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Link: http://lkml.kernel.org/r/1404532172.2572.30.camel@j-VirtualBox
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      37e95624
  14. 05 June 2014 (2 commits)
    • locking/rwsem: Fix checkpatch.pl warnings · 0cc3d011
      Committed by Andrew Morton
      WARNING: line over 80 characters
      #205: FILE: kernel/locking/rwsem-xadd.c:275:
      +		old = cmpxchg(&sem->count, count, count + RWSEM_ACTIVE_WRITE_BIAS);
      
      WARNING: line over 80 characters
      #376: FILE: kernel/locking/rwsem-xadd.c:434:
      +		 * If there were already threads queued before us and there are no
      
      WARNING: line over 80 characters
      #377: FILE: kernel/locking/rwsem-xadd.c:435:
      +		 * active writers, the lock must be read owned; so we try to wake
      
      total: 0 errors, 3 warnings, 417 lines checked
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Davidlohr Bueso <davidlohr@hp.com>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Link: http://lkml.kernel.org/n/tip-pn6pslaplw031lykweojsn8c@git.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      0cc3d011
    • locking/rwsem: Support optimistic spinning · 4fc828e2
      Committed by Davidlohr Bueso
      We have reached the point where our mutexes are quite fine tuned
      for a number of situations. This includes the use of heuristics
      and optimistic spinning, based on MCS locking techniques.
      
      Exclusive ownership of read-write semaphores is, conceptually,
      just about the same as mutexes, making them close cousins. To
      this end we need to make them both perform similarly, and
      right now, rwsems are simply not up to it. This was discovered
      by both reverting commit 4fc3f1d6 (mm/rmap, migration: Make
      rmap_walk_anon() and try_to_unmap_anon() more scalable) and
      similarly, converting some other mutexes (ie: i_mmap_mutex) to
      rwsems. This creates a situation where users have to choose
      between a rwsem and mutex taking into account this important
      performance difference. Specifically, the biggest difference between
      both locks is that, when we fail to acquire a mutex in the fastpath,
      optimistic spinning comes into play and we can avoid a large amount
      of unnecessary sleeping and the overhead of moving tasks in and out
      of the wait queue. Rwsems do not have such logic.
      
      This patch, based on the work from Tim Chen and myself, adds support
      for write-side optimistic spinning when the lock is contended. It
      also includes support for the recently added cancelable MCS locking
      for adaptive spinning. Note that this is only applicable to the xadd
      method; the spinlock rwsem variant remains intact.
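      
      The write-side spinning follows the mutex pattern; a condensed sketch
      of the idea (helper names follow the mutex/OSQ code of that time and
      are illustrative, not a verbatim copy of the patch):
      
        static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
        {
        	struct task_struct *owner;
        	bool taken = false;
        
        	preempt_disable();
        
        	/* sem->wait_lock must not be held while spinning. */
        	if (!rwsem_can_spin_on_owner(sem))
        		goto done;
        
        	if (!osq_lock(&sem->osq))
        		goto done;
        
        	while (true) {
        		owner = ACCESS_ONCE(sem->owner);
        		if (owner && !rwsem_spin_on_owner(sem, owner))
        			break;
        
        		/* Try to acquire the write lock without queueing. */
        		if (rwsem_try_write_lock_unqueued(sem)) {
        			taken = true;
        			break;
        		}
        
        		/* No owner and we should reschedule: stop wasting cycles. */
        		if (!owner && (need_resched() || rt_task(current)))
        			break;
        
        		arch_mutex_cpu_relax();
        	}
        	osq_unlock(&sem->osq);
        done:
        	preempt_enable();
        	return taken;
        }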
      
      Allowing optimistic spinning before putting the writer on the wait
      queue reduces wait queue contention and provides a greater chance
      for the rwsem to get acquired. With these changes, rwsem is on par
      with mutex. The performance benefits can be seen on a number of
      workloads. For instance, on an 8-socket, 80-core 64-bit Westmere box,
      aim7 shows the following improvements in throughput:
      
       +--------------+---------------------+-----------------+
       |   Workload   | throughput-increase | number of users |
       +--------------+---------------------+-----------------+
       | alltests     | 20%                 | >1000           |
       | custom       | 27%, 60%            | 10-100, >1000   |
       | high_systime | 36%, 30%            | >100, >1000     |
       | shared       | 58%, 29%            | 10-100, >1000   |
       +--------------+---------------------+-----------------+
      
      There was also improvement on smaller systems, such as a quad-core
      x86-64 laptop running a 30Gb PostgreSQL (pgbench) workload for up
      to +60% in throughput for over 50 clients. Additionally, benefits
      were also noticed in exim (mail server) workloads. Furthermore, no
      performance regressions have been seen at all.
      
      Based-on-work-from: Tim Chen <tim.c.chen@linux.intel.com>
      Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
      [peterz: rej fixup due to comment patches, sched/rt.h header]
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Alex Shi <alex.shi@linaro.org>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Peter Hurley <peter@hurleysoftware.com>
      Cc: "Paul E.McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Jason Low <jason.low2@hp.com>
      Cc: Aswin Chandramouleeswaran <aswin@hp.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: "Scott J Norton" <scott.norton@hp.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Chris Mason <clm@fb.com>
      Cc: Josef Bacik <jbacik@fusionio.com>
      Link: http://lkml.kernel.org/r/1399055055.6275.15.camel@buesod1.americas.hpqcorp.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      4fc828e2
  15. 05 May 2014 (1 commit)
    • rwsem: Add comments to explain the meaning of the rwsem's count field · 3cf2f34e
      Committed by Tim Chen
      It took me quite a while to understand how rwsem's count field
      manifested itself in different scenarios.
      
      Add comments to provide a quick reference to the rwsem's count
      field for each scenario where readers and writers are contending
      for the lock.
      
      Hopefully it will be useful for future maintenance of the code and
      for people to get up to speed on how the logic in the code works.
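      
      For reference, the xadd variant's count encoding that these comments
      describe looks roughly like this (32-bit values shown; 64-bit kernels
      use wider masks):
      
        #define RWSEM_UNLOCKED_VALUE	0x00000000	/* count == 0: lock is free   */
        #define RWSEM_ACTIVE_BIAS	0x00000001	/* per active reader/writer   */
        #define RWSEM_ACTIVE_MASK	0x0000ffff
        #define RWSEM_WAITING_BIAS	(-0x00010000)	/* at least one queued waiter */
        #define RWSEM_ACTIVE_READ_BIAS	RWSEM_ACTIVE_BIAS
        #define RWSEM_ACTIVE_WRITE_BIAS	(RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS)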
      Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Davidlohr Bueso <davidlohr@hp.com>
      Cc: Alex Shi <alex.shi@linaro.org>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Peter Hurley <peter@hurleysoftware.com>
      Cc: Paul E.McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Jason Low <jason.low2@hp.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Link: http://lkml.kernel.org/r/1399060437.2970.146.camel@schen9-DESK
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      3cf2f34e
  16. 14 February 2014 (1 commit)
  17. 06 November 2013 (1 commit)
  18. 08 May 2013 (1 commit)
    • rwsem: check counter to avoid cmpxchg calls · 9607a85b
      Committed by Davidlohr Bueso
      This patch tries to reduce the amount of cmpxchg calls in the writer
      failed path by checking the counter value first before issuing the
      instruction.  If ->count is not set to RWSEM_WAITING_BIAS then there is
      no point wasting a cmpxchg call.
      
      Furthermore, Michel states "I suppose it helps due to the case where
      someone else steals the lock while we're trying to acquire
      sem->wait_lock."
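      
      The change itself is small; roughly (a sketch of the check in the
      writer slowpath's acquisition loop):
      
        	/*
        	 * Only the RWSEM_WAITING_BIAS -> RWSEM_ACTIVE_WRITE_BIAS
        	 * transition can succeed, so skip the cmpxchg entirely
        	 * unless the counter already has the expected value.
        	 */
        	if (count == RWSEM_WAITING_BIAS &&
        	    cmpxchg(&sem->count, RWSEM_WAITING_BIAS,
        		    RWSEM_ACTIVE_WRITE_BIAS) == RWSEM_WAITING_BIAS)
        		break;	/* got the write lock */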
      
      Two very different workloads and machines were used to see how this
      patch improves throughput: pgbench on a quad-core laptop and aim7 on a
      large 8 socket box with 80 cores.
      
      Some results comparing Michel's fast-path write lock stealing
      (tps-rwsem) on a quad-core laptop running pgbench:
      
        | db_size | clients  |  tps-rwsem     |   tps-patch  |
        +---------+----------+----------------+--------------+
        | 160 MB   |       1 |           6906 |         9153 | + 32.5%
        | 160 MB   |       2 |          15931 |        22487 | + 41.1%
        | 160 MB   |       4 |          33021 |        32503 |
        | 160 MB   |       8 |          34626 |        34695 |
        | 160 MB   |      16 |          33098 |        34003 |
        | 160 MB   |      20 |          31343 |        31440 |
        | 160 MB   |      30 |          28961 |        28987 |
        | 160 MB   |      40 |          26902 |        26970 |
        | 160 MB   |      50 |          25760 |        25810 |
        ------------------------------------------------------
        | 1.6 GB   |       1 |           7729 |         7537 |
        | 1.6 GB   |       2 |          19009 |        23508 | + 23.7%
        | 1.6 GB   |       4 |          33185 |        32666 |
        | 1.6 GB   |       8 |          34550 |        34318 |
        | 1.6 GB   |      16 |          33079 |        32689 |
        | 1.6 GB   |      20 |          31494 |        31702 |
        | 1.6 GB   |      30 |          28535 |        28755 |
        | 1.6 GB   |      40 |          27054 |        27017 |
        | 1.6 GB   |      50 |          25591 |        25560 |
        ------------------------------------------------------
        | 7.6 GB   |       1 |           6224 |         7469 | + 20.0%
        | 7.6 GB   |       2 |          13611 |        12778 |
        | 7.6 GB   |       4 |          33108 |        32927 |
        | 7.6 GB   |       8 |          34712 |        34878 |
        | 7.6 GB   |      16 |          32895 |        33003 |
        | 7.6 GB   |      20 |          31689 |        31974 |
        | 7.6 GB   |      30 |          29003 |        28806 |
        | 7.6 GB   |      40 |          26683 |        26976 |
        | 7.6 GB   |      50 |          25925 |        25652 |
        ------------------------------------------------------
      
      For the aim7 workloads, they overall improved on top of Michel's
      patchset.  For full graphs on how the rwsem series plus this patch
      behaves on a large 8 socket machine against a vanilla kernel:
      
        http://stgolabs.net/rwsem-aim7-results.tar.gz
      Signed-off-by: Davidlohr Bueso <davidlohr.bueso@hp.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9607a85b
  19. 07 May 2013 (13 commits)
  20. 19 February 2013 (1 commit)
    • rwsem: Implement writer lock-stealing for better scalability · ce6711f3
      Committed by Alex Shi
      Commit 5a505085 ("mm/rmap: Convert the struct anon_vma::mutex
      to an rwsem") changed struct anon_vma::mutex to an rwsem, which
      caused aim7 fork_test performance to drop by 50%.
      
      Yuanhan Liu did the following excellent analysis:
      
          https://lkml.org/lkml/2013/1/29/84
      
      and found that the regression is caused by strict, serialized,
      FIFO sequential write-ownership of rwsems. Ingo suggested
      implementing opportunistic lock-stealing for the front writer
      task in the waitqueue.
      
      Yuanhan Liu implemented lock-stealing for spinlock-rwsems,
      which indeed recovered much of the regression - confirming
      the analysis that the main factor in the regression was the
      FIFO writer-fairness of rwsems.
      
      In this patch we allow lock-stealing to happen when the first
      waiter is also a writer. With that change in place the
      aim7 fork_test performance is fully recovered on my
      Intel NHM EP, NHM EX, SNB EP 2S and 4S test-machines.
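      
      A condensed, hypothetical sketch of the steal attempt (the helper
      name and waiter fields are illustrative of the 3.9-era xadd code,
      not a verbatim copy; on success the caller dequeues itself and
      proceeds with the lock held):
      
        /* Called with sem->wait_lock held. */
        static int try_get_writer_sem(struct rw_semaphore *sem,
        			      struct rwsem_waiter *waiter)
        {
        	struct rwsem_waiter *first;
        	long adjustment = RWSEM_ACTIVE_WRITE_BIAS;
        
        	/* Only steal when the first queued waiter is a writer. */
        	first = list_entry(sem->wait_list.next, struct rwsem_waiter, list);
        	if (!(first->flags & RWSEM_WAITING_FOR_WRITE))
        		return 0;
        
        	/* A lone waiter also takes the waiting bias away with it. */
        	if (first == waiter && waiter->list.next == &sem->wait_list)
        		adjustment -= RWSEM_WAITING_BIAS;
        
        	/* Claim the write bias; succeed only if nobody was active. */
        	if (!((rwsem_atomic_update(adjustment, sem) - adjustment) &
        	      RWSEM_ACTIVE_MASK))
        		return 1;
        
        	/* Someone was already active: back out and give up the steal. */
        	rwsem_atomic_update(-adjustment, sem);
        	return 0;
        }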
      
      Reported-by: lkp@linux.intel.com
      Reported-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
      Signed-off-by: Alex Shi <alex.shi@intel.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Anton Blanchard <anton@samba.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: paul.gortmaker@windriver.com
      Link: https://lkml.org/lkml/2013/1/29/84
      Link: http://lkml.kernel.org/r/1360069915-31619-1-git-send-email-alex.shi@intel.com
      [ Small stylistic fixes, updated changelog. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      ce6711f3