1. 24 Apr 2011, 1 commit
  2. 14 Apr 2011, 1 commit
  3. 31 Mar 2011, 1 commit
  4. 05 Jan 2011, 1 commit
    • [S390] mutex: Introduce arch_mutex_cpu_relax() · 34b133f8
      Committed by Gerald Schaefer
      The spinning mutex implementation uses cpu_relax() in busy loops as a
      compiler barrier. Depending on the architecture, cpu_relax() may do more
      than is needed in these specific mutex spin loops. On System z we also
      give up the time slice of the virtual cpu in cpu_relax(), which prevents
      effective spinning on the mutex.
      
      This patch replaces cpu_relax() in the spinning mutex code with
      arch_mutex_cpu_relax(), which can be defined by each architecture that
      selects HAVE_ARCH_MUTEX_CPU_RELAX. The default is still cpu_relax(), so
      for now this patch should not affect any architecture other than System z.
      Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1290437256.7455.4.camel@thinkpad>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
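      For illustration, a minimal sketch of the hook this commit (and its
      identical twin 335d7afb below) describes: a per-architecture override
      that falls back to cpu_relax(). The exact placement and the
      lock_is_busy() helper are assumptions, not the real kernel hunk:

          /* default: plain cpu_relax(), unless the architecture selects
           * HAVE_ARCH_MUTEX_CPU_RELAX and supplies its own definition */
          #ifndef CONFIG_HAVE_ARCH_MUTEX_CPU_RELAX
          #define arch_mutex_cpu_relax()	cpu_relax()
          #endif

          /* in the mutex busy-wait loop: */
          while (lock_is_busy(lock))		/* hypothetical condition */
          	arch_mutex_cpu_relax();		/* was: cpu_relax() */

      On System z, arch_mutex_cpu_relax() can then be made cheaper than the
      full cpu_relax(), which would otherwise yield the virtual cpu's time
      slice on every iteration.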
  5. 26 Nov 2010, 1 commit
    • mutexes, sched: Introduce arch_mutex_cpu_relax() · 335d7afb
      Committed by Gerald Schaefer
      The spinning mutex implementation uses cpu_relax() in busy loops as a
      compiler barrier. Depending on the architecture, cpu_relax() may do more
      than is needed in these specific mutex spin loops. On System z we also
      give up the time slice of the virtual cpu in cpu_relax(), which prevents
      effective spinning on the mutex.
      
      This patch replaces cpu_relax() in the spinning mutex code with
      arch_mutex_cpu_relax(), which can be defined by each architecture that
      selects HAVE_ARCH_MUTEX_CPU_RELAX. The default is still cpu_relax(), so
      for now this patch should not affect any architecture other than System z.
      Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1290437256.7455.4.camel@thinkpad>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  6. 03 Sep 2010, 1 commit
  7. 19 May 2010, 1 commit
    • mutex: Fix optimistic spinning vs. BKL · fd6be105
      Committed by Tony Breeds
      Currently, we can hit a nasty case with optimistic
      spinning on mutexes:
      
          CPU A tries to take a mutex while holding the BKL
      
          CPU B tries to take the BKL while holding the mutex
      
      This looks like an AB-BA deadlock scenario but, in
      practice, it is allowed, and happens due to the
      auto-release-on-schedule() nature of the BKL.
      
      In that case, the optimistic spinning code can get us
      into a situation where, instead of going to sleep, A
      will spin waiting for B, who is spinning waiting for
      A, and the only way out of that loop is the
      need_resched() test in mutex_spin_on_owner().
      
      This patch fixes it by completely disabling spinning
      if we own the BKL. This adds one more detail to the
      extensive list of reasons why it's a bad idea for
      kernel code to be holding the BKL.
      Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: <stable@kernel.org>
      LKML-Reference: <20100519054636.GC12389@ozlabs.org>
      [ added an unlikely() attribute to the branch ]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
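      A sketch of the guard this describes, placed at the top of the
      optimistic-spin loop. The lock_depth test (>= 0 meaning the task holds
      the BKL) matches how the BKL was tracked at the time; the surrounding
      loop is heavily simplified:

          for (;;) {
          	/*
          	 * Don't spin if we hold the BKL: the mutex owner may be
          	 * blocked trying to take it, and the BKL only auto-releases
          	 * when we actually schedule(), so sleeping is the way out
          	 * of the A<->B spin.
          	 */
          	if (unlikely(current->lock_depth >= 0))
          		break;			/* fall back to sleeping */

          	/* ... otherwise keep spinning while the owner runs ... */
          }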
  8. 03 Dec 2009, 1 commit
  9. 30 Apr 2009, 1 commit
    • mutex: add atomic_dec_and_mutex_lock(), fix · a511e3f9
      Committed by Andrew Morton
      include/linux/mutex.h:136: warning: 'mutex_lock' declared inline after being called
      include/linux/mutex.h:136: warning: previous declaration of 'mutex_lock' was here
      
      Uninline it.
      
      [ Impact: clean up and uninline, address compiler warning ]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Eric Paris <eparis@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <200904292318.n3TNIsi6028340@imap1.linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
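      The warning itself is easy to reproduce outside the kernel; a minimal
      standalone sketch (hypothetical file, not the actual kernel code):

          /* repro.c: gcc warns when a function picks up 'inline' only
           * after a call to it has already been compiled. */
          void mutex_lock(void);		/* plain declaration */

          void helper(void)
          {
          	mutex_lock();			/* call is seen first... */
          }

          inline void mutex_lock(void)	/* ...then the inline definition:
          					 * 'declared inline after being
          					 * called' */
          {
          }

      In the mutex case both ends were addressed: atomic_dec_and_mutex_lock()
      was uninlined here, and commit b09d2501 below dropped the stray "inline"
      from mutex_lock() itself.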
  10. 21 Apr 2009, 1 commit
  11. 10 Apr 2009, 1 commit
    • mutex: have non-spinning mutexes on s390 by default · 36cd3c9f
      Committed by Heiko Carstens
      Impact: performance regression fix for s390
      
      The adaptive spinning mutexes will not always do what one would expect on
      virtualized architectures like s390. In particular, the cpu_relax() loop
      in mutex_spin_on_owner() can hurt if the mutex-holding cpu has been
      scheduled away by the hypervisor.
      
      We would end up in a cpu_relax() loop when there is no chance that the
      state of the mutex changes until the target cpu has been scheduled again by
      the hypervisor.
      
      For that reason we should change the default behaviour to no-spin on s390.
      
      We do have an instruction that allows us to yield the current cpu in
      favour of a different target cpu, and another instruction that lets us
      check whether the target cpu is physically backed.
      
      However, we need to run some performance tests before we can come up
      with a solution that does the right thing on s390.
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      LKML-Reference: <20090409184834.7a0df7b2@osiris.boeblingen.de.ibm.com>
      Signed-off-by: NIngo Molnar <mingo@elte.hu>
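      A sketch of what "no-spin by default" plausibly looks like in the code;
      the config symbol name is an assumption here, not quoted from the patch:

          /* architectures that spin poorly under a hypervisor (s390)
           * select this and fall straight through to the sleeping
           * slowpath instead of optimistically spinning */
          #ifndef CONFIG_HAVE_DEFAULT_NO_SPIN_MUTEXES
          	/* ... optimistic mutex_spin_on_owner() loop ... */
          #endif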
  12. 06 Apr 2009, 1 commit
    • mutex: drop "inline" from mutex_lock() inside kernel/mutex.c · b09d2501
      Committed by H. Peter Anvin
      Impact: build fix
      
      mutex_lock() was defined inline in kernel/mutex.c, but wasn't
      declared inline in <linux/mutex.h>.  This didn't cause a problem until
      checkin 3a2d367d9aabac486ac4444c6c7ec7a1dab16267 added the
      atomic_dec_and_mutex_lock() inline in between declaration and
      definition.
      
      This broke building with CONFIG_ALLOW_WARNINGS=n, e.g. make
      allnoconfig.
      
      Neither in the source code nor in the allnoconfig binary output can I
      find any internal references to mutex_lock() in kernel/mutex.c, so
      presumably this "inline" is now-useless legacy.
      
      Cc: Eric Paris <eparis@redhat.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Orig-LKML-Reference: <tip-3a2d367d9aabac486ac4444c6c7ec7a1dab16267@git.kernel.org>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
  13. 15 Jan 2009, 4 commits
    • mutex: adaptive spinning, performance tweaks · ac6e60ee
      Committed by Chris Mason
      Spin more aggressively. This is less fair but also markedly faster.
      
      The numbers:
      
       * dbench 50 (higher is better):
        spin        1282MB/s
        v10         548MB/s
        v10 no wait 1868MB/s
      
       * 4k creates (numbers in files/second, higher is better):
        spin        avg 200.60 median 193.20 std 19.71 high 305.93 low 186.82
        v10         avg 180.94 median 175.28 std 13.91 high 229.31 low 168.73
        v10 no wait avg 232.18 median 222.38 std 22.91 high 314.66 low 209.12
      
       * File stats (numbers in seconds, lower is better):
        spin        2.27s
        v10         5.1s
        v10 no wait 1.6s
      
      ( The source changes are smaller than they look; I just moved the
        need_resched() checks in __mutex_lock_common() to after the cmpxchg. )
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
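      A rough sketch of the reordering the parenthetical describes: attempt
      the cmpxchg acquisition first, and only then decide whether to stop
      spinning. Names are abbreviated and the sleep label is hypothetical;
      the real __mutex_lock_common() does considerably more:

          for (;;) {
          	/* try to grab the lock first (count 1 == unlocked) ... */
          	if (atomic_cmpxchg(&lock->count, 1, 0) == 1)
          		break;			/* acquired */

          	/* ... and only afterwards check whether to give up */
          	if (need_resched())
          		goto sleep;		/* hypothetical label */

          	cpu_relax();
          }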
    • mutex: implement adaptive spinning · 0d66bf6d
      Committed by Peter Zijlstra
      Change mutex contention behaviour such that it will sometimes busy wait on
      acquisition - moving its behaviour closer to that of spinlocks.
      
      This concept got ported to mainline from the -rt tree, where it was originally
      implemented for rtmutexes by Steven Rostedt, based on work by Gregory Haskins.
      
      Testing with Ingo's test-mutex application (http://lkml.org/lkml/2006/1/8/50)
      gave a 345% boost for VFS scalability on my testbox:
      
       # ./test-mutex-shm V 16 10 | grep "^avg ops"
       avg ops/sec:               296604
      
       # ./test-mutex-shm V 16 10 | grep "^avg ops"
       avg ops/sec:               85870
      
      The key criterion for the busy wait is that the lock owner has to be
      running on a (different) cpu. The idea is that as long as the owner is
      running, there is a fair chance it will release the lock soon, and thus
      we are better off spinning than blocking/scheduling.
      
      Since regular mutexes (as opposed to rtmutexes) do not atomically track the
      owner, we add the owner in a non-atomic fashion and deal with the races in
      the slowpath.
      
      Furthermore, to ease testing of the performance impact of this new code,
      there is a means to disable this behaviour at runtime (without having to
      reboot the system), when scheduler debugging is enabled
      (CONFIG_SCHED_DEBUG=y), by issuing the following command:
      
       # echo NO_OWNER_SPIN > /debug/sched_features
      
      This command re-enables spinning (this is also the default):
      
       # echo OWNER_SPIN > /debug/sched_features
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
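      A condensed sketch of the owner-spin heuristic described above. The
      helper names are stand-ins, and the real slowpath also handles the
      races that come with the non-atomic owner field:

          /* spin only while the lock is held AND the holder is running
           * on another cpu; in every other case, block as before */
          while ((owner = ACCESS_ONCE(lock->owner)) != NULL) {
          	if (!owner_running(lock, owner))	/* stand-in helper */
          		break;		/* owner scheduled out: sleep */
          	if (need_resched())
          		break;		/* we should yield: sleep */
          	cpu_relax();
          }
          /* lock released (or spinning abandoned); fall through to the
           * normal acquire/sleep path */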
    • mutex: preemption fixes · 41719b03
      Committed by Peter Zijlstra
      The problem is that dropping the spinlock right before schedule() is a
      voluntary preemption point and can cause a schedule, right after which
      we schedule again.
      
      Fix this inefficiency by keeping preemption disabled until we schedule:
      explicitly disable preemption and provide a schedule() variant that
      assumes preemption is already disabled.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
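      A sketch of the resulting pattern in the sleep path. Whether the
      preemption-disabled variant is literally called __schedule() is an
      assumption here:

          preempt_disable();
          /* ... acquisition failed, task queued on the wait list ... */
          spin_unlock_irq(&lock->wait_lock);	/* no longer a preemption
          					 * point: preemption stays
          					 * off across the unlock */
          __schedule();			/* assumed variant that expects
          				 * preemption already disabled */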
    • mutex: small cleanup · 93d81d1a
      Committed by Peter Zijlstra
      Remove a local variable by combining an assignment and a test into one.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
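      For illustration only, the generic shape of that kind of cleanup (the
      variable and call here are assumptions, not the actual hunk):

          /* before: a local exists only to carry the result to the test */
          old_val = atomic_xchg(&lock->count, -1);
          if (old_val == 1)
          	/* ... */;

          /* after: assignment and test combined, local removed */
          if (atomic_xchg(&lock->count, -1) == 1)
          	/* ... */;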
  14. 24 Nov 2008, 1 commit
  15. 20 Oct 2008, 1 commit
  16. 29 Jul 2008, 1 commit
  17. 10 Jun 2008, 1 commit
  18. 09 Feb 2008, 1 commit
  19. 07 Dec 2007, 1 commit
  20. 12 Oct 2007, 1 commit
  21. 20 Jul 2007, 2 commits
  22. 10 May 2007, 1 commit
  23. 09 Dec 2006, 1 commit
  24. 04 Jul 2006, 4 commits
  25. 27 Jun 2006, 1 commit
  26. 11 Jan 2006, 3 commits
  27. 10 Jan 2006, 1 commit