1. 08 May, 2015 1 commit
    • locking/qspinlock: Introduce a simple generic 4-byte queued spinlock · a33fda35
      Authored by Waiman Long
      This patch introduces a new generic queued spinlock implementation
      that can serve as an alternative to the default ticket spinlock.
      Compared with the ticket spinlock, the queued spinlock should be
      almost as fair, has about the same single-thread performance, and
      can be much faster under heavy contention, especially when the
      spinlock is embedded within the data structure it protects.
      
      Only under light to moderate contention, where the average queue
      depth is around 1-3, may the queued spinlock be slightly slower
      because of its higher slowpath overhead.
      
      The queued spinlock is especially well suited to NUMA machines with
      a large number of cores, where the chance of spinlock contention is
      much higher and the cost of contention is also higher because of
      slower inter-node memory traffic.
      
      Because spinlocks are acquired with preemption disabled, a process
      will not be migrated to another CPU while it is trying to take a
      spinlock. Ignoring interrupt handling, a CPU can only be contending
      on one spinlock at any one time; counting soft IRQ, hard IRQ and
      NMI contexts, a CPU can have at most four concurrent lock-waiting
      activities. By allocating a set of per-cpu queue nodes and using
      them to form a waiting queue, the queue node address can be encoded
      into a much smaller 24-bit value (CPU number plus queue node index),
      leaving one byte for the lock itself; see the sketch below.
      
      Note that a queue node is only needed while waiting for the lock;
      once the lock is acquired, the node can be released for later reuse.
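
      To make the encoding concrete, below is a minimal, self-contained C
      sketch of a 24-bit tail packed above an 8-bit lock byte, as described
      above. The field widths, macro names and the encode_tail()/decode_tail()
      helpers are illustrative assumptions for this example, not a copy of
      the kernel code; the real definitions live in the qspinlock
      implementation.

      #include <stdio.h>
      #include <stdint.h>

      /*
       * Hypothetical layout of the 32-bit lock word described above:
       *   bits  0-7  : locked byte
       *   bits  8-9  : per-cpu queue node index (task, soft IRQ, hard IRQ, NMI)
       *   bits 10-31 : CPU number + 1 (an all-zero tail means "no waiters")
       */
      #define LOCKED_BITS    8
      #define TAIL_IDX_BITS  2
      #define TAIL_IDX_OFF   LOCKED_BITS
      #define TAIL_CPU_OFF   (TAIL_IDX_OFF + TAIL_IDX_BITS)

      static uint32_t encode_tail(int cpu, int idx)
      {
          /* cpu + 1 so that an encoded tail of 0 means the queue is empty */
          return ((uint32_t)(cpu + 1) << TAIL_CPU_OFF) |
                 ((uint32_t)idx << TAIL_IDX_OFF);
      }

      static void decode_tail(uint32_t tail, int *cpu, int *idx)
      {
          *cpu = (int)(tail >> TAIL_CPU_OFF) - 1;
          *idx = (tail >> TAIL_IDX_OFF) & ((1u << TAIL_IDX_BITS) - 1);
      }

      int main(void)
      {
          int cpu, idx;
          uint32_t tail = encode_tail(5, 2);  /* e.g. CPU 5, hard-IRQ-level node */

          decode_tail(tail, &cpu, &idx);
          printf("tail=0x%08x -> cpu=%d idx=%d\n", tail, cpu, idx);
          return 0;
      }
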
      Signed-off-by: Waiman Long <Waiman.Long@hp.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Daniel J Blueman <daniel@numascale.com>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Cc: Douglas Hatch <doug.hatch@hp.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Paolo Bonzini <paolo.bonzini@gmail.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Scott J Norton <scott.norton@hp.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: virtualization@lists.linux-foundation.org
      Cc: xen-devel@lists.xenproject.org
      Link: http://lkml.kernel.org/r/1429901803-29771-2-git-send-email-Waiman.Long@hp.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  2. 14 Jan, 2015 1 commit
  3. 16 Jul, 2014 2 commits
  4. 06 Jun, 2014 1 commit
  5. 28 May, 2013 1 commit
  6. 13 Sep, 2012 1 commit
  7. 23 Mar, 2012 1 commit
  8. 10 Apr, 2011 1 commit
  9. 03 Dec, 2009 1 commit
  10. 14 Nov, 2009 1 commit
    • locking: Make inlining decision Kconfig based · 6beb0009
      Authored by Thomas Gleixner
      commit 892a7c67 (locking: Allow arch-inlined spinlocks) implements the
      selection of which lock functions are inlined based on defines in
      arch/.../spinlock.h: #define __always_inline__LOCK_FUNCTION
      
      Despite the name __always_inline__*, the lock functions can be built
      out of line depending on config options. Also, if the arch does not
      set some inline defines, the generic code might set them, again
      depending on config options.
      
      This makes it unnecessarily hard to figure out when and which lock
      functions are inlined. Aside from that, it makes it much harder and
      messier for -rt to manipulate the lock functions.
      
      Convert the inlining decision to CONFIG switches. Each lock function
      is inlined depending on CONFIG_INLINE_*. The configs implement the
      existing dependencies. The architecture code can select ARCH_INLINE_*
      to signal that it wants the corresponding lock function inlined.
      ARCH_INLINE_* is necessary as Kconfig ignores "depends on"
      restrictions when a config element is selected.
      
      No functional change.
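
      As a rough illustration of the CONFIG_INLINE_* idea described above,
      here is a small, self-contained C sketch. The my_spinlock_t type and
      the my_spin_lock()/__my_spin_lock_inlined() helpers are invented for
      this example; only the CONFIG_INLINE_SPIN_LOCK name follows the
      CONFIG_INLINE_* pattern this patch introduces. Compile with or
      without -DCONFIG_INLINE_SPIN_LOCK to get the inlined or out-of-line
      variant.

      #include <stdio.h>

      /*
       * Minimal sketch of a CONFIG_INLINE_* switch.  my_spinlock_t and the
       * helpers below are invented for illustration; in the kernel the
       * switches are Kconfig-generated and consumed by the spinlock API
       * headers.
       */
      typedef struct { volatile int locked; } my_spinlock_t;

      static inline void __my_spin_lock_inlined(my_spinlock_t *l)
      {
          while (__sync_lock_test_and_set(&l->locked, 1))
              ;  /* spin until the previous owner releases the lock */
      }

      #ifdef CONFIG_INLINE_SPIN_LOCK
      /* arch asked for inlining: the call expands at every call site */
      #define my_spin_lock(l) __my_spin_lock_inlined(l)
      #else
      /* default: keep the function out of line to reduce code size */
      static void my_spin_lock(my_spinlock_t *l)
      {
          __my_spin_lock_inlined(l);
      }
      #endif

      int main(void)
      {
          my_spinlock_t lock = { 0 };

          my_spin_lock(&lock);
          printf("locked = %d\n", lock.locked);
          return 0;
      }
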
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      LKML-Reference: <20091109151428.504477141@linutronix.de>
      Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Reviewed-by: Ingo Molnar <mingo@elte.hu>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>