1. 08 Dec 2014, 1 commit
  2. 03 Sep 2013, 1 commit
    • lockref: implement lockless reference count updates using cmpxchg() · bc08b449
      Committed by Linus Torvalds
      Instead of taking the spinlock, the lockless versions atomically check
      that the lock is not taken, and do the reference count update using a
      cmpxchg() loop.  This is semantically identical to doing the reference
      count update protected by the lock, but avoids the "wait for lock"
      contention that you get when accesses to the reference count are
      contended.
      
      Note that a "lockref" is absolutely _not_ equivalent to an atomic_t.
      Even when the lockref reference counts are updated atomically with
      cmpxchg, the fact that they also verify the state of the spinlock means
      that the lockless updates can never happen while somebody else holds the
      spinlock.
      
      So while "lockref_put_or_lock()" looks a lot like just another name for
      "atomic_dec_and_lock()", and both optimize to lockless updates, they are
      fundamentally different: the decrement done by atomic_dec_and_lock() is
      truly independent of any lock (as long as it doesn't decrement to zero),
      so a locked region can still see the count change.
      
      The lockref structure, in contrast, really is a *locked* reference
      count.  If you hold the spinlock, the reference count will be stable and
      you can modify the reference count without using atomics, because even
      the lockless updates will see and respect the state of the lock.
      
      In order to enable the cmpxchg lockless code, the architecture needs to
      do three things:
      
       (1) Make sure that the "arch_spinlock_t" and an "unsigned int" can fit
           in an aligned u64, and have a "cmpxchg()" implementation that works
           on such a u64 data type.
      
       (2) define a helper function to test for a spinlock being unlocked
           ("arch_spin_value_unlocked()")
      
       (3) select the "ARCH_USE_CMPXCHG_LOCKREF" config variable in its
           Kconfig file.
      
      This enables it for x86-64 (but not 32-bit; we'd need to make sure
      cmpxchg() turns into the proper cmpxchg8b in order to enable it for
      32-bit mode).
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
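      A minimal sketch of the cmpxchg() loop described above, assuming the
      lock and the count share an aligned u64.  The struct layout, the
      READ_ONCE()/cmpxchg64() usage and the function names below are
      illustrative, not the actual <linux/lockref.h> code; only
      arch_spin_value_unlocked() is named by the commit.

      struct lockref_sketch {
              union {
                      u64 lock_count;                 /* lock + count as one u64 */
                      struct {
                              arch_spinlock_t lock;   /* must fit in the low bits */
                              unsigned int count;
                      };
              };
      };

      /* Returns 1 on a successful lockless increment, 0 if the caller must
       * fall back to taking the spinlock. */
      static inline int lockref_get_lockless_sketch(struct lockref_sketch *lr)
      {
              struct lockref_sketch old, new;
              u64 prev;

              old.lock_count = READ_ONCE(lr->lock_count);
              while (arch_spin_value_unlocked(old.lock)) {    /* only while unlocked */
                      new.lock_count = old.lock_count;
                      new.count++;                            /* the reference count update */
                      prev = cmpxchg64(&lr->lock_count, old.lock_count, new.lock_count);
                      if (prev == old.lock_count)
                              return 1;                       /* done, spinlock never touched */
                      old.lock_count = prev;                  /* raced: retry from the fresh value */
              }
              return 0;                                       /* lock is held: take the slow path */
      }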
  3. 13 Aug 2013, 1 commit
    • sched: fix the theoretical signal_wake_up() vs schedule() race · e0acd0a6
      Committed by Oleg Nesterov
      This is only theoretical, but after try_to_wake_up(p) was changed
      to check p->state under p->pi_lock, code like
      
      	__set_current_state(TASK_INTERRUPTIBLE);
      	schedule();
      
      can miss a signal. This is the special case of wait-for-condition:
      it relies on the try_to_wake_up/schedule interaction and thus does
      not need an mb() between __set_current_state() and the
      if (signal_pending()) check.
      
      However, this __set_current_state() can move into the critical
      section protected by rq->lock; now that try_to_wake_up() takes
      another lock, we need to ensure that it can't be reordered with
      the "if (signal_pending(current))" check inside that section.
      
      The patch is actually a one-liner: it simply adds smp_wmb() before
      spin_lock_irq(rq->lock). This is what try_to_wake_up() already
      does, for the same reason.
      
      We turn this wmb() into the new helper, smp_mb__before_spinlock(),
      for better documentation and to allow the architectures to change
      the default implementation.
      
      While at it, kill smp_mb__after_lock(), it has no callers.
      
      Perhaps we can also add smp_mb__before/after_spinunlock() for
      prepare_to_wait().
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
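      A condensed sketch of the ordering this one-liner enforces.  The
      schedule() side is heavily simplified: event_ready(), the _sketch
      function names, and the bare rq/prev parameters are illustrative
      stand-ins for the real scheduler code.

      /* Waiter: the classic wait-for-condition pattern from the text above. */
      static void wait_for_event_sketch(void)
      {
              for (;;) {
                      __set_current_state(TASK_INTERRUPTIBLE);  /* (A) store to ->state */
                      if (event_ready())                        /* hypothetical condition */
                              break;
                      schedule();
              }
              __set_current_state(TASK_RUNNING);
      }

      /* Inside schedule(), after this patch (heavily condensed): */
      static void schedule_entry_sketch(struct rq *rq, struct task_struct *prev)
      {
              smp_mb__before_spinlock();      /* order (A) before the signal check below */
              raw_spin_lock_irq(&rq->lock);
              if (signal_pending_state(prev->state, prev))  /* must observe the signal */
                      prev->state = TASK_RUNNING;           /* ...instead of going to sleep */
              /* ... pick the next task, context switch, unlock ... */
      }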
  4. 09 Aug 2013, 4 commits
  5. 22 Aug 2012, 1 commit
  6. 31 Mar 2012, 1 commit
  7. 07 Feb 2012, 1 commit
  8. 26 Nov 2011, 1 commit
  9. 28 Sep 2011, 1 commit
  10. 30 Aug 2011, 4 commits
  11. 27 Jul 2011, 1 commit
  12. 21 Jul 2011, 1 commit
    • x86: Fix write lock scalability 64-bit issue · a750036f
      Committed by Jan Beulich
      With the write lock path simply subtracting RW_LOCK_BIAS there
      is, on large systems, the theoretical possibility of overflowing
      the 32-bit value that was used so far (namely if 128 or more
      CPUs manage to do the subtraction, but don't get to do the
      inverse addition in the failure path quickly enough).
      
      A first measure is to modify RW_LOCK_BIAS itself - with the new
      value chosen, it is good for up to 2048 CPUs each allowed to
      nest over 2048 times on the read path without causing an issue.
      Quite possibly it would even be sufficient to adjust the bias a
      little further, assuming that allowing for significantly less
      nesting would suffice.
      
      However, as the original value chosen allowed for even more
      nesting levels, to support more than 2048 CPUs (possible
      currently only for 64-bit kernels) the lock itself gets widened
      to 64 bits.
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/4E258E0D020000780004E3F0@nat28.tlf.novell.com
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
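      A sketch of the write-path arithmetic being described; the constant
      and the helper names are illustrative, not the kernel's actual values
      or its inline-assembly fast path.

      #define RW_LOCK_BIAS_SKETCH 0x00100000          /* illustrative value only */

      /* The lock word starts at the bias; readers subtract 1, and a writer
       * subtracts the whole bias, owning the lock only if the result is zero. */
      static inline int write_trylock_sketch(atomic_t *lock)
      {
              if (atomic_sub_and_test(RW_LOCK_BIAS_SKETCH, lock))
                      return 1;                       /* nobody held it: write lock taken */
              atomic_add(RW_LOCK_BIAS_SKETCH, lock);  /* failure path: undo the subtract */
              return 0;
      }

      /* The overflow scenario: if enough CPUs sit between the subtract and the
       * undo at the same moment, the sum of their subtractions can wrap a
       * 32-bit word; hence the smaller bias, and a 64-bit lock word when more
       * than 2048 CPUs are configured. */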
  13. 15 Dec 2009, 4 commits
  14. 10 Jul 2009, 1 commit
  15. 16 May 2009, 1 commit
    • x86: Fix performance regression caused by paravirt_ops on native kernels · b4ecc126
      Committed by Jeremy Fitzhardinge
      Xiaohui Xin and some other folks at Intel have been looking into what's
      behind the performance hit of paravirt_ops when running native.
      
      It appears that the hit is entirely due to the paravirtualized
      spinlocks introduced by:
      
       | commit 8efcbab6
       | Date:   Mon Jul 7 12:07:51 2008 -0700
       |
       |     paravirt: introduce a "lock-byte" spinlock implementation
      
      The extra call/return in the spinlock path is somehow
      causing an increase in the cycles/instruction of somewhere around 2-7%
      (seems to vary quite a lot from test to test).  The working theory is
      that the CPU's pipeline is getting upset about the
      call->call->locked-op->return->return, and seems to be failing to
      speculate (though I haven't seen anything definitive about the precise
      reasons).  This doesn't entirely make sense, because the performance
      hit is also visible on unlock and other operations which don't involve
      locked instructions.  But spinlock operations clearly swamp all the
      other pvops operations, even though I can't imagine that they're
      nearly as common (there's only a .05% increase in instructions
      executed).
      
      If I disable just the pv-spinlock calls, my tests show that pvops is
      identical to non-pvops performance on native (my measurements show that
      it is actually about .1% faster, but Xiaohui shows a .05% slowdown).
      
      Summary of results, averaging 10 runs of the "mmperf" test, using a
      no-pvops build as baseline:
      
      		nopv		Pv-nospin	Pv-spin
      CPU cycles	100.00%		99.89%		102.18%
      instructions	100.00%		100.10%		100.15%
      CPI		100.00%		99.79%		102.03%
      cache ref	100.00%		100.84%		100.28%
      cache miss	100.00%		90.47%		88.56%
      cache miss rate	100.00%		89.72%		88.31%
      branches	100.00%		99.93%		100.04%
      branch miss	100.00%		103.66%		107.72%
      branch miss rt	100.00%		103.73%		107.67%
      wallclock	100.00%		99.90%		102.20%
      
      The clear effect here is that the 2% increase in CPI is
      directly reflected in the final wallclock time.
      
      (The other interesting effect is that the more ops are
      out of line calls via pvops, the lower the cache access
      and miss rates.  Not too surprising, but it suggests that
      the non-pvops kernel is over-inlined.  On the flipside,
      the branch misses go up correspondingly...)
      
      So, what's the fix?
      
      Paravirt patching turns all the pvops calls into direct calls, so
      _spin_lock etc do end up having direct calls.  For example, the compiler
      generated code for paravirtualized _spin_lock is:
      
      <_spin_lock+0>:		mov    %gs:0xb4c8,%rax
      <_spin_lock+9>:		incl   0xffffffffffffe044(%rax)
      <_spin_lock+15>:	callq  *0xffffffff805a5b30
      <_spin_lock+22>:	retq
      
      The indirect call will get patched to:
      <_spin_lock+0>:		mov    %gs:0xb4c8,%rax
      <_spin_lock+9>:		incl   0xffffffffffffe044(%rax)
      <_spin_lock+15>:	callq <__ticket_spin_lock>
      <_spin_lock+20>:	nop; nop		/* or whatever 2-byte nop */
      <_spin_lock+22>:	retq
      
      One possibility is to inline _spin_lock, etc, when building an
      optimised kernel (ie, when there's no spinlock/preempt
      instrumentation/debugging enabled).  That will remove the outer
      call/return pair, returning the instruction stream to a single
      call/return, which will presumably execute the same as the non-pvops
      case.  The downsides are: 1) it will replicate the
      preempt_disable/enable code at each lock/unlock callsite; this code is
      fairly small, but not nothing; and 2) the spinlock definitions are
      already a very heavily tangled mass of #ifdefs and other preprocessor
      magic, and making any changes will be non-trivial.
      
      The other obvious answer is to disable pv-spinlocks.  Making them a
      separate config option is fairly easy, and it would be trivial to
      enable them only when Xen is enabled (as the only non-default user).
      But it doesn't really address the common case of a distro build which
      is going to have Xen support enabled, and leaves the open question of
      whether the native performance cost of pv-spinlocks is worth the
      performance improvement on a loaded Xen system (10% saving of overall
      system CPU when guests block rather than spin).  Still it is a
      reasonable short-term workaround.
      
      [ Impact: fix pvops performance regression when running native ]
      Analysed-by: N"Xin Xiaohui" <xiaohui.xin@intel.com>
      Analysed-by: N"Li Xin" <xin.li@intel.com>
      Analysed-by: N"Nakajima Jun" <jun.nakajima@intel.com>
      Signed-off-by: NJeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Acked-by: NH. Peter Anvin <hpa@zytor.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Xen-devel <xen-devel@lists.xensource.com>
      LKML-Reference: <4A0B62F7.5030802@goop.org>
      [ fixed the help text ]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
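      A condensed model of the double call/return analysed above.  The
      wrapper names and types are simplified; pv_lock_ops.spin_lock and
      __ticket_spin_lock are the pieces visible in the disassembly, the
      rest is illustrative.

      /* The pvops indirection (call #2 above): paravirt patching rewrites this
       * indirect call into a direct "callq __ticket_spin_lock" plus padding nops. */
      static __always_inline void pv_spin_lock_sketch(arch_spinlock_t *lock)
      {
              pv_lock_ops.spin_lock(lock);
      }

      /* The generic wrapper (call #1 above). */
      void _spin_lock_sketch(arch_spinlock_t *lock)
      {
              preempt_disable();
              pv_spin_lock_sketch(lock);
      }

      /* Native execution therefore runs call -> call -> locked xadd -> ret -> ret,
       * the pattern blamed above for the extra cycles per instruction. */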
  16. 03 Apr 2009, 1 commit
  17. 10 Feb 2009, 1 commit
  18. 26 Jan 2009, 1 commit
  19. 21 Jan 2009, 1 commit
    • x86: remove byte locks · afb33f8c
      Committed by Jiri Kosina
      Impact: cleanup
      
      Remove the byte lock implementation, which was introduced by Jeremy in
      8efcbab6 ("paravirt: introduce a "lock-byte" spinlock implementation")
      but turned out to be dead code that is not used by any in-kernel
      virtualization guest (Xen uses its own spinlock variant and KVM is not
      planning to move to byte locks).
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  20. 23 Oct 2008, 2 commits
  21. 05 Sep 2008, 3 commits
  22. 20 Aug 2008, 1 commit
  23. 16 Aug 2008, 1 commit
    • x86: spinlock use LOCK_PREFIX · 5bbd4c37
      Committed by Mathieu Desnoyers
      Since we are now using DS prefixes instead of NOPs to remove LOCK
      prefixes, there are no longer any problems with instruction boundaries
      moving around.
      
      * Linus Torvalds (torvalds@linux-foundation.org) wrote:
      >
      >
      > On Thu, 14 Aug 2008, Mathieu Desnoyers wrote:
      > >
      > > Changing the 0x90 (single-byte nop) currently used into a 0x3E DS segment
      > > override prefix should fix this issue. Since the default of the atomic
      > > instructions is to use the DS segment anyway, it should not affect the
      > > behavior.
      >
      > Ok, so I think this is an _excellent_ patch, but I'd like to also then use
      > LOCK_PREFIX in include/asm-x86/futex.h.
      >
      > See commit 9d55b992.
      >
      >     Linus
      
      Unless there is a rationale for this, I think these should be changed
      to LOCK_PREFIX too.
      
      grep "lock ;" include/asm-x86/spinlock.h
               "lock ; cmpxchgw %w1,%2\n\t"
        asm volatile("lock ; xaddl %0, %1\n"
               "lock ; cmpxchgl %1,%2\n\t"
      
      Applies to 2.6.27-rc2.
      Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      CC: Linus Torvalds <torvalds@linux-foundation.org>
      CC: H. Peter Anvin <hpa@zytor.com>
      CC: Jeremy Fitzhardinge <jeremy@goop.org>
      CC: Roland McGrath <roland@redhat.com>
      CC: Ingo Molnar <mingo@elte.hu>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      CC: Steven Rostedt <srostedt@redhat.com>
      CC: Thomas Gleixner <tglx@linutronix.de>
      CC: Peter Zijlstra <peterz@infradead.org>
      CC: Andrew Morton <akpm@linux-foundation.org>
      CC: David Miller <davem@davemloft.net>
      CC: Ulrich Drepper <drepper@redhat.com>
      CC: Rusty Russell <rusty@rustcorp.com.au>
      CC: Gregory Haskins <ghaskins@novell.com>
      CC: Arnaldo Carvalho de Melo <acme@redhat.com>
      CC: "Luis Claudio R. Goncalves" <lclaudio@uudg.org>
      CC: Clark Williams <williams@redhat.com>
      CC: Christoph Lameter <cl@linux-foundation.org>
      CC: Andi Kleen <andi@firstfloor.org>
      CC: Harvey Harrison <harvey.harrison@gmail.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
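      The shape of the substitution being asked for, as a sketch.  The
      operand constraints below are simplified, not the real ticket-lock
      asm; the point is only that the hard-coded "lock ;" becomes
      LOCK_PREFIX so the alternatives machinery can rewrite it.

      #include <asm/alternative.h>    /* LOCK_PREFIX */

      /* before: asm volatile("lock ; xaddl %0, %1" ...); */
      static __always_inline int locked_xadd_sketch(int *p, int inc)
      {
              asm volatile(LOCK_PREFIX "xaddl %0, %1"
                           : "+r" (inc), "+m" (*p)
                           : : "memory", "cc");
              return inc;     /* xadd leaves the old value of *p in the register */
      }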
  24. 15 Aug 2008, 1 commit
  25. 23 Jul 2008, 1 commit
    • x86: consolidate header guards · 77ef50a5
      Committed by Vegard Nossum
      This patch is the result of an automatic script that consolidates the
      format of all the headers in include/asm-x86/.
      
      The format:
      
      1. No leading underscore. Names with leading underscores are reserved.
      2. Pathname components are separated by two underscores. So we can
         distinguish between mm_types.h and mm/types.h.
      3. Everything except letters and numbers are turned into single
         underscores.
      Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
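      Applied to a header such as include/asm-x86/spinlock.h, the three
      rules above yield a guard like the following (the exact text comes
      from the conversion script, so treat this as an example):

      /* include/asm-x86/spinlock.h */
      #ifndef ASM_X86__SPINLOCK_H     /* no leading underscore; '-' and '.' become '_' */
      #define ASM_X86__SPINLOCK_H     /* "asm-x86" and "spinlock.h" joined by "__" */

      /* ... header contents ... */

      #endif /* ASM_X86__SPINLOCK_H */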
  26. 16 Jul 2008, 2 commits
    • paravirt: introduce a "lock-byte" spinlock implementation · 8efcbab6
      Committed by Jeremy Fitzhardinge
      Implement a version of the old spinlock algorithm, in which everyone
      spins waiting for a lock byte.  In order to be compatible with the
      ticket-lock's use of a zero initializer, this uses the convention of
      '0' for unlocked and '1' for locked.
      
      This algorithm is much better than ticket locks in a virtual
      environment, because it doesn't interact badly with the vcpu scheduler.
      If there are multiple vcpus spinning on a lock and the lock is
      released, the next vcpu to be scheduled will take the lock, rather
      than cycling around until the next ticketed vcpu gets it.
      
      To use this, you must call paravirt_use_bytelocks() very early, before
      any spinlocks have been taken.
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Christoph Lameter <clameter@linux-foundation.org>
      Cc: Petr Tesarik <ptesarik@suse.cz>
      Cc: Virtualization <virtualization@lists.linux-foundation.org>
      Cc: Xen devel <xen-devel@lists.xensource.com>
      Cc: Thomas Friebel <thomas.friebel@amd.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
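      A sketch of the byte-lock algorithm described above ('0' free, '1'
      held).  The helper names are illustrative; the real implementation is
      installed through the pv_lock_ops hooks via paravirt_use_bytelocks().

      static inline void byte_spin_lock_sketch(u8 *lock)
      {
              do {
                      while (*(volatile u8 *)lock)    /* spin locally while it looks held */
                              cpu_relax();
              } while (xchg(lock, 1) != 0);           /* atomically claim the byte */
      }

      static inline void byte_spin_unlock_sketch(u8 *lock)
      {
              smp_wmb();                      /* keep the critical section before the release */
              *(volatile u8 *)lock = 0;       /* whichever vcpu runs next may take it */
      }

      Unlike a ticket lock, whichever spinning vcpu the hypervisor happens to
      schedule next can take the lock, which is exactly the property the
      commit is after.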
    • x86/paravirt: add hooks for spinlock operations · 74d4affd
      Committed by Jeremy Fitzhardinge
      Ticket spinlocks have absolutely ghastly worst-case performance
      characteristics in a virtual environment.  If there is any contention
      for physical CPUs (ie, there are more runnable vcpus than cpus), then
      ticket locks can cause the system to end up spending 90+% of its time
      spinning.
      
      The problem is that (v)cpus waiting on a ticket spinlock will be
      granted access to the lock in the strict order they got their tickets.  If
      the hypervisor scheduler doesn't give the vcpus time in that order,
      they will burn timeslices waiting for the scheduler to give the right
      vcpu some time.  In the worst case it could take O(n^2) vcpu scheduler
      timeslices for everyone waiting on the lock to get it, not counting
      new cpus trying to take the lock while the log-jam is sorted out.
      
      These hooks allow a paravirt backend to replace the spinlock
      implementation.
      
      At the very least, this could revert the implementation to the old
      lock algorithm, which allows the next scheduled vcpu to take the
      lock and has fairly good performance.
      
      It also allows the spinlocks to take advantage of hypervisor
      features to make locks more efficient (spin and block, for example).
      
      The cost to native execution is an extra direct call when using a
      spinlock function.  There's no overhead if CONFIG_PARAVIRT is turned
      off.
      
      The lock structure is fixed at a single "unsigned int", initialized to
      zero, but the spinlock implementation can use it as it wishes.
      
      Thanks to Thomas Friebel's Xen Summit talk "Preventing Guests from
      Spinning Around" for pointing out this problem.
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Christoph Lameter <clameter@linux-foundation.org>
      Cc: Petr Tesarik <ptesarik@suse.cz>
      Cc: Virtualization <virtualization@lists.linux-foundation.org>
      Cc: Xen devel <xen-devel@lists.xensource.com>
      Cc: Thomas Friebel <thomas.friebel@amd.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
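      The shape of the hooks this commit adds, sketched below.  The member
      list is trimmed and the signatures simplified; __ticket_spin_lock
      appears in the disassembly of the b4ecc126 entry above, while the
      other names are assumed for illustration.

      struct pv_lock_ops_sketch {
              int  (*spin_is_locked)(arch_spinlock_t *lock);
              void (*spin_lock)(arch_spinlock_t *lock);
              int  (*spin_trylock)(arch_spinlock_t *lock);
              void (*spin_unlock)(arch_spinlock_t *lock);
      };

      /* Native default: the ordinary ticket-lock routines.  A hypervisor-aware
       * backend (Xen, for example) overwrites these pointers early in boot,
       * e.g. with the byte-lock implementation from the companion commit. */
      struct pv_lock_ops_sketch pv_lock_ops_sketch = {
              .spin_is_locked = __ticket_spin_is_locked,
              .spin_lock      = __ticket_spin_lock,
              .spin_trylock   = __ticket_spin_trylock,
              .spin_unlock    = __ticket_spin_unlock,
      };

      On native, paravirt patching then turns each indirect call through these
      pointers into a direct call, which is where the extra call/return pair
      measured in the b4ecc126 entry above comes from.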
  27. 11 May 2008, 1 commit