1. 26 Jan 2009, 1 commit
  2. 21 Jan 2009, 1 commit
    • x86: remove byte locks · afb33f8c
      Jiri Kosina committed
      Impact: cleanup
      
      Remove byte locks implementation, which was introduced by Jeremy in
      8efcbab6 ("paravirt: introduce a "lock-byte" spinlock implementation"),
      but turned out to be dead code that is not used by any in-kernel
      virtualization guest (Xen uses its own variant of spinlocks implementation
      and KVM is not planning to move to byte locks).
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  3. 23 Oct 2008, 2 commits
  4. 05 Sep 2008, 3 commits
  5. 20 Aug 2008, 1 commit
  6. 16 Aug 2008, 1 commit
    • x86: spinlock use LOCK_PREFIX · 5bbd4c37
      Mathieu Desnoyers committed
      Since we are now using DS prefixes instead of NOP to remove LOCK
      prefixes, there is no longer any problems with instruction boundaries
      moving around.
      
      * Linus Torvalds (torvalds@linux-foundation.org) wrote:
      >
      >
      > On Thu, 14 Aug 2008, Mathieu Desnoyers wrote:
      > >
      > > Changing the 0x90 (single-byte nop) currently used into a 0x3E DS segment
      > > override prefix should fix this issue. Since the default of the atomic
      > > instructions is to use the DS segment anyway, it should not affect the
      > > behavior.
      >
      > Ok, so I think this is an _excellent_ patch, but I'd like to also then use
      > LOCK_PREFIX in include/asm-x86/futex.h.
      >
      > See commit 9d55b992.
      >
      >     Linus
      
      Unless there is a rationale for this, I think these should be changed to
      LOCK_PREFIX too.
      
      grep "lock ;" include/asm-x86/spinlock.h
               "lock ; cmpxchgw %w1,%2\n\t"
        asm volatile("lock ; xaddl %0, %1\n"
               "lock ; cmpxchgl %1,%2\n\t"
      
      Applies to 2.6.27-rc2.
      Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      CC: Linus Torvalds <torvalds@linux-foundation.org>
      CC: H. Peter Anvin <hpa@zytor.com>
      CC: Jeremy Fitzhardinge <jeremy@goop.org>
      CC: Roland McGrath <roland@redhat.com>
      CC: Ingo Molnar <mingo@elte.hu>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      CC: Steven Rostedt <srostedt@redhat.com>
      CC: Thomas Gleixner <tglx@linutronix.de>
      CC: Peter Zijlstra <peterz@infradead.org>
      CC: Andrew Morton <akpm@linux-foundation.org>
      CC: David Miller <davem@davemloft.net>
      CC: Ulrich Drepper <drepper@redhat.com>
      CC: Rusty Russell <rusty@rustcorp.com.au>
      CC: Gregory Haskins <ghaskins@novell.com>
      CC: Arnaldo Carvalho de Melo <acme@redhat.com>
      CC: "Luis Claudio R. Goncalves" <lclaudio@uudg.org>
      CC: Clark Williams <williams@redhat.com>
      CC: Christoph Lameter <cl@linux-foundation.org>
      CC: Andi Kleen <andi@firstfloor.org>
      CC: Harvey Harrison <harvey.harrison@gmail.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
  7. 15 Aug 2008, 1 commit
  8. 23 Jul 2008, 1 commit
    • x86: consolidate header guards · 77ef50a5
      Vegard Nossum committed
      This patch is the result of an automatic script that consolidates the
      format of all the headers in include/asm-x86/.
      
      The format:
      
      1. No leading underscore. Names with leading underscores are reserved.
      2. Pathname components are separated by two underscores. So we can
         distinguish between mm_types.h and mm/types.h.
      3. Everything except letters and numbers are turned into single
         underscores.
      Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
  9. 16 Jul 2008, 2 commits
    • paravirt: introduce a "lock-byte" spinlock implementation · 8efcbab6
      Jeremy Fitzhardinge committed
      Implement a version of the old spinlock algorithm, in which everyone
      spins waiting for a lock byte.  In order to be compatible with the
      ticket-lock's use of a zero initializer, this uses the convention of
      '0' for unlocked and '1' for locked.
      
      This algorithm is much better than ticket locks in a virtual
      environment, because it doesn't interact badly with the vcpu scheduler.
      If there are multiple vcpus spinning on a lock and the lock is
      released, the next vcpu to be scheduled will take the lock, rather
      than cycling around until the next ticketed vcpu gets it.
      
      To use this, you must call paravirt_use_bytelocks() very early, before
      any spinlocks have been taken.
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Christoph Lameter <clameter@linux-foundation.org>
      Cc: Petr Tesarik <ptesarik@suse.cz>
      Cc: Virtualization <virtualization@lists.linux-foundation.org>
      Cc: Xen devel <xen-devel@lists.xensource.com>
      Cc: Thomas Friebel <thomas.friebel@amd.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86/paravirt: add hooks for spinlock operations · 74d4affd
      Jeremy Fitzhardinge committed
      Ticket spinlocks have absolutely ghastly worst-case performance
      characteristics in a virtual environment.  If there is any contention
      for physical CPUs (ie, there are more runnable vcpus than cpus), then
      ticket locks can cause the system to end up spending 90+% of its time
      spinning.
      
      The problem is that (v)cpus waiting on a ticket spinlock will be
      granted access to the lock in the strict order in which they got their
      tickets.  If
      the hypervisor scheduler doesn't give the vcpus time in that order,
      they will burn timeslices waiting for the scheduler to give the right
      vcpu some time.  In the worst case it could take O(n^2) vcpu scheduler
      timeslices for everyone waiting on the lock to get it, not counting
      new cpus trying to take the lock while the log-jam is sorted out.
      
      These hooks allow a paravirt backend to replace the spinlock
      implementation.
      
      At the very least, this could revert the implementation back to the
      old lock algorithm, which allows the next scheduled vcpu to take the
      lock, and has fairly good performance.
      
      It also allows the spinlocks to take advantage of hypervisor
      features to make locks more efficient (spin and block, for example).
      
      The cost to native execution is an extra direct call when using a
      spinlock function.  There's no overhead if CONFIG_PARAVIRT is turned
      off.
      
      The lock structure is fixed at a single "unsigned int", initialized to
      zero, but the spinlock implementation can use it as it wishes.
      
      Thanks to Thomas Friebel's Xen Summit talk "Preventing Guests from
      Spinning Around" for pointing out this problem.
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Christoph Lameter <clameter@linux-foundation.org>
      Cc: Petr Tesarik <ptesarik@suse.cz>
      Cc: Virtualization <virtualization@lists.linux-foundation.org>
      Cc: Xen devel <xen-devel@lists.xensource.com>
      Cc: Thomas Friebel <thomas.friebel@amd.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  10. 11 May 2008, 1 commit
  11. 17 Apr 2008, 2 commits
  12. 30 Jan 2008, 5 commits
  13. 11 Oct 2007, 1 commit