1. 14 Jun 2016, 1 commit
    • powerpc/spinlock: Fix spin_unlock_wait() · 6262db7c
      Boqun Feng authored
      There is an ordering issue with spin_unlock_wait() on powerpc, because
      the spin_lock primitive is an ACQUIRE, and an ACQUIRE only orders the
      load part of the operation with the memory operations that follow it.
      Therefore the following event sequence can happen:
      
      CPU 1			CPU 2			CPU 3
      
      ==================	====================	==============
      						spin_unlock(&lock);
      			spin_lock(&lock):
      			  r1 = *lock; // r1 == 0;
      o = object;		o = READ_ONCE(object); // reordered here
      object = NULL;
      smp_mb();
      spin_unlock_wait(&lock);
      			  *lock = 1;
      smp_mb();
      o->dead = true;         < o = READ_ONCE(object); > // reordered upwards
      			if (o) // true
      				BUG_ON(o->dead); // true!!
      
      To fix this, we add a "nop" ll/sc loop in arch_spin_unlock_wait() on
      ppc. The "nop" ll/sc loop reads the lock value and writes it back
      atomically; in this way it synchronizes the view of the lock on CPU 1
      with that on CPU 2. Therefore, in the scenario above, either CPU 2
      will fail to get the lock at first or CPU 1 will see the lock acquired
      by CPU 2; both cases eliminate this bug. This is a similar idea to
      what Will Deacon did for ARM64 in:
      
        d86b8da0 ("arm64: spinlock: serialise spin_unlock_wait against concurrent lockers")
      
      Furthermore, if the "nop" ll/sc finds that the lock is locked, we
      don't actually need to do the "nop" ll/sc trick again; we can just do
      a normal load+check loop waiting for the lock to be released, because
      in that case spin_unlock_wait() is called while someone is holding the
      lock, and the store part of the "nop" ll/sc happens before the lock
      release of the current lock holder:
      
      	"nop" ll/sc -> spin_unlock()
      
      and the lock release happens before the next lock acquisition:
      
      	spin_unlock() -> spin_lock() <next holder>
      
      which means the "nop" ll/sc happens before the next lock acquisition:
      
      	"nop" ll/sc -> spin_unlock() -> spin_lock() <next holder>
      
      With a smp_mb() preceding spin_unlock_wait(), the store of object is
      guaranteed to be observed by the next lock holder:
      
      	STORE -> smp_mb() -> "nop" ll/sc
      	-> spin_unlock() -> spin_lock() <next holder>
      
      This patch therefore fixes the issue. It also cleans up
      arch_spin_unlock_wait() a little by removing superfluous memory
      barriers in the loops and consolidating the PPC32 and PPC64
      implementations into one.
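      
      A minimal sketch of the resulting arch_spin_unlock_wait(), reconstructed
      from the description above (the exact asm constraints and the plain-load
      loop body are assumptions, and the EH=0 hint mentioned in the [mpe] note
      below is omitted for brevity; this is not quoted from the patch):
      
      	static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
      	{
      		arch_spinlock_t lock_val;
      
      		smp_mb();
      
      		/*
      		 * "nop" ll/sc loop: atomically load the lock value and
      		 * store it back unchanged, so our view of the lock is
      		 * synchronized with any concurrent locker's.
      		 */
      		__asm__ __volatile__(
      	"1:	lwarx	%0, 0, %2\n"
      	"	stwcx.	%0, 0, %2\n"
      	"	bne-	1b\n"
      		: "=&r" (lock_val), "+m" (*lock)
      		: "r" (lock)
      		: "cr0", "xer");
      
      		if (!arch_spin_value_unlocked(lock_val)) {
      			/*
      			 * Locked: the ll/sc store above is ordered before
      			 * the holder's unlock, so a plain load+check loop
      			 * is enough from here on.
      			 */
      			while (lock->slock)
      				cpu_relax();
      		}
      
      		smp_mb();
      	}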
      Suggested-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
      Reviewed-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      [mpe: Inline the "nop" ll/sc loop and set EH=0, munge change log]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  2. 21 Apr 2015, 1 commit
  3. 25 Mar 2015, 1 commit
    • cpufreq/ppc: Add missing #include <asm/smp.h> · 1f8c82ab
      Geert Uytterhoeven authored
      If CONFIG_SMP=n, <linux/smp.h> does not include <asm/smp.h>, causing:
      
      drivers/cpufreq/ppc-corenet-cpufreq.c: In function 'corenet_cpufreq_cpu_init':
      drivers/cpufreq/ppc-corenet-cpufreq.c:173:3: error: implicit declaration of function 'get_hard_smp_processor_id' [-Werror=implicit-function-declaration]
      
    • powerpc: Export __spin_yield
      Suresh Warrier authored
      From: "Suresh E. Warrier" <warrier@linux.vnet.ibm.com>
      X-Patchwork-Id: 443703
      Message-Id: <54EE5989.7010800@linux.vnet.ibm.com>
      To: linuxppc-dev@ozlabs.org
      Date: Wed, 25 Feb 2015 17:23:53 -0600
      
      Export __spin_yield so that the arch_spin_unlock() function can
      be invoked from a module. This will be required for modules where
      we want to take a lock that is also acquired in hypervisor
      real mode. Because we want to avoid running any lockdep code
      (which may not be safe in real mode), this lock needs to be
      an arch_spinlock_t instead of a normal spinlock.
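      
      A hedged sketch of the module-side usage this export enables (the lock
      and function names here are illustrative, not from the patch):
      
      	#include <asm/spinlock.h>
      
      	/* raw lock: no lockdep or debug wrappers, safe for real mode */
      	static arch_spinlock_t rm_lock = __ARCH_SPIN_LOCK_UNLOCKED;
      
      	static void module_critical_section(void)
      	{
      		/* on shared-processor LPARs the arch spinlock slow path
      		 * calls __spin_yield(), hence the need for the export */
      		arch_spin_lock(&rm_lock);
      		/* ... work that must also be safe in hypervisor real mode ... */
      		arch_spin_unlock(&rm_lock);
      	}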
      Signed-off-by: Suresh Warrier <warrier@linux.vnet.ibm.com>
      Acked-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  4. 13 Aug 2014, 1 commit
  5. 14 Aug 2013, 1 commit
  6. 21 Mar 2012, 1 commit
  7. 01 Nov 2011, 1 commit
  8. 02 Sep 2010, 1 commit
  9. 15 Dec 2009, 3 commits
  10. 09 Mar 2007, 1 commit
  11. 20 Sep 2006, 1 commit
  12. 01 Jul 2006, 1 commit
  13. 13 Jan 2006, 1 commit
    • [PATCH] powerpc: Remove lppaca structure from the PACA · 3356bb9f
      David Gibson authored
      At present the lppaca - the structure shared with the iSeries
      hypervisor and phyp - is contained within the PACA, our own low-level
      per-cpu structure.  This doesn't have to be so; the patch below
      removes it, making a separate array of lppaca structures.
      
      This saves approximately 500*NR_CPUS bytes of image size and kernel
      memory, because we no longer need an aligning gap between the Linux
      and hypervisor portions of every PACA.  On the other hand it means an
      extra level of dereference in many accesses to the lppaca.
      
      The patch also gets rid of several places where we assign the paca
      address to a local variable for no particular reason.
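      
      A rough sketch of the data-structure change (field and array names are
      assumptions based on the description, not copied from the patch):
      
      	/* Before: hypervisor-shared area embedded in each PACA,
      	 * padded so the hypervisor portion is suitably aligned. */
      	struct paca_struct {
      		struct lppaca lppaca;		/* + ~500 bytes of padding */
      		/* ... Linux-private fields ... */
      	};
      
      	/* After: one separate, static array of lppaca structures ... */
      	struct lppaca lppaca[NR_CPUS];
      
      	/* ... while the PACA keeps only a pointer, at the cost of an
      	 * extra dereference on each access. */
      	struct paca_struct {
      		struct lppaca *lppaca_ptr;
      		/* ... Linux-private fields ... */
      	};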
      Signed-off-by: David Gibson <dwg@au1.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  14. 07 Nov 2005, 1 commit
    • powerpc: Various UP build fixes · 2249ca9d
      Paul Mackerras authored
      Mostly this involves adding #include <asm/smp.h>, since that defines
      things like boot_cpuid[_phys] and [gs]et_hard_smp_processor_id, which
      are SMP-related but still needed on UP.  This incorporates fixes
      posted by Olof Johansson and Heikki Lindholm.
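      
      The shape of the fix, illustrated (a minimal sketch, not quoted from
      the patch):
      
      	#include <asm/smp.h>	/* boot_cpuid, get_hard_smp_processor_id() */
      
      	/* resolves on UP as well as SMP builds, since asm/smp.h
      	 * declares these even when CONFIG_SMP=n */
      	int hw_id = get_hard_smp_processor_id(boot_cpuid);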
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  15. 01 Nov 2005, 1 commit
  16. 31 Oct 2005, 1 commit
  17. 10 Oct 2005, 1 commit
  18. 11 Sep 2005, 1 commit
    • [PATCH] spinlock consolidation · fb1c8f93
      Ingo Molnar authored
      This patch (written by me and also containing many suggestions of Arjan van
      de Ven) does a major cleanup of the spinlock code.  It does the following
      things:
      
       - consolidates and enhances the spinlock/rwlock debugging code
      
       - simplifies the asm/spinlock.h files
      
       - encapsulates the raw spinlock type and moves generic spinlock
         features (such as ->break_lock) into the generic code (see the
         sketch after this list).
      
       - cleans up the spinlock code hierarchy to get rid of the spaghetti.
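      
      A simplified sketch of the encapsulation referenced in the list above
      (assuming the 2.6.14-era layout; field names are illustrative, not
      quoted from the patch):
      
      	/* asm/spinlock_types.h - raw, arch-specific part */
      	typedef struct {
      		volatile unsigned int slock;
      	} raw_spinlock_t;
      
      	/* linux/spinlock_types.h - generic wrapper around the raw type */
      	typedef struct {
      		raw_spinlock_t raw_lock;
      	#if defined(CONFIG_PREEMPT) && defined(CONFIG_SMP)
      		unsigned int break_lock;	/* generic feature, now here */
      	#endif
      	} spinlock_t;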
      
      Most notably there's now only a single variant of the debugging code,
      located in lib/spinlock_debug.c.  (previously we had one SMP debugging
      variant per architecture, plus a separate generic one for UP builds)
      
      Also, I've enhanced the rwlock debugging facility; it will now track
      write-owners.  There is new spinlock-owner/CPU-tracking on SMP builds too.
      All locks have lockup detection now, which will work for both soft and hard
      spin/rwlock lockups.
      
      The arch-level include files now only contain the minimally necessary
      subset of the spinlock code - all the rest that can be generalized now
      lives in the generic headers:
      
       include/asm-i386/spinlock_types.h       |   16
       include/asm-x86_64/spinlock_types.h     |   16
      
      I have also split up the various spinlock variants into separate files,
      making it easier to see which does what. The new layout is:
      
         SMP                         |  UP
         ----------------------------|-----------------------------------
         asm/spinlock_types_smp.h    |  linux/spinlock_types_up.h
         linux/spinlock_types.h      |  linux/spinlock_types.h
         asm/spinlock_smp.h          |  linux/spinlock_up.h
         linux/spinlock_api_smp.h    |  linux/spinlock_api_up.h
         linux/spinlock.h            |  linux/spinlock.h
      
      /*
       * here's the role of the various spinlock/rwlock related include files:
       *
       * on SMP builds:
       *
       *  asm/spinlock_types.h: contains the raw_spinlock_t/raw_rwlock_t and the
       *                        initializers
       *
       *  linux/spinlock_types.h:
       *                        defines the generic type and initializers
       *
       *  asm/spinlock.h:       contains the __raw_spin_*()/etc. lowlevel
       *                        implementations, mostly inline assembly code
       *
       *   (also included on UP-debug builds:)
       *
       *  linux/spinlock_api_smp.h:
       *                        contains the prototypes for the _spin_*() APIs.
       *
       *  linux/spinlock.h:     builds the final spin_*() APIs.
       *
       * on UP builds:
       *
       *  linux/spinlock_types_up.h:
       *                        contains the generic, simplified UP spinlock type.
       *                        (which is an empty structure on non-debug builds)
       *
       *  linux/spinlock_types.h:
       *                        defines the generic type and initializers
       *
       *  linux/spinlock_up.h:
       *                        contains the __raw_spin_*()/etc. version of UP
       *                        builds. (which are NOPs on non-debug, non-preempt
       *                        builds)
       *
       *   (included on UP-non-debug builds:)
       *
       *  linux/spinlock_api_up.h:
       *                        builds the _spin_*() APIs.
       *
       *  linux/spinlock.h:     builds the final spin_*() APIs.
       */
      
      All SMP and UP architectures are converted by this patch.
      
      arm, i386, ia64, ppc, ppc64, s390/s390x and x64 were build-tested via
      crosscompilers.  m32r, mips, sh and sparc have not been tested yet, but
      should be mostly fine.
      
      From: Grant Grundler <grundler@parisc-linux.org>
      
        Booted and lightly tested on a500-44 (64-bit, SMP kernel, dual CPU).
        Builds 32-bit SMP kernel (not booted or tested).  I did not try to build
        non-SMP kernels.  That should be trivial to fix up later if necessary.
      
        I converted the bit-ops atomic_hash lock to raw_spinlock_t.  Doing so
        avoids some ugly nesting of linux/*.h and asm/*.h files.  Those
        particular locks are well tested and contained entirely inside arch
        specific code.  I do NOT expect any new issues to arise with them.
      
        If someone ever does need to use debug/metrics with them, then they
        will need to unravel this hairball between spinlocks, atomic ops, and
        bit ops that exists only because parisc has exactly one atomic
        instruction: LDCW (load and clear word).
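      
        A minimal sketch of how a spinlock is built from LDCW alone (assuming
        parisc's __ldcw() semantics: atomically load a word and clear it, with
        nonzero meaning unlocked; the function names are illustrative):
      
      	static inline void ldcw_spin_lock(volatile unsigned int *a)
      	{
      		while (__ldcw(a) == 0)		/* grabbed it iff old value != 0 */
      			while (*a == 0)		/* spin read-only until free */
      				cpu_relax();
      	}
      
      	static inline void ldcw_spin_unlock(volatile unsigned int *a)
      	{
      		mb();				/* order critical section before release */
      		*a = 1;				/* nonzero == unlocked */
      	}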
      
      From: "Luck, Tony" <tony.luck@intel.com>
      
         ia64 fix
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Arjan van de Ven <arjanv@infradead.org>
      Signed-off-by: Grant Grundler <grundler@parisc-linux.org>
      Cc: Matthew Wilcox <willy@debian.org>
      Signed-off-by: Hirokazu Takata <takata@linux-m32r.org>
      Signed-off-by: Mikael Pettersson <mikpe@csd.uu.se>
      Signed-off-by: Benoit Boissinot <benoit.boissinot@ens-lyon.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  19. 17 Apr 2005, 1 commit
    • Linux-2.6.12-rc2 · 1da177e4
      Linus Torvalds authored
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!