1. 16 Jun 2016, 1 commit
    • locking/atomic, arch/powerpc: Implement... · a28cc7bb
      Committed by Peter Zijlstra
      locking/atomic, arch/powerpc: Implement atomic{,64}_fetch_{add,sub,and,or,xor}{,_relaxed,_acquire,_release}()
      
      Implement the FETCH-OP atomic primitives. These are very similar to the
      existing OP-RETURN primitives we already have, except that they return
      the value of the atomic variable _before_ the modification.
      
      This is especially useful for irreversible operations, such as bitops,
      because for those it is impossible to reconstruct the state prior to
      modification from the OP-RETURN result.
      Tested-by: Boqun Feng <boqun.feng@gmail.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-arch@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Cc: linuxppc-dev@lists.ozlabs.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      a28cc7bb
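      The FETCH-OP versus OP-RETURN distinction can be illustrated with the
      GCC `__atomic` builtins, used here as a portable stand-in for the
      kernel API (not the powerpc assembly implementation itself):

      ```c
      #include <assert.h>

      int main(void)
      {
          int v = 5;

          /* FETCH-OP: returns the value of the variable *before* the add */
          int old = __atomic_fetch_add(&v, 3, __ATOMIC_RELAXED);
          assert(old == 5 && v == 8);

          /* OP-RETURN: returns the value *after* the add */
          int new_val = __atomic_add_fetch(&v, 3, __ATOMIC_RELAXED);
          assert(new_val == 11 && v == 11);

          /* for an irreversible op like OR, only FETCH-OP preserves the
           * old bits; the post-OR value cannot be inverted */
          int prev = __atomic_fetch_or(&v, 0x4, __ATOMIC_RELAXED);
          assert(prev == 11);
          return 0;
      }
      ```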
  2. 17 Feb 2016, 3 commits
    • powerpc: atomic: Implement acquire/release/relaxed variants for cmpxchg · 56c08e6d
      Committed by Boqun Feng
      Implement cmpxchg{,64}_relaxed and atomic{,64}_cmpxchg_relaxed, based on
      which _release variants can be built.
      
      To avoid superfluous barriers in the _acquire variants, we implement
      these operations in assembly rather than using __atomic_op_acquire()
      to build them automatically.
      
      For the same reason, we keep the assembly implementation of fully
      ordered cmpxchg operations.
      
      However, we don't do the same for _release, because that would require
      putting barriers in the middle of the ll/sc loops, which is probably a
      bad idea.
      
      Note cmpxchg{,64}_relaxed and atomic{,64}_cmpxchg_relaxed are not
      compiler barriers.
      Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      56c08e6d
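      The success/failure semantics of a relaxed cmpxchg can be sketched
      with the GCC builtin (illustration only; the commit's actual
      implementation is powerpc ll/sc assembly):

      ```c
      #include <assert.h>

      int main(void)
      {
          int v = 1, expected = 1;

          /* matching old value: the exchange happens */
          int ok = __atomic_compare_exchange_n(&v, &expected, 2, 0,
                                               __ATOMIC_RELAXED, __ATOMIC_RELAXED);
          assert(ok && v == 2);

          /* stale expectation: no store occurs, and 'expected' is
           * updated to the value actually observed */
          expected = 1;
          ok = __atomic_compare_exchange_n(&v, &expected, 3, 0,
                                           __ATOMIC_RELAXED, __ATOMIC_RELAXED);
          assert(!ok && expected == 2 && v == 2);
          return 0;
      }
      ```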
    • powerpc: atomic: Implement acquire/release/relaxed variants for xchg · 26760fc1
      Committed by Boqun Feng
      Implement xchg{,64}_relaxed and atomic{,64}_xchg_relaxed. Based on
      these _relaxed variants, the release/acquire variants and the fully
      ordered versions can be built.
      
      Note that xchg{,64}_relaxed and atomic{,64}_xchg_relaxed are not
      compiler barriers.
      Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      26760fc1
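      A relaxed xchg returns the old value with no ordering implied; a
      minimal sketch using the GCC builtin as a stand-in:

      ```c
      #include <assert.h>

      int main(void)
      {
          int v = 1;

          /* relaxed xchg: stores the new value, returns the old one,
           * and is neither a memory nor a compiler barrier */
          int old = __atomic_exchange_n(&v, 7, __ATOMIC_RELAXED);
          assert(old == 1 && v == 7);
          return 0;
      }
      ```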
    • powerpc: atomic: Implement atomic{,64}_*_return_* variants · dc53617c
      Committed by Boqun Feng
      On powerpc, acquire and release semantics can be achieved with
      lightweight barriers ("lwsync" and "ctrl+isync"), which can be used
      to implement __atomic_op_{acquire,release}.
      
      For release semantics, we only need to ensure that all memory accesses
      issued beforehand take effect before the -store- part of the atomic,
      so "lwsync" is all we need. On platforms without "lwsync", "sync"
      should be used instead. Therefore __atomic_op_release() uses
      PPC_RELEASE_BARRIER.
      
      For acquire semantics, "lwsync" is likewise all we need, for the
      analogous reason. However, on platforms without "lwsync" we can use
      "isync" rather than "sync" as the acquire barrier. Therefore
      __atomic_op_acquire() uses PPC_ACQUIRE_BARRIER, which is barrier() on
      UP, "lwsync" where available, and "isync" otherwise.
      
      Implement atomic{,64}_{add,sub,inc,dec}_return_relaxed, and build other
      variants with these helpers.
      Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      dc53617c
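      The build-up the commit describes, a _relaxed op plus a barrier, can
      be sketched in portable C. The fences stand in for
      PPC_ACQUIRE_BARRIER/PPC_RELEASE_BARRIER; the helper names are
      illustrative, not the kernel's:

      ```c
      #include <assert.h>

      static int atomic_add_return_relaxed(int *v, int i)
      {
          return __atomic_add_fetch(v, i, __ATOMIC_RELAXED);
      }

      /* acquire variant: relaxed op followed by an acquire barrier
       * (on powerpc: lwsync, or isync where lwsync is unavailable) */
      static int atomic_add_return_acquire(int *v, int i)
      {
          int ret = atomic_add_return_relaxed(v, i);
          __atomic_thread_fence(__ATOMIC_ACQUIRE);
          return ret;
      }

      /* release variant: release barrier (lwsync, else sync) before the op */
      static int atomic_add_return_release(int *v, int i)
      {
          __atomic_thread_fence(__ATOMIC_RELEASE);
          return atomic_add_return_relaxed(v, i);
      }

      int main(void)
      {
          int v = 1;
          assert(atomic_add_return_acquire(&v, 2) == 3);
          assert(atomic_add_return_release(&v, 2) == 5);
          assert(v == 5);
          return 0;
      }
      ```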
  3. 27 Jul 2015, 2 commits
  4. 14 Aug 2014, 1 commit
  5. 18 Apr 2014, 1 commit
  6. 09 Oct 2012, 1 commit
    • atomic: implement generic atomic_dec_if_positive() · e79bee24
      Committed by Shaohua Li
      The x86 implementation of atomic_dec_if_positive is quite generic, so make
      it available to all architectures.
      
      This is needed for "swap: add a simple detector for inappropriate swapin
      readahead".
      
      [akpm@linux-foundation.org: do the "#define foo foo" trick in the conventional manner]
      Signed-off-by: Shaohua Li <shli@fusionio.com>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Michal Simek <monstr@monstr.eu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e79bee24
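      The generic pattern is a cmpxchg loop that only stores when the
      decremented value stays non-negative. A minimal single-threaded
      sketch (assuming the usual return convention, the decremented value
      even when it is not stored):

      ```c
      #include <assert.h>

      /* sketch of atomic_dec_if_positive(): decrement only if the result
       * would be non-negative; always return the decremented value */
      static int dec_if_positive(int *v)
      {
          int c = *v, old, dec;

          for (;;) {
              dec = c - 1;
              if (dec < 0)
                  break;                      /* would go negative: no store */
              old = __sync_val_compare_and_swap(v, c, dec);
              if (old == c)
                  break;                      /* store succeeded */
              c = old;                        /* raced: retry with new value */
          }
          return dec;
      }

      int main(void)
      {
          int v = 2;
          assert(dec_if_positive(&v) == 1 && v == 1);
          assert(dec_if_positive(&v) == 0 && v == 0);
          assert(dec_if_positive(&v) == -1 && v == 0);  /* -1 not stored */
          return 0;
      }
      ```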
  7. 29 Mar 2012, 1 commit
  8. 07 Mar 2012, 1 commit
  9. 17 Nov 2011, 1 commit
    • powerpc: Fix atomic_xxx_return barrier semantics · b97021f8
      Committed by Benjamin Herrenschmidt
      The Documentation/memory-barriers.txt document requires that atomic
      operations that return a value act as a memory barrier both before
      and after the actual atomic operation.
      
      Our current implementation doesn't guarantee this. More specifically,
      while a load following the isync cannot be issued before the stwcx.
      has completed, that completion doesn't architecturally mean that the
      result of the stwcx. (or any previous stores, for that matter) is
      visible to other processors; typically, the other processors' L1
      caches can still hold the old value.
      
      This has caused an actual crash in RCU torture testing on POWER7.
      
      This fixes it by changing those atomic ops to use new macros, the
      ATOMIC_ENTRY and ATOMIC_EXIT barriers, instead of the RELEASE/ACQUIRE
      barriers; these are defined as lwsync and sync respectively.
      
      I haven't had a chance to measure the performance impact (or rather,
      what I measured with kernel compiles is in the noise; I have yet to
      find a more precise benchmark).
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      b97021f8
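      The ordering this fix establishes can be sketched as an entry
      barrier, the bare atomic, then an exit barrier. The portable fences
      below stand in for lwsync/sync; the function name is illustrative:

      ```c
      #include <assert.h>

      /* sketch: fully ordered atomic_add_return() built from an
       * ATOMIC_ENTRY barrier (powerpc: lwsync) and an ATOMIC_EXIT
       * barrier (powerpc: sync) around the bare atomic operation */
      static int atomic_add_return_full(int *v, int i)
      {
          int ret;

          __atomic_thread_fence(__ATOMIC_SEQ_CST);  /* entry: orders prior accesses */
          ret = __atomic_add_fetch(v, i, __ATOMIC_RELAXED);
          __atomic_thread_fence(__ATOMIC_SEQ_CST);  /* exit: store visible before
                                                       any later access */
          return ret;
      }

      int main(void)
      {
          int v = 1;
          assert(atomic_add_return_full(&v, 2) == 3);
          assert(v == 3);
          return 0;
      }
      ```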
  10. 27 Jul 2011, 3 commits
  11. 17 Feb 2010, 1 commit
  12. 15 Jun 2009, 1 commit
  13. 12 Jun 2009, 1 commit
  14. 07 Jan 2009, 1 commit
  15. 19 Nov 2008, 1 commit
    • powerpc: Tell gcc when we clobber the carry in inline asm · efc3624c
      Committed by Paul Mackerras
      We have several instances of inline assembly code that use the addic
      or addic. instructions, but don't include XER in the list of clobbers.
      The addic and addic. instructions affect the carry bit, which is in
      the XER register.
      
      This adds "xer" to the list of clobbers for those inline asm
      statements that use addic or addic. and didn't already have it.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      efc3624c
  16. 04 Aug 2008, 1 commit
  17. 17 Aug 2007, 1 commit
  18. 09 May 2007, 2 commits
  19. 16 Feb 2007, 1 commit
  20. 22 Jan 2007, 1 commit
    • [POWERPC] atomic_dec_if_positive sign extension fix · 434f98c4
      Committed by Robert Jennings
      On 64-bit machines, if an atomic counter is explicitly set to a
      negative value, the atomic_dec_if_positive function will decrement and
      store the next smallest value in the atomic counter, contrary to its
      intended operation.
      
      The comparison to determine if the decrement will make the result
      negative was done by the "addic." instruction, which operates on a
      64-bit value, namely the zero-extended word loaded from the atomic
      variable.  This patch uses an explicit word compare (cmpwi) and
      changes the addic. to an addi (also changing "=&r" to "=&b" so that r0
      isn't used, and addi doesn't become li).
      
      This also fixes a bug for both 32-bit and 64-bit in that previously
      0x80000000 was considered positive, since the result after
      decrementing is positive.  Now it is considered negative.
      
      Also, I clarify the return value in the comments just to make it clear
      that the value returned is always the decremented value, even if that
      value is not stored back to the atomic counter.
      Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      434f98c4
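      The zero-extension hazard the patch fixes can be demonstrated in
      plain C:

      ```c
      #include <assert.h>
      #include <stdint.h>

      int main(void)
      {
          int32_t v = -1;              /* counter explicitly set negative */

          /* the 32-bit word is loaded zero-extended, as lwarx does */
          uint64_t loaded = (uint32_t)v;
          assert(loaded == 0xFFFFFFFFull);

          /* a 64-bit test (as the old addic. effectively performed) sees a
           * large positive number, so the decrement wrongly goes ahead */
          assert((int64_t)loaded - 1 >= 0);

          /* a 32-bit signed compare (cmpwi) sees the true sign */
          assert((int32_t)loaded < 0);
          return 0;
      }
      ```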
  21. 09 Jul 2006, 1 commit
  22. 24 Feb 2006, 1 commit
  23. 13 Jan 2006, 2 commits
  24. 10 Jan 2006, 1 commit
  25. 07 Jan 2006, 1 commit
  26. 14 Nov 2005, 2 commits
  27. 10 Nov 2005, 2 commits
    • powerpc: implement atomic64_t on ppc64 · 06a98dba
      Committed by Stephen Rothwell
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      06a98dba
    • [PATCH] powerpc: Consolidate asm compatibility macros · 3ddfbcf1
      Committed by David Gibson
      This patch consolidates macros used to generate assembly for
      compatibility across different CPUs or configs.  A new header,
      asm-powerpc/asm-compat.h contains the main compatibility macros.  It
      uses some preprocessor magic to make the macros suitable both for use
      in .S files, and in inline asm in .c files.  Headers (bitops.h,
      uaccess.h, atomic.h, bug.h) which had their own such compatibility
      macros are changed to use asm-compat.h.
      
      ppc_asm.h is now for use in .S files *only*, and a #error enforces
      that.  As such, we're a lot more careless about namespace pollution
      here than in asm-compat.h.
      
      While we're at it, this patch adds a call to the PPC405_ERR77 macro in
      futex.h which should have had it already, but didn't.
      
      Built and booted on pSeries, Maple and iSeries (ARCH=powerpc).  Built
      for 32-bit powermac (ARCH=powerpc) and Walnut (ARCH=ppc).
      Signed-off-by: David Gibson <dwg@au1.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      3ddfbcf1
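      The preprocessor trick behind asm-compat.h can be sketched as follows
      (simplified; the real header selects mnemonics with #ifdef
      __powerpc64__ and switches on __ASSEMBLY__ for .S files):

      ```c
      #include <assert.h>
      #include <string.h>

      /* in C files, expand the mnemonic to a string usable in inline asm;
       * in .S files the same macro would expand to the bare mnemonic */
      #define stringify_in_c(...)  #__VA_ARGS__ " "

      #ifdef __powerpc64__
      #define PPC_LL  stringify_in_c(ld)      /* 64-bit load */
      #else
      #define PPC_LL  stringify_in_c(lwz)     /* 32-bit load */
      #endif

      int main(void)
      {
          /* the macro yields a string the compiler can splice into asm("...") */
          assert(strcmp(stringify_in_c(ld), "ld ") == 0);
          assert(PPC_LL[0] == 'l');
          return 0;
      }
      ```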
  28. 25 Sep 2005, 1 commit
  29. 17 Apr 2005, 1 commit
    • Linux-2.6.12-rc2 · 1da177e4
      Committed by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!
      1da177e4