1. 29 March 2016 (2 commits)
  2. 17 February 2016 (1 commit)
  3. 13 February 2016 (1 commit)
    • atomic: Export fetch_or() · 5fd7a09c
      Authored by Frederic Weisbecker
      Export fetch_or() that's implemented and used internally by the
      scheduler. We are going to use it for NO_HZ so make it generally
      available.
      Reviewed-by: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Luiz Capitulino <lcapitulino@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Viresh Kumar <viresh.kumar@linaro.org>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      5fd7a09c
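      A minimal standalone sketch of what fetch_or() does, written with C11
      atomics rather than the kernel's own primitives (the function name below
      is illustrative): atomically OR a mask into a word and return the value
      the word held before the operation.

          #include <stdatomic.h>

          /* Illustrative stand-in for fetch_or(): OR 'mask' into *p and
           * return the old value.  The compare-exchange loop retries until
           * the update lands without a racing modification. */
          static unsigned long fetch_or_sketch(_Atomic unsigned long *p,
                                               unsigned long mask)
          {
              unsigned long old = atomic_load(p);

              /* On failure, 'old' is refreshed with the current value. */
              while (!atomic_compare_exchange_weak(p, &old, old | mask))
                  ;

              return old;
          }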
  4. 04 November 2015 (1 commit)
    • atomic: remove all traces of READ_ONCE_CTRL() and atomic*_read_ctrl() · 105ff3cb
      Authored by Linus Torvalds
      This seems to be a mis-reading of how alpha memory ordering works, and
      is not backed up by the alpha architecture manual.  The helper functions
      don't do anything special on any other architectures, and the arguments
      that support them being safe on other architectures also argue that they
      are safe on alpha.
      
      Basically, the "control dependency" is between a previous read and a
      subsequent write that is dependent on the value read.  Even if the
      subsequent write is actually done speculatively, there is no way that
      such a speculative write could be made visible to other cpu's until it
      has been committed, which requires validating the speculation.
      
      Note that most weakly ordered architectures (very much including alpha)
      do not guarantee any ordering relationship between two loads that depend
      on each other on a control dependency:
      
          read A
          if (val == 1)
              read B
      
      because the conditional may be predicted, and the "read B" may be
      speculatively moved up to before reading the value A.  So we require the
      user to insert a smp_rmb() between the two accesses to be correct:
      
          read A;
          if (A == 1)
              smp_rmb()
              read B
      
      Alpha is further special in that it can break that ordering even if the
      *address* of B depends on the read of A, because the cacheline that is
      read later may be stale unless you have a memory barrier in between the
      pointer read and the read of the value behind a pointer:
      
          read ptr
          read offset(ptr)
      
      whereas all other weakly ordered architectures guarantee that the data
      dependency (as opposed to just a control dependency) will order the two
      accesses.  As a result, alpha needs a "smp_read_barrier_depends()" in
      between those two reads for them to be ordered.
      
      The control dependency that "READ_ONCE_CTRL()" and "atomic_read_ctrl()"
      had was a control dependency to a subsequent *write*, however, and
      nobody can finalize such a subsequent write without having actually done
      the read.  And were you to write such a value to a "stale" cacheline
      (the way the unordered reads came to be), that would seem to lose the
      write entirely.
      
      So the things that make alpha able to re-order reads even more
      aggressively than other weak architectures do not seem to be relevant
      for a subsequent write.  Alpha memory ordering may be strange, but
      there's no real indication that it is *that* strange.
      
      Also, the alpha architecture reference manual very explicitly talks
      about the definition of "Dependence Constraints" in section 5.6.1.7,
      where a preceding read dominates a subsequent write.
      
      Such a dependence constraint admittedly does not impose a BEFORE (alpha
      architecture term for globally visible ordering), but it does guarantee
      that there can be no "causal loop".  I don't see how you could avoid
      such a loop if another cpu could see the stored value and then impact
      the value of the first read.  Put another way: the read and the write
      could not be seen as being out of order wrt other cpus.
      
      So I do not see how these "x_ctrl()" functions can currently be necessary.
      
      I may have to eat my words at some point, but in the absence of clear
      proof that alpha actually needs this, or indeed even an explanation of
      how alpha could _possibly_ need it, I do not believe these functions are
      called for.
      
      And if it turns out that alpha really _does_ need a barrier for this
      case, that barrier still should not be "smp_read_barrier_depends()".
      We'd have to make up some new speciality barrier just for alpha, along
      with the documentation for why it really is necessary.
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul E McKenney <paulmck@us.ibm.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      105ff3cb
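      The two patterns the message contrasts, spelled out with the kernel's
      READ_ONCE()/WRITE_ONCE()/smp_rmb() primitives; the variables a, b and
      val are illustrative stand-ins for shared data.

          static int a, b;

          static void ctrl_dep_store(void)
          {
              /* Load -> dependent store: the control dependency alone is
               * enough, because the store cannot become visible to other
               * CPUs until the branch (and the load feeding it) resolves. */
              if (READ_ONCE(a) == 1)
                  WRITE_ONCE(b, 1);
          }

          static int ctrl_dep_load(void)
          {
              int val = 0;

              /* Load -> dependent load: the branch may be predicted and the
               * second read hoisted, so an explicit read barrier is needed. */
              if (READ_ONCE(a) == 1) {
                  smp_rmb();
                  val = READ_ONCE(b);
              }
              return val;
          }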
  5. 06 October 2015 (1 commit)
  6. 23 September 2015 (2 commits)
  7. 12 August 2015 (1 commit)
    • locking/atomics: Add _{acquire|release|relaxed}() variants of some atomic operations · 654672d4
      Authored by Will Deacon
      Whilst porting the generic qrwlock code over to arm64, it became
      apparent that any portable locking code needs finer-grained control of
      the memory-ordering guarantees provided by our atomic routines.
      
      In particular: xchg, cmpxchg, {add,sub}_return are often used in
      situations where full barrier semantics (currently the only option
      available) are not required. For example, when a reader increments a
      reader count to obtain a lock, checking the old value to see if a writer
      was present, only acquire semantics are strictly needed.
      
      This patch introduces three new ordering semantics for these operations:
      
        - *_relaxed: No ordering guarantees. This is similar to what we have
                     already for the non-return atomics (e.g. atomic_add).
      
        - *_acquire: ACQUIRE semantics, similar to smp_load_acquire.
      
        - *_release: RELEASE semantics, similar to smp_store_release.
      
      In memory-ordering speak, this means that the acquire/release semantics
      are RCpc as opposed to RCsc. Consequently a RELEASE followed by an
      ACQUIRE does not imply a full barrier, as already documented in
      memory-barriers.txt.
      
      Currently, all the new macros are conditionally mapped to the full-mb
      variants, however if the *_relaxed version is provided by the
      architecture, then the acquire/release variants are constructed by
      supplementing the relaxed routine with an explicit barrier.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Waiman.Long@hp.com
      Cc: paulmck@linux.vnet.ibm.com
      Link: http://lkml.kernel.org/r/1438880084-18856-2-git-send-email-will.deacon@arm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      654672d4
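      A simplified sketch of the fallback construction described in the last
      paragraph: when an architecture provides only the *_relaxed form, the
      acquire/release variants can be assembled from it plus an explicit
      barrier (macro names here are illustrative, not the exact in-tree ones).

          /* ACQUIRE: relaxed op first, then a barrier after it. */
          #define op_acquire_sketch(op, args...)                        \
          ({                                                            \
              typeof(op##_relaxed(args)) __ret = op##_relaxed(args);    \
              smp_mb__after_atomic();                                   \
              __ret;                                                    \
          })

          /* RELEASE: barrier first, then the relaxed op. */
          #define op_release_sketch(op, args...)                        \
          ({                                                            \
              smp_mb__before_atomic();                                  \
              op##_relaxed(args);                                       \
          })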
  8. 27 July 2015 (2 commits)
  9. 13 August 2014 (1 commit)
    • locking: Remove deprecated smp_mb__() barriers · 2e39465a
      Authored by Peter Zijlstra
      It's been a while and there are no in-tree users left, so remove the
      deprecated barriers.
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Chen, Gong <gong.chen@linux.intel.com>
      Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
      Cc: Joe Perches <joe@perches.com>
      Cc: John Sullivan <jsrhbz@kanargh.force9.co.uk>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
      Cc: Theodore Ts'o <tytso@mit.edu>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      2e39465a
  10. 18 April 2014 (1 commit)
    • arch: Prepare for smp_mb__{before,after}_atomic() · febdbfe8
      Authored by Peter Zijlstra
      Since the smp_mb__{before,after}*() ops are fundamentally dependent on
      how an arch can implement atomics it doesn't make sense to have 3
      variants of them. They must all be the same.
      
      Furthermore, the 3 variants suggest they're only valid for those 3
      atomic ops, while we have many more where they could be applied.
      
      So move away from
      smp_mb__{before,after}_{atomic,clear}_{dec,inc,bit}() and reduce the
      interface to just the two: smp_mb__{before,after}_atomic().
      
      This patch prepares the way by introducing default implementations in
      asm-generic/barrier.h that default to a full barrier and providing
      __deprecated inlines for the previous 6 barriers if they're not
      provided by the arch.
      
      This should allow for a mostly painless transition (lots of deprecation
      warnings in the interim).
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Link: http://lkml.kernel.org/n/tip-wr59327qdyi9mbzn6x937s4e@git.kernel.org
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: "Chen, Gong" <gong.chen@linux.intel.com>
      Cc: John Sullivan <jsrhbz@kanargh.force9.co.uk>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mauro Carvalho Chehab <m.chehab@samsung.com>
      Cc: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
      Cc: "Theodore Ts'o" <tytso@mit.edu>
      Cc: linux-arch@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      febdbfe8
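      The default described above amounts to roughly the following in
      asm-generic/barrier.h (a sketch, slightly simplified): if the
      architecture does not provide its own definition, the new barriers fall
      back to a full memory barrier.

          /* Architectures with ordered atomics may override these with
           * something cheaper; the generic fallback is a full barrier. */
          #ifndef smp_mb__before_atomic
          #define smp_mb__before_atomic()  smp_mb()
          #endif

          #ifndef smp_mb__after_atomic
          #define smp_mb__after_atomic()   smp_mb()
          #endif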
  11. 09 October 2012 (1 commit)
    • atomic: implement generic atomic_dec_if_positive() · e79bee24
      Authored by Shaohua Li
      The x86 implementation of atomic_dec_if_positive is quite generic, so make
      it available to all architectures.
      
      This is needed for "swap: add a simple detector for inappropriate swapin
      readahead".
      
      [akpm@linux-foundation.org: do the "#define foo foo" trick in the conventional manner]
      Signed-off-by: Shaohua Li <shli@fusionio.com>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Michal Simek <monstr@monstr.eu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e79bee24
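      The generic routine is essentially a cmpxchg() loop along these lines (a
      sketch based on the description above, not the verbatim in-tree code):
      decrement only if the result stays non-negative, and return the would-be
      new value, which is negative when no decrement happened.

          static inline int atomic_dec_if_positive_sketch(atomic_t *v)
          {
              int c = atomic_read(v);

              for (;;) {
                  int dec = c - 1;
                  int old;

                  if (dec < 0)            /* would go negative: leave it */
                      return dec;
                  old = atomic_cmpxchg(v, c, dec);
                  if (old == c)           /* our decrement was installed */
                      return dec;
                  c = old;                /* raced: retry with the fresh value */
              }
          }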
  12. 07 March 2012 (1 commit)
  13. 27 July 2011 (4 commits)
  14. 20 July 2011 (1 commit)
  15. 28 May 2011 (1 commit)
  16. 12 November 2010 (1 commit)
    • atomic: add atomic_inc_not_zero_hint() · 3f9d35b9
      Authored by Eric Dumazet
      Followup of perf tools session in Netfilter WorkShop 2010
      
      In the network stack we make heavy use of atomic_inc_not_zero() in
      contexts where we know the probable value of the atomic before the
      increment (2 for UDP sockets, for example).
      
      Using a special version of atomic_inc_not_zero() that takes this hint
      can help the processor use fewer bus transactions.
      
      On x86 (MESI protocol) for example, this avoids entering Shared state,
      because "lock cmpxchg" issues an RFO (Read For Ownership)
      
      akpm: Adds a new include/linux/atomic.h.  This means that new code should
      henceforth include linux/atomic.h and not asm/atomic.h.  The presence of
      include/linux/atomic.h will in fact cause checkpatch.pl to warn about use
      of asm/atomic.h.  The new include/linux/atomic.h becomes the place where
      arch-neutral atomic_t code should be placed.
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
      Cc: David Miller <davem@davemloft.net>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Nick Piggin <npiggin@kernel.dk>
      Reviewed-by: N"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Signed-off-by: NAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
      3f9d35b9
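      A sketch of the hinted helper under the description above (not the
      verbatim in-tree code): seed the cmpxchg() loop with the caller's guess
      so that, when the guess is right, the very first "lock cmpxchg" succeeds
      and no prior read of the cacheline is needed.

          static inline int atomic_inc_not_zero_hint_sketch(atomic_t *v, int hint)
          {
              int c = hint;

              if (!hint)                  /* no guess: use the plain helper */
                  return atomic_inc_not_zero(v);

              do {
                  int val = atomic_cmpxchg(v, c, c + 1);

                  if (val == c)
                      return 1;           /* guess matched: incremented */
                  c = val;                /* wrong guess: retry with observed value */
              } while (c);

              return 0;                   /* counter is zero: do not increment */
          }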