- 21 June 2018, 4 commits
-
-
Committed by Mark Rutland

Several architectures these days have a near-identical implementation based on atomic_read() and atomic_cmpxchg() which we can instead define in <linux/atomic.h>, so let's do so, using something close to the existing x86 implementation with try_cmpxchg().

Where an architecture provides its own atomic_fetch_add_unless(), it must define a preprocessor symbol for it. The instrumented atomics are updated accordingly.

Note that arch/arc's existing atomic_fetch_add_unless() had redundant barriers, as these are already present in its atomic_cmpxchg() implementation.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Palmer Dabbelt <palmer@sifive.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vineet Gupta <vgupta@synopsys.com>
Link: https://lore.kernel.org/lkml/20180621121321.4761-7-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
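As a rough illustration of the fallback pattern described above (a sketch of a cmpxchg-based atomic_fetch_add_unless(), close to but not necessarily identical with the code that landed in <linux/atomic.h>):

----
#ifndef atomic_fetch_add_unless
/*
 * Add @a to @v, unless @v is already @u; return the value of @v that
 * was observed before any addition took place.
 */
static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
{
	int c = atomic_read(v);

	do {
		if (unlikely(c == u))
			break;
	} while (!atomic_try_cmpxchg(v, &c, c + a));

	return c;
}
#endif
----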
-
Committed by Mark Rutland

We define a trivial fallback for atomic_inc_not_zero(), but don't do the same for atomic64_inc_not_zero(), leading most architectures to define the same boilerplate.

Let's add a fallback in <linux/atomic.h>, and remove the redundant implementations. Note that atomic64_add_unless() is always defined in <linux/atomic.h>, and promotes its arguments to the requisite types, so we need not do this explicitly.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Palmer Dabbelt <palmer@sifive.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/lkml/20180621121321.4761-6-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
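The fallback in question is essentially a one-liner; a sketch of what such a generic definition looks like, mirroring the existing 32-bit one:

----
#ifndef atomic64_inc_not_zero
/* Increment unless the counter is zero; returns true if it incremented. */
#define atomic64_inc_not_zero(v)	atomic64_add_unless((v), 1, 0)
#endif
----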
-
Committed by Mark Rutland

When atomic_inc_not_zero(v) isn't defined, <linux/atomic.h> will define it as falling back to atomic_add_unless((v), 1, 0), so there's no need for arch code to do so.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Palmer Dabbelt <palmer@sifive.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/lkml/20180621121321.4761-3-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Mark Rutland

While __atomic_add_unless() was originally intended as a building-block for atomic_add_unless(), it's now used in a number of places around the kernel. It's the only common atomic operation named __atomic*(), rather than atomic_*(), and for consistency it would be better named atomic_fetch_add_unless().

This lack of consistency is slightly confusing, and gets in the way of scripting atomics. Given that, let's clean things up and promote it to an official part of the atomics API, in the form of atomic_fetch_add_unless().

This patch converts definitions and invocations over to the new name, including the instrumented version, using the following script:

----
git grep -w __atomic_add_unless | while read line; do
	sed -i '{s/\<__atomic_add_unless\>/atomic_fetch_add_unless/}' "${line%%:*}";
done
git grep -w __arch_atomic_add_unless | while read line; do
	sed -i '{s/\<__arch_atomic_add_unless\>/arch_atomic_fetch_add_unless/}' "${line%%:*}";
done
----

Note that we do not have atomic{64,_long}_fetch_add_unless(), which will be introduced by later patches.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Palmer Dabbelt <palmer@sifive.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/lkml/20180621121321.4761-2-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 10 August 2017, 1 commit
-
-
Committed by Peter Zijlstra

Those architectures that have a special atomic_set() implementation also need a special atomic_set_release(), because for the very same reason WRITE_ONCE() is broken for them, smp_store_release() is too.

The vast majority are architectures with a spinlock-hash-based atomic implementation, except hexagon, which seems to have a hardware 'feature'.

The spinlock-based atomics should be SC, that is, none of them appear to place extra barriers in atomic_cmpxchg() or any of the other SC atomic primitives, and therefore seem to rely on their spinlock implementation being SC (I did not fully validate all that). Therefore, the normal atomic_set() is SC and can be used as atomic_set_release().

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Chris Metcalf <cmetcalf@mellanox.com> [for tile]
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: davem@davemloft.net
Cc: james.hogan@imgtec.com
Cc: jejb@parisc-linux.org
Cc: rkuo@codeaurora.org
Cc: vgupta@synopsys.com
Link: http://lkml.kernel.org/r/20170609110506.yod47flaav3wgoj5@hirez.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
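For such an architecture the fix amounts to aliasing the release form to its own atomic_set() rather than going through smp_store_release(); a minimal sketch of the idea (guard and macro placement may differ per arch):

----
/*
 * On a spinlock-hash based implementation atomic_set() already takes the
 * hashed lock, so it is sequentially consistent and thus strong enough
 * to serve as the release variant as well.
 */
#define atomic_set_release(v, i)	atomic_set((v), (i))
----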
-
- 15 April 2017, 1 commit
-
-
Committed by Noam Camus

Make ATOMIC_INIT available for all ARC platforms (including plat-eznps).

Cc: <stable@vger.kernel.org> # 4.9+
Signed-off-by: Noam Camus <noamca@mellanox.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
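For reference, ATOMIC_INIT() is the conventional static initializer for an atomic_t; a sketch of the usual definition together with a usage example:

----
#define ATOMIC_INIT(i)	{ (i) }

/* typical usage: a statically initialized reference counter */
static atomic_t refs = ATOMIC_INIT(1);
----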
-
- 01 October 2016, 2 commits
-
-
Committed by Noam Camus

Fix build breakage caused by the last changes to the generic atomic operations: add a couple of missing macros which are now mandatory.

Signed-off-by: Noam Camus <noamca@mellanox.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Vineet Gupta

The ARCv2 ISA provides 64-bit exclusive load/stores, so use them to implement the 64-bit atomics and elide the spinlock-based generic 64-bit atomics.

Boot tested with the atomic64 self-test (and God bless the person who wrote it; I realized my inline assembly is sloppy as hell).

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-snps-arc@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
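For context, the 64-bit exclusive pair on ARCv2 is LLOCKD/SCONDD; a rough sketch of what one such op can look like (mnemonics and operand constraints reproduced from memory of the ARC port, so treat the details as approximate):

----
static inline void atomic64_add(s64 a, atomic64_t *v)
{
	s64 val;

	__asm__ __volatile__(
	"1:				\n"
	"	llockd  %0, [%1]	\n"	/* 64-bit exclusive load          */
	"	add.f   %L0, %L0, %L2	\n"	/* add low word, set carry flag   */
	"	adc     %H0, %H0, %H2	\n"	/* add high word with carry       */
	"	scondd  %0, [%1]	\n"	/* 64-bit exclusive store         */
	"	bnz     1b		\n"	/* retry if the reservation broke */
	: "=&r"(val)
	: "r"(&v->counter), "ir"(a)
	: "cc");
}
----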
-
- 20 June 2016, 1 commit
-
-
Committed by Peter Zijlstra

Resolve the conflict between commits:

  fbffe892 ("locking/atomic, arch/arc: Implement atomic_fetch_{add,sub,and,andnot,or,xor}()")

and:

  ed6aefed ("Revert "ARCv2: spinlock/rwlock/atomics: Delayed retry of failed SCOND with exponential backoff"")

Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Nigel Topham <ntopham@synopsys.com>
Cc: Noam Camus <noamc@ezchip.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-snps-arc@lists.infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 16 June 2016, 2 commits
-
-
Committed by Peter Zijlstra

Since all architectures have this implemented natively now, remove this dead code.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-arch@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Peter Zijlstra

Implement the FETCH-OP atomic primitives. These are very similar to the existing OP-RETURN primitives we already have, except they return the value of the atomic variable _before_ modification.

This is especially useful for irreversible operations -- such as bitops (because it becomes impossible to reconstruct the state prior to modification).

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Vineet Gupta <vgupta@synopsys.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Noam Camus <noamc@ezchip.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-arch@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-snps-arc@lists.infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
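To make the OP-RETURN vs. FETCH-OP distinction concrete, a small usage sketch against the generic atomics API; the values in the comments follow from the initial value of 1:

----
static void fetch_op_demo(void)
{
	atomic_t v = ATOMIC_INIT(1);

	int after  = atomic_add_return(2, &v);	/* OP-RETURN: returns 3, v is now 3 */
	int before = atomic_fetch_add(2, &v);	/* FETCH-OP:  returns 3, v is now 5 */

	/*
	 * For an irreversible op like AND, only the fetch variant lets us
	 * recover which bits were set before the modification.
	 */
	int old = atomic_fetch_and(~0x1, &v);	/* old == 5, v is now 4 */

	(void)after; (void)before; (void)old;
}
----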
-
- 02 June 2016, 2 commits
-
-
Committed by Vineet Gupta

This reverts commit e78fdfef.

The issue was fixed in hardware in the HS2.1C release and there are no known external users of the affected RTL, so revert the whole delayed-retry series!

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Vineet Gupta

This reverts commit 10971638.

The issue was fixed in hardware in the HS2.1C release and there are no known external users of the affected RTL, so revert the whole delayed-retry series!

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
- 09 May 2016, 1 commit
-
-
Committed by Noam Camus

We need our own implementations since we lack LLSC support. Our extended ISA provides an optimized solution for all the 32-bit operations we see in these three headers.

Signed-off-by: Noam Camus <noamc@ezchip.com>
-
- 23 September 2015, 1 commit
-
-
Committed by Peter Zijlstra

This patch makes sure that atomic_{read,set}() are at least {READ,WRITE}_ONCE().

We already had the 'requirement' that atomic_read() should use ACCESS_ONCE(), and most archs had this, but a few were lacking. All are now converted to use READ_ONCE().

And, by a symmetry and general paranoia argument, upgrade atomic_set() to use WRITE_ONCE().

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: james.hogan@imgtec.com
Cc: linux-kernel@vger.kernel.org
Cc: oleg@redhat.com
Cc: will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
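On an architecture where plain loads and stores of the counter are already atomic, the converted accessors end up looking roughly like this (a sketch of the common pattern, not any one arch's exact header):

----
static inline int atomic_read(const atomic_t *v)
{
	/* READ_ONCE() stops the compiler from tearing or caching the load */
	return READ_ONCE(v->counter);
}

static inline void atomic_set(atomic_t *v, int i)
{
	/* WRITE_ONCE() likewise keeps the store a single, visible access */
	WRITE_ONCE(v->counter, i);
}
----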
-
- 07 August 2015, 1 commit
-
-
Committed by Vineet Gupta

The increment of the delay counter was 2 instructions: Arithmetic Shift Left (ASL) + set to 1 on overflow. This can be done in 1 instruction using ROtate Left (ROL).

Suggested-by: Nigel Topham <ntopham@synopsys.com>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
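Why a single rotate suffices: the delay counter always holds a single set bit, so rotating left both doubles it and wraps it back to 1 once the bit falls off the top. A C sketch of the equivalent update (the real code is ARC inline assembly):

----
static inline unsigned int backoff_next(unsigned int delay)
{
	/*
	 * 32-bit rotate-left by one: doubles the single-bit delay and
	 * restarts it at 1 when the bit would overflow out of bit 31.
	 */
	return (delay << 1) | (delay >> 31);
}
----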
-
- 04 August 2015, 3 commits
-
-
Committed by Vineet Gupta

This is to work around the llock/scond livelock.

HS38x4 could get into an LLOCK/SCOND livelock in case of multiple overlapping coherency transactions in the SCU. The exclusive line state keeps rotating among the contending cores, leading to a never-ending cycle. So break the cycle by deferring the retry of a failed exclusive access (SCOND).

The actual delay needed is a function of the number of contending cores as well as the unrelated coherency traffic from other cores. To keep the code simple, start off with a small delay of 1, which would suffice in most cases, and in case of contention double the delay. Eventually the delay is sufficient such that the coherency pipeline is drained, and thus a subsequent exclusive access would succeed.

Link: http://lkml.kernel.org/r/1438612568-28265-1-git-send-email-vgupta@synopsys.com
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
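Stripped of the inline assembly, the retry policy described above looks something like the sketch below; try_llock_scond() is a hypothetical stand-in for a single LLOCK/SCOND attempt, not a real kernel helper:

----
/* hypothetical: performs one LLOCK/modify/SCOND attempt, true on success */
extern bool try_llock_scond(atomic_t *v, int i);

static inline void atomic_op_with_backoff(int i, atomic_t *v)
{
	unsigned int delay = 1;			/* smallest useful delay */

	while (!try_llock_scond(v, i)) {
		unsigned int t;

		for (t = delay; t; t--)
			cpu_relax();		/* spin to let the SCU drain */

		delay <<= 1;			/* exponential backoff on contention */
	}
}
----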
-
Committed by Vineet Gupta

This reduces the diff in forthcoming patches and also helps to better understand the incremental changes to the inline asm.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Vineet Gupta

Extended testing of the quad-core configuration revealed that this fix was insufficient. Specifically, LTP open posix shm_op/23-1 would cause a hardware livelock in the llock/scond loop in update_cpu_load_active().

So remove this and make way for a proper workaround.

This reverts commit a5c8b52a.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
- 27 July 2015, 3 commits
-
-
Committed by Peter Zijlstra

Move the now-generic definitions of atomic_{set,clear}_mask() into linux/atomic.h to avoid endless and pointless repetition.

Also, provide an atomic_andnot() wrapper for those few archs that can implement that.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
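A sketch of what those now-generic wrappers amount to once expressed via the new logic ops (close to, but not necessarily identical with, what landed in <linux/atomic.h>):

----
static inline void atomic_clear_mask(unsigned int mask, atomic_t *v)
{
	atomic_andnot(mask, v);		/* v &= ~mask */
}

static inline void atomic_set_mask(unsigned int mask, atomic_t *v)
{
	atomic_or(mask, v);		/* v |= mask */
}
----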
-
Committed by Peter Zijlstra

Implement the atomic logic ops -- atomic_{or,xor,and}. These will replace the atomic_{set,clear}_mask functions that are available on some archs.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Peter Zijlstra

Implement the atomic logic ops -- atomic_{or,xor,and}. These will replace the atomic_{set,clear}_mask functions that are available on some archs.

Acked-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
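Assuming an ARC-style ATOMIC_OP(op, c_op, asm_op) template (the CPP-macro folding introduced by the August 2014 rework further down this log), adding the logic ops reduces to new instantiations, roughly:

----
/*
 * Each line stamps out atomic_<op>(): the C operator is consumed by the
 * spinlock fallback, the mnemonic by the LLOCK/SCOND flavor.
 */
ATOMIC_OP(and, &=, and)
ATOMIC_OP(or,  |=, or)
ATOMIC_OP(xor, ^=, xor)
----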
-
- 25 June 2015, 2 commits
-
-
Committed by Vineet Gupta

A quad-core SMP build could get into a hardware livelock with concurrent LLOCK/SCOND. Work around that by adding a PREFETCHW, which is serialized by the SCU (System Coherency Unit). It brings the cache line into the Exclusive state and makes the others invalidate their lines. This gives the winner enough time to complete the LLOCK/SCOND before the others can get the line back.

The prefetchw in the ll/sc loop is not nice, but this is the only software workaround for the current version of the RTL.

Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
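A sketch of the shape of the workaround for a single op; the ARC inline assembly is reproduced from memory, so take the exact syntax and constraints as approximate:

----
static inline void atomic_add(int i, atomic_t *v)
{
	unsigned int val;

	__asm__ __volatile__(
	"1:	prefetchw [%1]		\n"	/* pull the line in Exclusive state  */
	"	llock   %0, [%1]	\n"	/* exclusive load of the counter     */
	"	add     %0, %0, %2	\n"
	"	scond   %0, [%1]	\n"	/* conditional store                 */
	"	bnz     1b		\n"	/* retry if the reservation was lost */
	: "=&r"(val)
	: "r"(&v->counter), "ir"(i)
	: "cc");
}
----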
-
Committed by Vineet Gupta

- arch_spin_lock/unlock were lacking the ACQUIRE/RELEASE barriers.
  Since ARCv2 only provides load/load, store/store and all/all, we need
  the full barrier.

- LLOCK/SCOND based atomics, bitops and cmpxchg, which return modified
  values, were lacking the explicit smp barriers.

- Non-LLOCK/SCOND variants don't need the explicit barriers, since that
  is implicitly provided by the spinlocks used to implement the critical
  section (the spinlock barriers in turn are also fixed in this commit,
  as explained above).

Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: stable@vger.kernel.org
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
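For the value-returning LL/SC ops the fix boils down to bracketing the exclusive sequence with full barriers; a sketch of the pattern, with the LL/SC loop elided into a hypothetical helper rather than the real inline assembly:

----
/* hypothetical: the raw LLOCK/SCOND retry loop, with no ordering of its own */
extern int __arc_llsc_add_return(int i, atomic_t *v);

static inline int atomic_add_return(int i, atomic_t *v)
{
	int ret;

	smp_mb();	/* order prior accesses before the atomic op  */

	ret = __arc_llsc_add_return(i, v);

	smp_mb();	/* order the atomic op before later accesses  */

	return ret;
}
----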
-
- 10 May 2015, 1 commit
-
-
Committed by Vineet Gupta

Fixes: f7d11e93 ("locking,arch,arc: Fold atomic_ops")
Cc: <stable@vger.kernel.org> # 3.18
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
- 13 October 2014, 1 commit
-
-
Committed by Vineet Gupta

Verified by doing a "make headers_install", as none of these files are exported to userspace.
-
- 14 August 2014, 1 commit
-
-
Committed by Peter Zijlstra

Many of the atomic op implementations are the same except for one instruction; fold the lot into a few CPP macros and reduce LoC.

This also prepares for easy addition of new ops.

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Link: http://lkml.kernel.org/r/20140508135851.886055622@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
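The folding idea in miniature, shown for the spinlock-protected (non-LL/SC) flavor; the lock name here is illustrative, and the asm_op parameter is only consumed by the LL/SC flavor (not shown):

----
static DEFINE_RAW_SPINLOCK(atomic_hash_lock);	/* illustrative; ARC uses its own lock */

/* One template stamps out atomic_add(), atomic_sub(), ... per line below */
#define ATOMIC_OP(op, c_op, asm_op)					\
static inline void atomic_##op(int i, atomic_t *v)			\
{									\
	unsigned long flags;						\
									\
	raw_spin_lock_irqsave(&atomic_hash_lock, flags);		\
	v->counter c_op i;						\
	raw_spin_unlock_irqrestore(&atomic_hash_lock, flags);		\
}

ATOMIC_OP(add, +=, add)
ATOMIC_OP(sub, -=, sub)
----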
-
- 18 April 2014, 1 commit
-
-
Committed by Peter Zijlstra

The arc mb() implementation is a compiler barrier(), therefore it all doesn't matter one way or the other. Simply remove the existing definitions and use whatever is generated by the defaults.

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/n/tip-ua48a59wri3ybz1rz8i7uvbr@git.kernel.org
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 12 January 2014, 1 commit
-
-
Committed by Peter Zijlstra

Move the barrier functions that depend on the atomic implementation into the atomic implementation.

Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Vineet Gupta <vgupta@synopsys.com> [for arch/arc bits]
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/r/20131213150640.786183683@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 11 February 2013, 1 commit
-
-
Committed by Vineet Gupta

This covers the UP / SMP case with no hardware assist for atomic r-m-w, as well as the ARC700 LLOCK/SCOND-insns-based variant.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-