1. 23 Sep 2015, 1 commit
    • atomic, arch: Audit atomic_{read,set}() · 62e8a325
      Committed by Peter Zijlstra
      This patch makes sure that atomic_{read,set}() are at least
      {READ,WRITE}_ONCE().
      
      We already had the 'requirement' that atomic_read() should use
      ACCESS_ONCE(), and most archs had this, but a few were lacking.
      All are now converted to use READ_ONCE().
      
      And, by a symmetry and general paranoia argument, upgrade atomic_set()
      to use WRITE_ONCE().
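      
      A minimal sketch of what the converted accessors look like after this
      change, assuming the usual atomic_t layout with a single 'counter'
      member (illustrative only, not a verbatim copy of any arch header):
      
      	static inline int atomic_read(const atomic_t *v)
      	{
      		/* volatile load: the compiler may not tear, fuse or re-read it */
      		return READ_ONCE(v->counter);
      	}
      
      	static inline void atomic_set(atomic_t *v, int i)
      	{
      		/* volatile store, the symmetric counterpart to atomic_read() */
      		WRITE_ONCE(v->counter, i);
      	}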
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: james.hogan@imgtec.com
      Cc: linux-kernel@vger.kernel.org
      Cc: oleg@redhat.com
      Cc: will.deacon@arm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  2. 27 Jul 2015, 1 commit
  3. 22 Apr 2015, 1 commit
    • x86/asm: Always inline atomics · 3462bd2a
      Committed by Hagen Paul Pfeifer
      During some code analysis I realized that atomic_add(), atomic_sub()
      and friends are not necessarily inlined AND that each function
      is defined multiple times:
      
      	atomic_inc:          544 duplicates
      	atomic_dec:          215 duplicates
      	atomic_dec_and_test: 107 duplicates
      	atomic64_inc:         38 duplicates
      	[...]
      
      Each definition is exactly identical, e.g.:
      
      	ffffffff813171b8 <atomic_add>:
      	55         push   %rbp
      	48 89 e5   mov    %rsp,%rbp
      	f0 01 3e   lock add %edi,(%rsi)
      	5d         pop    %rbp
      	c3         retq
      
      Each definition, in turn, has one or more call sites (of course):
      
      	ffffffff81317c78: e8 3b f5 ff ff  callq  ffffffff813171b8 <atomic_add> [...]
      	ffffffff8131a062: e8 51 d1 ff ff  callq  ffffffff813171b8 <atomic_add> [...]
      	ffffffff8131a190: e8 23 d0 ff ff  callq  ffffffff813171b8 <atomic_add> [...]
      
      The other way around would be to remove the static linkage - but
      I prefer enforced inlining here.
      
      	Before:
      	  text     data	  bss      dec       hex     filename
      	  81467393 19874720 20168704 121510817 73e1ba1 vmlinux.orig
      
      	After:
      	  text     data     bss      dec       hex     filename
      	  81461323 19874720 20168704 121504747 73e03eb vmlinux.inlined
      
      Yes, the inlining here makes the kernel even smaller! ;)
      
      Linus further observed:
      
      	"I have this memory of having seen that before - the size
      	 heuristics for gcc getting confused by inlining.
      	 [...]
      
      	 It might be a good idea to mark things that are basically just
      	 wrappers around a single (or a couple of) asm instruction to be
      	 always_inline."
      Signed-off-by: Hagen Paul Pfeifer <hagen@jauu.net>
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1429565231-4609-1-git-send-email-hagen@jauu.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  4. 03 Oct 2014, 1 commit
  5. 05 Dec 2013, 1 commit
  6. 25 Sep 2013, 1 commit
  7. 30 Aug 2011, 1 commit
  8. 27 Jul 2011, 1 commit
  9. 17 May 2010, 1 commit
  10. 02 Mar 2010, 1 commit
  11. 08 Jan 2010, 1 commit