1. 08 Jun 2017, 1 commit
  2. 12 Jan 2017, 1 commit
  3. 04 Aug 2015, 1 commit
  4. 21 Jul 2015, 1 commit
    • locking/spinlocks: Force inlining of spinlock ops · 3490565b
      Authored by Denys Vlasenko
      With both gcc 4.7.2 and 4.9.2, sometimes GCC mysteriously
      doesn't inline very small functions we expect to be inlined.
      See:
      
          https://gcc.gnu.org/bugzilla/show_bug.cgi?id=66122
      
      In particular, with this config:
      
         http://busybox.net/~vda/kernel_config
      
      there are more than a thousand copies of tiny spinlock-related
      functions:
      
        $ nm --size-sort vmlinux | grep -iF ' t ' | uniq -c | grep -v '^ *1 ' | sort -rn | grep ' spin'
          473 000000000000000b t spin_unlock_irqrestore
          292 000000000000000b t spin_unlock
          215 000000000000000b t spin_lock
          134 000000000000000b t spin_unlock_irq
          130 000000000000000b t spin_unlock_bh
          120 000000000000000b t spin_lock_irq
          106 000000000000000b t spin_lock_bh
      
      Disassembly:
      
       ffffffff81004720 <spin_lock>:
       ffffffff81004720:       55                      push   %rbp
       ffffffff81004721:       48 89 e5                mov    %rsp,%rbp
       ffffffff81004724:       e8 f8 4e e2 02          callq <_raw_spin_lock>
       ffffffff81004729:       5d                      pop    %rbp
       ffffffff8100472a:       c3                      retq
      
      This patch fixes this via s/inline/__always_inline/ in
      spinlock.h. This decreases vmlinux by about 40k:
      
            text     data      bss       dec     hex filename
        82375570 22255544 20627456 125258570 7774b4a vmlinux.before
        82335059 22255416 20627456 125217931 776ac8b vmlinux
      Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Bart Van Assche <bvanassche@acm.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Thomas Graf <tgraf@suug.ch>
      Link: http://lkml.kernel.org/r/1436812263-15243-1-git-send-email-dvlasenk@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  5. 28 May 2015, 1 commit
    • documentation: memory-barriers: Fix smp_mb__before_spinlock() semantics · d956028e
      Authored by Will Deacon
      Our current documentation claims that, when followed by an ACQUIRE,
      smp_mb__before_spinlock() orders prior loads against subsequent loads
      and stores, which isn't the intent.  This commit therefore fixes the
      documentation to state that this sequence orders only prior stores
      against subsequent loads and stores.
      
      In addition, the original intent of smp_mb__before_spinlock() was to only
      order prior loads against subsequent stores; however, people have started
      using it as if it ordered prior loads against subsequent loads and stores.
      This commit therefore also updates smp_mb__before_spinlock()'s header
      comment to reflect this new reality.
      
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  6. 04 Jan 2015, 1 commit
  7. 13 Aug 2014, 1 commit
    • locking/spinlocks: Always evaluate the second argument of spin_lock_nested() · 4999201a
      Authored by Bart Van Assche
      Evaluating a macro argument only if certain configuration options
      have been selected is confusing and error-prone. Hence always
      evaluate the second argument of spin_lock_nested().
      
      An intentional side effect of this patch is that it avoids the
      following warning, reported for netif_addr_lock_nested() when
      building with CONFIG_DEBUG_LOCK_ALLOC=n and with W=1:
      
        include/linux/netdevice.h: In function 'netif_addr_lock_nested':
        include/linux/netdevice.h:2865:6: warning: variable 'subclass' set but not used [-Wunused-but-set-variable]
          int subclass = SINGLE_DEPTH_NESTING;
              ^
      Signed-off-by: Bart Van Assche <bvanassche@acm.org>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Link: http://lkml.kernel.org/r/53E4A7F8.1040700@acm.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  8. 16 Dec 2013, 1 commit
  9. 13 Aug 2013, 1 commit
    • sched: fix the theoretical signal_wake_up() vs schedule() race · e0acd0a6
      Authored by Oleg Nesterov
      This is only theoretical, but after try_to_wake_up(p) was changed
      to check p->state under p->pi_lock, code like
      
      	__set_current_state(TASK_INTERRUPTIBLE);
      	schedule();
      
      can miss a signal. This is the special case of wait-for-condition:
      it relies on the try_to_wake_up()/schedule() interaction and thus
      does not need an mb() between __set_current_state() and the
      if (signal_pending) check.
      
      However, this __set_current_state() can move into the critical
      section protected by rq->lock. Now that try_to_wake_up() takes
      another lock, we need to ensure that it can't be reordered with the
      "if (signal_pending(current))" check inside that section.
      
      The patch is actually a one-liner: it simply adds smp_wmb() before
      spin_lock_irq(rq->lock). This is what try_to_wake_up() already
      does for the same reason.
      
      We turn this wmb() into the new helper, smp_mb__before_spinlock(),
      for better documentation and to allow the architectures to change
      the default implementation.
      
      While at it, kill smp_mb__after_lock(), it has no callers.
      
      Perhaps we can also add smp_mb__before/after_spinunlock() for
      prepare_to_wait().
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  10. 29 Mar 2012, 2 commits
    • Remove all #inclusions of asm/system.h · 9ffc93f2
      Authored by David Howells
      Remove all #inclusions of asm/system.h preparatory to splitting and killing
      it.  Performed with the following command:
      
      perl -p -i -e 's!^#\s*include\s*<asm/system[.]h>.*\n!!' `grep -Irl '^#\s*include\s*<asm/system[.]h>' *`
      Signed-off-by: David Howells <dhowells@redhat.com>
    • Add #includes needed to permit the removal of asm/system.h · 96f951ed
      Authored by David Howells
      asm/system.h is a cause of circular dependency problems because it contains
      commonly used primitive stuff like barrier definitions and uncommonly used
      stuff like switch_to() that might require MMU definitions.
      
      asm/system.h has been disintegrated by this point on all arches into the
      following common segments:
      
       (1) asm/barrier.h
      
           Moved memory barrier definitions here.
      
       (2) asm/cmpxchg.h
      
           Moved xchg() and cmpxchg() here.  #included in asm/atomic.h.
      
       (3) asm/bug.h
      
           Moved die() and similar here.
      
       (4) asm/exec.h
      
           Moved arch_align_stack() here.
      
       (5) asm/elf.h
      
           Moved AT_VECTOR_SIZE_ARCH here.
      
       (6) asm/switch_to.h
      
           Moved switch_to() here.
      Signed-off-by: David Howells <dhowells@redhat.com>
  11. 29 Feb 2012, 1 commit
    • spinlock: macroize assert_spin_locked to avoid bug.h dependency · 4ebc1b4b
      Authored by Paul Gortmaker
      In spinlock_api_smp.h we find a define for assert_raw_spin_locked
      (which uses BUG_ON). Then assert_spin_locked (as an inline) uses
      it, meaning we need bug.h. But rather than put linux/bug.h in such
      a highly used file as spinlock.h, we can just make the un-raw
      version also a macro. Then the required bug.h presence is limited
      to just those few files that are actually doing the assert testing.
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
  12. 27 Jul 2011, 1 commit
  13. 31 Mar 2011, 1 commit
  14. 07 Oct 2010, 1 commit
    • Fix IRQ flag handling naming · df9ee292
      Authored by David Howells
      Fix the IRQ flag handling naming.  In linux/irqflags.h under one configuration,
      it maps:
      
      	local_irq_enable() -> raw_local_irq_enable()
      	local_irq_disable() -> raw_local_irq_disable()
      	local_irq_save() -> raw_local_irq_save()
      	...
      
      and under the other configuration, it maps:
      
      	raw_local_irq_enable() -> local_irq_enable()
      	raw_local_irq_disable() -> local_irq_disable()
      	raw_local_irq_save() -> local_irq_save()
      	...
      
      This is quite confusing.  There should be one set of names expected of the
      arch, and this should be wrapped to give another set of names that are expected
      by users of this facility.
      
      Change this to have the arch provide:
      
      	flags = arch_local_save_flags()
      	flags = arch_local_irq_save()
      	arch_local_irq_restore(flags)
      	arch_local_irq_disable()
      	arch_local_irq_enable()
      	arch_irqs_disabled_flags(flags)
      	arch_irqs_disabled()
      	arch_safe_halt()
      
      Then linux/irqflags.h wraps these to provide:
      
      	raw_local_save_flags(flags)
      	raw_local_irq_save(flags)
      	raw_local_irq_restore(flags)
      	raw_local_irq_disable()
      	raw_local_irq_enable()
      	raw_irqs_disabled_flags(flags)
      	raw_irqs_disabled()
      	raw_safe_halt()
      
      with type checking on the flags 'arguments', and then wraps those to provide:
      
      	local_save_flags(flags)
      	local_irq_save(flags)
      	local_irq_restore(flags)
      	local_irq_disable()
      	local_irq_enable()
      	irqs_disabled_flags(flags)
      	irqs_disabled()
      	safe_halt()
      
      with tracing included if enabled.
      
      The arch functions can now all be inline functions rather than some of them
      having to be macros.
      
      Signed-off-by: David Howells <dhowells@redhat.com> [X86, FRV, MN10300]
      Signed-off-by: Chris Metcalf <cmetcalf@tilera.com> [Tile]
      Signed-off-by: Michal Simek <monstr@monstr.eu> [Microblaze]
      Tested-by: Catalin Marinas <catalin.marinas@arm.com> [ARM]
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com> [AVR]
      Acked-by: Tony Luck <tony.luck@intel.com> [IA-64]
      Acked-by: Hirokazu Takata <takata@linux-m32r.org> [M32R]
      Acked-by: Greg Ungerer <gerg@uclinux.org> [M68K/M68KNOMMU]
      Acked-by: Ralf Baechle <ralf@linux-mips.org> [MIPS]
      Acked-by: Kyle McMartin <kyle@mcmartin.ca> [PA-RISC]
      Acked-by: Paul Mackerras <paulus@samba.org> [PowerPC]
      Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com> [S390]
      Acked-by: Chen Liqin <liqin.chen@sunplusct.com> [Score]
      Acked-by: Matt Fleming <matt@console-pimps.org> [SH]
      Acked-by: David S. Miller <davem@davemloft.net> [Sparc]
      Acked-by: Chris Zankel <chris@zankel.net> [Xtensa]
      Reviewed-by: Richard Henderson <rth@twiddle.net> [Alpha]
      Reviewed-by: Yoshinori Sato <ysato@users.sourceforge.jp> [H8300]
      Cc: starvik@axis.com [CRIS]
      Cc: jesper.nilsson@axis.com [CRIS]
      Cc: linux-cris-kernel@axis.com
  15. 13 Mar 2010, 1 commit
  16. 03 Mar 2010, 1 commit
  17. 15 Dec 2009, 7 commits
  18. 24 Nov 2009, 2 commits
  19. 01 Sep 2009, 2 commits
    • locking: Simplify spinlock inlining · bb7bed08
      Authored by Heiko Carstens
      For !DEBUG_SPINLOCK && !PREEMPT && SMP the spin_unlock()
      functions were always inlined by using special defines which
      would call the __raw* functions.
      
      The out-of-line variants for these functions would be generated
      anyway.
      
      Use the new per unlock/locking variant mechanism to force
      inlining of the unlock functions like before. This is not a
      functional change, we just get rid of one additional way to
      force inlining.
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Horst Hartmann <horsth@linux.vnet.ibm.com>
      Cc: Christian Ehrhardt <ehrhardt@linux.vnet.ibm.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: David Miller <davem@davemloft.net>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Roman Zippel <zippel@linux-m68k.org>
      Cc: <linux-arch@vger.kernel.org>
      LKML-Reference: <20090831124418.848735034@de.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • locking: Move spinlock function bodies to header file · 69d0ee73
      Authored by Heiko Carstens
      Move spinlock function bodies to header file by creating a
      static inline version of each variant. Use the inline version
      on the out-of-line code.
      
      This shouldn't make any difference besides that the spinlock
      code can now be used to generate inlined spinlock code.
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Horst Hartmann <horsth@linux.vnet.ibm.com>
      Cc: Christian Ehrhardt <ehrhardt@linux.vnet.ibm.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: David Miller <davem@davemloft.net>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Roman Zippel <zippel@linux-m68k.org>
      Cc: <linux-arch@vger.kernel.org>
      LKML-Reference: <20090831124417.859022429@de.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  20. 10 Jul 2009, 1 commit
  21. 03 Apr 2009, 1 commit
  22. 10 Feb 2009, 1 commit
  23. 11 Aug 2008, 1 commit
  24. 26 Jul 2008, 1 commit
  25. 17 Apr 2008, 1 commit
  26. 12 Apr 2008, 1 commit
  27. 09 Feb 2008, 1 commit
  28. 30 Jan 2008, 1 commit
    • spinlock: lockbreak cleanup · 95c354fe
      Authored by Nick Piggin
      The break_lock data structure and code for spinlocks is quite nasty.
      Not only does it double the size of a spinlock but it changes locking to
      a potentially less optimal trylock.
      
      Put all of that under CONFIG_GENERIC_LOCKBREAK, and introduce a
      __raw_spin_is_contended that uses the lock data itself to determine whether
      there are waiters on the lock, to be used if CONFIG_GENERIC_LOCKBREAK is
      not set.
      
      Rename need_lockbreak to spin_needbreak, make it use spin_is_contended to
      decouple it from the spinlock implementation, and make it typesafe (rwlocks
      do not have any need_lockbreak sites -- why do they even get bloated up
      with that break_lock then?).
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  29. 17 Jul 2007, 1 commit
  30. 05 Mar 2007, 1 commit
    • [PATCH] timer/hrtimer: take per cpu locks in sane order · e81ce1f7
      Authored by Heiko Carstens
      Doing something like this on a two cpu system
      
        # echo 0 > /sys/devices/system/cpu/cpu0/online
        # echo 1 > /sys/devices/system/cpu/cpu0/online
        # echo 0 > /sys/devices/system/cpu/cpu1/online
      
      will give me this:
      
        =======================================================
        [ INFO: possible circular locking dependency detected ]
        2.6.21-rc2-g562aa1d4-dirty #7
        -------------------------------------------------------
        bash/1282 is trying to acquire lock:
         (&cpu_base->lock_key){.+..}, at: [<000000000005f17e>] hrtimer_cpu_notify+0xc6/0x240
      
        but task is already holding lock:
         (&cpu_base->lock_key#2){.+..}, at: [<000000000005f174>] hrtimer_cpu_notify+0xbc/0x240
      
        which lock already depends on the new lock.
      
      This happens because we have the following code in kernel/hrtimer.c:
      
        migrate_hrtimers(int cpu)
        [...]
        old_base = &per_cpu(hrtimer_bases, cpu);
        new_base = &get_cpu_var(hrtimer_bases);
        [...]
        spin_lock(&new_base->lock);
        spin_lock(&old_base->lock);
      
      Which means the spinlocks are taken in an order which depends on which cpu
      gets shut down from which other cpu. Therefore lockdep complains that there
      might be an ABBA deadlock. Since migrate_hrtimers() gets only called on
      cpu hotplug it's safe to assume that it isn't executed concurrently.
      
      The same problem exists in kernel/timer.c: migrate_timers().
      
      As pointed out by Christian Borntraeger one possible solution to avoid
      the locking order complaints would be to make sure that the locks are
      always taken in the same order. E.g. by taking the lock of the cpu with
      the lower number first.
      
      To achieve this we introduce two new spinlock functions double_spin_lock
      and double_spin_unlock which lock or unlock two locks in a given order.
      
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Roman Zippel <zippel@linux-m68k.org>
      Cc: John Stultz <johnstul@us.ibm.com>
      Cc: Christian Borntraeger <cborntra@de.ibm.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  31. 12 Feb 2007, 1 commit