1. 14 Aug 2014: 1 commit
  2. 21 Dec 2012: 1 commit
    • lib: atomic64: Initialize locks statically to fix early users · fcc16882
      Stephen Boyd authored
      The atomic64 library uses a handful of static spin locks to implement
      atomic 64-bit operations on architectures without support for atomic
      64-bit instructions.
      
      Unfortunately, the spinlocks are initialized in a pure initcall and that
      is too late for the vfs namespace code which wants to use atomic64
      operations before the initcall is run.
      
      This became a problem as of commit 8823c079: "vfs: Add setns support
      for the mount namespace".
      
      This leads to BUG messages such as:
      
        BUG: spinlock bad magic on CPU#0, swapper/0/0
         lock: atomic64_lock+0x240/0x400, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0
          do_raw_spin_lock+0x158/0x198
          _raw_spin_lock_irqsave+0x4c/0x58
          atomic64_add_return+0x30/0x5c
          alloc_mnt_ns.clone.14+0x44/0xac
          create_mnt_ns+0xc/0x54
          mnt_init+0x120/0x1d4
          vfs_caches_init+0xe0/0x10c
          start_kernel+0x29c/0x300
      
      coming out early during boot when spinlock debugging is enabled.
      
      Fix this by initializing the spinlocks statically at compile time.
      Reported-and-tested-by: Vaibhav Bedia <vaibhav.bedia@ti.com>
      Tested-by: Tony Lindgren <tony@atomide.com>
      Cc: Eric W. Biederman <ebiederm@xmission.com>
      Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
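
      The fix moves the lock setup from an initcall to compile time. A minimal
      sketch of that pattern, assuming the NR_LOCKS/padded-union layout that
      lib/atomic64.c is described as using (a sketch, not the verbatim diff):

        #include <linux/cache.h>
        #include <linux/spinlock.h>

        #define NR_LOCKS 16

        /*
         * Sketch: initialize every hash-bucket lock at compile time with
         * __RAW_SPIN_LOCK_UNLOCKED() so the locks are valid before any
         * initcall runs (e.g. for the early VFS namespace setup).
         */
        static union {
                raw_spinlock_t lock;
                char pad[L1_CACHE_BYTES];
        } atomic64_lock[NR_LOCKS] __cacheline_aligned_in_smp = {
                [0 ... (NR_LOCKS - 1)] = {
                        .lock = __RAW_SPIN_LOCK_UNLOCKED(atomic64_lock.lock),
                },
        };

        /* The pure initcall that previously ran raw_spin_lock_init() on
         * each entry becomes unnecessary and can be dropped. */
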
  3. 08 Mar 2012: 1 commit
  4. 14 Sep 2011: 1 commit
  5. 13 Sep 2011: 1 commit
    • locking, lib/atomic64: Annotate atomic64_lock::lock as raw · f59ca058
      Shan Hai authored
      The spinlock-protected atomic64 operations must be irq-safe, as they
      are used in hard interrupt context and cannot be preempted on -rt:
      
       NIP [c068b218] rt_spin_lock_slowlock+0x78/0x3a8
        LR [c068b1e0] rt_spin_lock_slowlock+0x40/0x3a8
       Call Trace:
        [eb459b90] [c068b1e0] rt_spin_lock_slowlock+0x40/0x3a8 (unreliable)
        [eb459c20] [c068bdb0] rt_spin_lock+0x40/0x98
        [eb459c40] [c03d2a14] atomic64_read+0x48/0x84
        [eb459c60] [c001aaf4] perf_event_interrupt+0xec/0x28c
        [eb459d10] [c0010138] performance_monitor_exception+0x7c/0x150
        [eb459d30] [c0014170] ret_from_except_full+0x0/0x4c
      
      So annotate it.
      
      In mainline this change documents the low-level nature of
      the lock - otherwise there's no functional difference. Lockdep
      and Sparse checking will work as usual.
      Signed-off-by: Shan Hai <haishan.bai@gmail.com>
      Reviewed-by: Yong Zhang <yong.zhang0@gmail.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
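
      On -rt a plain spinlock_t becomes a sleeping lock and must not be taken
      from hard interrupt context such as the PMU interrupt path in the trace
      above; a raw_spinlock_t always spins with interrupts disabled. A minimal
      sketch of the annotation, reduced to a single demo lock instead of the
      hashed array in lib/atomic64.c (the demo_* names are illustrative only):

        #include <linux/atomic.h>
        #include <linux/spinlock.h>

        /* Before: static DEFINE_SPINLOCK(demo_atomic64_lock);
         * on PREEMPT_RT that turns into a sleeping rtmutex. */
        static DEFINE_RAW_SPINLOCK(demo_atomic64_lock);

        static long long demo_atomic64_read(const atomic64_t *v)
        {
                unsigned long flags;
                long long val;

                /* Was spin_lock_irqsave(); the raw variant stays a true
                 * spinning lock, so it is safe in hard irq context on -rt. */
                raw_spin_lock_irqsave(&demo_atomic64_lock, flags);
                val = v->counter;
                raw_spin_unlock_irqrestore(&demo_atomic64_lock, flags);
                return val;
        }
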
  6. 27 Jul 2011: 1 commit
  7. 02 Mar 2010: 1 commit
  8. 30 Jul 2009: 1 commit
  9. 15 Jun 2009: 1 commit
    • lib: Provide generic atomic64_t implementation · 09d4e0ed
      Paul Mackerras authored
      Many processor architectures have no 64-bit atomic instructions, but
      we need atomic64_t in order to support the perf_counter subsystem.
      
      This adds an implementation of 64-bit atomic operations using hashed
      spinlocks to provide atomicity.  For each atomic operation, the address
      of the atomic64_t variable is hashed to an index into an array of 16
      spinlocks.  That spinlock is taken (with interrupts disabled) around the
      operation, which can then be coded non-atomically within the lock.
      
      On UP, all the spinlock manipulation goes away and we simply disable
      interrupts around each operation.  In fact gcc eliminates the whole
      atomic64_lock variable as well.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
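
      A condensed sketch of the hashed-spinlock scheme described above, in the
      spirit of lib/atomic64.c rather than a verbatim copy (the later commits
      in this log make the lock raw and give it a static initializer):

        #include <linux/atomic.h>
        #include <linux/cache.h>
        #include <linux/spinlock.h>

        #define NR_LOCKS 16

        /* One spinlock per hash bucket, padded to a cacheline to avoid
         * false sharing between buckets. */
        static union {
                spinlock_t lock;
                char pad[L1_CACHE_BYTES];
        } atomic64_lock[NR_LOCKS] __cacheline_aligned_in_smp;

        /* Hash the address of the atomic64_t down to one of the 16 buckets. */
        static inline spinlock_t *lock_addr(const atomic64_t *v)
        {
                unsigned long addr = (unsigned long)v;

                addr >>= L1_CACHE_SHIFT;
                addr ^= (addr >> 8) ^ (addr >> 16);
                return &atomic64_lock[addr & (NR_LOCKS - 1)].lock;
        }

        /* The 64-bit update itself is ordinary C; taking the bucket lock
         * with interrupts disabled is what provides atomicity.  On UP the
         * lock compiles away and only the irq disable/enable remains. */
        long long atomic64_add_return(long long a, atomic64_t *v)
        {
                unsigned long flags;
                spinlock_t *lock = lock_addr(v);
                long long val;

                spin_lock_irqsave(lock, flags);
                val = v->counter += a;
                spin_unlock_irqrestore(lock, flags);
                return val;
        }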