1. 01 Jul 2014, 15 commits
  2. 30 Jun 2014, 2 commits
    • ftrace: Add ftrace_rec_counter() macro to simplify the code · 0376bde1
      Steven Rostedt (Red Hat) authored
      The ftrace dynamic record has a flags element that also has a counter.
      Instead of hard-coding "rec->flags & ~FTRACE_FL_MASK" all over the
      place, use a macro instead.
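
      As a rough sketch, the macro boils down to the quoted expression (the
      name follows the subject line; the exact flag layout is an assumption):

          /* Hypothetical sketch: masking out the flag bits defined by
           * FTRACE_FL_MASK leaves the reference counter stored in flags. */
          #define ftrace_rec_counter(rec)   ((rec)->flags & ~FTRACE_FL_MASK)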
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ftrace: Allow no regs if no more callbacks require it · 4fbb48cb
      Steven Rostedt (Red Hat) authored
      When registering a function callback for the function tracer, the ops
      can specify if it wants to save full regs (like an interrupt would)
      for each function that it traces, or if it does not care about regs
      and just wants to have the fastest return possible.

      Once an ops has registered a function, if other ops register that
      function they will all receive the regs too. That's because the work
      is done once, and it is done for everyone.
      
      Now if the ops that wants regs unregisters the function, so that only
      ops that do not care about regs are left, those ops will still keep
      receiving regs and paying the cost of saving them for that function.
      This is because the code that updates the rec counter only sees the
      ops being unregistered; it does not look at the ops that are still
      attached, and so does not know whether any of them still want regs.
      To play it safe, regs keep being processed until no ops is registered
      on that function at all.

      Instead of doing that, check the ops that are still registered for that
      function, and if none of them want regs anymore, disable the
      processing of regs.
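
      A minimal sketch of the idea (ops_traces_ip() is a hypothetical "does
      this ops trace that address?" helper; the real change lives in the
      ftrace accounting code):

          /* Sketch: keep saving regs for this record only if at least one
           * of the ops still attached to it has FTRACE_OPS_FL_SAVE_REGS set. */
          static bool any_attached_ops_wants_regs(struct dyn_ftrace *rec)
          {
                  struct ftrace_ops *ops;

                  for (ops = ftrace_ops_list; ops != &ftrace_list_end; ops = ops->next)
                          if (ops_traces_ip(ops, rec->ip) &&      /* hypothetical helper */
                              (ops->flags & FTRACE_OPS_FL_SAVE_REGS))
                                  return true;

                  return false;
          }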
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  3. 24 Jun 2014, 5 commits
    • kernel/watchdog.c: print traces for all cpus on lockup detection · ed235875
      Aaron Tomlin authored
      A 'softlockup' is defined as a bug that causes the kernel to loop in
      kernel mode for more than a predefined period of time, without giving
      other tasks a chance to run.
      
      Currently, upon detection of this condition by the per-cpu watchdog
      task, debug information (including a stack trace) is sent to the system
      log.
      
      On some occasions, we have observed that the "victim" rather than the
      actual "culprit" (i.e.  the owner/holder of the contended resource) is
      reported to the user.  Often this information has proven to be
      insufficient to assist debugging efforts.
      
      To avoid loss of useful debug information, for architectures which
      support NMI, this patch makes it possible to improve soft lockup
      reporting.  This is accomplished by issuing an NMI to each cpu to obtain
      a stack trace.
      
      If NMI is not supported, we simply fall back to the old method.  A
      sysctl and a boot-time parameter are available to toggle this feature.
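
      As a rough sketch of the reporting path (the flag and bit names here
      are assumptions based on this description; trigger_allbutself_cpu_backtrace()
      is the kind of NMI backtrace helper meant), the softlockup warning
      code gains something along these lines:

          /* Sketch: inside the softlockup warning path of watchdog_timer_fn() */
          if (softlockup_all_cpu_backtrace &&
              !test_and_set_bit(0, &soft_lockup_nmi_warn)) {
                  /* Ask the other CPUs to dump their stacks via NMI */
                  trigger_allbutself_cpu_backtrace();
          }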
      
      [dzickus@redhat.com: add CONFIG_SMP in certain areas]
      [akpm@linux-foundation.org: additional CONFIG_SMP=n optimisations]
      [mq@suse.cz: fix warning]
      Signed-off-by: Aaron Tomlin <atomlin@redhat.com>
      Signed-off-by: Don Zickus <dzickus@redhat.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Mateusz Guzik <mguzik@redhat.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Jan Moskyto Matejka <mq@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, pcp: allow restoring percpu_pagelist_fraction default · 7cd2b0a3
      David Rientjes authored
      Oleg reports a division by zero error on zero-length write() to the
      percpu_pagelist_fraction sysctl:
      
          divide error: 0000 [#1] SMP DEBUG_PAGEALLOC
          CPU: 1 PID: 9142 Comm: badarea_io Not tainted 3.15.0-rc2-vm-nfs+ #19
          Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
          task: ffff8800d5aeb6e0 ti: ffff8800d87a2000 task.ti: ffff8800d87a2000
          RIP: 0010: percpu_pagelist_fraction_sysctl_handler+0x84/0x120
          RSP: 0018:ffff8800d87a3e78  EFLAGS: 00010246
          RAX: 0000000000000f89 RBX: ffff88011f7fd000 RCX: 0000000000000000
          RDX: 0000000000000000 RSI: 0000000000000001 RDI: 0000000000000010
          RBP: ffff8800d87a3e98 R08: ffffffff81d002c8 R09: ffff8800d87a3f50
          R10: 000000000000000b R11: 0000000000000246 R12: 0000000000000060
          R13: ffffffff81c3c3e0 R14: ffffffff81cfddf8 R15: ffff8801193b0800
          FS:  00007f614f1e9740(0000) GS:ffff88011f440000(0000) knlGS:0000000000000000
          CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
          CR2: 00007f614f1fa000 CR3: 00000000d9291000 CR4: 00000000000006e0
          Call Trace:
            proc_sys_call_handler+0xb3/0xc0
            proc_sys_write+0x14/0x20
            vfs_write+0xba/0x1e0
            SyS_write+0x46/0xb0
            tracesys+0xe1/0xe6
      
      However, if the percpu_pagelist_fraction sysctl is set by the user, it
      is also impossible to restore it to the kernel default since the user
      cannot write 0 to the sysctl.
      
      This patch allows the user to write 0 to restore the default behavior.
      As the documentation requires for sanity, a non-zero fraction must
      still be at least 8; if a value in the range [1, 7] is written, the
      sysctl returns EINVAL.
      
      This successfully solves the divide by zero issue at the same time.
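
      A minimal sketch of the intended check (new_fraction stands in for the
      value just parsed by the handler; the names are illustrative only):

          /* Sketch: 0 restores the kernel default, 1..7 is rejected,
           * anything >= 8 is accepted as the new fraction. */
          if (new_fraction != 0 && new_fraction < 8)
                  return -EINVAL;         /* keep the previous setting */

          percpu_pagelist_fraction = new_fraction;
          /* a value of 0 means: recompute pcp->high from the default */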
      Signed-off-by: David Rientjes <rientjes@google.com>
      Reported-by: Oleg Drokin <green@linuxhacker.ru>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kernel/watchdog.c: remove preemption restrictions when restarting lockup detector · bde92cf4
      Don Zickus authored
      Peter Wu noticed the following splat on his machine when updating
      /proc/sys/kernel/watchdog_thresh:
      
        BUG: sleeping function called from invalid context at mm/slub.c:965
        in_atomic(): 1, irqs_disabled(): 0, pid: 1, name: init
        3 locks held by init/1:
         #0:  (sb_writers#3){.+.+.+}, at: [<ffffffff8117b663>] vfs_write+0x143/0x180
         #1:  (watchdog_proc_mutex){+.+.+.}, at: [<ffffffff810e02d3>] proc_dowatchdog+0x33/0x110
         #2:  (cpu_hotplug.lock){.+.+.+}, at: [<ffffffff810589c2>] get_online_cpus+0x32/0x80
        Preemption disabled at:[<ffffffff810e0384>] proc_dowatchdog+0xe4/0x110
      
        CPU: 0 PID: 1 Comm: init Not tainted 3.16.0-rc1-testing #34
        Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
        Call Trace:
          dump_stack+0x4e/0x7a
          __might_sleep+0x11d/0x190
          kmem_cache_alloc_trace+0x4e/0x1e0
          perf_event_alloc+0x55/0x440
          perf_event_create_kernel_counter+0x26/0xe0
          watchdog_nmi_enable+0x75/0x140
          update_timers_all_cpus+0x53/0xa0
          proc_dowatchdog+0xe4/0x110
          proc_sys_call_handler+0xb3/0xc0
          proc_sys_write+0x14/0x20
          vfs_write+0xad/0x180
          SyS_write+0x49/0xb0
          system_call_fastpath+0x16/0x1b
        NMI watchdog: disabled (cpu0): hardware events not enabled
      
      What happened is that, after updating the watchdog_thresh, the lockup
      detector is restarted to utilize the new value.  Part of this process
      involved disabling preemption.  Once preemption was disabled, perf
      tried to allocate a new event (as part of the restart).  This caused
      the above BUG, as you can't sleep with preemption disabled.
      
      The preemption restriction seemed aggressive, as we are not doing
      anything on that particular cpu, but rather across all the online cpus
      (which are protected by the get_online_cpus lock).  Remove the
      restriction and the BUG goes away.
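
      In effect, get_online_cpus() alone keeps the cpu list stable; a rough
      sketch of the restart helper after the change (update_timers_all_cpus()
      and watchdog_nmi_enable() appear in the trace above, update_timers()
      is an assumed per-cpu helper):

          static void update_timers_all_cpus(void)
          {
                  int cpu;

                  get_online_cpus();      /* pins the set of online cpus; no
                                           * preempt_disable(), so the perf
                                           * allocation below may sleep */
                  for_each_online_cpu(cpu)
                          update_timers(cpu);     /* re-creates the perf event
                                                   * via watchdog_nmi_enable() */
                  put_online_cpus();
          }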
      Signed-off-by: Don Zickus <dzickus@redhat.com>
      Acked-by: Michal Hocko <mhocko@suse.cz>
      Reported-by: Peter Wu <peter@lekensteyn.nl>
      Tested-by: Peter Wu <peter@lekensteyn.nl>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: <stable@vger.kernel.org>		[3.13+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kexec: save PG_head_mask in VMCOREINFO · b3acc56b
      Petr Tesarik authored
      To allow filtering of huge pages, makedumpfile must be able to identify
      them in the dump.  This can be done by checking the appropriate page
      flag, so communicate its value to makedumpfile through the VMCOREINFO
      interface.
      
      There's only one small catch.  Depending on how many page flags are
      available on a given architecture, this bit can be called PG_head or
      PG_compound.
      
      I sent a similar patch back in 2012, but Eric Biederman did not like
      using an #ifdef.  So, this time I'm adding a common symbol
      (PG_head_mask) instead.
      
      See https://lkml.org/lkml/2012/11/28/91 for the previous version.
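
      A rough sketch of what the export amounts to (the exact hunks are an
      assumption; VMCOREINFO_NUMBER() is the existing helper for exporting
      numeric constants):

          /* include/linux/page-flags.h: one common name, regardless of
           * whether the underlying bit is PG_head or PG_compound here */
          #define PG_head_mask   ((1UL << PG_head))

          /* kernel/kexec.c: make the mask visible to makedumpfile */
          VMCOREINFO_NUMBER(PG_head_mask);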
      Signed-off-by: Petr Tesarik <ptesarik@suse.cz>
      Acked-by: Vivek Goyal <vgoyal@redhat.com>
      Cc: Eric Biederman <ebiederm@xmission.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Fengguang Wu <fengguang.wu@intel.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Shaohua Li <shli@kernel.org>
      Cc: Alexey Kardashevskiy <aik@ozlabs.ru>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • CPU hotplug, smp: flush any pending IPI callbacks before CPU offline · 8d056c48
      Srivatsa S. Bhat authored
      There is a race between the CPU offline code (within stop-machine) and
      the smp-call-function code, which can lead to getting IPIs on the
      outgoing CPU, *after* it has gone offline.
      
      Specifically, this can happen when using
      smp_call_function_single_async() to send the IPI, since this API allows
      sending asynchronous IPIs from IRQ disabled contexts.  The exact race
      condition is described below.
      
      During CPU offline, in stop-machine, we don't enforce any rule in the
      _DISABLE_IRQ stage, regarding the order in which the outgoing CPU and
      the other CPUs disable their local interrupts.  Due to this, we can
      encounter a situation in which an IPI is sent by one of the other CPUs
      to the outgoing CPU (while it is *still* online), but the outgoing CPU
      ends up noticing it only *after* it has gone offline.
      
                    CPU 1                                         CPU 2
                (Online CPU)                               (CPU going offline)
      
             Enter _PREPARE stage                          Enter _PREPARE stage
      
                                                           Enter _DISABLE_IRQ stage
      
                                                         =
             Got a device interrupt, and                 | Didn't notice the IPI
             the interrupt handler sent an               | since interrupts were
             IPI to CPU 2 using                          | disabled on this CPU.
             smp_call_function_single_async()            |
                                                         =
      
             Enter _DISABLE_IRQ stage
      
             Enter _RUN stage                              Enter _RUN stage
      
                                        =
             Busy loop with interrupts  |                  Invoke take_cpu_down()
             disabled.                  |                  and take CPU 2 offline
                                        =
      
             Enter _EXIT stage                             Enter _EXIT stage
      
             Re-enable interrupts                          Re-enable interrupts
      
                                                           The pending IPI is noted
                                                           immediately, but alas,
                                                           the CPU is offline at
                                                           this point.
      
      This, of course, makes the smp-call-function IPI handler code running
      on CPU 2 unhappy, and it complains about "receiving an IPI on an
      offline CPU".
      
      One real example of the scenario on CPU 1 is the block layer's
      complete-request call-path:
      
      	__blk_complete_request() [interrupt-handler]
      	    raise_blk_irq()
      	        smp_call_function_single_async()
      
      However, if we look closely, the block layer does check that the target
      CPU is online before firing the IPI.  So in this case, it is actually
      the unfortunate ordering/timing of events in the stop-machine phase that
      leads to receiving IPIs after the target CPU has gone offline.
      
      In reality, getting a late IPI on an offline CPU is not too bad by
      itself (this can happen even due to hardware latencies in IPI
      send-receive).  It is a bug only if the target CPU really went offline
      without executing all the callbacks queued on its list.  (Note that a
      CPU is free to execute its pending smp-call-function callbacks in a
      batch, without waiting for the corresponding IPIs to arrive for each one
      of those callbacks).
      
      So, fixing this issue can be broken up into two parts:
      
      1. Ensure that a CPU goes offline only after executing all the
         callbacks queued on it.
      
      2. Modify the warning condition in the smp-call-function IPI handler
         code such that it warns only if an offline CPU got an IPI *and* that
         CPU had gone offline with callbacks still pending in its queue.
      
      Achieving part 1 is straightforward - just flush (execute) all the
      queued callbacks on the outgoing CPU in the CPU_DYING stage[1],
      including those callbacks for which the source CPU's IPIs might not have
      been received on the outgoing CPU yet.  Once we do this, an IPI that
      arrives late on the CPU going offline (either due to the race mentioned
      above, or due to hardware latencies) will be completely harmless, since
      the outgoing CPU would have executed all the queued callbacks before
      going offline.
      
      Overall, this fix (parts 1 and 2 put together) additionally guarantees
      that we will see a warning only when the *IPI-sender code* is buggy -
      that is, if it queues the callback _after_ the target CPU has gone
      offline.
      
      [1].  The CPU_DYING part needs a little more explanation: by the time we
      execute the CPU_DYING notifier callbacks, the CPU would have already
      been marked offline.  But we want to flush out the pending callbacks at
      this stage, ignoring the fact that the CPU is offline.  So restructure
      the IPI handler code so that we can bypass the "is-cpu-offline?" check
      in this particular case.  (Of course, the right solution here is to fix
      CPU hotplug to mark the CPU offline _after_ invoking the CPU_DYING
      notifiers, but this requires a lot of audit to ensure that this change
      doesn't break any existing code; hence let's go with the solution
      proposed above until that is done).
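
      A rough sketch of part 1, as described above (placing the flush in the
      CPU_DYING leg of the smp hotplug notifier; treat the exact function
      name and argument as assumptions):

          /* kernel/smp.c: CPU_DYING runs on the outgoing CPU with interrupts
           * disabled inside stop-machine, so the pending callback queue can
           * be drained here before the CPU actually goes away. */
          case CPU_DYING:
          case CPU_DYING_FROZEN:
                  /*
                   * Execute everything already queued for this CPU, even if
                   * the corresponding IPIs have not arrived yet; a late IPI
                   * will then find an empty queue and is harmless.
                   */
                  flush_smp_call_function_queue(false);   /* false: don't warn,
                                                           * the CPU is already
                                                           * marked offline */
                  break;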
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
      Suggested-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Gautham R Shenoy <ego@linux.vnet.ibm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Mike Galbraith <mgalbraith@suse.de>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Sachin Kamat <sachin.kamat@samsung.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  4. 21 Jun 2014, 3 commits
  5. 17 Jun 2014, 2 commits
  6. 16 Jun 2014, 1 commit
    • rtmutex: Plug slow unlock race · 27e35715
      Thomas Gleixner authored
      When the rtmutex fast path is enabled the slow unlock function can
      create the following situation:
      
      spin_lock(foo->m->wait_lock);
      foo->m->owner = NULL;
                                      rt_mutex_lock(foo->m); <-- fast path
                                      free = atomic_dec_and_test(foo->refcnt);
                                      rt_mutex_unlock(foo->m); <-- fast path
                                      if (free)
                                              kfree(foo);

      spin_unlock(foo->m->wait_lock); <--- Use after free.
      
      Plug the race by changing the slow unlock to the following scheme:
      
      while (!rt_mutex_has_waiters(m)) {
              /* Clear the waiters bit in m->owner */
              clear_rt_mutex_waiters(m);
              owner = rt_mutex_owner(m);
              spin_unlock(m->wait_lock);
              if (cmpxchg(m->owner, owner, 0) == owner)
                      return;
              spin_lock(m->wait_lock);
      }
      
      So if a new waiter comes in while the owner takes the slow unlock
      path, we have two situations:
      
      unlock(wait_lock);
                                      lock(wait_lock);
      cmpxchg(p, owner, 0) == owner
                                      mark_rt_mutex_waiters(lock);
                                      acquire(lock);

      Or:

      unlock(wait_lock);
                                      lock(wait_lock);
                                      mark_rt_mutex_waiters(lock);
      cmpxchg(p, owner, 0) != owner
                                      enqueue_waiter();
                                      unlock(wait_lock);
      lock(wait_lock);
      wakeup_next_waiter();
      unlock(wait_lock);
                                      lock(wait_lock);
                                      acquire(lock);
      
      If the fast path is disabled, then the simple
      
         m->owner = NULL;
         unlock(m->wait_lock);
      
      is sufficient, as all access to m->owner is serialized via
      m->wait_lock.
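
      The scheme above maps to a small helper along these lines (a sketch;
      the helper name and annotations are assumptions):

          /* Sketch: drop wait_lock first, then try to release ownership via
           * cmpxchg.  If a new waiter slipped in in between, it has set the
           * waiters bit, the cmpxchg fails, and the caller retakes wait_lock
           * and retries (or wakes up the new waiter). */
          static bool unlock_rt_mutex_safe(struct rt_mutex *lock)
                  __releases(lock->wait_lock)
          {
                  struct task_struct *owner = rt_mutex_owner(lock);

                  clear_rt_mutex_waiters(lock);
                  raw_spin_unlock(&lock->wait_lock);
                  /* Succeeds only if no new waiter showed up meanwhile */
                  return rt_mutex_cmpxchg(lock, owner, NULL);
          }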
      
      Also document and clarify the wakeup_next_waiter function as suggested
      by Oleg Nesterov.
      Reported-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20140611183852.937945560@linutronix.de
      Cc: stable@vger.kernel.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  7. 14 Jun 2014, 1 commit
  8. 11 Jun 2014, 4 commits
  9. 10 Jun 2014, 3 commits
  10. 09 Jun 2014, 3 commits
  11. 07 Jun 2014, 1 commit
    • rtmutex: Detect changes in the pi lock chain · 82084984
      Thomas Gleixner authored
      When we walk the lock chain, we drop all locks after each step.  So the
      lock chain can change under us before we reacquire the locks.  That's
      harmless in principle, as we just follow the wrong lock path.  But it
      can lead to a false positive in the deadlock detection logic:
      
      T0 holds L0
      T0 blocks on L1 held by T1
      T1 blocks on L2 held by T2
      T2 blocks on L3 held by T3
      T3 blocks on L4 held by T4
      
      Now we walk the chain
      
      lock T1 -> lock L2 -> adjust L2 -> unlock T1 -> 
           lock T2 ->  adjust T2 ->  drop locks
      
      T2 times out and blocks on L0
      
      Now we continue:
      
      lock T2 -> lock L0 -> deadlock detected, but it's not a deadlock at all.
      
      Brad tried to work around that in the deadlock detection logic itself,
      but the more I looked at it the less I liked it, because it's crystal
      ball magic after the fact.
      
      We actually can detect a chain change very simply:
      
      lock T1 -> lock L2 -> adjust L2 -> unlock T1 -> lock T2 -> adjust T2 ->
      
           next_lock = T2->pi_blocked_on->lock;
      
      drop locks
      
      T2 times out and blocks on L0
      
      Now we continue:
      
      lock T2 -> 
      
           if (next_lock != T2->pi_blocked_on->lock)
                   return;
      
      So if we detect that T2 is now blocked on a different lock we stop the
      chain walk. That's also correct in the following scenario:
      
      lock T1 -> lock L2 -> adjust L2 -> unlock T1 -> lock T2 -> adjust T2 ->
      
           next_lock = T2->pi_blocked_on->lock;
      
      drop locks
      
      T3 times out and drops L3
      T2 acquires L3 and blocks on L4 now
      
      Now we continue:
      
      lock T2 -> 
      
           if (next_lock != T2->pi_blocked_on->lock)
                   return;
      
      We don't have to follow up the chain at that point, because T2
      propagated our priority up to T4 already.
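
      In code the idea amounts to recording the expected next lock before
      dropping the locks and re-validating it afterwards; a sketch of that
      re-validation (a fragment of the chain walk, with details assumed):

          /* next_lock was recorded as task->pi_blocked_on->lock before all
           * locks were dropped.  After retaking task->pi_lock, verify that
           * the task is still blocked on that same lock. */
          raw_spin_lock_irqsave(&task->pi_lock, flags);
          waiter = task->pi_blocked_on;
          if (!waiter || waiter->lock != next_lock) {
                  /* The chain changed under us; whoever changed it has
                   * already adjusted the rest of the chain (or it no
                   * longer exists), so stop the walk here. */
                  raw_spin_unlock_irqrestore(&task->pi_lock, flags);
                  return 0;
          }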
      
      [ Folded a cleanup patch from peterz ]
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reported-by: Brad Mouring <bmouring@ni.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20140605152801.930031935@linutronix.de
      Cc: stable@vger.kernel.org