1. 30 December 2009 (5 commits)
  2. 28 December 2009 (1 commit)
  3. 22 December 2009 (1 commit)
  4. 15 December 2009 (1 commit)
  5. 14 December 2009 (13 commits)
  6. 12 December 2009 (2 commits)
    • tty: Move the leader test in disassociate · 5ec93d11
      Committed by Alan Cox
      There are two call points, and both want to check that tty->signal->leader is
      set. Move the test into disassociate_ctty(), as that will make the upcoming
      locking changes easier (the refactoring pattern is sketched below).
      Signed-off-by: Alan Cox <alan@linux.intel.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
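      The change amounts to hoisting a duplicated precondition out of the two call
      sites and into the callee, so later locking work only has to touch one place.
      A minimal userspace C sketch of that pattern, with invented names standing in
      for the actual tty code:

      /* Hypothetical illustration only, not the kernel tty code. */
      #include <stdbool.h>
      #include <stdio.h>

      struct session {
              bool leader;            /* stand-in for tty->signal->leader */
              int  refcount;
      };

      /* After the refactor the callee performs the leader test itself. */
      static void disassociate_session(struct session *s)
      {
              if (!s->leader)         /* the test now lives in one place */
                      return;
              s->refcount--;          /* placeholder for the real teardown work */
              printf("disassociated, refcount=%d\n", s->refcount);
      }

      int main(void)
      {
              struct session a = { .leader = true,  .refcount = 1 };
              struct session b = { .leader = false, .refcount = 1 };

              /* Both former call sites can now call unconditionally. */
              disassociate_session(&a);
              disassociate_session(&b);       /* no-op: not a session leader */
              return 0;
      }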
    • tracing: Add stack trace to irqsoff tracer · cc51a0fc
      Committed by Steven Rostedt
      The irqsoff tracer and its friends help in finding causes of latency in the
      kernel. They also work with the function tracer to show what was happening
      when interrupts or preemption are disabled. But the function tracer has
      a bit of overhead and can cause exaggerated readings.
      
      Currently, when tracing with /proc/sys/kernel/ftrace_enabled = 0, where the
      function tracer is disabled, the information that is provided can end up
      being useless. For example, a 2.5 millisecond latency only showed:
      
       # tracer: preemptirqsoff
       #
       # preemptirqsoff latency trace v1.1.5 on 2.6.32
       # --------------------------------------------------------------------
       # latency: 2463 us, #4/4, CPU#2 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
       #    -----------------
       #    | task: -4242 (uid:0 nice:0 policy:0 rt_prio:0)
       #    -----------------
       #  => started at: _spin_lock_irqsave
       #  => ended at:   remove_wait_queue
       #
       #
       #                  _------=> CPU#
       #                 / _-----=> irqs-off
       #                | / _----=> need-resched
       #                || / _---=> hardirq/softirq
       #                ||| / _--=> preempt-depth
       #                |||| /_--=> lock-depth
       #                |||||/     delay
       #  cmd     pid   |||||| time  |   caller
       #     \   /      ||||||   \   |   /
       hackbenc-4242    2d....    0us!: trace_hardirqs_off <-_spin_lock_irqsave
       hackbenc-4242    2...1. 2463us+: _spin_unlock_irqrestore <-remove_wait_queue
       hackbenc-4242    2...1. 2466us : trace_preempt_on <-remove_wait_queue
      
      The above lets us know that hackbench with pid 4242 grabbed a spin lock
      somewhere and enabled preemption at remove_wait_queue. This helps a little,
      but it does not show where the latency actually originated.
      
      This patch adds the stack dump to the end of the irqsoff tracer. This provides
      the following output:
      
       hackbenc-4242    2d....    0us!: trace_hardirqs_off <-_spin_lock_irqsave
       hackbenc-4242    2...1. 2463us+: _spin_unlock_irqrestore <-remove_wait_queue
       hackbenc-4242    2...1. 2466us : trace_preempt_on <-remove_wait_queue
       hackbenc-4242    2...1. 2467us : <stack trace>
        => sub_preempt_count
        => _spin_unlock_irqrestore
        => remove_wait_queue
        => free_poll_entry
        => poll_freewait
        => do_sys_poll
        => sys_poll
        => system_call_fastpath
      
      Now we see that the culprit of this latency was the free_poll_entry code
      (a userspace analogue of this stack-dump-on-new-maximum idea is sketched
      below).
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
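      As a hedged illustration only, and not the tracer implementation: the same
      idea can be mimicked in userspace by capturing a backtrace whenever a
      critical section sets a new maximum latency, using glibc's backtrace() and
      backtrace_symbols_fd(). All names below are invented for the example; build
      with -rdynamic to get readable symbol names.

      #include <execinfo.h>
      #include <stdio.h>
      #include <time.h>

      static long max_latency_ns;     /* worst latency observed so far */

      static long now_ns(void)
      {
              struct timespec ts;
              clock_gettime(CLOCK_MONOTONIC, &ts);
              return ts.tv_sec * 1000000000L + ts.tv_nsec;
      }

      /* Call at the end of a critical section; dump the stack on a new record. */
      static void check_latency(long start_ns)
      {
              long delta = now_ns() - start_ns;

              if (delta > max_latency_ns) {
                      void *frames[32];
                      int n = backtrace(frames, 32);

                      max_latency_ns = delta;
                      fprintf(stderr, "new max latency: %ld ns\n", delta);
                      backtrace_symbols_fd(frames, n, 2);  /* 2 = stderr */
              }
      }

      static void critical_section(void)
      {
              long start = now_ns();
              /* ... work done with "interrupts disabled" would go here ... */
              check_latency(start);
      }

      int main(void)
      {
              critical_section();
              return 0;
      }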
  7. 11 December 2009 (10 commits)
    • tracing: Add trace_dump_stack() · 03889384
      Committed by Steven Rostedt
      I've been asked a few times about how to find out what is calling
      some location in the kernel. One way is to use dynamic function tracing
      with the func_stack_trace option. But this only finds out who is
      calling a particular function; it does not tell you who is calling
      that function and then entering a specific if conditional.
      
      I have implemented a quick version of trace_dump_stack() for this purpose
      myself a few times, and just needed it again now. That is when I realized
      that it would be a good tool to have in the kernel, like trace_printk().
      
      Using trace_dump_stack() is similar to dump_stack() except that it
      writes to the trace buffer instead and can be used in critical locations.
      
      For example:
      
      @@ -5485,8 +5485,12 @@ need_resched_nonpreemptible:
       	if (prev->state && !(preempt_count() & PREEMPT_ACTIVE)) {
       		if (unlikely(signal_pending_state(prev->state, prev)))
       			prev->state = TASK_RUNNING;
      -		else
      +		else {
       			deactivate_task(rq, prev, 1);
      +			trace_printk("Deactivating task %s:%d\n",
      +				     prev->comm, prev->pid);
      +			trace_dump_stack();
      +		}
       		switch_count = &prev->nvcsw;
       	}
      
      Produces:
      
                 <...>-3249  [001]   296.105269: schedule: Deactivating task ntpd:3249
                 <...>-3249  [001]   296.105270: <stack trace>
       => schedule
       => schedule_hrtimeout_range
       => poll_schedule_timeout
       => do_select
       => core_sys_select
       => sys_select
       => system_call_fastpath
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • kgdb: Always process the whole breakpoint list on activate or deactivate · 7f8b7ed6
      Committed by Jason Wessel
      This patch fixes two edge cases in using kgdb in conjunction with gdb
      (the iteration pattern is sketched after this entry).

      1) kgdb_deactivate_sw_breakpoints() should process the entire array of
         breakpoints.  The failure to do so results in breakpoints that you
         cannot remove, because a breakpoint can only be removed if its
         state flag is set to BP_SET.

         The easy way to reproduce this problem is to plant a breakpoint in
         a kernel module and then unload the module.

      2) kgdb_activate_sw_breakpoints() should process the entire array of
         breakpoints.  The failure to do so results in missed breakpoints
         when a breakpoint cannot be activated.
      Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
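      A hedged, self-contained sketch of the fix's shape, with invented types and
      names rather than the real kgdb structures: walk the whole breakpoint array
      and handle each entry on its own, instead of stopping at the first one that
      is not in the expected state.

      #include <stdio.h>

      /* Invented stand-ins for the kgdb breakpoint states. */
      enum bp_state { BP_UNDEFINED, BP_SET, BP_ACTIVE };

      struct breakpoint {
              unsigned long addr;
              enum bp_state state;
      };

      #define MAX_BREAKPOINTS 4
      static struct breakpoint bp[MAX_BREAKPOINTS] = {
              { 0x1000, BP_ACTIVE },
              { 0x2000, BP_UNDEFINED },       /* a "hole" must not stop the loop */
              { 0x3000, BP_ACTIVE },
              { 0x4000, BP_SET },
      };

      /* Process the entire array; skip entries that are not active, but never
       * break out of the loop early. */
      static void deactivate_sw_breakpoints(void)
      {
              for (int i = 0; i < MAX_BREAKPOINTS; i++) {
                      if (bp[i].state != BP_ACTIVE)
                              continue;       /* later entries still need work */
                      bp[i].state = BP_SET;   /* restore the original instruction */
                      printf("deactivated breakpoint at 0x%lx\n", bp[i].addr);
              }
      }

      int main(void)
      {
              deactivate_sw_breakpoints();
              return 0;
      }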
    • kgdb: continue and warn on signal passing from gdb · d625e9c0
      Committed by Jason Wessel
      On some architectures, for the segv trap, gdb wants to pass the signal
      back on continue.  For kgdb this is not the default behavior, because
      it can cause the kernel to crash if you arbitrarily pass back an
      exception outside of kgdb.
      
      Instead of causing instability, pass a message back to gdb about the
      supported kgdb signal passing and execute a standard kgdb continue
      operation.
      Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
    • kgdb: allow for cpu switch when single stepping · 028e7b17
      Committed by Jason Wessel
      The kgdb core should not assume that a single step operation of a
      kernel thread will complete on the same CPU.  The single step flag is
      set at the "thread" level, and it is possible in a multi-CPU system
      that a kernel thread gets scheduled on another CPU the next time it
      is run.

      As a further safety net, in case a slave CPU is hung, the debug master
      CPU will try 100 times before giving up and assuming control of the
      slave CPUs is no longer possible (a bounded-retry sketch follows this
      entry).  It is more useful to be able to get some information out of
      kgdb than to spin forever.
      Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
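      A hedged userspace sketch of the bounded-retry idea, with hypothetical
      names: the real code coordinates CPUs, so here a flag set by another
      thread stands in for the slave CPUs reporting that they have stopped.

      #include <pthread.h>
      #include <stdatomic.h>
      #include <stdio.h>
      #include <unistd.h>

      static atomic_int slaves_parked;        /* "slave CPUs have stopped" flag */

      static void *slave(void *arg)
      {
              (void)arg;
              usleep(1000);                   /* pretend to finish some work */
              atomic_store(&slaves_parked, 1);/* report in to the master */
              return NULL;
      }

      int main(void)
      {
              pthread_t t;
              pthread_create(&t, NULL, slave, NULL);

              /* Master: poll a bounded number of times, then give up rather
               * than hang forever waiting for a hung slave. */
              int ok = 0;
              for (int tries = 0; tries < 100; tries++) {
                      if (atomic_load(&slaves_parked)) {
                              ok = 1;
                              break;
                      }
                      usleep(100);            /* brief wait between attempts */
              }

              if (ok)
                      printf("slaves parked, entering debugger\n");
              else
                      printf("giving up on slaves, continuing with partial control\n");

              pthread_join(t, NULL);
              return 0;
      }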
    • kgdb: Read buffer overflow · 84667d48
      Committed by Jason Wessel
      Roel Kluin reported an error found with Parfait: we want to ensure
      that kgdb_info[-1] never gets accessed.

      Also check that any negative tid does not exceed the size of the
      shadow CPU array; otherwise report a critical debug context, because
      it is an internal kgdb failure.  (A sketch of the bounds check follows
      this entry.)
      Reported-by: Roel Kluin <roel.kluin@gmail.com>
      Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
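      A hedged, self-contained sketch of that kind of check; the array name and
      the tid-to-slot encoding are invented for the example and do not match the
      actual kgdb layout:

      #include <stdio.h>

      #define NR_CPUS 4

      struct dbg_info { int cpu; int in_use; };

      /* Invented stand-in for kgdb's per-CPU shadow array. */
      static struct dbg_info dbg_info[NR_CPUS];

      /* In this example, negative tids address the shadow CPU slots (-1 maps
       * to slot 0, -2 to slot 1, ...).  Reject anything out of range instead
       * of indexing the array with it. */
      static struct dbg_info *lookup_dbg_info(int tid)
      {
              if (tid >= 0)
                      return NULL;            /* positive tids handled elsewhere */

              int slot = -tid - 1;            /* -1 -> 0, -2 -> 1, ... */
              if (slot >= NR_CPUS) {
                      fprintf(stderr, "internal failure: bad tid %d\n", tid);
                      return NULL;            /* out of range: refuse to index */
              }
              return &dbg_info[slot];
      }

      int main(void)
      {
              printf("%p\n", (void *)lookup_dbg_info(-1));    /* valid slot */
              printf("%p\n", (void *)lookup_dbg_info(-99));   /* rejected */
              return 0;
      }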
    • ring-buffer: Move resize integrity check under reader lock · dd7f5943
      Committed by Steven Rostedt
      While using an application that does splice on the ftrace ring
      buffer at start up, I triggered an integrity check failure.

      Looking into this, I discovered that resizing the buffer performs
      an integrity check after the buffer is resized. Unfortunately, this
      check is performed after the reader lock is released. If a reader is
      reading the buffer, it may cause the integrity check to report a
      false failure.

      This patch simply moves the integrity check under the protection
      of the ring buffer reader lock (the locking rule is sketched below).
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
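      A hedged pthread sketch of the general rule at play, with an invented
      buffer and check rather than the real ring buffer code: a consistency
      check on shared state must run while the lock that readers take is
      still held, not after it has been dropped.

      #include <assert.h>
      #include <pthread.h>

      /* Invented shared structure: 'head' must always stay <= 'size'. */
      struct buffer {
              pthread_mutex_t reader_lock;
              int size;
              int head;
      };

      static void resize(struct buffer *b, int new_size)
      {
              pthread_mutex_lock(&b->reader_lock);

              b->size = new_size;
              if (b->head > b->size)
                      b->head = b->size;

              /* The integrity check runs before the lock is dropped, so no
               * reader can be half-way through the buffer while we look. */
              assert(b->head <= b->size);

              pthread_mutex_unlock(&b->reader_lock);
      }

      int main(void)
      {
              struct buffer b = { PTHREAD_MUTEX_INITIALIZER, 8, 5 };
              resize(&b, 4);
              return 0;
      }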
    • ring-buffer: Use sync sched protection on ring buffer resizing · 18421015
      Committed by Steven Rostedt
      There was a comment in the ring buffer code saying that the calling
      layers should prevent tracing or reading of the ring buffer while
      resizing. I have discovered that the tracers do not honor this
      arrangement.

      This patch moves the disabling and synchronizing of the ring buffer to
      a higher layer during resizing. This guarantees that no writes are
      occurring while the resize takes place.  (A sketch of the disable,
      synchronize, then resize sequence follows this entry.)
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
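      As a hedged sketch only: the shape of the sequence is "stop new writers,
      wait for in-flight writers to drain, then resize".  The kernel uses its
      own primitives for this (e.g. synchronize_sched()); below, an in-flight
      writer counter and invented names stand in for them.

      #include <stdatomic.h>
      #include <stdio.h>
      #include <stdlib.h>

      static atomic_int record_disabled;      /* non-zero: new writes refused */
      static atomic_int writers_in_flight;    /* writers currently in the buffer */

      static int *buf;
      static int buf_size;

      static void write_event(int v)
      {
              if (atomic_load(&record_disabled))
                      return;                         /* resize in progress, drop it */
              atomic_fetch_add(&writers_in_flight, 1);
              buf[v % buf_size] = v;                  /* the "write" */
              atomic_fetch_sub(&writers_in_flight, 1);
      }

      static void resize_buffer(int new_size)
      {
              atomic_store(&record_disabled, 1);      /* 1. stop new writers */
              while (atomic_load(&writers_in_flight))
                      ;                               /* 2. wait for in-flight writers */

              int *nb = realloc(buf, new_size * sizeof(int));
              if (nb) {                               /* 3. now safe to resize */
                      buf = nb;
                      buf_size = new_size;
              }
              atomic_store(&record_disabled, 0);      /* 4. re-enable writers */
      }

      int main(void)
      {
              buf = calloc(4, sizeof(int));
              if (!buf)
                      return 1;
              buf_size = 4;
              write_event(3);
              resize_buffer(8);
              write_event(7);
              printf("resized to %d entries\n", buf_size);
              return 0;
      }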
    • tracing: Fix wrong usage of strstrip in trace_ksyms · d954fbf0
      Committed by Thomas Gleixner
      strstrip returns a pointer to the first non-space character, but the
      code in parse_ksym_trace_str() ignores that.

      strstrip is now must_check and therefore we get the correct warning:
      kernel/trace/trace_ksym.c:294: warning:
      ignoring return value of ‘strstrip’, declared with attribute warn_unused_result

      We are really not interested in leading whitespace here.

      Fix that and clean up the dozen kfree() exit paths.  (The return-value
      pitfall is illustrated below.)
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
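      A hedged userspace illustration of the pitfall; a minimal strip() helper
      stands in for the kernel's strstrip(), and none of this is the trace_ksym
      code.  The stripped string is the return value, so ignoring it keeps the
      leading whitespace.

      #include <ctype.h>
      #include <stdio.h>
      #include <string.h>

      /* Minimal stand-in for the kernel's strstrip(): trims trailing
       * whitespace in place and returns a pointer past leading whitespace. */
      static char *strip(char *s)
      {
              size_t len = strlen(s);

              while (len && isspace((unsigned char)s[len - 1]))
                      s[--len] = '\0';
              while (isspace((unsigned char)*s))
                      s++;
              return s;
      }

      int main(void)
      {
              char raw[] = "   do_sys_poll  ";
              strip(raw);                     /* WRONG: leading spaces remain */
              printf("[%s]\n", raw);          /* prints "[   do_sys_poll]" */

              char raw2[] = "   do_sys_poll  ";
              char *name = strip(raw2);       /* RIGHT: use the returned pointer */
              printf("[%s]\n", name);         /* prints "[do_sys_poll]" */
              return 0;
      }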
    • sched: Remove forced2_migrations stats · b9889ed1
      Committed by Ingo Molnar
      This build warning:
      
       kernel/sched.c: In function 'set_task_cpu':
       kernel/sched.c:2070: warning: unused variable 'old_rq'
      
      made me realize that the forced2_migrations stat looks pretty
      pointless (and a misnomer), so remove it.
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_event: Fix variable initialization in other codepaths · 5e855db5
      Committed by Xiao Guangrong
      Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <4B20BAA6.7010609@cn.fujitsu.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  8. 10 December 2009 (7 commits)