1. 05 Sep 2009 (7 commits)
    • tracing: report error in trace if we fail to swap latency buffer · e8165dbb
      Steven Rostedt committed
      The irqsoff tracer will fail to swap the CPU buffer with the max
      buffer if the swap would preempt a commit in progress. Instead of silently
      ignoring this, this patch makes the tracer report in the trace output that
      the last max-latency update failed because it preempted a current commit.
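      
      (For illustration, a minimal sketch of how update_max_tr_single() can emit
      that report when the swap returns -EBUSY; the surrounding error handling is
      simplified here, not quoted verbatim from the patch:)
      
        ret = ring_buffer_swap_cpu(max_tr.buffer, tr->buffer, cpu);
        if (ret == -EBUSY) {
                /*
                 * A commit was in progress on this CPU buffer, so the
                 * max buffer could not be updated.  Leave a note in the
                 * max trace so the user knows why.
                 */
                trace_array_printk(&max_tr, _THIS_IP_,
                        "Failed to swap buffers due to commit in progress\n");
        }
        WARN_ON_ONCE(ret && ret != -EAGAIN && ret != -EBUSY);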
      
      The output of the latency tracer will look like this:
      
       # tracer: irqsoff
       #
       # irqsoff latency trace v1.1.5 on 2.6.31-rc5
       # --------------------------------------------------------------------
       # latency: 112 us, #1/1, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
       #    -----------------
       #    | task: -4281 (uid:0 nice:0 policy:0 rt_prio:0)
       #    -----------------
       #  => started at: save_args
       #  => ended at:   __do_softirq
       #
       #
       #                  _------=> CPU#
       #                 / _-----=> irqs-off
       #                | / _----=> need-resched
       #                || / _---=> hardirq/softirq
       #                ||| / _--=> preempt-depth
       #                |||| /
       #                |||||     delay
       #  cmd     pid   ||||| time  |   caller
       #     \   /      |||||   \   |   /
          bash-4281    1d.s6  265us : update_max_tr_single: Failed to swap buffers due to commit in progress
      
      Note that the latency time and the functions that disabled irqs or preemption
      are still listed.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: add trace_array_printk for internal tracers to use · 659372d3
      Steven Rostedt committed
      This patch adds trace_array_printk() to allow a tracer to use
      trace_printk()-style output on its own trace array.
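      
      (A sketch of the interface and a hypothetical call site; the declaration
      follows this patch, while the example arguments are illustrative only:)
      
        /* Like trace_printk(), but writes into the given trace array. */
        int trace_array_printk(struct trace_array *tr,
                               unsigned long ip, const char *fmt, ...);
      
        /* e.g. from inside a tracer, annotate its own buffer: */
        trace_array_printk(tr, _THIS_IP_, "max latency reset to %lu\n", lat);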
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: pass around ring buffer instead of tracer · e77405ad
      Steven Rostedt committed
      The latency tracers (irqsoff and wakeup) can swap trace buffers
      on the fly. If an event is happening and has reserved data on one of
      the buffers, and the latency tracer swaps the global buffer with the
      max buffer, the result is that the event may commit the data to the
      wrong buffer.
      
      This patch changes the trace recording API to receive the buffer
      that was used to reserve a commit. That buffer can then be passed
      in to the commit.
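      
      (Roughly, the reserve/commit pair then looks like this; a sketch of the
      calling convention rather than verbatim kernel code:)
      
        struct ring_buffer *buffer = tr->buffer;
        struct ring_buffer_event *event;
        struct ftrace_entry *entry;
      
        /* Reserve space and remember which buffer handed it out. */
        event = trace_buffer_lock_reserve(buffer, TRACE_FN,
                                          sizeof(*entry), flags, pc);
        if (!event)
                return;
        entry = ring_buffer_event_data(event);
        entry->ip = ip;
        entry->parent_ip = parent_ip;
      
        /* Commit to the same buffer, even if the tracer swapped buffers. */
        trace_buffer_unlock_commit(buffer, event, flags, pc);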
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: make tracing_reset safe for external use · f633903a
      Steven Rostedt committed
      Resetting the trace buffer without first disabling the buffer and
      waiting for any writers to complete can corrupt the ring buffer.
      
      This patch makes the external version of tracing_reset safe from
      corruption by disabling the ring buffer and calling synchronize_sched.
      
      This version can no longer be called from interrupt context. But all those
      callers have been removed.
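      
      (The safe sequence can be sketched as follows; close to the pattern
      described above, with details simplified:)
      
        void tracing_reset(struct trace_array *tr, int cpu)
        {
                struct ring_buffer *buffer = tr->buffer;
      
                ring_buffer_record_disable(buffer);
      
                /* Make sure all in-flight commits have finished. */
                synchronize_sched();
                ring_buffer_reset_cpu(buffer, cpu);
      
                ring_buffer_record_enable(buffer);
        }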
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: use timestamp to determine start of latency traces · 2f26ebd5
      Steven Rostedt committed
      Currently the latency tracers reset the ring buffer. Unfortunately,
      if a commit is in progress (due to a trace event), this can corrupt
      the ring buffer. When this happens, the ring buffer will detect
      the corruption and then permanently disable itself.
      
      The bug does not crash the system, but it does prevent further tracing
      after the bug is hit.
      
      Instead of resetting the trace buffers, the timestamp of the start of
      the trace is used. The buffers will still contain the previous
      data, but the output will ignore any data recorded before the
      timestamp of the trace.
      
      Note, this only affects the static trace output (trace) and not the
      runtime trace output (trace_pipe). The runtime trace output does not
      make sense for the latency tracers anyway.
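      
      (The read-side change can be sketched roughly as follows: the iterator
      skips entries older than the recorded start time. Names follow the kernel
      of that era and details are simplified:)
      
        /* Skip entries recorded before this trace started. */
        static void tracing_iter_reset(struct trace_iterator *iter, int cpu)
        {
                struct ring_buffer_iter *buf_iter = iter->buffer_iter[cpu];
                struct ring_buffer_event *event;
                u64 ts;
      
                ring_buffer_iter_reset(buf_iter);
                while ((event = ring_buffer_iter_peek(buf_iter, &ts))) {
                        if (ts >= iter->tr->time_start)
                                break;
                        ring_buffer_read(buf_iter, NULL);
                }
        }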
      Reported-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: remove users of tracing_reset · 76f0d073
      Steven Rostedt committed
      The function tracing_reset is deprecated for use outside of trace.c.
      
      The new function to reset the buffers is tracing_reset_online_cpus.
      
      The reason for this is that resetting the buffers while the event
      tracepoints are active can corrupt the buffers, because they may
      be writing at the time of reset. tracing_reset_online_cpus disables
      writes and waits for current writers to finish.
      
      This patch replaces all users of tracing_reset except for the latency
      tracers. Those remaining users require more work and will be removed in the
      following patches.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: disable buffers and synchronize_sched before resetting · 621968cd
      Steven Rostedt committed
      Resetting the ring buffers while traces are happening can corrupt
      the ring buffer and disable it (no kernel crash to worry about).
      
      The safest thing to do is disable the ring buffers, call synchronize_sched()
      to wait for all current writers to finish and then reset the buffer.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  2. 04 Sep 2009 (9 commits)
    • tracing: disable update max tracer while reading trace · b8de7bd1
      Steven Rostedt committed
      When reading the tracer from the trace file, updating the max latency
      may corrupt the output. This patch disables the tracing of the max
      latency while reading the trace file.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: print out start and stop in latency traces · 8248ac05
      Steven Rostedt committed
      During development of the tracer, we would copy information from
      the live tracer to the max tracer with one memcpy. Since then we
      added a generic ring buffer and we handle the copies differently now.
      Unfortunately, we never copied the critical section information, and
      we lost the output:
      
       #  => started at: kmem_cache_alloc
       #  => ended at:   kmem_cache_alloc
      
      This patch adds back the critical start and end copying as well as
      removes the unused "trace_idx" and "overrun" fields of the
      trace_array_cpu structure.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ring-buffer: disable all cpu buffers when one finds a problem · 077c5407
      Steven Rostedt committed
      Currently, the way RB_WARN_ON works is to disable either the current
      CPU buffer or all CPU buffers, depending on whether a ring_buffer or
      ring_buffer_per_cpu struct was passed into the macro.
      
      Most users of the RB_WARN_ON pass in the CPU buffer, so only the one
      CPU buffer gets disabled but the rest are still active. This may
      confuse users even though a warning is sent to the console.
      
      This patch changes the macro to disable the entire buffer even if
      the CPU buffer is passed in.
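      
      (A sketch of the idea: when handed a per-CPU buffer, reach back to its
      parent ring_buffer and disable recording on the whole thing. Close in
      spirit to the real macro, but not verbatim:)
      
        #define RB_WARN_ON(b, cond)                                          \
        ({                                                                   \
                int _____ret = unlikely(cond);                               \
                if (_____ret) {                                              \
                        if (__same_type(*(b), struct ring_buffer_per_cpu)) { \
                                struct ring_buffer_per_cpu *__b = (void *)b; \
                                /* disable the whole buffer, not one CPU */  \
                                atomic_inc(&__b->buffer->record_disabled);   \
                        } else                                               \
                                atomic_inc(&b->record_disabled);             \
                        WARN_ON(1);                                          \
                }                                                            \
                _____ret;                                                    \
        })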
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ring-buffer: do not count discarded events · a1863c21
      Steven Rostedt committed
      The latency tracers report the number of items in the trace buffer.
      This uses the ring buffer data to calculate this. Because discarded
      events are also counted, the numbers do not match the number of items
      that are printed. The ring buffer also adds a "padding" item to the
      end of each buffer page which also gets counted as a discarded item.
      
      This patch decrements the page's entry counter when an event is discarded.
      This allows us to ignore discarded entries while reading the buffer.
      
      Decrementing the counter is still safe since it can only happen while
      the committing flag is still set.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ring-buffer: remove ring_buffer_event_discard · dc892f73
      Steven Rostedt committed
      The function ring_buffer_event_discard can be used on any item in the
      ring buffer, even after the item was committed. This function provides
      no safety nets and is very race prone.
      
      An item may be safely removed from the ring buffer before it is committed
      with the ring_buffer_discard_commit.
      
      Since there are currently no users of this function, and because it
      is racy and error prone, this patch removes it altogether.
      
      Note, removing this function also allows the counters to ignore
      all discarded events (patches will follow).
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ring-buffer: fix ring_buffer_read crossing pages · 7e9391cf
      Steven Rostedt committed
      When the ring buffer uses an iterator (static read mode, not on-the-fly
      reading) and crosses a page boundary, it will skip the first
      entry on the next page. The reason is that the last entry of a page
      is usually padding if the page is not full. The padding will not be
      returned to the user.
      
      The problem arises in ring_buffer_read because it also increments the
      iterator. Because both the read and peek use the same rb_iter_peek,
      rb_iter_peek will return the padding but also increment to the next
      item. This is because ring_buffer_peek will not increment it
      itself.
      
      The ring_buffer_read will increment it again and then call rb_iter_peek
      again to get the next item. But that will be the second item, not the
      first one on the page.
      
      The reason this never showed up before is that the ftrace utility
      always calls ring_buffer_peek first and only uses ring_buffer_read
      to increment to the next item. The ring_buffer_peek will always keep
      the pointer on a valid item and not padding. This just hid the bug.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ring-buffer: remove unnecessary cpu_relax · 1b959e18
      Steven Rostedt committed
      The loops in the ring buffer that use cpu_relax() are not dependent on
      other CPUs. They simply come across some padding in the ring buffer and
      skip over it. These are normal loops and do not require cpu_relax().
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ring-buffer: do not swap buffers during a commit · 98277991
      Steven Rostedt committed
      If a commit is taking place on a CPU ring buffer, do not allow it to
      be swapped. Return -EBUSY when this is detected instead.
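      
      (A sketch of the relevant part of ring_buffer_swap_cpu(); surrounding
      validation is omitted and details may differ:)
      
        ret = -EBUSY;
        atomic_inc(&cpu_buffer_a->record_disabled);
        atomic_inc(&cpu_buffer_b->record_disabled);
      
        /* A writer is mid-commit on either per-CPU buffer: refuse to swap. */
        if (local_read(&cpu_buffer_a->committing))
                goto out_dec;
        if (local_read(&cpu_buffer_b->committing))
                goto out_dec;
      
        cpu_buffer_b->buffer = buffer_a;
        cpu_buffer_a->buffer = buffer_b;
        buffer_a->buffers[cpu] = cpu_buffer_b;
        buffer_b->buffers[cpu] = cpu_buffer_a;
        ret = 0;
      
        out_dec:
        atomic_dec(&cpu_buffer_a->record_disabled);
        atomic_dec(&cpu_buffer_b->record_disabled);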
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ring-buffer: do not reset while in a commit · 41b6a95d
      Steven Rostedt committed
      The callers of reset must ensure that no commit can be taking place
      at the time of the reset. If one is, we may corrupt the ring buffer.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  3. 31 Aug 2009 (1 commit)
    • tracing/filters: Defer pred allocation · 8e254c1d
      Li Zefan committed
      init_preds() allocates about 5392 bytes of memory (on x86_32) for
      a TRACE_EVENT. With my config, the total memory occupied at system
      boot is:
      
      	5392 * (642 + 15) == 3459KB
      
      642 == cat available_events | wc -l
      15 == number of dirs in events/ftrace
      
      That's quite a lot, so we'd better defer the memory allocation until
      it's needed, that is, when a filter is used.
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Tom Zanussi <tzanussi@gmail.com>
      Cc: Masami Hiramatsu <mhiramat@redhat.com>
      LKML-Reference: <4A9B8EA5.6020700@cn.fujitsu.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  4. 29 Aug 2009 (1 commit)
  5. 28 Aug 2009 (5 commits)
    • tracing: Fix double CPP substitution in TRACE_EVENT_FN · 0dd7b747
      Frederic Weisbecker committed
      TRACE_EVENT_FN relies on TRACE_EVENT by reprocessing its parameters
      into the ftrace events CPP macro. This leads to a double substitution
      in some cases.
      
      For example, a bad consequence is a format always prefixed by
      "%s, %s\n" for every TRACE_EVENT_FN based event.
      
      Eg:
      	cat /debug/tracing/events/syscalls/sys_enter/format
      	[...]
      	print fmt: "%s, %s\n", "\"NR %ld (%lx, %lx, %lx, %lx, %lx, %lx)\"",\
      	"REC->id, REC->args[0], REC->args[1], REC->args[2], REC->args[3],\
      	REC->args[4], REC->args[5]"
      
      This creates a failure in post-processing tools such as perf trace or
      trace-cmd.
      
      This patch drops the double substitution and replaces it with a new __cpparg()
      macro that relays CPP arguments containing commas.
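      
      (The helper itself is tiny; roughly:)
      
        /*
         * Relay a CPP argument that may contain commas through one more
         * level of macro expansion without it being split into several
         * arguments.
         */
        #define __cpparg(arg...) arg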
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Josh Stone <jistone@redhat.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Steven Rostedt <srostedt@redhat.com>
      Cc: Jason Baron <jbaron@redhat.com>
      LKML-Reference: <1251413406-6704-1-git-send-email-fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • Merge branch 'tracing/core' of... · 66c6e29f
      Ingo Molnar committed
      Merge branch 'tracing/core' of git://git.kernel.org/pub/scm/linux/kernel/git/frederic/random-tracing into tracing/core
    • tracing: only show tracing_max_latency when latency tracer configured · 5d4a9dba
      Steven Rostedt committed
      The tracing_max_latency file should only be present when one of the
      latency tracers ({preempt|irqs}off, wakeup*) is enabled.
      
      This patch also removes tracing_thresh when latency tracers are not
      enabled, as well as compiles out code that is only used for latency
      tracers.
      Reported-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: remove legacy select of MARKERS by context switch tracing · c0729be9
      Steven Rostedt committed
      The context switch tracer was made before tracepoints were mature, and
      the original version used markers. This is no longer true and this
      patch removes the select.
      Reported-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Undef TRACE_EVENT_FN between trace events headers inclusion · 6c347d43
      Frederic Weisbecker committed
      The recent commit:
      
      	tracing/events: fix the include file dependencies
      
      fixed a file dependency problem while including more than
      one trace event header file.
      
      That fix undefines TRACE_EVENT after an event header's macro
      preprocessing, so that tracepoint.h can correctly declare the
      tracepoints necessary for the next event header file.
      
      But now we also need to undefine TRACE_EVENT_FN at the end of an event
      header file preprocessing for the same reason.
      
      This fixes the following build error:
      
      In file included from include/trace/events/napi.h:5,
                       from net/core/net-traces.c:28:
      include/linux/tracepoint.h:285:1: warning: "TRACE_EVENT_FN" redefined
      In file included from include/trace/define_trace.h:61,
                       from include/trace/events/skb.h:40,
                       from net/core/net-traces.c:27:
      include/trace/ftrace.h:50:1: warning: this is the location of the previous definition
      In file included from include/trace/events/napi.h:5,
                       from net/core/net-traces.c:28:
      include/linux/tracepoint.h:285:1: warning: "TRACE_EVENT_FN" redefined
      In file included from include/trace/define_trace.h:61,
                       from include/trace/events/skb.h:40,
                       from net/core/net-traces.c:27:
      include/trace/ftrace.h:50:1: warning: this is the location of the previous definition
      Reported-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      LKML-Reference: <20090827161732.GA7618@nowhere>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  6. 27 Aug 2009 (7 commits)
    • tracing: Remove FTRACE_SYSCALL_MAX definitions · 117226d1
      Jason Baron committed
      Remove the FTRACE_SYSCALL_MAX definitions now that we have converted the
      syscall event tracing code to use NR_syscalls.
      Signed-off-by: Jason Baron <jbaron@redhat.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Cc: Jiaying Zhang <jiayingz@google.com>
      Cc: Martin Bligh <mbligh@google.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Josh Stone <jistone@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: H. Peter Anwin <hpa@zytor.com>
      Cc: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      LKML-Reference: <f2240cdc8f0b1ca7617390c8f5ec90ba2bd348cf.1251146513.git.jbaron@redhat.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
    • tracing: Convert event tracing code to use NR_syscalls · 57421dbb
      Jason Baron committed
      Convert the syscall event tracing code to use NR_syscalls, instead of
      FTRACE_SYSCALL_MAX. NR_syscalls is standard across most arches, and
      this reduces code confusion/complexity.
      Signed-off-by: Jason Baron <jbaron@redhat.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Cc: Jiaying Zhang <jiayingz@google.com>
      Cc: Martin Bligh <mbligh@google.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Josh Stone <jistone@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: H. Peter Anwin <hpa@zytor.com>
      Cc: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      LKML-Reference: <9b4f1a84ecae57cc6599412772efa36f0d2b815b.1251146513.git.jbaron@redhat.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
    • tracing: Define NR_syscalls for x86_64 · a5a2f8e2
      Jason Baron committed
      Express the available number of syscalls in a standard way by defining
      NR_syscalls.
      
      The common way to define it is to place its definition in asm/unistd.h.
      However, on x86-64 the number of syscalls is defined using __NR_syscall_max,
      which comes from a dynamically generated header file, "asm-offsets.h".
      
      The source file that generates this header, asm-offsets-64.c, includes
      unistd.h. So if we want to derive NR_syscalls from __NR_syscall_max
      in unistd.h only after the dynamic header file has been generated, we
      need a guard.
      
      If unistd.h is included from asm-offsets-64.c, then we are in the middle
      of generating asm-offsets.h, which defines __NR_syscall_max. At that point
      we don't want to (and can't) define NR_syscalls, so we do nothing.
      Otherwise we define NR_syscalls, because we know asm-offsets.h has
      been generated.
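      
      (A sketch of that guard; the COMPILE_OFFSETS name here illustrates the
      approach described above and is not a quote of the patch:)
      
        /* In asm-offsets-64.c, before any includes: */
        #define COMPILE_OFFSETS
      
        /* In arch/x86/include/asm/unistd_64.h: */
        #ifndef COMPILE_OFFSETS
        #include <asm/asm-offsets.h>
        #define NR_syscalls (__NR_syscall_max + 1)
        #endif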
      Signed-off-by: Jason Baron <jbaron@redhat.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Cc: Jiaying Zhang <jiayingz@google.com>
      Cc: Martin Bligh <mbligh@google.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Josh Stone <jistone@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: H. Peter Anwin <hpa@zytor.com>
      Cc: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      LKML-Reference: <20090826160910.GB2658@redhat.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
    • tracing: Define NR_syscalls for x86 (32) · dd86dda2
      Jason Baron committed
      Add a NR_syscalls #define for x86. This is used in the syscall events
      tracing code. Todo: make it dynamic like x86_64.
      
      NR_syscalls is the usual name used to determine the number of syscalls
      supported by the current arch. We want to unify the use of this number
      across archs that support syscall tracing. This also prepares for moving
      some of the arch code to core code in the syscall tracing area.
      Signed-off-by: Jason Baron <jbaron@redhat.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Cc: Jiaying Zhang <jiayingz@google.com>
      Cc: Martin Bligh <mbligh@google.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Josh Stone <jistone@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: H. Peter Anwin <hpa@zytor.com>
      Cc: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      LKML-Reference: <0f33c0f96d198fccc3ddd9ff7f5334ff5cb42706.1251146513.git.jbaron@redhat.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
    • tracing: Don't trace kernel thread syscalls · cc3b13c1
      Hendrik Brueckner committed
      Kernel threads don't call syscalls using the sysenter/sysexit
      path. Instead they directly call the sys_* or do_* functions
      that implement the syscalls inside the kernel.
      
      The current syscall tracepoints only hook the sysenter/sysexit
      path, so they have no effect on kernel thread calls to syscalls,
      which bypass that path.
      Setting the TIF_SYSCALL_TRACEPOINT flag is therefore useless for these threads.
      
      Actually there is only one case in which a kernel thread can reach the
      usual syscall exit tracing path: when we create a kernel thread, the
      child comes to ret_from_fork and the fork() return is then traced.
      But this information alone is useless, so we don't want to set the
      TIF flag for these threads.
      
      Kernel threads have task_struct->mm set to NULL.
      (Thanks to Heiko for that hint ;-)
      The idea is then to check the mm field in syscall_regfunc() and
      set the flag accordingly.
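      
      (A sketch of that check in syscall_regfunc() in kernel/tracepoint.c; the
      locking and refcounting follow the code of that time but are simplified:)
      
        void syscall_regfunc(void)
        {
                unsigned long flags;
                struct task_struct *g, *t;
      
                if (!sys_tracepoint_refcount) {
                        read_lock_irqsave(&tasklist_lock, flags);
                        do_each_thread(g, t) {
                                /*
                                 * Kernel threads (mm == NULL) never enter
                                 * the syscall path, so skip them.
                                 */
                                if (t->mm)
                                        set_tsk_thread_flag(t, TIF_SYSCALL_TRACEPOINT);
                        } while_each_thread(g, t);
                        read_unlock_irqrestore(&tasklist_lock, flags);
                }
                sys_tracepoint_refcount++;
        }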
      Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
      Cc: Jason Baron <jbaron@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Cc: Jiaying Zhang <jiayingz@google.com>
      Cc: Martin Bligh <mbligh@google.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
      LKML-Reference: <20090825160237.GG4639@cetus.boeblingen.de.ibm.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
    • tracing: Check invalid syscall nr while tracing syscalls · cd0980fc
      Hendrik Brueckner committed
      Most arch syscall_get_nr() implementations return -1 if the syscall
      number is not valid.  Accessing the bit field without a check might
      result in a kernel oops (at least I saw it on s390 with the ftrace selftest).
      
      Before this change, this problem did not occur, because the invalid
      syscall number (-1) caused syscall_nr_to_meta() to return NULL.
      
      There are at least two scenarios where syscall_get_nr() can return -1:
      
      1. For example, ptrace stores an invalid syscall number, and thus,
         tracing code resets it.
         (see do_syscall_trace_enter in arch/s390/kernel/ptrace.c)
      
      2. The syscall_regfunc() (kernel/tracepoint.c) sets the
         TIF_SYSCALL_FTRACE (now: TIF_SYSCALL_TRACEPOINT) flag for all threads
         which include kernel threads.
         However, the ftrace selftest triggers a kernel oops when testing
         syscall trace points:
            - The kernel thread is started as usual (do_fork()),
            - tracing code sets TIF_SYSCALL_FTRACE,
            - the ret_from_fork() function is triggered and starts
      	ftrace_syscall_exit() with an invalid syscall number.
      
      To avoid these scenarios, I suggest checking the syscall_nr.
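      
      (The check itself is only a few lines at the top of the syscall trace
      functions; a sketch:)
      
        int syscall_nr;
      
        syscall_nr = syscall_get_nr(current, regs);
        if (syscall_nr < 0)
                return; /* invalid nr, e.g. kernel thread or ptrace reset */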
      
      For instance, the ftrace selftest fails for s390 (with config option
      CONFIG_FTRACE_SYSCALLS set) and produces the following kernel oops.
      
      Unable to handle kernel pointer dereference at virtual kernel address 2000000000
      
      Oops: 0038 [#1] PREEMPT SMP
      Modules linked in:
      CPU: 0 Not tainted 2.6.31-rc6-next-20090819-dirty #18
      Process kthreadd (pid: 818, task: 000000003ea207e8, ksp: 000000003e813eb8)
      Krnl PSW : 0704100180000000 00000000000ea54c (ftrace_syscall_exit+0x58/0xdc)
                 R:0 T:1 IO:1 EX:1 Key:0 M:1 W:0 P:0 AS:0 CC:1 PM:0 EA:3
      Krnl GPRS: 0000000000000000 00000000000e0000 ffffffffffffffff 20000000008c2650
                 0000000000000007 0000000000000000 0000000000000000 0000000000000000
                 0000000000000000 0000000000000000 ffffffffffffffff 000000003e813d78
                 000000003e813f58 0000000000505ba8 000000003e813e18 000000003e813d78
      Krnl Code: 00000000000ea540: e330d0000008       ag      %r3,0(%r13)
                 00000000000ea546: a7480007           lhi     %r4,7
                 00000000000ea54a: 1442               nr      %r4,%r2
                >00000000000ea54c: e31030000090       llgc    %r1,0(%r3)
                 00000000000ea552: 5410d008           n       %r1,8(%r13)
                 00000000000ea556: 8a104000           sra     %r1,0(%r4)
                 00000000000ea55a: 5410d00c           n       %r1,12(%r13)
                 00000000000ea55e: 1211               ltr     %r1,%r1
      Call Trace:
      ([<0000000000000000>] 0x0)
       [<000000000001fa22>] do_syscall_trace_exit+0x132/0x18c
       [<000000000002d0c4>] sysc_return+0x0/0x8
       [<000000000001c738>] kernel_thread_starter+0x0/0xc
      Last Breaking-Event-Address:
       [<00000000000ea51e>] ftrace_syscall_exit+0x2a/0xdc
      Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
      Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Jason Baron <jbaron@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Cc: Jiaying Zhang <jiayingz@google.com>
      Cc: Martin Bligh <mbligh@google.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      LKML-Reference: <20090825125027.GE4639@cetus.boeblingen.de.ibm.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
    • tracing: Add syscall tracepoints - s390 arch update · 7515bf59
      Hendrik Brueckner committed
      This patch includes s390 arch updates to synchronize with latest
      core changes in the syscalls tracing area.
      
      - tracing: Map syscall name to number (syscall_name_to_nr())
      - tracing: Call arch_init_ftrace_syscalls at boot
      - tracing: add support tracepoint ids (set_syscall_{enter,exit}_id())
      Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
      Cc: Jason Baron <jbaron@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Cc: Jiaying Zhang <jiayingz@google.com>
      Cc: Martin Bligh <mbligh@google.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      LKML-Reference: <20090825123111.GD4639@cetus.boeblingen.de.ibm.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
  7. 26 Aug 2009 (10 commits)
    • Merge branch 'tracing/core' of... · 35dce1a9
      Ingo Molnar committed
      Merge branch 'tracing/core' of git://git.kernel.org/pub/scm/linux/kernel/git/frederic/random-tracing into tracing/core
      
      Conflicts:
      	include/linux/tracepoint.h
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • tracing: add comments to explain TRACE_EVENT out of protection · 7cb2e3ee
      Steven Rostedt committed
      The commit:
        commit 5ac35daa
        Author: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
        tracing/events: fix the include file dependencies
      
      Moved the TRACE_EVENT out of the ifdef protection of tracepoints.h
      but uses the define of TRACE_EVENT itself as protection. This patch
      adds comments to explain why.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing/events: fix the include file dependencies · 5ac35daa
      Xiao Guangrong committed
      TRACE_EVENT depends on include/linux/tracepoint.h being included first
      and include/trace/ftrace.h later; if we include ftrace.h early,
      a build error will occur.
      
      Both trace_a.h and trace_b.h define TRACE_EVENT; if we include
      both in a .c file, like this:
      
      #define CREATE_TRACE_POINTS
      #include <trace/events/trace_a.h>
      #include <trace/events/trace_b.h>
      
      The above will not work, because the TRACE_EVENT was re-defined by
      the previous .h file.
      Reported-by: Wei Yongjun <yjwei@cn.fujitsu.com>
      Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
      LKML-Reference: <4A937F5E.3020802@cn.fujitsu.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ftrace: Move setting of clock-source out of options · 5079f326
      Zhaolei committed
      There are many clock sources for the tracing system but we can only
      enable/disable one at a time with the trace/options file.
      We can move the setting of clock-source out of options and add a separate
      file for it:
       # cat trace_clock
       [local] global
       # echo global > trace_clock
       # cat trace_clock
       local [global]
      Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com>
      LKML-Reference: <4A939D08.6050604@cn.fujitsu.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing/filters: Support filtering for char * strings · 87a342f5
      Li Zefan committed
      Usually, char * entries are dangerous in traces because the string
      can be released whereas a pointer to it can still wait to be read from
      the ring buffer.
      
      But sometimes we can assume it's safe, like in the case of RO data
      (eg: __file__ or __line__, used in the bkl trace event). If this RO data
      is in a module and so is the call to the trace event, then it's safe,
      because the ring buffer will be flushed once the module gets unloaded.
      
      To allow char * to be treated as a string:
      
      	TRACE_EVENT(...,
      
      		TP_STRUCT__entry(
      			__field_ext(const char *, name, FILTER_PTR_STRING)
      			...
      		)
      
      		...
      	);
      
      The filtering will not dereference "char *" unless the developer
      explicitly sets FILTER_PTR_STR in __field_ext.
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
      LKML-Reference: <4A7B9287.90205@cn.fujitsu.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing/filters: Add __field_ext() to TRACE_EVENT · 43b51ead
      Li Zefan committed
      Add __field_ext(), so a field can be assigned to a specific
      filter_type, which matches a corresponding filter function.
      
      For example, a later patch will allow this:
      	__field_ext(const char *, str, FILTER_PTR_STR);
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
      LKML-Reference: <4A7B9272.60507095@cn.fujitsu.com>
      
      [
        Fixed a -1 to FILTER_OTHER
        Forward ported to latest kernel.
      ]
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing/filters: Add filter_type to struct ftrace_event_field · aa38e9fc
      Li Zefan committed
      The type of a field is stored as a string in @type, and here
      we add @filter_type which is an enum value.
      
      This prepares for later patches, so we can specifically assign
      different @filter_type for the same @type.
      
      For example, normally a "char *" field is treated as a pointer,
      but we may want it to be treated as a string when doing filtering.
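      
      (Roughly, the field description and the filter types look like this; a
      sketch whose names mirror the ones used in these patches:)
      
        enum {
                FILTER_OTHER = 0,
                FILTER_STATIC_STRING,
                FILTER_DYN_STRING,
                FILTER_PTR_STRING,
        };
      
        struct ftrace_event_field {
                struct list_head  link;
                char              *name;
                char              *type;
                int               filter_type;  /* one of the values above */
                int               offset;
                int               size;
                int               is_signed;
        };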
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
      LKML-Reference: <4A7B925E.9030605@cn.fujitsu.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Add vim script to enable folding for function_graph traces · 6591b493
      Josh Triplett committed
      function_graph traces look like nested function calls, complete with
      braces denoting the start and end of functions.  function-graph-fold.vim
      teaches vim how to fold these functions, to make it more convenient to
      browse them.
      
      To use, :source function-graph-fold.vim while viewing a function_graph
      trace, or use "view -S function-graph-fold.vim some-trace" to load it
      from the command-line together with a trace.  You can then use the usual
      vim fold commands, such as "za", to open and close nested functions.
      While closed, a fold will show the total time taken for a call, as would
      normally appear on the line with the closing brace.  Folded functions
      will not include finish_task_switch(), so folding should remain
      relatively sane even through a context switch.
      
      Note that this will almost certainly only work well with a single-CPU
      trace (e.g. trace-cmd report --cpu 1).  It also takes some time to run
      (a few seconds for a large trace on my laptop).  Nevertheless, I found
      it very handy to get an overview of a trace and then drill down on
      problematic calls.
      Signed-off-by: Josh Triplett <josh@joshtriplett.org>
      LKML-Reference: <20090806145701.GB7661@feather>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing/sched: show CPU task wakes up on in trace event · f0693c8b
      Steven Rostedt committed
      While debugging the scheduler push / pull algorithm, I found
      it very annoying that the sched wake up events did not show
      the CPU that the task was waking on. In order to analyze the
      scheduler, I needed that information.
      
      This patch adds recording of the CPU that a task is waking up
      on.
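      
      (In TRACE_EVENT terms the change amounts to one extra field in the
      sched_wakeup event, along these lines; a sketch, not the verbatim patch:)
      
        TP_STRUCT__entry(
                __array(char,  comm,  TASK_COMM_LEN)
                __field(pid_t, pid)
                __field(int,   prio)
                __field(int,   success)
                __field(int,   target_cpu)  /* new: CPU the task wakes on */
        ),
      
        TP_fast_assign(
                memcpy(__entry->comm, p->comm, TASK_COMM_LEN);
                __entry->pid        = p->pid;
                __entry->prio       = p->prio;
                __entry->success    = success;
                __entry->target_cpu = task_cpu(p);
        ),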
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Create generic syscall TRACE_EVENTs · 1c569f02
      Josh Stone committed
      This converts the syscall_enter/exit tracepoints into TRACE_EVENTs, so
      you can have generic ftrace events that capture all system calls with
      arguments and return values.  These generic events are also renamed to
      sys_enter/exit, so they're more closely aligned to the specific
      sys_enter_foo events.
      Signed-off-by: Josh Stone <jistone@redhat.com>
      Cc: Jason Baron <jbaron@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Cc: Jiaying Zhang <jiayingz@google.com>
      Cc: Martin Bligh <mbligh@google.com>
      Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      LKML-Reference: <1251150194-1713-5-git-send-email-jistone@redhat.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>