1. 04 June 2010, 1 commit
    • tracing: Remove ftrace_preempt_disable/enable · 5168ae50
      Committed by Steven Rostedt
      The ftrace_preempt_disable/enable functions were added to address a
      recursive race caused by the function tracer. The function tracer
      traces all functions, which makes it easily susceptible to recursion.
      One suspect area was preempt_enable(): it would call the scheduler,
      and the scheduler would call the function tracer, and loop.
      (Or so it was thought.)
      
      The ftrace_preempt_disable/enable pair was made to protect against
      recursion inside the scheduler by storing the NEED_RESCHED flag. If the
      flag was set before ftrace_preempt_disable(), ftrace_preempt_enable()
      would not call schedule(), on the assumption that a flag set beforehand
      meant the task had already scheduled unless it was already in the
      scheduler.
      
      This worked fine except on SMP, where another task could set the
      NEED_RESCHED flag for a task on another CPU and then kick off an
      IPI to trigger it. This could cause the NEED_RESCHED flag to be saved at
      ftrace_preempt_disable() but the IPI to arrive in the preempt-disabled
      section. ftrace_preempt_enable() would then not call the scheduler,
      because the flag was already set before entering the section.
      
      This bug caused missed preemption checks and, with them, higher latencies.
      
      Investigating further, I found that the recursion caused by the function
      tracer was not due to schedule(), but due to preempt_schedule(). Now
      that preempt_schedule() is completely annotated with notrace, the
      recursion is no longer an issue.
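      
      For reference, a minimal sketch of the removed helpers, reconstructed
      from the description above (not the verbatim kernel code):
      
        /* Sketch of the removed pair: remember NEED_RESCHED across a
         * preempt-disabled region so the enable side can skip the
         * schedule() check when the flag was already set on entry. */
        static inline int ftrace_preempt_disable(void)
        {
                int resched = need_resched();   /* save NEED_RESCHED state */
        
                preempt_disable_notrace();      /* untraced preempt off */
                return resched;
        }
        
        static inline void ftrace_preempt_enable(int resched)
        {
                if (resched)
                        /* flag was set on entry: do not re-check it */
                        preempt_enable_no_resched_notrace();
                else
                        preempt_enable_notrace();
        }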
      Reported-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  2. 15 May 2010, 3 commits
    • tracing: Fix function declarations if !CONFIG_STACKTRACE · e1f7992e
      Committed by Li Zefan
      ftrace_trace_stack() and ftrace_trace_userstack() take a
      struct ring_buffer argument, not struct trace_array. Commit
      e77405ad ("tracing: pass around ring buffer instead of tracer")
      made this change.
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
      LKML-Reference: <4BE77C14.5010806@cn.fujitsu.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Combine event filter_active and enable into single flags field · 553552ce
      Committed by Steven Rostedt
      The filter_active and enable fields each use an int (4 bytes) to
      store a single flag. We can save 4 bytes per event by combining the
      two into a single flags integer.
      
         text	   data	    bss	    dec	    hex	filename
      4913961	1088356	 861512	6863829	 68bbd5	vmlinux.orig
      4894944	1018052	 861512	6774508	 675eec	vmlinux.id
      4894871	1012292	 861512	6768675	 674823	vmlinux.flags
      
      This gives us another 5K in savings.
      
      Modifications of both the enable and filter fields are done
      under event_mutex, so it is still safe to combine the two.
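      
      A minimal sketch of the pattern (the bit names below match the style
      of the change but are illustrative, not the exact kernel definitions):
      
        /* Illustrative sketch: two former int fields become bits in one
         * flags word, saving 4 bytes per event. */
        enum {
                TRACE_EVENT_FL_ENABLED_BIT,
                TRACE_EVENT_FL_FILTERED_BIT,
        };
        
        enum {
                TRACE_EVENT_FL_ENABLED  = (1 << TRACE_EVENT_FL_ENABLED_BIT),
                TRACE_EVENT_FL_FILTERED = (1 << TRACE_EVENT_FL_FILTERED_BIT),
        };
        
        struct ftrace_event_call {
                /* ... */
                unsigned int    flags;  /* replaces: int enabled; int filter_active; */
        };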
      
      Note: Although Mathieu gave his Acked-by, he would like it documented
       that reads of the flags field are not protected by the mutex. The way
       the code works, these reads will not break anything, but they can have
       a residual effect. Since this behavior is the same even before this
       patch, describing the situation is left to another patch; this patch
       does not change the behavior, but merely brought it to Mathieu's
       attention.
      
      v2: Updated the event trace self test for this change.
      Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
      Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Tom Zanussi <tzanussi@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Move fields from event to class structure · 2e33af02
      Committed by Steven Rostedt
      Move the defined fields from the event to the class structure.
      Since the fields of the event are defined by the class they belong
      to, it makes sense to have the class hold the information instead
      of the individual events. The events of the same class would just
      hold duplicate information.
      
      After this change the size of the kernel dropped another 3K:
      
         text	   data	    bss	    dec	    hex	filename
      4913961	1088356	 861512	6863829	 68bbd5	vmlinux.orig
      4900252	1057412	 861512	6819176	 680d68	vmlinux.regs
      4900375	1053380	 861512	6815267	 67fe23	vmlinux.fields
      
      Although the text size increased, this was mainly due to the C files
      having to adapt to the change. It is a one-time constant increase;
      new tracepoints will not grow the text. The big drop is in the data
      size (as well as in the allocations needed to hold the fields), and
      the savings grow as more tracepoints are created.
      
      Note, if just TRACE_EVENT()s are used and not DECLARE_EVENT_CLASS()
      with several DEFINE_EVENT()s, then the savings will be lost. But
      we are pushing developers to consolidate events with DEFINE_EVENT()
      so this should not be an issue.
      
      The kprobes define a unique class for every new event, but they are
      dynamic, so this should not be an issue.
      
      The syscalls, however, have a single class, but the fields of the
      individual events differ. The syscalls use a metadata structure to
      define the fields. I moved the fields list from the event to the
      metadata and added a get_fields() function to the class. This function
      is used to find the fields. For normal events and kprobes, get_fields()
      just returns a pointer to the fields list_head in the class. For
      syscall events, it returns the fields list_head in the metadata for
      the event.
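      
      A simplified sketch of that indirection (struct members trimmed to
      the ones named here):
      
        #include <linux/list.h>
        
        struct ftrace_event_call;
        
        /* Simplified: the class owns the shared field list plus an
         * accessor that syscall events can override. */
        struct ftrace_event_class {
                struct list_head        fields;
                struct list_head        *(*get_fields)(struct ftrace_event_call *);
        };
        
        struct ftrace_event_call {
                struct ftrace_event_class       *class;
                /* ... */
        };
        
        /* Default accessor for normal events and kprobes: the fields
         * live in the class itself. */
        static struct list_head *
        trace_get_fields(struct ftrace_event_call *event_call)
        {
                return &event_call->class->fields;
        }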
      
      v2:  Fixed the syscall fields. The syscall metadata needs a list
           of fields for both enter and exit.
      Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: Tom Zanussi <tzanussi@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  3. 28 April 2010, 1 commit
  4. 27 April 2010, 1 commit
  5. 15 April 2010, 1 commit
    • tracing/kprobes: Support basic types on dynamic events · 93ccae7a
      Committed by Masami Hiramatsu
      Support basic integer types (u8, u16, u32, u64, s8, s16, s32, s64) in
      the kprobe tracer. With this patch, users can specify one of these
      basic types for each argument after ':'. If omitted, the argument type
      defaults to unsigned long (u32 or u64, depending on the architecture).
      
       e.g.
        echo 'p account_system_time+0 hardirq_offset=%si:s32' > kprobe_events
      
        adds a probe recording hardirq_offset as a signed 32-bit value on
        entry to account_system_time.
      
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <20100412171708.3790.18599.stgit@localhost6.localdomain6>
      Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  6. 26 March 2010, 1 commit
    • x86, perf, bts, mm: Delete the never used BTS-ptrace code · faa4602e
      Committed by Peter Zijlstra
      Support for the PMU's BTS features has been upstreamed in
      v2.6.32, but we still have the old and disabled ptrace-BTS code,
      as Linus noticed not so long ago.
      
      It's buggy: TIF_DEBUGCTLMSR is trampling all over that MSR without
      regard for other uses (perf), and it doesn't provide the flexibility
      needed for perf either.
      
      Its only users are ptrace-block-step and ptrace-bts; ptrace-bts
      was never used, and ptrace-block-step can be implemented using a
      much simpler approach.
      
      So axe all 3000 lines of it. That includes the *locked_memory*()
      APIs in mm/mlock.c as well.
      Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Roland McGrath <roland@redhat.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Markus Metzger <markus.t.metzger@intel.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      LKML-Reference: <20100325135413.938004390@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  7. 06 March 2010, 1 commit
    • function-graph: Add tracing_thresh support to function_graph tracer · 0e950173
      Committed by Tim Bird
      Add support for tracing_thresh to the function_graph tracer.  This
      version of this feature isolates the checks into new entry and
      return functions, to avoid adding more conditional code into the
      main function_graph paths.
      
      When the tracing_thresh is set and the function graph tracer is
      enabled, only the functions that took longer than the time in
      microseconds that was set in tracing_thresh are recorded. To do this
      efficiently, only the function exits are recorded:
      
       [tracing]# echo 100 > tracing_thresh
       [tracing]# echo function_graph > current_tracer
       [tracing]# cat trace
       # tracer: function_graph
       #
       # CPU  DURATION                  FUNCTION CALLS
       # |     |   |                     |   |   |   |
        1) ! 119.214 us  |  } /* smp_apic_timer_interrupt */
        1)   <========== |
        0) ! 101.527 us  |              } /* __rcu_process_callbacks */
        0) ! 126.461 us  |            } /* rcu_process_callbacks */
        0) ! 145.111 us  |          } /* __do_softirq */
        0) ! 149.667 us  |        } /* do_softirq */
        0) ! 168.817 us  |      } /* irq_exit */
        0) ! 248.254 us  |    } /* smp_apic_timer_interrupt */
      
      Also, add support for specifying tracing_thresh on the kernel
      command line. When used like "tracing_thresh=200 ftrace=function_graph",
      this can be used to analyse system startup. It is important to disable
      tracing soon after boot, in order to avoid losing the trace data.
      Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Tim Bird <tim.bird@am.sony.com>
      LKML-Reference: <4B87098B.4040308@am.sony.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  8. 25 February 2010, 1 commit
    • tracing: Fix ftrace_event_call alignment for use with gcc 4.5 · 86c38a31
      Committed by Jeff Mahoney
      GCC 4.5 introduces behavior that forces the alignment of structures to
       use the largest possible value. The default value is 32 bytes, so if
       some structures are defined with a 4-byte alignment and others aren't
       declared with any alignment constraint at all, it will align at 32 bytes.
      
       For things like the ftrace events, this results in a non-standard array.
       When initializing the ftrace subsystem, we traverse the _ftrace_events
       section and call the initialization callback for each event. When the
       structures are misaligned, we could be treating another part of the
       structure (or the zeroed out space between them) as a function pointer.
      
       This patch forces the alignment for all the ftrace_event_call structures
       to 4 bytes.
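      
       A hedged sketch of the fix (the attribute spelling is illustrative;
       the real patch applies it through the event definition macros, and
       the example variable name is hypothetical):
      
         /* Force 4-byte alignment so the entries placed in the
          * _ftrace_events section form a proper, walkable array even
          * under gcc 4.5's larger default alignment. */
         struct ftrace_event_call {
                 /* ... */
         } __attribute__((__aligned__(4)));
        
         static struct ftrace_event_call
         __attribute__((__aligned__(4)))
         __attribute__((section("_ftrace_events"))) event_example;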
      
       Without this patch, the kernel fails to boot very early when built with
       gcc 4.5.
      
       It's trivial to check the alignment of the members of the array, so it
       might be worthwhile to add something to the build system to do that
       automatically. Unfortunately, that only covers this case. I've asked one
       of the gcc developers about adding a warning when this condition is seen.
      
      Cc: stable@kernel.org
      Signed-off-by: Jeff Mahoney <jeffm@suse.com>
      LKML-Reference: <4B85770B.6010901@suse.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  9. 12 February 2010, 1 commit
  10. 05 February 2010, 1 commit
  11. 29 January 2010, 1 commit
    • tracing: Simplify test for function_graph tracing start point · ea2c68a0
      Committed by Lai Jiangshan
      In the function graph tracer, a calling function is to be traced
      only when it is enabled through the set_graph_function file,
      or when it is nested in an enabled function.
      
      The current code uses TSK_TRACE_FL_GRAPH to test whether a call is
      nested or not. Looking at the code, we can see this equivalence:
      (trace->depth > 0) <==> (TSK_TRACE_FL_GRAPH is set)
      
      trace->depth states more explicitly that the call is nested,
      so use trace->depth directly and simplify the code.
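      
      A condensed sketch of the simplified start-point test (the helper
      name here is illustrative; ftrace_graph_addr() is the real
      set_graph_function lookup, and the exact kernel code differs):
      
        /* Trace this call if the function was explicitly enabled via
         * set_graph_function, or if we are already nested inside an
         * enabled function (trace->depth > 0). */
        static int trace_graph_entry_check(struct ftrace_graph_ent *trace)
        {
                if (!(trace->depth || ftrace_graph_addr(trace->func)))
                        return 0;       /* neither start point nor nested */
                /* ... record the entry ... */
                return 1;
        }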
      
      No functionality is changed.
      TSK_TRACE_FL_GRAPH is not removed yet, it is left for future usage.
      Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      LKML-Reference: <4B4DB0B6.7040607@cn.fujitsu.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
  12. 14 December 2009, 1 commit
  13. 08 December 2009, 1 commit
    • tracing: Add pipe_close interface · c521efd1
      Committed by Steven Rostedt
      An ftrace plugin can add a pipe_open callback that runs when the user
      opens trace_pipe. But if the plugin allocates something within pipe_open,
      it cannot free it, because no pipe_close exists. The hook for opening
      the trace file has a corresponding close; the opening of the trace_pipe
      file should also have one.
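      
      A simplified sketch of the paired callbacks in the tracer ops (the
      real struct tracer carries many more members):
      
        struct trace_iterator;
        
        struct tracer {
                const char      *name;
                void            (*pipe_open)(struct trace_iterator *iter);
                void            (*pipe_close)(struct trace_iterator *iter); /* added */
                /* ... */
        };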
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  14. 25 November 2009, 1 commit
    • trace/syscalls: Change ret param in struct syscall_trace_exit to long · 99df5a6a
      Committed by Tom Zanussi
      Commit ee949a86 ("tracing/syscalls:
      Use long for syscall ret format and field definitions") changed the
      syscall exit return type to long, but forgot to change it in the
      struct.
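      
      The corrected structure, sketched with just the fields in question:
      
        /* Sketch: the exit record's return value is a long, matching the
         * format and field definitions from commit ee949a86. */
        struct syscall_trace_exit {
                struct trace_entry      ent;
                int                     nr;     /* syscall number */
                long                    ret;    /* was int */
        };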
      Signed-off-by: Tom Zanussi <tzanussi@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <1259133299-23594-3-git-send-email-tzanussi@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  15. 08 November 2009, 2 commits
    • ksym_tracer: Remove KSYM_SELFTEST_ENTRY · 30ff21e3
      Committed by Li Zefan
      The macro used to be used in both trace_selftest.c and
      trace_ksym.c, but no longer is, so remove it from the header file.
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Prasad <prasad@linux.vnet.ibm.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
    • hw-breakpoints: Rewrite the hw-breakpoints layer on top of perf events · 24f1e32c
      Committed by Frederic Weisbecker
      This patch rebases the implementation of the breakpoints API on top
      of perf event instances.
      
      Each breakpoint is now a perf event that handles the
      register scheduling, thread/cpu attachment, etc.
      
      The new layering is now made as follows:
      
             ptrace       kgdb      ftrace   perf syscall
                \          |          /         /
                 \         |         /         /
                                              /
                  Core breakpoint API        /
                                            /
                           |               /
                           |              /
      
                    Breakpoints perf events
      
                           |
                           |
      
                     Breakpoints PMU ---- Debug Register constraints handling
                                          (Part of core breakpoint API)
                           |
                           |
      
                   Hardware debug registers
      
      Reasons for this rewrite:
      
      - Use the centralized/optimized PMU register scheduling,
        implying an easier arch integration
      - More powerful register handling: perf attributes (pinned/flexible
        events, exclusive/non-exclusive, tunable period, etc.); a usage
        sketch follows this list
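      
      A hedged usage sketch, registering a kernel-side hardware breakpoint
      through the new perf layer (API shapes as introduced around this
      series; signatures changed in later kernels, and the watched variable
      is hypothetical):
      
        #include <linux/err.h>
        #include <linux/init.h>
        #include <linux/kernel.h>
        #include <linux/perf_event.h>
        #include <linux/hw_breakpoint.h>
        
        static unsigned long watched;           /* hypothetical watch target */
        static struct perf_event **sample_hbp;
        
        static void sample_hbp_handler(struct perf_event *bp, int nmi,
                                       struct perf_sample_data *data,
                                       struct pt_regs *regs)
        {
                printk(KERN_INFO "write to watched variable caught\n");
        }
        
        static int __init hbp_init(void)
        {
                struct perf_event_attr attr;
        
                hw_breakpoint_init(&attr);      /* type = PERF_TYPE_BREAKPOINT */
                attr.bp_addr = (unsigned long)&watched;
                attr.bp_len  = HW_BREAKPOINT_LEN_4;
                attr.bp_type = HW_BREAKPOINT_W; /* trigger on writes */
        
                sample_hbp = register_wide_hw_breakpoint(&attr,
                                                         sample_hbp_handler);
                if (IS_ERR(sample_hbp))
                        return PTR_ERR(sample_hbp);
                return 0;
        }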
      
      Impact:
      
      - New perf ABI: the hardware breakpoint counters
      - Ptrace breakpoint setting remains tricky and still needs some
        per-thread breakpoint references.
      
      Todo (in the order):
      
      - Support breakpoints perf counter events for perf tools (ie: implement
        perf_bpcounter_event())
      - Support from perf tools
      
      Changes in v2:
      
      - Follow the perf "event" rename
      - The ptrace regression has been fixed (ptrace breakpoint perf events
        weren't released when a task ended)
      - Drop the struct hw_breakpoint and store generic fields in
        perf_event_attr.
      - Separate core and arch specific headers, drop
        asm-generic/hw_breakpoint.h and create linux/hw_breakpoint.h
      - Use new generic len/type for breakpoints
      - Handle the off case: when the breakpoints API is not supported by an arch
      
      Changes in v3:
      
      - Fix broken CONFIG_KVM, we need to propagate the breakpoint api
        changes to kvm when we exit the guest and restore the bp registers
        to the host.
      
      Changes in v4:
      
      - Drop the hw_breakpoint_restore() stub as it is only used by KVM
      - EXPORT_SYMBOL_GPL hw_breakpoint_restore() as KVM can be built as a
        module
      - Restore the breakpoints unconditionally on kvm guest exit:
        TIF_DEBUG_THREAD no longer covers every case of running
        breakpoints, and vcpu->arch.switch_db_regs might not always be
        set when the guest used debug registers.
        (Waiting for a reliable optimization)
      
      Changes in v5:
      
      - Split the move of asm-generic/hw_breakpoint.h to
        linux/hw_breakpoint.h out into a separate patch
      - Optimize the breakpoint restore when switching from a kvm guest
        to the host: only restore the state if the host has active
        breakpoints; otherwise we don't care about messed-up
        address registers.
      - Add asm/hw_breakpoint.h to Kbuild
      - Fix bad breakpoint type in trace_selftest.c
      
      Changes in v6:
      
      - Fix wrong header inclusion in trace.h (triggered a build
        error with CONFIG_FTRACE_SELFTEST)
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Prasad <prasad@linux.vnet.ibm.com>
      Cc: Alan Stern <stern@rowland.harvard.edu>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Jan Kiszka <jan.kiszka@web.de>
      Cc: Jiri Slaby <jirislaby@gmail.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Avi Kivity <avi@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
  16. 15 October 2009, 3 commits
  17. 14 October 2009, 1 commit
    • tracing: Support multiple pids in set_ftrace_pid file · 756d17ee
      Committed by jolsa@redhat.com
      Add the possibility to set more than one PID in the set_ftrace_pid
      file, thus allowing more than one independent process to be traced.
      
      Usage:
      
       sh-4.0# echo 284 > ./set_ftrace_pid
       sh-4.0# cat ./set_ftrace_pid
       284
       sh-4.0# echo 1 >> ./set_ftrace_pid
       sh-4.0# echo 0 >> ./set_ftrace_pid
       sh-4.0# cat ./set_ftrace_pid
       swapper tasks
       1
       284
       sh-4.0# echo 4 > ./set_ftrace_pid
       sh-4.0# cat ./set_ftrace_pid
       4
       sh-4.0# echo > ./set_ftrace_pid
       sh-4.0# cat ./set_ftrace_pid
       no pid
       sh-4.0#
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <20091013203425.565454612@goodmis.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  18. 13 October 2009, 1 commit
  19. 12 October 2009, 1 commit
  20. 25 September 2009, 2 commits
    • tracing/filters: Unify the regex parsing helpers · 3f6fe06d
      Committed by Frederic Weisbecker
      The filter code has stolen the regex parsing function from ftrace to
      get regex support. We have duplicated this code, so factorize it in
      the filter area and make it generally available, as the filter code
      is the best suited to host this feature.
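      
      A sketch of the unified helper's shape (condensed from the kernel's
      filter code; treat it as an approximation rather than the exact
      implementation):
      
        enum regex_type {
                MATCH_FULL = 0,
                MATCH_FRONT_ONLY,       /* "abc*"  */
                MATCH_MIDDLE_ONLY,      /* "*abc*" */
                MATCH_END_ONLY,         /* "*abc"  */
        };
        
        static enum regex_type filter_parse_regex(char *buff, int len,
                                                  char **search, int *not)
        {
                enum regex_type type = MATCH_FULL;
                int i;
        
                if (buff[0] == '!') {   /* leading '!' negates the match */
                        *not = 1;
                        buff++;
                        len--;
                } else {
                        *not = 0;
                }
                *search = buff;
        
                for (i = 0; i < len; i++) {
                        if (buff[i] != '*')
                                continue;
                        if (!i) {       /* leading '*': match the end */
                                *search = buff + 1;
                                type = MATCH_END_ONLY;
                        } else {        /* later '*': front or middle */
                                type = (type == MATCH_END_ONLY) ?
                                        MATCH_MIDDLE_ONLY : MATCH_FRONT_ONLY;
                                buff[i] = 0;
                                break;
                        }
                }
                return type;
        }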
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Tom Zanussi <tzanussi@gmail.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
    • tracing/filters: Provide basic regex support · 1889d209
      Committed by Frederic Weisbecker
      This patch provides basic support for regular expressions in filters.
      
      It supports the following types of regexp:
      
      - *match_beginning
      - *match_middle*
      - match_end*
      - !don't match
      
      Example:
      	cd /debug/tracing/events/bkl/lock_kernel
      	echo 'file == "*reiserfs*"' > filter
      	echo 1 > enable
      
                 gedit-4941  [000]   457.735437: lock_kernel: depth: 0, fs/reiserfs/namei.c:334 reiserfs_lookup()
           sync_supers-227   [001]   461.379985: lock_kernel: depth: 0, fs/reiserfs/super.c:69 reiserfs_sync_fs()
           sync_supers-227   [000]   461.383096: lock_kernel: depth: 0, fs/reiserfs/journal.c:1069 flush_commit_list()
            reiserfs/1-1369  [001]   461.479885: lock_kernel: depth: 0, fs/reiserfs/journal.c:3509 flush_async_commits()
      
      Every string is now handled as a regexp in the filter framework, which
      helps to factorize the code for handling both simple strings and
      regexp comparisons.
      
      (The regexp parsing code has been wildly cherry picked from ftrace.c
      written by Steve.)
      
      v2: Simplify the whole thing and drop the filter_regex file
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Tom Zanussi <tzanussi@gmail.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
  21. 19 September 2009, 1 commit
  22. 13 September 2009, 4 commits
    • tracing: use the new trace_entries.h to create format files · 4e5292ea
      Committed by Steven Rostedt
      This patch changes the way the format files in
      
        debugfs/tracing/events/ftrace/*/format
      
      are created. It uses the new trace_entries.h file to automate the
      creation of the format files to ensure that they are always in sync
      with the actual structures. This is the same methodology used to
      create the format files for the TRACE_EVENT macro.
      
      This also updates the filter creation that was built on the creation
      of the format files.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: show details of structures within the ftrace structures · d7315094
      Committed by Steven Rostedt
      Some of the internal ftrace structures contain nested structures. A
      format-file field that only says it is a structure is useless: a
      binary reader of the ring buffer needs to know more about how the
      fields are broken up.
      
      This patch adds new fields to the ftrace structure macros to
      describe the structures nested inside a structure.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: use macros to create internal ftrace entry ring buffer structures · 0a1c49db
      Committed by Steven Rostedt
      The entries used by ftrace internal code (plugins) currently have their
      formats manually exported to userspace. That is, the format files in
      debugfs/tracing/events/ftrace/*/format are currently created by hand.
      This is a maintenance nightmare, and can easily become out of sync
      with what is actually shown.
      
      This patch uses the methodology of the TRACE_EVENT macros to build
      the structures so that their formats can be automated and this
      will keep the structures in sync with what users can see.
      
      This patch only changes the way the structures are created. Further
      patches will build off of this to automate the format files.
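      
      A sketch of what such a macro entry looks like, modeled on the
      classic function-trace record (close to, but not necessarily
      identical with, the final trace_entries.h):
      
        /* One entry definition yields both the C struct and the
         * userspace-visible format file. */
        FTRACE_ENTRY(function, ftrace_entry,
        
                TRACE_FN,
        
                F_STRUCT(
                        __field(        unsigned long,  ip              )
                        __field(        unsigned long,  parent_ip       )
                ),
        
                F_printk(" %lx <-- %lx", __entry->ip, __entry->parent_ip)
        );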
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: do not update tracing_max_latency when tracer is stopped · b5130b1e
      Committed by Carsten Emde
      The state of the function pair tracing_stop()/tracing_start() is
      correctly considered when tracer data are updated. However, the global
      and externally accessible variable tracing_max_latency is always
      updated, even when tracing is stopped.
      
      The update should only occur if tracing was not stopped.
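      
      A condensed sketch of the guard (variable names are illustrative of
      the tracer's internal bookkeeping):
      
        static int trace_stop_count;            /* bumped by tracing_stop() */
        static unsigned long tracing_max_latency;
        
        static void update_max_latency(unsigned long latency)
        {
                if (trace_stop_count)           /* tracing stopped: skip */
                        return;
                if (latency > tracing_max_latency)
                        tracing_max_latency = latency;
        }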
      Signed-off-by: Carsten Emde <C.Emde@osadl.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  23. 12 September 2009, 1 commit
  24. 11 September 2009, 1 commit
  25. 10 September 2009, 4 commits
  26. 05 September 2009, 3 commits
    • tracing: add trace_array_printk for internal tracers to use · 659372d3
      Committed by Steven Rostedt
      This patch adds trace_array_printk() to allow a tracer to use
      trace_printk() on its own trace array.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: pass around ring buffer instead of tracer · e77405ad
      Committed by Steven Rostedt
      The latency tracers (irqsoff and wakeup) can swap trace buffers
      on the fly. If an event is happening and has reserved data on one of
      the buffers, and the latency tracer swaps the global buffer with the
      max buffer, the result is that the event may commit the data to the
      wrong buffer.
      
      This patch changes the trace recording API to receive the buffer
      that was used to reserve a commit, so that the same buffer can be
      passed in to the commit.
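      
      A sketch of the buffer-based shape of the API after this change
      (prototypes condensed; the real argument lists are longer):
      
        struct ring_buffer;
        struct ring_buffer_event;
        
        /* Reserve on a specific buffer, and hand the same buffer back to
         * the commit, so a max-buffer swap in between cannot misdirect
         * the committed data. */
        struct ring_buffer_event *
        trace_buffer_lock_reserve(struct ring_buffer *buffer, int type,
                                  unsigned long len, unsigned long flags,
                                  int pc);
        
        void trace_buffer_unlock_commit(struct ring_buffer *buffer,
                                        struct ring_buffer_event *event,
                                        unsigned long flags, int pc);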
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: use timestamp to determine start of latency traces · 2f26ebd5
      Committed by Steven Rostedt
      Currently the latency tracers reset the ring buffer. Unfortunately,
      if a commit is in progress (due to a trace event), this can corrupt
      the ring buffer. When this happens, the ring buffer detects
      the corruption and then permanently disables itself.
      
      The bug does not crash the system, but it does prevent further tracing
      after the bug is hit.
      
      Instead of resetting the trace buffers, the timestamp of the start of
      the trace is recorded. The buffers will still contain the previous
      data, but the output will not count any data that is before the
      timestamp of the trace.
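      
      A condensed sketch of the approach (names are illustrative; in the
      real code the stamp lives in the trace array):
      
        static u64 trace_start_stamp;   /* taken when the trace starts */
        
        static void latency_trace_start(void)
        {
                /* no ring_buffer_reset(): just remember "now" */
                trace_start_stamp = trace_clock_local();
        }
        
        static int entry_visible(u64 entry_ts)
        {
                /* old entries stay in the buffer but are skipped
                 * when the static trace file is printed */
                return entry_ts >= trace_start_stamp;
        }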
      
      Note, this only affects the static trace output (trace) and not the
      runtime trace output (trace_pipe). The runtime trace output does not
      make sense for the latency tracers anyway.
      Reported-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>