1. 11 May 2012 (1 commit)
    • tracing: Do not enable function event with enable · 9b63776f
      Committed by Steven Rostedt
      Adding the function tracing event to perf caused a side effect
      that produces the following warning when enabling all events in
      ftrace:
      
       # echo 1 > /sys/kernel/debug/tracing/events/enable
      
      [console]
      event trace: Could not enable event function
      
      This is because when enabling all events via the debugfs system
      it ignores events that do not have a ->reg() function assigned.
      This was to skip over the ftrace internal events (as they are
      not TRACE_EVENTs). But as the ftrace function event now has
      a ->reg() function attached to it for use with perf, it is no
      longer ignored.
      
      Worse yet, this ->reg() function is being called when it should
      not be. It returns an error and causes the above warning to
      be printed.
      
      By adding a new event_call flag (TRACE_EVENT_FL_IGNORE_ENABLE)
      and setting it in all ftrace internal event structures, writing to
      events/enable no longer tries to incorrectly enable the function
      event and no longer warns.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      9b63776f
  2. 20 Apr 2012 (1 commit)
    • tracing: Fix stacktrace of latency tracers (irqsoff and friends) · db4c75cb
      Committed by Steven Rostedt
      While debugging a latency with someone on IRC (mirage335) on #linux-rt (OFTC),
      we discovered that the stacktrace output of the latency tracers
      (preemptirqsoff) was empty.
      
      This bug was caused by the creation of the dynamic length stack trace
      again (like commit 12b5da34 "tracing: Fix ent_size in trace output" was).
      
      This bug is caused by the latency tracers requiring the next event
      to determine the time between the current event and the next. But by
      grabbing the next event, the iter->ent_size is set to the next event
      instead of the current one. As the stacktrace event is the last event,
      this makes the ent_size zero and causes nothing to be printed for
      the stack trace. The dynamic stacktrace uses the ent_size to determine
      how much of the stack can be printed. The ent_size of zero means
      no stack.
      
      The simple fix is to save the iter->ent_size before finding the next event.
      
      Note, mirage335 asked to remain anonymous from LKML and git, so I will
      not add the Reported-by and Tested-by tags, even though he did report
      the issue and tested the fix.
      
      Cc: stable@vger.kernel.org # 3.1+
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      db4c75cb
  3. 17 Apr 2012 (1 commit)
    • tracing: Fix regression with tracing_on · 348f0fc2
      Committed by Steven Rostedt
      The change to make tracing_on affect only the ftrace ring buffer
      caused a bug where it won't affect any ring buffer. The problem was
      that the buffer of the trace_array was passed to the write function
      instead of the trace_array itself.
      
      The trace_array can change the buffer when running a latency tracer. If this
      happens, then the buffer being disabled may not be the buffer currently used
      by ftrace. This will cause the tracing_on file to become useless.
      
      The simple fix is to pass the trace_array to the write function instead of
      the buffer. Then the actual buffer may be changed.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      348f0fc2
  4. 14 Apr 2012 (1 commit)
  5. 12 Apr 2012 (1 commit)
  6. 06 Apr 2012 (1 commit)
    • simple_open: automatically convert to simple_open() · 234e3405
      Committed by Stephen Boyd
      Many users of debugfs copy the implementation of default_open() when
      they want to support a custom read/write function op.  This leads to a
      proliferation of the default_open() implementation across the entire
      tree.
      
      Now that the common implementation has been consolidated into libfs we
      can replace all the users of this function with simple_open().
      
      This replacement was done with the following semantic patch:
      
      <smpl>
      @ open @
      identifier open_f != simple_open;
      identifier i, f;
      @@
      -int open_f(struct inode *i, struct file *f)
      -{
      (
      -if (i->i_private)
      -f->private_data = i->i_private;
      |
      -f->private_data = i->i_private;
      )
      -return 0;
      -}
      
      @ has_open depends on open @
      identifier fops;
      identifier open.open_f;
      @@
      struct file_operations fops = {
      ...
      -.open = open_f,
      +.open = simple_open,
      ...
      };
      </smpl>
      
      [akpm@linux-foundation.org: checkpatch fixes]
      Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Julia Lawall <Julia.Lawall@lip6.fr>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      234e3405
  7. 28 Mar 2012 (1 commit)
    • tracing: Fix ent_size in trace output · 12b5da34
      Committed by Steven Rostedt
      When reading the trace file, the records of each of the per_cpu buffers
      are examined to find the next event to print out. At the point of looking
      at the event, the size of the event is recorded. But if the first event is
      chosen, the other events in the other CPU buffers will reset the event size
      that is stored in the iterator descriptor, causing the event size passed to
      the output functions to be incorrect.
      
      In most cases this is not a problem, but for the case of stack traces, it
      is. With the change to the stack tracing to record a dynamic number of
      back traces, the output depends on the size of the entry instead of the
      fixed 8 back traces. When the entry size is not correct, the back traces
      would not be fully printed.
      
      Note, reading from the per-cpu trace files was not affected.
      Reported-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: stable@vger.kernel.org
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      12b5da34
  8. 23 Mar 2012 (1 commit)
  9. 21 Mar 2012 (1 commit)
  10. 14 Mar 2012 (2 commits)
    • tracing: Fix build breakage without CONFIG_PERF_EVENTS · fa73dc94
      Committed by Mark Brown
      Today's -next fails to build for me:
      
        CC      kernel/trace/trace_export.o
      In file included from kernel/trace/trace_export.c:197:
      kernel/trace/trace_entries.h:58: error: 'perf_ftrace_event_register' undeclared here (not in a function)
      make[2]: *** [kernel/trace/trace_export.o] Error 1
      make[1]: *** [kernel/trace] Error 2
      make: *** [kernel] Error 2
      
      because as of ced390 (ftrace, perf: Add support to use function
      tracepoint in perf) perf_ftrace_event_register() is declared in
      trace.h only if CONFIG_PERF_EVENTS is enabled, but I don't have
      that set.
      
      Ensure that we always have a definition of perf_ftrace_event_register()
      by making the definition unconditional.
      
      Link: http://lkml.kernel.org/r/1330426967-17067-1-git-send-email-broonie@opensource.wolfsonmicro.com
      
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      fa73dc94
    • ftrace: Fix function_graph for archs that test ftrace_trace_function · db6544e0
      Committed by Rajesh Bhagat
      When CONFIG_DYNAMIC_FTRACE is not set, some archs (ARM) test
      the variable ftrace_trace_function to determine if it should
      call the function tracer. If it is not set to ftrace_stub, then
      it will call the function and return, and not call the function
      graph tracer.
      
      But some of these archs (ARM) do not have the assembly code
      to test if function tracing is enabled or not (quick stop of tracing)
      and it calls the helper routine ftrace_test_stop_func() instead.
      
      If function tracer is enabled and then disabled, the variable
      ftrace_trace_function is still set to the helper routine
      ftrace_test_stop_func(), and not to ftrace_stub. This will
      prevent the function graph tracer from ever running.
      
      Output before patch
      /debug/tracing # echo function > current_tracer
      /debug/tracing # echo function_graph > current_tracer
      /debug/tracing # cat trace
      
      Output after patch
      /debug/tracing # echo function > current_tracer
      /debug/tracing # echo function_graph > current_tracer
      /debug/tracing # cat trace
       0) ! 253.375 us |  }  /* irq_enter */
       0)              |  generic_handle_irq() {
       0)              |    handle_fasteoi_irq() {
       0)     9.208 us |      _raw_spin_lock();
       0)              |      handle_irq_event() {
       0)              |        handle_irq_event_percpu() {
      Signed-off-by: Rajesh Bhagat <rajesh.lnx@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      db6544e0
  11. 02 Mar 2012 (1 commit)
    • tracing: Keep NMI watchdog from triggering when dumping trace · b892e5c8
      Committed by Steven Rostedt
      As ftrace_dump() (called by ftrace_dump_on_oops) disables interrupts
      as it dumps its output to the console, it can keep interrupts disabled
      for long periods of time. This is likely to trigger the NMI watchdog,
      and it can disrupt the output of critical data.
      
      Add a touch_nmi_watchdog() to each event that is written to the screen
      to keep the NMI watchdog from affecting the output.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      b892e5c8
  12. 27 Feb 2012 (1 commit)
  13. 23 Feb 2012 (1 commit)
    • tracing/ring-buffer: Only have tracing_on disable tracing buffers · 499e5470
      Committed by Steven Rostedt
      As the ring-buffer code is being used by other facilities in the
      kernel, having the tracing_on file disable *all* buffers is not a
      desired effect. It should only disable the ftrace buffers that are
      being used.
      
      Move the code into the trace.c file and use the buffer disabling
      for tracing_on() and tracing_off(). This way only the ftrace buffers
      will be affected by them, and other kernel utilities will not be
      confused as to why their output suddenly stopped.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      499e5470
  14. 22 Feb 2012 (8 commits)
  15. 21 Feb 2012 (1 commit)
  16. 14 Feb 2012 (1 commit)
  17. 13 Feb 2012 (1 commit)
  18. 03 Feb 2012 (1 commit)
  19. 08 Jan 2012 (1 commit)
  20. 04 Jan 2012 (1 commit)
  21. 21 Dec 2011 (12 commits)
    • tracing: Factorize filter creation · 38b78eb8
      Committed by Tejun Heo
      There are four places where new filter for a given filter string is
      created, which involves several different steps.  This patch factors
      those steps into create_[system_]filter() functions which in turn make
      use of create_filter_{start|finish}() for common parts.
      
      The only functional change is that if replace_filter_string() is
      requested and fails, creation fails without any side effect instead of
      being ignored.
      
      Note that the system filter is now installed after the processing is
      complete, which makes freeing it before and then restoring the filter
      string on error unnecessary.
      
      -v2: Rebased to resolve conflict with 49aa2951 and updated both
           create_filter() functions to always set *filterp instead of
           requiring the caller to clear it to %NULL on entry.
      
      Link: http://lkml.kernel.org/r/1323988305-1469-2-git-send-email-tj@kernel.org
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      38b78eb8
    • tracing: Have stack tracing set filtered functions at boot · 762e1207
      Committed by Steven Rostedt
      Add stacktrace_filter= to the kernel command line that lets
      the user pick specific functions to check the stack on.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      762e1207
    • ftrace: Allow access to the boot time function enabling · 2a85a37f
      Committed by Steven Rostedt
      Change set_ftrace_early_filter() to ftrace_set_early_filter()
      and make it a global function. This will allow other subsystems
      in the kernel to be able to enable function tracing at start
      up and reuse the ftrace function parsing code.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      2a85a37f
    • tracing: Have stack_tracer use a separate list of functions · d2d45c7a
      Committed by Steven Rostedt
      The stack_tracer is used to look at every function and check
      if the current stack is bigger than the last recorded max stack size.
      When a new max is found, then it saves that stack off.
      
      Currently the stack tracer is limited by the global_ops of
      the function tracer. As the stack tracer has nothing to do with
      the ftrace function tracer, except that it uses it as its internal
      engine, the stack tracer should have its own list.
      
      A new file is added to the tracing debugfs directory called:
      
        stack_trace_filter
      
      that can be used to select which functions you want to check the stack
      on.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      d2d45c7a
    • ftrace: Decouple hash items from showing filtered functions · 69a3083c
      Committed by Steven Rostedt
      The set_ftrace_filter shows "hashed" functions, which are functions
      that are added with operations to them (like traceon and traceoff).
      
      As other subsystems may be able to show what functions they are
      using for function tracing, the hash items should no longer be
      shown just because the FILTER flag is set, as they have nothing
      to do with other subsystems' filters.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      69a3083c
    • ftrace: Allow other users of function tracing to use the output listing · fc13cb0c
      Committed by Steven Rostedt
      The function tracer is set up to allow any other subsystem (like perf)
      to use it. Ftrace already has a way to list what functions are enabled
      by the global_ops. It would be very helpful to let other users of
      the function tracer to be able to use the same code.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      fc13cb0c
    • ftrace: Create ftrace_hash_empty() helper routine · 06a51d93
      Committed by Steven Rostedt
      There are two types of hashes in the ftrace_ops; one type
      is the filter_hash and the other is the notrace_hash. Either
      one may be null, meaning it has no elements. But when elements
      are added, the hash is allocated.
      
      Throughout the code, a check needs to be made to see if a hash
      exists or the hash has elements, but the check for whether the hash
      exists is usually missing, opening up a possible NULL pointer
      dereference.
      
      Add a helper routine called "ftrace_hash_empty()" that returns
      true if the hash doesn't exist or its count is zero, as the two
      cases mean the same thing.
      Last-bug-reported-by: Jiri Olsa <jolsa@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      06a51d93
    • ftrace: Fix ftrace hash record update with notrace · c842e975
      Committed by Steven Rostedt
      When disabling the "notrace" records, that means we want to trace them.
      If the notrace_hash is zero, it means that we want to trace all
      records. But to disable a zero notrace_hash means nothing.
      
      The check for the notrace_hash count was incorrect with:
      
      	if (hash && !hash->count)
      		return;
      
      With the correct comment above it that states that we do nothing
      if the notrace_hash has zero count. But !hash also means that
      the notrace hash has zero count. I think this was done to
      protect against dereferencing NULL. But if !hash is true, then
      we go through the following loop without doing a single thing.
      
      Fix it to:
      
      	if (!hash || !hash->count)
      		return;
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      c842e975
    • ftrace: Use bsearch to find record ip · 5855fead
      Committed by Steven Rostedt
      Now that each set of pages in the function list is sorted by
      ip, we can use bsearch to find a record within each set of pages.
      This speeds up the ftrace_location() function by orders of magnitude.
      
      For archs (like x86) that need to add a breakpoint at every function
      that will be converted from a nop to a callback and vice versa,
      the breakpoint callback needs to know if the breakpoint was for
      ftrace or not. It requires finding the breakpoint ip within the
      records. Doing a linear search is extremely inefficient. It is
      a must to be able to do a fast binary search to find these locations.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      5855fead
    • ftrace: Sort the mcount records on each page · 68950619
      Committed by Steven Rostedt
      Sort records by the ip locations of the ftrace mcount calls on each
      of the sets of pages in the function list. This helps in localizing
      cache usage when updating the function locations, and also gives us
      the ability to quickly find an ip location in the list.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      68950619
    • ftrace: Replace record newlist with record page list · 85ae32ae
      Committed by Steven Rostedt
      As new functions come in to be initialized from mcount to nop,
      they are done by groups of pages, whether in the core kernel
      or a module. There's no need to keep track of these on a
      per-record basis.
      
      At startup, and as any module is loaded, the functions to be
      traced are stored in a group of pages and added to the function
      list at the end. We just need to keep a pointer to the first
      page of the list that was added, and use that to know where to
      start on the list for initializing functions.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      85ae32ae
    • ftrace: Allocate the mcount record pages as groups · a7900875
      Committed by Steven Rostedt
      Allocate the mcount record pages as a group of pages as big
      as can be allocated and waste no more than a single page.
      
      Grouping the mcount pages as much as possible helps with cache
      locality, as we do not need to redirect with descriptors as we
      cross from page to page. It also allows us to do more with the
      records later on (sort them with bigger benefits).
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      a7900875