1. 05 Dec 2008 (1 commit)
  2. 04 Dec 2008 (6 commits)
    • ftrace: avoid duplicated function when writing set_graph_function · faec2ec5
      Authored by Liming Wang
      Impact: fix a bug in function filter setting
      
      When writing a function to set_graph_function, check whether it already
      exists in set_graph_function, to avoid adding a duplicate.
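      
      A minimal sketch of the idea (an illustrative helper, not the exact kernel code):
      
        /*
         * Sketch only: scan the existing entries before appending a new one
         * to the graph-function table.
         */
        static int graph_add_unique(unsigned long *table, int *count, int size,
                                    unsigned long ip)
        {
                int i;
      
                for (i = 0; i < *count; i++) {
                        if (table[i] == ip)
                                return 0;       /* already present: avoid duplicating */
                }
                if (*count >= size)
                        return -ENOSPC;         /* table is full */
                table[(*count)++] = ip;
                return 1;                       /* newly added */
        }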
      Signed-off-by: Liming Wang <liming.wang@windriver.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      faec2ec5
    • ftrace: add ability to only trace swapper tasks · e32d8956
      Authored by Steven Rostedt
      Impact: new feature
      
      This patch lets the swapper tasks of all CPUs be filtered by the
      set_ftrace_pid file.
      
      If '0' is echoed into this file, then all the idle tasks (aka swapper)
      are flagged to be traced.  This affects the idle task of every CPU.
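      
      Roughly, flagging the per-CPU idle tasks could look like this (helper
      names such as set_tsk_trace_trace() are assumptions that mirror the
      description, not quoted from the patch):
      
        /* Sketch: mark the idle (swapper) task of every online CPU for tracing. */
        static void ftrace_flag_swapper_tasks(void)
        {
                struct task_struct *idle;
                int cpu;
      
                get_online_cpus();
                for_each_online_cpu(cpu) {
                        idle = idle_task(cpu);          /* the per-CPU swapper task */
                        set_tsk_trace_trace(idle);      /* assumed per-task flag setter */
                }
                put_online_cpus();
        }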
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e32d8956
    • ftrace: use struct pid · 978f3a45
      Authored by Steven Rostedt
      Impact: clean up, extend PID filtering to PID namespaces
      
      Eric Biederman suggested using struct pid for filtering on
      pids in the kernel. This patch is based on a demonstration
      implementation that Eric sent me in an email.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      978f3a45
    • ftrace: trace single pid for function graph tracer · 804a6851
      Authored by Steven Rostedt
      Impact: New feature
      
      This patch makes the changes to set_ftrace_pid apply to the function
      graph tracer.
      
        # echo $$ > /debugfs/tracing/set_ftrace_pid
        # echo function_graph > /debugfs/tracing/current_tracer
      
      This will cause only the current task to be traced. Note that the trace flags
      are also inherited by child processes, so the children of the shell
      will also be traced.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      804a6851
    • ftrace: use task struct trace flag to filter on pid · 0ef8cde5
      Authored by Steven Rostedt
      Impact: clean up
      
      Use the new task struct trace flags to determine if a process should be
      traced or not.
      
      Note: this moves the pid search to the slow path of setting
      the pid field. This needs to be converted to the pid namespace.
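      
      A sketch of the fast path this enables (test_tsk_trace_trace() and the
      callback name are assumptions that mirror the description):
      
        /* Sketch: the pid search happens once when the pid is set; the hot
         * path only has to test the per-task flag on 'current'. */
        static void ftrace_pid_func(unsigned long ip, unsigned long parent_ip)
        {
                if (!test_tsk_trace_trace(current))
                        return;                 /* this task is not flagged for tracing */
      
                ftrace_pid_function(ip, parent_ip);     /* assumed real trace callback */
        }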
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      0ef8cde5
    • ftrace: graph of a single function · ea4e2bc4
      Authored by Steven Rostedt
      This patch adds the file:
      
         /debugfs/tracing/set_graph_function
      
      which can be used along with the function graph tracer.
      
      When this file is empty, the function graph tracer will act as
      usual. When the file has a function in it, the function graph
      tracer will only trace that function.
      
      For example:
      
       # echo blk_unplug > /debugfs/tracing/set_graph_function
       # cat /debugfs/tracing/trace
       [...]
       ------------------------------------------
       | 2)  make-19003  =>  kjournald-2219
       ------------------------------------------
      
       2)               |  blk_unplug() {
       2)               |    dm_unplug_all() {
       2)               |      dm_get_table() {
       2)      1.381 us |        _read_lock();
       2)      0.911 us |        dm_table_get();
       2)      1. 76 us |        _read_unlock();
       2) +   12.912 us |      }
       2)               |      dm_table_unplug_all() {
       2)               |        blk_unplug() {
       2)      0.778 us |          generic_unplug_device();
       2)      2.409 us |        }
       2)      5.992 us |      }
       2)      0.813 us |      dm_table_put();
       2) +   29. 90 us |    }
       2) +   34.532 us |  }
      
      You can add up to 32 functions into this file. Currently we limit it
      to 32, but this may change with later improvements.
      
      To add another function, use the append operator '>>':
      
        # echo sys_read >> /debugfs/tracing/set_graph_function
        # cat /debugfs/tracing/set_graph_function
        blk_unplug
        sys_read
      
      Using '>' will clear out the existing entries and write anew:
      
        # echo sys_write > /debug/tracing/set_graph_function
        # cat /debug/tracing/set_graph_function
        sys_write
      
      Note, if the function graph tracer is running while you do this, the small
      window between clearing the file and updating it will cause the graph to
      record all functions. This should not be an issue, because once the
      filter is set, only those functions will be recorded from then on.
      If you need to record only a particular function, set this
      file before starting the function graph tracer. In the future
      this side effect may be corrected.
      
      The set_graph_function file is similar to set_ftrace_filter, but
      it does not take wildcards, nor does it allow more than one
      function to be set with a single write. There is no technical reason why
      this is the case; I just have not had the time to implement that yet.
      
      Note, dynamic ftrace must be enabled for this to appear because it
      uses the dynamic ftrace records to match the name to the mcount
      call sites.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      ea4e2bc4
  3. 03 Dec 2008 (2 commits)
    • ftrace: function graph return for function entry · e49dc19c
      Authored by Steven Rostedt
      Impact: feature, let entry function decide to trace or not
      
      This patch lets the graph tracer entry function decide if the tracing
      should be done at the end as well. This requires all function graph
      entry functions to return 1 if the call should be traced, or 0 if the return
      should not be traced.
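      
      A sketch of an entry handler under the new convention (the filtering
      predicate is hypothetical):
      
        /* Sketch: return nonzero to have the matching return traced,
         * zero to skip it. */
        static int my_graph_entry(struct ftrace_graph_ent *trace)
        {
                if (!my_func_is_interesting(trace->func))       /* hypothetical filter */
                        return 0;       /* do not record this call's return */
      
                return 1;               /* record both entry and return */
        }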
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e49dc19c
    • ftrace: add ftrace_graph_stop() · 14a866c5
      Authored by Steven Rostedt
      Impact: new ftrace_graph_stop function
      
      While developing more features of function graph, I hit a bug that
      caused the WARN_ON to trigger in the prepare_ftrace_return function.
      It was hard to find out what was happening because the
      bug would not print anything; it would just cause a hard lockup or reboot.
      The reason is that it is not safe to call printk from this function.
      
      Looking further, I also found that it calls unregister_ftrace_graph,
      which grabs a mutex and calls kstop_machine. This would definitely
      lock the box up if it were to trigger.
      
      This patch adds a fast and safe ftrace_graph_stop() which will
      stop the function tracer. Then it is safe to call the WARN_ON.
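      
      The intended usage pattern, roughly (the surrounding function is
      illustrative):
      
        /* Sketch: in a context where printk and mutexes are unsafe, such as
         * prepare_ftrace_return(), stop the graph tracer with a plain flag
         * write first; only then is it safe to warn. */
        static void notrace bail_out_on_fault(int faulted)
        {
                if (unlikely(faulted)) {
                        ftrace_graph_stop();    /* cheap: no locks, no kstop_machine */
                        WARN_ON(1);             /* safe now that the tracer is stopped */
                }
        }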
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      14a866c5
  4. 02 Dec 2008 (2 commits)
    • tracing/function-graph-tracer: support for x86-64 · 48d68b20
      Authored by Frederic Weisbecker
      Impact: extend and enable the function graph tracer to 64-bit x86
      
      This patch implements support for the function graph tracer on x86-64.
      Both static and dynamic tracing are supported.
      
      This causes some small CPP-conditional asm in arch/x86/kernel/ftrace.c. I
      wanted to use probe_kernel_read/write to make the return address
      saving/patching code more generic, but it causes tracing recursion.
      
      It would perhaps be useful to implement a notrace version of these
      functions for other arch ports.
      
      Note that arch/x86/process_64.c is not traced, as on x86-32. I first
      thought __switch_to() was responsible for crashes during tracing because I
      believed the current task was changed inside it, but that's actually not the
      case (actually yes, but not the "current" pointer).
      
      So I will have to investigate to find the functions that cause harm here, so
      that tracing of the other functions inside can be enabled (but there is no
      issue at this time, while process_64.c stays out of the -pg flags).
      
      A small possible race condition is also fixed in this patch. When the
      tracer allocates a return stack dynamically, the current depth was
      initialized only after the allocation, not before. An interrupt could occur
      at that time and, after seeing that the return stack is allocated, the
      tracer could try to trace it with a random, uninitialized depth. This is a
      precaution, even though I have not actually hit the problem.
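      
      The ordering fix, sketched (field names follow the description; the
      barrier choice is an assumption):
      
        /* Sketch: publish an initialized depth before the return-stack pointer
         * becomes visible, so an interrupt cannot see an allocated stack that
         * still has a random depth. */
        static void graph_init_task_stack(struct task_struct *t,
                                          struct ftrace_ret_stack *ret_stack)
        {
                t->curr_ret_stack = -1;         /* initialize the depth first... */
                smp_wmb();                      /* ...and order it before the pointer */
                t->ret_stack = ret_stack;       /* only now does the stack look allocated */
        }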
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Tim Bird <tim.bird@am.sony.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      48d68b20
    • function trace: fix a bug of single thread function trace · 66eafebc
      Authored by Liming Wang
      Impact: fix "no output from tracer" bug caused by ftrace_update_pid_func()
      
      When disabling single-thread function tracing using
      "echo -1 > set_ftrace_pid", the tracer has to restore the
      original trace function; otherwise normal
      function tracing will not work well.
      
      Without this commit, something like the following happens:
      
      	$ ps |grep 850
      	  850 root      2556 S    -/bin/sh
      	$ echo 850 > /debug/tracing/set_ftrace_pid
      	$ echo function > /debug/tracing/current_tracer
      	$ echo 1 > /debug/tracing/tracing_enabled
      	$ sleep 1
      	$ echo 0 > /debug/tracing/tracing_enabled
      	$ cat /debug/tracing/trace_pipe |wc -l
      	59704
      	$ echo -1 > /debug/tracing/set_ftrace_pid
      	$ echo 1 > /debug/tracing/tracing_enabled
      	$ sleep 1
      	$ echo 0 > /debug/tracing/tracing_enabled
      	$ more /debug/tracing/trace_pipe
      		<====== nothing output now!
      			it should output trace record.
      Signed-off-by: Liming Wang <liming.wang@windriver.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      66eafebc
  5. 28 Nov 2008 (1 commit)
  6. 26 Nov 2008 (5 commits)
    • ftrace: let function tracing and function return run together · e53a6319
      Authored by Steven Rostedt
      Impact: feature
      
      This patch enables function tracing and function return to run together.
      I've tested this by enabling the stack tracer and return tracer, where
      both the function entry and function return are used together with
      dynamic ftrace.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e53a6319
    • ftrace: use code patching for ftrace graph tracer · 5a45cfe1
      Authored by Steven Rostedt
      Impact: more efficient code for ftrace graph tracer
      
      This patch uses dynamic code patching, when available, to patch
      the function graph code into the kernel.
      
      This will ease the way for letting both function tracing
      and function graph tracing run together.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      5a45cfe1
    • ftrace: add function tracing to single thread · df4fc315
      Authored by Steven Rostedt
      Impact: feature to function trace a single thread
      
      This patch adds the ability to function trace a single thread.
      The file:
      
        /debugfs/tracing/set_ftrace_pid
      
      contains the pid to trace. Valid pids are any positive integer.
      Writing any negative number to this file will disable pid
      tracing, and the function tracer will go back to tracing all
      threads.
      
      This feature works with both static and dynamic function tracing.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      df4fc315
    • tracing/function-return-tracer: set a more human readable output · 287b6e68
      Authored by Frederic Weisbecker
      Impact: feature
      
      This patch sets a C-like output for the function graph tracing.
      To this end, we now call two handlers for each function: one on entry
      and another on return. This way we can draw a well-ordered call stack.
      
      The pid of the previous trace is loosely stored to be compared against
      the pid of the current trace to see if there was a context switch.
      
      Without this little feature, the call tree would seem broken at
      some locations.
      We could use the sched_tracer to capture these sched_events, but this
      way of processing is much simpler.
      
      Two spaces have been chosen for indentation so that deep call chains still
      fit on the screen. The execution time in nanoseconds is printed just after the
      closing braces; it seems easier this way to find the corresponding function.
      If the time were printed as a first column, it would not be so easy to
      find the corresponding function when it is called at a deep depth.
      
      I plan to output the return value, but on a 32-bit CPU the return value
      can be 32 or 64 bits, and it is difficult to guess which case we are in.
      I don't know what the better solution would be on x86-32: only print
      eax (the low part) or edx (the high part) as well.
      
      Actually it's the same problem when a function returns an 8-bit value; the
      high part of eax could contain junk values...
      
      Here is an example of trace:
      
      sys_read() {
        fget_light() {
        } 526
        vfs_read() {
          rw_verify_area() {
            security_file_permission() {
              cap_file_permission() {
              } 519
            } 1564
          } 2640
          do_sync_read() {
            pipe_read() {
              __might_sleep() {
              } 511
              pipe_wait() {
                prepare_to_wait() {
                } 760
                deactivate_task() {
                  dequeue_task() {
                    dequeue_task_fair() {
                      dequeue_entity() {
                        update_curr() {
                          update_min_vruntime() {
                          } 504
                        } 1587
                        clear_buddies() {
                        } 512
                        add_cfs_task_weight() {
                        } 519
                        update_min_vruntime() {
                        } 511
                      } 5602
                      dequeue_entity() {
                        update_curr() {
                          update_min_vruntime() {
                          } 496
                        } 1631
                        clear_buddies() {
                        } 496
                        update_min_vruntime() {
                        } 527
                      } 4580
                      hrtick_update() {
                        hrtick_start_fair() {
                        } 488
                      } 1489
                    } 13700
                  } 14949
                } 16016
                msecs_to_jiffies() {
                } 496
                put_prev_task_fair() {
                } 504
                pick_next_task_fair() {
                } 489
                pick_next_task_rt() {
                } 496
                pick_next_task_fair() {
                } 489
                pick_next_task_idle() {
                } 489
      
      ------------8<---------- thread 4 ------------8<----------
      
      finish_task_switch() {
      } 1203
      do_softirq() {
        __do_softirq() {
          __local_bh_disable() {
          } 669
          rcu_process_callbacks() {
            __rcu_process_callbacks() {
              cpu_quiet() {
                rcu_start_batch() {
                } 503
              } 1647
            } 3128
            __rcu_process_callbacks() {
            } 542
          } 5362
          _local_bh_enable() {
          } 587
        } 8880
      } 9986
      kthread_should_stop() {
      } 669
      deactivate_task() {
        dequeue_task() {
          dequeue_task_fair() {
            dequeue_entity() {
              update_curr() {
                calc_delta_mine() {
                } 511
                update_min_vruntime() {
                } 511
              } 2813
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      287b6e68
    • tracing/function-return-tracer: change the name into function-graph-tracer · fb52607a
      Authored by Frederic Weisbecker
      Impact: cleanup
      
      This patch changes the name of the "return function tracer" to
      function-graph-tracer, which is a more suitable name for a tracer
      that makes it possible to retrieve the ordered call stack during
      the code flow.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      fb52607a
  7. 24 Nov 2008 (1 commit)
    • tracing/function-return-tracer: don't trace kfree while it frees the return stack · eae849ca
      Authored by Frederic Weisbecker
      Impact: fix a crash
      
      When killing the cat process, I sometimes got the following (but rare)
      crash:
      
      [   65.689027] Pid: 2969, comm: cat Not tainted (2.6.28-rc6-tip #83) AMILO Li 2727
      [   65.689027] EIP: 0060:[<00000000>] EFLAGS: 00010082 CPU: 1
      [   65.689027] EIP is at 0x0
      [   65.689027] EAX: 00000000 EBX: f66cd780 ECX: c019a64a EDX: f66cd780
      [   65.689027] ESI: 00000286 EDI: f66cd780 EBP: f630be2c ESP: f630be24
      [   65.689027]  DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0068
      [   65.689027] Process cat (pid: 2969, ti=f630a000 task=f66cd780 task.ti=f630a000)
      [   65.689027] Stack:
      [   65.689027]  00000012 f630bd54 f630be7c c012c853 00000000 c0133cc9 f66cda54 f630be5c
      [   65.689027]  f630be68 f66cda54 f66cd88c f66cd878 f7070000 00000001 f630be90 c0135dbc
      [   65.689027]  f614a614 f630be68 f630be68 f65ba200 00000002 f630bf10 f630be90 c012cad6
      [   65.689027] Call Trace:
      [   65.689027]  [<c012c853>] ? do_exit+0x603/0x850
      [   65.689027]  [<c0133cc9>] ? next_signal+0x9/0x40
      [   65.689027]  [<c0135dbc>] ? dequeue_signal+0x8c/0x180
      [   65.689027]  [<c012cad6>] ? do_group_exit+0x36/0x90
      [   65.689027]  [<c013709c>] ? get_signal_to_deliver+0x20c/0x390
      [   65.689027]  [<c0102b69>] ? do_notify_resume+0x99/0x8b0
      [   65.689027]  [<c02e6d1a>] ? tty_ldisc_deref+0x5a/0x80
      [   65.689027]  [<c014db9b>] ? trace_hardirqs_on+0xb/0x10
      [   65.689027]  [<c02e6d1a>] ? tty_ldisc_deref+0x5a/0x80
      [   65.689027]  [<c02e39b0>] ? n_tty_write+0x0/0x340
      [   65.689027]  [<c02e1812>] ? redirected_tty_write+0x82/0x90
      [   65.689027]  [<c019ee99>] ? vfs_write+0x99/0xd0
      [   65.689027]  [<c02e1790>] ? redirected_tty_write+0x0/0x90
      [   65.689027]  [<c019f342>] ? sys_write+0x42/0x70
      [   65.689027]  [<c01035ca>] ? work_notifysig+0x13/0x19
      [   65.689027] Code:  Bad EIP value.
      [   65.689027] EIP: [<00000000>] 0x0 SS:ESP 0068:f630be24
      
      This is because, on do_exit(), kfree is called to free the return-address
      stack, but kfree itself is traced and stores its return address on that
      stack. This patch fixes it.
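      
      The fix amounts to something like this (the function name is
      illustrative):
      
        /* Sketch: detach the return stack from the task before freeing it, so
         * the traced kfree() cannot push its own return address onto the very
         * stack that is being freed. */
        void ftrace_graph_exit_task(struct task_struct *t)
        {
                struct ftrace_ret_stack *ret_stack = t->ret_stack;
      
                t->ret_stack = NULL;    /* stop the tracer from using it... */
                barrier();              /* ...before the memory goes away */
      
                kfree(ret_stack);
        }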
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      eae849ca
  8. 23 Nov 2008 (1 commit)
  9. 19 Nov 2008 (3 commits)
    • ftrace: fix dyn ftrace filter selection · 32464779
      Authored by Steven Rostedt
      Impact: clean up and fix for dyn ftrace filter selection
      
      The previous logic for the dynamic ftrace selection of which functions to
      enable or disable was complex and incorrect. This patch simplifies
      the code and corrects the usage. This simplification also makes the
      code more robust.
      
      Here is the correct logic:
      
        Given a function that can be traced by dynamic ftrace:
      
        If the function is not to be traced, disable it if it was enabled.
        (this is if the function is in the set_ftrace_notrace file)
      
        (the filter is on if any functions exist in the set_ftrace_filter file)
      
        If the filter is on, and we are enabling functions:
          If the function is in set_ftrace_filter, enable it if it is not
            already enabled.
          If the function is not in set_ftrace_filter, disable it if it is not
            already disabled.
      
        Otherwise, if the filter is off and we are enabling function tracing:
          Enable the function if it is not already enabled.
      
        Otherwise, if we are disabling function tracing:
          Disable the function if it is not already disabled.
      
      This code now sets or clears the ENABLED flag in the record, and at the
      end it will enable the function if the flag is set, or disable the function
      if the flag is cleared.
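      
      A compact sketch of that decision (flag names follow the dyn_ftrace
      record flags; treat it as pseudocode rather than the exact patch):
      
        /* Sketch: decide whether a record should end up enabled; the caller
         * compares this with the ENABLED flag and patches in "call" or "nop"
         * only when the state actually changes. */
        static bool ftrace_want_enabled(struct dyn_ftrace *rec, bool enabling,
                                        bool filter_on)
        {
                if (rec->flags & FTRACE_FL_NOTRACE)
                        return false;           /* set_ftrace_notrace always wins */
      
                if (!enabling)
                        return false;           /* disabling tracing: everything off */
      
                if (filter_on)
                        return rec->flags & FTRACE_FL_FILTER;   /* only filtered funcs */
      
                return true;                    /* no filter: enable everything */
        }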
      
      The parameters of the function that does the above logic are also
      simplified. Instead of passing in confusing "new" and "old" values, which
      might be swapped if the "enabled" flag is not set (the old logic
      even had one of them always NULL, to be filled in later), the new
      logic simply passes in one parameter called "nop". A "call" is calculated
      in the code, and at the end of the logic, when we know we need to either
      disable or enable the function, we can then use the "nop" and "call"
      properly.
      
      This code is more robust than the previous version.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      32464779
    • ftrace: make filtered functions effective on setting · 82043278
      Authored by Steven Rostedt
      Impact: fix filter selection to apply when set
      
      It can be confusing when the set_filter_functions file is set (or cleared)
      and the functions being recorded by the dynamic tracer do not
      match.
      
      This patch causes the code to be updated if the function tracer is
      enabled and the filter is changed.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      82043278
    • ftrace: fix set_ftrace_filter · f10ed36e
      Authored by Steven Rostedt
      Impact: fix the output of set_ftrace_filter
      
      The commit "ftrace: do not show freed records in
                   available_filter_functions"
      
      removed a bit too much from the set_ftrace_filter code; now we see
      all functions in the set_ftrace_filter file even when we set a filter.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f10ed36e
  10. 16 Nov 2008 (9 commits)
    • function tracing: fix wrong pos computing when read buffer has been fulfilled · 5821e1b7
      Authored by walimis
      Impact: make output of available_filter_functions complete
      
      phenomenon:
      
      The first value of dyn_ftrace_total_info is not equal to
      `cat available_filter_functions | wc -l`, but they should be equal.
      
      root cause:
      
      When printing functions with seq_printf in t_show, if the read buffer
      is overflowed by the current function record, then this function
      won't be printed to user space through the read buffer; it will
      just be dropped. So we can't see this function in the output.
      
      In other words, each time the last function to fill the read buffer
      overflows it, that function will be dropped.
      
      This also applies to set_ftrace_filter if set_ftrace_filter has
      more bytes than the read buffer.
      
      fix:
      
      By checking the return value of seq_printf: if it is less than 0, we know
      this function was not printed. Then we decrease the position to force
      this function to be printed next time, into the next read buffer.
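      
      A sketch of the idea in the seq_file show path (the iterator fields and
      the seq_printf return-value handling are simplified assumptions):
      
        /* Sketch: if seq_printf() could not fit the record into the read
         * buffer, step the position back so the same record is emitted again
         * on the next read. */
        static int t_show(struct seq_file *m, void *v)
        {
                struct ftrace_iterator *iter = v;       /* assumed layout */
                int ret;
      
                ret = seq_printf(m, "%s\n", iter->func_name);   /* assumed field */
                if (ret < 0)
                        iter->pos--;    /* buffer overflowed: revisit this record */
      
                return 0;
        }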
      
      Another little fix is to show the correct count of allocated pages.
      Signed-off-by: walimis <walimisdev@gmail.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      5821e1b7
    • tracing/function-return-tracer: support for dynamic ftrace on function return tracer · e7d3737e
      Authored by Frederic Weisbecker
      This patch adds support for dynamic tracing on the function return tracer.
      The whole difference from normal dynamic function tracing is that we don't need
      to hook on a particular callback. The only thing we want is to nop or set
      dynamically the calls to ftrace_caller (which is ftrace_return_caller here).
      
      Some safety checks ensure that we are not trying to launch dynamic tracing for
      return tracing while normal function tracing is already running.
      
      An example of trace with getnstimeofday set as a filter:
      
      ktime_get_ts+0x22/0x50 -> getnstimeofday (2283 ns)
      ktime_get_ts+0x22/0x50 -> getnstimeofday (1396 ns)
      ktime_get_ts+0x22/0x50 -> getnstimeofday (1382 ns)
      ktime_get_ts+0x22/0x50 -> getnstimeofday (1825 ns)
      ktime_get_ts+0x22/0x50 -> getnstimeofday (1426 ns)
      ktime_get_ts+0x22/0x50 -> getnstimeofday (1464 ns)
      ktime_get_ts+0x22/0x50 -> getnstimeofday (1524 ns)
      ktime_get_ts+0x22/0x50 -> getnstimeofday (1382 ns)
      ktime_get_ts+0x22/0x50 -> getnstimeofday (1382 ns)
      ktime_get_ts+0x22/0x50 -> getnstimeofday (1434 ns)
      ktime_get_ts+0x22/0x50 -> getnstimeofday (1464 ns)
      ktime_get_ts+0x22/0x50 -> getnstimeofday (1502 ns)
      ktime_get_ts+0x22/0x50 -> getnstimeofday (1404 ns)
      ktime_get_ts+0x22/0x50 -> getnstimeofday (1397 ns)
      ktime_get_ts+0x22/0x50 -> getnstimeofday (1051 ns)
      ktime_get_ts+0x22/0x50 -> getnstimeofday (1314 ns)
      ktime_get_ts+0x22/0x50 -> getnstimeofday (1344 ns)
      ktime_get_ts+0x22/0x50 -> getnstimeofday (1163 ns)
      ktime_get_ts+0x22/0x50 -> getnstimeofday (1390 ns)
      ktime_get_ts+0x22/0x50 -> getnstimeofday (1374 ns)
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e7d3737e
    • ftrace: make filtered functions effective on setting · ee02a2e5
      Authored by Steven Rostedt
      Impact: set filtered functions at time the filter is set
      
      It can be confusing when the set_filter_functions file is set (or cleared)
      and the functions being recorded by the dynamic tracer do not
      match.
      
      This patch causes the code to be updated if the function tracer is
      enabled and the filter is changed.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      ee02a2e5
    • ftrace: fix dyn ftrace filter · 982c350b
      Authored by Steven Rostedt
      Impact: correct implementation of dyn ftrace filter
      
      The old decisions made by the filter algorithm were complex and incorrect.
      This led to inconsistent enabling or disabling of functions when
      the filter was used.
      
      This patch simplifies that code and in doing so, corrects the usage
      of the filters.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      982c350b
    • ftrace: allow NULL pointers in mcount_loc · 20e5227e
      Authored by Steven Rostedt
      Impact: make ftrace_convert_nops() more permissive
      
      Due to the way different architecture linkers combine the data sections
      of mcount_loc (the section that lists all the locations that
      call mcount), there may be zeros added to that section. This is usually
      due to strange alignments that the linker performs, padding with zeros.
      
      This patch makes the conversion-to-nops code skip any pointer in
      the mcount_loc section that is NULL.
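      
      Roughly, the conversion loop becomes tolerant of padding (a sketch, with
      approximate names):
      
        /* Sketch: walk the mcount_loc table and skip zero entries, which are
         * just linker padding rather than real mcount call sites. */
        static void ftrace_convert_nops(unsigned long *start, unsigned long *end)
        {
                unsigned long *p = start;
                unsigned long addr;
      
                while (p < end) {
                        addr = *p++;
                        if (!addr)
                                continue;       /* linker padding: not a call site */
                        ftrace_record_ip(addr);
                }
        }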
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      20e5227e
    • ftrace: pass module struct to arch dynamic ftrace functions · 31e88909
      Authored by Steven Rostedt
      Impact: allow archs more flexibility on dynamic ftrace implementations
      
      Dynamic ftrace has largely been developed on x86. Since x86 does not
      have the same limitations as other architectures, the ftrace interaction
      between the generic code and the architecture-specific code was not
      flexible enough to handle some of the issues that other architectures
      have.
      
      Most notably, module trampolines. Due to the limited branch distance
      that some architectures have when calling core kernel code from modules,
      the module load code must create a trampoline that makes the
      larger jump into core kernel code.
      
      The problem arises when this happens to a call to mcount. Ftrace checks
      all code before modifying it and makes sure the current code is what
      it expects. Right now, there is not enough information to handle modifying
      module trampolines.
      
      This patch changes the API between the generic dynamic ftrace code and
      the arch-dependent code. There are now two functions for modifying code:
      
        ftrace_make_nop(mod, rec, addr) - convert the code at rec->ip into
             a nop, where the original text is calling addr. (mod is the
             module struct if called by module init)
      
        ftrace_make_caller(rec, addr) - convert the code rec->ip that should
             be a nop into a caller to addr.
      
      The record "rec" now has a new field called "arch" where the architecture
      can add any special attributes to each call site record.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      31e88909
    • ftrace: do not process freed records · 918c1154
      Authored by Steven Rostedt
      Impact: keep from converting freed records
      
      When the tracer is started or stopped, it converts all code pointed
      to by the saved records into callers to ftrace or nops. When modules
      are unloaded, their records are freed, but they still exist within
      the record pages.
      
      This patch changes the code to skip over freed records.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      918c1154
    • ftrace: disable ftrace on anomalies in trace start and stop · b17e8a37
      Authored by Steven Rostedt
      Impact: robust feature to disable ftrace on start or stop tracing on error
      
      Currently only the initial conversion to nops will disable ftrace
      on an anomaly. But if an anomaly happens when starting or stopping the
      tracer, it will fail silently.
      
      This patch adds a check there too, to disable ftrace and warn if the
      conversion fails.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b17e8a37
    • ftrace: remove condition from ftrace_record_ip · f3c7ac40
      Authored by Steven Rostedt
      Impact: let module functions be recorded when dyn ftrace not enabled
      
      When dynamic ftrace had a daemon and a hash to record the locations
      of mcount callers at run time, the recording needed to stop when
      ftrace was disabled. But now that the recording is done at compile time
      and ftrace_record_ip is only called at boot and when a module
      is loaded, we no longer need to check whether ftrace_enabled is set.
      In fact, this breaks module loading if it is not set, because we would skip
      over module functions.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f3c7ac40
  11. 14 Nov 2008 (1 commit)
  12. 12 Nov 2008 (1 commit)
    • ring-buffer: buffer record on/off switch · a3583244
      Authored by Steven Rostedt
      Impact: enable/disable ring buffer recording API added
      
      Several kernel developers have requested that there be a way to stop
      recording into the ring buffers with a simple switch that can also
      be enabled from userspace. This patch adds a new kernel API to the
      ring buffers called:
      
       tracing_on()
       tracing_off()
      
      When tracing_off() is called, none of the ring buffers will be able to record
      into their buffers.
      
      tracing_on() will enable the ring buffers again.
      
      These two act like an on/off switch. That is, there is no counting of the
      number of times tracing_off or tracing_on has been called.
      
      A new file is added to the debugfs/tracing directory called
      
        tracing_on
      
      This allows for userspace applications to also flip the switch.
      
        echo 0 > debugfs/tracing/tracing_on
      
      disables the tracing.
      
        echo 1 > /debugfs/tracing/tracing_on
      
      enables it.
      
      Note, this does not disable or enable any tracers. It only sets or clears
      a flag that needs to be set in order for the ring buffers to write to
      their buffers. It is a global flag, and affects all ring buffers.
      
      The buffers start out with tracing_on enabled.
      
      There are now three flags that control recording into the buffers:
      
       tracing_on: which affects all ring buffer tracers.
      
       buffer->record_disabled: which affects an allocated buffer, which may be set
           if an anomaly is detected, and tracing is disabled.
      
       cpu_buffer->record_disabled: which is set by tracing_stop() or if an
           anomaly is detected. tracing_start can not reenable this if
           an anomaly occurred.
      
      The userspace debugfs/tracing/tracing_enabled is implemented with
      tracing_stop(), but the user space code can not re-enable it if the kernel
      called tracing_stop().
      
      Userspace can enable tracing_on even if the kernel disabled it.
      It is just a switch used to stop tracing if a condition was hit.
      tracing_on is not for protecting critical areas in the kernel, nor is
      it for stopping tracing if an anomaly occurred, because userspace
      can re-enable it at any time.
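      
      A sketch of the intended kernel-side use (the triggering condition is of
      course made up):
      
        /* Sketch: freeze all ring buffers the moment a suspect condition is
         * seen, so the trace leading up to it survives for inspection from
         * debugfs; userspace can re-enable recording via the tracing_on file. */
        static void my_check_invariant(int queue_depth)         /* hypothetical hook */
        {
                if (unlikely(queue_depth < 0)) {                /* hypothetical anomaly */
                        tracing_off();          /* all ring buffers stop recording */
                        printk(KERN_WARNING "anomaly hit, trace frozen\n");
                }
        }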
      
      Side effect: With this patch, I discovered a dead variable in ftrace.c
        called tracing_on. This patch removes it.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      a3583244
  13. 11 Nov 2008 (1 commit)
    • tracing: add a tracer to catch execution time of kernel functions · 15e6cb36
      Authored by Frederic Weisbecker
      Impact: add new tracing plugin which can trace full (entry+exit) function calls
      
      This tracer uses the low level function return ftrace plugin to
      measure the execution time of the kernel functions.
      
      The first field is the caller of the function, the second is the
      measured function, and the last one is the execution time in
      nanoseconds.
      
      - v3:
      
      - HAVE_FUNCTION_RET_TRACER has been added. Each arch that supports ftrace return
        should enable it.
      - ftrace_return_stub becomes ftrace_stub.
      - CONFIG_FUNCTION_RET_TRACER depends now on CONFIG_FUNCTION_TRACER
      - Return traces printing can be used for other tracers on trace.c
      - Adapt to the new tracing API (no more ctrl_update callback)
      - Correct the check of "disabled" during insertion.
      - Minor changes...
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      15e6cb36
  14. 08 Nov 2008 (1 commit)
  15. 06 Nov 2008 (1 commit)
    • ftrace: add quick function trace stop · 60a7ecf4
      Authored by Steven Rostedt
      Impact: quick start and stop of function tracer
      
      This patch adds a way to disable the function tracer quickly without
      the need to run kstop_machine. It adds a new variable called
      function_trace_stop which will stop the calls to functions from mcount
      when set.  This is just an on/off switch and does not handle recursion
      like preempt_disable().
      
      Its main purpose is to help other tracers/debuggers start and stop tracing
      functions without the need to call kstop_machine.
      
      The config option HAVE_FUNCTION_TRACE_MCOUNT_TEST is added for archs
      that implement the testing of the function_trace_stop in the mcount
      arch dependent code. Otherwise, the test is done in the C code.
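      
      A sketch of that C-side test (the wrapper and callback names are
      assumptions that mirror the description):
      
        /* Sketch: when the arch's mcount does not test the flag itself, a C
         * wrapper checks function_trace_stop before calling the real tracer. */
        static void ftrace_test_stop_func(unsigned long ip, unsigned long parent_ip)
        {
                if (function_trace_stop)
                        return;                 /* tracing has been switched off */
      
                __ftrace_trace_function(ip, parent_ip); /* assumed real callback */
        }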
      
      x86 is the only arch at the moment that supports this.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      60a7ecf4
  16. 29 Oct 2008 (1 commit)
    • ftrace: perform an initialization for ftrace to enable it · 0b6e4d56
      Authored by Frederic Weisbecker
      Impact: corrects a bug which made the non-dyn function tracer not functional
      
      With the latest git, the non-dynamic function tracer didn't produce any trace.
      
      The problem was that ftrace_enabled wasn't initialized to 1,
      because ftrace has no init function when DYNAMIC_FTRACE is disabled.
      
      So when a tracer tried to register an ftrace_ops struct,
      __register_ftrace_function failed to set the hook.
      
      This patch corrects it by adding an init function that initializes
      ftrace during boot.
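      
      The fix amounts to something like this (the function name and initcall
      level are assumptions):
      
        /* Sketch: give the !DYNAMIC_FTRACE build an init hook so that
         * ftrace_enabled starts at 1 and __register_ftrace_function can
         * actually set the hook. */
        static int __init ftrace_nodyn_init(void)
        {
                ftrace_enabled = 1;
                return 0;
        }
        core_initcall(ftrace_nodyn_init);       /* initcall level assumed */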
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      0b6e4d56
  17. 24 Oct 2008 (1 commit)
    • ftrace: warning in kernel/trace/ftrace.c · f17845e5
      Authored by Ingo Molnar
      This warning:
      
        kernel/trace/ftrace.c:189: warning: ‘frozen_record_count’ defined but not used
      
      triggers because frozen_record_count is only used in the KCONFIG_MARKERS
      case. Move the variable there.
      
      Alas, this frozen-record facility seems to have little use. The
      frozen_record_count variable is not used by anything, nor are the flags.
      
      So this section might need a bit of dead-code-removal care as well.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f17845e5
  18. 23 Oct 2008 (2 commits)