1. 20 Jul 2012, 3 commits
  2. 15 Jun 2012, 1 commit
  3. 17 May 2012, 7 commits
    • ftrace/x86: Have x86 ftrace use the ftrace_modify_all_code() · e4f5d544
      Steven Rostedt authored
      To remove duplicate code, have the ftrace arch_ftrace_update_code()
      use the generic ftrace_modify_all_code(). This requires that the
      default ftrace_replace_code() become a weak function so that an
      arch may override it.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      e4f5d544
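      A minimal sketch of the split this sets up; the bodies are illustrative
      assumptions, not the actual x86 implementation:

        /* generic side: a default that any arch may override */
        void __weak ftrace_replace_code(int enable)
        {
        	/* generic record-by-record update */
        }

        /* arch side (x86): the hook collapses into the shared helper */
        void arch_ftrace_update_code(int command)
        {
        	ftrace_modify_all_code(command);
        }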
    • ftrace: Make ftrace_modify_all_code() global for archs to use · 8ed3e2cf
      Steven Rostedt authored
      Rename __ftrace_modify_code() to ftrace_modify_all_code() and make
      it global for all archs to use. This will remove the duplication
      of code, as archs that can modify code without stop_machine()
      can use it directly outside of the stop_machine() call.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      8ed3e2cf
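      Roughly, ftrace_modify_all_code() dispatches on command flags; a hedged
      sketch using the flag names of this era, with a simplified body:

        void ftrace_modify_all_code(int command)
        {
        	if (command & FTRACE_UPDATE_CALLS)
        		ftrace_replace_code(1);
        	else if (command & FTRACE_DISABLE_CALLS)
        		ftrace_replace_code(0);

        	if (command & FTRACE_UPDATE_TRACE_FUNC)
        		ftrace_update_ftrace_func(ftrace_trace_function);

        	if (command & FTRACE_START_FUNC_RET)
        		ftrace_enable_ftrace_graph_caller();
        	else if (command & FTRACE_STOP_FUNC_RET)
        		ftrace_disable_ftrace_graph_caller();
        }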
    • ftrace: Return record ip addr for ftrace_location() · f0cf973a
      Steven Rostedt authored
      ftrace_location() is passed an addr, and returns 1 if the addr is
      on a ftrace nop (or caller to ftrace_caller), and 0 otherwise.
      
      To let kprobes know if it should move a breakpoint or not, it
      must return the actual addr that is the start of the ftrace nop.
      This way a kprobe placed on the location of a ftrace nop can
      instead be placed on the instruction after the nop. Even if the
      probe addr is on the second or later byte of the nop, it can
      simply be moved forward.
      
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      f0cf973a
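      A hedged sketch of how a caller such as kprobes can use the new return
      value; the adjustment by MCOUNT_INSN_SIZE is an assumption for
      illustration:

        /* probe_addr may point into the middle of the ftrace nop */
        unsigned long faddr = ftrace_location(probe_addr);

        if (faddr)
        	/* move the probe to the instruction after the nop */
        	probe_addr = faddr + MCOUNT_INSN_SIZE;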
    • ftrace: Consolidate ftrace_location() and ftrace_text_reserved() · a650e02a
      Steven Rostedt authored
      Both ftrace_location() and ftrace_text_reserved() do basically the same thing.
      They search to see if an address is in the ftrace table (contains an address
      that may change from nop to call ftrace_caller). The difference is
      that ftrace_location() searches a single address, but ftrace_text_reserved()
      searches a range.
      
      This also makes ftrace_text_reserved() faster, as it now uses a bsearch()
      instead of linearly searching all the addresses within a page.
      
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      a650e02a
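      The consolidation can be pictured as both entry points funneling into
      a single range lookup; a sketch with assumed signatures:

        unsigned long ftrace_location_range(unsigned long start, unsigned long end);

        unsigned long ftrace_location(unsigned long ip)
        {
        	/* a single address is just a degenerate range */
        	return ftrace_location_range(ip, ip);
        }

        int ftrace_text_reserved(void *start, void *end)
        {
        	return ftrace_location_range((unsigned long)start,
        				     (unsigned long)end) ? 1 : 0;
        }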
    • ftrace: Speed up search by skipping pages by address · 9644302e
      Steven Rostedt authored
      As all records in a page of the ftrace table are sorted, we can
      speed up the search algorithm by checking if the address to look for
      falls in between the first and last record ip on the page.
      
      This speeds up both the ftrace_location() and ftrace_text_reserved()
      algorithms, as it can skip full pages when the search address is
      not in them.
      
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      9644302e
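      A sketch of the page-skip test described above; field names follow the
      dyn_ftrace/ftrace_page structures of this era and should be treated as
      assumptions:

        for (pg = ftrace_pages_start; pg; pg = pg->next) {
        	/* records within a page are sorted by ip */
        	if (end < pg->records[0].ip ||
        	    start > pg->records[pg->index - 1].ip)
        		continue;	/* the search range cannot be on this page */
        	/* ... search within this page ... */
        }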
    • ftrace: Remove extra helper functions · 706c81f8
      Steven Rostedt authored
      The ftrace_record_ip() and ftrace_alloc_dyn_node() functions date
      from the time of the ftrace daemon. Although they are still used,
      they make things a bit more complex than necessary.
      
      Move the code into the one function that uses it, and remove the
      helper functions.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      706c81f8
    • ftrace: Sort all function addresses, not just per page · 9fd49328
      Steven Rostedt authored
      Instead of just sorting the ip's of the functions per ftrace page,
      sort the entire list before adding them to the ftrace pages.
      
      This will allow the bsearch algorithm to be sped up, as it can
      now also search by page, not just by record within a page.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      9fd49328
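      A hedged sketch of the up-front sort, using the kernel's lib/sort
      helper on the raw array of mcount locations before it is split into
      pages (the comparator is illustrative):

        static int ftrace_cmp_ips(const void *a, const void *b)
        {
        	const unsigned long *ipa = a;
        	const unsigned long *ipb = b;

        	if (*ipa > *ipb)
        		return 1;
        	if (*ipa < *ipb)
        		return -1;
        	return 0;
        }

        /* start/count describe the __mcount_loc section being processed */
        sort(start, count, sizeof(*start), ftrace_cmp_ips, NULL);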
  4. 09 May 2012, 1 commit
  5. 14 Mar 2012, 1 commit
    • ftrace: Fix function_graph for archs that test ftrace_trace_function · db6544e0
      Rajesh Bhagat authored
      When CONFIG_DYNAMIC_FTRACE is not set, some archs (ARM) test
      the variable ftrace_trace_function to determine if it should
      call the function tracer. If it is not set to ftrace_stub, then
      it will call the function and return, and not call the function
      graph tracer.
      
      But some of these archs (ARM) do not have the assembly code
      to test if function tracing is enabled or not (quick stop of tracing)
      and instead call the helper routine ftrace_test_stop_func().
      
      If function tracer is enabled and then disabled, the variable
      ftrace_trace_function is still set to the helper routine
      ftrace_test_stop_func(), and not to ftrace_stub. This will
      prevent the function graph tracer from ever running.
      
      Output before patch
      /debug/tracing # echo function > current_tracer
      /debug/tracing # echo function_graph > current_tracer
      /debug/tracing # cat trace
      
      Output after patch
      /debug/tracing # echo function > current_tracer
      /debug/tracing # echo function_graph > current_tracer
      /debug/tracing # cat trace
       0) ! 253.375 us  |  } /* irq_enter */
       0)               |  generic_handle_irq() {
       0)               |    handle_fasteoi_irq() {
       0)   9.208 us    |      _raw_spin_lock();
       0)               |      handle_irq_event() {
       0)               |        handle_irq_event_percpu() {
      Signed-off-by: Rajesh Bhagat <rajesh.lnx@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      db6544e0
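      For reference, a sketch of the helper routine mentioned above as it
      looked in kernels of this era (treat the exact body as an assumption):

        static void ftrace_test_stop_func(unsigned long ip, unsigned long parent_ip)
        {
        	if (!function_trace_stop)
        		__ftrace_trace_function(ip, parent_ip);
        }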
  6. 22 Feb 2012, 2 commits
    • ftrace, perf: Add filter support for function trace event · 5500fa51
      Jiri Olsa authored
      Adding support to filter the function trace event via the perf
      interface. It is now possible to use the filter interface
      in the perf tool like:
      
        perf record -e ftrace:function --filter="(ip == mm_*)" ls
      
      The filter syntax is restricted to the 'ip' field only, and the
      following operators are accepted: '==', '!=', '||', ending
      up with filter strings like:
      
        ip == f1[, ]f2 ... || ip != f3[, ]f4 ...
      
      with comma ',' or space ' ' as a function separator. If the
      space ' ' is used as a separator, the right side of the
      assignment needs to be enclosed in double quotes '"', e.g.:
      
        perf record -e ftrace:function --filter '(ip == do_execve,sys_*,ext*)' ls
        perf record -e ftrace:function --filter '(ip == "do_execve,sys_*,ext*")' ls
        perf record -e ftrace:function --filter '(ip == "do_execve sys_* ext*")' ls
      
      The '==' operator adds trace filter with same effect as would
      be added via set_ftrace_filter file.
      
      The '!=' operator adds trace filter with same effect as would
      be added via set_ftrace_notrace file.
      
      The right side of the '!=' and '==' operators is a list of functions
      or regexps to be added to the filter, separated by spaces.
      
      The '||' operator is used for connecting multiple filter definitions
      together. It is possible to have more than one '==' and '!='
      operators within one filter string.
      
      Link: http://lkml.kernel.org/r/1329317514-8131-8-git-send-email-jolsa@redhat.com
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      5500fa51
    • ftrace: Add enable/disable ftrace_ops control interface · e248491a
      Jiri Olsa authored
      Adding a way to temporarily enable/disable ftrace_ops. The change
      follows the same approach used for the 'global' ftrace_ops.
      
      Introducing 2 global objects - control_ops and ftrace_control_list -
      which take over all ftrace_ops registered with the FTRACE_OPS_FL_CONTROL
      flag. In addition, a new per-cpu flag called 'disabled' is added to
      ftrace_ops to provide the control information for each cpu.
      
      When ftrace_ops with FTRACE_OPS_FL_CONTROL is registered, it is
      set as disabled for all cpus.
      
      The ftrace_control_list contains all the registered 'control' ftrace_ops.
      The control_ops provides a function which iterates the ftrace_control_list
      and checks the 'disabled' flag on the current cpu.
      
      Adding 3 inline functions:
        ftrace_function_local_disable/ftrace_function_local_enable
        - enable/disable the ftrace_ops on current cpu
        ftrace_function_local_disabled
        - get disabled ftrace_ops::disabled value for current cpu
      
      Link: http://lkml.kernel.org/r/1329317514-8131-2-git-send-email-jolsa@redhat.com
      Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      e248491a
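      A hedged sketch of the three inlines, assuming 'disabled' is a per-cpu
      int hanging off the ftrace_ops:

        static inline void ftrace_function_local_disable(struct ftrace_ops *ops)
        {
        	(*this_cpu_ptr(ops->disabled))++;
        }

        static inline void ftrace_function_local_enable(struct ftrace_ops *ops)
        {
        	(*this_cpu_ptr(ops->disabled))--;
        }

        static inline int ftrace_function_local_disabled(struct ftrace_ops *ops)
        {
        	return *this_cpu_ptr(ops->disabled);
        }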
  7. 14 Feb 2012, 1 commit
  8. 03 Feb 2012, 1 commit
  9. 21 Dec 2011, 12 commits
    • ftrace: Allow access to the boot time function enabling · 2a85a37f
      Steven Rostedt authored
      Change set_ftrace_early_filter() to ftrace_set_early_filter()
      and make it a global function. This will allow other subsystems
      in the kernel to be able to enable function tracing at start
      up and reuse the ftrace function parsing code.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      2a85a37f
    • ftrace: Decouple hash items from showing filtered functions · 69a3083c
      Steven Rostedt authored
      The set_ftrace_filter file shows "hashed" functions, which are functions
      that are added with operations attached to them (like traceon and traceoff).
      
      As other subsystems may be able to show what functions they are
      using for function tracing, the hash items should no longer be
      shown just because the FILTER flag is set, as they have nothing
      to do with other subsystems' filters.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      69a3083c
    • ftrace: Allow other users of function tracing to use the output listing · fc13cb0c
      Steven Rostedt authored
      The function tracer is set up to allow any other subsystem (like perf)
      to use it. Ftrace already has a way to list what functions are enabled
      by the global_ops. It would be very helpful to let other users of
      the function tracer use the same code.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      fc13cb0c
    • ftrace: Create ftrace_hash_empty() helper routine · 06a51d93
      Steven Rostedt authored
      There are two types of hashes in the ftrace_ops; one type
      is the filter_hash and the other is the notrace_hash. Either
      one may be null, meaning it has no elements. But when elements
      are added, the hash is allocated.
      
      Throughout the code, a check needs to be made to see if a hash
      exists and whether it has elements, but the existence check is
      usually missing, opening the door to NULL pointer dereferences.
      
      Add a helper routine called "ftrace_hash_empty()" that returns
      true if the hash doesn't exist or its count is zero, as the two
      cases mean the same thing.
      Last-bug-reported-by: Jiri Olsa <jolsa@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      06a51d93
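      The helper itself reduces to a one-liner, folding the two checks
      together (a sketch matching the description above):

        static bool ftrace_hash_empty(struct ftrace_hash *hash)
        {
        	/* a missing hash and a zero-count hash mean the same thing */
        	return !hash || !hash->count;
        }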
    • ftrace: Fix ftrace hash record update with notrace · c842e975
      Steven Rostedt authored
      When disabling the "notrace" records, that means we want to trace them.
      If the notrace_hash is zero, it means that we want to trace all
      records. But to disable a zero notrace_hash means nothing.
      
      The check for the notrace_hash count was incorrect with:
      
      	if (hash && !hash->count)
      		return;
      
      The comment above it correctly states that we do nothing if the
      notrace_hash has zero count. But !hash also means that the notrace
      hash has zero count. I think the check was written this way to
      protect against dereferencing NULL. But if !hash is true, then
      we go through the following loop without doing a single thing.
      
      Fix it to:
      
      	if (!hash || !hash->count)
      		return;
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      c842e975
    • ftrace: Use bsearch to find record ip · 5855fead
      Steven Rostedt authored
      Now that each set of pages in the function list is sorted by
      ip, we can use bsearch to find a record within each set of pages.
      This speeds up the ftrace_location() function by orders of magnitude.
      
      For archs (like x86) that need to add a breakpoint at every function
      that will be converted from a nop to a callback and vice versa,
      the breakpoint callback needs to know if the breakpoint was for
      ftrace or not. It requires finding the breakpoint ip within the
      records. Doing a linear search is extremely inefficient. It is
      a must to be able to do a fast binary search to find these locations.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      5855fead
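      A sketch of the per-page binary search; the comparator and call shape
      are assumptions based on the commit description:

        static int ftrace_cmp_recs(const void *a, const void *b)
        {
        	const struct dyn_ftrace *key = a;
        	const struct dyn_ftrace *rec = b;

        	if (key->ip < rec->ip)
        		return -1;
        	if (key->ip > rec->ip)
        		return 1;
        	return 0;
        }

        struct dyn_ftrace key;

        key.ip = addr;
        rec = bsearch(&key, pg->records, pg->index,
        	      sizeof(struct dyn_ftrace), ftrace_cmp_recs);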
    • ftrace: Sort the mcount records on each page · 68950619
      Steven Rostedt authored
      Sort records by the ip locations of the ftrace mcount calls on each
      of the set of pages in the function list. This helps in localizing
      cache usage when updating the function locations, as well as giving
      us the ability to quickly find an ip location in the list.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      68950619
    • ftrace: Replace record newlist with record page list · 85ae32ae
      Steven Rostedt authored
      As new functions come in to be initialized from mcount to nop,
      they are done by groups of pages, whether for the core kernel
      or a module. There's no need to keep track of these on a per
      record basis.
      
      At startup, and as any module is loaded, the functions to be
      traced are stored in a group of pages and added to the function
      list at the end. We just need to keep a pointer to the first
      page of the list that was added, and use that to know where to
      start on the list for initializing functions.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      85ae32ae
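      A hedged sketch of the page descriptor the list is built from; the
      field names are assumptions based on the commit text:

        struct ftrace_page {
        	struct ftrace_page	*next;		/* next group in the function list */
        	struct dyn_ftrace	*records;	/* the records of this group */
        	int			index;		/* records currently in use */
        	int			size;		/* records this group can hold */
        };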
    • ftrace: Allocate the mcount record pages as groups · a7900875
      Steven Rostedt authored
      Allocate the mcount record pages as a group of pages as large
      as can be allocated, wasting no more than a single page.
      
      Grouping the mcount pages as much as possible helps with cache
      locality, as we do not need to redirect with descriptors as we
      cross from page to page. It also allows us to do more with the
      records later on (sort them with bigger benefits).
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      a7900875
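      The order choice can be sketched as: round the record count up to a
      power-of-two number of pages, then shrink the order while a whole page
      would go unused (ENTRY_SIZE and ENTRIES_PER_PAGE are illustrative
      helpers):

        #define ENTRY_SIZE		sizeof(struct dyn_ftrace)
        #define ENTRIES_PER_PAGE	(PAGE_SIZE / ENTRY_SIZE)

        int order = get_count_order(DIV_ROUND_UP(count, ENTRIES_PER_PAGE));

        /* give back orders that would waste a full page or more */
        while ((PAGE_SIZE << order) / ENTRY_SIZE >= count + ENTRIES_PER_PAGE)
        	order--;

        pg->records = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, order);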
    • ftrace: Remove usage of "freed" records · 32082309
      Steven Rostedt authored
      Records that are added to the function trace table are
      permanently there, except for modules. By separating out the
      modules to their own pages that can be freed in one shot
      we can remove the "freed" flag and simplify some of the record
      management.
      
      Another benefit of doing this is that we can also move the
      records around; sort them.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      32082309
    • ftrace: Allow archs to modify code without stop machine · c88fd863
      Steven Rostedt authored
      The stop machine method to modify all functions in the kernel
      (some 20,000 of them) is the safest way to do so across all archs.
      But some archs may not need this big hammer approach to modify code
      on SMP machines, and can simply update the code as needed.
      
      Adding a weak function arch_ftrace_update_code() that does the
      stop machine by default will also let any arch override this method.
      
      If the arch needs to check the system and then decide if it can
      avoid stop machine, it can still call ftrace_run_stop_machine() to
      use the old method.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      c88fd863
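      The weak default described above boils down to a few lines (a sketch
      consistent with the commit description):

        void __weak arch_ftrace_update_code(int command)
        {
        	/* the big hammer stays the default */
        	ftrace_run_stop_machine(command);
        }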
    • ftrace: Fix unregister ftrace_ops accounting · 30fb6aa7
      Jiri Olsa authored
      Multiple users of the function tracer can register their functions
      with the ftrace_ops structure. The accounting within ftrace will
      update the counter on each function record that is being traced.
      When the ftrace_ops filtering adds or removes functions, the
      function records will be updated accordingly if the ftrace_ops is
      still registered.
      
      When a ftrace_ops is removed, the counters of the function records
      that the ftrace_ops traces are decremented. When they reach zero,
      the functions that they represent are modified to stop calling the
      mcount code.
      
      When changes are made, the code is updated via stop_machine() with
      a command passed to the function to tell it what to do. There is an
      ENABLE and DISABLE command that tells the called function to enable
      or disable the functions. But the ENABLE is really a misnomer as it
      should just update the records, as records that have been enabled
      and now have a count of zero should be disabled.
      
      The DISABLE command is used to disable all functions regardless of
      their counter values. This is the big off switch and is not the
      complement of the ENABLE command.
      
      To make matters worse, when a ftrace_ops is unregistered while
      another ftrace_ops is still registered, neither the DISABLE nor the
      ENABLE command is set when calling into the stop_machine() function,
      and the records will not be updated to match their counters. A command
      is passed to that function that will update the mcount code to call
      the registered callback directly if it is the only one left. This
      means that the ftrace_ops that is still registered will have its
      callback called for all functions that were set for it as well as
      for those set for the ftrace_ops that was just unregistered.
      
      Here's a way to trigger this bug. Compile the kernel with
      CONFIG_FUNCTION_PROFILER set and with CONFIG_FUNCTION_GRAPH not set:
      
       CONFIG_FUNCTION_PROFILER=y
       # CONFIG_FUNCTION_GRAPH is not set
      
      This will force the function profiler to use the function tracer instead
      of the function graph tracer.
      
        # cd /sys/kernel/debug/tracing
        # echo schedule > set_ftrace_filter
        # echo function > current_tracer
        # cat set_ftrace_filter
       schedule
        # cat trace
       # tracer: nop
       #
       # entries-in-buffer/entries-written: 692/68108025   #P:4
       #
       #                              _-----=> irqs-off
       #                             / _----=> need-resched
       #                            | / _---=> hardirq/softirq
       #                            || / _--=> preempt-depth
       #                            ||| /     delay
       #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
       #              | |       |   ||||       |         |
            kworker/0:2-909   [000] ....   531.235574: schedule <-worker_thread
                 <idle>-0     [001] .N..   531.235575: schedule <-cpu_idle
            kworker/0:2-909   [000] ....   531.235597: schedule <-worker_thread
                   sshd-2563  [001] ....   531.235647: schedule <-schedule_hrtimeout_range_clock
      
        # echo 1 > function_profile_enabled
        # echo 0 > function_profile_enabled
        # cat set_ftrace_filter
       schedule
        # cat trace
       # tracer: function
       #
       # entries-in-buffer/entries-written: 159701/118821262   #P:4
       #
       #                              _-----=> irqs-off
       #                             / _----=> need-resched
       #                            | / _---=> hardirq/softirq
       #                            || / _--=> preempt-depth
       #                            ||| /     delay
       #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
       #              | |       |   ||||       |         |
                 <idle>-0     [002] ...1   604.870655: local_touch_nmi <-cpu_idle
                 <idle>-0     [002] d..1   604.870655: enter_idle <-cpu_idle
                 <idle>-0     [002] d..1   604.870656: atomic_notifier_call_chain <-enter_idle
                 <idle>-0     [002] d..1   604.870656: __atomic_notifier_call_chain <-atomic_notifier_call_chain
      
      The same problem could have happened with the trace_probe_ops,
      but they are modified with the set_ftrace_filter file, which does
      the update when the file is closed.
      
      The simple solution is to change ENABLE to UPDATE and call it every
      time an ftrace_ops is unregistered.
      
      Link: http://lkml.kernel.org/r/1323105776-26961-3-git-send-email-jolsa@redhat.com
      
      Cc: stable@vger.kernel.org # 3.0+
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      30fb6aa7
  10. 06 Dec 2011, 2 commits
    • ftrace: Fix hash record accounting bug · ddf6e0e5
      Steven Rostedt authored
      If the set_ftrace_filter is cleared by writing just whitespace to
      it, then the filter hash refcounts will be decremented but not
      updated. This causes two bugs:
      
      1) No functions will be enabled for tracing when they all should be
      
      2) If the user clears the set_ftrace_filter twice, it will crash ftrace:
      
      ------------[ cut here ]------------
      WARNING: at /home/rostedt/work/git/linux-trace.git/kernel/trace/ftrace.c:1384 __ftrace_hash_rec_update.part.27+0x157/0x1a7()
      Modules linked in:
      Pid: 2330, comm: bash Not tainted 3.1.0-test+ #32
      Call Trace:
       [<ffffffff81051828>] warn_slowpath_common+0x83/0x9b
       [<ffffffff8105185a>] warn_slowpath_null+0x1a/0x1c
       [<ffffffff810ba362>] __ftrace_hash_rec_update.part.27+0x157/0x1a7
       [<ffffffff810ba6e8>] ? ftrace_regex_release+0xa7/0x10f
       [<ffffffff8111bdfe>] ? kfree+0xe5/0x115
       [<ffffffff810ba51e>] ftrace_hash_move+0x2e/0x151
       [<ffffffff810ba6fb>] ftrace_regex_release+0xba/0x10f
       [<ffffffff8112e49a>] fput+0xfd/0x1c2
       [<ffffffff8112b54c>] filp_close+0x6d/0x78
       [<ffffffff8113a92d>] sys_dup3+0x197/0x1c1
       [<ffffffff8113a9a6>] sys_dup2+0x4f/0x54
       [<ffffffff8150cac2>] system_call_fastpath+0x16/0x1b
      ---[ end trace 77a3a7ee73794a02 ]---
      
      Link: http://lkml.kernel.org/r/20111101141420.GA4918@debian
      Reported-by: Rabin Vincent <rabin@rab.in>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      ddf6e0e5
    • ftrace: Remove force undef config value left for testing · c7c6ec8b
      Steven Rostedt authored
      A forced undef of a config value was used for testing and was
      accidentally left in during the final commit. This causes x86 to
      run slower than needed while running function tracing, as well
      as causing the function graph selftest to fail when DYNAMIC_FTRACE
      is not set. This is because the code in MCOUNT expects the ftrace
      code to be processed with the config value set that happened to
      be forced not set.
      
      The forced config option was left in by:
          commit 6331c28c
          ftrace: Fix dynamic selftest failure on some archs
      
      Link: http://lkml.kernel.org/r/20111102150255.GA6973@debian
      
      Cc: stable@vger.kernel.org
      Reported-by: Rabin Vincent <rabin@rab.in>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      c7c6ec8b
  11. 08 Nov 2011, 2 commits
    • ftrace: Fix hash record accounting bug · d4d34b98
      Steven Rostedt authored
      If the set_ftrace_filter is cleared by writing just whitespace to
      it, then the filter hash refcounts will be decremented but not
      updated. This causes two bugs:
      
      1) No functions will be enabled for tracing when they all should be
      
      2) If the user clears the set_ftrace_filter twice, it will crash ftrace:
      
      ------------[ cut here ]------------
      WARNING: at /home/rostedt/work/git/linux-trace.git/kernel/trace/ftrace.c:1384 __ftrace_hash_rec_update.part.27+0x157/0x1a7()
      Modules linked in:
      Pid: 2330, comm: bash Not tainted 3.1.0-test+ #32
      Call Trace:
       [<ffffffff81051828>] warn_slowpath_common+0x83/0x9b
       [<ffffffff8105185a>] warn_slowpath_null+0x1a/0x1c
       [<ffffffff810ba362>] __ftrace_hash_rec_update.part.27+0x157/0x1a7
       [<ffffffff810ba6e8>] ? ftrace_regex_release+0xa7/0x10f
       [<ffffffff8111bdfe>] ? kfree+0xe5/0x115
       [<ffffffff810ba51e>] ftrace_hash_move+0x2e/0x151
       [<ffffffff810ba6fb>] ftrace_regex_release+0xba/0x10f
       [<ffffffff8112e49a>] fput+0xfd/0x1c2
       [<ffffffff8112b54c>] filp_close+0x6d/0x78
       [<ffffffff8113a92d>] sys_dup3+0x197/0x1c1
       [<ffffffff8113a9a6>] sys_dup2+0x4f/0x54
       [<ffffffff8150cac2>] system_call_fastpath+0x16/0x1b
      ---[ end trace 77a3a7ee73794a02 ]---
      
      Link: http://lkml.kernel.org/r/20111101141420.GA4918@debian
      Reported-by: Rabin Vincent <rabin@rab.in>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      d4d34b98
    • ftrace: Remove force undef config value left for testing · 8ee3c92b
      Steven Rostedt authored
      A forced undef of a config value was used for testing and was
      accidentally left in during the final commit. This causes x86 to
      run slower than needed while running function tracing, as well
      as causing the function graph selftest to fail when DYNAMIC_FTRACE
      is not set. This is because the code in MCOUNT expects the ftrace
      code to be processed with the config value set that happened to
      be forced not set.
      
      The forced config option was left in by:
          commit 6331c28c
          ftrace: Fix dynamic selftest failure on some archs
      
      Link: http://lkml.kernel.org/r/20111102150255.GA6973@debian
      
      Cc: stable@vger.kernel.org
      Reported-by: Rabin Vincent <rabin@rab.in>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      8ee3c92b
  12. 31 Oct 2011, 1 commit
  13. 11 Oct 2011, 1 commit
    • tracing: Warn on output if the function tracer was found corrupted · e0a413f6
      Steven Rostedt authored
      As the function tracer is very intrusive, lots of self checks are
      performed on the tracer, and if something is found to be strange
      it will shut itself down, keeping it from corrupting the rest of the
      kernel. This shutdown may still allow functions to be traced, as the
      tracing only stops new modifications from happening. Trying to stop
      the function tracer itself can cause more harm, as it requires code
      modification.
      
      Although a WARN_ON() is executed, a user may not notice it. To help
      the user see that something isn't right with the tracing of the system
      a big warning is added to the output of the tracer that lets the user
      know that their data may be incomplete.
      Reported-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      e0a413f6
  14. 15 Jul 2011, 1 commit
    • ftrace: Fix regression where ftrace breaks when modules are loaded · f7bc8b61
      Steven Rostedt authored
      Enabling the function tracer to trace all functions, then loading a
      module, and then disabling function tracing will cause ftrace to fail.
      
      This can also happen by enabling function tracing on the command line:
      
        ftrace=function
      
      If modules are loaded during boot up and you then disable function
      tracing with 'echo nop > current_tracer', you will trigger a bug in
      ftrace that shuts it down.
      
      The reason is, the new ftrace code keeps ref counts of all ftrace_ops that
      are registered for tracing. When one or more ftrace_ops are registered,
      all the records that represent the functions that the ftrace_ops will
      trace have a ref count incremented. If this ref count is not zero,
      when the code modification runs, that function will be enabled for tracing.
      If the ref count is zero, that function will be disabled from tracing.
      
      To make sure the accounting was working, FTRACE_WARN_ON()s were added
      to the updating of the ref counts.
      
      If the ref count hits its max (> 2^30 ftrace_ops added), or if
      the ref count goes below zero, a FTRACE_WARN_ON() is triggered which
      disables all modification of code.
      
      Since it is common for ftrace_ops to trace all functions in the kernel,
      instead of creating > 20,000 hash items for such an ftrace_ops, the hash
      count is just set to zero, meaning that the ftrace_ops is to trace
      all functions. This is where the issue arises.
      
      If you enable function tracing to trace all functions, and then add
      a module, the module's function records do not get their ref counts
      updated. When the function tracer is disabled, all function record
      ref counts are decremented. Since the module records never had their
      ref counts incremented, they go below zero and the FTRACE_WARN_ON()
      is triggered.
      
      The solution to this is rather simple. When modules are loaded, and
      their functions are added to the ftrace pool, look to see if any
      ftrace_ops are registered that trace all functions. And for those,
      update the ref count for the module function records.
      Reported-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      f7bc8b61
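      The fix can be sketched as a scan over the registered ops while a
      module's records are initialized; the list and flag names follow
      ftrace of this era, and the details are assumptions:

        struct ftrace_ops *ops;

        for (ops = ftrace_ops_list; ops != &ftrace_list_end; ops = ops->next) {
        	/* an empty filter hash means "trace every function" */
        	if ((ops->flags & FTRACE_OPS_FL_ENABLED) &&
        	    (!ops->filter_hash || !ops->filter_hash->count))
        		rec->flags++;	/* take a ref on the new module record */
        }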
  15. 14 Jul 2011, 3 commits
    • ftrace: Fix dynamic selftest failure on some archs · 6331c28c
      Steven Rostedt authored
      Archs that do not implement CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST, will
      fail the dynamic ftrace selftest.
      
      The function tracer has a quick 'off' variable that will prevent
      the call back functions from being called. This variable is called
      function_trace_stop. In x86, this is implemented directly in the mcount
      assembly, but for other archs, an intermediate function is used called
      ftrace_test_stop_func().
      
      In dynamic ftrace, the function pointer variable ftrace_trace_function is
      used to update the caller code in the mcount caller. But for archs that
      do not have CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST set, it only calls
      ftrace_test_stop_func() instead, which in turn calls __ftrace_trace_function.
      
      When more than one ftrace_ops is registered, the function it calls is
      ftrace_ops_list_func(), which will iterate over all registered ftrace_ops
      and call the callbacks that have their hash matching.
      
      The issue happens when two ftrace_ops are registered for different
      functions and one is then unregistered. The __ftrace_trace_function is
      then pointed to the remaining ftrace_ops callback function directly.
      This means it will be called for all functions that were registered
      to be traced by both ftrace_ops, not just its own.
      
      This is not an issue for archs with CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST,
      because the update of ftrace_trace_function doesn't happen until after all
      functions have been updated, and then the mcount caller is updated. But
      for those archs that do use the ftrace_test_stop_func(), the update is
      immediate.
      
      The dynamic selftest fails because it hits this situation, and the
      ftrace_ops that it registers fails to trace only what it was supposed
      to and instead traces all other functions.
      
      The solution is to delay the setting of __ftrace_trace_function until
      after all the functions have been updated according to the registered
      ftrace_ops. Also, function_trace_stop is set during the update to prevent
      function tracing from calling code that is caused by the function tracer
      itself.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      6331c28c
    • ftrace: Update filter when tracing enabled in set_ftrace_filter() · 072126f4
      Steven Rostedt authored
      Currently, if set_ftrace_filter() is called when the ftrace_ops is
      active, the function filters will not be updated. They will only be updated
      when tracing is disabled and re-enabled.
      
      Update the functions immediately during set_ftrace_filter().
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      072126f4
    • ftrace: Balance records when updating the hash · 41fb61c2
      Steven Rostedt authored
      Whenever the hash of the ftrace_ops is updated, the record counts
      must be balanced. This requires disabling the records that are set
      in the original hash, and then enabling the records that are set
      in the updated hash.
      
      Moving the update into ftrace_hash_move() removes the bug where the
      hash was updated but the records were not. That bug results in ftrace
      triggering a warning and disabling itself when an ftrace_ops filter
      is updated while the ftrace_ops is registered, with the failure
      surfacing when the ftrace_ops is unregistered.
      
      The current code will not trigger this bug, but new code will.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      41fb61c2
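      In outline, the balancing now done inside ftrace_hash_move() (a hedged
      sketch using the rec-update helpers of this era):

        /* drop the counts held by the hash being replaced */
        ftrace_hash_rec_disable(ops, enable);

        /* ... install the new hash on the ftrace_ops ... */

        /* account for the records the new hash selects */
        ftrace_hash_rec_enable(ops, enable);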
  16. 08 Jul 2011, 1 commit
    • ftrace: Do not disable interrupts for modules in mcount update · 4376cac6
      Steven Rostedt authored
      When I mounted an NFS directory, it caused several modules to be loaded. At the
      time I was running the preemptirqsoff tracer, and it showed the following
      output:
      
      # tracer: preemptirqsoff
      #
      # preemptirqsoff latency trace v1.1.5 on 2.6.33.9-rt30-mrg-test
      # --------------------------------------------------------------------
      # latency: 1177 us, #4/4, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
      #    -----------------
      #    | task: modprobe-19370 (uid:0 nice:0 policy:0 rt_prio:0)
      #    -----------------
      #  => started at: ftrace_module_notify
      #  => ended at:   ftrace_module_notify
      #
      #
      #                  _------=> CPU#
      #                 / _-----=> irqs-off
      #                | / _----=> need-resched
      #                || / _---=> hardirq/softirq
      #                ||| / _--=> preempt-depth
      #                |||| /_--=> lock-depth
      #                |||||/     delay
      #  cmd     pid   |||||| time  |   caller
      #     \   /      ||||||   \   |   /
      modprobe-19370   3d....    0us!: ftrace_process_locs <-ftrace_module_notify
      modprobe-19370   3d.... 1176us : ftrace_process_locs <-ftrace_module_notify
      modprobe-19370   3d.... 1178us : trace_hardirqs_on <-ftrace_module_notify
      modprobe-19370   3d.... 1178us : <stack trace>
       => ftrace_process_locs
       => ftrace_module_notify
       => notifier_call_chain
       => __blocking_notifier_call_chain
       => blocking_notifier_call_chain
       => sys_init_module
       => system_call_fastpath
      
      That's over 1ms that interrupts are disabled on a Real-Time kernel!
      
      Looking at the cause (being the ftrace author helped), I found that
      interrupts are disabled before the code modification of mcounts into
      nops. The interrupts only need to be disabled on start up around this
      code, not when modules are being loaded.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      4376cac6
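      The shape of the fix as the commit describes it, sketched at the call
      site in ftrace_process_locs(), where 'mod' is NULL for the boot-time
      pass:

        unsigned long flags;

        /*
         * Boot-time conversion patches code that interrupts may already
         * be executing; module code is not live yet, so it does not need
         * interrupts disabled around the update.
         */
        if (!mod)
        	local_irq_save(flags);
        ftrace_update_code(mod);
        if (!mod)
        	local_irq_restore(flags);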