1. 01 July 2014 (1 commit)
• ftrace: Optimize function graph to be called directly · 79922b80
  Committed by Steven Rostedt (Red Hat)
Function graph tracing is a bit different from the function tracers, as
it is processed after either ftrace_caller or ftrace_regs_caller, and
there is only one place to modify the jump to ftrace_graph_caller: the
jump needs to happen after the restore of registers.
      
The function graph tracer is dependent on the function tracer: even if
function graph tracing is going on by itself, the save and restore of
registers is still done for function tracing, regardless of whether
function tracing is happening, before the function graph code is
called.
      
      If there's no function tracing happening, it is possible to just call
      the function graph tracer directly, and avoid the wasted effort to save
      and restore regs for function tracing.
      
      This requires adding new flags to the dyn_ftrace records:
      
        FTRACE_FL_TRAMP
        FTRACE_FL_TRAMP_EN
      
The first is set if the count for the record is one, and the ftrace_ops
associated with that record has its own trampoline. That way the mcount code
can call that trampoline directly.
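
As an illustration of how such a check might look, here is a small self-contained C sketch; the flag bit positions, the reference-count mask, and the dyn_ftrace layout below are assumptions for the example, not the kernel's actual definitions.

    /* Illustration only: the real flags and dyn_ftrace live in include/linux/ftrace.h */
    #include <stdio.h>

    #define FTRACE_FL_TRAMP     (1UL << 28)        /* assumed bit positions */
    #define FTRACE_FL_TRAMP_EN  (1UL << 27)
    #define FTRACE_REF_MAX      ((1UL << 24) - 1)  /* assumed ref-count mask */

    struct dyn_ftrace {
            unsigned long ip;      /* address of the mcount call site */
            unsigned long flags;   /* ref count in low bits, state flags above */
    };

    /* A record may jump straight to the ops' own trampoline only when a
     * single ops references it and that ops provides a trampoline. */
    static int can_use_direct_trampoline(struct dyn_ftrace *rec)
    {
            return (rec->flags & FTRACE_REF_MAX) == 1 &&
                   (rec->flags & FTRACE_FL_TRAMP);
    }

    int main(void)
    {
            struct dyn_ftrace rec = { .ip = 0xffffffff81000000UL,
                                      .flags = 1 | FTRACE_FL_TRAMP };

            printf("direct trampoline: %s\n",
                   can_use_direct_trampoline(&rec) ? "yes" : "no");
            return 0;
    }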
      
      In the future, trampolines can be added to arbitrary ftrace_ops, where you
      can have two or more ftrace_ops registered to ftrace (like kprobes and perf)
      and if they are not tracing the same functions, then instead of doing a
      loop to check all registered ftrace_ops against their hashes, just call the
      ftrace_ops trampoline directly, which would call the registered ftrace_ops
      function directly.
      
      Without this patch perf showed:
      
        0.05%  hackbench  [kernel.kallsyms]  [k] ftrace_caller
        0.05%  hackbench  [kernel.kallsyms]  [k] arch_local_irq_save
        0.05%  hackbench  [kernel.kallsyms]  [k] native_sched_clock
        0.04%  hackbench  [kernel.kallsyms]  [k] __buffer_unlock_commit
        0.04%  hackbench  [kernel.kallsyms]  [k] preempt_trace
        0.04%  hackbench  [kernel.kallsyms]  [k] prepare_ftrace_return
        0.04%  hackbench  [kernel.kallsyms]  [k] __this_cpu_preempt_check
        0.04%  hackbench  [kernel.kallsyms]  [k] ftrace_graph_caller
      
      See that the ftrace_caller took up more time than the ftrace_graph_caller
      did.
      
      With this patch:
      
        0.05%  hackbench  [kernel.kallsyms]  [k] __buffer_unlock_commit
        0.04%  hackbench  [kernel.kallsyms]  [k] call_filter_check_discard
        0.04%  hackbench  [kernel.kallsyms]  [k] ftrace_graph_caller
        0.04%  hackbench  [kernel.kallsyms]  [k] sched_clock
      
The ftrace_caller is nowhere to be found and ftrace_graph_caller still
takes up the same percentage.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2. 30 June 2014 (2 commits)
• ftrace: Add ftrace_rec_counter() macro to simplify the code · 0376bde1
  Committed by Steven Rostedt (Red Hat)
The ftrace dynamic record has a flags element that also has a counter.
Instead of hard coding "rec->flags & ~FTRACE_FL_MASK" all over the
place, use a macro instead.
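
A sketch of what such a helper could look like; the mask value and the exact macro name below are assumptions, and the real definitions live in include/linux/ftrace.h.

    struct dyn_ftrace {
            unsigned long ip;      /* address of the mcount call site */
            unsigned long flags;   /* low bits: ref count, high bits: state flags */
    };

    #define FTRACE_FL_MASK  (0xfUL << 28)   /* assumed state-flag bits */

    /* Read just the reference count, masking off the state flags. */
    #define ftrace_rec_counter(rec)  ((rec)->flags & ~FTRACE_FL_MASK)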
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
• ftrace: Allow no regs if no more callbacks require it · 4fbb48cb
  Committed by Steven Rostedt (Red Hat)
When registering a function callback for the function tracer, the ops
can specify whether it wants to save full regs (like an interrupt would)
for each function that it traces, or whether it does not care about regs
and just wants the fastest return possible.

Once an ops has registered a function, if other ops register that
function they all will receive the regs too. That's because the work is
done once, and it is done for everyone.
      
Now if the ops that wants regs unregisters the function, so that only
ops that do not care about regs are left, those ops will still continue
getting regs and going through the work for it on that function. This is
because the disabling of the rec counter only sees the ops being
unregistered; it does not look at the ops that are still attached, and
does not know whether those remaining ops want regs or not. To play it
safe, it just keeps processing regs until no function is registered
anymore.
      
      Instead of doing that, check the ops that are still registered for that
      function and if none want regs for it anymore, then disable the
      processing of regs.
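
A sketch of that re-check under simplified assumptions (the per-ops hash lookup is replaced by a stand-in callback): after one ops drops a function, walk the ops still tracing it and keep the regs flag only if one of them still asks for full registers.

    #include <stdbool.h>

    #define FTRACE_OPS_FL_SAVE_REGS  (1UL << 1)    /* assumed bit */
    #define FTRACE_FL_REGS           (1UL << 30)   /* assumed bit */

    struct ftrace_ops {
            unsigned long flags;
            struct ftrace_ops *next;
            /* stand-in for the filter-hash lookup the kernel really does */
            bool (*traces_ip)(struct ftrace_ops *ops, unsigned long ip);
    };

    struct dyn_ftrace {
            unsigned long ip;
            unsigned long flags;
    };

    static void recheck_regs_flag(struct dyn_ftrace *rec, struct ftrace_ops *ops_list)
    {
            struct ftrace_ops *op;

            rec->flags &= ~FTRACE_FL_REGS;
            for (op = ops_list; op; op = op->next) {
                    if (op->traces_ip(op, rec->ip) &&
                        (op->flags & FTRACE_OPS_FL_SAVE_REGS)) {
                            rec->flags |= FTRACE_FL_REGS;
                            break;
                    }
            }
    }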
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
3. 14 May 2014 (6 commits)
• ftrace: Remove FTRACE_UPDATE_MODIFY_CALL_REGS flag · f1b2f2bd
  Committed by Steven Rostedt (Red Hat)
As the decision about what needs to be done (converting a call to the
ftrace_caller to ftrace_caller_regs, or converting from ftrace_caller_regs
back to ftrace_caller) can easily be determined from the rec->flags of
FTRACE_FL_REGS and FTRACE_FL_REGS_EN, there's no need to have
ftrace_check_record() return either a UPDATE_MODIFY_CALL_REGS or an
UPDATE_MODIFY_CALL. Just the latter is enough. The extra flag causes
more complexity than is required. Remove it.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
• ftrace: Use the ftrace_addr helper functions to find the ftrace_addr · 7c0868e0
  Committed by Steven Rostedt (Red Hat)
With the functions that determine what the mcount call site should be
replaced with now moved into the generic code, there are a few places
in the generic code that can use them instead of hard coding the logic.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
• ftrace: Make get_ftrace_addr() and get_ftrace_addr_old() global · 7413af1f
  Committed by Steven Rostedt (Red Hat)
      Move and rename get_ftrace_addr() and get_ftrace_addr_old() to
      ftrace_get_addr_new() and ftrace_get_addr_curr() respectively.
      
This moves these two helper functions out of the arch specific code and
into the generic code, and renames them to have better generic names.
This will allow other archs to use them, and makes it a bit easier to
work on getting separate trampolines for different functions.
      
      ftrace_get_addr_new() returns the trampoline address that the mcount
      call address will be converted to.
      
      ftrace_get_addr_curr() returns the trampoline address of what the
      mcount call address currently jumps to.
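
A hedged sketch of what the two helpers decide, with assumed flag bits and placeholder trampoline addresses standing in for the arch's real ftrace_caller and ftrace_regs_caller:

    #define FTRACE_FL_REGS     (1UL << 30)   /* assumed bit positions */
    #define FTRACE_FL_REGS_EN  (1UL << 29)

    extern unsigned long ftrace_caller_addr;        /* placeholders for the */
    extern unsigned long ftrace_regs_caller_addr;   /* arch trampolines     */

    struct dyn_ftrace {
            unsigned long ip;
            unsigned long flags;
    };

    /* What the call site should be patched to point at. */
    unsigned long ftrace_get_addr_new(struct dyn_ftrace *rec)
    {
            return (rec->flags & FTRACE_FL_REGS) ?
                    ftrace_regs_caller_addr : ftrace_caller_addr;
    }

    /* What the call site currently points at. */
    unsigned long ftrace_get_addr_curr(struct dyn_ftrace *rec)
    {
            return (rec->flags & FTRACE_FL_REGS_EN) ?
                    ftrace_regs_caller_addr : ftrace_caller_addr;
    }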
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
• ftrace: Always inline ftrace_hash_empty() helper function · 68f40969
  Committed by Steven Rostedt (Red Hat)
      The ftrace_hash_empty() function is a simple test:
      
      	return !hash || !hash->count;
      
      But gcc seems to want to make it a call. As this is in an extreme
      hot path of the function tracer, there's no reason it needs to be
      a call. I only wrote it to be a helper function anyway, otherwise
      it would have been inlined manually.
      
      Force gcc to inline it, as it could have also been a macro.
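
A minimal sketch of the idea, assuming a pared-down ftrace_hash; the kernel itself uses the __always_inline macro from <linux/compiler.h> rather than the raw attribute spelled out here.

    struct ftrace_hash {
            unsigned long count;
            /* ... buckets, size_bits, etc. elided ... */
    };

    /* Force the trivial test inline so gcc cannot emit an out-of-line
     * call on the function-tracing hot path. */
    static inline __attribute__((always_inline))
    int ftrace_hash_empty(struct ftrace_hash *hash)
    {
            return !hash || !hash->count;
    }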
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
• ftrace: Write in missing comment from a very old commit · 19eab4a4
  Committed by Steven Rostedt (Red Hat)
Back in 2011, commit ed926f9b "ftrace: Use counters to enable
functions to trace" changed the way ftrace accounts for enabled
and disabled traced functions. A comment there was started as:
      
      	/*
      	 *
      	 */
      
But it was never finished. Well, that's rather useless. I probably forgot
to save the file before committing it. And it passed review all
this time.
      
      Anyway, better late than never. I updated the comment to express what
      is happening in that somewhat complex code.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
• ftrace: Remove boolean of hash_enable and hash_disable · 66209a5b
  Committed by Steven Rostedt (Red Hat)
      Commit 4104d326 "ftrace: Remove global function list and call
      function directly" cleaned up the global_ops filtering and made
      the code simpler, but it left a variable "hash_enable" that was used
      to know if the hash functions should be updated or not. It was
      updated if the global_ops did not override them. As the global_ops
      are now no different than any other ftrace_ops, the hash always
      gets updated and there's no reason to use the hash_enable boolean.
      
      The same goes for hash_disable used in ftrace_shutdown().
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
4. 06 May 2014 (1 commit)
5. 02 May 2014 (1 commit)
6. 28 April 2014 (1 commit)
• ftrace/module: Hardcode ftrace_module_init() call into load_module() · a949ae56
  Committed by Steven Rostedt (Red Hat)
      A race exists between module loading and enabling of function tracer.
      
      	CPU 1				CPU 2
      	-----				-----
        load_module()
         module->state = MODULE_STATE_COMING
      
      				register_ftrace_function()
      				 mutex_lock(&ftrace_lock);
      				 ftrace_startup()
      				  update_ftrace_function();
      				   ftrace_arch_code_modify_prepare()
      				    set_all_module_text_rw();
      				   <enables-ftrace>
      				    ftrace_arch_code_modify_post_process()
      				     set_all_module_text_ro();
      
      				[ here all module text is set to RO,
      				  including the module that is
      				  loading!! ]
      
         blocking_notifier_call_chain(MODULE_STATE_COMING);
          ftrace_init_module()
      
           [ tries to modify code, but it's RO, and fails!
             ftrace_bug() is called]
      
When this race happens, ftrace_bug() will produce a nasty warning and
all of the function tracing features will be disabled until reboot.
      
The simple solution is to treat module load the same way the core
kernel is treated at boot: hardcode the ftrace function modification
that converts calls to mcount into nops. This is done in init/main.c,
and there's no reason it could not be done in load_module(). This gives
better control of the changes and doesn't tie the state of the
module to its notifiers as much. Ftrace is special; it needs to be
treated as such.
      
      The reason this would work, is that the ftrace_module_init() would be
      called while the module is in MODULE_STATE_UNFORMED, which is ignored
      by the set_all_module_text_ro() call.
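
The ordering the fix relies on can be sketched as below; the types and helpers are stubs for illustration, not the real kernel/module.c code.

    enum module_state { MODULE_STATE_UNFORMED, MODULE_STATE_COMING };

    struct module { enum module_state state; };

    void ftrace_module_init(struct module *mod);    /* converts the module's mcount calls to nops */
    void notify_module_coming(struct module *mod);  /* stand-in for the notifier chain */

    void load_module_ordering(struct module *mod)
    {
            mod->state = MODULE_STATE_UNFORMED;

            /* While UNFORMED, set_all_module_text_ro() on another CPU skips
             * this module, so the nop conversion cannot hit read-only text. */
            ftrace_module_init(mod);

            mod->state = MODULE_STATE_COMING;
            notify_module_coming(mod);
    }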
      
Link: http://lkml.kernel.org/r/1395637826-3312-1-git-send-email-indou.takao@jp.fujitsu.com
Reported-by: Takao Indoh <indou.takao@jp.fujitsu.com>
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: stable@vger.kernel.org # 2.6.38+
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
7. 25 April 2014 (2 commits)
8. 22 April 2014 (2 commits)
9. 12 March 2014 (2 commits)
10. 07 March 2014 (5 commits)
11. 21 February 2014 (4 commits)
12. 14 January 2014 (1 commit)
• ftrace: Fix synchronization location disabling and freeing ftrace_ops · a4c35ed2
  Committed by Steven Rostedt (Red Hat)
The synchronization needed after ftrace_ops are unregistered must happen
after the callback is disabled from being called by functions.

The current location happens after the function is removed from the
internal lists, but not after the function callbacks were disabled, leaving
the functions susceptible to being called after their callbacks are freed.

This affects perf and any external users of function tracing (LTTng and
SystemTap).
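
A minimal sketch of the corrected ordering, with assumed helper names standing in for the real unregistration steps; the point is only that the grace-period wait happens after the call sites stop invoking the callback.

    struct ftrace_ops;

    void remove_ops_from_list(struct ftrace_ops *ops);         /* assumed helper */
    void disable_callback_call_sites(struct ftrace_ops *ops);  /* assumed helper */
    void synchronize_sched(void);                              /* RCU-sched grace period */

    void ftrace_shutdown_sketch(struct ftrace_ops *ops)
    {
            remove_ops_from_list(ops);
            disable_callback_call_sites(ops);

            /* Only after no call site can invoke the callback do we wait for
             * in-flight callers; then dynamically allocated ops (perf, LTTng,
             * SystemTap) may be freed safely by the caller. */
            synchronize_sched();
    }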
      
      Cc: stable@vger.kernel.org # 3.0+
      Fixes: cdbe61bf "ftrace: Allow dynamically allocated function tracers"
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
13. 13 January 2014 (1 commit)
• ftrace: Have function graph only trace based on global_ops filters · 23a8e844
  Committed by Steven Rostedt (Red Hat)
Doing some different tests, I discovered that function graph tracing, when
filtered via the set_ftrace_filter and set_ftrace_notrace files, does
not always honor them if another function ftrace_ops is registered
to trace functions.
      
      The reason is that function graph just happens to trace all functions
      that the function tracer enables. When there was only one user of
      function tracing, the function graph tracer did not need to worry about
      being called by functions that it did not want to trace. But now that there
      are other users, this becomes a problem.
      
      For example, one just needs to do the following:
      
       # cd /sys/kernel/debug/tracing
       # echo schedule > set_ftrace_filter
       # echo function_graph > current_tracer
       # cat trace
      [..]
       0)               |  schedule() {
       ------------------------------------------
       0)    <idle>-0    =>   rcu_pre-7
       ------------------------------------------
      
       0) ! 2980.314 us |  }
       0)               |  schedule() {
       ------------------------------------------
       0)   rcu_pre-7    =>    <idle>-0
       ------------------------------------------
      
       0) + 20.701 us   |  }
      
       # echo 1 > /proc/sys/kernel/stack_tracer_enabled
       # cat trace
      [..]
       1) + 20.825 us   |      }
       1) + 21.651 us   |    }
       1) + 30.924 us   |  } /* SyS_ioctl */
       1)               |  do_page_fault() {
       1)               |    __do_page_fault() {
       1)   0.274 us    |      down_read_trylock();
       1)   0.098 us    |      find_vma();
       1)               |      handle_mm_fault() {
       1)               |        _raw_spin_lock() {
       1)   0.102 us    |          preempt_count_add();
       1)   0.097 us    |          do_raw_spin_lock();
       1)   2.173 us    |        }
       1)               |        do_wp_page() {
       1)   0.079 us    |          vm_normal_page();
       1)   0.086 us    |          reuse_swap_page();
       1)   0.076 us    |          page_move_anon_rmap();
       1)               |          unlock_page() {
       1)   0.082 us    |            page_waitqueue();
       1)   0.086 us    |            __wake_up_bit();
       1)   1.801 us    |          }
       1)   0.075 us    |          ptep_set_access_flags();
       1)               |          _raw_spin_unlock() {
       1)   0.098 us    |            do_raw_spin_unlock();
       1)   0.105 us    |            preempt_count_sub();
       1)   1.884 us    |          }
       1)   9.149 us    |        }
       1) + 13.083 us   |      }
       1)   0.146 us    |      up_read();
      
When the stack tracer was enabled, it enabled all functions to be traced,
which the function graph tracer now also traces. This is a side effect that
should not occur.
      
To fix this, a test is added when function tracing is changed, as well as when
the graph tracer is enabled, to see if anything other than the ftrace global_ops
function tracer is enabled. If so, the graph tracer calls a test trampoline
that looks at the function being traced and compares it with the
filters defined by the global_ops.

As an optimization, if there are no other function tracers registered, or if
the only registered function tracers also use the global ops, the function
graph infrastructure will call the registered function graph callback directly
and not go through the test trampoline.
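
The test can be pictured with the sketch below; the helpers standing in for the global_ops hash check and for the original graph entry callback are assumptions, not the kernel's names.

    struct ftrace_graph_ent { unsigned long func; };

    int global_ops_wants(unsigned long ip);                /* assumed filter/notrace hash check */
    int real_graph_entry(struct ftrace_graph_ent *trace);  /* assumed original entry callback */

    int graph_entry_test_sketch(struct ftrace_graph_ent *trace)
    {
            /* Reject functions that the global_ops filters would not trace. */
            if (!global_ops_wants(trace->func))
                    return 0;

            return real_graph_entry(trace);
    }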
      
      Cc: stable@vger.kernel.org # 3.3+
      Fixes: d2d45c7a "tracing: Have stack_tracer use a separate list of functions"
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
14. 10 January 2014 (1 commit)
• ftrace: Synchronize setting function_trace_op with ftrace_trace_function · 405e1d83
  Committed by Steven Rostedt (Red Hat)
      ftrace_trace_function is a variable that holds what function will be called
      directly by the assembly code (mcount). If just a single function is
      registered and it handles recursion itself, then the assembly will call that
      function directly without any helper function. It also passes in the
      ftrace_op that was registered with the callback. The ftrace_op to send is
      stored in the function_trace_op variable.
      
The ftrace_trace_function and function_trace_op need to be coordinated such
that the called callback won't be called with the wrong ftrace_op, otherwise
bad things can happen if it expected a different op. Luckily, there's no
callback that doesn't use the helper functions that requires this. But
there soon will be, and this needs to be fixed.

Use a set_function_trace_op to store the ftrace_op to set the
function_trace_op to when it is safe to do so (during the update function
within the breakpoint or stop machine calls). If dynamic ftrace is not
being used (static tracing), then we have to do a bit more synchronization
when the ftrace_trace_function is set, as that takes effect immediately
(as opposed to dynamic ftrace doing it with the modification of the trampoline).
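
A simplified sketch of the staging scheme; the variable names mirror the commit message, while the surrounding locking and code-patching machinery is omitted.

    struct ftrace_ops;

    static struct ftrace_ops *function_trace_op;       /* what the trampoline passes to the callback */
    static struct ftrace_ops *set_function_trace_op;   /* staged value, not yet visible */

    static void stage_new_ops(struct ftrace_ops *ops)
    {
            /* Park the new ops; the switch happens later, inside the
             * breakpoint/stop_machine update where it is safe. */
            set_function_trace_op = ops;
    }

    static void commit_new_ops(void)
    {
            function_trace_op = set_function_trace_op;
            /* only now would ftrace_trace_function be pointed at the new callback */
    }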
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
15. 03 January 2014 (1 commit)
16. 16 December 2013 (1 commit)
• ftrace: Initialize the ftrace profiler for each possible cpu · c4602c1c
  Committed by Miao Xie
      Ftrace currently initializes only the online CPUs. This implementation has
      two problems:
      - If we online a CPU after we enable the function profile, and then run the
        test, we will lose the trace information on that CPU.
        Steps to reproduce:
        # echo 0 > /sys/devices/system/cpu/cpu1/online
        # cd <debugfs>/tracing/
        # echo <some function name> >> set_ftrace_filter
        # echo 1 > function_profile_enabled
        # echo 1 > /sys/devices/system/cpu/cpu1/online
        # run test
      - If we offline a CPU before we enable the function profile, we will not clear
        the trace information when we enable the function profile. It will trouble
        the users.
        Steps to reproduce:
        # cd <debugfs>/tracing/
        # echo <some function name> >> set_ftrace_filter
        # echo 1 > function_profile_enabled
        # run test
        # cat trace_stat/function*
        # echo 0 > /sys/devices/system/cpu/cpu1/online
        # echo 0 > function_profile_enabled
        # echo 1 > function_profile_enabled
        # cat trace_stat/function*
        # run test
        # cat trace_stat/function*
      
      So it is better that we initialize the ftrace profiler for each possible cpu
      every time we enable the function profile instead of just the online ones.
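
In kernel terms the change boils down to iterating the possible CPUs rather than the online ones when the profiler is enabled. The sketch below assumes ftrace_profile_init_cpu() allocates or resets that CPU's stat pages; it is an outline of ftrace_profile_init(), not the exact code.

    /* Kernel-context sketch; needs <linux/cpumask.h> for the iterator. */
    static int ftrace_profile_init_sketch(void)
    {
            int cpu;
            int ret = 0;

            for_each_possible_cpu(cpu) {     /* previously: for_each_online_cpu(cpu) */
                    ret = ftrace_profile_init_cpu(cpu);
                    if (ret)
                            break;
            }

            return ret;
    }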
      
      Link: http://lkml.kernel.org/r/1387178401-10619-1-git-send-email-miaox@cn.fujitsu.com
      
      Cc: stable@vger.kernel.org # 2.6.31+
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
17. 26 November 2013 (1 commit)
• ftrace: Fix function graph with loading of modules · 8a56d776
  Committed by Steven Rostedt (Red Hat)
      Commit 8c4f3c3f "ftrace: Check module functions being traced on reload"
      fixed module loading and unloading with respect to function tracing, but
      it missed the function graph tracer. If you perform the following
      
       # cd /sys/kernel/debug/tracing
       # echo function_graph > current_tracer
       # modprobe nfsd
       # echo nop > current_tracer
      
      You'll get the following oops message:
      
       ------------[ cut here ]------------
       WARNING: CPU: 2 PID: 2910 at /linux.git/kernel/trace/ftrace.c:1640 __ftrace_hash_rec_update.part.35+0x168/0x1b9()
       Modules linked in: nfsd exportfs nfs_acl lockd ipt_MASQUERADE sunrpc ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 ip6table_filter ip6_tables uinput snd_hda_codec_idt
       CPU: 2 PID: 2910 Comm: bash Not tainted 3.13.0-rc1-test #7
       Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./To be filled by O.E.M., BIOS SDBLI944.86P 05/08/2007
        0000000000000668 ffff8800787efcf8 ffffffff814fe193 ffff88007d500000
        0000000000000000 ffff8800787efd38 ffffffff8103b80a 0000000000000668
        ffffffff810b2b9a ffffffff81a48370 0000000000000001 ffff880037aea000
       Call Trace:
        [<ffffffff814fe193>] dump_stack+0x4f/0x7c
        [<ffffffff8103b80a>] warn_slowpath_common+0x81/0x9b
        [<ffffffff810b2b9a>] ? __ftrace_hash_rec_update.part.35+0x168/0x1b9
        [<ffffffff8103b83e>] warn_slowpath_null+0x1a/0x1c
        [<ffffffff810b2b9a>] __ftrace_hash_rec_update.part.35+0x168/0x1b9
        [<ffffffff81502f89>] ? __mutex_lock_slowpath+0x364/0x364
        [<ffffffff810b2cc2>] ftrace_shutdown+0xd7/0x12b
        [<ffffffff810b47f0>] unregister_ftrace_graph+0x49/0x78
        [<ffffffff810c4b30>] graph_trace_reset+0xe/0x10
        [<ffffffff810bf393>] tracing_set_tracer+0xa7/0x26a
        [<ffffffff810bf5e1>] tracing_set_trace_write+0x8b/0xbd
        [<ffffffff810c501c>] ? ftrace_return_to_handler+0xb2/0xde
        [<ffffffff811240a8>] ? __sb_end_write+0x5e/0x5e
        [<ffffffff81122aed>] vfs_write+0xab/0xf6
        [<ffffffff8150a185>] ftrace_graph_caller+0x85/0x85
        [<ffffffff81122dbd>] SyS_write+0x59/0x82
        [<ffffffff8150a185>] ftrace_graph_caller+0x85/0x85
        [<ffffffff8150a2d2>] system_call_fastpath+0x16/0x1b
       ---[ end trace 940358030751eafb ]---
      
The above mentioned commit didn't go far enough. Well, it covered the
function tracer by adding checks in __register_ftrace_function(). The
problem is that the function graph tracer circumvents that check (for a
slight efficiency gain when function graph tracing is running alongside a
function tracer; the gain was not worth it).
      
      The problem came with ftrace_startup() which should always be called after
      __register_ftrace_function(), if you want this bug to be completely fixed.
      
      Anyway, this solution moves __register_ftrace_function() inside of
      ftrace_startup() and removes the need to call them both.
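
The shape of the fix can be outlined as below (a hedged sketch, not the actual ftrace.c code): registration moves inside ftrace_startup(), so every path that enables tracing also goes through the list-registration checks.

    struct ftrace_ops;

    int __register_ftrace_function(struct ftrace_ops *ops);

    int ftrace_startup_sketch(struct ftrace_ops *ops, int command)
    {
            int ret;

            /* Previously a separate call at every call site; now part of startup. */
            ret = __register_ftrace_function(ops);
            if (ret)
                    return ret;

            /* ... existing startup work: update record counts, enable calls ... */
            return 0;
    }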
Reported-by: Dave Wysochanski <dwysocha@redhat.com>
Fixes: ed926f9b ("ftrace: Use counters to enable functions to trace")
Cc: stable@vger.kernel.org # 3.0+
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
18. 06 November 2013 (2 commits)
• tracing: Make register/unregister_ftrace_command __init · 38de93ab
  Committed by Tom Zanussi
      register/unregister_ftrace_command() are only ever called from __init
      functions, so can themselves be made __init.
      
      Also make register_snapshot_cmd() __init for the same reason.
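
For reference, a minimal sketch of what the annotation looks like; the body is elided and the function name is suffixed to mark it as an example rather than the real definition.

    #include <linux/init.h>

    struct ftrace_func_command;

    /* __init places the text in .init.text so it is discarded after boot;
     * this is safe because every caller is itself an __init function. */
    int __init register_ftrace_command_sketch(struct ftrace_func_command *cmd)
    {
            /* add cmd to the ftrace command list (elided) */
            return 0;
    }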
      
Link: http://lkml.kernel.org/r/d4042c8cadb7ae6f843ac9a89a24e1c6a3099727.1382620672.git.tom.zanussi@linux.intel.com
Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
• ftrace: Have control op function callback only trace when RCU is watching · b5aa3a47
  Committed by Steven Rostedt (Red Hat)
      Dave Jones reported that trinity would be able to trigger the following
      back trace:
      
       ===============================
       [ INFO: suspicious RCU usage. ]
       3.10.0-rc2+ #38 Not tainted
       -------------------------------
       include/linux/rcupdate.h:771 rcu_read_lock() used illegally while idle!
       other info that might help us debug this:
      
       RCU used illegally from idle CPU!  rcu_scheduler_active = 1, debug_locks = 0
       RCU used illegally from extended quiescent state!
       1 lock held by trinity-child1/18786:
        #0:  (rcu_read_lock){.+.+..}, at: [<ffffffff8113dd48>] __perf_event_overflow+0x108/0x310
       stack backtrace:
       CPU: 3 PID: 18786 Comm: trinity-child1 Not tainted 3.10.0-rc2+ #38
        0000000000000000 ffff88020767bac8 ffffffff816e2f6b ffff88020767baf8
        ffffffff810b5897 ffff88021de92520 0000000000000000 ffff88020767bbf8
        0000000000000000 ffff88020767bb78 ffffffff8113ded4 ffffffff8113dd48
       Call Trace:
        [<ffffffff816e2f6b>] dump_stack+0x19/0x1b
        [<ffffffff810b5897>] lockdep_rcu_suspicious+0xe7/0x120
        [<ffffffff8113ded4>] __perf_event_overflow+0x294/0x310
        [<ffffffff8113dd48>] ? __perf_event_overflow+0x108/0x310
        [<ffffffff81309289>] ? __const_udelay+0x29/0x30
        [<ffffffff81076054>] ? __rcu_read_unlock+0x54/0xa0
        [<ffffffff816f4000>] ? ftrace_call+0x5/0x2f
        [<ffffffff8113dfa1>] perf_swevent_overflow+0x51/0xe0
        [<ffffffff8113e08f>] perf_swevent_event+0x5f/0x90
        [<ffffffff8113e1c9>] perf_tp_event+0x109/0x4f0
        [<ffffffff8113e36f>] ? perf_tp_event+0x2af/0x4f0
        [<ffffffff81074630>] ? __rcu_read_lock+0x20/0x20
        [<ffffffff8112d79f>] perf_ftrace_function_call+0xbf/0xd0
        [<ffffffff8110e1e1>] ? ftrace_ops_control_func+0x181/0x210
        [<ffffffff81074630>] ? __rcu_read_lock+0x20/0x20
        [<ffffffff81100cae>] ? rcu_eqs_enter_common+0x5e/0x470
        [<ffffffff8110e1e1>] ftrace_ops_control_func+0x181/0x210
        [<ffffffff816f4000>] ftrace_call+0x5/0x2f
        [<ffffffff8110e229>] ? ftrace_ops_control_func+0x1c9/0x210
        [<ffffffff816f4000>] ? ftrace_call+0x5/0x2f
        [<ffffffff81074635>] ? debug_lockdep_rcu_enabled+0x5/0x40
        [<ffffffff81074635>] ? debug_lockdep_rcu_enabled+0x5/0x40
        [<ffffffff81100cae>] ? rcu_eqs_enter_common+0x5e/0x470
        [<ffffffff8110112a>] rcu_eqs_enter+0x6a/0xb0
        [<ffffffff81103673>] rcu_user_enter+0x13/0x20
        [<ffffffff8114541a>] user_enter+0x6a/0xd0
        [<ffffffff8100f6d8>] syscall_trace_leave+0x78/0x140
        [<ffffffff816f46af>] int_check_syscall_exit_work+0x34/0x3d
       ------------[ cut here ]------------
      
      Perf uses rcu_read_lock() but as the function tracer can trace functions
      even when RCU is not currently active, this makes the rcu_read_lock()
      used by perf ineffective.
      
      As perf is currently the only user of the ftrace_ops_control_func() and
      perf is also the only function callback that actively uses rcu_read_lock(),
      the quick fix is to prevent the ftrace_ops_control_func() from calling
      its callbacks if RCU is not active.
      
      With Paul's new "rcu_is_watching()" we can tell if RCU is active or not.
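
A minimal sketch of the guard, assuming a stripped-down version of the dispatcher; the real ftrace_ops_control_func() also handles per-cpu disabling and preemption.

    #include <stdbool.h>

    struct pt_regs;
    struct ftrace_ops;

    typedef void (*ftrace_func_t)(unsigned long ip, unsigned long parent_ip,
                                  struct ftrace_ops *op, struct pt_regs *regs);

    struct ftrace_ops {
            ftrace_func_t func;
    };

    bool rcu_is_watching(void);   /* provided by RCU in kernel context */

    static void control_func_sketch(unsigned long ip, unsigned long parent_ip,
                                    struct ftrace_ops *op, struct pt_regs *regs)
    {
            /* If this CPU is in an RCU-idle (extended quiescent) state,
             * perf's rcu_read_lock() would be meaningless, so skip the
             * callback entirely. */
            if (!rcu_is_watching())
                    return;

            op->func(ip, parent_ip, op, regs);
    }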
Reported-by: Dave Jones <davej@redhat.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
19. 19 October 2013 (4 commits)
20. 04 September 2013 (1 commit)
• ftrace: Fix a slight race in modifying what function callback gets traced · 59338f75
  Committed by Steven Rostedt (Red Hat)
      There's a slight race when going from a list function to a non list
      function. That is, when only one callback is registered to the function
      tracer, it gets called directly by the mcount trampoline. But if this
      function has filters, it may be called by the wrong functions.
      
The list ops callback, which handles multiple callbacks registered to
ftrace, also handles which functions they are called for. While the
transaction is taking place, always use the list function, and only after
all the updates are finished (so that only the functions that should be
traced are being traced) update the trampoline to call the function
directly.
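
The switch-over decision can be sketched as below; the function names are stand-ins (the real multiplexer is the ftrace list function), and the flags are simplified.

    struct pt_regs;
    struct ftrace_ops;

    typedef void (*ftrace_func_t)(unsigned long ip, unsigned long parent_ip,
                                  struct ftrace_ops *op, struct pt_regs *regs);

    void list_walker_func(unsigned long ip, unsigned long parent_ip,
                          struct ftrace_ops *op, struct pt_regs *regs);  /* stand-in for the list function */
    void lone_callback(unsigned long ip, unsigned long parent_ip,
                       struct ftrace_ops *op, struct pt_regs *regs);     /* stand-in for the single ops' callback */

    ftrace_func_t choose_trace_func(int update_in_progress, int nr_registered_ops)
    {
            /* During the transaction, or with more than one ops registered,
             * stay on the list walker, which re-checks each ops' filters
             * on every call. */
            if (update_in_progress || nr_registered_ops > 1)
                    return list_walker_func;

            return lone_callback;
    }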
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>