1. 10 Sep 2014 (5 commits)
    • ftrace: Annotate the ops operation on update · e1effa01
      Steven Rostedt (Red Hat) committed
      Add three new flags for ftrace_ops:
      
        FTRACE_OPS_FL_ADDING
        FTRACE_OPS_FL_REMOVING
        FTRACE_OPS_FL_MODIFYING
      
      These will be set on an ftrace_ops while it is being added to
      function tracing, removed from function tracing, or having the
      set of functions it traces changed, respectively.
      
      This will be needed to remove the tramp_hash, which can grow quite
      big. The tramp_hash is used to note which functions an ftrace_ops
      is using a trampoline for. Denoting which ftrace_ops is being
      modified allows us to use the ftrace_ops hashes themselves, which
      are much smaller: they have a global flag to denote that an
      ftrace_ops traces all functions, as well as a notrace hash for the
      case where it traces all but a few. The tramp_hash, in contrast,
      creates a hash item for every function, which can run into the
      tens of thousands if all functions use the ftrace_ops trampoline.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      e1effa01
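
      The three states above map naturally onto bits in ops->flags. A minimal
      sketch, with bit positions, the simplified struct, and the helpers all
      being illustrative assumptions rather than the kernel's definitions:

        /* Illustrative sketch only; not copied from kernel/trace/ftrace.c. */
        enum {
                FTRACE_OPS_FL_ADDING    = 1 << 9,   /* being added to function tracing */
                FTRACE_OPS_FL_REMOVING  = 1 << 10,  /* being removed from function tracing */
                FTRACE_OPS_FL_MODIFYING = 1 << 11,  /* its set of traced functions is changing */
        };

        struct ops_sketch {                     /* simplified stand-in for struct ftrace_ops */
                unsigned long flags;
        };

        /* Mark the ops for the duration of an update, so the update code can
         * consult the ops' own filter/notrace hashes instead of a tramp_hash. */
        static void ops_update_begin(struct ops_sketch *ops, unsigned long flag)
        {
                ops->flags |= flag;
        }

        static void ops_update_end(struct ops_sketch *ops, unsigned long flag)
        {
                ops->flags &= ~flag;
        }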
    • ftrace: Grab any ops for a rec for enabled_functions output · 5fecaa04
      Steven Rostedt (Red Hat) committed
      When dumping enabled_functions, use the first ops found with a
      trampoline to the record. There should only be one, since only one
      ops can be registered to a function that has a trampoline.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      5fecaa04
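
      A hedged sketch of the lookup this implies for the enabled_functions
      dump: walk the registered ops and take the first one whose hash selects
      the record (the list head, helper, and field names here are stand-ins):

        struct ops_sketch {
                struct ops_sketch *next;
                int has_trampoline;
        };

        extern struct ops_sketch *ops_list;                      /* registered ops */
        extern int ops_traces_rec(struct ops_sketch *ops, void *rec);

        static struct ops_sketch *first_ops_for_rec(void *rec)
        {
                struct ops_sketch *ops;

                for (ops = ops_list; ops; ops = ops->next) {
                        /* Only one ops may own a trampolined record, so the
                         * first match is the only match. */
                        if (ops->has_trampoline && ops_traces_rec(ops, rec))
                                return ops;
                }
                return NULL;
        }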
    • ftrace: Remove freeing of old_hash from ftrace_hash_move() · 3296fc4e
      Steven Rostedt (Red Hat) committed
      ftrace_hash_move() currently frees the old hash that is passed to it
      after replacing the pointer with the new hash. Instead of having the
      function do that chore, have the caller perform the free.
      
      This lets ftrace_hash_move() be used a bit more freely, which
      is needed for changing the way the trampoline logic is done.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      3296fc4e
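
      In sketch form, the call site now looks roughly like this; the signature
      of ftrace_hash_move() is simplified here and the surrounding locking is
      omitted:

        struct hash_sketch;                     /* stand-in for struct ftrace_hash */

        extern int  ftrace_hash_move(struct hash_sketch **dst, struct hash_sketch *src);
        extern void free_ftrace_hash(struct hash_sketch *hash);

        static int update_filter(struct hash_sketch **filter_hash,
                                 struct hash_sketch *new_hash)
        {
                struct hash_sketch *old_hash = *filter_hash;
                int ret;

                ret = ftrace_hash_move(filter_hash, new_hash);
                if (!ret)
                        free_ftrace_hash(old_hash);     /* the caller now performs the free */
                return ret;
        }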
    • ftrace: Set callback to ftrace_stub when no ops are registered · f7aad4e1
      Steven Rostedt (Red Hat) committed
      The clean up that adds the helper function ftrace_ops_get_func()
      caused the default function to not change when DYNAMIC_FTRACE was not
      set and no ftrace_ops were registered. Although static tracing is
      not very useful (not having DYNAMIC_FTRACE set), it is still supported
      and we don't want to break it.
      
      Clean up the if statement even more to specifically have the default
      function call ftrace_stub when no ftrace_ops are registered. This
      fixes the small bug for static tracing as well as makes the code a
      bit more understandable.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      f7aad4e1
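
      A sketch of the resulting decision, with the list head, sentinel, and
      callback names below being simplified stand-ins for the real ones:

        typedef void (*func_t)(unsigned long ip, unsigned long parent_ip);

        struct ops_sketch { struct ops_sketch *next; };

        extern struct ops_sketch *ops_list;             /* head of registered ops */
        extern struct ops_sketch  list_end;             /* sentinel: nothing registered */
        extern void   stub_func(unsigned long ip, unsigned long parent_ip);
        extern void   iterate_ops_func(unsigned long ip, unsigned long parent_ip);
        extern func_t ops_get_func(struct ops_sketch *ops);

        static func_t pick_default_func(void)
        {
                if (ops_list == &list_end)
                        return stub_func;               /* no ops registered: do nothing */
                if (ops_list->next == &list_end)
                        return ops_get_func(ops_list);  /* exactly one ops: call it directly */
                return iterate_ops_func;                /* several ops: walk the list */
        }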
    • ftrace: Add helper function ftrace_ops_get_func() · 87354059
      Steven Rostedt (Red Hat) committed
      Add a helper function that returns what the mcount trampoline is
      to call for an ftrace_ops. This helper will be used by arch code
      in the future to set up dynamic trampolines. But as it performs the
      same tests that are used to choose what function the default mcount
      trampoline calls, we might as well use it to clean up the existing
      code.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      87354059
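
      One plausible shape for such a helper, sketched; the recursion-protection
      test shown is an assumption about what those "same tests" are, and all
      names below are stand-ins:

        typedef void (*func_t)(unsigned long ip, unsigned long parent_ip);

        #define FL_RECURSION_SAFE (1 << 0)              /* stand-in flag bit */

        struct ops_sketch {
                unsigned long flags;
                func_t func;
        };

        extern void recursion_guard_func(unsigned long ip, unsigned long parent_ip);

        static func_t ops_get_func_sketch(struct ops_sketch *ops)
        {
                /* Callbacks that do not protect against recursion get a wrapper. */
                if (!(ops->flags & FL_RECURSION_SAFE))
                        return recursion_guard_func;

                return ops->func;                       /* otherwise call the ops directly */
        }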
  2. 09 Sep 2014 (1 commit)
  3. 23 Aug 2014 (5 commits)
    • ftrace: Use current addr when converting to nop in __ftrace_replace_code() · 39b5552c
      Steven Rostedt (Red Hat) committed
      In __ftrace_replace_code(), when converting the call in a function
      to a nop, it needs to compare against the "curr" (current) value of
      the ftrace ops, and not the "new" one. This currently does not
      affect x86, which is the only arch that does the trampolines with
      the function graph tracer, but when other archs that depend on this
      code implement the function graph trampoline, it can crash.
      
      Here's an example when ARM uses the trampolines (in the future):
      
       ------------[ cut here ]------------
       WARNING: CPU: 0 PID: 9 at kernel/trace/ftrace.c:1716 ftrace_bug+0x17c/0x1f4()
       Modules linked in: omap_rng rng_core ipv6
       CPU: 0 PID: 9 Comm: migration/0 Not tainted 3.16.0-test-10959-gf0094b28-dirty #52
       [<c02188f4>] (unwind_backtrace) from [<c021343c>] (show_stack+0x20/0x24)
       [<c021343c>] (show_stack) from [<c095a674>] (dump_stack+0x78/0x94)
       [<c095a674>] (dump_stack) from [<c02532a0>] (warn_slowpath_common+0x7c/0x9c)
       [<c02532a0>] (warn_slowpath_common) from [<c02532ec>] (warn_slowpath_null+0x2c/0x34)
       [<c02532ec>] (warn_slowpath_null) from [<c02cbac4>] (ftrace_bug+0x17c/0x1f4)
       [<c02cbac4>] (ftrace_bug) from [<c02cc44c>] (ftrace_replace_code+0x80/0x9c)
       [<c02cc44c>] (ftrace_replace_code) from [<c02cc658>] (ftrace_modify_all_code+0xb8/0x164)
       [<c02cc658>] (ftrace_modify_all_code) from [<c02cc718>] (__ftrace_modify_code+0x14/0x1c)
       [<c02cc718>] (__ftrace_modify_code) from [<c02c7244>] (multi_cpu_stop+0xf4/0x134)
       [<c02c7244>] (multi_cpu_stop) from [<c02c6e90>] (cpu_stopper_thread+0x54/0x130)
       [<c02c6e90>] (cpu_stopper_thread) from [<c0271cd4>] (smpboot_thread_fn+0x1ac/0x1bc)
       [<c0271cd4>] (smpboot_thread_fn) from [<c026ddf0>] (kthread+0xe0/0xfc)
       [<c026ddf0>] (kthread) from [<c020f318>] (ret_from_fork+0x14/0x20)
       ---[ end trace dc9ce72c5b617d8f ]---
      [   65.047264] ftrace failed to modify [<c0208580>] asm_do_IRQ+0x10/0x1c
      [   65.054070]  actual: 85:1b:00:eb
      
      Fixes: 7413af1f "ftrace: Make get_ftrace_addr() and get_ftrace_addr_old() global"
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      39b5552c
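
      A sketch of the corrected logic: when writing the nop, validate against
      the address the call site currently points to, not the one it is moving
      to. The two address helpers are the ones named in the Fixes: commit;
      everything else is simplified:

        struct rec_sketch;                              /* stand-in for struct dyn_ftrace */

        extern unsigned long ftrace_get_addr_new(struct rec_sketch *rec);   /* where the site is going   */
        extern unsigned long ftrace_get_addr_curr(struct rec_sketch *rec);  /* where the site points now */
        extern int make_call(struct rec_sketch *rec, unsigned long addr);
        extern int make_nop(struct rec_sketch *rec, unsigned long addr);

        static int replace_code_sketch(struct rec_sketch *rec, int enable)
        {
                if (enable)
                        return make_call(rec, ftrace_get_addr_new(rec));

                /* The bug was comparing against the "new" address here; the nop
                 * conversion must check the instruction currently at the site. */
                return make_nop(rec, ftrace_get_addr_curr(rec));
        }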
    • ftrace: Fix function_profiler and function tracer together · 5f151b24
      Steven Rostedt (Red Hat) committed
      The latest rewrite of ftrace removed the separate ftrace_ops of
      the function tracer and the function graph tracer and had them
      share the same ftrace_ops. This simplified the accounting by removing
      the multiple layers of functions called, where the global_ops func
      would call a special list that would iterate over the other ops that
      were registered within it (like function and function graph), which
      itself was registered to the ftrace ops list of all functions
      currently active. If that sounds confusing, the code that implemented
      it was also confusing and its removal is a good thing.
      
      The problem with this change was that it assumed that the function
      and function graph tracer can never be used at the same time.
      This is mostly true, but there is an exception. That is when the
      function profiler uses the function graph tracer to profile.
      The function profiler can be activated at the same time as the
      function tracer, which breaks the assumption. The result is that
      ftrace will crash (it detects the error and shuts itself down; it
      does not cause a kernel oops).
      
      To solve this issue, a previous change allowed the hash tables
      for the functions traced by a ftrace_ops to be a pointer and let
      multiple ftrace_ops share the same hash. This allows the function
      and function_graph tracer to have separate ftrace_ops, but still
      share the hash, which is what is done.
      
      Now the function and function graph tracers have separate ftrace_ops
      again, and the function tracer can be run while the function_profile
      is active.
      
      Cc: stable@vger.kernel.org # 3.16 (apply after 3.17-rc4 is out)
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      5f151b24
    • ftrace: Fix up trampoline accounting with looping on hash ops · bce0b6c5
      Steven Rostedt (Red Hat) committed
      Now that an ftrace_hash can be shared by multiple ftrace_ops, they
      can decrement the rec->flags count more than once (once per ops that
      shares the ftrace_hash). This means that the tramp_hash may not have
      a hash item for the record when it was added.
      
      For example, if two ftrace_ops share a hash for an ftrace record,
      and the first ops has a trampoline, then when it adds itself it will
      set the rec->flags TRAMP flag and increment its nr_trampolines
      counter. When the second ops is added, it must clear that tramp flag
      and also decrement the nr_trampolines of the other ops that shares
      its hash. As the update to the function callbacks has not yet been
      performed, the other ops will not have its tramp hash set yet, so it
      can not be used to know to decrement its nr_trampolines.
      
      Luckily, the tramp_hash does not need to be used. As the ftrace_mutex
      is held, an ops with a trampoline to a record will, during an update
      of another ops that shares the record, still have its func_hash
      pointing to it. Since a trampoline can only be set for a record if
      only one ops is attached to it, we can just check whether the record
      has a trampoline (the FTRACE_FL_TRAMP flag is set) and then find the
      ops that has this record in its hashes.
      
      Also added some output to help debug when things go wrong.
      
      Cc: stable@vger.kernel.org # 3.16+ (apply after 3.17-rc4 is out)
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      bce0b6c5
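
      In sketch form, the accounting can then be driven off the record itself
      rather than a tramp_hash. The FTRACE_FL_TRAMP test comes from the text
      above; the list walk, types, and helper are stand-ins:

        #define FL_TRAMP (1 << 0)                       /* stand-in for FTRACE_FL_TRAMP */

        struct rec_sketch { unsigned long flags; unsigned long ip; };
        struct ops_sketch { struct ops_sketch *next; int nr_trampolines; };

        extern struct ops_sketch *ops_list;
        extern int ops_hash_has_ip(struct ops_sketch *ops, unsigned long ip);

        static void dec_trampoline_for(struct rec_sketch *rec)
        {
                struct ops_sketch *ops;

                if (!(rec->flags & FL_TRAMP))
                        return;                         /* no trampoline, nothing to account */

                /* Only one ops can be attached to a trampolined record, so the
                 * first ops whose hashes select this ip is its owner. */
                for (ops = ops_list; ops; ops = ops->next) {
                        if (ops_hash_has_ip(ops, rec->ip)) {
                                ops->nr_trampolines--;
                                break;
                        }
                }
        }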
    • ftrace: Update all ftrace_ops for a ftrace_hash_ops update · 84261912
      Steven Rostedt (Red Hat) committed
      When updating what an ftrace_ops traces, if it is registered (that is,
      actively tracing), and that ftrace_ops uses the shared global_ops
      local_hash, then we need to update all tracers that are active and
      also share the global_ops' ftrace_hash_ops.
      
      Cc: stable@vger.kernel.org # 3.16 (apply after 3.17-rc4 is out)
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      84261912
    • ftrace: Allow ftrace_ops to use the hashes from other ops · 33b7f99c
      Steven Rostedt (Red Hat) committed
      Currently the top level debug file system function tracer shares its
      ftrace_ops with the function graph tracer. This was thought to be fine
      because the tracers are not used together, as one can only enable
      function or function_graph tracer in the current_tracer file.
      
      But that assumption proved to be incorrect. The function profiler
      can use the function graph tracer when function tracing is enabled.
      Since all function graph users use the function tracing ftrace_ops,
      this causes a conflict, and when a user enables both function
      profiling and the function tracer it will crash ftrace and disable it.
      
      The quick solution so far is to move them as separate ftrace_ops like
      it was earlier. The problem though is to synchronize the functions that
      are traced because both function and function_graph tracer are limited
      by the selections made in the set_ftrace_filter and set_ftrace_notrace
      files.
      
      To handle this, a new structure is made called ftrace_ops_hash. This
      structure will now hold the filter_hash and notrace_hash, and the
      ftrace_ops will point to this structure. That will allow two ftrace_ops
      to share the same hashes.
      
      Since most ftrace_ops do not share the hashes, and to keep allocation
      simple, the ftrace_ops structure will include both a pointer to the
      ftrace_ops_hash called func_hash, as well as the structure itself,
      called local_hash. When the ops are registered, the func_hash pointer
      will be initialized to point to the local_hash within the ftrace_ops
      structure. Some of the ftrace internal ftrace_ops will be initialized
      statically. This will allow for the function and function_graph tracer
      to have separate ops but still share the same hash tables that determine
      what functions they trace.
      
      Cc: stable@vger.kernel.org # 3.16 (apply after 3.17-rc4 is out)
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      33b7f99c
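
      The layout described above, sketched with simplified types. The field
      names filter_hash, notrace_hash, func_hash, and local_hash come from the
      text; the rest is illustrative:

        struct hash_sketch;                             /* stand-in for struct ftrace_hash */

        struct ops_hash_sketch {                        /* the new ftrace_ops_hash */
                struct hash_sketch *filter_hash;        /* functions to trace */
                struct hash_sketch *notrace_hash;       /* functions to skip  */
        };

        struct ops_sketch {                             /* simplified ftrace_ops */
                struct ops_hash_sketch *func_hash;      /* hashes actually consulted  */
                struct ops_hash_sketch  local_hash;     /* private default hash pair  */
        };

        /* On registration an ops uses its own hashes by default ... */
        static void ops_init_hash(struct ops_sketch *ops)
        {
                ops->func_hash = &ops->local_hash;
        }

        /* ... but two ops (e.g. function and function_graph) can point at the
         * same pair so they always trace the same set of functions. */
        static void ops_share_hash(struct ops_sketch *ops, struct ops_sketch *owner)
        {
                ops->func_hash = &owner->local_hash;
        }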
  4. 24 Jul 2014 (3 commits)
    • ftrace: Add warning if tramp hash does not match nr_trampolines · dc6f03f2
      Steven Rostedt (Red Hat) committed
      After adding all the records to the tramp_hash, add a check that the
      number of records added matches the number of records expected to
      match, and do a WARN_ON and disable ftrace if they do not.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      dc6f03f2
    • ftrace: Fix trampoline hash update check on rec->flags · 2a0343ba
      Steven Rostedt (Red Hat) committed
      In the loop of ftrace_save_ops_tramp_hash(), it adds all the recs
      to the ops hash if the rec has only one callback attached and the
      ops is connected to the rec. It gives a nasty warning and shuts down
      ftrace if the rec doesn't have a trampoline set for it. But this
      can happen with the following scenario:
      
        # cd /sys/kernel/debug/tracing
        # echo schedule do_IRQ > set_ftrace_filter
        # mkdir instances/foo
        # echo schedule > instances/foo/set_ftrace_filter
        # echo function_graph > current_tracer
        # echo function > instances/foo/current_tracer
        # echo nop > instances/foo/current_tracer
      
      The above would then trigger the following warning and disable
      ftrace:
      
       ------------[ cut here ]------------
       WARNING: CPU: 0 PID: 3145 at kernel/trace/ftrace.c:2212 ftrace_run_update_code+0xe4/0x15b()
       Modules linked in: ipt_MASQUERADE sunrpc ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ip [...]
       CPU: 1 PID: 3145 Comm: bash Not tainted 3.16.0-rc3-test+ #136
       Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./To be filled by O.E.M., BIOS SDBLI944.86P 05/08/2007
        0000000000000000 ffffffff81808a88 ffffffff81502130 0000000000000000
        ffffffff81040ca1 ffff880077c08000 ffffffff810bd286 0000000000000001
        ffffffff81a56830 ffff88007a041be0 ffff88007a872d60 00000000000001be
       Call Trace:
        [<ffffffff81502130>] ? dump_stack+0x4a/0x75
        [<ffffffff81040ca1>] ? warn_slowpath_common+0x7e/0x97
        [<ffffffff810bd286>] ? ftrace_run_update_code+0xe4/0x15b
        [<ffffffff810bd286>] ? ftrace_run_update_code+0xe4/0x15b
        [<ffffffff810bda1a>] ? ftrace_shutdown+0x11c/0x16b
        [<ffffffff810bda87>] ? unregister_ftrace_function+0x1e/0x38
        [<ffffffff810cc7e1>] ? function_trace_reset+0x1a/0x28
        [<ffffffff810c924f>] ? tracing_set_tracer+0xc1/0x276
        [<ffffffff810c9477>] ? tracing_set_trace_write+0x73/0x91
        [<ffffffff81132383>] ? __sb_start_write+0x9a/0xcc
        [<ffffffff8120478f>] ? security_file_permission+0x1b/0x31
        [<ffffffff81130e49>] ? vfs_write+0xac/0x11c
        [<ffffffff8113115d>] ? SyS_write+0x60/0x8e
        [<ffffffff81508112>] ? system_call_fastpath+0x16/0x1b
       ---[ end trace 938c4415cbc7dc96 ]---
       ------------[ cut here ]------------
      
      Link: http://lkml.kernel.org/r/20140723120805.GB21376@redhat.com
      Reported-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      2a0343ba
    • ftrace: Rename ftrace_ops field from trampolines to nr_trampolines · 0162d621
      Steven Rostedt (Red Hat) committed
      Having two fields within the same struct whose names differ by only
      one character can be confusing and error prone. Rename the counter
      "trampolines" to "nr_trampolines" to explicitly show it is a counter
      and to avoid confusing it with the "trampoline" field.
      Suggested-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      0162d621
  5. 19 Jul 2014 (4 commits)
  6. 17 Jul 2014 (1 commit)
    • ftrace-graph: Remove dependency of ftrace_stop() from ftrace_graph_stop() · 1b2f121c
      Steven Rostedt (Red Hat) committed
      ftrace_stop() is going away, as it disables parts of function tracing
      that affect users who should not be affected. But ftrace_graph_stop()
      is built on ftrace_stop(), which is yet another example of killing all
      of function tracing because something went wrong with function graph
      tracing.
      
      Instead of disabling all users of function tracing on function graph
      error, disable only function graph tracing.
      
      A new function is created called ftrace_graph_is_dead(). This is called
      in strategic paths to prevent function graph from doing more harm and
      allowing at least a warning to be printed before the system crashes.
      
      NOTE: ftrace_stop() is still used until all the archs are converted over
      to use ftrace_graph_is_dead(). After that, ftrace_stop() will be removed.
      Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      1b2f121c
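
      A plausible shape for such a kill switch, sketched; only
      ftrace_graph_is_dead() and ftrace_graph_stop() are named in the text,
      and the backing flag and the example call site are assumptions:

        static int kill_ftrace_graph;                   /* assumed backing flag */

        int ftrace_graph_is_dead(void)
        {
                return kill_ftrace_graph;
        }

        void ftrace_graph_stop(void)
        {
                kill_ftrace_graph = 1;                  /* stop graph tracing only */
        }

        /* Example of a "strategic path" check before doing graph work. */
        static int graph_entry_sketch(void)
        {
                if (ftrace_graph_is_dead())
                        return 0;                       /* bail out; leave function tracing alone */
                /* ... normal function graph entry handling ... */
                return 1;
        }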
  7. 16 Jul 2014 (1 commit)
  8. 15 Jul 2014 (1 commit)
    • tracing: Fix graph tracer with stack tracer on other archs · 5f8bf2d2
      Steven Rostedt (Red Hat) committed
      Running my ftrace tests on PowerPC, it failed the test that checks
      whether the function_graph tracer is affected by the stack tracer.
      It was. Looking into this, I found that update_function_graph_func()
      must be called even if the trampoline function is not changed.
      This is because archs like PowerPC do not support ftrace_ops being
      passed by assembly and instead use a helper function (what the
      trampoline function points to). Since this function is not changed
      even when multiple ftrace_ops are added to the code, the test that
      bails out before calling update_function_graph_func() will miss that
      the update must still be done.

      Call update_function_graph_func() for all calls to
      update_ftrace_function().
      
      Cc: stable@vger.kernel.org # 3.3+
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      5f8bf2d2
  9. 01 Jul 2014 (9 commits)
  10. 30 Jun 2014 (2 commits)
    • ftrace: Add ftrace_rec_counter() macro to simplify the code · 0376bde1
      Steven Rostedt (Red Hat) committed
      The ftrace dynamic record has a flags element that also holds a
      counter. Instead of hard coding "rec->flags & ~FTRACE_FL_MASK" all
      over the place, use a macro.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      0376bde1
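
      A minimal sketch of such a macro; the mask value and the record struct
      below are placeholders, only the flags-and-counter idea comes from the
      text:

        #define FL_MASK_SKETCH  (0xfUL << 28)           /* placeholder for FTRACE_FL_MASK */

        struct rec_sketch { unsigned long flags; };     /* stand-in for struct dyn_ftrace */

        /* Number of ftrace_ops callbacks attached to this record. */
        #define ftrace_rec_counter(rec)  ((rec)->flags & ~FL_MASK_SKETCH)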
    • ftrace: Allow no regs if no more callbacks require it · 4fbb48cb
      Steven Rostedt (Red Hat) committed
      When registering a function callback for the function tracer, the ops
      can specify if it wants to save full regs (like an interrupt would)
      for each function that it traces, or if it does not care about regs
      and just wants to have the fastest return possible.
      
      Once an ops has registered a function, if other ops register that
      function they will all receive the regs too. That's because once the
      work is done for one, it is done for everyone.
      
      Now if the ops wanting regs unregisters the function so that there's
      only ops left that do not care about regs, those ops will still
      continue getting regs and going through the work for it on that
      function. This is because the disabling of the rec counter only
      sees the ops registered, and does not see the ops that are still
      attached, and does not know if the current ops that are still attached
      want regs or not. To play it safe, it just keeps regs being processed
      until no function is registered anymore.
      
      Instead of doing that, check the ops that are still registered for that
      function and if none want regs for it anymore, then disable the
      processing of regs.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      4fbb48cb
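
      In sketch form, the per-record decision becomes a scan of the ops still
      attached; the flag bit, types, and the "does this ops trace this ip"
      helper are stand-ins:

        #define FL_SAVE_REGS (1 << 1)                   /* stand-in for the save-regs ops flag */

        struct ops_sketch { struct ops_sketch *next; unsigned long flags; };
        struct rec_sketch { unsigned long ip; };

        extern struct ops_sketch *ops_list;
        extern int ops_traces_ip(struct ops_sketch *ops, unsigned long ip);

        /* Nonzero if any ops still attached to this record wants full regs. */
        static int rec_still_needs_regs(struct rec_sketch *rec)
        {
                struct ops_sketch *ops;

                for (ops = ops_list; ops; ops = ops->next) {
                        if ((ops->flags & FL_SAVE_REGS) && ops_traces_ip(ops, rec->ip))
                                return 1;
                }
                return 0;                               /* nobody left wants regs: drop REGS handling */
        }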
  11. 14 May 2014 (6 commits)
    • ftrace: Remove FTRACE_UPDATE_MODIFY_CALL_REGS flag · f1b2f2bd
      Steven Rostedt (Red Hat) committed
      As the decision of what needs to be done (converting a call to the
      ftrace_caller to ftrace_caller_regs or converting from
      ftrace_caller_regs to ftrace_caller) can easily be determined from
      the rec->flags FTRACE_FL_REGS and FTRACE_FL_REGS_EN, there's no need
      to have ftrace_check_record() return either UPDATE_MODIFY_CALL_REGS
      or UPDATE_MODIFY_CALL. Just the latter is enough. The added flag
      causes more complexity than is required. Remove it.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      f1b2f2bd
    • ftrace: Use the ftrace_addr helper functions to find the ftrace_addr · 7c0868e0
      Steven Rostedt (Red Hat) committed
      With the functions that determine what the mcount call site should
      be replaced with moved into the generic code, there are a few places
      in the generic code that can use them instead of hard coding the
      logic as it does now.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      7c0868e0
    • ftrace: Make get_ftrace_addr() and get_ftrace_addr_old() global · 7413af1f
      Steven Rostedt (Red Hat) committed
      Move and rename get_ftrace_addr() and get_ftrace_addr_old() to
      ftrace_get_addr_new() and ftrace_get_addr_curr() respectively.
      
      This moves these two helper functions out of the arch specific code
      and into the generic code, and renames them to have a better generic
      name. This will allow other archs to use them, as well as make it a
      bit easier to work on getting separate trampolines for different
      functions.
      
      ftrace_get_addr_new() returns the trampoline address that the mcount
      call address will be converted to.
      
      ftrace_get_addr_curr() returns the trampoline address of what the
      mcount call address currently jumps to.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      7413af1f
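
      A sketch of the two lookups as described: "new" follows the state the
      record is moving to, "curr" follows what is currently enabled. The
      FL_REGS/FL_REGS_EN bits mirror the flags mentioned elsewhere in this
      series; the trampoline addresses are stand-ins, and per-ops trampolines
      are ignored:

        #define FL_REGS     (1 << 2)                    /* wants the regs-saving caller next     */
        #define FL_REGS_EN  (1 << 3)                    /* currently uses the regs-saving caller */

        struct rec_sketch { unsigned long flags; };

        extern unsigned long ftrace_caller_addr;        /* plain trampoline       */
        extern unsigned long ftrace_regs_caller_addr;   /* regs-saving trampoline */

        static unsigned long get_addr_new_sketch(struct rec_sketch *rec)
        {
                return (rec->flags & FL_REGS) ? ftrace_regs_caller_addr
                                              : ftrace_caller_addr;
        }

        static unsigned long get_addr_curr_sketch(struct rec_sketch *rec)
        {
                return (rec->flags & FL_REGS_EN) ? ftrace_regs_caller_addr
                                                 : ftrace_caller_addr;
        }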
    • ftrace: Always inline ftrace_hash_empty() helper function · 68f40969
      Steven Rostedt (Red Hat) committed
      The ftrace_hash_empty() function is a simple test:
      
      	return !hash || !hash->count;
      
      But gcc seems to want to make it a call. As this is in an extreme
      hot path of the function tracer, there's no reason it needs to be
      a call. I only wrote it to be a helper function anyway, otherwise
      it would have been inlined manually.
      
      Force gcc to inline it, as it could have also been a macro.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      68f40969
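
      The change amounts to forcing the one-liner quoted above inline; a
      sketch with a stand-in hash type:

        #ifndef __always_inline
        #define __always_inline inline __attribute__((always_inline))
        #endif

        struct hash_sketch { unsigned long count; };    /* stand-in for struct ftrace_hash */

        /* Force inlining: this runs in the hot path of every traced call. */
        static __always_inline int ftrace_hash_empty(struct hash_sketch *hash)
        {
                return !hash || !hash->count;
        }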
    • ftrace: Write in missing comment from a very old commit · 19eab4a4
      Steven Rostedt (Red Hat) committed
      Back in 2011, commit ed926f9b "ftrace: Use counters to enable
      functions to trace" changed the way ftrace accounts for enabled
      and disabled traced functions. There was a comment that started as:
      
      	/*
      	 *
      	 */
      
      But it was never finished. Well, that's rather useless. I probably
      forgot to save the file before committing it. And it has passed
      review all this time.
      
      Anyway, better late than never. I updated the comment to express what
      is happening in that somewhat complex code.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      19eab4a4
    • ftrace: Remove boolean of hash_enable and hash_disable · 66209a5b
      Steven Rostedt (Red Hat) committed
      Commit 4104d326 "ftrace: Remove global function list and call
      function directly" cleaned up the global_ops filtering and made
      the code simpler, but it left a variable "hash_enable" that was used
      to know if the hash functions should be updated or not. It was
      updated if the global_ops did not override them. As the global_ops
      are now no different than any other ftrace_ops, the hash always
      gets updated and there's no reason to use the hash_enable boolean.
      
      The same goes for hash_disable used in ftrace_shutdown().
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      66209a5b
  12. 06 May 2014 (1 commit)
  13. 02 May 2014 (1 commit)