1. 02 Sep 2016 (1 commit)
  2. 24 Aug 2016 (3 commits)
    • ftrace: Add ftrace_graph_ret_addr() stack unwinding helpers · 223918e3
      Authored by Josh Poimboeuf
      When function graph tracing is enabled for a function, ftrace modifies
      the stack by replacing the original return address with the address of a
      hook function (return_to_handler).
      
      Stack unwinders need a way to get the original return address.  Add an
      arch-independent helper function for that named ftrace_graph_ret_addr().
      
      This adds two variations of the function: one depends on
      HAVE_FUNCTION_GRAPH_RET_ADDR_PTR, and the other relies on an index state
      variable.
      
      The former is recommended because, in some cases, the latter can cause
      problems when the unwinder skips stack frames.  It can get out of sync
      with the ret_stack index and wrong addresses can be reported for the
      stack trace.
      
      Once all arches have been ported to use
      HAVE_FUNCTION_GRAPH_RET_ADDR_PTR, we can get rid of the distinction.
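      
      As a rough sketch, an arch unwinder calls the helper on each address it
      finds on the stack; the helper returns the original return address when
      the input is the return_to_handler hook (the wrapper and its parameter
      names here are illustrative, not from this patch):
      
      	#include <linux/ftrace.h>
      
      	/* translate a possibly-hooked return address back to the original */
      	static unsigned long recover_ret_addr(struct task_struct *task,
      					      int *graph_idx,
      					      unsigned long addr,
      					      unsigned long *addr_loc)
      	{
      		/*
      		 * 'addr_loc' is where 'addr' lives on the stack; it drives the
      		 * HAVE_FUNCTION_GRAPH_RET_ADDR_PTR variant, while 'graph_idx'
      		 * backs the index-based fallback.
      		 */
      		return ftrace_graph_ret_addr(task, graph_idx, addr, addr_loc);
      	}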
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Byungchul Park <byungchul.park@lge.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nilay Vaish <nilayvaish@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/36bd90f762fc5e5af3929e3797a68a64906421cf.1471607358.git.jpoimboe@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • ftrace: Add return address pointer to ftrace_ret_stack · 9a7c348b
      Authored by Josh Poimboeuf
      Storing this value will help prevent unwinders from getting out of sync
      with the function graph tracer ret_stack.  Now instead of needing a
      stateful iterator, they can compare the return address pointer to find
      the right ret_stack entry.
      
      Note that an array of 50 ftrace_ret_stack structs is allocated for every
      task.  So when an arch implements this, it will add either 200 or 400
      bytes of memory usage per task (depending on whether it's a 32-bit or
      64-bit platform).
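      
      The shape of the change, slightly simplified from include/linux/ftrace.h:
      
      	struct ftrace_ret_stack {
      		unsigned long		ret;
      		unsigned long		func;
      		unsigned long long	calltime;
      		/* ... */
      	#ifdef HAVE_FUNCTION_GRAPH_RET_ADDR_PTR
      		unsigned long		*retp;	/* where 'ret' lived on the stack;
      						 * 50 entries * 4 or 8 bytes =
      						 * 200/400 bytes per task */
      	#endif
      	};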
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Byungchul Park <byungchul.park@lge.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nilay Vaish <nilayvaish@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/a95cfcc39e8f26b89a430c56926af0bb217bc0a1.1471607358.git.jpoimboe@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • ftrace: Only allocate the ret_stack 'fp' field when needed · daa460a8
      Authored by Josh Poimboeuf
      This saves some memory when HAVE_FUNCTION_GRAPH_FP_TEST isn't defined.
      On x86_64 with newer versions of gcc which have -mfentry, it saves 400
      bytes per task.
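      
      The change amounts to guarding the field, roughly:
      
      	struct ftrace_ret_stack {
      		/* ... */
      	#ifdef HAVE_FUNCTION_GRAPH_FP_TEST
      		unsigned long fp;	/* 50 entries * 8 bytes = 400 bytes
      					 * per task on 64-bit */
      	#endif
      	};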
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Byungchul Park <byungchul.park@lge.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nilay Vaish <nilayvaish@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/5c7747d9ea7b5cb47ef0a8ce8a6cea6bf7aa94bf.1471607358.git.jpoimboe@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  3. 06 Jul 2016 (1 commit)
  4. 14 Apr 2016 (1 commit)
  5. 26 Mar 2016 (1 commit)
  6. 29 Feb 2016 (1 commit)
  7. 18 Feb 2016 (1 commit)
  8. 14 Jan 2016 (1 commit)
  9. 08 Jan 2016 (2 commits)
    • ftrace: Add infrastructure for delayed enabling of module functions · b7ffffbb
      Authored by Steven Rostedt (Red Hat)
      Qiu Peiyang pointed out that there's a race between enabling function
      tracing and loading a module. To convert the nops in the prologue of
      functions into callbacks, the text needs to be changed from read-only to
      read-write. When function tracing is enabled, the text permission is
      updated, the functions are modified, and then the permission is put back.
      
      When loading a module, the updates that convert function calls to mcount
      are done before the module text is set to read-only. But after that is
      done, the module text is visible to the function tracer. Thus we have the
      following race:
      
      	CPU 0			CPU 1
      	-----			-----
         start function tracing
         set text to read-write
      			     load_module
      			     add functions to ftrace
      			     set module text read-only
      
         update all functions to callbacks
         modify module functions too
          < Can't, it's read-only >
      
      When this happens, ftrace detects the issue and disables itself until the
      next reboot.
      
      To fix this, a new DISABLED flag is added for ftrace records, which all
      module functions get when they are added. Then later, after the module code
      is all set, the records will have the DISABLED flag cleared, and they will
      be enabled if any callback wants all functions to be traced.
      
      Note, this patch does not yet delay the enabling. It simply changes
      ftrace_module_init() to both set the DISABLED records and then immediately
      call the enable code. This keeps the behavior the same as before, which
      helps with testing this new code. Another change will come after this to
      have ftrace_module_enable() called after the text is set to read-only.
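      
      A sketch of the flow after this patch (the function body and the
      'mod_start'/'mod_end' locals are illustrative, not the actual diff):
      
      	void ftrace_module_init(struct module *mod)
      	{
      		/* create records for the module's mcount sites; each one
      		 * starts out with FTRACE_FL_DISABLED set
      		 */
      		ftrace_process_locs(mod, mod_start, mod_end);
      
      		/* for now, enable immediately; a later patch moves this
      		 * call to after the module text is set read-only
      		 */
      		ftrace_module_enable(mod);
      	}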
      
      Cc: Qiu Peiyang <peiyangx.qiu@intel.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ftrace/module: Call clean up function when module init fails early · 049fb9bd
      Authored by Steven Rostedt (Red Hat)
      If the module init code fails after calling ftrace_module_init() and before
      calling do_init_module(), we can suffer from a memory leak. This is because
      ftrace_module_init() allocates pages to store the locations that ftrace
      hooks are placed in the module text. If do_init_module() fails, it still
      calls the MODULE_GOING notifiers which will tell ftrace to do a clean up of
      the pages it allocated for the module. But if load_module() fails before
      then, the pages allocated by ftrace_module_init() will never be freed.
      
      Call ftrace_release_mod() on the module if load_module() fails before
      getting to do_init_module().
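      
      The fix boils down to one call in load_module()'s error unwinding
      (placement within the error path is simplified here):
      
      	/* after ftrace_module_init() has run, but before do_init_module()
      	 * was reached:
      	 */
      	ftrace_release_mod(mod);	/* frees the pages ftrace_module_init()
      					 * allocated for this module */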
      
      Link: http://lkml.kernel.org/r/567CEA31.1070507@intel.com
      Reported-by: "Qiu, PeiyangX" <peiyangx.qiu@intel.com>
      Fixes: a949ae56 "ftrace/module: Hardcode ftrace_module_init() call into load_module()"
      Cc: stable@vger.kernel.org # v2.6.38+
      Acked-by: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  10. 24 Dec 2015 (1 commit)
    • ftrace: Remove use of control list and ops · ba27f2bc
      Authored by Steven Rostedt (Red Hat)
      Currently perf has its own list function within the ftrace infrastructure
      that is used only to give it per-cpu disabling, as well as a check that it
      is not called while RCU is not watching. It uses something called the
      "control_ops", which is used to iterate over the ops under it with the
      control_list_func().
      
      The problem is that this control_ops and control_list_func unnecessarily
      complicates the code. By replacing FTRACE_OPS_FL_CONTROL with two new flags
      (FTRACE_OPS_FL_RCU and FTRACE_OPS_FL_PER_CPU) we can remove all the code
      that is special with the control ops and add the needed checks within the
      generic ftrace_list_func().
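      
      Simplified, the generic list function can now do the checks itself
      (condensed from the actual loop in kernel/trace/ftrace.c):
      
      	do_for_each_ftrace_op(op, ftrace_ops_list) {
      		if ((!(op->flags & FTRACE_OPS_FL_RCU) || rcu_is_watching()) &&
      		    (!(op->flags & FTRACE_OPS_FL_PER_CPU) ||
      		     !ftrace_function_local_disabled(op)) &&
      		    ftrace_ops_test(op, ip, regs))
      			op->func(ip, parent_ip, op, regs);
      	} while_for_each_ftrace_op(op);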
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  11. 26 Nov 2015 (2 commits)
    • ftrace: Add variable ftrace_expected for archs to show expected code · b05086c7
      Authored by Steven Rostedt (Red Hat)
      When an anomaly is found while modifying function code, ftrace_bug() is
      called, which disables the function tracing infrastructure and reports
      information about what failed. If the code that is to be replaced does not
      match what is expected, the actual code is shown. Currently there is no
      arch-generic way to show what was expected.
      
      Add a new pointer variable called ftrace_expected that the arch code can
      set to point to what it expected, so that ftrace_bug() can report the
      actual text as well as the text that was expected to be there.
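      
      Arch-side usage then looks roughly like this (modeled on the x86 pattern;
      the 'cur_code'/'old_code' locals are illustrative):
      
      	ftrace_expected = old_code;	/* what we expect to find */
      	if (memcmp(cur_code, old_code, MCOUNT_INSN_SIZE) != 0)
      		return -EINVAL;		/* ftrace_bug() can now print both */
      	ftrace_expected = NULL;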
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ftrace: Add new type to distinguish what kind of ftrace_bug() · 02a392a0
      Authored by Steven Rostedt (Red Hat)
      The ftrace function hook utility has several internal checks to make sure
      that whatever it modifies is exactly what it expects to be modifying. This
      is essential as modifying running code can be extremely dangerous to the
      system.
      
      When an anomaly is detected, ftrace_bug() is called which sends a splat to
      the console and disables function tracing. There's some extra information
      that is printed to help diagnose the issue.
      
      One thing that is missing though is output of what ftrace was doing at the
      time of the crash. Was it updating a call site or perhaps converting a call
      site to a nop? A new global enum variable is created to state what ftrace
      was doing at the time of the anomaly, and this is reported in ftrace_bug().
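      
      The new enum looks roughly like this:
      
      	enum ftrace_bug_type {
      		FTRACE_BUG_UNKNOWN,
      		FTRACE_BUG_INIT,	/* failed converting the init record */
      		FTRACE_BUG_NOP,		/* failed installing a nop */
      		FTRACE_BUG_CALL,	/* failed installing a call */
      		FTRACE_BUG_UPDATE,	/* failed modifying an existing call */
      	};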
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  12. 04 Nov 2015 (2 commits)
  13. 25 Jul 2015 (1 commit)
  14. 15 Dec 2014 (2 commits)
  15. 22 Nov 2014 (1 commit)
    • ftrace, kprobes: Support IPMODIFY flag to find IP modify conflict · f8b8be8a
      Authored by Masami Hiramatsu
      Introduce FTRACE_OPS_FL_IPMODIFY to avoid conflicts among
      ftrace users who may modify regs->ip to change the execution
      path. If two or more users modify regs->ip on the same
      function entry, one of them will be broken. So they must set the
      IPMODIFY flag and make sure that ftrace_set_filter_ip() succeeds.
      
      Note that ftrace doesn't allow an ftrace_ops that has the IPMODIFY
      flag to have a notrace hash, and the ftrace_ops must have a
      filter hash (so that it can hook only specific
      entries), because it strongly depends on the address and
      must be allowed for only a few selected functions.
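      
      A sketch of a conforming user ('my_handler' and 'target_ip' are
      illustrative names, not part of this patch):
      
      	static struct ftrace_ops my_ops = {
      		.func  = my_handler,
      		.flags = FTRACE_OPS_FL_SAVE_REGS | FTRACE_OPS_FL_IPMODIFY,
      	};
      
      	static int register_my_probe(unsigned long target_ip)
      	{
      		int ret;
      
      		/* filter on exactly one function; must succeed for IPMODIFY */
      		ret = ftrace_set_filter_ip(&my_ops, target_ip, 0, 0);
      		if (ret)
      			return ret;
      		return register_ftrace_function(&my_ops);
      	}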
      
      Link: http://lkml.kernel.org/r/20141121102516.11844.27829.stgit@localhost.localdomain
      
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Seth Jennings <sjenning@redhat.com>
      Cc: Petr Mladek <pmladek@suse.cz>
      Cc: Vojtech Pavlik <vojtech@suse.cz>
      Cc: Miroslav Benes <mbenes@suse.cz>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      [ fixed up some of the comments ]
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  16. 20 Nov 2014 (1 commit)
    • ftrace/x86/extable: Add is_ftrace_trampoline() function · aec0be2d
      Authored by Steven Rostedt (Red Hat)
      Stack traces that happen from function tracing check if the address
      on the stack is a __kernel_text_address(). That is, is the address
      kernel code. This calls core_kernel_text() which returns true
      if the address is part of the builtin kernel code. It also calls
      is_module_text_address() which returns true if the address belongs
      to module code.
      
      But what is missing is ftrace's dynamically allocated trampolines.
      These trampolines are allocated for individual ftrace_ops that
      call the ftrace_ops callback functions directly. But if they do a
      stack trace, the code checking the stack won't detect them, as they
      are neither core kernel code nor module address space.
      
      By adding another field to ftrace_ops that also stores the size of
      the trampoline assigned to it, we can create a new function called
      is_ftrace_trampoline() that returns true if the address is a
      dynamically allocated ftrace trampoline. Note, it ignores trampolines
      that are not dynamically allocated, as they will return true with
      the core_kernel_text() function.
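      
      Simplified, the stack-address check now also consults the new helper
      (the wrapper name is illustrative):
      
      	static bool addr_is_kernel_code(unsigned long addr)
      	{
      		return core_kernel_text(addr) ||
      		       is_module_text_address(addr) ||
      		       is_ftrace_trampoline(addr);
      	}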
      
      Link: http://lkml.kernel.org/r/20141119034829.497125839@goodmis.org
      
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  17. 12 Nov 2014 (1 commit)
  18. 01 Nov 2014 (1 commit)
    • ftrace/x86: Add dynamic allocated trampoline for ftrace_ops · f3bea491
      Authored by Steven Rostedt (Red Hat)
      The current method of handling multiple function callbacks is to register
      a list function callback that calls all the other callbacks based on
      their hash tables and compare it to the function that the callback was
      called on. But this is very inefficient.
      
      For example, if you are tracing all functions in the kernel and then
      add a kprobe to a function such that the kprobe uses ftrace, the
      mcount trampoline will switch from calling the function trace callback
      to calling the list callback that will iterate over all registered
      ftrace_ops (in this case, the function tracer and the kprobes callback).
      That means for every function being traced it checks the hash of the
      ftrace_ops for function tracing and kprobes, even though the kprobes
      is only set at a single function. The kprobes ftrace_ops is checked
      for every function being traced!
      
      Instead of calling the list function for functions that are only being
      traced by a single callback, we can call a dynamically allocated
      trampoline that calls the callback directly. The function graph tracer
      already uses a direct call trampoline when it is being traced by itself,
      but it is not dynamically allocated. Its trampoline is static in the
      kernel core. The infrastructure that calls the function graph trampoline
      can also be used to call a dynamically allocated one.
      
      For now, only ftrace_ops that are not dynamically allocated can have
      a trampoline. That is, users such as the function tracer or the stack
      tracer. kprobes and perf allocate their ftrace_ops, and until there's a
      safe way to free the trampoline, they can not use one. Dynamically
      allocated ftrace_ops may, however, use the trampoline if the kernel is
      not compiled with CONFIG_PREEMPT. But that will come later.
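      
      The generic side hands the actual allocation to the arch through a weak
      hook, roughly:
      
      	void __weak arch_ftrace_update_trampoline(struct ftrace_ops *ops)
      	{
      	}
      
      	static void ftrace_update_trampoline(struct ftrace_ops *ops)
      	{
      		arch_ftrace_update_trampoline(ops);	/* may set ops->trampoline */
      	}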
      Tested-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Tested-by: Jiri Kosina <jkosina@suse.cz>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  19. 10 Sep 2014 (3 commits)
    • ftrace: Replace tramp_hash with old_*_hash to save space · fef5aeee
      Authored by Steven Rostedt (Red Hat)
      Allowing function callbacks to declare their own trampolines requires
      that each ftrace_ops that has a trampoline must have some sort of
      accounting that keeps track of which ops has a trampoline attached
      to a record.
      
      The easy way to solve this was to add a "tramp_hash" that created a
      hash entry for every function that a ops uses with a trampoline.
      But since we can have literally tens of thousands of functions being
      traced, that means we need tens of thousands of descriptors to map
      the ops to the function in the hash. This is quite expensive and
      can cause enabling and disabling the function graph tracer to take
      some time to start and stop. It can take up to several seconds to
      disable or enable all functions in the function graph tracer for this
      reason.
      
      The better approach, albeit more complex, is to keep track of how ops
      are being enabled and disabled, and use that along with the counting
      of the number of ops attached to records, to determine which ops has
      a trampoline attached to a record when tracing is enabled or
      disabled.
      
      To do this, the tramp_hash has been replaced with an old_filter_hash
      and old_notrace_hash, which get the copy of the ops filter_hash and
      notrace_hash respectively. The old hashes are kept until the ops has
      been modified or removed, and the old hashes are used with the
      accounting logic to determine which ops have the trampoline of
      a record. The reason this has less of a footprint is due to the trick
      that an "empty" filter_hash means "all functions" and an empty
      notrace_hash means "no functions".
      
      This is much more efficient, doesn't have the delay, and takes up
      much less memory, as we do not need to map all the functions but
      just figure out which functions are mapped at the time it is
      enabled or disabled.
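      
      The "empty hash" trick, written out as a predicate (the wrapper is
      illustrative; ftrace_hash_empty() and ftrace_lookup_ip() are the internal
      helpers in kernel/trace/ftrace.c):
      
      	static bool ops_traces_ip(struct ftrace_ops *ops, unsigned long ip)
      	{
      		struct ftrace_hash *filter = ops->func_hash->filter_hash;
      		struct ftrace_hash *notrace = ops->func_hash->notrace_hash;
      
      		/* an empty filter hash means "all functions" */
      		if (!ftrace_hash_empty(filter) && !ftrace_lookup_ip(filter, ip))
      			return false;
      		/* an empty notrace hash means "exclude nothing" */
      		if (ftrace_lookup_ip(notrace, ip))
      			return false;
      		return true;
      	}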
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ftrace: Annotate the ops operation on update · e1effa01
      Authored by Steven Rostedt (Red Hat)
      Add three new flags for ftrace_ops:
      
        FTRACE_OPS_FL_ADDING
        FTRACE_OPS_FL_REMOVING
        FTRACE_OPS_FL_MODIFYING
      
      These will be set on an ftrace_ops when it is first added
      to function tracing, is being removed from function tracing,
      or is having its traced functions changed, respectively.
      
      This will be needed to remove the tramp_hash, which can grow quite
      big. The tramp_hash is used to note which functions an ftrace_ops
      is using a trampoline for. Denoting which ftrace_ops is being
      modified will allow us to use the ftrace_ops hashes themselves,
      which are much smaller, as they have a global flag to denote if
      an ftrace_ops is tracing all functions, as well as a notrace hash
      if the ftrace_ops is tracing all but a few. The tramp_hash just
      creates a hash item for every function, which can go into the tens
      of thousands if all functions are using the ftrace_ops trampoline.
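      
      Sketched against ftrace_startup()/ftrace_shutdown(), the annotation
      looks like:
      
      	ops->flags |= FTRACE_OPS_FL_ADDING;	/* or _REMOVING / _MODIFYING */
      	/* ... update the call sites; the accounting can now tell which
      	 * ops is in transition by checking these flags ... */
      	ops->flags &= ~FTRACE_OPS_FL_ADDING;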
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ftrace: Add helper function ftrace_ops_get_func() · 87354059
      Authored by Steven Rostedt (Red Hat)
      Add a helper function that returns what the mcount trampoline is to call
      for an ftrace_ops function. This helper will be used by arch code
      in the future to set up dynamic trampolines. But as it performs the
      same tests that are done when choosing what function to call for
      the default mcount trampoline, we might as well use it to clean up
      the existing code.
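      
      The helper's signature, and a sketch of the intended arch-side use:
      
      	ftrace_func_t ftrace_ops_get_func(struct ftrace_ops *ops);
      
      	/* e.g. when building a trampoline for a single ops: */
      	ftrace_func_t func = ftrace_ops_get_func(ops);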
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  20. 23 Aug 2014 (1 commit)
    • ftrace: Allow ftrace_ops to use the hashes from other ops · 33b7f99c
      Authored by Steven Rostedt (Red Hat)
      Currently the top level debug file system function tracer shares its
      ftrace_ops with the function graph tracer. This was thought to be fine
      because the tracers are not used together, as one can only enable
      function or function_graph tracer in the current_tracer file.
      
      But that assumption proved to be incorrect. The function profiler
      can use the function graph tracer when function tracing is enabled.
      Since all function graph users use the function tracing ftrace_ops,
      this causes a conflict, and when a user enables both function profiling
      and the function tracer it will crash ftrace and disable it.
      
      The quick solution so far is to move them as separate ftrace_ops like
      it was earlier. The problem though is to synchronize the functions that
      are traced because both function and function_graph tracer are limited
      by the selections made in the set_ftrace_filter and set_ftrace_notrace
      files.
      
      To handle this, a new structure is made called ftrace_ops_hash. This
      structure will now hold the filter_hash and notrace_hash, and the
      ftrace_ops will point to this structure. That will allow two ftrace_ops
      to share the same hashes.
      
      Since most ftrace_ops do not share the hashes, and to keep allocation
      simple, the ftrace_ops structure will include both a pointer to the
      ftrace_ops_hash called func_hash, as well as the structure itself,
      called local_hash. When the ops are registered, the func_hash pointer
      will be initialized to point to the local_hash within the ftrace_ops
      structure. Some of the ftrace internal ftrace_ops will be initialized
      statically. This will allow for the function and function_graph tracer
      to have separate ops but still share the same hash tables that determine
      what functions they trace.
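      
      Simplified shape of the new layout:
      
      	struct ftrace_ops_hash {
      		struct ftrace_hash	*notrace_hash;
      		struct ftrace_hash	*filter_hash;
      		struct mutex		regex_lock;
      	};
      
      	struct ftrace_ops {
      		/* ... */
      		struct ftrace_ops_hash	local_hash;	/* this ops' own hashes */
      		struct ftrace_ops_hash	*func_hash;	/* usually &local_hash */
      	};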
      
      Cc: stable@vger.kernel.org # 3.16 (apply after 3.17-rc4 is out)
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  21. 24 Jul 2014 (1 commit)
  22. 19 Jul 2014 (3 commits)
  23. 17 Jul 2014 (1 commit)
    • ftrace-graph: Remove dependency of ftrace_stop() from ftrace_graph_stop() · 1b2f121c
      Authored by Steven Rostedt (Red Hat)
      ftrace_stop() is going away as it disables parts of function tracing
      that affects users that should not be affected. But ftrace_graph_stop()
      is built on ftrace_stop(). Here's another example of killing all of
      function tracing because something went wrong with function graph
      tracing.
      
      Instead of disabling all users of function tracing on function graph
      error, disable only function graph tracing.
      
      A new function is created called ftrace_graph_is_dead(). This is called
      in strategic paths to prevent function graph from doing more harm and
      allowing at least a warning to be printed before the system crashes.
      
      NOTE: ftrace_stop() is still used until all the archs are converted over
      to use ftrace_graph_is_dead(). After that, ftrace_stop() will be removed.
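      
      The typical arch-side guard looks like this (as used in the arch
      prepare_ftrace_return() implementations):
      
      	if (unlikely(ftrace_graph_is_dead()))
      		return;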
      Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  24. 16 Jul 2014 (1 commit)
  25. 01 Jul 2014 (1 commit)
    • ftrace: Optimize function graph to be called directly · 79922b80
      Authored by Steven Rostedt (Red Hat)
      Function graph tracing is a bit different from the function tracers, as
      it is processed after either the ftrace_caller or ftrace_regs_caller,
      and we only have one place to modify the jump to ftrace_graph_caller;
      the jump needs to happen after the registers are restored.
      
      The function graph tracer depends on the function tracer: even if
      function graph tracing is going on by itself, the save and restore of
      registers is still done as for function tracing, regardless of whether
      function tracing is happening, before the function graph code is
      called.
      
      If there's no function tracing happening, it is possible to just call
      the function graph tracer directly, and avoid the wasted effort to save
      and restore regs for function tracing.
      
      This requires adding new flags to the dyn_ftrace records:
      
        FTRACE_FL_TRAMP
        FTRACE_FL_TRAMP_EN
      
      The first is set if the count for the record is one, and the ftrace_ops
      associated to that record has its own trampoline. That way the mcount code
      can call that trampoline directly.
      
      In the future, trampolines can be added to arbitrary ftrace_ops, where you
      can have two or more ftrace_ops registered to ftrace (like kprobes and perf)
      and if they are not tracing the same functions, then instead of doing a
      loop to check all registered ftrace_ops against their hashes, just call the
      ftrace_ops trampoline directly, which would call the registered ftrace_ops
      function directly.
      
      Without this patch perf showed:
      
        0.05%  hackbench  [kernel.kallsyms]  [k] ftrace_caller
        0.05%  hackbench  [kernel.kallsyms]  [k] arch_local_irq_save
        0.05%  hackbench  [kernel.kallsyms]  [k] native_sched_clock
        0.04%  hackbench  [kernel.kallsyms]  [k] __buffer_unlock_commit
        0.04%  hackbench  [kernel.kallsyms]  [k] preempt_trace
        0.04%  hackbench  [kernel.kallsyms]  [k] prepare_ftrace_return
        0.04%  hackbench  [kernel.kallsyms]  [k] __this_cpu_preempt_check
        0.04%  hackbench  [kernel.kallsyms]  [k] ftrace_graph_caller
      
      See that the ftrace_caller took up more time than the ftrace_graph_caller
      did.
      
      With this patch:
      
        0.05%  hackbench  [kernel.kallsyms]  [k] __buffer_unlock_commit
        0.04%  hackbench  [kernel.kallsyms]  [k] call_filter_check_discard
        0.04%  hackbench  [kernel.kallsyms]  [k] ftrace_graph_caller
        0.04%  hackbench  [kernel.kallsyms]  [k] sched_clock
      
      The ftrace_caller is nowhere to be found and ftrace_graph_caller still
      takes up the same percentage.
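      
      The flag-setting decision, reduced to its core (sketch):
      
      	/* exactly one ops traces this record and that ops has its own
      	 * trampoline: the mcount site may call the trampoline directly
      	 */
      	if (ftrace_rec_count(rec) == 1 && ops->trampoline)
      		rec->flags |= FTRACE_FL_TRAMP;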
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  26. 30 Jun 2014 (2 commits)
  27. 21 May 2014 (1 commit)
  28. 14 May 2014 (2 commits)
    • ftrace: Remove FTRACE_UPDATE_MODIFY_CALL_REGS flag · f1b2f2bd
      Authored by Steven Rostedt (Red Hat)
      As the decision of what needs to be done (converting a call to
      ftrace_caller into ftrace_regs_caller, or converting from ftrace_regs_caller
      back to ftrace_caller) can easily be determined from the rec->flags
      FTRACE_FL_REGS and FTRACE_FL_REGS_EN, there's no need to have
      ftrace_check_record() return either UPDATE_MODIFY_CALL_REGS or
      UPDATE_MODIFY_CALL. Just the latter is enough. The added flag causes
      more complexity than is required. Remove it.
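      
      The direction can be recovered from the record's flags alone (sketch):
      
      	/* FTRACE_FL_REGS    - regs-caller is wanted
      	 * FTRACE_FL_REGS_EN - regs-caller is currently installed
      	 */
      	if (!!(rec->flags & FTRACE_FL_REGS) !=
      	    !!(rec->flags & FTRACE_FL_REGS_EN))
      		return FTRACE_UPDATE_MODIFY_CALL;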
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ftrace: Make get_ftrace_addr() and get_ftrace_addr_old() global · 7413af1f
      Authored by Steven Rostedt (Red Hat)
      Move and rename get_ftrace_addr() and get_ftrace_addr_old() to
      ftrace_get_addr_new() and ftrace_get_addr_curr() respectively.
      
      This moves these two helper functions in the generic code out from
      the arch specific code, and renames them to have a better generic
      name. This will allow other archs to use them as well as makes it
      a bit easier to work on getting separate trampolines for different
      functions.
      
      ftrace_get_addr_new() returns the trampoline address that the mcount
      call address will be converted to.
      
      ftrace_get_addr_curr() returns the trampoline address of what the
      mcount call address currently jumps to.
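      
      The pair, as arch update code would use them (sketch):
      
      	unsigned long new_addr = ftrace_get_addr_new(rec);   /* target after update */
      	unsigned long cur_addr = ftrace_get_addr_curr(rec);  /* target right now */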
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>