1. 10 Sep 2014, 3 commits
    • ftrace: Replace tramp_hash with old_*_hash to save space · fef5aeee
      Committed by Steven Rostedt (Red Hat)
      Allowing function callbacks to declare their own trampolines requires
      that each ftrace_ops that has a trampoline keep some sort of
      accounting that tracks which ops have a trampoline attached
      to a record.
      
      The easy way to solve this was to add a "tramp_hash" that created a
      hash entry for every function that an ops uses with a trampoline.
      But since we can have literally tens of thousands of functions being
      traced, that means we need tens of thousands of descriptors to map
      the ops to the functions in the hash. This is quite expensive and
      makes enabling and disabling the function graph tracer slow to start
      and stop. It can take up to several seconds to disable or enable all
      functions in the function graph tracer for this reason.
      
      The better approach, albeit more complex, is to keep track of how ops
      are being enabled and disabled, and use that, along with the count
      of ops attached to each record, to determine which ops have
      a trampoline attached to a record when tracing is enabled or
      disabled.
      
      To do this, the tramp_hash has been replaced with an old_filter_hash
      and old_notrace_hash, which get a copy of the ops filter_hash and
      notrace_hash respectively. The old hashes are kept until the ops has
      been modified or removed, and the old hashes are used with the
      accounting logic to determine which ops own the trampoline of
      a record. The reason this has a smaller footprint is the trick
      that an "empty" filter_hash means "all functions" and
      an empty notrace_hash means "no functions are excluded".
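      
      As a rough illustration of that trick (simplified stand-in types, not
      the kernel's actual ftrace_hash implementation), a per-ops check only
      needs the two hashes themselves rather than one descriptor per traced
      function:
      
        #include <stdbool.h>
        #include <stddef.h>
        
        /* Illustrative stand-in for the kernel's ftrace_hash: count == 0 is "empty". */
        struct hash {
            size_t count;
            bool (*contains)(const struct hash *h, unsigned long ip);
        };
        
        /*
         * Would this ops trace the function at 'ip'?
         *  - An empty filter_hash means "all functions".
         *  - An empty notrace_hash means "no functions are excluded".
         * So no per-function descriptors are needed for the common cases.
         */
        static bool ops_traces_ip(const struct hash *filter_hash,
                                  const struct hash *notrace_hash,
                                  unsigned long ip)
        {
            bool filtered_in = (filter_hash->count == 0) ||
                               filter_hash->contains(filter_hash, ip);
            bool filtered_out = (notrace_hash->count != 0) &&
                                notrace_hash->contains(notrace_hash, ip);
        
            return filtered_in && !filtered_out;
        }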
      
      This is much more efficient, doesn't have the delay, and takes up
      much less memory, as we do not need to map all the functions but
      just figure out which functions are mapped at the time tracing is
      enabled or disabled.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ftrace: Annotate the ops operation on update · e1effa01
      Committed by Steven Rostedt (Red Hat)
      Add three new flags for ftrace_ops:
      
        FTRACE_OPS_FL_ADDING
        FTRACE_OPS_FL_REMOVING
        FTRACE_OPS_FL_MODIFYING
      
      These will be set on an ftrace_ops when it is first added to
      function tracing, when it is being removed from function tracing,
      or when the set of functions it traces is being changed,
      respectively.
      
      This will be needed to remove the tramp_hash, which can grow quite
      big. The tramp_hash is used to note what functions a ftrace_ops
      is using a trampoline for. Denoting which ftrace_ops is being
      modified will allow us to use the ftrace_ops hashes themselves,
      which are much smaller, as they have a global flag to denote that
      a ftrace_ops is tracing all functions, as well as a notrace hash
      if the ftrace_ops is tracing all but a few. The tramp_hash just
      creates a hash item for every function, which can go into the tens
      of thousands if all functions are using the ftrace_ops trampoline.
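      
      A minimal sketch of how such flags might be applied around an update
      (illustrative bit values and helper names, not the kernel's; the real
      code does this inside the startup/shutdown and hash-update paths):
      
        /* Illustrative flag values only; the kernel defines its own bits. */
        #define FL_ADDING    (1u << 0)
        #define FL_REMOVING  (1u << 1)
        #define FL_MODIFYING (1u << 2)
        
        struct ops {
            unsigned int flags;
        };
        
        /* Mark the ops while its traced-function set is changing, then clear it. */
        static void modify_ops(struct ops *op, void (*do_update)(struct ops *))
        {
            op->flags |= FL_MODIFYING;
            do_update(op);               /* update which records this ops traces */
            op->flags &= ~FL_MODIFYING;
        }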
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ftrace: Add helper function ftrace_ops_get_func() · 87354059
      Committed by Steven Rostedt (Red Hat)
      Add a helper function that returns what the mcount trampoline is to
      call for a given ftrace_ops. This helper will be used by arch code
      in the future to set up dynamic trampolines. But as it performs the
      same tests used to choose what function to call for the default
      mcount trampoline, we might as well use it to clean up
      the existing code.
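      
      Conceptually, the helper just picks the entry point the trampoline should
      call for a given ops; a hedged sketch with made-up names (the hypothetical
      wrap_with_recursion_guard() stands in for whatever extra checks the default
      mcount trampoline would otherwise perform):
      
        typedef void (*trace_func_t)(unsigned long ip, unsigned long parent_ip);
        
        #define OPS_FL_NEEDS_GUARD  (1u << 0)    /* placeholder flag */
        
        struct ops {
            unsigned int flags;
            trace_func_t func;           /* callback registered by the user */
        };
        
        /* Hypothetical wrapper that adds protective checks around op->func. */
        extern trace_func_t wrap_with_recursion_guard(struct ops *op);
        
        /* Return what the (static or dynamic) trampoline should call for this ops. */
        static trace_func_t ops_get_func(struct ops *op)
        {
            if (op->flags & OPS_FL_NEEDS_GUARD)
                return wrap_with_recursion_guard(op);
            return op->func;
        }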
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  2. 23 Aug 2014, 1 commit
    • ftrace: Allow ftrace_ops to use the hashes from other ops · 33b7f99c
      Committed by Steven Rostedt (Red Hat)
      Currently the top-level debugfs function tracer shares its
      ftrace_ops with the function graph tracer. This was thought to be fine
      because the tracers are not used together, as only one of the function
      or function_graph tracers can be enabled in the current_tracer file.
      
      But that assumption proved to be incorrect. The function profiler
      can use the function graph tracer when function tracing is enabled.
      Since all function graph users use the function tracing ftrace_ops,
      this causes a conflict, and when a user enables both function profiling
      and the function tracer, ftrace crashes and is disabled.
      
      The quick solution so far is to split them back into separate ftrace_ops
      as they were earlier. The problem, though, is synchronizing the functions
      that are traced, because both the function and function_graph tracers are
      limited by the selections made in the set_ftrace_filter and
      set_ftrace_notrace files.
      
      To handle this, a new structure is made called ftrace_ops_hash. This
      structure will now hold the filter_hash and notrace_hash, and the
      ftrace_ops will point to this structure. That will allow two ftrace_ops
      to share the same hashes.
      
      Since most ftrace_ops do not share the hashes, and to keep allocation
      simple, the ftrace_ops structure will include both a pointer to the
      ftrace_ops_hash called func_hash, as well as the structure itself,
      called local_hash. When the ops are registered, the func_hash pointer
      will be initialized to point to the local_hash within the ftrace_ops
      structure. Some of the ftrace internal ftrace_ops will be initialized
      statically. This will allow for the function and function_graph tracer
      to have separate ops but still share the same hash tables that determine
      what functions they trace.
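      
      A sketch of that layout with simplified types (the real struct ftrace_ops
      has many more members; ops_share_hash() here is only illustrative):
      
        struct ftrace_hash;                      /* opaque here */
        
        struct ftrace_ops_hash {
            struct ftrace_hash *filter_hash;
            struct ftrace_hash *notrace_hash;
        };
        
        struct ftrace_ops {
            void (*func)(unsigned long ip, unsigned long parent_ip);
            struct ftrace_ops_hash *func_hash;   /* hashes this ops actually uses */
            struct ftrace_ops_hash local_hash;   /* default, private hashes */
            /* ... */
        };
        
        /* On registration, a normal ops simply uses its own embedded hashes. */
        static void ops_init_hash(struct ftrace_ops *ops)
        {
            if (!ops->func_hash)
                ops->func_hash = &ops->local_hash;
        }
        
        /* Two ops (e.g. the function and function_graph tracers) can share
         * filters by pointing at the same ftrace_ops_hash.                 */
        static void ops_share_hash(struct ftrace_ops *a, struct ftrace_ops *b)
        {
            b->func_hash = a->func_hash;
        }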
      
      Cc: stable@vger.kernel.org # 3.16 (apply after 3.17-rc4 is out)
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  3. 24 Jul 2014, 1 commit
  4. 19 Jul 2014, 3 commits
  5. 17 Jul 2014, 1 commit
    • ftrace-graph: Remove dependency of ftrace_stop() from ftrace_graph_stop() · 1b2f121c
      Committed by Steven Rostedt (Red Hat)
      ftrace_stop() is going away as it disables parts of function tracing
      that affect users who should not be affected. But ftrace_graph_stop()
      is built on ftrace_stop(). Here's another example of killing all of
      function tracing because something went wrong with function graph
      tracing.
      
      Instead of disabling all users of function tracing on function graph
      error, disable only function graph tracing.
      
      A new function is created called ftrace_graph_is_dead(). This is called
      in strategic paths to prevent function graph from doing more harm and
      allowing at least a warning to be printed before the system crashes.
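      
      A minimal sketch of that kill switch (the variable name and the caller
      shown here are illustrative):
      
        #include <stdbool.h>
        
        /* Set once when function graph tracing hits a fatal internal error. */
        static bool graph_is_dead;
        
        bool ftrace_graph_is_dead(void)
        {
            return graph_is_dead;
        }
        
        /* On error: disable only function graph tracing, not all of ftrace. */
        void ftrace_graph_stop(void)
        {
            graph_is_dead = true;
        }
        
        /* Strategic paths bail out early to avoid doing more harm. */
        int prepare_graph_entry(void)
        {
            if (ftrace_graph_is_dead())
                return -1;
            /* ... normal function graph entry handling ... */
            return 0;
        }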
      
      NOTE: ftrace_stop() is still used until all the archs are converted over
      to use ftrace_graph_is_dead(). After that, ftrace_stop() will be removed.
      Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  6. 16 Jul 2014, 1 commit
  7. 01 Jul 2014, 1 commit
    • ftrace: Optimize function graph to be called directly · 79922b80
      Committed by Steven Rostedt (Red Hat)
      Function graph tracing is a bit different from the function tracers, as
      it is processed after either ftrace_caller or ftrace_regs_caller,
      and there is only one place to modify the jump to ftrace_graph_caller:
      the jump needs to happen after the registers are restored.
      
      The function graph tracer depends on the function tracer: even when
      function graph tracing is going on by itself, the save and
      restore of registers is still done for function tracing, regardless of
      whether function tracing is actually happening, before the function
      graph code is called.
      
      If there's no function tracing happening, it is possible to just call
      the function graph tracer directly, and avoid the wasted effort to save
      and restore regs for function tracing.
      
      This requires adding new flags to the dyn_ftrace records:
      
        FTRACE_FL_TRAMP
        FTRACE_FL_TRAMP_EN
      
      The first is set if the count for the record is one, and the ftrace_ops
      associated with that record has its own trampoline. That way the mcount
      code can call that trampoline directly.
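      
      An illustrative sketch of that decision (simplified record layout and
      placeholder flag names; the kernel packs the ops count and flags into
      rec->flags):
      
        #define FL_TRAMP     (1ul << 0)  /* single ops with its own trampoline */
        #define FL_TRAMP_EN  (1ul << 1)  /* call site currently uses that trampoline */
        
        struct dyn_rec {
            unsigned long ip;            /* patched call site (mcount location) */
            unsigned long flags;
            unsigned long ops_count;     /* how many ftrace_ops trace this site */
        };
        
        /* Pick what the mcount call site should jump to. */
        static unsigned long call_target(struct dyn_rec *rec,
                                         unsigned long ops_trampoline,
                                         unsigned long generic_caller)
        {
            /* Exactly one ops, and it has a private trampoline: call it directly,
             * skipping the generic register save/restore and the ops loop.       */
            if (rec->ops_count == 1 && (rec->flags & FL_TRAMP))
                return ops_trampoline;
        
            return generic_caller;       /* ftrace_caller / ftrace_regs_caller path */
        }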
      
      In the future, trampolines can be added to arbitrary ftrace_ops, where you
      can have two or more ftrace_ops registered to ftrace (like kprobes and perf)
      and if they are not tracing the same functions, then instead of doing a
      loop to check all registered ftrace_ops against their hashes, just call the
      ftrace_ops trampoline directly, which would call the registered ftrace_ops
      function directly.
      
      Without this patch perf showed:
      
        0.05%  hackbench  [kernel.kallsyms]  [k] ftrace_caller
        0.05%  hackbench  [kernel.kallsyms]  [k] arch_local_irq_save
        0.05%  hackbench  [kernel.kallsyms]  [k] native_sched_clock
        0.04%  hackbench  [kernel.kallsyms]  [k] __buffer_unlock_commit
        0.04%  hackbench  [kernel.kallsyms]  [k] preempt_trace
        0.04%  hackbench  [kernel.kallsyms]  [k] prepare_ftrace_return
        0.04%  hackbench  [kernel.kallsyms]  [k] __this_cpu_preempt_check
        0.04%  hackbench  [kernel.kallsyms]  [k] ftrace_graph_caller
      
      See that the ftrace_caller took up more time than the ftrace_graph_caller
      did.
      
      With this patch:
      
        0.05%  hackbench  [kernel.kallsyms]  [k] __buffer_unlock_commit
        0.04%  hackbench  [kernel.kallsyms]  [k] call_filter_check_discard
        0.04%  hackbench  [kernel.kallsyms]  [k] ftrace_graph_caller
        0.04%  hackbench  [kernel.kallsyms]  [k] sched_clock
      
      ftrace_caller is nowhere to be found and ftrace_graph_caller still
      takes up the same percentage.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  8. 30 Jun 2014, 2 commits
  9. 21 May 2014, 1 commit
  10. 14 May 2014, 2 commits
    • ftrace: Remove FTRACE_UPDATE_MODIFY_CALL_REGS flag · f1b2f2bd
      Committed by Steven Rostedt (Red Hat)
      As the decision on what needs to be done (converting a call to
      ftrace_caller into ftrace_caller_regs, or converting from ftrace_caller_regs
      back to ftrace_caller) can easily be determined from the rec->flags bits
      FTRACE_FL_REGS and FTRACE_FL_REGS_EN, there's no need to have
      ftrace_check_record() return either UPDATE_MODIFY_CALL_REGS or
      UPDATE_MODIFY_CALL. Just the latter is enough. This added flag causes
      more complexity than is required. Remove it.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ftrace: Make get_ftrace_addr() and get_ftrace_addr_old() global · 7413af1f
      Committed by Steven Rostedt (Red Hat)
      Move and rename get_ftrace_addr() and get_ftrace_addr_old() to
      ftrace_get_addr_new() and ftrace_get_addr_curr() respectively.
      
      This moves these two helper functions out of the arch-specific code and
      into the generic code, and renames them to have better generic
      names. This will allow other archs to use them, and it also makes it
      a bit easier to work on getting separate trampolines for different
      functions.
      
      ftrace_get_addr_new() returns the trampoline address that the mcount
      call address will be converted to.
      
      ftrace_get_addr_curr() returns the trampoline address of what the
      mcount call address currently jumps to.
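      
      Going by the previous commit's description, a plausible sketch of the pair,
      with placeholder flag bits and trampoline addresses (the real helpers also
      handle the trampoline cases added by the later commits above):
      
        #define FL_REGS     (1ul << 0)   /* an attached ops wants registers saved */
        #define FL_REGS_EN  (1ul << 1)   /* site currently uses the regs caller   */
        
        extern unsigned long plain_caller_addr;  /* e.g. ftrace_caller      */
        extern unsigned long regs_caller_addr;   /* e.g. ftrace_regs_caller */
        
        struct dyn_rec {
            unsigned long flags;
        };
        
        /* What the mcount call site should be converted to jump to. */
        static unsigned long get_addr_new(struct dyn_rec *rec)
        {
            return (rec->flags & FL_REGS) ? regs_caller_addr : plain_caller_addr;
        }
        
        /* What the mcount call site currently jumps to. */
        static unsigned long get_addr_curr(struct dyn_rec *rec)
        {
            return (rec->flags & FL_REGS_EN) ? regs_caller_addr : plain_caller_addr;
        }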
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  11. 28 Apr 2014, 1 commit
    • ftrace/module: Hardcode ftrace_module_init() call into load_module() · a949ae56
      Committed by Steven Rostedt (Red Hat)
      A race exists between module loading and enabling of function tracer.
      
      	CPU 1				CPU 2
      	-----				-----
        load_module()
         module->state = MODULE_STATE_COMING
      
      				register_ftrace_function()
      				 mutex_lock(&ftrace_lock);
      				 ftrace_startup()
      				  update_ftrace_function();
      				   ftrace_arch_code_modify_prepare()
      				    set_all_module_text_rw();
      				   <enables-ftrace>
      				    ftrace_arch_code_modify_post_process()
      				     set_all_module_text_ro();
      
      				[ here all module text is set to RO,
      				  including the module that is
      				  loading!! ]
      
         blocking_notifier_call_chain(MODULE_STATE_COMING);
          ftrace_init_module()
      
           [ tries to modify code, but it's RO, and fails!
             ftrace_bug() is called]
      
      When this race happens, ftrace_bug() will produce a nasty warning and
      all of the function tracing features will be disabled until reboot.
      
      The simple solution is to treat module load the same way the core
      kernel is treated at boot: hardcode the ftrace function modification
      of converting calls to mcount into nops. This is done in init/main.c,
      and there's no reason it could not be done in load_module(). This gives
      better control of the changes and doesn't tie the state of the
      module to its notifiers as much. Ftrace is special; it needs to be
      treated as such.
      
      The reason this would work, is that the ftrace_module_init() would be
      called while the module is in MODULE_STATE_UNFORMED, which is ignored
      by the set_all_module_text_ro() call.
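      
      A rough sketch of the resulting ordering, with simplified types and a
      hypothetical my_load_module() standing in for the relevant part of
      load_module():
      
        enum module_state { MODULE_STATE_UNFORMED, MODULE_STATE_COMING, MODULE_STATE_LIVE };
        
        struct module {
            enum module_state state;
            /* ... text ranges, etc. ... */
        };
        
        extern void ftrace_module_init(struct module *mod);  /* mcount calls -> nops */
        extern void notify_module_coming(struct module *mod);
        
        static void my_load_module(struct module *mod)
        {
            mod->state = MODULE_STATE_UNFORMED;
        
            /* Patch the module's mcount sites while it is still UNFORMED:
             * set_all_module_text_ro() from a racing ftrace_startup() ignores
             * UNFORMED modules, so the text is still writable here.          */
            ftrace_module_init(mod);
        
            mod->state = MODULE_STATE_COMING;
            notify_module_coming(mod);   /* MODULE_STATE_COMING notifier chain */
        }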
      
      Link: http://lkml.kernel.org/r/1395637826-3312-1-git-send-email-indou.takao@jp.fujitsu.com
      Reported-by: Takao Indoh <indou.takao@jp.fujitsu.com>
      Acked-by: Rusty Russell <rusty@rustcorp.com.au>
      Cc: stable@vger.kernel.org # 2.6.38+
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  12. 22 Apr 2014, 1 commit
    • ftrace: Remove global function list and call function directly · 4104d326
      Committed by Steven Rostedt (Red Hat)
      Since only one global function is allowed to be enabled at a time,
      there's no reason to have a list of global functions that are
      called.
      
      Instead, simply have all the users of the global ops use the global ops
      directly, instead of registering their own ftrace_ops. Just switch which
      function is used before enabling the function tracer.
      
      This removes a lot of code as well as the complexity involved with it.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  13. 12 Mar 2014, 1 commit
  14. 07 Mar 2014, 2 commits
  15. 21 Feb 2014, 2 commits
  16. 03 Jan 2014, 1 commit
  17. 06 Nov 2013, 1 commit
  18. 19 Oct 2013, 1 commit
    • ftrace: Add set_graph_notrace filter · 29ad23b0
      Committed by Namhyung Kim
      The set_graph_notrace filter is analogous to set_ftrace_notrace and
      can be used to eliminate uninteresting parts of the function graph trace
      output.  It also works nicely with set_graph_function.
      
        # cd /sys/kernel/debug/tracing/
        # echo do_page_fault > set_graph_function
        # perf ftrace live true
         2)               |  do_page_fault() {
         2)               |    __do_page_fault() {
         2)   0.381 us    |      down_read_trylock();
         2)   0.055 us    |      __might_sleep();
         2)   0.696 us    |      find_vma();
         2)               |      handle_mm_fault() {
         2)               |        handle_pte_fault() {
         2)               |          __do_fault() {
         2)               |            filemap_fault() {
         2)               |              find_get_page() {
         2)   0.033 us    |                __rcu_read_lock();
         2)   0.035 us    |                __rcu_read_unlock();
         2)   1.696 us    |              }
         2)   0.031 us    |              __might_sleep();
         2)   2.831 us    |            }
         2)               |            _raw_spin_lock() {
         2)   0.046 us    |              add_preempt_count();
         2)   0.841 us    |            }
         2)   0.033 us    |            page_add_file_rmap();
         2)               |            _raw_spin_unlock() {
         2)   0.057 us    |              sub_preempt_count();
         2)   0.568 us    |            }
         2)               |            unlock_page() {
         2)   0.084 us    |              page_waitqueue();
         2)   0.126 us    |              __wake_up_bit();
         2)   1.117 us    |            }
         2)   7.729 us    |          }
         2)   8.397 us    |        }
         2)   8.956 us    |      }
         2)   0.085 us    |      up_read();
         2) + 12.745 us   |    }
         2) + 13.401 us   |  }
        ...
      
        # echo handle_mm_fault > set_graph_notrace
        # perf ftrace live true
         1)               |  do_page_fault() {
         1)               |    __do_page_fault() {
         1)   0.205 us    |      down_read_trylock();
         1)   0.041 us    |      __might_sleep();
         1)   0.344 us    |      find_vma();
         1)   0.069 us    |      up_read();
         1)   4.692 us    |    }
         1)   5.311 us    |  }
        ...
      
      Link: http://lkml.kernel.org/r/1381739066-7531-5-git-send-email-namhyung@kernel.org
      Signed-off-by: Namhyung Kim <namhyung@kernel.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  19. 20 Jun 2013, 1 commit
    • tracing: Disable tracing on warning · de7edd31
      Committed by Steven Rostedt (Red Hat)
      Add a traceoff_on_warning option, available both on the kernel command
      line and as a sysctl. When set, any WARN*() function that is hit will
      cause the tracing_on variable to be cleared, which disables writing to
      the ring buffer.
      
      This is especially useful when tracing a bug with function tracing. When
      a warning is hit, the print caused by the warning can flood the trace with
      the functions that produce the output for the warning. This can make the
      resulting trace useless by either hiding where the bug happened, or worse,
      by overflowing the buffer and losing the trace of the bug entirely.
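      
      The mechanism amounts to a boolean consulted in the WARN path; a hedged
      sketch with simplified names:
      
        #include <stdbool.h>
        
        /* Set from the "traceoff_on_warning" command line option or sysctl. */
        static bool traceoff_on_warning;
        
        /* Stand-in for the tracer call that clears tracing_on. */
        extern void tracing_off(void);
        
        /* Called from the common WARN*() slow path. */
        static void warn_hook(void)
        {
            if (traceoff_on_warning)
                tracing_off();
        }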
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  20. 12 Jun 2013, 1 commit
  21. 10 May 2013, 1 commit
    • ftrace, kprobes: Fix a deadlock on ftrace_regex_lock · f04f24fb
      Committed by Masami Hiramatsu
      Fix a deadlock on ftrace_regex_lock which happens when setting
      an enable_event trigger on a dynamic kprobe event, as below.
      
      ----
      sh-2.05b# echo p vfs_symlink > kprobe_events
      sh-2.05b# echo vfs_symlink:enable_event:kprobes:p_vfs_symlink_0 > set_ftrace_filter
      
      =============================================
      [ INFO: possible recursive locking detected ]
      3.9.0+ #35 Not tainted
      ---------------------------------------------
      sh/72 is trying to acquire lock:
       (ftrace_regex_lock){+.+.+.}, at: [<ffffffff810ba6c1>] ftrace_set_hash+0x81/0x1f0
      
      but task is already holding lock:
       (ftrace_regex_lock){+.+.+.}, at: [<ffffffff810b7cbd>] ftrace_regex_write.isra.29.part.30+0x3d/0x220
      
      other info that might help us debug this:
       Possible unsafe locking scenario:
      
             CPU0
             ----
        lock(ftrace_regex_lock);
        lock(ftrace_regex_lock);
      
       *** DEADLOCK ***
      ----
      
      To fix that, this introduces a finer-grained regex_lock for each ftrace_ops.
      ftrace_regex_lock is too big a lock: it protects all
      filter/notrace_hash operations, but it doesn't need to be a global
      lock now that multiple ftrace_ops are supported, because each ftrace_ops
      has its own filter/notrace_hash.
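      
      Roughly, the fix moves the mutex into the ops and initializes it lazily,
      so filter updates on different ops never contend on (or recursively take)
      one global lock. A sketch using pthreads in place of the kernel mutex,
      with illustrative names (the bracketed note further down mentions the
      initialization flag this relies on):
      
        #include <pthread.h>
        #include <stdbool.h>
        
        struct ops {
            pthread_mutex_t regex_lock;  /* protects this ops' filter/notrace hashes */
            bool initialized;            /* analogous to an "INITIALIZED" ops flag    */
        };
        
        /* Lazily initialize the per-ops lock for ops defined outside ftrace.c. */
        static void ops_init(struct ops *op)
        {
            if (!op->initialized) {
                pthread_mutex_init(&op->regex_lock, NULL);
                op->initialized = true;
            }
        }
        
        /* Filter updates now lock only the ops being modified. */
        static void set_filter(struct ops *op /*, pattern ... */)
        {
            ops_init(op);
            pthread_mutex_lock(&op->regex_lock);
            /* ... update op's filter_hash / notrace_hash ... */
            pthread_mutex_unlock(&op->regex_lock);
        }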
      
      Link: http://lkml.kernel.org/r/20130509054417.30398.84254.stgit@mhiramat-M0-7522
      
      Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Tom Zanussi <tom.zanussi@intel.com>
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      [ Added initialization flag and automated mutex initialization for
        non-ftrace.c ftrace_probes. ]
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  22. 13 Apr 2013, 2 commits
  23. 09 Apr 2013, 1 commit
    • ftrace: Do not call stub functions in control loop · 395b97a3
      Committed by Steven Rostedt (Red Hat)
      The function tracing control loop used by perf spits out a warning
      if the called function is not a control function. This is because
      the control function references a per cpu allocated data structure
      on struct ftrace_ops that is not allocated for other types of
      functions.
      
      Commit 0a016409 ("ftrace: Optimize the function tracer list loop")
      optimized all function tracing loops for the case of a single registered
      ops. Unfortunately, this allows a slight race when tracing starts or
      ends, where the stub function might be called after the current
      registered ops is removed. In this case we get the following dump:
      
      root# perf stat -e ftrace:function sleep 1
      [   74.339105] WARNING: at include/linux/ftrace.h:209 ftrace_ops_control_func+0xde/0xf0()
      [   74.349522] Hardware name: PRIMERGY RX200 S6
      [   74.357149] Modules linked in: sg igb iTCO_wdt ptp pps_core iTCO_vendor_support i7core_edac dca lpc_ich i2c_i801 coretemp edac_core crc32c_intel mfd_core ghash_clmulni_intel dm_multipath acpi_power_meter pcspk
      r microcode vhost_net tun macvtap macvlan nfsd kvm_intel kvm auth_rpcgss nfs_acl lockd sunrpc uinput xfs libcrc32c sd_mod crc_t10dif sr_mod cdrom mgag200 i2c_algo_bit drm_kms_helper ttm qla2xxx mptsas ahci drm li
      bahci scsi_transport_sas mptscsih libata scsi_transport_fc i2c_core mptbase scsi_tgt dm_mirror dm_region_hash dm_log dm_mod
      [   74.446233] Pid: 1377, comm: perf Tainted: G        W    3.9.0-rc1 #1
      [   74.453458] Call Trace:
      [   74.456233]  [<ffffffff81062e3f>] warn_slowpath_common+0x7f/0xc0
      [   74.462997]  [<ffffffff810fbc60>] ? rcu_note_context_switch+0xa0/0xa0
      [   74.470272]  [<ffffffff811041a2>] ? __unregister_ftrace_function+0xa2/0x1a0
      [   74.478117]  [<ffffffff81062e9a>] warn_slowpath_null+0x1a/0x20
      [   74.484681]  [<ffffffff81102ede>] ftrace_ops_control_func+0xde/0xf0
      [   74.491760]  [<ffffffff8162f400>] ftrace_call+0x5/0x2f
      [   74.497511]  [<ffffffff8162f400>] ? ftrace_call+0x5/0x2f
      [   74.503486]  [<ffffffff8162f400>] ? ftrace_call+0x5/0x2f
      [   74.509500]  [<ffffffff810fbc65>] ? synchronize_sched+0x5/0x50
      [   74.516088]  [<ffffffff816254d5>] ? _cond_resched+0x5/0x40
      [   74.522268]  [<ffffffff810fbc65>] ? synchronize_sched+0x5/0x50
      [   74.528837]  [<ffffffff811041a2>] ? __unregister_ftrace_function+0xa2/0x1a0
      [   74.536696]  [<ffffffff816254d5>] ? _cond_resched+0x5/0x40
      [   74.542878]  [<ffffffff8162402d>] ? mutex_lock+0x1d/0x50
      [   74.548869]  [<ffffffff81105c67>] unregister_ftrace_function+0x27/0x50
      [   74.556243]  [<ffffffff8111eadf>] perf_ftrace_event_register+0x9f/0x140
      [   74.563709]  [<ffffffff816254d5>] ? _cond_resched+0x5/0x40
      [   74.569887]  [<ffffffff8162402d>] ? mutex_lock+0x1d/0x50
      [   74.575898]  [<ffffffff8111e94e>] perf_trace_destroy+0x2e/0x50
      [   74.582505]  [<ffffffff81127ba9>] tp_perf_event_destroy+0x9/0x10
      [   74.589298]  [<ffffffff811295d0>] free_event+0x70/0x1a0
      [   74.595208]  [<ffffffff8112a579>] perf_event_release_kernel+0x69/0xa0
      [   74.602460]  [<ffffffff816254d5>] ? _cond_resched+0x5/0x40
      [   74.608667]  [<ffffffff8112a640>] put_event+0x90/0xc0
      [   74.614373]  [<ffffffff8112a740>] perf_release+0x10/0x20
      [   74.620367]  [<ffffffff811a3044>] __fput+0xf4/0x280
      [   74.625894]  [<ffffffff811a31de>] ____fput+0xe/0x10
      [   74.631387]  [<ffffffff81083697>] task_work_run+0xa7/0xe0
      [   74.637452]  [<ffffffff81014981>] do_notify_resume+0x71/0xb0
      [   74.643843]  [<ffffffff8162fa92>] int_signal+0x12/0x17
      
      To fix this a new ftrace_ops flag is added that denotes the ftrace_list_end
      ftrace_ops stub as just that, a stub. This flag is now checked in the
      control loop and the function is not called if the flag is set.
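      
      In outline, the control loop skips any ops marked as the list-end stub
      before touching its per-cpu data; a simplified sketch with placeholder
      names:
      
        #include <stddef.h>
        
        #define OPS_FL_STUB  (1u << 0)   /* placeholder for the new stub flag */
        
        struct ops {
            unsigned int flags;
            struct ops *next;
            void (*func)(unsigned long ip, unsigned long parent_ip, struct ops *op);
        };
        
        /* Never call the stub: its per-cpu control data does not exist. */
        static void control_list_func(struct ops *list, unsigned long ip,
                                      unsigned long parent_ip)
        {
            struct ops *op;
        
            for (op = list; op != NULL; op = op->next) {
                if (op->flags & OPS_FL_STUB)
                    continue;
                op->func(ip, parent_ip, op);
            }
        }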
      
      Thanks to Jovi for not just reporting the bug, but also pointing out
      where the bug was in the code.
      
      Link: http://lkml.kernel.org/r/514A8855.7090402@redhat.com
      Link: http://lkml.kernel.org/r/1364377499-1900-15-git-send-email-jovi.zhangwei@huawei.com
      Tested-by: WANG Chao <chaowang@redhat.com>
      Reported-by: WANG Chao <chaowang@redhat.com>
      Reported-by: zhangwei(Jovi) <jovi.zhangwei@huawei.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  24. 15 Mar 2013, 1 commit
    • ftrace: Clean up function probe methods · e67efb93
      Committed by Steven Rostedt (Red Hat)
      When a function probe is created, a "callback" method is called for
      each function that the probe is attached to. On release of the probe,
      the "free" method is called for each function entry.
      
      First, "callback" is a confusing name and does not really match what
      it does. Callback sounds like it will be called when the probe
      triggers. But that's not the case. This is really an "init" function,
      so lets rename it as such.
      
      Secondly, both "init" and "free" do not pass enough information back
      to the handlers. Pass back the ops, ip and data each time the
      method is called. We have the information; we might as well use it.
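      
      The upshot is handler signatures along these lines (illustrative; the
      exact argument types follow this commit's description rather than any
      particular header version):
      
        struct ftrace_probe_ops;   /* opaque here */
        
        /* Per-function setup when the probe is attached, and teardown on
         * release; both now receive the ops, the instruction pointer, and
         * the per-entry data.                                              */
        struct probe_hooks {
            int  (*init)(struct ftrace_probe_ops *ops, unsigned long ip, void **data);
            void (*free)(struct ftrace_probe_ops *ops, unsigned long ip, void **data);
        };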
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  25. 22 Jan 2013, 1 commit
  26. 18 Dec 2012, 1 commit
  27. 31 Jul 2012, 4 commits
  28. 20 Jul 2012, 1 commit
    • ftrace/x86: Add separate function to save regs · 08f6fba5
      Committed by Steven Rostedt
      Add a way to have different functions calling different trampolines.
      If an ftrace_ops wants regs saved on return, then only the
      functions registered to that ops will save regs. Functions registered by
      other ops would not be affected, unless the functions overlap.
      
      If one ftrace_ops registered functions A, B and C, and another ops
      registered functions A and D to save regs, then only functions
      A and D would be saving regs. Functions B and C would work as normal.
      Although A is registered by both ops, one normal and one saving regs, this
      is fine, as saving the regs is needed to satisfy one of the ops that calls
      it while the regs are ignored by the other ops function.
      
      x86_64 implements the full regs saving, and i386 just passes NULL
      for regs to satisfy the ftrace_ops calling convention: an arch must supply
      both the regs and ftrace_ops parameters, even if regs is just NULL.
      
      It is OK for an arch to pass NULL regs. All function trace users that
      require regs passing must add the flag FTRACE_OPS_FL_SAVE_REGS when
      registering the ftrace_ops. If the arch does not support saving regs,
      then the ftrace_ops will fail to register. The flag
      FTRACE_OPS_FL_SAVE_REGS_IF_SUPPORTED may be set instead, which prevents the
      ftrace_ops from failing to register. In this case, the handler may
      either check if regs is not NULL or check ARCH_SUPPORTS_FTRACE_SAVE_REGS.
      If the arch supports passing regs it will set this macro and pass regs
      for ops that request them. All other archs will just pass NULL.
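      
      A hedged sketch of how a user of this flag might register and defensively
      handle a NULL regs pointer (simplified flag values and callback signature,
      not the kernel's definitions):
      
        #include <stdio.h>
        
        #define OPS_FL_SAVE_REGS               (1u << 0)
        #define OPS_FL_SAVE_REGS_IF_SUPPORTED  (1u << 1)
        
        struct pt_regs;                  /* opaque here */
        
        struct ops {
            void (*func)(unsigned long ip, unsigned long parent_ip,
                         struct ops *op, struct pt_regs *regs);
            unsigned int flags;
        };
        
        /* Handler that works whether or not the arch can pass registers. */
        static void my_handler(unsigned long ip, unsigned long parent_ip,
                               struct ops *op, struct pt_regs *regs)
        {
            if (regs)
                printf("regs available at %lx\n", ip);
            else
                printf("arch passed NULL regs at %lx\n", ip);
        }
        
        static struct ops my_ops = {
            .func  = my_handler,
            /* Registration succeeds even if the arch cannot save regs. */
            .flags = OPS_FL_SAVE_REGS_IF_SUPPORTED,
        };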
      
      Link: http://lkml.kernel.org/r/20120711195745.107705970@goodmis.org
      
      Cc: Alexander van Heukelum <heukelum@fastmail.fm>
      Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>