1. 09 Apr, 2013 1 commit
  2. 15 Mar, 2013 3 commits
    • ftrace: Use manual free after synchronize_sched() not call_rcu_sched() · 7818b388
      Steven Rostedt (Red Hat) authored
      The entries in the probe hash must be freed only after a synchronize_sched()
      has run, once the entry has been removed from the hash.
      
      As the entries are registered with ops that may have their own callbacks,
      and these callbacks may sleep, we cannot use call_rcu_sched(): the RCU
      callbacks it registers are called from softirq context.
      
      Instead of using call_rcu_sched(), manually save the entries on a free_list
      and at the end of the loop that removes the entries, do a synchronize_sched()
      and then go through the free_list, freeing the entries.
      
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      7818b388
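      A minimal C sketch of the two-phase free described above; probe_hash,
      struct probe_entry and release_probe_entries() are illustrative names,
      not the exact identifiers in kernel/trace/ftrace.c:
      
        #include <linux/list.h>
        #include <linux/rcupdate.h>
        #include <linux/slab.h>
        
        struct probe_entry {
                struct list_head node;      /* linkage in the probe hash */
                struct list_head free_node; /* linkage on the local free_list */
        };
        
        static void release_probe_entries(struct list_head *probe_hash)
        {
                struct probe_entry *entry, *tmp;
                LIST_HEAD(free_list);
        
                /* Phase 1: unlink the entries, but defer the actual free. */
                list_for_each_entry_safe(entry, tmp, probe_hash, node) {
                        list_del_rcu(&entry->node);
                        list_add(&entry->free_node, &free_list);
                }
        
                /* Wait for every preempt-disabled reader to finish. */
                synchronize_sched();
        
                /* Phase 2: freeing, which may sleep, is now safe. */
                list_for_each_entry_safe(entry, tmp, &free_list, free_node)
                        kfree(entry);
        }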
    • ftrace: Clean up function probe methods · e67efb93
      Steven Rostedt (Red Hat) authored
      When a function probe is created, a "callback" method is called for each
      function the probe is attached to. On release of the probe, the "free"
      method is called for each function entry.
      
      First, "callback" is a confusing name that does not really match what the
      method does. "Callback" sounds like it will be called when the probe
      triggers, but that's not the case. This is really an "init" function,
      so let's rename it as such.
      
      Secondly, both "init" and "free" do not pass enough information back
      to the handlers. Pass back the ops, ip and data each time the
      method is called. We have the information, might as well use it.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      e67efb93
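      A sketch of the renamed hooks; the signatures below approximate the
      struct ftrace_probe_ops shape in linux/ftrace.h after this change, and
      may differ in detail:
      
        struct ftrace_probe_ops {
                /* runs when the probe triggers */
                void (*func)(unsigned long ip, unsigned long parent_ip,
                             void **data);
                /* was "callback": runs once per attached function on creation */
                int  (*init)(struct ftrace_probe_ops *ops, unsigned long ip,
                             void **data);
                /* runs once per function entry when the probe is released */
                void (*free)(struct ftrace_probe_ops *ops, unsigned long ip,
                             void **data);
        };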
    • ftrace: Fix function probe to only enable needed functions · e1df4cb6
      Steven Rostedt (Red Hat) authored
      Currently the function probe enables all functions and runs a "hash"
      against every function call to see if it should call a probe. This
      is extremely wasteful.
      
      Note, a probe is something like:
      
        echo schedule:traceoff > /debug/tracing/set_ftrace_filter
      
      When schedule is called, the probe will disable tracing. But currently,
      the probe has a callback for *all* functions, and checks whether the
      called function is the one the probe needs.
      
      The probe code was created before ftrace was rewritten to allow more
      than one "op" to be registered with the function tracer. Back then,
      probes couldn't limit the functions traced without also limiting
      normal function calls. But now we can, so it's about time to update
      the probe code.
      
      Todo: have separate ops for different entries; that is, assign a
      ftrace_ops per probe instead of one op for all probes. But as there
      are not many probes assigned, this may not be that urgent.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      e1df4cb6
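      A hedged sketch of the direction this implies: give a probe its own
      ftrace_ops and filter it down to just the functions it needs.
      probe_handler is an illustrative name; ftrace_set_filter() and
      register_ftrace_function() are the existing kernel APIs:
      
        static void probe_handler(unsigned long ip, unsigned long parent_ip,
                                  struct ftrace_ops *op, struct pt_regs *regs)
        {
                /* runs only for the filtered functions, not for every call */
        }
        
        static struct ftrace_ops probe_ops = {
                .func = probe_handler,
        };
        
        static int __init probe_setup(void)
        {
                /* enable the callback for schedule() only */
                ftrace_set_filter(&probe_ops, "schedule", strlen("schedule"), 1);
                return register_ftrace_function(&probe_ops);
        }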
  3. 14 Mar, 2013 1 commit
    • tracing: Fix free of probe entry by calling call_rcu_sched() · 740466bc
      Steven Rostedt (Red Hat) authored
      Because function tracing is very invasive, and can even trace
      calls to rcu_read_lock(), RCU access in function tracing is done
      with preempt_disable_notrace(). This requires a synchronize_sched()
      for updates and not a synchronize_rcu().
      
      Function probes (traceon, traceoff, etc.) must be freed only after a
      synchronize_sched() once their entries have been removed from the
      hash. But call_rcu() was being used. Fix this by using call_rcu_sched().
      
      Also fix the usage to use hlist_del_rcu() instead of hlist_del().
      
      Cc: stable@vger.kernel.org
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      740466bc
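      The fix in sketch form (the entry's field names are illustrative):
      
        /* RCU-safe unlink, then defer the free past a sched grace period. */
        hlist_del_rcu(&entry->node);                        /* was: hlist_del() */
        call_rcu_sched(&entry->rcu, ftrace_free_entry_rcu); /* was: call_rcu()  */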
  4. 19 Feb, 2013 1 commit
    • ftrace: Call ftrace cleanup module notifier after all other notifiers · 8c189ea6
      Steven Rostedt (Red Hat) authored
      Commit: c1bf08ac "ftrace: Be first to run code modification on modules"
      
      changed the ftrace module notifier's priority to INT_MAX in order to
      process the ftrace nops before anything else could touch them
      (namely kprobes). This was the correct thing to do.
      
      Unfortunately, the ftrace module notifier also contains the ftrace
      cleanup code. As opposed to the setup code, this code should be
      run *after* all the other module notifiers have run, in case a module
      is doing correct cleanup and unregisters its ftrace hooks. Basically,
      ftrace needs to do cleanup on module removal, as it needs to know about
      code being removed so that it doesn't try to modify that code. But after
      it removes the module from its records, if a ftrace user tries to remove
      a probe, that removal will fail, as the record of that code segment
      no longer exists.
      
      Nothing really bad happens if the probe removal is called after ftrace
      did the clean up, but the ftrace removal function will return an error.
      Correct code (such as kprobes) will produce a WARN_ON() if it fails
      to remove the probe. As people get annoyed by frivolous warnings, it's
      best to do the ftrace clean up after everything else.
      
      By splitting the ftrace_module_notifier into two notifiers, one that
      does the module load setup that is run at high priority, and the other
      that is called for module clean up that is run at low priority, the
      problem is solved.
      
      Cc: stable@vger.kernel.org
      Reported-by: Frank Ch. Eigler <fche@redhat.com>
      Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      8c189ea6
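      A sketch of the split; the two callbacks are assumed to be the existing
      setup and cleanup halves of the old notifier:
      
        #include <linux/notifier.h>
        
        static struct notifier_block ftrace_module_enter_nb = {
                .notifier_call = ftrace_module_notify_enter,
                .priority      = INT_MAX, /* run before anything touches the code */
        };
        
        static struct notifier_block ftrace_module_exit_nb = {
                .notifier_call = ftrace_module_notify_exit,
                .priority      = INT_MIN, /* run after users removed their probes */
        };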
  5. 23 Jan, 2013 4 commits
    • tracing: Avoid unnecessary multiple recursion checks · edc15caf
      Steven Rostedt authored
      When function tracing occurs, the following steps are made:
      
        If the arch does not support a ftrace feature, an internal function
        (which uses the INTERNAL bits) is called, which calls...
      
        If a callback is registered on the "global" list, the list function
        is called, and its recursion check uses the GLOBAL bits; it then
        calls...
      
        The function callback itself, which can use the FTRACE bits to
        check for recursion.
      
      Now if the arch does not support a feature, and it calls the global
      list function, which calls the ftrace callback, all three of these
      steps will do recursion protection. There's no reason to do one if
      the previous caller already did: the recursion being protected
      against would go through the same steps again.
      
      To prevent the multiple recursion checks, if a recursion
      bit is set that is higher than the MAX bit of the current
      check, then we know that the check was made by the previous
      caller, and we can skip the current check.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      edc15caf
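      A sketch of the skip logic; the helper name matches the recursion
      helpers this series introduces, but the bit layout is illustrative:
      
        static __always_inline int
        trace_test_and_set_recursion(int start, int max)
        {
                unsigned int val = current->trace_recursion;
                int bit;
        
                /* A caller above us holds a bit past our MAX: its recursion
                 * check already covers this one, so skip the check. */
                if (val >> (max + 1))
                        return 0;
        
                bit = trace_get_context_bit() + start;
                if (unlikely(val & (1 << bit)))
                        return -1;      /* genuine recursion: abort */
        
                current->trace_recursion = val | (1 << bit);
                return bit;
        }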
    • ftrace: Add context level recursion bit checking · c29f122c
      Steven Rostedt authored
      Currently, for recursion checking in the function tracer, ftrace
      tests a task_struct bit to determine whether the function tracer has
      recursed or not. If it has, it will return without going further.
      
      But this leads to races. If an interrupt came in after the bit
      was set, the functions being traced would see that bit set and
      think that the function tracer recursed on itself, and would return.
      
      Instead add a bit for each context (normal, softirq, irq and nmi).
      
      A check of which context the task is in is made before testing the
      associated bit. Now if an interrupt preempts the function tracer
      after the previous context has been set, the interrupt functions
      can still be traced.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      c29f122c
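      A sketch of the per-context bit selection (the ordering of contexts is
      illustrative):
      
        static __always_inline int trace_get_context_bit(void)
        {
                if (in_nmi())
                        return 0;       /* NMI */
                if (in_irq())
                        return 1;       /* hard interrupt */
                if (in_softirq())
                        return 2;       /* soft interrupt */
                return 3;               /* normal thread context */
        }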
    • ftrace: Optimize the function tracer list loop · 0a016409
      Steven Rostedt authored
      There are lots of places that perform:
      
             op = rcu_dereference_raw(ftrace_control_list);
             while (op != &ftrace_list_end) {
      
      Add a helper macro to do this, and also optimize for a single
      entity. That is, gcc will optimize a loop for either no iterations
      or more than one iteration. But usually only a single callback
      is registered with the function tracer, so the optimized case
      should be a single pass. To do this we now do:
      
      	op = rcu_dereference_raw(list);
      	do {
      		[...]
      	} while (likely(op = rcu_dereference_raw((op)->next)) &&
      	       unlikely((op) != &ftrace_list_end));
      
      An op is always registered (ftrace_list_end when no callback is
      registered), thus when a single callback is registered, the linked
      list looks like:
      
       top => callback => ftrace_list_end => NULL.
      
      The likely(op = op->next) still must be performed due to the race
      of removing the callback, where the first op assignment could
      equal ftrace_list_end. In that case, the op->next would be NULL.
      But this is unlikely (only happens in a race condition when
      removing the callback).
      
      But it is very likely that the next op is ftrace_list_end, unless
      more than one callback has been registered. This tells gcc what the
      most common case is, and produces a fast path with the fewest
      branches.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      0a016409
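      The helper macro pair might look like this (a sketch modeled on the
      loop above; ftrace_ops_list is the list head and ftrace_list_end the
      sentinel):
      
        #define do_for_each_ftrace_op(op, list)                 \
                op = rcu_dereference_raw(list);                 \
                do
        
        #define while_for_each_ftrace_op(op)                            \
                while (likely(op = rcu_dereference_raw((op)->next)) &&  \
                       unlikely((op) != &ftrace_list_end))
        
        /* usage: */
        do_for_each_ftrace_op(op, ftrace_ops_list) {
                op->func(ip, parent_ip, op, regs);
        } while_for_each_ftrace_op(op);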
    • ftrace: Fix global function tracers that are not recursion safe · 63503794
      Steven Rostedt authored
      If one of the function tracers set by the global ops is not recursion
      safe, it can still be called directly without the added recursion
      protection supplied by the ftrace infrastructure.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      63503794
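      A sketch of the fix: wrap the direct call in the recursion guard the
      infrastructure would otherwise supply (helper and constant names as in
      the recursion sketches above, illustrative):
      
        static void ftrace_global_list_func(unsigned long ip,
                                            unsigned long parent_ip,
                                            struct ftrace_ops *op,
                                            struct pt_regs *regs)
        {
                int bit;
        
                bit = trace_test_and_set_recursion(TRACE_GLOBAL_START,
                                                   TRACE_GLOBAL_MAX);
                if (bit < 0)
                        return;         /* recursion detected: do not re-enter */
        
                __ftrace_global_list_func(ip, parent_ip, op, regs);
        
                trace_clear_recursion(bit);
        }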
  6. 22 Jan, 2013 2 commits
    • ftrace: Move ARCH_SUPPORTS_FTRACE_SAVE_REGS in Kconfig · 06aeaaea
      Masami Hiramatsu authored
      Move the SAVE_REGS support flag into Kconfig and rename it to
      CONFIG_DYNAMIC_FTRACE_WITH_REGS. This also introduces
      CONFIG_HAVE_DYNAMIC_FTRACE_WITH_REGS, which indicates that the
      architecture-dependent part of ftrace has code that saves full
      registers. CONFIG_DYNAMIC_FTRACE_WITH_REGS, on the other hand,
      indicates that this code is enabled.
      
      Link: http://lkml.kernel.org/r/20120928081516.3560.72534.stgit@ltc138.sdl.hitachi.co.jp
      
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      06aeaaea
    • ftrace: Be first to run code modification on modules · c1bf08ac
      Steven Rostedt authored
      If some other kernel subsystem has a module notifier, and adds a kprobe
      to a ftrace mcount point (now that kprobes work on ftrace points),
      when the ftrace notifier runs it will fail and disable ftrace, as well
      as kprobes that are attached to ftrace points.
      
      Here's the error:
      
       WARNING: at kernel/trace/ftrace.c:1618 ftrace_bug+0x239/0x280()
       Hardware name: Bochs
       Modules linked in: fat(+) stap_56d28a51b3fe546293ca0700b10bcb29__8059(F) nfsv4 auth_rpcgss nfs dns_resolver fscache xt_nat iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack lockd sunrpc ppdev parport_pc parport microcode virtio_net i2c_piix4 drm_kms_helper ttm drm i2c_core [last unloaded: bid_shared]
       Pid: 8068, comm: modprobe Tainted: GF            3.7.0-0.rc8.git0.1.fc19.x86_64 #1
       Call Trace:
        [<ffffffff8105e70f>] warn_slowpath_common+0x7f/0xc0
        [<ffffffff81134106>] ? __probe_kernel_read+0x46/0x70
        [<ffffffffa0180000>] ? 0xffffffffa017ffff
        [<ffffffffa0180000>] ? 0xffffffffa017ffff
        [<ffffffff8105e76a>] warn_slowpath_null+0x1a/0x20
        [<ffffffff810fd189>] ftrace_bug+0x239/0x280
        [<ffffffff810fd626>] ftrace_process_locs+0x376/0x520
        [<ffffffff810fefb7>] ftrace_module_notify+0x47/0x50
        [<ffffffff8163912d>] notifier_call_chain+0x4d/0x70
        [<ffffffff810882f8>] __blocking_notifier_call_chain+0x58/0x80
        [<ffffffff81088336>] blocking_notifier_call_chain+0x16/0x20
        [<ffffffff810c2a23>] sys_init_module+0x73/0x220
        [<ffffffff8163d719>] system_call_fastpath+0x16/0x1b
       ---[ end trace 9ef46351e53bbf80 ]---
       ftrace failed to modify [<ffffffffa0180000>] init_once+0x0/0x20 [fat]
        actual: cc:bb:d2:4b:e1
      
      A kprobe was added to the init_once() function in the fat module on load.
      But this happened before ftrace could have touched the code. As ftrace
      hadn't run yet, the kprobe system had no idea it was a ftrace point and
      simply added a breakpoint to the code (the 0xcc in cc:bb:d2:4b:e1).
      
      Then when ftrace went to modify the location from a call to mcount/fentry
      into a nop, it didn't see a call op, but instead it saw the breakpoint op
      and not knowing what to do with it, ftrace shut itself down.
      
      The solution is to simply give the ftrace module notifier the max priority.
      This should have been done regardless, as the core code ftrace modification
      also happens very early on in boot up. This makes the module modification
      closer to core modification.
      
      Link: http://lkml.kernel.org/r/20130107140333.593683061@goodmis.org
      
      Cc: stable@vger.kernel.org
      Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Reported-by: Frank Ch. Eigler <fche@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      c1bf08ac
  7. 18 Dec, 2012 1 commit
  8. 06 Dec, 2012 1 commit
  9. 16 Nov, 2012 1 commit
  10. 01 Nov, 2012 2 commits
  11. 31 Jul, 2012 3 commits
  12. 20 Jul, 2012 4 commits
    • ftrace/x86: Add separate function to save regs · 08f6fba5
      Steven Rostedt authored
      Add a way to have different functions calling different trampolines.
      If a ftrace_ops wants regs saved on return, then only the functions
      registered with that ops will save regs. Functions registered by
      other ops are not affected, unless the functions overlap.
      
      If one ftrace_ops registers functions A, B and C, and another ops
      registers A and D with regs saving, then only functions A and D will
      save regs; functions B and C work as normal. Although A is registered
      by both ops (normal and save-regs), this is fine: saving the regs is
      needed to satisfy one of the ops that calls it, and the regs are
      simply ignored by the other ops' function.
      
      x86_64 implements the full regs saving, while i386 just passes NULL
      for regs to satisfy the calling convention: an arch must supply both
      the regs and ftrace_ops parameters, even if regs is just NULL.
      
      It is OK for an arch to pass NULL regs. All function trace users that
      require regs passing must add the flag FTRACE_OPS_FL_SAVE_REGS when
      registering the ftrace_ops. If the arch does not support saving regs
      then the ftrace_ops will fail to register. The flag
      FTRACE_OPS_FL_SAVE_REGS_IF_SUPPORTED may be set instead, to prevent the
      ftrace_ops from failing to register. In this case, the handler may
      either check whether regs is not NULL or check whether
      ARCH_SUPPORTS_FTRACE_SAVE_REGS is defined.
      If the arch supports passing regs it will set this macro and pass regs
      for ops that request them. All other archs will just pass NULL.
      
      Link: http://lkml.kernel.org/r/20120711195745.107705970@goodmis.org
      
      Cc: Alexander van Heukelum <heukelum@fastmail.fm>
      Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      08f6fba5
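      A sketch of registering a regs-saving callback under the rules above
      (my_func is an illustrative name):
      
        static struct ftrace_ops my_ops = {
                .func  = my_func,
                /* Hard requirement: registration fails on archs without
                 * save-regs support. */
                .flags = FTRACE_OPS_FL_SAVE_REGS,
                /* Alternative: register anyway, get regs only where supported:
                 *      .flags = FTRACE_OPS_FL_SAVE_REGS_IF_SUPPORTED,
                 * then test "if (regs)" (or ARCH_SUPPORTS_FTRACE_SAVE_REGS)
                 * inside the handler. */
        };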
    • ftrace: Return pt_regs to function trace callback · a1e2e31d
      Steven Rostedt authored
      Pass the pt_regs as the 4th parameter to the function tracer callback.
      
      Later patches that implement regs passing for the architectures will require
      having the ftrace_ops set the SAVE_REGS flag, which will tell the arch
      to take the time to pass a full set of pt_regs to the ftrace_ops callback
      function. If the arch does not support it then it should pass NULL.
      
      If an arch can pass full regs, then it should define:
       ARCH_SUPPORTS_FTRACE_SAVE_REGS to 1
      
      Link: http://lkml.kernel.org/r/20120702201821.019966811@goodmis.org
      Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      a1e2e31d
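      The resulting callback shape, in sketch form (see linux/ftrace.h for
      the real typedef; my_callback is illustrative):
      
        typedef void (*ftrace_func_t)(unsigned long ip, unsigned long parent_ip,
                                      struct ftrace_ops *op, struct pt_regs *regs);
        
        static void my_callback(unsigned long ip, unsigned long parent_ip,
                                struct ftrace_ops *op, struct pt_regs *regs)
        {
                if (!regs)
                        return; /* the arch did not save registers */
                /* inspect or modify the saved registers here */
        }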
    • ftrace: Consolidate arch dependent functions with 'list' function · ccf3672d
      Steven Rostedt authored
      As the function tracer starts to get more features, the support for
      these features will spread throughout the different architectures
      over time. These features boil down to what each arch does in the
      mcount trampoline (the ftrace_caller).
      
      Currently there are two features that are not the same throughout
      the archs.
      
       1) Support to stop function tracing before the callback
       2) passing of the ftrace ops
      
      Both of these require placing an indirect function to support the
      features if the mcount trampoline does not.
      
      On a side note, for all architectures, when more than one callback
      is registered to the function tracer, an intermediate 'list' function
      is called by the mcount trampoline to iterate through the callbacks
      that are registered.
      
      Instead of making a separate function for each of these features,
      and requiring several indirect calls, just use the single 'list' function
      as the intermediate, to handle all cases. If an arch does not support
      the 'stop function tracing' or the passing of ftrace ops, just force
      it to use the list function that will handle the features required.
      
      This makes the code cleaner and simpler and removes a lot of
      #ifdefs in the code.
      
      Link: http://lkml.kernel.org/r/20120612225424.495625483@goodmis.org
      Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      ccf3672d
    • ftrace: Pass ftrace_ops as third parameter to function trace callback · 2f5f6ad9
      Steven Rostedt authored
      Currently the function trace callback receives only the ip and parent_ip
      of the function that it traced. It would be more powerful to also pass
      in the ops that registered the function. This allows the same function
      to act differently depending on which ftrace_ops registered it.
      
      Link: http://lkml.kernel.org/r/20120612225424.267254552@goodmis.org
      Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      2f5f6ad9
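      A sketch of what this enables: one handler serving several
      registrations, keyed on the ops (traceon_ops is an illustrative name;
      the callback still has three parameters at this point in the series):
      
        static void my_handler(unsigned long ip, unsigned long parent_ip,
                               struct ftrace_ops *op)
        {
                if (op == &traceon_ops)
                        tracing_on();   /* behave one way for this ops... */
                else
                        tracing_off();  /* ...and another for the rest */
        }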
  13. 15 Jun, 2012 1 commit
  14. 17 May, 2012 7 commits
    • ftrace/x86: Have x86 ftrace use the ftrace_modify_all_code() · e4f5d544
      Steven Rostedt authored
      To remove duplicate code, have the x86 arch_ftrace_update_code()
      use the generic ftrace_modify_all_code(). This requires that the
      default ftrace_replace_code() become a weak function so that an
      arch may override it.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      e4f5d544
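      A sketch of the weak default:
      
        /* Generic version; an arch (e.g. x86) overrides it by providing a
         * non-weak definition of its own. */
        void __weak ftrace_replace_code(int enable)
        {
                /* generic, record-by-record implementation */
        }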
    • ftrace: Make ftrace_modify_all_code() global for archs to use · 8ed3e2cf
      Steven Rostedt authored
      Rename __ftrace_modify_code() to ftrace_modify_all_code() and make
      it global for all archs to use. This will remove the duplication
      of code, as archs that can modify code without stop_machine()
      can use it directly outside of the stop_machine() call.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      8ed3e2cf
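      A sketch of the now-global function; the command flags follow the
      existing FTRACE_* update flags, though the exact body may differ:
      
        void ftrace_modify_all_code(int command)
        {
                if (command & FTRACE_UPDATE_CALLS)
                        ftrace_replace_code(1);
                else if (command & FTRACE_DISABLE_CALLS)
                        ftrace_replace_code(0);
        
                if (command & FTRACE_UPDATE_TRACE_FUNC)
                        ftrace_update_ftrace_func(ftrace_trace_function);
        }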
    • ftrace: Return record ip addr for ftrace_location() · f0cf973a
      Steven Rostedt authored
      ftrace_location() is passed an addr, and returns 1 if the addr is
      on a ftrace nop (or caller to ftrace_caller), and 0 otherwise.
      
      To let kprobes know whether it should move a breakpoint or not, it
      must return the actual addr that is the start of the ftrace nop.
      This way a kprobe placed on the location of a ftrace nop can
      instead be placed on the instruction after the nop. Even if the
      probe addr is on the second or later byte of the nop, it can
      simply be moved forward.
      
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      f0cf973a
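      A sketch of how a caller like kprobes can use the returned address
      ('addr' being where the breakpoint was requested):
      
        unsigned long faddr = ftrace_location(addr);
        
        if (faddr)
                addr = faddr;   /* snap to the start of the ftrace nop */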
    • ftrace: Consolidate ftrace_location() and ftrace_text_reserved() · a650e02a
      Steven Rostedt authored
      Both ftrace_location() and ftrace_text_reserved() do basically the same
      thing. They search to see if an address is in the ftrace table (which
      contains the addresses that may change from nop to a call to
      ftrace_caller). The difference is that ftrace_location() searches a
      single address, while ftrace_text_reserved() searches a range.
      
      This also makes ftrace_text_reserved() faster, as it now uses a
      bsearch() instead of linearly searching all the addresses within a page.
      
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      a650e02a
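      A sketch of the consolidation: both interfaces become wrappers around
      a single range lookup (ftrace_location_range; signatures approximated):
      
        unsigned long ftrace_location(unsigned long ip)
        {
                return ftrace_location_range(ip, ip);   /* single address */
        }
        
        int ftrace_text_reserved(void *start, void *end)
        {
                return !!ftrace_location_range((unsigned long)start,
                                               (unsigned long)end);
        }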
    • ftrace: Speed up search by skipping pages by address · 9644302e
      Steven Rostedt authored
      As all records in a page of the ftrace table are sorted, we can
      speed up the search algorithm by checking if the address to look for
      falls in between the first and last record ip on the page.
      
      This speeds up both the ftrace_location() and ftrace_text_reserved()
      algorithms, as it can skip full pages when the search address is
      not in them.
      
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      9644302e
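      A sketch of the page-skip check (the record and page field names are
      illustrative approximations of struct dyn_ftrace/ftrace_page):
      
        for (pg = ftrace_pages_start; pg; pg = pg->next) {
                if (addr < pg->records[0].ip ||
                    addr > pg->records[pg->index - 1].ip)
                        continue;       /* address cannot be in this page */
                rec = bsearch(&key, pg->records, pg->index,
                              sizeof(struct dyn_ftrace), ftrace_cmp_recs);
                if (rec)
                        return rec->ip;
        }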
    • ftrace: Remove extra helper functions · 706c81f8
      Steven Rostedt authored
      The ftrace_record_ip() and ftrace_alloc_dyn_node() functions date
      from the time of the ftrace daemon. Although they are still used,
      they make things more complex than necessary.
      
      Move the code into the one function that uses it, and remove the
      helper functions.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      706c81f8
    • ftrace: Sort all function addresses, not just per page · 9fd49328
      Steven Rostedt authored
      Instead of just sorting the ips of the functions per ftrace page,
      sort the entire list before adding them to the ftrace pages.
      
      This will allow the bsearch algorithm to be sped up, as it can also
      narrow the search by page, not just by records within a page.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      9fd49328
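      In sketch form, using the kernel's sort() helper (the comparator name
      is illustrative; start/count describe the table of mcount call sites):
      
        #include <linux/sort.h>
        
        static int ftrace_cmp_ips(const void *a, const void *b)
        {
                const unsigned long *ipa = a, *ipb = b;
        
                if (*ipa < *ipb)
                        return -1;
                return *ipa > *ipb;
        }
        
        /* one global sort up front, before filling the ftrace pages */
        sort(start, count, sizeof(*start), ftrace_cmp_ips, NULL);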
  15. 09 May, 2012 1 commit
  16. 14 Mar, 2012 1 commit
    • ftrace: Fix function_graph for archs that test ftrace_trace_function · db6544e0
      Rajesh Bhagat authored
      When CONFIG_DYNAMIC_FTRACE is not set, some archs (ARM) test the
      variable ftrace_trace_function to determine if they should call the
      function tracer. If it is not set to ftrace_stub, they will call the
      function and return, and not call the function graph tracer.
      
      But some of these archs (ARM) do not have the assembly code
      to test if function tracing is enabled or not (quick stop of tracing)
      and it calls the helper routine ftrace_test_stop_func() instead.
      
      If function tracer is enabled and then disabled, the variable
      ftrace_trace_function is still set to the helper routine
      ftrace_test_stop_func(), and not to ftrace_stub. This will
      prevent the function graph tracer from ever running.
      
      Output before patch
      /debug/tracing # echo function > current_tracer
      /debug/tracing # echo function_graph > current_tracer
      /debug/tracing # cat trace
      
      Output after patch
      /debug/tracing # echo function > current_tracer
      /debug/tracing # echo function_graph > current_tracer
      /debug/tracing # cat trace
       0) ! 253.375 us |  } /* irq_enter */
       0)              |  generic_handle_irq() {
       0)              |    handle_fasteoi_irq() {
       0)   9.208 us   |      _raw_spin_lock();
       0)              |      handle_irq_event() {
       0)              |        handle_irq_event_percpu() {
      Signed-off-by: Rajesh Bhagat <rajesh.lnx@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      db6544e0
  17. 22 Feb, 2012 2 commits
    • ftrace, perf: Add filter support for function trace event · 5500fa51
      Jiri Olsa authored
      Adding support for filtering the function trace event via the perf
      interface. It is now possible to use the filter interface in the
      perf tool like:
      
        perf record -e ftrace:function --filter="(ip == mm_*)" ls
      
      The filter syntax is restricted to the 'ip' field only, and the
      following operators are accepted: '==', '!=' and '||', ending up
      with filter strings like:
      
        ip == f1[, ]f2 ... || ip != f3[, ]f4 ...
      
      with comma ',' or space ' ' as a function separator. If the
      space ' ' is used as a separator, the right side of the
      assignment needs to be enclosed in double quotes '"', e.g.:
      
        perf record -e ftrace:function --filter '(ip == do_execve,sys_*,ext*)' ls
        perf record -e ftrace:function --filter '(ip == "do_execve,sys_*,ext*")' ls
        perf record -e ftrace:function --filter '(ip == "do_execve sys_* ext*")' ls
      
      The '==' operator adds trace filter with same effect as would
      be added via set_ftrace_filter file.
      
      The '!=' operator adds trace filter with same effect as would
      be added via set_ftrace_notrace file.
      
      The right side of the '!=' and '==' operators is a list of functions
      or regexps to be added to the filter, separated by spaces.
      
      The '||' operator is used for connecting multiple filter definitions
      together. It is possible to have more than one '==' and '!='
      operators within one filter string.
      
      Link: http://lkml.kernel.org/r/1329317514-8131-8-git-send-email-jolsa@redhat.com
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      5500fa51
    • ftrace: Add enable/disable ftrace_ops control interface · e248491a
      Jiri Olsa authored
      Adding a way to temporarily enable/disable a ftrace_ops. The change
      follows the same pattern as the 'global' ftrace_ops.
      
      Introducing 2 global ftrace_ops - control_ops and ftrace_control_list -
      which take over all ftrace_ops registered with the FTRACE_OPS_FL_CONTROL
      flag. In addition, a new per-cpu flag called 'disabled' is added to
      ftrace_ops to provide the control information for each cpu.
      
      When ftrace_ops with FTRACE_OPS_FL_CONTROL is registered, it is
      set as disabled for all cpus.
      
      The ftrace_control_list contains all the registered 'control' ftrace_ops.
      The control_ops provides a function which iterates ftrace_control_list
      and checks the 'disabled' flag on the current cpu.
      
      Adding 3 inline functions:
        ftrace_function_local_disable/ftrace_function_local_enable
        - enable/disable the ftrace_ops on current cpu
        ftrace_function_local_disabled
        - get disabled ftrace_ops::disabled value for current cpu
      
      Link: http://lkml.kernel.org/r/1329317514-8131-2-git-send-email-jolsa@redhat.com
      Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      e248491a
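      A sketch of the three inline helpers; the real ftrace_ops::disabled
      field is per-cpu, but the exact types may differ:
      
        static inline void ftrace_function_local_disable(struct ftrace_ops *ops)
        {
                (*this_cpu_ptr(ops->disabled))++;
        }
        
        static inline void ftrace_function_local_enable(struct ftrace_ops *ops)
        {
                (*this_cpu_ptr(ops->disabled))--;
        }
        
        static inline int ftrace_function_local_disabled(struct ftrace_ops *ops)
        {
                return *this_cpu_ptr(ops->disabled);
        }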
  18. 14 Feb, 2012 1 commit
  19. 03 Feb, 2012 1 commit
  20. 21 Dec, 2011 2 commits