1. 21 Jul, 2015 (1 commit)
  2. 03 Apr, 2015 (1 commit)
  3. 09 Mar, 2015 (3 commits)
    • ftrace: Fix ftrace enable ordering of sysctl ftrace_enabled · 524a3868
      Committed by Steven Rostedt (Red Hat)
      Some archs (specifically PowerPC) are sensitive to the ordering of
      enabling the calls to function tracing and setting the function
      to be traced.
      
      That is, update_ftrace_function() sets what function the ftrace_caller
      trampoline should call. Some archs require this to be set before
      calling ftrace_run_update_code().
      
      Another bug was discovered: ftrace_startup_sysctl() called
      ftrace_run_update_code() directly. If the function that the
      ftrace_caller trampoline calls changes, it will not be updated.
      Instead, ftrace_startup_enable() should be called, because it tests
      whether the callback changed since the code was disabled, and will
      tell the arch to update appropriately. Most archs do not need this
      notification, but PowerPC does.
      
      The problem could be seen by the following commands:
      
       # echo 0 > /proc/sys/kernel/ftrace_enabled
       # echo function > /sys/kernel/debug/tracing/current_tracer
       # echo 1 > /proc/sys/kernel/ftrace_enabled
       # cat /sys/kernel/debug/tracing/trace
      
      The trace will show that function tracing was not active.
      
      Cc: stable@vger.kernel.org # 2.6.27+
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      524a3868
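The ordering constraint the commit describes can be modeled in a few lines of plain C. This is an illustrative userspace sketch, not the kernel code: the names `ftrace_trace_function`, `startup_fixed`, and `call_site` here are stand-ins, and the "patching" is just a flag. The point it shows is that the dispatch pointer must be populated before any patched call site starts using it.

```c
#include <assert.h>
#include <stddef.h>

/* Model: a "trampoline" dispatches through a function pointer, like
 * ftrace_caller dispatching to the current callback. */
static void (*ftrace_trace_function)(void) = NULL;
static int patched;          /* models the call sites being live */
static int calls_dispatched; /* counts successful dispatches */

static void my_callback(void) { calls_dispatched++; }

static void call_site(void)
{
    /* Once patched, every call goes through the dispatch pointer;
     * if it were still NULL at this point, a real arch would crash. */
    if (patched) {
        assert(ftrace_trace_function != NULL);
        ftrace_trace_function();
    }
}

/* The order the fix enforces: set the function first (analogous to
 * update_ftrace_function()), then enable the call sites (analogous
 * to ftrace_run_update_code()). */
static void startup_fixed(void)
{
    ftrace_trace_function = my_callback;
    patched = 1;
}
```

Reversing the two statements in `startup_fixed()` is the hazard the commit fixes: a call site could fire between enabling and setting the function.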
    • ftrace: Fix en(dis)able graph caller when en(dis)abling record via sysctl · 1619dc3f
      Committed by Pratyush Anand
      When ftrace is enabled globally through the proc interface, we must check if
      ftrace_graph_active is set. If it is set, then we should also pass the
      FTRACE_START_FUNC_RET command to ftrace_run_update_code(). Similarly, when
      ftrace is disabled globally through the proc interface, we must check if
      ftrace_graph_active is set. If it is set, then we should also pass the
      FTRACE_STOP_FUNC_RET command to ftrace_run_update_code().
      
      Consider the following situation.
      
       # echo 0 > /proc/sys/kernel/ftrace_enabled
      
      After this ftrace_enabled = 0.
      
       # echo function_graph > /sys/kernel/debug/tracing/current_tracer
      
      Since ftrace_enabled = 0, ftrace_enable_ftrace_graph_caller() is never
      called.
      
       # echo 1 > /proc/sys/kernel/ftrace_enabled
      
      Now ftrace_enabled will be set to true, but still
      ftrace_enable_ftrace_graph_caller() will not be called, which is not
      desired.
      
      Further if we execute the following after this:
        # echo nop > /sys/kernel/debug/tracing/current_tracer
      
      Now since ftrace_enabled is set it will call
      ftrace_disable_ftrace_graph_caller(), which causes a kernel warning on
      the ARM platform.
      
      On the ARM platform, when ftrace_enable_ftrace_graph_caller() is called,
      it checks whether the old instruction is a nop or not. If it's not a nop,
      then it returns an error. If it is a nop then it replaces instruction at
      that address with a branch to ftrace_graph_caller.
      ftrace_disable_ftrace_graph_caller() behaves just the opposite. Therefore,
      if generic ftrace code ever calls either ftrace_enable_ftrace_graph_caller()
      or ftrace_disable_ftrace_graph_caller() consecutively two times in a row,
      then it will return an error, which will cause the generic ftrace code to
      raise a warning.
      
      Note, x86 does not have an issue with this because the architecture
      specific code for ftrace_enable_ftrace_graph_caller() and
      ftrace_disable_ftrace_graph_caller() does not check the previous state,
      and calling either of these functions twice in a row has no ill effect.
      
      Link: http://lkml.kernel.org/r/e4fbe64cdac0dd0e86a3bf914b0f83c0b419f146.1425666454.git.panand@redhat.com
      
      Cc: stable@vger.kernel.org # 2.6.31+
      Signed-off-by: Pratyush Anand <panand@redhat.com>
      [
        removed extra if (ftrace_start_up) and defined ftrace_graph_active as 0
        if CONFIG_FUNCTION_GRAPH_TRACER is not set.
      ]
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      1619dc3f
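The shape of the fix can be sketched as follows. This is a hedged model, not the kernel diff: `FTRACE_START_FUNC_RET` and `FTRACE_STOP_FUNC_RET` are real ftrace command flags, but the numeric values and the two helper functions here are made up for the sketch.

```c
#include <assert.h>

/* Illustrative flag values (the kernel's actual values differ). */
#define FTRACE_UPDATE_CALLS    (1 << 0)
#define FTRACE_DISABLE_CALLS   (1 << 1)
#define FTRACE_START_FUNC_RET  (1 << 2)
#define FTRACE_STOP_FUNC_RET   (1 << 3)

static int ftrace_graph_active; /* nonzero when graph tracer is in use */

/* Build the command for enabling via the sysctl: fold in the
 * graph-caller start command when the graph tracer is active. */
static int sysctl_enable_command(void)
{
    int command = FTRACE_UPDATE_CALLS;

    if (ftrace_graph_active)
        command |= FTRACE_START_FUNC_RET;
    return command;
}

/* Likewise for disabling: fold in the graph-caller stop command. */
static int sysctl_disable_command(void)
{
    int command = FTRACE_DISABLE_CALLS;

    if (ftrace_graph_active)
        command |= FTRACE_STOP_FUNC_RET;
    return command;
}
```

Without the `ftrace_graph_active` checks, the enable path never tells the arch to install the graph caller, which is exactly the failure sequence shown above.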
    • ftrace: Clear REGS_EN and TRAMP_EN flags on disabling record via sysctl · b24d443b
      Committed by Steven Rostedt (Red Hat)
      When /proc/sys/kernel/ftrace_enabled is set to zero, all function
      tracing is disabled. But the records that represent the functions
      still hold information about the ftrace_ops that are hooked to them.
      
      ftrace_ops may request "REGS" (have a full set of pt_regs passed to
      the callback), or "TRAMP" (the ops has its own trampoline to use).
      When the record is updated to represent the state of the ops hooked
      to it, it sets "REGS_EN" and/or "TRAMP_EN" to state that the callback
      points to the correct trampoline (REGS has its own trampoline).
      
      When ftrace_enabled is set to zero, all ftrace locations are nops,
      so they do not point to any trampoline. But the _EN flags are still
      set. This can cause the accounting to go wrong when ftrace_enabled
      is cleared and an ops that has a trampoline is registered or unregistered.
      
      For example, the following will cause ftrace to crash:
      
       # echo function_graph > /sys/kernel/debug/tracing/current_tracer
       # echo 0 > /proc/sys/kernel/ftrace_enabled
       # echo nop > /sys/kernel/debug/tracing/current_tracer
       # echo 1 > /proc/sys/kernel/ftrace_enabled
       # echo function_graph > /sys/kernel/debug/tracing/current_tracer
      
      As function_graph uses a trampoline, when ftrace_enabled is set to zero
      the updates to the record are not done. When enabling function_graph
      again, the record will still have the TRAMP_EN flag set, and it will
      look for an op that has a trampoline other than the function_graph
      ops, and fail to find one.
      
      Cc: stable@vger.kernel.org # 3.17+
      Reported-by: Pratyush Anand <panand@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      b24d443b
  4. 04 Feb, 2015 (1 commit)
    • tracing: Convert the tracing facility over to use tracefs · 8434dc93
      Committed by Steven Rostedt (Red Hat)
      debugfs was fine for the tracing facility as a quick way to get
      an interface. Now that tracing has matured, it should separate itself
      from debugfs so that it can be mounted separately without needing
      to mount all of debugfs with it. Users resist using tracing
      because it requires mounting debugfs. Giving tracing its own file
      system lets users get the features of tracing without needing to bring
      in the rest of the kernel's debug infrastructure.
      
      Another reason for tracefs is that debugfs does not support mkdir.
      Currently, to create instances, one does a mkdir in the tracing/instance
      directory. This is implemented via a hack that forces debugfs to do
      something it was not intended to do. By converting over to tracefs, this
      hack can be removed and mkdir can be properly implemented. This patch does
      not address that yet, but it lays the groundwork for it to be done.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      8434dc93
  5. 23 Jan, 2015 (1 commit)
  6. 15 Jan, 2015 (2 commits)
  7. 22 Nov, 2014 (1 commit)
    • ftrace, kprobes: Support IPMODIFY flag to find IP modify conflict · f8b8be8a
      Committed by Masami Hiramatsu
      Introduce FTRACE_OPS_FL_IPMODIFY to avoid conflicts among
      ftrace users who may modify regs->ip to change the execution
      path. If two or more users modify regs->ip on the same
      function entry, one of them will be broken. So they must set the
      IPMODIFY flag and make sure that ftrace_set_filter_ip() succeeds.

      Note that ftrace doesn't allow an ftrace_ops with the IPMODIFY
      flag to have a notrace hash, and the ftrace_ops must have a
      filter hash (so that it can hook only specific entries), because
      it strongly depends on the address and must be allowed for only
      a few selected functions.
      
      Link: http://lkml.kernel.org/r/20141121102516.11844.27829.stgit@localhost.localdomain
      
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Seth Jennings <sjenning@redhat.com>
      Cc: Petr Mladek <pmladek@suse.cz>
      Cc: Vojtech Pavlik <vojtech@suse.cz>
      Cc: Miroslav Benes <mbenes@suse.cz>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      [ fixed up some of the comments ]
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      f8b8be8a
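The "at most one regs->ip modifier per function entry" rule can be modeled compactly. This is a simplified userspace sketch under stated assumptions: `register_ipmodify`, `unregister_ipmodify`, and the integer ops/ip identifiers are hypothetical; the kernel tracks this through the ftrace hashes, not a flat array.

```c
#include <assert.h>

#define MAX_FUNCS 16

/* For each function entry (indexed by a toy "ip"), remember which
 * IPMODIFY ops, if any, owns it. 0 means unowned. */
static int ipmodify_owner[MAX_FUNCS];

/* Returns 0 on success, -1 if another IPMODIFY ops already owns ip,
 * mirroring why callers must check ftrace_set_filter_ip()'s result. */
static int register_ipmodify(int ops_id, int ip)
{
    if (ipmodify_owner[ip] && ipmodify_owner[ip] != ops_id)
        return -1; /* a second regs->ip modifier would break the first */
    ipmodify_owner[ip] = ops_id;
    return 0;
}

static void unregister_ipmodify(int ops_id, int ip)
{
    if (ipmodify_owner[ip] == ops_id)
        ipmodify_owner[ip] = 0;
}
```

The design choice follows directly from the commit text: since two IP modifiers on one entry cannot both win, registration of the second must fail rather than silently break the first.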
  8. 20 Nov, 2014 (2 commits)
  9. 14 Nov, 2014 (2 commits)
    • tracing: Replace seq_printf by simpler equivalents · fa6f0cc7
      Committed by Rasmus Villemoes
      Using seq_printf to print a simple string or a single character is a
      lot more expensive than it needs to be, since seq_puts and seq_putc
      exist.
      
      These patches do
      
        seq_printf(m, s) -> seq_puts(m, s)
        seq_printf(m, "%s", s) -> seq_puts(m, s)
        seq_printf(m, "%c", c) -> seq_putc(m, c)
      
      Subsequent patches will simplify further.
      
      Link: http://lkml.kernel.org/r/1415479332-25944-2-git-send-email-linux@rasmusvillemoes.dk
      Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      fa6f0cc7
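Why the substitution is cheaper: `seq_puts`/`seq_putc` skip format-string parsing entirely. The sketch below models them in userspace against a toy buffer type (`struct seq_buf` here is a stand-in, not the kernel's `struct seq_file`), which is enough to show that the output is identical to the `seq_printf` forms being replaced.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Toy output buffer standing in for a seq_file. */
struct seq_buf {
    char buf[256];
    size_t len;
};

/* Equivalent of seq_printf(m, "%s", s), minus format parsing. */
static void seq_puts_model(struct seq_buf *m, const char *s)
{
    size_t n = strlen(s);
    memcpy(m->buf + m->len, s, n);
    m->len += n;
    m->buf[m->len] = '\0';
}

/* Equivalent of seq_printf(m, "%c", c), minus format parsing. */
static void seq_putc_model(struct seq_buf *m, char c)
{
    m->buf[m->len++] = c;
    m->buf[m->len] = '\0';
}
```

Note that the first form being replaced, `seq_printf(m, s)`, is also a latent format-string bug if `s` ever contains `%`, so the conversion is a small hardening as well as a speedup.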
    • ftrace: Have the control_ops get a trampoline · fe578ba3
      Committed by Steven Rostedt (Red Hat)
      With the new logic, if only a single user of ftrace function hooks is
      used, it will get its own trampoline assigned to it.
      
      The problem is that the control_ops is an indirect ops that the perf
      ops uses. That means that when perf registers its ops with
      register_ftrace_function(), it has the CONTROL flag set and gets added
      to the control list instead of the global ftrace list. The control_ops
      is added to the global list instead, and the mcount trampoline calls
      the control_ops function. The control_ops function will iterate the
      control list and call the ops functions that are attached to it.
      
      But currently the trampoline is added to the perf ops and not the
      control ops, and when ftrace tries to find a trampoline hook for it,
      it fails to find one and gives the following splat:
      
       ------------[ cut here ]------------
       WARNING: CPU: 0 PID: 10133 at kernel/trace/ftrace.c:2033 ftrace_get_addr_new+0x6f/0xc0()
       Modules linked in: [...]
       CPU: 0 PID: 10133 Comm: perf Tainted: P               3.18.0-rc1-test+ #388
       Hardware name: Hewlett-Packard HP Compaq Pro 6300 SFF/339A, BIOS K01 v02.05 05/07/2012
        00000000000007f1 ffff8800c2643bc8 ffffffff814fca6e ffff88011ea0ed01
        0000000000000000 ffff8800c2643c08 ffffffff81041ffd 0000000000000000
        ffffffff810c388c ffffffff81a5a350 ffff880119b00000 ffffffff810001c8
       Call Trace:
        [<ffffffff814fca6e>] dump_stack+0x46/0x58
        [<ffffffff81041ffd>] warn_slowpath_common+0x81/0x9b
        [<ffffffff810c388c>] ? ftrace_get_addr_new+0x6f/0xc0
        [<ffffffff810001c8>] ? 0xffffffff810001c8
        [<ffffffff81042031>] warn_slowpath_null+0x1a/0x1c
        [<ffffffff810c388c>] ftrace_get_addr_new+0x6f/0xc0
        [<ffffffff8102e938>] ftrace_replace_code+0xd6/0x334
        [<ffffffff810c4116>] ftrace_modify_all_code+0x41/0xc5
        [<ffffffff8102eba6>] arch_ftrace_update_code+0x10/0x19
        [<ffffffff810c293c>] ftrace_run_update_code+0x21/0x42
        [<ffffffff810c298f>] ftrace_startup_enable+0x32/0x34
        [<ffffffff810c3049>] ftrace_startup+0x14e/0x15a
        [<ffffffff810c307c>] register_ftrace_function+0x27/0x40
        [<ffffffff810dc118>] perf_ftrace_event_register+0x3e/0xee
        [<ffffffff810dbfbe>] perf_trace_init+0x29d/0x2a9
        [<ffffffff810eb422>] perf_tp_event_init+0x27/0x3a
        [<ffffffff810f18bc>] perf_init_event+0x9e/0xed
        [<ffffffff810f1ba4>] perf_event_alloc+0x299/0x330
        [<ffffffff810f236b>] SYSC_perf_event_open+0x3ee/0x816
        [<ffffffff8115a066>] ? mntput+0x2d/0x2f
        [<ffffffff81142b00>] ? __fput+0xa7/0x1b2
        [<ffffffff81091300>] ? do_gettimeofday+0x22/0x3a
        [<ffffffff810f279c>] SyS_perf_event_open+0x9/0xb
        [<ffffffff81502a92>] system_call_fastpath+0x12/0x17
       ---[ end trace 81a53565150e4982 ]---
       Bad trampoline accounting at: ffffffff810001c8 (run_init_process+0x0/0x2d) (10000001)
      
      Update the control_ops trampoline instead of the perf ops one.
      
      Reported-by: lkp@01.org
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      fe578ba3
  10. 12 Nov, 2014 (2 commits)
    • ftrace: Add more information to ftrace_bug() output · 4fd3279b
      Committed by Steven Rostedt (Red Hat)
      With the introduction of the dynamic trampolines, it is useful for
      ftrace_bug() to produce more information about the current state when
      things go wrong. This can help debug issues that may arise.

      Ftrace has lots of checks to make sure that the state of the system it
      touches is exactly what it expects it to be. When it detects an abnormality
      it calls ftrace_bug() and disables itself to prevent any further damage.
      It is crucial that ftrace_bug() produces sufficient information that
      can be used to debug the situation.
      
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Acked-by: Borislav Petkov <bp@suse.de>
      Tested-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Tested-by: Jiri Kosina <jkosina@suse.cz>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      4fd3279b
    • ftrace/x86: Allow !CONFIG_PREEMPT dynamic ops to use allocated trampolines · 12cce594
      Committed by Steven Rostedt (Red Hat)
      When a static ftrace_ops (like the function tracer) enables tracing, and
      it is the only callback that is referencing a function, a trampoline is
      dynamically allocated for that function which calls the callback
      directly, instead of calling a loop function that iterates over all the
      registered ftrace ops (needed when more than one ops is registered).
      
      But when it comes to dynamically allocated ftrace_ops, where they may be
      freed, on a CONFIG_PREEMPT kernel there's no way to know when it is safe
      to free the trampoline. If a task was preempted while executing on the
      trampoline, there's currently no way to know when it will be off that
      trampoline.
      
      But this is not true when it comes to !CONFIG_PREEMPT. The current method
      of calling schedule_on_each_cpu() will force tasks off the trampoline,
      because they can not schedule while on it (kernel preemption is not
      configured). That means it is safe to free a dynamically allocated
      ftrace ops trampoline when CONFIG_PREEMPT is not configured.
      
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Acked-by: Borislav Petkov <bp@suse.de>
      Tested-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Tested-by: Jiri Kosina <jkosina@suse.cz>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      12cce594
  11. 01 Nov, 2014 (2 commits)
    • ftrace/x86: Show trampoline call function in enabled_functions · 15d5b02c
      Committed by Steven Rostedt (Red Hat)
      The file /sys/kernel/debug/tracing/enabled_functions is used to debug
      ftrace function hooks. Add to the output what function is being called
      by the trampoline if the arch supports it.
      
      Add support for this feature in x86_64.
      
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      Tested-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Tested-by: Jiri Kosina <jkosina@suse.cz>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      15d5b02c
    • ftrace/x86: Add dynamic allocated trampoline for ftrace_ops · f3bea491
      Committed by Steven Rostedt (Red Hat)
      The current method of handling multiple function callbacks is to register
      a list function callback that calls all the other callbacks based on
      their hash tables and compare it to the function that the callback was
      called on. But this is very inefficient.
      
      For example, if you are tracing all functions in the kernel and then
      add a kprobe to a function such that the kprobe uses ftrace, the
      mcount trampoline will switch from calling the function trace callback
      to calling the list callback, which will iterate over all registered
      ftrace_ops (in this case, the function tracer and the kprobes callback).
      That means for every function being traced it checks the hash of the
      ftrace_ops for function tracing and for kprobes, even though the kprobe
      is only set on a single function. The kprobes ftrace_ops is checked
      for every function being traced!
      
      Instead of calling the list function for functions that are only being
      traced by a single callback, we can call a dynamically allocated
      trampoline that calls the callback directly. The function graph tracer
      already uses a direct call trampoline when it is being traced by itself,
      but it is not dynamically allocated. Its trampoline is static in the
      kernel core. The infrastructure that called the function graph trampoline
      can also be used to call a dynamically allocated one.

      For now, only ftrace_ops that are not dynamically allocated can have
      a trampoline. That is, users such as the function tracer or stack tracer.
      kprobes and perf allocate their ftrace_ops, and until there's a safe
      way to free the trampoline, it can not be used. The dynamically allocated
      ftrace_ops may, however, use the trampoline if the kernel is not
      compiled with CONFIG_PREEMPT. But that will come later.
      Tested-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Tested-by: Jiri Kosina <jkosina@suse.cz>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      f3bea491
  12. 25 Oct, 2014 (2 commits)
    • ftrace: Fix checking of trampoline ftrace_ops in finding trampoline · 4fc40904
      Committed by Steven Rostedt (Red Hat)
      When modifying code, ftrace has several checks to make sure things
      are being done correctly. One of them is to make sure any code it
      modifies is exactly what it expects it to be before it modifies it.
      In order to do so with the new trampoline logic, it must be able
      to find out what trampoline a function is hooked to in order to
      see if the code that hooks to it is what's expected.
      
      The logic to find the trampoline from a record (the accounting
      descriptor for a function that is hooked) needs to look only at the
      "old_hash" of an ops that is being modified. The old_hash is the list
      of functions an ops was hooked to before its update, and a record
      would only be pointing to an ops that is being modified if it was
      already hooked before.

      Currently, it can pick a modified ops based on the new functions it
      will be hooked to, which picks the wrong trampoline and causes
      the check to fail, disabling ftrace.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      
      ftrace: squash into ordering of ops for modification
      4fc40904
    • ftrace: Set ops->old_hash on modifying what an ops hooks to · 8252ecf3
      Committed by Steven Rostedt (Red Hat)
      The code that checks for trampolines when modifying function hooks
      tests against a modified ops's "old_hash". But the ops old_hash pointer
      is not being updated before the changes are made, making it possible
      to not find the right hash for the callback, possibly causing
      ftrace to break its accounting and disable itself.
      
      Have the ops set its old_hash before the modifying takes place.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      8252ecf3
  13. 13 Sep, 2014 (1 commit)
  14. 10 Sep, 2014 (6 commits)
    • ftrace: Replace tramp_hash with old_*_hash to save space · fef5aeee
      Committed by Steven Rostedt (Red Hat)
      Allowing function callbacks to declare their own trampolines requires
      that each ftrace_ops that has a trampoline must have some sort of
      accounting that keeps track of which ops has a trampoline attached
      to a record.
      
      The easy way to solve this was to add a "tramp_hash" that created a
      hash entry for every function that a ops uses with a trampoline.
      But since we can have literally tens of thousands of functions being
      traced, that means we need tens of thousands of descriptors to map
      the ops to the function in the hash. This is quite expensive and
      can cause enabling and disabling the function graph tracer to take
      some time to start and stop. It can take up to several seconds to
      disable or enable all functions in the function graph tracer for this
      reason.
      
      The better approach, albeit more complex, is to keep track of how ops
      are being enabled and disabled, and use that along with the counting
      of the number of ops attached to records, to determine which ops has
      a trampoline attached to a record at enabling and disabling of
      tracing.
      
      To do this, the tramp_hash has been replaced with an old_filter_hash
      and old_notrace_hash, which get a copy of the ops filter_hash and
      notrace_hash respectively. The old hashes are kept until the ops has
      been modified or removed, and the old hashes are used with the
      accounting logic to determine which ops have the trampoline of
      a record. The reason this has less of a footprint is due to the trick
      that an "empty" filter_hash means "all functions" while an empty
      notrace_hash means "no functions".

      This is much more efficient, doesn't have the delay, and takes up
      much less memory, as we do not need to map all the functions but
      just figure out which functions are mapped at the time it is
      enabled or disabled.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      fef5aeee
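The "empty hash" trick that makes the smaller footprint possible can be shown in a few lines. This is a simplified model (the `struct hash` here is not the kernel's `struct ftrace_hash`): an empty filter_hash means "trace all functions", while an empty notrace_hash means "exclude no functions", so neither needs an entry per traced function.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy hash: a flat list of instruction pointers; count == 0 is "empty". */
struct hash {
    const unsigned long *ips;
    int count;
};

static bool hash_contains(const struct hash *h, unsigned long ip)
{
    for (int i = 0; i < h->count; i++)
        if (h->ips[i] == ip)
            return true;
    return false;
}

/* Does an ops with these hashes trace the function at ip?
 * Empty filter = all functions; empty notrace = no exclusions. */
static bool ops_traces_ip(const struct hash *filter,
                          const struct hash *notrace, unsigned long ip)
{
    if (filter->count && !hash_contains(filter, ip))
        return false;          /* non-empty filter: ip must be listed */
    return !hash_contains(notrace, ip);
}
```

This is why an ops tracing all functions needs no per-function descriptors at all, unlike the tramp_hash it replaces.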
    • ftrace: Annotate the ops operation on update · e1effa01
      Committed by Steven Rostedt (Red Hat)
      Add three new flags for ftrace_ops:
      
        FTRACE_OPS_FL_ADDING
        FTRACE_OPS_FL_REMOVING
        FTRACE_OPS_FL_MODIFYING
      
      These will be set for an ftrace_ops when it is first being added
      to function tracing, being removed from function tracing,
      or just having the functions it traces changed,
      respectively.

      This will be needed to remove the tramp_hash, which can grow quite
      big. The tramp_hash is used to note which functions a ftrace_ops
      is using a trampoline for. Denoting which ftrace_ops is being
      modified will allow us to use the ftrace_ops hashes themselves,
      which are much smaller, as they have a global flag to denote that
      a ftrace_ops is tracing all functions, as well as a notrace hash
      for when the ftrace_ops is tracing all but a few. The tramp_hash just
      creates a hash item for every function, which can go into the tens
      of thousands if all functions are using the ftrace_ops trampoline.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      e1effa01
    • ftrace: Grab any ops for a rec for enabled_functions output · 5fecaa04
      Committed by Steven Rostedt (Red Hat)
      When dumping the enabled_functions, use the first ops that is
      found with a trampoline attached to the record, as there should only
      be one: only one ops can be registered to a function that has
      a trampoline.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      5fecaa04
    • ftrace: Remove freeing of old_hash from ftrace_hash_move() · 3296fc4e
      Committed by Steven Rostedt (Red Hat)
      ftrace_hash_move() currently frees the old hash that is passed to it
      after replacing the pointer with the new hash. Instead of having the
      function do that chore, have the caller perform the free.
      
      This lets ftrace_hash_move() be used a bit more freely, which
      is needed for changing the way the trampoline logic is done.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      3296fc4e
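The ownership change this commit makes is a general "caller frees" pattern. The sketch below models it with a hypothetical `hash_move()` (not the kernel's `ftrace_hash_move()` signature): the function swaps in the new hash and hands the old one back, leaving the caller to decide when it is safe to free it.

```c
#include <assert.h>
#include <stdlib.h>

/* Toy stand-in for a hash object. */
struct hash {
    int nents;
};

/* After the change: replace *dst with src and return the old hash to
 * the caller instead of freeing it internally. The caller may need to
 * keep the old hash alive (e.g. for trampoline accounting) before
 * releasing it. */
static struct hash *hash_move(struct hash **dst, struct hash *src)
{
    struct hash *old = *dst;

    *dst = src;
    return old; /* caller owns this now and frees it when safe */
}
```

Deferring the free is exactly what the trampoline rework needs: the old hash must outlive the swap long enough to be consulted during the update.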
    • ftrace: Set callback to ftrace_stub when no ops are registered · f7aad4e1
      Committed by Steven Rostedt (Red Hat)
      The clean up that adds the helper function ftrace_ops_get_func()
      caused the default function to not change when DYNAMIC_FTRACE was not
      set and no ftrace_ops were registered. Although static tracing is
      not very useful (not having DYNAMIC_FTRACE set), it is still supported
      and we don't want to break it.
      
      Clean up the if statement even more to specifically have the default
      function call ftrace_stub when no ftrace_ops are registered. This
      fixes the small bug for static tracing as well as makes the code a
      bit more understandable.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      f7aad4e1
    • ftrace: Add helper function ftrace_ops_get_func() · 87354059
      Committed by Steven Rostedt (Red Hat)
      Add a helper function that determines what the mcount trampoline
      should call for a given ftrace_ops. This helper will be used by arch
      code in the future to set up dynamic trampolines. But as it performs
      the same tests done when choosing what function to call for
      the default mcount trampoline, we might as well use it to clean up
      the existing code.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      87354059
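The decision such a helper makes can be sketched as a pure function of the ops flags. This is an illustrative model, not the kernel's implementation: the flag name, its value, and the wrapper function here are simplified stand-ins for the checks the real helper performs.

```c
#include <assert.h>

#define FL_RECURSION_SAFE (1 << 0) /* illustrative value */

typedef void (*ftrace_func)(void);

static void direct_func(void) { }       /* the ops' own callback */
static void recursion_wrapper(void) { } /* stand-in safety wrapper */

struct ops {
    unsigned long flags;
    ftrace_func func;
};

/* Pick the function the trampoline should call for this ops: an ops
 * that cannot handle recursion itself is routed through a wrapper,
 * otherwise its callback can be called directly. */
static ftrace_func ops_get_func(const struct ops *ops)
{
    if (!(ops->flags & FL_RECURSION_SAFE))
        return recursion_wrapper;
    return ops->func;
}
```

Centralizing this choice in one helper is what lets both the default trampoline and future per-ops dynamic trampolines agree on the target.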
  15. 09 Sep, 2014 (1 commit)
  16. 23 Aug, 2014 (5 commits)
    • ftrace: Use current addr when converting to nop in __ftrace_replace_code() · 39b5552c
      Committed by Steven Rostedt (Red Hat)
      In __ftrace_replace_code(), when converting a call to a nop in a
      function, the comparison must be made against the "curr" (current)
      value of the ftrace ops, not the "new" one. This currently does not
      affect x86, which is the only arch to do the trampolines with the
      function graph tracer, but when other archs that depend on this code
      implement the function graph trampoline, it can crash.
      
      Here's an example when ARM uses the trampolines (in the future):
      
       ------------[ cut here ]------------
       WARNING: CPU: 0 PID: 9 at kernel/trace/ftrace.c:1716 ftrace_bug+0x17c/0x1f4()
       Modules linked in: omap_rng rng_core ipv6
       CPU: 0 PID: 9 Comm: migration/0 Not tainted 3.16.0-test-10959-gf0094b28-dirty #52
       [<c02188f4>] (unwind_backtrace) from [<c021343c>] (show_stack+0x20/0x24)
       [<c021343c>] (show_stack) from [<c095a674>] (dump_stack+0x78/0x94)
       [<c095a674>] (dump_stack) from [<c02532a0>] (warn_slowpath_common+0x7c/0x9c)
       [<c02532a0>] (warn_slowpath_common) from [<c02532ec>] (warn_slowpath_null+0x2c/0x34)
       [<c02532ec>] (warn_slowpath_null) from [<c02cbac4>] (ftrace_bug+0x17c/0x1f4)
       [<c02cbac4>] (ftrace_bug) from [<c02cc44c>] (ftrace_replace_code+0x80/0x9c)
       [<c02cc44c>] (ftrace_replace_code) from [<c02cc658>] (ftrace_modify_all_code+0xb8/0x164)
       [<c02cc658>] (ftrace_modify_all_code) from [<c02cc718>] (__ftrace_modify_code+0x14/0x1c)
       [<c02cc718>] (__ftrace_modify_code) from [<c02c7244>] (multi_cpu_stop+0xf4/0x134)
       [<c02c7244>] (multi_cpu_stop) from [<c02c6e90>] (cpu_stopper_thread+0x54/0x130)
       [<c02c6e90>] (cpu_stopper_thread) from [<c0271cd4>] (smpboot_thread_fn+0x1ac/0x1bc)
       [<c0271cd4>] (smpboot_thread_fn) from [<c026ddf0>] (kthread+0xe0/0xfc)
       [<c026ddf0>] (kthread) from [<c020f318>] (ret_from_fork+0x14/0x20)
       ---[ end trace dc9ce72c5b617d8f ]---
      [   65.047264] ftrace failed to modify [<c0208580>] asm_do_IRQ+0x10/0x1c
      [   65.054070]  actual: 85:1b:00:eb
      
      Fixes: 7413af1f "ftrace: Make get_ftrace_addr() and get_ftrace_addr_old() global"
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      39b5552c
    • ftrace: Fix function_profiler and function tracer together · 5f151b24
      Committed by Steven Rostedt (Red Hat)
      The latest rewrite of ftrace removed the separate ftrace_ops of
      the function tracer and the function graph tracer and had them
      share the same ftrace_ops. This simplified the accounting by removing
      the multiple layers of functions called, where the global_ops func
      would call a special list that would iterate over the other ops that
      were registered within it (like function and function graph), which
      itself was registered to the ftrace ops list of all functions
      currently active. If that sounds confusing, the code that implemented
      it was also confusing and its removal is a good thing.
      
      The problem with this change was that it assumed that the function
      and function graph tracer can never be used at the same time.
      This is mostly true, but there is an exception. That is when the
      function profiler uses the function graph tracer to profile.
      The function profiler can be activated the same time as the function
      tracer, and this breaks the assumption and the result is that ftrace
      will crash (it detects the error and shuts itself down, it does not
      cause a kernel oops).
      
      To solve this issue, a previous change allowed the hash tables
      for the functions traced by a ftrace_ops to be a pointer and let
      multiple ftrace_ops share the same hash. This allows the function
      and function_graph tracer to have separate ftrace_ops, but still
      share the hash, which is what is done.
      
      Now the function and function graph tracers have separate ftrace_ops
      again, and the function tracer can be run while the function_profile
      is active.
      
      Cc: stable@vger.kernel.org # 3.16 (apply after 3.17-rc4 is out)
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      5f151b24
    • ftrace: Fix up trampoline accounting with looping on hash ops · bce0b6c5
      Committed by Steven Rostedt (Red Hat)
      Now that a ftrace_hash can be shared by multiple ftrace_ops, they can
      decrement rec->flags more than once (once per ops that shares the
      ftrace_hash). This means that the tramp_hash may not have a hash item
      for a record when that record was added.

      For example, if two ftrace_ops share a hash for a ftrace record, and the
      first ops has a trampoline, when it adds itself it will set the rec->flags
      TRAMP flag and increment its nr_trampolines counter. When the second ops
      is added, it must clear that tramp flag but also decrement the other ops
      that shares its hash. As the update to the function callbacks has not yet
      been performed, the other ops will not have its tramp hash set yet, so it
      can not be used to know to decrement its nr_trampolines.
      
      Luckily, the tramp_hash does not need to be used. As the ftrace_mutex is
      held, an ops with a trampoline attached to a record, during an update of
      another ops that shares the record, will have its func_hash pointing to
      it. Since a trampoline can only be set for a record if only one ops is
      attached to it, we can just check if the record has a trampoline (the
      FTRACE_FL_TRAMP flag is set) and then find the ops that has this record
      in its hashes.
      
      Also added some output to help debug when things go wrong.
      
      Cc: stable@vger.kernel.org # 3.16+ (apply after 3.17-rc4 is out)
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      bce0b6c5
    • S
      ftrace: Update all ftrace_ops for a ftrace_hash_ops update · 84261912
      Steven Rostedt (Red Hat) 提交于
      When updating what an ftrace_ops traces, if it is registered (that is,
      actively tracing), and that ftrace_ops uses the shared global_ops'
      local_hash, then we need to update all active tracers that also share
      the global_ops' ftrace_hash_ops.
      
      Cc: stable@vger.kernel.org # 3.16 (apply after 3.17-rc4 is out)
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      84261912
    • S
      ftrace: Allow ftrace_ops to use the hashes from other ops · 33b7f99c
      Steven Rostedt (Red Hat) 提交于
      Currently the top level debug file system function tracer shares its
      ftrace_ops with the function graph tracer. This was thought to be fine
      because the tracers are not used together, as one can only enable
      function or function_graph tracer in the current_tracer file.
      
      But that assumption proved to be incorrect. The function profiler
      can use the function graph tracer when function tracing is enabled.
      Since all function graph users use the function tracing ftrace_ops,
      this causes a conflict: when a user enables both function profiling
      and the function tracer, it will crash ftrace and disable it.
      
      The quick solution so far is to keep them as separate ftrace_ops, as it
      was earlier. The problem, though, is synchronizing the functions that
      are traced, because both the function and function_graph tracers are
      limited by the selections made in the set_ftrace_filter and
      set_ftrace_notrace files.
      
      To handle this, a new structure is made called ftrace_ops_hash. This
      structure will now hold the filter_hash and notrace_hash, and the
      ftrace_ops will point to this structure. That will allow two ftrace_ops
      to share the same hashes.
      
      Since most ftrace_ops do not share the hashes, and to keep allocation
      simple, the ftrace_ops structure will include both a pointer to the
      ftrace_ops_hash called func_hash, as well as the structure itself,
      called local_hash. When the ops are registered, the func_hash pointer
      will be initialized to point to the local_hash within the ftrace_ops
      structure. Some of the ftrace internal ftrace_ops will be initialized
      statically. This will allow for the function and function_graph tracer
      to have separate ops but still share the same hash tables that determine
      what functions they trace.
      
      Cc: stable@vger.kernel.org # 3.16 (apply after 3.17-rc4 is out)
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      33b7f99c
  17. 24 7月, 2014 3 次提交
    • S
      ftrace: Add warning if tramp hash does not match nr_trampolines · dc6f03f2
      Steven Rostedt (Red Hat) 提交于
      After adding all the records to the tramp_hash, add a check that the
      number of records added matches the number of records expected to
      match; if they do not, issue a WARN_ON and disable ftrace.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      dc6f03f2
    • S
      ftrace: Fix trampoline hash update check on rec->flags · 2a0343ba
      Steven Rostedt (Red Hat) 提交于
      In the loop of ftrace_save_ops_tramp_hash(), it adds all the recs
      to the ops hash if the rec has only one callback attached and the
      ops is connected to the rec. It gives a nasty warning and shuts down
      ftrace if the rec doesn't have a trampoline set for it. But this
      can happen with the following scenario:
      
        # cd /sys/kernel/debug/tracing
        # echo schedule do_IRQ > set_ftrace_filter
        # mkdir instances/foo
        # echo schedule > instances/foo/set_ftrace_filter
        # echo function_graph > current_tracer
        # echo function > instances/foo/current_tracer
        # echo nop > instances/foo/current_tracer
      
      The above would then trigger the following warning and disable
      ftrace:
      
       ------------[ cut here ]------------
       WARNING: CPU: 0 PID: 3145 at kernel/trace/ftrace.c:2212 ftrace_run_update_code+0xe4/0x15b()
       Modules linked in: ipt_MASQUERADE sunrpc ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ip [...]
       CPU: 1 PID: 3145 Comm: bash Not tainted 3.16.0-rc3-test+ #136
       Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./To be filled by O.E.M., BIOS SDBLI944.86P 05/08/2007
        0000000000000000 ffffffff81808a88 ffffffff81502130 0000000000000000
        ffffffff81040ca1 ffff880077c08000 ffffffff810bd286 0000000000000001
        ffffffff81a56830 ffff88007a041be0 ffff88007a872d60 00000000000001be
       Call Trace:
        [<ffffffff81502130>] ? dump_stack+0x4a/0x75
        [<ffffffff81040ca1>] ? warn_slowpath_common+0x7e/0x97
        [<ffffffff810bd286>] ? ftrace_run_update_code+0xe4/0x15b
        [<ffffffff810bd286>] ? ftrace_run_update_code+0xe4/0x15b
        [<ffffffff810bda1a>] ? ftrace_shutdown+0x11c/0x16b
        [<ffffffff810bda87>] ? unregister_ftrace_function+0x1e/0x38
        [<ffffffff810cc7e1>] ? function_trace_reset+0x1a/0x28
        [<ffffffff810c924f>] ? tracing_set_tracer+0xc1/0x276
        [<ffffffff810c9477>] ? tracing_set_trace_write+0x73/0x91
        [<ffffffff81132383>] ? __sb_start_write+0x9a/0xcc
        [<ffffffff8120478f>] ? security_file_permission+0x1b/0x31
        [<ffffffff81130e49>] ? vfs_write+0xac/0x11c
        [<ffffffff8113115d>] ? SyS_write+0x60/0x8e
        [<ffffffff81508112>] ? system_call_fastpath+0x16/0x1b
       ---[ end trace 938c4415cbc7dc96 ]---
       ------------[ cut here ]------------
      
      Link: http://lkml.kernel.org/r/20140723120805.GB21376@redhat.com
      Reported-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      2a0343ba
    • S
      ftrace: Rename ftrace_ops field from trampolines to nr_trampolines · 0162d621
      Steven Rostedt (Red Hat) 提交于
      Having two fields within the same struct whose names differ by one
      character can be confusing and error prone. Rename the counter
      "trampolines" to "nr_trampolines" to show explicitly that it is a
      counter and to avoid confusion with the "trampoline" field.
      Suggested-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      0162d621
  18. 19 7月, 2014 4 次提交